\section{Introduction} Let $\Omega$ be a domain in $\mathbb R^n$, and consider the one-phase free boundary problem ($u\ge 0$) \begin{equation}\label{fbintro} \left \{ \begin{array}{ll} \Delta u =0, & \hbox{in $\Omega^+(u):= \{x \in \Omega : u(x)>0\}$,} \\ |\nabla u| =1, & \hbox{on $F(u):= \partial \Omega^+(u) \cap \Omega$.} \\ \end{array}\right. \end{equation} The set $F(u)$ is known as the free boundary. There are strong parallels between the theory of these hypersurfaces and the theory of minimal surfaces. The existence of solutions $u$ and the partial regularity (smoothness almost everywhere with respect to surface measure) of the free boundary $F(u)$ were proved by Alt and Caffarelli \cite{AC}. We will begin by formulating our main result in the special case of energy-minimizing solutions. We call $u$ energy-minimizing on $\bar \Omega$ if $u$ minimizes the functional \[ J(v)= \int_{\Omega}(|\nabla v|^2 + \chi_{\{v>0\}})dx, \] among all functions with the same boundary values as $u$. The first variation (Euler-Lagrange) equations for $u$ are \eqref{fbintro}. Indeed, it is easy to show that $u$ is harmonic in $\Omega^+(u)$, and it follows from deeper results of \cite{AC,C1,C2} that $u$ satisfies the free boundary condition $|\nabla u| = 1$ on $F(u)$ in a viscosity sense, defined in Section 2. Define a cylinder of height $2L$, with base the ball $B_r$ of radius $r$ in $\mathbb{R}^{n-1}$, by \[ \mathcal{C}(r,L) := B_r \times (-L,L) \subset \mathbb{R}^{n}; \quad \mbox{(and }\mathcal{C}_L := \mathcal{C}(1,L) ). 
\] \begin{thm}\label{intromain} If an energy-minimizing solution $u$ on the cylinder $\mathcal{C}_L$ is monotone in the vertical direction, \[ \partial u/\partial x_n \ge 0 \ \mbox{ on }\ \mathcal{C}_L^+(u), \] and its free boundary $F(u)$ is a fixed distance from the top and bottom of the cylinder, i.~e., \[ F(u) \subset \mathcal{C}_{L-\epsilon} \quad \mbox{ for some } \epsilon >0, \] then $F(u)$ is the graph of a smooth function $\varphi$, \[ F(u) = \{(x,y): x\in B_1; \quad y = \varphi(x)\} \] with \[ \sup_{x\in B_{1/2}} |\nabla \varphi(x)| \le C \] for a constant $C$ depending only on $L$, $\epsilon$, and $n$. \end{thm} Let us compare our theorem with the classical gradient bound on minimal surfaces due to Bombieri, De Giorgi, and Miranda, which can be stated as follows. \begin{thm}\label{boundMS} \cite{BDM} Let $\phi \in C^\infty(B_1)$ be a solution to the minimal surface equation \begin{equation}\label{MS} \textrm{div}\left(\frac{\nabla \phi}{\sqrt{1+|\nabla \phi|^2}}\right)= 0 \quad \text{in} \quad B_1, \end{equation}with $|\phi|\leq M.$ Then \begin{equation} |\nabla \phi| \leq C \quad \text{in} \quad {B}_{1/2}\end{equation} with $C$ depending on $n$ and $M$. \end{thm} \noindent The hypothesis $\partial u/\partial x_n \ge 0$ in Theorem \ref{intromain} implies, by the strong maximum principle, that $\partial u/\partial x_n > 0$. Therefore the level surfaces $\{x: u(x) =c\}$ for $c>0$ are graphs. The hypothesis that the free boundary is a fixed distance from the top and bottom of the cylinder replaces the hypothesis in Theorem \ref{boundMS} that the oscillation of the function $\phi$ is bounded by $M$. Furthermore, the minimal surface equation \eqref{MS} implies that the graph of $\phi$ is area-minimizing, so that the assumption in Theorem \ref{intromain} that the free boundary is energy minimizing is analogous. In the theory of minimal surfaces, it is well-known that minimal graphs are real analytic in the interior of their domain of definition. 
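In this connection, let us record the formal first variation computation behind \eqref{fbintro}, which parallels the first variation of area (we assume, for this sketch only, that $F(u)$ is smooth). Variations $v=u+t\varphi$ with $\varphi$ compactly supported in $\Omega^+(u)$ give $\Delta u = 0$ there, while deforming $\Omega^+(u)$ by a vector field $\Phi$ yields, by Hadamard's variational formula for the Dirichlet integral and the obvious variation of the volume term, \[ \delta J(u)[\Phi] = \int_{F(u)} \left(1-|\nabla u|^2\right)\Phi\cdot\nu \, d\mathcal{H}^{n-1}, \] where $\nu$ is the outer unit normal to $\Omega^+(u)$ along $F(u)$. Stationarity for all $\Phi$ then forces $|\nabla u|=1$ on $F(u)$.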
The key first step in the proof of full regularity of minimal graphs is to establish that the graph is Lipschitz, that is, the graph of a function with a bounded gradient. The gradient bound proved here leads, likewise, to full regularity. If the free boundary is a Lipschitz graph, then Caffarelli \cite{C1} proved that the graph is $C^{1,\alpha}$ for some $\alpha>0$. Higher regularity results of \cite{KN} then yield the local analyticity of $F(u)$. So real analyticity follows if one can confirm the Lipschitz property, i.~e., the gradient bound. In \cite{D2}, an a priori gradient bound for smooth free boundary graphs is proved in the case when $n=2,3.$ The proof given there is also motivated by the strong analogy with minimal surfaces, but is completely different. An advantage of the results here is that, because they work in all dimensions, they can be expected to apply to the free boundary analogue of the Bernstein problem. The application we have in mind is to the construction (as yet unrealized) of a global solution to the free boundary problem (other than the obvious solution $u(x) = x_1^+$) whose level surfaces are graphs. This would be analogous to the counterexample to the Bernstein conjecture --- a complete non-planar minimal graph constructed in \cite{BDG} in $\mathbb{R}^9$. In \cite{DJ}, it is shown that a certain cone in $\mathbb R^7$ is the free boundary analogue of the Simons cone in minimal surface theory. Based on this example, one should expect to find a free boundary whose level surfaces are non-flat graphs in $\mathbb{R}^8$. The theorem whose proof occupies most of this paper has a more technical statement. See Section 2 for the definition of a viscosity solution and nontangentially accessible (NTA) domains. \begin{thm}\label{main} Let $u$ be a viscosity solution to \eqref{fbintro} in the cylinder $\mathcal{C}_L$. 
Suppose that $u$ is monotone in the vertical direction, \[ \partial u/\partial x_n \ge 0 \ \mbox{ on }\ \mathcal{C}_L^+(u), \] and its free boundary is given as the graph of a continuous function $\varphi$, $F(u)=\{(x,y): x\in B_1; \quad y = \varphi(x)\}$. Suppose that the oscillation of $\varphi$ is bounded, \[ \max_{x\in B_1}|\varphi(x)| \le L-1, \] and, finally, that there is a nontangentially accessible (NTA) domain $\mathcal D$ such that $$ \mathcal{C}\left(\frac{9}{10},L-\frac{1}{2}\right) \cap \mathcal{C}^+_L(u) \subset \mathcal D \subset \mathcal{C}^+_L(u).$$ Then \[ \sup_{x\in B_{1/2}} |\nabla \varphi(x)| \le C \] for a constant $C$ depending only on $L$, the NTA constants, and $n$. \end{thm} Theorem \ref{intromain} will follow from Theorem \ref{main} using results of \cite{D1}. Roughly speaking, \cite{D1} shows that the hypotheses of Theorem \ref{intromain} imply the hypotheses of Theorem \ref{main}. In particular, a key estimate from \cite{D1} is that the positive phase satisfies an NTA property on any smaller cylinder. Moreover, it is also proved in \cite{D1} that under the hypotheses of Theorem \ref{intromain}, the free boundary is the graph of a continuous function $\varphi$. The proof of Theorem \ref{main} is based on comparing $u(x)$ to its vertical translates $u(x + te_n)$. One constructs a family of supersolutions related to $u(x + te_n)$ and uses a deformation maximum principle argument to show that $u(x+te_n) \ge u(x) + ct$ for sufficiently small $t>0$. The function $u(x)$ is comparable to the distance from $x$ to the free boundary. The estimate shows that the change in $u$ in the vertical direction is comparable to the change in $u$ in the direction normal to each level surface, which is equivalent to a Lipschitz bound on the graph of the level surface. The construction of the family of supersolutions makes use of the basic estimates on NTA domains which were the reason the notion of NTA was introduced in \cite{JK}. 
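The final equivalence in this outline is the standard cone-of-monotonicity argument (a sketch): if $u(x+te_n) \ge u(x)+ct$ for all sufficiently small $t>0$, and $|\nabla u|\le K$, then for any unit vector $\tau=(\tau',\tau_n)$ and small $t>0$, \[ u(x+t\tau) \ge u(x+t\tau_n e_n) - Kt|\tau'| \ge u(x) + t\left(c\,\tau_n - K|\tau'|\right), \] so $u$ is strictly increasing along every direction in the cone $\{\tau_n > (K/c)|\tau'|\}$. Consequently each level set of $u$, and $F(u)$ itself, is a graph in the $e_n$ direction with Lipschitz constant at most $K/c$.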
The NTA property guarantees that every positive harmonic function that vanishes on the boundary vanishes at the same rate as $u$. The NTA property was first used in connection with regularity of free boundaries by Aguilera, Caffarelli and Spruck \cite{ACS}, who proved a partial regularity result. The NTA property also holds for the singular conic solution of Alt and Caffarelli. (This cone is not a graph, of course. Otherwise it would contradict Theorem \ref{main}.) Our proof of the gradient bound for free boundaries leads to a new proof of the classical gradient bound for minimal graphs. This new proof of Theorem \ref{boundMS} is related to a much simpler proof due to N. Korevaar \cite{Ko}. The hope is that this new method, while more complicated than the method in \cite{Ko}, will ultimately apply to classes of semilinear problems that include both free boundary problems and minimal surface problems as singular limits. An interesting aspect of our proof is that it deepens the analogy between minimal surfaces and free boundaries. The paper is organized as follows. In Section 2, after briefly recalling some standard definitions and known results, we prove Theorem \ref{main} and deduce Theorem \ref{intromain}. We present our proof of Theorem \ref{boundMS} in Section 3. In Section 4, we examine the parallels between the two proofs and especially between two key parallel ingredients, namely the boundary Harnack inequality for NTA domains and the intrinsic Harnack inequality of Bombieri and Giusti \cite{BG}. \section{Gradient bound for free boundary graphs} \subsection{Preliminaries.} We recall the definition of a viscosity solution \cite{C1}. \begin{defn}\label{defsol} Let $u$ be a nonnegative continuous function in $\Omega$. 
We say that $u$ is a viscosity solution to \eqref{fbintro} in $\Omega$ if and only if the following conditions are satisfied: \begin{enumerate} \item $\Delta u = 0$ in $\Omega^+(u)$; \item If $x_0 \in F(u)$ and $F(u)$ has at $x_0$ a tangent ball $\mathcal{B}_{\epsilon}$ from either the positive or the zero side, then, with $\nu$ the unit radial direction of $\partial \mathcal{B}_{\epsilon}$ at $x_0$ pointing into $\Omega^+(u)$, \begin{equation}\nonumber u(x)= \langle x-x_0,\nu\rangle^+ + o(|x-x_0|), \ \text{as $x\rightarrow x_0.$} \end{equation} \end{enumerate} \end{defn} Standard elliptic regularity theory implies that if $F(u)$ is a smooth surface near $x_0$, then $u$ is smooth up to the free boundary near $x_0$ and the free boundary condition $|\nabla u|=1$ is valid in the classical sense in such a neighborhood. Denote $d(x)=\text{dist}(x,F(u))$. In this section, the balls $\mathcal{B}_r = \mathcal{B}_r(0)$ and $\mathcal{B}_r(x)$ will be in $\mathbb R^n$ while the balls $B_r(x)$ will be in $\mathbb R^{n-1}$. The following result follows easily from the Hopf lemma and interior regularity of elliptic equations (see, for example, \cite{CS, D2}). \begin{lem}\label{Lipconst}Let $u$ be a viscosity solution to \eqref{fbintro} in $\mathcal{B}_1$, $0 \in F(u)$. Then, $u$ is Lipschitz continuous in $\mathcal{B}_{1/2}$ and there is a dimensional constant $K$ such that \[ \sup_{\mathcal{B}_{1/2}}|\nabla u| \le K, \] and \[ u(x) \leq K d(x), \ \ \ \text{for all $x \in \mathcal{B}_{1/2}$}. \] \end{lem} \begin{defn}\label{nondeg} We say that a viscosity solution $u$ is nondegenerate in $\mathcal{B}_1$ if there is a constant $c>0$ such that $u(x)\ge cd(x)$ for all $x\in \mathcal{B}_1^+(u)$. \end{defn} \smallskip We now recall the notion of nontangentially accessible (NTA) domains. \begin{defn}\label{NTA}A bounded domain $D$ in $\mathbb R^n$ is called NTA, when there exist constants $M$ and $r_0 >0$ such that: \begin{enumerate} \item Corkscrew condition. 
For any $x \in \partial D,$ $r < r_0,$ there exists $y=y_r(x) \in D$ such that $M^{-1} r < |y-x|<r$ and $\text{dist}(y,\partial D) > M^{-1}r;$ \item The Lebesgue density of $D^{c}$ at any of its points is bounded below uniformly by a positive constant $c$, i.e., for all $x \in \partial D, 0 < r <r_0,$ $$\frac{|\mathcal{B}_r(x) \setminus D |}{|\mathcal{B}_r(x)|} \geq c;$$ \item Harnack chain condition. If $\epsilon >0$ and $x_1, x_2$ belong to $D$, $\text{dist}(x_j,\partial D)>\epsilon$ and $|x_1- x_2|< C_1\epsilon,$ then there exists a sequence of $C_2$ balls of radius $c\epsilon$ such that the first ball is centered at $x_1$, the last at $x_2$, and the centers of consecutive balls are at most $c\epsilon/2$ apart. The number of balls $C_2$ in the chain depends on $C_1$, but not on $\epsilon$. \end{enumerate} \end{defn} We recall some results about NTA domains \cite{JK}. We start with the following boundary Harnack principle for harmonic functions. \begin{thm}\label{BHP}(Boundary Harnack principle) Let $D$ be an $\textrm{NTA}$ domain and let $V$ be an open set. For any compact set $K \subset V,$ there exists a constant $C$ such that for all positive harmonic functions $u$ and $v$ in $D$ vanishing continuously on $\partial D \cap V,$ and $x_0 \in D \cap K,$ $$C^{-1}\frac{v(x_0)}{u(x_0)}u(x) \leq v(x) \leq C\frac{v(x_0)}{u(x_0)}u(x), \ \ \textit{for all} \ \ x \in K \cap \overline{D}. $$ \end{thm} \smallskip The boundary Harnack inequality above will be our main tool in the proof of Theorem \ref{main}. We will also need some further facts. First, recall that for any bounded domain $D \subset \mathbb R^n$ and an arbitrary $y_0 \in D$, one can define the harmonic measure $\omega^{y_0}$ of $D$ evaluated at $y_0$ (for the definition see for example \cite{JK}). We note that for any $y_1,y_2 \in D$, the measures $\omega^{y_1}$ and $\omega^{y_2}$ are mutually absolutely continuous. 
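For the reader's convenience, we recall one standard formulation: for a Borel set $E \subset \partial D$, \[ \omega^{y}(E) = u_{\chi_E}(y), \] where $u_{\chi_E}$ is the Perron solution of the Dirichlet problem in $D$ with boundary data $\chi_E$; that is, $y \mapsto \omega^{y}(E)$ is the harmonic function in $D$ with boundary values $1$ on $E$ and $0$ on $\partial D \setminus E$.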
Hence, from now on we fix a point $y_0 \in D$ and denote $\omega=\omega^{y_0}.$ A nontangential region at $x_0 \in \partial D$ is defined as $$\Gamma_\alpha(x_0)=\{x \in D : |x-x_0|<(1+\alpha)\text{dist}(x,\partial D)\}.$$ Let $u$ be defined on $D$ and $f$ on $\partial D$. We say that $u$ converges to $f$ nontangentially at $x_0 \in \partial D$ if for any $\alpha,$ $$\lim_{x\rightarrow x_0} u(x)=f(x_0) \quad \text{for} \ x\in \Gamma_\alpha(x_0).$$ The following Fatou-type theorem was proved in \cite{JK}. \begin{thm}\label{Fatou} Let $D$ be an NTA domain. If $u$ is a positive harmonic function in $D$, then $u$ has finite nontangential limits for $\omega$-almost every $x_0 \in \partial D.$ \end{thm} We deduce from this the following regularity result for NTA free boundaries. \begin{lem}\label{full harmonic measure}Let $u$ be a viscosity solution to \eqref{fbintro} in $\mathcal{B}_1$, $u$ non-degenerate in $\mathcal{B}_{3/4}$, and $0 \in F(u)$. Assume that there is an NTA domain $D$ such that $D\subset \mathcal{B}_1^+(u)$ and $F(u)\cap \mathcal{B}_{3/4}\subset \partial D$. Then, $F(u)\cap \mathcal{B}_{1/2}$ is smooth almost everywhere with respect to harmonic measure $\omega$ of $D$. \end{lem} \begin{proof} Since each partial derivative $\partial u/\partial x_j$ is a bounded harmonic function, Theorem \ref{Fatou} implies that for $\omega$-almost every $x_0\in F(u)\cap \mathcal{B}_{1/2}$, there exists $a\in \mathbb R^n$ such that for every $\alpha<\infty$, $\nabla u(x)\to a$ as $x\to x_0$, for $x\in \mathcal{B}_1^+(u)$, $|x-x_0| < (1+\alpha)\mbox{dist}(x,F(u))$. We will prove that $F(u)$ is flat and hence smooth in a neighborhood of $x_0$. The idea of the proof is to show that for $x$ near $x_0$, $u$ is close to a linear function with gradient $a$. Provided that $a$ is not the zero vector, this will show us that the level sets of $u$ are flat and hence (by \cite{AC, C2}) that the free boundary is smooth near $x_0$. For notational simplicity assume $x_0=0$. 
Denote by $u_r$ the rescaling of $u$, $u_r(x)=u(rx)/r$. We will use the notation $A_1 \approx A_2$ for positive numbers that are comparable modulo constants that depend only on the NTA constants and the ratio of $u(x)$ to the distance to the free boundary (bounded above and below by Lemma \ref{Lipconst} and nondegeneracy). Consider a point $z\in \mathcal{B}_1^+(u_r)$ such that $u_r(z) \approx 1$. Note that although the point $z$ depends on $r$, we require the constants in the comparability $u_r(z)\approx 1$ to be independent of $r$ as $r\to0$. For any $x\in \mathcal{B}_1\cap \overline{\{u_r > 0\}}$, the NTA properties imply there is a (nontangential, corkscrew) path $p(t)$ such that $p(0)=x$, $p(1)= z$, $|p'(t)|\le C$ and \[ u_r(p(t)) \approx \mbox{dist}(p(t),F(u_r)) \approx t + u_r(p(0)) \] independent of $r$. Fix $C_2 \ll C_1<\infty$, $\delta>0$, and denote \[ T_\delta^r =\{x \in \mathcal{B}_{C_1}^+(u_r) : u_r(x) \ge \delta\}, \] \[ \Gamma_{\alpha}^r=\{x \in \mathcal{B}_{1/r}^+(u_r): |x|< (1+\alpha)\text{dist}(x,F(u_r))\}. \] Since $u_r(x)$ is comparable to the distance from $x$ to $F(u_r)$, for any $x\in T_\delta^r\cap \mathcal{B}_{C_2}$, there is a constant $c>0$ such that the path $p(t)$ from $x$ to $z$ belongs to $T_{c\delta}^r$. Choose $\alpha$ sufficiently large, depending on $\delta$ and $c$, and $r$ sufficiently small, depending on $C_1$, such that \[ T_{c\delta}^r \subset \Gamma_{\alpha}^r. \] Thus there is $r_0>0$ (depending on $C_1$, $\delta$, and $\alpha$) such that for $r<r_0$, \[ |\nabla u_r(x) - a| < \delta, \quad \mbox{for } x\in T_{c\delta}^r. \] Define a linear function of $x$ by $L(x) = u_r(z) + a\cdot(x-z)$. For all $x\in T_\delta^r\cap \mathcal{B}_{C_2}$, since $u_r(z) - L(z) = 0$, and $p(t)\in T_{c\delta}^r$, \[ |u_r(x) - L(x)| = \left|\int_0^1 (\nabla u_r(p(t))-a)\cdot p'(t)dt\right| \le C_3\delta. \] In all, we have shown that for every $x\in \mathcal{B}_{C_2}$ such that $u_r(x)\ge \delta$, \[ |u_r(x) - L(x)| \le C_3\delta. 
\] Next, we deduce that $|a|\approx 1$. (The upper bound $|a|\le K$ already follows from the upper bound on $|\nabla u|$.) Since $u_r(0)=0$ and $u_r(z)\approx 1$, for some $0<t <1$, the point $x=tz$ satisfies $u_r(x)=\delta$. So $x\in T_\delta^r\cap \mathcal{B}_{C_2}$ and $|\delta - u_r(z) -a\cdot(x-z)|\le C_3\delta$. Hence, since $|x-z|<1$, $|a| \ge |a\cdot(x-z)| \ge u_r(z) - \delta - C_3\delta \ge u_r(z)/2$. (All we need in what follows is that $a$ is bounded and nonzero.) We can now conclude that the free boundary is flat in the appropriate sense. Consider a point $x\in F(u_r)\cap \mathcal{B}_1$ and its path $p(t)$ to $z$. There is $t>0$ such that $u_r(p(t))=\delta$. Denote $y=p(t)$. Then $|y-x|\le C\delta$ and $y\in T_\delta^r\cap \mathcal{B}_{C_2}$. The preceding argument says $|u_r(y)-L(y)| \le C_3\delta$. Therefore, \[ |L(x)| \leq |L(y)| + |L(x)-L(y)| \le |u_r(y) -L(y)| + |u_r(y)| + |a\cdot(x-y)| \le C_4\delta \] for a larger constant $C_4$. Since $a$ is bounded away from $0$ in length, the bound on $L(x)$ implies that every point of $F(u_r)\cap \mathcal{B}_1$ is within a constant multiple of $\delta$ of the plane $\{L(x)=0\}$. For sufficiently small $\delta$, this flatness condition implies smoothness of the free boundary (see \cite{AC,C2}). \end{proof} \subsection{The proof of Theorem \ref{main}.} Throughout the proof, $c_i, C_i$ denote constants depending on $L,n$, and possibly on the NTA constants. Also, a point $x \in \mathbb{R}^n$ may be denoted by $(x',x_n),$ with $x'=(x_1,\ldots,x_{n-1})$. We divide the proof into three steps. \medskip \noindent \textbf{Step 1: Nondegeneracy and separation of level sets.} \medskip We show first the nondegeneracy of $u$, namely that if $\mathcal{B}_\rho(x_0) \subset \mathcal{C}_L^+(u)$, $\rho <1$, then \begin{equation}\label{nondegeq} u(x_0) \ge \gamma_n\rho \end{equation} for a dimensional constant $\gamma_n>0$. 
Denote by $g$ a strictly superharmonic function on the annulus $E = \mathcal{B}_2\backslash \mathcal{B}_1$ such that \[ \begin{cases} g= a_n & \text{on $\partial \mathcal{B}_2$}, \\ g= 0 & \text{on $ \partial \mathcal{B}_1 $},\\ |\nabla g| < 1 & \text{on $\partial \mathcal{B}_1$,} \end{cases} \] with $a_n>0$ a small dimensional constant. Let $r = \rho/4$. Denote $g_r(x) = rg(x/r)$, and \[ h_t(x) = g_r(x-x_0-te_n) \] defined on the closed annulus $E_t = \bar{\mathcal{B}}_{2r}(x_0 + te_n) \backslash \mathcal{B}_r(x_0+te_n)$. For $t$ negative enough, $E_t \subset \{x: -L < x_n < -L +1\}$ so that $h_t(x) \ge 0 = u(x) $ for $x\in E_t$. Increasing $t$ translates the region $E_t$ upwards. Let $t_0$ be the least $t$ for which the graph of $h_t$ touches the graph of $u$, i.~e., so that there is a point $z_0\in E_t$ for which $h_{t}(z_0) = u(z_0) > 0$. Because $h_t$ is a strict supersolution, the point $z_0$ belongs to the outer boundary, $z_0\in \partial \mathcal{B}_{2r}(x_0 + t_0e_n)$. Furthermore, because the free boundaries of $u$ and $h_t$ cannot touch, $t_0 \le -\rho - r < 0$. Monotonicity of $u$ implies $u(z_0 - t_0e_n) \ge u(z_0) = h_{t_0}(z_0) = a_nr$. Finally, since $|z_0-t_0e_n -x_0| = 2r = \rho/2$, Harnack's inequality comparing the value of $u$ at $z_0 - t_0 e_n$ and $x_0$ implies that there is a dimensional constant $\gamma_n>0$ such that $u(x_0) \ge \gamma_n\rho$, as required. Next, we will show that level sets near the top of the cylinder are separated by an appropriate amount. Let $\epsilon >0$ and denote \[ v(x)=u(x-\epsilon e_n). \] Since $u$ is strictly monotone in the vertical direction, $v(x) < u(x)$ on $\mathcal{C}^+_{L}(u)$. We claim that \begin{equation}\label{v<u} v(x) \leq u(x) - c_1\epsilon \ \ \ \textrm{on $B_{9/10}(0) \times \{L-1/2\}$} \end{equation} for $\epsilon < \epsilon_n$ a dimensional constant, and a constant $c_1>0$ depending only on $L$ and $n$. 
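For concreteness, one admissible choice of the barrier $g$ above is radial (a sketch; any strictly superharmonic function with the stated boundary behavior would do): take \[ g(x) = c\int_1^{|x|} s^{1-n}e^{-s}\,ds, \qquad 1 \le |x| \le 2, \] with $c>0$ a small dimensional constant. Then $g=0$ and $|\nabla g| = ce^{-1} < 1$ on $\partial \mathcal{B}_1$, $g \equiv a_n := c\int_1^2 s^{1-n}e^{-s}\,ds > 0$ on $\partial \mathcal{B}_2$, and, with $r=|x|$, \[ \Delta g = g''(r) + \frac{n-1}{r}\,g'(r) = -c\,r^{1-n}e^{-r} < 0 \quad \text{in } E, \] so $g$ is strictly superharmonic.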
To prove \eqref{v<u}, note first that from \eqref{nondegeq} it follows that $u(x) \ge b_n$ for all $x\in B_{9/10}(0) \times \{L-1/2\}$. Write $x = (x',L-1/2)$ and let $t_n$ be such that $u(x',t_n) = b_n/2$; then, by monotonicity, $u(x',t) \ge b_n/2$ for all $t \ge t_n$. Consider the segment from $(x',t_n)$ to $(x',L-1/2)$. It follows from the Lipschitz bound (Lemma \ref{Lipconst}) that the distance from any point of the segment to the free boundary is greater than a dimensional constant. Thus, by Harnack's inequality, the values of the positive harmonic function $u_{x_n} := \partial u/\partial x_n$ on this segment are mutually comparable, with constants depending only on $n$ and $L$. Furthermore, \[ b_n - b_n/2 \le u(x)-u(x',t_n) = \int_{t_n}^{L-1/2} u_{x_n}(x',t) dt. \] Therefore, the minimum of $u_{x_n}$ on this segment is bounded below by a constant $c_1>0$, depending only on $n$ and $L$. In particular, \[ u(x)-v(x) = \int_{L-1/2 - \epsilon}^{L-1/2} u_{x_n}(x',t) dt \ge c_1 \epsilon. \] \medskip \noindent \textbf{Step 2: Construction of a family of supersolutions.} \medskip The hypothesis of Theorem \ref{main} implies (by the construction of P. W. Jones \cite{J}) that there is an NTA domain between any pair of cylinders $\mathcal{C}(r_1,L-a_1)$ and $\mathcal{C}(r_2,L-a_2)$ for $r_1 < r_2 \leq 9/10$ and $a_1 > a_2 \geq 1/2$. Thus the boundary Harnack inequality, Theorem \ref{BHP}, has the following corollary. \begin{cor}\label{corharnack} Let $u$ be as in Theorem \ref{main} and let $r_1 < r_2 \le 9/10$ and $a_1 > a_2 \ge 1/2$. Then there is a constant $A$ depending on $L$, the NTA constants of $\mathcal D$, $r_2-r_1>0$, and $a_1-a_2>0$ such that if $h_1$ and $h_2$ are positive harmonic functions on $\mathcal{C}(r_2,L-a_2)\cap \mathcal{C}_L^+(u)$, vanishing on $\partial D \cap \mathcal{C}(r_2,L-a_2)$ then \[ h_1(x)/h_2(x) \le Ah_1(y)/h_2(y) \] for every $x$ and $y$ in $\mathcal{C}(r_1,L-a_1)\cap \mathcal{C}_L^+(u)$. 
\end{cor} In this step we start our analysis on the cylinder $\mathcal{C}(9/10,L-1/2)$ which by abuse of notation we denote by $\mathcal{C}_1$. Then we restrict to smaller cylinders $\mathcal{C}_2, \mathcal{C}_3$ with bases $B_{8/10}$ and $B_{7/10}$ respectively, height $M$ with $L-1< M < L-1/2$, and $\mathcal{C}_3 \subset \subset \mathcal{C}_2 \subset \subset \mathcal{C}_1$. Let $w$ be the harmonic function in $\mathcal{C}_1^+(u),$ satisfying the following boundary conditions: \begin{align} &w=0, \ \ \ \textrm{on $F(u)$},\\ &v < w \leq u, \ \ \ \textrm{on $ \overline{\mathcal{C}^+_{1}(u)} \cap \partial \mathcal{C}_{1}$,}\\\label{v+<w<u-} &v + \frac{c_1}{4}\epsilon < w < u - \frac{c_1}{4}\epsilon, \ \ \ \textrm{on $B_{9/10} \times \{L-1/2\}$}. \end{align} Notice that \eqref{v+<w<u-} can be achieved because of the gap \eqref{v<u} between $u$ and $v$. Since $v$ is subharmonic and $u$ is harmonic in $\mathcal{C}^+_{1}(u)$, the maximum principle implies \begin{equation}\label{w<u}v < w < u \ \ \ \textrm{in $\mathcal{C}^+_{1}(u).$}\end{equation} Moreover, $\mathcal{C}^+_{1}(w)=\mathcal{C}^+_{1}(u),$ and $F(w)=F(u)\cap \mathcal{C}_{1}$. We claim next that in the smaller cylinder $\overline{\mathcal{C}}_2$, \begin{equation}\label{wislip}|\nabla w|(x) \leq C_1, \quad x \in \overline{\mathcal{C}}_2.\end{equation} Define $d(x) = {\rm dist}(x,F(u))$. At points $x \in \overline{\mathcal{C}}_2 \cap \mathcal{C}_1^+(u)$ such that $d(x) \ge 1/10$, this follows from standard elliptic regularity and the fact that $w$ is bounded. On the other hand, at points that are close to $F(u)$, we have that $\mathcal{B}_{d(x)}(x) \subset \mathcal{C}^+_{1}(u)$ and from Lemma \ref{Lipconst}, $$w(x) < u(x) \leq K d(x).$$ A standard argument using rescaling implies the bound \eqref{wislip}. Now, set $h=u-w$. Then $h$ is a positive (see \eqref{w<u}) harmonic function on $\mathcal{C}^+_{1}(u)$ vanishing continuously on $F(u)$. 
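The rescaling argument invoked for \eqref{wislip} is the standard interior gradient estimate for harmonic functions; we record it for completeness. If $x \in \overline{\mathcal{C}}_2$ with $d(x) < 1/10$, then $w$ is harmonic in $\mathcal{B}_{d(x)/2}(x)$, where $0 < w \le u \le \frac{3}{2}K d(x)$, and therefore \[ |\nabla w(x)| \le \frac{2n}{d(x)} \sup_{\mathcal{B}_{d(x)/2}(x)} w \le \frac{2n}{d(x)}\cdot \frac{3}{2}K d(x) = 3nK. \]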
Let $H$ be the harmonic function in the cylinder $B_{9/10} \times (L-1, L-1/2)$, with boundary data $c_1/2$ on the top of the cylinder and vanishing on the remaining part of the boundary. Then, in view of \eqref{v+<w<u-} and the maximum principle, $h \geq \frac{\epsilon}{2} H$. Thus, $h(x_1) \geq c_1\epsilon/8$ at $x_1= (L-1/2-\delta_n)e_n$ for a small dimensional constant $\delta_n >0$. Moreover, by the Lipschitz continuity of $u$ we get that $h(x_1) < (u - v)(x_1) \leq K \epsilon$. Using non-degeneracy and Lipschitz continuity of $u$ we also have that $b_n \leq u (x_1) \leq 2LK$. Thus, Corollary \ref{corharnack} gives \[ c_2 \epsilon u \leq h \leq C_2\epsilon u \ \ \textrm{on $ \overline{\mathcal{C}^+_2(u)}$}. \] The upper bound on $h$ implies \begin{equation}\label{upper}w(x) \geq (1-C_2\epsilon)u(x) \ \ \textrm{on $ \overline{\mathcal{C}^+_2(u)}$},\end{equation} while the lower bound gives \begin{equation}\label{lower} w(x) \leq (1-c_2\epsilon)u(x) \ \ \textrm{on $ \overline{\mathcal{C}^+_2(u)}$}. \end{equation} In particular, if $F(u)$ is smooth around a point $x_0 \in \mathcal{C}_2$ then $|\nabla u|(x_0)= 1,$ which combined with \eqref{lower} gives \begin{equation}\label{nabla}|\nabla w|(x_0) \leq 1 - c_2\epsilon.\end{equation} According to Lemma \ref{full harmonic measure} we then have \begin{equation}\label{nabla2}|\nabla w|\leq 1 - c_2\epsilon \quad \text{$\omega$-almost everywhere on $F(u) \cap \mathcal{C}_2$}.\end{equation} Next we use \eqref{nabla2} to show that, by restricting to the smaller cylinder $\mathcal{C}_3$, we have \begin{equation}\label{strictinterior} |\nabla w| \leq 1-c_2\epsilon + C_3u\ \ \ \textrm{on $\mathcal{C}^+_{3}(u)$}. \end{equation} Let $\tilde{h}$ be the largest harmonic function in $\mathcal{C}^+_{2}(u)$ with $\tilde h \le C_1$ such that \[ \tilde{h}=1-c_2 \epsilon \quad \text{on} \ F(u)\cap \mathcal{C}_2, \] with $C_1$ the constant in \eqref{wislip}. 
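The subharmonicity of $|\nabla w|$ used in the next step is the classical fact that the length of the gradient of a harmonic function is subharmonic: each directional derivative $\partial_e w$ is harmonic, and \[ |\nabla w| = \sup_{|e|=1}\, \partial_e w \] is a supremum of harmonic functions, hence subharmonic in $\mathcal{C}_1^+(u)$.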
Since $|\nabla w|$ is subharmonic and satisfies \eqref{wislip}--\eqref{nabla2}, we get \begin{equation}\label{nablaw<h} |\nabla w| \leq \tilde{h}. \end{equation} On the other hand, $\tilde{h} - (1-c_2\epsilon)$ is a positive harmonic function on $\mathcal{C}^+_{2}(u),$ and it is zero on $F(u)$. Since by non-degeneracy $u$ is bounded below by a dimensional constant on the top of $\mathcal{C}_3$, Corollary \ref{corharnack} gives \[ \tilde{h} -(1-c_2\epsilon) \leq C_3 u \quad \text{on $\mathcal{C}^+_{3}(u).$} \] Combining this inequality with \eqref{nablaw<h} we obtain \eqref{strictinterior}. We now use \eqref{strictinterior} to construct a family of strict supersolutions. Define for $t\ge0$, \[ w_t(x)=w(x)-tg(x), \ \ \ \textrm{$x \in \mathcal{C}_{1}$} \] with \[ g(x) = e^{Ax_n}\phi\left(|x'|\right) \] where $A$ is a positive constant to be chosen later, and $\phi\ge0$ is a smooth bump function such that \[ \phi(r)=\left\{% \begin{array}{ll} 1, & \hbox{if $r < 1/2$} \\ 0, & \hbox{if $r \geq 7/10$.} \\ \end{array}% \right. \] Moreover, we will choose $\phi$ such that $\phi(r)>0$ for $r < 7/10$ and \[ \phi''(r) + \frac{n-2}{r}\phi'(r) \geq 0, \ \ \text{if \quad $6/10 \leq r \leq 7/10$}. \] Indeed, let $\displaystyle \psi(s) = e^{-2n/s}$ for $s>0$ and $\psi(s) = 0$ for $s\le 0$. Then for $0 < s \le 1/2$, \[ \psi''(s) - 2n\psi'(s) = \frac{4n}{s^4}\left(n - s - ns^2\right)e^{-2n/s} \ge 0, \] since $n - s - ns^2 \ge \frac{3n}{4} - \frac{1}{2} > 0$ in this range. Because $(n-2)/r \le 2n$ for $r \ge 1/2$ and $\psi' \ge 0$, the function $\phi_1(r) = \psi(7/10 - r)$ satisfies the differential inequality for $\phi$ above in the range $r \ge 1/2$. Using a partition of unity, $\phi_1$ can be modified without changing its values for $r \ge 6/10$, to obtain a function $\phi$ that is equal to $1$ for $r\le 1/2$. Finally, using the inequalities for $\phi$, \[ \Delta g = A^2 e^{Ax_n}\phi\left(|x'|\right) + e^{Ax_n}\Delta_{x'} \phi\left(|x'|\right)\geq 0 \] as long as $A$ is a sufficiently large dimensional constant. 
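The choice of $A$ can be made explicit (a sketch, with the constants of the construction above). One may arrange the modification of $\phi_1$ so that $\phi \ge \psi(1/10) > 0$ for $r \le 6/10$. Writing $C_n := \max_{1/2 \le r \le 7/10} \left|\phi''(r) + \frac{n-2}{r}\phi'(r)\right|$ and $r = |x'|$, \[ \Delta g = e^{Ax_n}\left(A^2\phi + \phi'' + \frac{n-2}{r}\phi'\right) \ge \begin{cases} e^{Ax_n}\left(A^2\psi(1/10) - C_n\right), & r \le 6/10,\\ 0, & 6/10 \le r \le 7/10, \end{cases} \] where in the second case we used the differential inequality for $\phi$ and $\phi \ge 0$. Hence $A^2 \ge C_n/\psi(1/10)$ suffices.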
Thus, $w_t$ is superharmonic on $\mathcal{C}_{1}^+(w_t).$ Moreover, condition \eqref{strictinterior} together with \eqref{upper} imply that $$|\nabla w_t| \leq |\nabla w| + t|\nabla g| \leq 1-c_2\epsilon +C_4w + t|\nabla g|, \quad \text{on} \ \mathcal{C}_{3}^+(u).$$ In particular, on $F(w_t)\cap \mathcal{C}_{3}$, $t>0$, since $w=tg$ we obtain \[ |\nabla w_t|\leq 1-c_2\epsilon + C_4tg + t|\nabla g|. \] Therefore, for $0<t \leq c_3\epsilon$, with $c_3$ small depending on $c_2$, $C_4$, and $A$, we deduce that \begin{equation}\label{smallerthan1} |\nabla w_t|\leq 1-\frac{c_2}{2}\epsilon \quad \text{on} \ F(w_t)\cap \mathcal{C}_{3}. \end{equation} \medskip \noindent \textbf{Step 3: Comparison.} \medskip Observe that because $g$ vanishes on the ``sides'' we have that \begin{equation}\label{bc1} w_t = w > v \quad \text{on} \ (\partial B_{9/10} \times [-L,L]) \cap \overline{\mathcal{C}_{1}^+(w_t)}, \end{equation} and according to \eqref{v+<w<u-} we have that \begin{equation}\label{bc2} w_t > v \quad \text{on} \ B_{9/10} \times \{L-1/2\} \quad \text{for} \ t \leq \frac{c_1}{4}e^{-A(L-1/2)}\epsilon. \end{equation} Set $c_4 := \min\left\{c_3,\ \frac{c_1}{4}e^{-A(L-1/2)}\right\}$ and let $E= \{t \in [0,c_4\epsilon] : v \leq w_t \ \text{in} \ \overline{\mathcal{C}_{1}}\}$. We claim that $E=[0,c_4\epsilon].$ Indeed, $0 \in E$ and clearly $E$ is closed. We need to show that $E$ is open. Let $t_0 \in E$; then, since $w_{t_0}$ is superharmonic in its positive phase and satisfies \eqref{bc1}--\eqref{bc2}, we only need to show that $w_{t_0}>v=0$ on $F(v)\cap \mathcal{C}_1$. In the case $t_0=0$, $w_0 = w > 0$ on $F(v)$ follows from the assumption that $F(u)$ is a graph in the vertical direction. In fact for all $t$, $w_t= w >0$ on $F(v) \cap (\mathcal C_1 \backslash \mathcal C_3)$ because $g$ is zero there. It remains to rule out the case in which $t_0>0$ and $F(v)$ touches $F(w_{t_0})$ in $\mathcal C_3$, that is, at a point where $g\neq0$. Suppose by contradiction that there is $x_0 \in F(v) \cap F(w_{t_0})\cap \mathcal C_3$, $t_0>0$. 
If $\nabla w_{t_0}(x_0) \neq 0$, then by the implicit function theorem $F(w_{t_0})$ is smooth in a neighborhood of $x_0$ and hence there exists an exterior tangent ball $\mathcal{B}$ at $x_0$ for $F(v)$. Therefore, for $\nu$ the outward unit normal to $\mathcal{B}$ at $x_0$ we have that $$w_{t_0}(x) \geq v(x) = \langle x-x_0,\nu\rangle^+ + o(|x-x_0|)$$ as $x \rightarrow x_0$, contradicting \eqref{smallerthan1}. On the other hand, if $\nabla w_{t_0}(x_0)=0$, then in a small neighborhood $\mathcal{B}_r(x_0)$ we have that $$w_{t_0}(x) \leq Cr^2.$$ However, according to the corkscrew condition, there exists a ball $\mathcal{B}_{\delta r}(y) \subset \mathcal{B}_r(x_0)\cap \mathcal{C}_1^+(v)$, for some small $\delta>0$. By the non-degeneracy of $v$ we then obtain $$\sup_{\mathcal{B}_r(x_0)} v \geq c r,$$ and, since $v \leq w_{t_0} \leq Cr^2$ in $\mathcal{B}_r(x_0)$, we again reach a contradiction for $r$ small. Thus $c_4\epsilon \in E$, and \[ v \leq w_{c_4 \epsilon} \ \ \text{on} \ \ \overline{\mathcal{C}}_1. \] Hence, according to the definition of $g$, \[ \{w \leq c_4 e^{-AL}\epsilon \}\cap \{|x'|< 1/2\} \subset \{v=0\} \cap \{|x'|< 1/2\}. \] Moreover, by \eqref{wislip} $$w \leq C_1 d(x).$$ Thus, $$\{d(x)\leq c_6 \epsilon \}\cap \{|x'|<1/2\} \subset \{v=0\} \cap \{|x'|<1/2\}.$$ This implies the Lipschitz continuity of $F(u) \cap \{|x'|< 1/2\}$ with bound depending only on $L,n$ and the NTA constants. \qed \section{A priori gradient bound for minimal surfaces} In this section we present our proof of Theorem \ref{boundMS}. Recall that $B_r$ denotes an open $(n-1)$-dimensional ball of radius $r$, while $\mathcal{B}_r$ denotes an open $n$-dimensional ball of radius $r$. Our proof is parallel to the one given in the free boundary setting above. One main ingredient, which will allow us to apply our deformation argument, is the (weak) Harnack inequality for solutions to elliptic equations on minimal surfaces due to Bombieri and Giusti \cite{BG}. We recall its statement in the form in which we will use it later in the proof. 
Let $\Delta_S$ denote the Laplace-Beltrami operator on the surface $S$. \begin{thm}\label{weakHarnack} Let $p< \dfrac{n-1}{n-3}$. There is a constant $C(p)<\infty$ and $\beta>0$ depending on dimension such that if $S$ is an area minimizing hypersurface in $\mathcal B_R = \mathcal B_R(x_0)$ and $x_0\in S$ and $v$ is a positive supersolution to the Laplace-Beltrami operator, $\Delta_S v \le 0$, in $\mathcal B_R \cap S$, then \begin{equation} \left(\fint_{\mathcal B_r \cap S}v^p dH_{n-1}\right)^{1/p} \leq C(p) \inf_{\mathcal B_r \cap S} v \end{equation} for all $r \leq \beta R$. \end{thm} \begin{cor}\label{global Harnack}Let $S$ be an oriented surface of least area in $B_1 \times \mathbb{R}\subset \mathbb{R}^{n-1}\times \mathbb{R}$. Assume $\mathcal{S}_{1/2}:=S \cap (B_{1/2} \times \mathbb{R})$ is connected, and $S \subset B_1 \times [-M,M]$. Let $v$ be a positive supersolution to the Laplace-Beltrami operator, $\Delta_S v \le 0$, in $(B_1 \times \mathbb{R}) \cap S$, such that \begin{equation}\label{average}\int_{\mathcal{S}_{1/2}}v dH_{n-1} \geq 1. \end{equation} Then \begin{equation} v \geq c \quad \textrm{on} \quad \mathcal{S}_{1/2}, \end{equation} with $c>0$ depending only on $n$ and $M.$ \end{cor} \begin{proof} Let $\beta$ (small) be the constant in Theorem \ref{weakHarnack}. Decompose $\mathbb R^{n}$ into cubes of side-length $\beta/(20\sqrt{n}).$ For each cube $Q_i$ that intersects $\mathcal{S}_{1/2}$ take a ball $\widetilde{\mathcal{B}}_i \supset Q_i$ with center $x_i$ on $\mathcal{S}_{1/2} \cap Q_i$ and radius $\beta/20$. Clearly, the number $N$ of balls $\widetilde{\mathcal{B}}_i$ that cover $\mathcal{S}_{1/2}$ depends only on $n$ and $M.$ We say that $\widetilde{\mathcal{B}}_i \sim \widetilde{\mathcal{B}}_j$ if there exists a chain of balls $\widetilde{\mathcal{B}}_k$ connecting $\widetilde{\mathcal{B}}_i$ and $\widetilde{\mathcal{B}}_j$ such that consecutive balls intersect. This defines an equivalence relation.
To each equivalence class we can associate the open set which is the union of all the elements in the class. Notice that open sets corresponding to distinct equivalence classes are disjoint. Since $\mathcal{S}_{1/2}$ is connected, we conclude that all the balls belong to the same equivalence class. If $\widetilde{\mathcal{B}}_1$ and $\widetilde{\mathcal{B}}_2$ intersect then they are both contained in $\mathcal{B}_{\beta/2}(x_1)$. Hence applying Theorem \ref{weakHarnack} we obtain \begin{equation}\label{comparable} \int_{\widetilde{\mathcal{B}}_1 \cap S} v dH_{n-1} \leq C_0 \fint_{\mathcal{B}_{\beta/2}(x_1) \cap S} v dH_{n-1} \leq C_1 \inf_{\mathcal{B}_{\beta/2}(x_1) \cap S} v \leq C_2 \int_{\widetilde{\mathcal{B}}_2 \cap S} v dH_{n-1}. \end{equation} In the last inequality we used the well-known fact that \begin{equation}\label{fatness} H_{n-1}(S\cap \mathcal{B}_\rho(x)) \approx \rho^{n-1} \quad \textrm{for all} \quad x \in S. \end{equation} It also follows from \eqref{average} that at least one of the balls, say $\widetilde{\mathcal{B}}_1$, satisfies \begin{equation}\label{average2} \int_{\mathcal{S}_{1/2} \cap \widetilde{\mathcal{B}}_1}v dH_{n-1} \geq 1/N. \end{equation} Combining \eqref{comparable}-\eqref{average2} with the fact that any two balls can be connected by a chain of length at most $N$, we obtain the desired conclusion. \end{proof} \textit{Proof of Theorem $\ref{boundMS}$}. In what follows, the constants $c, c_i, C, C_i$ depend only on $n$ and $M$. Denote by $S$ the graph of $\phi$ over $B_1.$ We present the proof in three steps. \medskip \noindent \textbf{Step 1: Separation on a set of substantial measure.} \medskip Let $\epsilon >0$ and set \[ S_\epsilon:=\{(x,\phi(x)+\epsilon) : x \in B_1 \}.
\] \noindent We will prove that there exists a smoothly bounded, closed set $\tilde{E} \subset B_{1/2}$, whose measure is bounded below by a positive constant independent of $\epsilon$, such that \begin{equation}\label{E} \text{dist}((x,\phi(x)), S_\epsilon) \geq c_0\epsilon \quad \text{for all} \ x\in \tilde{E} + B_\delta, \end{equation} where $\delta>0$ depends on the (a priori) bound on the modulus of continuity of $\nabla \phi$. Let $\eta \in C_0^\infty(B_1)$ be a smooth cut-off function such that $\eta \equiv 1$ on $B_{1/2}$. Then, since $\phi$ satisfies \eqref{MS} we have that $$\int_{B_1}\frac{\nabla \phi \cdot \nabla (\eta^2 \phi) }{\sqrt{1+|\nabla \phi|^2}} dx =0.$$ \noindent Hence, \begin{align*}\int_{B_1}\eta^2 \frac{|\nabla \phi|^2}{\sqrt{1+|\nabla \phi|^2}}\,dx =& - 2\int_{B_1}\phi \eta \frac{\nabla \phi \cdot \nabla \eta}{\sqrt{1+|\nabla \phi|^2}}\,dx \leq \\ & \ 2\left(\int_{B_1}\frac{\eta^2 |\nabla \phi|^2}{\sqrt{1+|\nabla \phi|^2}}\,dx\right)^{1/2}\left(\int_{B_1}\frac{\phi^2 |\nabla \eta|^2}{\sqrt{1+|\nabla \phi|^2}}\,dx\right)^{1/2}. \end{align*} \noindent Thus, \begin{equation*}\int_{B_1}\eta^2 \frac{|\nabla \phi|^2}{\sqrt{1+|\nabla \phi|^2}} dx \leq 4 \int_{B_1} \phi^2 \frac{|\nabla \eta|^2}{\sqrt{1+|\nabla \phi|^2}} dx\leq C M^2. \end{equation*} \smallskip \noindent Since $\eta \equiv 1$ on $B_{1/2}$ we then get \begin{equation} \int_{B_{1/2}} |\nabla \phi|\, dx \leq C_0, \end{equation} with $C_0$ depending on $M$ and $n$ only.
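For completeness, the passage from the weighted energy estimate to the last display rests on the elementary pointwise inequality (a small completing step; here $s$ stands for $|\nabla \phi|$) \[ s \leq 1 + \frac{s^2}{\sqrt{1+s^2}}, \qquad s \geq 0, \] which follows from $\sqrt{1+s^2} \leq 1+s$. Hence \[ \int_{B_{1/2}} |\nabla \phi|\, dx \leq |B_{1/2}| + \int_{B_{1/2}} \frac{|\nabla \phi|^2}{\sqrt{1+|\nabla \phi|^2}}\, dx \leq C_0. \]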
Hence, by Chebyshev's inequality (with $C_1 = 2C_0/|B_{1/2}|$), $$|\{x \in B_{1/2} : |\nabla \phi| < C_1\}| \geq |B_{1/2}|/2.$$ Since $\phi$ is smooth, there is a closed, smoothly bounded set \[ \tilde E \supset \{x \in B_{1/2} : |\nabla \phi| < C_1\} \] and $\delta >0$ sufficiently small depending on the modulus of continuity of $\nabla \phi$ such that \[ \tilde E + B_\delta \subset\{x \in B_{1/2} : |\nabla \phi|^2 \leq C_1^2 + 1\}. \] This implies the desired claim \eqref{E}, for small enough $\epsilon$ and $\delta$, depending on the smoothness of $\phi$. In what follows we denote $E = \{(x,\phi(x)) : x\in \tilde{E}\}.$ Clearly, $H_{n-1}(E) \geq |B_{1/2}|/2.$ \medskip \noindent \textbf{Step 2: Construction of a family of subsolutions}. \medskip For the time being let $S$ be any smooth surface. Denote by $H(P, S)$ the mean curvature of $S$ at a point $P \in S$ (i.e., the trace of the second fundamental form of $S$ at $P$). Assume that $S$ is a smooth graph over $B_1$, i.e. $S= \{(x, \phi(x)) : x \in B_1\}$, and let $w$ be a $C^2$ non-negative function on $S$. Consider the surface $S_{t,\nu}:=S+tw\nu$ obtained by deforming $S$ along the upward unit normal to $S$, that is $$S_{t,\nu} = \{(x,\phi(x))+ tw(x,\phi(x))\nu_x : x \in B_1\},$$ with $$\nu_x = \dfrac{(-\nabla \phi(x), 1)}{\sqrt{1+|\nabla \phi(x) |^2}}.$$ Then, for $t$ small enough, $S_{t,\nu}$ is also a graph and one can compute (see for example \cite{Ko}) \begin{align}\label{korevar} &H(P_t, S_{t,\nu}) = H(P, S) + t(\Delta_{S} w(P) + |A|_{S}^2 w(P)) + O(t^2), \\ &P:= (x,\phi(x)), P_t:=(x,\phi(x))+ tw(x,\phi(x))\nu_x, \end{align} \smallskip \noindent where $|A|_S$ is the norm of the second fundamental form of $S$. (The $O(t^2)$ term depends at most on the third derivatives of $\phi$ and on the second derivatives of $w$.) Applying formula \eqref{korevar} to our minimal surface $S$ we find that \begin{equation}\label{korevar2} H(P_t, S_{t,\nu}) = t(\Delta_{S} w(P) + |A|_{S}^2 w(P)) + O(t^2).
\end{equation} In order to run a continuity argument (as in the proof of Theorem \ref{main}), we wish to use formula \eqref{korevar2} to produce a family of surfaces $S_{t,\nu} $ which are strict subsolutions to the minimal surface equation i.e. $H(\cdot, S_{t,\nu}) >0$ at least outside $E_{t,\nu}:= E+ tw\nu$, with $E$ the set from the previous step. Towards this aim we prove the following claim. \medskip \noindent \textit{Claim.} There exists a function $w$ defined on $S$ such that \begin{align*} & \Delta_S w + |A|_S^2 w > 0 \quad \text{on} \ S \setminus E,\\ & w(x,\phi(x)) =1 \quad \text{on} \ \tilde{E},\\ & w(x,\phi(x))=0 \quad \text{on} \ \partial B_1. \end{align*} \noindent Moreover \begin{equation}\label{lowerbound} w(x,\phi(x)) \geq c_0 >0 \quad \text{on} \ B_{1/2}, \end{equation} with $c_0$ depending only on $n, M$ and $w \in C^2(\overline{S\setminus E})$ with $C^2$ bounds depending on $S$ and $E$. \noindent \textit{Proof of the claim.} Let $w_1$ be the solution to the following boundary value problem, \begin{align*} & \Delta_S w_1 = 0 \quad \text{on} \ S \setminus E,\\ & w_1(x,\phi(x)) =1 \quad \text{on} \ \partial \tilde{E},\\ & w_1(x,\phi(x)) =0 \quad \text{on} \ \partial B_1. \end{align*} Note that the solution exists and is smooth in its domain of definition because $\tilde E$ is smoothly bounded. Extend $w_1 =1$ on $\tilde E.$ Then $\Delta_S w_1 \leq 0$ on $S$. Moreover, according to Step 1, we have that (using the notation of Corollary \ref{global Harnack}) \[ \int_{\mathcal{S}_{1/2}} w_1 dH_{n-1} \geq \int_{E} w_1 dH_{n-1} = H_{n-1}(E) \geq |B_{1/2}|/2. 
\] Hence we can apply Corollary \ref{global Harnack} to conclude that \begin{equation}\label{w1c} w_1 \geq c \quad \text{on} \quad \mathcal{S}_{1/2}=S \cap (B_{1/2} \times \mathbb{R}).\end{equation} Now, let $w_0$ be the solution to the following problem: \begin{align*} & \Delta_S w_0 = 1 \quad \text{on} \ S \setminus E,\\ & w_0(x,\phi(x)) =0 \quad \text{on} \ \tilde{E},\\ & w_0(x,\phi(x))=0 \quad \text{on} \ \partial B_1. \end{align*} Set $w=w_1 + \delta_1 w_0$. Clearly, $|\nabla w_0|$ is bounded (by a constant depending on $S$ and $E$). Applying Hopf's lemma to $w_1$ on $(\partial B_1 \times \mathbb{R}) \cap S$, we obtain that, for $\delta_1$ sufficiently small, $w > 0$ in a neighborhood of $(\partial B_1 \times \mathbb{R}) \cap S$ and hence (for a possibly smaller $\delta_1$) $w>0$ on $S$. Moreover, in view of \eqref{w1c}, we can choose $\delta_1$ so that $w$ satisfies \eqref{lowerbound}. Thus, $w$ has all the required properties. \smallskip In view of the claim, according to formula \eqref{korevar2}, if $t$ is sufficiently small, $0<t \leq \epsilon_0$, then $$H(\cdot, S_{t,\nu}) >0, \quad \text{on $S_{t,\nu} \setminus E_{t,\nu}$}.$$ \medskip \noindent \textbf{Step 3: Comparison.} \medskip We show that for $0 \leq t \leq c_0\epsilon \leq \epsilon_0$, the surface $S_{t,\nu}$ is below the surface $S_\epsilon.$ Indeed, this is true at $t=0$. The first touching point cannot occur at some $x \in \partial B_1$, as our deformation leaves $\partial B_1$ fixed. Moreover, for $t$ small enough, no touching can occur on $E_{t,\nu}$ in view of \eqref{E} in Step 1. Finally $S_{t,\nu}$ is a strict subsolution on $S_{t,\nu} \setminus E_{t,\nu}$, hence no touching can occur there either.
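For the reader's convenience, we sketch the comparison-principle step behind the last assertion (a standard argument, not spelled out in detail here). If $P$ were a first touching point of $S_{t,\nu}$ with $S_\epsilon$ lying in $S_{t,\nu}\setminus E_{t,\nu}$, then near $P$ both surfaces are graphs, $S_{t,\nu}$ stays below $S_\epsilon$, and the two surfaces are tangent at $P$; comparing the second derivatives of the two graphs at $P$ gives \[ 0 < H(P, S_{t,\nu}) \leq H(P, S_\epsilon) = 0, \] since $S_\epsilon$ is a vertical translate of the minimal graph $S$. This contradiction shows that no touching occurs on $S_{t,\nu}\setminus E_{t,\nu}$.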
Since $w$ satisfies \eqref{lowerbound}, we can then conclude that for all sufficiently small $\epsilon,$ (recall $S_{c_0\epsilon,\nu}=S+c_0\epsilon w\nu $) $$\text{dist}((x,\phi(x)), S_\epsilon) \geq \text{dist}((x,\phi(x)), S_{c_0\epsilon,\nu}) \geq c_1 \epsilon \quad \text{on} \quad B_{1/2}$$ as desired. Note that although the size of $\epsilon_0$ depends on the a priori bound on $\nabla \phi$, the constants $c_0>0$ and $c_1>0$ do not. \qed \section{Final Remarks} The analogy between the two gradient bound proofs presented here goes farther. Not only does each proof depend crucially on a scale-invariant Harnack inequality for the second variation operator of the associated functional, but also the proofs of these two Harnack estimates follow a roughly parallel course. The key ingredient of our proof of the gradient bound for minimal surface graphs is the Harnack inequality for the Laplace-Beltrami operator on the surface. This Harnack inequality permits us to convert a gradient bound on average (separation on a set of substantial measure) to a gradient bound everywhere (separation everywhere). The way this Harnack inequality is proved by Bombieri and Giusti is as follows. A monotonicity formula yields (via a limiting cone argument) a measure-theoretic form of connectivity. This, in turn, implies another scale-invariant form of connectivity, an isoperimetric, or Poincar\'e-type, inequality. One then deduces a Harnack inequality for the Laplace-Beltrami operator on the minimal surface by a Moser-type argument. In the free boundary case, a monotonicity formula due to Alt, Caffarelli and Friedman yields (by arguments of \cite{ACS} and \cite{D1}) the NTA property, a scale-invariant form of connectivity. A theorem of \cite{JK} says that the NTA property implies a boundary Harnack inequality. 
The boundary Harnack inequality is used to show that separation of level surfaces of the solution function $u$ at distances far from the free boundary implies a similar separation all the way up to the free boundary. The parallel between these two Harnack inequalities leads to the hope that there is a Harnack estimate for the second variation operator associated to minimizers of functionals of the form \[ \int |\nabla v|^2 + F(v) \] for wider classes of functions $F$.
\section{Introduction} Modified gravity (see \cite{Nojiri:2010wj}-\cite{Bamba:2014eea} for recent reviews) is the most popular way to address the recent observational data in favor of an accelerating, expanding Universe \cite{obs1}-\cite{obs3}. It is believed that it is able to realize the accelerated expansion of the Universe \cite{Nojiri:2008ku}-\cite{ijgmmp}. To describe dark energy and dark matter, the first simple candidate is a scalar field, which realizes modified gravity through a reconstruction scheme. Scalar fields induce extra degrees of freedom and can cause ghost states, which are physically unacceptable. Sometimes these new extra degrees of freedom break Lorentz symmetry at the Planck scale. In this way they are able to improve the propagators of the free graviton. In the absence of Lorentz symmetry, conformal symmetry is important, and it is believed that \texttt{the small distance structure of canonical quantum gravity can be described using the conformal group} \cite{Hoooft}. Not only in classical general relativity but also in Einstein-Cartan theories with torsion, it is possible to find a consistent Lagrangian of gravity as a gauge theory \cite{Maluf,Hehl,Poplawski,Momeni:2014taa}.\par Recently a new, well-motivated model for gravity has been introduced in which the geometry is Riemannian and which respects conformal symmetry as an internal degree of freedom (not an extra one) \cite{Chamseddine:2013kea}. The model is called Mimetic gravity. In this model it has been shown that gravity can be described by a physical metric $g_{\mu\nu}$ which is a function of a free, massless scalar field $\phi(x^{\alpha})$ and an auxiliary metric $h_{\mu\nu}$. If we apply a generic conformal transformation to the auxiliary metric, the physical metric remains invariant. So the theory is conformally invariant.
By studying the equations of motion for the physical metric and the scalar field we find two equations in which the scalar field has a gradient of unit norm, i.e. $\partial_{\mu}\phi\partial^{\mu}\phi=1$. This constraint ensures that the scalar field is not an extra degree of freedom and consequently not a ghost field. In a cosmological background one can show that dark matter appears as an integration constant. In this sense the model looks like a version of Horava-Lifshitz gravity \cite{Mukohyama:2009mz}. Different aspects of this model have been investigated in the literature \cite{Malaeb:2014vua}-\cite{Deruelle:2014zza}. In particular, it was extended to a self-interacting scalar field \cite{Chamseddine:2014vna}. Our aim in this paper is to generalize the Mimetic idea to a more general setting in which, instead of a scalar field, we have another metric. This second metric need not be Lorentzian, unlike the physical and auxiliary metrics. We show how it is possible to construct a generalized Mimetic gravity using a bi-metric approach in an arbitrary higher dimensional spacetime. The plan of this work is as follows: In Sec. II we construct generalized Mimetic gravity and then derive the equations of motion. In Sec. III we study cosmological solutions of a type of modified mimetic gravity. We show that the model has exact solutions for the Hubble parameter and the scalar field potential. In Sec. IV we investigate the validity of different types of energy conditions. We prove that there exists a range of time in which the system obeys the energy conditions. We summarize and conclude in Sec. V. \section{Construction of Mimetic model of gravity} Let us start from the physical metric $g_{\mu\nu}$: \begin{eqnarray} g_{\mu\nu}=g_{\mu\nu}\Big(h_{\alpha\beta},\phi(x^{\alpha})\Big). \end{eqnarray} Here $h_{\alpha\beta}$ is an auxiliary metric and $\phi(x^{\alpha})\in\mathcal{R}$ is an uncharged scalar field. We assume that the scalar field is frozen under conformal transformations.
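For comparison, we recall the original mimetic ansatz of \cite{Chamseddine:2013kea}, which the construction below generalizes: \begin{eqnarray} g_{\mu\nu}=\Big(h^{\alpha\beta}\partial_{\alpha}\phi\,\partial_{\beta}\phi\Big)h_{\mu\nu}. \end{eqnarray} Under $h_{\mu\nu}\to\Omega^2(x)h_{\mu\nu}$ the prefactor scales as $\Omega^{-2}(x)$ and exactly compensates the scaling of $h_{\mu\nu}$, so $g_{\mu\nu}$ is invariant.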
We suppose that under a conformal transformation the auxiliary metric transforms as $h_{\alpha\beta}\to \hat{h}_{\alpha\beta}=\Omega^2(x)h_{\alpha\beta},\ \ x\equiv x^a $, while the physical metric retains its original form. That is, we require: \begin{eqnarray} g_{\mu\nu}\to\hat{g}_{\mu\nu}=g_{\mu\nu}. \end{eqnarray} One possible form of the metric is the following: \begin{eqnarray} g_{\mu\nu}=\sum_{n=0}^{\infty} c_n\Big[\lambda h^{\alpha\beta}Y_{\alpha\beta}\Big]^nh_{\mu\nu}. \end{eqnarray} Here the $c_n$ are the coefficients of a Taylor series and $\lambda$ is a coupling constant. Under a conformal transformation in a $D$-dimensional spacetime we have: \begin{eqnarray} h_{\mu\nu}\longrightarrow\Omega^2(x)h_{\mu\nu} , \quad (h_{\mu\nu})_{D\times D}. \end{eqnarray} We apply the conformal transformation to find: \begin{eqnarray} \hat{g}_{\mu\nu}=\sum_{n=0}^{\infty} c_n\Big[\lambda\Omega^{-2} h^{\alpha\beta}Y_{\alpha\beta}\Big]^n\Omega^2(x)h_{\mu\nu}=\sum_{n=0}^{\infty} c_n\Big[\lambda h^{\alpha\beta}Y_{\alpha\beta}\Big]^nh_{\mu\nu}\Omega^{2(1-n)}, \end{eqnarray} which is not invariant as it stands. We therefore study the form of the auxiliary metric $h_{\mu\nu}$, with $|h|=\det(h_{\mu\nu})$.
We know that: \begin{eqnarray} \hat{h}_{\mu\nu}=\Omega^2(x) h_{\mu\nu} \longrightarrow |\hat{h}|=\Omega^{2D}(x)|h|. \end{eqnarray} So we find: \begin{eqnarray} \Omega^2(x) =\Big(\frac{|\hat{h}|}{|h|}\Big)^{1/D}. \end{eqnarray} It is then appropriate to write: \begin{eqnarray} \frac{\hat{h}_{\mu\nu}}{|\hat{h}|^{1/D}} = \frac{{h}_{\mu\nu}}{|h|^{1/D}}. \end{eqnarray} Consequently we define a ``new'' auxiliary metric by: \begin{eqnarray} H_{\mu\nu}= \frac{h_{\mu\nu}}{|h|^{1/D}} , \qquad h_{\mu\nu} \equiv (h_{\mu\nu})_{D\times D}. \end{eqnarray} Note that \begin{eqnarray} H_{\mu\nu}\longrightarrow\hat{H}_{\mu\nu}=H_{\mu\nu}, \end{eqnarray} because both $h_{\mu\nu}$ and $|h|^{1/D}$ scale by the same factor $\Omega^2(x)$; likewise $H^{\alpha\beta}\longrightarrow H^{\alpha\beta}$. Thus the auxiliary metric $H_{\mu\nu}$ is conformally invariant. Now we rewrite the physical metric in terms of $H_{\mu\nu}$ as follows: \begin{eqnarray} g_{\mu\nu}=\sum_{n=0}^{\infty} c_n\Big[\lambda H^{\alpha\beta}Y_{\alpha\beta}\Big]^nH_{\mu\nu} \Longrightarrow (g)_{D\times D}=\Big(\sum_{n=0}^{\infty} c_n\Big[\lambda H^{\alpha\beta}Y_{\alpha\beta}\Big]^n\Big)(H)_{D\times D} . \end{eqnarray} To have a closed form for the metric we choose $c_n=\frac{1}{n!}$. So we obtain: \begin{eqnarray} g_{\mu\nu}=\exp\Big[\lambda H^{\alpha\beta}Y_{\alpha\beta}\Big]H_{\mu\nu}\label{g}. \end{eqnarray} We have presented the most general form of metrics which remain invariant under conformal transformations. We mention here that it is possible to calculate the coefficients of the series by taking derivatives with respect to $\lambda$.
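As a direct consistency check (a short verification under the stated assumption that $Y_{\alpha\beta}$ is frozen under conformal transformations), the conformal invariance of (\ref{g}) can be seen explicitly. Under $h_{\mu\nu}\to\Omega^2(x)h_{\mu\nu}$ we have $|h|^{1/D}\to\Omega^2(x)|h|^{1/D}$, so \begin{eqnarray} \hat{H}_{\mu\nu}=\frac{\Omega^2 h_{\mu\nu}}{\Omega^2 |h|^{1/D}}=H_{\mu\nu},\qquad \hat{H}^{\alpha\beta}=H^{\alpha\beta}, \end{eqnarray} and therefore \begin{eqnarray} \hat{g}_{\mu\nu}=\exp\Big[\lambda \hat{H}^{\alpha\beta}Y_{\alpha\beta}\Big]\hat{H}_{\mu\nu}=\exp\Big[\lambda H^{\alpha\beta}Y_{\alpha\beta}\Big]H_{\mu\nu}=g_{\mu\nu}. \end{eqnarray}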
For example we have: \begin{eqnarray} \frac{\delta g_{\mu\nu}}{\delta\lambda}\Big|_{\lambda=0}=(H^{\alpha\beta}Y_{\alpha\beta}) \exp\Big[\lambda H^{\alpha\beta}Y_{\alpha\beta}\Big]H_{\mu\nu}\Big|_{\lambda=0}=(H^{\alpha\beta}Y_{\alpha\beta})H_{\mu\nu}. \end{eqnarray} For the second order we obtain: \begin{eqnarray} \frac{\delta^2 g_{\mu\nu}}{\delta\lambda^2}\Big|_{\lambda=0}=(H^{\alpha\beta}Y_{\alpha\beta})^2H_{\mu\nu}, \end{eqnarray} where $(H^{\alpha\beta}Y_{\alpha\beta})^2H_{\mu\nu}$ defines the second order Mimetic model. We mention here that: \begin{eqnarray} \frac{\delta \hat{g}_{\mu\nu}}{\delta\lambda}\Big|_{\lambda=0}=\frac{\delta g_{\mu\nu}}{\delta\lambda}\Big|_{\lambda=0}. \end{eqnarray} It is possible to write the physical metric in the following series form: \begin{eqnarray} g_{\mu\nu}=H_{\mu\nu}+\lambda(H^{\alpha\beta}Y_{\alpha\beta})H_{\mu\nu}+\frac{\lambda^2}{2!}(H^{\alpha\beta}Y_{\alpha\beta})^2H_{\mu\nu} +...\label{series}. \end{eqnarray} Here the first term is invariant under conformal transformations. The second term is also invariant under such transformations thanks to the self-invariance of the auxiliary metric, $\hat{H}_{\mu\nu}=H_{\mu\nu}$. We summarize the results:\par We have constructed a model for gravity defined by a physical metric $g_{\mu\nu}$ which remains invariant under conformal transformations of an auxiliary metric, as follows: \begin{eqnarray} H_{\alpha\beta}\longrightarrow\hat{H}_{\alpha\beta}=H_{\alpha\beta} \Longleftrightarrow \hat{g}_{\mu\nu} = g_{\mu\nu} \Longleftrightarrow H_{\mu\nu}=\frac{h_{\mu\nu}}{|h|^{1/D}}. \end{eqnarray} It is interesting to mention here that if we choose the metric $Y_{\alpha\beta}=\partial_{\alpha}\phi\partial_{\beta}\phi$ (Lyra's geometry \cite{Lyra}), then the series up to first order in $\lambda$ is the one proposed in \cite{Chamseddine:2013kea}. But the second term is a higher order non-trivial extension of the original Mimetic model.
So, we can say:\par \texttt{Mimetic gravity is a bi-metric model (with two auxiliary metrics): one is Lorentzian and one is Lyra's metric}. \subsection{Equations of the motion} In this section we derive the gravitational field equations for the pair of auxiliary and physical metrics. We start from the common form of the Einstein-Hilbert action in $D\geq4$ dimensions with a matter Lagrangian $\mathcal{L}_m$: \begin{eqnarray} S=\frac{1}{2k^2} \int \Big\{R\Big[g(H_{\alpha\beta},Y_{\alpha\beta})\Big]+\mathcal{L}_m \Big\}\sqrt{-g(H_{\alpha\beta},Y_{\alpha\beta})}d^Dx\label{S}. \end{eqnarray} By varying (\ref{S}) with respect to the physical metric $g_{\alpha\beta}$ we find: \begin{eqnarray} \delta S=\frac{1}{2k^2} \int d^Dx\sqrt{-g(H_{\alpha\beta},Y_{\alpha\beta})}\Big(G^{\alpha\beta}-T^{\alpha\beta}\Big) \delta g_{\alpha\beta}(H_{\mu\nu},Y_{\mu\nu}), \end{eqnarray} where $T^{\alpha\beta}=-\frac{2}{\sqrt{-g}}\frac{\delta S_m}{\delta g_{\alpha\beta}}$ denotes the usual energy-momentum tensor of the matter fields. Since $g=g(H,Y)$, we need to compute $\delta g_{\alpha\beta}$.
Using (\ref{g}) we find: \begin{eqnarray} &&\delta g_{\mu\nu}=\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta})\delta H_{\mu\nu} +\lambda(\delta H^{\alpha\beta})Y_{\alpha\beta}\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta})H_{\mu\nu} \\&&\nonumber+\lambda H^{\alpha\beta}\delta Y_{\alpha\beta}\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta})H_{\mu\nu} =\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta})\Big[\delta H_{\mu\nu}+\lambda(Y_{\alpha\beta}\delta H^{\alpha\beta})H_{\mu\nu}+\lambda H^{\alpha\beta}H_{\mu\nu}\delta Y_{\alpha\beta}\Big]. \end{eqnarray} We know that: \begin{eqnarray} \delta H^{\alpha\beta}=g^{\acute{\mu}\alpha}g^{\acute{\nu}\beta}\delta H_{\acute{\mu}\acute{\nu}}\longrightarrow Y_{\alpha\beta}\delta H^{\alpha\beta}=Y^{\acute{\mu}\acute{\nu}}\delta H_{\acute{\mu}\acute{\nu}}. \end{eqnarray} In fact we also have: \begin{eqnarray} \delta H_{\mu\nu}=\delta_\mu^\alpha\delta_\nu^\beta \delta H_{\alpha\beta}. \end{eqnarray} Using these identities we find the total variation of (\ref{S}): \begin{eqnarray} &&\delta S=\frac{1}{2k^2} \int d^Dx\sqrt{-g}(G^{\mu\nu}-T^{\mu\nu}) \exp(\lambda H^{\alpha\beta}Y_{\alpha\beta}) \Big(\delta H_{\mu\nu}+\\&&\nonumber+\lambda(Y^{\acute{\alpha}\acute{\beta}}\delta H_{\acute{\alpha}\acute{\beta}})H_{\mu\nu}+\lambda H^{\acute{\alpha}\acute{\beta}}H_{\mu\nu}\delta Y_{\acute{\alpha}\acute{\beta}}\Big)=0. \end{eqnarray} Note that: \begin{eqnarray} \Big[\delta H_{\mu\nu}+\lambda(Y^{\alpha\beta}\delta H_{\alpha\beta})H_{\mu\nu}+\lambda H^{\alpha\beta}H_{\mu\nu}\delta Y_{\alpha\beta}\Big]=\notag\\=\delta_\mu^\alpha\delta_\nu^\beta \delta H_{\alpha\beta}+ \lambda Y^{\alpha\beta}\delta H_{\alpha\beta}H_{\mu\nu}+\lambda H^{\alpha\beta}H_{\mu\nu} \delta Y_{\alpha\beta}. \end{eqnarray} So we obtain: \begin{eqnarray} \Big[\delta
H_{\mu\nu}+\lambda(Y^{\alpha\beta}\delta H_{\alpha\beta})H_{\mu\nu}+\lambda H^{\alpha\beta}H_{\mu\nu}\delta Y_{\alpha\beta}\Big]=\notag\\=(\delta_\mu^\alpha\delta_\nu^\beta + \lambda Y^{\alpha\beta}H_{\mu\nu})\delta H_{\alpha\beta}+\lambda H^{\alpha\beta}H_{\mu\nu}\delta Y_{\alpha\beta}. \end{eqnarray} Finally we have: \begin{eqnarray} \delta g_{\mu\nu}=\Big[(\delta_\mu^\alpha\delta_\nu^\beta + \lambda Y^{\alpha\beta}H_{\mu\nu})\delta H_{\alpha\beta}+\lambda H^{\alpha\beta}H_{\mu\nu}\delta Y_{\alpha\beta}\Big]\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta}). \end{eqnarray} Consequently we obtain: \begin{eqnarray} &&\delta S=\frac{1}{2k^2} \int d^Dx\sqrt{-g_D}\exp(\lambda H^{\alpha\beta}Y_{\alpha\beta})\Big[(G^{\mu \nu}-T^{\mu \nu}) (\delta_{\mu}^{\acute{\alpha}} \delta_{\nu}^{\acute{\beta}} +\lambda Y^{\acute{\alpha}\acute{\beta}}H_{\mu\nu})\delta H_{\acute{\alpha}\acute{\beta}}+ \\&&\nonumber+\lambda H^{\acute{\alpha}\acute{\beta}}H_{\mu\nu}\delta Y_{\acute{\alpha}\acute{\beta}}(G^{\mu\nu}-T^{\mu\nu})\Big] \\&&\nonumber\Rightarrow \delta S=\frac{1}{2k^2} \int d^Dx\sqrt{-g_D}\exp[\lambda H^{\alpha\beta}Y_{\alpha\beta}]\Big[\Big((G^{\acute{\alpha}\acute{\beta}}-T^{\acute{\alpha}\acute{\beta}})+\lambda Y^{\acute{\alpha}\acute{\beta}}T^{\mu \nu}H_{\mu\nu}\Big)\delta H_{\acute{\alpha}\acute{\beta}}+ \\&&\nonumber+\lambda H^{\acute{\alpha}\acute{\beta}}\delta Y_{\acute{\alpha}\acute{\beta}}H_{\mu\nu}(G^{\mu\nu}-T^{\mu\nu})\Big]. \end{eqnarray} So, we find the following variations of (\ref{S}) with respect to $\{Y,H\}$: \begin{eqnarray} &&\delta S_{Y}\propto\int d^Dx\sqrt{-g_D}\exp[\lambda H^{\alpha\beta}Y_{\alpha\beta}]H^{\acute{\alpha}\acute{\beta}}H_{\mu\nu}(G^{\mu\nu}-T^{\mu\nu})\delta Y_{\acute{\alpha}\acute{\beta}}, \\&&\nonumber \delta S_{H}\propto\int d^Dx\sqrt{-g_D}\exp[\lambda H^{\alpha\beta}Y_{\alpha\beta}]\delta H_{\acute{\alpha}\acute{\beta}}\Big[(G^{\acute{\alpha}\acute{\beta}}-T^{\acute{\alpha}\acute{\beta}})+\lambda Y^{\acute{\alpha}\acute{\beta}}T^{\mu \nu}H_{\mu\nu}\Big]. \end{eqnarray} Setting these to zero we derive the equations of motion: \begin{eqnarray} &&\frac{\delta S_{H}
}{\delta H_{\acute{\alpha}\acute{\beta}}}=0\longrightarrow(G^{\acute{\alpha}\acute{\beta}}-T^{\acute{\alpha}\acute{\beta}})+\lambda Y^{\acute{\alpha}\acute{\beta}}T^{\mu \nu}H_{\mu\nu}=0,\label{eom1} \\&& \frac{\delta S_{Y} }{\delta Y_{\acute{\alpha}\acute{\beta}}}=0\longrightarrow H^{\acute{\alpha}\acute{\beta}}H_{\mu\nu}(G^{\mu\nu}-T^{\mu\nu})=0\label{eom2}. \end{eqnarray} The trace of (\ref{eom1}) gives us: \begin{eqnarray} (G-T)+ \lambda Y_{\acute{\alpha}}^{\acute{\alpha}} T^{\mu\nu}H_{\mu\nu}=0. \end{eqnarray} Equation (\ref{eom2}) is an orthogonality condition. It is convenient to write (\ref{eom2}) as follows: \begin{eqnarray} &&H_{\alpha}^{\alpha}H_{\mu\nu}G^{\mu\nu}-H_{\alpha}^{\alpha}H_{\mu\nu}T^{\mu\nu}=0,\quad H_{\alpha}^{\alpha}\neq0 \longrightarrow H_{\mu\nu}G^{\mu\nu}=H_{\mu\nu}T^{\mu\nu}. \end{eqnarray} Equations (\ref{eom1}) and (\ref{eom2}) complete the physical description of the model defined by (\ref{S}). \section{Modified Mimetic Cosmology in flat spacetime} This section is devoted to cosmological solutions of the type of MMG proposed in \cite{Chamseddine:2014vna}. Let us start with the FRW metric in the following form: \begin{eqnarray} g_{\mu\nu}=diag\Big(1,-a^2(t)\Sigma_3\Big). \end{eqnarray} Consider the theory of the following form, called MMG: \begin{equation} S= {\displaystyle\int} d^{4}x\sqrt{-g}\left[ -\frac{1}{2}R\left( g_{\mu\nu}\right) +\lambda\left( g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-1\right) -V\left( \phi\right) +\mathcal{L}_{m}\left( g_{\mu\nu},...\right) +\frac{1}{2}\gamma\left( \square\phi\right) ^{2}\right] \ , \end{equation} where $\gamma$ is a constant.
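For completeness, note how the mimetic constraint arises in this action: varying with respect to the Lagrange multiplier $\lambda$ gives \begin{eqnarray} \frac{\delta S}{\delta \lambda}=0 \Longrightarrow g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi=1, \end{eqnarray} so $\lambda$ enforces the unit-norm gradient condition and the scalar field does not introduce an extra propagating degree of freedom.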
Following \cite{Chamseddine:2014vna}, the $0-0$, $0-i\ (i=x,y,z)$ and $i-i$ components of the Einstein equation read: \begin{eqnarray} 0-0:\ \ && 3H^2=V(\phi)-\gamma(1\pm\frac{k}{a})\frac{d}{dt}\Big(H(3\pm\frac{2k}{a})\Big)+2\lambda(1\pm\frac{k}{a})^2+\frac{\gamma H^2}{2}(3\pm \frac{2k}{a})^2,\\ 0-i:\ \ && 2\lambda\dot{\phi}\acute{\phi}-\gamma\acute{\phi}\dot{\chi}=0,\\ i-i:\ \ &&2\dot{H}+3H^2=\frac{2\lambda(k_x)^2}{a^2}-V(\phi)\\&&\nonumber-\gamma\Big[(1\pm\frac{k}{a})\Big((3\pm\frac{2k}{a})\dot{H}\mp\frac{2k}{a}H^2\Big)+\frac{1}{2}H^2(3\pm\frac{2k}{a})^2\Big]. \end{eqnarray} Here $\chi=\Box\phi=(-g)^{-1/2}\partial_{\mu}((-g)^{1/2}\partial^{\mu}\phi)$. Using the $(0-i)$ equation we find: \begin{eqnarray} \ \ &&\acute{\phi}(2\lambda\dot{\phi}-\gamma\dot{\chi})=0,\\ \ \ &&\acute{\phi}=k_x\neq0 \longrightarrow 2\lambda\dot{\phi}-\gamma\dot{\chi}=0. \end{eqnarray} The scalar field constraint reads: \begin{eqnarray} &&g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi=1,\ \ \Big(\frac{\partial\phi}{\partial t}\Big)^2-\frac{1}{a^{2}(t)}(\nabla\phi)^2=1, \end{eqnarray} where $\nabla\phi=(\partial_x\phi,\partial_y\phi,\partial_z\phi)$. We can take $\phi=\phi(t;x^a),\ \ x^a=\{x,y,z\}$, so the exact solution for the normalized scalar field reads: \begin{eqnarray} \phi(t,\vec{x})=t+\vec{k}\cdot\vec{x}\pm|\vec{k}|\int{\frac{dt}{a(t)}}\label{phi}.
\end{eqnarray} Using (\ref{phi}) and the $(0-i)$ equation we obtain: \begin{eqnarray} &&2\lambda\phi(t,\vec{x})- \gamma\chi(t)=f(x) = \text{an arbitrary function}. \end{eqnarray} Now, plugging (\ref{phi}) into the above equation we obtain: \begin{eqnarray} &&2\lambda (t\pm k\int\frac{dt}{a(t)}+\vec{k}\cdot\vec{x})-\gamma H(3\pm\frac{2k}{a})=f(x), \Rightarrow f(x) = 2\lambda \vec{k}\cdot\vec{x},\\ \ \ &&\Rightarrow t\pm k\int\frac{dt}{a(t)}=\frac{\gamma}{2\lambda}(3\pm\frac{2k}{a})H. \end{eqnarray} Using the other FRW equation we obtain: \begin{eqnarray} && 1\pm \frac{k}{a}=\frac{\gamma}{2\lambda}\Big[3\pm\frac{2k}{a}\Big]\dot{H}+\frac{\gamma}{2\lambda}(\mp 2k\frac{H^2}{a}). \end{eqnarray} Finally, it is possible to write the FRW equations using the effective pressure and energy density as follows: \begin{eqnarray} && 3H^2=\kappa^2\rho_{eff},\ \ 2\dot{H}=-\kappa^2(p_{eff}+\rho_{eff}), \end{eqnarray} where the effective quantities are written as follows: \begin{eqnarray} p_{eff}=V(\phi)\Big[\frac{3\pm\frac{2k}{a}}{\frac{\gamma}{2}(1\pm \frac{k}{a})+\frac{1}{3\pm\frac{2k}{a}}-\frac{3}{4k}\pm\frac{\gamma}{2}-\frac{a\gamma}{8k}(3\pm\frac{2k}{a})^2}\Big]-\notag\\-\lambda\Big[\frac{\frac{9a^2}{4k^2\gamma}(1\pm \frac{k}{a})+\frac{2k_x^2}{a^2(3\pm\frac{2k}{a})}}{\frac{\gamma}{2}(1\pm \frac{k}{a})+\frac{1}{3\pm\frac{2k}{a}}-\frac{3}{4k}\pm\frac{\gamma}{2}-\frac{a\gamma}{8k}(3\pm\frac{2k}{a})^2}\Big] \end{eqnarray} \begin{eqnarray} \rho_{eff}=V(\phi)\Big[\frac{3a(3\pm\frac{2k}{a})^2}{2k\gamma(1\pm \frac{k}{a})+\frac{4k}{3\pm\frac{2k}{a}} -3a \pm 2k\gamma-\frac{a\gamma}{2}(3\pm\frac{2k}{a})^2}\Big]-\notag\\-\frac{3a\lambda}{k}\Big[\frac{(1\pm \frac{k}{a})}{4\gamma}-\frac{\frac{9a^2}{4k^2\gamma}(1\pm \frac{k}{a})+\frac{2k_x^2}{a^2(3\pm\frac{2k}{a})}}{2\gamma(1\pm \frac{k}{a})+\frac{4}{3\pm\frac{2k}{a}} -\frac{3a}{k} \pm 2\gamma-\frac{a\gamma}{2k}(3\pm\frac{2k}{a})^2}\Big] \end{eqnarray} In particular, when $k=0$ we have: \begin{eqnarray} p_{eff}=2V(\phi)+2\lambda,\ \ 
\rho_{eff}=\frac{2+6\gamma}{2+3\gamma}V(\phi)+2\lambda. \end{eqnarray} In this case the exact solution of the FRW equations is: \begin{eqnarray} \dot{H}=\frac{2\lambda}{3\gamma}\Longrightarrow H(t)=\frac{2\lambda}{3\gamma}(t-t_0)\Longrightarrow a(t)=\exp\Big(\frac{\lambda}{3\gamma}(t-t_0)^2\Big) \label{k0-eq2}. \end{eqnarray} If we substitute $\dot{H}=\frac{2\lambda}{3\gamma}$ in the $0-0$ equation ($k=0$) we find: \begin{eqnarray} &&3H^2-\frac{9}{2}\gamma H^2=V(t)\Longrightarrow V(t)=3(1-\frac{3}{2}\gamma)H^2=3(1-\frac{3}{2}\gamma)(\frac{2\lambda}{3\gamma})^2(t-t_0)^2. \end{eqnarray} If $\phi=t$ we find: \begin{eqnarray} && V(\phi)=\frac{4\lambda^2}{3\gamma^2}(1-\frac{3\gamma}{2})(\phi-\phi_0)^2. \end{eqnarray} So MMG possesses only a scalar potential of the quadratic form $(\phi-\phi_0)^2$. But for $k\neq0$ the behavior is different; we have: \begin{eqnarray} 0-i:\ \ && \frac{\gamma}{2\lambda}(3\pm\frac{2k}{a})\dot{H}\mp \frac{k}{a}(1+\frac{\gamma H^2}{\lambda})-1=0. \end{eqnarray} This equation can be solved to obtain $H$. There are two branches $\pm$ and consequently a pair of solutions $H_{\pm}(t)$, as follows: \begin{eqnarray} \pm\sqrt{\gamma}\int_{a_0}^{a_{+}(t)}{\frac{3x+2k}{x\sqrt{c_1\gamma x^2-4\lambda k^2+12\lambda x^2\log x-20 k\lambda x}}}\,dx-t-c_2=0,\label{a1}\\ \pm\sqrt{\gamma}\int_{a_0}^{a_{-}(t)}{\frac{3x-2k}{x\sqrt{c'_1\gamma x^2-4\lambda k^2+12\lambda x^2\log x+20 k\lambda x}}}\,dx-t-c'_2=0\label{a2}. \end{eqnarray} Here $\{c_i,c'_i\},\ \ i=1,2,$ are integration constants. Using (\ref{a1},\ref{a2}) we are able to find $\{H_{+},H_{-}\}$.
We find: \begin{eqnarray} H_{+}(t)=\pm\sqrt{\gamma}^{-1}(3a(t)+2k)^{-1}\sqrt{c_1\gamma a(t)^2-4\lambda k^2+12\lambda a(t)^2\log a(t)-20 k\lambda a(t)}\label{H+}\\ H_{-}(t)=\pm\sqrt{\gamma}^{-1}(3a(t)-2k)^{-1}\sqrt{c'_1\gamma a(t)^2-4\lambda k^2+12\lambda a(t)^2\log a(t)+20 k\lambda a(t)}\label{H-} \end{eqnarray} \section{Energy conditions in modified mimetic cosmology} Consider a type of modified gravity in a cosmological background with the following form of effective FRW equations: \begin{eqnarray} 3H^2=\kappa^2\rho_{eff},\ \ 2\dot{H}=-\kappa^2(p_{eff}+\rho_{eff}). \end{eqnarray} Here the effective quantities are functions of geometrical quantities like $\{H,\dot{H},\dots,H^{(n)},a(t),\dots\}$. The null energy condition (NEC), weak energy condition (WEC), strong energy condition (SEC) and the dominant energy condition (DEC) are given by \cite{lobo,anzhong}: \begin{eqnarray} \text{NEC}&\Longleftrightarrow&\rho_{\text{eff}}+p_{\text{eff}}\geq0.\label{n1}\\ \text{WEC}&\Longleftrightarrow& \rho_{\text{eff}}\geq0\ \text{and}\ \rho_{\text{eff}}+p_{\text{eff}}\geq0.\label{n2}\\ \text{SEC}&\Longleftrightarrow& \rho_{\text{eff}}+3p_{\text{eff}}\geq0\ \text{and}\ \rho_{\text{eff}}+p_{\text{eff}}\geq0.\label{n3}\\ \text{DEC}&\Longleftrightarrow& \rho_{\text{eff}}\geq0\ \text{and}\ \rho_{\text{eff}}\pm p_{\text{eff}}\geq0.\label{n4} \end{eqnarray} These energy conditions are independent of any gravity theory; they are purely geometrical \cite{hawking}. Furthermore, with the definitions above, WEC, SEC and DEC each imply NEC, and DEC also implies WEC. In MMG we suppose that the non-exotic ordinary matter satisfies all the energy conditions separately, i.e. $\rho_m\geq0$, $\rho_m\pm p_m\geq0$, $\rho_m+3p_m\geq0$. Different types of modified gravities have been checked for such energy conditions, namely $f(R)$ theory \cite{wang1}, $f(T)$, in which $T$ denotes torsion \cite{Jamil:2012ck}, and others (see \cite{Nojiri:2006ri} for a review). 
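The implication structure among Eqs.~(\ref{n1})--(\ref{n4}) can be checked mechanically; the sketch below scans made-up values of $(\rho_{\rm eff},p_{\rm eff})$ (purely illustrative, not outputs of this model):

```python
def energy_conditions(rho, p):
    # Eqs. (n1)-(n4) for an effective fluid (rho_eff, p_eff)
    nec = rho + p >= 0
    wec = rho >= 0 and nec
    sec = rho + 3 * p >= 0 and nec
    dec = rho >= 0 and rho + p >= 0 and rho - p >= 0
    return {"NEC": nec, "WEC": wec, "SEC": sec, "DEC": dec}

# Scan a grid of illustrative (rho, p) values and verify that, with these
# definitions, WEC, SEC and DEC each imply NEC, and DEC implies WEC.
for rho in [x * 0.5 for x in range(-6, 7)]:
    for p in [x * 0.5 for x in range(-6, 7)]:
        c = energy_conditions(rho, p)
        assert not c["WEC"] or c["NEC"]
        assert not c["SEC"] or c["NEC"]
        assert not c["DEC"] or c["NEC"]
        assert not c["DEC"] or c["WEC"]
```

For instance, a fluid with $\rho=1$, $p=-0.5$ satisfies NEC, WEC and DEC but violates SEC, the familiar dark-energy-like situation.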
Standard energy conditions have been checked for quantum de Sitter cosmology and phantom matter, and in phantom/tachyon inflationary cosmology perturbed by quantum effects \cite{hep-th/0303117,hep-th/0306212}. Furthermore, stronger analyses including non-linear energy conditions have been carried out \cite{Martin-Moruno:2013sfa}. In our work we would also like to check these conditions. \par Let us consider the case $\vec{k}=0$, so that we write: \begin{eqnarray} \phi(t,\vec{x})=t. \end{eqnarray} The effective quantities read: \begin{eqnarray} \ \ && p_{eff} = -2\lambda-m^2t^2\\ \ \ &&\rho_{eff}=\frac{(4-2\gamma)\lambda+(1-3\gamma)m^2t^2}{2-3\gamma} \end{eqnarray} The energy conditions read: \begin{eqnarray} && NEC \Longleftrightarrow \frac{4\gamma\lambda-m^2t^2}{2-3\gamma} \geq0.\\ \ \ && WEC \Longleftrightarrow \frac{(4-2\gamma)\lambda+(1-3\gamma)m^2t^2}{2-3\gamma}\geq0 \ \ \& \ \ \frac{4\gamma\lambda-m^2t^2}{2-3\gamma}\geq0.\\ \ \ && SEC \Longleftrightarrow \frac{(-8+16\gamma)\lambda+(-5+6\gamma)m^2t^2}{2-3\gamma}\geq0 \ \ \& \ \ \frac{4\gamma\lambda-m^2t^2}{2-3\gamma}\geq0.\\ \ \ && DEC \Longleftrightarrow \frac{(4-2\gamma)\lambda+(1-3\gamma)m^2t^2}{2-3\gamma}\geq0 \ \ \& \ \ (\frac{4-2\gamma}{2-3\gamma}\pm2)\lambda+(\frac{1-3\gamma}{2-3\gamma}\pm1)m^2t^2 \geq0. \end{eqnarray} In the case of $2-3\gamma>0$ we conclude that all the energy conditions are satisfied for time scales no longer than $t_c$, where \begin{eqnarray} t_c\leq\frac{1}{m}\sqrt{4\gamma\lambda}. 
\end{eqnarray} \par Now we consider the case of $\gamma> 2/3$, so we obtain: \begin{eqnarray} \ \ && NEC \Longleftrightarrow \texttt{is satisfied}.\\ \ \ && WEC \Longleftrightarrow \texttt{is satisfied when}\ \ t\leq t^{*}_1,\ \ t^{*}_1=\frac{1}{m}\sqrt{\frac{4-2\gamma}{1-3\gamma}\lambda}.\\ \ \ && SEC \Longleftrightarrow \texttt{is satisfied when} \ \ t\leq t^{*}_2,\ \ t^{*}_2=\frac{2}{m}\sqrt{\frac{16\gamma -12}{6\gamma -5}\lambda}.\\ \ \ && DEC \Longleftrightarrow \texttt{is satisfied when}\ \ t\leq t^{*}_{3},\ \ t^{*}_3=\frac{1}{m}\sqrt{\frac{8-8\gamma}{3-6\gamma}\lambda}. \end{eqnarray} The NEC is never violated. The remaining conditions can be preserved when \begin{eqnarray} t\leq \texttt{Max}\{ t^{*}_1, t^{*}_2,t^{*}_{3}\}. \end{eqnarray} So, we should evaluate the following quantity for fixed $\{m,\lambda>0\}$: \begin{eqnarray} T_c=\texttt{Max}\{ \sqrt{\frac{16(4\gamma -3)}{6\gamma -5}},\sqrt{\frac{2(2-\gamma)}{1-3\gamma}} ,\sqrt{\frac{8(1-\gamma)}{3(1-2\gamma)}}\}_{\gamma> 2/3}=\sqrt{\frac{2(2-\gamma)}{1-3\gamma}}. \end{eqnarray} So, for time scales shorter than $T_c$ all the energy conditions are satisfied. \section{Final remarks} There are different types of modified gravity theories, like $f(R)$, $f(T)$, $f(R,G)$ and others. In all these theories there exist one or more extra degrees of freedom. Only a few of these models are ghost-free. In particular, models with a single scalar field or with multiple scalar fields are considered as potentially interesting candidates for dark sectors like dark energy or dark matter. In this work we have been motivated by a very recent proposal of gravity as a mimetic model with conformal symmetry (internal symmetry). We proposed a generalized model for gravity at large scales. In our proposal, the metric is a function of an auxiliary metric and a Lyra metric. When we apply the conformal symmetry on the auxiliary metric we find that the physical metric is invariant. 
The gravitational action is obtained by replacing the Ricci scalar of the physical metric. But to find the equations of motion we should take into account the total variation of the metric. We find two independent equations of motion for the auxiliary and Lyra metrics. If we specify Lyra's metric as the bilinear combination of gradients of a scalar field, $Y_{\alpha\beta}=\partial_{\alpha}\phi\partial_{\beta}\phi$, we find that the scalar field must have a unit norm. So, with a fixed physical metric, the scalar field has a definite form. There is no additional second-order differential equation for the scalar field to specify its dynamics. Furthermore, we investigate some particular solutions of a type of mimetic gravity for cosmology. We show that for a homogeneous scalar field the scalar potential is a quadratic function of $\phi$. In this case the Hubble parameter is linear as a function of time. For a slightly inhomogeneous scalar field, we obtain a pair of branches for the Hubble parameter. The form of the scalar potential is determined from the FRW equations. Later we investigate the validity of the energy conditions in this kind of modified gravity. We show that it is possible to rewrite the pair of FRW equations in an effective form. Using the effective energy density and pressure, we study the energy conditions. We showed that for time scales shorter than a certain time the system respects all the energy conditions. For longer times these energy conditions are violated. This suggests that the gravitational system undergoes a phase transition on cosmological scales. \subsection{Acknowledgement} D. Momeni would like to thank the kind hospitality of the Abdus Salam ICTP, Trieste, Italy, where part of this work was completed. We would like to thank R. da Rocha and S. D. Odintsov for useful comments, and also the anonymous reviewer for enlightening comments related to this work.
\section{Introduction} Estimation of the parameters of stellar models is a challenging task. The increasing precision achieved in the determination of atmospheric parameters, coupled with the fast-growing amount of available seismic and interferometric data, has made it possible to improve the constraints on stellar models. However, it is also important to improve the methods for stellar parameter estimation. Stellar models are known to be highly non-linear. Therefore, optimisation methods may fail in computing the optimum parameters. Moreover, it is usually difficult to associate a confidence level with the inferred stellar parameters. We consider here the use of Monte Carlo Markov Chain (MCMC) algorithms to address this problem. They offer the advantage of being efficient for non-linear models and, when combined with statistical inference, they allow the parameters and the associated confidence levels to be estimated jointly. \section{Principle of the MCMC algorithm} \begin{table*}[t!] \begin{center} \caption{Results of the MCMC simulations for {\acena}. For each run the first row gives the mean value and the corresponding 1-$\sigma$ error bar. The second row gives the parameter value estimated from the maximum {\it a posteriori} (MAP, in brackets). } \label{estim} \begin{tabular}{lcccc} \hline Run \#& $M$ ({\msol})&$\sigma_M$ ({\msol})& $t_{\star}$ (Gyr) & $\sigma_{t_{\star}}$ (Gyr)\\ \hline 1& 1.122 & 0.019 & 5.59 & 1.13\\ &(1.119)&&(5.80)&\\ 2& 1.107 & 0.007 & 6.52 & 0.45\\ &(1.106)&&(6.55)&\\ 3& 1.117 & 0.014 & 5.91 & 0.51\\ &(1.116)&&(5.97)&\\ \hline \end{tabular} \end{center} \end{table*} Let ${\mathbf{\theta}}$ and $\mathbf{X}$ be the vectors collecting respectively the model parameters and the observational data. 
Our probabilistic approach aims at estimating the posterior probability distribution (PPD) of the model parameters, conditional on the available data for the star, which is given by Bayes's formula : \begin{equation}\label{bayes} \pi({\mathbf{\theta}}|\mathbf{X}) = \frac{f(\mathbf{X}|{\mathbf{\theta}})\pi({\mathbf{\theta}})}{K}, \end{equation} with $\pi({\mathbf{\theta}}|\mathbf{X})$ the PPD, $f(\mathbf{X}|{\mathbf{\theta}})$ the likelihood, $\pi({\mathbf{\theta}})$ a given prior density function for the model parameters and $K$ a normalisation constant. Assuming that the observations are independent, we used a Gaussian likelihood: \begin{equation}{\label{likeli}} f(\mathbf{X}|{\mathbf{\theta}}) \propto \exp \left( - \frac{1}{2}\displaystyle\sum_{i=1}^{N} \left[ \frac{X_i^{th}({\mathbf{\theta}})-X_i}{\sigma_i} \right]^2 \right), \end{equation} with $X_i^{\mathrm{th}}({\mathbf{\theta}})$ the output of the model corresponding to the $i$-th component of $\mathbf{X}$, $\sigma_i$ the standard deviation, chosen as the $1\sigma$-error bar on the measurement and $N$ the number of measurements. The idea behind MCMC algorithms is to generate samples distributed according to a distribution of interest, here $\pi({\mathbf{\theta}} | \mathbf{X})$. Since these distributions are usually complex, samples are generated using simpler ones, called instrumental distributions. The useful property of an MCMC algorithm is that when it reaches convergence it generates sets of parameters according to the target PPD. The most general form of MCMC algorithms, the one we used in this work, is the Metropolis-Hastings algorithm \citep{Metropolis53}. At iteration $t$, a new candidate $\theta^\star$ is sampled from a given instrumental distribution $q(\theta^\star|\theta^{t-1})$, and is chosen as $\theta^t = \theta^\star$ with some acceptance probability depending on $q$, $\pi$, $\theta^{t-1}$ and $\theta^\star$. 
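A minimal sketch of the Metropolis-Hastings loop for a toy one-parameter model (a single observable with $X^{th}(\theta)=\theta$, standing in for the stellar evolution code; for simplicity a single Gaussian instrumental distribution is used instead of the three-component mixture adopted in this work; all numbers are illustrative):

```python
import math
import random

random.seed(1)

# Toy data: one observable X = 3.0 with a 1-sigma error bar of 0.5.
X_obs, sigma = 3.0, 0.5

def log_likelihood(theta):
    # Gaussian likelihood of Eq. (2), up to an additive constant
    return -0.5 * ((theta - X_obs) / sigma) ** 2

def metropolis_hastings(n_steps, step=0.4, theta0=0.0):
    theta, samples = theta0, []
    for _ in range(n_steps):
        # symmetric Gaussian instrumental distribution q(theta*|theta)
        theta_star = random.gauss(theta, step)
        log_alpha = log_likelihood(theta_star) - log_likelihood(theta)
        if random.random() < math.exp(min(0.0, log_alpha)):  # accept/reject
            theta = theta_star
        samples.append(theta)
    return samples

chain = metropolis_hastings(20000)
burned = chain[5000:]  # discard burn-in
mean = sum(burned) / len(burned)
assert abs(mean - X_obs) < 0.1  # posterior mean ~ X_obs with a flat prior
```

At convergence the retained samples are distributed according to the target PPD, so the marginal mean and standard deviation can be read off directly from the chain.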
Distribution $q$ has to be chosen carefully, so that it can cover the whole parameter space and also represent the true posterior distribution, in order to achieve a satisfactory acceptance rate. To this end, we chose a mixture of three distributions: a uniform and a Gaussian distribution with large variance allow scanning of the entire parameter space, while a Gaussian distribution with small variance is used for quick local refinements. \section{A test case: $\alpha$ Cen A} Since it belongs to the closest stellar system to the Sun, {\acena} has been extensively observed. Effective temperature and luminosity, $T_{\mathrm{eff}}=5810\pm50$~K and $L/L_{\odot}=1.522\pm0.030$ \citep{Eggenberger04}, radius, $R/R_{\odot}=1.224\pm0.003$ \citep{Kervella03} and seismic data are available and, because it is part of a binary, so is its mass, $M/M_{\odot} = 1.105\pm0.007$ \citep{Pourbaix02}. In the following we limit ourselves to the non-seismic constraints. We used the stellar evolution code ASTEC \citep[][2007]{JCD82b} to compute $\mathbf{X}({\mathbf{\theta}})$. The estimated parameters are the stellar mass, $M$, and age, $t_{\star}$ (${\mathbf{\theta}}=\{ M, t_{\star}\}$). By neglecting diffusion and mixing processes other than convection, we can fix the metallicity ($Z=0.027$). The mixing-length parameter is assumed to be solar. We present in Table~\ref{estim} and Fig.~\ref{marge}\ the results of three MCMC simulations using different observational constraints and priors. In run~1 we retained $\mathbf{X}= \{ L,T_{\mathrm{eff}}\}$ and uniform priors were used on $M$ and $t_{\star}$. We used the same constraints in run~2 but the uniform prior on the mass was replaced by a Gaussian prior, $\pi(M) \propto \exp \left( - [M/M_{\odot}-1.105]^2/9.8\times 10^{-5} \right)$. Run~3 aims at testing the effect of precise radius measurements on the stellar parameters; we thus chose $\mathbf{X} = \{ L,T_{\mathrm{eff}},R \}$ and used uniform priors. \begin{figure*}[t!] 
\center \resizebox{0.90\hsize}{!}{\includegraphics[clip=true]{acena_marg_dists_3.eps}} \caption{\footnotesize Marginal distributions obtained for the mass, $\pi(M|\mathbf{X})$ (upper row), and the age, $\pi(t_{\star}|\mathbf{X})$ (lower row), of {\acena}. The columns correspond, from left to right, to runs 1 to 3 (see text for details).} \label{marge} \end{figure*} Table~\ref{estim} gives the inferred values for $M$ and $t_{\star}$. We present the mean value and the corresponding standard deviation, derived from the marginal distributions $\pi(M|\mathbf{X})$ and $\pi(t_{\star} | \mathbf{X})$ displayed in Fig.~\ref{marge}. The value estimated as the maximum {\it a posteriori} is also given. We can see immediately that there is a good agreement between the estimated values, which underlines the consistency of the independent observational constraints. We observe a small discrepancy between the mean and MAP values obtained from run~1. This can be explained by the fact that convective cores start to appear in models with masses $\gtrsim 1.14$~{\msol}. Non-linear effects associated with convective cores will, as a general trend, lead to a wider range of stellar parameters being able to reproduce the observations \citep[see, e.g.,][for an illustration of this phenomenon]{Jorgensen05}, and hence to be accepted by the MCMC algorithm. Therefore the difference could be explained this way: by using the MAP value one selects only the model that best reproduces the observations, whereas the mean value accounts for all accepted models. It is worth noting that this discrepancy becomes extremely small for run~3 and almost disappears for run~2, during which no models with convective cores were accepted. \bibliographystyle{aa} \input{Cefalu07_proc.bbl} \end{document}
\section{Introduction}\label{Intro} \begin{figure}[h!]\begin{center} \includegraphics[scale=0.755]{fig1a.eps} \includegraphics[scale=0.411]{fig1b.eps} \caption{ Images of the group SL2S\,J02140-0535 at $z_{\rm spec}=0.44$. \emph{Left:} composite HST/ACS F814, F606, F475 color image ($22\arcsec \times 22\arcsec$ = 125 $\times$ 125 kpc$^{2}$) showing the central region of the group \citepalias[from][]{Verdugo2011}. \emph{Right:} composite WIRCam $J, K_s$ color image ($22\arcsec \times 22\arcsec$).} \label{presentlens} \end{center} \end{figure} The Universe has evolved into the filamentary and clumpy structures \citep[dubbed the cosmic web,][]{Bond1996} that are observed in large redshift surveys \citep[e.g.,][]{Colless2001}. Massive and rich galaxy clusters are located at the nodes of this cosmic web, being fed by accretion of individual galaxies and groups \citep[e.g.,][]{Frenk1996,Springel2005,Jauzac2012}. Galaxy clusters are the most massive gravitationally bound systems in the Universe, thus constituting some of the most important astrophysical objects for constraining cosmological parameters \citep[see for example][]{Allen2011}. Furthermore, they provide information about galaxy evolution \citep[e.g.,][]{Postman2005}. Galaxy groups are also important cosmological probes \citep[e.g.,][]{Mulchaey2000,EkeVR2004}, not only because they are tracers of the large-scale structure of the Universe, but also because they probe the environmental dependence of galaxy properties, the galactic content of dark matter haloes, and the clustering of galaxies \citep{Eke2004}. Although there is no clear boundary in mass between groups of galaxies and clusters of galaxies \citep[e.g.,][]{Tully2014}, it is commonly assumed that groups of galaxies lie in the intermediate mass range between large elliptical galaxies and galaxy clusters (i.e., masses between $\sim 10^{13}$M$_{\sun}$\, and $\sim 10^{14}$M$_{\sun}$). 
The mass distribution in both galaxy groups and clusters has been studied extensively using different methods, such as the radial distribution of the gas through X-ray emission \citep[e.g.,][and references therein]{Sun2012,Ettori2013}, the analysis of galaxy-based techniques that employ the positions, velocities and colors of the galaxies \citep[see][for a comparison of the accuracies of dynamical methods in measuring $M_{200}$]{Old2014,Old2015}, or through gravitational lensing \citep[e.g.,][and references therein]{paperI,Limousin2010,Kneib2011}. Each probe has its own limitations and biases, which in turn impact the mass distribution measurements. Strong lensing (hereafter SL) analysis provides the total amount of mass and its distribution with no assumptions on either the dynamical state or the nature of the matter producing the lensing effect. Nevertheless, the analysis has its own weaknesses; for example, it can solely constrain the two-dimensional projected mass density, and it is limited to small projected radii. On the other hand, the analysis of galaxy kinematics does not have such limitations, but it assumes local dynamical equilibrium (i.e., negligible rotation and streaming motions) and spherical symmetry. In this paper, we put forward a new method to overcome one of the limitations of the SL analysis, namely, the impossibility of constraining large-scale properties of the mass profile. Our method combines SL (in a parametric fashion, see Sect.\,\ref{Metho}) with the dynamics of the group or the cluster galaxy members, fitting both data sets simultaneously. Strong lensing and dynamics are both well-recognized probes, the former providing an estimate of the projected two-dimensional mass distribution within the core (typically a few dozen arcsec at most), whereas the latter is able to study the density profile at larger radii, using galaxies as test particles to probe the host potential. 
There are several methods to constrain the mass profiles of galaxy clusters from galaxy kinematics. One can fit the 2\textit{nd} and 4\textit{th} moments of the line-of-sight velocities, in bins of projected radii \citep{Lokas2003}. One can assume a profile for the velocity anisotropy, and apply mass inversion techniques \citep[e.g.,][]{Mamon2010,Wolf2010,Sarli2014}. Both methods require binning of the data. An alternative is to fit the observed distribution of galaxies in projected phase space (projected radii and line-of-sight velocities), which does not involve binning the data. This can be performed by assuming six-dimensional distribution functions (DFs), expressed as functions of energy and angular momentum \citep[e.g.,][who used DFs derived by \citealp{Wojtak2008} for $\Lambda$CDM halos]{Wojtak2009}, but the method is very slow, as it involves triple integrals for every galaxy and every point in parameter space. An accurate and efficient alternative is to assume a shape of the velocity DF as in the {\sc MAMPOSS}t method of \citet[][]{Mamon2013}, which has been used to study the radial profiles of mass and velocity anisotropy of clusters \citep{Biviano2013,Munari2014,Guennou2014,Biviano2016}. {\sc MAMPOSS}t is ideal for the aims of the present study since: 1.- it is accurate as a dynamical method and very rapid\footnote{A valuable asset, since lensing modeling has become time demanding. See for example the discussion in \citet{Jauzac2014} about the computing resources when modeling a SL cluster with many constraints.}; 2.- it produces a likelihood, as does the {\sc LENSTOOL} code used here for SL (see Sect.\,\ref{Metho}); 3.- it can run with the same parametric form of the mass profile as used in {\sc LENSTOOL}. \begin{figure*}[!htp] \centering \includegraphics[width=0.75\textwidth]{fig2.eps} \caption{CFHTLS $i$-band image with the luminosity density contours for \object{SL2S\,J02140-0535}. 
They represent $1\times10^6$, $5\times10^6$, and $1.0\times10^7\;$L$_\odot\,$kpc$^{-2}$ from outermost to innermost contour, respectively. The red squares and circles show the location of the 24 confirmed members of the group, the squares represent the galaxies previously reported by \citet{Roberto2013}, and the circles the new observations. The black vertical line on the left represents 1 Mpc at the group rest-frame. The inset in the top-right corner shows a 30$\arcsec$$\times$30$\arcsec$ CFHTLS false color image of the system.\label{fig1}} \end{figure*} The idea of combining lensing and dynamics is not new. So far, dynamics have been used to probe the very centre of the gravitational potential, through the measurement of the velocity dispersion profile of the central brightest cluster galaxy \citep{sand02,sand04,new09,new13}. However, the use of dynamical information at large scale (velocities of the galaxy members in a cluster or a group), together with SL analysis has not been fully explored. Through this approach, \citet{Thanjavur2010} showed that it is possible to characterize the mass distribution and the mass-to-light ratio of galaxy groups. \citet{Biviano2013} analyzed the cluster MACS\,J1206.2-0847, constraining its mass, velocity-anisotropy, and pseudo-phase-space density profiles, finding a good agreement between the results obtained from cluster kinematics and those derived from lensing. Similarly, \citet{Guennou2014} compared the mass profile inferred from lensing with different profiles obtained from three methods based on kinematics, showing that they are consistent among themselves. This work follows the analysis of \citet{Verdugo2011}, hereafter \citetalias{Verdugo2011}, where we combined SL and dynamics in the galaxy group \object{SL2S\,J02140-0535}. In \citetalias{Verdugo2011}, dynamics were used to constrain the scale radius of a NFW mass profile, a quantity that is not accessible to SL constraints alone. 
These constraints were used as a prior in the SL analysis, allowing us to probe the mass distribution from the centre to the virial radius of the galaxy group. However, the fit was not simultaneous. In this work we propose a framework aimed at fitting simultaneously SL and dynamics, combining the likelihoods obtained from both techniques in a consistent way. Our paper is arranged as follows: In Sect.\,\ref{Metho} the methodology is explained. In Sect.\,\ref{DATA} and Sect.\,\ref{MassM} we present the observational data (images and spectroscopy) and the application of the method to the galaxy group \object{SL2S\,J02140-0535}. We summarize and discuss our results in Sect.\,\ref{Discus}. Finally, in Sect.\,\ref{Conclusions}, we present the conclusions. All our results are scaled to a flat, $\Lambda$CDM cosmology with $\Omega_{\rm{M}} = 0.3, \Omega_\Lambda = 0.7$ and a Hubble constant $H_0 = 70$ km\,s$^{-1}$ Mpc$^{-1}$. All images are aligned with WCS coordinates, i.e., North is up, East is left. Magnitudes are given in the AB system. \section{Methodology}\label{Metho} In this section we explain how the SL and dynamical likelihoods are computed in our models. \subsection{Strong lensing} The figure-of-merit function, $\chi^{2}$, that quantifies the goodness of the fit for each trial of the lens model, has been introduced in several works \citep[e.g.,][]{ver07,Limousin2007,jullo07}; therefore, we summarize the method here. Consider a model whose parameters are $\vec {\theta}$, with $N$ sources, and $n_i$ the number of multiple images for source $i$. We compute, for every system $i$, the position in the image plane $x^j(\theta)$ of image $j$, using the lens equation. 
Therefore, the contribution to the overall $\chi^{2}$ from multiple image system $i$ is \begin{equation}\label{eq:Chi2Lens} \chi_{i}^{2} = \sum_{j=1}^{n_i} \frac{\left[ x_{obs}^j - x^j(\theta) \right]^2}{\sigma_{ij}^{2}}, \end{equation} \noindent where $\sigma_{ij}$ is the error on the position of image $j$, and $x_{obs}^j$ is the observed position. Thus, we can write the likelihood as \begin{equation}\label{eq:LikeLens} \mathcal{L}_{\textrm{Lens}} = \prod_{i}^{N}\frac{1}{\prod_{j}\sigma_{ij}\sqrt{2\pi}}e^{-\chi^2_i/2}, \end{equation} \noindent where it is assumed that the noise associated with the measurement of each image position is Gaussian and uncorrelated \citep[][]{jullo07}. This is not true in the case of images that are very close to each other, but it is a reasonable approximation for \object{SL2S\,J02140-0535}. In this work we assume that the error in the image position is $\sigma_{ij}$ = 0.5$\arcsec$, which is slightly greater than the value adopted in \citetalias{Verdugo2011}, but is half the value that has been suggested by other authors in order to take into account systematic errors in lensing modeling \citep[e.g.,][]{Jullo2010,DAloisio2011,Host2012,Zitrin2015}. \subsection{Dynamics} {\sc MAMPOSS}t \citep[][]{Mamon2013} is a method that performs a maximum likelihood fit of the distribution of observed tracers in projected phase space (projected radii and line-of-sight velocity, hereafter PPS). We refer the interested reader to \citet[][]{Mamon2013} for a detailed description; here we present a summary. {\sc MAMPOSS}t assumes parameterized radial profiles of mass and velocity anisotropy, as well as a shape for the three-dimensional velocity distribution (a Gaussian 3D velocity distribution), and fits the distribution of observed tracers in PPS. 
The method has been tested in cosmological simulations, showing that it can recover the virial radius, the tracer scale radius, and the dark matter scale radius when using 100 to 500 tracers \citep[][]{Mamon2013}. Moreover, \citet{Old2015} found that the mass normalization $M_{200}$ is recovered with 0.3 dex accuracy for as few as $\sim$30 tracers. The velocity anisotropy is defined through the expression \begin{equation}\label{eq:anisotropy} \beta(r) = 1 - \frac{\sigma^2_{\theta}(r) + \sigma^2_{\phi}(r)}{2\sigma^2_{r}(r)}, \end{equation} \noindent where, in spherical symmetry, $\sigma_{\phi}$($r$) = $\sigma_{\theta}$($r$). In the present work we adopt a constant anisotropy model with $\sigma_r/\sigma_{\theta}$ = (1 $-$ $\beta$)$^{-1/2}$, assuming spherical symmetry (see Sect.\,\ref{MassM}). The 3D velocity distribution is assumed to be Gaussian: \begin{equation}\label{eq:3Ddis} f_{\upsilon} = \frac{1}{(2\pi)^{3/2}\sigma_{r}\sigma^2_{\theta}}\exp\left[ - \frac{\upsilon^2_{r}}{2\sigma^2_{r}} - \frac{\upsilon^2_{\theta} + \upsilon^2_{\phi} }{2\sigma^2_{\theta}} \right ], \end{equation} \noindent where $\upsilon_{r}$, $\upsilon_{\theta}$, and $\upsilon_{\phi}$ are the velocities in a spherical coordinate system. This Gaussian distribution assumes no rotation or radial streaming, which is a good assumption inside the virial radius, as has been shown by numerical simulations \citep{Prada2006,Cuesta2008}. The Gaussian 3D velocity model is a first-order approximation, which can be improved \citep[see][]{Beraldo2015}. Thus, {\sc MAMPOSS}t fits the parameters using maximum likelihood estimation, i.e. by minimizing \begin{equation}\label{eq:MLE} - \ln{\mathcal{L}_{\textrm{Dyn}}} = -\sum_{i=1}^n \frac{\ln q(R_i,\upsilon_{z,i} \mid \bar{\theta})}{C(R_i)}, \end{equation} \begin{figure}[h!]\begin{center} \includegraphics[scale=0.46]{fig3.eps} \caption{Completeness as a function of the radius in \object{SL2S\,J02140-0535}. 
The completeness is $C\sim60\%$, roughly constant up to 1 Mpc.} \label{completeness} \end{center} \end{figure} \noindent where $q$ is the probability density of observing an object at projected radius $R$, with line-of-sight (hereafter LOS) velocity $\upsilon_z$, for an N-parameter vector $\bar{\theta}$. $C(R_i)$ is the completeness of the data set (see section~\ref{NSD}). \begin{figure*}[!htp] \centering \includegraphics[scale=0.46]{fig4a.eps} \includegraphics[scale=0.46]{fig4b.eps}\\ \includegraphics[scale=0.37]{fig4c.eps}\\ \includegraphics[scale=0.37]{fig4d.eps} \caption{\textit{Top panel}. As mentioned in Sect.\,\ref{DATA3.3}, the slit length was limited by the position of the arcs, hence producing poor sky subtraction. \textit{Left}. Observed spectrum of arc A (black continuous line). In gray we depict a starburst template from \citet{Kinney1996} shifted to $z$ = 1.628. We marked a possible [OII]$\lambda$3727 emission line, and some sky lines in blue (see Sect.\,\ref{DATA3.3}). \textit{Right}. Observed spectrum of arc C (black continuous line); as before, we depict in gray the starburst template from \citet{Kinney1996}, but shifted to $z$ = 1.02. We marked some characteristic emission lines, along with the sky lines in blue, but we omit for clarity the labels of the latter. \textit{Bottom panel.} Two-dimensional spectrum of arc A (with a color bar in arbitrary units). Note the [OII]$\lambda$3727 emission line. Below, the two-dimensional spectrum of arc C, with two emission lines clearly identified: [OII]$\lambda$3727, and [OIII]$\lambda$4958.9. } \label{spectraArcC} \end{figure*} \subsection{Combining likelihoods} In order to combine SL and dynamical constraints, we compute their respective likelihoods. The SL likelihood is computed via the {\sc LENSTOOL}\footnote{Publicly available at: http://www.oamp.fr/cosmology/ {\sc LENSTOOL}/} code. This software implements a Bayesian Monte Carlo Markov chain (MCMC) method to search for the most likely parameters in the modeling. 
It has been used in a large number of cluster studies, and characterized in \citet{jullo07}. The likelihood coming from dynamics of cluster members is computed using the {\sc MAMPOSS}t code \citep[][]{Mamon2013}, which has been tested and characterized on simulations. Technically, we have incorporated the {\sc MAMPOSS}t likelihood routine into {\sc LENSTOOL}. Note that the SL likelihood (see Eq.~\ref{eq:LikeLens}) depends on the image positions of the arcs and their respective errors. On the other hand, the {\sc MAMPOSS}t likelihood is calculated through the projected radii and line-of-sight velocity (Eq.~\ref{eq:MLE}). The errors on the inputs for the strong lensing on one hand and {\sc MAMPOSS}t on the other should not be correlated (in other words the joint lensing-dynamics covariance matrix should be diagonal). So, we can write: \begin{equation}\label{eq:LikeTot} \mathcal{L}_{T} = \mathcal{L}_{\textrm{Lens}} \times \mathcal{L}_{\textrm{Dyn}}, \end{equation} \noindent where $\mathcal{L}_{\textrm{Dyn}}$ is given by Eq.~\ref{eq:MLE} and $\mathcal{L}_{\textrm{Lens}} $ is calculated through Eq.~\ref{eq:LikeLens}. This definition of a total likelihood, where the two techniques (lensing and dynamics) are considered independent, is not new, and has been used previously at different scales by other authors \citep[e.g.,][]{sand02,sand04}; here we use the dynamics to obtain constraints in the outer regions\footnote{Note that here we are assuming equal weights for SL and dynamics in the total likelihood. However, this may not be the case, for example when combining SL and weak lensing \citep[see the discussion in][]{Umetsu2015}}. In this sense, the main difference with previous works, such as \citet{Biviano2013}, \citet{Guennou2014} or \citetalias{Verdugo2011}, is that in this work we perform a joint analysis, searching for a solution consistent with both methods, maximizing a total likelihood. 
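The effect of Eq.~(\ref{eq:LikeTot}) can be illustrated with a toy one-parameter problem (all numbers invented): a strong-lensing log-likelihood built from Eqs.~(\ref{eq:Chi2Lens})--(\ref{eq:LikeLens}) with fake image positions, plus a placeholder Gaussian standing in for the {\sc MAMPOSS}t term. Maximizing the sum of the two log-likelihoods yields a compromise between the two probes:

```python
# Toy joint fit (all numbers invented): a single "model parameter" b sets the
# predicted image positions x_model = b * u for fixed unit vectors u, while the
# dynamical term is taken as a Gaussian log-likelihood peaked at b = 1.10.
sigma_im = 0.5  # arcsec, positional uncertainty adopted in the text
u = [(1.0, 0.0), (-0.6, 0.8), (0.0, -1.0)]        # image directions
x_obs = [(1.2, 0.0), (-0.72, 0.96), (0.0, -1.2)]  # observed images (b ~ 1.2)

def log_like_lens(b):
    # Eqs. (1)-(2): Gaussian likelihood of the image positions, up to a constant
    chi2 = sum(((xo - b * ux) ** 2 + (yo - b * uy) ** 2) / sigma_im**2
               for (xo, yo), (ux, uy) in zip(x_obs, u))
    return -0.5 * chi2

def log_like_dyn(b):
    # placeholder for the MAMPOSSt term: peaked at b = 1.10 with width 0.05
    return -0.5 * ((b - 1.10) / 0.05) ** 2

grid = [0.8 + 0.001 * i for i in range(601)]  # b in [0.8, 1.4]
b_lens = max(grid, key=log_like_lens)
b_joint = max(grid, key=lambda b: log_like_lens(b) + log_like_dyn(b))
# the joint estimate is pulled from b ~ 1.2 (lensing alone) toward 1.10
assert b_lens > b_joint > 1.10
```

Because the two terms enter with equal weight, the tighter (here, dynamical) constraint dominates the joint solution, which is the behavior exploited in this work.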
\subsection{NFW mass profile} We adopt the NFW mass density profile that has been predicted in cosmological $N$-body simulations \citep{Navarro1996,nav97}, given by \begin{equation}\label{eq:rho} \rho(r) = \frac{\rho_{s}}{(r/r_s)(1+r/r_s)^{2}}, \end{equation} \noindent where $r_s$ is the radius that corresponds to the region where the logarithmic slope of the density equals the isothermal value, and $\rho_s$ is a characteristic density. The scale radius is related to the virial radius $r_{200}$ through the expression $c_{200}$ = $r_{200}/r_s$, which is the so-called concentration\footnote{$r_{200}$ is the radius of a spherical volume inside of which the mean density is $200$ times the critical density at the given redshift $z$, $M_{200} = 200 \times (4\pi/3)r_{200}^{3} \rho_{crit}$ = $100H^2r^3_{200}/G$.}. The mass contained within a radius $r$ of the NFW halo is given by \begin{equation}\label{eq:mass} M(r) = 4\pi r_s^{3}\rho_s \left[\ln(1+r/r_s) - \frac{r/r_s}{1+r/r_s}\right]. \end{equation} \noindent Although other mass models (e.g., Hernquist or Burkert density profiles) have been studied within the {\sc MAMPOSS}t formalism \citep[see][]{Mamon2013}, and {\sc LENSTOOL} allows probing different profiles, we adopt the NFW profile in order to compare our results with those obtained in \citetalias{Verdugo2011}. Note that the NFW profile is a spherical density profile, and {\sc MAMPOSS}t's formalism can only model spherical systems. In {\sc LENSTOOL}, the initial profile is spherical but is transformed into a pseudo-elliptical NFW \citep[see][]{Golse2002}, as is explained in \citet{jullo07}, in order to perform the lensing calculations. Although the simultaneous modeling shares the same spherical parameters, the difference between the pseudo-elliptical and the spherical framework could influence our methodology (see Sect.\,\ref{MassM4.3}). 
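A small consistency check of Eqs.~(\ref{eq:rho}) and (\ref{eq:mass}) — with an arbitrary $r_s$ and $\rho_s=1$ in arbitrary units, purely for illustration — integrating the density over spherical shells recovers the closed-form enclosed mass:

```python
import math

def nfw_mass(r, rs, rho_s):
    # Eq. (9): closed-form enclosed mass of the NFW profile
    x = r / rs
    return 4 * math.pi * rs**3 * rho_s * (math.log(1 + x) - x / (1 + x))

# Illustrative values (not fit results): rs = 150 kpc, rho_s = 1
rs, rho_s = 150.0, 1.0

def rho(r):
    # Eq. (8): NFW density profile
    return rho_s / ((r / rs) * (1 + r / rs) ** 2)

def mass_numeric(r, n=20000):
    # midpoint-rule integration of 4 pi r^2 rho(r) over spherical shells
    dr = r / n
    return sum(4 * math.pi * ((i + 0.5) * dr) ** 2 * rho((i + 0.5) * dr) * dr
               for i in range(n))

r200 = 4.0 * rs  # i.e. a concentration c200 = r200/rs = 4
assert abs(mass_numeric(r200) / nfw_mass(r200, rs, rho_s) - 1) < 1e-3
```

The same `nfw_mass` form is what both {\sc LENSTOOL} (before the pseudo-elliptical transformation) and {\sc MAMPOSS}t parameterize, which is what makes the shared spherical parameters of the joint fit possible.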
\begin{figure*}\begin{center} \includegraphics[scale=0.45]{fig5a.eps} \includegraphics[scale=0.45]{fig5b.eps}\\ \includegraphics[scale=0.45]{fig5c.eps} \includegraphics[scale=0.45]{fig5d.eps}\\ \includegraphics[scale=0.45]{fig5e.eps} \includegraphics[scale=0.45]{fig5f.eps}\\ \caption{\textit{Left column.} Photometric redshift PDF for the selected arcs (see text). The dashed vertical lines correspond to the spectroscopic value. \textit{Right column.} Best-fit spectral energy distribution. Points represent the observed CFHTLS broad-band magnitudes, and $J$ and $K_s$ from WIRCam. Vertical and horizontal error bars correspond to the photometric error and the wavelength range of each filter, respectively.} \label{SED_PDF} \end{center}\end{figure*} \section{Data}\label{DATA} In this section, we present the group \object{SL2S\,J02140-0535}, briefly reviewing our previous data sets. We also present new data obtained since \citetalias{Verdugo2011}. From space, the lens was followed up with the \xmm\, Newton Space Telescope. Additionally, a new spectroscopic follow-up of the arcs and group members has been carried out with the Very Large Telescope (VLT). \subsection{\object{SL2S\,J02140-0535} } This group, located at $z_{\rm spec}=0.44$, is populated by three central galaxies. We label them G1 (the brightest group galaxy, BGG), G2, and G3 (see left panel of Fig.~\ref{presentlens}). The lensed images consist of three arcs surrounding these three galaxies: arc $A$, situated north of the deflector, composed of two merging images; a second arc in the east direction (arc $B$), which is associated with arc $A$; and a third arc, arc $C$, situated in the south, which is singly imaged. \object{SL2S\,J02140-0535} \citep[first reported by][]{Cabanac2007} has been studied previously using strong lensing by \citet{alardalone}, using both strong and weak lensing by \citet{paperI}, and also kinematically by \citet{Roberto2013}. \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.55]{fig6.eps} \caption{The adaptively smoothed image of \object{SL2S\,J02140-0535} in the 0.5-2.0 keV band. The X-ray contours in black are linearly spaced from 5 to 20 cts/s/deg$^2$.} \label{B1} \end{center}\end{figure} \object{SL2S\,J02140-0535} was observed in five filters ($u^*$, $g'$, $r'$, $i'$, $z'$) as part of the CFHTLS (Canada-France-Hawaii Telescope Legacy Survey)\footnote{http://www.cfht.hawaii.edu/Science/CFHLS/} using the wide-field imager \textsc{MegaPrime} \citep{Gwyn2011}, and in the infrared using WIRCam (Wide-field InfraRed Camera, the near-infrared mosaic imager at CFHT) as part of the proposal 07BF15 (P.I. G. Soucail); see \citet{Verdugo2014} for more information. In the right panel of Fig.~\ref{presentlens} we show a false-color image of \object{SL2S\,J02140-0535}, combining the two bands $J$ and $K_s$. Note that arcs A and C appear mixed with the diffuse light of the central galaxies, and arc B is barely visible in the image. \object{SL2S\,J02140-0535} was also followed up spectroscopically using FORS\,2 at the VLT \citepalias{Verdugo2011}. From space, the lens was observed with the \emph{Hubble Space Telescope} (HST) in snapshot mode (C\,15, P.I. Kneib) using three bands with the ACS camera (F814, F606, and F475). \subsection{New spectroscopic data}\label{NSD} \textit{Selecting members.-} We used FORS\,2 on the VLT with a medium-resolution grism (GRIS\,600RI; 080.A-0610; P.I. V. Motta) to target the group members (see Mu\~noz et al. 2013) and a low-resolution grism (GRIS\,300I; 086.A-0412; P.I. V. Motta) to observe the strongly lensed features. In the latter observation, we used one mask with $2\times1300$~s on-target exposure time. Targets (other than strongly lensed features) were selected in a two-step process. First, we used the T0005 release of the CFHTLS survey (November, 2008) to obtain a photometric-redshift-selected catalog, which includes galaxies within $\pm0.01$ of the redshift of the main lens galaxy.
The selected galaxies in this catalog have colors within $(g-i)_{lens}-0.15<g-i<(g-i)_{lens}+0.15$, where $(g-i)_{lens}$ is the color of the brightest galaxy within the Einstein radius. From this sample, we selected those candidates that were not observed previously. More details will be presented in a forthcoming publication (Motta et~al., in prep.). The spectroscopic redshifts of the galaxies were determined using the Radial Velocity SAO package \citep{Kurtz1998} within the IRAF software\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.}. By visual inspection of the spectra, we identified several emission and absorption lines. Then, we determined the redshifts \citep[typical errors are discussed in][]{Roberto2013} by cross-correlating each spectrum with template spectra of known velocities. To determine the group membership of \object{SL2S\,J02140-0535}, we follow the method presented in \citet{Roberto2013}, which in turn adopts the formalism of \citet{Wil05}. The group members are identified as follows: we assume initially that the group is located at the redshift of the main bright lens galaxy, $z_{\textrm{lens}}$, with an initial observed-frame velocity dispersion of $\sigma_{\textrm{obs}}$ = 500(1+$z_{\textrm{lens}}$) km\,s$^{-1}$. After computing the required redshift range for group membership \citep[see][]{Roberto2013}, and applying a biweight estimator \citep{Bee90}, the iterative process reached a stable membership solution with 24 secure members and a velocity dispersion of $\sigma$ = 562 $\pm$ 60 km\,s$^{-1}$. These galaxies are shown with red squares and circles in Fig.~\ref{fig1}, and their respective redshifts are presented in Table~\ref{tbl-A1}. The squares in Fig.~\ref{fig1} represent the galaxies previously reported by \citet{Roberto2013}.
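The iterative membership selection described above can be sketched as follows. This is a simplified illustration: it uses rest-frame velocities, a standard deviation as a stand-in for the biweight estimator of \citet{Bee90}, and an assumed clipping threshold (\texttt{nsig}), rather than the exact \citet{Wil05} criteria:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def select_members(z, z_lens, sigma0=500.0, nsig=2.7, max_iter=50):
    """Iteratively clip galaxies around the group redshift until the
    member list is stable; returns (member mask, z_group, sigma [km/s])."""
    z = np.asarray(z, dtype=float)
    z_grp, sigma = z_lens, sigma0
    prev = None
    for _ in range(max_iter):
        v = C_KMS * (z - z_grp) / (1.0 + z_grp)   # rest-frame velocity
        members = np.abs(v) < nsig * sigma
        if prev is not None and np.array_equal(members, prev):
            break
        prev = members
        z_grp = np.mean(z[members])
        v_m = C_KMS * (z[members] - z_grp) / (1.0 + z_grp)
        sigma = np.std(v_m)                        # stand-in for biweight
    return members, z_grp, sigma
```

On a synthetic catalog of group galaxies plus foreground/background interlopers, the loop converges in a few iterations, rejecting the interlopers and recovering the input group redshift and dispersion.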
Fig.~\ref{fig1} also shows the luminosity contours calculated according to \citet[][]{Gael2013}. Fitting ellipses to the luminosity map, using the task \textit{ellipse} in IRAF, we find that the luminosity contours have position angles equal to 99$^{\circ}$ $\pm$ 9$^{\circ}$, 102$^{\circ}$ $\pm$ 2$^{\circ}$, and 109$^{\circ}$ $\pm$ 2$^{\circ}$, from the outermost to the innermost contour, respectively. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.525]{fig7a.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig7b.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig7c.eps} \caption{PDFs and contours of the parameters $c_{200}$ and $r_{s}$. The three contours stand for the 68\%, 95\%, and 99\% confidence levels. The values obtained for our best-fit model are marked by a gray square, and with vertical lines in the 1D histograms (the asymmetric errors are presented in Table~\ref{tbl-1}). \textit{Top panel.-} Results from the \textrm{SL\,Model}. \textit{Middle panel.-} Results from the \textrm{Dyn\,Model}. \textit{Bottom panel.-} Results from the \textrm{SL+Dyn\,Model}.} \label{PDFModels1} \end{center} \end{figure} \textit{Completeness.-} Mu\~noz et al. (2013) presented the dynamical analysis of seven SL2S galaxy groups, including \object{SL2S\,J02140-0535}. They estimated the completeness within a radius of 1 Mpc from the centre of the group to be 30\%. In the present work, since we have increased the number of observed galaxies in the field of \object{SL2S\,J02140-0535}, and thus the number of confirmed members (hereafter $gal_{\textrm{spec}}$), a new calculation is carried out to estimate the completeness as a function of radius. We first define the color-magnitude cuts to be applied to the photometric catalog of the group, i.e., $0.7<(r-i)<0.92$ and $21.44<m_{i}<21.47$. These values correspond to the photometric properties of the $gal_{\textrm{spec}}$. Note that we exclude one galaxy because of its color $(r-i)=0.45$.
Then, we select all the galaxies falling within the photometric ranges (hereafter $gal_{\textrm{phot}}$), and we estimate the density of field galaxies over the $15'\times15'$ field of view after excluding a central region of radius 1.3 Mpc (which is the largest distance of a $gal_{\textrm{spec}}$ galaxy from the center of the group). This density is then converted into an estimated total number of galaxies $N_{\textrm{field}}$ over the full field of view. Given $gal_{\textrm{spec}}$, we bin the data and define $N_{\textrm{spec}}(r_{i})$ as the number of confirmed members in the $i$th radial bin. Thus, the radial profile of the completeness is given by \begin{equation}\label{eq:CompG} C(r_{i}) \equiv \frac{N_{\textrm{spec}}(r_{i})} {N_{t}(r_{i})-N_{\textrm{field},r}(r_{i})}, \end{equation} \noindent where $N_{\textrm{field},r}(r_{i})$ is the number of field galaxies in the $i$th bin, and $N_{t}(r_{i})$ is the total number of $gal_{\textrm{phot}}$ present in the $i$th bin, i.e., its value is the sum of the number of group members and field galaxies. To estimate $N_{\textrm{field},r}(r_{i})$, a Monte Carlo approach is adopted: we randomly draw the positions of the $N_{\textrm{field}}$ galaxies over the whole field of view, and then count the corresponding number of galaxies $N_{\textrm{field},r}(r_{i})$ falling in each bin. Thus, each Monte Carlo realization leads to an estimate of the completeness. Finally, we average the $C(r_{i})$, after excluding the realizations for which we obtain $N_{t}(r_{i})<N_{\textrm{field},r}(r_{i})$ or $C(r_{i})>1$. In Fig.~\ref{completeness} we present the resulting profile and its estimated $1\sigma$ deviation, showing a completeness $C\sim60\%$ consistent with a constant profile up to 1 Mpc.
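The Monte Carlo estimate of Eq.~\ref{eq:CompG} can be sketched as below. The bin edges, field size, and galaxy counts are synthetic placeholders; only the bookkeeping (radial binning, random field positions, and exclusion of unphysical realizations) follows the procedure described in the text:

```python
import numpy as np

def completeness_profile(r_spec, r_phot, n_field, half_width, bins,
                         n_mc=500, seed=0):
    """C(r_i) = N_spec(r_i) / (N_t(r_i) - N_field,r(r_i)), averaged over
    Monte Carlo draws of the field-galaxy positions; realizations with
    a non-positive denominator or C > 1 are excluded bin by bin."""
    rng = np.random.default_rng(seed)
    n_spec, _ = np.histogram(r_spec, bins)
    n_t, _ = np.histogram(r_phot, bins)
    draws = []
    for _ in range(n_mc):
        # Uniform field-galaxy positions over the square field of view.
        xy = rng.uniform(-half_width, half_width, size=(n_field, 2))
        n_fld, _ = np.histogram(np.hypot(xy[:, 0], xy[:, 1]), bins)
        denom = n_t - n_fld
        with np.errstate(divide="ignore", invalid="ignore"):
            c = np.where(denom > 0, n_spec / denom, np.nan)
        draws.append(np.where(c > 1.0, np.nan, c))
    draws = np.array(draws)
    return np.nanmean(draws, axis=0), np.nanstd(draws, axis=0)
```

In the limit of no field contamination the profile reduces to $N_{\textrm{spec}}/N_{t}$ per bin, which provides a simple sanity check.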
\subsection{Multiple images: confirming two different sources}\label{DATA3.3} \textit{Spectroscopic redshifts.-} In \citetalias{Verdugo2011} we reported a strong emission line at 7538.4\,\AA\, in the spectrum of arc C, which we associated with [OII]$\lambda$3727 at $z_{\rm spec}$ = 1.023 $\pm$ 0.001. We obtained new 2D spectra for the arcs, consisting of two exposures of 1300\,s each. Due to the closeness of the components (inside a radius of $\sim$8\arcsec), the slit length is limited by the relative position of the arcs, making sky subtraction difficult (see bottom panel of Fig.~\ref{spectraArcC}). Most of the 2D spectra show a poorer sky subtraction than we would have obtained using longer slits. However, our new 2D spectra also show the [OII]$\lambda$3727 spectral feature, and an additional emission line appears in the spectrum, [OIII]$\lambda$4958.9. In the same Fig.~\ref{spectraArcC} (top-right panel) we show the spectrum and mark some characteristic emission lines, along with a few sky lines. We compare it with a template of a starburst galaxy from \citet{Kinney1996}, shifted to $z$ = 1.02. After performing a template fitting using RVSAO we obtain $z_{\rm spec}$ = 1.017 $\pm$ 0.001, confirming our previously reported value. On the other hand, in our previous work we did not find any spectroscopic features in arcs A and B due to the poor signal-to-noise ratio. In the top-left panel of Fig.~\ref{spectraArcC} we show the newly obtained spectrum of arc A. It reveals a weak (but still visible) emission line at 9795.3\,\AA. This line probably corresponds to [OII]$\lambda$3727 at $z\sim1.6$. However, we do not claim a clear detection (this region of the spectrum is affected by sky emission lines), as we discuss below, but it is worth noting that the photometric redshift estimate supports this detection (see Fig.~\ref{SED_PDF}).
Assuming emission from [OII]$\lambda$3727 and applying a Gaussian fitting, we obtain $z_{\rm spec}$ = 1.628 $\pm$ 0.001. This feature is not present in the spectrum of arc B, since arc B is almost one magnitude fainter than arc A. Furthermore, this line is not present in the spectrum of arc C, which confirms the previous finding of \citetalias{Verdugo2011}, i.e., the AB system and arc C come from two different sources. \textit{Photometric redshifts.-} As a complementary test, and to extend the analysis presented in \citetalias{Verdugo2011}, we calculate the photometric redshifts of arcs A, B, and C using the HyperZ software \citep{hyperz}, adding the $J$ and $K_s$ bands to the original ones ($u^*$, $g'$, $r'$, $i'$, $z'$). The photometry in the $J$ and $K_s$ bands was performed with the IRAF package \textit{apphot}. We employed polygonal apertures to obtain a more accurate flux measurement of the arcs. For each arc, the vertices of the polygons were determined using the IRAF task \textit{polymark}, and the magnitudes inside these apertures were computed with the IRAF task \textit{polyphot}. The results are presented in Table~\ref{tbl-A2}. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.525]{fig8a.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig8b.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig8c.eps} \caption{PDFs and contours of the parameters log$r_s$ and log$r_{200}$. The three contours stand for the 68\%, 95\%, and 99\% confidence levels. The values obtained for our best-fit model are marked by a gray square, and with vertical lines in the 1D histograms (the asymmetric errors are presented in Table~\ref{tbl-1}). \textit{Top panel.-} Results from the \textrm{SL\,Model}. \textit{Middle panel.-} Results from the \textrm{Dyn\,Model}.
\textit{Bottom panel.-} Results from the \textrm{SL+Dyn\,Model}.} \label{PDFModels2} \end{center} \end{figure} It is evident in the right panel of Fig.~\ref{presentlens} that the gravitational arcs are contaminated by the light of the central galaxies. In order to quantify the error in our photometric measurements in both the $J$ and $K_s$ bands we proceed as follows: we subtract the central galaxies of the group, following the procedure described in McLeod et al. (1998), that is, we fit a galaxy profile model convolved with a PSF (de Vaucouleurs profiles were fitted to the galaxies with synthetic PSFs). After the subtraction, we run the IRAF task \textit{polyphot} again. The errors associated with the fluxes are defined as the quadratic sum of the errors on both measurements. The photometric redshifts for the arcs were estimated from the magnitudes reported in Table~\ref{tbl-A2}, as well as those reported in \citetalias{Verdugo2011}. We present the output probability distribution function (PDF) from HyperZ in Fig.~\ref{SED_PDF}. We note in the same figure that the $K_s$-band data do not match the best-fit spectral energy distribution; this is probably related to the fact that the photometry of the arcs is contaminated by the light of the central galaxies. Arc C is constrained to be at $z_{\rm phot} = 0.96 \pm 0.07$, which is in good agreement with the $z_{\rm spec}$ = 1.017 $\pm$ 0.001 reported above. The multiply imaged system formed by arcs A and B has $z_{\rm phot}$= 1.7 $\pm$ 0.1 and $z_{\rm phot}$= 1.6 $\pm$ 0.2, respectively. The photometric redshift of arc A is in agreement with the identification of the emission line as [OII]$\lambda$3727 at $z_{\rm spec}$ = 1.628 $\pm$ 0.001.
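The conversion of a fitted line centroid into a redshift, as done for the 9795.3\,\AA\ feature of arc A, can be sketched as follows. The data are synthetic and noiseless, and the line amplitude, width, and continuum level are illustrative assumptions, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

OII_REST = 3727.0  # [OII] rest-frame wavelength [Angstrom]

def gaussian(lam, amp, mu, sig, cont):
    """Gaussian emission line on a flat continuum."""
    return cont + amp * np.exp(-0.5 * ((lam - mu) / sig) ** 2)

def line_redshift(lam, flux, mu_guess, lam_rest=OII_REST):
    """Fit a Gaussian to an emission line and convert the fitted
    centroid to a redshift: z = mu / lam_rest - 1."""
    p0 = [flux.max() - np.median(flux), mu_guess, 5.0, np.median(flux)]
    popt, _ = curve_fit(gaussian, lam, flux, p0=p0)
    return popt[1] / lam_rest - 1.0

# Synthetic [OII] line placed at 9795.3 A, as in the arc A spectrum.
lam = np.arange(9700.0, 9900.0, 1.0)
flux = gaussian(lam, 5.0, 9795.3, 4.0, 1.0)
z = line_redshift(lam, flux, 9790.0)
```

With the centroid at 9795.3\,\AA, the recovered redshift is $9795.3/3727 - 1 \approx 1.628$, matching the quoted $z_{\rm spec}$.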
To summarize, both the spectroscopic and photometric data confirm the results of \citetalias{Verdugo2011}, namely, the system formed by arcs A and B, and the single arc C, originate from two different sources, the former at $z_{\rm spec}$ = 1.628, and the latter at $z_{\rm spec}$ = 1.017. \subsection{X-ray data} We observed \object{SL2S\,J02140-0535} with \xmm\ as part of an X-ray follow-up program of the SL2S groups to obtain an X-ray detection of these strong-lensing selected systems and to measure the X-ray luminosity and temperature (Gastaldello et~al., in prep.). SL2S J02140-0535 was observed by \xmm\ for 19 ks with the MOS detector and for 13 ks with the pn detector. The data were reduced with SAS v14.0.0, using the tasks {\em emchain} and {\em epchain}. We considered only event patterns 0-12 for MOS and 0-4 for pn, and the data were cleaned using the standard procedures for bright pixels, hot column removal, and pn out-of-time correction. Periods of high background due to soft protons were filtered out, leaving an effective exposure time of 11 ks for MOS and 8 ks for pn. For each detector, we created images in the 0.5-2 keV band. Point sources were detected with the task {\em edetect\_chain} and masked using circular regions of 25\arcsec\ radius centered at the source position. The images were exposure-corrected and background-subtracted using the XMM-Extended Source Analysis Software (ESAS). The \xmm\ image in the 0.5-2 keV band of the field of \object{SL2S\,J02140-0535} is shown in Fig.~\ref{B1}. The X-ray peak is spatially coincident with the bright galaxies inside the arcs, and the X-ray isophotes are elongated in the same direction as the optical contours (see discussion in Sect.\,\ref{Discus}). The quality of the X-ray snapshot data is not sufficient for a detailed mass analysis assuming hydrostatic equilibrium \citep[e.g.,][]{Gastaldello2007}.
In this case, the mass can only be obtained by adopting a scaling relation, such as a mass-temperature relation \citep[e.g.,][]{Gastaldello2014}. Therefore, this mass determination is not of the same quality as that obtained from our lensing and dynamical information. Moreover, as we will discuss in Sect.\,\ref{Discus}, one needs to be very cautious when assuming scaling relations for strong-lensing clusters. Hereafter, we only make use of the morphological information provided by the X-ray data. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.525]{fig9a.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig9b.eps}\\ \vspace{0.1cm} \includegraphics[scale=0.525]{fig9c.eps} \caption{PDFs and contours of the parameters $c_{200}$ and $M_{200}$. The three contours stand for the 68\%, 95\%, and 99\% confidence levels. The values obtained for our best-fit model are marked by a gray square, and with vertical lines in the 1D histograms (the asymmetric errors are presented in Table~\ref{tbl-1}). \textit{Top panel.-} Results from the \textrm{SL\,Model}. \textit{Middle panel.-} Results from the \textrm{Dyn\,Model}. \textit{Bottom panel.-} Results from the \textrm{SL+Dyn\,Model}.} \label{Fig:9} \end{center} \end{figure} \section{Results}\label{MassM} In this Section, we apply the formalism outlined in Sect.\,\ref{Metho} to \object{SL2S\,J02140-0535}, using the data presented in Sect.\,\ref{DATA}. In the subsequent analysis, \textrm{SL\,Model} refers to the SL modeling, \textrm{Dyn\,Model} to the dynamical analysis, and \textrm{SL+Dyn Model} to the combination of both methods. \subsection{\textrm{SL\,Model}} As we discussed in \citetalias{Verdugo2011}, the AB system shows multiple subcomponents (surface brightness peaks) that can be conjugated as different multiple-image systems, increasing the number of constraints as well as the degrees of freedom (for a fixed number of free parameters).
Thus, the AB system is split into four different systems, keeping C as a singly imaged arc \citepalias[see Fig. 4 in][]{Verdugo2011}. In this way, our model has five different arc systems in the optimization procedure, leading to 16 observational constraints. Based on the geometry of the multiple images, the absence of structure in velocity space, and the X-ray data, we model \object{SL2S\,J02140-0535} using a single large-scale mass clump accounting for the dark matter component. This smooth component is modeled with an NFW mass density profile, characterized by its position, projected ellipticity, position angle, scale radius, and concentration parameter. The position $(X,Y)$ ranges between -5$\arcsec$ and 5$\arcsec$, the ellipticity spans $0 < \epsilon < 0.7$, and the position angle ranges from 0 to 180 degrees. The parameters $r_s$ and $c_{200}$ are free to range between 50 kpc $\leq$ $r_s$ $\leq$ 500 kpc, and 1 $\leq$ $c_{200}$ $\leq$ 30, respectively. Additionally, we add three smaller-scale clumps that are associated with the galaxies at the center of \object{SL2S\,J02140-0535}. We model them as follows: as in \citetalias{Verdugo2011}, we assume that the stellar mass distribution in these galaxies follows a pseudo isothermal elliptical mass distribution (PIEMD). A clump modeled with this profile is characterized by the following seven parameters: the center position ($X,Y$), the ellipticity $\epsilon$, the position angle $\theta$, and the parameters $\sigma_0$, $r_{\rm core}$, and $r_{\rm cut}$ \citep[see][for a detailed discussion of the properties of this mass profile]{Limousin2005, ardis2218}. The center of the profiles, ellipticity, and position angle are assumed to be the same as for the luminous components.
The remaining parameters in the small-scale clumps, namely, $\sigma_0$, $r_{\rm core}$, and $r_{\rm cut}$, are scaled as a function of their galaxy luminosities \citep{Limousin2007}, using as a scaling factor the luminosity $L^{*}$ associated with the $g$-band magnitude of galaxy G1 (see Fig.~\ref{presentlens}), \begin{equation}\label{eq:scale} \begin{array}{l} \displaystyle r_{\rm core}=r_{\rm core}^{*}\left(\frac{L}{L^{*}}\right)^{1/2},\\ \\ \displaystyle r_{\rm cut}=r_{\rm cut}^{*}\left(\frac{L}{L^{*}}\right)^{1/2},\\ \\ \displaystyle \sigma_0 = \sigma_0^{*}\left(\frac{L}{L^{*}}\right)^{1/4}, \end{array} \end{equation} \noindent setting $r_{\rm core}^{*}$ and $\sigma_0^{*}$ to 0.15 kpc and 253 km\,s$^{-1}$, respectively. This velocity dispersion is obtained from the LOS velocity dispersion of galaxy G1, using the relation reported by \citet{ardis2218}. This LOS velocity dispersion has a value of $\sigma_{\rm los}^{*}$ = 215 $\pm$ 34 km s$^{-1}$, computed from the G-band absorption-line profile \citepalias[see][]{Verdugo2011}. The last parameter, $r_{\rm cut}^{*}$, is constrained by the possible stellar masses for galaxy G1 \citepalias[][]{Verdugo2011}, which in turn produce an interval of 1-6 kpc. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.49]{fig10a.eps}\\ \vspace{0.77cm} \includegraphics[scale=0.49]{fig10b.eps}\\ \vspace{0.77cm} \includegraphics[scale=0.49]{fig10c.eps}\\ \caption{Joint distributions. \textit{Top panel.-} Scale radius and concentration. \textit{Middle panel.-} log$r_s$ and log$r_{200}$. \textit{Bottom panel.-} Concentration and $M_{200}$. Green-filled contours are the 1, 2, and 3-$\sigma$ regions from the \textrm{Dyn\,Model}. Grey contours stand for the 68, 95, and 99\% confidence levels for the \textrm{SL\,Model}. Red contours are the result of the \textrm{SL+Dyn\,Model}, with the best solution depicted with a red star.
\label{Fig:10}} \end{center} \end{figure} Our model is computed and optimized in the image plane with the seven free parameters discussed above, namely \{$X$, $Y$, $\epsilon$, $\theta$, $r_s$, $c_{200}$, $r_{\rm cut}^{*}$\}. The first six parameters characterize the NFW profile, and the last parameter is related to the profile of the central galaxies. All the parameters are allowed to vary with uniform priors. We show the results (the PDF) of the SL analysis \emph{only} in the top panel of Fig.~\ref{PDFModels1}, and the best-fit parameters are given in Table~\ref{tbl-1}. Additionally, in Fig.~\ref{PDFModels2} we show the plots of $r_s$ vs $r_{200}$, since they provide a better understanding of how poorly constrained the lens model is at large scales (see next section), and also for consistency with the way the plots are presented in \citet{Mamon2013}. Figure~\ref{Fig:9} shows the results for the concentration and $M_{200}$, from which we can gain insight into the mass constraints. \subsection{\textrm{Dyn\,Model}} Since we only have 24 group members, we assume that the group has an isotropic velocity dispersion (i.e., $\beta$ = 0 in Eq.~\ref{eq:anisotropy}). The parameter $\beta$ might influence the parameters of the density profile ($r_s$ and $c_{200}$); however, it is not possible to constrain $\beta$ with only 24 galaxies, and it is beyond the scope of this work to analyze its effect on these parameters. We defer this analysis to a forthcoming paper, in which we apply the method to a galaxy cluster with a greater number of members. The Jeans equation of dynamical equilibrium, as implemented in {\sc MAMPOSS}t, is only valid for $r \lesssim 2r_{vir} \simeq 2.7r_{200}$ \citep{Falco2013}. Thus, before running {\sc MAMPOSS}t, we estimate the virial radius, $r_{200}$, of \object{SL2S\,J02140-0535}. From the scale radius and the concentration values reported in \citetalias{Verdugo2011} we find the virial radius to be $r_{200}$ = 1 $\pm$ 0.2 Mpc.
This value is considerably smaller than the value of 1.42 Mpc previously reported by \citet{Lieu2015}\footnote{\object{SL2S\,J02140-0535} is identified as \object{{\sc XLSSC\,110 }} in the {\sc XXL} Survey.}. Table~\ref{tbl-A1} shows that there are 3 galaxies with 1 Mpc $<$ R $<$ 1.4 Mpc, i.e., within 1.4$r_{200}$, which seems sufficiently close to keep them in our analysis. The galaxy members lie between 7.9 kpc and 1392.3 kpc (with a mean distance of 650 kpc from the center). Given the scarce number of members in \object{SL2S\,J02140-0535}, we keep these galaxies in our calculations. To further simplify our analysis, we assume that the completeness, as a function of radius, is constant (see Sect.\,\ref{NSD}). Also, we assume that both the tracer scale radius \citep[$r_{\nu}$ in][]{Mamon2013} and the dark matter scale radius $r_s$ are the same, that is, the total mass density profile is forced to be proportional to the galaxy number density profile: \textit{we assume that mass follows light}. As we will see in the next section, this is not a bad assumption. Therefore, our model has only two free parameters, namely, the scale radius $r_s$ and the concentration $c_{200}$. These parameters have broad priors, with 50 kpc $\leq$ $r_s$ $\leq$ 500 kpc, and 1 $\leq$ $c_{200}$ $\leq$ 30. The middle panels of Fig.~\ref{PDFModels1}, Fig.~\ref{PDFModels2}, and Fig.~\ref{Fig:9} show the PDF for this model; the best values of the fit are presented in Table~\ref{tbl-1}.
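The virial quantities used throughout follow from the footnote definition $M_{200}=100H^2(z)r_{200}^3/G$ together with $r_{200}=c_{200}\,r_s$. A small sketch; the cosmology adopted here ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, flat) is an assumption for illustration, not a value taken from the paper:

```python
import numpy as np

G = 4.30091e-9  # gravitational constant [Mpc (km/s)^2 / Msun]

def hubble(z, H0=70.0, Om=0.3):
    """Flat LCDM H(z) in km/s/Mpc (assumed cosmology)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def m200_from_r200(r200_mpc, z):
    """M200 = 100 H(z)^2 r200^3 / G, in solar masses."""
    return 100.0 * hubble(z) ** 2 * r200_mpc ** 3 / G

# Example: with log r200 = 2.92 (i.e., r200 ~ 0.83 Mpc) at z = 0.44,
# this yields M200 of order 1e14 Msun.
m200 = m200_from_r200(10 ** 2.92 / 1000.0, 0.44)
```

Under the assumed cosmology, this reproduces the order of magnitude of the $M_{200}$ values quoted in Table~\ref{tbl-1}.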
\begin{table*} \caption{Best-fit model parameters.} \begin{center} \label{tbl-1} \begin{tabular}{lccccccccc} \hline\hline \\ \multirow{2}{*}{Parameter} & \multicolumn{2}{c}{\textrm{SL\,Model}} & \multicolumn{2}{c}{\textrm{Dyn\,Model}} & \multicolumn{2}{c}{\textrm{SL+Dyn\,Model}} \\ & Group & $L^{*}$ & Group & $L^{*}$ & Group & $L^{*}$ \\ \\ \hline \\ X$^{\dagger}$ [\arcsec] & $1.2^{+0.2}_{-0.4}$ & -- & -- & -- & $0.9\pm0.2$ & -- \\ \\ Y$^{\dagger}$ [\arcsec] & $0.7^{+0.9}_{-0.4}$ & -- & -- & -- & $1.5^{+0.4}_{-0.3}$ & -- \\ \\ $\epsilon^{\dagger\dagger}$ & $0.23^{+0.04}_{-0.05}$ & -- & -- & -- & $0.298^{+0.002}_{-0.045}$ & -- \\ \\ $\theta\,[^{\circ}]$ & $111.2^{+1.6}_{-1.3}$ & -- & -- & -- & $111.1^{+1.4}_{-1.3}$ & -- \\ \\ $r_s$ [kpc] & $184^{+209}_{-60}$ & -- & $199^{+135}_{-\phantom{1}91}$ & -- & $82^{+44}_{-17}$ & -- \\ \\ log$r_{200}$ [kpc] & $3.04^{+0.12}_{-0.06}$ & -- & $2.80^{+0.07}_{-0.06}$ & -- & $2.92^{+0.06}_{-0.03}$ & -- \\ \\ $c_{200}$ & $6.0^{+1.8}_{-2.3}$ & -- & $3.1^{+2.9}_{-1.4}$ & -- & $10.0^{+1.7}_{-2.5}$ & -- \\ \\ $M_{200}$ [10$^{14}$M$_\odot$] & $2.5^{+3.0}_{-0.9}$ & -- & $0.4^{+0.3}_{-0.1}$ & -- & $1.0^{+0.5}_{-0.2}$ & -- \\ \\ $r_{\rm core}$ [kpc] & -- & $[0.15]$ & -- & -- & -- & $[0.15]$ \\ \\ $r_{\rm cut}$ [kpc] & -- & $2.6^{+2.1}_{-1.1}$ & -- & -- & -- & $2.4^{+2.1}_{-1.0}$ \\ \\ $\sigma_0$ [km s$^{-1}$] & -- & $[253]$ & -- & -- & -- & $[253]$ \\ \\ $\chi^{2}_{DOF}$ & $0.1$ & -- & -- & -- & $0.9$ & -- \\ \\ \\ \hline \end{tabular} \end{center} \tablefoot{($\dagger$): The position in arc seconds relative to the BGG.\\ ($\dagger$$\dagger$): The ellipticity is defined as $\epsilon$ = ($a^2$ $-$ $b^2$)/($a^2$ $+$ $b^2$), where $a$ and $b$ are the semimajor and semiminor axes, respectively, of the elliptical shape.\\ The first column identifies the model parameters.
In columns 2-10 we provide the results for each model, using square brackets for those values that are not optimized. The columns labeled $L^{*}$ indicate the parameters associated with the small-scale clumps. Asymmetric errors are calculated following \citet{Andrae2010} and \citet{Barlow1989}.\\} \end{table*} \subsection{\textrm{SL+Dyn Model}}\label{MassM4.3} The main difference of our work from previous ones \citep[e.g.,][]{Biviano2013,Guennou2014} is that we apply a joint analysis, seeking a solution consistent with both the SL and the dynamical methods by maximizing the total likelihood. In the bottom panels of Fig.~\ref{PDFModels1}, Fig.~\ref{PDFModels2}, and Fig.~\ref{Fig:9} we show the PDF of this combined model. The best-fit values are presented in Table~\ref{tbl-1}. From the figures it is clear that tension exists between the results from the \textrm{SL\,Model} and the \textrm{Dyn\,Model}; the models are in disagreement at the 1-$\sigma$ level. The discrepancy is related to the oversimplified assumption of the spherical \textrm{Dyn\,Model}. Although in some cases it is expected to recover a spherical mass distribution at large scales \citep[e.g.,][]{Gavazzi2005}, at smaller scales, i.e., at strong-lensing scales, the mass distribution tends to be aspherical. In order to investigate this tension between the results, we construct a strong lensing spherical model, with the same constraints as before. The results are shown in the left panel of Fig.~\ref{figA2}. It is clear that in this case the model is not well constrained, a natural result given the lensing images in \object{SL2S\,J02140-0535}.
However, the comparison between the joint distributions in the top panel of Fig.~\ref{Fig:10} and the one in the right panel of Fig.~\ref{figA2} shows that the change in the contours is small, which indicates that the assumption of spherical symmetry has little impact on the final result\footnote{Note that the agreement between contours is related to the shallow distribution from {\sc LENSTOOL}, which produces a joint distribution that follows the top edge of the narrower {\sc MAMPOSS}t distribution.}. Note also that the combined model has a bimodal distribution (see right panel of Fig.~\ref{figA2}), with higher values of the concentration inherited from the lensing constraints. A possible way to shed more light on the systematic errors of our method is to test it with simulations, for example, comparing spherical and non-spherical halos, or quantifying the bias when a given mass distribution is assumed and the underlying one is different. Such an analysis is beyond the scope of the present work; however, it could be performed in the near future, since state-of-the-art simulations of lensing galaxy clusters have reached an impressive quality \citep[e.g.,][]{Meneghetti2016}. \section{Discussion}\label{Discus} \subsection{Lensing and dynamics as complementary probes} From Fig.~\ref{PDFModels1} and Fig.~\ref{PDFModels2}, it is clear that the \textrm{SL\,Model} is not able to constrain the NFW mass profile. This result is expected, since SL constraints are available only in the very central part of \object{SL2S\,J02140-0535}, whereas the scale radius is generally several times larger than the SL region. The degeneracy between $c_{200}$ and $r_s$ (or $r_s$ and $r_{200}$), which is related to the mathematical definition of the gravitational potential, was previously discussed in \citetalias{Verdugo2011}. This degeneracy occurs commonly in lensing modeling \citep[e.g.,][]{jullo07}.
From the \textrm{SL\,Model} we obtain the following values: $r_s$ = $184^{+209}_{-60}$ kpc, and $c_{200}$ = $6.0^{+1.8}_{-2.3}$. Thus, the model is not well constrained (similarly, for the virial radius we obtain a value of log$r_{200}$ = 3.04$^{+0.12}_{-0.06}$ kpc). Moreover, the mass $M_{200}$ is not constrained in the \textrm{SL\,Model} (see Fig.~\ref{Fig:9}). The same conclusion holds when considering dynamics only, i.e., the \textrm{Dyn\,Model}: the constraints are so broad that both parameters ($r_s$ and $c_{200}$) can be considered as unconstrained (see the middle panels of Fig.~\ref{PDFModels1} and Fig.~\ref{PDFModels2}). In this case we obtained a scale radius of $r_s$ = $199^{+135}_{-\phantom{1}91}$ kpc, and a concentration of $c_{200}$ = $3.1^{+2.9}_{-1.4}$. However, the scale radius is slightly more constrained than the value obtained with the \textrm{SL\,Model}, because {\sc MAMPOSS}t employs the spatial distribution of the galaxies to estimate this value. Furthermore, the virial radius $r_{200}$ is even more constrained, with log$r_{200}$ = 2.80$^{+0.07}_{-0.06}$ kpc. Note that these weak constraints are related to the small number of galaxy members (24) in the group. Nonetheless, even with the low number of galaxies, the error in our mass $M_{200}$ (see Table~\ref{tbl-1} and Fig.~\ref{Fig:9}) is approximately a factor of two, i.e., $\sim$ 0.3 dex, consistent with the analysis of \citet{Old2015}. \begin{figure*}[!htp] \centering \includegraphics[scale=0.87]{fig11.eps} \caption{The distribution of mass, gas, and galaxies in \object{SL2S\,J02140-0535}, as reflected by the total mass derived from the combined lensing and dynamics analysis (magenta), the adaptively smoothed $i$-band luminosity of group galaxies (yellow), and the surface brightness from \xmm\, observations (blue).
The lensing mass contours (magenta lines) correspond to projected surface densities of 0.2$\times$10$^{9}$, 1.2$\times$10$^{9}$, and 7.4$\times$10$^{9}$ M$_{\sun}$\,arcsec$^{-2}$. The size of the image is 1.5$\times$1.5 Mpc.} \label{Fig:11} \end{figure*} Interestingly, when combining both probes, \textrm{SL+Dyn\,Model}, it is possible to constrain both the scale radius and the concentration parameter. SL is sensitive to the mass distribution at inner radii (within 10$\arcsec$), whereas the dynamics provide constraints at larger radii (see bottom panels of Fig.~\ref{PDFModels1} and Fig.~\ref{PDFModels2}). For this model, we find the values $r_s$ = $82^{+44}_{-17}$ kpc, $c_{200}$ = $10.0^{+1.7}_{-2.5}$, and $M_{200}$ = $1.0^{+0.5}_{-0.2}$ $\times$ 10$^{14}$ M$_\odot$. The errors in the mass, although large, are smaller than those of the two previous models, by a factor of 2.2 (0.34 dex) and by a factor of 1.4 (0.15 dex) for the \textrm{SL\,Model} and the \textrm{Dyn\,Model}, respectively. To highlight how the combined \textrm{SL+Dyn\,Model} is better constrained than both the \textrm{SL\,Model} and the \textrm{Dyn\,Model}, we show in Fig.~\ref{Fig:10} the 2D contours for $c_{200} - r_s$, log$r_s$ $-$ log$r_{200}$, and $c_{200} - M_{200}$, for the three models discussed in this work. We note the overlap of the solutions of the \textrm{SL\,Model} and the \textrm{Dyn\,Model}, as well as the stronger constraints of the \textrm{SL+Dyn\,Model}. The shift in the solutions for \textrm{SL+Dyn\,Model} that is seen in Fig.~\ref{PDFModels1}, i.e., $r_s$ is much lower (greater $c_{200}$) than the values for the \textrm{SL\,Model} and the \textrm{Dyn\,Model}, can be understood in the light of the discussion presented in Sect.\,\ref{MassM4.3}, and additionally explained with the analysis of Fig.~\ref{Fig:10}.
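The best-fit numbers above are internally consistent with the definition of $M_{200}$, since $M_{200} = \frac{4}{3}\pi r_{200}^3 \times 200\rho_\mathrm{crit}(z)$ with $r_{200} = c_{200}\,r_s$. A minimal sketch of this check (the flat $\Lambda$CDM parameters $\Omega_m = 0.3$, $h = 0.7$ are assumptions of this sketch, not necessarily the values adopted in the paper):

```python
import math

# Assumed cosmology (illustrative; the paper's adopted values may differ slightly)
OM, OL, H = 0.3, 0.7, 0.7
RHO_CRIT0 = 2.775e11 * H**2  # critical density today [Msun / Mpc^3]

def m200_from_nfw(r_s_kpc, c200, z):
    """M200 implied by an NFW scale radius and concentration at redshift z."""
    rho_crit_z = RHO_CRIT0 * (OM * (1 + z)**3 + OL)  # flat LCDM E(z)^2 scaling
    r200_mpc = c200 * r_s_kpc / 1000.0               # r200 = c200 * r_s
    return (4.0 / 3.0) * math.pi * r200_mpc**3 * 200.0 * rho_crit_z  # Msun

# Best-fit SL+Dyn values: r_s = 82 kpc, c200 = 10, at z = 0.44
print("M200 = %.2e Msun" % m200_from_nfw(82.0, 10.0, 0.44))
```

With these assumptions the quoted best fit ($r_s$, $c_{200}$) returns $M_{200} \approx 1.0\times10^{14}$ M$_\odot$, matching the value given in the text.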
On one hand, the tension between both results (lack of agreement between solutions at 1$\sigma$) is the result of assuming a spherical mass distribution in the \textrm{Dyn\,Model}. On the other hand, the joint solution of \textrm{SL+Dyn\,Model} (red contours) is consistent with the region where both the \textrm{SL\,Model} and the \textrm{Dyn\,Model} overlap. \subsection{Mass, light \& gas} We find that the centre of the mass distribution coincides with that of the light (see Fig.~\ref{Fig:11}). In \citetalias{Verdugo2011} we showed that the position angle of the halo was consistent with the orientation of the luminosity contours and the spatial distribution of the group-galaxy members. In the present work we confirm these results. The measured position angles of the luminosity contours presented in Fig.~\ref{fig1} and Fig.~\ref{Fig:11} (the values are equal to 109$^{\circ}$ $\pm$ 2$^{\circ}$, 102$^{\circ}$ $\pm$ 2$^{\circ}$, and 99$^{\circ}$ $\pm$ 9$^{\circ}$, from innermost to outermost contour) agree with the halo position angle of $111.1^{+1.4}_{-1.3}$ degrees. In addition to the distribution of mass and light, Fig.~\ref{Fig:11} shows the distribution of the gas component of \object{SL2S\,J02140-0535}, which was obtained from our X-ray analysis. The agreement between these independent observational tracers of the three group constituents (dark matter, gas, and galaxies) is remarkable. This supports a scenario where the mass is traced by light, and argues in favor of an undisturbed structure, as opposed to a disturbed one in which the different tracers are separated, such as in the Bullet Group \citep[][]{Gastaldello2014} or in more extreme cluster mergers \citep[e.g.,][]{Bradac2008,Randall2008,Menanteau2012}. \subsection{Comparison with our previous work} In \citetalias{Verdugo2011} we analyzed \object{SL2S\,J02140-0535} using the dynamical information to constrain and build a reliable SL model for this galaxy group.
However, a perfect agreement between the best-fit parameter values computed in the former work and the values reported in the present paper is not expected, mainly because of the difference in methodologies, and also because of the larger number of spectroscopically confirmed members reported in this work. For example, in \citetalias{Verdugo2011} we found the values $c_{200}$ = 6.0 $\pm$ 0.6, and $r_s$ = 170 $\pm$ 18 kpc, whereas for our \textrm{SL+Dyn\,Model}, we find the values $c_{200}$ = $10.0^{+1.7}_{-2.5}$, and $r_s$ = $82^{+44}_{-17}$ kpc. However, it is important to note that the latter values lie within the range predicted by \citetalias{Verdugo2011} with dynamics (cf. 2 < $c_{200}$ < 8, and 50 kpc < $r_s$ < 200 kpc). Furthermore, those ranges need to be corrected by using the new velocity dispersion. This correction will shift the confidence interval to larger values in $c$, and smaller values in $r_s$, thus improving the agreement between both works. Additionally, as presented in Sect.\,\ref{DATA}, we found the velocity dispersion to be $\sigma$ = 562 $\pm$ 60 km\,s$^{-1}$ with the 24 confirmed members. This velocity dispersion is in good agreement with the velocity reported in \citetalias{Verdugo2011}, $\sigma$ = 630 $\pm$ 107 km\,s$^{-1}$, which was computed with only 16 members\footnote{The projected virial radius, $\widetilde{R_{v}}$ = 0.9 $\pm$ 0.3 Mpc \citep[the projected harmonic mean radius, e.g.,][]{Irg02}, is also consistent with the value reported in our previous work, $\widetilde{R_{v}}$ = 0.8 $\pm$ 0.3 Mpc, which is worth noting, as it was used in \citetalias{Verdugo2011} to estimate the priors in the SL modeling.}. It is also in agreement with the value obtained from the weak lensing analysis \citep[$\sigma$ = 638 $^{+101}_{-152}$ km\,s$^{-1}$,][]{Gael2013}. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.5]{fig12.eps} \caption{2D projected mass as a function of the radius measured from the BGG.
The green and blue shaded areas correspond to the mass profile within 1-$\sigma$ errors for the SL model (\textrm{SL+Dyn Model}) and the weak lensing model reported in \citetalias{Verdugo2011}, respectively. The orange-shaded region shows the area where the arc systems lie. The two red triangles with error bars show two estimates (at 0.5 and 1 Mpc) of the weak lensing mass from \citet{Gael2013}. Black diamonds (shifted in $r$-0.05 for clarity) show the predicted mass calculated from the work of \citet{Lieu2015}. Cyan diamonds (shifted in $r$+0.05 for clarity) are the masses calculated from \citet{Lieu2015} data but assuming $c_{200}$ = 10 (see text). We also depict with two arrows our best $r_s$ (for $c_{200}$ = 10) and the $r_s$ (for $c_{200}$ = 2.7) reported in \citet{Lieu2015}.\label{Fig:12}} \end{center} \end{figure} \subsection{An over-concentrated galaxy group?} The concentration value of \object{SL2S\,J02140-0535} is clearly higher than expected from $\Lambda$CDM numerical simulations. Assuming a dark matter halo at $z$ = 0.44 with $M_{200}$ $\approx$ 1 $\times$ 10$^{14}$ M$_{\sun}$, the expected concentration is $c_{200}$ $\approx$ 4.0 \citep[computed with the procedures of][]{Duff08}. \object{SL2S\,J02140-0535} has also been studied by \citet{Gael2014}, who were able to constrain the scale radius and the concentration parameters of galaxy groups using stacking techniques. \object{SL2S\,J02140-0535}, with an Einstein radius of $\sim$7$\arcsec$, belongs to their stack "R3", which is characterized by $c_{200} \sim 10$ and M$_{200} \sim 10^{14}$ M$_{\sun}$. Those values are in agreement with our computed values. As discussed thoroughly in \citet{Gael2014}, this over-concentration seems to be due to an alignment of the major axis with the line of sight. Even when a cluster displays mass contours elongated in the plane of the sky, its major axis can be close to the line of sight \citep[see for example][]{Limousin2007,Limousin2013}.
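The $\Lambda$CDM expectation quoted above can be reproduced with the $c(M, z)$ power law of \citet{Duff08}, $c_{200} = A\,(M_{200}/M_\mathrm{piv})^{B}\,(1+z)^{C}$. A minimal sketch; the fit parameters below correspond, to the best of our knowledge, to the relaxed-halo NFW fit of that paper, and $h = 0.7$ is an assumption of this sketch:

```python
# Duffy et al. (2008) c200(M200, z) power law: c = A * (M / Mpiv)^B * (1 + z)^C.
# A, B, C are taken here as the published relaxed-sample NFW fit;
# the sample choice and h = 0.7 are assumptions of this sketch.
H = 0.7
M_PIVOT = 2e12 / H  # Msun (pivot mass of 2e12 h^-1 Msun)

def c200_duffy(m200_msun, z, A=6.71, B=-0.091, C=-0.44):
    """Expected NFW concentration from the Duffy et al. (2008) relation."""
    return A * (m200_msun / M_PIVOT)**B * (1.0 + z)**C

print("expected c200 = %.1f" % c200_duffy(1e14, 0.44))
```

This gives $c_{200} \approx 4$ for a $10^{14}$ M$_\odot$ halo at $z = 0.44$, far below the measured $c_{200} = 10.0^{+1.7}_{-2.5}$, which illustrates the over-concentration discussed in the text.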
Finally, Fig.~\ref{Fig:12} shows the comparison between the mass obtained from our \textrm{SL+Dyn\,Model} and the weak lensing mass previously obtained by \citetalias{Verdugo2011}. Both models overlap up to $\sim$1 Mpc. Beyond this radius the dynamical constraints are not strong, which is consistent with the scarce number of galaxies located at radii larger than 1 Mpc; it is also worth noting that the weak lensing mass estimate can be slightly overestimated at large radii, since the mass is calculated assuming a singular isothermal sphere. The red triangles in Fig.~\ref{Fig:12} show two estimates (at 0.5 and 1 Mpc) of the weak lensing mass, calculated using the values reported in \citet{Gael2013}. Those values are also consistent with the above-mentioned measurements. For comparison, we show with black diamonds the predicted mass (at 0.5 and 1 Mpc) derived from \citet{Lieu2015}. The discrepancy between these values and our values calculated from lensing arises from the fact that \citet{Lieu2015} set the cluster concentration from a mass-concentration relation derived from N-body simulations, thus obtaining the values $c_{200}$ = 2.7 and $r_s$ = 0.52 Mpc. To test this assertion, we performed a simple experiment: we used the values of $M_{200}$ and $c_{200}$ from \citet{Lieu2015} to generate a shear profile with the same radial range and number of bins as theirs, and then fitted these data seeking the best $M_{200}$ value while fixing the concentration to $c_{200}$ = 10. The projected mass from this estimate is shown as cyan symbols in Fig.~\ref{Fig:12}. This change in concentration not only resolves the difference in mass estimates, but also explains why \object{SL2S\,J02140-0535} ({\sc XLSSC\,110}) is an outlier in the sample of \citet{Lieu2015}. This highlights the risk of assuming a $c$-$M$ relation for some particular objects, such as strong lensing clusters.
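Projected (2D) masses like those compared in Fig.~\ref{Fig:12} have a closed analytic form for an NFW halo \citep[e.g.,][]{Wright2000}. A sketch of that standard result, with $\rho_s$ fixed by $c_{200}$ and the critical density at the group redshift (the cosmological parameters below are assumptions of this sketch, not values taken from the paper):

```python
import math

OM, OL, H = 0.3, 0.7, 0.7          # assumed flat LCDM cosmology
RHO_CRIT0 = 2.775e11 * H**2        # Msun / Mpc^3

def projected_nfw_mass(R_mpc, r_s_mpc, c200, z):
    """Mass inside a cylinder of radius R for an NFW halo (Wright & Brainerd 2000)."""
    rho_crit_z = RHO_CRIT0 * (OM * (1 + z)**3 + OL)
    # characteristic overdensity from the definition of c200
    delta_c = (200.0 / 3.0) * c200**3 / (math.log(1 + c200) - c200 / (1 + c200))
    rho_s = delta_c * rho_crit_z
    x = R_mpc / r_s_mpc
    if x < 1.0:
        f = math.acosh(1.0 / x) / math.sqrt(1.0 - x * x)
    elif x > 1.0:
        f = math.acos(1.0 / x) / math.sqrt(x * x - 1.0)
    else:
        f = 1.0
    return 4.0 * math.pi * rho_s * r_s_mpc**3 * (math.log(x / 2.0) + f)  # Msun

# SL+Dyn best fit: r_s = 82 kpc, c200 = 10, z = 0.44; projected mass at 0.5 and 1 Mpc
for R in (0.5, 1.0):
    print("M2D(<%.1f Mpc) = %.2e Msun" % (R, projected_nfw_mass(R, 0.082, 10.0, 0.44)))
```

Under these assumptions the best-fit profile yields a projected mass of order $10^{14}$ M$_\odot$ at 1 Mpc, i.e., the scale probed by the weak lensing points in Fig.~\ref{Fig:12}.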
The bias and its effect on the $M$-$T$ scaling relation will be discussed in a forthcoming publication (Fo{\"e}x et al. in prep.). \section{Conclusions}\label{Conclusions} We have presented a framework that allows us to fit strong lensing and dynamics simultaneously. We applied our method to probe the gravitational potential of the lens galaxy group \object{SL2S\,J02140-0535} over a large radial range, by combining two well-known codes, namely {\sc MAMPOSS}t \citep[][]{Mamon2013} and {\sc LENSTOOL} \citep{jullo07}. We performed a fit adopting an NFW profile and three galaxy-scale mass components as perturbations to the group potential, as previously done in \citetalias{Verdugo2011}, but now including the dynamical information in a new, consistent way. The number of galaxies increased to 24 when new VLT (FORS2) spectra were analyzed. This new information was included to perform the combined strong lensing and dynamics analysis. Moreover, we studied the gas distribution within the group from X-ray data obtained with \xmm. We list below our results: \begin{enumerate} \item Our new observational data set confirms the results presented previously in \citetalias{Verdugo2011}. We also present supporting X-ray analysis. \begin{itemize} \item Spectroscopic analysis confirms that the arcs AB and the arc C of \object{SL2S\,J02140-0535} belong to different sources, the former at $z_{\rm spec}$ = 1.017 $\pm$ 0.001, and the latter at $z_{\rm spec}$ = 1.628 $\pm$ 0.001. These redshift values are consistent with the photometric redshift estimation. \item We find 24 secure members of \object{SL2S\,J02140-0535}, from the analysis of our new and previously reported spectroscopic data. The completeness $C\sim60\%$ is roughly constant up to 1 Mpc. We also computed the velocity dispersion, obtaining $\sigma$ = 562 $\pm$ 60 km\,s$^{-1}$, a value comparable to the previous estimate of \citetalias{Verdugo2011}.
\item The X-ray contours show an elongated shape consistent with the spatial distribution of the confirmed members. This, together with the fact that the X-ray emission is unimodal and centered on the lens, argues in favor of an unimodal structure. \end{itemize} \item Our method fits strong lensing and dynamics simultaneously, allowing us to probe the mass distribution from the centre of the group up to its virial radius. However, there is a tension between the results of the \textrm{Dyn\,Model} and the \textrm{SL\,Model}, related to the assumed spherical symmetry of the former. While our result shows that deviation from spherical symmetry can in some cases induce a bias in the {\sc MAMPOSS}t solution for the cluster $M(r)$, this need not be the rule. In another massive cluster at $z$ = 0.44, \citet{Biviano2013} found good agreement between the spherical {\sc MAMPOSS}t solution and the non-spherical solution from strong lensing. In addition, {\sc MAMPOSS}t has been shown to provide unbiased results for the mass profiles of cluster-sized halos extracted from cosmological simulations \citep{Mamon2013}. \begin{itemize} \item Models relying solely on either lensing (\textrm{SL\,Model}) or dynamical information (\textrm{Dyn\,Model}) are not able to constrain the scale radius of the NFW profile. We obtain for the best \textrm{SL\,Model} a scale radius of $r_s$ = $184^{+209}_{-60}$ kpc, whereas for the best \textrm{Dyn\,Model} we obtain a value of $r_s$ = $199^{+135}_{-\phantom{1}91}$ kpc. We find that the concentration parameter is unconstrained as well. \item However, it is possible to constrain both the scale radius and the concentration parameter when combining the lensing and dynamics analyses \citepalias[as previously discussed in][]{Verdugo2011}. We find a scale radius of $r_s$ = $82^{+44}_{-17}$ kpc, and a concentration value of $c_{200}$ = $10.0^{+1.7}_{-2.5}$.
The \textrm{SL+Dyn\,Model} reduces the error in the mass estimation by 0.34 dex (a factor of 2.2) when compared to the \textrm{SL\,Model}, and by 0.15 dex (a factor of 1.4) when compared to the \textrm{Dyn\,Model}. \item Our joint \textrm{SL+Dyn\,Model} allows us to probe, in a reliable fashion, the mass profile of the group \object{SL2S\,J02140-0535} at large scale. We find a good agreement between the luminosity contours, the mass contours, and the X-ray emission. This result confirms that the mass is traced by light. \end{itemize} \end{enumerate} The joint lensing-dynamical analysis presented in this paper, applied to the lens galaxy group \object{SL2S\,J02140-0535}, is aimed at demonstrating a consistent method to probe the mass density profile of groups and clusters of galaxies. This is the first paper in a series in which we extend our methodology to galaxy clusters, for which the number of constraints is larger both in lensing images and in galaxy members. Therefore, we should be able to probe more parameters with our new method, such as the anisotropy parameter and the tracer radius (Verdugo et al. in preparation). \begin{acknowledgements} The authors thank the anonymous referee for invaluable remarks and suggestions. T. V. thanks the staff of the Instituto de F\'isica y Astronom\'ia of the Universidad de Valpara\'iso. ML acknowledges the Centre National de la Recherche Scientifique (CNRS) for its support. V. Motta gratefully acknowledges support from FONDECYT through grant 1120741, ECOS-CONICYT through grant C12U02, and Centro de Astrof\'{\i}sica de Valpara\'{\i}so. M.L. and E.J. also acknowledge support from ECOS-CONICYT C12U02. A.B. acknowledges partial financial support from the PRIN INAF 2014: "Glittering kaleidoscopes in the sky: the multifaced nature and role of galaxy clusters" P.I.: M. Nonino. K. Rojas acknowledges support from Doctoral scholarship FIB-UV/2015 and ECOS-CONICYT C12 U02. J.M. acknowledges support from FONDECYT through grant 3160674.
J.G.F-T is currently supported by Centre National d'Etudes Spatiales (CNES) through PhD grant 0101973 and the R\'egion de Franche-Comt\'e and by the French Programme National de Cosmologie et Galaxies (PNCG). M. A. De Leo would like to thank the NASA-funded FIELDS program, in partnership with JPL on a MUREP project, for their support. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Transparent conducting materials (TCMs) are necessary in many applications ranging from solar cells to transparent electronics. So far, \emph{n}-type oxides (e.g., \ce{In2O3}, \ce{SnO2} and \ce{ZnO}) are the highest-performing TCMs, which allows them to be used in commercial devices~\cite{H.Ohta-MatTo04, A.Facchetti10, K.Ellmer-NatPho12, P.Barquinha12, E.Fortunato12}. On the other hand, \emph{p}-type TCMs show poorer performance, especially in terms of carrier mobility. This hinders the development of new technologies such as transparent solar cells or transistors~\cite{K.Ellmer-NatPho12, S.C.Dixon-JMChC16}. Taking advantage of the predictive power of density functional theory (DFT) calculations, we have set up a high-throughput (HT) computational framework to identify novel \emph{p}-type TCMs, focusing first on oxide compounds~\cite{G.Hautier-NatCom13, J.B.Varley-PRB14, A.Bhatia-CheMat16}. The analysis of the calculated HT data confirmed that, on average, \emph{p}-type oxides have inherently higher effective masses than \emph{n}-type oxides~\cite{G.Hautier-NatCom13}. This could be traced back to the strong oxygen \emph{p}-orbital character in the valence band of most oxides and has rationalized the current gap in mobility between the best \emph{p}-type and \emph{n}-type oxides. This inherent difficulty in developing high-hole-mobility oxides justifies moving towards non-oxide TCM chemistries including fluorides~\cite{H.Yanagi-APL03}, sulfides~\cite{S.Park-APL02,R.W.Robinson-AdvEMat16}, oxyanions~\cite{K.Ueda-APL00}, or germanides~\cite{F.Yan-NatCom15}. Recently, we started extending our HT computing approach to search for non-oxide TCMs. Phosphides were identified to be among the lowest hole effective mass materials and, more specifically, boron phosphide (BP) was detected as a very promising \emph{p}-type TCM candidate~\cite{J.B.Varley-CheMat17}.
We note that subsequent computational studies focusing on selected binaries and ternaries also reported on the computational screening of non-oxide TCMs~\cite{R.K.M.Raghupathy-JMCC17, K.M.Raghupathy2018}. In the present work, we extend our HT computing approach to a larger space of chemistries and investigate some selected candidates. We screen all non-oxide compounds in a large computational data set ($>$34,000 semiconductors)~\cite{F.Ricci-SciData17}. Combining DFT-based HT computations with higher accuracy methods such as $GW$, hybrid functionals and electron-phonon coupling computations (to assess the relaxation time and thus the mobility), we identify CaTe and \ce{Li3Sb} as high-mobility \emph{p}-type TCM candidates of great interest. \section{Methods} \label{methods} All the considered materials originate from the Inorganic Crystal Structure Database (ICSD)~\cite{ICSD13}. Their relaxed crystal structures and electronic band structures were obtained from the Materials Project database~\cite{A.Jain-APLM13, MatPro13}. These rely on DFT high-throughput computations which were performed with VASP~\cite{G.Kresse-CMS96,G.Kresse-PRB96} using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation (XC) functional~\cite{J.Perdew-PRL96} within the projector augmented wave (PAW) framework~\cite{P.E.Blochl-PRB94}. One of the first selection criteria for TCMs is their stability. Here, it is assessed by the energy above hull $E_\mathrm{hull}$ in the Materials Project database~\cite{A.Jain-APLM13}. For a compound stable at 0~K, $E_\mathrm{hull}=0$~meV/atom, and the stability decreases as $E_\mathrm{hull}$ increases. At the beginning of the screening procedure, the PBE band gap can be used as a filter. However, since PBE is known to systematically underestimate the band gap compared to experiments, more accurate calculations are needed in the subsequent steps (though for a limited number of materials).
So, the fundamental and direct band gaps were also calculated with VASP for about a hundred materials using the Heyd-Scuseria-Ernzerhof (HSE) hybrid XC functional~\cite{J.Heyd03, E.N.Brothers08}, adopting the same computational parameters as for the PBE calculations. For the final candidates (\ce{CaTe} and \ce{Li3Sb}), $G_0W_0$ calculations were performed with ABINIT~\cite{X.Gonze-Comput.Mater.Sci02,X.Gonze-Z.Kristallogr05,X.Gonze-Comput.Phys.Commun09,X.Gonze-Comput.Phys.Commun16}. In these calculations, optimized norm-conserving (NC) pseudopotentials including semi-core electrons were used, which were generated with ONCVPSP~\cite{D.R.Hamann-PRB13, pseudodojo}. The kinetic cut-off energy for the wavefunctions was set to 51 and 52~Ha for \ce{CaTe} and \ce{Li3Sb}, respectively, as recommended in the PseudoDojo table~\cite{pseudodojo}. The convergence of these calculations with respect to the kinetic energy cut-off $E_c$ for the dielectric function and the number of bands $N_b$ was tested using automatic $GW$ workflows~\cite{M.J.vanSetten-PRB17} based on the pymatgen~\cite{S.P.Ong-CMS13} and AbiPy packages~\cite{Abipy14,X.Gonze-Comput.Phys.Commun16}. For \ce{CaTe}, the convergence of the gap at the $\Gamma$ point (with a truncation error smaller than 0.01~eV) was obtained for $E_c=12$~Ha and $N_b=480$. In the case of \ce{Li3Sb}, the convergence is significantly faster: using $E_c=10$~Ha and $N_b=240$ guarantees a truncation error smaller than 0.01~eV. More details about the convergence tests are available in the supplementary document. For the calculations of the screening and of the quasi-particle self-energy, $10\times10\times10$ and $8\times8\times8$ \textbf{k}-point meshes were used for \ce{CaTe} and \ce{Li3Sb}, respectively. The band structures are then interpolated from these \textbf{k}-point meshes using AbiPy~\cite{Abipy14,X.Gonze-Comput.Phys.Commun16}.
The point defect computations were performed using the supercell technique~\cite{C.Freysoldt-RevMP14}, adopting $3\times3\times3$ supercells of the primitive cells. We calculated the defect formation energies first using the PBE XC functional, but also with the more accurate HSE functional for \ce{Li3Sb} and \ce{CaTe}~\cite{J.Heyd03, E.N.Brothers08}. For the latter, the screening length and fraction of exact exchange were set to the common values of 0.2~\AA\ and 25~\%, respectively. The kinetic energy cut-off for the wavefunctions was set to 19.1~Ha (520~eV), and the relaxations were stopped when the change in total energy between two ionic relaxation steps was smaller than $3.67\times10^{-4}$~Ha (0.01~eV). The formation energy of defect $D$ in charged state $q$ can be written as~\cite{H.-P.Komsa-RRB12,S.B.Zhang-PRL91} \begin{equation} \begin{split} E_f[D^q] = & E[D^q] + E_{corr}[D^q] - E[bulk] - \Sigma_{i}n_i\mu_i \\ & + q(\epsilon_\mathrm{VBM} + \Delta v + \Delta \epsilon_F) \end{split} \label{dfenergy} \end{equation} where $E[D^q]$ and $E[bulk]$ are the total energies of the supercell with a defect $D$ in the charge state $q$ and without any defects, respectively; $n_i$ is the number of atoms of type $i$ removed ($n_i<0$) or added ($n_i>0$); and $\mu_i$ is the corresponding chemical potential. $\epsilon_\mathrm{VBM}$ is the energy of the valence band maximum (VBM), and $\Delta \epsilon_F$ is the Fermi level referenced to $\epsilon_\mathrm{VBM}$. The correction terms $E_{corr}[D^q]$ and $\Delta v$ are introduced to account for the spurious image-charge interactions and the potential alignment for charged defects, respectively. The defect states with charge $q$ were corrected using the extended Freysoldt (Kumagai) scheme~\cite{C.Freysoldt-PSSB11,Y.Kumagai-PRB14}. All defect computations were performed using the PyCDT package~\cite{D.Broberg-CPCom18}.
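Once the supercell totals, chemical potentials and corrections are available, Eq.~(\ref{dfenergy}) reduces to a one-line sum. A minimal sketch (all numerical inputs below are hypothetical placeholders, not values from this work):

```python
def defect_formation_energy(E_defect, E_bulk, q, mu, n, eps_vbm,
                            delta_ef, E_corr=0.0, delta_v=0.0):
    """Formation energy of a defect in charge state q, following Eq. (1).

    mu, n: parallel dicts giving, per atomic species, the chemical
    potential and the number of atoms added (n > 0) or removed (n < 0).
    """
    reservoir = sum(n[sp] * mu[sp] for sp in n)  # Sum_i n_i * mu_i
    return (E_defect + E_corr - E_bulk - reservoir
            + q * (eps_vbm + delta_v + delta_ef))

# Hypothetical example: a -1 charged Li vacancy (one Li atom removed)
E_f = defect_formation_energy(
    E_defect=-250.30, E_bulk=-252.10, q=-1,
    mu={"Li": -1.90}, n={"Li": -1},
    eps_vbm=2.50, delta_ef=0.50, E_corr=0.05, delta_v=-0.02)
print("E_f = %.2f eV" % E_f)
```

In practice these bookkeeping steps (including the Kumagai-type finite-size correction entering through $E_{corr}$ and $\Delta v$) are what PyCDT automates.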
The effective masses were calculated with BoltzTrap (based on the Boltzmann transport theory framework)~\cite{G.K.H.Madsen-CPC06} using the pymatgen~\cite{S.P.Ong-CMS13} interface and the Fireworks workflow package~\cite{A.Jain-CCPE15}. All the raw effective mass data are freely available in a separate paper which covers around 48,000 inorganic materials~\cite{F.Ricci-SciData17}. The mobility depends on the effective mass $m^{*}$ through $\mu = e\tau/m^{*}$, where the relaxation time $\tau$ (inverse of the scattering rate) depends on different scattering mechanisms. Carriers can be scattered by phonons, ionized and neutral impurities, grain boundaries, etc. In this work, we only took into account the scattering of electrons by phonons, which is likely to be an important component of the scattering and is an intrinsic mechanism, difficult to control through purity and microstructure. The scattering of carriers by phonons can be computed theoretically if the electron-phonon matrix elements are known. In principle, one can employ Density Functional Perturbation Theory (DFPT) to obtain all the electron-phonon matrix elements from first principles. However, converging the relevant physical properties (such as the scattering rate of electrons by phonons) often requires very dense \textbf{k}-point and \textbf{q}-point meshes for electrons and phonons, respectively, leading to a considerable increase in computational time. The recently developed interpolation techniques based on Wannier functions offer a very practical and efficient solution to overcome this obstacle. In this work, we used the EPW code~\cite{J.Noffsinger-CPC10,S.Ponce-CPC16} interfaced with Quantum ESPRESSO~\cite{P.Giannozzi-J.Phys.CondMat09,P.Giannozzi-J.Phys.CondMat17} to calculate the relaxation time $\tau_{n\textbf{k}}$ ($n$ and \textbf{k} are the band index and wave vector of a Bloch state, respectively). More details on the theory and the implementation can be found in Ref.~\onlinecite{S.Ponce-CPC16}.
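The link between the computed relaxation times and the mobility is simply the Drude-like expression $\mu = e\tau/m^{*}$ quoted above. As an order-of-magnitude illustration (the value of $\tau$ below is a hypothetical placeholder, not a result of this work):

```python
E_CHARGE = 1.602176634e-19     # elementary charge [C]
M_ELECTRON = 9.1093837015e-31  # free electron mass [kg]

def mobility_cm2_per_Vs(tau_s, m_star_rel):
    """Drude mobility mu = e*tau/m*, with m* given in units of m0."""
    mu_si = E_CHARGE * tau_s / (m_star_rel * M_ELECTRON)  # m^2 / (V s)
    return mu_si * 1e4  # convert to cm^2 / (V s)

# Hypothetical tau = 10 fs combined with the Li3Sb hole mass m* = 0.24 m0
print("mu = %.0f cm^2/Vs" % mobility_cm2_per_Vs(1e-14, 0.24))
```

This makes explicit why a low hole effective mass is the first screening criterion: at fixed $\tau$, the mobility scales as $1/m^{*}$.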
The $\tau_{n\textbf{k}}$ were interpolated on a dense $40\times40\times40$ mesh for both \textbf{k}-points (for electrons) and \textbf{q}-points (for phonons), starting from the DFPT values on a $6\times6\times6$ mesh. The latter (together with the structural relaxation and the self-consistent and non-self-consistent calculations needed to run EPW) were obtained using Quantum ESPRESSO with NC pseudopotentials and very stringent convergence parameters, e.g., a high cut-off energy of 40~Ha. These $\tau_{n\textbf{k}}$ are then used as an input to compute the carrier mobility by solving the Boltzmann transport equation by means of the BoltzTrap package~\cite{G.K.H.Madsen-CPC06}. In the latter calculations, the DFT band energies (computed on a finite number of \textbf{k}-points) are interpolated using star functions (see section 2 of Ref.~\onlinecite{G.K.H.Madsen-CPC06}). Here, we have implemented another interpolation for the relaxation time in BoltzTrap, in order to obtain the same very dense \textbf{k}-point grid as the one used for the band energies. The physical principle behind this implementation is that the symmetries of the quasi-particle energies are the same as those of the band energies~\cite{M.Giantomassi09} ($\tau_{n\textbf{k}}$ due to the interaction with phonons can be calculated from the imaginary part of the electron-phonon self-energy). \section{Results} Starting from the Materials Project database, our first step was to extract materials with a low hole effective mass ($< 1~m_o$, where $m_o$ is the free electron mass) and large enough fundamental ($>0.5$~eV) and direct ($>1.5$~eV) gaps, based on PBE calculations. In its most general form, the effective mass is represented by a tensor. As most TCMs are used as polycrystalline films, materials with isotropic or close-to-isotropic transport are easier to use in practical applications.
Therefore, for the screening, we focus on the three principal values of this tensor and sort the materials based on the highest of the three principal hole effective masses. About 390 compounds passed through this first filter. We then screened out very unstable materials, selecting only those with an energy above hull lower than 24~meV/atom. This threshold corresponds to the typical standard deviation of computational errors (compared with experiment) of DFT formation energies~\cite{G.Hautier2012}. For the 107 materials passing these criteria, more accurate fundamental and direct gaps were calculated using the HSE hybrid functional. All the results of this step are presented in Table SI of the Supplemental Material~\cite{supplem}. For the sake of clarity, \tabref{tableI} shows a selection of 63 materials with a direct band gap $\geq 2.8$~eV. The materials are sorted in decreasing order of the computed direct band gap. \begin{center} \LTcapwidth=0.98\textwidth \begin{longtable*}{@{\extracolsep{\fill}} l l r c c r c c c c l} \caption{Formula, space group, Materials Project identification number (MP-id)~\cite{A.Jain-APLM13, MatPro13}, fundamental $E_g$ and direct $E_g^d$ gaps computed with the HSE functional (in eV), energy above hull $E_\mathrm{hull}$ (in meV/atom), principal components $m_1$, $m_2$ and $m_3$ of the hole effective mass tensor (in atomic units), verification of the absence of toxic/rare-earth (T/RE) elements (Be, As, Cd, Yb, Hg, Pb and Th) and of the \emph{p}-type dopability (when computed here or obtained from the existing literature) for the selected compounds (see text). The materials are sorted as a function of the direct band gap in decreasing order.
} \label{tableI} \\ \hline \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{Space group} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$E_g^d$} & \multicolumn{1}{c}{$E_g$} & \multicolumn{1}{r}{$E_\mathrm{hull}$} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{T/RE} & \multicolumn{1}{c}{\emph{p}-dopability}\\ \hline \endfirsthead \multicolumn{11}{c}% {{\tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{Space group} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$E_g^d$} & \multicolumn{1}{c}{$E_g$} & \multicolumn{1}{r}{$E_\mathrm{hull}$} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{T/RE} & \multicolumn{1}{c}{\emph{p}-dopability}\\ \hline \endhead \hline \multicolumn{11}{r}{{Continued on next page}} \\ \hline \endfoot \hline \hline \endlastfoot \ce{BeS} & $F\overline{4}3m$ & 422 & 6.89 & 4.05 & 0.0 & 0.65 & 0.65 & 0.65 & $\times$ & -\\ \ce{KMgH3} & $Pm\overline{3}m$ & 23737 & 5.76 & 3.58 & 0.0 & 0.75 & 0.75 & 0.75 & \checkmark & -\\ \ce{SiC} & $F\overline{4}3m$ & 8062 & 5.75 & 2.25 & 0.7 & 0.58 & 0.58 & 0.58 & \checkmark & \checkmark~\cite{K.Furukawa-APL86, Y.Kondo-IEEE86, K.Shibahara-JJAP87, R.Weingartner-APL02}\\ \ce{CsPbCl3} & $Amm2$ & 675524 & 5.69 & 5.69 & 0.0 & 0.30 & 0.32 & 0.33 & $\times$ & -\\ \ce{BeSe} & $F\overline{4}3m$ & 1541 & 5.27 & 3.36 & 0.0 & 0.55 & 0.55 & 0.55 & $\times$ & -\\ \ce{BeCN2} & $I\overline{4}2d$ & 15703 & 5.21 & 5.21 & 0.0 & 0.75 & 0.75 & 0.78 & $\times$ & -\\ \ce{RbPbF3} & $Cc$ & 674508 & 5.20 & 4.84 & 0.0 & 0.71 & 0.83 & 0.95 & $\times$ & -\\ \ce{MgS} & $Fm\overline{3}m$ & 1315 & 4.95 & 3.84 & 0.0 & 0.98 & 0.98 & 0.98 & \checkmark & -\\ \ce{RbHgF3} & $Pm\overline{3}m$ & 7482 & 4.90 & 2.11 & 0.0 & 0.93 & 0.93 & 0.93 & $\times$ & -\\ \ce{AgCl} & $Fm\overline{3}m$ & 22922 & 4.81 & 2.28 & 0.0 & 0.83 & 0.83 & 0.83 & \checkmark & -\\ \ce{CsHgF3} & 
$Pm\overline{3}m$ & 561947 & 4.59 & 2.20 & 0.0 & 0.89 & 0.89 & 0.89 & $\times$ & -\\ \ce{Be2C} & $Fm\overline{3}m$ & 1569 & 4.56 & 1.63 & 0.0 & 0.37 & 0.37 & 0.37 & $\times$ & -\\ \ce{SrMgH4} & $Cmc2_1$ & 643009 & 4.52 & 3.78 & 0.0 & 0.84 & 0.90 & 0.95 & \checkmark & -\\ \ce{Li2Se} & $Fm\overline{3}m$ & 2286 & 4.36 & 3.70 & 0.0 & 0.95 & 0.95 & 0.95 & \checkmark & -\\ \ce{BP} & $F\overline{4}3m$ & 1479 & 4.35 & 2.26 & 0.0 & 0.34 & 0.34 & 0.34 & \checkmark & \checkmark\cite{J.B.Varley-CheMat17}\\ \ce{CaS} & $Fm\overline{3}m$ & 1672 & 4.28 & 3.34 & 0.0 & 0.88 & 0.88 & 0.88 & \checkmark & -\\ \ce{LiCa4B3N6} & $Im\overline{3}m$ & 6799 & 4.25 & 3.38 & 0.0 & 0.86 & 0.86 & 0.86 & \checkmark & -\\ \ce{BaSrI4} & $R\overline{3}m$ & 754852 & 4.22 & 4.22 & 21.8 & 0.73 & 0.73 & 0.80 & \checkmark & -\\ \ce{LiSr4B3N6} & $Im\overline{3}m$ & 9723 & 4.18 & 3.22 & 0.0 & 0.89 & 0.89 & 0.89 & \checkmark & -\\ \ce{NaSr4B3N6} & $Im\overline{3}m$ & 10811 & 4.08 & 3.14 & 0.0 & 0.92 & 0.92 & 0.92 & \checkmark & -\\ \ce{K2LiAlH6} & $Fm\overline{3}m$ & 24411 & 4.04 & 3.70 & 9.1 & 0.65 & 0.65 & 0.65 & \checkmark & -\\ \ce{BeTe} & $F\overline{4}3m$ & 252 & 4.04 & 2.45 & 0.0 & 0.42 & 0.42 & 0.42 & $\times$ & -\\ \ce{Ba3SrI8} & $I4/mmm$ & 756235 & 4.02 & 4.02 & 7.5 & 0.70 & 0.81 & 0.81 & \checkmark & -\\ \ce{CaSe} & $Fm\overline{3}m$ & 1415 & 4.01 & 2.95 & 0.0 & 0.77 & 0.77 & 0.77 & \checkmark & -\\ \ce{LiH} & $Fm\overline{3}m$ & 23703 & 3.97 & 3.97 & 0.0 & 0.46 & 0.46 & 0.46 & \checkmark & $\times$\\ \ce{AlP} & $F\overline{4}3m$ & 1550 & 3.90 & 2.50 & 0.0 & 0.56 & 0.56 & 0.56 & \checkmark & $\times$\\ \ce{YbS} & $Fm\overline{3}m$ & 1820 & 3.76 & 2.96 & 0.0 & 0.76 & 0.76 & 0.76 & $\times$ & -\\ \ce{Na2LiAlH6} & $Fm\overline{3}m$ & 644092 & 3.75 & 3.75 & 3.9 & 0.66 & 0.66 & 0.66 & \checkmark & -\\ \ce{SrSe} & $Fm\overline{3}m$ & 2758 & 3.68 & 3.03 & 0.0 & 0.83 & 0.83 & 0.83 & \checkmark & -\\ \ce{BaLiH3} & $Pm\overline{3}m$ & 23818 & 3.62 & 3.26 & 0.0 & 0.36 & 0.36 & 0.36 & \checkmark & $\times$\\ 
\ce{CsPbF3} & $Pm\overline{3}m$ & 5811 & 3.59 & 3.59 & 4.6 & 0.39 & 0.39 & 0.39 & $\times$ & -\\ \ce{Cs3ZnH5} & $I4/mcm$ & 643702 & 3.58 & 3.58 & 0.0 & 0.69 & 0.93 & 0.93 & \checkmark & -\\ \ce{Al2CdS4} & $Fd\overline{3}m$ & 9993 & 3.56 & 3.55 & 20.0 & 0.78 & 0.78 & 0.78 & $\times$ & -\\ \ce{K2LiAlH6} & $R\overline{3}m$ & 23774 & 3.52 & 3.52 & 0.0 & 0.68 & 0.84 & 0.84 & \checkmark & -\\ \ce{BaMgH4} & $Cmcm$ & 643718 & 3.51 & 3.26 & 4.8 & 0.48 & 0.55 & 0.70 & \checkmark & -\\ \ce{CaTe} & $Fm\overline{3}m$ & 1519 & 3.50 & 2.18 & 0.0 & 0.60 & 0.60 & 0.60 & \checkmark & \checkmark\\ \ce{Cs3MgH5} & $P4/ncc$ & 23947 & 3.49 & 3.49 & 0.3 & 0.88 & 0.93 & 0.93 & \checkmark & -\\ \ce{Cs3MgH5} & $I4/mcm$ & 643895 & 3.49 & 3.49 & 0.0 & 0.83 & 0.94 & 0.94 & \checkmark & -\\ \ce{YbSe} & $Fm\overline{3}m$ & 286 & 3.48 & 2.43 & 0.0 & 0.67 & 0.67 & 0.67 & $\times$ & -\\ \ce{ZnS} & $F\overline{4}3m$ & 10695 & 3.46 & 3.46 & 0.0 & 0.81 & 0.81 & 0.81 & \checkmark & \checkmark\cite{R.W.Robinson-AdvEMat16}\\ \ce{TaCu3S4} & $P\overline{4}3m$ & 10748 & 3.46 & 2.95 & 0.0 & 0.98 & 0.98 & 0.98 & \checkmark & -\\ \ce{Al2ZnS4} & $Fd\overline{3}m$ & 4842 & 3.46 & 3.43 & 0.0 & 0.66 & 0.66 & 0.66 & \checkmark & $\times$\\ \ce{Li2ThN2} & $P\overline{3}m1$ & 27487 & 3.46 & 3.33 & 0.0 & 0.85 & 0.95 & 0.95 & $\times$ & -\\ \ce{Mg2B24C} & $P\overline{4}n2$ & 568556 & 3.42 & 3.41 & 0.0 & 0.77 & 0.93 & 0.93 & \checkmark & -\\ \ce{Li2GePbS4} & $I\overline{4}2m$ & 19896 & 3.33 & 3.20 & 0.0 & 0.61 & 0.61 & 0.98 & $\times$ & -\\ \ce{Cs3H5Pd} & $P4/mbm$ & 643006 & 3.32 & 3.09 & 0.0 & 0.79 & 0.83 & 0.83 & \checkmark & -\\ \ce{SrTe} & $Fm\overline{3}m$ & 1958 & 3.24 & 2.39 & 0.0 & 0.67 & 0.67 & 0.67 & \checkmark & $\times$\\ \ce{MgTe} & $F\overline{4}3m$ & 13033 & 3.24 & 3.24 & 0.9 & 0.95 & 0.95 & 0.95 & \checkmark & -\\ \ce{CsTaN2} & $I\overline{4}2d$ & 34293 & 3.22 & 3.22 & 0.0 & 0.71 & 0.71 & 0.92 & \checkmark & -\\ \ce{Cs3MnH5} & $I4/mcm$ & 643706 & 3.21 & 3.18 & 0.0 & 0.82 & 0.96 & 0.96 & \checkmark & -\\ 
\ce{LiMgP} & $F\overline{4}3m$ & 36111 & 3.18 & 2.00 & 0.0 & 0.65 & 0.65 & 0.65 & \checkmark & -\\ \ce{BaS} & $Fm\overline{3}m$ & 1500 & 3.17 & 3.02 & 0.0 & 0.85 & 0.85 & 0.85 & \checkmark & -\\ \ce{LiAlTe2} & $I\overline{4}2d$ & 4586 & 3.11 & 3.11 & 0.0 & 0.52 & 0.83 & 0.83 & \checkmark & -\\ \ce{YbTe} & $Fm\overline{3}m$ & 1779 & 3.09 & 1.76 & 0.0 & 0.54 & 0.54 & 0.54 & $\times$ & -\\ \ce{Li3Sb} & $Fm\overline{3}m$ & 2074 & 3.06 & 1.15 & 0.0 & 0.24 & 0.24 & 0.24 & \checkmark & \checkmark\\ \ce{SrAl2Te4} & $I422$ & 37091 & 3.06 & 2.66 & 0.0 & 0.42 & 0.79 & 0.80 & \checkmark & -\\ \ce{TaCu3Te4} & $P\overline{4}3m$ & 9295 & 3.05 & 2.50 & 0.0 & 0.63 & 0.63 & 0.63 & \checkmark & -\\ \ce{TaCu3Se4} & $P\overline{4}3m$ & 4081 & 2.98 & 2.43 & 0.0 & 0.82 & 0.82 & 0.82 & \checkmark & -\\ \ce{BaSe} & $Fm\overline{3}m$ & 1253 & 2.95 & 2.59 & 0.0 & 0.76 & 0.76 & 0.76 & \checkmark & -\\ \ce{KAg2PS4} & $I\overline{4}2m$ & 12532 & 2.87 & 2.53 & 0.0 & 0.67 & 0.82 & 0.82 & \checkmark & -\\ \ce{AlAs} & $F\overline{4}3m$ & 2172 & 2.84 & 2.12 & 0.0 & 0.50 & 0.50 & 0.50 & $\times$ & -\\ \ce{LiErS2} & $I4_1/amd$ & 35591 & 2.80 & 2.80 & 10.4 & 0.62 & 0.99 & 0.99 & \checkmark & -\\ \ce{GaN} & $F\overline{4}3m$ & 830 & 2.80 & 2.80 & 5.2 & 0.94 & 0.94 & 0.94 & \checkmark & -\\ \end{longtable*} \end{center} Among the materials at the top of the list, \ce{SiC} is a well-known wide band gap semiconductor. This material exhibits polymorphism (e.g. cubic: 3C, rhombohedral: 15R, hexagonal: 6H, 4H, 2H)~\cite{W.J.Choyke97} and can be doped both \emph{n}- and \emph{p}-type~\cite{K.Furukawa-APL86, Y.Kondo-IEEE86, K.Shibahara-JJAP87, R.Weingartner-APL02}. A high hole mobility of 40~cm$^2$/Vs was obtained for the cubic phase~\cite{H.Morkoc-JAP94}. The indirect optical absorption of the cubic phase is very weak at room temperature, with a coefficient of $10^3$~cm$^{-1}$ at 3.1~eV~\cite{H.R.Philipp-PR58}. We suggest that \ce{SiC} can be considered a good \emph{p}-type TCM. 
The main disadvantage of this compound is the difficulty of hole doping. Most known impurities such as Al, B, Ga and Sc create deep doping levels, leading to rather low hole concentrations, typically measured to be lower than $10^{18}$~cm$^{-3}$~\cite{H.Morkoc-JAP94}, a level suitable for transistor applications. Next comes a series of beryllium-based compounds (\ce{BeS}, \ce{BeSe}, \ce{BeCN2}, \ce{Be2C} and \ce{BeTe}). While their computed performance in terms of band gap and hole effective masses is very attractive, the toxicity of beryllium lowers their interest for technological applications. Likewise, the lead-based halide perovskites (\ce{CsPbCl3}, \ce{RbPbF3}, and \ce{CsPbF3}) and \ce{Li2GePbS4} also present toxicity issues. It is interesting, however, that these halide perovskites are of great interest as solar absorbers when made in chemistries showing smaller gaps~\cite{M.Liu13, M.A.Green14}. Toxicity is also an issue with the series of arsenides, e.g. AlAs. These arsenides are also closely analogous to the phosphides such as BP and AlP that were identified in a previous work~\cite{J.B.Varley-CheMat17}. Some of the materials in the list contain rare-earth elements, which might present cost issues. We consider that further assessment of all these materials in terms of dopability and mobility is not a priority. Therefore, in the penultimate column of \tabref{tableI}, a checkmark indicates the absence of toxic or rare-earth elements. Continuing to explore the list of materials, many hydrides appear to be of interest, with low hole effective masses and large direct band gaps for \ce{LiH}, \ce{BaLiH3} and \ce{CsH}. Unfortunately, our subsequent defect computations indicate that these hydrides have low-lying hole-killing defects, especially the hydrogen vacancy, making their efficient \emph{p}-type doping unlikely (see the Supplemental Material~\cite{supplem}). 
A few sulfides are also identified by our screening: ZnS and \ce{ZnAl2S4}. ZnS has indeed recently been studied as a high-performance \emph{p}-type TCM~\cite{R.W.Robinson-AdvEMat16}. \ce{ZnAl2S4}, on the other hand, is less studied, but our defect computation indicates that it is very unlikely to be \emph{p}-type dopable because Zn-Al anti-site defects form easily and act as hole-killers. \ce{Al2CdS4} is likely to present the same issues. The defect formation energies computed by DFT for \ce{ZnAl2S4} are given in the Supplemental Material~\cite{supplem}. Among the different materials in the table, two promising candidates, \ce{Li3Sb} and \ce{CaTe}, also attracted our attention. The rest of the paper is dedicated to the further computations that were performed for these compounds. The conventional cells of CaTe and \ce{Li3Sb} are shown in \figref{joint_figs} (a) and (e). Ca atoms in CaTe are surrounded by six Te atoms forming an octahedral local environment. In \ce{Li3Sb}, the cation fills tetrahedral and octahedral sites. Both CaTe and \ce{Li3Sb} are cubic phases with high symmetry, which explains their isotropic hole effective masses ($m_1 = m_2 = m_3$). CaTe and \ce{Li3Sb} exhibit very low hole effective masses, with eigenvalues of 0.60 and 0.25 $m_o$ ($m_o$: mass of the free electron), respectively. It is worth noting that the lowest hole effective mass found so far in a computational database for \emph{p}-type conducting oxides, that of \ce{K2Sn2O3}~\cite{G.Hautier-NatCom13, V.-A.Ha-JMCC17}, is 0.27 $m_o$. The promising non-oxide \emph{p}-type TCM reported recently~\cite{J.B.Varley-CheMat17}, BP, shows an effective mass around 0.35 $m_o$. Current Cu-based \emph{p}-type TCOs show effective masses around 1.5 to 2 $m_o$~\cite{G.Hautier-NatCom13}. The direct gaps of CaTe and \ce{Li3Sb} calculated using the HSE hybrid functional are 3.5 and 3.06~eV, respectively. In addition to hybrid-functional computations, we performed $G_0W_0$ calculations to confirm the value of these band gaps. 
\begin{figure*}[!htb] \begin{center} \includegraphics{figures/fig1.pdf} \end{center} \vspace{-15pt} \caption{From the left to the right, the conventional cells, band structures, projected density of states (DOS) and relaxation time and scattering rate. Sub-figures (a)-(d) and (e)-(h) show data of CaTe and \ce{Li3Sb}, respectively. The conventional cells present local environments around the cations Ca (blue) and Li (green). (b) and (f) plot DFT band structures with a rigid shift of the conduction bands (scissor operator) to fit the fundamental gaps computed by $G_0W_0$. (d) and (h) show the relaxation time $\tau$ (in femto-seconds) and scattering rate $1/\tau$ (in 1/second) as functions of energy at a temperature of 300~K. The projected DOS in (c) and (g) are computed by DFT. The band gaps of the DOS and relaxation time are also shifted to fit the $G_0W_0$ values.} \label{joint_figs} \end{figure*} \figref{joint_figs} (b) shows the DFT band structure with a scissor shift to fit the $G_0W_0$ fundamental gap (the $G_0W_0$ band structure of CaTe is shown in Fig. S6 of the Supplemental Material~\cite{supplem}). The $G_0W_0$ fundamental gap ($\Gamma-X$) is 2.95~eV, while the direct gap is located at the $X$-point and has a value of 4.14~eV. The $G_0W_0$ direct gap is consistent with the optical gap of 4.1~eV measured experimentally~\cite{G.A.Saum-PR59}. We expect such a large band gap to lead to transparency in the visible region. \ce{Li3Sb} is also an indirect semiconductor. In the same way, the DFT electronic band structure with a scissor shift is presented in \figref{joint_figs} (f) (see Fig. S7 of the Supplemental Material~\cite{supplem} for the $G_0W_0$ band structure). The $G_0W_0$ fundamental and direct gaps are 1.37 and 3.17~eV, respectively, with the direct gap located at the $\Gamma$-point. 
This is consistent with an experimental value of 3.1~eV measured recently~\cite{T.J.Richardson-SolStaIoni03} but much lower than another experimental value of 3.9~eV reported earlier~\cite{R.Gobrecht-PSS66}. The indirect band gap is narrow and will lead to some absorption in the visible range. However, the indirect nature of the absorption makes it phonon-assisted, which is expected to lead to weak absorption. To quantify this absorption, we computed the optical absorption including phonon-assisted processes using EPW~\cite{J.Noffsinger-CPC10,S.Ponce-CPC16}. Details about the computational method can be found in Ref.~\onlinecite{J.Noffsinger-PRL12}. The result in \figref{Li3Sb_inda} shows quite weak absorption in the visible range, with an average intensity of about $5\times10^3$~cm$^{-1}$, which means that a 100-nm film still allows more than 70~\% of the visible light energy to get through. This is suitable for applications and devices using the thin-film form of \ce{Li3Sb}. The weak indirect optical absorption computed here is similar to that of established \emph{p}-type TCOs such as SnO~\cite{N.F.Quackenbush13} or recently proposed \emph{p}-type TCMs such as BP~\cite{J.B.Varley-CheMat17}. \begin{figure}[!htb] \begin{center} \includegraphics{figures/fig2.pdf} \end{center} \vspace{-15pt} \caption{The indirect optical absorption of \ce{Li3Sb} due to phonon-assisted transitions.} \label{Li3Sb_inda} \end{figure} CaTe and \ce{Li3Sb} show very low hole effective masses (0.60 and 0.24 $m_o$ within DFT). Indeed, both materials have a threefold degeneracy at the VBM ($\Gamma$-point); the transport of holes therefore occurs in three bands, some lighter and some heavier. Our definition of the effective mass takes into account the competition among these three bands and gives an average value that is representative of the transport happening in the different bands. More details about the formulas and calculation techniques can be found in Refs.~\cite{G.Hautier-CheMat14, F.Ricci-SciData17}. 
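The film-transmission estimate quoted above follows from the Beer-Lambert law. A minimal sketch, using the average visible-range absorption coefficient of $5\times10^3$~cm$^{-1}$ from \figref{Li3Sb_inda} and neglecting reflection losses at the interfaces (an assumption of this back-of-the-envelope check, not of the main computation):

```python
import math

def transmittance(alpha_cm, thickness_cm):
    """Beer-Lambert transmission exp(-alpha*d) through a film,
    neglecting reflection losses at the interfaces."""
    return math.exp(-alpha_cm * thickness_cm)

# Average visible-range absorption coefficient ~5e3 cm^-1,
# film thickness 100 nm = 1e-5 cm:
T = transmittance(5e3, 100e-7)
print(f"{T:.2f}")  # 0.95, i.e. well above the 70% quoted in the text
```

The transmitted fraction of about 95~\% is consistent with (and stronger than) the "more than 70~\%" statement above.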
This should be kept in mind when comparing our results to other studies, which sometimes only focus on one band when several competing bands are present~\cite{R.K.M.Raghupathy-JMCC17,K.Kuhar-ACSEL18}. \figref{joint_figs} shows the projected density of states (DOS) for (c) CaTe and (g) \ce{Li3Sb}. For both compounds, the top of the valence band has mainly anionic \emph{p}-orbital character (\ce{Sb^{3-}} or \ce{Te^{2-}}) with some mixing from the cations. The effective masses are directly related to the overlap and energy difference between orbitals~\cite{G.Hautier-CheMat14}. The lower hole effective masses obtained in these non-oxide compounds can be associated with both a better alignment between the anionic and cationic states than in oxides and larger anionic \emph{p}-orbitals (5\emph{p} and 4\emph{p} versus 2\emph{p} for oxides). The effective mass is an important factor driving the carrier mobility, but not the only one. The scattering rate, or relaxation time, also affects the mobility. There are several mechanisms which can influence the relaxation time, as mentioned in section~\ref{methods}. Phonon scattering is the most intrinsic factor, as it is not affected by purity and microstructure. The evaluation of the relaxation time from phonon scattering can be performed \emph{ab initio} using electron-phonon coupling matrices obtained from DFPT phonon computations. \figref{phband} shows the phonon band structures (fat bands) and projected DOS of phonons for (a) CaTe and (b) \ce{Li3Sb}. The fat bands qualitatively represent characteristics of the vibrational modes, including which types of atoms participate in the phonon modes at a given energy, their direction, and their amplitude. The absence of modes with negative (purely imaginary) frequencies shows that these materials are dynamically stable at 0~K. 
The lighter atoms (Ca and Li) mainly contribute to the optical modes at high frequencies (3 and 9 modes in CaTe and \ce{Li3Sb}, respectively) while the heavier elements (Te and Sb) play an important role in the three acoustic modes at low frequencies. \begin{figure*}[!htb] \begin{center} \includegraphics{figures/fig3.pdf} \end{center} \vspace{-15pt} \caption{Phonon band structures with fat bands representing the displacements of the atomic vibrations. The width of the fat bands gives a qualitative understanding of the vibrational modes, such as the atomic types involved in the vibrations at a given energy, their direction of oscillation, and their amplitude (related to the displacement). The projected DOS of phonons on each type of atom are correspondingly shown next to the band structures. (a) CaTe and (b) \ce{Li3Sb}.} \label{phband} \end{figure*} Using the DFPT phonon computations and EPW, we can extract electron-phonon coupling matrices and the relaxation time $\tau_{n\boldsymbol{k}}$ on a dense \emph{\textbf{k}}-point grid (see Eq.~S1 of the Supplemental Material~\cite{supplem}). \figref{joint_figs} (d) and (h) show the scattering rate and lifetime (inverse of the scattering rate) as a function of energy at 300~K for CaTe and \ce{Li3Sb}, respectively (see Eq.~S2 of the Supplemental Material~\cite{supplem}). As commonly observed, the scattering rate is proportional to the DOS: a higher DOS offers more states available for the scattered electrons. At a hole doping concentration of $10^{18}$~cm$^{-3}$, the Fermi levels are 90.5 and 120.8~meV above the VBMs for CaTe and \ce{Li3Sb}, respectively. For the highest doping of $10^{21}$~cm$^{-3}$, the Fermi levels lie 264.5 and 168.5~meV below the VBMs for CaTe and \ce{Li3Sb}, respectively. The transport of holes therefore takes place around the VBMs ($\Gamma$-points). 
The DOS at the $\Gamma$-point of \ce{Li3Sb} is larger than that of CaTe, but the scattering rates of the two compounds are fairly similar (see \figref{joint_figs} (d) and (h)), indicating that a slightly weaker electron-phonon coupling is present in \ce{Li3Sb}. We computed scattering rates at temperatures of 300 and 400~K. \figref{mobility} shows the hole mobilities as a function of hole concentration at 300 and 400~K for both CaTe and \ce{Li3Sb}. The mobilities decrease with hole concentration: as the Fermi level shifts deeper below the VBM, the DOS increases, and so does the scattering rate (see \figref{joint_figs} (d) and (h)). CaTe shows hole mobilities around 20~cm$^2$/Vs, which is comparable with the mobility of \ce{Ba2BiTaO6}, a recently reported \emph{p}-type TCO~\cite{A.Bhatia-CheMat16}, and larger than the mobilities of traditional \emph{p}-type TCOs such as \ce{CuAlO2}~\cite{J.Tate-PRB09} and SnO~\cite{Y.Ogo-APL08}. \ce{Li3Sb} exhibits an exceptional hole mobility of up to about 70~cm$^2$/Vs at room temperature. This value nearly reaches the electron mobilities of the best current \emph{n}-type TCOs such as \ce{SnO2}, ZnO, \ce{In2O3} and \ce{Ga2O3}, which are around 100~cm$^2$/Vs (see Table SII of the Supplemental Material~\cite{supplem}). It is worth noting that mobilities measured experimentally include other scattering processes; our computed mobilities, which only take phonon scattering into account, can be seen as an upper bound. \begin{figure}[!htb] \begin{center} \includegraphics{figures/fig4.pdf} \end{center} \vspace{-15pt} \caption{Hole mobilities as a function of hole concentration for CaTe and \ce{Li3Sb} at temperatures of 300 and 400~K.} \label{mobility} \end{figure} Our final assessment focuses on the dopability of CaTe and \ce{Li3Sb}. While we have assumed so far that the Fermi level of these two materials could be tuned to generate hole carriers, it remains to be seen if the defect chemistry is favorable to hole generation. 
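The order of magnitude of this mobility can also be recovered from a simple Drude estimate, $\mu = e\tau/m^*$. In the sketch below, the relaxation time of 10~fs is an illustrative assumption consistent with the scale of \figref{joint_figs} (h), not a computed quantity:

```python
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # free-electron mass, kg

def drude_mobility_cm2(tau_s, m_eff_rel):
    """Drude mobility mu = e*tau/m* converted from m^2/Vs to cm^2/Vs."""
    mu_si = E_CHARGE * tau_s / (m_eff_rel * M_ELECTRON)
    return mu_si * 1e4

# Assumed tau ~ 10 fs and the Li3Sb hole effective mass of 0.25 m_o:
print(round(drude_mobility_cm2(10e-15, 0.25)))  # 70 (cm^2/Vs)
```

The estimate reproduces the $\sim$70~cm$^2$/Vs scale reported above; it is only a consistency check, since the full calculation solves the BTE with state-dependent $\tau_{n\boldsymbol{k}}$.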
To answer this question, we performed defect calculations using HSE, following the procedure described in section \ref{methods}. \figref{defects} (a) presents the formation energy of both intrinsic and extrinsic defects in CaTe. The chemical potentials are chosen in the conditions which lead to the most favorable \emph{p}-type doping tendency for this material. The chemical potentials corresponding to different conditions in the phase diagrams are available in Fig. S8 of the Supplemental Material~\cite{supplem}. Focusing first on intrinsic defects only (vacancies, anti-site defects and interstitial atoms), their formation energies are plotted in \figref{defects} (a) with chemical potentials extracted in the Te-rich condition of the phase diagram. Intrinsically, CaTe is unlikely to present \emph{p}-type doping, as no intrinsic defect acts as a low-lying acceptor. The Ca vacancy will be in competition with the hole-killing Te vacancy, leading to a Fermi level far from the valence band. However, the Te vacancy is not low enough in energy to prevent extrinsic \emph{p}-type doping. When extrinsic defects with Na, K and Li substituting onto Ca-sites are considered, we find that all these substitutions offer shallow acceptors that are very competitive in energy with the Te vacancy. The substitution of Ca by Na is the lowest in energy. Extrinsic doping by Na might therefore lead to \emph{p}-type doping in CaTe. The plots of the formation energies of K$_\textrm{Ca}$, Na$_\textrm{Ca}$ and Li$_\textrm{Ca}$ in \figref{defects} (a) were obtained with chemical potentials extracted from the \ce{KTe-CaTe-K2Te3}, \ce{NaTe3-CaTe-Na2Te} and \ce{Li2Te-CaTe-Te} facets of the three-element phase diagrams (see Fig. S8 of the Supplemental Material~\cite{supplem}). 
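For reference, defect formation energies such as those plotted in \figref{defects} are commonly obtained from the standard supercell formalism; the expression below is our summary of that common formalism (with our own notation), not a formula reproduced from this work:
\begin{equation}
E_f[D^q] = E_{\mathrm{tot}}[D^q] - E_{\mathrm{tot}}[\mathrm{bulk}] - \sum_i n_i \mu_i + q\left(E_{\mathrm{VBM}} + E_F\right) + E_{\mathrm{corr}},
\end{equation}
where $E_{\mathrm{tot}}[D^q]$ and $E_{\mathrm{tot}}[\mathrm{bulk}]$ are the total energies of the defective and pristine supercells, $n_i$ is the number of atoms of species $i$ added ($n_i>0$) or removed ($n_i<0$) to create the defect $D$ in charge state $q$, $\mu_i$ is the corresponding chemical potential, $E_F$ is the Fermi level referenced to the valence-band maximum $E_{\mathrm{VBM}}$, and $E_{\mathrm{corr}}$ is a finite-size electrostatic correction. The dependence on the $\mu_i$ is what makes the choice of phase-diagram facet discussed above important.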
For \ce{Li3Sb}, \figref{defects} (b) shows an intrinsic tendency for hole doping, with the lithium vacancy (Vac$_\textrm{Li}$) acting as a shallow acceptor with a very low formation energy and no competing hole-killer. This plot is produced with chemical potentials computed in the \ce{Li3Sb-Li2Sb} facet of the phase diagram (see Fig. S9 of the Supplemental Material~\cite{supplem}). \begin{figure*}[!htb] \begin{center} \includegraphics{figures/fig5.pdf} \end{center} \vspace{-15pt} \caption{The defect formation energy as a function of the Fermi level for intrinsic and extrinsic defects in (a) \ce{CaTe} and (b) \ce{Li3Sb}. For \ce{CaTe}, the intrinsic defects include vacancies (Vac$_\textrm{Ca}$ and Vac$_\textrm{Te}$), anti-sites (Te$_\textrm{Ca}$ and Ca$_\textrm{Te}$) and interstitial atoms inserted into the tetrahedral hollows formed by 4 Te atoms (Ca$_\textrm{i(tet,Te4)}$ and Te$_\textrm{i(tet,Te4)}$), while Na, K and Li are used as the extrinsic defects substituting Ca atoms (Na$_\textrm{Ca}$, K$_\textrm{Ca}$ and Li$_\textrm{Ca}$). For \ce{Li3Sb}, the intrinsic defects include vacancies (Vac$_\textrm{Li-1}$, Vac$_\textrm{Li-2}$ and Vac$_\textrm{Sb}$), anti-sites (Li$_\textrm{Sb}$, Sb$_\textrm{Li-1}$ and Sb$_\textrm{Li-2}$) and interstitial atoms inserted into the octahedral hollows formed by Sb and Li atoms (Li$_\textrm{i(oct, Sb2Li4)}$, Li$_\textrm{i(oct, SbLi5)}$, Sb$_\textrm{i(oct, Sb2Li4)}$ and Sb$_\textrm{i(oct, SbLi5)}$). In both cases, the VBM is set to zero.} \label{defects} \end{figure*} \section{Discussion} \label{discus} The discovery of the quite unanticipated \ce{Li3Sb}, with a potential for very high hole mobility, demonstrates the interest of our HT screening strategy. \ce{Li3Sb} is an unexpected compound for TCM applications and would have been difficult to identify intuitively. 
Among other \ce{A3B} compounds (A = Li, Na, K and Rb; B = N, P, As and Sb), \ce{Li3Sb} is exceptional because of its very low hole effective masses (see Table SIII in the Supplemental Material~\cite{supplem}). We suggest that the energy difference between the A-$ns^1$ ($n = 2, 3, 4, 5$ for Li, Na, K and Rb, respectively) and B-$np^3$ ($n = 2, 3, 4, 5$ for N, P, As and Sb, respectively) valence orbitals might play an important role here. In fact, the energy difference between Li-$2s^1$ and Sb-$5p^3$ is about 1.95~eV~\cite{pseudodojo}, the smallest value among all A-$ns^1$/B-$np^3$ pairs. This small orbital-energy difference leads to strong $s/p$ (anti-)bonding, which results in a low hole effective mass. While we only focus on CaTe and \ce{Li3Sb}, as they are likely the most promising candidates, there are other interesting materials with hole effective masses from 0.6 to 1.0 $m_o$ and high direct gaps (see \tabref{tableI}), such as CaS, SrSe, SrTe, \ce{LiCa4B3N6}, \ce{LiSr4B3N6}, \ce{NaSr4B3N6}... Defect calculations for these materials have not been performed in this work, and we therefore cannot assess their \emph{p}-type doping tendency. By going beyond oxides, we identified compounds with very high hole mobility. However, several other issues also arise and need to be considered. The processing of antimonides or tellurides might be more difficult than that of oxides. They are, however, very common chemistries in other applications such as thermoelectrics, with several exemplary compounds such as PbTe, \ce{Bi2Te3}~\cite{E.Macia-Barber15}, or more recently \ce{Mg3Sb2}~\cite{T.Kajikawa2003, C.L.Condron2006, J.Zhang2017}. The band gaps in non-oxide compounds are narrower, which on average lowers their transparency in the visible range. As we already discussed~\cite{J.B.Varley-CheMat17}, this can be overcome by exploiting indirect gaps and weak phonon-assisted optical transitions. 
Lower band gaps are, however, useful for \emph{p}-type dopability, as lower-band-gap materials tend to be easier to dope~\cite{A.Zunger03}. We note that the defect chemistry of non-oxides can be different from that of traditional TCOs. For oxides, cation-anion anti-site defects (replacement of anions on cations' sites and vice versa) are unlikely to be favorable energetically because of the large electronegativity difference between cations and anions. In non-oxide compounds, $e.g.$ CaTe, cation-anion anti-sites are more likely to be present, leading to potentially different hole-killing defects. While the anion (oxygen) vacancy is the most common hole-killer in oxides, our non-oxide materials can present cation-anion anti-site defects lower in energy than the anion vacancy, as in CaTe. We also find that the hydride chemistry, while offering attractive electronic structures, presents dopability issues (i.e., a low-lying hydrogen vacancy acting as a hole killer) preventing further consideration of these materials as \emph{p}-type TCMs. \section{Conclusions} Using a large database and appropriate filtering strategies, we report on a high-throughput search for non-oxide \emph{p}-type TCMs. We identified two materials to be of interest: CaTe and \ce{Li3Sb}. We performed an extensive follow-up computational investigation of these candidates, evaluating their band structures using beyond-DFT techniques, their transport and phonon-assisted optical properties using electron-phonon computations, as well as their defect chemistry. Both CaTe and \ce{Li3Sb} present very attractive properties for \emph{p}-type TCM applications. \ce{Li3Sb} shows a very high hole mobility of around 70~cm$^2$/Vs, which is close to the electron mobility in the best \emph{n}-type TCMs. Our work motivates further experimental investigation of these two materials for TCM applications. \section{Acknowledgments} V.-A.H. was funded through a grant from the FRIA. G.-M.R. 
is grateful to the F.R.S.-FNRS for financial support. G.H., G.-M.R., G.Y. and F.R. acknowledge the F.R.S.-FNRS project HTBaSE (contract N$^\circ$ PDR-T.1071.15) for financial support. We acknowledge access to various computational resources: the Tier-1 supercomputer of the F\'{e}d\'{e}ration Wallonie-Bruxelles funded by the Walloon Region (grant agreement N$^\circ$ 1117545), and all the facilities provided by the Universit\'{e} catholique de Louvain (CISM/UCLouvain) and by the Consortium des \'{E}quipements de Calcul Intensif en F\'{e}d\'{e}ration Wallonie Bruxelles (C\'{E}CI). The authors thank Dr. Samuel Ponc\'{e} and Professor Emmanouil Kioupakis for helpful discussions on the technical aspects of the electron-phonon computations. \bibliographystyle{apsrev4-1} \section{COMPUTATIONAL DETAILS} All details of the electron-phonon interaction computations can be found in Refs.~\onlinecite{S.Ponce-Comput.Phys.Commun16,F.Giustino-RMP17}. Here, we restate the formulas for the scattering rate (inverse of the relaxation time) and the electron-phonon coupling strength for a specific phonon mode and phonon wave-vector. 
The scattering rate at a given temperature is \begin{myequation} \begin{split} \frac{1}{\tau_{n\boldsymbol{k}}} = & \frac{2\pi}{\hbar} \sum_{m\nu} \int \frac{d\boldsymbol{q}}{\Omega_{BZ}} \left| g_{nm\nu}(\boldsymbol{k}, \boldsymbol{q}) \right|^2 \\ & \times \left[ (1-f_{m\boldsymbol{k}+\boldsymbol{q}}+n_{\boldsymbol{q}\nu})\delta(\epsilon_{n\boldsymbol{k}}-\hbar\omega_{\boldsymbol{q}\nu}-\epsilon_{m\boldsymbol{k}+\boldsymbol{q}}) + (f_{m\boldsymbol{k}+\boldsymbol{q}} + n_{\boldsymbol{q}\nu})\delta(\epsilon_{n\boldsymbol{k}}+\hbar\omega_{\boldsymbol{q}\nu}-\epsilon_{m\boldsymbol{k}+\boldsymbol{q}}) \right], \end{split} \label{scatt_rate} \end{myequation} where $\tau_{n\boldsymbol{k}}$ is the relaxation time of a carrier in band $n$ at wave-vector $\boldsymbol{k}$, $\Omega_{BZ}$ is the volume of the Brillouin zone (BZ), $g_{nm\nu}(\boldsymbol{k}, \boldsymbol{q})$ is the first-order electron-phonon matrix element from the initial Kohn-Sham state $n\boldsymbol{k}$ (eigenvalue $\epsilon_{n\boldsymbol{k}}$) to the final state $m\boldsymbol{k}+\boldsymbol{q}$ (eigenvalue $\epsilon_{m\boldsymbol{k}+\boldsymbol{q}}$) associated with a phonon mode $\nu$ and wave-vector $\boldsymbol{q}$, $f_{m\boldsymbol{k}+\boldsymbol{q}} = \left[ \exp((\epsilon_{m\boldsymbol{k}+\boldsymbol{q}}-\epsilon_F)/k_BT) + 1 \right]^{-1}$ is the Fermi-Dirac distribution of the carriers at Fermi energy $\epsilon_F$ and temperature $T$, and $n_{\boldsymbol{q}\nu}=\left[ \exp(\hbar\omega_{\boldsymbol{q}\nu}/k_BT) - 1 \right]^{-1}$ is the Bose-Einstein distribution of the phonons with frequencies $\omega_{\boldsymbol{q}\nu}$ at temperature $T$. 
The scattering rate (inverse of the relaxation time) as a function of energy is computed by averaging over all states as \begin{myequation} \frac{1}{\tau(\epsilon)} = \frac{1}{N(\epsilon)} \sum_{n\boldsymbol{k}} \frac{1}{\tau_{n\boldsymbol{k}}} \delta(\epsilon_{n\boldsymbol{k}}-\epsilon), \label{rlx_energy} \end{myequation} where $N(\epsilon) = \sum_{n\boldsymbol{k}} \delta(\epsilon_{n\boldsymbol{k}}-\epsilon)$ is the density of states. The electron-phonon coupling strength of a specific phonon mode $\nu$ and wave-vector $\boldsymbol{q}$ is \begin{myequation} \lambda_{\boldsymbol{q}\nu} = \frac{1}{N(\epsilon_F)\hbar\omega_{\boldsymbol{q}\nu}} \sum_{nm} \int \frac{d\boldsymbol{k}}{\Omega_{BZ}} \left| g_{nm\nu}(\boldsymbol{k}, \boldsymbol{q}) \right|^2 \delta(\epsilon_{n\boldsymbol{k}}-\epsilon_F) \delta(\epsilon_{m\boldsymbol{k}+\boldsymbol{q}}-\epsilon_F), \label{elph_cp} \end{myequation} where $N(\epsilon_F)$ is the density of states at the Fermi level. We kept our materials intrinsic by setting the Fermi level at mid-gap; in the very general approach of EPW, all phonon modes and both inter-band and intra-band scattering mechanisms are taken into account in the computation of the scattering rates. The hole mobilities $\mu_h$ can then be calculated as \begin{myequation} \mu_h=\sigma_h/n_he, \end{myequation} where $\sigma_h$, $n_h$ and $e$ are the conductivity tensor, the density of holes and the elementary charge, respectively. In order to calculate the conductivity tensor, we solve the semi-classical Boltzmann transport equation (BTE). 
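In practical calculations, the Dirac delta in Eq.~\eqref{rlx_energy} is replaced by a finite broadening; a minimal sketch of the state average with a Gaussian smearing (hypothetical array inputs, not the actual implementation used in this work):

```python
import math

def avg_scattering_rate(eps, energies_nk, inv_tau_nk, sigma=0.01):
    """Average 1/tau at energy eps, replacing the Dirac delta in the
    state average by a Gaussian of width sigma (same energy units).
    The unnormalized weight sum plays the role of the DOS N(eps)."""
    weights = [math.exp(-((e - eps) / sigma) ** 2 / 2.0)
               for e in energies_nk]
    norm = sum(weights)
    if norm == 0.0:
        return 0.0  # no states near eps
    return sum(w * r for w, r in zip(weights, inv_tau_nk)) / norm

# If every state carries the same rate, the average returns that rate:
energies = [0.00, 0.01, 0.02, 0.03, 0.04]   # eV, hypothetical
rates = [2.0e14] * 5                        # 1/s, hypothetical
print(f"{avg_scattering_rate(0.02, energies, rates):.3e}")  # 2.000e+14
```

The same broadened average applies to $N(\epsilon)$ itself, which is why the scattering rate tracks the DOS as discussed in the main text.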
The conductivity tensor is given by \begin{myequation} \sigma_{\alpha\beta}(n,\textbf{k}) = e^2\tau_{n\textbf{k}}v_{\alpha}(n,\textbf{k})v_{\beta}(n,\textbf{k}), \label{conduct} \end{myequation} where $e$ is the elementary charge and $v_{\alpha}(n,\textbf{k})$ is the group velocity, defined through the first derivative of the band-energy $\epsilon_{n,\textbf{k}}$ with respect to the wave-vector $\textbf{k}$: \begin{myequation} v_{\alpha}(n,\textbf{k}) = \frac{1}{\hbar}\frac{\partial \epsilon_{n,\textbf{k}}}{\partial k_{\alpha}}. \label{groupvl} \end{myequation} The conductivity tensor can be expressed as a function of energy by multiplying \eqref{conduct} by a Dirac delta and then summing over all bands and $\textbf{k}$ points in the Brillouin zone as \begin{myequation} \sigma_{\alpha\beta}(\epsilon) = \frac{1}{N} \sum_{n,\textbf{k}} \sigma_{\alpha\beta}(n,\textbf{k})\delta(\epsilon-\epsilon_{n,\textbf{k}}), \label{conduct_e} \end{myequation} where $N$ is the number of $\textbf{k}$ points. Finally, the conductivity tensor as a function of temperature $T$ and Fermi level $\mu$ (electronic chemical potential) is computed through $\sigma_{\alpha\beta}(\epsilon)$ as \begin{myequation} \sigma_{\alpha\beta}(T;\mu) = \frac{1}{\Omega} \int \sigma_{\alpha\beta}(\epsilon) \left[ - \frac{\partial f_{\mu}(T;\epsilon)}{\partial \epsilon} \right] d\epsilon, \label{conduct_T} \end{myequation} where $f_{\mu}$ is the Fermi-Dirac distribution and $\Omega$ is the volume of the unit cell. The Fermi level $\mu$ is set according to the given doping carrier concentration. Here, we used the BoltzTrap package~\cite{G.K.H.Madsen-CPC06} to solve the BTE. In practice, BoltzTrap interpolates the DFT band-energies (computed on a finite number of \textbf{k}-points) using star functions (see section 2 of Ref.~\onlinecite{G.K.H.Madsen-CPC06}). 
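The thermal average in Eq.~\eqref{conduct_T} weights $\sigma(\epsilon)$ by $-\partial f_{\mu}/\partial\epsilon$, which integrates to unity over a sufficiently wide energy window; a minimal numerical sketch with placeholder inputs ($\sigma(\epsilon)$ and $\Omega$ are arbitrary illustrative choices):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def minus_df_deps(eps, mu, T):
    """-df/deps for the Fermi-Dirac distribution, in 1/eV:
    (1 / 4 k_B T) sech^2((eps - mu) / 2 k_B T)."""
    x = (eps - mu) / (2.0 * K_B * T)
    return 1.0 / (4.0 * K_B * T * math.cosh(x) ** 2)

def sigma_T_mu(sigma_of_eps, mu, T, volume=1.0, half_width=1.0, n=4000):
    """Trapezoidal evaluation of (1/Omega) * int sigma(eps) *
    (-df/deps) d eps over a window of +/- half_width eV around mu."""
    lo = mu - half_width
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        eps = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * sigma_of_eps(eps) * minus_df_deps(eps, mu, T)
    return h * total / volume

# A constant sigma(eps) is returned unchanged, since -df/deps
# integrates to 1 over a window much wider than k_B*T:
print(f"{sigma_T_mu(lambda e: 3.0, mu=0.0, T=300.0):.3f}")  # 3.000
```

In the actual workflow, $\sigma(\epsilon)$ comes from Eq.~\eqref{conduct_e} on the dense interpolated grid rather than from an analytic function.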
Hence, the group velocities in \eqref{groupvl} can easily be obtained on a much denser \textbf{k}-point grid, thus facilitating the numerical convergence of the final results. In the constant relaxation-time approximation, the calculations based on the interpolation of band-energies can produce many transport quantities such as the conductivity, mobility, Seebeck coefficient, \emph{etc.}, as long as the relaxation time is known. In this work, we go beyond this approximation using $\tau_{n,\textbf{k}}$ computed by EPW. We implement an interpolation for the relaxation time in BoltzTrap on the same very dense \textbf{k}-point grid used for the band-energies. The physical principle behind this implementation is that the symmetries of the self-energy are the same as those of the band-energies~\cite{M.Giantomassi09} ($\tau_{n,\textbf{k}}$ can be calculated from the imaginary part of the self-energy of the electrons interacting with phonons). \section{RESULTS} \tabref{tableSI} shows the 107 materials that passed the first two filters and were then computed with HSE to obtain more accurate fundamental and direct band gaps. 
\begin{center} \LTcapwidth=0.98\textwidth \begin{mylongtable}[!ht]{@{\extracolsep{\fill}} l l r c c c r c c c c} \caption{Formula, space group (SG), Materials Project identification number~\cite{A.Jain-APLM13, MatPro13}, three principal hole effective masses $m_1$, $m_2$ and $m_3$ (in units of the free-electron mass $m_0$; the effective-mass data are taken from previous work\cite{F.Ricci-SciData17}), stability measured by the energy above hull $E_{hull}$ in the phase diagram (in meV/atom), and fundamental ($E_g$) and direct ($E_g^d$) gaps (in eV) computed using PBE and HSE~\cite{J.Heyd03, E.N.Brothers08}.} \label{tableSI} \\ \hline & & & & & & & \multicolumn{2}{c}{$E_g$} &\multicolumn{2}{c}{$E_g^d$} \\ \cline{8-9} \cline{10-11} \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{SG} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{$E_{hull}$} & \multicolumn{1}{c}{PBE} & \multicolumn{1}{c}{HSE} & \multicolumn{1}{c}{PBE} & \multicolumn{1}{c}{HSE}\\ \hline \endfirsthead \multicolumn{11}{c}% {{\tablename\ \thetable{} -- continued from previous page}} \\ \hline & & & & & & & \multicolumn{2}{c}{$E_g$} &\multicolumn{2}{c}{$E_g^d$} \\ \cline{8-9} \cline{10-11} \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{SG} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{$E_{hull}$} & \multicolumn{1}{c}{PBE} & \multicolumn{1}{c}{HSE} & \multicolumn{1}{c}{PBE} & \multicolumn{1}{c}{HSE}\\ \hline \endhead \hline \multicolumn{11}{r}{{Continued on next page}} \\ \hline \endfoot \hline \hline \endlastfoot \ce{BeS} & $F\overline{4}3m$ & 422 & 0.65 & 0.65 & 0.65 & 0.0 & 3.14 & 4.05 & 5.62 & 6.89\\ \ce{KMgH3} & $Pm\overline{3}m$ & 23737 & 0.75 & 0.75 & 0.75 & 0.0 & 2.46 & 3.58 & 4.51 & 5.76\\ \ce{SiC} & $F\overline{4}3m$ & 8062 & 0.58 & 0.58 & 0.58 & 0.7 & 1.39 & 2.25 & 4.53 & 5.75\\ \ce{CsPbCl3} & $Amm2$ & 675524 & 0.30 & 0.32 & 0.33 & 0.0
& 2.46 & 5.69 & 2.41 & 5.69\\ \ce{BeSe} & $F\overline{4}3m$ & 1541 & 0.55 & 0.55 & 0.55 & 0.0 & 2.69 & 3.36 & 4.22 & 5.27\\ \ce{BeCN2} & $I\overline{4}2d$ & 15703 & 0.75 & 0.75 & 0.78 & 0.0 & 3.85 & 5.21 & 3.85 & 5.21\\ \ce{RbPbF3} & $Cc$ & 674508 & 0.71 & 0.83 & 0.95 & 0.0 & 3.81 & 4.84 & 4.10 & 5.20\\ \ce{MgS} & $Fm\overline{3}m$ & 1315 & 0.98 & 0.98 & 0.98 & 0.0 & 2.79 & 3.84 & 3.61 & 4.95\\ \ce{RbHgF3} & $Pm\overline{3}m$ & 7482 & 0.93 & 0.93 & 0.93 & 0.0 & 0.65 & 2.11 & 2.70 & 4.90\\ \ce{AgCl} & $Fm\overline{3}m$ & 22922 & 0.83 & 0.83 & 0.83 & 0.0 & 0.95 & 2.28 & 2.89 & 4.81\\ \ce{CsHgF3} & $Pm\overline{3}m$ & 561947 & 0.89 & 0.89 & 0.89 & 0.0 & 0.76 & 2.20 & 2.43 & 4.59\\ \ce{Be2C} & $Fm\overline{3}m$ & 1569 & 0.37 & 0.37 & 0.37 & 0.0 & 1.19 & 1.63 & 4.12 & 4.56\\ \ce{SrMgH4} & $Cmc2_1$ & 643009 & 0.84 & 0.90 & 0.95 & 0.0 & 2.74 & 3.78 & 3.27 & 4.52\\ \ce{Li2Se} & $Fm\overline{3}m$ & 2286 & 0.95 & 0.95 & 0.95 & 0.0 & 3.00 & 3.70 & 3.36 & 4.36\\ \ce{BP} & $F\overline{4}3m$ & 1479 & 0.34 & 0.34 & 0.34 & 0.0 & 1.24 & 2.26 & 3.39 & 4.35\\ \ce{CaS} & $Fm\overline{3}m$ & 1672 & 0.88 & 0.88 & 0.88 & 0.0 & 2.39 & 3.34 & 3.18 & 4.28\\ \ce{LiCa4B3N6} & $Im\overline{3}m$ & 6799 & 0.86 & 0.86 & 0.86 & 0.0 & 2.21 & 3.38 & 2.98 & 4.25\\ \ce{BaSrI4} & $R\overline{3}m$ & 754852 & 0.73 & 0.73 & 0.80 & 21.8 & 3.37 & 4.22 & 3.35 & 4.22\\ \ce{LiSr4B3N6} & $Im\overline{3}m$ & 9723 & 0.89 & 0.89 & 0.89 & 0.0 & 2.09 & 3.22 & 2.95 & 4.18\\ \ce{NaSr4B3N6} & $Im\overline{3}m$ & 10811 & 0.92 & 0.92 & 0.92 & 0.0 & 1.99 & 3.14 & 2.78 & 4.08\\ \ce{K2LiAlH6} & $Fm\overline{3}m$ & 24411 & 0.65 & 0.65 & 0.65 & 9.1 & 2.45 & 3.70 & 2.93 & 4.04\\ \ce{BeTe} & $F\overline{4}3m$ & 252 & 0.42 & 0.42 & 0.42 & 0.0 & 2.02 & 2.45 & 3.62 & 4.04\\ \ce{Ba3SrI8} & $I4/mmm$ & 756235 & 0.70 & 0.81 & 0.81 & 7.5 & 3.23 & 4.02 & 3.23 & 4.02\\ \ce{CaSe} & $Fm\overline{3}m$ & 1415 & 0.77 & 0.77 & 0.77 & 0.0 & 2.09 & 2.95 & 2.99 & 4.01\\ \ce{LiH} & $Fm\overline{3}m$ & 23703 & 0.46 & 0.46 & 0.46 & 0.0 & 3.02 & 3.97 
& 2.97 & 3.97\\ \ce{AlP} & $F\overline{4}3m$ & 1550 & 0.56 & 0.56 & 0.56 & 0.0 & 1.63 & 2.50 & 3.09 & 3.90\\ \ce{YbS} & $Fm\overline{3}m$ & 1820 & 0.76 & 0.76 & 0.76 & 0.0 & 2.22 & 2.96 & 2.91 & 3.76\\ \ce{Na2LiAlH6} & $Fm\overline{3}m$ & 644092 & 0.66 & 0.66 & 0.66 & 3.9 & 2.64 & 3.75 & 2.89 & 3.75\\ \ce{SrSe} & $Fm\overline{3}m$ & 2758 & 0.83 & 0.83 & 0.83 & 0.0 & 2.23 & 3.03 & 2.80 & 3.68\\ \ce{BaLiH3} & $Pm\overline{3}m$ & 23818 & 0.36 & 0.36 & 0.36 & 0.0 & 2.27 & 3.26 & 2.55 & 3.62\\ \ce{CsPbF3} & $Pm\overline{3}m$ & 5811 & 0.39 & 0.39 & 0.39 & 4.6 & 3.05 & 3.59 & 2.92 & 3.59\\ \ce{Cs3ZnH5} & $I4/mcm$ & 643702 & 0.69 & 0.93 & 0.93 & 0.0 & 2.75 & 3.58 & 2.79 & 3.58\\ \ce{Al2CdS4} & $Fd\overline{3}m$ & 9993 & 0.78 & 0.78 & 0.78 & 20.0 & 2.47 & 3.55 & 2.47 & 3.56\\ \ce{K2LiAlH6} & $R\overline{3}m$ & 23774 & 0.68 & 0.84 & 0.84 & 0.0 & 2.58 & 3.52 & 2.90 & 3.52\\ \ce{BaMgH4} & $Cmcm$ & 643718 & 0.48 & 0.55 & 0.70 & 4.8 & 2.32 & 3.26 & 2.58 & 3.51\\ \ce{CaTe} & $Fm\overline{3}m$ & 1519 & 0.60 & 0.60 & 0.60 & 0.0 & 1.55 & 2.18 & 2.62 & 3.50\\ \ce{Cs3MgH5} & $P4/ncc$ & 23947 & 0.88 & 0.93 & 0.93 & 0.3 & 2.61 & 3.49 & 2.63 & 3.49\\ \ce{Cs3MgH5} & $I4/mcm$ & 643895 & 0.83 & 0.94 & 0.94 & 0.0 & 2.59 & 3.49 & 2.61 & 3.49\\ \ce{YbSe} & $Fm\overline{3}m$ & 286 & 0.67 & 0.67 & 0.67 & 0.0 & 1.97 & 2.43 & 2.77 & 3.48\\ \ce{ZnS} & $F\overline{4}3m$ & 10695 & 0.81 & 0.81 & 0.81 & 0.0 & 2.02 & 3.46 & 2.02 & 3.46\\ \ce{TaCu3S4} & $P\overline{4}3m$ & 10748 & 0.98 & 0.98 & 0.98 & 0.0 & 1.95 & 2.95 & 2.34 & 3.46\\ \ce{Al2ZnS4} & $Fd\overline{3}m$ & 4842 & 0.66 & 0.66 & 0.66 & 0.0 & 2.49 & 3.43 & 2.52 & 3.46\\ \ce{Li2ThN2} & $P\overline{3}m1$ & 27487 & 0.85 & 0.95 & 0.95 & 0.0 & 2.18 & 3.33 & 2.34 & 3.46\\ \ce{Mg2B24C} & $P\overline{4}n2$ & 568556 & 0.77 & 0.93 & 0.93 & 0.0 & 2.63 & 3.41 & 2.62 & 3.42\\ \ce{Li2GePbS4} & $I\overline{4}2m$ & 19896 & 0.61 & 0.61 & 0.98 & 0.0 & 2.25 & 3.20 & 2.31 & 3.33\\ \ce{Cs3H5Pd} & $P4/mbm$ & 643006 & 0.79 & 0.83 & 0.83 & 0.0 & 2.28 & 3.09 & 2.38 & 
3.32\\ \ce{SrTe} & $Fm\overline{3}m$ & 1958 & 0.67 & 0.67 & 0.67 & 0.0 & 1.77 & 2.39 & 2.48 & 3.24\\ \ce{MgTe} & $F\overline{4}3m$ & 13033 & 0.95 & 0.95 & 0.95 & 0.9 & 2.32 & 3.24 & 2.32 & 3.24\\ \ce{CsTaN2} & $I\overline{4}2d$ & 34293 & 0.71 & 0.71 & 0.92 & 0.0 & 2.15 & 3.22 & 2.21 & 3.22\\ \ce{Cs3MnH5} & $I4/mcm$ & 643706 & 0.82 & 0.96 & 0.96 & 0.0 & 1.65 & 3.18 & 1.66 & 3.21\\ \ce{LiMgP} & $F\overline{4}3m$ & 36111 & 0.65 & 0.65 & 0.65 & 0.0 & 1.56 & 2.00 & 2.39 & 3.18\\ \ce{BaS} & $Fm\overline{3}m$ & 1500 & 0.85 & 0.85 & 0.85 & 0.0 & 2.16 & 3.02 & 2.30 & 3.17\\ \ce{LiAlTe2} & $I\overline{4}2d$ & 4586 & 0.52 & 0.83 & 0.83 & 0.0 & 2.44 & 3.11 & 2.44 & 3.11\\ \ce{YbTe} & $Fm\overline{3}m$ & 1779 & 0.54 & 0.54 & 0.54 & 0.0 & 1.47 & 1.76 & 2.46 & 3.09\\ \ce{Li3Sb} & $Fm\overline{3}m$ & 2074 & 0.24 & 0.24 & 0.24 & 0.0 & 0.72 & 1.15 & 2.28 & 3.06\\ \ce{SrAl2Te4} & $I422$ & 37091 & 0.42 & 0.79 & 0.80 & 0.0 & 1.50 & 2.66 & 1.55 & 3.06\\ \ce{TaCu3Te4} & $P\overline{4}3m$ & 9295 & 0.63 & 0.63 & 0.63 & 0.0 & 1.14 & 2.50 & 1.59 & 3.05\\ \ce{TaCu3Se4} & $P\overline{4}3m$ & 4081 & 0.82 & 0.82 & 0.82 & 0.0 & 1.63 & 2.43 & 2.03 & 2.98\\ \ce{BaSe} & $Fm\overline{3}m$ & 1253 & 0.76 & 0.76 & 0.76 & 0.0 & 1.96 & 2.59 & 2.19 & 2.95\\ \ce{KAg2PS4} & $I\overline{4}2m$ & 12532 & 0.67 & 0.82 & 0.82 & 0.0 & 1.27 & 2.53 & 2.05 & 2.87\\ \ce{AlAs} & $F\overline{4}3m$ & 2172 & 0.50 & 0.50 & 0.50 & 0.0 & 1.52 & 2.12 & 1.77 & 2.84\\ \ce{LiErS2} & $I4_1/amd$ & 35591 & 0.62 & 0.99 & 0.99 & 10.4 & 1.99 & 2.80 & 1.99 & 2.80\\ \ce{GaN} & $F\overline{4}3m$ & 830 & 0.94 & 0.94 & 0.94 & 5.2 & 1.57 & 2.80 & 1.56 & 2.80\\ \ce{CsPbCl3} & $Pm\overline{3}m$ & 23037 & 0.26 & 0.26 & 0.26 & 5.5 & 2.40 & 2.75 & 2.19 & 2.75\\ \ce{GaP} & $F\overline{4}3m$ & 2490 & 0.45 & 0.45 & 0.45 & 0.0 & 1.59 & 1.97 & 1.59 & 2.69\\ \ce{LiSmS2} & $I4_1/amd$ & 34477 & 0.93 & 0.93 & 0.99 & 0.0 & 1.92 & 2.69 & 1.92 & 2.69\\ \ce{LiGaTe2} & $I\overline{4}2d$ & 5048 & 0.37 & 0.70 & 0.70 & 0.0 & 1.59 & 2.69 & 1.59 & 2.69\\ 
\ce{ThSnI6} & $P\overline{3}1c$ & 28815 & 0.55 & 0.57 & 0.57 & 20.8 & 1.99 & 2.32 & 2.23 & 2.66\\ \ce{BaTe} & $Fm\overline{3}m$ & 1000 & 0.64 & 0.64 & 0.64 & 0.0 & 1.59 & 2.22 & 1.97 & 2.65\\ \ce{CuI} & $F\overline{4}3m$ & 22895 & 0.86 & 0.86 & 0.86 & 6.0 & 1.14 & 2.65 & 1.13 & 2.65\\ \ce{NbCu3Se4} & $P\overline{4}3m$ & 4043 & 0.82 & 0.82 & 0.82 & 0.0 & 1.40 & 2.12 & 1.79 & 2.64\\ \ce{TaSbRu} & $F\overline{4}3m$ & 31454 & 0.73 & 0.73 & 0.73 & 0.0 & 0.71 & 1.30 & 1.84 & 2.63\\ \ce{Nd2TeS2} & $P\overline{3}m1$ & 10933 & 0.45 & 0.72 & 0.72 & 0.0 & 1.62 & 2.23 & 1.95 & 2.63\\ \ce{Zr2SN2} & $P6_3/mmc$ & 11583 & 0.40 & 0.54 & 0.54 & 0.0 & 0.56 & 1.38 & 1.62 & 2.62\\ \ce{Ca3PCl3} & $Pm\overline{3}m$ & 29342 & 0.63 & 0.63 & 0.63 & 0.0 & 1.84 & 2.60 & 1.84 & 2.60\\ \ce{BaMg2P2} & $P\overline{3}m1$ & 8278 & 0.62 & 0.62 & 0.88 & 0.0 & 1.15 & 1.69 & 1.75 & 2.60\\ \ce{WS2} & $R3m$ & 9813 & 0.79 & 0.91 & 0.91 & 3.8 & 1.34 & 2.12 & 1.84 & 2.60\\ \ce{Ca3AsCl3} & $Pm\overline{3}m$ & 28069 & 0.58 & 0.58 & 0.58 & 0.0 & 1.84 & 2.57 & 1.84 & 2.57\\ \ce{BaSnS2} & $P2_1/c$ & 12181 & 0.44 & 0.57 & 0.85 & 0.0 & 1.62 & 2.40 & 1.69 & 2.54\\ \ce{LiZnP} & $F\overline{4}3m$ & 10182 & 0.40 & 0.40 & 0.40 & 0.0 & 1.36 & 1.69 & 1.50 & 2.51\\ \ce{ScCuS2} & $P3m1$ & 6980 & 0.75 & 0.75 & 0.87 & 0.0 & 0.88 & 1.77 & 1.50 & 2.50\\ \ce{SbIrS} & $Pca2_1$ & 9270 & 0.50 & 0.52 & 0.89 & 4.5 & 1.04 & 1.76 & 1.54 & 2.43\\ \ce{Cd2P3Cl} & $Cc$ & 29246 & 0.41 & 0.76 & 0.80 & 9.1 & 1.12 & 2.06 & 1.58 & 2.42\\ \ce{CsPbBr3} & $Pnma$ & 567629 & 0.25 & 0.28 & 0.29 & 0.0 & 2.01 & 2.39 & 2.01 & 2.39\\ \ce{Hg2P3Cl} & $C2/c$ & 28875 & 0.40 & 0.82 & 0.99 & 2.3 & 1.13 & 1.89 & 1.56 & 2.38\\ \ce{Cd2P3Br} & $C2/c$ & 29245 & 0.44 & 0.78 & 0.87 & 6.8 & 1.05 & 2.01 & 1.57 & 2.37\\ \ce{CsNbN2} & $Fd\overline{3}m$ & 8978 & 0.53 & 0.53 & 0.53 & 8.2 & 1.53 & 2.36 & 1.53 & 2.36\\ \ce{RbGeBr3} & $Pna2_1$ & 28558 & 0.32 & 0.42 & 0.47 & 0.0 & 1.99 & 2.34 & 1.99 & 2.34\\ \ce{SbIrS} & $P2_13$ & 8630 & 0.39 & 0.39 & 0.39 & 0.0 & 1.42 & 2.18 
& 1.56 & 2.34\\ \ce{Ca3AsBr3} & $Pm\overline{3}m$ & 27294 & 0.57 & 0.57 & 0.57 & 0.0 & 1.67 & 2.34 & 1.67 & 2.34\\ \ce{LiYSe2} & $I4_1/amd$ & 37879 & 0.55 & 0.83 & 0.83 & 17.6 & 1.55 & 2.33 & 1.55 & 2.33\\ \ce{ZrCoBi} & $F\overline{4}3m$ & 31451 & 0.80 & 0.80 & 0.80 & 0.0 & 1.15 & 1.28 & 1.66 & 2.31\\ \ce{NaLi2Sb} & $Fm\overline{3}m$ & 5077 & 0.41 & 0.41 & 0.41 & 0.0 & 0.71 & 1.04 & 1.61 & 2.27\\ \ce{ZnP2} & $P4_12_12$ & 2782 & 0.37 & 0.65 & 0.65 & 0.0 & 1.47 & 2.14 & 1.59 & 2.27\\ \ce{KHgF3} & $Pm\overline{3}m$ & 7483 & 0.87 & 0.87 & 0.87 & 0.0 & 0.64 & 2.26 & 2.82 & 2.26\\ \ce{LiHoSe2} & $I4_1/amd$ & 33322 & 0.52 & 0.83 & 0.83 & 16.8 & 1.58 & 2.25 & 1.58 & 2.25\\ \ce{LiDySe2} & $I4_1/amd$ & 35717 & 0.55 & 0.81 & 0.82 & 16.0 & 1.56 & 2.23 & 1.56 & 2.23\\ \ce{P2Pt} & $Pa\overline{3}$ & 730 & 0.25 & 0.25 & 0.25 & 0.0 & 1.06 & 1.79 & 1.51 & 2.22\\ \ce{TbLiSe2} & $I4_1/amd$ & 38695 & 0.61 & 0.78 & 0.78 & 15.4 & 1.54 & 2.20 & 1.54 & 2.20\\ \ce{MgGeP2} & $I\overline{4}2d$ & 34903 & 0.26 & 0.61 & 0.61 & 13.0 & 1.51 & 2.16 & 1.54 & 2.16\\ \ce{LiSmSe2} & $I4_1/amd$ & 35388 & 0.72 & 0.72 & 0.75 & 0.0 & 1.51 & 2.15 & 1.51 & 2.15\\ \ce{LiNdSe2} & $I4_1/amd$ & 37605 & 0.72 & 0.72 & 0.84 & 7.6 & 1.52 & 2.15 & 1.52 & 2.15\\ \ce{SrMg2Sb2} & $P\overline{3}m1$ & 9566 & 0.53 & 0.55 & 0.55 & 0.0 & 0.98 & 1.42 & 1.51 & 2.06\\ \ce{RbAu} & $Pm\overline{3}m$ & 30373 & 0.24 & 0.24 & 0.24 & 0.0 & 0.57 & 0.49 & 1.80 & 2.02\\ \ce{CsAu} & $Pm\overline{3}m$ & 2667 & 0.25 & 0.25 & 0.25 & 0.0 & 1.02 & 1.25 & 1.73 & 1.99\\ \ce{LiNbS2} & $P6_3/mmc$ & 7936 & 0.61 & 0.61 & 0.68 & 0.0 & 0.73 & 1.06 & 1.56 & 1.95\\ \ce{Ag3SbS3} & $R3c$ & 4515 & 0.49 & 0.49 & 1.00 & 2.3 & 1.00 & 1.30 & 1.54 & 1.94\\ \end{mylongtable} \end{center} \tabref{tableSII} lists the current \emph{n}-type TCMs. Most of them exhibit mobilities on the order of 100 cm$^2$/Vs.
\begin{center} \LTcapwidth=0.98\textwidth \begin{mylongtable}[!ht]{@{\extracolsep{\fill}} l l l r r} \caption{Current \emph{n}-type TCMs, fabrication methods (abbreviations are listed at the bottom of the table), dopants used, electron carrier concentrations $C$ (in cm$^{-3}$) and mobilities $\mu$ (in cm$^2$/Vs). These values are extracted from room-temperature experimental measurements (cited references). The mobility depends on the morphology of the fabricated sample (single crystal, polycrystalline, amorphous, thin film, etc.).} \label{tableSII} \\ \hline \multicolumn{1}{l}{\emph{n}-TCMs} & \multicolumn{1}{l}{Processing} & \multicolumn{1}{l}{Dopants} & \multicolumn{1}{c}{$C$ (cm$^{-3}$)} & \multicolumn{1}{c}{$\mu$ (cm$^2$/Vs)} \\ \hline \endfirsthead \multicolumn{5}{c}% {{\tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{l}{\emph{n}-TCMs} & \multicolumn{1}{l}{Processing} & \multicolumn{1}{l}{Dopants} & \multicolumn{1}{c}{$C$ (cm$^{-3}$)} & \multicolumn{1}{c}{$\mu$ (cm$^2$/Vs)} \\ \hline \endhead \hline \multicolumn{5}{r}{{Continued on next page}} \\ \hline \endfoot \hline \multicolumn{3}{l}{CVD: Chemical vapor deposition} & \multicolumn{2}{l}{PLD: Pulsed laser deposition} \\ \multicolumn{3}{l}{VPT: Vapor phase transport} & \multicolumn{2}{l}{\dag: Reviewed from many papers}\\ \multicolumn{3}{l}{PVD: Physical vapor deposition} & \multicolumn{2}{l}{ST: Spray technique} \\ \multicolumn{3}{l}{VEM: Vacuum evaporation method} & \multicolumn{2}{l}{EBE: e-beam evaporation} \\ \multicolumn{3}{l}{TRE: Thermal reactive evaporation} & \multicolumn{2}{l}{FZM: Floating zone method} \\ \multicolumn{5}{l}{EFG: Edge-defined film-fed growth} \\ \hline \hline \endlastfoot \ce{SnO2} & CVD & Sb & $8.5\times10^{15}$ & 260 \cite{C.G.Fonstad71} \\ & CVD & Sb & $8.6\times10^{16}$ & 240 \cite{C.G.Fonstad71} \\ & CVD & Sb & $2.2\times10^{18}$ & 150 \cite{C.G.Fonstad71} \\ & PLD & Ta & $2.7\times10^{20}$ & 83
\cite{S.Nakao10} \\ & PLD & Sb & $\sim 1\times10^{19}$ & $\sim 40$ \cite{J.E.Dominguez02} \\ \ce{ZnO} & VPT & Undoped & $6\times10^{16}$ & 205 \cite{D.C.Look98} \\ & \dag & \dag & $10^{15}-10^{20}$ & 50-230 \cite{K.Ellmer12, D.S.Ginley10} \\ & PLD & Undoped & $3\times10^{16}$ & 155 \cite{E.M.Kaidashev03} \\ & Sputtering & Al & $3.6\times10^{20}$ & 41.3 \cite{C.Agashe04} \\ & Sputtering & Al & $8\times10^{20}$ & 17 \cite{K.Ellmer94, K.Ellmer00} \\ \ce{In2O3} & PVD & - & $3-9\times10^{17}$ & 160 \cite{R.L.Weiher62} \\ & ST & Sn & $0.1-6\times10^{20}$ & 30-70 \cite{R.Groth66} \\ & Sputtering & Undoped & $10^{17}-10^{20}$ & $<10$ \cite{H.K.Muller68} \\ & VEM & Undoped & $>3.5\times10^{19}$ & $25-60$ \cite{S.Noguchi80} \\ & VEM & Undoped & $4.69\times10^{20}$ & 74 \cite{C.A.Pan81} \\ & EBE & Sn & $0.46-8.6\times10^{20}$ & $43-79$ \cite{I.Hamberg86} \\ & Flux & Sn & $1.6\times10^{20}$ & 100 \cite{S.J.Wen92} \\ & Sputtering & Sn & $6\times10^{20}$ & $\sim 25$ \cite{M.Sawada98} \\ & TRE & Mo & $2.5-3.5\times10^{20}$ & $80-130$ \cite{Y.Meng01} \\ & PLD & Mo & $1.9\times10^{20}$ & 95 \cite{C.Warmsingh04} \\ & Sputtering & H & $1.4-1.8\times10^{20}$ & $98-130$ \cite{T.Koida07} \\ & Sputtering & H & $1.5\times10^{20}$ & 140 \cite{T.Koida09} \\ & Sputtering & Sn & $1\times10^{21}$ & 40 \cite{N.Oka12} \\ \ce{Ga2O3} & Verneuil & Undoped & $1\times10^{18}$ & 80 \cite{M.R.Lorenz67} \\ & FZM & Undoped & $1.2-5.2\times10^{18}$ & $2.6-46$ \cite{N.Ueda97} \\ & EFG & Undoped & $1\times10^{17}$ & 153 \cite{T.Oishi15} \\ & - & Undoped & $8\times10^{16}$ & $\sim 150$ \cite{N.Ma16} \\ \end{mylongtable} \end{center} \tabref{tableSIII} collects results for the \ce{A3B} compounds (A = Li, Na, K, Rb and Cs; B = N, P, As and Sb). Highly unstable compounds are not considered. Compounds with narrow gaps ($E_g<0.4$ eV) are omitted because their computed effective masses are not reliable.
\begin{center} \LTcapwidth=0.98\textwidth \begin{mylongtable}[!ht]{@{\extracolsep{\fill}} l l r c c c c c} \caption{Formula, space group (SG), Materials Project identification number~\cite{A.Jain-APLM13, MatPro13}, fundamental gap $E_g$ computed with DFT (in eV), three principal hole effective masses $m_1$, $m_2$ and $m_3$ (in units of the free-electron mass $m_0$), and stability measured by the energy above hull $E_{hull}$ in the phase diagram (in meV/atom).} \label{tableSIII} \\ \hline \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{SG} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$E_g$} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{$E_{hull}$} \\ \hline \endfirsthead \multicolumn{8}{c}% {{\tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{l}{Formula} & \multicolumn{1}{l}{SG} & \multicolumn{1}{r}{MP-id} & \multicolumn{1}{c}{$E_g$} & \multicolumn{1}{c}{$m_1$} & \multicolumn{1}{c}{$m_2$} & \multicolumn{1}{c}{$m_3$} & \multicolumn{1}{c}{$E_{hull}$} \\ \hline \endhead \hline \multicolumn{8}{r}{{Continued on next page}} \\ \hline \endfoot \hline \hline \endlastfoot \ce{Li3N} & $P6/mmm$ & 2251 & 0.98 & 1.09 & 1.09 & 5.30 & 0.0 \\ \ce{Li3N} & $P6_3/mmc$ & 2341 & 1.22 & 1.60 & 1.60 & 3.66 & 10.0 \\ \ce{Li3P} & $P6_3/mmc$ & 736 & 0.70 & 0.84 & 0.84 & 2.41 & 0.0 \\ \ce{Li3As} & $P6_3/mmc$ & 757 & 0.64 & 0.74 & 0.74 & 2.10 & 0.0 \\ \ce{Li3Sb} & $P6_3/mmc$ & 7955 & 0.48 & 0.61 & 0.61 & 1.75 & 3.0 \\ \ce{Li3Sb} & $Fm\overline{3}m$ & 2074 & 0.85 & \textbf{0.24} & \textbf{0.24} & \textbf{0.24} & 0.0 \\ \ce{Na3P} & $P6_3/mmc$ & 1598 & 0.41 & 1.61 & 1.61 & 6.52 & 0.0 \\ \ce{Na3Sb} & $P6_3/mmc$ & 7956 & 0.40 & 1.10 & 1.10 & 3.85 & 0.0 \\ \ce{K3Sb} & $Fm\overline{3}m$ & 10159 & 0.68 & 5.22 & 5.22 & 5.22 & 29.0 \\ \ce{Rb3Sb} & $Fm\overline{3}m$ & 33018 & 0.43 & 1.71 & 1.71 & 1.71 & 34.0 \\ \ce{Cs3Sb} & $Fm\overline{3}m$ & 10378 & 0.61 & 1.03 & 1.03 & 1.03 & 0.0 \\ \end{mylongtable} \end{center}
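Since Table~\ref{tableSIII} reports three principal hole effective masses per compound, it is sometimes convenient to condense them into a single number. The snippet below applies two standard textbook combinations, the density-of-states average $(m_1 m_2 m_3)^{1/3}$ and the harmonic (conductivity) average $3/(1/m_1+1/m_2+1/m_3)$, to the \ce{Li3N} ($P6/mmm$) entry. These averages are purely illustrative and are not the screening criterion used in our workflow.

```python
def dos_mass(m1, m2, m3):
    """Density-of-states effective mass: geometric mean of the principal masses."""
    return (m1 * m2 * m3) ** (1.0 / 3.0)

def cond_mass(m1, m2, m3):
    """Conductivity effective mass: harmonic mean of the principal masses."""
    return 3.0 / (1.0 / m1 + 1.0 / m2 + 1.0 / m3)

# Li3N (P6/mmm) entry of Table SIII: m1 = m2 = 1.09, m3 = 5.30 (in m0)
m_dos = dos_mass(1.09, 1.09, 5.30)    # ~1.85 m0
m_cond = cond_mass(1.09, 1.09, 5.30)  # ~1.48 m0: the lighter in-plane masses dominate
```

For an isotropic entry such as \ce{Li3Sb} ($Fm\overline{3}m$), both averages simply return the common value (0.24 $m_0$).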
\figref{scattering_rates} presents the scattering rates of CaTe and \ce{Li3Sb} as functions of energy at room temperature (the valence-band maximum (VBM) is set to zero). The separate contributions of the acoustic (first three) and optical (remaining) phonon modes are also shown. The optical modes are the main source of scattering in both cases. \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{supfigures/scatteringRates.pdf} \end{center} \vspace{-15pt} \caption{Scattering rates of (a) CaTe and (b) \ce{Li3Sb} at 300 K. The contributions of acoustic and optical phonon modes are shown as well.} \label{scattering_rates} \end{myfigure} \figref{elphCoupling_CaTe} shows the electron-phonon coupling strength (see \eqref{elph_cp}) for the six phonon modes of CaTe, computed with a dense \emph{\textbf{q}}-point mesh of $40\times40\times40$. The Fermi energy in \eqref{elph_cp} was set to 146 meV below the VBM to ensure that this quantity is well defined ($N(\epsilon_F) > 0$). Moreover, the Fermi level corresponding to the very high doping of $10^{21}$ cm$^{-3}$ lies 264.5 meV below the VBM, so the value of 146 meV is representative of the range of hole concentrations considered here. The electron-phonon coupling strength $\lambda_{\boldsymbol{q}\nu}$ therefore characterizes the intensity of the interaction between the hole carriers (around the VBM) and each phonon mode. The average values of $\lambda_{\boldsymbol{q}\nu}$ over all \emph{\textbf{q}}-points (shown as red lines) indicate that the hole carriers interact with the optical modes (mainly mode 6) around 5 times more strongly than with the acoustic modes. \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{supfigures/lambda_CaTe.pdf} \end{center} \vspace{-20pt} \caption{The electron-phonon coupling strength of CaTe for each mode $\nu$ and phonon wave-vector $q$. There are 6 phonon modes, 3 acoustic (1-3) and 3 optical ones (4-6).
The number of \emph{q}-points is 64000, corresponding to a $40\times40\times40$ mesh in the full Brillouin zone. To reduce the size of the figure, \emph{q}-points with $\lambda_{\nu q} < 1\times10^{-3}$ are not shown in the subfigures. The red lines are the average values of $\lambda_{\nu q}$ over the 64000 \emph{q}-points.} \label{elphCoupling_CaTe} \end{myfigure} In the same way, \figref{elphCoupling_Li3Sb} presents the electron-phonon coupling strength for the twelve phonon modes of \ce{Li3Sb}, computed with a dense \emph{\textbf{q}}-point mesh of $40\times40\times40$. The Fermi energy in \eqref{elph_cp} was set to 100 meV below the VBM (the Fermi level at a doping of $10^{21}$ cm$^{-3}$ lies 168.5 meV below the VBM). In this case, the interaction between the hole carriers and the optical modes is about 19 times stronger than with the acoustic modes. \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{supfigures/lambda_Li3Sb.pdf} \end{center} \vspace{-20pt} \caption{The electron-phonon coupling strength of \ce{Li3Sb} for each mode $\nu$ and phonon wave-vector $q$. There are 12 phonon modes, 3 acoustic (1-3) and 9 optical ones (4-12). The number of \emph{q}-points is 64000, corresponding to a $40\times40\times40$ mesh in the full Brillouin zone. To reduce the size of the figure, \emph{q}-points with $\lambda_{\nu q} < 0.5\times10^{-2}$ are not shown in the subfigures. The red lines are the average values of $\lambda_{\nu q}$ over the 64000 \emph{q}-points.} \label{elphCoupling_Li3Sb} \end{myfigure} We performed convergence tests for the G$_0$W$_0$ calculations with respect to the number of bands ($N_b$) and the kinetic-energy cut-off for the dielectric tensor ($E_c$). These two parameters were varied simultaneously over specific ranges, from which we chose values that give acceptable convergence of the band gap. \figref{CaTe_G0W0} and \tabref{tableSIV} show how the band gap at the $\Gamma$ point of CaTe evolves with $N_b$ and $E_c$.
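The convergence behaviour can be quantified in a few lines of code. The gap values below are copied from Table~\ref{tableSIV}; the 10 meV tolerance is our illustrative convergence criterion, not a statement from the main text.

```python
# Gamma-point gaps of CaTe (eV) from Table SIV, keyed by N_b, over E_c = 6..14 Ha
gaps = {
    75:  [5.41558, 5.42030, 5.42063, 5.42239, 5.42275],
    150: [5.39362, 5.39570, 5.39460, 5.39418, 5.39352],
    300: [5.36266, 5.36771, 5.36552, 5.36260, 5.36077],
    450: [5.34078, 5.34818, 5.34680, 5.34287, 5.34034],
}

# Spread over E_c at fixed N_b: convergence of the dielectric cut-off.
spread_ec = {nb: max(g) - min(g) for nb, g in gaps.items()}

# Change between successive N_b at the largest E_c: band-sum convergence.
nbs = sorted(gaps)
step_nb = [abs(gaps[nbs[i + 1]][-1] - gaps[nbs[i]][-1]) for i in range(len(nbs) - 1)]

# At N_b = 450 the gap varies by < 10 meV over E_c, while each increase of N_b
# still shifts the gap by ~20-35 meV: the number of bands is the harder parameter.
```

The same check applied to Table~\ref{tableSV} leads to the analogous conclusion for \ce{Li3Sb}.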
Here, $N_b = [75, 150, 300, 450]$ and $E_c = [6, 8, 10, 12, 14]$ Ha. Similarly, \figref{Li3Sb_G0W0} and \tabref{tableSV} show how the band gap at the $\Gamma$ point of \ce{Li3Sb} converges with the number of bands and the energy cut-off for the dielectric tensor. In this case, $N_b = [80, 160, 240, 320]$ and $E_c = [4, 6, 8, 10, 12]$ Ha. In both cases, the kinetic-energy cut-off is set to 46 Ha and the \textbf{k}-point mesh is $6\times6\times6$. \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/CaTe_G0W02.pdf} \end{center} \vspace{-15pt} \caption{The convergence test for the gap at the $\Gamma$ point of CaTe in the G$_0$W$_0$ computation.} \label{CaTe_G0W0} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/Li3Sb_G0W02.pdf} \end{center} \vspace{-15pt} \caption{The convergence test for the gap at the $\Gamma$ point of \ce{Li3Sb} in the G$_0$W$_0$ computation.} \label{Li3Sb_G0W0} \end{myfigure} \begin{mytable}[!ht] \caption{The evolution of the gap at the $\Gamma$ point of CaTe, $E^\Gamma_g$ (in eV), with the number of bands $N_b$ and the kinetic-energy cut-off for the dielectric tensor $E_c$ (in Ha).} \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular*} {1.0\textwidth} {@{\extracolsep{\fill}} | c | c | c | c | c | c | c | c | c | c | c |} \hline $N_b$ & 75 & 75 & 75 & 75 & 75 & 150 & 150 & 150 & 150 & 150 \\ \hline $E_c$ & 6.0 & 8.0 & 10.0 & 12.0 & 14.0 & 6.0 & 8.0 & 10.0 & 12.0 & 14.0 \\ \hline $E^\Gamma_g$ & 5.41558 & 5.42030 & 5.42063 & 5.42239 & 5.42275 & 5.39362 & 5.39570 & 5.39460 & 5.39418 & 5.39352 \\ \hline \hline $N_b$ & 300 & 300 & 300 & 300 & 300 & 450 & 450 & 450 & 450 & 450 \\ \hline $E_c$ & 6.0 & 8.0 & 10.0 & 12.0 & 14.0 & 6.0 & 8.0 & 10.0 & 12.0 & 14.0 \\ \hline $E^\Gamma_g$ & 5.36266 & 5.36771 & 5.36552 & 5.36260 & 5.36077 & 5.34078 & 5.34818 & 5.34680 & 5.34287 & 5.34034\\ \hline \end{tabular*} \label{tableSIV} \end{center} \end{mytable} \begin{mytable}[!ht]
\caption{The evolution of the gap at $\Gamma$ point of \ce{Li3Sb} $E^\Gamma_g$ (in eV) with the change of number of bands $N_b$ and kinetic energy cut-off for dielectric tensor $E_c$ (in Ha).} \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular*} {1.0\textwidth} {@{\extracolsep{\fill}} | c | c | c | c | c | c | c | c | c | c | c |} \hline $N_b$ & 80 & 80 & 80 & 80 & 80 & 160 & 160 & 160 & 160 & 160 \\ \hline $E_c$ & 4.0 & 6.0 & 8.0 & 10.0 & 12.0 & 4.0 & 6.0 & 8.0 & 10.0 & 12.0 \\ \hline $E^\Gamma_g$ & 3.33445 & 3.34410 & 3.34744 & 3.34911 & 3.35034 & 3.37762 & 3.39112 & 3.39166 & 3.39208 & 3.39262 \\ \hline \hline $N_b$ & 240 & 240 & 240 & 240 & 240 & 320 & 320 & 320 & 320 & 320 \\ \hline $E_c$ & 4.0 & 6.0 & 8.0 & 10.0 & 12.0 & 4.0 & 6.0 & 8.0 & 10.0 & 12.0 \\ \hline $E^\Gamma_g$ & 3.37743 & 3.40028 & 3.39860 & 3.39749 & 3.39752 & 3.37587 & 3.40757 & 3.40897 & 3.40748 & 3.40683 \\ \hline \end{tabular*} \label{tableSV} \end{center} \end{mytable} In comparison, \figref{band_CaTe} and \figref{band_Li3Sb} show band structures of CaTe and \ce{Li3Sb} computed by both $G_0W_0$ (red) and DFT (blue). \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/CaTe_band.pdf} \end{center} \vspace{-15pt} \caption{The band structure of CaTe computed by DFT (blue) and $G_0W_0$ (red).} \label{band_CaTe} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/Li3Sb_band.pdf} \end{center} \vspace{-15pt} \caption{The band structure of \ce{Li3Sb} computed by DFT (blue) and $G_0W_0$ (red).} \label{band_Li3Sb} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{supfigures/CaTe_defects.pdf} \end{center} \vspace{-15pt} \caption{The defect formation energy as a function of Fermi level of intrinsic and extrinsic defects for CaTe (left) and the phase diagrams with chemical potentials of each element in specific facets (right). 
The intrinsic defects include vacancies (Vac$_\textrm{Ca}$ and Vac$_\textrm{Te}$), anti-sites (Te$_\textrm{Ca}$ and Ca$_\textrm{Te}$) and interstitial atoms inserted into the tetrahedral hollows formed by 4 Te atoms (Ca$_\textrm{i(tet,Te4)}$ and Te$_\textrm{i(tet,Te4)}$), while Na, K and Li are used as extrinsic defects substituting on Ca sites (Na$_\textrm{Ca}$, K$_\textrm{Ca}$ and Li$_\textrm{Ca}$). The chemical potentials of the elements are obtained from the phase diagrams of Ca-Te and Ca-Te-X (X = Na, K and Li) for the intrinsic and extrinsic defects, respectively. The VBM is set to zero.} \label{CaTe_defects} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{supfigures/Li3Sb_defects.pdf} \end{center} \vspace{-15pt} \caption{The defect formation energy as a function of Fermi level of intrinsic defects for \ce{Li3Sb} (left) and the phase diagram with chemical potentials of each element in a specific facet (right). The intrinsic defects include vacancies (Vac$_\textrm{Li-1}$, Vac$_\textrm{Li-2}$ and Vac$_\textrm{Sb}$), anti-sites (Li$_\textrm{Sb}$, Sb$_\textrm{Li-1}$ and Sb$_\textrm{Li-2}$) and interstitial atoms inserted into the octahedral hollows formed by Sb and Li atoms (Li$_\textrm{i(oct, Sb2Li4)}$, Li$_\textrm{i(oct, SbLi5)}$, Sb$_\textrm{i(oct, Sb2Li4)}$ and Sb$_\textrm{i(oct, SbLi5)}$). The chemical potentials of Li and Sb are obtained from their phase diagram. The VBM is set to zero.} \label{Li3Sb_defects} \end{myfigure} \figref{dfe_AlP}, \figref{dfe_LiH}, \figref{dfe_BaLiH3}, \figref{dfe_SrTe} and \figref{dfe_Al2ZnS4} show the defect formation energies and phase diagrams of AlP, LiH, \ce{BaLiH3}, CsH, SrTe and \ce{Al2ZnS4} (computed by DFT), respectively. The facets of the phase diagrams in which the chemical potentials of the elements were estimated are indicated on the corresponding defect-formation-energy plots.
It is worth noting that although DFT calculations using GGA underestimate the band gap, the defect formation energies computed with GGA are still reliable\cite{Peng2013}. In HSE computations the VBM shifts down while the CBM shifts up; therefore, the general trend of the defect formation energy is similar to that in DFT-GGA computations, and the change in the formation energy is small as well\cite{Peng2013}. \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.65\linewidth]{supfigures/AlP_joint.pdf} \end{center} \vspace{-15pt} \caption{The phase diagram and defect formation energies of AlP in Al-rich and P-rich conditions.} \label{dfe_AlP} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.65\linewidth]{supfigures/LiH_joint.pdf} \end{center} \vspace{-15pt} \caption{The phase diagram and defect formation energies of LiH in H-rich and Li-rich conditions.} \label{dfe_LiH} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/BaLiH3_joint.pdf} \end{center} \vspace{-15pt} \caption{The phase diagram and defect formation energies of \ce{BaLiH3} in different conditions corresponding to the numbers marked in different facets.} \label{dfe_BaLiH3} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.65\linewidth]{supfigures/SrTe_joint.pdf} \end{center} \vspace{-15pt} \caption{The phase diagram and defect formation energies of SrTe in Sr-rich and Te-rich conditions.} \label{dfe_SrTe} \end{myfigure} \begin{myfigure}[!ht] \begin{center} \includegraphics[width=0.75\linewidth]{supfigures/Al2ZnS4_joint.pdf} \end{center} \vspace{-15pt} \caption{The phase diagram and defect formation energies of \ce{Al2ZnS4} in different conditions corresponding to the numbers marked in different facets.} \label{dfe_Al2ZnS4} \end{myfigure} \FloatBarrier
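The formation energies plotted above follow the standard expression $E_f(D,q) = E_{\rm tot}(D,q) - E_{\rm tot}({\rm bulk}) - \sum_i n_i \mu_i + q\,(E_{\rm VBM} + E_F)$ for a defect $D$ in charge state $q$. The sketch below evaluates it for a hypothetical vacancy-type acceptor; all total energies and chemical potentials are invented placeholders, and finite-size (image-charge) corrections are omitted.

```python
def formation_energy(e_defect, e_bulk, n_mu, q, e_vbm, e_fermi):
    """Charged-defect formation energy (finite-size corrections omitted).

    n_mu = sum_i n_i * mu_i, with n_i > 0 for atoms added to form the defect;
    e_fermi is measured from the VBM (e_vbm).
    """
    return e_defect - e_bulk - n_mu + q * (e_vbm + e_fermi)

# Placeholder totals (eV) for a vacancy-type acceptor in charge states 0 and -1.
e_bulk, e_vbm = -100.0, 0.0
n_mu = -5.0  # one atom removed (n = -1) at chemical potential mu = 5 eV

ef_q0 = formation_energy(-103.0, e_bulk, n_mu, q=0, e_vbm=e_vbm, e_fermi=0.0)
ef_qm1_0 = formation_energy(-102.5, e_bulk, n_mu, q=-1, e_vbm=e_vbm, e_fermi=0.0)

# (0/-1) charge-transition level: the Fermi energy where the two lines cross.
eps_0_m1 = (ef_q0 - ef_qm1_0) / (-1 - 0)
```

Plotting such lines for every defect and charge state, and taking the lower envelope, reproduces the kinked curves shown in the defect figures.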
\section{Introduction} A defect---or interface---in conformal field theory is generally defined as a non-contractible line separating two a priori different conformal field theories (CFTs), with matching conditions between the two sides of the line. Various situations can be encountered in this general context. We will restrict here to the case of so-called topological defects, where the two CFTs are identical, and the stress-energy tensor is continuous across the defect line. In this case, correlation functions for fields inserted away from the defect line are unchanged when the line is continuously deformed, as long as the line is not taken across the field insertions: hence the name ``topological''. Defects in CFT appear in a variety of physical problems, both in two-dimensional statistical mechanics, e.g.\ in the context of Kramers--Wannier duality~\cite{FFRS,FFRS1}, and in imaginary-time one-dimensional quantum mechanics, e.g.\ in the context of quantum impurity problems such as the Kondo problem~\cite{BG,OA}. The problem of classifying topological defects has received considerable attention, in particular in the case of rational CFTs~\cite{PZ,Petkova}. For such theories with diagonal modular invariants, for instance, it has been shown that the set of defects is isomorphic with the set of representations of the chiral algebra. Many results for non-diagonal invariants, or for non-rational unitary theories such as Liouville \cite{Sark} are also known. \smallskip Meanwhile, the general question of relating structures within the CFTs with properties of underlying lattice models has also attracted much attention. 
Work in this direction has included attempts to define lattice versions of the Virasoro algebra \cite{KooSaleur,Vidal,ZW}, to define fusion of primary fields in terms of the representation theory of lattice algebras \cite{ReadSaleur,GV,GS,GJS,BSA, BelleteteFusion}, to calculate modular transformations from lattice partition functions \cite{PS}, and to build topological defect lines directly on the lattice \cite{Fendley}. Many of these attempts drew from the pioneering work of Kadanoff and Ceva \cite{KadanoffCeva}. \smallskip The present work is motivated by our interest in non-unitary (in particular, logarithmic) conformal field theory (LCFT)~\cite{SpecialIssue}. Decisive progress has been realized in this difficult subject by turning to lattice models---in particular, to understand better the indecomposable properties of the Virasoro-algebra representations involved. In view of the close relationship between defects, primary fields and fusion in the unitary case, it is natural to continue the program set out in \cite{Ourreview} by trying to define topological defects on the lattice using an algebraic approach. While such an endeavor has been partially completed in the case of restricted solid-on-solid models---whose associated CFTs are rational, and which are closely related to ``anyonic'' spin chains \cite{Fendley}---we will be interested here in the profoundly different case of loop models, which provide regularizations of the simplest known LCFTs. This paper discusses the first part of our study, in which we focus on the definition and mathematical properties of a certain kind of lattice topological defects. \smallskip The correspondence between CFTs and lattice models is often best handled by thinking of the CFT in radial quantization, where, after the usual logarithmic mapping, (imaginary) time propagation occurs along the axis of a cylinder, and space is periodic.
In this point of view, the non-contractible line for the topological defect can either run along the infinite cylinder, or be a non-contractible loop winding around it. We will refer to these two situations as a defect in the ``direct'' or in the ``crossed'' channel, respectively, see Fig.~\ref{TopoZ}. \begin{figure} \begin{center} $\hbox{crossed channel} \qquad \qquad$ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \draw[black, line width = 1pt ] (-2,-2) -- (-2,2); \draw[black, line width = 1pt ] (2,-2) -- (2,2); \draw[black, line width = 1pt ] (-2,-2) .. controls (-2,-3) and (2,-3) .. (2,-2); \draw[black, line width = 1pt ] (-2,2) .. controls (-2,1) and (2,1) .. (2,2); \draw[black, line width = 1pt ] (-2,2) .. controls (-2,3) and (2,3) .. (2,2); \draw[blue, line width = 1pt, dotted] (-2,0) .. controls (-2,-1) and (2,-1) .. (2,0); \end{tikzpicture} $ \quad \equiv \quad $ \begin{tikzpicture}[scale = 4/9, baseline = {(current bounding box.center)}] \def\alpha{3.2}; \def\b{1.5}; \def\Pi{3.14159265359}; \draw[line width = 1pt, domain = 0:2*\Pi] plot ({\alpha*cos(\x r)}, {\b*sin(\x r)}); \draw[line width = 1pt, domain = \Pi/4:3*\Pi/4] plot ({.5*\alpha*cos(\x r)}, {.5*\b*sin(\x r)-.5}); \draw[line width = 1pt, domain = -.5+5*\Pi/4: .5 + 7*\Pi/4] plot ({.5*\alpha*cos(\x r)}, {.5*\b*sin(\x r)+.5}); \draw[dotted, blue, line width = 1pt] (0,-.25) .. controls (.5,-.5) and (-.5, -1) .. (0,-1.5); \draw[black, line width = 1pt, ->] (-2, 0) .. controls (-2,-.25) and (-1,-.75) .. (-.5,-.75); \end{tikzpicture} \\ $\hbox{direct channel} \qquad \qquad$ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \draw[black, line width = 1pt ] (-2,-2) -- (-2,2); \draw[black, line width = 1pt ] (2,-2) -- (2,2); \draw[blue, line width = 1pt, dotted] (0,-3) .. controls (-1,-1) and (1,1) .. (0,3); \filldraw[white] (-2,-2) .. controls (-2,-3) and (2,-3) .. (2,-2) -- (2,-3.5) -- (-2,-3.5) -- (-2,2); \filldraw[white] (-2,2) .. 
controls (-2, 1) and (2,1) .. (2,2) -- (2,3.5) -- (-2,3.5) -- (-2,2); \draw[black, line width = 1pt ] (-2,-2) .. controls (-2,-3) and (2,-3) .. (2,-2); \draw[black, line width = 1pt ] (-2,2) .. controls (-2,1) and (2,1) .. (2,2); \draw[black, line width = 1pt ] (-2,2) .. controls (-2,3) and (2,3) .. (2,2); \end{tikzpicture} $ \quad \equiv \quad$ \begin{tikzpicture}[scale = 4/9, baseline = {(current bounding box.center)}] \def\alpha{3.2}; \def\b{1.5}; \def\Pi{3.14159265359}; \draw[line width = 1pt, domain = 0:2*\Pi] plot ({\alpha*cos(\x r)}, {\b*sin(\x r)}); \draw[line width = 1pt, domain = \Pi/4:3*\Pi/4] plot ({.5*\alpha*cos(\x r)}, {.5*\b*sin(\x r)-.5}); \draw[line width = 1pt, domain = -.5+5*\Pi/4: .5 + 7*\Pi/4] plot ({.5*\alpha*cos(\x r)}, {.5*\b*sin(\x r)+.5}); \draw[dotted, blue, line width = 1pt] (0,-.25) .. controls (.5,-.5) and (-.5, -1) .. (0,-1.5); \draw[black, line width = 1pt, ->] (.5, -1.25) -- (.5,-.5); \end{tikzpicture} \caption{The two possible geometries for a defect line after mapping the plane to the cylinder. }\label{TopoZ} \end{center} \end{figure} In the crossed channel, the defect can be associated with an operator $X$ acting on the Hilbert space of the bulk CFT. The defect is topological if $X$ commutes with the chiral $Vir$ and the anti-chiral $\overline{Vir}$ Virasoro generators~\cite{Petkova}: \begin{equation} [L_n,X]=0=[\bar{L}_n,X]\label{centVir}\ . \end{equation} Our strategy to identify the possible choices of operators $X$ is based on the identification of (representations of) the Virasoro algebra via the continuum limit of the Temperley-Lieb (TL) algebra---an idea that has been used in several works on related topics \cite{KooSaleur, GRS3,GRSV1}. This ``identification'' must be qualified. First, since we are dealing with bulk CFTs, we must think of the product of the chiral and anti-chiral Virasoro algebras, ${Vir}\otimes \overline{Vir}$. 
Similarly, since the lattice models are defined on a cylinder, the proper lattice algebra is a ``periodicized'' version---the affine Temperley-Lieb algebra~$\mathsf{aTL}$: strictly speaking, the continuum limit of this algebra is known to be larger than ${Vir}\otimes \overline{Vir}$, and has been identified as the ``interchiral algebra'' in~\cite{GRS3}. In the typical physical interpretation of the (affine) Temperley-Lieb algebras on $n$ sites, the nodes on the top and bottom of the TL diagrams should be interpreted as a chain of $n$ subsystems whose interactions are determined by the TL generators, but whose internal sub-structure is not---it is determined by the specific model chosen, which also fixes the $\atl{}$ representation corresponding to the chain. The simplest examples of these are the various kinds of spin chains, like the twisted XXZ model. We will therefore start our search for lattice analogues of topological defects by demanding the closest lattice equivalent of~\eqref{centVir}, that is, by looking for operators $X$ on the lattice that commute with the interactions in the chain, or, in a model-independent setting, that are central in $\mathsf{aTL}$. We will adopt this model-independent point of view, regarding lattice defects as central elements satisfying certain nice properties, e.g.\ having a well-defined fusion. This is discussed in Section~\ref{sec:3} after the algebraic preliminaries of Section~\ref{sec:2}, where we recall the usual definition of $\mathsf{aTL}$ together with a less standard formulation using a blobbed set of generators. In this last formulation, the lattice meaning of~$X$ turns out to be very simple: it simply consists in passing a line ``above'' or ``below'' the non-contractible loops by using solutions of the spectral-parameter independent Yang-Baxter equation exchanging spin-$1/2$ (the value relevant for bulk loops) and spin-$j$ (the value relevant for the defect lines) representations.
The topological nature of this defect is obvious, as the Yang-Baxter equation allows one to move and deform the defect line at will without changing either the partition function or the correlation functions when operators are inserted. The simplest example of such a defect operator $X$ is given by a diagram corresponding to a single non-contractible loop passing \textit{over} the bulk, see Fig.~\ref{fig:Y-commTL} where we denote this operator by~$Y$. This operator and its powers are manifestly in the center of the affine TL algebra. We define similarly operators~$\bar{Y}$ where the non-contractible loop is passing \textit{under} the bulk. The two operators generate an interesting algebra of defects. Let us describe this type of defect operators in more precise mathematical terms. First of all, the $\mathsf{aTL}$ algebras depend on $n$ (the number of sites) and a loop parameter $\mathfrak{q}+\mathfrak{q}^{-1}$. In this paper, we consider only the case of $\mathfrak{q}$ a generic complex number (not a root of unity; we leave the discussion of the root-of-unity case for a forthcoming paper). The $\mathsf{aTL}$ algebra can be obtained as a quotient of the so-called affine Hecke algebra of type $\hat{A}_{n-1}$, where all central elements are known---they form the algebra of symmetric Laurent polynomials in the Jucys-Murphy elements $J_i$, for $1\leq i \leq n$. One of our main mathematical results in this paper is that the image of this affine Hecke center inside~$\mathsf{aTL}$ is generated by the two elements $Y$ and $\bar{Y}$, i.e.\ their powers can be written as symmetric polynomials in the $J_i$, and vice versa. We shall call this natural subalgebra of the center of $\mathsf{aTL}$ the \textit{symmetric center} $\mathsf{Z}_{\mathrm{sym}}$.
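As a point of comparison for the structure constants that appear here: products of Chebyshev polynomials of the second kind famously decompose with non-negative integer, multiplicity-one coefficients, in the pattern of $SU(2)$ fusion rules. The following quick symbolic check of this classical fact is an illustration only (it is not a computation inside $\mathsf{aTL}$ itself):

```python
# Illustration of the SU(2)-like product formula for Chebyshev polynomials
# of the second kind, U_m U_n = sum_{j=0}^{min(m,n)} U_{m+n-2j}: products
# decompose with multiplicity-one, non-negative integer coefficients.
# (Classical mathematics, shown only as an analogy for the structure
# constants discussed in the text.)
import sympy as sp

x = sp.symbols('x')

def U(m):
    """Chebyshev polynomial of the second kind, U_m(x)."""
    return sp.chebyshevu(m, x)

for m in range(5):
    for n in range(5):
        lhs = sp.expand(U(m) * U(n))
        rhs = sp.expand(sum(U(m + n - 2 * j) for j in range(min(m, n) + 1)))
        assert lhs == rhs

print("U_m U_n = U_{m+n} + U_{m+n-2} + ... + U_{|m-n|} verified for m, n < 5")
```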
Moreover, we show that products of Chebyshev polynomials in $Y$ and $\bar{Y}$ provide a ``canonical'' basis of $\mathsf{Z}_{\mathrm{sym}}$ with \textsl{non-negative integer} structure constants, i.e.\ a product of two defect operators decomposes onto defect operators again, with non-negative multiplicities. The multiplicities are interpreted as fusion rules of the defects. \smallskip Of course, the line passing above or below the loops can as well be taken to run along the axis of the cylinder, i.e.\ along the time direction. This corresponds to having the defect in the direct channel. In this setting, the presence of the defect line leads to a modified Hilbert space where an extra representation of spin $j$ is introduced, together with a Hamiltonian suitably modified by corresponding ``defect'' terms. This is discussed in Section~\ref{sec:4}, where we relate spectral properties of such a modified Hamiltonian (which is hard to study directly) to a clear and precise algebraic construction within the representation theory of $\mathsf{aTL}$ algebras---namely the fusion product and the fusion quotient. The first is based on a certain induction, while the second is dual to it and practically very convenient for actual calculations. In simple terms, the spectrum of the spin-$j$ defect Hamiltonian is given by the spectrum of the standard affine TL Hamiltonian with no defects, acting however on the fusion quotient of an $\mathsf{aTL}$ representation by the spin-$j$ standard TL representation. The advantage of this construction is that it allows us to perform precise calculations, as we demonstrate in several examples, including the case of the twisted XXZ model. \smallskip In the last section~\ref{sec:5}, we provide conclusions and discuss a CFT interpretation, together with further steps that will be discussed in subsequent papers, such as the analysis of the modular $S$-transformation in infinite lattices and the continuum limit from a more physical point of view.
In Section~\ref{sec:5}, we also make an attempt to give a precise mathematical definition of lattice defects studied in this work. Finally, several appendices contain proofs of our mathematical results and auxiliary calculations, such as examples of fusion products and fusion quotients. \begin{figure} \begin{center} \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {1,2,3,6,7,8}{ \draw[line width = 1pt, black] (\i,-3) -- (\i,0); }; \draw[line width = 1pt, black] (4,-1) -- (4,0); \draw[line width = 1pt, black] (5,-1) -- (5,0); \draw[line width = 1pt, black] (4,-3) .. controls (4,-2) and (5,-2) .. (5,-3); \draw[line width = 1pt, black] (4,-1) .. controls (4,-2) and (5,-2) .. (5,-1); \draw[line width = 3pt, white] (9,0) .. controls (9,-1) and (0,-1) .. (0,0); \draw[line width = 1pt, black] (9,0) .. controls (9,-1) and (0,-1) .. (0,0); \draw[line width = 1pt, black , dotted] (0,0) .. controls (0,1) and (9,1) .. (9,0); \foreach \i in {1,2,3,4,5,6,7,8}{ \draw[line width = 3pt, white] (\i,3) -- (\i,0); \draw[line width = 1pt, black] (\i,3) -- (\i,0); }; \end{tikzpicture} $\quad = \quad $ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {1,2,3,4,5,6,7,8}{ \draw[line width = 1pt, black] (\i,-3) -- (\i,0); }; \draw[line width = 1pt, black , dotted] (0,0) .. controls (0,1) and (9,1) .. (9,0); \foreach \i in {1,2,3,6,7,8}{ \draw[line width = 3pt, white] (\i,3) -- (\i,0); \draw[line width = 1pt, black] (\i,3) -- (\i,0); }; \draw[line width = 3pt, white] (4,0) -- (4,1); \draw[line width = 1pt, black] (4,0) -- (4,1); \draw[line width = 3pt, white] (5,0) -- (5,1); \draw[line width = 1pt, black] (5,0) -- (5,1); \draw[line width = 3pt, white] (4,1) .. controls (4,2) and (5,2) .. (5,2); \draw[line width = 1pt, black] (4,3) .. controls (4,2) and (5,2) .. (5,3); \draw[line width = 1pt, black] (4,1) .. controls (4,2) and (5,2) .. (5,1); \draw[line width = 3pt, white] (9,0) .. 
controls (9,-1) and (0,-1) .. (0,0); \draw[line width = 1pt, black] (9,0) .. controls (9,-1) and (0,-1) .. (0,0); \end{tikzpicture} \caption{Commutativity of the defect operator $Y$ with the $e_j$ generators of $\mathsf{aTL}$.}\label{fig:Y-commTL} \end{center} \end{figure} \section{Algebraic preliminaries: the affine TL algebra}\label{sec:2} In this section, we fix our notations and conventions. We first give a definition of the affine Temperley-Lieb algebra in terms of generators, and in terms of diagrams. We give the definition both in terms of the translation generator, which is very standard, and a new one in terms of the so-called blob and hoop generators; the blob formulation is significantly more convenient when discussing topological defects. We then discuss the standard modules and give the eigenvalues of the common central elements on them. The orbit classification shows that representations can be identified by the eigenvalues of the topological defect operators and the full translation operator~$u^{n}$. \subsection{Two definitions} The affine Temperley-Lieb algebras $\lbrace \atl{n}(\mathfrak{q}) \rbrace$ form a family of infinite dimensional associative $\mathbb{C}$-algebras, indexed by a positive integer $n$ -- the number of sites -- and a non-zero complex number $\mathfrak{q}$. They can be defined in many ways, but we chose three particular presentations for their relevance in physics: two in terms of generators and relations, and a diagrammatic one. Each was chosen because it lightens the notation in particular subsections of this work.
The first set of generators, which we shall refer to as the \emph{periodic} set of generators, is the one appearing in the original literature on these algebras: two \emph{shift} generators $u, u^{-1}$, and $n$ \emph{arc} generators $e_{1}, \hdots, e_{n}$, with the defining relations ($n > 2$) \begin{align} e_{i}e_{i} & = (\mathfrak{q} + \mathfrak{q}^{-1})e_{i},\notag\\ e_{i}e_{i \pm 1}e_{i} & = e_{i} , \notag\\ e_{i}e_{j} & = e_{j}e_{i} \; \text{ if } |i-j| \geq 2, \label{eq:period-gen-1}\\ u e_{i} &= e_{i+1} u,\notag \\ u^{2}e_{n-1} & = e_{1} \hdots e_{n-1}, \notag \end{align} which hold for all $i$, where we set $e_{0} \equiv e_{n}$, $e_{n+1} \equiv e_{1}$. If $n = 2$, one must remove the relations $e_{i}e_{i\pm 1} e_{i} = e_{i}$, but the other relations are unchanged. If $n=1$, one must remove all the arc generators, keeping only the shift generators with the defining relations $u u^{-1} = u^{-1}u = 1$. One notices immediately that this set of generators is not minimal, since for instance $e_{i} = u^{i-1} e_{1} u^{1-i}$ for all $i\geq 1$. Furthermore, the elements $u^{\pm n}$ are both obviously central. The sub-algebra generated by $\lbrace e_{1}, \hdots, e_{n} \rbrace$ is often called the \emph{periodic} Temperley-Lieb algebra, while the one generated by $\lbrace e_{1}, \hdots, e_{n-1} \rbrace$ is called the \emph{regular} Temperley-Lieb algebra.
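For a quick concrete check, the non-affine part of these relations (those not involving $u$) can be verified numerically in the familiar XXZ spin-chain representation of the arc generators. The following sketch does this for $n=3$; the explicit $4\times4$ matrix and the numerical value of $\mathfrak{q}$ are illustrative assumptions, not tied to a specific model discussed in the text:

```python
# Numerical sanity check of the finite (non-affine) TL relations on
# n = 3 sites, in the standard XXZ spin-1/2 chain representation of the
# arc generators.  The 4x4 matrix for e and the value of q below are
# illustrative choices.
import numpy as np

q = 0.7 + 0.3j                        # a generic value (not a root of unity)
delta = q + 1 / q                     # the loop weight q + q^{-1}

e = np.zeros((4, 4), dtype=complex)
e[1:3, 1:3] = [[q, -1], [-1, 1 / q]]  # acts on the span of |01>, |10>

I2 = np.eye(2)
e1 = np.kron(e, I2)                   # e_1, acting on sites (1, 2)
e2 = np.kron(I2, e)                   # e_2, acting on sites (2, 3)

assert np.allclose(e1 @ e1, delta * e1)   # e_i e_i = (q + q^{-1}) e_i
assert np.allclose(e1 @ e2 @ e1, e1)      # e_i e_{i+1} e_i = e_i
assert np.allclose(e2 @ e1 @ e2, e2)
print("TL relations hold in the XXZ representation")
```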
The second set of generators, which we shall refer to as the \emph{blobbed} set of generators, is significantly less known: there are two \emph{blob} generators $b,b^{-1}$, and $n-1$ \emph{arc} generators $e_i$, $1\leq i \leq n-1$ (so if $n= 1$ there are no arc generators), with defining relations \begin{align} e_{i}e_{i} & = (\mathfrak{q} + \mathfrak{q}^{-1})e_{i}, \notag\\ e_{i}e_{i \pm 1}e_{i} &= e_{i}, \notag\\ e_{i}e_{j}&= e_{j}e_{i} \qquad \text{ if } |i-j| \geq 2,\label{eq:blob-gen-1}\\ e_{i} b &= b e_{i} \qquad\;\; \text{ if } i \geq 2, \notag\\ e_{1}be_{1} &= (\underbrace{\mathfrak{q} b + \mathfrak{q}^{-1}b^{-1}}_{\equiv -Y})e_{1} = e_{1}(\mathfrak{q} b + \mathfrak{q}^{-1}b^{-1}), \notag \end{align} which hold for all $i$ for which these expressions make sense; note that in this case there is no generator~$e_n$. We also note that the element $Y \equiv -\mathfrak{q} b - \mathfrak{q}^{-1}b^{-1}$ introduced in the above relations is central; it will be called the \emph{hoop} operator\footnote{The name will be justified by its diagrammatic presentation, discussed below.}. We want to stress that in our formulation the blob generator $b$ is invertible. The epithet \emph{blob} here refers to the relation with the so-called \emph{blob algebra}~\cite{MartinSaleur}, a finite-dimensional algebra in which the blob element is not invertible but an idempotent. This latter algebra is obtained by taking the quotient of $\atl{n}(\mathfrak{q})$ by the two-sided ideal $\atl{n}(\mathfrak{q})\cdot (Y - y1)$ for some $y \in \mathbb{C}$; in simple words, the blob algebra is obtained by fixing the eigenvalue of $Y$. See for instance~\cite{Halverson}.
The connection with the first description, i.e.\ the one in terms of ``periodic''-type generators, is as follows (we place the periodic-type generators on the right-hand side): \begin{align} e_i &= e_i , \qquad 1\leq i\leq n-1 ,\notag\\ b &= (-\mathfrak{q})^{-3/2} g_{1}^{-1}\hdots g_{n-1}^{-1}u^{-1}, \label{eq:blob-ug-1}\\ b^{-1} &= (-\mathfrak{q})^{3/2} u\, g_{n-1}\hdots g_{1},\label{eq:blob-ug-2} \end{align} where we introduced the \emph{braid} generators \begin{equation}\label{eq:braids-g} g^{\pm1}_{i} = (-\mathfrak{q})^{\pm 1/2}1 + (-\mathfrak{q})^{\mp 1/2} e_{i}. \end{equation} It is straightforward to check the braid relations \begin{equation} g_i g_{i\pm1}g_i = g_{i\pm1}g_i g_{i\pm1}. \end{equation} The normalization\footnote{The normalization used here for $g_i$ will also become useful when doing graphical calculations.} in~\eqref{eq:braids-g} was chosen such that \begin{eqnarray} g_i^{\pm 1} g_{i+1}^{\pm 1} e_i &=& e_{i+1} g_i^{\pm 1} g_{i+1}^{\pm 1} = e_{i+1} e_i \,, \nonumber \\ g_{i+1}^{\pm 1} g_i^{\pm 1} e_{i+1} &=& e_i g_{i+1}^{\pm 1} g_i^{\pm 1} = e_i e_{i+1} \,. \label{gge_rels} \end{eqnarray} These relations are used to prove the equivalence of the relations in equations~\eqref{eq:blob-gen-1} and~\eqref{eq:period-gen-1}, specifically when verifying those involving $b$, $u$, or $e_{n}$. An expression of the periodic generators in terms of the blobbed ones is obtained as follows: the shift generators $u^{\pm1}$ are obtained by multiplying both sides of~\eqref{eq:blob-ug-1}-\eqref{eq:blob-ug-2} by appropriate $g_i^{\pm1}$'s, and the generator $e_n$ is then formally defined as $u^{-1}e_1 u$. It is then rather straightforward, albeit tedious, to show that the defining relations~\eqref{eq:period-gen-1} are equivalent to those in~\eqref{eq:blob-gen-1}.
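The braid relations and the relations~\eqref{gge_rels} can likewise be checked numerically in the same illustrative XXZ-type representation of the arc generators (again an assumption made only for this sanity check, with an arbitrary generic numerical value of $\mathfrak{q}$):

```python
# Numerical check of the braid relation and of the g-g-e relations,
# using illustrative XXZ-type matrices for the arc generators on n = 3
# sites (an assumption for this check, not a statement about a model).
import numpy as np

q = 0.7 + 0.3j
a = (-q) ** 0.5                       # the square root (-q)^{1/2}

e = np.zeros((4, 4), dtype=complex)
e[1:3, 1:3] = [[q, -1], [-1, 1 / q]]
I2, I8 = np.eye(2), np.eye(8)
e1, e2 = np.kron(e, I2), np.kron(I2, e)

g1 = a * I8 + e1 / a                  # g_i = (-q)^{1/2} 1 + (-q)^{-1/2} e_i
g2 = a * I8 + e2 / a

assert np.allclose(g1 @ (I8 / a + a * e1), I8)   # g_i g_i^{-1} = 1
assert np.allclose(g1 @ g2 @ g1, g2 @ g1 @ g2)   # braid relation
assert np.allclose(g1 @ g2 @ e1, e2 @ e1)        # g_1 g_2 e_1 = e_2 e_1
assert np.allclose(e2 @ g1 @ g2, e2 @ e1)        # e_2 g_1 g_2 = e_2 e_1
print("braid relations and g-g-e relations verified")
```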
We give one example of such computations, as the others are all quite similar; recalling the proposed form for $b$ in~\eqref{eq:blob-ug-1}, we verify that \begin{align*} (\mathfrak{q} b)^{2} e_{1} & = (-\mathfrak{q})^{-1} u g^{-1}_{n-1} \hdots g_{1}^{-1} u g_{n-1}^{-1}\hdots \underbrace{g_{1}^{-1}e_{1}}_{ = -(-\mathfrak{q})^{3/2} e_{1}},\\ & = - u g^{-1}_{n-1} \hdots g_{2}^{-1} \left( 1 - \mathfrak{q} e_{1}\right)u g_{n-1}^{-1}\hdots g_{2}^{-1}e_{1} ,\\ & = - u^{2}\underbrace{g_{n-2}^{-1} \hdots g_{1}^{-1}g_{n-2}^{-1} \hdots g_{2}^{-1} e_{1}}_{= e_{n-1}e_{n-2}\hdots e_{1}} + \mathfrak{q} b e_{1}b e_{1} ,\\ & = - u^{2}e_{n-1} e_{n-2}\hdots e_{1} + \mathfrak{q} b e_{1} b e_{1},\\ & = - e_{1} + \mathfrak{q} b e_{1}b e_{1}, \end{align*} and then multiplying both sides by $b^{-1}$ from the left yields the identity $e_{1}b e_{1} = (\mathfrak{q} b + \mathfrak{q}^{-1}b^{-1})e_{1}$ from the list in~\eqref{eq:blob-gen-1}. We also note that in the context of blob algebras (recalled above as quotients), the relation between periodic and blobbed generators reflects what was called ``braid translation'' in~\cite{MartinSaleur,BGJSR}. \medskip We will also use another relation between the periodic and blobbed sets of generators: because the substitution $b \to b^{-1}$, $\mathfrak{q} \to \mathfrak{q}^{-1}$ defines an algebra automorphism, there is a second way to write the blob generators in terms of the generators of ``periodic'' type: \begin{equation}\label{eq:b-bar-def} \begin{split} \bar{b} & = (-\mathfrak{q})^{-3/2} u\, g_{n-1}^{-1}\hdots g_{1}^{-1}, \\ \bar{b}^{-1} &= (-\mathfrak{q})^{3/2} g_{1}\hdots g_{n-1}u^{-1} . \end{split} \end{equation} We now turn to the diagrammatic presentation of both types of generators; checking such an equivalence (i.e.\ the isomorphism between the two presentations) is much easier using standard diagram calculations.
\subsection{Diagrammatic presentation}\label{sec:diag-present} We now introduce the graphical presentation of the algebra, which can be used to write words in the algebra in a very compact and intuitive form. Each of the classical generators gets associated to a diagram with $2n$ nodes connected by $n$ \emph{strands}, or \emph{lines}: \begin{align} e_{i} = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (8.5,1); \draw[line width = 1pt, black] (0.5,3) -- (8.5,3); \draw[line width = 1pt, black] (1,1) -- (1,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \draw[line width = 1pt, black] (4,1) .. controls (4,2) and (5,2) .. (5,1); \draw[line width = 1pt, black] (4,3) .. controls (4,2) and (5,2) .. (5,3); \draw[line width = 1pt, black] (6,1) -- (6,3); \draw[line width = 1pt, black] (8,1) -- (8,3); \node[anchor = north] at (2,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (3,0.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \node[anchor = north] at (7,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (6,0.5) -- (8,0.5) node [midway,yshift = -7pt] {\footnotesize{n-i-1}}; \end{tikzpicture}\; ,& \qquad e_{n} = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (8.5,1); \draw[line width = 1pt, black] (0.5,3) -- (8.5,3); \foreach \s in {2,3,6,7}{ \draw[line width = 1pt, black ] (\s,1) -- (\s, 3); } \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (0,2) .. (0,3); \draw[line width = 1pt, black] (8,1) .. controls (8,2) and (9,2) .. (9,1); \draw[line width = 1pt, black] (8,3) .. controls (8,2) and (9,2) .. 
(9,3); \filldraw[line width = 2pt, white] (0,1) -- (0.5,1) -- (0.5,3) -- (0,3) -- (0,1); \filldraw[line width = 2pt, white] (9,1) -- (8.5,1) -- (8.5,3) -- (9,3) -- (9,1); \node[anchor = north] at (4.5,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (7,0.5) node [midway,yshift = -7pt] {\footnotesize{n-2}}; \end{tikzpicture}\;, & i = 1, \hdots, n-1,\\ u = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (8.5,1); \draw[line width = 1pt, black] (0.5,3) -- (8.5,3); \foreach \s in {1,2,3,6,7,8,9}{ \draw[line width = 1pt, black ] (\s,1) .. controls (\s, 2) and (\s -1, 2) .. (\s-1, 3); }; \filldraw[line width = 2pt, white] (0,1) -- (0.5,1) -- (0.5,3) -- (0,3) -- (0,1); \filldraw[line width = 2pt, white] (9,1) -- (8.5,1) -- (8.5,3) -- (9,3) -- (9,1); \node[anchor = north] at (4,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (8,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \end{tikzpicture}\;, & \qquad u^{-1} = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (8.5,1); \draw[line width = 1pt, black] (0.5,3) -- (8.5,3); \foreach \s in {1,2,3,6,7,8,9}{ \draw[line width = 1pt, black ] (\s -1,1) .. controls (\s -1, 2) and (\s, 2) .. 
(\s, 3); }; \filldraw[line width = 2pt, white] (0,1) -- (0.5,1) -- (0.5,3) -- (0,3) -- (0,1); \filldraw[line width = 2pt, white] (9,1) -- (8.5,1) -- (8.5,3) -- (9,3) -- (9,1); \node[anchor = north] at (4,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (8,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \end{tikzpicture}, & 1_{\atl{n}} = \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (8.5,1); \draw[line width = 1pt, black] (0.5,3) -- (8.5,3); \draw[line width = 1pt, black] (1,1) -- (1,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \draw[line width = 1pt, black] (4,1) -- (4,3); \draw[line width = 1pt, black] (5,1) -- (5,3); \draw[line width = 1pt, black] (6,1) -- (6,3); \draw[line width = 1pt, black] (8,1) -- (8,3); \node[anchor = north] at (2,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (3,0.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \node[anchor = north] at (7,2.5) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (6,0.5) -- (8,0.5) node [midway,yshift = -7pt] {\footnotesize{n-i-1}}; \end{tikzpicture},\label{eq:u-inv-diag} \end{align} where the opposing vertical sides are identified, so these drawings should be imagined as being drawn on a cylinder, with the top and bottom black lines resting on its top and bottom edge, respectively. Strands that connect both edges of the cylinder are called \emph{through lines}. One can show that every diagram which can be drawn on this cylinder with $n$ non-intersecting strands represents a non-zero element of the algebra, and every such element is represented by a unique diagram, up to an isotopy of the strands that fixes the boundary.
Sums of elements of the algebra can be understood as formal sums of diagrams, and products in the algebra are computed using \emph{diagram composition}\footnote{In this work, products of operators are read from left to right, and diagrams are read from bottom to top. In some of the authors' previous work, for instance in \cite{GS}, the opposite convention is used: operators are multiplied from right to left and diagrams are read from top to bottom.}: the diagram $ab$ is defined by putting the diagram for $b$ on top of the diagram for $a$ and joining the strands that meet. A closed arc that is homotopic to a point is simply removed and replaced by a factor $\mathfrak{q} + \mathfrak{q}^{-1}$. For instance, here are some of the defining relations of the algebra in the diagrammatic presentation (for $n = 3$): \begin{equation} e_{1}e_{1} = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (3.5,1); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (0.5,5) -- (3.5,5); \draw[line width = 1pt, black] (1,3) .. controls (1,4) and (2,4) .. (2,3); \draw[line width = 1pt, black] (1,5) .. controls (1,4) and (2,4) .. (2,5); \draw[line width = 1pt, black] (3,1) -- (3,5); \end{tikzpicture} \; = (\mathfrak{q} + \mathfrak{q}^{-1})e_{1}, \qquad e_{1}e_{2}e_{1} = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (3.5,1); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \draw[line width = 1pt, black] (0.5,7) -- (3.5,7); \draw[line width = 1pt, black] (3,5) -- (3,7); \draw[line width = 1pt, black] (1,7) .. controls (1,6) and (2,6) .. (2,7); \draw[line width = 1pt, black] (1,5) ..
controls (1,6) and (2,6) .. (2,5); \draw[line width = 1pt, black] (3,3) .. controls (3,4) and (2,4) .. (2,3); \draw[line width = 1pt, black] (3,5) .. controls (3,4) and (2,4) .. (2,5); \draw[line width = 1pt, black] (1,3) -- (1,5); \end{tikzpicture}\; = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (3.5,1); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (0.5,3) -- (3.5,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \end{tikzpicture}\; = e_{1}. \end{equation} \begin{figure} \begin{align*} &\begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (3,2) .. (3,3); \draw[line width = 3pt, white] (3,1) .. controls (3,2) and (1,2) .. (1,3); \draw[line width = 1pt, black] (3,1) .. controls (3,2) and (1,2) .. (1,3); \end{tikzpicture} \; \equiv (-\mathfrak{q})^{\frac{1}{2}}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}, rotate = 90] \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (3,2).. (3,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (3,2) .. (3,3); \end{tikzpicture} \; + (-\mathfrak{q})^{-\frac{1}{2}} \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (3,2).. (3,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (3,2) .. (3,3); \end{tikzpicture}\\ g_{i} & = (-\mathfrak{q})^{\frac{1}{2}}\textbf{1} + (-\mathfrak{q})^{-\frac{1}{2}} e_{i} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \foreach \vec{r} in {1,3,6,8}{ \draw[black, line width = 1pt] (\vec{r}, 1) -- (\vec{r},3); }; \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (5,2) .. (5,3); \draw[white, line width = 3pt] (5,1) .. 
controls (5,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (5,1) .. controls (5,2) and (4,2) .. (4,3); \node[anchor = south] at (2,1.5) {$\hdots $}; \node[anchor = south] at (7,1.5) {$\hdots $}; \draw[black, line width = 2pt] (.5,1) -- (8.5,1); \draw[black, line width = 2pt] (.5,3) -- (8.5,3); \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (.5,0.5) -- (3.5,0.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (5.5,0.5) -- (8.5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-i-1}}; \end{tikzpicture}\\ g^{-1}_{i} & = (-\mathfrak{q})^{-\frac{1}{2}}\textbf{1} + (-\mathfrak{q})^{\frac{1}{2}} e_{i} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \foreach \vec{r} in {1,3,6,8}{ \draw[black, line width = 1pt] (\vec{r}, 1) -- (\vec{r},3); }; \draw[black, line width = 1pt] (5,1) .. controls (5,2) and (4,2) .. (4,3); \draw[white, line width = 3pt] (4,1) .. controls (4,2) and (5,2) .. (5,3); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (5,2) .. (5,3); \node[anchor = south] at (2,1.5) {$\hdots $}; \node[anchor = south] at (7,1.5) {$\hdots $}; \draw[black, line width = 2pt] (.5,1) -- (8.5,1); \draw[black, line width = 2pt] (.5,3) -- (8.5,3); \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (.5,0.5) -- (3.5,0.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (5.5,0.5) -- (8.5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-i-1}}; \end{tikzpicture} \end{align*} \caption{Braid notations}\label{fig:br-diag} \end{figure} For graphical presentation of the blobbed generators, we introduce first the \emph{braid notation} for the overlapping strands in Fig.~\ref{fig:br-diag}, as well as the diagram presentation of $g^{\pm1}_{i}$ introduced in~\eqref{eq:braids-g}. 
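The diagram-composition rule recalled above (stack the diagrams, join the strands that meet, and trade each contractible closed loop for a factor $\mathfrak{q}+\mathfrak{q}^{-1}$) is also easy to implement programmatically for the finite, non-affine diagrams. The following sketch is an illustration only; the affine case would in addition require tracking winding around the cylinder:

```python
# Minimal implementation of diagram composition for the *finite*
# (non-affine) Temperley-Lieb algebra.  A diagram on n strands is a
# perfect matching of the 2n points ('b', i) / ('t', i); the product ab
# stacks b on top of a, and each closed contractible loop contributes a
# factor delta = q + q^{-1}, here returned as a loop count.

def e_diagram(i, n):
    """The arc generator e_i (1 <= i <= n-1) as a perfect matching."""
    pairs = [(('b', i), ('b', i + 1)), (('t', i), ('t', i + 1))]
    pairs += [(('b', j), ('t', j)) for j in range(1, n + 1) if j not in (i, i + 1)]
    return frozenset(map(frozenset, pairs))

def compose(a, b, n):
    """Return (number of closed loops, matching) for the product ab."""
    parent = {}
    def find(p):
        parent.setdefault(p, p)
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p
    def union(p, q):
        parent[find(p)] = find(q)
    # Glue a's top row to b's bottom row: both become middle points 'M'.
    rel_a = lambda r, i: ('B', i) if r == 'b' else ('M', i)
    rel_b = lambda r, i: ('M', i) if r == 'b' else ('T', i)
    for pair in a:
        (r1, i1), (r2, i2) = tuple(pair)
        union(rel_a(r1, i1), rel_a(r2, i2))
    for pair in b:
        (r1, i1), (r2, i2) = tuple(pair)
        union(rel_b(r1, i1), rel_b(r2, i2))
    comps = {}
    for i in range(1, n + 1):
        for p in (('B', i), ('M', i), ('T', i)):
            comps.setdefault(find(p), []).append(p)
    loops, new_pairs = 0, []
    for members in comps.values():
        outer = [p for p in members if p[0] != 'M']
        if not outer:
            loops += 1                      # a contractible closed loop
        else:                               # a path with two outer ends
            (s1, i1), (s2, i2) = outer
            low = lambda s, i: ('b', i) if s == 'B' else ('t', i)
            new_pairs.append(frozenset({low(s1, i1), low(s2, i2)}))
    return loops, frozenset(new_pairs)

n = 3
e1, e2 = e_diagram(1, n), e_diagram(2, n)
assert compose(e1, e1, n) == (1, e1)        # e1 e1 = delta * e1
l1, d1 = compose(e1, e2, n)
l2, d2 = compose(d1, e1, n)
assert (l1 + l2, d2) == (0, e1)             # e1 e2 e1 = e1
print("diagrammatic TL relations hold for n = 3")
```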
Then using~\eqref{eq:u-inv-diag} we get by stacking the diagrams: \begin{equation} g_{1}^{-1}\hdots g_{n-1}^{-1} u^{-1} \quad = \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {1,2,3,5,6,7,8}{ \draw[black, line width = 1pt] (\i,3) .. controls (\i, 4) and (\i+1,4) .. (\i+1, 5); }; \filldraw[white] (1.5,2) -- (-.5,2) -- (-.5,5) -- (1.5,5) -- (1.5,2); \filldraw[white] (8.5,2) -- (9.5,2) -- (9.5,5) -- (8.5,5) -- (8.5,2); \foreach \i in {2,3,5,6,7}{ \draw[ black, line width = 1pt] ( \i+1,1) .. controls (\i +1, 2) and (\i, 2) .. (\i, 3); }; \draw[white, line width = 3pt] (2,1) .. controls (2,2) and (8,2) .. (8,3); \draw[black, line width = 1pt] (2,1) .. controls (2,2) and (8,2) .. (8,3); \node[anchor = north] at (5,1.75) {$\hdots$}; \node[anchor = south] at (4,2.75) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (3,0.5) -- (8,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \draw[black, line width = 2pt] (1.5,1) -- (8.5,1); \draw[black, line width = 2pt] (1.5,5) -- (8.5,5); \end{tikzpicture} \; = \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {2,3,5,6,7}{ \draw[ black, line width = 1pt] ( \i,1) -- (\i, 3); }; \draw[line width = 3pt, white] (0,2) -- (8,2); \draw[line width = 1pt, black] (2,2) -- (8,2); \draw[line width = 1pt, black] (1,1) .. controls (1,1.5) and (1.5,2) .. (2,2); \draw[line width = 1pt, black] (1,3) .. controls (1,2.5) and (0.5,2) .. 
(0,2); \node[anchor = north] at (4,1.75) {$\hdots$}; \node[anchor = south] at (4,2.25) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (7,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \draw[black, line width = 2pt] (.5,1) -- (8,1); \draw[black, line width = 2pt] (.5,3) -- (8,3); \end{tikzpicture} \end{equation} and a similar calculation holds for $u g_{n-1}\hdots g_{1}$. Therefore, the blob generators $b$ and $b^{-1}$ from~\eqref{eq:blob-ug-1}-\eqref{eq:blob-ug-2} can be represented as \begin{equation}\label{eq:b-diag} b = (-\mathfrak{q})^{-3/2}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \foreach \s in {2,4,5}{ \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (0,2) .. (0,1); \draw[line width = 3pt, white] (6,2) .. controls (1,2) .. (1,1); \draw[line width = 1pt, black] (6,2) .. controls (1,2) .. (1,1); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2.75) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \draw[line width = 2pt, black] (0.5,1) -- (5.5,1); \draw[line width = 2pt, black] (0.5,3) -- (5.5,3); \end{tikzpicture}\; , \qquad b^{-1} = (-\mathfrak{q})^{3/2}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \foreach \s in {2,4,5}{ \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (0,2) .. (0,3); \draw[line width = 3pt, white] (6,2) .. controls (1,2) .. (1,3); \draw[line width = 1pt, black] (6,2) .. controls (1,2) ..
(1,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2.75) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \draw[line width = 2pt, black] (0.5,1) -- (5.5,1); \draw[line width = 2pt, black] (0.5,3) -- (5.5,3); \end{tikzpicture}\;. \end{equation} It is then straightforward to check the relations~\eqref{eq:blob-gen-1} using the standard graphical manipulations together with the relations~\eqref{gge_rels}. \medskip We recall the central element $Y= -(\mathfrak{q} b + \mathfrak{q}^{-1}b^{-1})$. In the diagram basis, it can be written as \begin{center} $Y = $ $ (-\mathfrak{q})^{-\frac{1}{2}} \;$ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {2,3,5,6,7}{ \draw[ black, line width = 1pt] ( \i,1) -- (\i, 3); }; \draw[line width = 3pt, white] (0,2) -- (8,2); \draw[line width = 1pt, black] (2,2) -- (8,2); \draw[line width = 1pt, black] (1,1) .. controls (1,1.5) and (1.5,2) .. (2,2); \draw[line width = 1pt, black] (1,3) .. controls (1,2.5) and (0.5,2) .. 
(0,2); \node[anchor = north] at (4,1.75) {$\hdots$}; \node[anchor = south] at (4,2.25) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (7,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (8,1) -- (7.5,1) -- (7.5,3) -- (8,3) -- (8,1); \draw[black, line width = 2pt] (.5,1) -- (7.5,1); \draw[black, line width = 2pt] (.5,3) -- (7.5,3); \end{tikzpicture} $ \;+ \;(-\mathfrak{q})^{\frac{1}{2}} \;$ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {2,3,5,6,7}{ \draw[ black, line width = 1pt] ( \i,1) -- (\i, 3); }; \draw[line width = 3pt, white] (0,2) -- (8,2); \draw[line width = 1pt, black] (2,2) -- (8,2); \draw[line width = 1pt, black] (1,1) .. controls (1,1.5) and (0.5,2) .. (0,2); \draw[line width = 1pt, black] (1,3) .. controls (1,2.5) and (1.5,2) .. (2,2); \node[anchor = north] at (4,1.75) {$\hdots$}; \node[anchor = south] at (4,2.25) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (7,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (8,1) -- (7.5,1) -- (7.5,3) -- (8,3) -- (8,1); \draw[black, line width = 2pt] (.5,1) -- (7.5,1); \draw[black, line width = 2pt] (.5,3) -- (7.5,3); \end{tikzpicture} $\; = $ \begin{tikzpicture}[scale = 1/3, baseline = {(current bounding box.center)}] \foreach \i in {1,2,3,5,6,7}{ \draw[ black, line width = 1pt] ( \i,1) -- (\i, 3); }; \draw[line width = 3pt, white] (0,2) -- (8,2); \draw[line width = 1pt, black] (0,2) -- (8,2); \node[anchor = north] at (4,1.75) {$\hdots$}; \node[anchor = south] at (4,2.25) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (7,0.5) node [midway,yshift = -7pt] 
{\footnotesize{n}}; \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (8,1) -- (7.5,1) -- (7.5,3) -- (8,3) -- (8,1); \draw[black, line width = 2pt] (.5,1) -- (7.5,1); \draw[black, line width = 2pt] (.5,3) -- (7.5,3); \end{tikzpicture} \ , \end{center} where for the last equality we also used the braid conventions in Fig.~\ref{fig:br-diag}. That $Y$ is central is easy to check using the diagrammatic calculation as in Fig.~\ref{fig:Y-commTL}: the generators $e_j$ obviously commute with the insertion of a line going ``over'' or ``under'' the system; the same applies to the commutativity with the shift operators, where one just uses the braid relations. Recall now the algebra automorphism $b \to b^{-1}$, $\mathfrak{q} \to \mathfrak{q}^{-1} $ discussed above~\eqref{eq:b-bar-def}. The diagram presentation for the second set of blobbed generators is \begin{equation} \bar{b} = (-\mathfrak{q})^{-3/2}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (0,2) .. (0,3); \draw[line width = 1pt, black] (6,2) .. controls (1,2) .. (1,3); \foreach \s in {2,4,5}{ \draw[line width = 3pt, white ] (\s, 1) -- (\s, 3); \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (0.5,1) -- (5.5,1); \draw[line width = 1pt, black] (0.5,3) -- (5.5,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \end{tikzpicture}, \qquad \bar{b}^{-1} = (-\mathfrak{q})^{3/2}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (0,2) ..
(0,1); \draw[line width = 1pt, black] (6,2) .. controls (1,2) .. (1,1); \foreach \s in {2,4,5}{ \draw[line width = 3pt, white ] (\s, 1) -- (\s, 3); \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (0.5,1) -- (5.5,1); \draw[line width = 1pt, black] (0.5,3) -- (5.5,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (2,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n-1}}; \end{tikzpicture}\;. \end{equation} The second representative of the blob generators $\bar{b}$ and $\bar{b}^{-1}$ allows us to identify the second distinct central element $\bar{Y}$: \begin{equation} \bar{Y} \equiv -(\mathfrak{q} \bar{b} + \mathfrak{q}^{-1}\bar{b}^{-1}) = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,2) -- (5.5,2); \foreach \s in {1,2,4,5}{ \draw[line width = 3pt, white ] (\s, 1) -- (\s, 3); \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (0.5,1) -- (5.5,1); \draw[line width = 1pt, black] (0.5,3) -- (5.5,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \end{tikzpicture}. \end{equation} We will show below that for generic values of~$\mathfrak{q}$ the two central elements $Y$ and $\bar{Y}$ generate a natural subalgebra $\mathsf{Z}_{\mathrm{sym}}$ in the center of $\atl{n}(\mathfrak{q})$. 
We call this subalgebra \textit{the symmetric centre} of $\atl{n}(\mathfrak{q})$; it has two interesting properties (proven in the next section): \begin{enumerate} \item $\mathsf{Z}_{\mathrm{sym}}$ is the image of the whole center of the affine Hecke algebra under the standard covering map $\widehat{H}_n(q)\to \atl{n}(\mathfrak{q})$, $T_{i} \to g_{i}$, $J_{i} \to J_{i}$. Recall that the center of $\widehat{H}_n(q)$ is spanned by symmetric polynomials in the Jucys-Murphy elements $J_i$, for $1\leq i\leq n$. \item There is a special ``canonical'' basis (made of Chebyshev polynomials) in which the structure constants are non-negative integers, i.e.\ $\mathsf{Z}_{\mathrm{sym}}$ endowed with this basis is a Verlinde algebra. \end{enumerate} The second point is very important for our defect construction: we will see in Section~\ref{sec:topdef.crossedchannel} that the central elements in the canonical basis (acting on certain representations) provide operators that represent topological defects in the crossed channel. It is however not clear to us whether the symmetric centre $\mathsf{Z}_{\mathrm{sym}}$ generates the full centre of $\atl{n}(\mathfrak{q})$ or not. We plan to come back to this important question in a future publication. \subsection{Tile formalism and the transfer matrix}\label{sec:transf} While we formulate most of our results in terms of diagrams with strings and arcs on a cylinder, a very significant body of work on this subject is written in terms of \emph{planar tiles} (see for instance~\cite{PeRasmPolymers,PeMDFusionHierarchy}); we present here a brief translation between the two formalisms and use it to introduce the usual transfer matrix.
The planar tile with spectral parameter $x$ is defined by\footnote{This tile is often divided by $(\mathfrak{q} - \mathfrak{q}^{-1})$ to normalize it, but then the natural defect operator would be $(\mathfrak{q} - \mathfrak{q}^{-1})^{-n}Y$ instead of $Y$.} \begin{equation} \begin{tikzpicture} \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (3/4,1/4 -\vec{r}/2); } \node at (0,0) {\utiles{x}}; \node at (1,0) {$=$}; \end{tikzpicture} \begin{tikzpicture} \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-1/2,1/4 -\vec{r} /2) -- (1/2,1/4 -\vec{r}/2); } \node at (0,0) {\idtiles}; \node[anchor = east] at (- 1/2,0) {$\left( \frac{\mathfrak{q}}{x} - \frac{x}{\mathfrak{q}}\right)$}; \end{tikzpicture} \begin{tikzpicture} \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-1/2,1/4 -\vec{r} /2) -- (1/2,1/4 -\vec{r}/2); } \node at (0,0) {\etiles}; \node[anchor = east] at (- 1/2,0) {$+ \left( x - x^{-1}\right)$}; \end{tikzpicture}. \end{equation} These satisfy three particular relations: \begin{align} \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (6/4,1/4 -\vec{r}/2); } \node at (0,0) {\utiles{x}}; \node at (1,0) {\utiles{x^{-1}}}; \end{tikzpicture} \; = & \;\; (\mathfrak{q}^{2} + \mathfrak{q}^{-2} - x^{2} - x^{-2})\; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-1/2,1/4 -\vec{r} /2) -- (1/2,1/4 -\vec{r}/2); } \node at (0,0) {\idtiles}; \end{tikzpicture}\ , \\ \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {0,1,2}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (6/4,1/4 -\vec{r}/2); } \node at (0,0) {\utiles{x}}; \node at (1,0) {\utiles{y}}; \node at (.5,-.5) {\utiles{ x y}}; \end{tikzpicture} \; = & \;\; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {-1,0,1}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (6/4,1/4 -\vec{r}/2); } \node at
(0,0) {\utiles{y}}; \node at (1,0) {\utiles{x}}; \node at (.5,.5) {\utiles{ x y}}; \end{tikzpicture}\ ,\\ \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (3/4,1/4 -\vec{r}/2); } \node at (0,0) {\utiles{x}}; \end{tikzpicture} \; = &\; \; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {0,1}{ \draw[line width = 1] (-3/4,1/4 -\vec{r} /2) -- (3/4,1/4 -\vec{r}/2); } \node at (0,0) {\rutiles{\mathfrak{q} x^{-1}}{90}}; \end{tikzpicture} \;. \end{align} These are respectively called the inversion, Yang-Baxter, and crossing symmetry relations. The transfer matrix $T_{n}(\vec{x})$ can then be defined as \begin{equation} T_{n}(\vec{x}) = \; \begin{tikzpicture}[baseline={(current bounding box.center)}] \draw[line width = 2] (0,-0.353553) -- (4*.707,-0.353553); \draw[line width = 2] (-.354553,0.353553) -- (13*.353553,0.353553); \node at (0 + 0.707*3,0) {$\hdots$}; \node at (0 + 0.707*0 ,0) {\rutiles{x_{1}}{45}}; \node at (0 + 0.707*1 ,0) {\rutiles{x_{2}}{45}}; \node at (0 + 0.707*2 ,0) {\rutiles{x_{3}}{45}}; \node at (0 + 0.707*4 ,0) {\rutiles{x_{n-2}}{45}}; \node at (0 + 0.707*5 ,0) {\rutiles{x_{n-1}}{45}}; \node at (0 + 0.707*6 ,0) {\rutiles{x_{n}}{45}}; \end{tikzpicture}, \qquad \vec{x} = \lbrace x_{1}, x_{2}, \hdots, x_{n} \rbrace \;, \end{equation} where there are $n$ tiles and the opposing vertical sides are identified, so that this defines an element of $\atl{n}(\mathfrak{q})$ for each $n$-dimensional vector $\vec{x}$. If $x_{1} = x_{2} = \hdots = x_{n} $ the transfer matrix is said to be \emph{homogeneous} and is simply written $T_{n}(x_{1})$. Using the three previous identities, one readily shows that homogeneous transfer matrices commute with each other, i.e.\ $[T_{n}(x),T_{n}(y)]=0$, and thus define families of integrable lattice models. We note four specific cases of the homogeneous transfer matrix that are of importance in this work.
Setting the spectral parameter $x$ to $1$ or $\mathfrak{q}$ gives the translation operators $u^{\mp1}$: \begin{equation} T_{n}(1) = (\mathfrak{q} - \mathfrak{q}^{-1})^{n} \; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {1,...,6}{ \node at (0 + 0.707*\vec{r},0) { \begin{tikzpicture}[scale=1/2,rotate= -45] \filldraw[white] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 2] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 1] (.5,-.5) arc (-45:45:.707); \draw[line width = 1] (1.5,-.5) arc (225:135:.707); \end{tikzpicture} }; }; \end{tikzpicture} \; = (\mathfrak{q} - \mathfrak{q}^{-1})^{n} u^{-1}, \end{equation} \begin{equation} T_{n}(\mathfrak{q}) = (\mathfrak{q} - \mathfrak{q}^{-1})^{n} \; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {1,...,6}{ \node at (0 + 0.707*\vec{r},0) { \begin{tikzpicture}[scale=1/2,rotate= 45] \filldraw[white] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 2] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 1] (.5,-.5) arc (-45:45:.707); \draw[line width = 1] (1.5,-.5) arc (225:135:.707); \end{tikzpicture} }; }; \end{tikzpicture} \; = (\mathfrak{q} - \mathfrak{q}^{-1})^{n} u, \end{equation} while taking the limits in the spectral parameter to zero or infinity produces the two hoop operators $Y$ and $\bar{Y}$: \begin{equation} \lim_{x \to 0}((-(-\mathfrak{q})^{-\frac{1}{2}}x)^{n} T_{n}(x)) = \; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {1,...,6}{ \node at (0 + 0.707*\vec{r},0) { \begin{tikzpicture}[scale=1/2,rotate= 45] \filldraw[white] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 1] (.5,.5) -- (1.5,-.5); \draw[white, line width = 3pt] (.5,-.5) -- (1.5,.5); \draw[black, line width = 1pt] (.5,-.5) -- (1.5,.5); \draw[line width = 2] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \end{tikzpicture} }; }; \end{tikzpicture} \; = 
\bar{Y}, \end{equation} \begin{equation} \lim_{x \to \infty}(((-\mathfrak{q})^{-\frac{1}{2}}x)^{-n} T_{n}(x)) = \; \begin{tikzpicture}[baseline = {(current bounding box.center)}] \foreach \vec{r} in {1,...,6}{ \node at (0 + 0.707*\vec{r},0) { \begin{tikzpicture}[scale=1/2,rotate= -45] \filldraw[white] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \draw[line width = 1] (.5,.5) -- (1.5,-.5); \draw[white, line width = 3pt] (.5,-.5) -- (1.5,.5); \draw[black, line width = 1pt] (.5,-.5) -- (1.5,.5); \draw[line width = 2] (0,0) -- (1,1) -- (2 ,0) -- (1 , -1 ) -- (0, 0); \end{tikzpicture} }; }; \end{tikzpicture} \; = Y. \end{equation} \subsection{Standard modules} We present a brief overview of the most common class of $\atl{n} \equiv \atl{n}(\mathfrak{q})$ modules: the \textit{standard} modules $\mathsf{W}_{k,z}(n)$; these are indexed by a non-negative integer $2k \leq n$ of the same parity as $n$ (so $k$ is an integer or half-integer), and a non-zero complex number $z$. The simplest way of describing their basis is in terms of diagrams having $n$ ($2k$) nodes on their bottom (top) side, and having exactly $2k$ through lines. One simply takes formal sums of all such diagrams, and uses diagram composition to describe the action of the algebra (by stacking an algebra diagram on the bottom), understanding that if composition produces a diagram with fewer than $2k$ through lines, it is identified with the zero element. For instance, \begin{equation} \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (4.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (2,2) ..
(2,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \draw[line width = 1pt, black] (4,1) -- (4,3); \draw[line width = 1pt, black] (0.5,5) -- (2.5,5); \draw[line width = 1pt, black] (1,3) -- (1,5); \draw[line width = 1pt, black] (4,3) ..controls (4,4) and (2,4) .. (2,5); \draw[line width = 1pt, black] (2,3) .. controls (2,4) and (3,4) .. (3,3); \end{tikzpicture} \; = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (3,1) ..controls (3,2) and (1,2) .. (1,3); \draw[line width = 1pt, black] (4,1) ..controls (4,2) and (2,2) .. (2,3); \end{tikzpicture}\;, \qquad \qquad \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (4.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (3,1) -- (3,3); \draw[line width = 1pt, black] (4,1) -- (4,3); \draw[line width = 1pt, black] (0.5,5) -- (2.5,5); \draw[line width = 1pt, black] (1,3) -- (1,5); \draw[line width = 1pt, black] (2,3) -- (2,5); \draw[line width = 1pt, black] (4,3) ..controls (4,4) and (3,4) .. (3,3); \end{tikzpicture} \; = 0. \end{equation} This is the way standard modules $\mathsf{S}_{k}(n)$ are defined for the regular Temperley-Lieb algebra $\tl{n}(\mathfrak{q})$, by simply excluding the diagrams with strings crossing the imaginary boundary on either side of the diagram. While for $\tl{n}(\mathfrak{q})$ such diagrams form a finite-dimensional module, this is no longer true for the affine version $\atl{n}(\mathfrak{q})$, as e.g.\ the translation generators $u^{\pm1}$ produce states with arbitrary winding of through lines.
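To make the last point concrete, consider the extreme case $n = 2k$, in which every basis diagram consists solely of through lines. The translated diagrams
\[
u^{m}\cdot \textbf{1}\,, \qquad m \in \mathbb{Z},
\]
are pairwise distinct, the through lines wrapping ever further around the cylinder, so the span of all such diagrams is infinite dimensional.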
To get a finite dimensional module for $\atl{n}(\mathfrak{q})$, one must also fix the eigenvalues of the two central elements identified in the previous section: $-Y =\mathfrak{q} b + \mathfrak{q}^{-1} b^{-1} $ and $-\bar{Y}=\mathfrak{q} \bar{b} + \mathfrak{q}^{-1} \bar{b}^{-1} $. The simplest way to do this is to define the right action of $u$ (the action on through lines) as multiplication by $z$, i.e. \begin{equation}\label{eq:u-act-standard} \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (3,1) ..controls (3,2) and (1,2) .. (1,3); \draw[line width = 1pt, black] (4,1) ..controls (4,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (0.5,5) -- (2.5,5); \foreach \s in {1,2,3}{ \draw[line width = 1pt, black] (\s,3) .. controls (\s, 4) and (\s -1, 4) .. (\s - 1, 5); }; \filldraw[line width = 1pt, white] (-.5,5) -- (0.5,5) -- (0.5,3) -- (-.5,3) -- (-.5,5); \filldraw[line width = 1pt, white] (3,5) -- (2.5,5) -- (2.5,3) -- (3,3) -- (3,5); \end{tikzpicture} \; = z \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (3,1) ..controls (3,2) and (1,2) .. (1,3); \draw[line width = 1pt, black] (4,1) ..controls (4,2) and (2,2) .. (2,3); \end{tikzpicture}, \qquad \qquad \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (3,1) ..controls (3,2) and (1,2) .. 
(1,3); \draw[line width = 1pt, black] (4,1) ..controls (4,2) and (2,2) .. (2,3); \draw[line width = 1pt, black] (0.5,5) -- (2.5,5); \foreach \s in {0,1,2}{ \draw[line width = 1pt, black] (\s,3) .. controls (\s, 4) and (\s + 1, 4) .. (\s + 1, 5); }; \filldraw[line width = 1pt, white] (-0.5,5) -- (0.5,5) -- (0.5,3) -- (-.5,3) -- (-.5,5); \filldraw[line width = 1pt, white] (3,5) -- (2.5,5) -- (2.5,3) -- (3,3) -- (3,5); \end{tikzpicture} \; = z^{-1} \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[line width = 1pt, black] (3,1) ..controls (3,2) and (1,2) .. (1,3); \draw[line width = 1pt, black] (4,1) ..controls (4,2) and (2,2) .. (2,3); \end{tikzpicture}, \end{equation} where the LHS of the first equality shows the right action of $u$, while the LHS of the second shows the right action of $u^{-1}$. The eigenvalue of the central element $u^n$ is thus $z^n$. It was shown in~\cite{GL} that the endomorphism ring of standard modules is one-dimensional, so any central element must act like a multiple of the identity on a standard module; finding the eigenvalue is then simply a matter of choosing a convenient element $x$ such that computing $Y x$ is easy. For example, taking for $x$ the diagram filled by non-nested arcs from the right, the rest being the $2k$ through lines, we calculate that the choice~\eqref{eq:u-act-standard} for the action of $u$ also fixes the eigenvalues of the central elements $Y$ and~$\bar{Y}$, as follows: \begin{equation}\label{eq:Y-eigenvalues} \begin{split} Y & = - (\mathfrak{q} b + \mathfrak{q}^{-1} b^{-1}) = z (-\mathfrak{q})^{k} + z^{-1} (-\mathfrak{q})^{-k}, \\ \bar{Y} & = - (\mathfrak{q} \bar{b} + \mathfrak{q}^{-1} \bar{b}^{-1}) = z (-\mathfrak{q})^{-k} + z^{-1} (-\mathfrak{q})^{k}.
\end{split} \end{equation} To see this, we first recall the diagram presentation for $b$ in~\eqref{eq:b-diag}. Applying $-\mathfrak{q} b$ to the chosen~$x$ and expanding the braid crossings according to the rules in Figure~\ref{fig:br-diag}, one finds that only one configuration gives a non-zero contribution, corresponding to the factor $z^{-1} (-\mathfrak{q})^{-k}$. As an example of such a calculation for $k=1$, $n=4$, we have \begin{equation} (-\mathfrak{q})b \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \draw[line width = 1pt, black] (3,1) .. controls (3,2) and (4,2) .. (4,1); \draw[line width = 1pt, black] (1,1) -- (1,3); \draw[line width = 1pt, black] (2,1) -- (2,3); \end{tikzpicture} \; = (-\mathfrak{q})^{-\frac{1}{2}}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (3,1) .. controls (3,2) and (4,2) .. (4,1); \draw[line width = 1pt, black] (1,1) -- (1,3); \draw[line width = 1pt, black] (2,1) -- (2,3); \draw[line width = 1pt, black] (1,1) .. controls (1,0) .. (0,0); \foreach \vec{r} in {2,3,4}{ \draw[line width = 3pt, white] (\vec{r},-1) -- (\vec{r},1); \draw[line width = 1pt, black] (\vec{r},-1) -- (\vec{r},1); } \draw[line width = 3pt, white] (1,-1) .. controls (1,0) .. (5,0); \draw[line width = 1pt, black] (1,-1) .. controls (1,0) .. (5,0); \filldraw[white] (0.5,-1) -- (.5,3) -- (-.5,3) -- (-.5,-1) -- (.5,-1); \filldraw[white] (5.5,-1) -- (5.5,3) -- (5-.5,3) -- (5-.5,-1) -- (5.5,-1); \draw[line width = 1pt, black] (0.5,-1) -- (4.5,-1); \draw[line width = 1pt, black] (0.5,1) -- (4.5,1); \draw[line width = 1pt, black] (0.5,3) -- (2.5,3); \end{tikzpicture} \; = (-\mathfrak{q})^{-\frac{1}{2}}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (3,-1) .. controls (3,0) and (4,0) ..
(4,-1); \draw[line width = 1pt, black] (1,1) .. controls (1,0) .. (0,0); \foreach \vec{r} in {2}{ \draw[line width = 3pt, white] (\vec{r},-1) -- (\vec{r},1); \draw[line width = 1pt, black] (\vec{r},-1) -- (\vec{r},1); } \draw[line width = 3pt, white] (1,-1) .. controls (1,0) .. (5,0); \draw[line width = 1pt, black] (1,-1) .. controls (1,0) .. (5,0); \filldraw[white] (0.5,-1) -- (.5,1) -- (-.5,1) -- (-.5,-1) -- (.5,-1); \filldraw[white] (5.5,-1) -- (5.5,1) -- (5-.5,1) -- (5-.5,-1) -- (5.5,-1); \draw[line width = 1pt, black] (0.5,-1) -- (4.5,-1); \draw[line width = 1pt, black] (0.5,1) -- (2.5,1); \end{tikzpicture} \; = (-\mathfrak{q})^{-1}\; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (3,-1) .. controls (3,0) and (4,0) .. (4,-1); \draw[line width = 1pt, black] (1,-1) .. controls (1,0) and (2,0) .. (2,1); \draw[line width = 1pt, black] (0,-1) .. controls (0,0) and (1,0) .. (1,1); \draw[line width = 1pt, black] (2,-1) .. controls (2,0) .. (5,0); \filldraw[white] (0.5,-1) -- (.5,1) -- (-.5,1) -- (-.5,-1) -- (.5,-1); \filldraw[white] (5.5,-1) -- (5.5,1) -- (5-.5,1) -- (5-.5,-1) -- (5.5,-1); \draw[line width = 1pt, black] (0.5,-1) -- (4.5,-1); \draw[line width = 1pt, black] (0.5,1) -- (2.5,1); \end{tikzpicture}. \end{equation} A similar calculation can be done for $\bar{b}^{\pm1}$ confirming the result in~\eqref{eq:Y-eigenvalues}. It shall be convenient in what follows to use the notation \begin{equation} \mathsf{W}^{o}_{\pm |k|,\delta}(n) \equiv \mathsf{W}_{|k|,\delta^{\pm 1}(-\mathfrak{q})^{-k}}(n), \qquad \mathsf{W}^{u}_{\pm |k|,\mu}(n) \equiv \mathsf{W}_{|k|,\mu^{\pm 1}(-\mathfrak{q})^{k}}(n), \end{equation} which fixes the eigenvalue of $Y = \delta + \delta^{-1}$, or $ \bar{Y} = \mu + \mu^{-1}$, respectively, and the superscript $o/u$ refers here to the central element being fixed: the one with a horizontal line going \textsl{over} ($Y$) or \textsl{under} ($\bar{Y}$) all others. 
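For instance, substituting $z = \delta (-\mathfrak{q})^{-k}$, corresponding to $\mathsf{W}^{o}_{+|k|,\delta}(n)$ with $k = +|k|$, into~\eqref{eq:Y-eigenvalues} gives
\[
Y = z(-\mathfrak{q})^{k} + z^{-1}(-\mathfrak{q})^{-k} = \delta + \delta^{-1}, \qquad \bar{Y} = z(-\mathfrak{q})^{-k} + z^{-1}(-\mathfrak{q})^{k} = \delta(-\mathfrak{q})^{-2k} + \delta^{-1}(-\mathfrak{q})^{2k},
\]
so the superscript $o$ fixes the eigenvalue of $Y$ in terms of $\delta$ alone, while the eigenvalue of $\bar{Y}$ retains an explicit dependence on $k$; the case $\mathsf{W}^{u}_{\pm|k|,\mu}(n)$ is analogous with the roles of $Y$ and $\bar{Y}$ exchanged.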
We conclude this section with a description of the structure of these modules at generic $\mathfrak{q}$. Based on the results of~\cite{GL}, we observe that there exists a non-zero morphism\footnote{For brevity, we will use the term ``morphism'' instead of the more standard ``homomorphism''.} $f\colon \mathsf{W}_{s,w}(n) \to \mathsf{W}_{r,z}(n)$ if and only if $s \geq r$ and $Y, \bar{Y}$ have the same eigenvalues on both modules; furthermore any such morphism is proportional to the identity (for $s=r$) or to a unique injective map. The condition that the eigenvalues of $Y$ and $\bar{Y}$ agree is equivalent to the Graham-Lehrer conditions~\cite{GL}: \begin{equation} z = \begin{cases} w (-\mathfrak{q})^{r-s} & \text{ if } (-\mathfrak{q})^{2(r-s)} = 1 \text{ or } w^{2} = (-\mathfrak{q})^{2r}, \\ w^{-1} (-\mathfrak{q})^{r+s} & \text{ if } (-\mathfrak{q})^{2(r+s)} = 1 \text{ or } w^{2} = (-\mathfrak{q})^{-2r}. \end{cases} \end{equation} Furthermore, each standard module has a unique simple quotient denoted by $\overline{\mathsf{W}}_{r,z}(n)$, and these form a complete set of irreducible modules. \subsection{Tower structure}\label{sec:def.tower} The family of affine Temperley-Lieb algebras admits inclusions of the form (we will often abbreviate $\atl{n}\equiv \atl{n}(\mathfrak{q})$) $$ \atl{n} \subset \atl{n+1} \subset \atl{n+2} \subset \hdots\ , \qquad n\geq 1\ , $$ giving the structure of a tower of algebras~\cite[Sec.\,3.3]{GS}. Some of these inclusions will play a role in our construction of topological defects, so we describe them here. We now assume that $k$ is a positive integer, and define a morphism of algebras \begin{equation} \phi^{u}_{n,k}\colon\; \atl{n} \to \atl{n+k}, \end{equation} by its action on the various sets of generators of the algebra. For clarity, we add a superscript to the generators to indicate which algebra they belong to; for instance $u^{(n)}$ is the shift generator in $\atl{n}$, while $u^{(n+2)}$ is the shift generator in $\atl{n+2}$, etc.
With this notation, the map $\phi^{u}_{n,k}$ on the blobbed set of generators is \begin{align} \phi^{u}_{n,k}\colon \quad \big(b^{(n)} \big)^{\pm 1} &\mapsto \big(b^{(n+k)}\big)^{\pm 1},\\ e^{(n)}_{i} &\mapsto e^{(n+k)}_{i}. \end{align} It is straightforward to verify that $\phi^{u}_{n,k}$ defines an inclusion of algebras. We note that this definition is parallel to what was done in affine Hecke algebra terms in~\cite[Sec.\,4.4.2]{GS}. While the map is very simple with the blobbed generators, it is more complicated when expressed on the periodic set of generators, for instance \begin{equation} \phi^{u}_{n,k} \colon \quad u^{(n)} \mapsto u^{(n+k)}g^{(n+k)}_{n+k-1}g^{(n+k)}_{n+k-2}\hdots g^{(n+k)}_{n} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \foreach \vec{r} in {1,2,3,5}{ \draw[black, line width = 1pt] (\vec{r}, 1) .. controls (\vec{r}, 2) and (\vec{r} -1, 2 ) .. (\vec{r} - 1,3); }; \draw[black, line width = 1pt] (6,1) -- (6,3); \draw[black, line width = 1pt] (8,1) -- (8,3); \draw[white, line width = 3pt] (9,2) .. controls (5,2) .. (5,3); \draw[black, line width = 1pt] (9,2) .. controls (5,2) .. (5,3); \draw[black, line width = 2pt] (.5,1) -- (8.5,1); \draw[black, line width = 2pt] (.5,3) -- (8.5,3); \node[anchor = south] at (3.75,1.5) {$\hdots$}; \node[anchor = south] at (7,1) {$\hdots$}; \node[anchor = north] at (7,3) {$\hdots$}; \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (8.5,1) -- (9.5,1) -- (9.5,3) -- (8.5,3) -- (8.5,1); \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (.5,0.5) -- (5.5,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (5.5,0.5) -- (8.5,0.5) node [midway,yshift = -7pt] {\footnotesize{k}}; \end{tikzpicture} \end{equation} which agrees with~\cite[Eq.\,(3.9)]{GS}, after one takes into account the difference in conventions. 
One therefore sees that, in terms of diagrams, the morphism $\phi^{u}_{n,k}$ consists in adding $k$ through lines on the right side of each diagram, going \emph{under} every line that wraps around the cylinder (hence the superscript $u$ on the morphism). Similarly, one defines another morphism of algebras \begin{equation} \phi^{o}_{n,k}\colon \; \atl{n} \to \atl{n+k} \end{equation} by adding the $k$ lines \emph{over} the lines that wrap around the cylinder, i.e.\ on the periodic set of generators the map is \begin{equation} \phi^{o}_{n,k}\colon \quad u^{(n)} \mapsto u^{(n+k)}\big(g^{(n+k)}_{n+k-1}\big)^{-1}\big(g^{(n+k)}_{n+k-2}\big)^{-1}\hdots \big(g^{(n+k)}_{n}\big)^{-1} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \foreach \vec{r} in {1,2,3,5}{ \draw[black, line width = 1pt] (\vec{r}, 1) .. controls (\vec{r}, 2) and (\vec{r} -1, 2 ) .. (\vec{r} - 1,3); }; \draw[black, line width = 1pt] (9,2) .. controls (5,2) .. (5,3); \draw[white, line width = 3pt] (6,1) -- (6,3); \draw[white, line width = 3pt] (8,1) -- (8,3); \draw[black, line width = 1pt] (6,1) -- (6,3); \draw[black, line width = 1pt] (8,1) -- (8,3); \draw[black, line width = 2pt] (.5,1) -- (8.5,1); \draw[black, line width = 2pt] (.5,3) -- (8.5,3); \node[anchor = south] at (3.75,1.5) {$\hdots$}; \node[anchor = south] at (7,1) {$\hdots$}; \node[anchor = north] at (7,3) {$\hdots$}; \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (8.5,1) -- (9.5,1) -- (9.5,3) -- (8.5,3) -- (8.5,1); \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (.5,0.5) -- (5.5,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (5.5,0.5) -- (8.5,0.5) node [midway,yshift = -7pt] {\footnotesize{k}}; \end{tikzpicture}, \end{equation} which also agrees with~\cite[Eq.\,(3.10)]{GS}. 
On the blobbed set of generators, the map is simply \begin{align} \phi^{o}_{n,k}\colon \quad \big(\bar{b}^{(n)}\big)^{\pm 1} & \mapsto \big(\bar{b}^{(n+k)}\big)^{\pm 1}, \\ e^{(n)}_{i} &\mapsto e^{(n+k)}_{i}. \end{align} Furthermore, while we placed the extra lines on the right side of the diagram, we could have put them on the left side instead; we name the resulting morphisms \begin{equation} \psi^{u/o}_{n,k}\colon \; \atl{n} \to \atl{n+k}, \end{equation} for the corresponding \textit{under} and \textit{over} versions. We then notice that the two subalgebras $\phi^{u}_{n,k}(\atl{n})$ and $\psi^{o}_{k,n}(\atl{k})$ commute with each other; this can be seen by a direct calculation as in~\cite{GS} or by showing that $$ \phi^{u}_{n,k}\big(b^{(n)}\big) \propto J^{(n+k)}_{1} , \qquad \psi^{o}_{k,n}\big(b^{(k)}\big) \propto J^{(n+k)}_{n+1} , $$ where $J_{i}$ is the Jucys-Murphy element\footnote{The Jucys-Murphy elements form a commutative subalgebra.} of the affine Temperley-Lieb algebra (see Section~\ref{sec:JM-elements}). This fact can be exploited to define a monoidal structure on the affine Temperley-Lieb category~\cite{GS}; see also~\cite{GJS} for the corresponding fusion calculation. We note that $ \phi^{o}_{n,k}(\atl{n})$ and $\psi^{u}_{k,n}(\atl{k})$ also commute. \section{Lattice topological defects: crossed channel}\label{sec:topdef.crossedchannel}\label{sec:3} In this section, we formulate our lattice topological defects in terms of the affine TL algebra using the hoop operators -- the central elements $Y$ and $\bar{Y}$ introduced in the previous section -- and describe their fusion rules. In more mathematical terms, we show that the two elements $Y$ and $\bar{Y}$ generate an interesting subalgebra $\mathsf{Z}_{\mathrm{sym}}$ in the centre of $\atl{n}(\mathfrak{q})$ -- the so-called symmetric centre -- and show that $\mathsf{Z}_{\mathrm{sym}}$ agrees with the algebra of symmetric Laurent polynomials in the Jucys-Murphy elements. 
We also show that it admits a certain basis with \textsl{non-negative integer} structure constants. Interestingly, at least for generic values of $\mathfrak{q}$, the structure constants do not depend on~$n$ or~$\mathfrak{q}$. \subsection{The algebra of defects $Y$ and $\bar{Y}$} Recall that the hoop operators defined in Section~\ref{sec:diag-present} can be represented by diagrams with a single closed string wrapping over or under all the other strings: \begin{equation} Y = -(\mathfrak{q} b + \mathfrak{q}^{-1} b^{-1}) = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \foreach \s in {1,2,4,5}{ \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 3pt, white] (0.5,2) -- (5.5,2); \draw[line width = 1pt, black] (0.5,2) -- (5.5,2); \draw[line width = 1pt, black] (0.5,1) -- (5.5,1); \draw[line width = 1pt, black] (0.5,3) -- (5.5,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \end{tikzpicture},\; \qquad \bar{Y} = -(\mathfrak{q} \bar{b} + \mathfrak{q}^{-1} \bar{b}^{-1}) = \; \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,2) -- (5.5,2); \foreach \s in {1,2,4,5}{ \draw[line width = 3pt, white ] (\s, 1) -- (\s, 3); \draw[line width = 1pt, black ] (\s, 1) -- (\s, 3); }; \draw[line width = 1pt, black] (0.5,1) -- (5.5,1); \draw[line width = 1pt, black] (0.5,3) -- (5.5,3); \filldraw[line width = 1pt, white] (0,1) -- (.5,1) -- (.5,3) -- (0,3) -- (0,1); \filldraw[line width = 1pt, white] (6,1) -- (5.5,1) -- (5.5,3) -- (6,3) -- (6,1); \node[anchor = north] at (3,2) {$\hdots$}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- 
(5,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \end{tikzpicture}, \end{equation} and these are central elements in $\atl{n}(\mathfrak{q})$. This wrapping string can be isotopically deformed at will without changing the spectrum of the transfer matrix from Section~\ref{sec:transf}, and it thus can be thought of as a defect line (in the crossed channel). We are interested in the algebra generated by these hoop operators, and first study their powers. Taking powers of the hoop operators will increase the width of the defects by increasing the number of lines going across the system; one can then imagine Temperley-Lieb operators acting horizontally on the defect. For instance \begin{align} Y^{2}(e_{1}) = \;& \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (7.5,1); \draw[line width = 1pt, black] (0.5,4) -- (7.5,4); \foreach \s in {1,2,6,7}{ \draw[line width = 1pt, black] (\s, 1) -- (\s,4); } \draw[line width = 3pt, white] (0,2) -- (8,2); \draw[line width = 3pt, white] (0,3) -- (8,3); \draw[line width = 1pt, black] (0.5,2) -- (7.5,2); \draw[line width = 1pt, black] (0.5,3) -- (7.5,3); \filldraw[line width = 1pt, white] (3,2) -- (5,2) -- (5,3) -- (3,3) -- (3,2); \draw[line width = 1pt, black] (3,2) .. controls (4,2) and (4,3) .. (3,3); \draw[line width = 1pt, black] (5,2) .. controls (4,2) and (4,3) .. 
(5,3); \end{tikzpicture} \; = (\mathfrak{q} + \mathfrak{q}^{-1}) 1_{\atl{n}},\label{eq:defectmap1}\\ Y^{3}(e_{1}e_{2}) = \;& \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \draw[line width = 1pt, black] (0.5,1) -- (7.5,1); \draw[line width = 1pt, black] (0.5,5) -- (7.5,5); \foreach \s in {1,2,6,7}{ \draw[line width = 1pt, black] (\s, 1) -- (\s,5); } \draw[line width = 3pt, white] (0,2) -- (7.5,2); \draw[line width = 3pt, white] (0,3) -- (7.5,3); \draw[line width = 3pt, white] (0,4) -- (7.5,4); \draw[line width = 1pt, black] (0.5,2) -- (7.5,2); \draw[line width = 1pt, black] (0.5,3) -- (7.5,3); \draw[line width = 1pt, black] (0.5,4) -- (7.5,4); \filldraw[line width = 1pt, white] (3,2) -- (5,2) -- (5,4) -- (3,4) -- (3,2); \draw[line width = 1pt, black] (3,3) .. controls (4,3) and (4,4) .. (3,4); \draw[line width = 1pt, black] (3,2) .. controls (4,2) and (4,4) .. (5,4); \draw[line width = 1pt, black] (5,2) .. controls (4,2) and (4,3) .. (5,3); \end{tikzpicture} \; = Y.\label{eq:defectmap2} \end{align} One recognizes that this corresponds to taking a Markov trace in the horizontal direction; in particular, the operator $Y^{m}$ can be seen as a map from $\tl{m}$ to the ring of endomorphisms of~$\atl{n}$: \begin{equation} Y^m \colon \; \tl{m} \to \mathrm{End}_{\atl{n}} \big(\atl{n}\big)\ , \end{equation} where a given element of $\tl{m}$, considered as a diagram, is simply placed on the $m$ horizontal strands, as in Fig.~\ref{fig:defectmap}. It is easy to see that the image of this map lies in the center of $\atl{n}$, and a central element provides an endomorphism via multiplication. We similarly define the mapping \begin{equation}\label{eq:bYm} \bar{Y}^m \colon \tl{m} \to \mathrm{End}_{\atl{n}} \big(\atl{n}\big) \end{equation} whose image is also in the center of~$\atl{n}$, and which can be represented graphically as in Fig.~\ref{fig:defectmap}, but with horizontal lines going under the vertical ones. 
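The evaluations \eqref{eq:defectmap1} and \eqref{eq:defectmap2} amount to simple bookkeeping: after closing each strand of the $\tl{m}$ diagram around the cylinder, every contractible loop contributes a factor $\mathfrak{q}+\mathfrak{q}^{-1}$, while every strand that ends up wrapping the cylinder contributes a hoop $Y$. The sketch below is our own illustrative implementation of this bookkeeping (the function name, point labels, and winding convention are invented; it only distinguishes contractible from wrapping components and is not a full diagram calculus):

```python
def close_diagram(m, pairs):
    """Close a TL_m diagram around the cylinder, as in the map Y^m.

    `pairs` is a perfect matching on the 2m boundary points 't1'..'tm'
    (top) and 'b1'..'bm' (bottom); the closure joins 'ti' back to 'bi'
    around the cylinder.  Returns (loops, hoops): the number of
    contractible closed loops (each worth a factor q + q^{-1} here) and
    of strands wrapping the cylinder (each worth a hoop Y).
    """
    mate = {}
    for a, b in pairs:
        mate[a], mate[b] = b, a
    closure = {}
    for i in range(1, m + 1):
        closure['t%d' % i], closure['b%d' % i] = 'b%d' % i, 't%d' % i
    seen, loops, hoops = set(), 0, 0
    for start in mate:
        if start in seen:
            continue
        p, winding = start, 0
        while True:
            seen.add(p)
            p = mate[p]                 # follow the strand inside the diagram
            seen.add(p)
            # a closure strand passes around the back of the cylinder:
            # count +1 going top-to-bottom, -1 going bottom-to-top
            winding += 1 if p[0] == 't' else -1
            p = closure[p]
            if p == start:
                break
        if winding == 0:
            loops += 1
        else:
            hoops += 1
    return loops, hoops

# Y^2(e_1): one contractible loop  ->  (q + q^{-1}) 1, cf. (eq:defectmap1)
assert close_diagram(2, [('t1', 't2'), ('b1', 'b2')]) == (1, 0)
# Y^3(e_1 e_2): one wrapping strand  ->  Y, cf. (eq:defectmap2)
assert close_diagram(3, [('t1', 't2'), ('t3', 'b1'), ('b2', 'b3')]) == (0, 1)
# Y^3(1): three parallel hoops  ->  Y^3
assert close_diagram(3, [('t1', 'b1'), ('t2', 'b2'), ('t3', 'b3')]) == (0, 3)
```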
We show below that the images of the two maps $Y^m$ and $\bar{Y}^m$ generate an algebra that we call $\mathsf{Z}_{\mathrm{sym}}$. \begin{figure} \begin{center} \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \foreach \s in {1,2,4,5}{ \draw[line width = 1pt, black] (\s,1) -- (\s,3); } \filldraw[white] (.5,1.25) -- (5.5,1.25) -- (5.5,2.75) -- (.5,2.75) -- (.5,1.25); \draw[line width = 1pt, black] (.5,1.25) -- (5.5,1.25) -- (5.5,2.75) -- (.5,2.75) -- (.5,1.25); \draw[line width = 1pt, black, ->] (1,1.5) -- (1,2.5); \draw[line width = 1pt, black, ->] (5,1.5) -- (5,2.5); \node[anchor = south] at (3,1.25) {D}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (5,0.5) node [midway,yshift = -7pt] {\footnotesize{m}}; \end{tikzpicture} $ \qquad \overset{Y^{m}}{\longrightarrow} $ \begin{tikzpicture}[scale = 1/3, baseline ={(current bounding box.center)}] \foreach \s in {1,2,6,7}{ \draw[line width = 1pt, black] (\s, 1) -- (\s, 7); }; \foreach \s in {2,3,5,6}{ \draw[line width = 3pt, white] (.5,\s) -- (7.5,\s); \draw[line width = 1pt, black] (.5,\s) -- (7.5,\s); } \filldraw[white] (3.25,1.5) -- (4.75,1.5) -- (4.75,6.5) -- (3.25,6.5) -- (3.25,1.5); \draw[line width = 1pt, black] (3.25,1.5) -- (4.75,1.5) -- (4.75,6.5) -- (3.25,6.5) -- (3.25,1.5); \draw[line width = 1pt, black, ->] (3.5,2) -- (4.5,2); \draw[line width = 1pt, black, ->] (3.5,6) -- (4.5,6); \node[anchor = west] at (3.25,4) {D}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, yshift = -3pt] (1,0.5) -- (7,0.5) node [midway,yshift = -7pt] {\footnotesize{n}}; \draw[decorate, decoration = {brace, mirror, amplitude = 2 pt}, xshift = -3pt] (.5,6.5) -- (.5,1.5) node [midway,xshift = -7pt] {\footnotesize{m}}; \draw[line width = 2pt, black] (.5,1) -- (7.5,1); \draw[line width = 2pt, black] (.5,7) -- (7.5,7); \end{tikzpicture} \end{center} \caption{An illustration of the action of the map $Y^{m}$; the D box represents some diagram in $\tl{m}$ 
and the arrows illustrate its orientation. The map then rotates the diagram $90$ degrees clockwise, and inserts it on the defect. The result is a central element of $\atl{n}$.}\label{fig:defectmap} \end{figure} \subsection{Higher-spin operators $Y_{j}$ and $\bar{Y}_{j}$}\label{sec:higherspin} Instead of applying the defect operators $Y^m$ on individual elements of the Temperley-Lieb algebra, we can have them act on an entire ideal, sending it to a subring of the ring of endomorphisms of $\atl{n}$. If $\mathfrak{q}$ is generic, every indecomposable left-ideal of $\tl{m}$ is isomorphic to one of the form $\mathsf{S}_{j}(m) = \tl{m}P_{j} $, where $P_{j}$ is an idempotent of spin $j$; when $j= m/2$ one can use the Jones-Wenzl projectors $$ P_{m/2} = W^{m+1}_{1}\; , $$ defined recursively through the following formula: \begin{align} W^{1}_{i}(n) & \equiv W^{2}_{i}(n) \equiv 1_{\tl{n}}, \notag \\ W^{m}_{i}(n) & \equiv W^{m-1}_{i+1}(n)\left( 1_{\tl{n}} - \frac{\mathfrak{q}^{m-2} - \mathfrak{q}^{2-m}}{\mathfrak{q}^{m-1} - \mathfrak{q}^{1-m}}e_{i} \right)W^{m-1}_{i+1}(n), \end{align} where the index $m$ is related to the spin as above, and $i$ is just the lattice position. Recall that $P_{j}$ is an idempotent, i.e.\ $P_j P_j = P_j$; since the map $Y^{m}$ has the property of a trace, we then have $$ Y^{m}(x P_{j}) = Y^{m}(P_{j} x P_{j}) $$ for all $x\in \tl{m}$. By construction, $P_{j} x P_{j}$ is an endomorphism of the ideal $\mathsf{S}_{j}(m)$ (by multiplication on the right), which is simple whenever $\mathfrak{q}$ is generic; it follows that $ P_{j} x P_{j} = \lambda_{x}P_{j}$ for some $\lambda_{x} \in \mathbb{C}$, and thus that \begin{equation} \label{eq:Ym-Sj} Y^{m}(\mathsf{S}_{j}(m)) = \mathbb{C}Y_{j}, \end{equation} where we introduced a special central element \begin{equation}\label{eq:Yj-def} Y_{j} := Y^{2j}(W^{2j+1}_{1}) . 
\end{equation} Here, we used the fact that the trace of $P_{j}$ is independent both of $m$ and of the particular choice of $P_{j}$ we made (see Appendix \ref{sec:rigor.hspindefect} for details of the proof). In particular, the identity~\eqref{eq:Ym-Sj} makes sense and is true for any valid value of $m$ for which the ideal $\mathsf{S}_{j}(m)$ is non-zero. \medskip Using the recurrence relation for the Jones-Wenzl projectors, we find \begin{equation}\label{eq:Yj-U} Y_{j} = \mathsf{U}_{2j}\big(\halff Y\big), \end{equation} where $\mathsf{U}_{k}(x)$ is the Chebyshev polynomial of the second kind, of order $k$. For instance, we have \begin{eqnarray} Y_{1/2} &=& Y \,, \nonumber \\ Y_1 &=& (Y_{1/2})^2 - 1 \,, \nonumber \\ Y_{3/2} &=& (Y_{1/2})^3 - 2 Y_{1/2} \,,\nonumber \\ Y_2 &=& (Y_{1/2})^4 - 3 (Y_{1/2})^2 + 1 \,.\nonumber \end{eqnarray} Recall that $Y$ acts on $\mathsf{W}^{o}_{k,\delta}$ as $(\delta + \delta^{-1})$; writing $\delta = e^{i \theta}$, the higher-spin operator eigenvalues are thus \begin{align*} Y_{j} = \frac{\sin((2j+1)\theta)}{\sin \theta}. \end{align*} The important observation is that the properties of the Chebyshev polynomials allow us to decompose products of the $Y_{j}$'s: \begin{equation}\label{eq:Y-fusion} Y_{j}\cdot Y_{k} = \sum_{r = |j-k|}^{j+k} Y_{r}. \end{equation} We finally note that the whole construction of this section works equally well if the defect goes under the strings instead of over them, simply replacing $Y$ with $\bar{Y}$ everywhere it appears. Consider the map $\bar{Y}^m$ defined in~\eqref{eq:bYm}: its properties are identical to those of the map $Y^m$ in every way, and applying it to the ideals $\mathsf{S}_{j}(m)$ yields higher-spin defect operators $\bar{Y}_{j}$ whose eigenvalues on $\mathsf{W}^{u}_{k,\delta}$ are \begin{align*} \bar{Y}_{j} = \frac{\sin((2j+1)\phi)}{\sin \phi}, \end{align*} where $\delta \equiv e^{i \phi}$. 
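Via~\eqref{eq:Yj-U}, both the explicit low-spin expansions above and the fusion rule~\eqref{eq:Y-fusion} are identities between Chebyshev polynomials of the second kind, so they can be checked mechanically. The snippet below is a standalone verification (our own; the helper names are invented), using sympy:

```python
import sympy as sp

Y = sp.symbols('Y')

def Yj(j):
    """Y_j = U_{2j}(Y/2), cf. eq. (eq:Yj-U); j runs over half-integers."""
    return sp.expand(sp.chebyshevu(sp.Integer(2*j), Y/2))

# the explicit low-spin expansions quoted in the text
assert Yj(sp.Rational(1, 2)) == Y
assert Yj(1) == Y**2 - 1
assert Yj(sp.Rational(3, 2)) == Y**3 - 2*Y
assert Yj(2) == Y**4 - 3*Y**2 + 1

# fusion rule Y_j Y_k = sum_{r=|j-k|}^{j+k} Y_r, with r increasing in unit steps
def fused(j, k):
    r, total = abs(j - k), sp.Integer(0)
    while r <= j + k:
        total += Yj(r)
        r += 1
    return sp.expand(total)

half = sp.Rational(1, 2)
for j in (half, 1, 3*half, 2):
    for k in (half, 1, 3*half):
        assert sp.expand(Yj(j) * Yj(k)) == fused(j, k)
```

The same check applies verbatim to the operators $\bar{Y}_{j}$, which satisfy the same Chebyshev relations.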
They satisfy a similar fusion rule: \begin{equation}\label{eq:bY-fusion} \bar{Y}_{j}\cdot \bar{Y}_{k} = \sum_{r = |j-k|}^{j+k} \bar{Y}_{r}. \end{equation} The algebra generated by $Y_j$ and $\bar{Y}_k$ will be called \textit{the symmetric center} $\mathsf{Z}_{\mathrm{sym}}$; this name will be justified in the next subsection. In other words, the images of the two maps $Y^m$ and $\bar{Y}^m$ generate $\mathsf{Z}_{\mathrm{sym}}$ as claimed above. We finally note that for the ``mixed" fusion ${Y}_{j}\cdot \bar{Y}_{k}$ there is no interesting decomposition, or rather only a trivial one, and the element ${Y}_{j}\cdot \bar{Y}_{k}$ has to be thought of as one of the basis elements in $\mathsf{Z}_{\mathrm{sym}}$. Of course all the other products in the algebra $\mathsf{Z}_{\mathrm{sym}}$ can now be decomposed over ${Y}_{j}$, $ \bar{Y}_{k}$, and ${Y}_{j}\cdot \bar{Y}_{k}$ using~\eqref{eq:Y-fusion} and~\eqref{eq:bY-fusion}. \subsection{Relation to symmetric polynomials.}\label{sec:JM-elements} While identifying the topological defect operators with the hoop operators is an intuitive choice, there are many other known central elements which could also lead to topological defects. These are built from the so-called Jucys-Murphy elements; let \begin{equation} J_{1} \equiv \bar{b}, \qquad J_{i} \equiv g_{i-1}J_{i-1}g_{i-1} = (-\mathfrak{q})^{-3/2} \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (3,2) .. controls (3,1) .. (7,1); \foreach \vec{r} in {0,2,4,6}{ \draw[white, line width = 3pt] (\vec{r},0) -- (\vec{r},2); \draw[black, line width = 1pt] (\vec{r},0) -- (\vec{r},2); } \draw[white, line width = 3pt] (-1,1) .. controls (3,1) .. (3,0); \draw[black, line width = 1pt] (-1,1) .. controls (3,1) .. 
(3,0); \filldraw[white] (-3/2,0)-- (-1/2,0) -- (-1/2,2) -- (-3/2,2) -- (-3/2,0); \filldraw[white] (15/2,0)-- (13/2,0) -- (13/2,2) -- (15/2,2) -- (15/2,0); \draw[black, line width = 2pt] (-1/2,0) -- (13/2,0); \draw[black, line width = 2pt] (-1/2,2) -- (13/2,2); \foreach \vec{r} in {1,5}{ \node[anchor = south] at (\vec{r}, 0) {$\hdots$}; \node[anchor = north] at (\vec{r}, 2) {$\hdots$}; } \draw[decorate, decoration = {brace, mirror, amplitude = 4 pt}, yshift = -3pt] (-.5,-.5) -- (2.5,-.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \draw[decorate, decoration = {brace, mirror, amplitude = 4 pt}, yshift = -3pt] (3.5,-.5) -- (6.5,-.5) node [midway,yshift = -7pt] {\footnotesize{n-i}}; \end{tikzpicture} \;, \qquad i = 2,\hdots n \end{equation} \begin{equation} M_{1} \equiv b, \qquad M_{i} \equiv g_{i-1}M_{i-1}g_{i-1} = (-\mathfrak{q})^{-3/2} \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (3,2) .. controls (3,1) .. (-1,1); \foreach \vec{r} in {0,2,4,6}{ \draw[white, line width = 3pt] (\vec{r},0) -- (\vec{r},2); \draw[black, line width = 1pt] (\vec{r},0) -- (\vec{r},2); } \draw[white, line width = 3pt] (7,1) .. controls (3,1) .. (3,0); \draw[black, line width = 1pt] (7,1) .. controls (3,1) .. 
(3,0); \filldraw[white] (-3/2,0)-- (-1/2,0) -- (-1/2,2) -- (-3/2,2) -- (-3/2,0); \filldraw[white] (15/2,0)-- (13/2,0) -- (13/2,2) -- (15/2,2) -- (15/2,0); \draw[black, line width = 2pt] (-1/2,0) -- (13/2,0); \draw[black, line width = 2pt] (-1/2,2) -- (13/2,2); \foreach \vec{r} in {1,5}{ \node[anchor = south] at (\vec{r}, 0) {$\hdots$}; \node[anchor = north] at (\vec{r}, 2) {$\hdots$}; } \draw[decorate, decoration = {brace, mirror, amplitude = 4 pt}, yshift = -3pt] (-.5,-.5) -- (2.5,-.5) node [midway,yshift = -7pt] {\footnotesize{i-1}}; \draw[decorate, decoration = {brace, mirror, amplitude = 4 pt}, yshift = -3pt] (3.5,-.5) -- (6.5,-.5) node [midway,yshift = -7pt] {\footnotesize{n-i}}; \end{tikzpicture} \;, \qquad i = 2,\hdots n \end{equation} It is straightforward, though tedious, to prove that the $J$s commute with each other, and so do the $M$s; furthermore, if $P(x_1, \hdots, x_{n})$ is a symmetric Laurent polynomial, then $P((-\mathfrak{q})J_{1}, \hdots, (-\mathfrak{q})^{n}J_{n})$ and $P((-\mathfrak{q})M_{1}, \hdots, (-\mathfrak{q})^{n}M_{n})$ are central in $\atl{n}$. All of these can be generated from the power-sum symmetric polynomials \begin{equation} C_{k}(n) = \sum_{i=1}^{n} ((-\mathfrak{q})^{i+1} M_{i})^{k}, \qquad \bar{C}_{k}(n) = \sum_{i=1}^{n} ((-\mathfrak{q})^{i+1} J_{i})^{k}. 
\end{equation} However, it turns out that these are related to the hoop operators through the following relations: \begin{equation}\label{eq:JMandTopD1} C_{k}(n) + C_{-k}(n) = (-\mathfrak{q})^{- n k}\bar{C}_{k}(n) + (-\mathfrak{q})^{n k}\bar{C}_{-k}(n)= 2 [n]_{k} \mathsf{T}_{k}(\bar{Y}/2), \end{equation} \begin{equation}\label{eq:JMandTopD2} \bar{C}_{k}(n) + \bar{C}_{-k}(n) = (-\mathfrak{q})^{- n k}C_{k}(n) + (-\mathfrak{q})^{ n k}C_{-k}(n) = 2 [n]_{k} \mathsf{T}_{k}(Y/2), \end{equation} where we defined $$ [n]_{k} \equiv \frac{ (-\mathfrak{q})^{k n} - (-\mathfrak{q})^{-k n}}{(-\mathfrak{q})^{k} - (-\mathfrak{q})^{-k}}, $$ with the convention that $[n]_{0} \equiv n$, and $\mathsf{T}_{k}(x)$ is the $k$th Chebyshev polynomial of the first kind. The proof of these identities can be found in Appendix \ref{sec:rigor.JM}. If $(-\mathfrak{q})^{n k} \neq 1$, these relations can be combined to find \begin{equation} ((-\mathfrak{q})^{k} - (-\mathfrak{q})^{-k})C_{k}(n) = 2\left((-\mathfrak{q})^{k n} \mathsf{T}_{|k|}( Y/2 ) - \mathsf{T}_{|k|}( \bar{Y}/2 )\right), \end{equation} \begin{equation} ((-\mathfrak{q})^{k} - (-\mathfrak{q})^{-k})\bar{C}_{k}(n) = 2 \left((-\mathfrak{q})^{k n} \mathsf{T}_{|k|}( \bar{Y}/2 ) - \mathsf{T}_{|k|}( Y/2 )\right). \end{equation} Finally, using the properties of the Chebyshev polynomials, it follows that \begin{equation} Y_{k/2} = \sum_{\underset{\text{step } = 2}{j = 1-k}}^{k-1} \frac{1}{[n]_{j}}\bar{C}_{j}(n). 
\end{equation} \section{Lattice topological defects: direct channel}\label{sec:4} \begin{figure} \begin{center} \begin{tikzpicture}[baseline={(current bounding box.center)}] \foreach \vec{r} in {0,1,2,4,5,6}{ \draw[black, line width = 1pt] (.707*\vec{r}, -.5) -- (.707*\vec{r}, .5 + .707*4); }; \foreach \s in {0,1,3,4}{ \draw[black, line width = 1pt] (-.5, .707*\s) -- (.5 + .707*1, .707*\s); \draw[black, line width = 1pt] (.5+.707*4, .707*\s) -- (.5 + .707*6, .707*\s); }; \draw[white, line width = 3pt] (-.5, .707*2) -- (.5 + .707*6, .707*2); \draw[red, line width = 1pt] (-.5, .707*2) -- (.5 + .707*6, .707*2); \foreach \s in {0,1,3,4}{ \node at (0 + 0.707*3,0.707*\s) {$\hdots$}; \draw[line width = 2] (0,0.707*\s -0.353553) -- (4*.707,0.707*\s -0.353553); \draw[line width = 2] (-.354553,0.707*\s + 0.353553) -- (13*.353553,0.707*\s + 0.353553); \foreach \vec{r} in {0,1,2,4,5,6}{ \node at (0 + 0.707*\vec{r} ,0.707*\s) {\rutiles{x}{45}}; }; }; \end{tikzpicture} $\qquad \overset{S}{\leftrightarrow} \qquad $ \begin{tikzpicture}[rotate = -90,baseline={(current bounding box.center)}] \foreach \vec{r} in {0,1,2,4,5,6}{ \draw[black, line width = 1pt] (.707*\vec{r}, -0.5) -- (.707*\vec{r}, .5 + .707*4); }; \foreach \s in {0,1,3,4}{ \draw[black, line width = 1pt] (-.5, .707*\s) -- (.5 + .707*1, .707*\s); \draw[black, line width = 1pt] (.5+.707*4, .707*\s) -- (.5 + .707*6, .707*\s); }; \draw[white, line width = 3pt] (-.5, .707*2) -- (.5 + .707*6, .707*2); \draw[red, line width = 1pt] (-.5, .707*2) -- (.5 + .707*6, .707*2); \foreach \s in {0,1,3,4}{ \node at (0 + 0.707*3,0.707*\s) {$\vdots$}; \draw[line width = 2] (0,0.707*\s -0.353553) -- (4*.707,0.707*\s -0.353553); \draw[line width = 2] (-.354553,0.707*\s + 0.353553) -- (13*.353553,0.707*\s + 0.353553); \foreach \vec{r} in {0,1,2,4,5,6}{ \node at (0 + 0.707*\vec{r} ,0.707*\s) {\rutiles{x}{45}}; }; }; \end{tikzpicture}, \caption{The modular $S$-transformation, which is the lattice rotation by $90^o$, sends a defect $Y$ (in red) 
in the crossed channel to a defect in the direct channel, and vice versa.}\label{DMfig3} \end{center} \end{figure} In this section, we are interested in the interpretation of the previously introduced defects $Y_j$ and $\bar{Y}_j$ in the direct channel, or in their Hamiltonian realization. The action of the defect $Y_{1/2}$ in the direct channel can be inferred by a simple modular transformation -- that is, a rotation by $90^o$ as in Fig.~\ref{DMfig3}. What this means microscopically is that we should have a system where, on top of the usual TL interaction terms, we have an extra line that simply goes over/under the others, and this contributes defect terms to the Hamiltonian. The Hamiltonian with defects can be obtained as the logarithmic derivative at $x = 1$ of the transfer matrix $T_{n}(x;m)$ in Fig.~\ref{fig:transfermatrix.defect}, in the case where the defect $\bar{Y}_{m/2}$ has been rotated. In this case, we obtain the Hamiltonian on $n+m$ sites \begin{equation*} H^{u} = \sum_{j = 1}^{n-1} e^{(n+m)}_{j} + \mu_{n,m}^{-1} e^{(n+m)}_{n} \mu_{n,m} \rho, \end{equation*} where $\mu_{n,m} = g_{n}g_{n+1}\hdots g_{n+m}$ and the idempotent $\rho$ is the JW idempotent $W^{m+1}_{1}$. We are interested in the spectral problem of $H^{u}$. It is important to note that this Hamiltonian can be written as $$ H^{u} = \phi^{u}_{n,m}\big(\sum_{j=1}^{n}e^{(n)}_{j}\big), $$ i.e.\ as the image of the standard periodic TL Hamiltonian $$H_{n}= \sum_{j=1}^{n} e_{j}$$ on $n$ sites under the embedding map $\phi^{u}_{n,m}$. To solve the spectral problem, we present an algebraic construction linking the spectrum of $H^{u}$ with the spectrum of the standard Hamiltonian $H_{n}$ acting on a certain fusion quotient module of $\atl{n}(\mathfrak{q})$. This requires some preparation and the algebraic discussion below. 
We then come back to the spectral problem in Section~\ref{sec:4.4} with the final result formulated in Theorem~\ref{eq:thm-H}, and then provide an explicit example based on the twisted XXZ chains in Section~\ref{eq:sec4-ex}. \begin{figure} \begin{center} $T_{n}(x;m) = \;\; $ \begin{tikzpicture}[scale = 1, baseline={(current bounding box.center)}] \draw[black, line width = 2pt] (1,.5) -- (10,.5); \draw[black, line width = 2pt] (1,1.5) -- (10,1.5); \foreach \vec{r} in {3,9}{ \node at (\vec{r}, 1) {$\hdots $}; } \node at (6, .75) {$\hdots $}; \node at (6, 1.25) {$\hdots $}; \draw[black, line width = 1pt] (5,.5) -- (5,1.5); \draw[black, line width = 1pt] (5+.5,.5) -- (5+.5,1.5); \draw[black, line width = 1pt] (7-.5,.5) -- (7-.5,1.5); \draw[black, line width = 1pt] (7,.5) -- (7,1.5); \draw[white, line width = 3pt] (4,1) -- (8,1); \draw[black, line width = 1pt] (4,1) -- (8,1); \foreach \vec{r} in {1,2,4,8,10}{ \node at (\vec{r},1) {\correcttiles{x}{45}{.707}}; } \draw[decorate, decoration = {brace, mirror, amplitude = 4 pt}, yshift = -5pt] (5,.5) -- (7,.5) node [midway,yshift = -7pt] {\footnotesize{m}}; \end{tikzpicture} \caption{$T_{n}(x;m)$ is a transfer matrix carrying a defect of width $m$ going under the other lines; taking its logarithmic derivative evaluated at $x = 1$ yields the Hamiltonian $H^{u}_{n}$ (up to a normalization factor) with $\rho = 1$.}\label{fig:transfermatrix.defect} \end{center} \end{figure} \medskip Let us begin with the idea behind the two algebraic constructions formulated below. 
Adding the extra lines/defects in the direct channel can be realised as a functor that combines a module of $\atl{}$ (the bulk model) with a module of $\tl{}$ (the defect) into a new module of $\atl{}$ (the bulk model with a defect); it turns out that there are (at least) two natural ways of doing this: one can \textsl{add} new strands carrying the defect to the module, a process we call the \emph{fusion product}, or one can \textsl{impose} the defect on an existing part of the module, a process we call the \emph{fusion quotient}. \subsection{The fusion product} {\center \emph{This section uses the notation introduced in Section~\ref{sec:def.tower}} } Let $m,k$ both be positive integers; we give $\atl{m+k}$ the structure of a $(\atl{m+k}, \atl{m} \otimes_{\mathbb{C}} \tl{k})$ bimodule by letting $\atl{m+k}$ act on the left through the natural representation, and $\atl{m} \otimes_{\mathbb{C}} \tl{k}$ act on the right through the morphism $\phi^{u/o}_{m,k} \otimes_{\mathbb{C}} \psi^{o/u}_{k,m}$, where we identified $\tl{k}$ with its image in $\atl{k}$. For $M$ an $\atl{m}$-module and $V$ a $\tl{k}$-module, our definition of the fusion product can then be written \begin{equation} M \times^{u/o}_{f} V \equiv \atl{m+k} \otimes_{\atl{m} \otimes_{\mathbb{C}} \tl{k}} \left(M \otimes_{\mathbb{C}} V \right), \end{equation} where the superscript $u/o$ denotes which one of $\phi^{u/o}_{m,k}$ we used to define the bimodule structure of $\atl{m+k} $. From a more physical point of view, this corresponds to having a bulk model described by $M$ which contains an isolated sub-system $V$, such that they are both entirely blind to each other, so that the Hilbert space of the system is simply the tensor product $M \otimes_{\mathbb{C}} V$; at some point one then removes the barrier between the two sub-systems, letting $V$ \emph{propagate} freely inside $M$. 
Note also that this fusion is related to, though different from, the ones introduced previously (see Appendix~\ref{sec:previousfusions} for more details). Before giving the general result, we give a small example and compute the fusion product of two standard modules $\mathsf{W}_{1/2,z}(3) \times^{o}_{f} \mathsf{S}_{1/2}(1) $. Since the standard modules are cyclic, their fusion is cyclic as well, and thus $\mathsf{W}_{1/2,z}(3) \times^{o}_{f} \mathsf{S}_{1/2}(1) = \atl{4}x$, with \begin{equation} x =\; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (3,3) -- (3,1); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) -- (4,3); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} \;, \end{equation} where we also introduced our diagram notation for the fusion product: the diagram at the bottom is the element of $\atl{4}$, the one on the top left corner is the element of $\mathsf{W}_{1/2,z}(3) $, and the one on the top right corner is the element of $\mathsf{S}_{1/2}(1)$. Since this module is cyclic, we can choose a basis of the form $\lbrace a_{i} x | i= 1, ... \rbrace $ for some subset $\lbrace a_{i} \rbrace \subset \atl{4}$; in the case at hand the simplest choice is \begin{equation*} a_{1} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (2,3) .. 
controls (2,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (1,3) -- (1,1); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (4,2) .. (4,1); \end{tikzpicture} \qquad a_{2} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (2,3) .. controls (2,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (1,3) -- (1,1); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (4,2) .. (4,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \end{tikzpicture} \qquad a_{3} =\; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (2,3) .. controls (2,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (3,2) .. (3,1); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (4,2) .. (4,1); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (2,2) .. (2,1); \end{tikzpicture} \qquad a_{4} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (3,2) .. (3,1); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (5,2) .. 
(5,1); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (4.5,1) -- (5.5,1) -- (5.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture} \end{equation*} \begin{equation*} a_{5} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (3,2) .. (3,1); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (4.5,1) -- (5.5,1) -- (5.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture}\quad a_{6} =\; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (2,1) .. controls (2,2) and (3,2) .. (3,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (5,2) .. (5,1); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (4.5,1) -- (5.5,1) -- (5.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture}\quad a_{7} = \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,1) .. controls (1,1.75) and (0,1.75) .. 
(0,1); \draw[black, line width = 1pt] (2,1) .. controls (2,2.25) and (-1,2.25) .. (-1,1); \draw[black, line width = 1pt] (4,1) .. controls (4,1.75) and (5,1.75) .. (5,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2.25) and (6,2.25) .. (6,1); \filldraw[white] (-1.5,1) -- (.5,1) -- (.5,3) -- (-1.5,3) -- (-1.5,1); \filldraw[white] (4.5,1) -- (6.5,1) -- (6.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture} \quad a_{8} = \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,1) .. controls (1,1.75) and (2,1.75) .. (2,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2.25) and (-1,2.25) .. (-1,1); \draw[black, line width = 1pt] (4,1) .. controls (4,1.75) and (5,1.75) .. (5,1); \filldraw[white] (-1.5,1) -- (.5,1) -- (.5,3) -- (-1.5,3) -- (-1.5,1); \filldraw[white] (4.5,1) -- (6.5,1) -- (6.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture} \end{equation*} \begin{equation*} a_{9} = \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,1) .. controls (1,1.75) and (0,1.75) .. (0,1); \draw[black, line width = 1pt] (2,1) .. controls (2,2.25) and (6,2.25) .. (6,1); \draw[black, line width = 1pt] (4,1) .. controls (4,1.75) and (3,1.75) .. 
(3,1); \filldraw[white] (-1.5,1) -- (.5,1) -- (.5,3) -- (-1.5,3) -- (-1.5,1); \filldraw[white] (4.5,1) -- (6.5,1) -- (6.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture} \qquad a_{10} = \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,3) .. controls (2,2.25) and (3,2.25) .. (3,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (2,1) .. controls (2,1.75) and (3,1.75) .. (3,1); \draw[black, line width = 1pt] (1,1) .. controls (1,2.25) and (4,2.25) .. (4,1); \filldraw[white] (-1.5,1) -- (.5,1) -- (.5,3) -- (-1.5,3) -- (-1.5,1); \filldraw[white] (4.5,1) -- (6.5,1) -- (6.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \end{tikzpicture}. \end{equation*} It is not trivial at all to show that this set is sufficient, for instance, can $e_{4} a_{2}x$ really be expressed as a linear combination of $a_{i}x$? Indeed it can: \begin{equation}\label{eq:exfusion1} e_{4} a_{2} x \; = \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[black, line width = 1pt] (3,3) .. controls (3,1.5) and (0,1.5) .. (0,3); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (5,2) .. (5,3); \foreach \vec{r} in {0,2,4}{ \draw[ black, line width = 1pt] (\vec{r}, 0) .. controls (\vec{r}, 1) and (\vec{r} + 1, 1) .. (\vec{r} + 1, 0); } ; \filldraw[white] (-.5,0) -- (.5,0) -- (.5,3) -- (-.5,3) -- (-.5,0); \filldraw[white] (4.5,0) -- (5.5,0) -- (5.5,3) -- (4.5,3) -- (4.5,0); \draw[black, line width = 2pt] (.5,0) -- (4.5,0); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. 
(2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} \; = z^{-1} \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[black, line width = 1pt] (3,3) .. controls (3,1.5) and (0,1.5) .. (0,3); \draw[black, line width = 1pt] (4,3) .. controls (4,2) and (5,2) .. (5,3); \foreach \vec{r} in {0,2,4}{ \draw[ black, line width = 1pt] (\vec{r}, 0) .. controls (\vec{r}, 1) and (\vec{r} + 1, 1) .. (\vec{r} + 1, 0); } ; \filldraw[white] (-.5,0) -- (.5,0) -- (.5,3) -- (-.5,3) -- (-.5,0); \filldraw[white] (4.5,0) -- (5.5,0) -- (5.5,3) -- (4.5,3) -- (4.5,0); \draw[black, line width = 2pt] (.5,0) -- (4.5,0); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) .. controls (.5,5) and (-.5,5) .. (-.5,5); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \draw[black, line width = 1pt] (.5,6) .. controls (.5,5) and (1.5,5) .. (1.5,6); \draw[black, line width = 1pt] (2.5,6) .. controls (2.5,5) and (3.5,5) .. (3.5,6); \filldraw[white] (-1,4) -- (0,4) -- (0,6) -- (-1,6) -- (-1,4); \filldraw[white] (3,4) -- (4,4) -- (4,6) -- (3,6) -- (3,4); \draw[black, line width = 1pt] (.5,6) -- (.5,8); \draw[black, line width = 1pt] (1.5,6) .. controls (1.5,7) and (2.5,7) .. (2.5,6); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (0,6) -- (3,6); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} \; = z^{-1} \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,0) and (2,0) .. (2,1); \draw[black, line width = 1pt] (3,1) .. controls (3,-.5) and (0,-.5) .. 
(0,1); \draw[black, line width = 1pt] (4,1) .. controls (4,0) and (5,0) .. (5,1); \foreach \vec{r} in {0,2,4}{ \draw[ black, line width = 1pt] (\vec{r}, -2) .. controls (\vec{r}, -1) and (\vec{r} + 1, -1) .. (\vec{r} + 1, -2); } ; \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (3,3) .. controls (3,2) and (5,2) .. (5,3); \draw[white, line width = 3pt] (4,1) -- (4,3); \draw[black, line width = 1pt] (4,1) -- (4,3); \filldraw[white] (-.5,-2) -- (.5,-2) -- (.5,3) -- (-.5,3) -- (-.5,-2); \filldraw[white] (4.5,-2) -- (5.5,-2) -- (5.5,3) -- (4.5,3) -- (4.5,-2); \draw[black, line width = 2pt] (.5,-2) -- (4.5,-2); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} \; = -(-\mathfrak{q})^{3/2} z^{-1} a_{6}x, \end{equation} where the last equality was obtained by using the closed braid identity \begin{equation} \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (0,0) -- (2,2); \draw[white, line width = 3pt] (0,2) -- (2,0); \draw[black, line width = 1pt] (0,2) -- (2,0); \draw[black, line width = 1pt] (2,0) .. controls (3,0) and (3,2) .. (2,2); \end{tikzpicture} \; = \; -(-\mathfrak{q})^{3/2} \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (2,0) .. controls (3,0) and (3,2) .. (2,2); \end{tikzpicture} \;. 
\end{equation} Using similar tricks, every element can be brought to a linear combination of the $a_{i}x$. Note that it is clear that any element $a \in \atl{4}$ acting on a linear combination of $a_{5}x, a_{6}x, \hdots, a_{10}x $ will result in another linear combination of those same basis elements; in other words, $ \lbrace a_{i} x| i=5,6,\hdots, 10 \rbrace$ generates a submodule of $\mathsf{W}_{1/2,z}(3) \times^{o}_{f} \mathsf{S}_{1/2}(1) $, which we immediately recognize as $\mathsf{W}_{0, z_{-}}(4)$ for some $z_{-} \in \mathbb{C}^{*}$. By definition, $z_{-} + z_{-}^{-1}$ is the weight of the non-contractible loops in the standard module, which must be equal to the eigenvalue of $\bar{Y}$; however, $\phi^{o}_{3,1}(\bar{Y}^{(3)}) = \bar{Y}^{(4)} $, so this eigenvalue must be $z (-\mathfrak{q})^{-1/2} + z^{-1} ( -\mathfrak{q} )^{1/2}$ (the eigenvalue of $\bar{Y}$ on $\mathsf{W}_{1/2,z}(3)$). We thus conclude\footnote{By definition $\mathsf{W}_{0,z} = \mathsf{W}_{0,z^{-1}} $ so there is no ambiguity here.} that $z_{-} = z(-\mathfrak{q})^{-1/2}$. Similarly, the quotient of $\mathsf{W}_{1/2,z}(3) \times^{o}_{f} \mathsf{S}_{1/2}(1) $ by this submodule yields the standard module $\mathsf{W}_{1,z_{+}}(4) $. By definition, in $\mathsf{W}_{1,z_{+}}(4) $ \begin{equation} \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (1,2) .. (1,3); \draw[black, line width = 1pt] (5,1) .. controls (5,2) and (2,2) ..
(2,3); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (5.5,1) -- (4.5,1) -- (4.5,3) -- (5.5,3) -- (5.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (2.5,3); \end{tikzpicture} \equiv z_{+}\; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) -- (1,3); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (2,2) .. (2,3); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (2.5,3); \end{tikzpicture} \;. \end{equation} By contrast, in $\mathsf{W}_{1/2,z}(3) \times^{o}_{f} \mathsf{S}_{1/2}(1) $ \begin{equation} \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (5,1) .. controls (5,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (5.5,1) -- (4.5,1) -- (4.5,3) -- (5.5,3) -- (5.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} = \mathfrak{q} \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. 
(0,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) .. controls (4,2) and (5,2) .. (5,1); \draw[black, line width = 1pt] (3,3) .. controls (3,2) and (4,2) .. (4,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (5.5,1) -- (4.5,1) -- (4.5,3) -- (5.5,3) -- (5.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture} + z (-\mathfrak{q})^{1/2} \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (3,2) .. (3,3); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (4,1) -- (4,3); \draw[black, line width = 1pt] (1,3) .. controls (1,2) and (2,2) .. (2,3); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (5.5,1) -- (4.5,1) -- (4.5,3) -- (5.5,3) -- (5.5,1); \draw[black, line width = 2pt] (.5,1) -- (4.5,1); \draw[black, line width = 2pt] (.5,3) -- (4.5,3); \draw[black, line width = 1pt] (.5,4) -- (.5,6); \draw[black, line width = 1pt] (1.5,4) .. controls (1.5,5) and (2.5,5) .. (2.5,4); \node[anchor = south] at (3.5,4) {$\otimes$}; \draw[black, line width = 1pt] (4.5,4) -- (4.5,6); \draw[black, line width = 2pt] (0,4) -- (3,4); \draw[black, line width = 2pt] (4,4) -- (5,4); \end{tikzpicture}, \end{equation} where we used the same trick as in equation \eqref{eq:exfusion1}. We thus conclude that $z_{+} = (-\mathfrak{q})^{1/2}z$. 
Finally, the eigenvalue of $Y^{(4)}$ on $\mathsf{W}_{0,z_{-}}(4)$ is $z_{-} + z_{-}^{-1}$, while on $\mathsf{W}_{1,z_{+}}(4)$ it is $(-\mathfrak{q}) z_{+} + (-\mathfrak{q})^{-1} z_{+}^{-1}$, so this fusion product cannot be indecomposable unless \begin{equation} (-\mathfrak{q}) z_{+} + (-\mathfrak{q})^{-1} z_{+}^{-1} = z_{-} + z_{-}^{-1} \iff z^{2} = (-\mathfrak{q})^{2} \text{ or } (-\mathfrak{q})^{2} = 1. \end{equation} It follows that, if $\mathfrak{q}$ and $z$ are \emph{generic}, \begin{equation}\label{eq:fusionprod.ex1} \mathsf{W}_{1/2,z}(3) \times_{f}^{o}\mathsf{S}_{1/2}(1) \simeq \mathsf{W}_{0, z (-\mathfrak{q})^{-1/2}}(4) \oplus \mathsf{W}_{1, z (-\mathfrak{q})^{1/2}}(4). \end{equation} What if the parameters are not generic? If $z^{2} = (-\mathfrak{q})^{2} $, a direct calculation shows that the defect operator $Y$ has a Jordan block linking the two standard modules, and this fusion product is indecomposable. However, these new indecomposable modules are, for the moment, largely unclassified, and we plan to come back to this question in the near future.
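A quick dimension count corroborates \eqref{eq:fusionprod.ex1}. Assuming the usual link-pattern formula $\dim \mathsf{W}_{k,z}(n) = \binom{n}{n/2-k}$ for the affine standard modules (an assumption here, as the formula is not recalled above), the two summands have dimensions $6$ and $4$, matching the six generators $a_{5}x,\dots,a_{10}x$ of the submodule and the four representatives $a_{1}x,\dots,a_{4}x$ of the quotient:

```python
from math import comb

def dim_W(two_k, n):
    # Assumed dimension formula for the affine TL standard module W_{k,z}(n):
    # number of link patterns on n sites with 2k = two_k through lines,
    # i.e. binom(n, (n - 2k)/2).  Not stated in the text above.
    if two_k > n or (n - two_k) % 2:
        return 0
    return comb(n, (n - two_k) // 2)

# W_{1/2,z}(3) x_f S_{1/2}(1) is spanned by the ten elements a_1 x, ..., a_10 x
assert dim_W(1, 3) == 3                  # the standard module we started from
assert dim_W(0, 4) == 6                  # submodule spanned by a_5 x, ..., a_10 x
assert dim_W(2, 4) == 4                  # quotient, represented by a_1 x, ..., a_4 x
assert dim_W(0, 4) + dim_W(2, 4) == 10   # total matches the ten basis diagrams
```
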
\newcommand{\ATL}[1]{\mathsf{T}^a_{#1}} \newcommand{\TL}[1]{TL_{#1}} \newcommand{\StJTL}[2]{\mathcal{W}_{#1,#2}} \newcommand{\StTL}[1]{\mathcal{W}_{#1}} More generally, we find that (for generic values of the parameters) \begin{equation} \mathsf{W}^{u}_{k ,\delta} \times^{o}_f \mathsf{S}_T \simeq \bigoplus_{ i = k - T }^{k+T} \mathsf{W}^{u}_{i ,\delta} \simeq \bigoplus_{i=k-T}^{k+T} \mathsf{W}_{i,(-\mathfrak{q})^{(i-k)}\delta},\label{Jon1} \end{equation} \begin{equation}\label{eq:fusionproduct.standard2} \mathsf{W}^{o}_{k ,\delta} \times^{u}_f \mathsf{S}_T \simeq \bigoplus_{ i = k - T}^{k+T} \mathsf{W}^{o}_{i ,\delta} \simeq \bigoplus_{i=k-T}^{k+T} \mathsf{W}_{i,(-\mathfrak{q})^{(k-i)}\delta}. \end{equation} As shown in Appendix~\ref{app:C}, these results can also be derived by following the approach in \cite{GJS}, that is, by first establishing the branching rules from $\atl{n_1+n_2}$ to $\atl{n_1} \otimes \tl{n_2}$ and then inferring the corresponding fusion product from Frobenius reciprocity. \subsection{The fusion quotient} The fusion product defined in the previous section implicitly assumed that the affine module $M$ was only a left $\atl{m}$-module. If $M$ is an $\atl{m}$-bimodule, then the fusion product $M \times^{u/o}_{f} V$ will also be an $(\atl{m+k}, \atl{m})$-bimodule; given a left $\atl{m+k}$-module $W$ and a left $\tl{k}$-module $V$, we define the fusion quotient by \begin{equation} W \div_{f}^{u/o} V \equiv \mathsf{Hom}_{\atl{m+k}} \left( \atl{m}\times_{f}^{o/u} V, W \right). \end{equation} Because $ \atl{m}\times_{f}^{o/u} V $ is a \emph{right} $\atl{m}$-module, this $\mathsf{Hom}$ group is naturally a \emph{left} $\atl{m}$-module. Here is an example showing that the construction is quite natural despite its abstract definition.
Let $V = \tl{k}$ seen as a left module; one finds \begin{equation*} \atl{m} \times_{f} V= \lbrace a x_{0}| a\in \atl{k+m} \rbrace, \qquad x_{0} = 1_{\atl{k+m}}\otimes_{\atl{m}\otimes_{\mathbb{C}} \tl{k}} (1_{\atl{m}} \otimes_{\mathbb{C}} 1_{\tl{k}}), \end{equation*} \begin{equation} W \div_{f} \tl{k} \simeq \lbrace f_{y}: a x_{0} \mapsto a y \,|\, y \in W \rbrace, \qquad af_{y} \equiv f_{ay}. \end{equation} One can now recognize that $W \div_{f} \tl{k}$ is simply the restriction of $W$ to $\atl{m}$. More generally, if there exists an idempotent $a_{0} \in \tl{k}$ such that $V = \tl{k}a_{0}$, then the fusion quotient is the subspace $a_{0}W$ of the restriction of $W$. As a more concrete example, we compute $\mathsf{W}_{1/2,z}(3) \div^{o}_{f} \mathsf{S}_{1/2}(1) $; since $\mathsf{S}_{1/2}(1) \simeq \tl{1}$, this is simply the restriction of $\mathsf{W}_{1/2,z}(3)$ to $\atl{2}$. We start by choosing a basis of the standard module: \begin{equation} x_{1} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (1,2) .. (1,3); \draw[black, line width = 2pt] (.5,1) -- (3.5,1); \draw[black, line width = 2pt] (.5,3) -- (1.5,3); \end{tikzpicture} \quad x_{2} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (1,1) .. controls (1,2) and (0,2) .. (0,1); \draw[black, line width = 1pt] (2,1) .. controls (2,2) and (4,2) .. (4,1); \draw[white, line width = 3pt] (3,1) .. controls (3,2) and (1,2) .. (1,3); \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (1,2) ..
(1,3); \filldraw[white] (-.5,1) -- (.5,1) -- (.5,3) -- (-.5,3) -- (-.5,1); \filldraw[white] (4.5,1) -- (3.5,1) -- (3.5,3) -- (4.5,3) -- (4.5,1); \draw[black, line width = 2pt] (.5,1) -- (3.5,1); \draw[black, line width = 2pt] (.5,3) -- (1.5,3); \end{tikzpicture} \; x_{3} = \; \begin{tikzpicture}[scale = 1/3,baseline={(current bounding box.center)}] \draw[black, line width = 1pt] (3,1) .. controls (3,2) and (2,2) .. (2,1); \draw[black, line width = 1pt] (1,1) -- (1,3); \draw[black, line width = 2pt] (.5,1) -- (3.5,1); \draw[black, line width = 2pt] (.5,3) -- (1.5,3); \end{tikzpicture} \; . \end{equation} The element $x_{2}$ was chosen so that \begin{equation} \phi^{o}_{2,1}(e^{(2)}_{1}) x_{2} = ((-\mathfrak{q})^{-1/2}z +(-\mathfrak{q})^{1/2}z^{-1} )x_{1} , \qquad \phi^{o}_{2,1}(e^{(2)}_{2}) x_{2} = (\mathfrak{q} + \mathfrak{q}^{-1}) x_{2}, \qquad \phi^{o}_{2,1}(u^{(2)}) x_{2} = x_{1}. \end{equation} We thus recognize that $\lbrace x_{1}, x_{2} \rbrace $ spans a submodule isomorphic to $\mathsf{W}_{0, z (-\mathfrak{q})^{-1/2}}(2)$. Furthermore, \begin{equation} \phi^{o}_{2,1}(e^{(2)}_{1}) x_{3} = x_{1}, \qquad \phi^{o}_{2,1}(e^{(2)}_{2}) x_{3} = -(-\mathfrak{q})^{-3/2}z^{-1} x_{2}, \qquad \phi^{o}_{2,1}(u^{(2)}) x_{3} = (-\mathfrak{q})^{1/2}z x_{3} + \mathfrak{q} x_{2}, \end{equation} so the quotient $\left(\mathsf{W}_{1/2,z}(3) \div^{o}_{f} \mathsf{S}_{1/2}(1)\right)/(\mathsf{W}_{0, z (-\mathfrak{q})^{-1/2}}(2))$ is isomorphic to $\mathsf{W}_{1,(-\mathfrak{q})^{1/2}z}(2)$. Finally, comparing the eigenvalues of $Y$ on the two standard modules yields the conclusion: if $\mathfrak{q}$ and $z$ are \emph{generic}, \begin{equation} \mathsf{W}_{1/2,z}(3) \div_{f}^{o}\mathsf{S}_{1/2}(1) \simeq \mathsf{W}_{0, z (-\mathfrak{q})^{-1/2}}(2) \oplus \mathsf{W}_{1, z (-\mathfrak{q})^{1/2}}(2).
\end{equation} More generally, we find that for generic values of the parameters \begin{equation}\label{eq:fusionquotient.standard1} \mathsf{W}^{u}_{k ,\delta} (n + m) \div^{o}_f \mathsf{S}_T(m) \simeq \bigoplus_{ i = k - T}^{k+T} \mathsf{W}^{u}_{i ,\delta}(n) \simeq \bigoplus_{i=k-T}^{k+T} \mathsf{W}_{i,(-\mathfrak{q})^{(i-k)}\delta}(n), \end{equation} \begin{equation}\label{eq:fusionquotient.standard2} \mathsf{W}^{o}_{k ,\delta}(n+m) \div^{u}_f \mathsf{S}_T (m)\simeq \bigoplus_{ i = k - T}^{k+T} \mathsf{W}^{o}_{i ,\delta}(n) \simeq \bigoplus_{i=k-T}^{k+T} \mathsf{W}_{i,(-\mathfrak{q})^{(k-i)}\delta}(n). \end{equation} In these expressions it should be understood that all modules of the form $\mathsf{W}_{k ,z}(n)$ with $n < 2k$ are identified with the zero module. \subsection{Dualities between the two fusions} Aside from their possible interpretation as algebraic realisations of topological defects, the two types of fusion are of independent interest for the representation theory of $\atl{n}$. We therefore mention certain properties that they enjoy and that can be used to compute them. The first such property is that the fusion product and the fusion quotient are dual as functors, i.e.\ for any $\atl{n+m}$-module $W$, $\atl{n}$-module $V$ and $\tl{m}$-module $U$, there is a natural isomorphism \begin{equation} \mathsf{Hom}_{\atl{n+m}}\left( V \times^{u/o}_{f} U, W \right) \simeq \mathsf{Hom}_{\atl{n}}\left( V , W \div^{u/o}_{f} U \right) . \end{equation} It follows in particular that if one knows every fusion product, one can recover all the fusion quotients by using this duality, and vice versa.
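At the level of dimensions, \eqref{eq:fusionquotient.standard1} with $m=1$ and $T=1/2$ is just Pascal's rule: the quotient by $\mathsf{S}_{1/2}(1) \simeq \tl{1}$ is a plain restriction from $\atl{n+1}$ to $\atl{n}$, so $\dim \mathsf{W}_{k,\delta}(n+1)$ must equal the sum of the dimensions of the two summands. A sketch of this check, assuming (as above, this is not stated in the text) the link-pattern formula $\dim \mathsf{W}_{k,z}(n) = \binom{n}{n/2-k}$:

```python
from math import comb

def dim_W(two_k, n):
    # assumed: dim W_{k,z}(n) = binom(n, (n - 2k)/2), zero when 2k > n
    if two_k > n or (n - two_k) % 2:
        return 0
    return comb(n, (n - two_k) // 2)

# Restriction from ATL(n+1) to ATL(n) splits W_k(n+1) into
# W_{k-1/2}(n) (+) W_{k+1/2}(n); the dimensions obey Pascal's rule.
for n in range(2, 12):
    for two_k in range(1, n + 2):
        if (n + 1 - two_k) % 2:
            continue  # 2k must have the same parity as n + 1
        assert dim_W(two_k, n + 1) == dim_W(two_k - 1, n) + dim_W(two_k + 1, n)

# the worked example above: W_{1/2,z}(3) restricted to ATL(2)
assert dim_W(1, 3) == dim_W(0, 2) + dim_W(2, 2) == 3
```
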
The second property we mention is associativity: for every $\atl{n}$-module $W$, $\tl{k}$-module $V$ and $\tl{m}$-module $U$, \begin{equation} (W \times_{f}^{u/o} V) \times_{f}^{u/o} U \simeq W \times_{f}^{u/o} ( V \times_{f}^{r} U ) \simeq (W \times_{f}^{u/o} U) \times_{f}^{u/o} V, \end{equation} where $\times^{r}_{f}$ is the fusion product in the regular Temperley-Lieb algebra, which was studied in detail in \cite{GV,BelleteteFusion}. Similarly, for every $\atl{n+k+m}$-module $W$, $\tl{k}$-module $V$ and $\tl{m}$-module $U$, \begin{equation} (W \div_{f}^{u/o} V) \div_{f}^{u/o} U \simeq W \div_{f}^{u/o} ( V \times_{f}^{r} U ) \simeq (W \div_{f}^{u/o} U) \div_{f}^{u/o} V. \end{equation} As an example, if we assume that $\mathfrak{q}$ is generic, then for all $k \geq 0$ \begin{equation} \mathsf{S}_{k}(n) \times^{r}_{f} \mathsf{S}_{0}(2) \simeq \mathsf{S}_{k}(n+2), \qquad \mathsf{S}_{k}(n) \times^{r}_{f} \mathsf{S}_{1/2}(1) \simeq \mathsf{S}_{k-1/2}(n+1) \oplus \mathsf{S}_{k + 1/2}(n+1). \end{equation} It follows that for a given $\atl{n}$-module $W$, knowing its fusion product (or quotient) with $\mathsf{S}_{0}(2m)$ and $\mathsf{S}_{1/2}(1)$ is enough to compute the fusion with all other standard modules recursively. Equations \eqref{Jon1}-\eqref{eq:fusionproduct.standard2} and \eqref{eq:fusionquotient.standard1}-\eqref{eq:fusionquotient.standard2} were obtained in this manner. \subsection{Fusion and the Hamiltonian}\label{sec:4.4} We now go back to the problem of studying the defects in the direct channel, and to the Hamiltonian $H^{u}$ introduced at the beginning of this section~\ref{sec:4}. Let $\mathsf{W}$ be an $ \atl{n+m}$-module, $\rho$ be a non-zero idempotent of $\tl{m}$, and consider the classical Hamiltonian living in $\atl{n}$: \begin{equation} H_{n} = \sum_{j=1}^{n} e^{(n)}_{j}\label{classham}.
\end{equation} Because the fusion quotient $\mathsf{W} \div^{u/o}_{f} (\tl{m}\rho)$ can be seen as a restriction of $\mathsf{W} $, one can express this Hamiltonian as an operator acting directly on $\mathsf{W}$: \begin{align} H^{u}_{n} & = \phi^{u}_{n,m}\big(H_{n} \big) = \sum_{j = 1}^{n-1} e^{(n+m)}_{j} + \mu_{n,m}^{-1} e^{(n+m)}_{n} \mu_{n,m} \rho,\label{eq:hamiltunder}\\ H^{o}_{n} & = \phi^{o}_{n,m}\big(H_{n} \big) = \sum_{j = 1}^{n-1} e^{(n+m)}_{j} + \nu_{n,m} e_{n}^{(n+m)} \nu_{n,m}^{-1}\rho,\label{eq:hamiltover} \end{align} where $\mu_{n,m} = g_{n}g_{n+1}\hdots g_{n+m}$, $\nu_{n,m} = g_{n+m}\hdots g_{n}$. If the idempotent $\rho$ is chosen to correspond to the representation of spin $m/2$, i.e.\ the standard module on $m$ strands with $m$ through lines, these are precisely the expressions obtained from spin chains with impurities. As a corollary of the preceding discussion we formulate our main result on the spectral problem of the defect Hamiltonians: \begin{Thm}\label{eq:thm-H} Let $\rho \in \tl{m}$ be an idempotent such that $\tl{m}\rho \simeq V$. Then for any $\atl{n+m}$-module $M$ the Hamiltonian $H^{u/o}_{n}$ is similar (as a matrix) to the direct sum of the classical Hamiltonian $H_{n}$ acting on $M \div_{f}^{u/o} V$ and a zero matrix of dimension $\text{dim}((1-\rho)M)$. \end{Thm} Similarly, the transfer matrix acting on this fused module is precisely the one in Fig.~\ref{fig:transfermatrix.defect} obtained by adding a cluster of lines going under (or over) the other lines in the lattice. This strongly suggests that the fusion quotient is indeed the right algebraic construction for these defects. However, it should be mentioned that for generic values of the parameters the fusion product and quotient are equivalent in the limit; it follows that, while the Hamiltonian acting on the fusion product does not have such a simple interpretation, it produces the same spectrum in the limit.
\subsection{Example of quotient: the twisted XXZ spin chain}\label{eq:sec4-ex} The twisted XXZ spin chain on $n$ sites can be realized by the Hamiltonian $H_{n}(Q)$ expressed in terms of the usual Pauli matrices acting on $(\mathbb{C}^{2})^{\otimes n}$: \begin{equation} H_{n}(Q) = \sum_{j=1}^{n} \left(\sigma^{-}_{j}\sigma^{+}_{j+1} + \sigma^{-}_{j+1}\sigma^{+}_{j} + \frac{\mathfrak{q} +\mathfrak{q}^{-1}}{4}\left(\sigma^{z}_{j}\sigma^{z}_{j+1} - 1 \right) \right) = - \sum_{j=1}^{n} e_{j}, \end{equation} where $\sigma^{\pm}_{j} = \frac{1}{2}(\sigma^{x}_{j} \pm i \sigma^{y}_{j})$ are the usual ladder operators, $Q$ is a non-zero complex number, and the boundary conditions are \begin{equation} \sigma^{z}_{n+1} \equiv \sigma^{z}_{1}, \qquad \sigma^{\pm}_{n+1} \equiv Q^{\mp 2} \sigma^{\pm}_{1}. \end{equation} The model is unitary if $Q$ lies on the unit circle in $\mathbb{C}$. The Temperley-Lieb generators are \begin{equation} -e_{j} \equiv \sigma^{-}_{j}\sigma^{+}_{j+1} + \sigma^{-}_{j+1}\sigma^{+}_{j} + \frac{\mathfrak{q} +\mathfrak{q}^{-1}}{4}\left(\sigma^{z}_{j}\sigma^{z}_{j+1} - 1 \right) + \frac{\mathfrak{q} -\mathfrak{q}^{-1}}{4} (\sigma^{z}_{j} - \sigma^{z}_{j+1}), \end{equation} with the twist \begin{equation} u = (-1)^{n/2} Q^{-\sigma^{z}_{1}} s_{1}\hdots s_{n-1}, \qquad s_{j} = \sigma^{-}_{j}\sigma^{+}_{j+1} + \sigma^{+}_{j}\sigma^{-}_{j+1} + \frac{1}{2}(\sigma^{z}_{j}\sigma^{z}_{j+1} + 1). \end{equation} A quick calculation shows that the hoop operators are\footnote{In everything that follows, one should understand that for every matrix $A$, $\mathfrak{q}^{A} \equiv (-\mathfrak{q})^{A}(-1)^{-A}$. We simplify these expressions to lighten the notation, but one should be careful when verifying these results numerically.} \begin{equation} Y = (-1)^{n}\left(\mathfrak{q}^{S_{z}}Q^{-1} + \mathfrak{q}^{-S_{z}}Q \right), \qquad\bar{Y} = \mathfrak{q}^{S_{z}}Q + \mathfrak{q}^{-S_{z}}Q^{-1}, \end{equation} with $S_{z} = \frac{1}{2} \sum_{j=1}^{n}\sigma^{z}_{j}$ the total spin.
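As a sanity check, the matrices $e_{j}$ read off from the formula for $-e_{j}$ above do satisfy the Temperley-Lieb relations $e_{j}^{2} = (\mathfrak{q}+\mathfrak{q}^{-1})e_{j}$ (consistent with the eigenvalue $\mathfrak{q}+\mathfrak{q}^{-1}$ appearing in the previous subsection) and $e_{j}e_{j\pm1}e_{j} = e_{j}$. A numerical sketch for the bulk generators, with $\mathfrak{q} = 0.7$ an arbitrary generic choice:

```python
import numpy as np

q = 0.7                 # arbitrary generic value of the parameter q (assumption)
I2 = np.eye(2)
sz = np.array([[1., 0.], [0., -1.]])
sp = np.array([[0., 1.], [0., 0.]])   # sigma^+
sm = np.array([[0., 0.], [1., 0.]])   # sigma^-

# Two-site Temperley-Lieb generator, read off from the definition of -e_j:
e = -(np.kron(sm, sp) + np.kron(sp, sm)
      + (q + 1/q) / 4 * (np.kron(sz, sz) - np.eye(4))
      + (q - 1/q) / 4 * (np.kron(sz, I2) - np.kron(I2, sz)))

# TL relations, checked on three sites
e1 = np.kron(e, I2)
e2 = np.kron(I2, e)
assert np.allclose(e @ e, (q + 1/q) * e)
assert np.allclose(e1 @ e2 @ e1, e1)
assert np.allclose(e2 @ e1 @ e2, e2)
```

In the basis $(\uparrow\uparrow,\uparrow\downarrow,\downarrow\uparrow,\downarrow\downarrow)$, $e$ is a rank-one matrix supported on the middle two states, with trace $\mathfrak{q}+\mathfrak{q}^{-1}$.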
Our goal is now to impose a defect of spin $1/2$ on this chain, which according to our formalism consists in computing the fusion quotient of its Hilbert space by $\mathsf{S}_{1/2}(1) = \tl{1}$. This specific defect corresponds to a simple restriction from $\atl{n}$ to $\atl{n-1}$, so the new Hamiltonian with a defect is either \begin{equation} H^{u}_{n-1}(Q) = - \sum_{j=1}^{n-1} \phi^{u}_{n-1,1}(e^{(n-1)}_{j}) = -\sum_{j=1}^{n-2}e^{(n)}_{j} - g^{(n)}_{n}e^{(n)}_{n-1}(g^{(n)}_{n})^{-1}, \end{equation} or \begin{equation} H^{o}_{n-1}(Q) = - \sum_{j=1}^{n-1} \phi^{o}_{n-1,1}(e^{(n-1)}_{j}) = -\sum_{j=1}^{n-2}e^{(n)}_{j} - (g^{(n)}_{n})^{-1}e^{(n)}_{n-1}g^{(n)}_{n}, \end{equation} for a defect that goes under or over the other lines, respectively. Using the explicit construction of $e_{n-1}$ and $g_{n}$, one finds \begin{equation*} H^{u}_{n-1}(Q) = \sum_{j=1}^{n-1}(a^{-}_{j}a^{+}_{j+1} + a^{-}_{j+1}a^{+}_{j} + \frac{\mathfrak{q} +\mathfrak{q}^{-1}}{4}\left(a^{z}_{j}a^{z}_{j+1} - 1 \right)) + \left( (1-\mathfrak{q}^{2 a^{z}_{1}})a^{-}_{n-1} + Q^{2}(1-\mathfrak{q}^{-2 a^{z}_{n-1}})a^{-}_{1} \right)\sigma^{+}_{n}, \end{equation*} where we defined new operators $a^{k}_{j} = \sigma^{k}_{j}$, $k= z, \pm $, $j = 1, 2, \hdots, n-1$, with boundary conditions \begin{equation} a^{z}_{n} \equiv a^{z}_{1}, \qquad a^{\pm}_{n} \equiv (Q^{2} \mathfrak{q}^{-\sigma^{z}_{n}} )^{\mp 1} a^{\pm}_{1}. \end{equation} It follows that \begin{equation}\label{eq:XXZHamilup} H^{u}_{n-1}(Q) \sim \overset{\begin{array}{cc} (\hdots )\otimes | \uparrow \rangle \qquad & \qquad (\hdots )\otimes | \downarrow \rangle \end{array}}{\left( \begin{array}{cc} H_{n-1}(-Q \mathfrak{q}^{-1/2}) & \Delta \\ 0 & H_{n-1}(- Q \mathfrak{q}^{1/2}) \end{array} \right)}, \qquad \Delta = (1-\mathfrak{q}^{2 a^{z}_{1}})a^{-}_{n-1} + Q^{2}(1-\mathfrak{q}^{-2 a^{z}_{n-1}})a^{-}_{1} .
\end{equation} A straightforward calculation then shows that the defect operators are \begin{equation} Y =(-1)^{n} (\mathfrak{q}^{S_{z}}Q^{-1} + \mathfrak{q}^{-S_{z}}Q) = (-1)^{n-1} \left( \mathfrak{q}^{S_{z} - \frac{1}{2}\sigma^{z}_{n}} \left(-Q \mathfrak{q}^{-\frac{1}{2}\sigma^{z}_{n}} \right)^{-1} + \mathfrak{q}^{-S_{z} + \frac{1}{2}\sigma^{z}_{n}} \left(-Q \mathfrak{q}^{-\frac{1}{2}\sigma^{z}_{n}} \right) \right), \end{equation} \begin{equation} \bar{Y} \sim \overset{\begin{array}{cc} (\hdots )\otimes | \uparrow \rangle \qquad & \qquad (\hdots )\otimes | \downarrow \rangle \end{array}}{\left( \begin{array}{cc} Q_{-}\mathfrak{q}^{\tilde{S}_{z}} + Q^{-1}_{-}\mathfrak{q}^{-\tilde{S}_{z}} & Q (\mathfrak{q} - \mathfrak{q}^{-1})^{2} \tilde{S}_{-} \\ 0 & Q_{+}\mathfrak{q}^{\tilde{S}_{z}} + Q^{-1}_{+}\mathfrak{q}^{-\tilde{S}_{z}} \end{array} \right)}, \end{equation} where $Q_{\pm} \equiv -Q \mathfrak{q}^{\pm 1/2} $, and $\tilde{S}_{-}$, $\mathfrak{q}^{\pm \tilde{S}_{z}} $ are the standard $U_{\mathfrak{q}}(\mathfrak{sl}_{2}) $ generators on $n-1$ spins \begin{equation} \tilde{S}_{-} = \sum_{i=1}^{n-1} (\mathfrak{q})^{\sum_{j=1}^{i-1}\sigma^{z}_{j}}\sigma^{-}_{i}(\mathfrak{q})^{-\sum_{j=i+1}^{n-1}\sigma^{z}_{j}}, \qquad \mathfrak{q}^{\pm \tilde{S}_{z}} = \mathfrak{q}^{\pm\sum_{j=1}^{n-1}\sigma^{z}_{j}/2}. \end{equation} Note that $\bar{Y}$ can be diagonalized if and only if $(Q - \mathfrak{q}^{-S_z})(Q + \mathfrak{q}^{-S_z})$ is an invertible matrix, which can be verified by comparing its eigenvalues in the $\sigma^{z}_{n} = \pm 1 $ sectors. It follows in particular that the Hamiltonian \eqref{eq:XXZHamilup} cannot have a Jordan block linking the $\sigma^{z}_{n} = \pm 1 $ sectors if $Q$ is generic.
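The block-triangular form \eqref{eq:XXZHamilup} already determines the spectrum: the eigenvalues of a block upper-triangular matrix are those of its diagonal blocks, so (absent Jordan blocks between the sectors, excluded above for generic $Q$) the defect chain carries the combined spectra of the two twisted chains $H_{n-1}(-Q\mathfrak{q}^{\mp 1/2})$, regardless of the coupling $\Delta$. A generic-matrix illustration of this linear-algebra fact (the blocks below are random stand-ins, not the actual Hamiltonians):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # stands in for H_{n-1}(-Q q^{-1/2})
B = rng.standard_normal((n, n))   # stands in for H_{n-1}(-Q q^{1/2})
D = rng.standard_normal((n, n))   # stands in for the coupling Delta

H = np.block([[A, D], [np.zeros((n, n)), B]])

# The spectrum of the block upper-triangular H is the union (with
# multiplicity) of the spectra of A and B, whatever D is.
spec_H = np.sort_complex(np.linalg.eigvals(H))
spec_AB = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                          np.linalg.eigvals(B)]))
assert np.allclose(spec_H, spec_AB)
```
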
Similarly, one finds \begin{equation} H^{o}_{n-1}(Q) \sim \overset{\begin{array}{cc} (\hdots )\otimes | \uparrow \rangle \qquad & \qquad (\hdots )\otimes | \downarrow \rangle \end{array}}{\left( \begin{array}{cc} H_{n-1}(Q \mathfrak{q}^{1/2}) & 0 \\ \Delta & H_{n-1}(Q \mathfrak{q}^{-1/2}) \end{array} \right)}, \qquad \Delta = (1-\mathfrak{q}^{2 a^{z}_{1}})a^{+}_{n-1} + Q^{-2}(1-\mathfrak{q}^{-2 a^{z}_{n-1}})a^{+}_{1} . \end{equation} Note that in each of these expressions, the off-diagonal term $\Delta$ can only link sectors of $H_{n-1}(Q \mathfrak{q}^{\pm 1/2}) $ corresponding to different values of the total spin $\sum_{j=1}^{n-1}a^{z}_{j}$, because of the ladder operators appearing in it. \section{Conclusion: connection to CFT}\label{sec:5} In order to provide a lattice analogue of CFT topological defects $X$ satisfying~\eqref{centVir}, we have defined and studied, in a model-independent way, lattice operators that commute with the local interactions given by the TL elements---the central elements $Y$ and $\bar{Y}$ in $\atl{n}$---and have demonstrated their interesting properties. From the crossed-channel point of view, these defect operators generate an algebra spanned by $Y_j$, $\bar{Y}_j$, and their products, with structure constants (fusion rules)~\eqref{eq:Y-fusion} and~\eqref{eq:bY-fusion} resembling the chiral and anti-chiral fusion rules of Virasoro Kac modules of type $(1,s)$, where $s=2j+1$. We recall that the Kac modules are obtained as quotients of Verma modules of conformal weight $h_{1,s}$ by the submodule generated by the singular vector at level $h_{1,s} + s$. The analogy with CFT goes further: recall that, at least in rational CFT, a topological defect can be seen as a map from the set of chiral primary fields to the ring of endomorphisms of the Hilbert space of the full non-chiral CFT.
In much the same way, our maps $Y^m$ and $\bar{Y}^m$ from Fig.~\ref{fig:defectmap} defining the defect operators send ideals in the open or regular TL algebra (which are known to correspond to chiral primary fields of conformal weight $h_{1,s}$) to central elements in the affine TL algebra, which are realized as endomorphisms of the bulk lattice model, e.g.\ of periodic spin-chains. \smallskip We saw that the higher-spin defects $Y_j$ and $\bar{Y}_j$ \eqref{eq:Yj-def}-\eqref{eq:Yj-U} carry some sort of internal structure ``living" on the horizontal non-contractible loops. From the direct-channel point of view, or after a modular transformation, this internal structure was realized in Section~\ref{sec:4} as impurities in the spatial direction. In this way, we have rewritten the defects $Y_j$ and $\bar{Y}_j$ in the Hamiltonian formulation. Interestingly, the spectral problem with impurities was reformulated in algebraic terms as a rather simple fusion product of affine and regular TL representations, which is a combination of the constructions in~\cite{GS,GJS} and~\cite{BSA} that we review in Appendix~\ref{sec:previousfusions}. \smallskip So far we have defined and studied lattice defects that do not depend on a spectral parameter. Let us call these defects of the \textit{first type}. However, there is some evidence that there should be a \textit{second type} of (lattice) defects that do depend on a spectral parameter. They are not central in~$\atl{n}$, but possibly become topological defects $X$, i.e.\ they satisfy~\eqref{centVir}, in the continuum limit only. We will study these defects of the second type in our next paper, where an identification with Virasoro Kac modules of the type $(r,1)$ is expected.
\medskip It is important for several reasons to try to define what we call lattice defects in a precise mathematical way and in higher generality, for a possible application to more general lattice models not necessarily based on TL interactions. For the first kind of defects, from the results obtained in this work, we are approaching a mathematical definition of (an algebra of) defects for general lattice algebras (e.g.\ $\atl{n}(\mathfrak{q})$, Birman-Wenzl-Murakami, Brauer algebras, etc.): \smallskip \textbf{Definition:} \textit{ In a lattice algebra $A$, a space of defects $D$ of the first type is a subspace in the center of $A$ such that it forms a Verlinde algebra. } \smallskip Note that not every central element in a lattice algebra corresponds to a defect operator; it should also have nice properties that reflect known properties from the CFT side. That is why we demand that the space of defects forms a Verlinde algebra. First of all, this implies the presence of a special basis in this algebra with structure constants that are non-negative integers. Secondly, the idea is that these integers should correspond to fusion rules of corresponding representations of an (anti-)chiral algebra, e.g.\ Virasoro. We have indeed recovered these two aspects in our case of $A=\atl{n}(\mathfrak{q})$, where we identified $D$ as the symmetric center $\mathsf{Z}_{\mathrm{sym}}$ of $\atl{n}(\mathfrak{q})$, and the latter as a Verlinde algebra generated by $Y_j$ and $\bar{Y}_j$, where the structure constants do not depend on $n$ and correspond to fusion rules of chiral and anti-chiral Virasoro representations of type $(1,s)$.\footnote{Strictly speaking, our algebra of defects is the product of two Verlinde algebras, for chiral and anti-chiral Virasoro representations, modulo non-linear algebraic relations between $Y$ and $\bar{Y}$.
} Here, this is shown to be true in the generic $\mathfrak{q}$ case, where the fusion rules might look rather trivial, since they are $sl(2)$-type fusion after all. The situation is not so trivial in degenerate cases (where $\mathfrak{q}$ is a root of unity) that we will describe in one of our forthcoming papers on the subject, with applications to minimal models as well as LCFTs. There, a connection to Virasoro fusion rules also holds, although it is much less evident due to more involved representation theory. However, this is not the end of the story. Any Verlinde algebra has a third aspect: it admits a modular $S$-transformation that ``diagonalizes" the fusion rules. For the moment we have concentrated on the first two aspects only. It is, of course, an important problem to properly define and analyze such $S$-transformations in a precise algebraic way, and hopefully this will reflect the modular transformation on the lattice. We hope to come back to this problem soon. \bigskip {\bf Acknowledgments.} \hspace{3pt} We acknowledge interesting discussions with Yvan St Aubin, and we thank D. Bulgakova for discussions and early collaboration on this project. This work was supported by the Institut Universitaire de France and the European Research Council (advanced grant NuQFT). We are also grateful to the CRM in Montreal and to the organizers of the conference \emph{Algebraic methods in mathematical physics} in Montreal in 2018 where a part of this work was done. The work of AMG was supported by CNRS. AMG is also grateful to IPHT Saclay for kind hospitality in 2017 and 2018.
\section{Introduction} \raggedbottom Like most diagnostic imaging exams, chest radiography produces a few very common findings, followed by many relatively rare findings \cite{Paul2021Zeroshot,Zhou2021Review}. Such a ``long-tailed" (LT) distribution of outcomes can make it challenging to learn discriminative image features, as standard deep image classification methods will be biased toward the common ``head" classes, sacrificing predictive performance on the infrequent ``tail" classes \cite{zhang2021deep}. In other settings and modalities, there are a select few examples of LT datasets, such as in dermatology \cite{liu2020deep} and gastrointestinal imaging \cite{borgli2020hyperkvasir}; however, the data from Liu et al.~\cite{liu2020deep} are not publicly available, and the \textit{HyperKvasir} dataset \cite{borgli2020hyperkvasir} -- while providing 23 unique class labels with several very rare conditions (\textless $50$ labeled examples) -- only contains about 10,000 labeled images for classification. Additionally, while many studies offer techniques to combat class imbalance for medical image analysis problems \cite{marrakchifighting2021,zhuangcare2019,linautomated2021,galdranbalanced2021}, very few methods specifically address the challenges posed by an LT distribution, as there is no freely available benchmark for this purpose. Only recently have studies begun to use the lens of ``LT learning" to describe and improve medical image understanding solutions. For example, Galdran et al. \cite{galdranbalanced2021} proposed Balanced-MixUp, an extension of the MixUp \cite{zhang2018mixup} regularization technique with class-balanced sampling, a common approach in the LT learning literature. Ju et al. \cite{jurelational2021} grouped rare classes into subsets based on prior knowledge (location, clinical presentation) and used knowledge distillation to train a ``teacher" model to force the ``student" to learn these groupings. Zhang et al.
\cite{zhangmbnm2021} combined a feature ``memory" module, resampling of tail classes, and a re-weighted loss function to improve the LT classification of several medical datasets. More broadly, many relevant techniques have been developed in the related fields of imbalanced learning \cite{zhuangcare2019,marrakchifighting2021} and few-shot learning \cite{quellecautomatic2020,lidifficulty2020}. These medical image-specific techniques, plus the wealth of methods from the computer vision literature \cite{chawla2002smote,huang2016learning,wang2017learning,linfocal2017,cuiclass2019,caolearning2019,shu2019meta,kangdecoupling2020,zhang2021deep,jiang2021self,park2021influence,kini2021label}, provide a foundation from which the medical deep learning community can develop methods for medical LT classification. Since no large-scale, publicly available dataset exists for LT medical image classification, we curate a large benchmark (\textgreater 200,000 labeled images) of two thorax disease classification tasks on chest X-rays. Further, we evaluate state-of-the-art LT learning methods on this data, analyzing which components of existing methods are most applicable to the medical imaging domain. Our contributions can be summarized as follows: \begin{itemize}[topsep=1pt, leftmargin=1.25cm] \item[$\bullet$] We formally introduce the task of long-tailed classification of thorax disease on chest X-rays. The task provides a comprehensive and realistic evaluation of thorax disease classification in clinical practice settings. \item[$\bullet$] We curate a large-scale benchmark from the existing representative datasets NIH ChestXRay14 \cite{wang2017learning} and MIMIC-CXR \cite{johnson2019mimic}. The benchmark contains five new, more fine-grained pathologies, producing a challenging and severely imbalanced distribution of diseases. We describe the characteristics of this benchmark and will publicly release the labels. 
\item[$\bullet$] We find that the standard cross-entropy loss and augmentation methods such as MixUp fail to adequately classify the rarest ``tail" classes. We observe that class-balanced re-weighting improves performance on infrequent classes, and ``decoupling" via classifier re-training is the most effective approach for both datasets. \end{itemize} \begin{figure}[!ht] \centering \includegraphics[scale=0.41]{figs/061322_log_nih-lt_train.pdf} \includegraphics[scale=0.41]{figs/061322_log_mimic-lt_train.pdf} \caption{Long-tailed distribution of thorax disease labels for the proposed NIH-CXR-LT (left) and MIMIC-CXR-LT (right) training datasets. Values by each bar represent log-frequency, while values in parentheses represent raw frequency. Textured bars represent newly added disease labels, which help create naturally long-tailed distributions without the need for artificial subsampling.} \label{fig:data} \end{figure} \section{Long-Tailed Classification of Thorax Diseases} \raggedbottom \subsection{Task Definition} Disease patterns in chest X-rays are numerous, and their incidence exhibits a long-tailed distribution \cite{Zhou2021Review,Paul2021Zeroshot}: while a small number of common diseases have sufficient observed cases for large-scale analysis, most diseases are infrequent. Conventional computer vision methods may fail to correctly identify uncommon thorax disease classes due to the extremely imbalanced class distribution \cite{Paul2021Zeroshot}, motivating a new and clinically valuable LT classification task on chest X-rays. We formulate the LT classification task first by dividing thorax disease classes into ``head" (many-shot: \textgreater 1,000), ``medium" (medium-shot: 100-1000, inclusive), and ``tail" (few-shot: \textless 100) categories according to their frequency in the training set.
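The frequency-based bucketing just described is a simple thresholding rule; a minimal sketch in Python is below (the helper names are ours, not from the released code):

```python
from collections import Counter

def shot_category(n_train: int) -> str:
    """Bucket a class by its training-set frequency, following the
    thresholds in the text: head (>1,000), medium (100-1,000 inclusive),
    tail (<100)."""
    if n_train > 1000:
        return "head"
    if n_train >= 100:
        return "medium"
    return "tail"

def bucket_classes(train_labels):
    """Map each class label appearing in the training set to its
    head/medium/tail category."""
    counts = Counter(train_labels)
    return {cls: shot_category(n) for cls, n in counts.items()}
```

Note the boundary convention: a class with exactly 1,000 (or exactly 100) training examples counts as "medium", matching the inclusive range above.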
\subsection{Dataset Construction} We curate two long-tailed chest X-ray benchmarks, \textbf{NIH-CXR-LT} and \textbf{MIMIC-CXR-LT}, for NIH ChestXRay14 \cite{wangchest2017} and MIMIC-CXR \cite{johnson2019mimic}, respectively. Each study of NIH ChestXRay14 and MIMIC-CXR usually contains one or more chest radiographs and one free-text radiology report. To generate a strongly long-tailed distribution without artificially subsampling, we introduce five new rare disease findings that are text-mined from radiology reports: Calcification of the Aorta, Subcutaneous Emphysema, Tortuous Aorta, Pneumomediastinum, and Pneumoperitoneum. We identify the presence or absence of new disease findings by parsing the text report associated with each study following the method detailed in RadText \cite{Peng2018NegBioAH,wangchest2017}. For this study, we only use frontal-view, single-label images, as most LT methods are developed specifically for multi-class (not multi-label) classification. Following the structure of previous LT benchmark datasets in computer vision, such as ImageNet-LT \cite{liu2019openlongtailrecognition}, we split NIH-CXR-LT and MIMIC-CXR-LT into \textit{training}, \textit{validation}, \textit{test}, and \textit{balanced test} sets. Since both datasets contain patients with multiple images, we split them at the patient level to prevent data leakage. Both validation and balanced test sets are small but perfectly balanced, where the balanced test set is a subset of the larger, imbalanced test set. This data split allows for evaluation consistent with the LT literature (via the balanced test set), as well as more traditional evaluation on a large naturally distributed set (via the test set). The resulting splits produce extreme class imbalance, with an \textit{imbalance factor} -- the cardinality of the most frequent training class divided by the cardinality of the least frequent training class -- of 6,491 for NIH-CXR-LT and 4,438 for MIMIC-CXR-LT. 
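The patient-level splitting and the imbalance factor defined above can be sketched in pure Python; this is an illustrative stand-in (with our own function names), not the released benchmark pipeline:

```python
import random
from collections import Counter

def imbalance_factor(train_labels):
    """Cardinality of the most frequent training class divided by the
    cardinality of the least frequent training class."""
    counts = Counter(train_labels)
    return max(counts.values()) / min(counts.values())

def patient_level_split(patient_ids, test_frac=0.2, seed=0):
    """Assign whole patients (not individual images) to train/test so
    that no patient's images leak across the split.  `patient_ids` gives
    the patient associated with each image; returns image indices."""
    patients = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = int(len(patients) * test_frac)
    test_patients = set(patients[:n_test])
    train_idx = [i for i, p in enumerate(patient_ids) if p not in test_patients]
    test_idx = [i for i, p in enumerate(patient_ids) if p in test_patients]
    return train_idx, test_idx
```

Because patients, not images, are shuffled, all images of a given patient land on the same side of the split, which is the leakage guarantee described above.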
Full detailed statistics and data split for NIH-CXR-LT and MIMIC-CXR-LT can be found in the Supplementary Materials. \textbf{NIH-CXR-LT.} NIH ChestXRay14 contains over 100,000 chest X-rays labeled with 14 pathologies, plus a ``No Findings" class. We construct a single-label, long-tailed version of the NIH ChestXRay14 dataset by introducing five new disease findings described above. The resulting NIH-CXR-LT dataset has 20 classes, including 7 head classes, 10 medium classes, and 3 tail classes. NIH-CXR-LT contains 88,637 images labeled with one of 19 thorax diseases, with 68,058 training and 20,279 test images. The validation and balanced test sets contain 15 and 30 images per class, respectively. \textbf{MIMIC-CXR-LT.} We construct a single-label, long-tailed version of MIMIC-CXR in a similar manner. MIMIC-CXR is a multi-label classification dataset with over 200,000 chest X-rays labeled with 13 pathologies and a ``No Findings" class. The resulting MIMIC-CXR-LT dataset contains 19 classes, of which 10 are head classes, 6 are medium classes, and 3 are tail classes. MIMIC-CXR-LT contains 111,792 images labeled with one of 18 diseases, with 87,493 training images and 23,550 test set images. The validation and balanced test sets contain 15 and 30 images per class, respectively. \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \caption{Long-tailed learning methods selected for benchmarking grouped by type of approach (``R" = Re-balancing, ``A" = Augmentation, ``O" = Other). 
``RW" = re-weighted with scikit-learn weights \cite{scikit-learn}, ``CB" = re-weighted with class-balanced weights \cite{cuiclass2019}.} \begin{tabular}{lccc|lccc} \toprule Method & R & A & O & \multicolumn{1}{c}{Method} & R & A & O \\ \midrule Softmax (Baseline) & & & & CB LDAM-DRW \cite{caolearning2019} & \checkmark & &\\ CB Softmax & \checkmark & & & RW LDAM \cite{caolearning2019} & \checkmark & & \\ RW Softmax & \checkmark & & & RW LDAM-DRW \cite{caolearning2019} & \checkmark & &\\ Focal Loss \cite{linfocal2017} & \checkmark & & & MixUp \cite{zhang2018mixup} & & \checkmark &\\ CB Focal Loss \cite{linfocal2017} & \checkmark & & & Balanced-MixUp \cite{galdranbalanced2021} & \checkmark & \checkmark & \\ RW Focal Loss \cite{linfocal2017} & \checkmark & & & Decoupling--cRT \cite{kangdecoupling2020} & \checkmark & & \checkmark\\ LDAM \cite{caolearning2019} & \checkmark & & & Decoupling--$\tau$-norm \cite{kangdecoupling2020} & \checkmark & & \checkmark\\ CB LDAM \cite{caolearning2019} & \checkmark & & \\ \bottomrule \end{tabular} \label{method:summary} \end{table} \subsection{Methods for Benchmarking} In their survey, Zhang \textit{et al.} group LT learning methods into three main categories: class re-balancing, information augmentation, and module improvement \cite{zhang2021deep}. We simplify this categorization down to \underline{re-balancing}, \underline{augmentation}, and \underline{others}, noting that some sophisticated methods can fall into more than one of these categories. We have summarized our selected methods for benchmarking with their corresponding categorizations in Table \ref{method:summary}. Class re-balancing, arguably the most common approach to LT learning, usually involves \textit{resampling} the data such that it is effectively balanced during training or \textit{re-weighting} a loss function to modulate the importance of classes based on their frequency. 
Resampling methods include SMOTE \cite{chawla2002smote}, which undersamples common classes and oversamples rare classes, and progressively-balanced sampling \cite{kangdecoupling2020}, which interpolates from instance- to class-balanced sampling; recent re-weighting strategies include Focal Loss \cite{linfocal2017}, Label-Distribution-Aware Margin (LDAM) Loss \cite{caolearning2019}, and Influence-Balanced Loss \cite{park2021influence}. In addition to the baseline softmax cross-entropy loss function, we consider Focal Loss and LDAM, with optional deferred re-weighting (DRW). For re-weighting strategies, we select the ``class-balanced" (CB) approach outlined in \cite{cuiclass2019} and the re-weighting approach implemented by the scikit-learn library \cite{scikit-learn}. Approaches to ``information augmentation" can include customized data augmentation, as well as transfer learning from related data domains. For this category, we choose MixUp \cite{zhang2018mixup} and Balanced-MixUp \cite{galdranbalanced2021}. MixUp is an augmentation technique that linearly mixes pairs of input images and labels according to a Beta distribution, producing a strong regularizing effect. Balanced-MixUp, as explained earlier, is an extension of MixUp that linearly mixes pairs of images and labels, where one image is drawn from a batch of instance-balanced (naturally distributed) data and the other from class-balanced (resampled) data. Lastly, other popular approaches to LT learning include ensembling, representation learning, classifier design, and decoupled training. For this category, we proceed with two straightforward decoupling methods: classifier re-training (cRT) and $\tau$-normalization. Kang et al. 
\cite{kangdecoupling2020} observed that they could achieve state-of-the-art results on several LT learning benchmarks by (1) learning representations from naturally distributed data, then (2) re-training or otherwise calibrating the classification head in order to better discriminate tail classes. After training a model on instance-balanced data, cRT freezes this trained backbone, then re-initializes and re-trains the classifier with class-balanced resampling. Directly using the model learned in step (1), $\tau$-normalization rescales each class's learned classifier weights by dividing them by their magnitude raised to the power $\tau$. \subsection{Experiments and Evaluation} We evaluate the list of methods shown in Table \ref{method:summary} on NIH-CXR-LT and MIMIC-CXR-LT. To enable a fair comparison among all methods, we keep the entire training pipeline identical except for the method being applied. Specifically, we train a ResNet50 \cite{he2016deep} pretrained on ImageNet \cite{deng2009imagenet}, using the Adam optimizer with a learning rate of $1 \times 10^{-4}$. All models were trained for a maximum of 60 epochs with early-stopping based on overall validation accuracy. For full implementation details, refer to the Supplemental Materials and our code repository: \url{https://github.com/VITA-Group/LongTailCXR}. We present results on both the balanced test set and imbalanced test set for each model and dataset. For the balanced test set, we report head, medium, and tail class accuracy. We additionally include the class-wise average (``overall") accuracy and the group-wise average (``avg") accuracy -- namely, the mean of the head, medium, and tail accuracy; we use this metric since we seek a model that performs well across head, medium, and tail classes regardless of how many samples or classes belong to each group.
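Returning to the decoupling methods described above, $\tau$-normalization admits a compact sketch: each class's classifier weight vector $w_c$ is mapped to $w_c / \|w_c\|^{\tau}$. The pure-Python version below is illustrative (plain nested lists rather than framework tensors):

```python
def tau_normalize(classifier_weights, tau=1.0):
    """tau-normalization (Kang et al.): rescale each class's classifier
    weight vector w_c as w_c -> w_c / ||w_c||**tau.  tau=0 leaves the
    weights unchanged; tau=1 fully normalizes every row to unit length,
    removing the norm bias toward head classes."""
    normed = []
    for w_c in classifier_weights:
        norm = sum(x * x for x in w_c) ** 0.5
        scale = norm ** tau
        normed.append([x / scale for x in w_c])
    return normed
```

Since frequent classes tend to learn larger-norm classifier rows, shrinking rows in proportion to their norm boosts the relative logits of tail classes at inference time, with no re-training required.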
For the imbalanced test set, we report the Macro-F1 score (the unweighted mean of class-wise F1 scores) and the balanced accuracy (the accuracy with samples weighted by inverse class frequency). We choose balanced accuracy since it is resistant to class imbalance, thus necessary since the test set follows the highly imbalanced real-world data distribution. \begin{table}[!hb] \centering \vspace{-1em} \renewcommand{\arraystretch}{1.1} \caption{Results on NIH-CXR-LT. Accuracy is reported for the balanced test set ($N=600$), where ``Avg" accuracy is the mean of the head, medium, and tail accuracy. Macro-F1 score (mF1) and balanced accuracy (bAcc) are used to evaluate performance on the imbalanced test set ($N=20,279$). The best and second-best results for a given metric are, respectively, bolded and underlined.} \begin{tabular}{@{}lcccccccc@{}} \toprule \multicolumn{1}{c}{Method} & \multicolumn{5}{c}{Balanced Test Set} & \multicolumn{2}{c}{Test Set} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} & Overall & Head & Medium & Tail & Avg & mF1 & bAcc \\ \midrule Softmax & 0.175 & 0.419 & 0.056 & 0.017 & 0.164 & 0.131 & 0.115 \\ CB Softmax & 0.333 & 0.295 & 0.415 & 0.217 & 0.309 & 0.177 & 0.269 \\ RW Softmax & 0.300 & 0.248 & 0.359 & 0.258 & 0.288 & 0.116 & 0.26 \\ Focal Loss & 0.160 & 0.362 & 0.056 & 0.042 & 0.153 & 0.142 & 0.122 \\ CB Focal Loss & 0.303 & 0.371 & 0.333 & 0.117 & 0.274 & 0.157 & 0.232 \\ RW Focal Loss & 0.255 & 0.286 & 0.293 & 0.117 & 0.232 & 0.090 & 0.197 \\ LDAM & 0.232 & 0.410 & 0.133 & 0.142 & 0.228 & 0.173 & 0.178 \\ CB LDAM & 0.295 & 0.357 & 0.285 & 0.208 & 0.284 & 0.161 & 0.235 \\ CB LDAM-DRW & 0.377 & 0.476 & 0.356 & 0.250 & 0.361 & 0.172 & 0.281 \\ RW LDAM & 0.353 & 0.305 & 0.419 & 0.292 & 0.338 & 0.111 & 0.279 \\ RW LDAM-DRW & 0.370 & 0.410 & 0.367 & 0.308 & \underline{0.362} & 0.127 & \underline{0.289} \\ MixUp & 0.170 & 0.419 & 0.044 & 0.017 & 0.160 & 0.132 & 0.118 \\ Balanced-MixUp & 0.213 & 0.443 & 0.081 & 0.108 & 0.211 & 0.167 & 0.155 \\ 
Decoupling--cRT & 0.380 & 0.433 & 0.374 & 0.300 & \textbf{0.369} & 0.138 & \textbf{0.294} \\ Decoupling--$\tau$-norm & 0.280 & 0.457 & 0.230 & 0.083 & 0.257 & 0.144 & 0.214 \\ \bottomrule \end{tabular} \label{results:nih} \end{table} \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \caption{Results on MIMIC-CXR-LT. Accuracy is reported for the balanced test set ($N=570$), where ``Avg" accuracy is the mean of head, medium, and tail accuracy. Macro-F1 score (mF1) and balanced accuracy (bAcc) are used to evaluate performance on the imbalanced test set ($N=23,550$). The best and second-best results for a given metric are, respectively, bolded and underlined.} \begin{tabular}{@{}lcccccccc@{}} \toprule \multicolumn{1}{c}{Method} & \multicolumn{5}{c}{Balanced Test Set} & \multicolumn{2}{c}{Test Set} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} & Overall & Head & Medium & Tail & Avg & mF1 & bAcc \\ \midrule Softmax & 0.281 & 0.503 & 0.039 & 0.022 & 0.188 & 0.183 & 0.169 \\ CB Softmax & 0.347 & 0.493 & 0.167 & 0.222 & 0.294 & 0.186 & 0.227 \\ RW Softmax & 0.314 & 0.473 & 0.139 & 0.133 & 0.249 & 0.163 & 0.211 \\ Focal Loss & 0.268 & 0.477 & 0.044 & 0.022 & 0.181 & 0.182 & 0.172 \\ CB Focal Loss & 0.288 & 0.373 & 0.117 & 0.344 & 0.278 & 0.136 & 0.191 \\ RW Focal Loss & 0.335 & 0.403 & 0.283 & 0.211 & 0.299 & 0.144 & 0.239 \\ LDAM & 0.261 & 0.497 & 0.000 & 0.000 & 0.166 & 0.172 & 0.165 \\ CB LDAM & 0.330 & 0.467 & 0.161 & 0.211 & 0.280 & 0.161 & 0.225 \\ CB LDAM-DRW & 0.379 & 0.520 & 0.156 & 0.356 & \underline{0.344} & 0.197 & 0.267 \\ RW LDAM & 0.335 & 0.437 & 0.250 & 0.167 & 0.284 & 0.149 & 0.243 \\ RW LDAM-DRW & 0.365 & 0.447 & 0.256 & 0.311 & 0.338 & 0.177 & \underline{0.275} \\ MixUp & 0.291 & 0.543 & 0.011 & 0.011 & 0.189 & 0.182 & 0.176 \\ Balanced-MixUp & 0.267 & 0.480 & 0.039 & 0.011 & 0.177 & 0.176 & 0.168 \\ Decoupling--cRT & 0.412 & 0.490 & 0.306 & 0.367 & \textbf{0.387} & 0.170 & \textbf{0.296} \\ Decoupling--$\tau$-norm & 0.337 & 0.520 & 0.167 & 0.067 
& 0.251 & 0.178 & 0.230 \\ \bottomrule \end{tabular} \label{results:mimic} \end{table} \section{Results and Analysis} For the NIH-CXR-LT dataset, the baseline method fails to adequately classify tail classes, achieving 1.7\% accuracy on the three rarest diseases (Table \ref{results:nih}). The baseline of softmax cross-entropy loss achieves a group-wise average accuracy of 0.164, but improves to 0.309 and 0.288, respectively, when using class-balanced and scikit-learn weights. Furthermore, we see that re-weighting consistently improves performance, though which re-weighting scheme provides the larger gains varies. We also see that DRW can additionally improve performance, as evidenced by the fact that both CB LDAM-DRW and RW LDAM-DRW outperform their counterparts without DRW. We find that cRT decoupling achieves the best performance on both the balanced and imbalanced test sets, reaching 0.369 group-wise average accuracy on the balanced test set and 0.294 balanced accuracy on the test set. Classifier re-training is followed closely by RW LDAM-DRW, which reaches 0.362 group-wise average accuracy and 0.289 balanced accuracy. On MIMIC-CXR-LT, again, the baseline approach almost entirely fails to capture the tail classes, reaching 0.022 tail accuracy and 0.188 group-wise average accuracy (Table \ref{results:mimic}). As with the NIH-CXR-LT results, re-weighting is always beneficial; for example, class-balanced re-weighting and scikit-learn re-weighting, respectively, improve focal loss performance from 0.181 to 0.278 and 0.299 group-wise average accuracy. Similarly, DRW brings even further gains to a re-weighted LDAM loss, improving group-wise accuracy by at least 0.05. Classifier re-training again achieves both the highest group-wise average accuracy on the balanced test set and the highest balanced accuracy on the test set by a considerable margin.
For both the balanced and imbalanced test sets, the second-best method is a re-weighted LDAM loss with deferred re-weighting -- CB LDAM-DRW for the balanced test set and RW LDAM-DRW for the test set. \textbf{Summary of Findings.} Overall, we see that the standard approach of optimizing softmax cross-entropy with instance-balanced weights fails to adequately capture medium and tail classes for both NIH-CXR-LT and MIMIC-CXR-LT. In contrast to the empirical success of MixUp on many natural image-based problems and Balanced-MixUp on certain medical imaging tasks, we find that MixUp and Balanced-MixUp perform similarly to the baseline for these two tasks; perhaps linearly mixing radiographs destroys valuable high-contrast signal that is necessary for discriminating disease conditions. We see that re-weighting is always beneficial, though which re-weighting method provides larger gains appears to depend on its interaction with the loss function used. We also observe that DRW can provide additional gains to standard re-weighting when used with the LDAM loss. Finally, we see that cRT decoupling was the highest-performing method on both datasets, demonstrating that decoupled training can be a simple and powerful technique for long-tailed disease classification on chest X-rays. We note that performance is lower than in prior work on the original NIH ChestXRay14 and MIMIC-CXR datasets since (1) we only consider single-label images, and (2) the newly added classes are difficult to classify and introduce confusion with the set of original diseases in each dataset. \section{Discussion and Conclusion} In summary, we have conducted the first comprehensive study of long-tailed learning methods for disease classification from chest X-rays. We publicly release all code, models, and data to encourage the development of long-tailed learning methods for medical image classification.
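For concreteness, the two imbalanced-test-set metrics used throughout (macro-F1 and balanced accuracy, i.e.\ the mean of per-class recalls) can be computed with a short pure-Python sketch; the helper names are ours:

```python
from collections import Counter

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls -- equivalently, accuracy with samples
    weighted by inverse class frequency."""
    correct, total = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

def macro_f1(y_true, y_pred):
    """Unweighted mean of class-wise F1 scores."""
    tp, fp, fn = Counter(), Counter(), Counter()
    classes = set(y_true) | set(y_pred)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f1s = []
    for c in classes:
        denom = 2 * tp[c] + fp[c] + fn[c]
        f1s.append(2 * tp[c] / denom if denom else 0.0)
    return sum(f1s) / len(f1s)
```

Both metrics give every class equal influence regardless of its frequency, which is why they are the natural choices for a test set that follows the highly imbalanced real-world distribution.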
While we adopted the standard practice of using ImageNet pretrained weights, this limited the list of candidate long-tailed learning methods we could use. For example, certain LT methods that use specialized architectures \cite{shu2019meta,wang2020long} or explore self-supervised learning \cite{jiang2021self,marrakchifighting2021} on other datasets are not compatible with ImageNet pretraining. Future work will explore various pretraining options, combating long-tailed data with a different weight initialization. Lastly, future work will also involve adapting multi-label long-tailed learning methods to these datasets, acknowledging the clinical reality that patients often present with multiple pathologies at once. \section{Acknowledgments} This material is based upon work supported by the Intramural Research Programs of the National Institutes of Health Clinical Center, National Library of Medicine under Award No. 4R00LM013001, and National Science Foundation under Grant No. 2145640. \bibliographystyle{splncs04}
In other settings and modalities, there are a select few examples of LT datasets, such as in dermatology \cite{liu2020deep} and gastrointestinal imaging \cite{borgli2020hyperkvasir}; however, the data from Liu et al~\cite{liu2020deep} are not publicly available, and the \textit{HyperKvasir} dataset \cite{borgli2020hyperkvasir} -- while providing 23 unique class labels with several very rare conditions (\textless $50$ labeled examples) -- only contains about 10,000 labeled images for classification. Additionally, while many studies offer techniques to combat class imbalance for medical image analysis problems \cite{marrakchifighting2021,zhuangcare2019,linautomated2021,galdranbalanced2021}, very few methods specifically address the challenges posed by an LT distribution, as there is no freely available benchmark for this purpose. Only recently have studies begun to use the lens of ``LT learning" to describe and improve medical image understanding solutions. For example, Galdran et al. \cite{galdranbalanced2021} proposed Balanced-MixUp, an extension of the MixUp \cite{zhang2018mixup} regularization technique with class-balanced sampling, a common approach in the LT learning literature. Ju et al. \cite{jurelational2021} grouped rare classes into subsets based on prior knowledge (location, clinical presentation) and used knowledge distillation to train a ``teacher" model to enforce the ``student" to learn these groupings. Zhang et al. \cite{zhangmbnm2021} combined a feature ``memory" module, resampling of tail classes, and a re-weighted loss function to improve the LT classification of several medical datasets. More broadly, many relevant techniques have been developed in the related fields of imbalanced learning \cite{zhuangcare2019,marrakchifighting2021} and few-shot learning \cite{quellecautomatic2020,lidifficulty2020}. 
These medical image-specific techniques, plus the wealth of methods from the computer vision literature \cite{chawla2002smote,huang2016learning,wang2017learning,linfocal2017,cuiclass2019,caolearning2019,shu2019meta,kangdecoupling2020,zhang2021deep,jiang2021self,park2021influence,kini2021label}, provide a foundation from which the medical deep learning community can develop methods for medical LT classification. Since no large-scale, publicly available dataset exists for LT medical image classification, we curate a large benchmark (\textgreater 200,000 labeled images) of two thorax disease classification tasks on chest X-rays. Further, we evaluate state-of-the-art LT learning methods on this data, analyzing which components of existing methods are most applicable to the medical imaging domain. Our contributions can be summarized as follows: \begin{itemize}[topsep=1pt, leftmargin=1.25cm] \item[$\bullet$] We formally introduce the task of long-tailed classification of thorax disease on chest X-rays. The task provides a comprehensive and realistic evaluation of thorax disease classification in clinical practice settings. \item[$\bullet$] We curate a large-scale benchmark from the existing representative datasets NIH ChestXRay14 \cite{wang2017learning} and MIMIC-CXR \cite{johnson2019mimic}. The benchmark contains five new, more fine-grained pathologies, producing a challenging and severely imbalanced distribution of diseases. We describe the characteristics of this benchmark and will publicly release the labels. \item[$\bullet$] We find that the standard cross-entropy loss and augmentation methods such as MixUp fail to adequately classify the rarest ``tail" classes. We observe that class-balanced re-weighting improves performance on infrequent classes, and ``decoupling" via classifier re-training is the most effective approach for both datasets. 
\end{itemize} \begin{figure}[!ht] \centering \includegraphics[scale=0.41]{figs/061322_log_nih-lt_train.pdf} \includegraphics[scale=0.41]{figs/061322_log_mimic-lt_train.pdf} \caption{Long-tailed distribution of thorax disease labels for the proposed NIH-CXR-LT (left) and MIMIC-CXR-LT (right) training datasets. Values by each bar represent log-frequency, while values in parentheses represent raw frequency. Textured bars represent newly added disease labels, which help create naturally long-tailed distributions without the need for artificial subsampling.} \label{fig:data} \end{figure} \section{Long-Tailed Classification of Thorax Diseases} \raggedbottom \subsection{Task Definition} Disease patterns in chest X-rays are numerous, and their incidence exhibits a long-tailed distribution \cite{Zhou2021Review,Paul2021Zeroshot}: while a small number of common diseases have sufficient observed cases for large-scale analysis, most diseases are infrequent. Conventional computer vision methods may fail to correctly identify uncommon thorax disease classes due to the extremely imbalanced class distribution \cite{Paul2021Zeroshot}, motivating a new and clinically valuable LT classification task on chest X-rays. We formulate the LT classification task by first dividing thorax disease classes into ``head" (many-shot: \textgreater 1,000), ``medium" (medium-shot: 100-1000, inclusive), and ``tail" (few-shot: \textless 100) categories according to their frequency in the training set. \subsection{Dataset Construction} We curate two long-tailed chest X-ray benchmarks, \textbf{NIH-CXR-LT} and \textbf{MIMIC-CXR-LT}, for NIH ChestXRay14 \cite{wangchest2017} and MIMIC-CXR \cite{johnson2019mimic}, respectively. Each study of NIH ChestXRay14 and MIMIC-CXR usually contains one or more chest radiographs and one free-text radiology report.
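The frequency-based head/medium/tail split just defined can be expressed directly in code; a minimal sketch, where the thresholds come from the definition above but the class names and counts are illustrative placeholders rather than actual benchmark statistics:

```python
# Assign each class to "head", "medium", or "tail" based on its
# training-set frequency: >1,000 / 100-1,000 inclusive / <100.
def frequency_group(n_train: int) -> str:
    if n_train > 1000:
        return "head"
    elif n_train >= 100:  # 100-1,000, inclusive
        return "medium"
    else:
        return "tail"

# Illustrative counts only (not the real NIH-CXR-LT statistics).
train_counts = {"Infiltration": 9547, "Fibrosis": 727, "Pneumoperitoneum": 24}
groups = {c: frequency_group(n) for c, n in train_counts.items()}
```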
To generate a strongly long-tailed distribution without artificial subsampling, we introduce five new rare disease findings that are text-mined from radiology reports: Calcification of the Aorta, Subcutaneous Emphysema, Tortuous Aorta, Pneumomediastinum, and Pneumoperitoneum. We identify the presence or absence of new disease findings by parsing the text report associated with each study following the method detailed in RadText \cite{Peng2018NegBioAH,wangchest2017}. For this study, we only use frontal-view, single-label images, as most LT methods are developed specifically for multi-class (not multi-label) classification. Following the structure of previous LT benchmark datasets in computer vision, such as ImageNet-LT \cite{liu2019openlongtailrecognition}, we split NIH-CXR-LT and MIMIC-CXR-LT into \textit{training}, \textit{validation}, \textit{test}, and \textit{balanced test} sets. Since both datasets contain patients with multiple images, we split them at the patient level to prevent data leakage. Both validation and balanced test sets are small but perfectly balanced, where the balanced test set is a subset of the larger, imbalanced test set. This data split allows for evaluation consistent with the LT literature (via the balanced test set), as well as more traditional evaluation on a large naturally distributed set (via the test set). The resulting splits produce extreme class imbalance, with an \textit{imbalance factor} -- the cardinality of the most frequent training class divided by the cardinality of the least frequent training class -- of 6,491 for NIH-CXR-LT and 4,438 for MIMIC-CXR-LT. Full statistics and data splits for NIH-CXR-LT and MIMIC-CXR-LT can be found in the Supplementary Materials. \textbf{NIH-CXR-LT.} NIH ChestXRay14 contains over 100,000 chest X-rays labeled with 14 pathologies, plus a ``No Findings" class.
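The imbalance factor defined above is a simple ratio of class cardinalities; a small sketch with placeholder counts (the reported factors of 6,491 and 4,438 come from the real splits, not from this example):

```python
def imbalance_factor(train_counts):
    """Cardinality of the most frequent training class divided by the
    cardinality of the least frequent training class."""
    counts = list(train_counts.values())
    return max(counts) / min(counts)

# Placeholder counts for illustration only.
example = {"No Finding": 51234, "Pneumoperitoneum": 10}
factor = imbalance_factor(example)
```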
We construct a single-label, long-tailed version of the NIH ChestXRay14 dataset by introducing five new disease findings described above. The resulting NIH-CXR-LT dataset has 20 classes, including 7 head classes, 10 medium classes, and 3 tail classes. NIH-CXR-LT contains 88,637 images labeled with one of 19 thorax diseases, with 68,058 training and 20,279 test images. The validation and balanced test sets contain 15 and 30 images per class, respectively. \textbf{MIMIC-CXR-LT.} We construct a single-label, long-tailed version of MIMIC-CXR in a similar manner. MIMIC-CXR is a multi-label classification dataset with over 200,000 chest X-rays labeled with 13 pathologies and a ``No Findings" class. The resulting MIMIC-CXR-LT dataset contains 19 classes, of which 10 are head classes, 6 are medium classes, and 3 are tail classes. MIMIC-CXR-LT contains 111,792 images labeled with one of 18 diseases, with 87,493 training images and 23,550 test set images. The validation and balanced test sets contain 15 and 30 images per class, respectively. \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \caption{Long-tailed learning methods selected for benchmarking grouped by type of approach (``R" = Re-balancing, ``A" = Augmentation, ``O" = Other). 
``RW" = re-weighted with scikit-learn weights \cite{scikit-learn}, ``CB" = re-weighted with class-balanced weights \cite{cuiclass2019}.} \begin{tabular}{lccc|lccc} \toprule Method & R & A & O & \multicolumn{1}{c}{Method} & R & A & O \\ \midrule Softmax (Baseline) & & & & CB LDAM-DRW \cite{caolearning2019} & \checkmark & &\\ CB Softmax & \checkmark & & & RW LDAM \cite{caolearning2019} & \checkmark & & \\ RW Softmax & \checkmark & & & RW LDAM-DRW \cite{caolearning2019} & \checkmark & &\\ Focal Loss \cite{linfocal2017} & \checkmark & & & MixUp \cite{zhang2018mixup} & & \checkmark &\\ CB Focal Loss \cite{linfocal2017} & \checkmark & & & Balanced-MixUp \cite{galdranbalanced2021} & \checkmark & \checkmark & \\ RW Focal Loss \cite{linfocal2017} & \checkmark & & & Decoupling--cRT \cite{kangdecoupling2020} & \checkmark & & \checkmark\\ LDAM \cite{caolearning2019} & \checkmark & & & Decoupling--$\tau$-norm \cite{kangdecoupling2020} & \checkmark & & \checkmark\\ CB LDAM \cite{caolearning2019} & \checkmark & & \\ \bottomrule \end{tabular} \label{method:summary} \end{table} \subsection{Methods for Benchmarking} In their survey, Zhang \textit{et al.} group LT learning methods into three main categories: class re-balancing, information augmentation, and module improvement \cite{zhang2021deep}. We simplify this categorization down to \underline{re-balancing}, \underline{augmentation}, and \underline{others}, noting that some sophisticated methods can fall into more than one of these categories. We have summarized our selected methods for benchmarking with their corresponding categorizations in Table \ref{method:summary}. Class re-balancing, arguably the most common approach to LT learning, usually involves \textit{resampling} the data such that it is effectively balanced during training or \textit{re-weighting} a loss function to modulate the importance of classes based on their frequency. 
Resampling methods include SMOTE \cite{chawla2002smote}, which undersamples common classes and oversamples rare classes, and progressively-balanced sampling \cite{kangdecoupling2020}, which interpolates from instance- to class-balanced sampling; recent re-weighting strategies include Focal Loss \cite{linfocal2017}, Label-Distribution-Aware Margin (LDAM) Loss \cite{caolearning2019}, and Influence-Balanced Loss \cite{park2021influence}. In addition to the baseline softmax cross-entropy loss function, we consider Focal Loss and LDAM, with optional deferred re-weighting (DRW). For re-weighting strategies, we select the ``class-balanced" (CB) approach outlined in \cite{cuiclass2019} and the re-weighting approach implemented by the scikit-learn library \cite{scikit-learn}. Approaches to ``information augmentation" can include customized data augmentation, as well as transfer learning from related data domains. For this category, we choose MixUp \cite{zhang2018mixup} and Balanced-MixUp \cite{galdranbalanced2021}. MixUp is an augmentation technique that linearly mixes pairs of input images and labels according to a Beta distribution, producing a strong regularizing effect. Balanced-MixUp, as explained earlier, is an extension of MixUp that linearly mixes pairs of images and labels, where one image is drawn from a batch of instance-balanced (naturally distributed) data and the other from class-balanced (resampled) data. Lastly, other popular approaches to LT learning include ensembling, representation learning, classifier design, and decoupled training. For this category, we proceed with two straightforward decoupling methods: classifier re-training (cRT) and $\tau$-normalization. Kang et al. 
\cite{kangdecoupling2020} observed that they could achieve state-of-the-art results on several LT learning benchmarks by (1) learning representations from naturally distributed data, then (2) re-training or otherwise calibrating the classification head in order to better discriminate tail classes. After training a model on instance-balanced data, cRT freezes this trained backbone, then re-initializes and re-trains the classifier with class-balanced resampling. Directly using the model learned in step (1), $\tau$-normalization rescales each class's classifier weight vector by dividing it by its norm raised to the power $\tau$. \subsection{Experiments and Evaluation} We evaluate the list of methods shown in Table \ref{method:summary} on NIH-CXR-LT and MIMIC-CXR-LT. To enable a fair comparison among all methods, we keep the entire training pipeline identical except for the method being applied. Specifically, we train a ResNet50 \cite{he2016deep} pretrained on ImageNet \cite{deng2009imagenet}, using the Adam optimizer with a learning rate of $1 \times 10^{-4}$. All models were trained for a maximum of 60 epochs with early-stopping based on overall validation accuracy. For full implementation details, refer to the Supplementary Materials and our code repository: \url{https://github.com/VITA-Group/LongTailCXR}. We present results on both the balanced test set and imbalanced test set for each model and dataset. For the balanced test set, we report head, medium, and tail class accuracy. We additionally include the class-wise average (``overall") accuracy and the group-wise average (``avg") accuracy -- namely, the mean of the head, medium, and tail accuracy; we use this metric since we seek a model that performs well across head, medium, and tail classes regardless of how many samples or classes belong to each group.
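The balanced-test-set metrics described above reduce to a few lines given per-sample predictions; a hedged sketch (assuming integer class labels and a user-supplied mapping from class index to its head/medium/tail group):

```python
import numpy as np

def groupwise_metrics(y_true, y_pred, class_group):
    """Per-group accuracy plus the group-wise average ("avg"), where
    class_group maps each class index to "head", "medium", or "tail"."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = {"head": [], "medium": [], "tail": []}
    # Class-wise accuracy first, then average within each group.
    for c in np.unique(y_true):
        mask = y_true == c
        accs[class_group[c]].append(float((y_pred[mask] == c).mean()))
    group_acc = {g: float(np.mean(a)) for g, a in accs.items() if a}
    group_acc["avg"] = float(np.mean(list(group_acc.values())))
    return group_acc
```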
For the imbalanced test set, we report the Macro-F1 score (the unweighted mean of class-wise F1 scores) and the balanced accuracy (the accuracy with samples weighted by inverse class frequency). We choose balanced accuracy because it is robust to class imbalance, which is necessary since the test set follows the highly imbalanced real-world data distribution. \begin{table}[!hb] \centering \vspace{-1em} \renewcommand{\arraystretch}{1.1} \caption{Results on NIH-CXR-LT. Accuracy is reported for the balanced test set ($N=600$), where ``Avg" accuracy is the mean of the head, medium, and tail accuracy. Macro-F1 score (mF1) and balanced accuracy (bAcc) are used to evaluate performance on the imbalanced test set ($N=20,279$). The best and second-best results for a given metric are, respectively, bolded and underlined.} \begin{tabular}{@{}lcccccccc@{}} \toprule \multicolumn{1}{c}{Method} & \multicolumn{5}{c}{Balanced Test Set} & \multicolumn{2}{c}{Test Set} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} & Overall & Head & Medium & Tail & Avg & mF1 & bAcc \\ \midrule Softmax & 0.175 & 0.419 & 0.056 & 0.017 & 0.164 & 0.131 & 0.115 \\ CB Softmax & 0.333 & 0.295 & 0.415 & 0.217 & 0.309 & 0.177 & 0.269 \\ RW Softmax & 0.300 & 0.248 & 0.359 & 0.258 & 0.288 & 0.116 & 0.260 \\ Focal Loss & 0.160 & 0.362 & 0.056 & 0.042 & 0.153 & 0.142 & 0.122 \\ CB Focal Loss & 0.303 & 0.371 & 0.333 & 0.117 & 0.274 & 0.157 & 0.232 \\ RW Focal Loss & 0.255 & 0.286 & 0.293 & 0.117 & 0.232 & 0.090 & 0.197 \\ LDAM & 0.232 & 0.410 & 0.133 & 0.142 & 0.228 & 0.173 & 0.178 \\ CB LDAM & 0.295 & 0.357 & 0.285 & 0.208 & 0.284 & 0.161 & 0.235 \\ CB LDAM-DRW & 0.377 & 0.476 & 0.356 & 0.250 & 0.361 & 0.172 & 0.281 \\ RW LDAM & 0.353 & 0.305 & 0.419 & 0.292 & 0.338 & 0.111 & 0.279 \\ RW LDAM-DRW & 0.370 & 0.410 & 0.367 & 0.308 & \underline{0.362} & 0.127 & \underline{0.289} \\ MixUp & 0.170 & 0.419 & 0.044 & 0.017 & 0.160 & 0.132 & 0.118 \\ Balanced-MixUp & 0.213 & 0.443 & 0.081 & 0.108 & 0.211 & 0.167 & 0.155 \\
Decoupling--cRT & 0.380 & 0.433 & 0.374 & 0.300 & \textbf{0.369} & 0.138 & \textbf{0.294} \\ Decoupling--$\tau$-norm & 0.280 & 0.457 & 0.230 & 0.083 & 0.257 & 0.144 & 0.214 \\ \bottomrule \end{tabular} \label{results:nih} \end{table} \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \caption{Results on MIMIC-CXR-LT. Accuracy is reported for the balanced test set ($N=570$), where ``Avg" accuracy is the mean of head, medium, and tail accuracy. Macro-F1 score (mF1) and balanced accuracy (bAcc) are used to evaluate performance on the imbalanced test set ($N=23,550$). The best and second-best results for a given metric are, respectively, bolded and underlined.} \begin{tabular}{@{}lcccccccc@{}} \toprule \multicolumn{1}{c}{Method} & \multicolumn{5}{c}{Balanced Test Set} & \multicolumn{2}{c}{Test Set} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} & Overall & Head & Medium & Tail & Avg & mF1 & bAcc \\ \midrule Softmax & 0.281 & 0.503 & 0.039 & 0.022 & 0.188 & 0.183 & 0.169 \\ CB Softmax & 0.347 & 0.493 & 0.167 & 0.222 & 0.294 & 0.186 & 0.227 \\ RW Softmax & 0.314 & 0.473 & 0.139 & 0.133 & 0.249 & 0.163 & 0.211 \\ Focal Loss & 0.268 & 0.477 & 0.044 & 0.022 & 0.181 & 0.182 & 0.172 \\ CB Focal Loss & 0.288 & 0.373 & 0.117 & 0.344 & 0.278 & 0.136 & 0.191 \\ RW Focal Loss & 0.335 & 0.403 & 0.283 & 0.211 & 0.299 & 0.144 & 0.239 \\ LDAM & 0.261 & 0.497 & 0.000 & 0.000 & 0.166 & 0.172 & 0.165 \\ CB LDAM & 0.330 & 0.467 & 0.161 & 0.211 & 0.280 & 0.161 & 0.225 \\ CB LDAM-DRW & 0.379 & 0.520 & 0.156 & 0.356 & \underline{0.344} & 0.197 & 0.267 \\ RW LDAM & 0.335 & 0.437 & 0.250 & 0.167 & 0.284 & 0.149 & 0.243 \\ RW LDAM-DRW & 0.365 & 0.447 & 0.256 & 0.311 & 0.338 & 0.177 & \underline{0.275} \\ MixUp & 0.291 & 0.543 & 0.011 & 0.011 & 0.189 & 0.182 & 0.176 \\ Balanced-MixUp & 0.267 & 0.480 & 0.039 & 0.011 & 0.177 & 0.176 & 0.168 \\ Decoupling--cRT & 0.412 & 0.490 & 0.306 & 0.367 & \textbf{0.387} & 0.170 & \textbf{0.296} \\ Decoupling--$\tau$-norm & 0.337 & 0.520 & 0.167 & 0.067 
& 0.251 & 0.178 & 0.230 \\ \bottomrule \end{tabular} \label{results:mimic} \end{table} \section{Results and Analysis} For the NIH-CXR-LT dataset, the baseline method fails to adequately classify tail classes, achieving 1.7\% accuracy on those three rarest diseases (Table \ref{results:nih}). The baseline of softmax cross-entropy loss achieves a group-wise average accuracy of 0.164, but improves to 0.309 and 0.288, respectively, when using class-balanced and scikit-learn weights. Furthermore, we see that re-weighting consistently improves performance, though which re-weighting scheme provides larger gains varies. We also see that DRW can additionally improve performance, as evidenced by the fact that both CB LDAM-DRW and RW LDAM-DRW outperform their counterparts without DRW. We find that cRT decoupling achieves the best performance on both the balanced and imbalanced test sets, reaching 0.369 group-wise average accuracy on the balanced test set and 0.294 balanced accuracy on the test set. Classifier re-training is followed closely by RW LDAM-DRW, reaching 0.362 group-wise average accuracy and 0.289 balanced accuracy. On MIMIC-CXR-LT, again, the baseline approach almost entirely fails to capture the tail classes, reaching 0.022 tail accuracy and 0.188 group-wise average accuracy (Table \ref{results:mimic}). As with the NIH-CXR-LT results, re-weighting is always beneficial; for example, class-balanced re-weighting and scikit-learn re-weighting, respectively, improve focal loss performance from 0.181 to 0.278 and 0.299 group-wise average accuracy. Similarly, DRW brings even further gains to a re-weighted LDAM loss, improving group-wise accuracy by at least 0.05. Classifier re-training again achieves both the highest group-wise average accuracy on the balanced test set and the highest balanced accuracy on the test set by a considerable margin.
For both the balanced and imbalanced test sets, the second-best method is a re-weighted LDAM loss with deferred re-weighting -- CB LDAM-DRW for the balanced test set and RW LDAM-DRW for the test set. \textbf{Summary of Findings.} Overall, we see that the standard approach of optimizing softmax cross-entropy with instance-balanced weights fails to adequately capture medium and tail classes for both NIH-CXR-LT and MIMIC-CXR-LT. In contrast to the empirical success of MixUp on many natural image-based problems and Balanced-MixUp on certain medical imaging tasks, we find that MixUp and Balanced-MixUp perform similarly to the baseline for these two tasks; perhaps linearly mixing radiographs destroys valuable high-contrast signal that is necessary for discriminating disease conditions. We see that re-weighting is always beneficial, though which re-weighting method provides larger gains appears to depend on its interaction with the loss function used. We also observe that DRW can provide additional gains to standard re-weighting when used with the LDAM loss. Finally, we see that cRT decoupling was the highest-performing method on both datasets, demonstrating that decoupled training can be a simple and powerful technique for long-tailed disease classification on chest X-rays. We note that performance is lower than prior work on the original NIH ChestXRay14 and MIMIC-CXR datasets since (1) we only consider single-label images, and (2) the newly added classes are difficult to classify and introduce confusion with the set of original diseases in each dataset. \section{Discussion and Conclusion} In summary, we have conducted the first comprehensive study of long-tailed learning methods for disease classification from chest X-rays. We publicly release all code, models, and data to encourage the development of long-tailed learning methods for medical image classification. 
While we adopted the standard practice of using ImageNet pretrained weights, this limited the list of candidate long-tailed learning methods we could use. For example, certain LT methods that use specialized architectures \cite{shu2019meta,wang2020long} or explore self-supervised learning \cite{jiang2021self,marrakchifighting2021} on other datasets are not compatible with ImageNet pretraining. Future work will explore various pretraining options, combating long-tailed data through alternative weight initializations. Lastly, future work will also involve adapting multi-label long-tailed learning methods to these datasets, acknowledging the clinical reality that patients often present with multiple pathologies at once. \section{Acknowledgments} This material is based upon work supported by the Intramural Research Programs of the National Institutes of Health Clinical Center, National Library of Medicine under Award No. 4R00LM013001, and National Science Foundation under Grant No. 2145640. \bibliographystyle{splncs04}
\section{Introduction} \label{s:intro} Survival outcomes are common end-points of interest in confirmatory trials to demonstrate treatment effect in oncology. In the presence of non-proportional hazards (NPH), which has been increasingly encountered in practice, the detection power of the commonly used log-rank test is much lower than that under proportional hazards (PH). For instance, delayed treatment effects have often been reported in immune-directed anti-cancer therapies \cite{reck2016pembrolizumab,mok2019pembrolizumab}. Unlike chemotherapy, which displays early antitumor effects or separation between the survival curves, immunotherapy stimulates the patient's immune system for an antitumor response, causing delayed clinical effects \cite{mick2015statistical}. Weighted log-rank tests (WLRT) incorporate time-dependent weights to improve the detection power in the presence of non-proportional hazards. Weight functions can be dependent on survival functions \cite{harrington1982class} or at-risk proportions \cite{gehan1965generalized,tarone1977distribution}. In this report, we focus on the Fleming-Harrington class of WLRT, but one can easily extend the proposed design to other weight functions. The shape of the Fleming-Harrington weight function can be adjusted according to the survival curves. For example, one can put more weight on late separation for a delayed treatment effect to improve the detection power. According to Schoenfeld \cite{schoenfeld1983sample}, the optimal weight (with the highest power) should be proportional to the logarithm of the hazard ratio \cite{schoenfeld1981asymptotic}, which is a time-dependent function under NPH. Survival curves are generally unknown, and thus, the appropriate weight function cannot be decided before starting the trial. Lee \cite{lee1996some} proposed a versatile max-combo test, which takes the maximum value of a set of different WLRT to provide robust detection.
In other words, under either PH or NPH, the max-combo test gives power quite close to that of the best-performing test in the combination. A typical max-combo test combines several WLRT, each of which is most powerful in detecting a certain pattern of NPH or PH difference between treatment arms, and the multiple testing adjustment is conducted via a Dunnett-type parametric method. Interim analyses (IA) in group sequential designs (GS) enable multiple looks (interims) before the end of the study. They allow early stopping when there is sufficient evidence to discontinue the study, such as rejection of the null hypothesis, toxicity, or futility. Though their benefits have been extensively studied, to the best of our knowledge, group sequential designs with max-combo tests (GS-MC) have not been systematically established. In this report, we develop a simulation-free approach to calculate the stopping boundaries with the GS-MC design. As shown in the sequel, our methods can control the type I error and provide an efficient way of computing the power and sample size with a GS-MC design. The rest of this report is organized as follows. We introduce the designs of WLRT and the max-combo test, and extend them to GS-MC in Section~\ref{s:problem}. We propose simulation-free approaches for GS-MC to compute boundaries for type I error control and practical sample size in realistic scenarios in Section~\ref{s:solution}. We evaluate the proposed methods through extensive simulations with or without violations of the model assumptions in Section~\ref{s:simulation}. Some concluding remarks are provided in Section~\ref{s:discuss}. \section{Problem Formulation} \label{s:problem} \subsection{Notation} \label{s:problem:subs:notation} Suppose there are $n$ subjects entering the study at $E_i$, $i=1,\dots,n$, within the accrual period $[0,R]$. Let $T_i$ denote a univariate event time of interest, and $A_i$ indicate treatment assignment, e.g.
using $0$ for the control group and $1$ for the treatment group. Given the treatment $A_i=a$, $a\in\{1,0\}$, event time $T_i$ follows a survival function $S_a(s)=P(T_i > s\mid A_i=a)$. To account for a censoring time $C_i$, the observed follow-up time and event indicator are $Y_i(t)=\min(T_i,C_i, t-E_i)$ and $\delta_i(t)=I(Y_i(t)=T_i)$, where $t$ is a stopping time for tests. Alternatively, observed event times can also be written in a counting process form, i.e., $N_i(t,s)=I(T_i\leq s, \delta_i(t)=1)$. Note that $t$ and $E_i$ are on the chronological time scale, while $s$ (in all the functions above), $T_i$, and $C_i$ are on a follow-up time scale starting from the accrual time $E_i$. The censoring might differ between the two treatment arms, so the survival functions of censoring are $S_{ca}(s)=P(C_i > s\mid A_i=a)$, $a=0,1$. In this report, we allow both the control and the treatment group to follow piecewise exponential distributions with the general form: \begin{equation}\label{equ:survival} S_a(s)= \exp\left[-\sum_{q=1}^{Q}\lambda_{aq} \max\left\{0,\min(\epsilon_q-\epsilon_{q-1},s-\epsilon_{q-1})\right\}\right],\quad a=1,0. \end{equation} Note that $0=\epsilon_0<\epsilon_1<\dots<\epsilon_{Q}=\infty$ are the splitting points where the hazard changes, and $\lambda_{aq}$ is the hazard in the interval $[\epsilon_{q-1},\epsilon_q)$, $q = 1,\dots, Q$. The flexibility of including multiple pieces enables an accurate approximation of any survival curve. The corresponding density function is $f_a(s)=S_a(s)\sum_{q=1}^QI(s\in [\epsilon_{q-1},\epsilon_q))\lambda_{aq}$. The hazard ratios between the two treatment arms are $\bm\Theta=\{\theta_q=\lambda_{1q}/\lambda_{0q},q=1,\dots,Q\}$. The hazard ratios can describe various changing patterns of the treatment effect, e.g., the constant hazard ratio or PH case with $\theta_q$ identical for all $q$, and delayed or increasing effects with $0<\theta_{Q}<\dots<\theta_1\leq1$, etc.
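The piecewise-exponential survival function (\ref{equ:survival}) is straightforward to evaluate numerically; a minimal sketch, where the hazards and change point below are illustrative rather than values used in this report:

```python
import math

def piecewise_exp_survival(s, hazards, breakpoints):
    """S(s) = exp(-sum_q lam_q * max(0, min(eps_q - eps_{q-1}, s - eps_{q-1}))),
    with interior breakpoints [eps_1, ..., eps_{Q-1}], eps_0 = 0, eps_Q = inf."""
    eps = [0.0] + list(breakpoints) + [math.inf]
    cum_hazard = 0.0
    for q, lam in enumerate(hazards):
        cum_hazard += lam * max(0.0, min(eps[q + 1] - eps[q], s - eps[q]))
    return math.exp(-cum_hazard)

# Two-piece example with a delayed effect starting at eps = 6 months:
# control hazard 0.1 throughout; treatment 0.1 before, 0.06 after.
S0 = piecewise_exp_survival(12.0, [0.1, 0.1], [6.0])
S1 = piecewise_exp_survival(12.0, [0.1, 0.06], [6.0])
```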
For a delayed treatment effect, a simple NPH case is given in Appendix~B, (\ref{equ:survival_2p}) and (\ref{equ:density_2p}), with only the two-piece exponential distribution considered: the hazards are $\lambda_{11}=\lambda_{01}=\lambda$ and $\theta_1=1$ for $[0,\epsilon)$, and $\lambda_{12}=\theta_2\lambda$ and $\lambda_{02}=\lambda$ for $[\epsilon,\infty)$. In this simple case, the null hypothesis ($H_0$) has $\theta_2=1$ and the alternative hypothesis ($H_1$) has $\theta_2=\theta$ for some $\theta<1$. More broadly, the null hypothesis $H_0$ always has $\bm\Theta=\bm\Theta_0$ with all elements $\theta_q=1$, and the alternative $H_1$ can take any predefined $\bm\Theta=\bm\Theta_1$ with at least one element $\theta_q\not=1$. Additionally, we assume uniform accrual and administrative right censoring at time $\tau$, following the proposal of Hasegawa \cite{hasegawa2014sample}, in Section~\ref{s:solution} when developing our solutions. However, the robustness of the proposed approaches against violations of these assumptions is examined in Section~\ref{s:simulation}, and the proposed method can easily be extended to accommodate more complicated scenarios in future studies. For example, one could implement the method from Luo et al. \cite{luo2019design} for all types of piece-wise exponential distributions under various censoring and accrual processes. \subsection{Weighted log-rank test} \label{s:problem:subs:WLRT} Suppose the treatment-specific at-risk proportions are $R_a(t,s)=\frac{1}{n}\sum_{i=1}^n I(Y_i(t)\geq s, A_i=a)$, and the total at-risk proportion is $R(t,s)=R_1(t,s)+R_0(t,s)$.
The standardized Fleming-Harrington class weighted log-rank test statistic (WLRT) stopped at time $t$ is given by \begin{equation}\label{equ:wlrt} \mathcal{G}_{\rho,\gamma}(t)=\frac{\sum_{i=1}^n\int_0^t w_{\rho,\gamma}(s)[A_i-\frac{R_1(t,s)}{R(t,s)}]N_i(t,ds)}{\sqrt{\sum_{i=1}^n\int_0^t w_{\rho,\gamma}^2(s)\frac{R_1(t,s)R_0(t,s)}{R(t,s)^2}N_i(t,ds)}}, \end{equation} with a Fleming-Harrington weight $w_{\rho,\gamma}(s)=S(s^-)^\rho\{1-S(s^-)\}^\gamma$. We denote the numerator of (\ref{equ:wlrt}) as $G_{\rho,\gamma}(t)$, and its asymptotic variance $V(G_{\rho,\gamma}(t))$ can be estimated via the denominator: \begin{equation}\label{equ:var_est} \widehat{V}(G_{\rho,\gamma}(t))=\sum_{i=1}^n\int_0^t w_{\rho,\gamma}^2(s)\frac{R_1(t,s)R_0(t,s)}{R(t,s)^2}N_i(t,ds). \end{equation} Thus the statistic in (\ref{equ:wlrt}) can also be written as, and satisfies, $\mathcal{G}_{\rho,\gamma}(t)=G_{\rho,\gamma}(t)/\sqrt{\widehat{V}(G_{\rho,\gamma}(t))}=G_{\rho,\gamma}(t)/\sqrt{V(G_{\rho,\gamma}(t))}+o_p(1)$. Modifying the two parameters $\rho$ and $\gamma$ adjusts the shape of the weights, and thus the focus of the detection. For instance, with $\rho=\gamma=0$, the test statistic reduces to the standard log-rank test (SLRT), which provides the best power under PH; with $\rho=0$ and $\gamma=1$, it emphasizes late separation and thus provides better power for delayed effects. Sample size calculation for a confirmatory clinical trial is based on the predefined null and alternative hypotheses (denoted by $H_0$ and $H_1$). Letting $z_\alpha$ and $z_{1-\beta}$ be critical values of the standard normal distribution, and $\Delta$ the effective difference between the two arms (e.g., the constant log hazard ratio under PH), the required number of events for SLRT has a closed-form expression \cite{schoenfeld1983sample} given by \begin{equation} d=\frac{(z_\alpha+z_{1-\beta})^2}{p(1-p)\Delta^2}.
\end{equation} The sample size computation for WLRT, however, was established following a stochastic process approach suggested by Lakatos \cite{lakatos1988sample} and Hasegawa \cite{hasegawa2014sample}. Suppose $b$ is the number of intervals per time unit (month), and $J(t)=\lfloor bt\rfloor$ is the total number of time intervals of equal length over $[s_0=0,s_1,s_2,\dots,s_J=t]$, where $t$ represents a stopping time as aforementioned. The predicted mean $\widetilde{E}_{sto,H}(G_{\rho,\gamma}(t))$ and variance/information of the numerator of (\ref{equ:wlrt}) then follow: \begin{equation}\label{equ:mean_sto} \widetilde{E}_{sto,H}(G_{\rho,\gamma}(t))=\sum_{j=0}^{J(t)-1}D^*_{j,H}(t)w_{\rho,\gamma}^*(j)\left[\frac{\phi_j^*\theta_j^*}{1+\phi_j^*\theta_j^*}-\frac{\phi_j^*}{1+\phi_j^*}\right], \quad H=H_1, H_0; \end{equation} \begin{equation}\label{equ:var_sto} \widetilde{V}_{sto,H}(G_{\rho,\gamma}(t))=\sum_{j=0}^{J(t)-1}D^*_{j,H}(t)w_{\rho,\gamma}^{*2}(j)\frac{\phi_j^*}{(1+\phi_j^*)^2}; \end{equation} where, \begin{equation}\label{equ:compo_sto} \begin{array}{l} R_1^*(0)=p, \quad R_0^*(0)=1-p,\quad w_{\rho,\gamma}^*(j)=S(s_j)^\rho[1-S(s_j)]^\gamma,\\ R_a^*(j+1)=R_a^*(j)\left[1-h_a^*(s_j)\frac{1}{b}-\frac{I(s_j>\tau-R)}{b(\tau-s_j)} \right], \quad h_a^*(s_j)=\frac{f_a(s_j)}{S_a(s_j)},\quad a=0,1,\\ \theta_j^*=\frac{h_1^*(s_j)}{h_0^*(s_j)}, \quad \phi_j^*=\frac{R_1^*(j)}{R_0^*(j)}, \quad D_{j,H}^*(t)=\left[\left\{h_0^*(s_j)R_0^*(j)+h_1^*(s_j)R_1^*(j)\right\}\frac{1}{b}\right]\min\left(\frac{t}{R},1\right). \end{array} \end{equation} Note that we set $H=H_1$ for the alternative hypothesis and $H=H_0$ for the null hypothesis in formulas (\ref{equ:mean_sto})-(\ref{equ:compo_sto}). As expected, $\widetilde{E}_{sto,H_0}(G_{\rho,\gamma}(t))=0$, since $\theta_j^*=1$ for all $j$ under $H_0$. In this report, we call the mean and variance in (\ref{equ:mean_sto})-(\ref{equ:var_sto}) ``predicted'' values since no observed data are used.
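Under the stated assumptions (uniform accrual over $[0,R]$ and administrative censoring at $\tau$), the recursion (\ref{equ:compo_sto}) and the sums (\ref{equ:mean_sto})-(\ref{equ:var_sto}) can be sketched numerically as follows; this is a simplified illustration of the stochastic prediction scheme, not the authors' implementation:

```python
import math

def predicted_wlrt_drift(hazard0, hazard1, t, R, tau, p=0.5,
                         rho=0, gamma=0, b=12):
    """Stochastic prediction of E(G)/sqrt(V(G)) for a Fleming-Harrington
    WLRT stopped at time t, via a Lakatos-type recursion.  hazard0/hazard1
    are functions s -> arm-specific hazard; accrual is uniform over [0, R]
    and censoring is administrative at tau."""
    J = int(b * t)                      # J(t) = floor(b * t)
    R1s, R0s = p, 1.0 - p               # at-risk proportions R_a^*(j)
    S0, S1 = 1.0, 1.0                   # arm-specific survival at s_j
    E, V = 0.0, 0.0
    for j in range(J):
        s = j / b
        h0, h1 = hazard0(s), hazard1(s)
        S = (1 - p) * S0 + p * S1       # marginal survival
        w = S ** rho * (1.0 - S) ** gamma
        D = (h0 * R0s + h1 * R1s) / b * min(t / R, 1.0)
        phi = R1s / R0s
        theta = h1 / h0
        E += D * w * (phi * theta / (1 + phi * theta) - phi / (1 + phi))
        V += D * w ** 2 * phi / (1 + phi) ** 2
        # Advance survival and at-risk proportions to s_{j+1}; the "drop"
        # term reflects administrative censoring once s_j > tau - R.
        S0 *= 1 - h0 / b
        S1 *= 1 - h1 / b
        drop = 1.0 / (b * (tau - s)) if s > tau - R else 0.0
        R0s *= 1 - h0 / b - drop
        R1s *= 1 - h1 / b - drop
    return E / math.sqrt(V)
```

Under $H_0$ (equal hazards) the drift is exactly zero, and for a beneficial delayed effect it is negative, matching $\widetilde{E}_{sto,H_0}=0$ above.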
Moreover, we use the tilde accent to distinguish them from those estimated from observed events in (\ref{equ:var_est}). The difference between the prediction and estimation methods lies merely in whether the observed event times are used in the computation. Hence, prediction is usually used at the trial design stage, while estimation is more often implemented once part of the data has been collected. The marginal survival function is the weighted average of the two treatment arms: $S(s)=(1-p)S_0(s)+pS_1(s)$. In contrast to the proposal by Hasegawa \cite{hasegawa2014sample}, there is an extra multiplicative term $\min(\frac{t}{R},1)$ in the formulation of $D_{j,H}^*(t)$ in (\ref{equ:compo_sto}) to account for the special case $t<R$, i.e., stopping before the end of the accrual period. Lakatos \cite{lakatos1988sample} suggested that $\mathcal{G}_{\rho,\gamma}(t)$ follows an asymptotically normal distribution with unit variance and mean approximated by \begin{equation}\label{equ:mu_sto} \widetilde{\mu}_{\rho,\gamma,H}(t)=\frac{\widetilde{E}_{sto,H}(G_{\rho,\gamma}(t))}{\sqrt{\widetilde{V}_{sto,H}(G_{\rho,\gamma}(t))}}. \end{equation} Fixing $t=\tau$ for tests at the end of the study, the required sample sizes in terms of the total number of subjects ($n$) and observed events ($d$) are \begin{equation} \begin{array}{l} n=\left(\frac{z_\alpha+z_{1-\beta}}{\widetilde{\mu}_{\rho,\gamma,H_1}(\tau)}\right)^2,\\ d=nD_{H_1}^*(\tau), \end{array} \end{equation} where $D_{H_1}^*(\tau)=\sum_{j=0}^{J(\tau)-1}D^*_{j,H_1}(\tau)$ approximates the probability of observing an event from each subject under $H_1$. \subsection{Max-combo test} \label{s:problem:subs:maxcombo} In practice, the true survival curves or the hazard ratio between the treatment arms are usually unknown; moreover, the existence of a delayed treatment effect and its severity can hardly be predicted in advance.
To that end, Lee \cite{lee1996some} proposed a versatile maxcombo test, taking the maximum of a combination of different WLRTs to cover various scenarios: the PH case and NPH cases with early, middle, and late effects, etc. The general form of a maxcombo test is \begin{equation}\label{equ:maxcombo} \mathcal{G}_{max}(t)=\max\left(\mathcal{G}_{\rho_1,\gamma_1}(t), \mathcal{G}_{\rho_2,\gamma_2}(t),\dots,\mathcal{G}_{\rho_K,\gamma_K}(t)\right), \end{equation} where $\mathcal{G}_{\rho_k,\gamma_k}(t)$ is one of the $K$ different Fleming-Harrington family WLRTs. Boundary calculation for a maxcombo test statistic amounts to finding the boundary value $g(\tau)$ at the end of the study (time $\tau$) such that the type I error ($\alpha$) is controlled: \begin{equation}\label{equ:alpha} P(\mathcal{G}_{max}(\tau)<g(\tau)\mid H_0)=1-\alpha. \end{equation} According to Lee \cite{lee2007versatility}, $ \bm{\mathcal{ G}}(t)=[\mathcal{G}_{\rho_1,\gamma_1}(t),\dots,\mathcal{G}_{\rho_K,\gamma_K}(t)]^\prime$ is (asymptotically) multivariate normal with mean 0 and unit variances, and the correlation between two different WLRTs with $k_1\not=k_2$ is given by \begin{equation}\label{equ:cor} Cor(\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t),\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t))=\frac{Cov(G_{\rho_{k_1},\gamma_{k_1}}(t),G_{\rho_{k_2},\gamma_{k_2}}(t))}{\sqrt{V(G_{\rho_{k_1},\gamma_{k_1}}(t))V(G_{\rho_{k_2},\gamma_{k_2}}(t))}}. \end{equation} The variances can be obtained through either the data-driven estimation $\widehat{V}(G_{\rho,\gamma}(t))$ from (\ref{equ:var_est}) or the stochastic prediction $\widetilde{V}_{sto}(G_{\rho,\gamma}(t))$ from (\ref{equ:var_sto}). In a similar vein, the covariance can be obtained by either prediction or estimation via \begin{equation}\label{equ:cov} Cov(G_{\rho_{k_1},\gamma_{k_1}}(t),G_{\rho_{k_2},\gamma_{k_2}}(t))=V\left\{G_{\frac{\rho_{k_1}+\rho_{k_2}}{2},\frac{\gamma_{k_1}+\gamma_{k_2}}{2}}(t)\right\}.
\end{equation} Alternatively, for (piece-wise) exponential distributions, one can derive closed-form expressions for the mean, variance, and covariance from the exact distribution functions; we denote this approach ``exact prediction'' and index it with ``exa'' for distinction. The exactly predicted variances ($\widetilde{V}_{exa}$) and covariances ($\widetilde{Cov}_{exa}$) for a piece-wise exponential survival distribution are given in Appendix~B. The exact prediction method can largely alleviate the computational burden, though closed-form solutions may not exist for complex survival curves; in that case, one may resort to numerical approximation by replacing the integrals with summations over many small intervals, much like the proposed stochastic prediction method. Under $H_1$, the asymptotic mean of each WLRT can be approximated through (\ref{equ:mu_sto}), yielding the mean vector $\widetilde{\bm\mu}(t)=[\widetilde{\mu}_{\rho_1,\gamma_1,H_1}(t),\dots,\widetilde{\mu}_{\rho_K,\gamma_K,H_1}(t)]^\prime$. The approximate asymptotic distribution of the test statistics $\bm {\mathcal{G}}(t)$ is multivariate normal with mean $\sqrt{n}\widetilde{\bm\mu}(t)$, and the covariance/correlation matrix can be obtained via the prediction methods proposed for the boundary calculation in (\ref{equ:cor}). Note that we do not consider the estimation approach for sample size calculation, since the sample size is usually decided before the trial starts in group sequential designs. The sample size is obtained by solving the equation below so that the type II error equals $\beta$: \begin{equation}\label{equ:beta} P(\mathcal{G}_{max}(\tau)<g(\tau)\mid H_1)=\beta. \end{equation} \subsection{Group Sequential Design for Maxcombo tests} \label{s:problem:subs:IA-MC} Practitioners often employ interim analyses or group sequential designs in clinical trials.
Such designs save time and budget by stopping a trial early when there is sufficient statistical evidence to terminate the study: futility, unexpected side effects, or a significant treatment effect. There were extensive discussions about introducing maxcombo tests into group sequential designs at the FDA workshop at the Duke-Margolis Health Policy Center in 2018 \cite{fdaworkshop}. Plenty of simulations have shown that maxcombo could potentially improve robustness when NPH exists. However, two main problems hinder the implementation of GS-MC: 1) how to compute the boundaries at each stage to control the type I error; 2) how to compute the sample size. Simulations can solve both problems, but the computational burden could be considerable. To avoid tedious simulations, we propose a design procedure that controls the type I error and accurately predicts the required sample size by approximating the asymptotic joint distribution of all the test statistics across different stopping points. \section{Proposed solutions} \label{s:solution} \subsection{Correlation matrix approximation} \label{sub:correlation} The correlation matrix of the test statistics requires three different types of correlation values. The first type is the within-stage correlation between different tests, e.g., $Cor(\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t),\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t))$, which can be computed following equation (\ref{equ:cor}). The second type is the within-test correlation between two stopping time points, $Cor(\mathcal{G}_{\rho,\gamma}(t_{m_1}),\mathcal{G}_{\rho,\gamma}(t_{m_2}))$ for $0<t_{m_1}<t_{m_2}\leq\tau$ and $m_1<m_2$, computed from \begin{equation}\label{equ:cor_t} Cor(\mathcal{G}_{\rho,\gamma}(t_{m_1}),\mathcal{G}_{\rho,\gamma}(t_{m_2}))=\sqrt{\frac{V(G_{\rho,\gamma}(t_{m_1}))}{V(G_{\rho,\gamma}(t_{m_2}))}}.
\end{equation} The information fraction $IF_{\rho,\gamma}(t_{m_1},t_{m_2})={V(G_{\rho,\gamma}(t_{m_1}))}/{V(G_{\rho,\gamma}(t_{m_2}))}$ under the square root in (\ref{equ:cor_t}) has been used to decide stopping times in group sequential designs \cite{hasegawa2016group}. Note that (\ref{equ:cor_t}) holds asymptotically only under $H_0$, where the independent increment property $Cov(G_{\rho,\gamma}(t_{m_1}),G_{\rho,\gamma}(t_{m_2}))=V(G_{\rho,\gamma}(t_{m_1}))$ is asymptotically true \cite{tsiatis1981asymptotic}. Although not strictly satisfied under $H_1$, the independent increment property, and thus equation (\ref{equ:cor_t}), almost holds numerically when the difference between the two treatment arms is not considerable (under so-called ``local alternatives") or when events are not too frequent, according to Example~1 in Luo et al \cite{luo2019design} and our simulations in Section~\ref{s:simulation}. Note that the variance values for both within-test and within-stage correlations can be obtained via either the prediction or the estimation approach. The third type includes correlations across different time points and test types. We propose a simple calculation based on the first two types of correlations via Theorem~1, which is proved in Appendix~\ref{app:sec:proof}. \begin{theorem} If random variables $X_1$, $X_2$ and $X_3$ have mean 0 and variance 1, and satisfy $X_3=\phi X_2+M$ with $M\perp (X_1,X_2)$, $E(M)=0$, and $\phi$ a constant, then the equality $cor(X_1,X_3)= cor(X_1,X_2)cor(X_2,X_3)$ holds.
\end{theorem} In particular, let $X_1=\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t_{m_1})-E[\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t_{m_1})]$ and $X_2=\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_1})-E[\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_1})]$; then it holds that $X_3=\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_2})-E[\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_2})]=\phi X_2+M$, where $\phi=\sqrt{IF_{\rho_{k_2},\gamma_{k_2}}(t_{m_1},t_{m_2})}$ and $M=[G_{\rho_{k_2},\gamma_{k_2}}(t_{m_2})-G_{\rho_{k_2},\gamma_{k_2}}(t_{m_1})-E\{G_{\rho_{k_2},\gamma_{k_2}}(t_{m_2})-G_{\rho_{k_2},\gamma_{k_2}}(t_{m_1})\}]/\sqrt{V(G_{\rho_{k_2},\gamma_{k_2}}(t_{m_2}))}$. Note that under $H_0$ we have $E(G_{\rho,\gamma}(t))=0$. Similar to the comments on the second-type correlation, $M\perp (X_1,X_2)$ is asymptotically correct under $H_0$ by the asymptotic independent increment property \cite{tsiatis1981asymptotic}, and only approximately true under $H_1$ when the difference between the two arms and the event hazards are limited within some practical range. In Section~\ref{s:simulation}, we conducted extensive simulations to explore how accurate this approximation is in various scenarios and in the presence of multiple assumption violations (Table~\ref{tab:corr}, Web Tables~3-6). Following Theorem~1, the third-type correlation is given by \begin{equation}\label{equ:equal} \begin{array}{l} Cor(\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t_{m_1}),\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_2}))=\\ \qquad\qquad Cor(\mathcal{G}_{\rho_{k_1},\gamma_{k_1}}(t_{m_1}),\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_1}))Cor(\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_1}),\mathcal{G}_{\rho_{k_2},\gamma_{k_2}}(t_{m_2})). \end{array} \end{equation} With all three types of correlations calculated using either distribution-based prediction or data-driven estimation, the two sets of correlation matrices are obtained under $H_0$ and $H_1$.
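The assembly of the three correlation types can be sketched numerically. The snippet below is an illustrative sketch under assumptions of our choosing (it reuses the delayed-effect example settings and computes variances under $H_0$, where both arms share the hazard $\lambda=\log(2)/6$; the stopping times 12 and 18 months are hypothetical, and all names are ours):

```python
import math

lam = math.log(2) / 6                  # shared hazard under H0 (example value)
def var_G(rho, gamma, t, b=30, tau=18.0, R=14.0, p=0.5):
    """Predicted variance of the FH(rho, gamma) log-rank numerator at time t,
    via the Lakatos interval recursion, here under H0."""
    J = int(b * t)
    R1, R0 = p, 1.0 - p
    v = 0.0
    for j in range(J):
        s = j / b
        S = math.exp(-lam * s)                       # pooled survival under H0
        w = S ** rho * (1 - S) ** gamma              # FH weight
        phi = R1 / R0
        Dj = (lam * R0 + lam * R1) / b * min(t / R, 1.0)
        v += Dj * w * w * phi / (1 + phi) ** 2
        cens = 1.0 / (b * (tau - s)) if s > tau - R else 0.0
        R0 *= 1 - lam / b - cens
        R1 *= 1 - lam / b - cens
    return v

t_int, t_fin = 12.0, 18.0              # hypothetical interim and final times

# Type 1: within-stage, across tests -- covariance via the averaged indices
def cor_within_stage(k1, k2, t):
    v12 = var_G((k1[0] + k2[0]) / 2, (k1[1] + k2[1]) / 2, t)
    return v12 / math.sqrt(var_G(*k1, t) * var_G(*k2, t))

# Type 2: within-test, across stages -- square root of the information fraction
def cor_within_test(k, t1, t2):
    return math.sqrt(var_G(*k, t1) / var_G(*k, t2))

# Type 3: across tests and stages -- the product form from Theorem 1
c13 = cor_within_stage((0, 0), (0, 1), t_int) * cor_within_test((0, 1), t_int, t_fin)
```

All three quantities land in $(0,1]$, as correlations between positively correlated log-rank statistics should, and the type-1 correlation of a test with itself is exactly 1.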
\subsection{Type I error: boundaries} \label{sub:type1error} We introduce the boundary vector $\bm g=[g(t_{1}),\dots,g(t_{M})]^\prime$ for $M$ stages, including a final stage and $M-1$ interim stages. To control the type I error at each stage, we employ a monotone increasing error spending function $\alpha(\nu)$ with $\nu\in[0,1]$, $\alpha(0)=0$ and $\alpha(1)=\alpha$ \cite{gordon1983discrete}. Suppose that we monitor the information fractions at times $0=t_0<t_1<\dots<t_M=\tau$ satisfying $IF_{\rho_{k},\gamma_{k}}(t_{m},\tau)=\nu_m$, where $0=\nu_0<\nu_1<\dots<\nu_M=1$ are pre-defined and $w_{\rho_k,\gamma_k}$ indicates the $k$th WLRT in the combo used to monitor stopping times. The error spending function $\alpha(\nu)$ controls the type I error spent at each stage via the step-wise equations, for stage $m=1,\dots,M$, \begin{equation} \begin{array}{l} P(\mathcal{G}_{max}(t_{j})\leq g(t_{j}),j=1,\dots,m-1,\mathcal{G}_{max}(t_{m})>g(t_{m})\mid H_0)=\alpha(\nu_m)-\alpha(\nu_{m-1}). \end{array} \end{equation} The boundary values $\bm g$ can be obtained by solving these equations under the multivariate normal distribution with mean $0$ and variance matrix $\bm \Sigma_0$, whose diagonal entries are all 1 and whose off-diagonal correlation entries are computed following Subsection~\ref{sub:correlation}. \subsection{Power: sample size} The sample size calculation is based on the asymptotic multivariate normal distribution under $H_1$. With all the boundaries decided in Subsection~\ref{sub:type1error}, the sample size ($n$) can be solved such that \begin{equation}\label{equ:beta2} \begin{array}{l} P(\mathcal{G}_{max}(t_{m})\leq g(t_{m}),m=1,\dots,M\mid H_1)=\beta. \end{array} \end{equation} In group sequential trials, sample size calculation usually precedes the trial.
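For the special case of a single analysis with $K=2$ tests, both the boundary equation and the power equation reduce to bivariate normal probabilities that can be solved by bisection. The sketch below is ours, not any package's implementation; the within-stage correlation $0.85$ and the per-subject drifts $0.10$ and $0.15$ are illustrative placeholders for quantities that would come from the prediction formulas. Multi-stage designs require higher-dimensional multivariate normal probabilities (e.g., via the R package mvtnorm).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bvn_cdf(a, b, r, n=4000, lo=-8.0):
    """P(X<=a, Y<=b) for a standard bivariate normal with correlation |r|<1,
    by trapezoidal integration of P(Y<=b | X=x) * phi(x) over x<=a."""
    if a <= lo:
        return 0.0
    s = math.sqrt(1.0 - r * r)
    h = (a - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        f = norm_cdf((b - r * x) / s) * math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
        total += f * (0.5 if i in (0, n) else 1.0)
    return total * h

def maxcombo_boundary(r, alpha):
    """Solve P(max(G1, G2) <= g | H0) = 1 - alpha for g by bisection."""
    lo_g, hi_g = 0.0, 6.0
    for _ in range(60):
        mid = 0.5 * (lo_g + hi_g)
        if bvn_cdf(mid, mid, r) < 1.0 - alpha:
            lo_g = mid
        else:
            hi_g = mid
    return 0.5 * (lo_g + hi_g)

def sample_size(mu1, mu2, r, g, beta=0.10):
    """Solve P(G1<=g, G2<=g | H1) = beta for n, where the standardized tests
    drift to sqrt(n)*mu1 and sqrt(n)*mu2 under H1."""
    lo_n, hi_n = 1.0, 1e5
    for _ in range(60):
        mid = 0.5 * (lo_n + hi_n)
        rt = math.sqrt(mid)
        if bvn_cdf(g - rt * mu1, g - rt * mu2, r) > beta:
            lo_n = mid          # type II error still above beta: increase n
        else:
            hi_n = mid
    return 0.5 * (lo_n + hi_n)

g_mc = maxcombo_boundary(0.85, 0.025)        # 0.85: illustrative correlation
n_mc = sample_size(0.10, 0.15, 0.85, g_mc)   # 0.10, 0.15: illustrative drifts
```

As a sanity check, with correlation 0 the boundary solves $\Phi(g)^2=0.975$, i.e. $g\approx2.239$, and any positive correlation pulls the boundary down toward the single-test critical value $1.96$.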
We obtain the sample size according to the predicted mean $\sqrt{n}[\widetilde{\bm\mu}(t_{1})^\prime,\dots,\widetilde{\bm\mu}(t_M)^\prime]^\prime$ and variance matrix $\widetilde{\bm\Sigma}_1$ of $\bm{ \mathcal{G}}=[\bm{\mathcal{ G}}(t_1)^\prime,\bm {\mathcal{G}}(t_2)^\prime,\dots,\bm{ \mathcal{G}}(t_M)^\prime]^\prime$, where $\bm{\mathcal{G}}(t_m)^\prime=[\mathcal{G}_{\rho_1,\gamma_1}(t_m),\dots,\mathcal{G}_{\rho_K,\gamma_K}(t_m)]$. The required event count is $d=nD_{H_1}^*(\tau)$. \subsection{A complete design} The complete design is summarized in Figure~\ref{fig:1}. First, one needs to set up the plan by defining the hypotheses $H_0$ and $H_1$, their survival functions, the $K$ WLRTs within the combo, the $M-1$ interim stages, and the corresponding stopping rule $\nu_m$. The stopping rule depends on the information fraction of one of the WLRTs in the combo, i.e., stopping at $IF_{\rho_k,\gamma_{k}}(t_m,\tau)=\nu_m$ for the $m$th analysis if the previous $m-1$ stages fail to reject $H_0$. For instance, when using the surrogate information fraction\cite{hasegawa2016group}, it would be based on the event counts. The correlation matrices can be obtained using either the prediction approach or the estimation approach. Prediction can be done using the stochastic method or the exact method: the former applies to all kinds of survival functions and also accounts for the changing at-risk proportion of the treatment group, while the latter treats this proportion as a constant (denoted $p$) in the formulas given in Appendix~B. The estimation approach is entirely data-driven, following equations (\ref{equ:var_est}) and (\ref{equ:cov}). The stopping times are predicted according to the predicted information fractions, yielding the resulting distributions of the multivariate test statistics under both hypotheses; the boundaries at each stage and the sample size can then be predicted as well.
Moreover, once the trial starts and data are collected, the correlation matrix can be estimated following (\ref{equ:var_est}). Therefore, instead of using the predicted correlations, one can also estimate the correlation matrices for boundary and sample size calculation, to ensure that the type I error is still controlled when the assumed survival distributions, censoring process, or accrual procedure are violated in practice. \begin{figure}[h!] \centerline{\includegraphics[width=\columnwidth]{flow-chart.pdf}} \caption{A flow-chart describing the procedure of the proposed simulation-free GS-MC design. The superscript ``*'' indicates that the correlation matrices can be predicted using both the stochastic and exact approaches, while the mean ($\widetilde{\mu}$) is predicted using the stochastic approach to enjoy a more precise approximation as the at-risk proportions change over time. The ``analysis'' stage is where we conduct the maxcombo tests under various scenarios with or without assumption violations; the output of this step is the decision to reject or accept $H_0$, summarized as type I error and power in the simulations. The arrows between the blocks indicate the direction of information flow. } \label{fig:1} \end{figure} \section{Simulation Studies} \label{s:simulation} We used the two-piece exponential provided in Appendix~B as an example to demonstrate and compare the performance of our proposed approaches. The event times of the control arm follow an exponential distribution with rate $\lambda=\log(2)/6$ (median survival: 6 months), while the event times of the treatment arm were generated from a two-piece exponential whose hazard changes from $\lambda$ to $\theta \lambda$ after $\epsilon=2$ months of follow-up. When $\theta=1$, the two-piece exponential reduces to an exponential distribution identical to the control group.
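The two-piece exponential just described admits simple inverse-CDF sampling: a unit-rate exponential draw is mapped through the piecewise cumulative hazard. The snippet below is an illustrative sketch of this data-generating mechanism (not the simulation code behind the tables), with all names ours:

```python
import math, random

def rtwopiece(lam, theta, eps, rng):
    """One event time from a two-piece exponential: hazard lam before eps,
    theta*lam afterwards (inverse-CDF sampling)."""
    e = -math.log(1.0 - rng.random())      # unit-rate exponential draw
    if e <= lam * eps:
        return e / lam                     # event occurs before the change point
    return eps + (e - lam * eps) / (theta * lam)

rng = random.Random(1)
lam, theta, eps = math.log(2) / 6, 0.6, 2.0
ctrl = [rtwopiece(lam, 1.0, eps, rng) for _ in range(20000)]  # theta=1: exponential
trt = [rtwopiece(lam, theta, eps, rng) for _ in range(20000)]
```

Large samples reproduce the stated properties: the control arm has a median near 6 months, the two arms are indistinguishable before the change point at $\epsilon=2$, and the treatment arm has longer event times on average afterwards.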
To strictly follow the piece-wise exponential distribution suggested in (\ref{equ:survival}) with $Q=2$, we let $\bm\Theta_0=\{1,1\}$ for $H_0$ and $\bm\Theta_1=\{1,\theta\}$ for $H_1$, where $\theta\in(0,1)$ and $\epsilon_1=\epsilon$, which is equivalent to (\ref{equ:survival_2p}). Simulations under different scenarios were carried out to evaluate the proposed methods over a reasonable range of post-delay treatment effects ($\theta\in[0.5,0.7]$). To begin, we generated data following uniform accrual within the time interval $[0, R]$, where $R=14$; with probability $p=0.5$, subjects were randomly assigned to the treatment arm. All studies were expected to end at $\tau=18$. We included two log-rank test statistics in a maxcombo test $\mathcal{G}_{max}(t)=\max(\mathcal{G}_{0,0}(t),\mathcal{G}_{0,1}(t))$ and one interim stopping stage; thus $M=2$ and $K=2$. Note that $\mathcal{G}_{0,1}(t)$ tends to be more powerful than $\mathcal{G}_{0,0}(t)$ in the presence of a delayed treatment effect. In practice, however, the existence of such a delayed effect is generally unknown; moreover, its severity (in terms of $\theta$ and $\epsilon$) can hardly be predicted, so incorporating more WLRTs can potentially provide better robustness. In the following simulations, we focus on one-sided tests, with the type I error controlled at level $\alpha=0.025$ and the sample size targeting power $1-\beta=0.9$. For each simulation study, we generated 200,000 datasets for type I error evaluation and 50,000 for power estimation. The stopping times were decided according to the information fraction of the SLRT $\mathcal{G}_{0,0}$, namely the surrogate information fraction of Hasegawa \cite{hasegawa2016group}. We stopped for an interim analysis at $t_{int}$ when $0.6d$ events were observed, and terminated the study for a final analysis at $t_{fin}$ when $d$ events were observed. In other words, we let $\nu_1=0.6$ and $\nu_2=1$.
Note that $d$ is the total number of events we need, and it is predicted once the stopping times are predicted. In particular, the stopping times ($t_{int}$ and $t_{fin}$) can be decided by solving $D^*_{H}(\widetilde{t}_{int})=0.6D_{H_1}^*(\tau)$ and $D^*_{H}(\widetilde{t}_{fin})=D_{H_1}^*(\tau)$, with $H=H_0$ when the null hypothesis is true and $H=H_1$ otherwise. More generally, when monitoring any WLRT with respect to its information fraction, one would predict the stopping times by solving $\widetilde{V}_{sto,H}(G_{\rho,\gamma}(\widetilde{t}_{int}))=0.6\widetilde{V}_{sto,H_1}(G_{\rho,\gamma}(\tau))$ and $\widetilde{V}_{sto,H}(G_{\rho,\gamma}(\widetilde{t}_{fin}))=\widetilde{V}_{sto,H_1}(G_{\rho,\gamma}(\tau))$. Note that the two sets of stopping times can differ under the two hypotheses, and consequently the predicted correlation matrices ($\widetilde{\bm \Sigma}_0$ and $\widetilde{\bm \Sigma}_1$) differ as well. The mean of the test statistics is $\bm0_4$ under $H_0$, but $[\widetilde{\bm\mu}(\widetilde{t}_{int})^\prime,\widetilde{\bm\mu}(\widetilde{t}_{fin})^\prime]^\prime$ under $H_1$. We obtained the predicted boundaries $\widetilde{\bm g}$ and sample sizes $d$ or $n$ based on the predicted mean and correlation matrices. We can also predict the stopping times, and subsequently the correlation matrices, using the exact-prediction method given in Appendix~B. Alternatively, the boundaries $\widehat{g}(t)$ can be updated from the data by calculating the estimated correlation matrix $\widehat{\bm\Sigma}_0$ following (\ref{equ:var_est}) and (\ref{equ:cov}), which is expected to be more accurate than the prediction methods in the presence of violations of the distributional assumptions on the survival, censoring, and accrual processes. We used the R package mvtnorm \cite{mvtnorm} for the boundary calculation. Since its computation is seed-dependent, we generated 5 replicates each time and kept their median as the output value.
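Solving $D^*_{H}(\widetilde t_{int})=0.6\,D^*_{H_1}(\tau)$ for the interim time is a one-dimensional root-finding problem, since $D^*$ is nondecreasing in $t$. The sketch below is illustrative only (it hard-codes the example settings $\lambda=\log(2)/6$, $\epsilon=2$, $\tau=18$, $R=14$, $p=0.5$ under $H_1$, and all names are ours):

```python
import math

def D_star(t, theta, lam=math.log(2) / 6, eps=2.0, tau=18.0, R=14.0,
           p=0.5, b=30):
    """Predicted per-subject event probability by calendar time t for the
    delayed-effect example, via the Lakatos interval recursion."""
    J = int(b * t)
    R1, R0 = p, 1.0 - p
    D = 0.0
    for j in range(J):
        s = j / b
        h1 = lam if s < eps else theta * lam       # treatment-arm hazard
        D += (lam * R0 + h1 * R1) / b * min(t / R, 1.0)
        cens = 1.0 / (b * (tau - s)) if s > tau - R else 0.0
        R0 *= 1 - lam / b - cens
        R1 *= 1 - h1 / b - cens
    return D

def stopping_time(frac, theta, tau=18.0):
    """Solve D*(t) = frac * D*(tau) for t by bisection (surrogate IF rule)."""
    target = frac * D_star(tau, theta)
    lo, hi = 0.01, tau
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if D_star(mid, theta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_int_pred = stopping_time(0.6, 0.6)   # interim: 60% of the expected events
```

Bisection suffices here because the per-subject event probability increases with calendar time; the returned interim time then satisfies the 60\%-of-events rule up to the discretization granularity $1/b$.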
All the prediction and estimation methods proposed in this report have been implemented in an R package on GitHub (lilywang1988/GSMC). First, we tested various post-delay hazard ratios with only administrative censoring and correctly specified survival functions, consistent with the assumptions in Hasegawa \cite{hasegawa2014sample}. All results are summarized in Table~\ref{tab:default}, where GS-WLRT denotes the group sequential design with the WLRT $\mathcal{G}_{0,1}(t)$, GS-SLRT the group sequential design with the SLRT $\mathcal{G}_{0,0}(t)$, and GS-MC the group sequential design with a maxcombo test of $\mathcal{G}_{0,1}(t)$ and $\mathcal{G}_{0,0}(t)$. Since all the tests were in group sequential designs, we drop the prefix ``GS'' in the summary tables. For GS-MC, we consider a naive method with boundaries identical to those of the two univariate tests GS-WLRT and GS-SLRT, which are stopped according to the observed events (namely, the surrogate information fraction of the SLRT according to Hasegawa\cite{hasegawa2016group}). To produce fair comparisons, we computed sample sizes for GS-MC using the proposed prediction methods (the stochastic and exact methods), whose results turned out to be identical. The boundaries for the other three GS-MC methods were computed from the prediction (stochastic or exact) and the estimation approaches. As expected, in the presence of a delayed treatment effect, GS-SLRT provided a controlled type I error under the correct boundary specification under $H_0$, but its power was much lower than the rest because it did not accommodate the delayed treatment effect well.
On the other hand, GS-WLRT provided a much higher power than GS-SLRT, but its type I error ($0.0259-0.0268$) was clearly above the nominal $0.025$. This is because the surrogate information fraction did not reflect the true correlation between the two stopping times for the WLRT; in other words, the boundaries for GS-SLRT were not appropriate for GS-WLRT. For this reason, Hasegawa \cite{hasegawa2016group} suggested monitoring the true information fraction of the WLRT instead of the surrogate information fraction. The naive GS-MC appeared to enjoy a higher power than the WLRT, but its type I error ($\approx0.04$) was also not controlled under the nominal $0.025$. This is another example in which the event count ratio (surrogate information fraction) did not reflect the correlation between the two maxcombo tests at the interim and final stages. There did not seem to be much difference in performance among the proposed approaches, though the estimation approach had slightly better power than the two prediction methods, with the type I error controlled similarly. The stochastic prediction does not limit its survival function to piece-wise exponential distributions. All three proposed approaches performed much better than the naive method in controlling the type I error. As the post-delay separation increased (down to $\theta=0.5$), the type I error rose slightly above $0.025$ and the power decreased.
There are two main explanations: 1) the accrual sample size decreased from $n=927$ ($d=597$) to $n=274$ ($d=166$) as the post-delay hazard ratio decreased from $\theta=0.7$ to $\theta=0.5$, so the tails of the distribution of the test statistic became heavier; 2) the stronger treatment effect caused a more serious violation of the independent increment assumption and thus damaged the approximation accuracy. \begin{table}\centering \caption{The rejection probabilities under the null hypothesis $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) when \emph{censoring, accrual, and survival functions are correctly specified}. The prefix ``GS'' (``group-sequential'') is omitted here for simplicity. Sample sizes were decided according to GS-MC, and both prediction approaches provided identical sample sizes. Note that among the proposed GS-MC methods, the predicted powers at the interim stage are $0.3692$, $0.3630$, and $0.3543$ for $\theta\in\{0.7,0.6,0.5\}$, respectively, and all equal $0.9$ with the two stages combined.
} \label{tab:default} \begin{tabular}{lllcclcclccl}\hline \phantom{}&{}&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$} &\phantom{}\\ [-1pt] \cline{4-5} \cline{7-8} \cline{10-11}\\[-10pt] &Test&Stage&$H_0$&$H_1$& &$H_0$&$H_1$& & $H_0$&$H_1$& \\ \hline &WLRT& combined &0.0260&0.9143& &0.0262&0.9140& &0.0269&0.9137&\\ & & interim &0.0051&0.4082& &0.0052&0.4039& &0.0055&0.3983&\\ &SLRT& combined &0.0251&0.8272& &0.0252&0.8206& &0.0246&0.8103&\\ & & interim &0.0051&0.2487& &0.0049&0.2355& &0.0050&0.2283&\\ &MC (naive)&combined &0.0384&0.9279& &0.0388&0.9273& &0.0390&0.9258&\\ & & interim &0.0082&0.4346& &0.0082&0.4282& &0.0085&0.4213&\\ &MC (pred-sto)&combined&0.0251&0.8970& &0.0253&0.8962& &0.0255&0.8948&\\ & &interim &0.0050&0.3691& &0.0051&0.3620& &0.0054&0.3579&\\ &MC (pred-exa)& combined &0.0252&0.8972& &0.0253&0.8963& &0.0255&0.8950&\\ & & interim &0.0050&0.3693& &0.0051&0.3621& &0.0054&0.3582&\\ &MC (est)& combined &0.0252&0.8972& &0.0253&0.8966& &0.0255&0.8952&\\ & & interim &0.0050&0.3693& &0.0051&0.3622& &0.0054&0.3581&\\ &size& &n= 927&d=597& &n=475&d=297&&n=274&d=166&\\ \hline \end{tabular} \vspace{5pt} \end{table} Then we tested special cases in which some of the assumptions (uniform accrual, administrative censoring, correctly specified survival hazards, and delay time) are violated. For violation I, the accrual is no longer uniform at a monthly rate of $n/14$ subjects; instead, the monthly accrual is $n/70$, $2n/70$, $3n/70$, and $4n/70$ for the first 4 months, and then $6n/70$ for the remaining 10 months. In the presence of violation I, $n$ subjects are still enrolled by the end of the accrual period $R=14$, but the accrual rate increases over the first four months before stabilizing at a constant rate. For violation II, censoring is no longer limited to shared administrative censoring but can differ between treatment arms.
In particular, we generated censoring times following exponential distributions with a yearly censoring proportion of $20\%$ for the treatment group and $10\%$ for the control group. For violation III, the true median survival time of the control group is 12 months, rather than the presumed 6 months. For violation IV, the separation occurs at 6 months after enrollment, instead of the predefined 2 months. The results under various post-delay treatment hazard ratios with violations I and II (I\&II) are summarized in Table~\ref{tab:violation1}, and those with an additional violation III (I\&II\&III) or violation IV (I\&II\&IV) in Web Tables~1-2. Their corresponding correlation matrices can be found in Web Tables~3-6, with the subset of results for $\theta=0.6$ in Table~\ref{tab:corr}. According to Table~\ref{tab:violation1}, all three proposed approaches were quite robust to misspecifications of the accrual process and the censoring mechanism (violations I\&II). The average stopping times ($t_{int}$, $t_{fin}$) were larger than those in Table~\ref{tab:default}, in order to collect enough events (information) in the face of the additional censoring. The detection power was even better, possibly because the longer waiting time before ending the study allowed more events to be collected after the start of the separation. Similar to the results without any violations in Table~\ref{tab:default}, the estimation approach had slightly better power than the two prediction methods, with the type I errors controlled similarly. Note that since subjects were enrolled more slowly at the early stage than under the uniform enrollment in Table~\ref{tab:default}, the power was generally lower at the interim stage. Further additional violations are considered in Web Tables~1-2.
When there was an additional violation in which the event hazard $\lambda=\log(2)/12$ was wrongly specified as $\lambda=\log(2)/6$ (I\&II\&III), the waiting time before observing enough events was longer than without violation III in Tables~\ref{tab:default}-\ref{tab:violation1}, thus producing a higher power for delayed separation according to Web Table~5. When the delayed effect time $\epsilon=6$ was misspecified as $\epsilon=2$ in addition to the violations in censoring and accrual (I\&II\&IV), the power became much lower than the expected value according to Web Table~6, since the required sample size to achieve the nominal power was largely underestimated. Among all the combinations of violations we tested, the type I error was not affected much, while the power was clearly affected, depending on the degree of violation of the model assumptions. To scrutinize whether the correlation matrices were accurately approximated and how they were affected by the violations, we compared correlations computed using the prediction and estimation approaches with or without violations I\&II for $\theta=0.6$ in Table~\ref{tab:corr}, and all other cases in Web Tables~3-6. The mean correlations of the simulated datasets were treated as the gold standard. When there was no assumption violation, none of the correlations differed by more than $5\%$ from the gold standard (Table~\ref{tab:corr} and Web Table~3). Only the predicted $\widetilde{cor}(\mathcal{G}_{0,1}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ and $\widetilde{cor}(\mathcal{G}_{0,0}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ were more than $5\%$ different from the gold-standard correlations under $H_1$ (Table~\ref{tab:corr} and Web Table~4), possibly because the violations in the censoring and accrual mechanisms affect the prediction of the stopping times, so that the predicted correlations of $G_{0,1}(t)$ do not correctly reflect the true correlations between the stopping times.
Note that $cor(\mathcal{G}_{0,0}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ is approximated by the product of $cor(\mathcal{G}_{0,1}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ and $cor(\mathcal{G}_{0,0}(t_{int}),\mathcal{G}_{0,1}(t_{int}))$ following (\ref{equ:equal}). In the presence of violations I\&II\&III, the predicted correlations for $cor(\mathcal{G}_{0,1}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ and $cor(\mathcal{G}_{0,0}(t_{int}),\mathcal{G}_{0,1}(t_{fin}))$ suffered from severe bias, differing by more than $25\%$ from the gold-standard values. Misspecification of the delayed time, on the contrary, did not impact the correlation matrices much. The estimation method provides the most accurate correlation approximations, and the correlations predicted via the stochastic method are slightly more accurate than those via the exact prediction method. The type I error, however, did not seem to be affected much by the correlation matrices. We checked the boundaries predicted and estimated under different violation combinations and found that their changes were extremely small ($<0.003$ in magnitude), implying that slight bias in the correlations did not exert considerable influence on the boundary calculation under $H_0$. \begin{table}\centering \caption{The rejection probabilities under the null hypothesis $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) \emph{when censoring and accrual are misspecified}. The prefix ``GS'' (``group-sequential'') is omitted here for simplicity.
} \label{tab:violation1} \begin{tabular}{lllcclcclccl}\hline \phantom{}&{}&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$} &\phantom{}\\ [-1pt] \cline{4-5} \cline{7-8} \cline{10-11}\\[-10pt] &Test&Stage&$H_0$&$H_1$& &$H_0$&$H_1$& & $H_0$&$H_1$& \\ \hline &WLRT& combined &0.0259&0.9172& &0.0261&0.9159& &0.0268&0.9135&\\ & & interim &0.0052&0.3747& &0.0053&0.3663& &0.0052&0.3569&\\ &SLRT& combined &0.0248&0.8325& &0.0248&0.8224& &0.0250&0.8128&\\ & & interim &0.0053&0.2156& &0.0051&0.2002& &0.0051&0.1917&\\ &MC (naive)&combined &0.0379&0.9293& &0.0383&0.9279& &0.0391&0.9252&\\ & & interim &0.0085&0.3996& &0.0084&0.3883& &0.0084&0.3782&\\ &MC (pred-sto)&combined &0.0246&0.9010& &0.0247&0.8976& &0.0256&0.8953&\\ & &interim &0.0052&0.3339& &0.0052&0.3255& &0.0051&0.3145&\\ &MC (pred-exa)& combined &0.0246&0.9012& &0.0248&0.8976& &0.0256&0.8954&\\ & & interim &0.0053&0.3342& &0.0052&0.3258& &0.0051&0.3147&\\ &MC (est)& combined &0.0246&0.9012& &0.0247&0.8979& &0.0256&0.8956&\\ & & interim &0.0052&0.3335& &0.0051&0.3252& &0.0051&0.3141&\\ \hline \end{tabular} \vspace{5pt} \end{table} \begin{table}\centering \caption{Comparison of the correlations computed using different methods: the correlations calculated directly from the simulated samples ($\overline{cor}$), the predicted values using either the stochastic process ($\widetilde{cor}_{sto}$) or the exact distribution ($\widetilde{cor}_{exa}$), and the data-driven estimation ($\widehat{cor}$). The sample correlations were treated as the gold standard against which the other methods are compared. Correlations differing from the gold-standard mean by $5-10\%$ are shown in \textit{italics}, and those differing by more than $10\%$ in \textbf{bold}.
} \label{tab:corr} \begin{tabular}{lllcclccl}\hline \phantom{}&{}&\phantom{}&\multicolumn{2}{c}{ No violation} & &\multicolumn{2}{c}{ Violations I\&II} &\phantom{}\\ [-1pt] \cline{4-5} \cline{7-8}\\[-10pt] &Correlation pair& &$H_0$&$H_1$& &$H_0$&$H_1$& \\ \hline &$\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.8329&0.8348& &0.8261&0.8263&\\ & \ \&$\mathcal{G}_{0,0}(t_{int})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0029&-0.0038& &0.0040&0.0047&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0009&-0.0015& &0.0059&0.0070&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0017&-0.0027& &-0.0007&0.0006&\\ &$\mathcal{G}_{0,1}(t_{fin})$&$\overline{cor} $&0.8452&0.8516& &0.8482&0.8529&\\ & \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &0.0107&0.0145& &0.0176&0.0230&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0120&0.0162& &0.0188&0.0247&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0010&-0.0018& &-0.0008&-0.0014&\\ &$\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.7766&0.7791& &0.7763&0.7797&\\ & \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0037&-0.0033& &-0.0035&-0.0039&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0046&-0.0016& &-0.0043&-0.0021&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0020&-0.0045& &-0.0017&-0.0051&\\ &$\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6457&0.6267& &0.6158&0.5884&\\ & \ $\&\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0068&-0.0074& &0.0232&\textit{0.0309}&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0068&-0.0052& &0.0232&\textit{0.0331}&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0012&-0.0077& &0.0000&-0.0082&\\ &$\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6475&0.6493& &0.6414&0.6454&\\ & \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0060&-0.0046& &0.0002&-0.0008&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0052&-0.0014& &0.0009&0.0024&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0037&-0.0047& 
&-0.0020&-0.0049&\\ &$\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.5375&0.5262& &0.5093&0.4887&\\ & \ $\&\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0071&-0.0115& &0.0211&\textit{0.0260}&\\ & & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0059&-0.0083& &0.0223&\textit{0.0292}&\\ & & $\widehat{cor}-\overline{cor} $ &-0.0018&-0.0111& &-0.0011&-0.0089&\\ \hline \end{tabular} \vspace{5pt} \end{table} To demonstrate the advantage of using the maxcombo test in group sequential designs, we computed the required sample sizes following (\ref{equ:beta2}) and compared them with those of the single tests in the combo. The results are presented in Figure~\ref{fig:2} in terms of patient numbers ($n$) and event counts ($d$). The settings were consistent with those of the simulations for Table~\ref{tab:default}: no assumption violation, with the post-delay hazard ratio fixed at $\theta=0.6$ and the delayed time $\epsilon$ varying from $0$ to $3.5$. We plotted three curves in Figure~\ref{fig:2}: the GS-MC test with $\mathcal{G}_{max}(t)=\max(\mathcal{G}_{0,0}(t),\mathcal{G}_{0,1}(t))$, GS-WLRT ($\mathcal{G}_{0,1}(t)$), and GS-SLRT ($\mathcal{G}_{0,0}(t)$). According to Figure~\ref{fig:2}, when the delayed time is close to 0 ($\epsilon<0.75$), $\mathcal{G}_{0,0}(t)$ requires the smallest sample size while $\mathcal{G}_{0,1}(t)$ requires the largest, consistent with the case in which proportional hazards dominate. When the waiting time before the treatment effect is long ($\epsilon>1.15$), $\mathcal{G}_{0,1}(t)$ becomes more powerful and thus requires a smaller sample size. Interestingly, in the intermediate regime $\epsilon\in[0.75,1.15]$, the maxcombo test requires smaller sample sizes than the other two. This implies that the sample size required by a maxcombo test is always close to that of the most powerful test in the combo. We therefore conclude that employing a maxcombo test in a group sequential design tends to reduce the sample size and greatly improves the robustness of the testing. 
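As a quick numerical illustration of the correlation factorization behind (\ref{equ:equal}), one can plug the sample (gold-standard) correlations from the no-violation $H_0$ column of Table~\ref{tab:corr} into the product; a minimal sketch, with the numbers taken directly from that table:

```python
# Check that cor(G00(t_int), G01(t_fin)) is well approximated by the product
# cor(G01(t_int), G01(t_fin)) * cor(G00(t_int), G01(t_int)),
# using the sample (gold-standard) correlations under no violation, H0.
cor_g01int_g00int = 0.8329  # cor(G_{0,1}(t_int), G_{0,0}(t_int))
cor_g01int_g01fin = 0.6457  # cor(G_{0,1}(t_int), G_{0,1}(t_fin))
cor_g00int_g01fin = 0.5375  # cor(G_{0,0}(t_int), G_{0,1}(t_fin)), observed

approx = cor_g01int_g01fin * cor_g01int_g00int
print(f"product approximation: {approx:.4f}, observed: {cor_g00int_g01fin}")
```

Here the product reproduces the observed cross-stage correlation to within about $0.0003$, consistent with the small $\widehat{cor}-\overline{cor}$ differences reported in the table.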
\begin{figure} \centerline{ \includegraphics[width=0.5\columnwidth]{size_plot1_new.pdf} \includegraphics[width=0.5\columnwidth]{size_plot2_new.pdf} } \caption{Sample sizes needed to achieve the required power $\beta=0.9$ for the group sequential design in the presence of different delayed effect times ($\epsilon$). The required numbers of subjects ($n$) and events ($d$) are plotted against the delayed effect times $\epsilon$ in $[0,3.5]$. } \label{fig:2} \end{figure} \section{Discussion} \label{s:discuss} In this report, we proposed a general framework for a group sequential design with maxcombo tests, or GS-MC for short. The proposed design is completely simulation-free, effectively controls the type I error, and finds the sample size required to achieve the nominal power under $H_1$. We developed two prediction methods based on the assumed distributions under the two hypotheses ($H_0$ and $H_1$), and one data-driven estimation method. Our simulation studies demonstrated that the proposed approaches are strongly robust to various violations of the assumptions on the survival, censoring, and accrual processes. Note that among the prediction methods, the stochastic approach can adapt to various survival functions. One can also extend the use of the exact prediction by repeating the derivations of the formulas in Appendix~B. Most often, the stochastic and exact prediction methods exhibit similar performance (depending on the number of intervals $b$), though the former is usually more flexible but also more computationally expensive than the latter. While preparing this manuscript, we noticed that another group had developed a correlation matrix approximation method that shares similar features with our proposal \cite{roychoudhury2019robust}. 
Their proposed method directly calculates all the correlations using the independent-increment properties, whereas ours computes the third type of correlation from the first two. It can be shown that the two methods are numerically equivalent for the correlation matrix computation under $H_0$. Another major difference is that their approach is not entirely simulation-free, since their sample size calculation requires simulation. We have implemented all the functions used in the proposed design in an R package on GitHub (lilywang1988/GSMC). Note that the proposed design can accommodate a regular maxcombo test without interim analysis by setting $M=1$. Although in this report we restrict our scope to group sequential designs with sample sizes decided in advance, the proposed approach can be extended to adaptive designs, whereby sample sizes are adjusted based on the observed data. \section*{Supporting information} The following supporting information is available as part of the online article: \noindent \textbf{Web Table~1} {The rejection probabilities under the null hypothesis denoted by $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) when the censoring, accrual and event hazards are misspecified. } \noindent \textbf{Web Table~2} {The rejection probabilities under the null hypothesis denoted by $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) when the censoring, accrual and delayed time are misspecified.} \noindent \textbf{Web Table~3} {Comparison of the correlations computed using different methods \emph{without} violation in accrual (I), censoring (II), survival functions (III) or delayed time (IV).} \noindent \textbf{Web Table~4} {Comparison of the correlations computed using different methods \emph{with} violations I (accrual) and II (censoring). } \noindent \textbf{Web Table~5} {Comparison of the correlations computed using different methods \emph{with} violations I (accrual), II (censoring), and III (event rate). } \noindent \textbf{Web Table~6} {Comparison of the correlations computed using different methods \emph{with} violations I (accrual), II (censoring), and IV (delayed time). } \section{Web Appendix A: violation of event hazards} \label{A1} In addition to the violations of both the censoring and accrual procedures (Table~2), we also let the event hazard $\lambda$ be misspecified: the actual median survival time is 12 months instead of 6 months, so that the hazard is $\lambda=\log(2)/12$. According to Web Table~\ref{tab:violation2}, the type I error did not change, while the power was much larger than $0.9$. A plausible explanation is that the stopping times were later than those with $\lambda=\log(2)/6$, so more events were observed during the late departure period, yielding power higher than the nominal $0.9$. 
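For the exponential survival model assumed here, the median $m$ and the hazard $\lambda$ are linked by $\lambda=\log(2)/m$; a minimal sketch, using the two hazards of this appendix, of how the lower actual hazard slows event accumulation:

```python
import math

# Exponential survival: the median m satisfies exp(-lam * m) = 1/2, i.e. lam = log(2)/m.
lam_assumed = math.log(2) / 6    # assumed median survival of 6 months
lam_actual  = math.log(2) / 12   # misspecified case: actual median of 12 months

def event_prob(lam, t):
    """P(event by time t) under an exponential hazard lam."""
    return 1.0 - math.exp(-lam * t)

# By month 24, the lower (actual) hazard has produced far fewer events:
# 1 - 2^{-4} = 0.9375 under the assumed hazard vs 1 - 2^{-2} = 0.75 under
# the actual one, so the target event count d is reached later.
print(event_prob(lam_assumed, 24))
print(event_prob(lam_actual, 24))
```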
In contrast, when the hazard is higher than $\lambda=\log(2)/6$, one would expect the power to be less than the nominal $0.9$. \begin{table}[h!] \begin{minipage}{160mm}\centering \caption{The rejection probabilities under the null hypothesis denoted by $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) \emph{when the censoring, accrual and event hazards are misspecified}. The prefix ``GS'' (group-sequential) is omitted here for simplicity. } \label{tab:violation2} \begin{tabular}{lllcclcclccl}\hline\hline \phantom{}&{}&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$} &\phantom{}\\ [-1pt] \cline{4-5} \cline{7-8} \cline{10-11}\\[-10pt] &Test&Stage&$H_0$&$H_1$& &$H_0$&$H_1$& & $H_0$&$H_1$& \\ \hline &WLRT& combined &0.0264&0.9536& &0.0263&0.9555& &0.0261&0.9575&\\ & & interim &0.0051&0.5108& &0.0052&0.5172& &0.0048&0.5138&\\ &SLRT& combined &0.0250&0.9449& &0.0245&0.9438& &0.0243&0.9434&\\ & & interim &0.0051&0.4331& &0.0051&0.4265& &0.0049&0.4164&\\ &MC (naive)&combined &0.0380&0.9703& &0.0376&0.9707& &0.0376&0.9723&\\ & & interim &0.0083&0.5675& &0.0083&0.5695& &0.0079&0.5644&\\ &MC (pred-sto)&combined &0.0245&0.9551& &0.0242&0.9561& &0.0244&0.9573&\\ & &interim &0.0050&0.4992& &0.0052&0.5015& &0.0047&0.4956&\\ &MC (pred-exa)& combined &0.0246&0.9552& &0.0243&0.9562& &0.0244&0.9573&\\ & & interim &0.0051&0.4995& &0.0052&0.5017& &0.0047&0.4959&\\ &MC (est)& combined &0.0245&0.9552& &0.0243&0.9561& &0.0244&0.9573&\\ & & interim &0.0051&0.5007& &0.0052&0.5029& &0.0047&0.4969&\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \section{Web Appendix B: violation of delayed time} \label{A2} In addition to the violations of both the censoring and accrual procedures (as in Table~2), we also let the delayed time $\epsilon$ be misspecified, i.e., the actual change time is $\epsilon=6$ rather than $\epsilon=2$. 
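To make the role of $\epsilon$ concrete, a minimal sketch of the piecewise-exponential treatment-arm survival curve under the delayed-effect model assumed in the main text (control hazard $\lambda=\log(2)/6$ on both arms until $\epsilon$, then $\theta\lambda$ with $\theta=0.6$ on the treatment arm), comparing the two delay values above:

```python
import math

lam, theta = math.log(2) / 6, 0.6   # control hazard and post-delay hazard ratio

def surv_control(t):
    return math.exp(-lam * t)

def surv_treat(t, eps):
    """Treatment-arm survival: hazard lam before the delay eps, theta*lam after."""
    if t < eps:
        return math.exp(-lam * t)
    return math.exp(-lam * eps - theta * lam * (t - eps))

# The two arms coincide up to eps; the longer the true delay, the later the
# curves separate, which is why assuming eps=2 when eps=6 actually holds
# overstates the treatment effect and undersizes the study.
for eps in (2, 6):
    print(eps, surv_treat(12, eps) - surv_control(12))
```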
According to Web Table~\ref{tab:violation3}, the misspecifications did not affect the type I error computation, but largely reduced the power: the treatment effect was assumed to start much earlier than it actually did, so the sample size was underestimated. \begin{table}[h!] \begin{minipage}{160mm}\centering \caption{The rejection probabilities under the null hypothesis denoted by $H_0$ (type I error) and under the alternative hypothesis $H_1$ (power) \emph{when the censoring, accrual and delayed time are misspecified}. The prefix ``GS'' (group-sequential) is omitted here for simplicity. } \label{tab:violation3} \begin{tabular}{lllcclcclccl}\hline\hline \phantom{}&{}&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$} &\phantom{}\\ [-1pt] \cline{4-5} \cline{7-8} \cline{10-11}\\[-10pt] &Test&Stage&$H_0$&$H_1$& &$H_0$&$H_1$& & $H_0$&$H_1$& \\ \hline &WLRT& combined &0.0259&0.3805& &0.0261&0.3459& &0.0268&0.3136&\\ & & interim &0.0052&0.0279& &0.0053&0.0270& &0.0052&0.0230&\\ &SLRT& combined &0.0248&0.1834& &0.0248&0.1634& &0.0250&0.1490&\\ & & interim &0.0053&0.0136& &0.0051&0.0129& &0.0051&0.0122&\\ &MC (naive)&combined &0.0379&0.3923& &0.0383&0.3581& &0.0391&0.3261&\\ & & interim &0.0085&0.0330& &0.0084&0.0317& &0.0084&0.0276&\\ &MC (pred-sto)&combined &0.0246&0.3215& &0.0247&0.2922& &0.0256&0.2627&\\ & &interim &0.0052&0.0218& &0.0052&0.0212& &0.0051&0.0183&\\ &MC (pred-exa)& combined &0.0246&0.3219& &0.0248&0.2926& &0.0256&0.2629&\\ & & interim &0.0053&0.0219& &0.0052&0.0212& &0.0051&0.0184&\\ &MC (est)& combined &0.0246&0.3224& &0.0247&0.2931& &0.0256&0.2635&\\ & & interim &0.0052&0.0218& &0.0051&0.0212& &0.0051&0.0183&\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \section{Web Appendix C: correlation matrices} \begin{table} \begin{minipage}{160mm}\centering \caption{Comparison of the correlations computed using different methods: 
the correlations calculated directly from the simulated samples ($\overline{cor}$), the predicted values using either stochastic process ($\widetilde{cor}_{sto}$) or exact distribution ($\widetilde{cor}_{exa}$), and the data-driven estimation ($\widehat{cor}$). The sample correlations were treated as the gold standard for comparison of other methods. In comparison with the gold-standard mean, correlations with difference $5-10\%$ were made \textit{italic}, and $>10\%$ were made \textbf{bold}. There are \emph{not any violations} in accrual (I), censoring (II), survival functions (III) or delayed time (IV).} \label{tab:corr_all} \begin{tabular}{clcclcclcc}\hline\hline {Correlation }&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$}\\ [-1pt]\cline{3-4} \cline{6-7} \cline{9-10}\\[-10pt] Pair& &$H_0$&$H_1$& &$H_0$&$H_1$& &$H_0$&$H_1$ \\ \hline $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.8327&0.8329& &0.8329&0.8348& &0.8310 &0.8351\\ \ \&$\mathcal{G}_{0,0}(t_{int})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0024&-0.0021& &-0.0029&-0.0038& &-0.0012&-0.0038\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0004&0.0001& &-0.0009&-0.0015& &0.0008&-0.0014\\ & $\widehat{cor}-\overline{cor} $ &-0.0009&-0.0005& &-0.0017&-0.0027& &-0.0008&-0.0033\\ $\mathcal{G}_{0,1}(t_{fin})$&$\overline{cor} $&0.8469&0.8521& &0.8452&0.8516& &0.8417&0.8511\\ \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &0.0129&0.0167& &0.0107&0.0145& &0.0100&0.0139\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0142&0.0182& &0.0120&0.0162& &0.0114&0.0159\\ & $\widehat{cor}-\overline{cor} $ &-0.0001&-0.0015& &-0.0010&-0.0018& &0.0002&-0.0019\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.7754&0.7776& &0.7766&0.7791& &0.7751&0.7807\\ \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0020&-0.0036& &-0.0037&-0.0033& &0.0008&-0.0021\\ & $\widetilde{cor}_{exa}-\overline{cor} 
$ &0.0025&-0.0008& &-0.0046&-0.0016& &0.0000&-0.0067\\ & $\widehat{cor}-\overline{cor} $ &-0.0008&-0.0030& &-0.0020&-0.0045& &-0.0005&-0.0061\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6399&0.6213& &0.6457&0.6267& &0.6459&0.6243\\ \ $\&\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0040&-0.0032& &-0.0068&-0.0074& &0.0015&-0.0019\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0034&0.0011& &-0.0068&-0.0052& &0.0015&-0.0091\\ & $\widehat{cor}-\overline{cor} $ &-0.0007&-0.0013& &-0.0012&-0.0077& &0.0014&-0.0085\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6453&0.6444& &0.6475&0.6493& &0.6440&0.6473\\ \ $\&\mathcal{G}_{0,0}(t_{fin})$ & $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0031&-0.0015& &-0.0060&-0.0046& &-0.0002&-0.0001\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0022&0.0025& &-0.0052&-0.0014& &0.0006&-0.0021\\ & $\widehat{cor}-\overline{cor} $ &-0.0009&0.0003& &-0.0037&-0.0047& &-0.0009&-0.0030\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.5325&0.5228& &0.5375&0.5262& &0.5364&0.5273\\ \ $\&\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0046&-0.0093& &-0.0071&-0.0115& &0.0009&-0.0098\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0029&-0.0044& &-0.0059&-0.0083& &0.0021&-0.0144\\ & $\widehat{cor}-\overline{cor} $ &-0.0008&-0.0068& &-0.0018&-0.0111& &0.0010&-0.0151\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \begin{table} \begin{minipage}{160mm}\centering \caption{Comparison of the correlations computed using different methods: the correlations calculated directly from the simulated samples ($\overline{cor}$), the predicted values using either stochastic process ($\widetilde{cor}_{sto}$) or exact distribution ($\widetilde{cor}_{exa}$), and the data-driven estimation ($\widehat{cor}$). The sample correlations were treated as the gold standard for comparison of other methods. 
In comparison with the gold-standard mean, correlations with difference $5-10\%$ were made \textit{italic}, and $>10\%$ were made \textbf{bold}. The simulations in this table include \emph{violations I (accrual) and II (censoring)}. } \label{tab:corr_all1} \begin{tabular}{clcclcclcc}\hline\hline {Correlation }&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$}\\ [-1pt]\cline{3-4} \cline{6-7} \cline{9-10}\\[-10pt] Pair& &$H_0$&$H_1$& &$H_0$&$H_1$& &$H_0$&$H_1$ \\ \hline $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.8267&0.8304& &0.8261&0.8263& &0.8248&0.8295\\ \ \&$\mathcal{G}_{0,0}(t_{int})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0036&0.0004& &0.0040&0.0047& &0.0051&0.0018\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0056&0.0026& &0.0059&0.0070& &0.0070&0.0042\\ & $\widehat{cor}-\overline{cor} $ &-0.0009&-0.0036& &-0.0007&0.0006& &-0.0001&-0.0024\\ $\mathcal{G}_{0,1}(t_{fin})$&$\overline{cor} $&0.8508&0.8539& &0.8482&0.8529& &0.8452&0.8522\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0189&0.0192& &0.0176&0.0230& &0.0163&0.0196\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0201&0.0207& &0.0188&0.0247& &0.0176&0.0215\\ & $\widehat{cor}-\overline{cor} $ &-0.0010&-0.0015& &-0.0008&-0.0014& &-0.0002&-0.0016\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.7772&0.7789& &0.7763&0.7797& &0.7757&0.7824\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0039&-0.0050& &-0.0035&-0.0039& &0.0002&-0.0038\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0006&-0.0022& &-0.0043&-0.0021& &-0.0007&-0.0085\\ & $\widehat{cor}-\overline{cor} $ &-0.0027&-0.0043& &-0.0017&-0.0051& &-0.0011&-0.0078\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6099&0.5826& &0.6158&0.5884& &0.6219&0.5864\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0260&\textit{0.0355}& &0.0232&\textit{0.0309}& &0.0255&\textit{0.0361}\\ 
& $\widetilde{cor}_{exa}-\overline{cor} $ &\textit{0.0334}&\textit{0.0398}& &0.0232&\textit{0.0331}& &0.0255&0.0288\\ & $\widehat{cor}-\overline{cor} $ &-0.0029&-0.0015& &0.0000&-0.0082& &0.0005&-0.0092\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6436&0.6456& &0.6414&0.6454& &0.6413&0.6472\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0014&-0.0026& &0.0002&-0.0008& &0.0025&0.0001\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0039&0.0014& &0.0009&0.0024& &0.0034&-0.0020\\ & $\widehat{cor}-\overline{cor} $ &-0.0038&-0.0052& &-0.0020&-0.0049& &-0.0025&-0.0065\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.5044&0.4862& &0.5093&0.4887& &0.5117&0.4912\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0236&\textit{0.0273}& &0.0211&\textit{0.0260}& &0.0255&\textit{0.0263}\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &\textit{0.0311}&\textit{0.0322}& &0.0223&\textit{0.0292}& &\textit{0.0268}&0.0217\\ & $\widehat{cor}-\overline{cor} $ &-0.0031&-0.0059& &-0.0011&-0.0089& &0.0015&-0.0138\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \begin{table} \begin{minipage}{160mm}\centering \caption{Comparison of the correlations computed using different methods: the correlations calculated directly from the simulated samples ($\overline{cor}$), the predicted values using either stochastic process ($\widetilde{cor}_{sto}$) or exact distribution ($\widetilde{cor}_{exa}$), and the data-driven estimation ($\widehat{cor}$). The sample correlations were treated as the gold standard for comparison of other methods. In comparison with the gold-standard mean, correlations with difference $5-10\%$ were made \textit{italic}, and $>10\%$ were made \textbf{bold}. The simulations in this table include \emph{violations I (accrual), II (censoring), and III(event rate)}. 
} \label{tab:corr_all2} \begin{tabular}{clcclcclcc}\hline\hline {Correlation }&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$}\\ [-1pt]\cline{3-4} \cline{6-7} \cline{9-10}\\[-10pt] Pair& &$H_0$&$H_1$& &$H_0$&$H_1$& &$H_0$&$H_1$ \\ \hline $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.8361&0.8429& &0.8343&0.8409& &0.8306&0.8372\\ \ \&$\mathcal{G}_{0,0}(t_{int})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0057&-0.0121& &-0.0042&-0.0100& &-0.0007&-0.0059\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0037&-0.0099& &-0.0023&-0.0077& &0.0012&-0.0035\\ & $\widehat{cor}-\overline{cor} $ &-0.0004&-0.0022& &-0.0008&-0.0004& &0.0004&0.0029\\ $\mathcal{G}_{0,1}(t_{fin})$&$\overline{cor} $&0.8548&0.8541& &0.8540&0.8536& &0.8522&0.8516\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0096&0.0067& &0.0094&0.0083& &0.0105&0.0119\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0108&0.0082& &0.0106&0.0100& &0.0118&0.0139\\ & $\widehat{cor}-\overline{cor} $ &-0.0007&-0.0009& &-0.0006&-0.0014& &0.0002&-0.0009\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.7771&0.7802& &0.7777&0.7828& &0.7767&0.7859\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0037&-0.0063& &-0.0048&-0.0070& &-0.0008&-0.0073\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0008&-0.0035& &-0.0057&-0.0052& &-0.0016&-0.0119\\ & $\widehat{cor}-\overline{cor} $ &-0.0025&-0.0056& &-0.0031&-0.0082& &-0.0021&-0.0113\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.4978&0.4820& &0.5047&0.4884& &0.5104&0.4906\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &\textbf{0.1381}&\textbf{0.1361}& &\textbf{0.1343}&\textbf{0.1309}& &\textbf{0.1370}&\textbf{0.1319}\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &\textbf{0.1455}&\textbf{0.1404}& &\textbf{0.1343}&\textbf{0.1331}& &\textbf{0.1369}&\textbf{0.1246}\\ & $\widehat{cor}-\overline{cor} $ &-0.0022&-0.0032& 
&-0.0022&-0.0099& &-0.0026&-0.0148\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6494&0.6566& &0.6490&0.6591& &0.6474&0.6603\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0073&-0.0137& &-0.0074&-0.0145& &-0.0035&-0.0130\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &-0.0020&-0.0097& &-0.0067&-0.0112& &-0.0027&-0.0151\\ & $\widehat{cor}-\overline{cor} $ &-0.0021&-0.0054& &-0.0034&-0.0081& &-0.0037&-0.0096\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.4172&0.4094& &0.4212&0.4128& &0.4215&0.4128\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &\textbf{0.1108}&\textbf{0.1041}& &\textbf{0.1092}&\textbf{0.1019}& &\textbf{0.1158}&\textbf{0.1047}\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &\textbf{0.1182}&\textbf{0.1090}& &\textbf{0.1105}&\textbf{0.1051}& &\textbf{0.1170}&\textbf{0.1001}\\ & $\widehat{cor}-\overline{cor} $ &-0.0031&-0.0069& &-0.0024&-0.0106& &0.0005&-0.0132\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \begin{table} \begin{minipage}{160mm}\centering \caption{Comparison of the correlations computed using different methods: the correlations calculated directly from the simulated samples ($\overline{cor}$), the predicted values using either stochastic process ($\widetilde{cor}_{sto}$) or exact distribution ($\widetilde{cor}_{exa}$), and the data-driven estimation ($\widehat{cor}$). The sample correlations were treated as the gold standard for comparison of other methods. In comparison with the gold-standard mean, correlations with difference $5-10\%$ were made \textit{italic}, and $>10\%$ were made \textbf{bold}. The simulations in this table include \emph{violations I (accrual), II (censoring), and IV(delayed time)}. 
} \label{tab:corr_all3} \begin{tabular}{clcclcclcc}\hline\hline {Correlation }&\phantom{}&\multicolumn{2}{c}{ $\theta=0.7$} & &\multicolumn{2}{c}{ $\theta=0.6$} &\phantom{}&\multicolumn{2}{c}{ $\theta=0.5$}\\ [-1pt]\cline{3-4} \cline{6-7} \cline{9-10}\\[-10pt] Pair& &$H_0$&$H_1$& &$H_0$&$H_1$& &$H_0$&$H_1$ \\ \hline $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.8267&0.8288& &0.8261&0.8284& &0.8248&0.8297\\ \ \&$\mathcal{G}_{0,0}(t_{int})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0036&0.0020& &0.0040&0.0025& &0.0051&0.0016\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0056&0.0042& &0.0059&0.0048& &0.0070&0.0040\\ & $\widehat{cor}-\overline{cor} $ &-0.0009&-0.0007& &-0.0007&-0.0002& &-0.0001&-0.0015\\ $\mathcal{G}_{0,1}(t_{fin})$&$\overline{cor} $&0.8508&0.8567& &0.8482&0.8569& &0.8452&0.8579\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0189&0.0208& &0.0176&0.0208& &0.0163&0.0194\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0201&0.0223& &0.0188&0.0225& &0.0176&0.0214\\ & $\widehat{cor}-\overline{cor} $ &-0.0010&-0.0042& &-0.0008&-0.0054& &-0.0002&-0.0074\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.7772&0.7776& &0.7763&0.7774& &0.7757&0.7778\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0039&-0.0037& &-0.0035&-0.0016& &0.0002&0.0007\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0006&-0.0009& &-0.0043&0.0002& &-0.0007&-0.0039\\ & $\widehat{cor}-\overline{cor} $ &-0.0027&-0.0030& &-0.0017&-0.0028& &-0.0011&-0.0032\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6099&0.6076& &0.6158&0.6209& &0.6219&0.6310\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0260&0.0105& &0.0232&-0.0015& &0.0255&-0.0085\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &\textit{0.0334}&0.0148& &0.0232&0.0007& &0.0255&-0.0158\\ & $\widehat{cor}-\overline{cor} $ &-0.0029&-0.0063& &0.0000&-0.0112& &0.0005&-0.0147\\ $\mathcal{G}_{0,1}(t_{int})$&$\overline{cor} $&0.6436&0.6458& 
&0.6414&0.6462& &0.6413&0.6510\\ \ \&$\mathcal{G}_{0,0}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &-0.0014&-0.0028& &0.0002&-0.0016& &0.0025&-0.0038\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &0.0039&0.0012& &0.0009&0.0017& &0.0034&-0.0058\\ & $\widehat{cor}-\overline{cor} $ &-0.0038&-0.0043& &-0.0020&-0.0047& &-0.0025&-0.0095\\ $\mathcal{G}_{0,0}(t_{int})$&$\overline{cor} $&0.5044&0.5070& &0.5093&0.5164& &0.5117&0.5230\\ \ \&$\mathcal{G}_{0,1}(t_{fin})$& $\widetilde{cor}_{sto}-\overline{cor} $ &0.0236&0.0065& &0.0211&-0.0018& &0.0255&-0.0056\\ & $\widetilde{cor}_{exa}-\overline{cor} $ &\textit{0.0311}&0.0114& &0.0223&0.0015& &\textit{0.0268}&-0.0102\\ & $\widehat{cor}-\overline{cor} $ &-0.0031&-0.0090& &-0.0011&-0.0115& &0.0015&-0.0126\\ \hline \end{tabular} \end{minipage} \vspace{5pt} \end{table} \end{document}
\section{Introduction} Generalized anyonic statistics, which interpolate continuously between bosons and fermions, are considered one of the most remarkable breakthroughs of modern physics. In fact, while in three dimensions particles can only be bosons or fermions, in lower dimensionality they can experience exchange properties intermediate between the two standard ones \cite{anyons}. In two spatial dimensions, it is well known that fractional braiding statistics describe the elementary excitations in the quantum Hall effect, motivating a large effort towards their complete understanding. In one dimension, anyonic statistics are described in terms of fields that at different points ($x_1 \neq x_2$) satisfy the commutation relations \begin{eqnarray} \Psi_A^\dag(x_1)\Psi_A^\dag(x_2)&=&e^{i\kappa \pi\epsilon(x_1-x_2)} \Psi_A^\dag(x_2)\Psi_A^\dag(x_1)\label{ancomm}\,,\\ \Psi_A(x_1)\Psi_A^\dag(x_2)&=&e^{-i\kappa \pi\epsilon(x_1-x_2)} \Psi_A^\dag(x_2)\Psi_A(x_1)\,, \nonumber \end{eqnarray} where $\epsilon(z)=-\epsilon(-z)=1$ for $z>0$ and $\epsilon(0)=0$. Here $\kappa$ is called the statistical parameter and equals $0$ for bosons and $1$ for fermions. Other values of $\kappa$ give rise to general anyonic statistics ``interpolating'' between the two familiar ones. After one-dimensional gases found a second life following their experimental realization in cold atomic setups \cite{1dexp}, there have been several proposals to create and manipulate anyons \cite{par}, even in one dimension. A few 1D anyonic models have been introduced and investigated \cite{k-99,bg-06,oae-99,g-06,ssc-07,zw-92,bg-06b,cm-07,pka-07,an-07,lm-99,it-99,kl-05,fibo,g-07,zw-07,o-07,sc-08,pka-08,hzc-08,dc-08,bgk-08,bcm-08}. 
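As a check on the limiting cases of the commutation relations above, the exchange phase $e^{i\kappa\pi\epsilon(x)}$ can be evaluated at the special values of $\kappa$; a minimal numeric sketch:

```python
import cmath

def sign(z):
    # epsilon(z) in the commutation relations: -1, 0, or +1
    return (z > 0) - (z < 0)

def exchange_phase(kappa, x):
    """Phase picked up when two anyonic creation operators at separation x are exchanged."""
    return cmath.exp(1j * kappa * cmath.pi * sign(x))

print(exchange_phase(0.0, 1.0))   # bosons:   +1
print(exchange_phase(1.0, 1.0))   # fermions: -1 (up to floating-point rounding)
print(exchange_phase(0.5, 1.0))   # intermediate kappa = 1/2: +i
```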
In this paper we consider the calculation of the off-diagonal correlation (or one-particle density matrix) for a system of $N$ particles, \begin{equation} g_1^\kappa(x)=\rho_{N}^\kappa(x)= \langle\Psi^{\kappa\dag}_A(x) \Psi^\kappa_A(0)\rangle\,, \end{equation} for models that have a factorizable many-body ground-state wave function of the form \begin{equation} \Phi_\kappa(x_1,\ldots, x_N) = C_N \prod_{k<l} f(e^{2\pi ix_k/L}-e^{2\pi i x_l/L}) B_\kappa(x_k-x_l)\,, \label{gs} \end{equation} with $f(x)$ a generic even function, $C_N$ a normalization constant, and \begin{equation} B_\kappa(x)= \cases{ e^{-i\pi \kappa}& $x<0$, \cr 1 & $x>0$, } \label{1dbraiding2} \end{equation} which completely specifies the statistics. Among these models, the most relevant are the anyonic generalizations of the Tonks-Girardeau gas and of the Calogero-Sutherland model. In this paper we propose a generalization of the replica trick that is able to select the correct anyonic branch in the replicated space. The use of this replica method started in the context of the diagonal correlation of the Calogero-Sutherland model \cite{gk-01}; it was later extended to a gas of impenetrable bosons \cite{g-04,gs-06}, and finally to the off-diagonal correlation of the Calogero-Sutherland model, but only for bosonic and fermionic statistics \cite{agls-06}. It has already been proposed \cite{g-04} that in the replica approach one deals with two functions of the replica number $n$, one for fermions and one for bosons, or better with a single function with two branches, a fermionic and a bosonic one, that coincide only for integer $n$. Here we push this interpretation forward and claim that the replicated correlation has an infinite number of branches, corresponding to the different anyonic statistics. 
The apparent lack of rigor of this approach (which is shared by most replica calculations, starting from the celebrated Parisi mean-field solution of the spin glass \cite{sg}) can be justified a posteriori by the amazingly simple final results, which agree with all the available analytics and with direct numerical calculations. The analytic continuation we propose in this paper allows us to obtain exact asymptotic expansions for large distances that are valid for the off-diagonal correlation at general anyonic parameter $\kappa$. We also exploit the property that the anyonic correlation function satisfies a Painlev\'e V differential equation \cite{sc-08}, which allows a rigorous analytic check of the results obtained through replicas. To further check the replica predictions, we perform a careful numerical comparison with the finite-size off-diagonal correlation obtained via the Toeplitz determinant representation \cite{ssc-07,sc-08}. The manuscript is organized as follows. In Sec. \ref{secTG} we introduce the anyonic Tonks-Girardeau gas and the strategy for the replica calculation. In Sec. \ref{secint}, starting from the example of a simple integral, we show how to select the correct anyonic statistics from the replicas. In Sec. \ref{secrepbos} we review the replica calculation for the bosonic impenetrable gas following Ref. \cite{g-04}. In Sec. \ref{secrepany} we apply the replicas to the anyonic Tonks-Girardeau gas, showing that all the harmonic terms in the large-distance expansion are captured by the saddle-point approximation in the replica space. In Sec. \ref{secbeysp} we calculate the corrections to the saddle-point approximation, both by perturbation theory and via the solution of the Painlev\'e V equation. In Sec. \ref{secdet} we compute numerically the off-diagonal correlation function, showing perfect agreement with the analytic results. In Sec. 
\ref{secCS} we use the replica trick to obtain the correlation function in the fully anyonic Calogero-Sutherland model. Finally, in Sec. \ref{secconcl} we draw our conclusions. \section{The anyonic Tonks-Girardeau gas} \label{secTG} In first-quantization language, the Lieb-Liniger Hamiltonian is \begin{equation} H=-\sum_{i=1}^{N}\frac{\partial^2}{\partial x_{i}^{2}}+ 2c\sum_{1\leq i<j\leq N}\delta(x_i-x_j), \label{ALL} \end{equation} whose $N$-anyon ground-state wave function exhibits the generalized symmetry under the exchange of particles. For $\kappa = 0$ the model reduces to the bosonic Lieb-Liniger gas \cite{LL}, while for $\kappa=1$ to free fermions. The Tonks-Girardeau gas is obtained for $c\to\infty$ and corresponds to impenetrable anyons. In this limit, the ground-state wave function has the form of Eq. (\ref{gs}) with $f(y)=|y|$. This anyonic model has been explicitly studied in Refs. \cite{k-99,g-06,bg-06,ssc-07,cm-07,pka-07,pka-08,sc-08,dc-08}. The anyonic one-particle density matrix for $N+1$ particles can be written \cite{ssc-07} as an $N$-dimensional integral (in the variables $t_j=x_j/L$ and $t=x/L$) \begin{eqnarray} \fl \rho^\kappa_{N+1}(t)=\frac{1}{(N+1)!}\int_0^1d t_1\cdots\int_0^1d t_N \nonumber\\ \prod_{s=1}^N B_\kappa(t_s-t) |\sin[\pi (t_s-t)]||\sin[\pi t_s]| \prod_{1\leq i<j \leq N} 4\sin[\pi(t_j-t_i)]^2. \label{onepart_int} \end{eqnarray} Using the identity \begin{equation} \fl \prod_{1\leq j<k \leq N} 4\sin[\pi(t_j-t_k)]^2=\prod_{1\leq j<k \leq N} |e^{i 2 \pi t_k}-e^{i 2 \pi t_j}|^2\equiv |\Delta_N(\{t_i\})|^2, \end{equation} we can identify the second product in the integral (\ref{onepart_int}) with the squared absolute value of a Vandermonde determinant. Thus the reduced density matrix is an average over the distribution $|\Delta_N|^2$: \begin{equation} \rho^\kappa_{N+1}(t)= \left\langle \prod_{s=1}^N B_\kappa(t_s-t) |z-z_s| |1-z_s| \right\rangle_{|\Delta_N|^2}\,, \end{equation} where $z_s=e^{i 2 \pi t_s}$ and $z=e^{i 2 \pi t}$.
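As a sanity check of Eq. (\ref{onepart_int}) (a numerical sketch of ours, not part of the derivation; all function names are ours), the $N=2$ integral can be evaluated by brute-force quadrature: at $\kappa=1$ its normalization-independent ratios must reproduce the free-fermion density matrix $\sin((N+1)\pi t)/((N+1)\sin\pi t)$, while at $\kappa=0$ the integral must be real.

```python
import cmath
import math

def B(t, kappa):
    # braiding factor of Eq. (1dbraiding2): exp(-i pi kappa) for t < 0, 1 for t > 0
    return cmath.exp(-1j * math.pi * kappa) if t < 0 else 1.0 + 0j

def rho_unnorm(t, kappa, grid=300):
    """Unnormalized rho^kappa_3(t): midpoint quadrature of the N = 2 case of the
    integral above (the 1/(N+1)! prefactor drops out of ratios)."""
    h = 1.0 / grid
    total = 0j
    for i in range(grid):
        t1 = (i + 0.5) * h
        for j in range(grid):
            t2 = (j + 0.5) * h
            total += (B(t1 - t, kappa) * B(t2 - t, kappa)
                      * abs(math.sin(math.pi * (t1 - t))) * abs(math.sin(math.pi * t1))
                      * abs(math.sin(math.pi * (t2 - t))) * abs(math.sin(math.pi * t2))
                      * 4 * math.sin(math.pi * (t2 - t1)) ** 2)
    return total * h * h

def fermion_kernel(t, n_plus_1=3):
    # free-fermion one-body density matrix on the ring (Dirichlet kernel)
    return math.sin(n_plus_1 * math.pi * t) / (n_plus_1 * math.sin(math.pi * t))
```

With a $300\times300$ grid, the $\kappa=1$ ratio $\rho(0.2)/\rho(0.1)$ agrees with the ratio of Dirichlet kernels to better than $10^{-3}$.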
In the case of bosons ($\kappa=0$) the reduced density matrix can be obtained \cite{g-04} through a {\it replica trick}, considering for a positive integer $n$ \begin{equation} Z_{2n}^{\kappa=0}(t)=\left\langle \prod_{s=1}^N (z-z_s)^{2n} (1-z_s)^{2n} \right\rangle_{|\Delta_N|^2}\,, \label{Repbos} \end{equation} and then obtaining $\rho^\kappa_{N+1}(t)$ from $Z_{2n}(t)$ by a suitable analytic continuation to $n\to 1/2$. It has been shown by Kurchan \cite{k-91} that, to obtain the average of the absolute value from the replicated correlation, we must require that the analytic continuation, for any complex $n$ with positive real part, diverges at most as a power law. How can this replica calculation be modified for anyons, where the average involves a function of unit modulus but arbitrary phase? Our trick is to start by considering an anyonic parameter that is a rational number, i.e. $\kappa= q/p$ with $p$ and $q$ integers. We can then consider \begin{equation} Z_{2np} (t)= \left\langle \prod_{s=1}^N (z-z_s)^{2np} (1-z_s)^{2np}\right\rangle_{|\Delta_N|^2}, \end{equation} which, after a proper analytic continuation for $2np\to 1$, could give the desired reduced density matrix. The ambiguity in this case is obvious: $Z_{2np}$ includes all the correlations of anyons with statistical parameter $\kappa= q/p$ for any $q=1 \dots 2p$, which means that different analytic continuations will give different anyonic correlations, including the bosonic and fermionic ones. We will propose a general criterion that allows us to choose the proper analytic continuation for any value of the statistical parameter. To elucidate the meaning of our trick, we start by applying it to a trivial integral that contains all the essence of the modified replica trick.
\section{The replica trick for an integral} \label{secint} Let us consider the integral \begin{equation} I= \int_{-\infty}^{+\infty} dt \,|t| e^{-t^2}= 1\,, \end{equation} which has the replica representation \begin{equation} I_{2n}= \int_{-\infty}^{+\infty} dt \,t^{2n} e^{-t^2}= \frac{1+e^{2\pi i n}}{2} \Gamma(1/2 +n)\,. \end{equation} Notice that $I_{1}=0$, while considering $n$ integer and analytically continuing we have $I_{2n}^c=\Gamma(1/2+n)$, which leads to $I_{1}^{c}=1$, the desired result. This analytic continuation selects the ``bosonic'' branch of the integral, giving the integral with the absolute value, because it is the only analytic continuation that does not diverge exponentially in the complex plane for $n\to i \infty$, i.e. that satisfies Kurchan's criterion for the absolute value \cite{k-91}. Taking instead $I_{2n}$ directly, we would have obtained the ``fermionic'' branch. Let us then modify the integral anyonically: \begin{equation} I^{\kappa}= \int_{-\infty}^{+\infty} dt B_\kappa(t) |t| e^{-t^2}= \frac{1+e^{-i\pi q/p}}2\,, \end{equation} with $B_\kappa(t)$ given by Eq. (\ref{1dbraiding2}) and $\kappa= q/p$. The proposed analytic continuation is \begin{equation} I_{2np}= \int_{-\infty}^{+\infty} dt\, t^{2np} e^{-t^2}= \frac{1+e^{2i \pi n p}}{2} \Gamma(1/2 +np)\,, \end{equation} where we lose any knowledge of the value of $q$; this is the previously mentioned ambiguity. Considering $n$ and $p$ integers and analytically continuing, one would always get the bosonic branch. On the other hand, one can play the following game: \begin{equation} I_{2np}=\frac{1+e^{-2i\pi n q }e^{2i\pi n (p+q)}}{2} \Gamma(1/2+np)\,, \end{equation} with $q,p,n$ integers. Considering the analytic continuation \begin{equation} I_{2np}^{q}=\frac{1+e^{-2i \pi n q }}{2} \Gamma(1/2+np)\,, \end{equation} we get \begin{equation} I_{1}^{q}=\frac{1+e^{-i \pi q/p }}{2}\,, \end{equation} which selects the desired anyonic branch.
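This toy example can be verified end to end with a few lines of numerics (our own sketch; the function names are ours): direct quadrature of $I^\kappa$ reproduces the branch value $I_1^q=(1+e^{-i\pi\kappa})/2$ for any $\kappa$.

```python
import cmath
import math

def B(t, kappa):
    # braiding factor: exp(-i pi kappa) for t < 0, 1 for t > 0
    return cmath.exp(-1j * math.pi * kappa) if t < 0 else 1.0 + 0j

def I_numeric(kappa, tmax=8.0, steps=100_000):
    """Midpoint quadrature of I^kappa = int B_kappa(t) |t| exp(-t^2) dt."""
    h = 2 * tmax / steps
    total = 0j
    for i in range(steps):
        t = -tmax + (i + 0.5) * h
        total += B(t, kappa) * abs(t) * math.exp(-t * t) * h
    return total

def I_branch(kappa):
    # the analytic continuation selecting the kappa branch: (1 + exp(-i pi kappa)) / 2
    return (1 + cmath.exp(-1j * math.pi * kappa)) / 2
```

At $\kappa=0$ this reproduces $I=1$ (bosonic branch), at $\kappa=1$ it vanishes (fermionic branch), and at intermediate $\kappa$ it interpolates with the complex phase of the anyonic branch.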
The reason why this simple game works is that $I_{2np}^{q}$ is the only analytic continuation with the correct asymptotic behavior for large imaginary $n$ (and positive real part), and it is this behavior that selects the correct branch. The message is that the integral $I_{2np}$ contains all statistics from bosons to fermions, and a given one can be obtained just by selecting the correct behavior at large imaginary $n$. With this observation in mind, we can then use the result $Z_{2n}(t)$ for the bosonic Tonks-Girardeau gas and for the Calogero-Sutherland model: properly continuing $Z_{2np}(t)$, we easily get the anyonic result. Thus in the next section we review the bosonic result, and in the following one we generalize it to anyons. \section{Review of the bosonic calculation} \label{secrepbos} All the content of this section is taken from the pioneering work of Gangardt \cite{g-04}. We report this material here only to make this paper self-contained. The replicated average for bosons, Eq. (\ref{Repbos}), is (we recall $z=e^{2\pi i t}$) \begin{equation} \fl \label{eq:circ_Zn_def} Z_{2n} (t) = \frac{1}{M_N(2n)} \int_{-1/2}^{1/2} d^N t |\Delta_N (e^{2\pi i t})|^2 \prod_{s=1}^N |1+e^{2\pi i t_s}|^{2n}|z+e^{2\pi i t_s}|^{2n}, \end{equation} where we have normalized $Z_{2n}(1) = 1$ by introducing the constant $M_N(2n)$ (see Ref. \cite{g-04} for its precise value; it is not essential for what follows). The integral (\ref{eq:circ_Zn_def}) has a dual representation \cite{fw-04} as an integral over $2n$ variables \begin{equation} \label{eq:circ_duality} Z_{2n} (t) = \frac{z^{-Nn}}{S_{2n}} \int_0^1 d^{2n} x\, \Delta_{2n}^2 (x) \prod_{a=1}^{2n} (1-(1-z)x_a)^N, \end{equation} where the normalization constant is given by the Selberg integral \begin{equation} \label{eq:circ_selberg} S_{2n} = \int_0^1 d^{2n} x \,\Delta_{2n}^2 (x) = \prod_{a=1}^{2n} \frac{\Gamma^2(a) \Gamma(1+a) } {\Gamma(2n+a)}\,. \end{equation} In Eq. (\ref{eq:circ_duality}) the number of particles $N$ appears only as a parameter.
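Both closed-form normalizations entering this section, the Selberg constant (\ref{eq:circ_selberg}) and the Laguerre-type integral (\ref{eq:lag_selberg}) used in the saddle-point computation, can be spot-checked numerically in low dimension (our own sketch; all names are ours):

```python
import math

def selberg_formula(two_n):
    # S_{2n} = prod_a Gamma(a)^2 Gamma(1+a) / Gamma(2n+a)
    return math.prod(math.gamma(a) ** 2 * math.gamma(1 + a) / math.gamma(two_n + a)
                     for a in range(1, two_n + 1))

def selberg_quadrature_2d(grid=400):
    # direct midpoint quadrature of int_0^1 int_0^1 (x1 - x2)^2, the 2n = 2 case
    h = 1.0 / grid
    s = 0.0
    for i in range(grid):
        x1 = (i + 0.5) * h
        for j in range(grid):
            x2 = (j + 0.5) * h
            s += (x1 - x2) ** 2
    return s * h * h

def laguerre_selberg_formula(l, lam):
    # I_l(lambda) = lambda^{-l^2} prod_a Gamma(a) Gamma(1+a)
    return lam ** (-l * l) * math.prod(math.gamma(a) * math.gamma(1 + a)
                                       for a in range(1, l + 1))

def laguerre_l2_numeric(lam, steps=200_000):
    # expand (xi1 - xi2)^2 and use 1D moments m_k = int_0^inf xi^k exp(-lam xi) dxi
    tmax = 60.0 / lam
    h = tmax / steps
    m0 = m1 = m2 = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        w = math.exp(-lam * x) * h
        m0 += w
        m1 += x * w
        m2 += x * x * w
    return 2 * (m2 * m0 - m1 * m1)
```

For instance, the $2n=2$ Selberg constant is $\int_0^1\!\int_0^1 (x_1-x_2)^2\,dx_1dx_2 = 1/6$, matching the product formula, and $I_2(\lambda)=2\lambda^{-4}$ is recovered by the moment expansion.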
This representation allows us to obtain an asymptotic expression for $Z_{2n}$ suitable for analytic continuation in $n$. In the large-$N$ limit the integrand in (\ref{eq:circ_duality}) oscillates rapidly, and the main contribution comes from the endpoints $x_\pm=1,0$, which are the only stationary points of the phase. We change variables near each endpoint, \begin{eqnarray} \label{eq:circ_chang_var} x_a&=&0+\frac{\xi_a}{N(1-z)} ,\qquad\qquad\qquad a=1,\ldots,l, \nonumber\\ x_b&=&1-\frac{\xi_b}{N(1-z^{-1})} ,\qquad\qquad b=l+1,\ldots,2n . \end{eqnarray} The integrand in (\ref{eq:circ_duality}) simplifies in the large-$N$ limit, \begin{equation} (1-(1-z)x_a)^N\simeq \left\{ \begin{array}{ll} e^{-\xi_a}, & a=1,\ldots,l\\ z^N e^{-\xi_a}, & a=l+1,\ldots, 2n \end{array}\right. \label{eq:circ_spm} \end{equation} and the integration measure, including the Vandermonde determinant, factorizes as \begin{equation} \label{eq:circ_vandermonde_fact} d^{2n} x\,\Delta^2_{2n} (x) = \frac{d^{l}\xi_a\,\Delta^2_{l} (\xi_a) \;d^{2n-l}\xi_b\,\Delta^2_{2n-l} (\xi_b)}{ [N(1-z)]^{l^2} [N(1-z^{-1})]^{(2n-l)^2}}. \end{equation} The remaining integrals are calculated using \cite{metha,df,fz-96} \begin{equation} \label{eq:lag_selberg} I_l (\lambda) = \int_0^\infty d^l \xi_a \Delta^2_l (\xi_a) \prod_{a=1}^l e^{-\lambda \xi_a} = \lambda^{-l^2} \prod_{a=1}^{l} \Gamma(a)\Gamma(1+a). \end{equation} Multiplying the contribution of each saddle point by the number of ways to distribute the variables, we obtain the asymptotics of the integral (\ref{eq:circ_duality}) as a sum of $2n+1$ terms \footnote{We correct an error/typo of Ref. \cite{g-04}, irrelevant in the limit $n\to1/2$, i.e.
a factor that is one when $n\to 1/2$.} \begin{equation} \label{eq:circ_Zn_2} Z_{2n} (t) = \prod_{c=1}^{2n} \frac{\Gamma (2n+c)}{\Gamma(c)} \sum_{l=0}^{2n} (-1)^{2n(n-l)} \frac{\left[F^l_{2n}\right]^2 z^{(N+2n)(n-l)}} {{(2X)^{l^2+(2n-l)^2}}}, \end{equation} where we have introduced $X=N(z^{1/2}-z^{-1/2})/2i= N\sin\pi t$ and the factors $F^l_{2n}$, \begin{eqnarray} F_{2n}^l &=& \prod_{a=1}^l \frac{\Gamma(a)}{\Gamma{(2n-l+a)}} \frac{G(l+1) G(2n-l+1)}{G(2n+1)}\,, \end{eqnarray} where $G(x)$ is the Barnes $G$-function \begin{equation} \fl G(z+1)=(2 \pi)^{z/2} e^{-(z+(\gamma_E+1)z^2)/2} \prod_{k=1}^{\infty}(1+z/k)^k e^{-z+z^2/(2k)}, \end{equation} $\gamma_E$ is the Euler constant, $G(1)=G(2)=1$, and $G(3/2) = 1.06922\dots$. The Barnes function makes it particularly easy to obtain the analytic continuation. Notice that already at this level, if we set $2n=1$ in Eq. (\ref{eq:circ_Zn_2}), we get the fermion result $g_1^{\kappa=1}(x)= \sin((N+1)\pi t)/ ((N+1) \sin\pi t)$. For $n$ integer, we can extend the sum from $0\leq l\leq 2n$ to all the integers, because $F_{2n}^l$ vanishes for $l<0$ and $l>2n$ (a simple property of the Barnes $G$-function, inherited from the $\Gamma$). While this is an innocuous change for integer $n$, it is the fundamental step that will allow for replica symmetry breaking: for general (i.e. non-integer) values of $n$, the $F_{2n}^l$ are non-zero for any $l$ and the sum becomes a true infinite series. Changing the summation variable to $k=l-n$ we finally obtain \begin{equation} \fl \label{eq:circ_Zn_3} Z_{2n} (t) = \frac{\prod_{c=1}^{2n} \Gamma (2n+c)/\Gamma(c)} {(2N |\sin \pi t|)^{2n^2}} \left(G^4(3/2)+2\sum_{k=1}^\infty (-1)^{2nk} \left[F^{k+n}_{2n}\right]^2 \frac{\cos \left[2k\, \pi (N+2n) t\right] } {(4N^2\sin^2\pi t)^{k^2}}\right). \end{equation} This apparently useless shift is the crucial point where the replica symmetry is finally broken, because it selects the zero mode.
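The infinite product quoted above can be used directly for numerics: a truncated implementation (our own sketch, adequate for moderate real arguments; not part of the original text) reproduces the stated values $G(1)=G(2)=1$ and $G(3/2)\simeq 1.06922$.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant gamma_E

def barnes_g(x, kmax=100_000):
    """Barnes G(x) for real x > 0 from the truncated Weierstrass product,
    writing x = z + 1; the truncation error of the log is O(|z|^3 / kmax)."""
    z = x - 1.0
    log_g = 0.5 * z * math.log(2 * math.pi) - 0.5 * (z + (EULER_GAMMA + 1) * z * z)
    for k in range(1, kmax + 1):
        log_g += k * math.log(1 + z / k) - z + z * z / (2 * k)
    return math.exp(log_g)
```

The same routine then also yields the continued amplitudes $F^{k+n}_{2n}$, since they are built from ratios of $G$'s and $\Gamma$'s.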
Now we are in a position to take the limit $n\to 1/2$, which results in the desired harmonic expansion of the one-body density matrix \begin{equation*} \fl \rho_{N+1}(t) = \frac{1}{|2N \sin \pi t|^{1/2}} \left[\sum_{k=-\infty}^\infty (-1)^{k} [G(3/2+k) G(3/2-k)]^2 \frac{\cos 2 k\,\pi (N+1) t}{|2 N\sin\pi t|^{2k^2}}\right]. \end{equation*} The form of this expansion was first predicted by Haldane \cite{h-81}, but the complete knowledge of all the amplitudes has only been obtained by means of replicas \cite{g-04}. \section{The anyons} \label{secrepany} In the calculation of $Z_{2np}$ everything proceeds as in the bosonic calculation, leading to the replacement $n\to np$ in the replicated correlation (\ref{eq:circ_Zn_2}), \begin{equation} \fl Z_{2np} (t) = \prod_{c=1}^{2np} \frac{\Gamma (2np+c)}{\Gamma(c)} \sum_{l=0}^{2np} (-1)^{2np(np-l)} \frac{\left[F^l_{2np}\right]^2 z^{(N+2np)(np-l)}} {{(2X)^{l^2+(2np-l)^2}}}. \end{equation} The correct analytic continuation giving the anyonic branch with $\kappa=q/p$ is obtained by selecting the behavior at large imaginary $n$. As for bosons, this is done by choosing the zero mode in the sum over $l$ and then extending the sum over all integers. Thus we set \begin{equation} np-l=nq+m\,, \end{equation} which, amazingly, is the only change we need to describe anyons starting from the bosonic formulae.
With this substitution we get \begin{equation} \fl Z_{2np}^q (t) = \prod_{c=1}^{2np} \frac{\Gamma (2np+c)}{\Gamma(c)} \; \sum_{m=-\infty}^{\infty} (-1)^{2np(nq+m)} \frac{\left[F^{np-nq-m}_{2np}\right]^2 z^{(N+2np)(nq+m)}} {{(2X)^{(np-(nq+m))^2+(np+(nq+m))^2}}}, \end{equation} which is ready for the analytic continuation $2np\to 1$, yielding \begin{equation} \fl \rho^{q/p}_{N+1} (z) = \sum_{m=-\infty}^{\infty} (-1)^{(q/2p+m)} \frac{\left[F^{1/2-q/2p-m}_{1}\right]^2 z^{(N+1)(q/2p+m)}} {{(2X)^{1/2 +2 (m+ q/2p)^2 }}}, \end{equation} where only the ratio $\kappa= q/p$ appears, so the result can be generalized to any $\kappa$, not only ratios of integers, giving (restoring $z=e^{i2\pi x/L}$) \begin{equation} \fl \rho^{\kappa}_{N+1} (x) = \sum_{m=-\infty}^{\infty} (-1)^{(\kappa/2+m)} \frac{\left[G(\frac32-\frac\kappa2-m) G(\frac32+\frac\kappa2+m)\right]^2 e^{i \pi x (\kappa+2m)(N+1)/L}} {{(2 N |\sin \pi x/L| )^{1/2 +2 (m+ \kappa/2)^2 }}}. \end{equation} According to bosonization (or the harmonic fluid approach), any anyonic off-diagonal correlation in the thermodynamic limit $N\to\infty$, with $\rho_0=N/L$ kept constant, admits the expansion \cite{cm-07} \begin{equation} g_1(x)= \rho_0 \rho_\infty \sum_{m=-\infty}^{\infty} b_m \frac{e^{i(2m+\kappa)\pi\rho_0 x}e^{-i (m+\kappa/2)\pi}}{ (\rho_0 d(x))^{\frac{(2m+\kappa)^2}2 K+\frac1{2K}}}\,. \label{main} \end{equation} Here $d(x)=|L\sin (\pi x/L)|$, the $b_m$ are unknown non-universal amplitudes, and $K$ is the universal Luttinger liquid exponent, equal to $1$ for Tonks-Girardeau (we fixed $b_0=1$). The two expansions agree perfectly, confirming the correctness of the result. Furthermore, we obtained an exact expression for all the coefficients in the harmonic expansion, \begin{equation} \fl \rho_\infty b_m= \frac{\left[G(\frac32-\frac\kappa2-m) G(\frac32+\frac\kappa2+m)\right]^2} {{2^{1/2 +2 (m+ \kappa/2)^2 }}}\quad {\rm with}\; \rho_\infty=\frac{\left[G(\frac32-\frac\kappa2) G(\frac32+\frac\kappa2)\right]^2}{ \sqrt2}\,.
\end{equation} The leading term $\rho_\infty$ had already been obtained by applying the Fisher-Hartwig conjecture to the Toeplitz determinant \cite{ssc-07,sc-08}, and the two results agree. All the other terms are new. It is important to write down the ratio of two consecutive amplitudes, \begin{equation} \frac{b_{m+1}}{b_m}= -\left[\frac{\Gamma(3/2+\kappa/2+m)}{\Gamma(1/2-\kappa/2-m)}\right]^2 4^{-(1+\kappa+2m)}\,. \end{equation} The first corrections to the leading behavior are given by the terms with $m=\pm1$, which satisfy \begin{equation} b_1 b_{-1} = \frac1{256} (1-\kappa^2)^2\,. \label{b1bm1} \end{equation} We will see that the same result also follows from the Painlev\'e V equation, giving a further check of the replica approach. \section{Beyond the saddle points} \label{secbeysp} The formula just derived gives the leading term in $1/N$ of the desired correlation function. As is well known, this does not give {\it all} the dominant terms in the thermodynamic limit, defined as $N\to \infty$ with $N \pi x/L= k_F x$ kept constant. One then needs to calculate corrections to the previous terms. A systematic way to get these corrections is to perform perturbation theory around the saddle points order by order, considering in Eq. (\ref{eq:circ_chang_var}) the full expansion in $\xi_{a,b}$ close to $x_{\pm}=0,1$, as done in Ref. \cite{g-04} for bosons. This approach is very general and straightforward and gives all the $1/N$ corrections, but unfortunately it soon becomes computationally cumbersome as the order increases. For this reason, we only report the first-order computation in the next subsection. Alternatively, we can take a different approach based on the solution of the Painlev\'e V equation, which easily gives all the terms in the expansion, but only in the thermodynamic limit. \subsection{First order in perturbation theory} In Ref. \cite{g-04} the first-order correction in $1/N$ to $Z_{2n}(t)$ has been explicitly calculated (see Eq. (65) there).
Using this result, simply setting $2n\to 2np$ and then performing the shift $np-l=nq+m$, we get for the first-order term (we recall that $X=N\sin \pi t$, so it is $O(N)$) \begin{eqnarray} \fl\frac{i}X (np-nq-m)(np+nq+m) [(np+nq+m) e^{i\pi t}-(np-nq-m)e^{-i\pi t}] \nonumber\\ -\frac1N [(np+nq+m)^3+(np-nq-m)^3]\,. \end{eqnarray} Taking $2np\to 1$, so that at the zero mode $m=0$ one has $np\pm(nq+m)\to(1\pm\kappa)/2$, and keeping only this mode (higher modes give rise to higher corrections in $1/N$), we get \begin{equation} -\frac1{2N}(1+\kappa^2)+i\frac{1-\kappa^2}4 \kappa \frac1{N\tan \pi x/L}\,. \label{1ord} \end{equation} Notice that for bosons and fermions ($\kappa=0,1$) the tangent correction is absent: the role of this term in the replica approach becomes clear only when studying anyons. The first term $-(1+\kappa^2)/2N$ just corresponds to the natural replacement $N \to N+1$ in the denominator of the leading term. \subsection{Large distance asymptotic expansion via Painlev\'e V equation} In Ref. \cite{sc-08} we have shown that $\rho_\kappa(t)$ at finite $N$ satisfies a second-order differential equation of Painlev\'e VI type. The differential equation does not depend on the anyonic parameter, i.e. it is the same for any $\kappa$. What characterizes the statistics are the boundary conditions at $t=0$, which do depend on the anyonic parameter. It is well known that in the thermodynamic limit the Painlev\'e VI equation reduces to a Painlev\'e V one (see e.g. \cite{metha,fw-02,ffgw-03}), first derived for bosons by Jimbo et al. \cite{jmms-80}, \begin{eqnarray} (x \sigma'')^2 + 4 (x \sigma' - 1 - \sigma) (x \sigma' + {\sigma'}^2 - \sigma)=0\,,\\ {\rm with} \qquad \sigma(x)=x \frac{d \log \rho(x)}{dx}\,. \end{eqnarray} To get a systematic expansion for large $x$ from this equation, we need to impose the anyonic boundary conditions at $x=\infty$, for which we need the first two leading terms of the harmonic-fluid expansion, $b_0$ and $b_{-1}$.
Actually $b_0$ gives only the overall normalization, since $\sigma(x)$ does not depend on it, while $b_{-1}$ is essential to fix all the signs (in fact, an old sign error in $b_{-1}=b_1$ for bosons \cite{vt-79} caused the whole expansion in \cite{jmms-80} to have the wrong signs; this was corrected only in \cite{g-04}, see also \cite{ctw-80}). \begin{figure}[t] \includegraphics[width=\textwidth]{coeff.eps} \caption{Dependence on the anyonic statistical parameter $\kappa$ of the coefficients of the large-distance expansion of the off-diagonal correlators, Eqs. (\ref{exp1}) and (\ref{expfs}).} \label{figcoeff} \end{figure} In this subsection we set $k_F=1$ for simplicity of writing; the correct formulas are obtained by replacing $x\to k_F x$. The complete asymptotic expansion of $\rho(x)$ is obtained by multiplying each term in the harmonic expansion (\ref{main}) by a function analytic at $x=\infty$, i.e. \begin{eqnarray} \fl\rho(x)= \frac{\rho_\infty e^{i\kappa x}}{x^{1/2+\kappa^2/2}} \left[\left(1+\frac{d_1}x +\frac{d_2}{x^2}+\dots\right) +b_{-1} \frac{e^{-2 i x}}{x^{2-2\kappa}}\left(1+\frac{a_1}x +\frac{a_2}{x^2}+\dots\right) \right.\nonumber\\ \left. +b_{1} \frac{e^{2 i x}}{x^{2+2\kappa}}\left(1+\frac{c_1}x +\frac{c_2}{x^2}+\dots\right) +\dots \right]\,. \label{exp1} \end{eqnarray} From this we obtain the large-distance expansion of $\sigma$ that fixes the boundary conditions: \begin{equation} \lim_{x\to\infty} \sigma(x)=i\kappa x -\frac{1+\kappa^2}2 -2 i b_{-1} e^{-2i x} x^{-1+2\kappa}+ O(x^o)\,, \end{equation} where $o$ is an exponent, not specified here, that follows from the equation. Plugging this expansion into the Painlev\'e equation, one can obtain iteratively all the coefficients $a_i,b_i,c_i,d_i$. The first relation obtained in this way is Eq. (\ref{b1bm1}), which provides a further consistency check of the replica approach.
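This consistency relation can also be checked numerically from the Barnes-$G$ amplitudes of Sec. \ref{secrepany} (a self-contained numerical sketch of ours, using a truncated Weierstrass product for $G$; the function names are ours):

```python
import math

EULER_GAMMA = 0.5772156649015329

def barnes_g(x, kmax=60_000):
    # Barnes G(x) for real x > 0 via the truncated Weierstrass product
    z = x - 1.0
    log_g = 0.5 * z * math.log(2 * math.pi) - 0.5 * (z + (EULER_GAMMA + 1) * z * z)
    for k in range(1, kmax + 1):
        log_g += k * math.log(1 + z / k) - z + z * z / (2 * k)
    return math.exp(log_g)

def amp(m, kappa):
    """|rho_inf b_m|: Barnes-G amplitude of the m-th harmonic (phases dropped)."""
    g = (barnes_g(1.5 - kappa / 2 - m) * barnes_g(1.5 + kappa / 2 + m)) ** 2
    return g / 2 ** (0.5 + 2 * (m + kappa / 2) ** 2)

def b1_bm1(kappa):
    # b_1 b_{-1} = (amp(1)/amp(0)) * (amp(-1)/amp(0)); the phases cancel in the product
    a0 = amp(0, kappa)
    return amp(1, kappa) * amp(-1, kappa) / (a0 * a0)
```

For several values of $\kappa$ this reproduces $b_1 b_{-1}=(1-\kappa^2)^2/256$ to the accuracy of the truncated product.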
After long and tedious algebra we obtain the first coefficients: \begin{eqnarray} d_1&=& i\kappa\frac{1-\kappa^2}4\,,\\ c_1&=& -i \frac{(1+\kappa)(2+\kappa)(3+\kappa)}4\,,\\ a_1&=& i \frac{(1-\kappa)(2-\kappa)(3-\kappa)}4\,,\\ d_2&=& \frac{1-\kappa^2}{32}(\kappa^4+4\kappa^2-1)\,,\\ a_2&=& -\frac1{32} (3-\kappa)(1-\kappa)(31-48\kappa+28\kappa^2-8\kappa^3+\kappa^4)\,,\\ c_2&=& -\frac1{32} (3+\kappa)(1+\kappa)(31+48\kappa+28\kappa^2+8\kappa^3+\kappa^4)\,, \end{eqnarray} which for $\kappa\to0$ reproduce the known results for bosons \cite{g-04,jmms-80}. Notice that $d_1$ reproduces the result from first-order perturbation theory. In Fig. \ref{figcoeff} we report all these coefficients as functions of $\kappa$. Notice that, since at the fermionic point all the terms but two must vanish, we have $a_m=d_m=0$ for any $m$, while the vanishing of the other terms is ensured by $b_1=0$. At the bosonic point, instead, the reality of the correlator implies $a_i=c_i^*$, which is satisfied by our expressions. \section{The comparison with numerics} \label{secdet} In this section we show how our expansions fit perfectly the numerical data obtained by exact evaluation of the off-diagonal correlators at finite $N=L$ through the Toeplitz determinants derived in Refs. \cite{ssc-07,sc-08}. \begin{figure}[t] \includegraphics[width=\textwidth]{CORRkap=0.1N=81.eps} \caption{Off-diagonal correlation function for $\kappa=0.1$ and $N=81$. Top panels: Real (left) and imaginary (right) parts of $\rho^{0.1}_{81}(t)$. Second row: $\rho^{0.1}_{81}(t)/\rho_{FH}(t)-1$. Third row: Subtraction of the first two harmonic terms $b_{\pm1}$. Fourth row: Further subtraction of the analytic contributions $d_{1,2}$.
Bottom panels: Further subtraction of the analytic corrections to the harmonic terms $a_1$ and $c_1$.} \label{fig0.1} \end{figure} At finite volume and unit density, the expansion can be written as \begin{eqnarray} \fl\rho(x)&=& \frac{\rho_\infty e^{i\kappa x}}{[N \sin \pi x/L]^{\frac12+\frac{\kappa^2}2}} \left[[1+O(N^{-2})]+\frac{d_1}{N \tan \pi x/L} +\frac{d_2}{[N \sin \pi x/L]^2} +O(N^{-3})\right.\nonumber\\ \fl &&\left. +b_{-1} \frac{e^{-2 i N x}}{[N\sin \pi x/L]^{2-2\kappa}} \left(1+\frac{a_1}{N \tan\pi x/L} +\frac{a_2}{[N\sin\pi x/L]^2} +O(N^{-3})\right) \right.\nonumber\\ \fl &&\left. +b_{1} \frac{e^{2 i N x}}{[N\sin \pi x/L]^{2+2\kappa}} \left(1+\frac{c_1}{N\tan\pi x/L} +\frac{c_2}{[N\sin\pi x/L]^2}+O(N^{-3}) \right) \right.\nonumber\\ \fl && \left. + {\rm higher} \; {\rm harmonics} \frac{}{}\right]\,. \label{expfs} \end{eqnarray} Notice that in the leading term we have used $N$ instead of $N-1$, to cancel the $1/N$ correction to the constant in the first line, according to Eq. (\ref{1ord}). It was already shown in Ref. \cite{ssc-07} that the leading term of this expansion alone (obtained from the Fisher-Hartwig conjecture) describes $\rho(x)$ almost perfectly for all $\kappa<1/2$. In Ref. \cite{cm-07} it was argued that for larger values of $\kappa$ the first harmonic term (i.e. $b_{-1}$) is fundamental for a good description of the asymptotic behavior, because its power $2-2\kappa$ gets closer to zero. Including this term, we have a perfect description of the asymptotic behavior \cite{cm-07}, but the exact value of the amplitude $b_{-1}$ has been obtained only here. \begin{figure}[t] \includegraphics[width=\textwidth]{CORRkap=0.5N=101.eps} \caption{Off-diagonal correlation function for $\kappa=0.5$ and $N=101$. Top panels: Real (left) and imaginary (right) parts of $\rho^{0.5}_{101}(t)$. Second row: $\rho^{0.5}_{101}(t)/\rho_{FH}(t)-1$. Third row: Subtraction of the first two harmonic terms $b_{\pm1}$.
Fourth row: Further subtraction of the analytic contributions $d_{1}$ (right) and $a_1,c_1$ (left). Bottom panels: Further subtraction of the analytic corrections $d_2$ (left) and $a_1$, $c_1$ (right).} \label{fig0.5} \end{figure} Why should one care about further corrections? The main reason is that they give rise to non-analytic terms in the momentum distribution function at $k=(2m+\kappa) k_F$, whose exact structure is characterized by the coefficients $b_m$. Furthermore, they give visible corrections to the off-diagonal correlators if one searches for them carefully. This is elucidated in the three figures \ref{fig0.1}, \ref{fig0.5}, and \ref{fig0.9}, where we report $\rho(x)$ for $\kappa=0.1$ and $N=81$, for $\kappa=0.5$ and $N=101$, and for $\kappa=0.9$ and $N=121$, respectively. In the three figures, the top panels show the real (left) and imaginary (right) parts of $\rho(x)$ directly calculated from the Toeplitz determinant. In the second panels from the top we report the real and imaginary parts of \begin{equation} \rho_{\rm corr}(x)=\frac{\rho(x)}{\rho_{FH}(x)}-1= \frac{\rho(x)[N\sin\pi x/L]^{\frac12+\frac{\kappa^2}2}}{\rho_\infty e^{i\kappa x}}-1\,, \end{equation} which should tend to zero for $N\to\infty$ for any $\kappa\neq1$ (at $\kappa=1$, the first harmonic $b_{-1}$ is of order $O(N^0)$). We can in fact see that these corrections are much smaller than the leading contribution for $\kappa$ far from $1$, but become very relevant at $\kappa=0.9$, even at $N=121$. Furthermore, the second panels show a very interesting oscillating behavior due to the first harmonic term. Thus, in order to remove the leading oscillating behavior, we subtract from $\rho_{\rm corr}(x)$ the first oscillating terms in Eq. (\ref{expfs}), i.e. the terms in $b_{\pm1}$. This subtraction is shown in the third set of panels (from the top). The effect of this subtraction is rather peculiar.
In fact, while for $\kappa=0.1$ it mainly kills the oscillations, leaving the absolute value almost unchanged because of the smallness of the exponent in $b_{\pm1}$, for larger $\kappa$ the absolute value drops drastically, but strong oscillations are still present. The remaining oscillations are due to the second harmonic $b_{-2}$. In the imaginary part of what is left, one can clearly recognize a $1/\tan(\pi x/L)$ shape, due to the term $d_1$, for any value of $\kappa$. In fact, by subtracting the $d_1$ term, as shown in the right fourth (from top) panels, the value drops by another one or two orders of magnitude. By contrast, the real part at the third level has a $\kappa$-dependent shape. This is very easily understood: for $\kappa<1/2$ the largest term left in the expansion is $d_2$, always going like $N^{-2}$, while for $\kappa>1/2$ it is the term in $a_1$, going like $N^{-(3-2\kappa)}$. At $\kappa=1/2$ the two terms are both $N^{-2}$, but since $d_2$ is very close to zero (see Fig. \ref{figcoeff}), the most important term is $a_1$. For these reasons, the left fourth panel shows for $\kappa=0.1$ the subtraction of $d_2$, and in the other two cases the subtractions of $a_1$ and $c_1$. Evidently these subtractions are the correct ones, and in fact they decrease the residual by at least one order of magnitude. Finally, in the bottom panels we report the final subtractions left of $a_1$, $c_1$ or $d_2$, depending on the case. A further reduction of the value is observed. Clearly this procedure of subtraction can be repeated ad libitum, obtaining better and better agreement, but we preferred to stop at this level. \begin{figure}[t] \includegraphics[width=\textwidth]{CORRkap=0.9N=121.eps} \caption{As Fig. \ref{fig0.5} with $\kappa=0.9$ and $N=121$.} \label{fig0.9} \end{figure} A last comment concerns the absolute value of the real part of these corrections, which is small but non-vanishing, as evident from the plots.
The $1/N^2$ contribution to this term can easily be obtained from second-order perturbation theory, as done for bosons in Ref. \cite{g-04}, but the asymptotic nature of the expansion makes this calculation not very useful. \section{The general anyonic Calogero-Sutherland model} \label{secCS} The Hamiltonian of the Calogero-Sutherland model is given by a long-range pair interaction controlled by the coupling parameter $\lambda$: \begin{equation} \label{ham} H=-\sum_{i=1}^N\frac{\partial^2}{\partial x_i^2} +\lambda(\lambda-1)\sum_{i\neq j} \frac{\pi^2/L^2}{\sin^2(\pi(x_i-x_j)/L)}. \end{equation} The ground-state wave function of the Hamiltonian (\ref{ham}) was found in \cite{s-71} and can be written in the form (\ref{gs}) with $f(y)=|y|^\lambda$ and normalization constant \begin{eqnarray} C^2_N (\lambda) = \frac{1}{L^N}\frac{\Gamma(1+\lambda)^N}{\Gamma(1+\lambda N)}. \end{eqnarray} This model has always been considered only in the case of bosonic and fermionic statistics, or in the anyonic one arising from requiring an analytic ground-state wave function, which implies $\kappa=\lambda$. While thermodynamic properties and diagonal correlations do not depend on $\kappa$, but only on $\lambda$, off-diagonal correlations are a signature of the anyonic statistics, and we can consider general $\kappa\neq \lambda$, as pointed out by Girardeau \cite{g-06}. The off-diagonal correlators are easily obtained with the same replica trick as before.
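The normalization constant $C_N^2(\lambda)$ is fixed by Dyson's circular integral, $\int_0^1 d^N t \prod_{i<j}|e^{2\pi i t_i}-e^{2\pi i t_j}|^{2\lambda} = \Gamma(1+\lambda N)/\Gamma^N(1+\lambda)$. A minimal numerical spot-check of the $N=2$ case (our own sketch, not part of the original text):

```python
import math

def dyson_formula(n, lam):
    # int_0^1 d^n t  prod_{i<j} |e^{2 pi i t_i} - e^{2 pi i t_j}|^{2 lam}
    #   = Gamma(1 + lam n) / Gamma(1 + lam)^n
    return math.gamma(1 + lam * n) / math.gamma(1 + lam) ** n

def dyson_quadrature_n2(lam, grid=1_000):
    # midpoint quadrature of the n = 2 case, |2 sin(pi (t1 - t2))|^{2 lam};
    # by translation invariance on the circle, the double integral reduces to one
    h = 1.0 / grid
    s = 0.0
    for i in range(grid):
        d = (i + 0.5) * h
        s += abs(2 * math.sin(math.pi * d)) ** (2 * lam) * h
    return s
```

For $\lambda=1$ this is $\int_0^1 4\sin^2(\pi d)\,dd = 2 = \Gamma(3)/\Gamma(2)^2$, and the agreement persists for non-integer $\lambda$.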
The one-body density matrix ($t=2\pi x/L$) for $N+1$ particles on a ring is \begin{equation} \fl \label{eq:G1} g_1(x) = \frac{\Gamma(\lambda)\Gamma(1+\lambda N) }{2\pi \Gamma(\lambda(1+N))} \left\langle \prod_{j=1}^N B_\kappa(t-\theta_j)|1-e^{i\theta_j}|^\lambda |e^{i t}-e^{i\theta_j}|^\lambda\right\rangle_{N,\lambda}, \end{equation} where the average is defined as $$ \left\langle f\left(e^{i\theta_1}, \ldots,e^{i\theta_N}\right)\right\rangle_{N,\lambda}= \frac{\Gamma^N (1+\lambda)}{\Gamma(1+\lambda N)}\int_0^{2\pi} \frac{d^N\theta}{(2\pi)^N} \left|\Delta_N (\{e^{i\theta_i}\})\right|^{2\lambda} f\left(e^{i\theta_1}, \ldots,e^{i\theta_N}\right), $$ and $\Delta_N (\{z_i\})$ is the usual Vandermonde determinant. To calculate the average in (\ref{eq:G1}) we use the replica trick, along the lines of the calculations in \cite{gk-01} and \cite{g-04}, modified to include the anyonic statistics. Namely, consider the function \begin{eqnarray} \label{eq:Z_def} Z^{(\lambda)}_{2np}(t) = \left\langle \prod_{j=1}^N (1-e^{i\theta_j})^{2np} (e^{i t}-e^{i\theta_j})^{2np} \right\rangle_{N,\lambda}, \end{eqnarray} but now with the replica limit $2np\to \lambda$. The duality transformation \cite{fw-04}, which enables us to re-express the $N$-dimensional integral (\ref{eq:Z_def}) depending on the parameter $2np$ as a $2np$-dimensional integral depending on $N$ as a parameter, is the same as for bosons \cite{agls-06}, \begin{equation} \fl \label{eq:Z_dual} Z^{(\lambda)}_{2np} (t) = \frac{e^{-iNnp t}}{S_{2np}(1/\lambda)} \int_0^1 d^{2np} x |\Delta_{2np}(\{x_a\})|^{\frac{2}{\lambda}} \prod_{a=1}^{2np}[x_a (1-x_a)]^{\frac{1}{\lambda}-1} [1-(1-e^{i t})x_a]^N . \end{equation} The duality $\lambda\leftrightarrow 1/\lambda$ becomes evident by comparing the powers of the Vandermonde determinants in the previous equations.
We put $Z^{(\lambda)}_{2np} (0)=1$, so the normalization constant is given by the Selberg integral \begin{equation*} \fl S_{2np}(1/\lambda) = \int_0^1d^{2np} x \; |\Delta_{2np}(x)|^{\frac{2}{\lambda}} \prod_{a=1}^{2np} x_a^{\frac{1}{\lambda}-1} (1-x_a)^{\frac{1}{\lambda}-1} = \prod_{a=1}^{2np} \frac{\Gamma^2\left(\frac{a}{\lambda}\right) \Gamma\left(1+\frac{a}{\lambda}\right)}{\Gamma\left(1+\frac{1}{\lambda}\right) \Gamma\left(\frac{2np+a}{\lambda}\right)}. \end{equation*} One can perform the sum over the saddle points at $x_\pm=0,1$, arriving at the same form as before, \begin{equation} \fl Z_{2np} (t) = \sum_{l=0}^{2np} (-1)^{2np/\lambda(np-l)} \frac{H^l_{2np}(\lambda) e^{it(N+2np/\lambda)(np-l)}} {{(2X)^{(l^2+(2np-l)^2)/\lambda}}}, \end{equation} but with a $\lambda$-dependent factor $H^l_{2np}(\lambda)$, given for integer $2np=m$ in Ref. \cite{agls-06} as \begin{equation} H^l_m(\lambda)= \frac{\Gamma(m+1)}{\Gamma(l+1)\Gamma(m-l+1)} \frac{I_l(\lambda) I_{m-l}(\lambda)}{S_m(1/\lambda)}\,, \end{equation} with $S_m(1/\lambda)$ given above and $ I_l(\lambda)=(\prod_{a=1}^l\Gamma[a/\lambda]\Gamma[1+a/\lambda])/\Gamma[1+1/\lambda]^l$. After trivial algebra we get \begin{equation} H^l_m(\lambda)= \prod_{c=1}^m \frac{\Gamma(\frac{m+c}{\lambda})}{\Gamma(\frac{c}\lambda)} \left[ \prod_{a=1}^l \frac{\Gamma(\frac{a}\lambda)}{\Gamma(\frac{m-l+a}{\lambda})} \right]^2\,. \end{equation} For $\lambda=1$ this formula reduces to Eq. (\ref{eq:circ_Zn_2}), whose analytic continuation to complex $n$ can be written in terms of Barnes $G$-functions. Unfortunately this is not possible for general $\lambda$ (special formulas can be found for integer $\lambda$ or $1/\lambda$, but they are not very illuminating). Without this nice representation in terms of special functions, the analytic continuation is less simple, but still straightforward; it is reported in the appendix.
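The $2np=1$ case of this Selberg normalization reduces to an Euler Beta function, $S_1(1/\lambda)=\Gamma^2(1/\lambda)/\Gamma(2/\lambda)$, which is easy to confirm numerically (a sketch of ours; the substitution $x=\sin^2\theta$ regularizes the endpoints for $\lambda<2$):

```python
import math

def S1_formula(lam):
    # S_1(1/lambda) = Gamma(1/lambda)^2 / Gamma(2/lambda)
    a = 1.0 / lam
    return math.gamma(a) ** 2 / math.gamma(2 * a)

def S1_numeric(lam, steps=200_000):
    """int_0^1 [x(1-x)]^{1/lambda - 1} dx via the substitution x = sin^2(theta)."""
    a = 1.0 / lam
    h = (math.pi / 2) / steps
    s = 0.0
    for i in range(steps):
        th = (i + 0.5) * h
        x = math.sin(th) ** 2
        # dx = 2 sin(theta) cos(theta) dtheta
        s += (x * (1.0 - x)) ** (a - 1.0) * 2 * math.sin(th) * math.cos(th) * h
    return s
```

For $\lambda=1/2$ the integrand is just $x(1-x)$ and both sides equal $1/6$; for $\lambda>1$ the endpoint singularities are absorbed by the substitution.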
\begin{figure}[t] \includegraphics[width=\textwidth]{anycs.eps} \caption{One-body density matrix $g_1(x)$ for $N=51$, for several values of $\lambda$ and $\kappa$ (the same in the four plots). We consider the sum of the first few harmonics: $5$ harmonics are included for $\lambda>1$ and only one for $\lambda<1$ because, as is well known, the asymptotic features of the series are more severe for these values of $\lambda$. Distances $x$ are measured in units of the mean density.} \label{figCS} \end{figure} The anyonic statistics is obtained by setting $np-l=nq-m$ and so \begin{equation} \fl g_1^{\kappa} (x) = \sum_{m=-\infty}^{\infty} (-1)^{(\kappa/2+m)} \frac{H_{\lambda}^{\lambda/2-\kappa/2-m}(\lambda) e^{i \pi x (\kappa+2m)(N+1)/L}} {{(2 N |\sin \pi x/L| )^{(\lambda/2 +2 (m+ \kappa/2)^2/\lambda )}}}\,, \end{equation} where $H_{\lambda}^{\lambda/2-\kappa/2-m}(\lambda)$ is intended to be continued as explained in the appendix. Explicitly we get \begin{equation} \fl \rho_\infty= \frac{(-1)^{\kappa/2}}{2^{\lambda/2+\kappa^2/(2\lambda)}}H^{\frac{\lambda-\kappa}2}_\lambda\,, \qquad \frac{b_{m+1}}{b_m}= - \left[ \frac{\Gamma(\frac12+ \frac\kappa{2\lambda}+\frac{m+1}{2\lambda})}{ \Gamma(\frac12-\frac\kappa{2\lambda}-\frac{m}{2\lambda})}\right]^2 2^{2(2m+2\kappa+ 1)/\lambda}\,, \end{equation} and $b_0=1$. This expression agrees with the general harmonic expansion (\ref{main}) with $K=\lambda^{-1}$, as is well known (the parameter $K$ does not depend on the statistics $\kappa$, so it is the same as the one derived by Ha \cite{h-94} for $\kappa=\lambda$). In Fig. \ref{figCS} we report the correlations obtained by summing a few of the first terms in the harmonic expansion for several values of the anyonic parameter $\kappa$ at fixed $\lambda$, to show the crossover from the fermionic to the bosonic curves. The extreme results $\kappa=0,1$ were already determined in Ref.
\cite{agls-06} (small manipulations are needed to show that they are identical), where a perfect agreement with quantum Monte Carlo simulations has been shown. This harmonic expansion has been previously calculated in specific cases (bosons and fermions for generic $\lambda$ in \cite{agls-06} and for $\kappa=\lambda$ in \cite{h-94}). It is a long but simple calculation to show that all these expansions agree. Other exact results appear in Refs. \cite{s-92,sla-93,hz-93,f-92,slp-95,fz-96,wy-95,kh-07}, also for dynamical correlation functions. \section{Conclusions} \label{secconcl} In this manuscript we proposed a variation of the replica method that allows the calculation of the large-distance asymptotics of off-diagonal correlation functions in anyonic models. The main result we found is that the replicated correlation $Z_{2np}(t)$ is a function with $2p$ branches corresponding to different anyon statistics $\kappa=q/p$ with $q=0,\dots 2p-1$. After having understood this, it is almost straightforward, though tedious, to obtain all the harmonic terms of the large-distance expansion from known results for bosons. We applied the method to a gas of impenetrable anyons (Tonks-Girardeau) and to the most general anyonic Calogero-Sutherland model. For the former a number of analytic and numerical checks confirmed the validity of the replica approach. For the latter, for special values of the interaction parameters (i.e. integer or rational), a number of exact results have already been found thanks to random matrix theory and its connection with Jack polynomials \cite{sla-93,hz-93,f-92,h-94,slp-95,fz-96}. In some cases the correspondence of the two approaches is obvious, while in some others (for rational $\lambda=p/q$ the correlation functions are given by sums over fractional excitations involving $p+q$ quantum numbers, giving an irregular function of $\lambda$ with no clear limit to irrational values) it implies particular relations for Jack polynomials.
Showing this correspondence in full generality remains an open problem. At the same time, another open problem is the correspondence of our approach to the one by Patu, Korepin and Averin \cite{pka-08} based on Fredholm determinants. We stress that (as far as we are aware) the explicit correspondence of the two approaches has not yet been proved even for bosons. The results coming from these two sets of works give an almost complete physical picture of the behavior of the off-diagonal correlator of the gas of impenetrable anyons. Beyond the impenetrable limit, only the Bethe solution of the Lieb-Liniger model is known \cite{k-99,bg-06}, with still no attempt to tackle the hard problem of computing the correlation functions exactly. Only the power-law structure (and the corresponding singularities) is known from bosonization \cite{cm-07,pka-07}. A powerful approach would be to generalize the quantum inverse scattering methods for bosons \cite{Kbook} to anyons and then to mix integrability and numerics to get the full correlation functions from the form factors (along the lines of Ref. \cite{cc-06} for bosons). \section*{Note Added} After the completion of this work, a manuscript appeared where some large-distance asymptotics of the off-diagonal correlations for the Tonks-Girardeau gas have been found at finite temperature \cite{pka-new}. The results obtained there are different from and complementary to ours. Notice that the integral system of partial differential equations they derive is independent of the anyonic parameter $\kappa$, which enters only through the initial condition, in line with our previous \cite{sc-08} and current results. \section*{Acknowledgments} We thank G. Marmorini, S. Ouvry, V. Pasquier, E. Tonni, and P. Wiegmann for useful discussions. This work has been done in part while the authors were guests of the Galileo Galilei Institute in Florence, whose hospitality is kindly acknowledged. Financial support of the ESF (INSTANS activity) is also acknowledged.
\section{Introduction} \label{sec:intro} \noindent The identification of the sources of ultra-high energy cosmic rays is still an open issue, and in fact a tricky business. The excellent information from the two large experiments, the Pierre Auger Observatory in the South and the Telescope Array in the North, on spectrum~{\cite{Fenu:2017hlc, Matthews:2017hlc}} and chemical composition~{\cite{Unger:2017fhr, Abbasi:2018nun}} hardly constrains source scenarios, as these properties are difficult to predict from first principles and are usually chosen in any scenario suitably to fit the data. A new hope to eventually identify sources has been raised by the tentative claim of intermediate-scale anisotropies of the arrival directions of UHECR above a few tens of EeV~{\cite{Matthews:2017hlc, Aab:2018chp}}. If they can be further corroborated, one may compare them with sky-distributions of putative sources. Given the complexity of the physics of cosmic-ray acceleration and propagation, however, it can hardly be assumed that na\"ive comparisons with astronomical catalogs will be conclusive. What needs to be done is to develop a parameterized source model attached to known individual astronomical objects (an example for radio galaxies is presented in these proceedings by Rachen \& Eichmann, \pos{PoS(ICRC2019)396}), as well as a realistic setup for extragalactic and Galactic magnetic fields including the uncertainties in their modeling. The propagation of UHECR in such a setup has then to be probed in extensive simulations, following the individual trajectories of a very large number of particles. Only the results of such simulations, when compared with real arrival direction distributions, will allow us to conclusively constrain UHECR source scenarios.
The currently most advanced tool to perform such simulations is CRPropa~\cite{2016JCAP...05..038A}, which has all processes relevant for UHECR propagation implemented and can apply them to calculate realistic 3D trajectories, potentially even considering cosmological evolution during long propagation paths (then called a 4D simulation). There is a fundamental problem with these kinds of simulations, though. Sources of UHECR are almost certainly extragalactic~{\cite{Aab:2017tyv}}, and may be spread over a volume of at least 300\,Mpc radius. Neglecting effects of our Galaxy for the time being, we may then consider a putative detector -- called an observer sphere -- of the order of the size of our Galaxy, roughly 30\,kpc, placed in the center of this volume.\footnote{ The reason for this common choice is not that propagation effects inside the Galaxy are ignored, but that a different method is used in CRPropa to describe them. We will use the terms ``observer sphere'' and ``detector'' synonymously throughout this paper. } The task of hitting this observer sphere is comparable to hitting the inner bull of a dartboard from 12 meters distance -- with the darts blown around by strong whirlwinds (i.e., the magnetic fields). The worst part of this dart game, however, is that the players are blindfolded and do not know where the board hangs. While the large distance and the whirling are given by nature and we have to deal with them, the blindfolding is actually an unnecessary complication of the game. If we allow ``the players'' (i.e., our cosmic-ray simulation) to ``aim'', we can significantly increase the number of ``hits'', i.e., simulation candidates which can be used for astrophysical analysis, while at the same time avoiding any bias in the obtained results as long as we are able to correct for the aiming in a mathematically proper way.
In this paper, we introduce the mathematical foundations of such a targeting mechanism, demonstrate its efficiency in a prototype implementation with CRPropa\,3, and briefly discuss how it may be implemented as a fixed part of future CRPropa versions. \section{The probability distribution of event counts} \label{sec:event_counts} \noindent We are interested in modeling the statistical detection probability of particles emitted from a source. In our simulations we assume an ideal detector, meaning that if a particle hits the detector, the event is registered in the $i$-th pixel of a sky map with probability one. Consequently, the conditional probability distribution for detector events can be expressed as: \begin{equation} \Pi(\mathrm{event}_i|\alpha_p,\delta_p, r_p) = W(\alpha_p,\delta_p, r_p) \, , \end{equation} where $\alpha_p,\,\delta_p$ are the right ascension and declination of the $p$-th particle at the detector, and $r_p$ the distance it has traveled at arrival time. The detector kernel $W(\alpha_p,\delta_p, r_p)$ is one for parameters that hit the detector and zero otherwise. In fact, we are not interested in the conditional distribution but in the marginal distribution of events given as: \begin{equation} \Pi(\mathrm{event}_i) = \int \mathrm{d}\alpha_p \, \mathrm{d}\delta_p \, \mathrm{d}r_p \Pi(\mathrm{event}_i|\alpha_p,\delta_p, r_p) \, \Pi(\alpha_p,\delta_p, r_p) \, \end{equation} where $\Pi(\alpha_p,\delta_p, r_p)$ is the distribution of arrival directions and distances to the detector as calculated by the CRPropa code.
Note that this distribution can be written as the marginal distribution over the initial conditions of the CRPropa simulation, given by the emission angles $\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p$, as: \begin{equation} \Pi(\alpha_p,\delta_p, r_p) = \int \mathrm{d}\alpha^{\mathrm{init}}_p \, \mathrm{d}\delta^{\mathrm{init}}_p \, \Pi(\alpha_p,\delta_p, r_p | \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, , \end{equation} where $\Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )$ is the distribution of emission angles at the source. In general, for an isotropic source, emission angles are uniformly distributed over the full $4\pi$-geometry. Given these equations the distribution of detected events can be expressed as: \begin{eqnarray} \Pi(\mathrm{event}_i) &=& \int \mathrm{d}\alpha_p \, \mathrm{d}\delta_p \, \mathrm{d}\alpha^{\mathrm{init}}_p \, \mathrm{d}\delta^{\mathrm{init}}_p \, \mathrm{d}r_p \, \Pi(\mathrm{event}_i|\alpha_p,\delta_p, r_p) \, \Pi(\alpha_p,\delta_p, r_p | \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \nonumber \\ &=& \int \mathrm{d}\alpha_p \, \mathrm{d}\delta_p \, \mathrm{d}\alpha^{\mathrm{init}}_p \, \mathrm{d}\delta^{\mathrm{init}}_p \, \mathrm{d}r_p \, W(\alpha_p,\delta_p, r_p) \, \Pi(\alpha_p,\delta_p, r_p | \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, .
\label{eq:event_integral} \end{eqnarray} In principle this is a statistical description of the CRPropa simulation framework: the CRPropa code simulates individual realizations of the joint distribution $\Pi(\alpha_p,\delta_p, r_p | \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )$ via the following process: \begin{itemize} \item draw random emission angles at the source ($\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p$) \item follow particle trajectories to calculate the particle positions at arrival time ($\alpha_p,\delta_p, r_p$) \end{itemize} Note that, since it is possible to draw random realizations of the joint parameter set $\alpha_p,\delta_p, r_p,\alpha^{\mathrm{init}}_p$ and $\delta^{\mathrm{init}}_p$, it is possible to estimate the integral in equation \ref{eq:event_integral} via the Monte Carlo approximation \begin{eqnarray} \Pi(\mathrm{event}_i) &\approx& \frac{1}{N} \sum_p W(\alpha_p,\delta_p, r_p) \label{eq:event_est_stan} \end{eqnarray} where the index $p$ labels different random particle simulations performed with the CRPropa code. Also note that the detection probability is proportional to the total intensity of observed particles at a given solid angle in the sky. A particular issue arises from the fact that, when simulating particle propagation through cosmological volumes, it is very hard to find trajectories that actually hit the rather small observer sphere. This is mainly because, on physical grounds, it is reasonable to assume an isotropic emission of cosmic rays from the source. Consequently, a large fraction of randomly simulated particle trajectories is of no interest for predictions of observational properties since they miss the detector. This problem renders 3D and 4D cosmic-ray propagation simulations numerically challenging.
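To get a feeling for the numbers, consider a straight-line toy model (our own illustration, not a CRPropa run; no magnetic field, made-up geometry): a source at the origin emits isotropically towards a spherical observer of radius $s$ at distance $D$. The geometric hit probability is the solid-angle fraction $\bigl(1-\cos\arcsin(s/D)\bigr)/2$, which is tiny even for a generous $s/D=0.1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up toy geometry: observer sphere of radius s centred at distance D
D, s = 10.0, 1.0
n = 100_000

# isotropic emission directions (uniform on the unit sphere)
v = rng.normal(size=(n, 3))
x = v / np.linalg.norm(v, axis=1, keepdims=True)

# a straight ray hits the sphere iff its angle to the centre direction
# (here the +x axis) is at most arcsin(s/D)
cos_min = np.cos(np.arcsin(s / D))
hits = int(np.count_nonzero(x[:, 0] >= cos_min))

p_geom = 0.5 * (1.0 - cos_min)   # analytic solid-angle fraction, ~2.5e-3
print(hits, hits / n, p_geom)
```

For the $s/D=10^{-2}$ of the reference setup used later in this paper, the fraction drops to $\sim2.5\times10^{-5}$, i.e., only tens of hits per million emitted particles.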
The integral given in equation {\ref{eq:event_integral}} can also be solved in a slightly different fashion via an importance sampling approach: \begin{eqnarray} \Pi(\mathrm{event}_i) &=& \int \mathrm{d}\alpha_p \, \mathrm{d}\delta_p \, \mathrm{d}\alpha^{\mathrm{init}}_p \, \mathrm{d}\delta^{\mathrm{init}}_p \, \mathrm{d}r_p W(\alpha_p,\delta_p, r_p) \; \Pi(\alpha_p,\delta_p, r_p \,|\, \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \nonumber \\ &=& \int \mathrm{d}\alpha_p \, \mathrm{d}\delta_p \, \mathrm{d}\alpha^{\mathrm{init}}_p \, \mathrm{d}\delta^{\mathrm{init}}_p \, \mathrm{d}r_p W(\alpha_p,\delta_p, r_p) \, \nonumber \\ & & \phantom{\int} \times\; \Pi(\alpha_p,\delta_p, r_p \,|\, \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \frac{\Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )}{\Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )} \, \label{eq:event_integral_imp} \end{eqnarray} where in the last line we inserted a factor of one; the probability distribution $\Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )$ can now be freely chosen without changing the value of the integral. One can now draw realizations of the joint distribution $\Pi(\alpha_p,\delta_p, r_p | \alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p ) \, \Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )$ with $\Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )$ chosen such that it increases the probability of hitting the detector target.
The corresponding Monte Carlo approximation for the event distribution is then simply given by: \begin{equation} \Pi(\mathrm{event}_i) \;\approx\; \frac{1}{N} \sum_p W(\alpha_p,\delta_p, r_p)\, \frac{\Pi(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )}{\Pi'(\alpha^{\mathrm{init}}_p,\delta^{\mathrm{init}}_p )}\, \nonumber \;\;=\;\; \frac{1}{N} \sum_p W(\alpha_p,\delta_p, r_p)\, \omega_p \, , \label{eq:event_est_gen} \end{equation} where the weight $\omega_p$ accounts for the modification of the statistical distribution of emission angles. Equation \ref{eq:event_est_gen}, therefore, is a generalization of the standard estimator presented in equation \ref{eq:event_est_stan}. \section{A simple algorithm to target} \noindent This work aims at providing optimal emission directions for sources in the CRPropa code, such that the yield of simulated particle trajectories hitting a distant detector is maximized. Because particles do not necessarily travel in straight lines, but their trajectories may be bent by magnetic fields or other kinds of interactions, the optimal emission direction at the source does not necessarily point directly to the detector. Moreover, random magnetic fields or scattering events can broaden the emitted particle distribution, such that the shape of the target looks blurred from the perspective of the source. For an arbitrary source in a general CRPropa simulation, the exact optimal direction and the width of the emission distribution are not known and need to be identified on the fly during runtime. We describe the optimal emission probability distribution for arbitrary distant sources by the von Mises-Fisher (vMF) distribution \begin{equation} \label{eq:vMF_hit_prob} \Pi(x|\mathrm{hit}) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{4\pi}, & \qquad \kappa=0 \\ \displaystyle \frac{\kappa}{2\pi \left(1-\mathrm{e}^{-2\kappa} \right)} \mathrm{e}^{\kappa \left( \vec{\mu}^T\vec{x}-1\right)}, & \qquad \kappa>0 \end{array}\right.
\, , \end{equation} where $\vec{\mu}$ is the Cartesian unit vector corresponding to the preferred direction of emission, $\vec{x}$ is a unit vector of a random direction on the 2-sphere, and $\kappa$ controls the width of the distribution of random emission directions around the preferred direction. In particular, we choose $\kappa$ simply by requiring that in the case of geometrical optics a fraction $P$ of emitted particles would hit the observer sphere. More specifically, we need to solve the equation \begin{equation} P \;=\; 2\pi\int_a^1 \mathrm{d}y\;\;\frac{\kappa}{2\pi \left(1-\mathrm{e}^{-2\kappa} \right)}\;\mathrm{e}^{\kappa \left( y -1 \right)} \;\;=\;\; \frac{1-\mathrm{e}^{\kappa \left(a-1 \right)}}{1-\mathrm{e}^{-2\kappa}}\, . \end{equation} If we assume $\kappa \gg 1 $ we can find the approximation \begin{equation} \label{eq:estimate_kappa} \kappa = \frac{\mathrm{ln}(1-P)}{a-1}\, , \end{equation} where $a=\cos\bigl(\mathrm{arctan}(s/D)\bigr)$ is the cosine of the apparent angular radius of the detector, with $s$ the radius of the observer sphere and $D$ the distance between the source and the detector. By choosing the hit probability $P$ one can now adjust the number of simulated particles that are expected to arrive at the detector. When, e.g., the particle trajectories get close to the diffusion regime, the naive geometric targeting will be sub-optimal; hence one should choose $P$ to have a suitable ratio of exploitation and exploration. In practice, setting $P$ is a matter of choice and of no importance to the validity of the algorithm. In the large-sample limit the algorithm will converge to the correct result, as the importance weights $\omega_n$ ensure that the algorithm simulates the correct emission statistics asymptotically. Typical values may range from $P=0.1$, if we want to be conservative and be sure not to miss any unexpected paths to the target, to $P=0.9$ when we expect to be close to the case of geometrical optics.
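A minimal numerical sketch of this choice of $\kappa$ (ours, not CRPropa code): for a small target, plugging the approximate $\kappa$ back into the exact cap integral recovers $P$ up to terms of order $\mathrm{e}^{-2\kappa}$, so the approximation is self-consistent.

```python
import numpy as np

def kappa_from_hit_probability(P, a):
    # approximation kappa = ln(1-P)/(a-1), valid for kappa >> 1
    return np.log(1.0 - P) / (a - 1.0)

def exact_hit_probability(kappa, a):
    # exact vMF mass over the spherical cap y in [a, 1]
    return (1.0 - np.exp(kappa * (a - 1.0))) / (1.0 - np.exp(-2.0 * kappa))

a = 0.99   # cosine of the apparent angular radius, close to 1 for a small target
for P in (0.1, 0.5, 0.9):
    k = kappa_from_hit_probability(P, a)
    print(P, k, exact_hit_probability(k, a))
```

For $a=0.99$ and $P=0.5$, for instance, $\kappa\approx69$, comfortably in the $\kappa\gg1$ regime assumed above.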
To find the optimal emission direction and the width of the distribution, i.e., the parameters $\vec{\mu}$ and $\kappa$, we run the simulation in terms of several epochs $N_{\mathrm{epoch}}$, where each epoch simulates a batch of $N_{\mathrm{batch}}$ simulation particles. The directions of emission at the source are drawn from a vMF distribution, using algorithms readily discussed in the literature~\cite{Ulrich:1984:CGD,tROB04a}. We record these emission directions and follow particle trajectories through the simulation. At the end of every batch of simulations we determine new optimal parameters for the vMF distribution and run a new batch with updated parameters. To learn the target distribution the algorithm relies only on those emission directions $\vec{x}_n$ whose trajectories end at the detector surface. Since these particles have been emitted according to a vMF distribution, while the true physical source would emit uniformly over $4\pi$, we have to estimate their importance weights as \begin{equation} \omega_n \;=\; \frac{1}{4\pi} \, \frac{1}{\Pi(x\,|\,\mathrm{hit})} \;\;=\;\; \frac{ \left(1-\mathrm{e}^{-2\kappa} \right)}{2\,\kappa} \mathrm{e}^{-\kappa \left( \vec{\mu}^T\vec{x}-1\right)} \, . \end{equation} Given successful emission directions $\vec{x}_n$ and the corresponding importance weights $\omega_n$ we may now estimate the preferred emission direction as \begin{equation} \vec{\mu} = \frac{\sum_{n=0}^{N_{\mathrm{batch}}} \vec{x}_n \, \omega_n }{ \left|\sum_{n=0}^{N_{\mathrm{batch}}} \vec{x}_n \, \omega_n\right|} \, , \end{equation} the detector size $a$ by \begin{equation} a = \frac{\sum_{n=0}^{N_{\mathrm{batch}}} \left(\vec{\mu}^T \vec{x}_n \right)^2 \, \omega_n }{\sum_{n=0}^{N_{\mathrm{batch}}} \omega_n}\, \end{equation} and the corresponding optimal parameter $\kappa$ of the vMF distribution is then found from eq.~\ref{eq:estimate_kappa}.
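The per-batch update described above can be sketched in a few lines (our own illustration; the function names are not CRPropa API). We draw from the vMF by inverse-CDF sampling of the polar cosine, weight the successful emission directions back to an isotropic source, and re-estimate $\vec{\mu}$ and $a$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_vmf(mu, kappa, n):
    """Draw n unit vectors from a vMF distribution around mu (kappa > 0)."""
    u = rng.random(n)
    # inverse CDF of the polar cosine w, whose density is prop. to exp(kappa*w)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * rng.random(n)
    # orthonormal frame perpendicular to mu
    e1 = np.cross(mu, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-8:
        e1 = np.cross(mu, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    s = np.sqrt(np.clip(1.0 - w ** 2, 0.0, None))
    return (w[:, None] * mu
            + (s * np.cos(phi))[:, None] * e1
            + (s * np.sin(phi))[:, None] * e2)

def batch_update(x_hit, mu, kappa):
    """Importance-weighted estimates of mu and a from the hits of one batch."""
    omega = (1.0 - np.exp(-2.0 * kappa)) / (2.0 * kappa) \
            * np.exp(-kappa * (x_hit @ mu - 1.0))
    m = (x_hit * omega[:, None]).sum(axis=0)
    mu_new = m / np.linalg.norm(m)
    a_new = ((x_hit @ mu_new) ** 2 * omega).sum() / omega.sum()
    return mu_new, a_new

mu = np.array([1.0, 0.0, 0.0])
x = sample_vmf(mu, 50.0, 5000)          # pretend all 5000 trajectories hit
mu_est, a_est = batch_update(x, mu, 50.0)
```

In a real run only the recorded emission directions of detected particles enter `batch_update`; $\kappa$ for the next batch then follows from the updated $a$ as described above.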
\section{Speedup test} \begin{figure*}[tb] \centering \includegraphics[width=.49\linewidth]{./MapTargetedEmission_new.png} \includegraphics[width=.49\linewidth]{./MapIsotropicEmission_new.png} \caption{Comparison of sky maps for the reference simulation: one spherical observer with a radius of $R_{\mathrm{obs}} = 0.1$ Mpc, one source at a distance of $D = 10$ Mpc, emitting 1M protons with an energy of $E = 10$ EeV, a Kolmogorov-type turbulent magnetic field with $B_{\mathrm{RMS}} = 1$ nG, a hit probability of $P = 0.1$ and no interactions included. Left: the map obtained using targeted emission. Right: the map obtained using isotropic emission. In both cases the same number of particles were emitted from the source. The total number of hits is $\sim100$ times larger in the left figure compared with the right figure.}\label{Fig:Skymaps} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.82\linewidth]{./Speedup} \caption{Test of the speedup of CRPropa using directed emission. Reference simulation: one spherical observer with a radius of $R_{\mathrm{obs}} = 0.1$ Mpc, one source at a distance of $D = 10$ Mpc, emitting 1M protons with an energy of $E = 10$ EeV, a Kolmogorov-type turbulent magnetic field with $B_{\mathrm{RMS}} = 1$ nG, a hit probability of $P = 0.1$ and no interactions included. For the other scenarios one parameter of the reference simulation is changed while all others are kept the same.}\label{Fig:Speedup} \end{figure*} \noindent To test the speedup that targeted emission alone (without the learning method described above for determining the optimal emission settings) can provide, we run CRPropa simulations with exactly the same settings, only changing between targeted emission and isotropic emission.
The speedup is then defined as \begin{equation} \mathrm{S} = \frac{t_{\mathrm{iso}}/N_{\mathrm{iso}}}{t_{\mathrm{tar}}/N_{\mathrm{tar}}} \end{equation} with $t_{\mathrm{iso}}$ ($t_{\mathrm{tar}}$) and $N_{\mathrm{iso}}$ ($N_{\mathrm{tar}}$) the time the simulation took and the number of hits at the observer for isotropic (targeted) emission. For this test we run a reference simulation with one spherical observer with a radius of $R_{\mathrm{obs}} = 0.1$~Mpc and one source at a distance of $D = 10$~Mpc emitting protons with an energy of $E = 10$~EeV in a Kolmogorov-type turbulent magnetic field with $B_{\mathrm{RMS}} = 1$~nG and a maximum correlation length of 1~Mpc. The hit probability $P$ has not been optimised but has arbitrarily been set to $P=0.1$. For this reference simulation all interactions have been switched off. In both the isotropic and targeted emission cases 1M particles have been emitted from the source. The speedup found in this case is $S \approx 103$, already a huge computational improvement without any optimization of $P$ or other simulation settings. A comparison between the two sky maps that were obtained for this reference simulation, one for targeted emission and one for isotropic emission with the same number of particles emitted from the source, is given in Fig.~\ref{Fig:Skymaps}. To see how the simulation parameters $R_{\mathrm{obs}}$, $D$, $E$, $B_{\mathrm{RMS}}$ and $P$ and the inclusion of interactions with photon backgrounds influence the speedup, we change one parameter of the reference simulation at a time and recalculate the speedup. The results of this procedure are given in Fig.~\ref{Fig:Speedup}. This shows that, for the scenarios tested here, the speedup can vary between $S \approx 10$ and $S \approx 6500$ depending on the simulation setup. This should be considered a minimal speedup, as no auto-tuning of $P$ or the emission direction has been included.
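The bookkeeping behind this number is simply cost per detected event; a trivial helper (ours, with illustrative numbers) makes the definition concrete:

```python
def speedup(t_iso, n_iso, t_tar, n_tar):
    # time per hit (isotropic) divided by time per hit (targeted)
    return (t_iso / n_iso) / (t_tar / n_tar)

# e.g. equal wall time, ~103x more hits with targeting, as in the reference run
print(speedup(1.0, 1_000, 1.0, 103_000))
```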
\section{Further development of the method and implementation in CRPropa} \noindent In our approach so far the desired hit probability $P$ was a matter of choice, and the vMF parameters $\kappa$ and $\vec{\mu}$ were optimized only with respect to ensuring that the prior choice of $P$ matches the posterior hit probability in the given simulation setup. Certainly there is an optimal choice of $P$ for every setup, so it would be interesting to find this value by simulations, which can be done by Bayesian sampling. In practice this means maximizing the logarithmic posterior distribution given as \begin{equation} \label{eq:log_meta_post} \ln\left( \phantom{\big(}\!\!\!\Pi\left(\kappa,\vec{\mu},P\,|\,\{\vec{x}\}_{\rm hit},\{\vec{x}\}_{\rm nohit}\right)\phantom{\big)}\!\!\!\!\!\right) \;\;=\;\; \sum_{p=0}^{N_{\rm hit}} \ln\left( \Pi(\vec{x}_p\,|\,{\rm hit}) \right) + \sum_{q=0}^{N_{\rm nohit}} \ln\left(\Pi(\vec{x}_q\,|\,{\rm nohit})\right)\quad, \end{equation} where $\Pi(\vec{x}_p\,|\,{\rm hit})$ is the vMF distribution invoked for the parameters $P$, $\kappa$ and $\vec{\mu}$ chosen for the batch, \begin{equation} \Pi(\vec{x}\,|\,{\rm nohit}) \;=\; \frac{1}{1-P}\;\left(1- \frac{P\kappa}{2\pi \left(1-\mathrm{e}^{-2\kappa} \right)} \mathrm{e}^{\kappa \left( \vec{\mu}^T\vec{x}-1\right)}\right)\;, \end{equation} and $p$ and $q$ label particles that hit or miss the target, respectively. Note that in this way we use information from all particles simulated in the batch; if we only had particles hitting the target, the task of finding improved meta parameters would reduce to a standard vMF regression with $P$ as a free parameter. Optimal maximum a posteriori values for $P$ and the corresponding parameters $\kappa$ and $\vec{\mu}$ are then found by employing a standard numerical optimizer to maximize equation {\ref{eq:log_meta_post}}.
The functionality described here will be made publicly available in the near future as part of the main repository of CRPropa.\footnote{ \href{https://crpropa.desy.de/}{crpropa.desy.de}} The targeting algorithm using the vMF distribution will be implemented as an extension of the {\sc Source} module of CRPropa. The learning routine to obtain the optimal parameters of the vMF distribution will be added to the main repository as a plugin to the code structure. Instructions on how to use the targeting algorithm, together with the learning routine, will be made available on the CRPropa website as part of a dedicated example page. More information on the method and its capabilities will be given in an upcoming journal publication (Jasche, van Vliet \& Rachen, in preparation). \begin{footnotesize} \subsection*{Acknowledgements} \noindent This research was supported in part by the DFG cluster of excellence \textit{Origin and Structure of the Universe}.\footnote{ \href{http://www.universe-cluster.de/}{www.universe-cluster.de}} AvV acknowledges financial support from the NWO Astroparticle Physics grant WARP and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 646623). \end{footnotesize}
\section{Introduction} Let $\Gamma \subset \mathbb{R}^2$ be a smooth nonintersecting open arc, and assume that $\Gamma$ can be extended to an arbitrary smooth, simply connected, closed curve $\partial \Omega$ enclosing a bounded domain $\Omega$ in $\mathbb{R}^2$. Let $k>0$ be the wave number, and let $\theta \in \mathbb{S}^{1}$ be the incident direction, where $\mathbb{S}^{1}=\{x \in \mathbb{R}^2 : |x|=1 \}$ denotes the unit sphere in $\mathbb{R}^2$. We consider the following direct scattering problem: For $\theta \in \mathbb{S}^{1}$ determine $u^s$ such that \begin{equation} \Delta u^s+k^2u^s=0 \ \mathrm{in} \ \mathbb{R}^2\setminus \Gamma, \label{1.1} \end{equation} \begin{equation} u^s=-\mathrm{e}^{ik\theta \cdot x} \ \mathrm{on} \ \Gamma \label{1.2} \end{equation} \begin{equation} \lim_{r \to \infty} \sqrt{r} \biggl( \frac{\partial u^{s}}{\partial r}-iku^s \biggr)=0, \label{1.3} \end{equation} where $r=|x|$, and (\ref{1.3}) is the {\it Sommerfeld radiation condition}. Precisely, this problem is understood in the variational form, that is, determine $u^s \in H^{1}_{loc}(\mathbb{R}^2\setminus \Gamma)$ satisfying $u^s\bigl|_{\Gamma}=-\mathrm{e}^{ik\theta \cdot x}$, the Sommerfeld radiation condition (\ref{1.3}), and \begin{equation} \int_{\mathbb{R}^2 \setminus \Gamma} \bigl[ \nabla u^s \cdot \nabla \overline{\varphi}-k^2u^s\overline{\varphi} \bigr]dx=0, \label{1.4} \end{equation} for all $\varphi \in H^{1}(\mathbb{R}^2\setminus \Gamma)$, $\varphi \bigl|_{\Gamma}=0$, with compact support. Here, $H^{1}_{loc}(\mathbb{R}^2\setminus \Gamma)=\{u : \mathbb{R}^2\setminus \Gamma \to \mathbb{C} : u \bigl|_{B\setminus \Gamma} \in H^{1}(B\setminus \Gamma)\ \mathrm{for\ all\ open\ balls}\ B \}$ denotes the local Sobolev space of order one. \par It is well known that there exists a unique solution $u^{s}$ and it has the following asymptotic behavior (see, e.g., \cite{D. Colton and R.
Kress}): \begin{equation} u^s(x)=\frac{\mathrm{e}^{ikr}}{\sqrt{r}}\Bigl\{ u^{\infty}(\hat{x},\theta)+O\bigl(1/r \bigr) \Bigr\} , \ r \to \infty, \ \ \hat{x}:=\frac{x}{|x|}. \label{1.5} \end{equation} The function $u^{\infty}$ is called the {\it far field pattern} of $u^s$. With the far field pattern $u^{\infty}$, we define the {\it far field operator} $F :L^{2}(\mathbb{S}^{1}) \to L^{2}(\mathbb{S}^{1})$ by \begin{equation} Fg(\hat{x}):=\int_{\mathbb{S}^{1}}u^{\infty}(\hat{x},\theta)g(\theta)ds(\theta), \ \hat{x} \in \mathbb{S}^{1}. \label{1.6} \end{equation} The inverse scattering problem we consider in this paper is to reconstruct the unknown arc $\Gamma$ from the far field pattern $u^{\infty}(\hat{x},\theta)$ for all $\hat{x} \in \mathbb{S}^{1}$ and all $\theta \in \mathbb{S}^{1}$ at one fixed $k>0$. In other words, given the far field operator $F$, reconstruct $\Gamma$. \par In order to solve such an inverse problem, we use the idea of the monotonicity method. The feature of this method is to determine the inclusion relation between the unknown object and an artificial one by comparing the data operator with some operator corresponding to the artificial object. For recent works on the monotonicity method, we refer to \cite{R. Griesmaier and B. Harrach, B. Harrach and V. Pohjola and M. Salo1, B. Harrach and V. Pohjola and M. Salo2, B. Harrach and M. Ullrich1, B. Harrach and M. Ullrich2, Lakshtanov and Lechleiter}. \par Our aim in this paper is to provide the following two theorems. \begin{thm} Let $\sigma \subset \mathbb{R}^2$ be a smooth nonintersecting open arc.
Then, \begin{equation} \sigma \subset \Gamma \ \ \ \ \Longleftrightarrow \ \ \ \ H^{*}_{\sigma}H_{\sigma}\leq_{\mathrm{fin}} -\mathrm{Re}F,\label{1.7} \end{equation} where the Herglotz operator $H_{\sigma}:L^{2}(\mathbb{S}^{1}) \to L^{2}(\sigma)$ is given by \begin{equation} H_{\sigma}g(x):=\int_{\mathbb{S}^{1}}\mathrm{e}^{ik\theta \cdot x}g(\theta)ds(\theta), \ x \in \sigma, \label{1.8} \end{equation} the inequality on the right hand side of (\ref{1.7}) means that $-\mathrm{Re}F-H^{*}_{\sigma}H_{\sigma}$ has only finitely many negative eigenvalues, and the real part of an operator $A$ is the self-adjoint operator given by $\mathrm{Re}(A):=\displaystyle \frac{1}{2}(A+A^{*})$. \end{thm} \begin{thm} Let $B \subset \mathbb{R}^2$ be a bounded open set. Then, \begin{equation} \Gamma \subset B \ \ \ \ \Longleftrightarrow \ \ \ \ -\mathrm{Re}F \leq_{\mathrm{fin}} \tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}, \label{1.9} \end{equation} where $\tilde{H}_{\partial B}:L^{2}(\mathbb{S}^{1}) \to H^{1/2}(\partial B)$ is given by \begin{equation} \tilde{H}_{\partial B}g(x):=\int_{\mathbb{S}^{1}}\mathrm{e}^{ik\theta \cdot x}g(\theta)ds(\theta), \ x \in \partial B. \label{1.10} \end{equation} \end{thm} Theorem 1.1 determines whether an artificial open arc $\sigma$ is contained in $\Gamma$ or not, while Theorem 1.2 determines whether an artificial domain $B$ contains $\Gamma$ or not. With these two theorems we can probe $\Gamma$ from the inside and from the outside. \par This paper is organized as follows. In Section 2, we give a rigorous definition of the above inequality. Furthermore, we recall the properties of the far field operator and technical lemmas which are useful to prove the main results. In Sections 3 and 4, we prove Theorems 1.1 and 1.2, respectively. In Section 5, we give numerical examples based on Theorem 1.1. \section{Preliminaries} First, we give a rigorous definition of the inequality in Theorems 1.1 and 1.2.
\begin{dfn} Let $A, B:X \to X$ be self-adjoint compact linear operators on a Hilbert space $X$. We write \begin{equation} A\leq_{\mathrm{fin}} B,\label{2.1} \end{equation} if $B-A$ has only finitely many negative eigenvalues. \end{dfn} The following lemma was shown in Corollary 3.3 of \cite{B. Harrach and V. Pohjola and M. Salo2}. \begin{lem} Let $A, B:X \to X$ be self-adjoint compact linear operators on a Hilbert space $X$ with an inner product $\langle \cdot, \cdot \rangle$. Then, the following statements are equivalent: \begin{description} \item[(a)] $A\leq_{\mathrm{fin}} B$ \item[(b)] There exists a finite dimensional subspace $V$ in $X$ such that \begin{equation} \langle (B-A)v, v \rangle \geq0,\label{2.2} \end{equation} for all $v \in V^{\bot}$. \end{description} \end{lem} Secondly, we define several operators in order to state properties of the far field operator $F$. The data-to-pattern operator $G:H^{1/2}(\Gamma) \to L^{2}(\mathbb{S}^{1})$ is defined by \begin{equation} Gf:=v^{\infty}, \label{2.3} \end{equation} where $v^{\infty}$ is the far field pattern of a radiating solution $v$ (that is, $v$ satisfies the Sommerfeld radiation condition) such that \begin{equation} \Delta v+k^2v=0 \ \mathrm{in} \ \mathbb{R}^2\setminus \Gamma, \label{2.4} \end{equation} \begin{equation} v=f \ \mathrm{on} \ \Gamma. \label{2.5} \end{equation} The following lemma is obtained by the same argument as in Lemma 1.13 of \cite{Kirsch and Grinberg}. \begin{lem} The data-to-pattern operator $G$ is compact and injective. \end{lem} We define the single layer boundary operator $S:\tilde{H}^{-1/2}(\Gamma) \to H^{1/2}(\Gamma)$ by \begin{equation} S\varphi(x):=\int_{\Gamma} \varphi(y)\Phi(x,y)ds(y), \ x \in \Gamma, \label{2.6} \end{equation} where $\Phi(x,y)$ denotes the fundamental solution to the Helmholtz equation in $\mathbb{R}^2$, i.e., \begin{equation} \Phi(x,y):= \displaystyle \frac{i}{4}H^{(1)}_0(k|x-y|), \ x \neq y.
\label{2.7} \end{equation} Here, we denote by \begin{equation} H^{1/2}(\Gamma):= \{u\bigl|_{\Gamma} : u \in H^{1/2}(\partial\Omega) \},\label{2.8} \end{equation} \begin{equation} \tilde{H}^{1/2}(\Gamma):= \{u \in H^{1/2}(\partial \Omega) : \mathrm{supp}(u) \subset \overline{\Gamma} \}, \label{2.9} \end{equation} and by $H^{-1/2}(\Gamma)$ and $\tilde{H}^{-1/2}(\Gamma)$ the dual spaces of $\tilde{H}^{1/2}(\Gamma)$ and $H^{1/2}(\Gamma)$, respectively. Then, we have the following inclusion relations: \begin{equation} \tilde{H}^{1/2}(\Gamma) \subset H^{1/2}(\Gamma) \subset L^{2}(\Gamma) \subset \tilde{H}^{-1/2}(\Gamma)\subset H^{-1/2}(\Gamma).\label{2.10} \end{equation} For these details, we refer to \cite{McLean}. The following two lemmas were shown in Section 3 of \cite{Kirsch and Ritter}. \begin{lem} \begin{description} \item[(a)] $S$ is an isomorphism from $\tilde{H}^{-1/2}(\Gamma)$ onto $H^{1/2}(\Gamma)$. \item[(b)] Let $S_{i}$ be the boundary integral operator $(\ref{2.6})$ corresponding to the wave number $k=i$. The operator $S_{i}$ is self-adjoint and coercive, i.e., there exists $c_0>0$ such that \begin{equation} \langle \varphi, S_{i} \varphi \rangle \geq c_0 \left\| \varphi \right\|_{\tilde{H}^{-1/2}(\Gamma)}^2 \ for \ all \ \varphi \in \tilde{H}^{-1/2}(\Gamma), \label{2.11} \end{equation} where $\langle \cdot, \cdot \rangle$ denotes the duality pairing in $\langle \tilde{H}^{-1/2}(\Gamma), H^{1/2}(\Gamma) \rangle$. \item[(c)] $S-S_{i}$ is compact.
\item[(d)] There exists a self-adjoint and positive square root $S^{1/2}_{i}:L^{2}(\Gamma) \to L^{2}(\Gamma)$ of $S_{i}$ which can be extended such that $S^{1/2}_{i}:\tilde{H}^{-1/2}(\Gamma) \to L^{2}(\Gamma)$ is an isomorphism and $S^{1/2\ *}_{i}S^{1/2}_{i}=S_{i}.$ \end{description} \end{lem} \begin{lem} The far field operator $F$ has the following factorization: \begin{equation} F=-GS^{*}G^{*},\label{2.12} \end{equation} where $G^{*}:L^{2}(\mathbb{S}^{1})\to \tilde{H}^{-1/2}(\Gamma)$ and $S^{*}:\tilde{H}^{-1/2}(\Gamma) \to H^{1/2}(\Gamma)$ are the adjoints of $G$ and $S$, respectively. \end{lem} Thirdly, we recall the following technical lemmas which will be useful to prove Theorems 1.1 and 1.2. We refer to Lemmas 4.6 and 4.7 in \cite{B. Harrach and V. Pohjola and M. Salo2}. \begin{lem} Let $X$, $Y$, and $Z$ be Hilbert spaces, and let $A:X \to Y$ and $B:X \to Z$ be bounded linear operators. Then, \begin{equation} \exists C>0: \ \left\| Ax \right\|^2 \leq C\left\| Bx \right\|^2 \ for \ all \ x \in X \ \ \ \ \Longleftrightarrow \ \ \ \ \mathrm{Ran}(A^{*})\subseteq \mathrm{Ran}(B^{*}).\label{2.13} \end{equation} \end{lem} \begin{lem} Let $X$, $Y$, $V \subset Z$ be subspaces of a vector space $Z$. If \begin{equation} X\cap Y = \{ 0 \}, \ \ \ \ and \ \ \ \ X \subseteq Y+V,\label{2.14} \end{equation} then $\mathrm{dim}(X) \leq \mathrm{dim}(V)$. \end{lem} \section{Proof of Theorem 1.1} In Section 3, we will show Theorem 1.1. Let $\sigma \subset \Gamma$. We denote by $R:L^{2}(\Gamma)\to L^{2}(\sigma)$ the restriction operator, by $J:H^{1/2}(\Gamma)\to L^{2}(\Gamma)$ the compact embedding, and by $H:L^{2}(\mathbb{S}^{1}) \to L^{2}(\Gamma)$ and $\hat{H}:L^{2}(\mathbb{S}^{1}) \to H^{1/2}(\Gamma)$ the Herglotz operators, respectively.
Then by these definitions and $\hat{H}^{*}=GS$, we have \begin{eqnarray} H_{\sigma} &=&RH \nonumber\\ &=&RJ\hat{H} \nonumber\\ &=&RJS^{*}G^{*}.\label{3.1} \end{eqnarray} Using (\ref{3.1}) and Lemmas 2.4 and 2.5, $-\mathrm{Re}F-H^{*}_{\sigma}H_{\sigma}$ has the following factorization: \begin{eqnarray} -\mathrm{Re}F-H^{*}_{\sigma}H_{\sigma} &=&G\bigl[\mathrm{Re}S- SJ^{*}R^{*}RJS^{*} \bigr]G^{*} \nonumber\\ &=&G\bigl[S_{i}+\mathrm{Re}(S-S_{i})-SJ^{*}R^{*}RJS^{*} \bigr]G^{*} \nonumber\\ &=&\bigl[GW^{*}\bigr]W^{*\ -1}\bigl[S_{i}+\mathrm{Re}(S-S_{i})-SJ^{*}R^{*}RJS^{*} \bigr]W^{-1} \bigl[GW^{*}\bigr]^{*} \nonumber\\ &=&\bigl[GW^{*}\bigr]\bigl[I_{L^{2}(\Gamma)}+K \bigr]\bigl[GW^{*}\bigr]^{*}, \label{3.2} \end{eqnarray} where $W:=S^{1/2}_{i}: \tilde{H}^{-1/2}(\Gamma) \to L^{2}(\Gamma)$ is an extension of the square root of $S_{i}$, $K:=W^{*\ -1}\bigl[\mathrm{Re}(S-S_{i})-SJ^{*}R^{*}RJS^{*} \bigr]W^{-1}:L^{2}(\Gamma) \to L^{2}(\Gamma)$ is self-adjoint and compact, and $I_{L^{2}(\Gamma)}$ is the identity operator on $L^{2}(\Gamma)$. Let $V$ be the sum of the eigenspaces of $K$ associated to eigenvalues less than $-1/2$. Then $V$ is finite dimensional and \begin{equation} \langle \bigl(I_{L^{2}(\Gamma)}+K \bigr)v, v \rangle \geq0,\label{3.3} \end{equation} for all $v \in V^{\bot}$. Since for $g \in L^{2}(\mathbb{S}^{1})$ \begin{equation} \bigl[GW^{*}\bigr]^{*}g \in V^{\bot} \ \ \ \ \Longleftrightarrow \ \ \ \ g \in \bigl[(GW^{*})V\bigr]^{\bot} ,\label{3.4} \end{equation} and $\mathrm{dim}\bigl[(GW^{*})V\bigr] \leq \mathrm{dim}(V)< \infty$, we have by (\ref{3.3}) and Lemma 2.2 that $H^{*}_{\sigma}H_{\sigma}\leq_{\mathrm{fin}} -\mathrm{Re}F$.
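After discretization (as carried out in Section 5), checking a relation of the form $A\leq_{\mathrm{fin}} B$ amounts to counting the negative eigenvalues of the Hermitian matrix $B-A$. The following Python sketch illustrates only this counting step; the matrices $A$ and $B$ below are arbitrary toy data, not discretizations of $H^{*}_{\sigma}H_{\sigma}$ and $-\mathrm{Re}F$.

```python
import numpy as np

def num_negative_eigenvalues(A, B):
    """Count the negative eigenvalues of the Hermitian matrix B - A.

    In the discretized setting of Section 5, A and B would play the
    roles of H_sigma^* H_sigma and -Re F, respectively.
    """
    D = B - A
    assert np.allclose(D, D.conj().T), "B - A must be Hermitian"
    eigs = np.linalg.eigvalsh(D)        # real spectrum, ascending order
    return int(np.sum(eigs < 0.0))

# toy example: B - A = diag(-3, 1, 2) has exactly one negative eigenvalue
A = np.diag([4.0, 0.0, 0.0]).astype(complex)
B = np.diag([1.0, 1.0, 2.0]).astype(complex)
```

In an actual reconstruction, this count is evaluated for many test arcs $\sigma$, and small values signal that $\sigma$ is likely contained in $\Gamma$.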
\par Let now $\sigma \not\subset \Gamma$ and assume on the contrary that $H^{*}_{\sigma}H_{\sigma}\leq_{\mathrm{fin}} -\mathrm{Re}F$, that is, by Lemma 2.2 there exists a finite dimensional subspace $V$ in $L^{2}(\mathbb{S}^{1})$ such that \begin{equation} \langle (-\mathrm{Re}F-H^{*}_{\sigma}H_{\sigma})v, v \rangle \geq 0,\label{3.5} \end{equation} for all $v \in V^{\bot}$. Since $\sigma \not\subset \Gamma$, we can take a small open arc $\sigma_0 \subset \sigma$ such that $\sigma_0 \cap \Gamma= \emptyset$, which implies that for all $v \in V^{\bot}$ \begin{eqnarray} \left\| H_{\sigma_0}v \right\|^{2}_{L^{2}(\sigma_0)} &\leq& \left\| H_{\sigma}v \right\|^{2}_{L^{2}(\sigma)} \nonumber\\ &\leq&\langle (-\mathrm{Re}F)v, v \rangle_{L^{2}(\mathbb{S}^{1})} \nonumber\\ &=&\langle (\mathrm{Re}S^{*})G^{*}v, G^{*}v \rangle \nonumber\\ &\leq&\left\| \mathrm{Re}S^{*} \right\| \left\|G^{*}v \right\|^{2}. \label{3.6} \end{eqnarray} Before deriving a contradiction with (\ref{3.6}), we show the following lemma. \begin{lem} \begin{description} \item[(a)] $\mathrm{dim}(\mathrm{Ran}(H_{\sigma_0}^{*}))=\infty$ \item[(b)] $\mathrm{Ran}(G)\cap \mathrm{Ran}(H_{\sigma_0}^{*})=\{ 0 \}$. \end{description} \end{lem} \begin{proof}[{\bf Proof of Lemma 3.1}] {\bf (a)} By the same argument as in (\ref{3.1}) we have \begin{equation} H_{\sigma_0}=J_{\sigma_0}\hat{H}_{\sigma_0}=J_{\sigma_0}S_{\sigma_0}^{*}G_{\sigma_0}^{*},\label{3.7} \end{equation} where $G_{\sigma_0}:H^{1/2}(\sigma_0) \to L^{2}(\mathbb{S}^{1})$, $S_{\sigma_0}:\tilde{H}^{-1/2}(\sigma_0) \to H^{1/2}(\sigma_0)$, and $J_{\sigma_0}:H^{1/2}(\sigma_0)\to L^{2}(\sigma_0)$ are the data-to-pattern operator, the single layer boundary operator, and the compact embedding, respectively, corresponding to $\sigma_0$.
Since $H_{\sigma_0}^{*}=G_{\sigma_0}S_{\sigma_0}J^{*}_{\sigma_0}$, $\mathrm{Ran}(J^{*}_{\sigma_0})$ is dense, and $G_{\sigma_0}S_{\sigma_0}$ is injective, we have $\mathrm{dim}(\mathrm{Ran}(H_{\sigma_0}^{*}))=\mathrm{dim}(\mathrm{Ran}(J_{\sigma_0}^{*}))=\infty$. \par {\bf (b)} By (\ref{3.7}), we have $\mathrm{Ran}(H_{\sigma_0}^{*})\subset \mathrm{Ran}(G_{\sigma_0})$. Let $h \in \mathrm{Ran}(G)\cap \mathrm{Ran}(G_{\sigma_0})$, i.e., $h=v_{\Gamma}^{\infty}=v_{\sigma_0}^{\infty}$, where $v_{\Gamma}^{\infty}$ and $v_{\sigma_0}^{\infty}$ are the far field patterns associated to the scatterers $\Gamma$ and $\sigma_0$, respectively. Then by the Rellich lemma and unique continuation we have $v_{\Gamma}=v_{\sigma_0}\ \mathrm{in} \ \mathbb{R}^2\setminus (\Gamma \cup \sigma_0)$. Hence, we can define $v \in H^{1}_{loc}(\mathbb{R}^2)$ by \begin{eqnarray} v:=\left\{ \begin{array}{ll} v_{\Gamma}=v_{\sigma_0} & \quad \mbox{in $\mathbb{R}^2\setminus (\Gamma \cup \sigma_0)$} \\ v_{\Gamma} & \quad \mbox{on $\sigma_0$} \\ v_{\sigma_0} & \quad \mbox{on $\Gamma$} \\ \end{array} \right.\label{3.8} \end{eqnarray} and $v$ is a radiating solution to \begin{equation} \Delta v+k^2v=0 \ \mathrm{in} \ \mathbb{R}^2. \label{3.9} \end{equation} Thus $v=0 \ \mathrm{in} \ \mathbb{R}^2$, which implies that $h=0$. \end{proof} By the above lemma and Lemma 2.7, we get \begin{equation} \mathrm{Ran}(H^{*}_{\sigma_0}) \not\subseteq \mathrm{Ran}(G)+V= \mathrm{Ran}(G,\ P_{V}),\label{3.10} \end{equation} where $P_{V}:L^{2}(\mathbb{S}^{1})\to L^{2}(\mathbb{S}^{1})$ is the orthogonal projection onto $V$.
Lemma 2.6 implies that for any $C>0$ there exists a $v_c$ such that \begin{equation} \left\|H_{\sigma_0}v_{c} \right\|^2 > C^{2}\left\| \left( \begin{array}{ccc} G^{*} \\ P_{V} \\ \end{array} \right) v_c \right\|^2=C^{2}(\left\|G^{*}v_{c} \right\|^2+\left\|P_{V}v_{c} \right\|^2).\label{3.11} \end{equation} Hence, there exists a sequence $(v_m)_{m\in \mathbb{N}} \subset L^{2}(\mathbb{S}^{1})$ such that $\left\|H_{\sigma_0}v_{m} \right\| \to \infty$ and $\left\|G^{*}v_{m} \right\|+\left\|P_{V}v_{m} \right\| \to 0$ as $m \to \infty$. Setting $\tilde{v}_{m}:=v_{m}-P_{V}v_{m} \in V^{\bot}$ we have as $m\to \infty$, \begin{equation} \left\|H_{\sigma_0}\tilde{v}_{m} \right\| \geq \left\|H_{\sigma_0}v_{m} \right\|-\left\|H_{\sigma_0}\right\|\left\|P_{V}v_{m}\right\| \to \infty,\label{3.12} \end{equation} \begin{equation} \left\|G^{*}\tilde{v}_{m} \right\| \leq \left\|G^{*}v_{m} \right\|+\left\| G^{*} \right\| \left\|P_{V}v_{m} \right\| \to 0.\label{3.13} \end{equation} This contradicts (\ref{3.6}). Therefore, we have $H^{*}_{\sigma}H_{\sigma}\not\leq_{\mathrm{fin}} -\mathrm{Re}F$. Theorem 1.1 has been shown. \qed \section{Proof of Theorem 1.2} In Section 4, we will show Theorem 1.2. Let $\Gamma \subset B$. We denote by $G_{\partial B}:H^{1/2}(\partial B) \to L^{2}(\mathbb{S}^{1})$ and $S_{\partial B}:H^{-1/2}(\partial B) \to H^{1/2}(\partial B)$ the data-to-pattern operator and the single layer boundary operator, respectively, corresponding to the closed curve $\partial B$. They have the same properties as in Lemmas 2.3 and 2.4, and we have $\tilde{H}_{\partial B}^{*}=G_{\partial B}S_{\partial B}$. (See, e.g., \cite{Kirsch and Grinberg}.) We define $T:H^{1/2}(\Gamma)\to H^{1/2}(\partial B)$ by \begin{equation} Tf:=v\bigl|_{\partial B}, \label{4.1} \end{equation} where $v$ is a radiating solution such that \begin{equation} \Delta v+k^2v=0 \ \mathrm{in} \ \mathbb{R}^2\setminus \Gamma, \label{4.2} \end{equation} \begin{equation} v=f \ \mathrm{on} \ \Gamma.
\label{4.3} \end{equation} $T$ is compact since it maps $H^{1/2}(\Gamma)$ into $C^{\infty}(\partial B)$. Furthermore, by the definition of $T$ we have that $G=G_{\partial B}T$. Thus, we have \begin{eqnarray} \tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}+\mathrm{Re}F &=&G_{\partial B}S_{\partial B}S^{*}_{\partial B}G^{*}_{\partial B}+G_{\partial B}\bigl[-T\mathrm{Re}(S)T^{*} \bigr]G_{\partial B}^{*} \nonumber\\ &=&G_{\partial B}\bigl[S_{\partial B, i}S^{ * }_{\partial B, i}+ K \bigr]G_{\partial B}^{*} \nonumber\\ &=&\bigl[G_{\partial B}W^{*}\bigr]\bigl[W^{*\ -1}S_{\partial B,i}S^{ *}_{\partial B,i}W^{-1}+ K'\bigr]\bigl[G_{\partial B}W^{*}\bigr]^{*}, \ \ \ \ \ \ \ \ \ \label{4.4} \end{eqnarray} where $K$ and $K'$ are some self-adjoint compact operators, and $W:=S^{1/2}_{\partial B,i}:H^{-1/2}(\partial B) \to L^{2}(\partial B)$ is an extension of the square root of $S_{\partial B,i}$. Let $V$ be the sum of the eigenspaces of $K'$ associated to eigenvalues less than $-\frac{1}{2}\left\| (S^{ *}_{\partial B,i}W^{-1})^{-1} \right\|^{-2}$. Then $V$ is finite dimensional, and for all $g \in \bigl[(G_{\partial B}W^{*})V\bigr]^{\bot}$ we have \begin{eqnarray} &&\langle (\tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}+\mathrm{Re}F)g, g \rangle \nonumber\\ &=& \left\| (S^{ *}_{\partial B,i}W^{-1})\bigl[G_{\partial B}W^{*}\bigr]^{*}g \right\|^{2}_{H^{1/2}(\partial B)}+\langle K'\bigl[G_{\partial B}W^{*}\bigr]^{*}g, \bigl[G_{\partial B}W^{*}\bigr]^{*}g \rangle_{L^{2}(\partial B)} \nonumber\\ &\geq&\left\| (S^{ *}_{\partial B,i}W^{-1})^{-1}\right\|^{-2} \left\| \bigl[G_{\partial B}W^{*}\bigr]^{*}g \right\|^{2}-\frac{1}{2}\left\| (S^{ *}_{\partial B,i}W^{-1})^{-1} \right\|^{-2}\left\| \bigl[G_{\partial B}W^{*}\bigr]^{*}g \right\|^{2} \nonumber\\ &\geq& 0. \label{4.5} \end{eqnarray} Therefore, $-\mathrm{Re}F \leq_{\mathrm{fin}} \tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}$.
\par Let now $\Gamma \not\subset B$ and assume on the contrary that $-\mathrm{Re}F \leq_{\mathrm{fin}} \tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}$, i.e., by Lemma 2.2 there exists a finite dimensional subspace $V$ in $L^{2}(\mathbb{S}^{1})$ such that \begin{equation} \langle (\tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}+\mathrm{Re}F)v, v \rangle \geq 0,\label{4.6} \end{equation} for all $v \in V^{\bot}$. Since $\Gamma \not\subset B$, we can take a small open arc $\Gamma_0 \subset \Gamma$ such that $\Gamma_0 \cap B= \emptyset$. We define $L:H^{1/2}(\Gamma_0)\to H^{1/2}(\Gamma)$ by \begin{equation} Lf:=v\bigl|_{\Gamma}, \label{4.7} \end{equation} where $v$ is a radiating solution such that \begin{equation} \Delta v+k^2v=0 \ \mathrm{in} \ \mathbb{R}^2\setminus \Gamma_0, \label{4.8} \end{equation} \begin{equation} v=f \ \mathrm{on} \ \Gamma_0. \label{4.9} \end{equation} By the definition of $L$, we have $G_{\Gamma_0}=GL$. By $\tilde{H}_{\Gamma_0}=S^{*}_{\Gamma_0}G^{*}_{\Gamma_0}$, we have \begin{eqnarray} \left\| H_{\Gamma_0}x \right\|^{2}_{L^{2}(\Gamma_0)} &\leq& \left\| \tilde{H}_{\Gamma_0}x \right\|^{2}_{H^{1/2}(\Gamma_0)} \nonumber\\ &\leq& \left\| S^{*}_{\Gamma_0} \right\|^{2} \left\| G^{*}_{\Gamma_0}x \right\|^{2} \nonumber\\ &\leq& \left\| S^{*}_{\Gamma_0} \right\|^{2} \left\| L^{*}\right\|^{2} \left\|G^{*}x \right\|^{2},\label{4.10} \end{eqnarray} for $x \in L^{2}(\mathbb{S}^{1})$. Since $\mathrm{Re}S$ is of the form $\mathrm{Re}S=S_{i}+\mathrm{Re}(S-S_i)$, by arguments similar to those in (\ref{3.2})--(\ref{3.3}) and (\ref{4.4})--(\ref{4.5}), there exists a finite dimensional subspace $W$ in $L^{2}(\mathbb{S}^{1})$ such that for $x \in W^{\bot}$ \begin{eqnarray} \left\| G^{*}x \right\|^{2}\leq C\langle (\mathrm{Re}S)G^{*}x, G^{*}x \rangle =C\langle (-\mathrm{Re}F)x, x \rangle.
\label{4.11} \end{eqnarray} Collecting (\ref{4.6}), (\ref{4.10}), and (\ref{4.11}), we have \begin{eqnarray} \left\| H_{\Gamma_{0}}x \right\|^{2} &\leq& C\langle (-\mathrm{Re}F)x, x \rangle \leq C\left\| \tilde{H}_{\partial B}x \right\|^{2} \nonumber\\ &\leq& C\left\| S^{*}_{\partial B} \right\|^{2} \left\| G^{*}_{\partial B}x \right\|_{H^{-1/2}(\partial B)}^{2}, \label{4.12} \end{eqnarray} for $x \in (V\cup W)^{\bot}$. \begin{lem} \begin{description} \item[(a)] $\mathrm{dim}(\mathrm{Ran}(H_{\Gamma_0}^{*}))=\infty$ \item[(b)] $\mathrm{Ran}(G_{\partial B})\cap \mathrm{Ran}(H_{\Gamma_0}^{*})=\{ 0 \}$. \end{description} \end{lem} \begin{proof}[{\bf Proof of Lemma 4.1}] (a) is given by the same argument as in Lemma 3.1. \par {\bf (b)} By (\ref{3.7}), we have $\mathrm{Ran}(H_{\Gamma_0}^{*})\subset \mathrm{Ran}(G_{\Gamma_0})$. Let $h \in \mathrm{Ran}(G_{\partial B})\cap \mathrm{Ran}(G_{\Gamma_0})$, i.e., $h=v_{B}^{\infty}=v_{\Gamma_0}^{\infty}$, where $v_{B}^{\infty}$ and $v_{\Gamma_0}^{\infty}$ are the far field patterns associated to the scatterers $B$ and $\Gamma_0$, respectively. Then by the Rellich lemma and unique continuation we have $v_{B}=v_{\Gamma_0}\ \mathrm{in} \ \mathbb{R}^2\setminus (B \cup \Gamma_0)$. Hence, we can define $v \in H^{1}_{loc}(\mathbb{R}^2)$ by \begin{eqnarray} v:=\left\{ \begin{array}{ll} v_{B}=v_{\Gamma_0} & \quad \mbox{in $\mathbb{R}^2\setminus (B \cup \Gamma_0)$} \\ v_{\Gamma_0} & \quad \mbox{on $B$} \\ v_{B} & \quad \mbox{on $\Gamma_0$} \\ \end{array} \right.\label{4.13} \end{eqnarray} and $v$ is a radiating solution to \begin{equation} \Delta v+k^2v=0 \ \mathrm{in} \ \mathbb{R}^2. \label{4.14} \end{equation} Thus $v=0 \ \mathrm{in} \ \mathbb{R}^2$, which implies that $h=0$.
\end{proof} By the above lemma and Lemma 2.7, we get \begin{equation} \mathrm{Ran}(H^{*}_{\Gamma_0}) \not\subseteq \mathrm{Ran}(G_{\partial B})+(V\cup W)= \mathrm{Ran}(G_{\partial B},\ P_{V\cup W}),\label{4.15} \end{equation} where $P_{V\cup W}:L^{2}(\mathbb{S}^{1})\to L^{2}(\mathbb{S}^{1})$ is the orthogonal projection onto $V\cup W$. Lemma 2.6 implies that for any $C>0$ there exists an $x_c$ such that \begin{equation} \left\|H_{\Gamma_0}x_{c} \right\|^2 > C^{2}\left\| \left( \begin{array}{ccc} G^{*}_{\partial B} \\ P_{V\cup W} \\ \end{array} \right) x_c \right\|^2=C^{2}(\left\|G^{*}_{\partial B}x_{c} \right\|^2+\left\|P_{V\cup W}x_{c} \right\|^2).\label{4.16} \end{equation} Hence, there exists a sequence $(x_m)_{m\in \mathbb{N}} \subset L^{2}(\mathbb{S}^{1})$ such that $\left\|H_{\Gamma_0}x_{m} \right\| \to \infty$ and $\left\|G^{*}_{\partial B}x_{m} \right\|+\left\|P_{V\cup W}x_{m} \right\| \to 0$ as $m \to \infty$. Setting $\tilde{x}_{m}:=x_{m}-P_{V\cup W}x_{m} \in (V\cup W)^{\bot}$ we have as $m\to \infty$, \begin{equation} \left\|H_{\Gamma_0}\tilde{x}_{m} \right\| \geq \left\|H_{\Gamma_0}x_{m} \right\|-\left\|H_{\Gamma_0}\right\|\left\|P_{V\cup W}x_{m}\right\| \to \infty,\label{4.17} \end{equation} \begin{equation} \left\|G^{*}_{\partial B}\tilde{x}_{m} \right\| \leq \left\|G^{*}_{\partial B}x_{m} \right\|+\left\| G^{*}_{\partial B} \right\| \left\|P_{V\cup W}x_{m} \right\| \to 0.\label{4.18} \end{equation} This contradicts (\ref{4.12}). Therefore, we have $ -\mathrm{Re}F \not\leq_{\mathrm{fin}} \tilde{H}^{*}_{\partial B}\tilde{H}_{\partial B}$. Theorem 1.2 has been shown. \qed \section{Numerical examples} In Section 5, we discuss numerical examples based on Theorem 1.1.
The following three open arcs $\Gamma_j$ ($j=1,2,3$) are considered (see Figure 1): \begin{description} \item[(a)] $\Gamma_1=\left\{(s, s) \bigm| -1\leq s \leq 1 \right\}$ \item[(b)] $\Gamma_2=\left\{\Bigl(2\mathrm{sin}\bigl(\frac{\pi}{8}+(1+s)\frac{3\pi}{8} \bigr) -\frac{2}{3}, \ \mathrm{sin}\bigl(\frac{\pi}{4}+(1+s)\frac{3\pi}{4} \bigr)\Bigr) \Bigm| -1\leq s \leq 1 \right\}$ \item[(c)] $\Gamma_3=\left\{\Bigl(s, \ \mathrm{sin}\bigl(\frac{\pi}{4}+(1+s)\frac{3\pi}{4} \bigr)\Bigr) \Bigm| -1\leq s \leq 1 \right\}$ \end{description} \vspace{5mm} \begin{figure}[h] \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.15] {S1} \subcaption{$\Gamma_1$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.15] {S2} \subcaption{$\Gamma_2$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.15] {S3} \subcaption{$\Gamma_3$} \end{minipage} \caption{The original open arcs} \end{figure} \vspace{5mm} Based on Theorem 1.1, the indicator function in our examples is given by \begin{equation} I(\sigma):= \# \left\{\mathrm{negative\ eigenvalues\ of}\ -\mathrm{Re}F-H^{*}_{\sigma}H_{\sigma} \right\}. \end{equation} The idea to reconstruct $\Gamma_j$ is to plot the value of $I(\sigma)$ for many small arcs $\sigma$ in the sampling region. Then, we expect from Theorem 1.1 that the value of the function $I(\sigma)$ is low if $\sigma$ is close to $\Gamma_j$. \par Here, $\sigma$ is chosen in two ways: one is the vertical line segment $\sigma^{ver}_{i,j}:=z_{i,j}+\{0 \} \times [-\frac{R}{2M}, \frac{R}{2M}]$, where $z_{i,j}:=(\frac{Ri}{M}, \frac{Rj}{M})$ ($i,j = -M, -M+1, ..., M$) denotes the center of $\sigma^{ver}_{i,j}$, $\frac{R}{M}$ is the length of $\sigma^{ver}_{i,j}$, $R>0$ is the half side length of the sampling square region $[-R, R]^2$, and $M \in \mathbb{N}$ is taken large so that the segments are small.
The other is the horizontal one $\sigma^{hor}_{i,j}:=z_{i,j}+[-\frac{R}{2M}, \frac{R}{2M}] \times \{0 \}$. \par The far field operator $F$ is approximated by the matrix \begin{equation} F \approx \frac{2\pi}{N} \bigl(u^{\infty}(\hat{x}_l, \theta_m) \bigr)_{1 \leq l,m \leq N} \in \mathbb{C}^{N \times N}, \end{equation} where $\hat{x}_l=\bigl(\mathrm{cos}(\frac{2\pi l}{N}), \mathrm{sin}(\frac{2\pi l}{N}) \bigr)$ and $\theta_m=\bigl(\mathrm{cos}(\frac{2\pi m}{N}), \mathrm{sin}(\frac{2\pi m}{N}) \bigr)$. The far field pattern $u^{\infty}$ of the problem (\ref{1.1})--(\ref{1.3}) is computed by the Nystr\"{o}m method in \cite{Kress}. The operator $H^{*}_{\sigma}H_{\sigma}$ is approximated by \begin{equation} H^{*}_{\sigma}H_{\sigma} \approx \frac{2\pi}{N} \biggl(\int_{\sigma}e^{iky\cdot(\theta_m-\hat{x}_l)}dy \biggr)_{1 \leq l,m \leq N} \in \mathbb{C}^{N \times N}. \end{equation} When $\sigma$ is given by a vertical or horizontal line segment, we can compute the integrals in closed form: \begin{equation} \int_{\sigma^{ver}_{i,j}}e^{iky\cdot(\theta_m-\hat{x}_l)}dy=\frac{R}{M}e^{ik(\theta_m-\hat{x}_l)\cdot z_{i,j}}\mathrm{sinc}\biggl(\frac{kR}{2M\pi}\Bigl( \mathrm{sin}\bigl(\frac{2\pi m}{N}\bigr)-\mathrm{sin}\bigl(\frac{2\pi l}{N}\bigr) \Bigr) \biggr), \end{equation} \begin{equation} \int_{\sigma^{hor}_{i,j}}e^{iky\cdot(\theta_m-\hat{x}_l)}dy=\frac{R}{M}e^{ik(\theta_m-\hat{x}_l)\cdot z_{i,j}}\mathrm{sinc}\biggl(\frac{kR}{2M\pi}\Bigl( \mathrm{cos}\bigl(\frac{2\pi m}{N}\bigr)-\mathrm{cos}\bigl(\frac{2\pi l}{N}\bigr) \Bigr) \biggr). \end{equation} \par In our examples we fix $R=1.5$, $M=100$, $N=60$, and the wave number $k=1$. Figure 2 is obtained by plotting the values of the vertical indicator function \begin{equation} I_{ver}(z_{i,j}):=I(\sigma^{ver}_{i,j}) \end{equation} for each $i, j = -100, -99, ..., 100$. Figure 3 is obtained by plotting the values of the horizontal indicator function \begin{equation} I_{hor}(z_{i,j}):=I(\sigma^{hor}_{i,j}) \end{equation} for each $i, j = -100, -99, ..., 100$.
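As a sanity check on the closed-form expression for the vertical segments, one can compare it with direct numerical quadrature. The Python sketch below does this for arbitrary test values of the center $z_{i,j}$ and the directions $\theta_m$, $\hat{x}_l$ (assumed test data, not values from the actual experiments); note that `np.sinc` uses the normalized convention $\mathrm{sinc}(t)=\sin(\pi t)/(\pi t)$, which matches the normalization in the displayed formula.

```python
import numpy as np

k, R, M = 1.0, 1.5, 100           # parameters used in the examples
h = R / (2 * M)                   # half-length of one sampling segment

def seg_integral_ver(z, theta, xhat):
    # closed-form value of the integral over the vertical segment
    # z + {0} x [-h, h] of exp(i k y . (theta - xhat))
    d = theta - xhat
    return (R / M) * np.exp(1j * k * np.dot(d, z)) \
        * np.sinc(k * R * d[1] / (2 * M * np.pi))

# direct midpoint-rule quadrature over the same segment (test data)
z = np.array([0.3, -0.2])
theta = np.array([np.cos(0.7), np.sin(0.7)])
xhat = np.array([np.cos(2.1), np.sin(2.1)])
n = 4000
t = -h + (np.arange(n) + 0.5) * (2 * h / n)       # midpoints in [-h, h]
y = z[None, :] + np.stack([np.zeros_like(t), t], axis=1)
quad = np.sum(np.exp(1j * k * (y @ (theta - xhat)))) * (2 * h / n)
```

The horizontal segments are handled identically with the roles of the two coordinates exchanged, which replaces the sine difference in the sinc argument by the cosine difference.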
We observe that $\Gamma_j$ appears to be reconstructed independently of the direction of the line segments. \begin{figure}[h] \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {V1} \subcaption{$\Gamma_1$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {V2} \subcaption{$\Gamma_2$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {V3} \subcaption{$\Gamma_3$} \end{minipage} \caption{Reconstruction by the vertical indicator function $I_{ver}$} \vspace{5mm} \end{figure} \begin{figure}[h] \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {H1} \subcaption{$\Gamma_1$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {H2} \subcaption{$\Gamma_2$} \end{minipage} \begin{minipage}[b]{0.325\linewidth} \centering \includegraphics[keepaspectratio, scale=0.13] {H3} \subcaption{$\Gamma_3$} \end{minipage} \caption{Reconstruction by the horizontal indicator function $I_{hor}$} \vspace{5mm} \end{figure} \section*{Acknowledgments} The authors thank Professor Bastian von Harrach, who gave us helpful comments on our study.
\section{Introduction} The gravitational field equations in general relativity, GR, and modified gravity theories, MGT, are very sophisticate systems of nonlinear partial differential equations (PDEs). Advanced analytic and numerical methods are necessary for constructing exact and approximate solutions of such equations. A number of examples of exact solutions are summarized in the monographs \cite{kramer,griff} where the coefficients of the fundamental geometric/physical objects depend on one and/or two coordinates in four dimensional (4-d) spacetimes and when the diagonalization of the metrics is possible via coordinate transformations. There are well known physically important exact solutions for the Schwarzschild, Kerr, Friedman-Lema\^ itre -Robertson-Worker (FLRW), wormhole spacetimes etc. These classes of solutions are generated by certain ansatzes when the Einstein equations are transformed into certain systems of nonlinear second order ordinary equations (ODE), 2-d solitonic equations etc. Such systems of PDEs display Killing vector symmetries which results in additional parametric symmetries \cite{ger1,ger2,vpars}. The problem of constructing generic off--diagonal exact solutions (which can not be diagonalized via coordinate transformations) with metric coefficients depending on three and/or four coordinates is much more difficult. There are, in general, six independent components of a metric tensor from the ten components in a 4-d (pseudo) Riemannian spacetime\footnote four components of the ten can be fixed to zero using coordinate transformations, and which is related to the Bianchi identities}. Any such ansatz transforms the Einstein equations into systems of nonlinear coupled PDEs which cannot be integrated in a general analytic form if the constructions are performed in local coordinate frames. 
In a series of works \cite{vex1,vpars,vex2,vex3,veym}, we have shown that it is possible to decouple the gravitational field equations and perform formal analytic integrations in various theories of gravity with metric and nonlinear, N-, and linear connections structures. To prove the decoupling property in a simplest way we have to consider spacetime fibrations with splitting of dimensions, 2 (or 3) + 2 + 2 + ..., introduce certain adapted frames of reference, consider formal extensions/embeddings of 4-d spacetimes into higher dimensional ones and work with necessary types of linear connections. Such an (auxiliary, in Einstein gravity) adapted connection is also completely defined in a compatible form by the metric structure and contains a nonholonomically induced torsion field. This allows us to decouple the gravitational field equations and generate various classes of exact solutions in generalized/modified gravity theories. After a class of generalized solutions has been constructed in explicit form, we can constrain to zero the induced torsion fields and "extract" solutions in Einstein gravity. We emphasize that it is important to impose the zero--torsion conditions after we found a class of generalized solutions (contrary, we cannot decouple the corresponding systems of PDEs). It should be noted here that the off--diagonal solutions constructed following the above described anholonomic frame deformation method, AFDM, depend on various classes of generating and integration functions and parameters. The Cauchy problem can be formulated with respect to necessary types of N-adapted frames; it is possible to generate various stable or un-stable solutions with singularities, nontrivial deformed horizons, stochastic behavior, etc ... which depends on the type of nonlinear couplings, prescribed symmetries, asymptotic and boundary conditions, see a number of examples in \cite{vp,vt,vsingl1,vgrg} and references therein. 
In general, it is not clear what physical importance (if any ?) these classes of such solutions may have. For some well defined conditions, we can speculate about black hole/ellipsoid/wormhole configurations embedded, for instance, into solitonic gravitational backgrounds or to consider small ellipsoidal deformations of certain "primary" spherical/cylindrical configurations. Our geometric techniques of constructing exact solutions can be applied to four dimensional, 4-d, (pseudo) Riemannian spacetimes with one and two Killing symmetries. For such configurations, the well known Kerr solution can be generated as a particular case. Then these "primary" metrics can be subjected to nonholonomic deformations to "target" off--diagonal exact solutions depending on three, or four, spacetime coordinates. The first goal of this paper is to show how certain primary physically important solutions depending on two coordinates can be generalized to new classes of exact solutions in Einstein gravity and (higher dimensional) modifications, with zero or nonzero torsion, depending on all possible spacetime coordinates. We consider diagonal and off--diagonal parametrization of primary and target solutions which are different from those in \cite{vex1,vpars,vex2,vex3} and other works. In this way we generate new classes of Einstein spacetimes and modifications and show that the AFDM encodes various possibilities for generalization. The second goal is to construct explicit examples of exact solutions as nonholonomic deformations of the Kerr metric determined by nontrivial sources and interactions in massive gravity and/or modified $f(R,T)$ gravity, see reviews and original results in Refs. \cit {odints1,odints2,capoz,odints3,drg1,drg2,drg3,hr1,hr2,nieu,koyam}. 
For non--Hilbert Lagrangians in gravity theories, the functionals $f$ depend on scalar curvature $R$ (computed, in general, for a linear connection with nontrivial torsion, or for the Levi--Civita one), on various matter and effective matter sources for modified gravity theories etc. We provide a series of exact and/or small parameter-dependent solutions which for small deformations mimic rotoid Kerr - de Sitter like black holes/ellipsoids self--consistently imbedded into generic off--diagonal backgrounds of 4/ 6/ 8 dimensional spacetimes. With respect to nonholonomic frames and via the re--definition of generating and integrations functions and coefficients of the sources, modifications of Einstein gravity are modelled by effective polarized cosmological constants and off--diagonal terms in the new classes of solutions. For certain geometrically well defined conditions, various effects in massive and $f$--modified gravity can be encoded into vacuum and non--vacuum, configurations (exact solutions) with nontrivial effective cosmological constants in GR. In some sense, we can mimic physically important effects in modified gravity effects (for instance, acceleration of universe, certain dark energy and dark matter locally anisotropic interactions, effective renormalization of quantum gravity models, see Refs. \cite{vgrg,vbranef,vepl}) via nonlinear generic off--diagonal interactions on effective Einstein spaces. The main question arising from such models and solutions is whether or not we need to modify Einstein's gravitational theory, or to try and solve physically important issues in modern cosmology and quantum gravity by considering only nonlinear and generic off--diagonal interactions based on the general relativity paradigm. There is necessarily additional theoretical and experimental/observational research which is required in order to analyze and solve these problems. 
Such directions of research cannot be developed if we consider only diagonalizable metrics (including rotating ones, like the Kerr metric) generated by an ansatz with two Killing symmetries. The plan of the paper is as follows: In section \ref{s2} we provide the necessary geometric preliminaries on nonholonomic 2+2+2+... splittings of the spacetime dimensions in GR and MGTs. We summarize the key results on the AFDM for constructing generic off--diagonal solutions in gravity theories depending on all spacetime coordinates in dimensions 4,5,...,8. In section \ref{s3} we prove the general decoupling property of the (modified) Einstein equations which allows us to perform formal integrations of the corresponding systems of nonlinear PDEs. The geometric constructions are performed for the "simplest" case of one Killing symmetry in 4-d and generalized to non-Killing configurations and to higher dimensions. Section \ref{s4} is devoted to the theory of nonholonomic deformations of exact solutions in modified gravity theories containing the Kerr solution as a "primary" configuration, with the target metrics being constructed as exact solutions in massive gravity and/or $f$--modified gravity. We show how, using the AFDM, the Kerr solution can be generated as a particular case. Then we construct solutions with general off--diagonal deformations of the Kerr metric in 4--d massive gravity, provide examples of (non--Einstein) metrics with nonholonomically induced torsion and study small $f$--modifications of the Kerr metrics deformed by massive gravity. A separate subsection is devoted to ellipsoidal 4--d deformations of the Kerr metric resulting in a target vacuum rotoid, or Kerr--de Sitter, configuration. Another subsection is devoted to extra dimensional massive off--diagonal modifications of the Kerr solutions, for the case of 6--d spacetime with nontrivial cosmological constant and for 8--d deformations which may model Finsler-like configurations.
Finally (in section \ref{s5}), we provide our conclusions and speculate on the physical meaning of the exact solutions constructed using the AFDM for massive modified gravity theories and on how such effects can be modelled by nonlinear off--diagonal interactions in Einstein gravity. Some relevant formulae for the coefficients and sketches of the proofs are presented in the Appendix. \section{Nonholonomic Frames with 2+2+.... Splitting} \label{s2} In this section, we state the geometric conventions and outline the formalism which are necessary for decoupling and integrating the gravitational field equations in GR and MGTs, see relevant details in \cite{vex1,vpars,vex2,vex3}. \subsection{Geometric preliminaries} \subsubsection{Conventions} For (higher dimensional) spacetime geometric models and related exact solutions on a finite dimensional (pseudo) Riemannian spacetime $\ ^{s}V$, we consider a conventional splitting of dimensions, $\dim V=4+2s=2+2+...+2\geq 4;\ s\geq 0.$\footnote{In a similar form, we can split odd dimensions, for instance, $\dim V=3+2+...+2$. It should be noted that it is not possible to elaborate any simplified system of notations if we want to integrate, in general explicit form, certain systems of PDEs related to higher dimensional gravitational theories. It is important to distinguish indices and coordinates corresponding to higher dimensions and nonholonomically constrained variables.} The anholonomic frame deformation method, AFDM, allows us to construct exact solutions with arbitrary signatures $(\pm 1,\pm 1,\pm 1,...,\pm 1)$ of metrics $\ ^{s}\mathbf{g}$. Let us establish conventions on (abstract) indices and coordinates $u^{\alpha _{s}}=(x^{i_{s}},y^{a_{s}}),$ for $s=0,1,2,3,...,$ labelling the oriented number of two dimensional, 2-d, "shells" added to a 4--d spacetime.
For $s=0,$ we write $u^{\alpha }=(x^{i},y^{a})$ and consider the following local systems of coordinates: {\small \begin{eqnarray*} s &=&0:u^{\alpha _{0}}=(x^{i_{0}},y^{a_{0}})=u^{\alpha }=(x^{i},y^{a}), \\ s &=&1:u^{\alpha _{1}}=(x^{\alpha }=u^{\alpha },y^{a_{1}})=(x^{i},y^{a},y^{a_{1}}), \\ \ s &=&2:u^{\alpha _{2}}=(x^{\alpha _{1}}=u^{\alpha _{1}},y^{a_{2}})=(x^{i},y^{a},y^{a_{1}},y^{a_{2}}), \\ \ s &=&3:u^{\alpha _{3}}=(x^{\alpha _{2}}=u^{\alpha _{2}},y^{a_{3}})=(x^{i},y^{a},y^{a_{1}},y^{a_{2}},y^{a_{3}}),... \end{eqnarray*}} where the indices run over the corresponding values: $i,j,...=1,2;\ a,b,...=3,4;\ a_{1},b_{1},...=5,6;\ a_{2},b_{2},...=7,8;\ a_{3},b_{3},...=9,10,...$ and, for instance, $i_{1},j_{1},...=1,2,3,4;\ i_{2},j_{2},...=1,2,3,4,5,6;\ i_{3},j_{3},...=1,2,3,4,5,6,7,8;...$ In brief, we shall write $u=(x,y);$ $\ ^{1}u=(u,\ ^{1}y)=(x,y,\ ^{1}y),\ ^{2}u=(\ ^{1}u,\ ^{2}y)=(x,y,\ ^{1}y,\ ^{2}y),...$ Local frames (bases, $e_{\alpha _{s}}$) on $\ ^{s}V$ are denoted in the form \begin{equation} e_{\alpha _{s}}=e_{\ \alpha _{s}}^{\underline{\alpha }_{s}}(\ ^{s}u)\partial /\partial u^{\underline{\alpha }_{s}}, \label{nholfr} \end{equation} where the partial derivatives are $\partial _{\beta _{s}}:=\partial /\partial u^{\beta _{s}},$ and indices are underlined if it is necessary to emphasize that such values are defined with respect to a coordinate frame. In general, the frames (\ref{nholfr}) are nonholonomic (equivalently, anholonomic, or non-integrable), $e_{\alpha _{s}}e_{\beta _{s}}-e_{\beta _{s}}e_{\alpha _{s}}=W_{\alpha _{s}\beta _{s}}^{\gamma _{s}}e_{\gamma _{s}},$ where the anholonomy coefficients $W_{\alpha _{s}\beta _{s}}^{\gamma _{s}}=-W_{\beta _{s}\alpha _{s}}^{\gamma _{s}}(u)$ vanish for holonomic, i.e. integrable, configurations.
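As a toy illustration of the anholonomy coefficients just introduced, the following Python/sympy sketch computes the commutator of a simple 2-d nonholonomic frame $e_{1}=\partial _{x}-N(x,y)\partial _{y},\ e_{2}=\partial _{y}$ on a test function; here $N(x,y)$ and $f(x,y)$ are arbitrary placeholder functions, not solutions of any field equations:

```python
# Toy 2-d illustration of anholonomy coefficients (placeholder frame choice):
# e_1 = d/dx - N(x,y) d/dy and e_2 = d/dy give [e_1, e_2] = (dN/dy) e_2,
# i.e. the only nonzero anholonomy coefficient is W^2_{12} = dN/dy.
import sympy as sp

x, y = sp.symbols('x y')
N = sp.Function('N')(x, y)          # sample "elongation" coefficient
f = sp.Function('f')(x, y)          # arbitrary test function

e1 = lambda g: sp.diff(g, x) - N * sp.diff(g, y)
e2 = lambda g: sp.diff(g, y)

commutator = sp.simplify(e1(e2(f)) - e2(e1(f)))
# The commutator acts on f as (dN/dy) * df/dy = W^2_{12} e_2(f):
W = sp.simplify(commutator / sp.diff(f, y))
print(W)   # dN/dy: nonzero unless N is independent of y (holonomic case)
```

The frame is holonomic precisely when $\partial N/\partial y=0$, in agreement with the vanishing of the anholonomy coefficients for integrable configurations.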
The dual frames are $e^{\alpha _{s}}=e_{\ \underline{\alpha }_{s}}^{\ \alpha _{s}}(\ ^{s}u)du^{\underline{\alpha }_{s}}$, which can be defined from the condition $e^{\alpha _{s}}\rfloor e_{\beta _{s}}=\delta _{\beta _{s}}^{\alpha _{s}}$ (the 'hook' operator $\rfloor $ corresponds to the inner derivative and $\delta _{\beta _{s}}^{\alpha _{s}}$ is the Kronecker symbol). The conventional $2+2+...$ splitting for a metric is written in the form \begin{equation} \ ^{s}\mathbf{g}=g_{\alpha _{s}\beta _{s}}e^{\alpha _{s}}\otimes e^{\beta _{s}}=g_{\underline{\alpha }_{s}\underline{\beta }_{s}}du^{\underline{\alpha }_{s}}\otimes du^{\underline{\beta }_{s}},\ s=0,1,2,..., \label{metr} \end{equation} where the coefficients of the metric transform following the rule \begin{equation} g_{\alpha _{s}\beta _{s}}=e_{\ \alpha _{s}}^{\underline{\alpha }_{s}}e_{\ \beta _{s}}^{\underline{\beta }_{s}}g_{\underline{\alpha }_{s}\underline{\beta }_{s}}. \label{metrtransf} \end{equation} Similar frame transforms can be considered for all tensor objects. We cannot preserve a splitting of dimensions under general frame/coordinate transforms. \subsubsection{Nonholonomic splitting with associated N--connections} To prove the general decoupling property of the Einstein equations and their generalizations/modifications we have to construct a necessary type of nonholonomic $2+2+...$ splitting with associated nonlinear connection (N-connection) structure. Such a splitting is introduced using nonholonomic distributions:\footnote{In modern gravity, the so--called ADM (Arnowitt--Deser--Misner) formalism with a 3+1 splitting is largely used, see details in \cite{misner}. It is not possible to elaborate a technique for a general decoupling of the gravitational field equations and generating off--diagonal solutions if we use only nonholonomic frame bases determined by the "shift" and "lapse" functions.
To construct exact solutions, it is more convenient to work with a correspondingly defined non--integrable 2+2+... splitting \cite{vex2,vex3}.} \begin{enumerate} \item A N--connection is stated by a Whitney sum \begin{equation} \ ^{s}\mathbf{N}:T\ ^{s}\mathbf{V}=h\mathbf{V}\oplus v\mathbf{V}\oplus \ ^{1}v\mathbf{V}\oplus \ ^{2}v\mathbf{V}\oplus ...\oplus \ ^{s}v\mathbf{V}, \label{whitney} \end{equation} for a conventional horizontal (h) and vertical (v) "shell by shell" splitting. We shall write boldface letters for spaces and geometric objects enabled/adapted to the N--connection structure. This defines a local fibered structure on $\ ^{s}\mathbf{V}$ when the coefficients of the N--connection, $N_{i_{s}}^{a_{s}},$ for $\ ^{s}\mathbf{N}=N_{i_{s}}^{a_{s}}(\ ^{s}u)dx^{i_{s}}\otimes \partial /\partial y^{a_{s}},$ induce a system of N--adapted local bases, with N-elongated partial derivatives, $\mathbf{e}_{\nu _{s}}=(\mathbf{e}_{i_{s}},e_{a_{s}}),$ and cobases with N--adapted differentials, $\mathbf{e}^{\mu _{s}}=(e^{i_{s}},\mathbf{e}^{a_{s}}).$ On a 4-d $\mathbf{V}$, \begin{eqnarray} &&\mathbf{e}_{i}=\frac{\partial }{\partial x^{i}}-\ N_{i}^{a}\frac{\partial }{\partial y^{a}},\ e_{a}=\frac{\partial }{\partial y^{a}}, \label{nader} \\ &&e^{i}=dx^{i},\ \mathbf{e}^{a}=dy^{a}+\ N_{i}^{a}dx^{i}, \label{nadif} \end{eqnarray} and on $s\geq 1$ shells, \begin{eqnarray} \mathbf{e}_{i_{s}} &=&\frac{\partial }{\partial x^{i_{s}}}-\ N_{i_{s}}^{a_{s}}\frac{\partial }{\partial y^{a_{s}}},\ e_{a_{s}}=\frac{\partial }{\partial y^{a_{s}}}, \label{naders} \\ e^{i_{s}} &=&dx^{i_{s}},\ \mathbf{e}^{a_{s}}=dy^{a_{s}}+\ N_{i_{s}}^{a_{s}}dx^{i_{s}}. \label{nadifs} \end{eqnarray} The N--adapted operators (\ref{nader}) and (\ref{naders}) define a subclass of general frame transforms of type (\ref{nholfr}).
The corresponding anholonomy relations \begin{equation} \lbrack \mathbf{e}_{\alpha _{s}},\mathbf{e}_{\beta _{s}}]=\mathbf{e}_{\alpha _{s}}\mathbf{e}_{\beta _{s}}-\mathbf{e}_{\beta _{s}}\mathbf{e}_{\alpha _{s}}=W_{\alpha _{s}\beta _{s}}^{\gamma _{s}}\mathbf{e}_{\gamma _{s}}, \label{anhrel1} \end{equation} are completely defined by the N--connection coefficients and their partial derivatives, $W_{i_{s}a_{s}}^{b_{s}}=\partial _{a_{s}}N_{i_{s}}^{b_{s}}$ and $W_{j_{s}i_{s}}^{a_{s}}=\Omega _{i_{s}j_{s}}^{a_{s}},$ where the curvature of the N--connection is $\Omega _{i_{s}j_{s}}^{a_{s}}=\mathbf{e}_{j_{s}}\left( N_{i_{s}}^{a_{s}}\right) -\mathbf{e}_{i_{s}}\left( N_{j_{s}}^{a_{s}}\right) .$ \item Any metric structure $\ ^{s}\mathbf{g}=\mathbf{\{g}_{\alpha _{s}\beta _{s}}\mathbf{\}}$ on $\ ^{s}\mathbf{V}$ can be written as a distinguished metric (d--metric)\footnote{Geometric objects with coefficients defined with respect to N--adapted frames are called, respectively, distinguished metrics, distinguished tensors etc (in brief, d--metrics, d--tensors etc).} \begin{eqnarray} \ \ ^{s}\mathbf{g} &=&\ g_{i_{s}j_{s}}(\ ^{s}u)\ e^{i_{s}}\otimes e^{j_{s}}+\ g_{a_{s}b_{s}}(\ ^{s}u)\mathbf{e}^{a_{s}}\otimes \mathbf{e}^{b_{s}} \label{dm} \\ &=&g_{ij}(x)\ e^{i}\otimes e^{j}+g_{ab}(u)\ \mathbf{e}^{a}\otimes \mathbf{e}^{b}+g_{a_{1}b_{1}}(\ ^{1}u)\ \mathbf{e}^{a_{1}}\otimes \mathbf{e}^{b_{1}}+....+\ g_{a_{s}b_{s}}(\ ^{s}u)\mathbf{e}^{a_{s}}\otimes \mathbf{e}^{b_{s}}.
\notag \end{eqnarray} In coordinate frames, a metric (\ref{metr}) is parameterized by generic off--diagonal matrices \begin{equation*} \underline{g}_{\alpha \beta }\left( u\right) =\left[ \begin{array}{cc} \ g_{ij}+\ h_{ab}N_{i}^{a}N_{j}^{b} & h_{ae}N_{j}^{e} \\ \ h_{be}N_{i}^{e} & \ h_{ab} \end{array} \right] , \end{equation*} \begin{equation*} \underline{g}_{\alpha _{1}\beta _{1}}\left( \ ^{1}u\right) =\left[ \begin{array}{cc} \ \underline{g}_{\alpha \beta } & h_{a_{1}e_{1}}N_{\beta _{1}}^{e_{1}} \\ \ h_{b_{1}e_{1}}N_{\alpha _{1}}^{e_{1}} & \ h_{a_{1}b_{1}} \end{array} \right] ,\ \ \underline{g}_{\alpha _{2}\beta _{2}}\left( \ ^{2}u\right) =\left[ \begin{array}{cc} \ \underline{g}_{\alpha _{1}\beta _{1}} & h_{a_{2}e_{2}}N_{\beta _{1}}^{e_{2}} \\ \ h_{b_{2}e_{2}}N_{\alpha _{1}}^{e_{2}} & \ h_{a_{2}b_{2}} \end{array} \right] ,\ ... \end{equation*} \begin{equation*} \underline{g}_{\alpha _{s}\beta _{s}}\left( \ ^{s}u\right) =\left[ \begin{array}{cc} \ g_{i_{s}j_{s}}+\ h_{a_{s}b_{s}}N_{i_{s}}^{a_{s}}N_{j_{s}}^{b_{s}} & h_{a_{s}e_{s}}N_{j_{s}}^{e_{s}} \\ \ h_{b_{s}e_{s}}N_{i_{s}}^{e_{s}} & \ h_{a_{s}b_{s}} \end{array} \right] . \label{fansatz} \end{equation*} \end{enumerate} For extra dimensions, such parameterizations are similar to those introduced in the Kaluza--Klein theory when $y^{a_{s}},s\geq 1,$ are considered as extra dimension coordinates with cylindrical compactification and $N_{\alpha }^{e_{s}}(\ ^{s}u)\sim A_{a_{s}\alpha }^{e_{s}}(u)y^{\alpha }$ are taken for certain (non) Abelian gauge fields $A_{a_{s}\alpha }^{e_{s}}(u)$. In general, various parameterizations can be used for warped/trapped coordinates in brane gravity and modifications of GR, see examples in \cite{vp,vt,vsingl1,vgrg}. \subsubsection{The Levi--Civita and auxiliary N--adapted connections} There is a subclass of linear connections on $\ ^{s}\mathbf{V}$ which are adapted to the N--connection splitting (\ref{whitney}).
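The equivalence between the d--metric written in the N--adapted coframe and the 4--d off--diagonal coordinate parameterization displayed above can be verified symbolically; the following Python/sympy sketch does so for arbitrary placeholder coefficient functions (these are not solutions of any field equations, just sample data):

```python
import sympy as sp

# arbitrary placeholder coefficient functions (not a solution of any equations)
x1, x2, y3, y4 = sp.symbols('x1 x2 y3 y4')
g = sp.diag(sp.Function('g1')(x1, x2), sp.Function('g2')(x1, x2))          # g_ij
h = sp.diag(sp.Function('h3')(x1, x2, y4), sp.Function('h4')(x1, x2, y4))  # h_ab
N = sp.Matrix(2, 2, lambda a, i: sp.Function(f'N{a + 3}{i + 1}')(x1, x2, y4))

# coframe transform (e^i, e^a)^T = A (dx^i, dy^a)^T with e^a = dy^a + N^a_i dx^i
A = sp.eye(4)
A[2:4, 0:2] = N
D = sp.diag(g, h)                       # d-metric in the N-adapted coframe
G_coord = sp.expand(A.T * D * A)        # metric in the coordinate coframe

G_block = sp.zeros(4, 4)                # the displayed block parameterization
G_block[0:2, 0:2] = g + N.T * h * N     # g_ij + h_ab N^a_i N^b_j
G_block[0:2, 2:4] = N.T * h             # h_ae N^e_j
G_block[2:4, 0:2] = h * N               # h_be N^e_i
G_block[2:4, 2:4] = h
assert sp.expand(G_coord - G_block) == sp.zeros(4, 4)
print("off-diagonal block form verified")
```

The check is a direct matrix identity, $A^{T}\mathrm{diag}(g,h)A$, and extends shell by shell in the same way as the displayed matrices.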
By definition, a distinguished connection, d--connection, $\mathbf{D}=(hD;vD),\ ^{1}\mathbf{D}=(\ ^{1}hD;\ ^{1}vD),...,\ ^{s}\mathbf{D}=(\ ^{s-1}hD;\ ^{s}vD),$ preserves under parallelism the N--connection structure.\footnote{In our works, certain left \textquotedblright up\textquotedblright\ or \textquotedblright low\textquotedblright\ labels are used in order to emphasize that certain geometric objects are defined on a corresponding shell and in terms of a fundamental geometric object. We shall omit such labels if that does not result in ambiguities.} The coefficients \begin{eqnarray} \mathbf{\Gamma }_{\ \beta \gamma }^{\alpha } &=&(L_{jk}^{i},L_{bk}^{a};C_{jc}^{i},C_{bc}^{a}), \notag \\ \mathbf{\Gamma }_{\ \beta _{1}\gamma _{1}}^{\alpha _{1}}&=& (L_{\beta \gamma }^{\alpha },L_{b_{1}\gamma }^{a_{1}};C_{\beta c_{1}}^{\alpha },C_{b_{1}c_{1}}^{a_{1}}),\ \mathbf{\Gamma }_{\ \beta _{2}\gamma _{2}}^{\alpha _{2}} =(L_{\beta _{1}\gamma _{1}}^{\alpha _{1}},L_{b_{2}\gamma _{1}}^{a_{2}};C_{\beta _{1}c_{2}}^{\alpha _{1}},C_{b_{2}c_{2}}^{a_{2}}),..., \label{coefd} \\ \mathbf{\Gamma }_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}} &=&(L_{\beta _{s-1}\gamma _{s-1}}^{\alpha _{s-1}},L_{b_{s}\gamma _{s-1}}^{a_{s}};C_{\beta _{s-1}c_{s}}^{\alpha _{s-1}},C_{b_{s}c_{s}}^{a_{s}}) \notag \end{eqnarray} of a d--connection $\ ^{s}\mathbf{D}=\mathbf{\{D}_{\alpha _{s}}\mathbf{\}}$ can be computed in N--adapted form with respect to the frames (\ref{nader})--(\ref{nadifs}) following the equations $\mathbf{D}_{\alpha _{s}}\mathbf{e}_{\beta _{s}}=\mathbf{\Gamma }_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}\mathbf{e}_{\gamma _{s}}$ and covariant derivatives parameterized in the form \begin{eqnarray*} \mathbf{D}_{\alpha } &=&(D_{i};D_{a}),\ \mathbf{D}_{\alpha _{1}}=(\ ^{1}D_{\alpha };D_{a_{1}}),\ \mathbf{D}_{\alpha _{2}} = (\ ^{2}D_{\alpha _{1}};D_{a_{2}}),...,\ \mathbf{D}_{\alpha _{s}}=(\ ^{s}D_{\alpha _{s-1}};D_{a_{s}}), \\ \mbox{ for } hD &=&(L_{jk}^{i},L_{bk}^{a}),\ vD=(C_{jc}^{i},C_{bc}^{a}), \\ \ ^{1}hD &=&(L_{\beta \gamma
}^{\alpha },L_{b_{1}\gamma }^{a_{1}}),\ ^{1}vD=(C_{\beta c_{1}}^{\alpha },C_{b_{1}c_{1}}^{a_{1}}),\ \ ^{2}hD = (L_{\beta _{1}\gamma _{1}}^{\alpha _{1}},L_{b_{2}\gamma _{1}}^{a_{2}}),\ ^{2}vD=(C_{\beta _{1}c_{2}}^{\alpha _{1}},C_{b_{2}c_{2}}^{a_{2}}), \\ &&..., \\ \ ^{s}hD &=&(L_{\beta _{s-1}\gamma _{s-1}}^{\alpha _{s-1}},L_{b_{s}\gamma _{s-1}}^{a_{s}}),\ ^{s}vD=(C_{\beta _{s-1}c_{s}}^{\alpha _{s-1}},C_{b_{s}c_{s}}^{a_{s}}). \end{eqnarray*} Such coefficients can be computed with respect to mixed subsets of coordinates and/or N--adapted frames on different shells. It is always possible to consider frame transforms for which all shell frames are N-adapted and \begin{equation*} \ ^{1}D_{\alpha }=\mathbf{D}_{\alpha },\ ^{2}D_{\alpha _{1}}=\mathbf{D}_{\alpha _{1}},...,\ ^{s}D_{\alpha _{s-1}}=\mathbf{D}_{\alpha _{s-1}}. \end{equation*} To perform computations in N--adapted shell form we can consider a differential connection 1--form $\mathbf{\Gamma }_{\ \beta _{s}}^{\alpha _{s}}=\mathbf{\Gamma }_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}\mathbf{e}^{\gamma _{s}}$ and elaborate a differential form calculus with respect to skew symmetric tensor products of the N--adapted frames (\ref{nader})--(\ref{nadifs}).
For instance, the torsion $\mathcal{T}^{\alpha _{s}}=\{\mathbf{T}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}\}$ and curvature $\mathcal{R}_{~\beta _{s}}^{\alpha _{s}}=\{\mathbf{R}_{\ \ \beta _{s}\gamma _{s}\delta _{s}}^{\alpha _{s}}\}$ d--tensors of $\ ^{s}\mathbf{D}$ can be computed, respectively, as \begin{eqnarray} &&\mathcal{T}^{\alpha _{s}}:=\ ^{s}\mathbf{De}^{\alpha _{s}}=d\mathbf{e}^{\alpha _{s}}+\mathbf{\Gamma }_{\ \beta _{s}}^{\alpha _{s}}\wedge \mathbf{e}^{\beta _{s}}, \label{dt} \\ && \mathcal{R}_{~\beta _{s}}^{\alpha _{s}}:=\ ^{s}\mathbf{D\Gamma }_{\ \beta _{s}}^{\alpha _{s}}=d\mathbf{\Gamma }_{\ \beta _{s}}^{\alpha _{s}}-\mathbf{\Gamma }_{\ \beta _{s}}^{\gamma _{s}}\wedge \mathbf{\Gamma }_{\ \gamma _{s}}^{\alpha _{s}}=\mathbf{R}_{\ \beta _{s}\gamma _{s}\delta _{s}}^{\alpha _{s}}\mathbf{e}^{\gamma _{s}}\wedge \mathbf{e}^{\delta _{s}}, \label{dc} \end{eqnarray} see Ref. \cite{vex3} for the explicit calculation of the coefficients $\mathbf{R}_{\ \beta _{s}\gamma _{s}\delta _{s}}^{\alpha _{s}}$ in higher dimensions. For any (pseudo) Riemannian metric $\ ^{s}\mathbf{g},$ we can construct in standard form the Levi--Civita connection (LC--connection), $\ ^{s}\nabla =\{\ _{\shortmid }\Gamma _{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}\},$ which is completely defined by the metric coefficients following two conditions: this linear connection is metric compatible, $\ ^{s}\nabla (\ ^{s}\mathbf{g})=0,$ and has zero torsion, $\ _{\shortmid }\mathcal{T}^{\alpha _{s}}=0$ (see formulas (\ref{dt}) for $\ ^{s}\mathbf{D}\rightarrow \ ^{s}\nabla $). Such a linear connection is not a d--connection because it does not preserve under general coordinate transforms the N--connection splitting. To elaborate a covariant differential calculus adapted to the decomposition (\ref{whitney}) we have to introduce a different type of linear connection.
This is the canonical d--connection $\ ^{s}\widehat{\mathbf{D}}$ which is completely and uniquely defined by a (pseudo) Riemannian metric $\ ^{s}\mathbf{g}$ (\ref{fansatz}) for a chosen $\ ^{s}\mathbf{N}=\{N_{i_{s}}^{a_{s}}\}$ if and only if $\ ^{s}\widehat{\mathbf{D}}(\ ^{s}\mathbf{g})=0$ and the horizontal and vertical torsions are zero, i.e. $h\widehat{\mathcal{T}}=\{\widehat{\mathbf{T}}_{\ jk}^{i}\}=0,$ $v\widehat{\mathcal{T}}=\{\widehat{\mathbf{T}}_{\ bc}^{a}\}=0,\ ^{1}v\widehat{\mathcal{T}}=\{\widehat{\mathbf{T}}_{\ b_{1}c_{1}}^{a_{1}}\}=0,...,\ ^{s}v\widehat{\mathcal{T}}=\{\widehat{\mathbf{T}}_{\ b_{s}c_{s}}^{a_{s}}\}=0.$ We can check by straightforward computations that such conditions are satisfied by $\ ^{s}\widehat{\mathbf{D}}=\{\widehat{\mathbf{\Gamma }}_{\ \alpha _{s}\beta _{s}}^{\gamma _{s}}\}$ with the coefficients (\ref{coefd}) computed recurrently, \begin{eqnarray} \widehat{L}_{jk}^{i} &=&\frac{1}{2}g^{ir}\left( \mathbf{e}_{k}g_{jr}+\mathbf{e}_{j}g_{kr}-\mathbf{e}_{r}g_{jk}\right) , \notag \\ \widehat{L}_{bk}^{a} &=&e_{b}(N_{k}^{a})+\frac{1}{2}h^{ac}\left( \mathbf{e}_{k}h_{bc}-h_{dc}\ e_{b}N_{k}^{d}-h_{db}\ e_{c}N_{k}^{d}\right) , \notag \\ \widehat{C}_{jc}^{i} &=&\frac{1}{2}g^{ik}e_{c}g_{jk},\ \widehat{C}_{bc}^{a}=\frac{1}{2}h^{ad}\left( e_{c}h_{bd}+e_{b}h_{cd}-e_{d}h_{bc}\right) , \label{candcon} \\ \widehat{L}_{\beta \gamma }^{\alpha } &=&\frac{1}{2}g^{\alpha \tau }\left( \mathbf{e}_{\gamma }g_{\beta \tau }+\mathbf{e}_{\beta }g_{\gamma \tau }-\mathbf{e}_{\tau }g_{\beta \gamma }\right), \notag \\ && \notag \\ \widehat{L}_{b_{1}\gamma }^{a_{1}} &=&e_{b_{1}}(N_{\gamma }^{a_{1}})+\frac{1}{2}h^{a_{1}c_{1}}\left( \mathbf{e}_{\gamma }h_{b_{1}c_{1}}-h_{d_{1}c_{1}}\ e_{b_{1}}N_{\gamma }^{d_{1}}-h_{d_{1}b_{1}}\ e_{c_{1}}N_{\gamma }^{d_{1}}\right), \notag \\ \widehat{C}_{\beta c_{1}}^{\alpha } &=&\frac{1}{2}g^{\alpha \tau }e_{c_{1}}g_{\beta \tau },\ \widehat{C}_{b_{1}c_{1}}^{a_{1}}=\frac{1}{2}h^{a_{1}d_{1}}\left(
e_{c_{1}}h_{b_{1}d_{1}}+e_{b_{1}}h_{c_{1}d_{1}}-e_{d_{1}}h_{b_{1}c_{1}}\right), \notag \\ && ... \notag \\ && \notag \\ \widehat{L}_{\beta _{s-1}\gamma _{s-1}}^{\alpha _{s-1}} &=&\frac{1}{2}g^{\alpha _{s-1}\tau _{s-1}}\left( \mathbf{e}_{\gamma _{s-1}}g_{\beta _{s-1}\tau _{s-1}}+\mathbf{e}_{\beta _{s-1}}g_{\gamma _{s-1}\tau _{s-1}}-\mathbf{e}_{\tau _{s-1}}g_{\beta _{s-1}\gamma _{s-1}}\right) , \notag \\ \widehat{L}_{b_{s}\gamma _{s-1}}^{a_{s}} &=&e_{b_{s}}(N_{\gamma _{s-1}}^{a_{s}})+\frac{1}{2}h^{a_{s}c_{s}}\left( \mathbf{e}_{\gamma _{s-1}}h_{b_{s}c_{s}}-h_{d_{s}c_{s}}\ e_{b_{s}}N_{\gamma _{s-1}}^{d_{s}}-h_{d_{s}b_{s}}\ e_{c_{s}}N_{\gamma _{s-1}}^{d_{s}}\right) , \notag \\ \widehat{C}_{\beta _{s-1}c_{s}}^{\alpha _{s-1}} &=&\frac{1}{2}g^{\alpha _{s-1}\tau _{s-1}}e_{c_{s}}g_{\beta _{s-1}\tau _{s-1}},\ \widehat{C}_{b_{s}c_{s}}^{a_{s}} = \frac{1}{2}h^{a_{s}d_{s}}\left( e_{c_{s}}h_{b_{s}d_{s}}+e_{b_{s}}h_{c_{s}d_{s}}-e_{d_{s}}h_{b_{s}c_{s}}\right) . \notag \end{eqnarray} The torsion d--tensor (\ref{dt}) of $\ ^{s}\widehat{\mathbf{D}}$ is completely defined by $\ ^{s}\mathbf{g}$ (\ref{fansatz}) for any chosen $\ ^{s}\mathbf{N}=\{N_{i_{s}}^{a_{s}}\}$ if the above coefficients (\ref{candcon}) are introduced "shell by shell" into the formulas \begin{eqnarray} \widehat{T}_{\ jk}^{i} &=&\widehat{L}_{jk}^{i}-\widehat{L}_{kj}^{i},\ \widehat{T}_{\ ja}^{i}=\widehat{C}_{ja}^{i},\ \widehat{T}_{\ ji}^{a}=-\Omega _{\ ji}^{a},\ \widehat{T}_{aj}^{c} = \widehat{L}_{aj}^{c}-e_{a}(N_{j}^{c}),\ \widehat{T}_{\ bc}^{a}=\ \widehat{C}_{bc}^{a}-\ \widehat{C}_{cb}^{a}, \notag \\ &&.... \label{dtors} \\ \widehat{T}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}} &=&\widehat{L}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}-\widehat{L}_{\ \gamma _{s}\beta _{s}}^{\alpha _{s}},\ \widehat{T}_{\ \beta _{s}b_{s}}^{\alpha _{s}}=\widehat{C}_{\ \beta _{s}b_{s}}^{\alpha _{s}},\ \widehat{T}_{\ \beta _{s}\gamma _{s}}^{a_{s}}=\Omega _{\ \gamma _{s}\beta _{s}}^{a_{s}}.
\notag \end{eqnarray} The N-adapted formulas (\ref{candcon}) and (\ref{dtors}) show that any coefficient of such objects computed in 4-d can be similarly extended shell by shell for any value $s=1,2,...,$ redefining correspondingly the h- and v-indices. Hereafter, we shall present coordinate formulas only for $s=0,$ omitting the label $s,$ i.e. with $\alpha =(i,a),$ or for some arbitrary coefficients $\alpha _{s}=(i_{s},a_{s})$ if that does not result in ambiguities. Because both linear connections $\ ^{s}\nabla $ and $\ ^{s}\widehat{\mathbf{D}}$ are defined by the same metric structure, we can compute a canonical distortion relation \begin{equation} \ ^{s}\nabla =\ ^{s}\widehat{\mathbf{D}}+\ ^{s}\widehat{\mathbf{Z}}, \label{distorsrel} \end{equation} where the distorting tensor $\ ^{s}\widehat{\mathbf{Z}}=\{\widehat{\mathbf{Z}}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}\}$ is uniquely defined by the same metric $\ ^{s}\mathbf{g}$ (\ref{fansatz}). The values $\widehat{\mathbf{Z}}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}$ are algebraic combinations of $\widehat{T}_{\ \beta _{s}\gamma _{s}}^{\alpha _{s}}$ and vanish for zero torsion. For instance, the GR theory in 4-d can be formulated equivalently using the connection $\nabla $ and/or $\widehat{\mathbf{D}}$ if the distortion relation (\ref{distorsrel}) is used \cite{vpars,vex2}. The nonholonomic variables $(\ ^{s}\mathbf{g}$ (\ref{dm})$,\ ^{s}\mathbf{N},\ ^{s}\widehat{\mathbf{D}})$ are equivalent to the standard ones $(\ ^{s}\mathbf{g}$ (\ref{metr})$,\ ^{s}\nabla ).$ Here we note that $\ ^{s}\nabla $ and $\ ^{s}\widehat{\mathbf{D}}$ are not tensor objects and such connections are subjected to different rules of coordinate transforms.
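The coefficients (\ref{candcon}) have the familiar Christoffel structure, with N-elongated derivatives $\mathbf{e}_{\alpha}$ in place of plain partial derivatives. As a minimal consistency sketch (Python/sympy, with $e_{a}$ reduced to $\partial /\partial y^{a}$ and an arbitrary symmetric sample v-metric $h_{ab}$, not a solution of any field equations), one can verify the v-metric compatibility encoded in the coefficients $\widehat{C}_{\ bc}^{a}$:

```python
# Check that \hat{C}^a_{bc} = (1/2) h^{ad}(e_c h_{bd} + e_b h_{cd} - e_d h_{bc})
# is metric compatible: D_c h_{ab} = e_c h_{ab} - C^d_{ac} h_{db} - C^d_{bc} h_{ad} = 0.
import sympy as sp

y = sp.symbols('y3 y4')
h34 = sp.Function('h34')(*y)            # symmetric off-diagonal coefficient
h = sp.Matrix([[sp.Function('h33')(*y), h34],
               [h34, sp.Function('h44')(*y)]])
hinv = h.inv()

def C(a, b, c):                         # \hat{C}^a_{bc}
    return sum(hinv[a, d] * (sp.diff(h[b, d], y[c]) + sp.diff(h[c, d], y[b])
               - sp.diff(h[b, c], y[d])) for d in range(2)) / 2

for a in range(2):
    for b in range(2):
        for c in range(2):
            Dh = sp.diff(h[a, b], y[c]) \
                 - sum(C(d, a, c) * h[d, b] + C(d, b, c) * h[a, d] for d in range(2))
            assert sp.simplify(Dh) == 0  # v-metric compatibility holds identically
print("v-metric compatibility verified")
```

The same computation extends shell by shell, with the shell index relabelings described above.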
It is possible to consider frame transforms with certain $\ ^{s}\mathbf{N}=\{N_{i_{s}}^{a_{s}}\}$ when the conditions $\ _{\shortmid }\Gamma _{\ \alpha _{s}\beta _{s}}^{\gamma _{s}}=\widehat{\mathbf{\Gamma }}_{\ \alpha _{s}\beta _{s}}^{\gamma _{s}}$ are satisfied with respect to some N--adapted frames (\ref{nader})--(\ref{nadifs}) even if, in general, $\ ^{s}\nabla \neq \ ^{s}\widehat{\mathbf{D}}$ and the corresponding curvature tensors $\ _{\shortmid }R_{\ \beta _{s}\gamma _{s}\delta _{s}}^{\alpha _{s}}\neq \widehat{\mathbf{R}}_{\ \beta _{s}\gamma _{s}\delta _{s}}^{\alpha _{s}}.$ \subsection{ The Einstein equations in N--adapted variables} An important motivation to use the linear connection $\ ^{s}\widehat{\mathbf{D}}$ is that the Einstein equations written in the variables $(\ ^{s}\mathbf{g},\ ^{s}\mathbf{N},\ ^{s}\widehat{\mathbf{D}})$ decouple with respect to N--adapted frames of reference, which makes it possible to construct very general classes of solutions, see proofs and examples in \cite{vex1,vpars,vex2,vex3,vp,vt,vsingl1,vgrg}.
We cannot "see" a general decoupling property for such nonlinear systems of PDE if we work from the very beginning with $\ ^{s}\nabla ,$ for instance, in coordinate frames or with respect to arbitrary nonholonomic ones: The condition of zero torsion, \ \ _{\shortmid }\mathcal{T}^{\alpha _{s}}=0$ states "strong coupling" conditions between various tensor coefficients in the Einstein equations and does not allow to decouple the equations.\footnote The condition of decoupling a system of equations to contain, for instance, only partial derivatives on a coordinate, is different from that of separation of variables for a function.} The main idea of the "anholonomic frame deformation method", AFDM, is to use the data $(\ ^{s}\mathbf{g}$ $\mathbf{,}\ ^{s}\mathbf{N,}\ ^{s}\widehat \mathbf{D}})$ in order to decouple certain gravitational and matter field equations, then to solve them in very general off-diagonal form, with possible dependence on all coordinates, and generate exact solutions with nontrivial nonholonomically induced torsion. Such integral varieties of solutions depend on a number of arbitrary generating and integration functions and possible symmetry parameters. This geometric approach can be applied for constructing exact solutions in various modified gravity theories with nonlinear effective Lagrangians and nontrivial torsion. Nevertheless, we can extract "integral subvarieties" of solutions in GR if at the end (after a class of "generalized" solutions was constructed) we impose, additionally, the condition of zero torsion (\ref{dtors}). This constrains the set of admissible generating/integration functions but also results in generic off--diagonal solutions depending on all coordinates. We can impose certain symmetry/ asymptotic / boundary / Cauchy conditions in order to determine certain geometrically/physically important off--diagonal configurations. 
Under additional assumptions, such solutions can be related to small parametric (off--diagonal, solitonic or other type) deformations of well known solutions in GR. The goal of this work is to study possible nonholonomic transformations of the Kerr and several wormhole metrics into off--diagonal (4-d or higher dimension) exact solutions. The Ricci d--tensor $Ric=\{\mathbf{R}_{\alpha _{s}\beta _{s}}:=\mathbf{R}_{\ \alpha _{s}\beta _{s}\tau _{s}}^{\tau _{s}}\}$ of a d--connection $\ ^{s}\mathbf{D}$ is introduced via a respective contraction of the coefficients of the curvature tensor (\ref{dc}). The explicit formulas for the h--/v--components, \begin{equation} \mathbf{R}_{\alpha _{s}\beta _{s}}=\{R_{i_{s}j_{s}}:=R_{\ i_{s}j_{s}k_{s}}^{k_{s}},\ \ R_{i_{1}a_{1}}:=-R_{\ i_{1}k_{1}a_{1}}^{k_{1}},...,\ R_{a_{s}i_{s}}:=R_{\ a_{s}i_{s}b_{s}}^{b_{s}}\}, \label{dricci} \end{equation} are direct recurrent $s$--modifications of those derived in Refs. \cite{vex1,vpars,vex2,vex3} (we do not repeat such details in this article). Contracting such values with the inverse d--metric, with coefficients computed for the inverse matrix of $\ ^{s}\mathbf{g}$ (\ref{dm}), we define and compute the scalar curvature of $\ ^{s}\mathbf{D},$ \begin{eqnarray} \ ^{s}R &:=&\mathbf{g}^{\alpha _{s}\beta _{s}}\mathbf{R}_{\alpha _{s}\beta _{s}}=g^{i_{s}j_{s}}R_{i_{s}j_{s}}+h^{a_{s}b_{s}}R_{a_{s}b_{s}} \notag \\ &=&R+S+\ ^{1}S+...+\ ^{s}S, \label{rdsc} \end{eqnarray} with respective h-- and v--components of the scalar curvature, $R=g^{ij}R_{ij},$ $S=h^{ab}R_{ab},$ $\ ^{1}S=h^{a_{1}b_{1}}R_{a_{1}b_{1}},...,\ ^{s}S=h^{a_{s}b_{s}}R_{a_{s}b_{s}}.$ The Einstein d--tensor $\ ^{s}\mathcal{E}=\{\mathbf{E}_{\alpha _{s}\beta _{s}}\}$ for any data $(\ ^{s}\mathbf{g},\ ^{s}\mathbf{N},\ ^{s}\mathbf{D})$ can be defined in the standard form \begin{equation} \mathbf{E}_{\alpha _{s}\beta _{s}}:=\mathbf{R}_{\alpha _{s}\beta _{s}}-\frac{1}{2}\mathbf{g}_{\alpha _{s}\beta _{s}}\ ^{s}R.
\label{einstdt} \end{equation} It should be noted that $\ ^{s}\mathbf{D}(\ ^{s}\mathcal{E})\neq 0$ and that the d--tensor $\mathbf{R}_{\alpha _{s}\beta _{s}}$ is not symmetric for a general $\ ^{s}\mathbf{D}.$ Nevertheless, we can always compute, for instance, $\ ^{s}\widehat{\mathbf{D}}(\ ^{s}\widehat{\mathcal{E}})$ as a unique distortion relation determined by (\ref{distorsrel}). This is a consequence of the nonholonomic splitting structure (\ref{whitney}). It is similar to nonholonomic mechanics, where the conservation laws become more sophisticated when we impose certain non-integrable constraints on the dynamical equations. The Einstein equations for a metric $\mathbf{g}_{\beta _{s}\gamma _{s}}$ can be postulated in standard form using the LC--connection $\ ^{s}\nabla $ (with corresponding Ricci tensor, $\ _{\shortmid }R_{\alpha _{s}\beta _{s}},$ curvature scalar, $\ _{\shortmid }^{s}R,$ and Einstein tensor, $\ _{\shortmid }E_{\alpha _{s}\beta _{s}}$), \begin{equation} \ _{\shortmid }E_{\alpha _{s}\beta _{s}}:=\ _{\shortmid }R_{\alpha _{s}\beta _{s}}-\frac{1}{2}g_{\alpha _{s}\beta _{s}}\ _{\shortmid }^{s}R=\varkappa \ _{\shortmid }T_{\alpha _{s}\beta _{s}}, \label{einsteq} \end{equation} where $\varkappa $ is the gravitational constant and $\ _{\shortmid }T_{\alpha _{s}\beta _{s}}$ is the stress--energy tensor of the matter fields.
In 4-d, there are well-defined geometric/variational and physically motivated procedures for constructing $\ _{\shortmid }T_{\alpha _{s}\beta _{s}}.$ Such values can be similarly (at least geometrically) re--defined with respect to N--adapted frames using the distortion relations (\ref{distorsrel}) and introducing extra dimensions.\footnote{We do not need additional field equations for torsion fields as in the Einstein--Cartan, gauge or string gravity theories.} The gravitational field equations (\ref{einsteq}) can be rewritten equivalently in N--adapted form for the canonical d--connection $\ ^{s}\widehat{\mathbf{D}}$, \begin{eqnarray} && \ ^{s}\widehat{\mathbf{R}}_{\ \beta _{s}\delta _{s}}-\frac{1}{2}\mathbf{g}_{\beta _{s}\delta _{s}}\ ^{s}R=\mathbf{\Upsilon }_{\beta _{s}\delta _{s}}, \label{cdeinst} \\ &&\widehat{L}_{a_{s}j_{s}}^{c_{s}}=e_{a_{s}}(N_{j_{s}}^{c_{s}}),\ \widehat{C}_{j_{s}b_{s}}^{i_{s}}=0,\ \Omega _{\ j_{s}i_{s}}^{a_{s}}=0, \label{lcconstr} \end{eqnarray} where the sources $\mathbf{\Upsilon }_{\beta _{s}\delta _{s}}$ are formally defined as in GR but for extra dimensions, with $\mathbf{\Upsilon }_{\beta _{s}\delta _{s}}\rightarrow \varkappa T_{\beta _{s}\delta _{s}}$ for $\ ^{s}\widehat{\mathbf{D}}\rightarrow \ ^{s}\nabla .$ The solutions of (\ref{cdeinst}) are found with nonholonomically induced torsion (\ref{dt}). If the conditions (\ref{lcconstr}) are satisfied, the d-torsion coefficients (\ref{dtors}) are zero and we get the LC--connection, i.e. it is possible to "extract" solutions of the standard Einstein equations. The decoupling property can be proved in explicit form working with $\ ^{s}\widehat{\mathbf{D}}$ and nonholonomic torsion configurations.
Having constructed certain classes of solutions in explicit form, with nonholonomically induced torsions and depending on various sets of integration and generating functions and parameters, we can "extract" solutions for $\ ^{s}\nabla $ by imposing at the end additional constraints resulting in zero torsion. \subsection{Nonholonomic massive f(R,T) gravity and extra dimensions} We shall consider modified gravity theories constructed on dimension shells and derived from the action \begin{equation} S=\frac{1}{16\pi }\int \delta ^{4+2s}u\sqrt{|\mathbf{g}_{\alpha _{s}\beta _{s}}|}[f(\ ^{s}R,\ ^{s}T)-\frac{\mu _{g}^{2}}{4}\mathcal{U}(\mathbf{g}_{\mu _{s}\nu _{s}},\mathbf{K}_{\alpha _{s}\beta _{s}})+\ ^{m}L]. \label{act} \end{equation} This generalizes in nonholonomic variables the modified $f(R,T)$ gravity, see reviews in \cite{odints1,odints2,capoz,odints3}, and the ghost--free massive gravity (by de Rham, Gabadadze and Tolley, dRGT) \cite{drg1,drg2,drg3}. Nontrivial mass terms allow us to solve certain problems of the bimetric theory of Hassan and Rosen \cite{hr1,hr2}, with connections to various recent research in black hole physics and modern cosmology \cite{nieu,koyam}, and allow us to model solutions of (\ref{act}) in various theories with generalized Finsler branes, stochastic processes, Clifford and phase variables, fractional derivatives etc, see details in Refs. \cite{vacarfinslcosm,vfracrf,vbranef,stavr,mavr,castro,calcagni}. For instance, the $y^{a_{s}}$--coordinates can be treated as "velocity/momentum" variables, to model stochastic and fractional processes, or can be considered as "standard" extra dimensional ones. In this paper, we shall use the units $\hbar =c=1$; the Planck mass $M_{Pl}$ is defined by $M_{Pl}^{2}=1/8\pi G$ via the 4--d Newton constant $G$, and similar units will be considered for higher dimensions.
We write $\delta ^{4+2s}u$ instead of $d^{4+2s}u$ because N--elongated differentials (\ref{nader}) are used, and we consider the constant $\mu _{g}$ as the mass parameter for gravity (for simplicity, massive gravity theories will be studied for 4--d spacetimes). The geometric and physical meaning of the values contained in this formula will be explained below. The Lagrangian density $\ ^{m}L$ in the action (\ref{act}) is used for computing the stress--energy tensor of matter. On nonholonomic manifolds/bundles such variations can be considered in N--adapted form, using the operators (\ref{nader}) and (\ref{nadif}), on the inverse metric d--tensor (\ref{dm}). For all shells, we can compute $\mathbf{T}_{\alpha _{s}\beta _{s}}=-\frac{2}{\sqrt{|\mathbf{g}_{\mu _{s}\nu _{s}}|}}\frac{\delta (\sqrt{|\mathbf{g}_{\mu _{s}\nu _{s}}|}\ ^{m}L)}{\delta \mathbf{g}^{\alpha _{s}\beta _{s}}},$ when the trace is (by definition) $\ ^{s}T:=\mathbf{g}^{\alpha _{s}\beta _{s}}\mathbf{T}_{\alpha _{s}\beta _{s}}.$ The functional $f(\ ^{s}R,\ ^{s}T)$ modifies the standard Einstein--Hilbert Lagrangian (with a scalar curvature $R$ usually taken for the Levi--Civita connection $\nabla )$ to that of modified $f$--gravity in various dimensions, but with dependence on $\ ^{s}R$ and $\ ^{s}T.$ For various applications in modern cosmology, we can assume that
\begin{equation}
\mathbf{T}_{\alpha _{s}\beta _{s}}=(\rho +p)\mathbf{v}_{\alpha _{s}}\mathbf{v}_{\beta _{s}}-p\mathbf{g}_{\alpha _{s}\beta _{s}},  \label{emt}
\end{equation}
for the approximation of perfect fluid matter with energy density $\rho $ and pressure $p$. The four--velocity $\mathbf{v}_{\alpha _{s}}$ is subjected to the conditions $\mathbf{v}_{\alpha _{s}}\mathbf{v}^{\alpha _{s}}=1$ and $\mathbf{v}^{\alpha _{s}}\widehat{\mathbf{D}}_{\beta _{s}}\mathbf{v}_{\alpha _{s}}=0,$ for $\ ^{m}L=-p$ in a corresponding local N--adapted frame.
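As a quick cross--check of these conventions (a minimal sketch, not the N--adapted calculus of this paper), one can verify symbolically that the trace of the perfect fluid tensor (\ref{emt}) with a normalized four--velocity reduces to $\rho -3p$ in 4--d; the diagonal metric and comoving velocity below are hypothetical illustrative choices:

```python
import sympy as sp

rho, p = sp.symbols('rho p')

# Hypothetical local frame: diagonal metric with signature (+,-,-,-)
# and a comoving four-velocity normalized to g_{ab} v^a v^b = 1.
g = sp.diag(1, -1, -1, -1)
v = sp.Matrix([1, 0, 0, 0])

v_low = g * v                                  # v_a = g_{ab} v^b
T = (rho + p) * (v_low * v_low.T) - p * g      # T_{ab} = (rho+p) v_a v_b - p g_{ab}

trace = (g.inv() * T).trace()                  # ^sT := g^{ab} T_{ab}
assert sp.simplify(trace - (rho - 3*p)) == 0
```

The same computation goes through for any normalized $\mathbf{v}$, since only $\mathbf{v}_{\alpha }\mathbf{v}^{\alpha }=1$ and the 4--d trace of the metric enter.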
For simplicity, we can parameterize
\begin{equation}
f(\ ^{s}R,\ ^{s}T)=\ ^{1}f(\ ^{s}R)+\ ^{2}f(\ ^{s}T)  \label{functs}
\end{equation}
and denote $\ ^{1}F(\ ^{s}R):=\partial \ ^{1}f(\ ^{s}R)/\partial \ ^{s}R$ and $\ ^{2}F(\ ^{s}T):=\partial \ ^{2}f(\ ^{s}T)/\partial \ ^{s}T.$ A mass term with "gravitational mass" $\mu _{g}$ and potential
{\small
\begin{eqnarray}
\mathcal{U}/4 &=&-12+6[\sqrt{\mathcal{S}}]+[\mathcal{S}]-[\sqrt{\mathcal{S}}]^{2}+\alpha _{3}\{18[\sqrt{\mathcal{S}}]-6[\sqrt{\mathcal{S}}]^{2}+[\sqrt{\mathcal{S}}]^{3}+2[\mathcal{S}^{3/2}]-3[\mathcal{S}]([\sqrt{\mathcal{S}}]-2)-24\}+  \label{potent} \\
&&\alpha _{4}\{[\sqrt{\mathcal{S}}](24-12[\sqrt{\mathcal{S}}]+[\sqrt{\mathcal{S}}]^{3})-12[\sqrt{\mathcal{S}}][\mathcal{S}]+[\sqrt{\mathcal{S}}]^{2}(3[\mathcal{S}]+2[\sqrt{\mathcal{S}}])+3[\mathcal{S}](4-[\mathcal{S}])-8[\mathcal{S}^{3/2}]([\sqrt{\mathcal{S}}]-1)+6[\mathcal{S}^{2}]-24\},  \notag
\end{eqnarray}}
is considered in (\ref{act}) in addition to the usual $f$--gravity term (in particular, to the Einstein--Hilbert one). The trace of a shell extended matrix $\mathcal{S}=(S_{\mu _{s}\nu _{s}})$ is denoted by $[\mathcal{S}]:=S_{\ \nu _{s}}^{\nu _{s}}.$ We understand the square root of such a matrix, $\sqrt{\mathcal{S}}=(\sqrt{\mathcal{S}}_{\ \mu _{s}}^{\nu _{s}}),$ to be a matrix for which $\sqrt{\mathcal{S}}_{\ \alpha _{s}}^{\nu _{s}}\sqrt{\mathcal{S}}_{\ \mu _{s}}^{\alpha _{s}}=S_{\ \mu _{s}}^{\nu _{s}};$ $\alpha _{3}$ and $\alpha _{4}$ are free parameters. We choose these constants so that $\mathcal{U}$ reduces to the standard 4--d potential for $s=0.$ In the works \cite{drg2,drg3} (see additional arguments in \cite{gratia}), such a nonlinearly extended Fierz--Pauli type potential was shown to result in a theory of massive gravity which seems to be free of ghost--like degrees of freedom (it takes a special form of total derivative in the absence of dynamics).
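The square--root convention can be illustrated numerically; the following sketch (with a hypothetical symmetric positive--definite $2\times 2$ matrix standing in for $\mathcal{S}$) computes the principal square root by eigendecomposition and checks the defining relation together with the traces $[\mathcal{S}]$ and $[\sqrt{\mathcal{S}}]$ entering (\ref{potent}):

```python
import numpy as np

# Hypothetical symmetric positive-definite matrix standing in for the
# shell matrix S = (S^{nu}_{mu}); the values are illustrative only.
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Principal square root via eigendecomposition (valid here because S is
# symmetric with positive eigenvalues).
w, V = np.linalg.eigh(S)
sqrtS = V @ np.diag(np.sqrt(w)) @ V.T

# Defining property used in the text: (sqrt S)(sqrt S) = S.
assert np.allclose(sqrtS @ sqrtS, S)

# The traces [S] and [sqrt S] entering the potential U.
trS, tr_sqrtS = np.trace(S), np.trace(sqrtS)
```

For non--symmetric but diagonalizable $\mathcal{S}$ with eigenvalues away from the negative real axis, the principal square root exists as well, but the eigendecomposition above would have to be replaced by a general matrix function routine.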
We emphasize that the potential generating matrix $\mathcal{S}$ is constructed in a special form which results in a d--tensor with shell decomposition, $\mathbf{K}_{\ \mu _{s}}^{\nu _{s}}=\delta _{\ \mu _{s}}^{\nu _{s}}-\sqrt{\mathcal{S}}_{\ \mu _{s}}^{\nu _{s}},$ characterizing metric fluctuations away from a fiducial (flat) 4--d spacetime and possible extra dimensions, or velocity/momentum type variables. In 4--d, the coefficients
\begin{equation}
\mathbf{S}_{\ \mu }^{\nu }=\mathbf{g}^{\nu \alpha }\eta _{\overline{\nu }\overline{\mu }}\mathbf{e}_{\alpha }s^{\overline{\nu }}\mathbf{e}_{\mu }s^{\overline{\mu }},  \label{stuk}
\end{equation}
with the Minkowski metric $\eta _{\overline{\nu }\overline{\mu }}=diag(1,1,1,-1),$ are generated by introducing four scalar St\"{u}ckelberg fields $s^{\overline{\nu }},$ which are necessary for restoring the diffeomorphism invariance. Using N--adapted shell extended values $\mathbf{g}^{\nu _{s}\alpha _{s}}$ and $\mathbf{e}_{\alpha _{s}},$ we can always transform a tensor $S_{\mu \nu }$ into a shell distinguished d--tensor $\mathbf{S}_{\mu _{s}\nu _{s}}$ characterizing nonholonomically constrained fluctuations. This is possible for the values $\mathbf{K}_{\ \mu _{s}}^{\nu _{s}},\mathbf{S}_{\ \mu _{s}}^{\nu _{s}},\sqrt{\mathcal{S}}_{\ \mu _{s}}^{\nu _{s}}$ etc. even though the shell extended $s^{\overline{\nu _{s}}}$ transform as scalar fields under coordinate and frame transforms. For simplicity, we can consider 4--d variations of the action (\ref{act}) in N--adapted form for the coefficients of the d--metric $\mathbf{g}_{\nu \alpha }$ (\ref{dm}).
The corresponding generalized/effective Einstein equations for the $f$--modified massive gravity are
\begin{equation}
\widehat{\mathbf{E}}_{\alpha \beta }=\mathbf{\Upsilon }_{\beta \delta },  \label{efcdeq}
\end{equation}
where the source encodes three terms of different nature,
\begin{equation}
\mathbf{\Upsilon }_{\beta \delta }=\ ^{ef}\eta \ G\ \mathbf{T}_{\beta \delta }+\ ^{ef}\mathbf{T}_{\beta \delta }+\mu _{g}^{2}\ ^{K}\mathbf{T}_{\beta \delta }.  \label{effectsource}
\end{equation}
The first component is determined by the usual matter fields with energy--momentum tensor $\mathbf{T}_{\beta \delta },$ but with effective polarization of the gravitational constant, $\ ^{ef}\eta =[1+\ ^{2}F/8\pi ]/\ ^{1}F.$ The second term encodes the $f$--modifications of the energy--momentum tensor,
\begin{equation}
\ ^{ef}\mathbf{T}_{\beta \delta }=[\frac{1}{2}(\ ^{1}f-\ ^{1}F\ \widehat{R}+2p\ ^{2}F+\ ^{2}f)\mathbf{g}_{\beta \delta }-(\mathbf{g}_{\beta \delta }\ \widehat{\mathbf{D}}_{\alpha }\widehat{\mathbf{D}}^{\alpha }-\widehat{\mathbf{D}}_{\beta }\widehat{\mathbf{D}}_{\delta })\ ^{1}F]/\ ^{1}F.  \label{efm}
\end{equation}
The massive gravity contribution, i.e.
the third term in the source, is computed as a dimensionless effective stress--energy tensor
\begin{eqnarray*}
\ ^{K}\mathbf{T}_{\alpha \beta }&:=&\frac{1}{4\sqrt{|\mathbf{g}_{\mu \nu }|}}\frac{\delta (\sqrt{|\mathbf{g}_{\mu \nu }|}\ \mathcal{U})}{\delta \mathbf{g}^{\alpha \beta }} \\
&=&-\frac{1}{12}\{\ \mathcal{U}\mathbf{g}_{\alpha \beta }/4-2\mathbf{S}_{\alpha \beta }+2([\sqrt{\mathcal{S}}]-3)\sqrt{\mathcal{S}}_{\alpha \beta }+ \\
&&\alpha _{3}[3(-6+4[\sqrt{\mathcal{S}}]+[\sqrt{\mathcal{S}}]^{2}-[\mathcal{S}])\sqrt{\mathcal{S}}_{\alpha \beta }+6([\sqrt{\mathcal{S}}]-2)\mathbf{S}_{\alpha \beta }-\mathcal{S}_{\alpha \beta }^{3/2}]- \\
&&\alpha _{4}[24\left( \mathcal{S}_{\alpha \beta }^{2}-([\sqrt{\mathcal{S}}]-1)\mathcal{S}_{\alpha \beta }^{3/2}\right) ]+12(2-2[\sqrt{\mathcal{S}}]+[\mathcal{S}]+[\sqrt{\mathcal{S}}]^{2})\mathbf{S}_{\alpha \beta }+ \\
&&(24-24[\sqrt{\mathcal{S}}]+12[\sqrt{\mathcal{S}}]^{2}-[\sqrt{\mathcal{S}}]^{3}-12[\mathcal{S}]+12[\mathcal{S}][\sqrt{\mathcal{S}}]-8[\mathcal{S}^{3/2}])\sqrt{\mathcal{S}}_{\alpha \beta }\}.
\end{eqnarray*}
The value $\ ^{K}\mathbf{T}_{\alpha \beta }$ encodes bi--metric configurations in which the second (fiducial) d--metric $\mathbf{f}_{\alpha \mu }=\eta _{\overline{\nu }\overline{\mu }}\mathbf{e}_{\alpha }s^{\overline{\nu }}\mathbf{e}_{\mu }s^{\overline{\mu }}$ is determined by the St\"{u}ckelberg fields $s^{\overline{\nu }}.$ The potential $\mathcal{U}$ (\ref{potent}) defines interactions between $\mathbf{g}_{\mu \nu }$ and $\mathbf{f}_{\mu \nu }$ via $\sqrt{\mathcal{S}}_{\ \mu }^{\nu }=\sqrt{\mathbf{g}^{\nu \alpha }\mathbf{f}_{\alpha \mu }}$ and $\mathcal{S}_{\ \mu }^{\nu }:=\mathbf{g}^{\nu \alpha }\mathbf{f}_{\alpha \mu }.$ We can construct exact solutions in explicit form and study bi--metric gravity models with $\ ^{K}\mathbf{T}_{\alpha \beta }=\ \lambda (x^{k})\ \mathbf{g}_{\alpha \beta },$ which can be generated by such configurations of $s^{\overline{\nu }}$ when $\mathbf{g}_{\mu \nu }=\iota ^{2}(x^{k})\mathbf{f}_{\mu \nu }$ with a possibly nontrivial conformal factor $\iota ^{2}.$ Such nonholonomic configurations allow us to compute, using (\ref{stuk}), the diagonal matrix $\mathcal{S}_{\ \mu }^{\nu }=\iota ^{-2}\delta _{\ \mu }^{\nu }.$ We can express the effective polarized anisotropic constant encoding the contributions of $s^{\overline{\nu }}$ as a functional $\lambda \lbrack \iota ^{2}(x^{k})].$ The theories with gravitational field equations (\ref{efcdeq}) are similar to the Einstein theory, but for a different metric compatible linear connection, $\widehat{\mathbf{D}},$ and with nonlinear "gravitationally polarized" coupling in the effective source $\mathbf{\Upsilon }_{\beta \delta }$ (\ref{effectsource}).
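The statement that $\mathbf{g}_{\mu \nu }=\iota ^{2}\mathbf{f}_{\mu \nu }$ yields the diagonal matrix $\mathcal{S}_{\ \mu }^{\nu }=\iota ^{-2}\delta _{\ \mu }^{\nu }$ can be checked numerically at a point; the fiducial metric and conformal factor below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fiducial metric f: a symmetric invertible 4x4 matrix,
# taken positive definite only to keep the example simple.
A = rng.normal(size=(4, 4))
f = A @ A.T + 4.0 * np.eye(4)

iota2 = 2.5                      # value of the conformal factor iota^2 at a point
g = iota2 * f                    # g_{mu nu} = iota^2 f_{mu nu}

# S^{nu}_{mu} = g^{nu alpha} f_{alpha mu}
S = np.linalg.inv(g) @ f
assert np.allclose(S, np.eye(4) / iota2)
```

Since $\mathcal{S}$ is proportional to the identity here, its square root is simply $\iota ^{-1}\delta _{\ \mu }^{\nu },$ which is what makes the $\ ^{K}\mathbf{T}_{\alpha \beta }\sim \lambda (x^{k})\mathbf{g}_{\alpha \beta }$ parameterization possible.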
In the next sections, we shall prove that such nonlinear systems of PDEs can be integrated in general form for any N--adapted parameterizations
\begin{equation}
\mathbf{\Upsilon }_{~\delta }^{\beta }=diag[\mathbf{\Upsilon }_{\alpha };\ \mathbf{\Upsilon }_{~1}^{1}=\mathbf{\Upsilon }_{~2}^{2}=\Upsilon (x^{k},y^{3}),\ \mathbf{\Upsilon }_{~3}^{3}=\mathbf{\Upsilon }_{~4}^{4}=~^{v}\Upsilon (x^{k})].  \label{source}
\end{equation}
In particular, we can consider
\begin{equation}
\Upsilon =~^{v}\Upsilon =\Lambda =const,  \label{source1}
\end{equation}
for an effective cosmological constant $\Lambda ,$ see details in \cite{vpars,vex1,vex2,vex3,veym}. It should be noted that $\widehat{\mathbf{D}}_{\delta }\ ^{1}F_{\mid \Upsilon =\Lambda }=0$ in (\ref{efm}) if we prescribe a functional dependence with $\ \widehat{R}=const$ (we have to choose the necessary types of N--coefficients and the respective canonical d--connection structure). For certain general distributions of matter fields and effective matter, we can prescribe such values for (\ref{source1}) with $\mathbf{T}_{\beta \delta }=\check{T}(x^{k})\mathbf{g}_{\beta \delta }$ and $\ ^{s}R=\widehat{\Lambda }$ in (\ref{source}); then we can write
\begin{eqnarray}
\Upsilon &=&\widetilde{\Lambda }+\widetilde{\lambda },\mbox{ for }\widetilde{\lambda }=\mu _{g}^{2}\ \lambda (x^{k}),  \notag \\
\widetilde{\Lambda } &=&\ ^{ef}\eta \ G\ \check{T}(x^{k})+\frac{1}{2}(\ ^{1}f(\widehat{\Lambda })-\widehat{\Lambda }\ ^{1}F(\widehat{\Lambda })+2p\ ^{2}F(\check{T})+\ ^{2}f(\check{T})),  \notag \\
\ ^{ef}\eta &=&[1+\ ^{2}F(\check{T})/8\pi ]/\ ^{1}F(\widehat{\Lambda }).  \label{source1a}
\end{eqnarray}
In general, any term may depend on the coordinates $x^{i},$ but via re--definition of the generating functions they can be transformed into certain effective constants.
Prescribing the values $\widehat{\Lambda },\check{T},\ \lambda ,p$ and the functionals $\ ^{1}f$ and $\ ^{2}f,$ we describe a nonholonomically constrained dynamics of matter and effective matter fields with respect to N--adapted frames. All the above constructions can be extended to extra shells $s=1,2,...$ via formal re--definition of indices for higher dimensions. Under very general assumptions, the effective source can be parameterized in the form
\begin{equation}
\mathbf{\Upsilon }_{~\delta _{s}}^{\beta _{s}}=(\ ^{s}\widetilde{\Lambda }+\ ^{s}\widetilde{\lambda })\mathbf{\delta }_{~\delta _{s}}^{\beta _{s}}.  \label{source1b}
\end{equation}
This formal diagonal form is fixed with respect to N--adapted frames and (see the next section) for a corresponding re--definition of certain generating functions. Such $(\ ^{s}\widetilde{\Lambda }+\ ^{s}\widetilde{\lambda })$--terms encode, via nonholonomic constraints and the canonical d--connection $\ ^{s}\widehat{\mathbf{D}},$ various physically important information on modifications of the GR theory by $f$--functionals and/or massive gravity terms in various dimensions. LC--configurations can be extracted in all such types of theories by imposing additional constraints when $\widehat{\mathbf{D}}_{\mathcal{T}=0}\rightarrow \nabla $.

\section{Decoupling \& Integration of (Modified) Einstein Equations}
\label{s3}

In this section, we show how the gravitational field equations (\ref{cdeinst}) with possible constraints (\ref{lcconstr}), or (\ref{einsteq}), can be formally integrated in very general forms for generic off--diagonal metrics with coefficients depending on all spacetime coordinates.

\subsection{Off--diagonal configurations with one Killing symmetry}

In the simplest form, the decoupling property can be proven for a certain ansatz with at least one Killing symmetry.
\subsubsection{Ansatz for metrics, N--connections, and gravitational polarizations}

Let us consider metrics of type (\ref{dm}) which via frame transformations (\ref{metrtransf}) (for N--adapted transforms, $\mathbf{g}_{\alpha _{s}\beta _{s}}=e_{\ \alpha _{s}}^{\alpha _{s}^{\prime }}e_{\ \beta _{s}}^{\beta _{s}^{\prime }}\mathbf{g}_{\alpha _{s}^{\prime }\beta _{s}^{\prime }}$) can be parameterized in the form\footnote{In our former works, we used a quite different system of notation.}
\begin{eqnarray}
\ \ _{K}^{s}\mathbf{g} &=&\ g_{i}(x^{k})dx^{i}\otimes dx^{i}+h_{a}(x^{k},y^{4})\mathbf{e}^{a}\otimes \mathbf{e}^{a}+  \label{ansk} \\
&&h_{a_{1}}(u^{\alpha },y^{6})\ \mathbf{e}^{a_{1}}\otimes \mathbf{e}^{a_{1}}+h_{a_{2}}(u^{\alpha _{1}},y^{8})\ \mathbf{e}^{a_{2}}\otimes \mathbf{e}^{a_{2}}+....+\ h_{a_{s}}(\ u^{\alpha _{s-1}},y^{a_{s}})\mathbf{e}^{a_{s}}\otimes \mathbf{e}^{a_{s}},  \notag
\end{eqnarray}
where
\begin{eqnarray*}
\mathbf{e}^{a} &=&dy^{a}+N_{i}^{a}dx^{i},\mbox{ for }N_{i}^{3}=n_{i}(x^{k},y^{4}),\ N_{i}^{4}=w_{i}(x^{k},y^{4}); \\
\mathbf{e}^{a_{1}} &=&dy^{a_{1}}+N_{\alpha }^{a_{1}}du^{\alpha },\mbox{ for }N_{\alpha }^{5}=\ ^{1}n_{\alpha }(u^{\beta },y^{6}),\ N_{\alpha }^{6}=\ ^{1}w_{\alpha }(u^{\beta },y^{6}); \\
\mathbf{e}^{a_{2}} &=&dy^{a_{2}}+N_{\alpha _{1}}^{a_{2}}du^{\alpha _{1}},\mbox{ for }N_{\alpha _{1}}^{7}=\ ^{2}n_{\alpha _{1}}(u^{\beta _{1}},y^{8}),\ N_{\alpha _{1}}^{8}=\ ^{2}w_{\alpha _{1}}(u^{\beta _{1}},y^{8}); \\
&&.... \\
\mathbf{e}^{a_{s}} &=&dy^{a_{s}}+N_{\alpha _{s-1}}^{a_{s}}du^{\alpha _{s-1}},\mbox{ for }N_{\alpha _{s-1}}^{4+2s-1}=\ ^{s}n_{\alpha _{s-1}}(u^{\beta _{s-1}},y^{4+2s}),\ N_{\alpha _{s-1}}^{4+2s}=\ ^{s}w_{\alpha _{s-1}}(u^{\beta _{s-1}},y^{4+2s}).
\end{eqnarray*}
Such an ansatz possesses a Killing vector $\partial /\partial y^{4+2s-1}$ because the coordinate $y^{4+2s-1}$ does not enter the coefficients of such metrics.
With respect to coordinate frames, for instance, for $\dim \ ^{s}\mathbf{V}=6,\ s=1,\ u^{\alpha _{1}}=(x^{1},x^{2},y^{3},y^{4},y^{5},y^{6}),$ the metrics (\ref{ansk}) are written in a form similar to that in Figure \ref{fig1}.

{\small
\begin{sidewaysfigure}
\centering
{\scriptsize
\begin{eqnarray*}
&& g_{\alpha _{1}\beta _{1}}= \\
&& \\
&& \left[
\begin{array}{cccccc}
\begin{array}{c}
g_{1}+(n_{1})^{2}h_{3}+(w_{1})^{2}h_{4} \\
+(\ ^{1}n_{1})^{2}h_{5}+(\ ^{1}w_{1})^{2}h_{6}
\end{array}
&
\begin{array}{c}
n_{1}n_{2}h_{3}+w_{1}w_{2}h_{4}+ \\
\ ^{1}n_{1}\ ^{1}n_{2}h_{5}+\ ^{1}w_{1}\ ^{1}w_{2}h_{6}
\end{array}
&
\begin{array}{c}
n_{1}h_{3}+ \\
\ ^{1}n_{1}\ ^{1}n_{3}h_{5}+\ ^{1}w_{1}\ ^{1}w_{3}h_{6}
\end{array}
&
\begin{array}{c}
w_{1}h_{4}+ \\
\ ^{1}n_{1}\ ^{1}n_{4}h_{5}+\ ^{1}w_{1}\ ^{1}w_{4}h_{6}
\end{array}
& \ ^{1}n_{1}h_{5} & \ ^{1}w_{1}h_{6} \\
& & & & & \\
\begin{array}{c}
n_{1}n_{2}h_{3}+w_{1}w_{2}h_{4}+ \\
\ ^{1}n_{1}\ ^{1}n_{2}h_{5}+\ ^{1}w_{1}\ ^{1}w_{2}h_{6}
\end{array}
&
\begin{array}{c}
g_{2}+(n_{2})^{2}h_{3}+(w_{2})^{2}h_{4} \\
+(\ ^{1}n_{2})^{2}h_{5}+(\ ^{1}w_{2})^{2}h_{6}
\end{array}
&
\begin{array}{c}
n_{2}h_{3}+ \\
\ ^{1}n_{2}\ ^{1}n_{3}h_{5}+\ ^{1}w_{2}\ ^{1}w_{3}h_{6}
\end{array}
&
\begin{array}{c}
w_{2}h_{4}+ \\
\ ^{1}n_{2}\ ^{1}n_{4}h_{5}+\ ^{1}w_{2}\ ^{1}w_{4}h_{6}
\end{array}
& \ ^{1}n_{2}h_{5} & \ ^{1}w_{2}h_{6} \\
& & & & & \\
\begin{array}{c}
n_{1}h_{3}+ \\
\ ^{1}n_{1}\ ^{1}n_{3}h_{5}+\ ^{1}w_{1}\ ^{1}w_{3}h_{6}
\end{array}
&
\begin{array}{c}
n_{2}h_{3}+ \\
\ ^{1}n_{2}\ ^{1}n_{3}h_{5}+\ ^{1}w_{2}\ ^{1}w_{3}h_{6}
\end{array}
&
\begin{array}{c}
h_{3}+(\ ^{1}n_{3})^{2}h_{5}+(\ ^{1}w_{3})^{2}h_{6}
\end{array}
&
\begin{array}{c}
\ ^{1}n_{3}\ ^{1}n_{4}h_{5}+\ ^{1}w_{3}\ ^{1}w_{4}h_{6}
\end{array}
& \ ^{1}n_{3}h_{5} & \ ^{1}w_{3}h_{6} \\
& & & & & \\
\begin{array}{c}
w_{1}h_{4}+ \\
\ ^{1}n_{1}\ ^{1}n_{4}h_{5}+\ ^{1}w_{1}\ ^{1}w_{4}h_{6}
\end{array}
&
\begin{array}{c}
w_{2}h_{4}+ \\
\ ^{1}n_{2}\ ^{1}n_{4}h_{5}+\ ^{1}w_{2}\ ^{1}w_{4}h_{6}
\end{array}
&
\begin{array}{c}
\ ^{1}n_{3}\ ^{1}n_{4}h_{5}+\ ^{1}w_{3}\ ^{1}w_{4}h_{6}
\end{array}
&
\begin{array}{c}
h_{4}+(\ ^{1}n_{4})^{2}h_{5}+(\ ^{1}w_{4})^{2}h_{6}
\end{array}
& \ ^{1}n_{4}h_{5} & \ ^{1}w_{4}h_{6} \\
& & & & & \\
\ ^{1}n_{1}h_{5} & \ ^{1}n_{2}h_{5} & \ ^{1}n_{3}h_{5} & \ ^{1}n_{4}h_{5} & h_{5} & 0 \\
& & & & & \\
\ ^{1}w_{1}h_{6} & \ ^{1}w_{2}h_{6} & \ ^{1}w_{3}h_{6} & \ ^{1}w_{4}h_{6} & 0 & h_{6}
\end{array}
\right]  \label{odm}
\end{eqnarray*}
}
\caption{Generic off--diagonal metrics with respect to coordinate frames in 6--d spaces}
\label{fig1}
\end{sidewaysfigure}
}

We note that nonholonomic 2+2+... parameterizations of type (\ref{fansatz}) prescribe certain algebraic symmetries of metrics both with respect to N--adapted and/or coordinate frames. For instance, a splitting 3+3+3+... may encode more complex topological configurations, but it is not possible to integrate the Einstein gravitational equations in such cases for a general "non--Killing" ansatz. In a more general context, a d--metric (\ref{ansk}) can be the result of nonholonomic deformations of some "primary" geometric/physical data into certain "target" data,
\begin{equation*}
\mbox{[ primary ]}(\ _{\circ }^{s}\mathbf{g},\ _{\circ }^{s}\mathbf{N},\ _{\circ }^{s}\widehat{\mathbf{D}})\ \rightarrow \mbox{[ target ]}(\ _{\eta }^{s}\mathbf{g}=\ ^{s}\mathbf{g},\ _{\eta }^{s}\mathbf{N}=\ ^{s}\mathbf{N},\ _{\eta }^{s}\widehat{\mathbf{D}}=\ ^{s}\widehat{\mathbf{D}}).
\end{equation*}
In this work, the values labeled by "$\circ $" may or may not define exact solutions in a gravity theory.
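The block structure of the matrix in Figure \ref{fig1} can be reproduced numerically from the quadratic form $\ _{K}^{s}\mathbf{g}$ with N--elongated coframes, i.e. $g_{\alpha \beta }=\sum_{A}\hat{g}_{A}\,e_{\ \alpha }^{A}e_{\ \beta }^{A}$; a sketch with hypothetical numerical coefficients fixed at one point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values of the d-metric and N-connection coefficients at a point.
g1, g2, h3, h4, h5, h6 = rng.uniform(0.5, 2.0, 6)
n = rng.normal(size=2)      # n_i,    i = 1, 2
w = rng.normal(size=2)      # w_i
n1 = rng.normal(size=4)     # ^1n_alpha, alpha = 1..4
w1 = rng.normal(size=4)     # ^1w_alpha

# Rows of E: the N-adapted coframe e^A expressed in the coordinate coframe
# (dx^1, dx^2, dy^3, dy^4, dy^5, dy^6).
E = np.eye(6)
E[2, :2] = n                # e^3 = dy^3 + n_i dx^i
E[3, :2] = w                # e^4 = dy^4 + w_i dx^i
E[4, :4] = n1               # e^5 = dy^5 + ^1n_alpha du^alpha
E[5, :4] = w1               # e^6 = dy^6 + ^1w_alpha du^alpha

G = E.T @ np.diag([g1, g2, h3, h4, h5, h6]) @ E

# Spot-check entries against the matrix displayed in Figure 1.
assert np.isclose(G[0, 0], g1 + n[0]**2*h3 + w[0]**2*h4 + n1[0]**2*h5 + w1[0]**2*h6)
assert np.isclose(G[0, 2], n[0]*h3 + n1[0]*n1[2]*h5 + w1[0]*w1[2]*h6)
assert np.isclose(G[0, 4], n1[0]*h5) and np.isclose(G[0, 5], w1[0]*h6)
assert np.isclose(G[4, 4], h5) and np.isclose(G[4, 5], 0.0)
```

The vanishing $(5,6)$--entry and the simple last two rows reflect the algebraic symmetries prescribed by the 2+2+2 parameterization.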
The metrics with "$\eta $" will always be constrained to define a solution of the gravitational field equations (\ref{cdeinst}), or (\ref{einsteq}). For simplicity, we shall use a prime ansatz of type
\begin{eqnarray}
\ \ _{\circ }^{s}\mathbf{g} &=&\ \mathring{g}_{i}(x^{k})dx^{i}\otimes dx^{i}+\mathring{h}_{a}(x^{k},y^{4})\mathbf{\mathring{e}}^{a}\otimes \mathbf{\mathring{e}}^{a}+\epsilon _{a_{1}}\ dy^{a_{1}}\otimes \ dy^{a_{1}}+....+\ \epsilon _{a_{s}}dy^{a_{s}}\otimes \ dy^{a_{s}},  \notag \\
\mathbf{\mathring{e}}^{a} &=&dy^{a}+\mathring{N}_{i}^{a}(x^{k},y^{4})dx^{i},\mbox{ with }\mathring{N}_{i}^{3}=\mathring{n}_{i},\ \mathring{N}_{i}^{4}=\mathring{w}_{i},  \label{ansprime}
\end{eqnarray}
where the constants $\epsilon _{a_{s}}$ take values $+1$ and/or $-1$ depending on the signature of the higher dimensional spacetime and on $(\mathring{g}_{i},\mathring{h}_{a};\mathring{N}_{i}^{a}).$ Such an ansatz may define, for instance, a Kerr black hole (or a wormhole) solution trivially embedded into a $(4+2s)$--dimensional spacetime if the corresponding coefficients are constructed respectively from different types of solutions of the gravitational field equations.
We choose the target metric ansatz (\ref{ansk}) as
\begin{eqnarray}
g_{\alpha _{s}} &=&\eta _{\alpha _{s}}(u^{\beta _{s}})\mathring{g}_{\alpha _{s}},\ N_{i_{s}}^{a_{s}}=\ _{\eta }N_{i_{s}}^{a_{s}}(u^{\beta _{s-1}},y^{4+2s}),  \label{etad} \\
n_{i} &=&\eta _{i}^{3}\mathring{n}_{i},\ w_{i}=\eta _{i}^{4}\mathring{w}_{i}\ \mbox{ (no summation on }i),  \notag
\end{eqnarray}
with so--called gravitational "polarization" functions and extra dimensional N--coefficients, $\eta _{\alpha _{s}},\eta _{i}^{a}$ and $\ _{\eta }N_{i_{s}}^{a_{s}}.$ In order to consider the limits
\begin{equation*}
(\ _{\eta }^{s}\mathbf{g},\ _{\eta }^{s}\mathbf{N},\ _{\eta }^{s}\widehat{\mathbf{D}})\rightarrow (\ _{\circ }^{s}\mathbf{g},\ _{\circ }^{s}\mathbf{N},\ _{\circ }^{s}\widehat{\mathbf{D}}),\mbox{ for }\varepsilon \rightarrow 0,
\end{equation*}
depending on a small parameter $\varepsilon ,$ $0\leq \varepsilon \ll 1,$ we shall introduce "small" polarizations of type $\eta =1+\varepsilon \chi (u...)$ and $\ _{\eta }N_{i_{s}}^{a_{s}}=\varepsilon n_{i_{s}}^{a_{s}}(u...).$ It should be noted that if a target d--metric (\ref{ansk}) is generated by a nonholonomic deformation with nontrivial $\eta $--, or $\chi $--functions, it contains both "old" geometric/physical information on a prime metric (\ref{ansprime}) and additional data for a new class of exact solutions.

\subsubsection{Ricci d--tensors and N--adapted sources}

Let us consider an ansatz (\ref{ansk}) with $\partial _{4}h_{a}\neq 0,\partial _{6}h_{a_{1}}\neq 0,...,\partial _{4+2s}h_{a_{s}}\neq 0,$\footnote{We can construct more special classes of solutions if such conditions are not satisfied; for simplicity, we suppose that via frame transforms it is always possible to introduce necessary type parameterizations for d--metrics.} where the partial derivatives are denoted, for instance, $\partial _{1}h=\partial h/\partial x^{1},$ $\partial _{4}h=\partial h/\partial y^{4},$ and $\partial _{44}h=\partial ^{2}h/\partial y^{4}\partial y^{4}$.
A tedious computation of the coefficients of the canonical d--connection $\widehat{\mathbf{\Gamma }}_{\ \alpha _{s}\beta _{s}}^{\gamma _{s}}$ (\ref{candcon}), and then of the corresponding nontrivial coefficients of the Ricci d--tensor $\mathbf{\hat{R}}_{\alpha _{s}\beta _{s}}$ (\ref{dricci}), see similar details in \cite{vex1,vpars,vex2,vex3}, results in the nontrivial values
\begin{eqnarray}
\widehat{R}_{1}^{1} &=&\widehat{R}_{2}^{2}=-\frac{1}{2g_{1}g_{2}}[\partial _{11}g_{2}-\frac{(\partial _{1}g_{1})(\partial _{1}g_{2})}{2g_{1}}-\frac{\left( \partial _{1}g_{2}\right) ^{2}}{2g_{2}}+\partial _{22}g_{1}-\frac{(\partial _{2}g_{1})(\partial _{2}g_{2})}{2g_{2}}-\frac{\left( \partial _{2}g_{1}\right) ^{2}}{2g_{1}}],  \label{equ1} \\
\widehat{R}_{3}^{3} &=&\widehat{R}_{4}^{4}=-\frac{1}{2h_{3}h_{4}}[\partial _{44}h_{3}-\frac{\left( \partial _{4}h_{3}\right) ^{2}}{2h_{3}}-\frac{(\partial _{4}h_{3})(\partial _{4}h_{4})}{2h_{4}}],  \label{equ2} \\
\widehat{R}_{3k} &=&\frac{h_{3}}{2h_{4}}\partial _{44}n_{k}+\left( \frac{h_{3}}{h_{4}}\partial _{4}h_{4}-\frac{3}{2}\partial _{4}h_{3}\right) \frac{\partial _{4}n_{k}}{2h_{4}},  \label{equ3} \\
\widehat{R}_{4k} &=&\frac{w_{k}}{2h_{3}}[\partial _{44}h_{3}-\frac{\left( \partial _{4}h_{3}\right) ^{2}}{2h_{3}}-\frac{(\partial _{4}h_{3})(\partial _{4}h_{4})}{2h_{4}}]+\frac{\partial _{4}h_{3}}{4h_{3}}(\frac{\partial _{k}h_{3}}{h_{3}}+\frac{\partial _{k}h_{4}}{h_{4}})-\frac{\partial _{k}(\partial _{4}h_{3})}{2h_{3}},  \label{equ4}
\end{eqnarray}
and, on the shells $s=1,2,...$,
\begin{eqnarray}
\widehat{R}_{5}^{5} &=&\widehat{R}_{6}^{6}=-\frac{1}{2h_{5}h_{6}}[\partial _{66}h_{5}-\frac{\left( \partial _{6}h_{5}\right) ^{2}}{2h_{5}}-\frac{(\partial _{6}h_{5})(\partial _{6}h_{6})}{2h_{6}}],  \label{equ5} \\
\widehat{R}_{5\tau } &=&\frac{h_{5}}{2h_{6}}\partial _{66}\ ^{1}n_{\tau }+\left( \frac{h_{5}}{h_{6}}\partial _{6}h_{6}-\frac{3}{2}\partial _{6}h_{5}\right) \frac{\partial _{6}\ ^{1}n_{\tau }}{2h_{6}},  \label{equ6} \\
\widehat{R}_{6\tau } &=&\frac{\ ^{1}w_{\tau }}{2h_{5}}[\partial _{66}h_{5}-\frac{\left( \partial _{6}h_{5}\right) ^{2}}{2h_{5}}-\frac{(\partial _{6}h_{5})(\partial _{6}h_{6})}{2h_{6}}]+\frac{\partial _{6}h_{5}}{4h_{5}}(\frac{\partial _{\tau }h_{5}}{h_{5}}+\frac{\partial _{\tau }h_{6}}{h_{6}})-\frac{\partial _{\tau }(\partial _{6}h_{5})}{2h_{5}},  \label{equ7}
\end{eqnarray}
when $\tau =1,2,3,4;$
\begin{eqnarray}
\widehat{R}_{7}^{7} &=&\widehat{R}_{8}^{8}=-\frac{1}{2h_{7}h_{8}}[\partial _{88}h_{7}-\frac{\left( \partial _{8}h_{7}\right) ^{2}}{2h_{7}}-\frac{(\partial _{8}h_{7})(\partial _{8}h_{8})}{2h_{8}}],  \notag \\
\widehat{R}_{7\tau _{1}} &=&\frac{h_{7}}{2h_{8}}\partial _{88}\ ^{2}n_{\tau _{1}}+\left( \frac{h_{7}}{h_{8}}\partial _{8}h_{8}-\frac{3}{2}\partial _{8}h_{7}\right) \frac{\partial _{8}\ ^{2}n_{\tau _{1}}}{2h_{8}},  \notag \\
\widehat{R}_{8\tau _{1}} &=&\frac{\ ^{2}w_{\tau _{1}}}{2h_{7}}[\partial _{88}h_{7}-\frac{\left( \partial _{8}h_{7}\right) ^{2}}{2h_{7}}-\frac{(\partial _{8}h_{7})(\partial _{8}h_{8})}{2h_{8}}]+\frac{\partial _{8}h_{7}}{4h_{7}}(\frac{\partial _{\tau _{1}}h_{7}}{h_{7}}+\frac{\partial _{\tau _{1}}h_{8}}{h_{8}})-\frac{\partial _{\tau _{1}}(\partial _{8}h_{7})}{2h_{7}},  \label{equ4d}
\end{eqnarray}
when $\tau _{1}=1,2,3,4,5,6.$ Similar formulas can be written recurrently for arbitrary finite extra dimensions. Using the above formulas, we can compute the Ricci scalar (\ref{rdsc}) for $\ ^{s}\widehat{\mathbf{D}}$ (for simplicity, we consider $s=1$), $\ ^{s}\widehat{R}=2(\widehat{R}_{1}^{1}+\widehat{R}_{3}^{3}+\widehat{R}_{5}^{5}).$ There are certain N--adapted symmetries of the Einstein d--tensor (\ref{einstdt}) for the ansatz (\ref{ansk}), $\widehat{E}_{1}^{1}=\widehat{E}_{2}^{2}=-(\widehat{R}_{3}^{3}+\widehat{R}_{5}^{5}),\ \widehat{E}_{3}^{3}=\widehat{E}_{4}^{4}=-(\widehat{R}_{1}^{1}+\widehat{R}_{5}^{5}),\ \widehat{E}_{5}^{5}=\widehat{E}_{6}^{6}=-(\widehat{R}_{1}^{1}+\widehat{R}_{3}^{3})$.
In a similar form, we find symmetries for $s=2$:
\begin{eqnarray*}
\widehat{E}_{1}^{1} &=&\widehat{E}_{2}^{2}=-(\widehat{R}_{3}^{3}+\widehat{R}_{5}^{5}+\widehat{R}_{7}^{7}),\ \widehat{E}_{3}^{3}=\widehat{E}_{4}^{4}=-(\widehat{R}_{1}^{1}+\widehat{R}_{5}^{5}+\widehat{R}_{7}^{7}), \\
\widehat{E}_{5}^{5} &=&\widehat{E}_{6}^{6}=-(\widehat{R}_{1}^{1}+\widehat{R}_{3}^{3}+\widehat{R}_{7}^{7}),\ \widehat{E}_{7}^{7}=\widehat{E}_{8}^{8}=-(\widehat{R}_{1}^{1}+\widehat{R}_{3}^{3}+\widehat{R}_{5}^{5}).
\end{eqnarray*}
We search for solutions of the nonholonomic Einstein equations (\ref{equ1})--(\ref{equ4d}) with nontrivial $\Lambda $--sources written in the form
\begin{eqnarray}
\widehat{R}_{1}^{1} &=&\widehat{R}_{2}^{2}=-\Lambda (x^{k}),\ \widehat{R}_{3}^{3}=\widehat{R}_{4}^{4}=-\ ^{v}\Lambda (x^{k},y^{4}),  \label{sourc1} \\
\widehat{R}_{5}^{5} &=&\widehat{R}_{6}^{6}=-\ _{1}^{v}\Lambda (u^{\beta },y^{6}),\ \widehat{R}_{7}^{7}=\widehat{R}_{8}^{8}=-\ _{2}^{v}\Lambda (u^{\beta _{1}},y^{8}).  \notag
\end{eqnarray}
Similar equations can be written recurrently for arbitrary finite extra dimensions. This constrains us to choose such N--adapted frame transformations that the sources $\mathbf{\Upsilon }_{\beta _{s}\delta _{s}}$ in (\ref{cdeinst}) are parameterized as
\begin{eqnarray*}
\mathbf{\Upsilon }_{1}^{1} &=&\mathbf{\Upsilon }_{2}^{2}=\ ^{v}\Lambda +\ _{1}^{v}\Lambda +\ _{2}^{v}\Lambda ,\ \mathbf{\Upsilon }_{3}^{3}=\mathbf{\Upsilon }_{4}^{4}=\Lambda +\ _{1}^{v}\Lambda +\ _{2}^{v}\Lambda , \\
\mathbf{\Upsilon }_{5}^{5} &=&\mathbf{\Upsilon }_{6}^{6}=\Lambda +\ ^{v}\Lambda +\ _{2}^{v}\Lambda ,\ \mathbf{\Upsilon }_{7}^{7}=\mathbf{\Upsilon }_{8}^{8}=\Lambda +\ ^{v}\Lambda +\ _{1}^{v}\Lambda .
\end{eqnarray*}
For certain models of extra dimensional gravity, we can take $\ _{1}^{v}\Lambda =\ _{2}^{v}\Lambda =\ ^{\circ }\Lambda =const.$ Re--defining the generating functions (see below) for non--vacuum configurations, we can always introduce such effective sources.
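For the h--components, formula (\ref{equ1}) coincides with the Ricci tensor of the 2--d metric $g_{1}(dx^{1})^{2}+g_{2}(dx^{2})^{2}$ computed with its Levi--Civita connection; the following sympy sketch verifies this as an independent cross--check (it is a 2--d Riemannian computation from first principles, not the full canonical d--connection calculus):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
g1 = sp.Function('g1')(x1, x2)
g2 = sp.Function('g2')(x1, x2)

coords = [x1, x2]
g = sp.diag(g1, g2)
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols Gamma^a_{bc} of the 2-d metric diag(g1, g2)
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d])) for d in range(2))

def Ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
    #        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    return (sum(sp.diff(Gamma(a, b, c), coords[a]) for a in range(2))
            - sum(sp.diff(Gamma(a, b, a), coords[c]) for a in range(2))
            + sum(Gamma(a, a, d)*Gamma(d, b, c) for a in range(2) for d in range(2))
            - sum(Gamma(a, c, d)*Gamma(d, b, a) for a in range(2) for d in range(2)))

# Mixed component R^1_1 computed from first principles
R11 = sum(ginv[0, d] * Ricci(d, 0) for d in range(2))

# Formula (equ1) from the text
D = sp.diff
formula = -(D(g2, x1, 2) - D(g1, x1)*D(g2, x1)/(2*g1) - D(g2, x1)**2/(2*g2)
            + D(g1, x2, 2) - D(g1, x2)*D(g2, x2)/(2*g2) - D(g1, x2)**2/(2*g1))/(2*g1*g2)

assert sp.simplify(R11 - formula) == 0
```

The same script with $g_{1}=1,$ $g_{2}=\sin ^{2}x^{1}$ reproduces the unit sphere, for which (\ref{equ1}) evaluates to $\widehat{R}_{1}^{1}=1,$ as expected.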
\subsubsection{Decoupling of gravitational field equations}

Introducing the ansatz (\ref{ansk}) with $g_{i}(x^{k})=\epsilon _{i}e^{\psi (x^{k})}$ and nonzero $\partial _{4}\phi ,\partial _{4}h_{a},$ $\partial _{6}\ ^{1}\phi ,\partial _{6}h_{a_{1}},\partial _{8}\ ^{2}\phi ,\partial _{8}h_{a_{2}}$ into (\ref{equ1})--(\ref{equ4d}) with the respective sources, we obtain this system of PDEs:
\begin{equation}
\epsilon _{1}\partial _{11}\psi +\epsilon _{2}\partial _{22}\psi =2\Lambda (x^{k}),  \label{e1}
\end{equation}
\begin{eqnarray}
(\partial _{4}\phi )(\partial _{4}h_{3}) &=&2h_{3}h_{4}\ ^{v}\Lambda (x^{k},y^{4}),  \label{e2} \\
\partial _{44}n_{i}+\gamma \partial _{4}n_{i} &=&0,  \label{e3} \\
\beta w_{i}-\alpha _{i} &=&0,  \label{e4}
\end{eqnarray}
\begin{eqnarray}
(\partial _{6}\ ^{1}\phi )(\partial _{6}h_{5}) &=&2h_{5}h_{6}\ _{1}^{v}\Lambda (u^{\beta },y^{6}),  \label{e2aa} \\
\partial _{66}\ ^{1}n_{\tau }+\ ^{1}\gamma \partial _{6}\ ^{1}n_{\tau } &=&0,  \label{e3aa} \\
\ ^{1}\beta \ ^{1}w_{\tau }-\ ^{1}\alpha _{\tau } &=&0,  \label{e4aa}
\end{eqnarray}
\begin{eqnarray}
(\partial _{8}\ ^{2}\phi )(\partial _{8}h_{7}) &=&2h_{7}h_{8}\ _{2}^{v}\Lambda (u^{\beta _{1}},y^{8}),  \notag \\
\partial _{88}\ ^{2}n_{\tau _{1}}+\ ^{2}\gamma \partial _{8}\ ^{2}n_{\tau _{1}} &=&0,  \notag \\
\ ^{2}\beta \ ^{2}w_{\tau _{1}}-\ ^{2}\alpha _{\tau _{1}} &=&0,  \label{e4dd}
\end{eqnarray}
(similar equations can be written recurrently for arbitrary finite extra dimensions), where the coefficients are defined respectively by
\begin{eqnarray}
\phi &=&\ln \left\vert \frac{\partial _{4}h_{3}}{\sqrt{|h_{3}h_{4}|}}\right\vert ,  \label{ca1} \\
&&\gamma :=\partial _{4}(\ln \frac{|h_{3}|^{3/2}}{|h_{4}|}),\ \ \alpha _{i}=\frac{\partial _{4}h_{3}}{2h_{3}}\partial _{i}\phi ,\ \beta =\frac{\partial _{4}h_{3}}{2h_{3}}\partial _{4}\phi ,  \label{c1}
\end{eqnarray}
\begin{eqnarray}
\ ^{1}\phi &=&\ln \left\vert \frac{\partial _{6}h_{5}}{\sqrt{|h_{5}h_{6}|}}\right\vert ,
\label{ca2} \\
&&\ ^{1}\gamma :=\partial _{6}(\ln \frac{|h_{5}|^{3/2}}{|h_{6}|}),\ \ ^{1}\alpha _{\tau }=\frac{\partial _{6}h_{5}}{2h_{5}}\partial _{\tau }\ ^{1}\phi ,\ \ ^{1}\beta =\frac{\partial _{6}h_{5}}{2h_{5}}\partial _{6}\ ^{1}\phi ,  \label{c2}
\end{eqnarray}
\begin{eqnarray*}
\ ^{2}\phi &=&\ln \left\vert \frac{\partial _{8}h_{7}}{\sqrt{|h_{7}h_{8}|}}\right\vert , \\
&&\ ^{2}\gamma :=\partial _{8}(\ln \frac{|h_{7}|^{3/2}}{|h_{8}|}),\ \ ^{2}\alpha _{\tau _{1}}=\frac{\partial _{8}h_{7}}{2h_{7}}\partial _{\tau _{1}}\ ^{2}\phi ,\ \ ^{2}\beta =\frac{\partial _{8}h_{7}}{2h_{7}}\partial _{8}\ ^{2}\phi ,
\end{eqnarray*}
and similarly for extra shells. The equations (\ref{e1})--(\ref{e4dd}) reflect a very important decoupling property of the (generalized) Einstein equations with respect to the corresponding N--adapted frames. In explicit form, such formulas can be obtained for metrics with at least one Killing symmetry (the constructions can be generalized for non--Killing configurations). Let us explain briefly the decoupling property for 4--d configurations, following these steps:
\begin{enumerate}
\item The equation (\ref{e1}) is just a 2--d Laplace, or d'Alembert, equation (depending on the prescribed signature), which can be solved for any value $\Lambda (x^{k}).$

\item The equation (\ref{e2}) contains only the partial derivative $\partial _{4}$ and is related to the formula for the coefficient (\ref{ca1}) involving the values $h_{3}(x^{i},y^{4}),$ $h_{4}(x^{i},y^{4})$ and $\phi (x^{i},y^{4})$ and the source $\ ^{v}\Lambda (x^{k},y^{4}).$ Prescribing any two of these functions, we can define (by integrating with respect to $y^{4}$) the other two.

\item Using $h_{3}$ and $\phi $ from the previous point, we can compute the coefficients $\alpha _{i}$ and $\beta ,$ see (\ref{c1}), which allows us to define $w_{i}$ from the algebraic equations (\ref{e4}).

\item Having computed the coefficient $\gamma $ (\ref{c1}), the N--connection coefficients $n_{i}$ can be defined after two integrations with respect to $y^{4}$ in (\ref{e3}).
\end{enumerate}

The procedure in steps 2--4 can be repeated step by step on the other shells for higher dimensions. We have to add the corresponding dependencies on the extra dimensional coordinates and additional partial derivatives. For instance, the equation (\ref{e2aa}) and formula (\ref{ca2}), with partial derivative $\partial _{6},$ involve the functions $h_{5}(x^{i},y^{a},y^{6}),$ $h_{6}(x^{i},y^{a},y^{6})$ and $\ ^{1}\phi (x^{i},y^{a},y^{6})$ and the source $\ _{1}^{v}\Lambda (u^{\beta },y^{6}).$ We can compute any two of these functions by integrating with respect to $y^{6}$ if the other two are prescribed. In a similar form, we follow the steps in points 3 and 4 with $\ ^{1}\alpha _{\tau },\ ^{1}\beta ,\ ^{1}\gamma ,$ see (\ref{c2}), and compute the higher order N--connection coefficients $\ ^{1}w_{\tau }$ and $\ ^{1}n_{\tau }.$

\subsubsection{Integration of (modified) Einstein equations by generating functions and effective sources}

The system of nonlinear PDEs (\ref{e1})--(\ref{e4dd}) can be integrated in general form for any finite dimension $\dim \ ^{s}\mathbf{V}\geq 4.$

\paragraph{4--d non--vacuum configurations:}

The coefficients $g_{i}=\epsilon _{i}e^{\psi (x^{k})}$ are defined by solutions of the corresponding Laplace/d'Alembert equation (\ref{e1}).
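The first two steps of the decoupling procedure can be illustrated with a minimal sympy sketch; the constant sources, the Euclidean signature $\epsilon _{1}=\epsilon _{2}=1,$ and the generating data $\phi =y^{4}$ are hypothetical choices made only to keep the integrals elementary:

```python
import sympy as sp

x1, x2, y4 = sp.symbols('x1 x2 y4', real=True)
Lam, vLam = sp.symbols('Lambda0 vLambda0', positive=True)  # assumed constant sources

# Step 1: a particular solution of (e1) for eps1 = eps2 = +1,
# eps1 d11 psi + eps2 d22 psi = 2 Lambda
psi = Lam * (x1**2 + x2**2) / 2
assert sp.simplify(sp.diff(psi, x1, 2) + sp.diff(psi, x2, 2) - 2*Lam) == 0

# Step 2: prescribe phi and h3, then solve (e2) for h4,
# (d4 phi)(d4 h3) = 2 h3 h4 vLambda
phi = y4                              # so Phi = e^phi = exp(y4)
h3 = sp.exp(2*y4) / (4*vLam)
h4 = sp.diff(phi, y4) * sp.diff(h3, y4) / (2 * h3 * vLam)
assert sp.simplify(h4 - 1/vLam) == 0

# Consistency with the definition (ca1): phi = ln| d4 h3 / sqrt(|h3 h4|) |
assert sp.simplify(sp.log(sp.diff(h3, y4) / sp.sqrt(h3*h4)) - phi) == 0
```

For these data $\phi $ does not depend on $x^{i},$ so step 3 gives trivially $w_{i}=0,$ and $\gamma $ in step 4 is constant; generic $x$--dependent generating functions lead to nontrivial N--coefficients.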
We can solve (\ref{e2}) and (\ref{ca1}) for any $\partial _{4}\phi \neq 0,h_{a}\neq 0$ and $\ ^{v}\Lambda \neq 0$ if we re-write the equations as \begin{equation} \ h_{3}h_{4}=(\partial _{4}\phi )(\partial _{4}h_{3})/2\ ^{v}\Lambda \mbox{ and }|h_{3}h_{4}|=(\partial _{4}h_{3})^{2}e^{-2\phi }, \label{eq4bb} \end{equation} for any nontrivial source $\ ^{v}\Lambda .$ Inserting the first equation into the second one, we find \begin{equation} |\partial _{4}h_{3}|=\frac{\partial _{4}(e^{2\phi })}{4|\ ^{v}\Lambda |}=\frac{\partial _{4}[\Phi ^{2}]}{4|\ ^{v}\Lambda |}, \label{aux01} \end{equation} for $\Phi :=e^{\phi }$. This formula can be integrated with respect to $y^{4},$ which results in \begin{equation*} h_{3}[\Phi ,\ ^{v}\Lambda ]=\ ^{0}h_{3}(x^{k})+\frac{\epsilon _{3}\epsilon _{4}}{4}\int dy^{4}\frac{\partial _{4}(\Phi ^{2})}{\ ^{v}\Lambda }, \end{equation*} where $\ ^{0}h_{3}=\ ^{0}h_{3}(x^{k})$ is an integration function and $\epsilon _{3},\epsilon _{4}=\pm 1.$ To find $h_{4}$ we can use the first equation (\ref{eq4bb}) and write \begin{equation} h_{4}[\Phi ,\ ^{v}\Lambda ]=\frac{\partial _{4}\phi }{\ ^{v}\Lambda }\partial _{4}(\ln \sqrt{|h_{3}|})=\frac{1}{2\ ^{v}\Lambda }\frac{\partial _{4}\Phi }{\Phi }\frac{\partial _{4}h_{3}}{h_{3}}. \label{h4aux} \end{equation} These formulas for $h_{a}$ can be simplified if we introduce an "effective" cosmological constant $\widetilde{\Lambda }=const\neq 0$ and re--define the generating function, $\Phi \rightarrow \tilde{\Phi},$ for which $\frac{\partial _{4}[\Phi ^{2}]}{\ ^{v}\Lambda }=\frac{\partial _{4}[\tilde{\Phi}^{2}]}{\tilde{\Lambda}},$ i.e. \begin{equation} \Phi ^{2}=\widetilde{\Lambda }^{-1}\int dy^{4}(\ ^{v}\Lambda )\partial _{4}(\tilde{\Phi}^{2})\mbox{ and }\tilde{\Phi}^{2}=\widetilde{\Lambda }\int dy^{4}(\ ^{v}\Lambda )^{-1}\partial _{4}(\Phi ^{2}).
\label{rescgf} \end{equation} Introducing the integration function $\ ^{0}h_{3}(x^{k})$ and $\epsilon _{3}$ and $\epsilon _{4}$ in $\Phi $ and, respectively, in $\ ^{v}\Lambda ,$ we can express \begin{equation} h_{3}[\tilde{\Phi},\widetilde{\Lambda }]=\frac{\tilde{\Phi}^{2}}{4\widetilde{\Lambda }}\mbox{ and }h_{4}[\tilde{\Phi},\widetilde{\Lambda }]=\frac{(\partial _{4}\tilde{\Phi})^{2}}{\Xi }, \label{solha} \end{equation} where $\Xi =\int dy^{4}(\ ^{v}\Lambda )\partial _{4}(\tilde{\Phi}^{2}).$ We can work for convenience with two couples of generating data, $(\Phi ,\ ^{v}\Lambda )$ and $(\tilde{\Phi},\ \tilde{\Lambda}),$ related by formulas (\ref{rescgf}). Using the values $h_{a}$ (\ref{solha}), we compute the coefficients $\alpha _{i},\beta $ and $\gamma $ from (\ref{c1}). The resulting solutions for the N--coefficients can be expressed recurrently, \begin{eqnarray} n_{k} &=&\ _{1}n_{k}+\ _{2}n_{k}\int dy^{4}h_{4}/(\sqrt{|h_{3}|})^{3}=\ _{1}n_{k}+\ _{2}\widetilde{n}_{k}\int dy^{4}(\partial _{4}\tilde{\Phi})^{2}/\tilde{\Phi}^{3}\Xi , \notag \\ w_{i} &=&\partial _{i}\phi /\partial _{4}\phi =\partial _{i}\Phi /\partial _{4}\Phi , \label{solhn} \end{eqnarray} where $\ _{1}n_{k}(x^{i})$ and $\ _{2}n_{k}(x^{i}),$ or $\ _{2}\widetilde{n}_{k}(x^{i})=8\ _{2}n_{k}(x^{i})|\widetilde{\Lambda }|^{3/2},$ are integration functions. The quadratic line elements determined by the coefficients (\ref{solha})--(\ref{solhn}) are parameterized in the form \begin{eqnarray} ds_{4dK}^{2} &=&g_{\alpha \beta }(x^{k},y^{4})du^{\alpha }du^{\beta }=\epsilon _{i}e^{\psi (x^{k})}(dx^{i})^{2}+ \label{qnk4d} \\ &&\frac{\tilde{\Phi}^{2}}{4\widetilde{\Lambda }}\left[ dy^{3}+\left( \ _{1}n_{k}+\ _{2}\widetilde{n}_{k}\int dy^{4}\frac{(\partial _{4}\tilde{\Phi})^{2}}{\tilde{\Phi}^{3}\Xi }\right) dx^{k}\right] ^{2}+\frac{(\partial _{4}\tilde{\Phi})^{2}}{\Xi }\ \left[ dy^{4}+\frac{\partial _{i}\Phi }{\partial _{4}\Phi }dx^{i}\right] ^{2}.
\notag \end{eqnarray} This line element defines a family of generic off--diagonal solutions with Killing symmetry in $\partial /\partial y^{3}$ of the 4--d Einstein equations (\ref{sourc1}) for the canonical d--connection $\widehat{\mathbf{D}}$ (the label $4dK$ is for "nonholonomic 4--d Killing solutions"). We can verify by straightforward computations of the corresponding anholonomy coefficients $W_{\alpha \beta }^{\gamma }$ in (\ref{anhrel1}) that such values do not vanish if an arbitrary generating function $\phi $ and integration functions ($\ ^{0}h_{a},\ _{1}n_{k}$ and $\ _{2}n_{k})$ are considered. \paragraph{4--d vacuum configurations:} The limits to the off--diagonal solutions with $\Lambda =\ ^{v}\Lambda =0$ may not be smooth because, for instance, there are multiples of $(\ ^{v}\Lambda )^{-1}$ in the coefficients of (\ref{qnk4d}). For the ansatz (\ref{ansk}), we can analyze solutions for which the nontrivial coefficients of the Ricci d--tensor (\ref{equ1})--(\ref{equ4d}) are zero. The first equation is a typical example of a 2--d wave, or Laplace, equation. We can express such solutions in a similar form, $g_{i}=\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}.$ There are three classes of off--diagonal metrics which result in zero coefficients (\ref{equ2})--(\ref{equ4d}). \begin{itemize} \item In the first case, we can impose the condition $\partial _{4}h_{3}=0,h_{3}\neq 0,$ which results in only one nontrivial equation (derived from (\ref{equ3})), \begin{equation*} \partial _{44}n_{k}+\partial _{4}n_{k}\ \partial _{4}\ln |h_{4}|=0, \end{equation*} where $h_{4}(x^{i},y^{4})\neq 0$ and $w_{k}(x^{i},y^{4})$ are arbitrary functions.
If $\partial _{4}h_{4}=0,$ we must take $\partial _{44}n_{k}=0.$ For $\partial _{4}h_{4}\neq 0,$ we get \begin{equation} n_{k}=\ _{1}n_{k}+\ _{2}n_{k}\int dy^{4}/h_{4} \label{wsol} \end{equation} with integration functions $\ _{1}n_{k}(x^{i})$ and $\ _{2}n_{k}(x^{i}).$ The corresponding quadratic line element is of the type {\small \begin{equation} ds_{v1}^{2} =\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+\ ^{0}h_{3}(x^{k})[dy^{3}+ (\ _{1}n_{k}(x^{i})+ \ _{2}n_{k}(x^{i})\int dy^{4}/h_{4}) dx^{i}]^{2} + h_{4}(x^{i},y^{4})[dy^{4}+w_{i}(x^{k},y^{4})dx^{i}]^{2}. \label{vs1} \end{equation} } \item In the second case, $\partial _{4}h_{3}\neq 0$ and $\partial _{4}h_{4}\neq 0.$ We can solve (\ref{equ2}) and/or (\ref{e2}) in a self--consistent form for $\ ^{v}\Lambda =0$ if $\partial _{4}\phi =0$ for the coefficients (\ref{ca1}) and (\ref{c1}). For $\phi =\phi _{0}=const,$ we can consider arbitrary functions $w_{i}(x^{k},y^{4})$ because $\beta =\alpha _{i}=0$ for such configurations. The condition (\ref{ca1}) is satisfied by any \begin{equation} h_{4}=\ ^{0}h_{4}(x^{k})(\partial _{4}\sqrt{|h_{3}|})^{2}, \label{h34vacuum} \end{equation} where $\ ^{0}h_{4}(x^{k})$ is an integration function and $h_{3}(x^{k},y^{4})$ is any generating function. The coefficients $n_{k}$ can be found from (\ref{equ3}), see (\ref{wsol}). Such a family of vacuum metrics is described by \begin{eqnarray} ds_{v2}^{2} &=&\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+h_{3}(x^{i},y^{4})[dy^{3}+(\ _{1}n_{k}(x^{i})+\ _{2}n_{k}(x^{i}) \int dy^{4}/h_{4})dx^{i}]^{2}+ \label{vs2} \\ && \ ^{0}h_{4}(x^{k})(\partial _{4}\sqrt{|h_{3}|})^{2}[dy^{4}+w_{i}(x^{k},y^{4})dx^{i}]^{2}.
\notag \end{eqnarray} \item In the third case, $\partial _{4}h_{3}\neq 0$ but $\partial _{4}h_{4}=0.$ The equation (\ref{equ2}) transforms into $\partial _{44}h_{3}-\frac{\left( \partial _{4}h_{3}\right) ^{2}}{2h_{3}}=0$, whose general solution is $h_{3}(x^{k},y^{4})=\left[ c_{1}(x^{k})+c_{2}(x^{k})y^{4}\right] ^{2}$, with generating functions $c_{1}(x^{k}),c_{2}(x^{k})$, and $h_{4}=\ ^{0}h_{4}(x^{k}).$ For $\phi =\phi _{0}=const,$ we can take any values $w_{i}(x^{k},y^{4})$ because $\beta =\alpha _{i}=0.$ The coefficients $n_{i}$ are found from (\ref{equ3}) and/or, equivalently, from (\ref{e3}) with $\gamma =\frac{3}{2}\partial _{4}\ln |h_{3}|.$ We obtain \begin{equation*} n_{i}=\ _{1}n_{i}(x^{k})+\ _{2}n_{i}(x^{k})\int dy^{4}|h_{3}|^{-3/2}=\ _{1}n_{i}(x^{k})+\ _{2}\widetilde{n}_{i}(x^{k})\left[ c_{1}(x^{k})+c_{2}(x^{k})y^{4}\right] ^{-2}, \end{equation*} with integration functions $\ _{1}n_{i}(x^{k})$ and $\ _{2}n_{i}(x^{k}),$ or re--defined $\ _{2}\widetilde{n}_{i}=-\ _{2}n_{i}/2c_{2}.$ The quadratic line element for this class of vacuum metrics is {\small \begin{eqnarray} ds_{v3}^{2} &=&\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+\left[ c_{1}(x^{k})+c_{2}(x^{k})y^{4}\right] ^{2}[dy^{3}+(\ _{1}n_{i}(x^{k})+\ _{2}\widetilde{n}_{i}(x^{k})\left[ c_{1}(x^{k})+c_{2}(x^{k})y^{4}\right] ^{-2})dx^{i}]^{2} \notag \\ &&+\ ^{0}h_{4}(x^{k})[dy^{4}+w_{i}(x^{k},y^{4})dx^{i}]^{2}. \label{vs3} \end{eqnarray}} \end{itemize} Finally, we note that such solutions have nontrivial induced torsions (\ref{dtors}). \paragraph{Extra dimensional non--vacuum solutions:} The solutions for higher dimensions can be constructed in a fashion similar to the 4--d case, using new classes of generating and integration functions with dependencies on the extra dimensional coordinates.
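Before turning to extra dimensions, the third vacuum case above admits a quick symbolic verification. The following Python/sympy check is added here only for illustration ($y$ stands for $y^{4}$, and $c_{1},c_{2}$ are treated as positive constants so that moduli can be dropped):

```python
import sympy as sp

y = sp.symbols('y', positive=True)          # y stands for y^4
c1, c2, n1, n2 = sp.symbols('c1 c2 n1 n2', positive=True)

# third vacuum case: h3 = (c1 + c2*y)^2 solves d_44 h3 - (d_4 h3)^2/(2 h3) = 0
h3 = (c1 + c2*y)**2
assert sp.simplify(sp.diff(h3, y, 2) - sp.diff(h3, y)**2/(2*h3)) == 0

# n-coefficient: n = n1 + n2 * Int dy |h3|^{-3/2}, with |h3|^{-3/2} = (c1+c2*y)^{-3}
n = n1 + n2*sp.integrate((c1 + c2*y)**(-3), y)

# it must solve d_44 n + gamma d_4 n = 0 with gamma = (3/2) d_4 ln|h3|
gamma = sp.Rational(3, 2)*sp.diff(sp.log(h3), y)
assert sp.simplify(sp.diff(n, y, 2) + gamma*sp.diff(n, y)) == 0
```

The antiderivative produced in the second step is $-(c_{1}+c_{2}y)^{-2}/2c_{2}$, which reproduces the re--definition $\ _{2}\widetilde{n}_{i}=-\ _{2}n_{i}/2c_{2}$ used above.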
For instance, we can generate solutions of the system (\ref{e2aa})--(\ref{e4aa}) with coefficients (\ref{ca2}) and (\ref{c2}) following a formal analogy when $\partial _{4}\rightarrow \partial _{6},\phi (x^{k},y^{4})\rightarrow \ ^{1}\phi (u^{\tau },y^{6}),\ ^{v}\Lambda (x^{k},y^{4})\rightarrow \ _{1}^{v}\Lambda (u^{\tau },y^{6}),\ldots $ and associate the values $\ ^{1}\tilde{\Phi}(u^{\tau },y^{6})$ and $\ ^{1}\widetilde{\Lambda }$ as in the previous paragraph. The extra--dimensional coefficients are computed as \begin{equation*} h_{5}[\ ^{1}\tilde{\Phi},\ ^{1}\widetilde{\Lambda }]=\frac{(\ ^{1}\tilde{\Phi})^{2}}{4\ ^{1}\widetilde{\Lambda }}\mbox{ and }h_{6}[\ ^{1}\tilde{\Phi}]=\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{\ ^{1}\Xi }, \end{equation*} for $\ ^{1}\Xi =\int dy^{6}(\ _{1}^{v}\Lambda )\partial _{6}(\ ^{1}\tilde{\Phi}^{2})$ and, for the N--coefficients, \begin{eqnarray*} \ ^{1}n_{\tau } &=&\ _{1}^{1}n_{\tau }+\ _{2}^{1}n_{\tau }\int dy^{6}h_{6}/(\sqrt{|h_{5}|})^{3}=\ _{1}^{1}n_{\tau }+\ _{2}^{1}\widetilde{n}_{\tau }\int dy^{6}(\partial _{6}\ ^{1}\tilde{\Phi})^{2}/(\ ^{1}\tilde{\Phi})^{3}\ ^{1}\Xi , \\ \ ^{1}w_{\tau } &=&\partial _{\tau }\ ^{1}\phi /\partial _{6}\ ^{1}\phi =\partial _{\tau }\ ^{1}\Phi /\partial _{6}\ ^{1}\Phi , \end{eqnarray*} where $\ ^{0}h_{a_{1}}=\ ^{0}h_{a_{1}}(u^{\tau }),$ $\ _{1}^{1}n_{\tau }(u^{\tau })$ and $\ _{2}^{1}n_{\tau }(u^{\tau })$ are integration functions.
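The compatibility of the re--definition (\ref{rescgf}) with the closed form $h_{3}=\tilde{\Phi}^{2}/4\widetilde{\Lambda }$ in (\ref{solha}), and hence with its shell analogs above, can be checked symbolically. The following one--variable Python/sympy sketch is an illustrative addition (the integration function $\ ^{0}h_{3}$ is set to zero and $\epsilon _{3}\epsilon _{4}=1$):

```python
import sympy as sp

y, tLam = sp.symbols('y tLam')              # y ~ y^4, tLam ~ effective constant
tPhi = sp.Function('tPhi')(y)               # re-defined generating function tilde{Phi}
Lam = sp.Function('Lam')(y)                 # vertical source ^vLambda

# (rescgf): Phi^2 = tLam^{-1} * Int dy (^vLambda) d_y(tPhi^2)
Phi2 = sp.Integral(Lam*sp.diff(tPhi**2, y), y)/tLam

# general integral for h3 with ^0h3 = 0 and eps3*eps4 = 1:
# h3 = (1/4) * Int dy d_y(Phi^2)/(^vLambda)
h3 = sp.integrate(sp.diff(Phi2, y)/Lam, y)/4

# it reproduces the closed form h3 = tPhi^2/(4*tLam) of (solha)
assert sp.simplify(h3 - tPhi**2/(4*tLam)) == 0
```

The same cancellation of $\ ^{v}\Lambda $ against its inverse is what makes the effective constant $\widetilde{\Lambda }$ (and $\ ^{1}\widetilde{\Lambda },\ ^{2}\widetilde{\Lambda }$ on higher shells) convenient for stating the solutions.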
A general class of quadratic line elements in 6--d spacetimes can be parameterized in the form {\small \begin{equation} ds_{6dK}^{2}=ds_{4dK}^{2}+\frac{(\ ^{1}\tilde{\Phi})^{2}}{4\ ^{1}\widetilde{\Lambda }}\left[ dy^{5}+\left( \ _{1}^{1}n_{\tau }+\ _{2}^{1}\widetilde{n}_{\tau }\int dy^{6}\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{(\ ^{1}\tilde{\Phi})^{3}\ ^{1}\Xi }\right) du^{\tau }\right] ^{2}+\ \frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{\ ^{1}\Xi }\left[ dy^{6}+\frac{\partial _{\tau }\ ^{1}\Phi }{\partial _{6}\ ^{1}\Phi }du^{\tau }\right] ^{2}, \label{qnk6d} \end{equation}} where $ds_{4dK}^{2}$ is given by formula (\ref{qnk4d}) and $\tau =1,2,3,4.$ This quadratic line element has a Killing symmetry in $\partial _{5}$ (in N--adapted frames, the metric does not depend on $y^{5}$). Extending the constructions to the shell $s=2$ with $\partial _{6}\rightarrow \partial _{8},\ ^{1}\phi (u^{\tau },y^{6})\rightarrow \ ^{2}\phi (u^{\tau _{1}},y^{8}),\ _{1}^{v}\Lambda (u^{\tau },y^{6})\rightarrow \ _{2}^{v}\Lambda (u^{\tau _{1}},y^{8}),\ldots ,\ ^{2}\tilde{\Phi}(u^{\tau _{1}},y^{8}),\ ^{2}\widetilde{\Lambda }$, where $\tau _{1}=1,2,\ldots ,5,6,$ we generate off--diagonal solutions in 8--d gravity, {\small \begin{equation} ds_{8dK}^{2}=ds_{6dK}^{2}+\frac{(\ ^{2}\tilde{\Phi})^{2}}{4\ ^{2}\widetilde{\Lambda }}\left[ dy^{7}+\left( \ _{1}^{2}n_{\tau _{1}}+\ _{2}^{2}\widetilde{n}_{\tau _{1}}\int dy^{8}\frac{(\partial _{8}\ ^{2}\tilde{\Phi})^{2}}{(\ ^{2}\tilde{\Phi})^{3}\ ^{2}\Xi }\right) du^{\tau _{1}}\right] ^{2}+\ \frac{(\partial _{8}\ ^{2}\tilde{\Phi})^{2}}{\ ^{2}\Xi }\left[ dy^{8}+\frac{\partial _{\tau _{1}}\ ^{2}\Phi }{\partial _{8}\ ^{2}\Phi }du^{\tau _{1}}\right] ^{2}, \label{qnk8d} \end{equation}} where $ds_{6dK}^{2}$ is given by (\ref{qnk6d}), $\ ^{2}\Xi =\int dy^{8}(\ _{2}^{v}\Lambda )\partial _{8}(\ ^{2}\tilde{\Phi}^{2}),$ and the corresponding integration/generating functions $\ ^{0}h_{a_{2}}(u^{\tau _{1}});a_{2}=7,8;\ _{1}n_{\tau _{1}}(u^{\tau _{1}})$ and $\ _{2}n_{\tau _{1}}(u^{\tau _{1}})$
are integration functions. Using the 2+2+... symmetries of the off--diagonal parameterizations (\ref{odm}), we can construct exact solutions for an arbitrary finite dimension of the extra dimensional spacetime $\ ^{s}\mathbf{V}.$ \paragraph{Extra dimensional vacuum solutions:} The off--diagonal solutions (\ref{qnk4d}), (\ref{qnk6d}), (\ref{qnk8d}), ... have been constructed for nontrivial sources $\ ^{v}\Lambda (x^{k},y^{4}),$ $\ _{1}^{v}\Lambda (u^{\tau },y^{6}),$ $\ _{2}^{v}\Lambda (u^{\tau },y^{8}),\ldots $ In a similar manner, we can generate vacuum configurations with effective zero cosmological constants by extending to higher dimensions the 4--d vacuum metrics of type $ds_{v1}^{2}$ (\ref{vs1}), $ds_{v2}^{2}$ (\ref{vs2}), $ds_{v3}^{2}$ (\ref{vs3}) etc. It is possible to generate solutions when the sources for (\ref{sourc1}) are zero on some shells and nonzero on other ones. We provide here an example of a quadratic line element for 6--d gravity derived as a $s=1$ generalization of (\ref{vs2}). For such solutions, $\partial _{4}h_{a}\neq 0,\partial _{6}h_{a_{1}}\neq 0,\ldots $ and $\phi =\phi _{0}=const,$ $\ ^{1}\phi =\ ^{1}\phi _{0}=const,\ldots $ \begin{eqnarray} ds_{v2s3}^{2} &=&\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+h_{3}(x^{i},y^{4})[dy^{3}+\left( \ _{1}n_{k}(x^{i})+\ _{2}n_{k}(x^{i})\int dy^{4}/h_{4}\right) dx^{i}]^{2}+ \label{qe6dvacuum} \\ &&\ ^{0}h_{4}(x^{k})(\partial _{4}\sqrt{|h_{3}|})^{2}[dy^{4}+w_{i}(x^{k},y^{4})dx^{i}]^{2}+h_{5}(u^{\tau },y^{6})[dy^{5}+\left( \ _{1}^{1}n_{\lambda }(u^{\tau })+\ _{2}^{1}n_{\lambda }(u^{\tau })\int dy^{6}/h_{6}\right) du^{\lambda }]^{2} \notag \\ &&+\ ^{0}h_{6}(u^{\tau })(\partial _{6}\sqrt{|h_{5}|})^{2}[dy^{6}+\ ^{1}w_{\lambda }(u^{\tau },y^{6})du^{\lambda }]^{2}, \notag \end{eqnarray} where $\ ^{0}h_{4}(x^{k}),\ ^{0}h_{6}(u^{\tau }),\ _{1}n_{k}(x^{i}),\ _{2}n_{k}(x^{i}),\ _{1}^{1}n_{\lambda }(u^{\tau }),\ _{2}^{1}n_{\lambda }(u^{\tau })$ are integration functions.
The values $h_{4}(x^{k},y^{4})$ and $h_{6}(u^{\tau },y^{6})$ are any generating functions. We can consider arbitrary functions $w_{i}(x^{k},y^{4})$ and $\ ^{1}w_{\lambda }(u^{\tau },y^{6})$ because, respectively, $\beta =\alpha _{i}=0$ and $\ ^{1}\beta =\ ^{1}\alpha _{\tau }=0$ for such configurations, see formulas (\ref{ca1}), (\ref{c1}) and (\ref{ca2}), (\ref{c2}). \subsubsection{Coefficients of metrics as generating functions} For nontrivial sources $\ ^{v}\Lambda (x^{k},y^{4}),$ $\ _{1}^{v}\Lambda (u^{\tau },y^{6}),$ $\ _{2}^{v}\Lambda (u^{\tau },y^{8}),\ldots ,$ we can prescribe respectively $h_{3},h_{5}$ and $h_{7}$ (with nonzero $\partial _{4}h_{3},\partial _{6}h_{5}$ and $\partial _{8}h_{7}$) as generating functions. Let us perform such constructions in explicit form for $s=0.$ Using formula (\ref{aux01}), we find (up to an integration function depending on $x^{i}$) that \begin{equation} \Phi ^{2}=2\varepsilon _{\Phi }\int dy^{4}\ ^{v}\Lambda \ \partial _{4}h_{3}, \label{aux02} \end{equation} where $\varepsilon _{\Phi }=\pm 1$ is taken in order to have $\Phi ^{2}>0.$ Inserting this value into (\ref{h4aux}), we express $h_{4}$ in terms of $\ ^{v}\Lambda $ and $h_{3},$ \begin{equation*} h_{4}[\ ^{v}\Lambda ,\ h_{3}]=\varepsilon _{4}(\partial _{4}h_{3})^{2}/2\ ^{v}\Lambda h_{3}\int dy^{4}(\ ^{v}\Lambda h_{3}),\ \varepsilon _{4}=\pm 1.
\end{equation*} The N--connection coefficients are computed following the formulas in (\ref{solhn}) with $\Phi \lbrack \ ^{v}\Lambda ,\ h_{3}]$ expressed in the form (\ref{aux02}), \begin{equation*} w_{i}[\ ^{v}\Lambda ,\ h_{3}]=\frac{\partial _{i}\Phi }{\partial _{4}\Phi }=\frac{\partial _{i}\Phi ^{2}}{\partial _{4}\Phi ^{2}}=\frac{\int dy^{4}\partial _{i}|\ ^{v}\Lambda \partial _{4}h_{3}|}{|\ ^{v}\Lambda \partial _{4}h_{3}|}, \end{equation*} and \begin{equation*} n_{k}[\ ^{v}\Lambda ,\ h_{3}]=\ _{1}n_{k}+\ _{2}n_{k}\int dy^{4}\frac{(\partial _{4}h_{3})^{2}}{\ ^{v}\Lambda (\sqrt{|h_{3}|})^{5}\int_{0}^{y^{4}}dy^{4^{\prime }}(\ ^{v}\Lambda h_{3})}, \end{equation*} where $\varepsilon _{4}/2$ is included in $\ _{2}n_{k}.$ For $s=1$ and $s=2$ we can use certain formulas similar to (\ref{aux02}), \begin{equation*} \ ^{1}\Phi ^{2}=2\varepsilon _{\ ^{1}\Phi }\int dy^{6}\ _{1}^{v}\Lambda \ \partial _{6}h_{5}\mbox{ and }\ ^{2}\Phi ^{2}=2\varepsilon _{\ ^{2}\Phi }\int dy^{8}\ _{2}^{v}\Lambda \ \partial _{8}h_{7},\ \varepsilon _{\ ^{1}\Phi }=\pm 1, \varepsilon _{\ ^{2}\Phi }=\pm 1.
\end{equation*} The solutions (\ref{qnk4d}), (\ref{qnk6d}) and (\ref{qnk8d}) are respectively re--parameterized as \begin{eqnarray*} ds_{4dK}^{2} &=&\epsilon _{i}e^{\psi (x^{k})}(dx^{i})^{2}+h_{3}\left[ dy^{3}+\left( \ _{1}n_{k}+\ _{2}n_{k}\int dy^{4}\frac{(\partial _{4}h_{3})^{2}}{\ ^{v}\Lambda (\sqrt{|h_{3}|})^{5}\int_{0}^{y^{4}}dy^{4^{\prime }}(\ ^{v}\Lambda h_{3})}\right) dx^{k}\right] ^{2} \\ &&+\varepsilon _{4}\frac{(\partial _{4}h_{3})^{2}}{2\ ^{v}\Lambda h_{3}\int dy^{4}(\ ^{v}\Lambda h_{3})}\ \left[ dy^{4}+\frac{\int dy^{4}\partial _{i}|\ ^{v}\Lambda \partial _{4}h_{3}|}{|\ ^{v}\Lambda \partial _{4}h_{3}|}dx^{i}\right] ^{2}, \end{eqnarray*} \begin{eqnarray*} ds_{6dK}^{2} &=&ds_{4dK}^{2}+h_{5}\left[ dy^{5}+\left( \ _{1}^{1}n_{\tau }+\ _{2}^{1}n_{\tau }\int dy^{6}\frac{(\partial _{6}h_{5})^{2}}{\ _{1}^{v}\Lambda (\sqrt{|h_{5}|})^{5}\int_{0}^{y^{6}}dy^{6^{\prime }}(\ _{1}^{v}\Lambda h_{5})}\right) du^{\tau }\right] ^{2} \\ &&+\varepsilon _{6}\frac{(\partial _{6}h_{5})^{2}}{2\ _{1}^{v}\Lambda h_{5}\int dy^{6}(\ _{1}^{v}\Lambda h_{5})}\left[ dy^{6}+\frac{\int dy^{6}\partial _{\tau }|\ _{1}^{v}\Lambda \partial _{6}h_{5}|}{|\ _{1}^{v}\Lambda \partial _{6}h_{5}|}du^{\tau }\right] ^{2}, \end{eqnarray*} and \begin{eqnarray*} ds_{8dK}^{2} &=&ds_{6dK}^{2}+h_{7}\left[ dy^{7}+\left( \ _{1}^{2}n_{\tau _{1}}+\ _{2}^{2}n_{\tau _{1}}\int dy^{8}\frac{(\partial _{8}h_{7})^{2}}{\ _{2}^{v}\Lambda (\sqrt{|h_{7}|})^{5}\int_{0}^{y^{8}}dy^{8^{\prime }}(\ _{2}^{v}\Lambda h_{7})}\right) du^{\tau _{1}}\right] ^{2} \\ &&+\ \varepsilon _{8}\frac{(\partial _{8}h_{7})^{2}}{2\ _{2}^{v}\Lambda h_{7}\int dy^{8}(\ _{2}^{v}\Lambda h_{7})}\left[ dy^{8}+\frac{\int dy^{8}\partial _{\tau _{1}}|\ _{2}^{v}\Lambda \partial _{8}h_{7}|}{|\ _{2}^{v}\Lambda \partial _{8}h_{7}|}du^{\tau _{1}}\right] ^{2}.
\end{eqnarray*} We can introduce effective cosmological constants via re--definitions of the generating functions of the type (\ref{rescgf}) when $(\Phi ,\ ^{v}\Lambda )\rightarrow (\tilde{\Phi},\widetilde{\Lambda }),(\ ^{1}\Phi ,\ _{1}^{v}\Lambda )\rightarrow (\ ^{1}\tilde{\Phi},\ _{1}\widetilde{\Lambda })$ and $(\ ^{2}\Phi ,\ _{2}^{v}\Lambda )\rightarrow (\ ^{2}\tilde{\Phi},\ _{2}\widetilde{\Lambda }).$ For such parameterizations, the coefficients of the metrics depend explicitly on $\tilde{\Phi},\ ^{1}\tilde{\Phi}$ and $\ ^{2}\tilde{\Phi}.$ Finally, we note that such formulas can be generalized similarly for higher dimensions with shells $s=3,4,\ldots $. \subsubsection{The Levi--Civita conditions} \label{sslc}All solutions constructed in the previous sections define certain subclasses of generic off--diagonal metrics (\ref{ansk}) for canonical d--connections $\ ^{s}\widehat{\mathbf{D}}$ with nontrivial nonholonomically induced d--torsion coefficients $\widehat{\mathbf{T}}_{\ \alpha _{s}\beta _{s}}^{\gamma _{s}}$ (\ref{dtors}). Such a torsion vanishes for a subclass of nonholonomic distributions with the necessary types of parameterizations of the generating and integration functions and sources. In explicit form, we construct LC--configurations by imposing additional constraints, shell by shell, on the d--metric and N--connection coefficients. By straightforward computations (see details in Refs.
\cite{vex1,vpars,vex2,vex3}, and Appendix \ref{zt}), we can verify that if in N--adapted frames \begin{eqnarray} \mbox{ for }s &=&0:\ \partial _{4}w_{i}=\mathbf{e}_{i}\ln \sqrt{|\ h_{4}|},\ \mathbf{e}_{i}\ln \sqrt{|\ h_{3}|}=0,\partial _{i}w_{j}=\partial _{j}w_{i}\mbox{ and }\partial _{4}n_{i}=0; \notag \\ s &=&1:\ \partial _{6}\ ^{1}w_{\alpha }=\ ^{1}\mathbf{e}_{\alpha }\ln \sqrt{|\ h_{6}|},\ ^{1}\mathbf{e}_{\alpha }\ln \sqrt{|\ h_{5}|}=0,\partial _{\alpha }\ ^{1}w_{\beta }=\partial _{\beta }\ ^{1}w_{\alpha }\mbox{ and }\partial _{6}\ ^{1}n_{\gamma }=0; \label{zerot} \\ s &=&2:\ \partial _{8}\ ^{2}w_{\alpha _{1}}=\ ^{2}\mathbf{e}_{\alpha _{1}}\ln \sqrt{|\ h_{8}|},\ ^{2}\mathbf{e}_{\alpha _{1}}\ln \sqrt{|\ h_{7}|}=0,\partial _{\alpha _{1}}\ ^{2}w_{\beta _{1}}=\partial _{\beta _{1}}\ ^{2}w_{\alpha _{1}}\mbox{ and }\partial _{8}\ ^{2}n_{\gamma _{1}}=0; \notag \end{eqnarray} (similar equations can be written recurrently for arbitrary finite extra dimensions), then the torsion coefficients become zero. For the $n$--coefficients, such conditions are satisfied if $\ _{2}n_{k}(x^{i})=0$ and $\partial _{i}\ _{1}n_{j}(x^{k})=\partial _{j}\ _{1}n_{i}(x^{k});\ _{2}^{1}n_{\alpha }(u^{\beta })=0$ and $\partial _{\gamma }\ _{1}^{1}n_{\tau }(u^{\beta })=\partial _{\tau }\ _{1}^{1}n_{\gamma }(u^{\beta });\ _{2}^{2}n_{\alpha _{1}}(u^{\beta _{1}})=0$ and $\partial _{\gamma _{1}}\ _{1}^{2}n_{\tau _{1}}(u^{\beta _{1}})=\partial _{\tau _{1}}\ _{1}^{2}n_{\gamma _{1}}(u^{\beta _{1}}),$ etc. The explicit form of the solutions of the constraints on $w_{k}$ derived from (\ref{zerot}) depends on the class of vacuum or non--vacuum metrics we try to construct.
Let us show how we can satisfy the LC--conditions (\ref{zerot}) for $s=0.$ We note that such nonholonomic constraints cannot be solved in explicit form for arbitrary data $(\Phi ,\ ^{v}\Lambda ),$ or $(\tilde{\Phi},\ \tilde{\Lambda}),$ and all types of nonzero integration functions $\ _{1}n_{j}(x^{k})$ and $\ _{2}n_{k}(x^{i}).$ Nevertheless, certain general classes of solutions can be written in explicit form if via coordinate and frame transformations we can fix $\ _{2}n_{k}(x^{i})=0$ and $\ _{1}n_{j}(x^{k})=\partial _{j}n(x^{k})$ for a function $n(x^{k}).$ Then we use the property that \begin{equation*} \mathbf{e}_{i}\Phi =(\partial _{i}-w_{i}\partial _{4})\Phi \equiv 0 \end{equation*} for any $\Phi $ if $w_{i}=\partial _{i}\Phi /\partial _{4}\Phi ,$ see (\ref{solhn}). For any functional $H[\Phi ],$ one has the equality \begin{equation*} \mathbf{e}_{i}H=(\partial _{i}-w_{i}\partial _{4})H=\frac{\partial H}{\partial \Phi }(\partial _{i}-w_{i}\partial _{4})\Phi \equiv 0. \end{equation*} We can restrict our construction to a subclass of generating data $(\Phi ,\ ^{v}\Lambda )$ and $(\tilde{\Phi},\ \tilde{\Lambda})$ which are related via formulas (\ref{rescgf}), when $H=\tilde{\Phi}[\Phi ]$ is a functional which allows us to generate LC--configurations in explicit form.
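The functional identity $\mathbf{e}_{i}H[\Phi ]\equiv 0$ is elementary but central here. As an illustrative addition (with $x\sim x^{i}$ and $y\sim y^{4}$), it can be confirmed with a two--variable Python/sympy check:

```python
import sympy as sp

x, y = sp.symbols('x y')                 # x ~ x^i, y ~ y^4
Phi = sp.Function('Phi')(x, y)           # generating function
H = sp.Function('H')(Phi)                # arbitrary functional H[Phi]

w = sp.diff(Phi, x)/sp.diff(Phi, y)      # w_i = d_i Phi / d_4 Phi, cf. (solhn)
eH = sp.diff(H, x) - w*sp.diff(H, y)     # e_i H = (d_i - w_i d_4) H
assert sp.simplify(eH) == 0
```

The chain rule factors $\partial H/\partial \Phi $ out of both terms, so the result vanishes for any smooth $H$, exactly as stated above.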
Using $h_{3}[\tilde{\Phi}]=\tilde{\Phi}^{2}/4\widetilde{\Lambda }$ (\ref{solha}) for $H[\tilde{\Phi}]=\ln \sqrt{|\ h_{3}|}$, we satisfy the second condition, $\mathbf{e}_{i}\ln \sqrt{|\ h_{3}|}=0,$ in (\ref{zerot}) for $s=0.$ In the second step, we solve the first condition in (\ref{zerot}) for $s=0.$ Taking the derivative $\partial _{4}$ of $w_{i}=\partial _{i}\Phi /\partial _{4}\Phi $ (\ref{solhn}), we obtain \begin{equation} \partial _{4}w_{i}=\frac{(\partial _{4}\partial _{i}\Phi )(\partial _{4}\Phi )-(\partial _{i}\Phi )\partial _{4}\partial _{4}\Phi }{(\partial _{4}\Phi )^{2}}=\frac{\partial _{4}\partial _{i}\Phi }{\partial _{4}\Phi }-\frac{\partial _{i}\Phi }{\partial _{4}\Phi }\frac{\partial _{4}\partial _{4}\Phi }{\partial _{4}\Phi }. \label{fder} \end{equation} If $\Phi =\check{\Phi},$ for which \begin{equation} \partial _{4}\partial _{i}\check{\Phi}=\partial _{i}\partial _{4}\check{\Phi}, \label{explcond} \end{equation} then, using (\ref{fder}), we compute $\partial _{4}w_{i}=\mathbf{e}_{i}\ln |\partial _{4}\Phi |$.
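This last identity can also be confirmed with a short Python/sympy sketch (an illustrative addition; the modulus sign is irrelevant for the derivative, and for a smooth $\Phi $ the condition (\ref{explcond}) holds automatically):

```python
import sympy as sp

x, y = sp.symbols('x y')                 # x ~ x^i, y ~ y^4
Phi = sp.Function('Phi')(x, y)           # a smooth check{Phi}: mixed partials commute

w = sp.diff(Phi, x)/sp.diff(Phi, y)      # w_i = d_i Phi / d_4 Phi
lhs = sp.diff(w, y)                      # d_4 w_i, as in (fder)
H = sp.log(sp.diff(Phi, y))              # ln d_4 Phi (sign dropped)
rhs = sp.diff(H, x) - w*sp.diff(H, y)    # e_i ln|d_4 Phi|
assert sp.simplify(lhs - rhs) == 0
```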
For $h_{4}[\Phi ,\ ^{v}\Lambda ]$ (\ref{h4aux}), $\mathbf{e}_{i}\ln \sqrt{|\ h_{4}|}=\mathbf{e}_{i}[ \ln |\partial _{4}\Phi |-\ln \sqrt{|\ ^{v}\Lambda |}],$ where we used the conditions (\ref{explcond}) and the property $\mathbf{e}_{i}\check{\Phi}=0.$ Using the last two formulas, we obtain $\partial _{4}w_{i}=\mathbf{e}_{i}\ln \sqrt{|\ h_{4}|}$ if $\mathbf{e}_{i}\ln \sqrt{|\ ^{v}\Lambda |}=0.$ This is possible for $\ ^{v}\Lambda =const,$ or if $\ ^{v}\Lambda $ can be expressed as a functional $\ ^{v}\Lambda (x^{i},y^{4})=\ ^{v}\Lambda \lbrack \check{\Phi}].$ Finally, we note that the third condition for $s=0$, $\partial _{i}w_{j}=\partial _{j}w_{i},$ see (\ref{zerot}), holds for any $\check{A}=\check{A}(x^{k},y^{4})$ for which $w_{i}=\check{w}_{i}=\partial _{i}\check{\Phi}/\partial _{4}\check{\Phi}=\partial _{i}\check{A}.$ Following similar considerations for the generating functions on the other shells, \begin{eqnarray} s=1: &&\ ^{1}\Phi =\ ^{1}\check{\Phi}(u^{\tau },y^{6}),\partial _{6}\partial _{\tau }\ ^{1}\check{\Phi}=\partial _{\tau }\partial _{6}\ ^{1}\check{\Phi}; \label{expconda} \\ &&\partial _{\alpha }\ ^{1}\check{\Phi}/\partial _{6}\ ^{1}\check{\Phi}=\partial _{\alpha }\ ^{1}\check{A};\ _{1}^{1}n_{\tau }=\partial _{\tau }\ ^{1}n(u^{\beta }); \notag \\ s=2: &&\ ^{2}\Phi =\ ^{2}\check{\Phi}(u^{\tau _{1}},y^{8}),\partial _{8}\partial _{\tau _{1}}\ ^{2}\check{\Phi}=\partial _{\tau _{1}}\partial _{8}\ ^{2}\check{\Phi}; \notag \\ &&\partial _{\alpha _{1}}\ ^{2}\check{\Phi}/\partial _{8}\ ^{2}\check{\Phi}=\partial _{\alpha _{1}}\ ^{2}\check{A};\ _{1}^{2}n_{\tau _{1}}=\partial _{\tau _{1}}\ ^{2}n(u^{\beta _{1}}); \notag \end{eqnarray} (similar formulas can be written recurrently for arbitrary extra shells), we can construct quadratic line elements for LC--configurations, \begin{eqnarray} ds_{8dK}^{2} &=&\epsilon _{i}e^{\psi (x^{k})}(dx^{i})^{2}+\frac{(\tilde{\Phi}[\check{\Phi}])^{2}}{4\widetilde{\Lambda }}\left[ dy^{3}+(\partial _{i}\ n)dx^{i}\right]
^{2}+\frac{(\partial _{4}\tilde{\Phi}[\check{\Phi}])^{2}}{\Xi (\tilde{\Phi}[\check{\Phi}])}\left[ dy^{4}+(\partial _{i}\ \check{A})dx^{i}\right] ^{2} \notag \\ &&+\frac{(\ ^{1}\tilde{\Phi}[\ ^{1}\check{\Phi}])^{2}}{4\ ^{1}\widetilde{\Lambda }}\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+\ \frac{(\partial _{6}\ ^{1}\tilde{\Phi}[\ ^{1}\check{\Phi}])^{2}}{\ ^{1}\Xi (\ ^{1}\tilde{\Phi}[\ ^{1}\check{\Phi}])}\ \left[ dy^{6}+(\partial _{\tau }\ ^{1}\check{A})du^{\tau }\right] ^{2} \label{qellcs} \\ &&+\frac{(\ ^{2}\tilde{\Phi}[\ ^{2}\check{\Phi}])^{2}}{4\ ^{2}\widetilde{\Lambda }}\left[ dy^{7}+(\partial _{\tau _{1}}\ ^{2}n)du^{\tau _{1}}\right] ^{2}+\ \frac{(\partial _{8}\ ^{2}\tilde{\Phi}[\ ^{2}\check{\Phi}])^{2}}{\ ^{2}\Xi (\ ^{2}\tilde{\Phi}[\ ^{2}\check{\Phi}])}\ \left[ dy^{8}+(\partial _{\tau _{1}}\ ^{2}\check{A})du^{\tau _{1}}\right] ^{2}. \notag \end{eqnarray} In these formulas, the generating functions are functionals of the "inverse hat" values, \begin{eqnarray*} \check{\Phi}^{2} &=&\widetilde{\Lambda }^{-1}\int dy^{4}(\ ^{v}\Lambda )\partial _{4}(\tilde{\Phi}^{2})\mbox{ and }\tilde{\Phi}^{2}=\widetilde{\Lambda }\int dy^{4}(\ ^{v}\Lambda )^{-1}\partial _{4}(\check{\Phi}^{2}); \\ \ ^{1}\check{\Phi}^{2} &=&(\ ^{1}\widetilde{\Lambda })^{-1}\int dy^{6}(\ _{1}^{v}\Lambda )\partial _{6}(\ ^{1}\tilde{\Phi}^{2})\mbox{ and }\ ^{1}\tilde{\Phi}^{2}=\ ^{1}\widetilde{\Lambda }\int dy^{6}(\ _{1}^{v}\Lambda )^{-1}\partial _{6}(\ ^{1}\check{\Phi}^{2}); \\ \ ^{2}\check{\Phi}^{2} &=&(\ ^{2}\widetilde{\Lambda })^{-1}\int dy^{8}(\ _{2}^{v}\Lambda )\partial _{8}(\ ^{2}\tilde{\Phi}^{2})\mbox{ and }\ ^{2}\tilde{\Phi}^{2}=\ ^{2}\widetilde{\Lambda }\int dy^{8}(\ _{2}^{v}\Lambda )^{-1}\partial _{8}(\ ^{2}\check{\Phi}^{2}). \end{eqnarray*} We can compute the values $\Xi (\tilde{\Phi}[\check{\Phi}]),$ $\ ^{1}\Xi (\ ^{1}\tilde{\Phi}[\ ^{1}\check{\Phi}])$ and $\ ^{2}\Xi (\ ^{2}\tilde{\Phi}[\ ^{2}\check{\Phi}])$ as in (\ref{qnk8d}).
The torsions for such non--vacuum exact solutions (\ref{qellcs}), generated by the respective data $(\ ^{s}\mathbf{\check{g}},\ ^{s}\mathbf{\check{N}},\ ^{s}\mathbf{\check{\nabla}}),$ are zero, which is different from the class of exact solutions (\ref{qnk8d}) with nontrivial canonical d--torsions (\ref{dtors}) completely determined by arbitrary data $(\ ^{s}\mathbf{g},\ ^{s}\mathbf{N},\ ^{s}\widehat{\mathbf{D}})$ with Killing symmetry on $\partial _{7}.$ \subsection{Non--Killing configurations} The off--diagonal integral varieties of solutions of the gravitational field equations constructed in the previous section possess, for any shell $s\geq 0,$ at least one Killing vector symmetry on $\partial /\partial y^{a_{s}-1}$ when the metrics do not depend on the coordinate $y^{a_{s}-1}$ in a class of N--adapted frames. There are two general possibilities to generate "non--Killing" configurations: 1) performing a formal embedding into higher dimensional vacuum spacetimes and/or 2) via "vertical" conformal nonholonomic deformations. \subsubsection{Embedding into a higher dimension vacuum} We analyze a subclass of off--diagonal metrics for 6--d spaces which via nonholonomic constraints and re--parameterizations transform into 4--d non--Killing vacuum solutions. Let us consider certain geometric data $\Lambda =\ ^{v}\Lambda =\ _{1}^{v}\Lambda =0$ and $h_{3}=\epsilon _{3},h_{5}=\epsilon _{5},n_{k}=0$ and $\ ^{1}n_{\alpha }=0$ with a 2--d $h$--metric $\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}.$ The coefficients of the Ricci d--tensor are zero (see formulas (\ref{equ1})--(\ref{equ4}) and (\ref{equ5})--(\ref{equ7})).
Here we note that one cannot use the equations (\ref{e1})--(\ref{e4aa}), derived for $\partial _{4}h_{3}\neq 0,$ $\partial _{6}h_{5}\neq 0$ etc., which do not allow, for instance, the values $h_{3}=\epsilon _{3},h_{5}=\epsilon _{5},$ for any nontrivial data $h_{4}(x^{i},y^{4}),w_{k}(x^{i},y^{4});$ $h_{6}(x^{i},y^{4},y^{6}),\ ^{1}w_{k}(x^{i},y^{4})$, $\ ^{1}w_{4}(x^{i},y^{4},y^{6}).$ Such values can be considered as generating functions for the vacuum quadratic line element \begin{eqnarray} ds_{6\rightarrow 4}^{2} &=&\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+\epsilon _{3}(dy^{3})^{2}+h_{4}(dy^{4}+w_{k}dx^{k})^{2} \label{6to4} \\ &&+\epsilon _{5}(dy^{5})^{2}+h_{6}(dy^{6}+\ ^{1}w_{k}dx^{k}+\ ^{1}w_{4}dy^{4})^{2}. \notag \end{eqnarray} In general, this class of vacuum 6--d metrics has a nonzero nonholonomically induced d--torsion (\ref{dtors}). Such solutions do not obligatorily constitute a subclass of the vacuum solutions (\ref{qe6dvacuum}) when $h_{3}\rightarrow \epsilon _{3}$ and $h_{5}\rightarrow \epsilon _{5};$ the conditions $\partial _{4}h_{3}\neq 0$ and $\partial _{6}h_{5}\neq 0$ restrict the class of possible generating functions $h_{4}$ and $h_{6}.$ If we fix from the very beginning certain configurations with $\partial _{4}h_{3}=0$ and $\partial _{6}h_{5}=0,$ we can consider $h_{4},h_{6}$ and $w_{k},\ ^{1}w_{k},\ ^{1}w_{4}$ as independent generating functions. If the coefficients in (\ref{6to4}) are additionally subjected to the constraints (\ref{zerot}) for $s=0$ and $s=1,$ we generate LC--configurations. We can follow a formal procedure similar to that outlined in section \ref{sslc}.
The conditions $\mathbf{e}_{i}\ln \sqrt{|\ h_{3}|}=0$ and $\ ^{1}\mathbf{e}_{\alpha }\ln \sqrt{|\ h_{5}|}=0$ are satisfied respectively for any constants $h_{3}=\epsilon _{3}$ and $h_{5}=\epsilon _{5}.$ Let us show how we can restrict the class of generating functions in order to obtain solutions for which \begin{eqnarray} \partial _{4}w_{i}(x^{i},y^{4}) &=&\mathbf{e}_{i}\ln \sqrt{|\ h_{4}(x^{i},y^{4})|},\partial _{i}w_{j}=\partial _{j}w_{i},\mbox{ and }\ \label{zerota} \\ \partial _{6}\ ^{1}w_{\alpha }(x^{i},y^{4},y^{6}) &=&\ ^{1}\mathbf{e}_{\alpha }\ln \sqrt{|\ h_{6}(x^{i},y^{4},y^{6})|},\partial _{\alpha }\ ^{1}w_{\beta }=\partial _{\beta }\ ^{1}w_{\alpha }. \notag \end{eqnarray} We emphasize that the above N--adapted formulas do not depend on $y^{3}$ and $y^{5}.$ Prescribing any values of $h_{4}$ and $h_{6},$ we can find LC--admissible $w$--coefficients by solving the respective systems of first order partial differential equations in (\ref{zerota}). In general, such solutions are defined by nonholonomic configurations, i.e. in "non--explicit" form. If all values $h_{4}[\check{\Phi}],h_{6}[\ ^{1}\check{\Phi}]$ and $w_{k}[\check{\Phi}],\ ^{1}w_{k}[\ ^{1}\check{\Phi}],\ ^{1}w_{4}[\ ^{1}\check{\Phi}]$ are respectively determined by $\check{\Phi}(x^{i},y^{4})$ and $\ ^{1}\check{\Phi}(x^{i},y^{4},y^{6})$ satisfying conditions of type (\ref{explcond}) and (\ref{expconda}) (but $h_{3}$ and $h_{5}$ are not functionals of type (\ref{solha})), we can solve the equations (\ref{zerota}) in explicit form.
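A shell--1 instance of this mechanism can be checked symbolically. The following Python/sympy sketch is illustrative only: the choice $h_{6}=(\partial _{6}\ ^{1}\check{\Phi})^{2}$ is a hypothetical functional of the same type as the $h_{4}$--formulas above (not a formula from the text), for which the first condition in (\ref{zerota}) holds identically.

```python
import sympy as sp

x, y4, y6 = sp.symbols('x y4 y6')          # x ~ x^i
cPhi = sp.Function('cPhi')(x, y4, y6)      # ^1check{Phi}

# illustrative (hypothetical) choice: h6 = (d_6 ^1cPhi)^2
h6 = sp.diff(cPhi, y6)**2
H = sp.log(h6)/2                           # ln sqrt|h6|, moduli dropped

for v in (x, y4):                          # alpha runs over x^i and y^4
    w = sp.diff(cPhi, v)/sp.diff(cPhi, y6) # ^1w_alpha = d_alpha cPhi / d_6 cPhi
    lhs = sp.diff(w, y6)                   # d_6 ^1w_alpha
    rhs = sp.diff(H, v) - w*sp.diff(H, y6) # ^1e_alpha ln sqrt|h6|
    assert sp.simplify(lhs - rhs) == 0
```

For more general functionals $h_{6}[\ ^{1}\check{\Phi}]$ one obtains the stated first order PDE system instead of an identity.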
Let us choose any generating functions $\check{\Phi}$ and $\ ^{1}\check{\Phi},$ consider any functionals $h_{4}[\check{\Phi}],h_{6}[\ ^{1}\check{\Phi}]$ and compute
\begin{eqnarray}
w_{i} &=&\check{w}_{i}=\partial _{i}\check{\Phi}/\partial _{4}\check{\Phi}=\partial _{i}\check{A}\mbox{ and }  \label{data4c} \\
\ ^{1}w_{i} &=&\ ^{1}\check{w}_{i}=\partial _{i}\ ^{1}\check{\Phi}/\partial _{6}\ ^{1}\check{\Phi}=\partial _{i}\ ^{1}\check{A},\ ^{1}w_{4}=\ ^{1}\check{w}_{4}=\partial _{4}\ ^{1}\check{\Phi}/\partial _{6}\ ^{1}\check{\Phi}=\partial _{4}\ ^{1}\check{A},  \notag
\end{eqnarray}
for some $\check{A}(x^{i},y^{4})$ and $\ ^{1}\check{A}(x^{i},y^{4},y^{6}),$ which are necessary for $\partial _{i}w_{j}=\partial _{j}w_{i}$ and $\partial _{\alpha }\ ^{1}w_{\beta }=\partial _{\beta }\ ^{1}w_{\alpha }.$ Considering functional derivatives of type (\ref{fder}) and N--coefficients of the type in (\ref{data4c}) when $H[\check{\Phi}]=\ln \sqrt{|\ h_{4}|}$ and $\ ^{1}H[\ ^{1}\check{\Phi}]=\ln \sqrt{|\ h_{6}|},$ we can satisfy the LC--conditions (\ref{zerota}). Putting together the above formulas, we construct a subclass of metrics of type (\ref{6to4}) determined by generic off--diagonal metrics as solutions of the 6--d vacuum Einstein equations,
\begin{eqnarray}
ds_{6\rightarrow 4}^{2} &=&\epsilon _{i}e^{\psi (x^{k},\Lambda =0)}(dx^{i})^{2}+\epsilon _{3}(dy^{3})^{2}+h_{4}[\check{\Phi}](dy^{4}+\partial _{k}\check{A}dx^{k})^{2}  \label{6to4lc} \\
&&+\epsilon _{5}(dy^{5})^{2}+h_{6}[\ ^{1}\check{\Phi}](dy^{6}+\partial _{k}\ ^{1}\check{A}\ dx^{k}+\partial _{4}\ ^{1}\check{A}\ dy^{4})^{2}.  \notag
\end{eqnarray}
We note that in this quadratic line element the terms $\epsilon _{3}(dy^{3})^{2}$ and $\epsilon _{5}(dy^{5})^{2}$ are used for trivial extensions from 4--d to 6--d.
Re--defining the coordinate $y^{6}\rightarrow y^{3},$ we generate vacuum solutions in 4--d gravity with metrics (\ref{6to4lc}) depending on all four coordinates $x^{i},y^{3}$ and $y^{4}.$ The anholonomy coefficients (\ref{anhrel1}) are not zero, and such metrics cannot be diagonalized by coordinate transformations. This class of 4--d vacuum spacetimes does not possess, in general, Killing symmetries.

\subsubsection{"Vertical" conformal nonholonomic deformations}

There is another possibility to generate off--diagonal solutions depending on all spacetime coordinates and, in general, with nontrivial sources of the type in (\ref{sourc1}); see details and proofs in Ref. \cite{vex3}. By straightforward computations, we can check that any metric
\begin{eqnarray}
\mathbf{g} &=&g_{i}(x^{k})dx^{i}\otimes dx^{i}+\omega ^{2}(u^{\alpha })h_{a}(x^{k},y^{4})\mathbf{e}^{a}\otimes \mathbf{e}^{a}+  \label{ans1} \\
&&\ ^{1}\omega ^{2}(u^{\alpha _{1}})h_{a_{1}}(u^{\alpha },y^{6})\mathbf{e}^{a_{1}}\otimes \mathbf{e}^{a_{1}}+\ ^{2}\omega ^{2}(u^{\alpha _{2}})h_{a_{2}}(u^{\alpha _{1}},y^{8})\mathbf{e}^{a_{2}}\otimes \mathbf{e}^{a_{2}},  \notag
\end{eqnarray}
with the conformal $v$--factors subjected to the conditions
\begin{eqnarray}
\mathbf{e}_{k}\omega &=&\partial _{k}\omega +n_{k}\partial _{3}\omega +w_{k}\partial _{4}\omega =0,  \label{vconfc} \\
\ ^{1}\mathbf{e}_{\beta }\ ^{1}\omega &=&\partial _{\beta }\ ^{1}\omega +\ ^{1}n_{\beta }\partial _{5}\ ^{1}\omega +\ ^{1}w_{\beta }\partial _{6}\ ^{1}\omega =0,  \notag \\
\ ^{2}\mathbf{e}_{\beta _{1}}\ ^{2}\omega &=&\partial _{\beta _{1}}\ ^{2}\omega +\ ^{2}n_{\beta _{1}}\partial _{7}\ ^{2}\omega +\ ^{2}w_{\beta _{1}}\partial _{8}\ ^{2}\omega =0,  \notag
\end{eqnarray}
(similar equations can be written recurrently for arbitrary finite extra dimensions) leaves the Ricci d--tensor (\ref{equ1})--(\ref{equ4d}) unchanged.
Any class of solutions considered in this section can be generalized to non--Killing configurations using nonholonomic "vertical" conformal transforms. In 4--d, the ansatz (\ref{ans1}) can be parameterized with respect to coordinate frames, in a form with nontrivial $\omega ^{2}(u^{\alpha })$ different from that given in Figure \ref{fig1},
\begin{equation}
g_{\underline{\alpha }\underline{\beta }}=\left[
\begin{array}{cccc}
g_{1}+\omega ^{2}(n_{1}^{\ 2}h_{3}+w_{1}^{\ 2}h_{4}) & \omega ^{2}(n_{1}n_{2}h_{3}+w_{1}w_{2}h_{4}) & \omega ^{2}n_{1}h_{3} & \omega ^{2}w_{1}h_{4} \\
\omega ^{2}(n_{1}n_{2}h_{3}+w_{1}w_{2}h_{4}) & g_{2}+\omega ^{2}(n_{2}^{\ 2}h_{3}+w_{2}^{\ 2}h_{4}) & \omega ^{2}n_{2}h_{3} & \omega ^{2}w_{2}h_{4} \\
\omega ^{2}n_{1}h_{3} & \omega ^{2}n_{2}h_{3} & \omega ^{2}h_{3} & 0 \\
\omega ^{2}w_{1}h_{4} & \omega ^{2}w_{2}h_{4} & 0 & \omega ^{2}h_{4}
\end{array}
\right] .  \label{ans1a}
\end{equation}
A general metric $g_{\alpha \beta }(u^{\gamma })$ can be parameterized in the form (\ref{ans1a}) if there exist geometrically and physically well--defined frame transformations $g_{\alpha \beta }=e_{\ \alpha }^{\underline{\alpha }}e_{\ \beta }^{\underline{\beta }}g_{\underline{\alpha }\underline{\beta }}.$ For certain given values $g_{\alpha \beta }$ and $g_{\underline{\alpha }\underline{\beta }}$ (in GR, there are 6 + 6 independent components), we have to solve a system of quadratic algebraic equations in order to determine the 16 coefficients $e_{\ \alpha }^{\underline{\alpha }},$ up to a fixed coordinate system. We have to fix such nonholonomic 2+2 splittings and partitions on manifolds for which the algebraic equations have real nondegenerate solutions. Finally, we note that we can consider generic off--diagonal coordinate decompositions similar to (\ref{ans1a}) but with dependencies on all coordinates for higher order shells.
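For orientation, we sketch a simple counting argument (our own remark, not taken from the cited references) for why such frame transformations generically exist. A symmetric 4--d metric has 10 components, of which 4 can be gauged away by coordinate transformations, leaving the 6 + 6 independent components mentioned above for the pair of metrics; the $4\times 4$ matrix $e_{\ \alpha }^{\underline{\alpha }}$ has 16 entries, while the pseudo--orthogonal transformations preserving $g_{\underline{\alpha }\underline{\beta }}$ form a 6--parameter group, so that
\begin{equation*}
\underbrace{16}_{e_{\ \alpha }^{\underline{\alpha }}}-\underbrace{6}_{\mbox{local pseudo--rotations}}=\underbrace{10}_{g_{\alpha \beta }}.
\end{equation*}
Hence the quadratic algebraic system determines $e_{\ \alpha }^{\underline{\alpha }}$ only up to such local pseudo--rotations, provided real nondegenerate solutions exist.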
\section{Nonholonomic Deformations \& the Kerr Metric}

\label{s4}

In this section, we show how, using the AFDM formalism, the Kerr solution can be constructed as a particular case when corresponding types of generating and integration functions are prescribed. We provide a series of new classes of solutions in which the metrics are nonholonomically deformed into general or ellipsoidal stationary configurations in four--dimensional gravity and/or extra dimensions. Explicit examples are studied of generic off--diagonal metrics encoding interactions in massive gravity, $f$--modifications and nonholonomically induced torsion effects. We find nonholonomic constraints for which modified massive, and zero mass, gravitational effects can be modelled by nonlinear off--diagonal interactions in GR.

\subsection{Generating the Kerr vacuum solution}

Let us consider the ansatz
\begin{equation*}
ds_{[0]}^{2}=Y^{-1}e^{2h}(d\rho ^{2}+dz^{2})-\rho ^{2}Y^{-1}dt^{2}+Y(d\varphi +Adt)^{2}
\end{equation*}
parameterized in terms of three functions $(h,Y,A)$ of the coordinates $(\rho ,z).
$ We obtain the Kerr solution of the vacuum Einstein equations in 4--d, for rotating black holes, if we choose
\begin{eqnarray*}
Y &=&\frac{1-(p\widehat{x}_{1})^{2}-(q\widehat{x}_{2})^{2}}{(1+p\widehat{x}_{1})^{2}+(q\widehat{x}_{2})^{2}},\ A=2M\frac{q}{p}\frac{(1-\widehat{x}_{2})(1+p\widehat{x}_{1})}{1-(p\widehat{x}_{1})^{2}-(q\widehat{x}_{2})^{2}}, \\
e^{2h} &=&\frac{1-(p\widehat{x}_{1})^{2}-(q\widehat{x}_{2})^{2}}{p^{2}[(\widehat{x}_{1})^{2}+(\widehat{x}_{2})^{2}]},\ \rho ^{2}=M^{2}(\widehat{x}_{1}^{2}-1)(1-\widehat{x}_{2}^{2}),\ z=M\widehat{x}_{1}\widehat{x}_{2},
\end{eqnarray*}
where $M=const$ and $\rho =0$ consists of the horizon $\widehat{x}_{1}=0$ and the "north / south" segments of the rotation axis, $\widehat{x}_{2}=+1/-1.$ Such a metric can be written in the form (\ref{ansprime}),
\begin{equation}
ds_{[0]}^{2}=(dx^{1})^{2}+(dx^{2})^{2}-\rho ^{2}Y^{-1}(\mathbf{e}^{3})^{2}+Y(\mathbf{e}^{4})^{2},  \label{kerr1}
\end{equation}
if the coordinates $x^{1}(\widehat{x}_{1},\widehat{x}_{2})$ and $x^{2}(\widehat{x}_{1},\widehat{x}_{2})$ are defined by
\begin{equation*}
(dx^{1})^{2}+(dx^{2})^{2}=M^{2}e^{2h}(\widehat{x}_{1}^{2}-\widehat{x}_{2}^{2})Y^{-1}\left( \frac{d\widehat{x}_{1}^{2}}{\widehat{x}_{1}^{2}-1}+\frac{d\widehat{x}_{2}^{2}}{1-\widehat{x}_{2}^{2}}\right)
\end{equation*}
and $y^{3}=t+\widehat{y}^{3}(x^{1},x^{2}),y^{4}=\varphi +\widehat{y}^{4}(x^{1},x^{2},t),$ when
\begin{equation*}
\mathbf{e}^{3}=dt+(\partial _{i}\widehat{y}^{3})dx^{i},\ \mathbf{e}^{4}=dy^{4}+(\partial _{i}\widehat{y}^{4})dx^{i},
\end{equation*}
for some functions $\widehat{y}^{a},$ $a=3,4,$ with $\partial _{t}\widehat{y}^{4}=-A(x^{k}).$ For many purposes, the Kerr metric is written in the so--called Boyer--Lindquist coordinates $(r,\vartheta ,\varphi ,t),$ for $r=m_{0}(1+p\widehat{x}_{1}),\widehat{x}_{2}=\cos \vartheta .$ The parameters $p,q$ are related to the total black hole mass, $m_{0}$ (it should not be confused with the parameter $\mu _{g}$ in massive gravity), and the total angular momentum,
$am_{0},$ for the asymptotically flat, stationary and axisymmetric Kerr spacetime. The formulas $m_{0}=Mp^{-1}$ and $a=Mqp^{-1}$ with $p^{2}+q^{2}=1$ imply $m_{0}^{2}-a^{2}=M^{2}$ (see the monographs \cite{heusler,kramer,misner} for the standard methods and bibliography on stationary black hole solutions; we note here that the coordinates $\widehat{x}_{1},\widehat{x}_{2}$ correspond respectively to $x,y$ from chapter 4 of the first book). In such variables, the vacuum solution (\ref{kerr1}) can be written
\begin{eqnarray}
ds_{[0]}^{2} &=&(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}+\overline{A}(\mathbf{e}^{3^{\prime }})^{2}+(\overline{C}-\overline{B}^{2}/\overline{A})(\mathbf{e}^{4^{\prime }})^{2},  \label{kerrbl} \\
\mathbf{e}^{3^{\prime }} &=&dt+d\varphi \overline{B}/\overline{A}=dy^{3^{\prime }}-\partial _{i^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})dx^{i^{\prime }},\ \mathbf{e}^{4^{\prime }}=dy^{4^{\prime }}=d\varphi ,  \notag
\end{eqnarray}
for any coordinate functions
\begin{equation*}
x^{1^{\prime }}(r,\vartheta ),\ x^{2^{\prime }}(r,\vartheta ),\ y^{3^{\prime }}=t+\widehat{y}^{3^{\prime }}(r,\vartheta ,\varphi )+\varphi \overline{B}/\overline{A},\ y^{4^{\prime }}=\varphi ,\ \partial _{\varphi }\widehat{y}^{3^{\prime }}=-\overline{B}/\overline{A},
\end{equation*}
for which $(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}=\Xi \left( \Delta ^{-1}dr^{2}+d\vartheta ^{2}\right) $, and the coefficients are
\begin{eqnarray}
\overline{A} &=&-\Xi ^{-1}(\Delta -a^{2}\sin ^{2}\vartheta ),\ \overline{B}=\Xi ^{-1}a\sin ^{2}\vartheta \left[ \Delta -(r^{2}+a^{2})\right] ,  \notag \\
\overline{C} &=&\Xi ^{-1}\sin ^{2}\vartheta \left[ (r^{2}+a^{2})^{2}-\Delta a^{2}\sin ^{2}\vartheta \right] ,\mbox{ and }  \notag \\
\Delta &=&r^{2}-2m_{0}r+a^{2},\ \Xi =r^{2}+a^{2}\cos ^{2}\vartheta .
\label{kerrcoef}
\end{eqnarray}
The quadratic line elements (\ref{kerr1}) (or (\ref{kerrbl})) with prime data
\begin{eqnarray}
\mathring{g}_{1} &=&1,\mathring{g}_{2}=1,\mathring{h}_{3}=-\rho ^{2}Y^{-1},\mathring{h}_{4}=Y,\mathring{N}_{i}^{a}=\partial _{i}\widehat{y}^{a},  \label{dkerr} \\
(\mbox{ or \ }\mathring{g}_{1^{\prime }} &=&1,\mathring{g}_{2^{\prime }}=1,\mathring{h}_{3^{\prime }}=\overline{A},\mathring{h}_{4^{\prime }}=\overline{C}-\overline{B}^{2}/\overline{A},  \notag \\
\mathring{N}_{i^{\prime }}^{3} &=&\mathring{n}_{i^{\prime }}=-\partial _{i^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A}),\ \mathring{N}_{i^{\prime }}^{4}=\mathring{w}_{i^{\prime }}=0)  \notag
\end{eqnarray}
define solutions of the vacuum Einstein equations parameterized in the form (\ref{cdeinst}) and (\ref{lcconstr}) with zero sources. Here we note that we have to consider a correspondingly N--adapted system of coordinates, instead of the "standard" prolate spherical, or Boyer--Lindquist, ones, because parameterizations with the data (\ref{dkerr}) are most convenient for a straightforward application of the AFDM. Following such an approach, we can generalize the solutions in order to get dependencies of the coefficients on more than two coordinates, with non--Killing configurations and/or extra dimensions. In some sense, the Kerr vacuum solution in GR constitutes a "degenerate" case of the 4--d off--diagonal vacuum solutions determined by primary metrics with the data (\ref{dkerr}), when the diagonal coefficients depend only on two "horizontal" N--adapted coordinates and the off--diagonal terms are induced by rotation frames.

\subsection{Deformations of Kerr metrics in 4--d massive gravity}

Let us consider the coefficients (\ref{dkerr}) for the Kerr metric as the data for a prime metric $\mathbf{\mathring{g}}$ (in general, it may or may not be an exact solution of the Einstein or other modified gravitational equations, or any fiducial metric).
Our goal is to construct nonholonomic deformations,
\begin{equation*}
(\mathbf{\mathring{g}},\mathbf{\mathring{N}},\ ^{v}\mathring{\Upsilon}=0,\ \mathring{\Upsilon}=0)\rightarrow (\widetilde{\mathbf{g}},\widetilde{\mathbf{N}},\ ^{v}\widetilde{\Upsilon }=\widetilde{\lambda },\ \widetilde{\Upsilon }=\widetilde{\lambda }),\ \widetilde{\lambda }=const\neq 0,
\end{equation*}
see the sources (\ref{source1b}) for the shell $s=0$ and (\ref{source1a}). The main condition is that the target metric $\mathbf{g}$ defines a generic off--diagonal solution of the field equations in 4--d massive gravity. The N--adapted deformations of the coefficients of the metrics, frames and sources are parameterized in the form
\begin{eqnarray}
&&[\mathring{g}_{i},\mathring{h}_{a},\mathring{w}_{i},\mathring{n}_{i}]\rightarrow \lbrack \widetilde{g}_{i}=\widetilde{\eta }_{i}\mathring{g}_{i},\widetilde{h}_{3}=\widetilde{\eta }_{3}\mathring{h}_{3},\widetilde{h}_{4}=\widetilde{\eta }_{4}\mathring{h}_{4},\widetilde{w}_{i}=\mathring{w}_{i}+\ ^{\eta }w_{i},n_{i}=\mathring{n}_{i}+\ ^{\eta }n_{i}],  \notag \\
&&\ \widetilde{\Upsilon }=\widetilde{\lambda },\ ^{v}\hat{\Upsilon}(x^{k^{\prime }})=\ ^{v}\Lambda =\mu _{g}^{2}\ \lambda (x^{k^{\prime }})\mathring{h}_{4}^{-1},\ \widetilde{\Lambda }=\mu _{g}^{2}\ \widetilde{\lambda },\ \tilde{\Phi}^{2}=\exp [2\varpi (x^{k^{\prime }},y^{4})]\ \mathring{h}_{3},  \label{ndefbm}
\end{eqnarray}
where the values $\widetilde{\eta }_{a},\widetilde{w}_{i},\tilde{n}_{i}$ and $\varpi $ are functions of three coordinates $(x^{k^{\prime }},y^{4}),$ while $\widetilde{\eta }_{i}(x^{k})$ depend only on h--coordinates.
The prime data $\mathring{g}_{i},\mathring{h}_{a},\mathring{w}_{i},\mathring{n}_{i}$ are given by coefficients depending only on $(x^{k}).$ In terms of the $\eta $--functions (\ref{etad}) resulting in $h_{a}^{\ast }\neq 0$ and $g_{i}=c_{i}e^{\psi (x^{k})},$ the solutions of type (\ref{qnk4d}) with $\widetilde{\Lambda }\rightarrow \widetilde{\lambda }$ and $\ _{2}n_{k^{\prime }}=0$ (we use "primed" coordinates and the prime Kerr data (\ref{kerrbl}) and (\ref{dkerr})) can be re--written in the form
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime }})}[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-  \label{nvlcmgs} \\
&&\frac{e^{2\varpi }}{4\mu _{g}^{2}\ |\widetilde{\lambda }|}\overline{A}[dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+\frac{(\varpi ^{\ast })^{2}}{\ \mu _{g}^{2}\ \lambda (x^{k^{\prime }})}(\overline{C}-\overline{B}^{2}/\overline{A})[d\varphi +(\partial _{i^{\prime }}\ ^{\eta }\widetilde{A})dx^{i^{\prime }}]^{2},  \notag
\end{eqnarray}
for
\begin{equation*}
\Xi =\int dy^{4}(\ ^{v}\Lambda )\partial _{4}(\tilde{\Phi}^{2})=\mu _{g}^{2}\ \lambda (x^{k^{\prime }})\mathring{h}_{4}^{-1}\tilde{\Phi}^{2},
\end{equation*}
with $\tilde{\Phi}^{2}/\mathring{h}_{4}$ parameterized using the formulas (\ref{ndefbm}).\footnote{Hereafter we shall consider that we can approximate $\lambda (x^{k^{\prime }})\simeq \widetilde{\lambda }=const.$} The gravitational polarizations $(\eta _{i},\eta _{a})$ and N--coefficients $(n_{i},w_{i})$ are computed following the formulas
\begin{eqnarray*}
e^{\psi (x^{k})} &=&\widetilde{\eta }_{1^{\prime }}=\widetilde{\eta }_{2^{\prime }},\ \widetilde{\eta }_{3^{\prime }}=\frac{e^{2\varpi }}{4\mu _{g}^{2}\ |\widetilde{\lambda }|},\ \widetilde{\eta }_{4^{\prime }}=\frac{(\varpi ^{\ast })^{2}}{\ \mu _{g}^{2}\ \lambda (x^{k^{\prime }})}, \\
w_{i^{\prime }} &=&\mathring{w}_{i^{\prime }}+\ ^{\eta }w_{i^{\prime }}=\partial
_{i^{\prime }}(\ ^{\eta }\widetilde{A}[\varpi ]),\ n_{k^{\prime }}=\mathring{n}_{k^{\prime }}+\ ^{\eta }n_{k^{\prime }}=\partial _{k^{\prime }}(-\widehat{y}^{3^{\prime }}-\varphi \overline{B}/\overline{A}+\ ^{\eta }n),
\end{eqnarray*}
where $\ ^{\eta }\widetilde{A}(x^{k},y^{4})$ is introduced via formulas and assumptions similar to (\ref{expconda}), for $s=1,$ and $\psi ^{\bullet \bullet }+\psi ^{\prime \prime }=2\ \mu _{g}^{2}\ \lambda (x^{k^{\prime }}).$ For the N--coefficients, the parameterizations (\ref{solhn}) are used with $\check{\Phi}=\exp [\varpi (x^{k^{\prime }},y^{4})]\sqrt{|\mathring{h}_{3^{\prime }}|}$, when $\mathring{h}_{3^{\prime }}\mathring{h}_{4^{\prime }}=\overline{A}\overline{C}-\overline{B}^{2}$ and
\begin{equation*}
w_{i^{\prime }}=\mathring{w}_{i^{\prime }}+\ ^{\eta }w_{i^{\prime }}=\partial _{i^{\prime }}(e^{\varpi }\sqrt{|\overline{A}\overline{C}-\overline{B}^{2}|})/\varpi ^{\ast }e^{\varpi }\sqrt{|\overline{A}\overline{C}-\overline{B}^{2}|}=\partial _{i^{\prime }}\ ^{\eta }\widetilde{A}.
\end{equation*}
We can take any function $\ ^{\eta }n(x^{k})$ and put $\lambda =const\neq 0$ using corresponding re--definitions of coordinates and generating functions. The solutions (\ref{nvlcmgs}) are valid for stationary LC--configurations determined by off--diagonal massive gravity effects on Kerr black holes, when the new class of spacetimes has a Killing symmetry in $\partial /\partial y^{3^{\prime }}$ and a generic dependence on three (from maximally four) coordinates, $(x^{i^{\prime }}(r,\vartheta ),\varphi ).$ Off--diagonal modifications are possible even for very small values of the mass parameter $\mu _{g}.$ The solutions depend on the type of generating function $\varpi (x^{i^{\prime }},\varphi )$ we have to fix in order to satisfy certain experimental/observational data in certain fixed systems of reference/coordinates.
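For instance (an illustrative particular solution of our own, assuming $\lambda \simeq \widetilde{\lambda }=const$ and Euclidean signature of the h--metric), the 2--d equation $\psi ^{\bullet \bullet }+\psi ^{\prime \prime }=2\mu _{g}^{2}\widetilde{\lambda }$ is solved by
\begin{equation*}
\psi =\frac{\mu _{g}^{2}\widetilde{\lambda }}{2}\left[ (x^{1^{\prime }})^{2}+(x^{2^{\prime }})^{2}\right] +\psi _{0},\ \psi _{0}=const,
\end{equation*}
to which any solution of the homogeneous (Laplace) equation may be added. This makes explicit how the massive gravity source $\mu _{g}^{2}\widetilde{\lambda }$ enters the "horizontal" conformal factor $e^{\psi }$ in (\ref{nvlcmgs}).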
Various data can be re--parameterized for an effective $\lambda =const\neq 0.$ In such variables, we can mimic stationary massive gravity effects by off--diagonal configurations in GR with integration parameters which should also be fixed by imposing additional assumptions on the symmetries of the interactions (for instance, to have an ellipsoid configuration; see section \ref{4dellipsc} and details and discussions on parametric Killing symmetries in Refs. \cite{ger1,ger2,vpars}).

\subsubsection{Nonholonomically induced torsion and massive gravity}

If we do not impose the LC--conditions (\ref{lcconstr}), a nontrivial source $\ ^{\mu }\widetilde{\Lambda }=\mu _{g}^{2}\ \widetilde{\lambda }$ from massive gravity induces stationary configurations with nontrivial d--torsion (\ref{dtors}). The torsion coefficients are determined by metrics of the type (\ref{qnk4d}) with $\widetilde{\Lambda }\rightarrow \widetilde{\lambda }$ and parameterizations of coefficients and coordinates distinguishing the prime data for a Kerr metric (\ref{dkerr}).
Such solutions can be written in the form {\small
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime }})}[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-\frac{\Phi ^{2}}{4\mu _{g}^{2}\ |\widetilde{\lambda }|}\overline{A}[dy^{3^{\prime }}+\left( \ _{1}n_{k^{\prime }}(x^{i^{\prime }})+\ _{2}n_{k^{\prime }}(x^{i^{\prime }})\frac{4\mu _{g}(\Phi ^{\ast })^{2}}{\Phi ^{5}}-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}  \notag \\
&&+\frac{(\partial _{\varphi }\Phi )^{2}}{\ \mu _{g}^{2}\ \lambda (x^{k^{\prime }})\Phi ^{2}}(\overline{C}-\overline{B}^{2}/\overline{A})[d\varphi +\frac{\partial _{i^{\prime }}\Phi }{\partial _{\varphi }\Phi }dx^{i^{\prime }}]^{2},  \label{ofindtmg}
\end{eqnarray}
} where we use a generating function $\Phi (x^{i^{\prime }},\varphi )$ instead of $e^{\varpi }$ and consider nonzero values of $\ _{2}n_{k}(x^{i^{\prime }}).$ We can see that nontrivial stationary off--diagonal torsion effects may result in additional effective rotations proportional to $\mu _{g}$ if the integration function $\ _{2}n_{k}\neq 0.$ Considering the two different classes of off--diagonal solutions (\ref{ofindtmg}) and (\ref{nvlcmgs}), we can study whether a massive gravity theory is described in terms of an induced torsion or characterized by additional nonholonomic constraints, as in GR (with zero torsion). It should be noted that configurations of the type (\ref{ofindtmg}) can be constructed in various theories with noncommutative, brane, extra--dimension, warped and trapped brane type variables in string, or Finsler like and/or Ho\v{r}ava--Lifshitz theories \cite{vex1,vp,vt,vgrg,vbranef}, when nonholonomically induced torsion effects play a substantial role. Those classes of solutions were constructed for different sets of interaction constants and, for instance, for propagating Schwarzschild and/or ellipsoid type configurations on Taub--NUT backgrounds etc.
The off--diagonal deformations and effective polarizations of the coefficients of the metrics correspond to a prime Kerr metric and are related to target configurations in massive gravity.

\subsubsection{Small $f$--modifications of Kerr metrics and massive gravity}

Using the AFDM, we can construct off--diagonal solutions for the superposition of $f$--modified and massive gravity interactions. Such nonlinear effects can be distinguished in explicit form if we consider, for additional $f$--deformations, a "prime" solution for massive gravity, effectively modelled in GR, with source $\ ^{\mu }\Lambda =\mu _{g}^{2}\ \lambda (x^{k^{\prime }}),$ or re--defined to $\ ^{\mu }\tilde{\Lambda}=\mu _{g}^{2}\ \tilde{\lambda}=const.$ Adding a "small" value $\widetilde{\Lambda }$ determined by $f$--modifications, we work in N--adapted frames with an effective source $\Upsilon =\widetilde{\Lambda }+\widetilde{\lambda }$ (see the formulas (\ref{source1a}) and (\ref{source1b})). As a result, we construct a class of off--diagonal solutions in modified $f$--gravity generated from the Kerr black hole solution as a result of two nonholonomic deformations,
\begin{equation*}
(\mathbf{\mathring{g}},\mathbf{\mathring{N}},\ ^{v}\mathring{\Upsilon}=0,\ \mathring{\Upsilon}=0)\rightarrow (\widetilde{\mathbf{g}},\widetilde{\mathbf{N}},\ ^{v}\widetilde{\Upsilon }=\widetilde{\lambda },\ \widetilde{\Upsilon }=\widetilde{\lambda })\rightarrow (\ ^{\varepsilon }\mathbf{g},\ ^{\varepsilon }\mathbf{N},\ ^{v}\Upsilon =\varepsilon \ \widetilde{\Lambda }+\ ^{\mu }\tilde{\Lambda},\ \Upsilon =\varepsilon \ \widetilde{\Lambda }+\ ^{\mu }\tilde{\Lambda}),
\end{equation*}
when the target data $\mathbf{g}=\ ^{\varepsilon }\mathbf{g}$ and $\mathbf{N}=\ ^{\varepsilon }\mathbf{N}$ depend on a small parameter $\varepsilon ,$ $0<\varepsilon \ll 1.$ For simplicity, we restrict our considerations to solutions with $|\varepsilon \ \widetilde{\Lambda }|\ll |\ ^{\mu }\tilde{\Lambda}|,$ i.e.
we consider that $f$--modifications in N--adapted frames are much smaller than massive gravity effects (in a similar form, we can analyze nonlinear interactions with $|\varepsilon \ \widetilde{\Lambda }|\gg |\ ^{\mu }\tilde{\Lambda}|$). The corresponding N--adapted transforms are parameterized as
\begin{eqnarray}
&&[\mathring{g}_{i},\mathring{h}_{a},\mathring{w}_{i},\mathring{n}_{i}]\rightarrow  \label{def2} \\
&&[g_{i}=(1+\varepsilon \chi _{i})\widetilde{\eta }_{i}\mathring{g}_{i},h_{3}=(1+\varepsilon \chi _{3})\widetilde{\eta }_{3}\mathring{h}_{3},h_{4}=(1+\varepsilon \chi _{4})\widetilde{\eta }_{4}\mathring{h}_{4},\ ^{\varepsilon }w_{i}=\mathring{w}_{i}+\widetilde{w}_{i}+\varepsilon \overline{w}_{i},\ ^{\varepsilon }n_{i}=\mathring{n}_{i}+\tilde{n}_{i}+\varepsilon \overline{n}_{i}];  \notag \\
&&\Upsilon =\ ^{\mu }\tilde{\Lambda}(1+\varepsilon \ \widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda});\ \ ^{\varepsilon }\tilde{\Phi}=\tilde{\Phi}(x^{k},\varphi )[1+\varepsilon \ ^{1}\tilde{\Phi}(x^{k},\varphi )/\tilde{\Phi}(x^{k},\varphi )]=\exp [\ ^{\varepsilon }\varpi (x^{k},\varphi )],  \notag
\end{eqnarray}
leading to a 4--d LC--configuration with d--metric
\begin{equation*}
ds_{4\varepsilon dK}^{2}=\epsilon _{i}(1+\varepsilon \chi _{i})e^{\psi (x^{k})}(dx^{i})^{2}+\frac{\ ^{\varepsilon }\tilde{\Phi}^{2}}{4\Upsilon }\left[ dy^{3}+(\partial _{i}\ n)dx^{i}\right] ^{2}+\frac{(\partial _{\varphi }\ ^{\varepsilon }\tilde{\Phi})^{2}}{\Upsilon \ ^{\varepsilon }\tilde{\Phi}^{2}}\left[ dy^{4}+(\partial _{i}\ ^{\varepsilon }\check{A})dx^{i}\right] ^{2},
\end{equation*}
for $\partial _{i}\ ^{\varepsilon }\check{A}=\partial _{i}\check{A}+\varepsilon \partial _{i}\ ^{1}\check{A}$ determined by $\ ^{\varepsilon }\tilde{\Phi}=\tilde{\Phi}+\varepsilon \ ^{1}\tilde{\Phi}$ following the conditions in (\ref{data4c}).
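As a consistency check (our own first--order expansion, not part of the original derivation), the polarization $\chi _{3}$ quoted below in (\ref{edefcel}) follows directly from expanding the "vertical" coefficient of the above d--metric in $\varepsilon ,$ using $\ ^{\varepsilon }\tilde{\Phi}=\tilde{\Phi}(1+\varepsilon \ ^{1}\tilde{\Phi}/\tilde{\Phi})$ and $\Upsilon =\ ^{\mu }\tilde{\Lambda}(1+\varepsilon \widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda})$:
\begin{equation*}
\frac{(\ ^{\varepsilon }\tilde{\Phi})^{2}}{4\Upsilon }=\frac{\tilde{\Phi}^{2}}{4\ ^{\mu }\tilde{\Lambda}}\left[ 1+\varepsilon \left( 2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda}\right) \right] +O(\varepsilon ^{2}),
\end{equation*}
i.e. the first--order correction is exactly $\chi _{3}=2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda}.$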
The values labeled by "$\circ $" and "$\widetilde{}$" are taken from (\ref{ndefbm}) (for simplicity, we omit priming of indices). The $\chi $-- and $w$--values (corresponding to a re--definition of coefficients; for simplicity, we consider $\varepsilon \overline{n}_{i}=0$) have to be computed to define $\varepsilon $--deformed LC--configurations, see the formulas (\ref{zerot}) for $s=0$, as solutions of the system (\ref{sourc1}) in the form (\ref{e1})--(\ref{e4}) for a source $\Upsilon =\ ^{\mu }\tilde{\Lambda}+\varepsilon \widetilde{\Lambda }.$ The deformations (\ref{def2}) of the off--diagonal solutions (\ref{nvlcmgs}) result in a new class of $\varepsilon $--deformed solutions with
\begin{eqnarray}
\chi _{1} &=&\chi _{2}=\chi ,\mbox{ for }\partial _{11}\chi +\epsilon _{2}\partial _{22}\chi =2\widetilde{\Lambda };  \label{edefcel} \\
\chi _{3} &=&2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda},\ \chi _{4}=2\partial _{4}\ ^{1}\tilde{\Phi}/\tilde{\Phi}-2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda},  \notag \\
\overline{w}_{i} &=&\left( \frac{\partial _{i}\ ^{1}\tilde{\Phi}}{\partial _{i}\tilde{\Phi}}-\frac{\partial _{4}\ ^{1}\tilde{\Phi}}{\partial _{4}\tilde{\Phi}}\right) \frac{\partial _{i}\tilde{\Phi}}{\partial _{4}\tilde{\Phi}}=\partial _{i}\ ^{1}\check{A},\ \overline{n}_{i}=0,  \notag
\end{eqnarray}
where there is no summation on the index "$i$" in the last formula and $\mathring{h}_{3^{\prime }}\mathring{h}_{4^{\prime }}=\overline{A}\overline{C}-\overline{B}^{2}.$ Such nonholonomic deformations are determined, respectively, by two generating functions $\tilde{\Phi}=e^{\varpi }$ and $\ ^{1}\tilde{\Phi}$ and two sources $\ ^{\mu }\tilde{\Lambda}$ and $\widetilde{\Lambda }.$ Putting all this together, we construct an off--diagonal generalization of the Kerr metric via "main" massive gravity terms and additional $\varepsilon $--parametric $f$--modifications,
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime
}})}(1+\varepsilon \chi (x^{k^{\prime }}))[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-  \notag \\
&&\frac{e^{2\varpi }}{4|\ ^{\mu }\tilde{\Lambda}|}\overline{A}[1+\varepsilon (2e^{-\varpi }\ ^{1}\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda})][dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+  \label{nvlcmgse} \\
&&\frac{(\varpi ^{\ast })^{2}}{\ ^{\mu }\tilde{\Lambda}}(\overline{C}-\overline{B}^{2}/\overline{A})[1+\varepsilon (2e^{-\varpi }\partial _{4}\ ^{1}\tilde{\Phi}-2e^{-\varpi }\ ^{1}\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda})][d\varphi +(\partial _{i^{\prime }}\ \widetilde{A}+\varepsilon \partial _{i^{\prime }}\ ^{1}\check{A})dx^{i^{\prime }}]^{2}.  \notag
\end{eqnarray}
We can consider $\varepsilon $--deformations of the type (\ref{def2}) for (\ref{ofindtmg}), which allows us to generate new classes of off--diagonal solutions with nonholonomically induced torsion determined both by massive and $f$--modifications of GR. Such a spacetime cannot be modelled as an effective one with anisotropic polarizations in GR.

\subsection{Ellipsoidal 4--d deformations of the Kerr metric}

\label{4dellipsc}We provide some examples of how the Kerr primary data (\ref{dkerr}) are nonholonomically deformed into target generic off--diagonal solutions of the vacuum and non--vacuum Einstein equations for the canonical d--connection and/or the Levi--Civita connection.

\subsubsection{Vacuum ellipsoidal configurations}

Let us construct a class of parametric solutions with such nonholonomic constraints on the coefficients given by (\ref{ofindtmg}) which transform the metrics into effective 4--d vacuum LC--configurations of the type (\ref{vs2}).
This defines a model when $f$--modifications compensate massive gravity deformations of a Kerr solution, with $\Upsilon =\ ^{\mu }\tilde{\Lambda}+\varepsilon \widetilde{\Lambda }=0,$ and result in ellipsoidal off--diagonal configurations in GR, where $\varepsilon =-\ ^{\mu }\tilde{\Lambda}/\widetilde{\Lambda }\ll 1$ can be considered as an eccentricity parameter. We find solutions for $\varepsilon $--deformations into vacuum solutions. The ansatz for the target metrics is of the type
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime }})}(1+\varepsilon \chi (x^{k^{\prime }}))[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]  \label{vacum4del} \\
&&-\frac{e^{2\varpi }}{4\mu _{g}^{2}|\ \widetilde{\lambda }|}\overline{A}[1+\varepsilon \chi _{3^{\prime }}][dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}  \notag \\
&&+\frac{(\partial _{4}\varpi )^{2}\eta _{4^{\prime }}}{\ \mu _{g}^{2}\ \widetilde{\lambda }}(\overline{C}-\overline{B}^{2}/\overline{A})[1+\varepsilon \chi _{4^{\prime }}][d\varphi +(\partial _{i^{\prime }}\ \widetilde{A}+\varepsilon \partial _{i^{\prime }}\ ^{1}\check{A})dx^{i^{\prime }}]^{2},  \notag
\end{eqnarray}
when the prime metrics (\ref{nvlcmgs}) are obtained for $\eta _{4^{\prime }}=1.$ The condition (\ref{ca1}) for $\phi =const,$ i.e. for vacuum off--diagonal configurations, when $h_{4^{\prime }}=\ ^{0}h_{4^{\prime }}(\partial _{4}\sqrt{|h_{3^{\prime }}|})^{2}$ (\ref{h34vacuum}), is satisfied for $\eta _{4^{\prime }}=\overline{A}\sqrt{|\overline{B}^{2}-\overline{C}\overline{A}|}e^{2\varpi }$. For the terms proportional to $\varepsilon ,$ we compute $\chi _{4^{\prime }}=(\partial _{4}\varpi )^{-1}(1+e^{-\varpi }\chi _{3^{\prime }})$, where $\varpi (r,\vartheta ,\varphi )$ and $\chi _{3^{\prime }}(r,\vartheta ,\varphi )$ are generating functions.
We can consider as generating functions for the N--coefficients any $\widetilde{A}(r,\vartheta ,\varphi )$ and $\ ^{1}\check{A}(r,\vartheta ,\varphi ),$ which for $w_{i^{\prime }}=\partial _{i^{\prime }}(\ \widetilde{A}+\varepsilon \ ^{1}\check{A})$ solve the LC--conditions. The LC--conditions $\mathbf{e}_{i}\ln \sqrt{|\ h_{3}|}=0,$ $\partial _{i}w_{j}=\partial _{j}w_{i}$ for $s=0$ (see (\ref{zerot})) can be satisfied if we parameterize
\begin{equation*}
w_{i^{\prime }}=\partial _{i^{\prime }}\ ^{\varepsilon }\Phi /\partial _{\varphi }\ ^{\varepsilon }\Phi =\partial _{i^{\prime }}(\ \widetilde{A}+\varepsilon \ ^{1}\check{A}),
\end{equation*}
for $\ ^{\varepsilon }\Phi =\exp (\varpi +\varepsilon \chi _{3^{\prime }})$; see the discussions related to (\ref{zerota}) and (\ref{data4c}). Because $h_{4^{\prime }}$ for (\ref{vacum4del}) can be approximated, up to $\varepsilon ^{2},$ as a functional of $\ ^{\varepsilon }\Phi ,$ we can satisfy, for certain classes of generating functions $\ ^{\varepsilon }\Phi =\ ^{\varepsilon }\tilde{\Phi}=\ ^{\varepsilon }\check{\Phi},$ see (\ref{explcond}), the conditions $\partial _{\varphi }w_{i^{\prime }}=\mathbf{e}_{i^{\prime }}\ln \sqrt{|\ h_{4}|}.$ We can choose such a generating function $\chi _{3^{\prime }}$ for which the constraint $h_{3^{\prime }}=0$ defines a stationary rotoid configuration (different from the ergosphere of the Kerr solutions): Prescribing
\begin{equation}
\chi _{3^{\prime }}=2\underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0}),  \label{chi3prim}
\end{equation}
for constant parameters $\underline{\zeta },\omega _{0}$ and $\varphi _{0},$ and introducing the values
\begin{eqnarray*}
\overline{A}(r,\vartheta )[1+\varepsilon \chi _{3^{\prime }}(r,\vartheta ,\varphi )] &=&\widehat{A}(r,\vartheta ,\varphi )=-\Xi ^{-1}(\widehat{\Delta }-a^{2}\sin ^{2}\vartheta ), \\
\widehat{\Delta }(r,\varphi ) &=&r^{2}-2m(\varphi )r+a^{2},
\end{eqnarray*}
as $\varepsilon $--deformations of the Kerr coefficients (\ref{kerrcoef}), we get an
effective "anisotropically polarized" mass \begin{equation} m(\varphi )=m_{0}/\left( 1+\varepsilon \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})\right) . \label{polarm} \end{equation The condition $h_{3}=0,$ i.e. $\ ^{\varphi }\Delta (r,\varphi ,\varepsilon )=a^{2}\sin ^{2}\vartheta ,$ results in an ellipsoidal "deformed horizon" r(\vartheta ,\varphi )=m(\varphi )+\left( m^{2}(\varphi )-a^{2}\sin ^{2}\vartheta \right) ^{1/2}$. For $a=0$, this is just the parametric formula for an ellipse with eccentricity $\varepsilon , \begin{equation*} r_{+}=\frac{2m_{0}}{1+\varepsilon \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})}. \end{equation*} If the anholonomy coefficients (\ref{anhrel1}) computed for (\ref{vacum4del ) are not trivial for such $w_{i}$ and $\ n_{k}=\ _{1}n_{k},$ the generated solutions cannot be diagonalized via coordinate transformations. The corresponding 4--d spacetimes have one Killing symmetry with respect to $\partial /\partial y^{3^{\prime }}.$ For small $\varepsilon ,$ the singularity at \Xi =0$ is "hidden" under ellipsoidal deformed horizons if $m_{0}\geq a.$ Both the event horizon, $r_{+}=m(\varphi )+\left( m^{2}(\varphi )-a^{2}\sin ^{2}\vartheta \right) ^{1/2}$, and the Cauchy horizon, $r_{-}=m(\varphi )-\left( m^{2}(\varphi )-a^{2}\sin ^{2}\vartheta \right) ^{1/2}$, are $\varphi $--deformed and are effectively embedded into an off--diagonal background determined by the N--coefficients. In some sense, such configurations determine Kerr-like black hole solutions with additional dependencies on the variable $\varphi $ of certain diagonal and off--diagonal coefficients of the metric. For $a=0,$ but $\varepsilon \neq 0,$ we get ellipsoidal deformations of the Schwarzschild black holes (see \cite{vex1} and references therein on the stability and interpretation of such solutions with both commutative and/or noncommutative parameters). Such an interpretation is not possible for "non-small" $N$--deformations of the Kerr metric. 
In general, it is not clear what physical importance such target exact solutions may have even if they may be defined to preserve the Levi--Civita configurations. The eccentricity $\varepsilon =-\widetilde{\lambda }/\widetilde{\Lambda }\ll 1$ depends both on massive gravity and on $f$--modifications encoded into effective cosmological constants. We proved that via nonholonomic deformations it is possible to transform non--vacuum solutions with an effective locally anisotropic cosmological constant into effective off--diagonal vacuum configurations in GR. If the generating functions are prescribed to satisfy certain smoothness conditions, the solutions are similar to certain Kerr black holes with ellipsoidal $\varepsilon $--deformed horizons, embedded self--consistently into non--trivial off--diagonal vacuum configurations. Polarizations of such vacua encode massive gravity contributions and $f$--modifications.

\subsubsection{Ellipsoid Kerr -- de Sitter configurations}

We construct a subclass of solutions (\ref{nvlcmgse}) with rotoid configurations if we constrain $\chi _{3}$ appearing in the $\varepsilon $--deformations in (\ref{edefcel}) to be of the form
\begin{equation*}
\chi _{3}=2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-\widetilde{\Lambda }/\ ^{\mu }\tilde{\Lambda}=2\underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0}),
\end{equation*}
which is similar to (\ref{chi3prim}).
Expressing $\ ^{1}\tilde{\Phi}=e^{\varpi }[\widetilde{\Lambda }/2\ ^{\mu }\tilde{\Lambda}+\underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})],$ for $\tilde{\Phi}=e^{\varpi },$ we generate a class of generic off--diagonal metrics associated with the ellipsoid Kerr -- de Sitter configurations
{\small
\begin{eqnarray}
&&ds^{2}=e^{\psi (x^{k^{\prime }})}(1+\varepsilon \chi (x^{k^{\prime }}))[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-  \notag \\
&&\frac{e^{2\varpi }}{4|\ ^{\mu }\tilde{\Lambda}|}\overline{A}[1+2\varepsilon \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})][dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+  \label{elkdscon} \\
&&\frac{(\varpi ^{\ast })^{2}}{\ ^{\mu }\tilde{\Lambda}}(\overline{C}-\overline{B}^{2}/\overline{A})[1+\varepsilon (\partial _{4}\varpi \ \widetilde{\Lambda }/\widetilde{\lambda }+2\partial _{4}\varpi \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})+2\omega _{0}\underline{\zeta }\cos (\omega _{0}\varphi +\varphi _{0}))][d\varphi +(\partial _{i^{\prime }}\widetilde{A}+\varepsilon \partial _{i^{\prime }}\ ^{1}\check{A})dx^{i^{\prime }}]^{2}.  \notag
\end{eqnarray}
}
Such metrics have a Killing symmetry in $\partial /\partial y^{3}$ and are completely defined by a generating function $\varpi (x^{k^{\prime }},\varphi )$ and the sources $\ ^{\mu }\tilde{\Lambda}=\mu _{g}^{2}\lambda $ and $\widetilde{\Lambda }.$ They define $\varepsilon $--deformations of Kerr -- de Sitter black holes into ellipsoid configurations with effective (polarized) cosmological constants determined by the constants in massive gravity and $f$--modifications. If the LC--conditions are satisfied, such metrics can be modelled in GR.
\subsection{Extra dimension off--diagonal (non) massive modifications of the Kerr solutions}

Various classes of generic off--diagonal deformations of the Kerr metric into higher dimensional exact solutions can be constructed. The explicit geometric and physical properties depend on the type of additional generating and integration functions and on the (non) vacuum configurations and (non) zero sources we consider. Let us analyze a series of 6--d and 8--d solutions encoding possible higher dimensional interactions with effective cosmological constants, nontrivial massive gravity contributions, $f$--modifications and certain analogies to Finsler gravity models.

\subsubsection{6--d deformations with nontrivial cosmological constant}

Off--diagonal extra dimensional gravitational interactions modify a Kerr metric for any nontrivial cosmological constant in 6--d.\footnote{In a similar form we can generalize the constructions in 8--d gravity.} Such higher dimensional Kerr -- de Sitter configurations can be generated by nonholonomic deformations $(\mathbf{\mathring{g}},\mathbf{\mathring{N}},\ ^{v}\mathring{\Upsilon}=0,\mathring{\Upsilon}=0)\rightarrow (\widetilde{\mathbf{g}},\widetilde{\mathbf{N}},\ ^{v}\widetilde{\Upsilon }=\Lambda ,\widetilde{\Upsilon }=\Lambda ,\ ^{v_{1}}\widetilde{\Upsilon }=\Lambda )$.
The solutions are not stationary, are characterized by a Killing symmetry in $\partial /\partial y^{5}$ and can be parameterized in the form
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime }})}[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-\frac{e^{2\varpi }}{4\Lambda }\overline{A}[dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+  \label{6dks} \\
&&\frac{(\partial _{\varphi }\varpi )^{2}}{\Lambda }(\overline{C}-\overline{B}^{2}/\overline{A})[d\varphi +(\partial _{i^{\prime }}\ ^{\eta }\widetilde{A})dx^{i^{\prime }}]^{2}+\frac{\ ^{1}\tilde{\Phi}^{2}}{4\Lambda }\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{\Lambda \ ^{1}\tilde{\Phi}^{2}}\left[ dy^{6}+(\partial _{\tau }\ ^{1}\check{A})du^{\tau }\right] ^{2}.  \notag
\end{eqnarray}
The "primary" data $\overline{A},\overline{B},\overline{C}$ is described by (\ref{kerrcoef}) and the generating functions
\begin{eqnarray*}
\varpi &=&\varpi (x^{k^{\prime }},\varphi ),\ ^{1}\tilde{\Phi}(u^{\beta },y^{6})=\ ^{1}\tilde{\Phi}(x^{k^{\prime }},t,\varphi ,y^{6});\ ^{\eta }n=\ ^{\eta }n(x^{i^{\prime }}), \\
\ ^{1}n &=&\ ^{1}n(u^{\beta },y^{6});\ ^{\eta }\widetilde{A}=\ ^{\eta }\widetilde{A}(x^{k^{\prime }},\varphi ),\ ^{1}\check{A}=\ ^{1}\check{A}(u^{\beta },y^{6}),
\end{eqnarray*}
subjected to the LC--conditions and integrability conditions.
We can "extract" ellipsoid configurations for a subclass of metrics with "additional" $\varepsilon $--deformations, \begin{eqnarray*} ds^{2} &=&e^{\psi (x^{k^{\prime }})}[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]- \notag \\ && \frac{e^{2\varpi }}{4\Lambda }\overline{A}[1+2\varepsilon \underline \zeta }\sin (\omega _{0}\varphi +\varphi _{0})] [dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }} \widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+ \\ && \frac{(\partial _{\varphi }\varpi )^{2}}{\Lambda }(\overline{C}-\overline B}^{2}/\overline{A}) [1+\varepsilon (2\partial _{4}\varpi \underline{\zeta \sin (\omega _{0}\varphi +\varphi _{0})+2\omega _{0}\underline{\zeta }\cos (\omega _{0}\varphi +\varphi _{0}))][d\varphi +(\partial _{i^{\prime }}\ ^{\eta }\widetilde{A})dx^{i^{\prime }}]^{2} \\ &&+\frac{\ ^{1}\tilde{\Phi}^{2}}{4\ \Lambda }\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+\ \frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2 }{\ \Lambda \ ^{1}\tilde{\Phi}^{2}}\left[ dy^{6}+(\partial _{\tau }\ ^{1 \check{A})du^{\tau }\right] ^{2}. \end{eqnarray*} For small values of eccentricity $\varepsilon ,$ such metrics describe "slightly" deformed Kerr black holes embedded self--consistently into a generic off--diagonal 6--d spacetime. In general, extra dimensions are not compactified. Nevertheless, imposing additional constraints on the generating functions $\ ^{1}\tilde{\Phi},\ ^{1}n,^{1}\check{A},$ we can construct warped/ trapped configurations as in brane gravity models and generalizations, see similar examples in \cite{vex3,vsingl1,vgrg,vbranef}. 
\subsubsection{8--d deformations and Finsler like configurations}

Next, we generate an 8--d metric with nontrivial induced torsion describing nonholonomic deformations $(\mathbf{\mathring{g}},\mathbf{\mathring{N}},\ ^{v}\mathring{\Upsilon}=0,\mathring{\Upsilon}=0)\rightarrow (\widetilde{\mathbf{g}},\widetilde{\mathbf{N}},\ ^{v}\widetilde{\Upsilon }=\Lambda ,\widetilde{\Upsilon }=\Lambda ,\ ^{v_{1}}\widetilde{\Upsilon }=\Lambda ,\ ^{v_{2}}\widetilde{\Upsilon }=\Lambda )$. A similar 4--d example is given by (\ref{ofindtmg}) but here we use a different source (in this subsection, we take the source as a cosmological constant $\Lambda $ in 8--d). This class of solutions is parameterized in the form
{\small
\begin{eqnarray}
&&ds^{2}=e^{\psi (x^{k^{\prime }})}[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-\frac{\Phi ^{2}}{4\Lambda }\overline{A}[dy^{3^{\prime }}+\left( \ _{1}n_{k^{\prime }}(x^{i^{\prime }})+\ _{2}n_{k^{\prime }}(x^{i^{\prime }})\frac{4\mu _{g}(\Phi ^{\ast })^{2}}{\Phi ^{5}}-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}  \notag \\
&&+\frac{(\partial _{\varphi }\Phi )^{2}}{\Lambda \Phi ^{2}}(\overline{C}-\overline{B}^{2}/\overline{A})[d\varphi +\frac{\partial _{i^{\prime }}\Phi }{\partial _{\varphi }\Phi }dx^{i^{\prime }}]^{2}+\frac{\ ^{1}\tilde{\Phi}^{2}}{4\Lambda }\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{\Lambda \ ^{1}\tilde{\Phi}^{2}}\left[ dy^{6}+(\partial _{\tau }\ ^{1}\check{A})du^{\tau }\right] ^{2}  \notag \\
&&+\frac{\ ^{2}\tilde{\Phi}^{2}}{4\Lambda }\left[ dy^{7}+(\partial _{\tau _{1}}\ ^{2}n)du^{\tau _{1}}\right] ^{2}+\frac{(\partial _{8}\ ^{2}\tilde{\Phi})^{2}}{\Lambda \ ^{2}\tilde{\Phi}^{2}}\left[ dy^{8}+(\partial _{\tau _{1}}\ ^{2}\check{A})du^{\tau _{1}}\right] ^{2},  \label{8dfd}
\end{eqnarray}
}
where the generating functions are chosen
\begin{eqnarray}
\Phi &=&\Phi
(x^{k^{\prime }},\varphi ),\ ^{1}\tilde{\Phi}(u^{\beta },y^{6})=\ ^{1}\tilde{\Phi}(x^{k^{\prime }},t,\varphi ,y^{6}),\ ^{2}\tilde{\Phi}(u^{\beta _{1}},y^{8})=\ ^{2}\tilde{\Phi}(x^{k^{\prime }},t,\varphi ,y^{5},y^{6},y^{8});  \label{genf8fd} \\
\ ^{1}n &=&\ ^{1}n(u^{\beta },y^{6}),\ ^{2}n=\ ^{2}n(u^{\beta _{1}},y^{8}),  \notag \\
\ ^{\eta }\widetilde{A} &=&\ ^{\eta }\widetilde{A}(x^{k^{\prime }},\varphi ),\ ^{1}\check{A}=\ ^{1}\check{A}(x^{k^{\prime }},t,\varphi ,y^{6}),\ ^{2}\check{A}=\ ^{2}\check{A}(x^{k^{\prime }},t,\varphi ,y^{5},y^{6},y^{8}).  \notag
\end{eqnarray}
The generating functions for the class of solutions (\ref{8dfd}) are chosen in a form for which the nonholonomically induced torsion (\ref{dtors}) is effectively modeled on a 4--d pseudo--Riemannian spacetime, while on the higher shells $s=1$ and $s=2$ the torsion fields are zero. We can generate extra dimensional torsion N--adapted coefficients if nontrivial integration functions of the type $\ _{2}n_{k^{\prime }}(x^{i^{\prime }})$ are considered for the higher dimensions. Metrics of type (\ref{8dfd}) can be re--parameterized to define exact solutions in the so--called Einstein--Finsler gravity and fractional derivative modifications constructed on tangent bundles to Lorentz manifolds, see details in Refs. \cite{vacarfinslcosm,vgrg,vbranef,vfracrf} and various Finsler, or fractional, models \cite{stavr,mavr,castro,calcagni,gratia}. For Finsler like theories, we have to consider $y^{5},y^{6},y^{7},y^{8}$ as fiber coordinates for a tangent bundle with local coordinates $x^{i^{\prime }},y^{3^{\prime }},\varphi $, when the $\ ^{1}v+\ ^{2}v$ coefficients of the metric and other geometric/physical objects can be transformed into standard ones in Finsler geometry via frame and coordinate transformations. In some sense, Finsler-like theories with small corrections to GR are extra--dimensional ones with "velocity/momentum" coordinates and with low "speed/energy" nonlinear corrections.
Finally, we note that the class of metrics (\ref{8dfd}) contains a subclass of 6--d$\rightarrow $8--d generalizations of (\ref{6dks}) to configurations with zero torsion if we choose $\Phi =e^{2\varpi }$ and impose on the N--coefficients the respective constraints which are necessary for selecting LC--configurations.

\subsubsection{Kerr massive deformations and vacuum extra dimensions}

In this subsection, we momentarily return to the vacuum ellipsoid solutions (\ref{vacum4del}) and extend the metric to extra dimensions when the source is of type $\mathbf{\Upsilon }=\widetilde{\lambda }+\varepsilon (\widetilde{\Lambda }+\Lambda )=0,$ $\ ^{\mu }\tilde{\Lambda}=\mu _{g}^{2}|\lambda |,$ and results in ellipsoidal off--diagonal configurations in GR, where $\varepsilon =-\ ^{\mu }\tilde{\Lambda}/(\widetilde{\Lambda }+\Lambda )\ll 1$ can be considered as an eccentricity parameter. We can construct models of off--diagonal extra dimensional interactions when the $f$--modifications $\widetilde{\Lambda }$ compensate an extra dimensional contribution via the effective constant $\Lambda $ and which are related to the configurations of massive gravity deformations of a Kerr solution.
We select a subclass of solutions for $\varepsilon $--deformations of the vacuum solutions, described by the ansatz for the target metrics
{\small
\begin{eqnarray}
ds^{2} &=&e^{\psi (x^{k^{\prime }})}(1+\varepsilon \chi (x^{k^{\prime }}))[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]-\frac{e^{2\varpi }}{4\ ^{\mu }\tilde{\Lambda}}\overline{A}[1+\varepsilon \chi _{3^{\prime }}][dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+  \notag \\
&&\frac{(\partial _{4}\varpi )^{2}\eta _{4^{\prime }}}{\ ^{\mu }\tilde{\Lambda}}(\overline{C}-\overline{B}^{2}/\overline{A})[1+\varepsilon \chi _{4^{\prime }}][d\varphi +(\partial _{i^{\prime }}\widetilde{A}+\varepsilon \partial _{i^{\prime }}\ ^{1}\check{A})dx^{i^{\prime }}]^{2}+\frac{\ ^{1}\tilde{\Phi}^{2}}{4(\widetilde{\Lambda }+\Lambda )}\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+  \label{kmasedvac} \\
&&\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{(\widetilde{\Lambda }+\Lambda )\ ^{1}\tilde{\Phi}^{2}}\left[ dy^{6}+(\partial _{\tau }\ ^{1}\check{A})du^{\tau }\right] ^{2}+\frac{\ ^{2}\tilde{\Phi}^{2}}{4(\widetilde{\Lambda }+\Lambda )}\left[ dy^{7}+(\partial _{\tau _{1}}\ ^{2}n)du^{\tau _{1}}\right] ^{2}+\frac{(\partial _{8}\ ^{2}\tilde{\Phi})^{2}}{(\widetilde{\Lambda }+\Lambda )\ ^{2}\tilde{\Phi}^{2}}\left[ dy^{8}+(\partial _{\tau _{1}}\ ^{2}\check{A})du^{\tau _{1}}\right] ^{2}.
\notag
\end{eqnarray}
}
The extra-dimension components of this metric are generated by the functions $\ ^{1}\tilde{\Phi},$ $\ ^{2}\tilde{\Phi}$ and the N--coefficients similarly to (\ref{8dfd}) but with modified effective sources in the extra dimensions, $\Lambda \rightarrow \widetilde{\Lambda }+\Lambda .$ This result shows that extra dimensions can mimic the $\varepsilon $--deformations in order to compensate contributions from the $f$--modifications and even effective vacuum configurations of the 4--d horizontal part. In general, vacuum metrics (\ref{kmasedvac}) encode extra-dimension modifications/polarizations of the physical constants and of the coefficients of the metrics under nonlinear polarizations of an effective 8--d vacuum distinguishing the 4--d nonholonomic configurations and massive gravity contributions. Extra-dimension and $f$--modified contributions are described by terms proportional to the eccentricity $\varepsilon .$

\subsubsection{Extra dimension massive ellipsoid Kerr -- de Sitter configurations}

Combining the solutions (\ref{elkdscon}) and (\ref{8dfd}), we construct a class of non--vacuum 8--d solutions with rotoid configurations if we constrain $\chi _{3}$ in the $\varepsilon $--deformations (for 4--d, see a similar formula (\ref{edefcel})) to be of the form
\begin{equation*}
\chi _{3}=2\ ^{1}\tilde{\Phi}/\tilde{\Phi}-(\widetilde{\Lambda }+\Lambda )/\ ^{\mu }\tilde{\Lambda}=2\underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0}).
\end{equation*}
We re-express $\ ^{1}\tilde{\Phi}=e^{\varpi }[(\widetilde{\Lambda }+\Lambda )/2\ ^{\mu }\tilde{\Lambda}+\underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})],$ for $\tilde{\Phi}=e^{\varpi }$ and (\ref{genf8fd}), and generate a class of generic off--diagonal extra dimensional metrics for ellipsoid Kerr -- de Sitter configurations
{\small
\begin{eqnarray*}
&&ds^{2}=e^{\psi (x^{k^{\prime }})}(1+\varepsilon \chi (x^{k^{\prime }}))[(dx^{1^{\prime }})^{2}+(dx^{2^{\prime }})^{2}]- \\
&&\frac{e^{2\varpi }}{4|\ ^{\mu }\tilde{\Lambda}|}\overline{A}[1+2\varepsilon \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})][dy^{3^{\prime }}+\left( \partial _{k^{\prime }}\ ^{\eta }n(x^{i^{\prime }})-\partial _{k^{\prime }}(\widehat{y}^{3^{\prime }}+\varphi \overline{B}/\overline{A})\right) dx^{k^{\prime }}]^{2}+ \\
&&\frac{(\varpi ^{\ast })^{2}}{\ ^{\mu }\tilde{\Lambda}}(\overline{C}-\overline{B}^{2}/\overline{A})[1+\varepsilon (\partial _{4}\varpi \frac{\widetilde{\Lambda }+\Lambda }{\ ^{\mu }\tilde{\Lambda}}+2\partial _{4}\varpi \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})+2\omega _{0}\underline{\zeta }\cos (\omega _{0}\varphi +\varphi _{0}))][d\varphi +(\partial _{i^{\prime }}\widetilde{A}+\varepsilon \partial _{i^{\prime }}\ ^{1}\check{A})dx^{i^{\prime }}]^{2} \\
&&+\frac{\ ^{1}\tilde{\Phi}^{2}}{4(\widetilde{\Lambda }+\Lambda )}\left[ dy^{5}+(\partial _{\tau }\ ^{1}n)du^{\tau }\right] ^{2}+\frac{(\partial _{6}\ ^{1}\tilde{\Phi})^{2}}{(\widetilde{\Lambda }+\Lambda )\ ^{1}\tilde{\Phi}^{2}}\left[ dy^{6}+(\partial _{\tau }\ ^{1}\check{A})du^{\tau }\right] ^{2} \\
&&+\frac{\ ^{2}\tilde{\Phi}^{2}}{4(\widetilde{\Lambda }+\Lambda )}\left[ dy^{7}+(\partial _{\tau _{1}}\ ^{2}n)du^{\tau _{1}}\right] ^{2}+\frac{(\partial _{8}\ ^{2}\tilde{\Phi})^{2}}{(\widetilde{\Lambda }+\Lambda )\ ^{2}\tilde{\Phi}^{2}}\left[ dy^{8}+(\partial _{\tau _{1}}\ ^{2}\check{A})du^{\tau _{1}}\right] ^{2}.
\end{eqnarray*}
}
Such non--vacuum solutions can also be modelled for Einstein--Finsler spaces if the extra-dimension coordinates are treated as velocity/momentum ones. The metrics possess a respective Killing symmetry in $\partial /\partial y^{7}.$ They define $\varepsilon $--deformations of Kerr -- de Sitter black holes into ellipsoid configurations with effective cosmological constants determined by the constants in massive gravity, $f$--modifications and extra dimensional contributions.

\section{Concluding Remarks}
\label{s5}

In this work, we have elaborated on the anholonomic frame deformation method, AFDM, for constructing exact solutions in gravity theories, which we formulated and developed in \cite{vpars,vex1,vex2,vex3,veym}, see also references therein. The method is based on a general decoupling property of the gravitational field equations which is possible for certain classes of nonholonomic $2+2+...$ splittings of the spacetime dimensions. Such solutions are generic off--diagonal, with zero or non--zero torsion structure, and may depend on all (higher dimensional, or 4--d) spacetime coordinates. In the simplest form, the constructions can be performed by using an "auxiliary" metric-compatible connection which is constructed along with the "standard" Levi--Civita connection from the same metric structure. Both connections are related via a distortion tensor which is completely determined by the coefficients of the metric and the frame splitting. After a class of off--diagonal solutions is constructed in general form, we can impose certain conditions on the structure of the nonholonomic frames, under which the coefficients of both the auxiliary and the standard connections coincide, and we can extract solutions with zero torsion, for instance, in general relativity.
In general form, the off--diagonal metrics and the nonlinear and linear connections constructed following the AFDM depend on various classes of generating and integration functions, on certain symmetry parameters and on possible nonzero sources and/or (polarized) cosmological constants. This is possible because in our approach the (generalized/modified) Einstein equations are transformed (after choosing a corresponding ansatz for the metrics) into systems of nonlinear partial differential equations which can be integrated in very general form, depending on certain classes of generating/integration functions. This is different from the case of a diagonal ansatz, for instance, for the Schwarzschild metric, when the gravitational field equations transform into a system of nonlinear ordinary differential equations depending on certain integration constants. We can construct chains of nonholonomic frame deformations in order to transform a given primary metric (which may or may not be an exact solution in a gravity theory) into other classes of target metrics, which can be fixed to be exact solutions in a "metric compatible" gravity theory. From a formal point of view, the chains' metrics can correspond to spaces with nontrivial topology, have a singular/stochastic/evolution etc. behaviour and various types of horizons, symmetries and boundary conditions. In general, it is not possible to formulate a uniqueness property or limiting/asymptotic conditions. Certain geometric data and physical information of "intermediary" metrics are encoded, step by step, into the target metrics. We can impose certain nonholonomic constraints on such integral varieties in order to relate a new class of target metric solutions to some well--defined primary metrics. However, it is not clear what physical importance these "very general" classes of target exact solutions may have.
In a series of works \cite{vp,vt,vsingl1,vgrg} (see details and references in \cite{veym}) we studied various examples. Using the AFDM, we can construct locally anisotropic black ellipsoid/hole, spinning and/or solitonic spaces etc. Certain configurations seem to be stable \cite{vex1} and maintain, for instance, the main properties of the Schwarzschild metric, but for small rotoid deformations. \vskip5pt The goal of this article was fourfold:

\begin{enumerate}
\item to elaborate the AFDM in a form which allows us to construct generic off--diagonal solutions with Killing symmetries and their generalizations to non--Killing configurations using extensions to higher dimensions and so--called "vertical" conformal factors;

\item to study off--diagonal modifications of the Kerr metric under massive gravity and $f$--modified nonlinear interactions, via higher dimensions, and to state the conditions when such configurations can be modelled as effective ones in general relativity, or via nonholonomically induced torsion fields etc.;

\item to show how the well--known and physically important exact solution for the Kerr black hole can be constructed, for some special class-types of integration functions, following the AFDM; and

\item to provide certain examples when the solutions in point 2 can be generalized to various vacuum and non--vacuum configurations with ellipsoidal symmetries.
\end{enumerate}

In some cases of rotoid deformations with a small eccentricity parameter, we have been able to prove that the physical properties of the primary metrics are preserved, but with certain effective polarizations of the physical constants and a deformation to ellipsoidal configurations. It is possible to construct exact solutions for very general off--diagonal deformations (not depending on small parameters), but their physical properties are not clear if, for instance, additional smoothness, symmetry, Cauchy and/or boundary conditions are not imposed.
A very important property is that off--diagonal nonlinear gravitational interactions can mimic effective modified gravity theories, with anisotropies and re--scalings, which can find applications in modern cosmology and/or in elaborating new models of quantum gravity \cite{vgrg,odints1,vepl}. \vskip5pt \textbf{Acknowledgments:\ } The work of TG and SV is partially supported by the Program IDEI, PN-II-ID-PCE-2011-3-0256. SV wrote a part of this article during a recent visit at TH-CERN. He is grateful for important discussions, support and collaboration to S. Basilakos, S. Capozziello, E. Elizalde, N. Mavromatos, D. Singleton and P. C. Stavrinos.
\section{Introduction} \label{sec:introduction} A process consists of a series of interrelated tasks. Its graphical description is called a \textbf{process model} \cite{processMining}. Over the past decade, a specific kind of process model - the scientific workflow - has been established as a valuable means for scientists to create reproducible experiments \cite{ScientificWorkflows}. Several scientific workflow management systems (SWFM) have become freely available, easing the creation, management and execution of scientific models \cite{ScientificWorkflows}. However, creating scientific models using SWFMs is still a laborious task and complex enough to impede non-computer-savvy researchers from using these tools \cite{future}. Therefore, cloud repositories emerged to allow sharing of process models, thus facilitating their reuse and repurposing \cite{myExperiment1, myExperiment2, ScientificWorkflows}. As shown in Figure \ref{fig:senario}, model developers use modeling tools to build and manage process models which can be hosted in the cloud, then run as a service or be downloaded to users' local workspaces. Popular examples of such scientific model platforms or repositories include myExperiment, Galaxy, Kepler, CrowdLabs, Taverna, VisTrails, e-BioFlow, e-Science and SHIWA \cite{ScientificWorkflows, Taverna, Galaxy, myExperiment1, myExperiment2}. As for users, reusing shared models from public repositories is much more cost-effective than creating, testing and tuning a new one. However, those models are difficult to reuse since they lack the necessary NL guidelines or instructions to explain the steps, jump conditions and related resources \cite{requirement_text,Leo,Hen,Goun}.
For example, the repository offered with myExperiment currently contains more than 3918 process models from various disciplines including bioinformatics, astrophysics, earth sciences and particle physics \cite{ScientificWorkflows}, but only 1293 of them have corresponding NL documents\footnote{The data is collected from \url{https://www.myexperiment.org/} before Aug. 2019}, which shows the gap between the shared models and their NL descriptions. This real-world scenario illustrates that the cloud platforms do not have effective means to address this translation problem \cite{Goun,SharingModels}, i.e., automatically translating the semantics of process models into NL, thus making it challenging for users to reuse the shared models. Since means to translate a process model help users understand models and improve shared models' reusability \cite{requirement_text}, a growing interest in exploring automatic process translators - \textbf{process to text (P2T)} techniques - has emerged. \begin{figure}[t] \centering \includegraphics[width=0.95\maxWidth]{senario.pdf} \caption{The Model Sharing Scenario: The users collect the shared models developed by the third-party developers.} \label{fig:senario} \end{figure} We define our problem as follows: given a process model, our approach aims to generate textual descriptions of the semantics of the model. We choose Petri nets as our modeling language for their: 1) formal semantics; 2) many analysis tools; 3) ease of transformation from/to other modeling languages \cite{petri_three_reason}. Our approach - \emph{BePT} - first embeds the structural and linguistic information of a model into a tree representation, and then linearizes it by extracting its behavior paths. Finally, it generates sentences for each path. Our theoretical analysis and the experiments we conducted demonstrate that \emph{BePT} satisfies three desirable properties and outperforms the state-of-the-art P2T approaches.
To summarize, our contributions are listed as follows: \begin{enumerate}[1)] \item We propose a behavior-based process translator \emph{BePT} to generate textual descriptions without behavioral errors. To the best of our knowledge, this work is the first attempt that fully considers model behaviors in process translation. \item The "encoder-decoder" paradigm and unfolded behavior graphs are employed to help better analyze and extract correct behavior paths to be described in natural language. \item We formally analyze BePT's properties and conducted experiments on ten-times larger (compared with previous works) datasets collected from industrial and academic fields to better reveal the statistical characteristics. The results demonstrate BePT's strong expressiveness and reproducibility. \end{enumerate} \section{Related Work} \textbf{Path-based Process Translators}. The path-based approach \cite{Leo} was proposed to generate the text of a process model. It first extracts language information before annotating each label \cite{label1, label2, label3}. Then it generates the annotated tree structure before traversing it by \textit{depth first search}. Once sentence generation is triggered, it employs NL tools to generate corresponding NL sentences \cite{realpro}. This work solved the annotation problem, but it only works for structured models and ignores unstructured parts. \textbf{Structure-based Process Translators}. A structure-based translator \cite{Hen} was subsequently proposed to handle unstructured parts. It recursively extracts the longest path of a model on the unstructured parts to linearize each activity. However, it only works on certain patterns and is hard to extend to more complex situations. Along this line, another structure-based method was proposed \cite{Goun} which can handle more elements and complex patterns. It first preprocesses a model by trivially reversing loop edges and splitting multiple-entry-multiple-exit gateways.
Then, it employs heuristic rules to match every source gateway node with the goal nodes. Next, it unfolds the original model based on those matched goals. Finally, it generates the texts of the unfolded models. Although this structure-based method maintains good paragraph indentations, it neglects behavior correctness and completeness. \textbf{Other Translators}. Other "to-text" works that take BPMN \cite{nlg_bpmn}, EPC \cite{nlg_epc}, UML \cite{nlg_uml}, images \cite{nlg_image} or videos \cite{nlg_video} as inputs are difficult to apply to process-related scenarios or are not for translation. Hence, we aim to design a novel process translator. \section{Preliminaries} \label{sec:preliminaries} Before going further into the main idea, we introduce some background knowledge: Petri nets \cite{petri1, petri2, unstructuredModel}, the \textbf{R}efined \textbf{P}rocess \textbf{S}tructure \textbf{T}ree (RPST) \cite{rpst}, the \textbf{C}omplete \textbf{F}inite \textbf{P}refix (CFP) \cite{unfolding1, unfolding2, unfolding3} and the \textbf{D}eep \textbf{Syn}tactic \textbf{T}ree (DSynT) \cite{realpro, dsyn}. These four concepts are respectively used for process modeling, structure analysis, behavior unfolding and sentence generation.
\begin{figure*}[hbtp] \centering \subfigure[A bioinformatics Petri net model ($N_1$).]{ \label{sfig:pre_petri} \includegraphics[width=0.65\maxWidth]{pre_petri.pdf} } \hspace{0.1cm} \subfigure[The RPST of $N_1$.]{ \label{sfig:pre_rpst} \includegraphics[width=0.30\maxWidth]{pre_rpst.pdf} } \hspace{0.1cm} \subfigure[The CFP of $N_1$ ($\mathbb{N}_1$).]{ \label{sfig:pre_cfp} \includegraphics[width=0.50\maxWidth]{pre_cfp.pdf} } \hspace{0.1cm} \subfigure[The DSynT of $T_a$ of $N_1$.]{ \label{sfig:pre_dsynt} \includegraphics[width=0.30\maxWidth]{pre_dsynt.pdf} } \caption{An example of a bioinformatic process model.} \label{fig:preliminary} \end{figure*} \subsection{Petri Net} \begin{definition}[Petri Net, Net System, Boundary node] A \textbf{Petri net} $N$ is a tuple $(P,T,F)$, where $P$ is a finite set of places, $T$ is a finite set of transitions, $F\subseteq (P\times T)\cup (T\times P)$ is a set of directed arcs. A marking of $N$, denoted $M$, is a bag of tokens over $P$. A \textbf{net system} $S=(N,M)$ is a Petri net $N$ with an initial marking $M$. The input set and output set of a node $n$ are respectively denoted as $\bullet n=\{x|(x,n)\in F\}$ and $n\bullet=\{x|(n,x)\in F\}$. The source and sink sets of a net $N$ are respectively denoted as $\bullet N=\{x \in P \cup T | \bullet x=\varnothing\}$ and $N\bullet=\{x \in P \cup T | x\bullet=\varnothing\}$. These boundary elements $\bullet N\bullet=\bullet N\cup N\bullet$ are called \textbf{boundary nodes} of $N$. \end{definition} \begin{definition}[Firing Sequence, TAR, Trace] Let $S=(N,M)$ be a net system with $N=(P,T,F)$. A transition $t\in T$ can be \textbf{fired} under a marking $M$, denoted $(N,M)[t\rangle$, iff each $p\in \bullet t$ contains at least one token. After $t$ fires, the marking $M$ changes to $M \backslash \bullet t \cup t\bullet$ (Firing Rule). 
A sequence of transitions $\sigma=t_1 t_2\cdots t_{n}\in T^*$ is called a \textbf{firing sequence} iff $(N,M)[t_1\rangle(N,M_1)[t_2\rangle\cdots[t_n\rangle(N,M_n)$ holds. Any transition pair that fires contiguously ($t_i\prec t_{i+1}$) is called a \textbf{transition adjacency relation (TAR)}. A firing sequence $\sigma$ is a \textbf{trace} of $S$ iff the tokens completely flow from all source(s) to sink(s). \end{definition} \begin{example} Figure \ref{sfig:pre_petri} shows a real-life bioinformatics process model expressed by a Petri net. $P_a$ contains one token so that the current marking $M$ is $[1,0,0,0]$ (over $[P_a,P_b,P_c,P_d]$). According to the firing rule, each node in the input set of $T_a$ ($\bullet T_a=\{P_a\}$) contains at least one token so that $T_a$ can be fired. After firing $T_a$, the marking becomes $M \backslash \bullet T_a \cup T_a\bullet$, i.e., $[0,1,0,0]$. The TAR set of $N_1$ is $\{T_a\prec T_b, T_a\prec T_c, T_b\prec T_d, T_c\prec T_d\}$. The trace set of $N_1$ is $\{T_aT_bT_d, T_aT_cT_d\}$. \end{example} \subsection{Refined Process Structure Tree (RPST)} \begin{definition}[Component, RPST, Structured, Unstructured] A process \textbf{component} is a sub-graph of a process model with a single entry and a single exit (SESE), and it does not overlap with any other component. The \textbf{RPST} of a process model is the set of all the process components. Let $C=\bigcup_{i=1}^{n}\{c_i\}$ be a set of components of a process model. $C$ is a \textbf{trivial} component iff $C$ only contains a single arc; $C$ is a \textbf{polygon} component iff the exit node of $c_i$ is the entry node of $c_{i+1}$; $C$ is a \textbf{bond} component iff all sub-components share the same boundary nodes; otherwise, $C$ is a \textbf{rigid} component. A rigid component is a region of a process model that captures arbitrary structure. Hence, if a model contains no rigid components, we say it is \textbf{structured}, otherwise it is \textbf{unstructured}.
\end{definition} \begin{example} The colored backgrounds in Figure \ref{sfig:pre_petri} demonstrate the decomposed components, which naturally form a tree structure - the RPST - shown in Figure \ref{sfig:pre_rpst}. The whole net (polygon) can be decomposed into three first-layer SESE components ($P^1$, $B^1$, $P^2$), and these three components can be decomposed into second-layer components ($a$, $b$, $P^3$, $P^4$, $g$, $h$). The recursive decomposition ends at a single arc (trivial). \end{example} \subsection{Complete Finite Prefix (CFP)} \begin{definition}[Cut-off transition, mutual, CFP] \label{def:cfp} A branching process $O=(P,T,F)$ is a completely fired graph of a Petri net satisfying that 1) $|\bullet p|\le 1, \forall p\in P$; 2) no element is in conflict with itself; 3) for each $x$, the set $\{y\in P\cup T|y \prec x\}$ is finite. The \textbf{mapping function $\hbar$} maps each CFP element to the corresponding element in the original net. If two nodes $p_1, p_2$ in a CFP satisfy $\hbar(p_1)=\hbar(p_2)$, we say they are \textbf{mutual} (places) to each other. A transition $t$ is a \textbf{cut-off transition} if there exists another transition $t'$ such that $Cut([t])=Cut([t'])$, where $[t]$ denotes the set of transitions satisfying TAR closure ($\forall e \in T : e \prec t \Rightarrow e \in [t]$) and $Cut([t])=\hbar(\bullet O\cup [t]\bullet \backslash \bullet [t])$. A \textbf{CFP} is the greatest backward closed subnet of a branching process containing no transitions after any cut-off transition. \end{definition} \begin{example} Figure \ref{sfig:pre_cfp} shows the branching process of $N_1$ (including the light-gray part). Since each original node corresponds to one or more CFP nodes, we append an ``id'' to number each CFP node. Since $\hbar(P_{c1})=\hbar(P_{c2})=P_c$, $P_{c1}$ and $P_{c2}$ are mutual (places). In $\mathbb{N}_1$, $Cut([T_{b1}])=Cut([T_{c1}])=P_c$, so $T_{c1}$ is a cut-off transition (transitions after $T_{c1}$ are cut).
The cut graph is the CFP of $N_1$ (excluding the light-gray part). \end{example} \subsection{Deep Syntactic Tree (DSynT)} A DSynT is a dependency representation of a sentence. In a DSynT, each node carries a verb or noun decorated with meta information, such as the tense of the main verb or the number of a noun, and each edge denotes one of three dependencies - subject (I), object (II), modifier (ATTR) - between two adjacent nodes. \begin{example} Figure \ref{sfig:pre_dsynt} shows the DSynT of $T_a$ in $N_1$. The main verb ``\emph{extract}'' is decorated by the class ``\emph{verb}'' and the voice ``\emph{active}''. The subject and the object of ``\emph{extract}'' are ``\emph{experimenter}'' (assigned by the model developer) and ``\emph{gene}''. This DSynT represents the dependency relations of the sentence ``\emph{the experimenter extracts the genes}''. \end{example} \section{Our Method} \label{sec:method} First, we list some non-trivial challenges to be solved: \begin{enumerate}[\textbf{C}1] \item How to analyze and decompose the structure of a complex model, such as an unstructured or multi-layered one? \item For each model element, how to analyze the language pattern of a short label, extract the main linguistic information and create semantically correct descriptions? \item How to transform a non-linear process model into linear representations, especially when it contains complex patterns? \item How to extract the correct behaviors of process models and avoid behavior space explosion? \item How to design language templates and simplify textual descriptions to express them more naturally? How to make the results more intuitive to read and understand? \item How to avoid semantic errors and redundant descriptions? \end{enumerate} To solve these challenges (\textbf{C1}-\textbf{C6}), we propose \emph{BePT}, which is built on the encoder-decoder framework inspired by machine translation systems \cite{machineTranslation1, machineTranslation2, machineTranslation3}.
The encoder creates an intermediate tree representation from the original model and the decoder generates its NL descriptions. Figure \ref{fig:framework} presents a high-level framework of BePT, including four main phases: Structure Embedding, Language Embedding, Text Planning and Sentence Planning \cite{Leo, Hen, Goun}: \begin{figure}[t] \centering \includegraphics[width=0.95\maxWidth]{framework.pdf} \caption{High-level view of BePT's framework.} \label{fig:framework} \end{figure} \begin{enumerate}[1)] \item \textbf{\emph{Structure Embedding} (C1)}: Embedding the structure information of the original model into the intermediate representation. \item \textbf{\emph{Language Embedding} (C2)}: Embedding the language information of the original model into the intermediate representation. \item \textbf{\emph{Text Planning} (C3, C4)}: Linearizing the non-linear tree representation into linear representations. \item \textbf{\emph{Sentence Planning} (C5, C6)}: Generating NL text by employing pre-defined language templates and NL tools. \end{enumerate} \subsection{Structure Embedding} We take a simplified model $N_*$, shown in Figure \ref{fig:rigid}, as our running example due to its complexity and representativeness. A complex sub-component (any structure is possible) in the original model is replaced by the black single activity $T_e$. The simplified model $N_*$ is still complex since it contains a main path and two loops. Because a model containing many sub-models may complicate behavior extraction \cite{Goun}, we employ a simplification algorithm from \cite{Goun} that replaces each sub-model with a single activity, obtaining a simplified but behavior-equivalent model. Meanwhile, the simplification operation causes no information loss \cite{Goun} since each simplified part will be visited in a deeper recursion.
We emphasize that this simplification step is simple and necessary for behavior correctness (see Appendix \ref{prf:correctness} for a proof of behavior correctness). Next, we analyze the structural skeleton of $N_*$ and create its RPST. Finally, we embed its structural information - the RPST - into a tree representation (as shown in the upper part of Figure \ref{fig:rdt}). \begin{figure}[hbtp] \includegraphics[width=0.70\maxWidth]{rigid.pdf} \caption{A simplified model ($N_*$). The original complex component (any structure is possible) is simplified by the black element (a single activity).} \label{fig:rigid} \end{figure} \subsection{Language Embedding} \subsubsection{\textbf{Extract Linguistic Information}} This step sets out to recognize NL labels and extract the main linguistic information \cite{label1, label2, label3}. For each NL label, we first examine prepositions and conjunctions. If prepositions or conjunctions are found, the respective flags are set to true. Then we check whether the label starts with a gerund. If the first word of the label has an ``\emph{ing}'' suffix, it is classified as a gerund verb phrase (e.g., ``\emph{extracting gene}''). Next, \emph{WordNet} \cite{wordnet} is used to learn whether the first word is a verb. If so, the algorithm classifies the label as a verb phrase (e.g., ``\emph{extract gene}''). Otherwise, the algorithm proceeds to check the prepositions in the label. A label whose first preposition is ``\emph{of}'' is qualified as a noun phrase with an \emph{of} prepositional phrase (e.g., ``\emph{creation of database}''). If the label matches none of the enumerated styles, the algorithm classifies it as a noun phrase (e.g., ``\emph{gene extraction}''). In this way, each activity label is categorized into one of four labeling styles (gerund verb phrase, verb phrase, noun phrase, noun phrase with an \emph{of} prepositional phrase).
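The classification procedure above can be sketched as follows. This is a minimal illustration, not BePT's actual implementation: a small stub lexicon stands in for the \emph{WordNet} lookup, and the function and set names are ours.

```python
# Stub lexicon standing in for WordNet's verb lookup (illustrative only).
KNOWN_VERBS = {"extract", "remove", "sequence", "record", "create"}
PREPOSITIONS = {"of", "to", "from", "for", "in", "on", "with"}

def classify_label(label):
    """Classify a short activity label into one of the four styles."""
    words = label.lower().split()
    first = words[0]
    preps = [w for w in words if w in PREPOSITIONS]
    if first.endswith("ing"):                 # e.g. "extracting gene"
        return "gerund verb phrase"
    if first in KNOWN_VERBS:                  # e.g. "extract gene"
        return "verb phrase"
    if preps and preps[0] == "of":            # e.g. "creation of database"
        return "noun phrase with 'of' prepositional phrase"
    return "noun phrase"                      # e.g. "gene extraction"
```

The checks deliberately mirror the order in the text: gerund suffix first, then verb lookup, then the leading \emph{of} preposition, with the plain noun phrase as the fallback.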
Lastly, we extract the linguistic information - role, action and objects - depending on which pattern the label triggers. For example, in $N_*$, the label of $T_d$ triggers the verb phrase style. Accordingly, the action lemma ``\emph{remove}'' and the noun lemma ``\emph{impurity}'' are extracted. \subsubsection{\textbf{Create DSynTs}} Once the main linguistic information is extracted, we create a DSynT for each label by assigning the main verb and main nouns together with other associated meta information \cite{dsyn, realpro} (as shown in the lower part of Figure \ref{fig:rdt}). For better representation, we concatenate each DSynT root node to its corresponding RPST leaf node, and we call this concatenated tree RPST-DSynTs (\textbf{RDT}). The RDT of $N_*$ is shown in Figure \ref{fig:rdt}. \begin{figure}[hbtp] \centering \includegraphics[width=0.65\maxWidth]{rdt.pdf} \caption{The RDT of $N_*$. Some parts are replaced by ellipses due to the limited space.} \label{fig:rdt} \end{figure} Thus far, we have embedded the structural information (RPST) and the linguistic information (DSynTs) of the original process model into the intermediate representation RDT. Then, it is passed to the decoder phase. \subsection{Text Planning}\label{ssec:textPlanning} The biggest gap between a model and a text is that a model contains sequential and concurrent semantics \cite{behavior}, while a text only contains sequential sentences. Thus, this step focuses on \textbf{transforming a non-linear model into its linear representations}. In order to maintain behavior correctness, we first create the CFP of the original model, because a CFP is a \textbf{complete and minimal behavior-unfolded graph} of the original model \cite{unfolding1, unfolding2, unfolding3}. Figure \ref{fig:rcfp} shows the CFP of $N_*$. According to Definition \ref{def:cfp}, $T_{d1}$ and $T_{e1}$ are two cut-off transitions; thus, no transitions follow them. In addition, we introduce a basic concept: \textbf{\emph{shadow place}}.
Shadow places ($\mathcal{SP}$) are those CFP places that are 1) mutual with CFP boundary places or 2) mapped to the boundary places of the original model. \begin{example} In Figure \ref{fig:rcfp}, the five colored places are shadow places of $\mathbb{N}_*$ ($P_{a1},P_{b1},P_{a2},P_{d1},P_{b2}\in \mathcal{SP}(\mathbb{N}_*)$). Note that they are mutual with the CFP boundary places, and $P_{d1}$ is mapped to the boundary places of the original model $N_*$ ($\hbar(P_{d1})=P_d$). Intuitively, a shadow place represents the repetition of a boundary place in the original model or its CFP. \end{example} \begin{figure}[hbtp] \centering \includegraphics[width=0.70\maxWidth]{cfp.pdf} \caption{The CFP of $N_*$ ($\mathbb{N}_*$). The five colored places are shadow places. A shadow place is shown in the same (a different) color as its mutual (non-mutual) places.} \label{fig:rcfp} \end{figure} \subsubsection{\textbf{Behavior Segment}} Having obtained the behavior-unfolded graph, i.e., the CFP, we now define (behavior) segments, which capture the \textbf{minimal behavioral characteristics of a CFP}, to avoid the state space explosion problem. \begin{definition}[Behavior Segment] \label{def:segment} Given a net $N=(P,T,F)$ and its CFP $\mathbb{N}$, a behavior segment $\mathbb{S}=(P',T',F')$ is a connected sub-model of $\mathbb{N}$ satisfying: \begin{enumerate}[1)] \item $\bullet \mathbb{S} \bullet\subseteq\mathcal{SP}(\mathbb{N}) \wedge P' \backslash \bullet \mathbb{S} \bullet \cap \mathcal{SP}(\mathbb{N}) = \varnothing$, i.e., all boundary nodes are shadow places and all other places are not. \item If each place in $\hbar(\bullet\mathbb{S})$ contains one token, after firing all transitions in $\hbar(T')$, each place in $\hbar(\mathbb{S}\bullet)$ contains exactly one token while all other places in $\hbar(\mathbb{N})$ are empty.
\end{enumerate} \end{definition} \begin{example} According to Definition \ref{def:segment}, if we put a token in $\hbar(P_{a1})=P_a$ (in $N_*$), then $T_a$ (in $N_*$) can be fired, and after this firing, only $P_b$ (in $N_*$) contains a token. Therefore, the sub-model containing the nodes $P_{a1}, T_{a1}, P_{b1}$ (in $\mathbb{N}_*$) and their adjacent arcs is a behavior segment. All behavior segments of $\mathbb{N}_*$ are shown in Figure \ref{sfig:segments} (careful readers might have noticed that these four segments are sequential structures, i.e., all segments contain only SESE nodes; however, a behavior segment can be a non-sequential structure, i.e., one containing multiple-incoming or multiple-outgoing nodes. For example, the behavior segment of a transition-bounded model is homogeneous to the model itself, containing four multiple-incoming or multiple-outgoing nodes). \end{example} \subsubsection{\textbf{Linking Rule}} Behavior segments capture the minimal behavioral characteristics of a CFP. In order to portray the complete characteristics, we link these segments to obtain all possible behavior paths by applying the linking rule below. \begin{definition}[Linking Rule] \label{def:linkingRule} For two segments $\mathbb{S}_i=(P_i,T_i,F_i)$ and $\mathbb{S}_j=(P_j,T_j,F_j)$, if $\hbar(\mathbb{S}_i\bullet)\supseteq \hbar(\bullet \mathbb{S}_j)$, we say they are linkable. If two places $p_i\in \mathbb{S}_i\bullet, p_j\in \bullet \mathbb{S}_j$ are mutual, we say $p_i$ is the joint place of $p_j$, denoted $\mathcal{J}(p_j)=p_i$, where $\mathcal{J}$ is the joint function. If $n \notin \bullet \mathbb{S}_j$, $\mathcal{J}(n)=n$. The linked segment of two linkable segments $\wr \mathbb{S}_i, \mathbb{S}_j\wr=(P\wr,T\wr,F\wr)$ satisfies: \begin{enumerate}[1)] \item $P\wr=P_i\cup (P_j\backslash\bullet \mathbb{S}_j)$, i.e., the places of a linked segment consist of all places in $\mathbb{S}_i$ and all non-entry places in $\mathbb{S}_j$.
\item $T\wr=T_i\cup T_j$, i.e., the transitions of a linked segment consist of all transitions in $\mathbb{S}_i$ and $\mathbb{S}_j$. \item $F\wr=\{\langle \mathcal{J}(u),\mathcal{J}(v) \rangle | \langle u,v \rangle \in F_i \cup F_j \}$, i.e., the arcs of a linked segment are the $\mathcal{J}$-replaced arcs of $\mathbb{S}_i$ and $\mathbb{S}_j$. \end{enumerate} \end{definition} Similarly, $\wr \mathbb{S}_1, \mathbb{S}_2, \cdots, \mathbb{S}_n\wr$ denotes the recursive linking of two segments $\wr \mathbb{S}_1, \mathbb{S}_2, \cdots, \mathbb{S}_{n-1}\wr$ and $\mathbb{S}_n$. The graphical explanation of the linking rule is shown in Figure \ref{fig:linkingRule}. \begin{figure}[hbtp] \centering \includegraphics[width=0.90\maxWidth]{linkingRule.pdf} \caption{The graphical explanation of linking two segments $\mathbb{S}_i, \mathbb{S}_j$. The joint nodes are shown in red/blue color.} \label{fig:linkingRule} \end{figure} \begin{figure*}[hbtp] \centering \subfigure[The behavior segments of $\mathbb{N}_*$.]{ \label{sfig:segments} \includegraphics[height=0.30\maxWidth]{segments.pdf} } \hspace{0.5cm} \subfigure[The partial behavior paths of $\mathbb{N}_*$.]{ \label{sfig:paths} \includegraphics[height=0.30\maxWidth]{paths.pdf} } \caption{The behavior segments and the partial behavior paths of $\mathbb{N}_*$.} \label{fig:segmentsPaths} \end{figure*} \subsubsection{\textbf{Behavior Path}} According to the linking rule, we can obtain all linked segments. However, a linked segment might involve infinite linking due to concurrent and loop behaviors \cite{behavior}. Hence, we apply truncation conditions to avoid infinite linking, which leads to the definition of a (behavior) path. Behavior paths capture \textbf{complete behavioral characteristics of a CFP}. 
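The linking rule can be sketched operationally as follows. The segment encoding (dicts of place, transition and arc sets plus entry/exit boundary places) and the concrete $\hbar$ mapping are illustrative assumptions of ours, not BePT's actual data model:

```python
# Hedged sketch of the linking rule. A segment is a dict with
# place/transition/arc sets plus entry/exit boundary-place sets;
# hbar maps each CFP node to its original-net node.
def linkable(si, sj, hbar):
    # hbar of Si's exits must cover hbar of Sj's entries.
    return {hbar[p] for p in sj["entry"]} <= {hbar[p] for p in si["exit"]}

def link(si, sj, hbar):
    assert linkable(si, sj, hbar)
    # Joint function J: each entry place of sj is replaced by a mutual
    # exit place of si; every other node maps to itself.
    J = {pj: next(pi for pi in si["exit"] if hbar[pi] == hbar[pj])
         for pj in sj["entry"]}

    def rep(n):
        return J.get(n, n)

    return {
        "places": si["places"] | (sj["places"] - sj["entry"]),
        "transitions": si["transitions"] | sj["transitions"],
        "arcs": {(rep(u), rep(v)) for u, v in si["arcs"] | sj["arcs"]},
        "entry": si["entry"],
        "exit": {rep(p) for p in sj["exit"]},
    }
```

On toy segments mimicking $\mathbb{S}_3$ and $\mathbb{S}_1$ of the running example, linking yields a loop whose entry and exit both map to $P_b$, which is exactly the loop case of the behavior-path definition.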
\begin{definition}[Behavior Path] \label{def:path} A segment $\mathbb{P}=\wr \mathbb{S}_1, \mathbb{S}_2, \cdots, \mathbb{S}_n\wr$ of $\mathbb{N}$ is a behavior path iff one of the following conditions holds: \begin{enumerate}[1)] \item $\bullet\mathbb{P}= \bullet \mathbb{N} \wedge \mathbb{P}\bullet \subseteq \mathbb{N}\bullet$, i.e., $\mathbb{P}$ starts from the entry of $\mathbb{N}$ and ends at one of the exits of $\mathbb{N}$. \item $\hbar(\bullet\mathbb{P})=\hbar(\mathbb{P}\bullet)$, i.e., $\mathbb{P}$ starts from a shadow node (set) and ends at this node (set), i.e., a loop structure. \end{enumerate} \end{definition} \begin{example} Take Figure \ref{sfig:segments} as an example. Since $\hbar(\mathbb{S}_3\bullet)=\hbar(\{P_{a2}\})$\\=$\{P_{a}\}\supseteq \hbar(\bullet\mathbb{S}_1)=\hbar(\{P_{a1}\})=\{P_{a}\}$, it follows that $\mathbb{S}_3$ and $\mathbb{S}_1$ are linkable with $\mathcal{J}(P_{a1})=P_{a2}$, and $\hbar(\bullet \wr \mathbb{S}_3, \mathbb{S}_1\wr)=\hbar(\wr \mathbb{S}_3, \mathbb{S}_1\wr \bullet)=\{P_b\}$. Thus, the linked segment $\wr \mathbb{S}_3, \mathbb{S}_1\wr$ is a behavior path ($\mathbb{P}_4$ in Figure \ref{sfig:paths}). Partial behavior paths of $N_*$ are shown in Figure \ref{sfig:paths}. \end{example} \subsection{Sentence Planning} After all behavior paths are extracted from a process model, each path is recognized as a polygon component and put into \emph{BePT} (a recursive algorithm). The endpoint is a non-decomposable trivial component, i.e., a node. When encountering a gateway node (split or join node), the corresponding DSynT (a pre-defined language template) is retrieved from the RDT or pre-defined XML-format files. When encountering a SESE node, the corresponding DSynT is extracted from the embedded RDT. After obtaining all DSynTs, the sentence planning phase is triggered. Sentence planning sets out to generate a sentence for each node. The main idea here is to utilize a DSynT to create an NL sentence \cite{realpro, dsyn, Hen}.
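To make the DSynT-to-sentence idea concrete, here is a deliberately naive sketch. BePT delegates actual realization to RealPro; this toy linearizer is our own illustration and only covers a third-person singular active clause such as the $T_a$ example from the preliminaries.

```python
# Toy DSynT (nested dicts) for T_a's label and a naive linearizer.
# Only one clause shape is handled: third-person singular, active voice.
dsynt = {
    "lexeme": "extract", "class": "verb", "voice": "active",
    "I":  {"lexeme": "experimenter", "class": "noun"},              # subject
    "II": {"lexeme": "gene", "class": "noun", "number": "plural"},  # object
}

def realize(node):
    subject = "the " + node["I"]["lexeme"]
    verb = node["lexeme"] + "s"  # naive third-person singular inflection
    obj = node["II"]
    obj_text = "the " + obj["lexeme"] + ("s" if obj.get("number") == "plural" else "")
    return f"{subject} {verb} {obj_text}"

print(realize(dsynt))  # the experimenter extracts the genes
```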
The generation task is divided into two levels: template sentence and activity sentence generation. \begin{enumerate}[$\bullet$] \item \textbf{Template sentences} focus on describing the behavioral information related to the non-terminal RPST nodes. We provide 32 language template DSynTs (including split, join, dead transition, deadlock \cite{processMining}, etc.) to represent the corresponding semantics. The choice of a template depends on three parameters \cite{Leo, Hen, Goun}: 1) the existence of a gateway label; 2) the gateway type; 3) the number of outgoing arcs. For instance, for a place with multiple outgoing arcs, the corresponding template sentence ``\emph{One of the branches is executed}'' will be retrieved. \item \textbf{Activity sentences} focus on describing a single activity related to the terminal (leaf) RPST nodes. The RDT representation has embedded all DSynT messages; thus, for each activity, we can directly access its DSynT from the RDT. \end{enumerate} After preparing all DSynTs in the text planning phase, we employ three steps to optimize the expression before the final generation: \begin{enumerate}[1)] \item Checking each DSynT for missing grammatical meta-information, to guarantee its grammatical correctness. \item Pruning redundant TARs to ensure that the selected TARs are not repeated (Pruning Rule). For example, $T_a \prec T_b$ derived from $\mathbb{P}_2$ or $\mathbb{P}_4$ in Figure \ref{sfig:paths} is a redundant TAR because it is already covered by $\mathbb{P}_1$. \item Refining the DSynT messages that share a linguistic component between two consecutive sentences, making use of three aggregation strategies: role aggregation, action aggregation and object aggregation \cite{Hen, Goun}. \end{enumerate} After expression optimization, we employ the DSynT-based realizer RealPro \cite{realpro} for sentence generation. RealPro takes a DSynT as input and outputs a grammatically correct sentence \cite{dsyn}.
In a loop, every DSynT is passed to the realizer. The resulting NL sentence is then added to the final output text. After all sentences have been generated, the final text is presented to the end user. \begin{example} \label{exm:text} The generated text of $N_*$ in Figure \ref{fig:rigid} is as follows (other state-of-the-art methods cannot handle this model): \begin{mdframed}[style=myboxstyle,frametitle={}] \begin{enumerate}[1)] \item $\bullet$ The following main branches are executed: \item \hspace{1em} $\bullet$ The experimenter extracts the genes. Then, he sequences the DNA. Subsequently, the experimenter records the data. \item $\bullet$ Attention, there are two loops which may conditionally occur: \item \hspace{1em} $\bullet$ After sequencing DNA, the experimenter can also remove impurities if it is not clean. Then, he continues extracting genes. \item \hspace{1em} $\bullet$ After recording the data, there is a series of activities that need to be finished before DNA sequencing: \item \hspace{3em} $\bullet$ *** \end{enumerate} \end{mdframed} Template sentences (1, 3, 7) describe where the process starts, splits, joins and ends. Activity sentences (2, 4, 5) describe each sorted behavior path. The paragraph placeholder (6) can be flexibly replaced according to the sub-text of the simplified component $T_e$. We can see that \emph{BePT} first describes the main path (``\emph{\underline{$T_a$}$\rightarrow$\underline{$T_b$}$\rightarrow$\underline{$T_c$}}'') before the two possible loops (``\emph{\underline{$T_a$}$\rightarrow$\underline{$T_b$}$\rightarrow$\underline{$T_d$}}'', ``\emph{\underline{$T_c$}$\rightarrow$\underline{$T_e$}$\rightarrow$\underline{$T_b$}}''). These three paragraphs of the generated text correspond to three correct firing sequences of the original model; the text contains just enough description to reproduce the original model without redundancy.
\end{example} \begin{table*}[hbtp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \small \centering \caption{Statistics of the evaluation datasets. \emph{N}=Total number of models per source; \emph{SMR}=The ratio of structured models to all models; \emph{min}=The minimum value per source; \emph{ave}=The average value per source; \emph{max}=The maximum value per source.} \label{tab:datasets} \rowcolors{3}{grayColor}{whiteColor} \begin{tabular}{c|c|r|r|rrr|rrr|rrr|rrr} \hline \multirow{2}*{\textbf{Source}} & \multirow{2}*{\textbf{Type}} & \multirow{2}*{\textbf{N}} & \multirow{2}*{\textbf{\emph{SMR} $\downarrow$}} & \multicolumn{3}{c|}{\textbf{Place}} & \multicolumn{3}{c|}{\textbf{Transition}} & \multicolumn{3}{c|}{\textbf{Arc}} & \multicolumn{3}{c}{\textbf{RPST depth}}\\ & & & & \emph{min} & \emph{ave} & \emph{max} & \emph{min} & \emph{ave} & \emph{max} & \emph{min} & \emph{ave} & \emph{max} & \emph{min} & \emph{ave} & \emph{max} \\ \hline SAP & Industry & 72 & \cellcolor{redColor!100}100.00\% & 2 & 3.95 & 13 & 1 & 3.12 & 12 & 2 & 6.75 & 24 & 1 & 1.85 & 5 \\ DG & Industry & 38 & \cellcolor{redColor!85}94.74\% & 3 & 7.65 & 22 & 2 & 7.85 & 17 & 4 & 16.02 & 44 & 1 & 2.55 & 7 \\ TC & Industry & 49 & \cellcolor{redColor!70}81.63\% & 6 & 10.10 & 17 & 6 & 10.62 & 19 & 14 & 21.87 & 38 & 1 & 3.92 & 7 \\ SPM & Academic & 14 & \cellcolor{redColor!55}57.00\% & 2 & 7.28 & 12 & 1 & 7.40 & 15 & 2 & 15.49 & 30 & 1 & 2.93 & 5 \\ IBM & Industry & 142 & \cellcolor{redColor!40}53.00\% & 4 & 39.00 & 217 & 3 & 26.46 & 145 & 6 & 79.84 & 456 & 1 & 5.21 & 12 \\ GPM & Academic & 36 & \cellcolor{redColor!25}42.00\% & 4 & 11.15 & 19 & 3 & 11.55 & 24 & 6 & 24.92 & 48 & 1 & 3.22 & 5 \\ BAI & Academic & 38 & \cellcolor{redColor!10}28.95\% & 4 & 10.54 & 21 & 2 & 9.93 & 24 & 6 & 22.92 & 49 & 1 & 3.24 & 5 \\ \hline \end{tabular} \end{table*} \subsection{Property Analysis} \label{ssec:property} We emphasize BePT's three strong properties - correctness, completeness and 
minimality. Specifically, given a net system $S=(N,M)$ and its TAR set $\mathcal{T}(S)$, the behavior path set $\mathcal{P}$ of $\mathbb{N}$ obtained by the linking rule (Definition \ref{def:linkingRule}) satisfies: 1) \textbf{behavior correctness}, $\forall \mathbb{P} \in \mathcal{P} \Rightarrow \mathcal{T}(\mathbb{P}) \subseteq \mathcal{T}(S)$; 2) \textbf{behavior completeness}, $\forall \tau \in \mathcal{T}(S) \Rightarrow \exists \mathbb{P}\in\mathcal{P}, \tau \in \mathcal{T}(\mathbb{P})$; 3) \textbf{description minimality}, each TAR in $\mathcal{T}(S)$ is, after applying the pruning rule, described only once in the final text. Please see Appendices A, B and C for detailed proofs. \section{Evaluation} \label{sec:evaluation} We have conducted extensive qualitative and quantitative experiments. In this section, we report the experimental results to answer the following research questions: \begin{enumerate}[\textbf{RQ}1] \item \textbf{Capability}: Can \emph{BePT} handle more complex model patterns than existing techniques? \item \textbf{Detailedness}: How much information does \emph{BePT} express? \item \textbf{Consistency}: Is the \emph{BePT} text consistent with the original model? \item \textbf{Understandability}: Is the \emph{BePT} text easy to understand? \item \textbf{Reproducibility}: Can the original model be reproduced from its generated text alone? \end{enumerate} \subsection{Experimental Setup} \label{ssec:setup} In this part, we describe our experimental datasets, the baselines and the experiment settings. \subsubsection{\textbf{Datasets}} \label{sssec:datasets} We collected and tested on seven publicly accessible datasets: SAP, DG, TC, SPM, IBM, GPM, BAI \cite{Leo,Hen,Goun,cims}. Among them, SAP, DG, TC and IBM are from industry (enterprises, etc.) and SPM, GPM and BAI are from academia (literature, online tutorials, books, etc.). The characteristics of the seven datasets are summarized in Table \ref{tab:datasets} (sorted by the decreasing ratio of structured models $\bm{SMR}$).
There are a total of 389 process models consisting of real-life enterprise models (87.15\%) and synthetic models (12.85\%). The number of transitions varies from 1 to 145 and the depth of the RPSTs varies from 1 to 12. The statistics are highly skewed due to the datasets' different domains, sizes and model structures. \subsubsection{\textbf{Baseline Methods}} \label{sssec:baselines} We compared our proposed process translator \emph{BePT} with the following three state-of-the-art methods: \begin{enumerate}[$\bullet$] \item \emph{\textbf{Leo}} \cite{Leo}. The first path-based method, focusing mainly on structured components: trivial, bond and polygon. \item \emph{\textbf{Hen}} \cite{Hen}. An extended version of \emph{Leo}, focusing mainly on rigid components with a longest-first strategy. \item \emph{\textbf{Goun}} \cite{Goun}. A state-of-the-art structure-based method, focusing mainly on unfolding the model structure without considering its behaviors. \end{enumerate} \subsubsection{\textbf{Parameter Settings}} \label{sssec:settings} We implemented \emph{BePT} based on jBPT\footnote{\url{https://code.google.com/archive/p/jbpt/}}. An easy-to-use version of \emph{BePT} is also publicly available\footnote{\url{https://github.com/qianc62/BePT}}. We include an editable parameter defining the maximum size of a paragraph, preset to 75 words. Once this threshold is reached, we use a change of the performing role or an intermediate activity as an indicator and introduce a new paragraph accordingly. Besides, we use the default grammatical styles of subject-predicate-object and object-be-predicated-by-subject to express sentences \cite{Leo, Hen, Goun}. Finally, we enable all parameters for all methods, i.e., every method generates intact textual descriptions without any reduction.
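The paragraph-sizing heuristic just described can be sketched as follows. This is a simplified illustration under our own assumptions: sentences are encoded as (text, role) pairs, and only the role-change trigger is modeled (BePT can also break on an intermediate activity).

```python
# Hedged sketch of the paragraph-sizing heuristic: once the running
# paragraph reaches the word threshold, the next change of performing
# role starts a new paragraph. The input encoding is illustrative.
def split_paragraphs(sentences, threshold=75):
    """sentences: list of (text, role) pairs, in narrative order."""
    paragraphs, current, words, last_role = [], [], 0, None
    for text, role in sentences:
        if current and words >= threshold and role != last_role:
            paragraphs.append(" ".join(current))
            current, words = [], 0
        current.append(text)
        words += len(text.split())
        last_role = role
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs
```

Breaking only at a role change keeps each paragraph about one performer, so paragraphs can overshoot the threshold slightly rather than splitting mid-role.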
\subsection{Results} \label{ssec:results} \subsubsection{\textbf{Capability (RQ1)}} \ \\ As discussed earlier, a rigid is a region that captures an arbitrary model structure. Thus, these seven datasets are representative enough as the $SMR$ varies from 100\% (structured models) to 28.95\% (unstructured complex models). We analyzed and compared all process models. Table \ref{tab:capability} reports their handling capabilities w.r.t some representative complex patterns \cite{Goun, unstructuredModel}. \begin{table}[htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \small \centering \caption{The handling capabilities of four P2T methods w.r.t. some representative patterns.} \label{tab:capability} \begin{tabular}{c|c|c|c|c|c} \hline \textbf{Type} & \textbf{Pattern} & \textbf{\emph{Leo}} & \textbf{\emph{Hen}} & \textbf{\emph{Goun}} & \textbf{\emph{BePT}} \\ \hline \multirow{5}*{T, B, P} & \cellcolor{grayColor}Trivial & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Polygon & \color{redColor}\checkmark & \color{redColor}\checkmark & \color{redColor}\checkmark & \color{redColor}\checkmark \\ & \cellcolor{grayColor}Easy Bond & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Easy Loop & \color{redColor}\checkmark & \color{redColor}\checkmark & \color{redColor}\checkmark & \color{redColor}\checkmark \\ & \cellcolor{grayColor}Unsymmetrical Bond & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor}\color{redColor}\checkmark \\ \hline \multirow{7}*{R} & Place Rigid & & \color{redColor}\checkmark & \color{redColor}\checkmark & \color{redColor}\checkmark \\ & \cellcolor{grayColor}Transition Rigid & 
\cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Mix Rigid & & & & \color{redColor}\checkmark \\ & \cellcolor{grayColor}Intersectant Loop & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Non-free-choice Construct & & & \color{redColor}\checkmark & \color{redColor}\checkmark \\ & \cellcolor{grayColor}Invisible or Duplicated Task & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor}\color{redColor}\checkmark & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Multi-layered Embedded & & & \color{redColor}\checkmark & \color{redColor}\checkmark \\ \hline \multirow{2}*{Extra} & \cellcolor{grayColor}Modeling Information & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor} & \cellcolor{grayColor}\color{redColor}\checkmark \\ & Multi-layered Paragraph & & & \color{redColor}\checkmark & \color{redColor}\checkmark \\ \hline \multicolumn{2}{c|}{Total} & \cellcolor{redColor!25}4 & \cellcolor{redColor!40}5 & \cellcolor{redColor!70}9 & \cellcolor{redColor!100}14 \\ \hline \end{tabular} \end{table} First, we can see that \emph{BePT} shows the best handling capability: it handles all 14 patterns, whereas Goun, the next best method, handles 9. Second, all four methods handle structured models well, while \emph{Goun} and \emph{BePT} can also handle unstructured models, and \emph{BePT} further provides extra helpful messages. Third, the R and Extra parts show that \emph{BePT} can handle rigids of arbitrary complexity, even if the model is unsymmetrical, non-free-choice or multi-layered. From these results, we can conclude that the behavior-based method \emph{BePT} is sufficiently powerful to address complex structures.
\subsubsection{\textbf{Detailedness (RQ2)}} \ \\ In the sentence planning phase, \emph{BePT} checks the grammatical correctness of each DSynT so that the generated text conforms to correct English grammar. Here, instead of comparing grammatical correctness, we summarize the structural characteristics of all generated texts in Table \ref{tab:detailedness}. \begin{table}[t] \small \centering \caption{Average number of words and sentences per text. Red numbers denote the maximum and green numbers denote the minimum per dataset.} \label{tab:detailedness} \rowcolors{3}{grayColor}{whiteColor} \begin{tabular}{c|rrrr|rrrr} \hline \multirow{2}*{} & \multicolumn{4}{c|}{\textbf{Words/Text}} & \multicolumn{4}{c}{\textbf{Sentences/Text}} \\ & Leo & Hen & Goun & \emph{BePT} & Leo & Hen & Goun & \emph{BePT} \\ \hline SAP & \color{greenColor} 38.0 & \color{greenColor} 38.0 & \color{redColor} 38.1 & \color{redColor} 38.1 & \color{greenColor} 6.0 & \color{greenColor} 6.0 & \color{redColor} 6.2 & \color{redColor} 6.2 \\ DG & \color{greenColor} 74.0 & 79.7 & 79.6 & \color{redColor} 85.3 & \color{greenColor} 13.0 & 15.0 & 15.0 & \color{redColor} 15.7 \\ TC & \color{greenColor} 99.2 & 110.8 & 112.4 & \color{redColor} 135.0 & \color{greenColor} 12.2 & 15.5 & 15.7 & \color{redColor} 18.7 \\ SPM & \color{greenColor} 41.5 & 54.1 & 55.6 & \color{redColor} 100.9 & \color{greenColor} 5.8 & 7.9 & 8.1 & \color{redColor} 14.1 \\ IBM & \color{greenColor} 140.2 & 180.7 & 182.9 & \color{redColor} 191.9 & \color{greenColor} 74.2 & 80.7 & 81.7 & \color{redColor} 86.2 \\ GPM & \color{greenColor} 38.3 & 50.8 & 53.8 & \color{redColor} 147.0 & \color{greenColor} 6.2 & 7.5 & 7.9 & \color{redColor} 16.2 \\ BAI & \color{greenColor} 25.7 & 31.7 & 32.6 & \color{redColor} 111.3 & \color{greenColor} 2.7 & 4.4 & 4.6 & \color{redColor} 15.7 \\ \hline Total & \color{greenColor} 66.7 & 78.0 & 79.3 & \color{redColor} 115.3 & \color{greenColor} 17.2 & 19.6 & 19.9 & \color{redColor} 22.3 \\ \hline \end{tabular}
\end{table} A general observation is that \emph{BePT} texts are longer than the other texts. Leo, Hen, and Goun texts contain on average 66.7, 78.0, and 79.3 words and 17.2, 19.6, and 19.9 sentences, respectively, while \emph{BePT} texts contain on average 115.3 words and 22.3 sentences. However, this does not imply that \emph{BePT} texts are verbose, using longer sentences to describe the same content. Rather, Leo, Hen, and Goun ignore some modeling-level messages related to soundness and safety \cite{verification, processMining}, which \emph{BePT} supplements. Therefore, we conclude that \emph{BePT} generates more detailed messages that provide additional useful information. Of course, while all parameters are enabled in this experiment, \emph{BePT} is configurable, i.e., users can set parameters to determine whether or not to generate these complementary details. \subsubsection{\textbf{Consistency (RQ3)}} \ \\ A generally held belief is that the hierarchical organization of a text greatly influences its readability, since paragraph indentation can reflect the number of components, the modeling depth of each activity, etc. Considering the generated text of the running example (Example \ref{exm:text}), if the text contained no paragraph indentation, i.e., if each paragraph started from the bullet point "$\bullet$", it would be much harder to fully reproduce the model semantics \cite{Hen, Goun}. In this part, we consider the detection of structural consistency between a process model and its corresponding textual descriptions. This task requires an alignment of a model and a text, i.e., activities in the texts need to be related to model elements and vice versa \cite{alignment, consistency}. For an activity $T$, its modeling depth $md(T)$ is the RPST depth of $T$, and its description depth $dd(T)$ is how deep it is indented in the text.
For the activity set of a model, the modeling depth distribution is denoted as $\mathcal{X}=[md(T_1),md(T_2),...,md(T_n)]$ and the description depth distribution is denoted as $\mathcal{Y}=[dd(T_1),dd(T_2),...,dd(T_n)]$. We employ a correlation coefficient to evaluate the consistency between the two distributions $\mathcal{X}$ and $\mathcal{Y}$ as follows: \begin{equation} \rho(\mathcal{X},\mathcal{Y})=\frac{E[(\mathcal{X}-E\mathcal{X})(\mathcal{Y}-E\mathcal{Y})]}{\sqrt{D\mathcal{X} \cdot D\mathcal{Y}}}\in [-1.0,1.0] \end{equation} where $E$ is the expectation function and $D$ is the variance function. The value of $\rho(\mathcal{X},\mathcal{Y})$ ranges from -1.0 (negatively related) to 1.0 (positively related). \begin{figure}[htbp] \centering \includegraphics[width=0.70\maxWidth]{consistency.pdf} \caption{The consistency distribution. The red color denotes the positive coefficient while the blue color denotes the negative coefficient.} \label{fig:consistency} \end{figure} \begin{figure*}[hbtp] \centering \subfigure[Information gain line.]{ \label{sfig:line} \includegraphics[height=0.30\maxWidth]{line.pdf} } \hspace{0.4cm} \subfigure[Perplexity distributions on all datasets.]{ \label{sfig:perplextity} \includegraphics[height=0.35\maxWidth]{perplexity.pdf} } \caption{The graphical representation of information gain line and the perplexity distributions.} \end{figure*} Figure \ref{fig:consistency} shows the consistency results of the four P2T methods. First, \emph{BePT} obtains the highest consistency value on every dataset, meaning that \emph{BePT} follows the depth distribution of the original models most closely. Notice that all methods obtain 1.00 consistency on the SAP dataset, since all SAP models are structured. However, on the SPM dataset, \emph{BePT} achieves 0.86 consistency, while the other methods reach only around 0.25.
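The consistency coefficient $\rho(\mathcal{X},\mathcal{Y})$ defined above is the Pearson correlation between the two depth vectors, so its limiting values can be checked directly. A minimal sketch; the depth vectors below are hypothetical, not taken from the datasets:

```python
import numpy as np

def consistency(md, dd):
    """Pearson correlation between modeling depths X and description depths Y,
    matching rho(X, Y) = E[(X - EX)(Y - EY)] / sqrt(DX * DY)."""
    x, y = np.asarray(md, float), np.asarray(dd, float)
    return float(np.mean((x - x.mean()) * (y - y.mean()))
                 / np.sqrt(x.var() * y.var()))

md = [1, 2, 2, 3, 3, 4]   # hypothetical RPST depths md(T_i)
print(consistency(md, md))               # identical indentation -> 1.0
print(consistency(md, [-d for d in md])) # inverted indentation  -> -1.0
```

A text whose indentation exactly mirrors the RPST depths scores 1.0, and a fully inverted indentation scores -1.0, matching the stated range of $\rho$.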
The main reason is that SPM contains plenty of close-to-structured rigids, which directly exposes the other methods' drawbacks. Second, the consistency performance decreases rapidly with lower $SMR$. The most striking cases occur in GPM and BAI, where Leo, Hen and Goun even produce negative coefficients, meaning that their description depths are negatively correlated with the modeling depths of the original models, i.e., the depth distribution is essentially inverted; \emph{BePT} obtains 0.42 and 0.25, showing that it remains positively correlated even in unstructured situations. Hence, we conclude that \emph{BePT} texts conform better to the original models. \subsubsection{\textbf{Understandability (RQ4)}} \ \\ In this section, we discuss the perplexity metric, which reflects textual understandability: it quantifies ``how hard to understand'' a model-text pair is. This information entropy-based metric \cite{entropy1} is inspired by natural language processing techniques \cite{perplexity}. Consider a model-text pair $\langle \mathcal{M}, \mathcal{T} \rangle$ in which the text $\mathcal{T}$ consists of a sequence of paragraphs $\langle S_1,S_2,\cdots,S_n\rangle$. $\psi(S_i)$ denotes the information gain of paragraph $S_i$: \begin{equation} \psi(S_i)=e^{|T'|log_2|T'|} \cdot |T'| \cdot |U'| \end{equation} where $T'$ is the described activity set and $U'$ is the neglected activity set. This formula employs the information entropy $|T'|log_2|T'|$ to describe the confusion of all activities in a paragraph; the exponent has the same order of magnitude as $|T'|$. Note that if any activity is not described in the text, the understandability relative to the original model should decrease, i.e., the perplexity of the text system should increase; hence the factor $|U'|$. When describing a single paragraph $S_1$, the information gain \cite{dataMining} of the text system is $\psi(S_1)$. After describing paragraph $S_2$, the information gain changes to $\psi(S_1)+\psi(S_2)$.
Similarly, after describing all paragraphs, the information gain is $\Sigma_{i=1}^n\psi(S_i)$. These values are mapped to $n$ points $(i,\Sigma_{k=1}^{i}\psi(S_k))_{i=1}^n$ shown in Figure \ref{sfig:line}. We call the broken line linking all these points the \emph{information gain line} $IGL(\langle \mathcal{M}, \mathcal{T} \rangle,S)$. Then, we can define the perplexity of the text system $\mathcal{T}$ (the integral over all sentence perplexities): \begin{equation} perplexity(\langle \mathcal{M}, \mathcal{T} \rangle) = \int_{0}^{n}IGL(\langle \mathcal{M}, \mathcal{T} \rangle, s)ds, s\in \mathbb{R} \end{equation} The information gain line intuitively measures how understandable the model-text pair is: a lower perplexity implies a higher understandability. We calculated this metric for each dataset; Figure \ref{sfig:perplextity} shows the perplexity results. We can see that \emph{BePT} achieves the lowest perplexity on all datasets, i.e., the best understandability. On average, the perplexity is reduced from $10^{2.74}$ to $10^{0.98}$. These results also show that the perplexity trend is Leo $\ge$ Hen $\ge$ Goun $\ge$ BePT, i.e., the understandability trend is Leo $\le$ Hen $\le$ Goun $\le$ BePT. \subsubsection{\textbf{Reproducibility (RQ5)}} \ \\ This part evaluates the reproducibility of the generated text, i.e., can the original model be reproduced from the generated text? For each model-text pair ($\mathcal{M}, \mathcal{T}$), we manually back-translate (extract) the process model from the generated text and compare the elements of the original and the extracted models. The back-translators are given only the generated texts, without any information about the original models, and reproduce the models from the texts according to their own understanding. After translation, we evaluate the structural and behavioral reproducibility between the original model and the extracted one.
If an isomorphic model $\mathcal{M}$ can be reproduced, we can believe that the text $\mathcal{T}$ contains enough information to reproduce the original model, i.e., it has excellent reproducibility. We evaluate the P2T performance using the $F_1$ measure (the harmonic average of recall and precision), as in the data mining field \cite{dataMining}: \begin{equation} F_1 = \frac{(1+\beta^2)\cdot precision \cdot recall}{\beta^2 \cdot precision + recall} \in [0.0,1.0] \end{equation} where $\beta$ is the balance weight. In our experiments, equal weights ($\beta=1.0$) are assigned to balance recall and precision. The higher the $F_1$, the better the reproducibility. \textbf{Structural Reproducibility.} Figure \ref{fig:structureMeasures} shows the results along four dimensions (\emph{place, transition, gateway, element}). First, we can see that the $F_1$ values of the four methods fall from 100\% to lower values as $SMR$ decreases. On the GPM and BAI datasets, Leo achieves only around 40\%. Such low values significantly affect the ability to understand or reproduce the original model, and they reflect the general risk that humans may miss elements when describing a model; here, around 60\% of the information is lost. Still, Hen achieves around 90\% while \emph{BePT} hits 100\%, i.e., Goun and \emph{BePT} lose the least information. We can conclude that, among the four P2T methods, \emph{BePT} achieves the highest reproducibility, followed by Goun and then Hen. The structural reproducibility performance thus also shows the trend Leo $\le$ Hen $\le$ Goun $\le$ BePT.
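The $F$-measure above, with $\beta=1.0$, reduces to the ordinary harmonic mean of precision and recall. A quick numerical sketch; the precision/recall values are illustrative, not taken from the experiments:

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted F-measure as defined above; beta=1 gives the harmonic mean."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative values: an extraction recovering 9 of 10 original elements
# (recall 0.9), with 9 of 12 extracted elements present in the original
# model (precision 0.75).
p, r = 9 / 12, 9 / 10
print(round(f_measure(p, r), 3))  # 0.818
```

With $\beta=1.0$ the score sits between precision and recall, closer to the smaller of the two, which is why a method that misses many elements cannot compensate with high precision alone.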
\begin{figure}[t] \centering \subfigure[$F_1$ score on places]{ \label{sfig:placeF} \includegraphics[width=0.40\maxWidth]{f1place.pdf} } \subfigure[$F_1$ score on transitions]{ \label{sfig:transitionF} \includegraphics[width=0.40\maxWidth]{f1transition.pdf} } \subfigure[$F_1$ score on gateways]{ \label{sfig:gatewayF} \includegraphics[width=0.40\maxWidth]{f1gateway.pdf} } \subfigure[$F_1$ score on all elements]{ \label{sfig:elementF} \includegraphics[width=0.40\maxWidth]{f1element.pdf} } \caption{The $F_1$ measures on structural dimensions.} \label{fig:structureMeasures} \end{figure} \textbf{Behavioral Reproducibility.} Behavioral reproducibility evaluates the extent of correctly expressed behavior, i.e., how many correct behaviors are expressed in the generated texts. We again use $F_1$ to evaluate behavioral performance, using TARs (local) and traces (global) to reflect the model behaviors. Since trace behaviors suffer from the state-space explosion problem, for the trace F-measure we only evaluate models without loop behavior. Figure \ref{fig:behaviorMeasures} shows the results for the behavior dimensions (\emph{TAR, trace}). The results show that \emph{BePT} significantly outperforms Leo, Hen and Goun in terms of both TAR and trace performance. Leo's performance falls sharply with decreasing $SMR$, while Hen and Goun drop more gently and achieve around 70\% trace $F_1$ on BAI. \emph{BePT} attains the highest $F_1$ of around 100\% on both TAR and trace measures, a distinct improvement over the other methods. From these two sets of results, we can conclude that \emph{BePT} shows the best reproducibility among the state-of-the-art P2T methods.
\begin{figure}[t] \centering \subfigure[$F_1$ score on TARs]{ \label{sfig:tarF} \includegraphics[width=0.40\maxWidth]{f1tar.pdf} } \subfigure[$F_1$ score on traces]{ \label{sfig:behaviorF} \includegraphics[width=0.40\maxWidth]{f1trace.pdf} } \caption{The $F_1$ measures on behavioral dimensions.} \label{fig:behaviorMeasures} \end{figure} \section{Conclusion and Future Work} \label{sec:conclusionAndDiscussion} We presented a behavior-based process translator. It first combines the structural and linguistic information into an RDT tree, and then decodes the tree by extracting behavior paths, from which NL tools generate the textual descriptions. Our experiments show significant improvements in capability, detailedness, consistency, understandability and reproducibility. This approach can unlock the hidden value that lies in large process repositories in the cloud and make them more reusable. We also note some limitations of this study. Above all, when the model is unsound, \emph{BePT} informs the user that the model contains unsound or wrong parts, but without giving any correction advice. Another drawback concerns the manual extraction of models from the NL texts: because of the limited number of participants, we cannot guarantee that each extraction rule applied to a generated text is identical. Thus, generating correction advice and automating the reverse translation would be of interest in future studies. \section*{Appendices} \begin{appendices} \section{The Proof of Behavior Correctness}\label{prf:correctness} \begin{property} Given a net system $S=(N,M)$ and its TAR set $\mathcal{T}(S)$, the behavior path set $\mathcal{P}$ of its CFP $\mathbb{N}$ obtained by the linking rule (Definition \ref{def:linkingRule}) satisfies behavior correctness: $\forall \mathbb{P} \in \mathcal{P} \Rightarrow \mathcal{T}(\mathbb{P}) \subseteq \mathcal{T}(S)$.
\end{property} \begin{qcProof} Given two Petri nets $N_i=(P_i, T_i, F_i), N_j=(P_j, T_j, F_j)$, we assume $\mathbb{P}=\wr \mathbb{S}_1, \mathbb{S}_2, \cdots, \mathbb{S}_n \wr$. Then, we consider two situations: a) inside a single segment; b) at the link between two segments: \begin{enumerate}[a)] \item The initial (default) marking $\bullet S$ is also the initial marking of $\hbar(\mathbb{S}_1)$, i.e., the marking $\bullet \mathbb{S}_1$ is reachable. According to the definition of a behavior segment, $\mathbb{S}_1\bullet$ is reachable from $\bullet\mathbb{S}_1$, and the firing rule guarantees $\mathcal{T}(\mathbb{S}_1) \subseteq \mathcal{T}(S)$. After executing $\mathbb{S}_1$, $\bullet \mathbb{S}_2$ is reachable as $\mathbb{S}_1 \bullet \supseteq \bullet \mathbb{S}_2$, so that $\mathcal{T}(\mathbb{S}_2) \subseteq \mathcal{T}(S)$ holds. Similarly, $\wr \mathbb{S}_1, \mathbb{S}_2 , \cdots, \mathbb{S}_{i-1} \wr \bullet \supseteq \bullet \mathbb{S}_i \Rightarrow \mathcal{T}(\mathbb{S}_{i}) \subseteq \mathcal{T}(S), i\in 1,2\cdots n$ holds. \item For two segments $\mathbb{S}_i, \mathbb{S}_{i+1}, i \in 1,2\cdots n-1$, we use the notation $\mathcal{T}(\mathbb{S}_i \wr \mathbb{S}_{i+1})$ to denote the TAR set at the joint points, i.e., $\mathcal{T}(\mathbb{S}_i \wr \mathbb{S}_{i+1}) = \{ a \prec b | a \in \bullet(\mathbb{S}_i\bullet) \wedge b \in (\bullet\mathbb{S}_{i+1})\bullet \}$. $\mathbb{S}_i \bullet \supseteq \bullet \mathbb{S}_{i+1}$ guarantees that $(\bullet\mathbb{S}_{i+1})\bullet$ can be fired after firing $\bullet(\mathbb{S}_{i}\bullet)$, i.e., $\mathcal{T}(\mathbb{S}_i \wr \mathbb{S}_{i+1}) \subseteq \mathcal{T}(S)$. Therefore, $\mathcal{T}(\wr \mathbb{S}_i, \mathbb{S}_{i+1} \wr) =\mathcal{T}( \mathbb{S}_i) \cup \mathcal{T}(\mathbb{S}_{i+1}) \cup \mathcal{T}(\mathbb{S}_i \wr \mathbb{S}_{i+1}) \subseteq \mathcal{T}(S), i\in 1,2\cdots n-1$.
\end{enumerate} According to the above two points, we can conclude that $\forall \mathbb{P} \in \mathcal{P} \Rightarrow \mathcal{T}(\mathbb{P}) = \mathcal{T}(\mathbb{S}_1,\mathbb{S}_2,\cdots,\mathbb{S}_n) = \mathcal{T}(\mathbb{S}_1) \cup \mathcal{T}(\mathbb{S}_2) \cup \cdots \cup \mathcal{T}(\mathbb{S}_n) \cup \mathcal{T}(\mathbb{S}_1 \wr \mathbb{S}_2) \cup \mathcal{T}(\mathbb{S}_2 \wr \mathbb{S}_3) \cup \cdots \cup \mathcal{T}(\mathbb{S}_{n-1} \wr \mathbb{S}_n) \subseteq \mathcal{T}(S)$. \end{qcProof} \section{The Proof of Behavior Completeness}\label{prf:completeness} \begin{property} Given a net system $S=(N,M)$ and its TAR set $\mathcal{T}(S)$, the behavior path set $\mathcal{P}$ of its CFP $\mathbb{N}$ obtained by the linking rule (Definition \ref{def:linkingRule}) satisfies behavior completeness: $\forall \tau \in \mathcal{T}(S) \Rightarrow \exists \mathbb{P}\in\mathcal{P}, \tau \in \mathcal{T}(\mathbb{P})$. \end{property} \begin{qcProof} For any TAR $\tau= a \prec b \in \mathcal{T}(S)$, the place set $a \bullet \cap \bullet b$ is denoted as $\mathscr{P}$, and the sub-model $( \mathscr{P}, \{a, b\}, \{a\} \times \mathscr{P} \cup \mathscr{P} \times \{b\} )$ is denoted as $\mathscr{N}$. We use $N_i \propto N_j$ to denote $P_i \subseteq P_j \wedge T_i \subseteq T_j \wedge F_i \subseteq F_j$, i.e., $N_i$ is a sub-model of $N_j$. Then, we consider the following situations: \begin{enumerate}[a)] \item When $\forall p \in \mathscr{P}, p \notin \mathcal{SP}(\mathbb{N})$, no $p \in \mathscr{P}$ can be the boundary node of a segment according to Definition \ref{def:segment} ($\mathcal{SP}$-bounded). Hence, $\mathscr{N}$ can only exist in the middle of a segment, i.e., $\exists \mathbb{S}_i, \mathbb{P}_j \Rightarrow \mathscr{N} \propto \mathbb{S}_i \propto \mathbb{P}_j \in \mathcal{P} \Rightarrow \tau \in \mathcal{T}(\mathscr{N}) \subseteq \mathcal{T}(\mathbb{P}_j)$.
\item When $\forall p \in \mathscr{P}, p \in \mathcal{SP}(\mathbb{N})$, $\mathscr{P}$ is split, being the sink set of a certain segment $\mathbb{S}_i$ and the source set of a certain segment $\mathbb{S}_j$ ($\mathcal{SP}$-bounded), i.e., $\mathscr{P} = \mathbb{S}_i \bullet = \bullet \mathbb{S}_j$ always holds. Hence, $\exists \mathbb{S}_i, \mathbb{S}_j, \mathbb{P}_k \Rightarrow \mathscr{N} \propto \wr \mathbb{S}_i, \mathbb{S}_j \wr \propto \mathbb{P}_k \in \mathcal{P} \Rightarrow \tau \in \mathcal{T}(\mathscr{N}) \subseteq \mathcal{T}(\mathbb{P}_k)$. \item When $\exists p_1,p_2 \in \mathscr{P}, p_1 \notin \mathcal{SP}(\mathbb{N}), p_2 \in \mathcal{SP}(\mathbb{N})$, no $p \in \mathscr{P}$ can be the boundary node of a segment; otherwise it would contradict Definition \ref{def:segment} (reply-hold). Hence, $\mathscr{N}$ can only exist in the middle of a segment, i.e., $\exists \mathbb{S}_i, \mathbb{P}_j \Rightarrow \mathscr{N} \propto \mathbb{S}_i \propto \mathbb{P}_j \in \mathcal{P} \Rightarrow \tau \in \mathcal{T}(\mathscr{N}) \subseteq \mathcal{T}(\mathbb{P}_j)$. \item When $\mathscr{P} = \varnothing$, i.e., $a$ and $b$ are in a concurrent relation, there always exists a concurrent split transition $t$. According to Definition \ref{def:segment}, $\exists \mathbb{S}_i \Rightarrow t \in \mathbb{S}_i \wedge a,b \in \mathbb{S}_i$ (reply-hold). Thus, $\exists \mathbb{S}_i, \mathbb{P}_j \Rightarrow \mathscr{N} \propto \mathbb{S}_i \propto \mathbb{P}_j \in \mathcal{P} \Rightarrow \tau \in \mathcal{T}(\mathscr{N}) \subseteq \mathcal{T}(\mathbb{P}_j)$. \end{enumerate} \end{qcProof} \section{The Proof of Description Minimality}\label{prf:ninimality} \begin{property} For a net system $S=(N,M)$, the TAR set $\mathcal{T}(S)$ pruned by the Pruning Rule satisfies description minimality.
\end{property} \begin{qcProof} According to Appendix \ref{prf:completeness}, any TAR $\tau$ can always be derived from a certain behavior path, i.e., $\forall \tau \in \mathcal{T}(S) \Rightarrow \exists \mathbb{P}\in\mathcal{P}, \tau \in \mathcal{T}(\mathbb{P})$. Hence, consider two TARs $\tau_1, \tau_2$ of the original model with $\tau_1 \in \mathcal{T}(\mathbb{P}_i) \wedge \tau_2 \in \mathcal{T}(\mathbb{P}_j), i<j$. If $\tau_1\ne \tau_2$, then $\{ \tau_1,\tau_2 \} \subseteq \mathcal{T}(S)$ always holds, while $\{ \tau_1 \} = \{ \tau_2 \} \subseteq \mathcal{T}(S)$ always holds if $\tau_1=\tau_2$. Therefore, the pruning rule keeps each TAR only at its first appearance, i.e., $\mathcal{T}(S)$ satisfies description minimality. \end{qcProof} \end{appendices} \section*{Acknowledgements} The work was supported by the National Key Research and Development Program of China (No. 2016YFB1001101), the National Nature Science Foundation of China (No.71690231, No.61472207), and Tsinghua BNRist. We would also like to thank the anonymous reviewers for their helpful comments. \bibliographystyle{ACM-Reference-Format}
\section{\label{sec:intro}Introduction} After the firm discovery of the accelerating expansion of the universe from observations of distant type-Ia supernovae \cite{1998AJ....116.1009R,1999ApJ...517..565P}, dark energy, which is responsible for the cosmic acceleration, has been one of the biggest mysteries in cosmology. In the last 20 years, detailed observations of cosmic microwave background (CMB) anisotropies have provided us with much information about the universe, and the cosmological parameters of the standard cold dark matter model with a cosmological constant $\Lambda$ ($\Lambda$CDM) have been precisely determined \cite{2013ApJS..208...19H,2020A&A...641A...6P}. The CMB is sensitive to dark energy through the integrated Sachs--Wolfe (ISW) effect, in which the decay of the gravitational potentials due to the accelerating expansion of the universe generates energy fluctuations in the CMB photons passing through these potentials. However, the CMB constraint on dark energy-related parameters is weak because the ISW effect appears only on large scales in the CMB temperature anisotropy spectrum and suffers from sizable cosmic variance errors. An interesting idea for reducing the cosmic variance errors associated with large-scale fluctuations was proposed by Kamionkowski and Loeb \cite{1997PhRvD..56.4511K}. They argued that the polarization of the CMB photons scattered off the free electrons in a cluster of galaxies could be used to reduce the cosmic variance, because the polarization is sensitive to the quadrupole anisotropy of the CMB last scattering surface viewed by the cluster \cite{2000PhRvD..62l3004S}. However, as Portsmouth pointed out \cite{2004PhRvD..70f3504P}, the quadrupoles viewed by distant clusters are largely correlated with the local quadrupole viewed by us. Therefore, the cosmic variance associated with the local quadrupole is not reducible by the Kamionkowski and Loeb method.
However, cluster polarization measurements can still provide information on large-scale fluctuations \cite{2006PhRvD..73l3517B,2007PhRvD..75j1302A}, which can be useful for studying the ISW effect \cite{2003PhRvD..67f3505C,2004PhRvD..69b7301C,2005PhRvL..95j1302S}, the power asymmetry of the CMB polarization and density field \cite{2018JCAP...04..034D}, and the reionization optical depth \cite{2018PhRvD..97j3505M}. In our previous paper \cite{2016MNRAS.460L.104L}, we showed that cluster CMB polarization measurements can be used to estimate the initial density fluctuations on large scales and to reconstruct the local quadrupole of the CMB from our viewpoint. We can reconstruct our quadrupole with a few hundred clusters at $0<z<1$ if we know the quadrupole transfer function, that is, if we know the correct cosmological model. In other words, if we assume a wrong cosmological model when reconstructing the CMB quadrupole from distant clusters, the reconstructed CMB quadrupole will differ from that observed by the CMB satellites (e.g., WMAP and Planck). In this way, we can test cosmological models using the CMB quadrupole. Note that the proposed test directly compares the CMB quadrupole transfer functions, which depend on the universe's expansion history, after the initial density field, which is the origin of the cosmic variance, has been estimated and fixed. Therefore, we do not suffer from the cosmic variance uncertainty, which is large for the CMB quadrupole measurement compared with the instrumental noise. Using simple simulations, this study aims to show that one can test dark energy models in this way beyond the cosmic variance limit. \section{Methodology} We start with the Stokes parameters $Q(\vec{x})$ and $U(\vec{x})$ of the CMB photons induced by the primordial CMB quadrupole at a cluster position $\vec{x}$ \cite{2018JCAP...04..034D}: \begin{equation} Q(\vec{x})\pm iU(\vec{x})=-\frac{\sqrt{6}}{10}\tau\sum_m {_{\pm 2}}Y_{2 m}(\hat{x})a_{2m}(\vec{x})~.
\label{eq:QiU} \end{equation} Here, $\tau$ is the optical depth of the cluster, and $a_{\ell m}(\vec{x})$ are the coefficients of the spherical harmonic expansion of the CMB temperature field at position $\vec{x}$, \begin{equation} \frac{\delta T}{T}(\vec{x},\hat{n},\eta)=\sum_{\ell m} a_{\ell m}(\vec{x})Y_{\ell m}(\hat{n})~, \end{equation} with $\eta=\eta_0-|\vec{x}|$ being the conformal time of the scattering events. The coefficients of the spherical harmonic expansion are expressed in terms of the Fourier modes as \begin{equation} a_{\ell m}(\vec{x}) = (-i)^\ell (4\pi) \int d^3 k e^{i\vec{k}\cdot\vec{x}} \Delta_\ell(\vec{k},\eta) Y^\ast_{\ell m}(\hat{k})~, \end{equation} where $\Delta_\ell(\vec{k},\eta)$ are the Legendre expansion coefficients of the CMB photon distribution. We may further factorize $\Delta_\ell(\vec{k},\eta)$ as \begin{equation} \Delta_\ell(\vec{k},\eta) = \Delta_\ell(k,\eta) {R}_{\rm ini}(\vec{k})~, \label{eq:transfer} \end{equation} where $\Delta_\ell(k,\eta)$ are the linear transfer functions that depend on the cosmological model, and ${R}_{\rm ini}(\vec{k})$ is the initial curvature perturbation with variance given by the power spectrum ${P}(k)$. In our simulation, we generated the transfer functions in Eq.~(\ref{eq:transfer}) using the publicly available code CAMB \cite{Lewis:1999bs}. Figure (\ref{fig:l2transfer}) shows the transfer functions at $z=0$ and $z=4$ for different equation-of-state parameters of dark energy, $w\equiv P_{\rm DE}/\rho_{\rm DE}$. Comparing the transfer functions at $z=0$ and $z=4$, the ISW contribution is apparent at wavenumbers $k\approx 10^{-3}$ Mpc$^{-1}$. As also shown in the figure, a larger $w$ parameter ($w=-0.7$) induces a larger ISW effect at $z=0$.
We generated the initial curvature perturbation ${R}_{\rm ini}(\vec{k})$ as a random Gaussian variable with the dimensionless power spectrum \begin{equation} {\cal P}(k)=\frac{k^3}{2\pi^2} P(k) = A_s \left(\frac{k}{k_\ast}\right)^{n_s-1}~, \label{eq:equation_Pk} \end{equation} where $k_\ast=0.05$ Mpc$^{-1}$, $A_s=2.1\times 10^{-9}$ and $n_s = 0.96$. The other cosmological parameters were fixed to the standard $\Lambda$CDM values, i.e., $\Omega_bh^2 = 0.0226$, $\Omega_ch^2=0.112$, $\Omega_\nu h^2 = 0.00064$, and $h=0.7$, where $\Omega_bh^2$, $\Omega_ch^2$ and $\Omega_\nu h^2$ are the baryon density, cold dark matter density, and neutrino density, respectively, and $h$ is the normalized Hubble parameter. The procedure of our methodology is as follows: \begin{itemize} \item[1.] We generated ${R_{\rm ini}}(k_i)$ according to the power spectrum given by Eq.~(\ref{eq:equation_Pk}). Following our previous paper \cite{2016MNRAS.460L.104L}, we sampled the Fourier modes in the angular directions on the Healpix grid with $N_{\rm side}=8$, and with 60 modes spaced uniformly in logarithmic space in the radial direction from $k=10^{-6}$ to $10^{-1}$ Mpc$^{-1}$. The total number of Fourier modes is thus $n_k = 46,080$. \item[2.] Using the generated $R_{\rm ini}(k_i)$, we simulated $Q_{\rm fid}(x_i)$ and $U_{\rm fid}(x_i)$ at the cluster positions on our past lightcone, assuming a true cosmological model (in our case, $w=-1$, the $\Lambda$CDM model). We then added Gaussian noise with variance $\sigma_{\rm pol}^2$ to $Q_{\rm fid}(x_i)$ and $U_{\rm fid}(x_i)$. Figure~(\ref{fig:slice_QU}) shows a realization of the $Q$ and $U$ maps. We considered $N_{\rm cluster} = 6000$ randomly distributed clusters from $z=0$ to $z=2$. We also calculated the CMB quadrupole at the origin, $a_{2m}^{\rm true}(0)$. \item[3.]
As an inverse problem, we estimated the initial curvature perturbation ${R}_{\rm ini}(k_i)$ by fitting to the $Q$ and $U$ maps, minimizing \cite{2016MNRAS.460L.104L} \begin{eqnarray} f &=& \sum_i^{N_{\rm cluster}} \left[\frac{(Q(x_i)-Q_{\rm fid}(x_i))^2}{2\sigma_{\rm pol}^2} + \frac{(U(x_i)-U_{\rm fid}(x_i))^2}{2\sigma_{\rm pol}^2} \right] \nonumber \\ \ &+&\sum_{j}^{n_k} \frac{{R}^2_{\rm ini}(k_j)}{2P(k_j)}~, \nonumber \end{eqnarray} where $\sigma_{\rm pol}$ describes the uncertainty in observing $Q$ and $U$ at the cluster positions. Here, the first two terms are the chi-square for $Q(x_i)$ and $U(x_i)$, which depend on the initial curvature perturbation $R_{\rm ini}(k_i)$, given the fiducial mock data $Q_{\rm fid}(x_i)$ and $U_{\rm fid}(x_i)$. In a real application, $Q_{\rm fid}(x_i)$ and $U_{\rm fid}(x_i)$ will be the observables, and we tune $R_{\rm ini}(k_i)$ to find the $Q(x_i)$ and $U(x_i)$ that minimize the function $f$. The third term represents the Gaussian prior on the initial curvature perturbation $R_{\rm ini}(k_i)$ with variance $P(k_i)$. We repeated this process for various equation-of-state parameters $w$. \item[4.] After obtaining the estimated initial curvature perturbation $R^{\rm est}_{\rm ini}(k_i)$, we calculated the CMB quadrupole at the origin, $a^{\rm est}_{2m}(0)$, using $R^{\rm est}_{\rm ini}(k_i)$, and compared it to $a^{\rm true}_{2m}(0)$ calculated in the second step. \item[5.] We go back to Step 1 with different initial perturbations and cluster positions and repeat the procedure one hundred times. \end{itemize} The true and estimated CMB quadrupoles, $a^{\rm true}_{2m}(0)$ and $a^{\rm est}_{2m}(0)$, should coincide within the methodological statistical uncertainty if we use the correct transfer function in Step 3 above. In other words, we can constrain cosmological models if $a^{\rm true}_{2m}(0)$ and $a^{\rm est}_{2m}(0)$ differ beyond the methodological statistical uncertainty.
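Because $Q$ and $U$ depend linearly on the modes $R_{\rm ini}(k_i)$, minimizing $f$ in Step 3 is a Gaussian-prior (ridge-regularized) least-squares problem with a closed-form solution. The sketch below illustrates only this structure: the random response matrix, the toy dimensions, and the flat prior variances are stand-ins for the actual transfer-function geometry, not the paper's setup.

```python
import numpy as np

# Toy stand-in for Step 3: the mock (Q, U) data are linear in R_ini(k_i),
# so minimizing f is ridge-regularized least squares with a Gaussian prior.
rng = np.random.default_rng(0)
n_obs, n_k = 200, 50                  # stand-ins for 2*N_cluster and n_k
sigma_pol = 1e-2                      # polarization noise level
A = rng.normal(size=(n_obs, n_k))     # hypothetical linear response matrix
Pk = np.full(n_k, 4.0)                # prior variances P(k_j)

R_true = rng.normal(scale=np.sqrt(Pk))                    # "initial" modes
d = A @ R_true + rng.normal(scale=sigma_pol, size=n_obs)  # mock Q/U data

# Setting df/dR = 0 gives the normal equations:
#   (A^T A / sigma^2 + diag(1/P)) R = A^T d / sigma^2
lhs = A.T @ A / sigma_pol**2 + np.diag(1.0 / Pk)
R_est = np.linalg.solve(lhs, A.T @ d / sigma_pol**2)

print("max |R_est - R_true| =", np.max(np.abs(R_est - R_true)))
```

With many more observations than modes and low noise, the prior term is subdominant and the modes are recovered accurately; as the noise grows, the prior pulls the estimate toward zero, which is the expected behavior of the third term in $f$.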
\begin{figure} \centering \includegraphics[width=1.\hsize]{fig1.pdf} \caption{Transfer function of the CMB quadrupole, $\Delta_{\ell=2}(z,k)$ for $(z,w)=(0,-1)$, $(0,-0.7)$ and $(4,-1)$ as indicated in the figure.} \label{fig:l2transfer} \end{figure} \begin{figure} \centering\includegraphics[width=1.\hsize]{fig2.pdf} \caption{Example realization of the Stokes Q (left) and U (right) maps from the CMB quadrupole defined in Eq.~(1) at $z=0$. The signals are correlated on very large scales \cite{2004PhRvD..70f3504P}. We assume $\tau=1$ in Eq.~(\ref{eq:QiU}) for simplicity.} \label{fig:slice_QU} \end{figure} \section{Result} \subsection{Instrumental error in the CMB quadrupole measurement and methodological statistical uncertainty} The reconstruction of the local CMB quadrupole using the remote quadrupole information is not perfect due to the limited number of galaxy clusters available and the polarization measurement errors. We only show herein the results for the case where $6000$ galaxy clusters were used with $\sigma_{\rm pol}/\tau=10^{-2}$, $3\times10^{-2}$ and $10^{-1}$ $\mu$K. Although we argued in our previous paper that $300$ clusters would be sufficient to estimate the local quadrupole, we found that this number is not enough for the investigation of dark energy. A detailed study will be presented in our future work \cite{Sumiya.et.al.}. We first defined the statistical uncertainty of our methodology, $\sigma_{\rm method}$, by an evaluation similar to that of $C_\ell$: \begin{equation} \sigma^2_{\rm method} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{5} \left[ |\Delta a_{20}|^2+2|\Delta a_{21}|^2+2|\Delta a_{22}|^2 \right]~, \end{equation} where $\Delta a_{2m} = a_{2m}^{\rm est}(w=-1) - a_{2m}^{\rm true}(w=-1)$, and $N$ is the number of simulations. The uncertainty $\sigma_{\rm method}$ may depend on the simulation setup.
It directly depends on $\sigma_{\rm pol}$ and, at the same time, on the detailed distribution of the clusters, such as their positions, number density, redshift range, and so on. It is thus difficult to find the dependence analytically, so we evaluated $\sigma_{\rm method}$ numerically for several $\sigma_{\rm pol}$ values. In our fiducial setup with $N_{\rm cluster}=6000$, $\sigma_{\rm pol}/\tau=10^{-2}$ $\mu$K, $N_{\rm side}=8$, and $n_{\rm kmode}=60$, we found \begin{equation} \sigma_{\rm method} \simeq 2.4 \times 10^{-8}~, \end{equation} and we had $\sigma_{\rm method}\simeq 5.9 \times 10^{-8}$ for $\sigma_{\rm pol}/\tau=3\times 10^{-2}$ $\mu$K. The next-generation CMB satellite, LiteBIRD, will reach $\sim 2.0 ~\mu$K arcmin sensitivity \cite{2020SPIE11443E..2FH}. The angular scale of the CMB quadrupole is $\sim 90^\circ = 5400$ arcmin. We may expect that, in an ideal case, the CMB quadrupole viewed from us can be measured at the following level: \begin{equation} \sigma^{\rm LiteBIRD}_{\ell=2} \simeq \frac{2.0}{5400} \simeq 3.7\times 10^{-4} \mu{\rm K}~. \end{equation} In dimensionless units normalized by the mean CMB temperature, we had $\sigma_{\ell=2} \simeq \sigma^{\rm LiteBIRD}_{\ell=2}/(2.725 \times 10^6 \mu{\rm K})=1.4\times 10^{-10}$. The uncertainty $\sigma_{\ell=2}$ is two orders of magnitude smaller than the error associated with our methodology; thus, we may neglect it. The conclusion here is that the precision of this method will not be limited by the uncertainty in measurements of the local quadrupole, but by the methodological statistical uncertainty associated with measurements of the remote quadrupoles.
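The order-of-magnitude comparison can be reproduced directly from the numbers quoted above (a back-of-the-envelope sketch; all values as in the text):

```python
# LiteBIRD-like sensitivity of 2.0 muK arcmin spread over the ~90 deg
# (5400 arcmin) quadrupole scale, then normalized by T_CMB = 2.725 K
sigma_l2 = 2.0 / 5400                      # ~3.7e-4 muK
sigma_l2_dimless = sigma_l2 / 2.725e6      # ~1.4e-10

sigma_method = 2.4e-8                      # fiducial methodological error

assert abs(sigma_l2 - 3.7e-4) < 1e-5
assert abs(sigma_l2_dimless - 1.4e-10) < 1e-11
# the instrumental term sits ~two orders of magnitude below sigma_method
assert sigma_method / sigma_l2_dimless > 100
```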
\subsection{Constraints on the dark energy parameter $w$} To make a simple statistical inference, we define the chi-squared statistic as follows: \begin{eqnarray} \chi^2(w) &=&\frac{1}{\sigma^2_{\rm method}}\left( |\Delta a_{20}|^2+2|\Delta a_{21}|^2+2|\Delta a_{22}|^2 \right)~, \label{eq:chisq} \end{eqnarray} where $\Delta a_{2m}=a^{\rm est}_{2m}(w)-a^{\rm true}_{2m}(w=-1)$. Each $\Delta a_{2m}$ with $m\neq 0$ has real and imaginary parts. Accordingly, we assigned the variance $\sigma^2/2$ to the $m=1$ and $m=2$ components and $\sigma^2$ to the $m=0$ component, which has only a real part. The chi-square defined in Eq.~(\ref{eq:chisq}) is a measure of the difference in the goodness-of-fit between different models. For example, if we use the correct dark energy parameter $w$ that coincides with the input value, i.e., $\Delta a_{2m}=a^{\rm est}_{2m}(w=-1)-a^{\rm true}_{2m}(w=-1)$, $\chi^2(w=-1)$ should follow the chi-square distribution with five degrees of freedom. If we use incorrect dark energy parameters ($w \neq -1$), we obtain larger $\chi^2$ values, and thereby we may constrain the dark energy parameter. Following the standard $\chi^2$ method, we expect $1\sigma$ and $2\sigma$ constraints for $\Delta \chi^2=1$ and $4$, respectively. Compared with the $w=-1$ model in the hundred simulations, we found $\langle\Delta \chi^2\rangle\approx 1.56$, $9.86$, and $47.0$ for the $w=-0.99$, $-0.95$ and $w=-0.90$ models, respectively. Figure~(\ref{fig:chisq_dist_sig1e-2}) illustrates the $\Delta\chi^2$ distribution in our simulation. \begin{figure} \centering \includegraphics[width=\linewidth]{fig3_v2.pdf} \caption{Distribution of $\Delta \chi^2=\chi^2(w)-\chi^2(w=-1)$ where $\chi^2$ is defined in Eq.~(\ref{eq:chisq}). We assumed that $\sigma_{\rm pol}/\tau=10^{-2}$ $\mu$K.
The simulation setup was $N_{\rm cluster} = 6000$, $N_{\rm side}=8$, and $n^{\rm radial}_{\rm kmode}=60$.} \label{fig:chisq_dist_sig1e-2} \end{figure} The significance of the statistical test strongly depends on $\sigma_{\rm pol}$. If we assume a lower sensitivity in the polarization measurement, $\sigma_{\rm pol}/\tau=3\times 10^{-2}$ $\mu$K, the delta chi-squared reduces to $\langle\Delta \chi^2\rangle\approx 0.74$, $4.0$ and $13.1$ for $w=-0.99$, $-0.95$ and $-0.90$, respectively. We will present a more detailed analysis in our future work~\cite{Sumiya.et.al.}. \section{Summary and discussion} In this study, we proposed a new method for observational cosmology using CMB observations. The conventional approach in observational cosmology has been to obtain summary statistics, such as the variance of cosmological density fluctuations, from observations at various times in the history of the universe and compare them with theory. However, this approach cannot escape the cosmic variance arising from the fact that there is only one observable universe. Instead of using summary statistics, we estimated and fixed the initial density fluctuations realized in our universe using the polarization of distant galaxy clusters in the past, and determined the time evolution of the density fluctuations by considering, with the local quadrupole of the CMB, how the density fluctuations look today. This process corresponds, in effect, to multiple observations of the density fluctuations in the universe, hence the title of the paper. As a working example, we considered $w$CDM cosmology and investigated how sensitive this method is to the equation-of-state parameter $w$, assuming that the $\Lambda$CDM cosmology is the correct cosmological model. In Sec. III, we found that the expected $\Delta \chi^2$ value compared to the $w=-1$ model is as large as $9.86$ for $w=-0.95$.
Therefore, we concluded that one may be able to achieve a $5$\% constraint on $w$ if the CMB polarization caused by the remote quadrupole at $6000$ galaxy cluster positions up to redshift $z=2$ is accurately and precisely measured below the noise level of $\sigma_{\rm pol}\lesssim 10^{-2}\tau$ $\mu$K, where $\tau$ is the optical depth of the cluster. The expected signal of the typical cluster polarization contributions caused by the quadrupole ranges from $0.02$ to $0.1$ $\mu$K \cite{2003PhRvD..67f3505C}. Therefore, detecting signals from individual galaxy clusters would require a detector with this level of sensitivity at arc-minute angular scales. In reality, however, this is not necessary \cite{2014PhRvD..90f3518H}. As we have already discussed, the polarization signals from galaxy clusters are correlated on large scales; hence, rather than detecting the signals of individual galaxy clusters, we only need to catch the signals aligned on large scales \cite{2017PhRvD..96l3509L} (Fig.~(\ref{fig:slice_QU})). As shown herein, $6000$ locations between redshifts $z=0$ and $z=2$ are sufficient. As a rough estimate, the comoving volume of the universe up to redshift $z=2$ is approximately $600$ Gpc$^3$, which means that we can average the signals from galaxy clusters over a volume of $0.1$ Gpc$^3$. At $z\lesssim 1$ we may expect approximately $10^3$ clusters with $\sim 10^{14}M_\odot$ halo mass in such a volume. The simulations we performed are idealized in many aspects. In our simulation, the magnitude of the galaxy cluster polarization was assumed to be known, and the noise in the polarization observations was assumed to be very small.
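The rough volume estimate above can be reproduced from the cosmological parameters of Sec. II; a numerical sketch assuming flat $\Lambda$CDM and a comoving volume (the function name and the trapezoidal integration are ours):

```python
import math

def comoving_volume_Gpc3(z_max, h=0.7, om=(0.0226 + 0.112 + 0.00064) / 0.7**2):
    """Comoving volume of a sphere out to z_max in flat LambdaCDM (Gpc^3).
    Default om combines the baryon, CDM and neutrino densities of the text."""
    c = 299792.458                       # km/s
    H0 = 100.0 * h                       # km/s/Mpc
    ol = 1.0 - om
    # trapezoidal integration of dz / E(z)
    n = 10000
    integral = 0.0
    for i in range(n):
        z0, z1 = z_max * i / n, z_max * (i + 1) / n
        E0 = math.sqrt(om * (1 + z0) ** 3 + ol)
        E1 = math.sqrt(om * (1 + z1) ** 3 + ol)
        integral += 0.5 * (1.0 / E0 + 1.0 / E1) * (z1 - z0)
    D = (c / H0) * integral / 1000.0     # comoving distance in Gpc
    return 4.0 / 3.0 * math.pi * D**3

V = comoving_volume_Gpc3(2.0)            # roughly 600 Gpc^3, as quoted
```

Dividing this volume by the $6000$ cluster locations gives the quoted averaging volume of about $0.1$ Gpc$^3$ per location.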
We also ignored polarization sources other than the CMB quadrupole, such as those caused by the kSZ effect and the primordial polarization of the background CMB \cite{1999MNRAS.310..765S,2012ApJ...757...44R}, although separation would be possible based on their different frequency spectra \cite{2003PhRvD..67f3505C,1997ApJ...482..577D}. In addition, the instrumental-noise-limited quadrupole measurement of the CMB and the signal separation from other contaminants would be an experimental challenge due to the correlated noise in the time-ordered CMB data and the galactic foregrounds. Although our work is preliminary, we have shown that future cluster polarization measurements combined with the local CMB quadrupole measurement would offer a powerful probe of the nature of dark energy. More importantly, this method is qualitatively distinct from conventional methods based on summary statistics, which always suffer from cosmic variance errors. A more detailed study in a more realistic setting is necessary. \begin{acknowledgments} One of the authors (KI) would like to thank D. Huterer and A. Cooray for helpful communication. This work is supported in part by the JSPS grant numbers 18K03616, 17H01110 and JST AIP Acceleration Research Grant JP20317829 and JST FOREST Program JPMJFR20352935 (K.I.), and the Ministry of Science and Technology (MOST) of Taiwan, Republic of China, under Grants No. MOST 109-2112-M-032-006 (G.C.L.) \end{acknowledgments} \bibliographystyle{apsrev4-2}
\section{Definition and construction of the Nicolai map} \noindent The key idea is best illustrated by an example. Let us look at the Wess--Zumino model in $3{+}1$ dimensional Minkowski space, consisting of a complex scalar~$\phi$, a Weyl fermion~$\psi$ and a complex auxiliary~$F$, characterized by a superpotential~$W(\phi)$ and featured in the off-shell lagrangian~\footnote{ A multi-field generalization is straightforward.} \begin{equation} {\cal L} \= \partial_\mu\phi^*\partial^\mu\phi + F^*F + \tfrac{\mathrm{i}}{2} \bar\psi\bar\sigma{\cdot}\partial\psi - \tfrac{\mathrm{i}}{2} \psi\sigma{\cdot}\partial\bar\psi + W'(\phi)\,F + W'(\phi)^*F^* - \tfrac12\psi\,W''(\phi)\,\psi - \tfrac12 \bar\psi\,W''(\phi)^*\bar\psi\ , \end{equation} where $\sigma=(\mathbbm{1},\vec{\sigma})$ and $\bar\sigma=(\mathbbm{1},-\vec{\sigma})$ with Pauli matrices~$\vec\sigma$. Integrating out the auxiliary fields yields $F^*=-W'(\phi)$ and \begin{equation} {\cal L}_{\textrm{SUSY}} \= \bigl|\partial\phi\bigr|^2 -\bigl|W'(\phi)\bigr|^2 + \bigl( \tfrac{\mathrm{i}}{2} \bar\psi\,\bar\sigma{\cdot}\partial\psi - \tfrac12\psi\,W''(\phi)\,\psi + \textrm{h.c.}\bigr)\ . \end{equation} Integrating out the fermions $(\psi,\bar\psi)$ produces a functional determinant $\det M=\exp\{\tfrac{\mathrm{i}}{\hbar}{\cdot}(-\mathrm{i}\hbar\,\mathrm{tr}\ln M)\}$ so that the action becomes \begin{equation} S_g[\phi] \= \smallint\!\mathrm{d}^4\!x\ \bigl\{ |\partial\phi|^2 - |W'|^2 \bigr\} \ -\ \mathrm{i}\hbar\,\mathrm{tr}\ln \Bigl(\begin{smallmatrix} W'' & \mathrm{i}\,\sigma{\cdot}\partial \\[4pt] -\mathrm{i}\,\bar\sigma{\cdot}\partial & {W''}^* \end{smallmatrix} \Bigr) \ =:\ S_g^b[\phi] + \hbar\,S_g^f[\phi]\ . \end{equation} Here, $g$ denotes some coupling constant(s) or parameter(s) inside the superpotential~$W(\phi)$. 
The objects of desire are quantum correlators \begin{equation} \label{Ycorr} \bigl\< Y[\phi] \bigr\>_g \= \int\!\!{\cal D}\phi\ \mathrm{e}^{\frac{\mathrm{i}}{\hbar} S_g[\phi]}\ Y[\phi] \quad\quad\textrm{with}\quad\quad \bigl\<\mathbbm{1}\bigr\> \= 1 \end{equation} for any bosonic (local or nonlocal) functional~$Y$. The path integral in (\ref{Ycorr}) describes a purely bosonic nonlocal field theory. What is characteristic of its supersymmetric origin? In other words: given such a nonlocal action~$S_g$, how could one infer its hidden supersymmetric root? This question was answered in 1980 by Hermann Nicolai~\cite{Nic1,Nic2,Nic3}: Such hiddenly supersymmetric theories admit a nonlocal and nonlinear invertible map \begin{equation} \label{Nicdef} T_g:\ \phi\,\mapsto\ \phi'[\phi;g] \qquad\textrm{such that}\qquad \bigl\<Y[\phi]\bigr\>_g \= \bigl\<Y[T_g^{-1}\phi]\bigr\>_0 \quad \forall\,Y\ , \end{equation} relating correlators in the interacting theory ($g{\neq}0$) to (more complicated) correlators in the free theory ($g{=}0$). For the path integrals, this is equivalent to \begin{equation} {\cal D}\phi\ \exp\bigl\{ \tfrac{\mathrm{i}}{\hbar} S_g[\phi] \bigr\} \= {\cal D}(T_g\phi)\ \exp\bigl\{ \tfrac{\mathrm{i}}{\hbar} S_0[T_g\phi] \bigr\} \= {\cal D}\phi\ \exp\bigl\{ \tfrac{\mathrm{i}}{\hbar} S_0[T_g\phi] + \mathrm{tr}\ln\tfrac{\delta T_g\phi}{\delta\phi} \bigr\}\ . \end{equation} Separating powers of $\hbar$ in the exponent, this splits into two properties, \begin{subequations} \begin{equation} \label{freeaction} S_0^b[T_g\phi] \= S_g^b[\phi] \qquad\textrm{``free action condition''} \ , \end{equation} \begin{equation} \label{detmatching} {}\quad S_0^f - \mathrm{i}\,\mathrm{tr}\ln\tfrac{\delta T_g\phi}{\delta\phi} \= S_g^f[\phi] \qquad\textrm{``determinant matching condition''} \ . \end{equation} \end{subequations} Every Nicolai map has to fulfil these two conditions, which originally were taken as its definition. 
The reason for the name of~(\ref{detmatching}) is that its exponentiation gives an equality of the functional fermion determinant~$\det M$ with the Jacobian of the transformation (the first term is a constant since $S_0^f$ does not depend on~$\phi$). From now on we put $\hbar{=}1$. In 1984, the author derived (for his dissertation) an infinitesimal version~\cite{L1,DL1,DL2,L2} of the Nicolai map by considering the $g$-derivative of~(\ref{Nicdef}), \begin{equation} \label{Nicinf} \begin{aligned} \partial_g\bigl\<Y[\phi]\bigr\>_g &\ \stackrel{(\ref{Nicdef})}{=}\ \partial_g \bigl\<Y[T_g^{-1}\phi]\bigr\>_0 \\ &\=\, \bigl\<\partial_g Y[\phi]\bigr\>_g\ +\ \bigl\< \smallint (\partial_g T_g^{-1}\phi)\cdot\tfrac{\delta Y}{\delta\phi}[T_g^{-1}\phi]\bigr\>_0 \\ &\!\stackrel{(\ref{Nicdef})^{-1}}{=} \bigl\<\partial_g Y[\phi]\bigr\>_g\ +\ \bigl\< \smallint (\partial_g T_g^{-1}\circ T_g)\phi\cdot\tfrac{\delta Y}{\delta\phi}[\phi]\bigr\>_g \ =:\ \bigl\< \bigl(\partial_g + R_g[\phi]\bigr)\,Y[\phi] \bigr\>_g \end{aligned} \end{equation} with a ``flow operator''~\footnote{ We write $\mathrm{d} x$ for the spacetime volume differential as long as its dimension remains unspecified.} \begin{equation} R_g[\phi] \= \int\!\mathrm{d} x\ \bigl( \partial_g T_g^{-1}\circ T_g\bigr)\phi(x)\,\frac{\delta}{\delta\phi(x)} \end{equation} representing a functional differential operator derived from~$T_g$. Nothing is gained, however, by these formal considerations, unless we can reverse the logic and somehow obtain~$R_g$ and exponentiate it in order to create a finite flow~$T_g$ from $g'{=}0$ to $g'{=}g$, by inverting \begin{equation} \bigl(T_g^{-1}\phi\bigr)(x) \= \exp\bigl\{ g\,\bigl(\partial_{g'}+R_{g'}[\phi]\bigr)\bigr\}\,\phi(x)\,\big|_{g'=0} \= \smallsum_{n=0}^\infty \tfrac{g^n}{n!}\,\bigl( \partial_{g'} + R_{g'}[\phi] \bigr)^n\ \phi(x)\,\big|_{g'=0} \ . \end{equation} At this stage two remarks are in order. 
Firstly, $R_g$ is a derivation, and hence $T_g^{-1}$ acts distributively, \begin{equation} R_g\,Y[\phi] \= \smallint \tfrac{\delta Y}{\delta\phi}\cdot R_g\phi \qquad\Leftrightarrow\qquad T_g^{-1} Y[\phi] \= Y[T_g^{-1}\phi] \ . \end{equation} Secondly, by moving the map ``to the other side'', \begin{equation} \bigl\<Y[\phi]\bigr\>_0 \= \bigl\<Y[T_g\phi]\bigr\>_g\ , \end{equation} choosing $\partial_g Y=0$ and differentiating with respect to~$g$, we learn that \begin{equation} 0 \= \partial_g \bigl\<Y[T_g\phi]\bigr\>_g \ \stackrel{(\ref{Nicinf})}{=}\ \bigl\<\bigl(\partial_g+R_g[\phi]\bigr)\,Y[T_g\phi] \bigr\>_g \= \bigl\< \smallint \bigl(\partial_g+R_g[\phi]\bigr)\,T_g\phi \cdot \tfrac{\delta Y}{\delta\phi} [T_g\phi]\bigr\>_g \end{equation} for any (not explicitly $g$-dependent) functional~$Y$, and therefore \begin{equation} \label{fixpoint} \bigl(\partial_g + R_g[\phi]\bigr)\,T_g\phi(x) \= 0\ . \end{equation} This ``fixpoint property'' of the Nicolai map under the infinitesimal flow allows us to directly construct $T_g\phi$ from $R_g$ without invoking the inverse first. Indeed, (\ref{fixpoint}) is formally solved by a path-ordered exponential, \begin{equation} \label{universal} T_g\phi \= {\cal P} \exp \Bigl\{-\!\!\int_0^g\!\!\mathrm{d} h\ R_h[\phi]\Bigr\}\ \phi \= \sum_{s=0}^\infty (-1)^s\!\!\int_0^g\!\!\mathrm{d} h_s \ldots \!\int_0^{h_3}\!\!\!\!\mathrm{d} h_2 \!\int_0^{h_2}\!\!\!\!\mathrm{d} h_1\ R_{h_s}[\phi] \ldots R_{h_2}[\phi]\,R_{h_1}[\phi]\ \phi\ , \end{equation} providing a ``universal formula'' for the Nicolai map in terms of the infinitesimal coupling flow~\cite{LR1}. 
It is often useful to expand the flow operator in powers of the coupling, \begin{equation} R_g[\phi] \= \sum_{k=1}^\infty g^{k-1} r_k[\phi] \= r_1[\phi] + g\,r_2[\phi] + g^2 r_3[\phi] + \ldots \end{equation} from which one easily computes a power series expansion for the map itself, \begin{equation} T_g\phi \= \!\sum_{\bf n} g^n\,c_{\bf n}\,r_{n_s}[\phi]\ldots r_{n_2}[\phi]\,r_{n_1}[\phi]\ \phi \qquad\textrm{with}\quad {\bf n} = (n_1,n_2,\ldots,n_s)\ ,\quad n_i\in\mathds N \ ,\quad \smallsum_i n_i = n\ , \end{equation} where $1\le s \le n$ and the $n{=}0$ term is the identity. The numerical coefficients are computed as \begin{equation} c_{\bf n} \ = (-1)^s\!\!\int_0^1\!\!\mathrm{d} x_s\;x_s^{n_s-1} \ldots \!\!\int_0^{x_3}\!\!\!\!\mathrm{d} x_2\;x_2^{n_2-1} \!\!\int_0^{x_2}\!\!\!\!\mathrm{d} x_1\;x_1^{n_1-1} \= (-1)^s\bigl[ n_1\cdot(n_1+n_2)\cdots(n_1+n_2+\ldots+n_s)\bigr]^{-1} \end{equation} and related to the Stirling numbers of the second kind. Writing out the first few terms, the perturbative Nicolai map reads \begin{equation} \begin{aligned} T_g\phi &\= \phi \ -\ g\,r_1 \phi \ -\ \sfrac12g^2\bigl(r_2-r_1^2\bigr)\phi\ -\ \sfrac16g^3\bigl(2r_3-r_1r_2-2r_2r_1+r_1^3\bigr)\phi \\ &\quad -\sfrac{1}{24}g^4\bigl(6r_4-2r_1r_3-3r_2r_2+r_1^2r_2-6r_3r_1 +2r_1r_2r_1+3r_2r_1^2-r_1^4\bigr)\phi \ +\ {\cal O}(g^5)\, . \end{aligned} \end{equation} For computing correlation functions \`a la (\ref{Nicdef}) we need the inverse map. 
It possesses an analogous universal representation in terms of an anti-path-ordered exponential, which gives rise to a different power series expansion, \begin{equation} T_g^{-1}\phi \= \!\sum_{\bf n} g^n\,d_{\bf n}\,r_{n_s}[\phi]\ldots r_{n_2}[\phi]\,r_{n_1}[\phi]\ \phi \quad\textrm{with}\quad d_{\bf n} \= \bigl[ n_s\cdot(n_s+n_{s-1})\cdots(n_s+n_{s-1}+\ldots+n_1)\bigr]^{-1} \end{equation} whose first terms are \begin{equation} \begin{aligned} T_g^{-1}\phi &\= \phi \ +\ g\,r_1 \phi \ +\ \sfrac12g^2\bigl(r_2+r_1^2\bigr)\phi\ +\ \sfrac16g^3\bigl(2r_3+2r_1r_2+r_2r_1+r_1^3\bigr)\phi \\ &\quad +\sfrac{1}{24}g^4\bigl(6r_4+6r_1r_3+3r_2r_2+3r_1^2r_2+2r_3r_1 +2r_1r_2r_1+r_2r_1^2+r_1^4\bigr)\phi \ +\ {\cal O}(g^5)\, . \end{aligned} \end{equation} Still, we have to establish the existence of the flow operator~$R_g$ and find an explicit expression for it. We shall do this now for the exemplary case of scalar theories (gauge theories will be treated in the following section). If supersymmetry is realized off-shell on the action~$S$ then there exists a functional $\mathring{\Delta}_\alpha[\phi,\psi,F]$ such that \begin{equation} \partial_g S[\phi,\psi,F] \= \delta_\alpha \mathring{\Delta}_\alpha[\phi,\psi,F] \end{equation} for the supersymmetry transformations~$\delta_\alpha$, where $\alpha$ denotes a Majorana spinor index. Integrating out the auxiliary~$F$ one has that \begin{equation} \label{dgS} \partial_g S_{\textrm{SUSY}}[\phi,\psi] \= \delta_\alpha \Delta_\alpha[\phi,\psi] \qquad\textrm{with}\quad \Delta_\alpha[\phi,\psi] = \mathring{\Delta}_\alpha[\phi,\psi,-\smash{W'}^*(\phi)] \end{equation} for the on-shell action $S_{\textrm{SUSY}}\,{=}\int\!\!\mathrm{d} x\,{\cal L}_{\textrm{SUSY}}$ with an anticommuting functional~$\Delta_\alpha$. For our Wess--Zumino model example, it reads $\Delta_\alpha=\tfrac12\int\!\mathrm{d}^4\!x\,\psi_\alpha\,\partial_gW'(\phi)$.
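Before constructing $R_g$, note that both coefficient formulas can be checked mechanically against the expansions displayed above. A short Python sketch in exact arithmetic (operator words are written left to right, so for $c_{\bf n}$ the tuple $(n_1,\ldots,n_s)$ is the reversed word, while the $d_{\bf n}$ partial sums start from the leftmost factor):

```python
from fractions import Fraction

def c(word):
    """Coefficient of r_{a_1}...r_{a_s} phi in T_g phi, word = (a_1,...,a_s)
    read left to right; in the text's notation (n_1,...,n_s) = reversed(word)."""
    coeff, partial = Fraction((-1) ** len(word)), 0
    for n in reversed(word):          # partial sums n_1, n_1+n_2, ...
        partial += n
        coeff /= partial
    return coeff

def d(word):
    """Coefficient of r_{a_1}...r_{a_s} phi in T_g^{-1} phi; no sign factor,
    partial sums n_s, n_s+n_{s-1}, ... run from the leftmost factor."""
    coeff, partial = Fraction(1), 0
    for n in word:
        partial += n
        coeff /= partial
    return coeff

# order g^3 of T_g phi:  -1/6 (2 r3 - r1 r2 - 2 r2 r1 + r1^3) phi
assert c((3,)) == Fraction(-2, 6)
assert c((1, 2)) == Fraction(1, 6)        # r1 r2
assert c((2, 1)) == Fraction(2, 6)        # r2 r1
assert c((1, 1, 1)) == Fraction(-1, 6)

# order g^4 of T_g^{-1} phi:  1/24 (6 r4 + 6 r1 r3 + 3 r2 r2 + 3 r1^2 r2
#                                   + 2 r3 r1 + 2 r1 r2 r1 + r2 r1^2 + r1^4) phi
assert d((4,)) == Fraction(6, 24)
assert d((1, 3)) == Fraction(6, 24)
assert d((2, 2)) == Fraction(3, 24)
assert d((1, 1, 2)) == Fraction(3, 24)
assert d((3, 1)) == Fraction(2, 24)
assert d((1, 2, 1)) == Fraction(2, 24)
assert d((2, 1, 1)) == Fraction(1, 24)
assert d((1, 1, 1, 1)) == Fraction(1, 24)
```

At order $g^n$ there are $2^{n-1}$ ordered compositions of $n$, matching the eight $g^4$ terms checked above.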
The construction of~$R_g$ employs the supersymmetry Ward identity: \begin{equation} \partial_g \!\int\!\!{\cal D}\phi \!\int\!\!{\cal D}\psi\ \mathrm{e}^{\mathrm{i} S_{\textrm{\tiny SUSY}}[\phi,\psi] }\ Y[\phi] \= \int\!\!{\cal D}\phi \!\int\!\!{\cal D}\psi\ \mathrm{e}^{\mathrm{i} S_{\textrm{\tiny SUSY}}[\phi,\psi] }\ \bigl( \partial_g + \mathrm{i}\Delta_\alpha[\phi,\psi]\ \delta_\alpha\bigr) Y[\phi]\ . \end{equation} Integrating out the fermions contracts bilinears to produce fermion propagators $\bcontraction{}{\psi}{\,}{\psi} \psi\,\psi$ (in the $\phi$ background), hence~\cite{L1} \begin{equation} R_g[\phi] \= \mathrm{i}\,\bcontraction{}{\Delta}{_\alpha[\phi]\ }{\delta} \Delta_\alpha[\phi]\ \delta_\alpha \= \mathrm{i}\int\!\mathrm{d} x\ \bcontraction{}{\Delta}{_\alpha[\phi]\ }{\delta} \Delta_\alpha[\phi]\ \delta_\alpha \phi(x)\ \frac{\delta}{\delta\phi(x)}\ . \end{equation} For the simple example of the Wess--Zumino model with (massless) superpotential $W=\tfrac13 g\phi^3$, one finds that \begin{equation} R_g[\phi] \= \tfrac{\mathrm{i}}{2} \smallint\!\!\smallint\mathrm{d}^4\!x\,\mathrm{d}^4\!y\ \bigl\{ \phi^2(x)\,\bcontraction{}{\psi}{(x)\ }{\psi} \psi(x)\ \psi(y) \ +\ {\smash{\phi^*}}^2(x)\,\bcontraction{}{\bar\psi}{(x)\ }{\psi} \bar\psi(x)\ \psi(y) \bigr\}_{\alpha\alpha}\, \tfrac{\delta}{\delta\phi(y)} \ -\ \textrm{h.c.} \end{equation} where the subscript on the curly brace indicates a spin trace. It is instructive to develop a diagrammatical shorthand notation. For the sake of illustration, here we oversimplify $(\phi,\phi^*)\sim\phi$ and write \\ \centerline{\includegraphics[width=0.7\paperwidth]{diagram1.pdf}} \\ with the graphical rules~\cite{FL} \\[2pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram2.pdf}} \\ The linear tree for $R_g$ exponentiates to a series of branched trees for $T_g\phi$, \\[2pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram3.pdf}} \\ and likewise for the inverse~$T_g^{-1}\phi$.
Inserting the latter into (\ref{Nicdef}) and performing the free-theory bosonic contractions, one obtains an alternative Feynman perturbation series for correlators, as displayed here for the two-point function: \\[2pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram4.pdf}} \\ Notably, the multiple action of~$R_g$ produces multiple spin traces (graphically separated by dots). The supersymmetric cancellation of the leading UV divergences is automatically built in, since pure fermion loops as well as boson tadpoles are absent. \section{The case of gauge theories} \noindent Supersymmetric gauge theories present additional challenges. Firstly, one has to deal with the gauge redundancy necessitating a (supersymmetry-breaking) gauge fixing and, secondly, the $g$-derivative of the supersymmetric action cannot easily be expressed as a supervariation. We eliminate the auxiliary field ($D$-term), use a local gauge-fixing functional $\mathcal{G}$ to fix a gauge $\mathcal{G}(A){=}0$ with a parameter~$\xi$ and include the corresponding ghost fields to formulate a BRST-invariant on-shell action \begin{equation} S_{\textrm{SUSY}}[A,\lambda,c,\bar c] \= \int\!\mathrm{d} x\ \mathrm{tr}\bigl\{ -\tfrac14 F_{\mu\nu}F^{\mu\nu} -\tfrac{1}{2\xi}\mathcal{G}(A)^2 -\tfrac{\mathrm{i}}{2}\bar\lambda\slashed{D}\lambda + \bar{c}\,\tfrac{\partial\mathcal{G}}{\partial A_\mu}D_\mu c \bigr\} \end{equation} for $su(N)$-valued gluons~$A_\mu{=}A_\mu^A T^A$, gluinos~$\lambda_\alpha{=}\lambda_\alpha^A T^A$, a ghost $c{=}c^A T^A$ and an antighost $\bar c{=}\bar c^A T^A$, with $F_{\mu\nu}{=}\partial_\mu A_\nu{-}\partial_\nu A_\mu{+}g[A_\mu,A_\nu]{=}F_{\mu\nu}^AT^A$ and group generators subject to $[T^A,T^B]{=}f^{ABC}T^C$ with $A,B,\ldots=1,2,\ldots,N^2{-}1$. The trace refers to the color degrees of freedom.
We allow for various spacetime dimensionalities~$D$ by letting the fields live on~$\mathds R^{1,D-1}$ so that $\mu,\nu,\ldots=0,1,\ldots,D{-}1$ and $\alpha=1,\ldots,r$, where $r$ is the complex dimension of the corresponding Majorana representation, i.e.~$\lambda^A\in\mathds C^r$. It essentially grows exponentially with~$D$. In the following, we present two different attempts to emulate the successful scalar-field procedure. In version A~\cite{ALMNPP}, from $F=\mathrm{d} A+g\,A{\wedge}A$ we see that $g{=}0$ is the free theory. A quick computation shows that things are now more involved than in~(\ref{dgS}), \begin{equation} \partial_g S_{\textrm{SUSY}} \= \delta_\alpha \Delta_\alpha \ +\ q\smallint\mathrm{tr}\,\mathrm{i}\bar\lambda\slashed{A}\lambda \ +\ \smallint\mathrm{tr}\,\bar{c}\tfrac{\partial\mathcal{G}}{\partial A_\mu}A_\mu c \qquad\textrm{with}\quad q=\tfrac{D-1}{r}-\tfrac12 \end{equation} where ($\gamma^{\mu\nu}=\tfrac12[\gamma^\mu,\gamma^\nu]$) \begin{equation} \Delta_\alpha \= -\tfrac{1}{2r}\smallint\mathrm{tr}\,(\gamma^{\mu\nu}\lambda)_\alpha A_\mu A_\nu \end{equation} is the gauge-theory counterpart of the on-shell functional in~(\ref{dgS}). However, we have to contend with a ghost contribution and a ``mismatch''~$q$ in the construction of~$R_g$.
With the help of the broken-supersymmetry and BRST Ward identities one derives that \begin{equation} \partial_g\bigl\<Y[A]\bigr\>_g \= \bigl\< \bigl(\partial_g+R_g[A]+Z_g[A]\bigr)\,Y[A]\bigr\>_g \end{equation} where \begin{equation} R_g \= \mathrm{i}\,\bcontraction{}{\Delta}{_\alpha\ }{\delta}\Delta_\alpha\ \delta_\alpha \ -\ \bcontraction{}{\Delta}{_\alpha (}{\delta}\Delta_\alpha (\delta_\alpha \bcontraction{}{\Delta}{_{\textrm{gh}})\,}{s}\Delta_{\textrm{gh}})\,s \qquad\textrm{with}\quad \Delta_{\textrm{gh}} = \smallint\mathrm{tr}\,\bar{c}\,\mathcal{G}(A)\ , \end{equation} \begin{equation} Z_g \= (\bcontraction[1.5ex]{}{s}{\,\bcontraction{}{\Delta}{_\alpha)\,(}{\delta}\Delta_\alpha)\,(\delta_\alpha}{\Delta} s\,\bcontraction{}{\Delta}{_\alpha)\,(}{\delta}\Delta_\alpha)\,(\delta_\alpha\Delta_{\textrm{gh}}) \ -\ q \smallint\mathrm{tr}\,\bcontraction{}{\bar\lambda}{\slashed{A}}{\lambda}\bar\lambda\slashed{A}\lambda \ +\ \mathrm{i}\smallint\mathrm{tr}\,\bcontraction{}{\bar{c}}{\tfrac{\partial\mathcal{G}}{\partial A_\mu}A_\mu}{c} \bar{c}\tfrac{\partial\mathcal{G}}{\partial A_\mu}A_\mu c \ , \qquad\quad{} \end{equation} and $s$ denotes the BRST (or Slavnov) variation. The contractions signify gaugino or ghost propagators. The multiplicative contribution~$Z_g$ destroys the derivation property of~$R_g$ and hence the distributivity of~$T_g$, which is not acceptable. A somewhat lengthy computation reveals, however, that in the Landau gauge, $\mathcal{G}{=}\partial^\mu A_\mu$ with $\xi{\to}\infty$, the obstacle may be overcome, \begin{equation} Z_g=0 \qquad\textrm{if and only if}\quad q=\tfrac1r \quad\Leftrightarrow\quad r=2(D{-}2) \quad\Leftrightarrow\quad D=3,4,6,10\ . \end{equation} Amazingly, these are precisely the ``critical spacetime dimensions'' in which super Yang--Mills theory can exist~\cite{BSS}, demonstrating that the Nicolai map knows about them~\cite{ANPP}!
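The numerology behind the last equivalence is elementary to verify: with $q=\tfrac{D-1}{r}-\tfrac12$ from version A, the condition $q=\tfrac1r$ rearranges to $r=2(D{-}2)$. A sketch (the table of minimal spinor dimensions is a standard fact assumed here, not quoted from the text):

```python
from fractions import Fraction

def q(D, r):
    """Mismatch q = (D-1)/r - 1/2 appearing in the broken-SUSY Ward identity."""
    return Fraction(D - 1, r) - Fraction(1, 2)

# q = 1/r  <=>  (D-2)/r = 1/2  <=>  r = 2(D-2), for any D
for D in range(3, 12):
    r = 2 * (D - 2)
    assert q(D, r) == Fraction(1, r)

# minimal spinor sizes r by dimension (2, 4, 8, 16, ... real components);
# only D = 3, 4, 6, 10 satisfy r = 2(D-2)
minimal = {3: 2, 4: 4, 5: 8, 6: 8, 7: 16, 8: 16, 9: 16, 10: 16}
critical = [D for D, r in minimal.items() if r == 2 * (D - 2)]
assert critical == [3, 4, 6, 10]
```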
For version B~\cite{L1,MN,LR2}, we restrict to a linear gauge $\mathcal{G}(A)=n{\cdot}A$ or $\partial{\cdot}A$ and rescale all fields to tilded versions in order to pull out the gauge coupling. In particular, \begin{equation} g\,A=:\widetilde A \quad\Rightarrow\quad S_{\textrm{SUSY}}[\widetilde A,\widetilde\lambda,\widetilde{c},\widetilde{\bar c}] \= \tfrac{1}{g^2}\!\!\int\!\!\mathrm{d} x\ \mathrm{tr}\bigl\{ - \tfrac14 \widetilde F_{\mu\nu}\widetilde F^{\mu\nu} -\tfrac{1}{2\xi}\mathcal{G}(\widetilde A)^2 - \tfrac{\mathrm{i}}{2}\widetilde{\bar\lambda}\widetilde{\slashed{D}}\widetilde{\lambda} + \sqrt{g}\,\widetilde{\bar{c}}\,\tfrac{\partial\mathcal{G}}{\partial\widetilde A_\mu}\widetilde{D}_\mu \widetilde{c} \bigr\} \end{equation} where the tilded quantities are $g$-independent (or evaluated at $g{=}1$). Since the $g$-derivative now is proportional to the action itself,\footnote{ except for the ghost term, which has to be scaled non-canonically} we can use off-shell supersymmetry (only in $D{\le}4$ though) to obtain \begin{equation} \partial_g S_{\textrm{SUSY}} \= -\tfrac{1}{g^3} \, \bigl\{ \delta_\alpha\widetilde\Delta_\alpha - \sqrt{g}\,\,s\,\widetilde\Delta_{\textrm{gh}} \bigr\} \end{equation} where \begin{equation} \widetilde\Delta_\alpha = -\tfrac{1}{2r}\smallint\mathrm{tr}\,(\gamma^{\mu\nu}\widetilde\lambda)_\alpha \widetilde F_{\mu\nu} \quad\quad\textrm{and}\quad\quad \widetilde\Delta_{\textrm{gh}} = \smallint\mathrm{tr}\,\widetilde{\bar{c}}\,\mathcal{G}(\widetilde A)\ . 
\end{equation} Now we may proceed using broken-supersymmetry and BRST Ward identities to get \begin{equation} \partial_g \bigl\< Y[\widetilde A] \bigr\>_g \= \bigl\< \bigl(\partial_g + \widetilde R_g[\widetilde A] \bigr)\,Y[\widetilde A] \bigr\>_g \end{equation} where \begin{equation} \widetilde R_g \= -\mathrm{i}\,\bcontraction{}{\widetilde\Delta}{_\alpha\,}{\delta}\widetilde\Delta_\alpha\,\delta_\alpha \ +\ \tfrac{\mathrm{i}}{\sqrt{g}}\,\bcontraction{}{\widetilde\Delta}{_{\textrm{gh}}\,}{s}\widetilde\Delta_{\textrm{gh}}\,s \ -\ \tfrac{1}{\sqrt{g}}\,\bcontraction{}{\widetilde\Delta}{_\alpha (}{\delta}\widetilde\Delta_\alpha (\delta_\alpha \bcontraction{}{\widetilde\Delta}{_{\textrm{gh}})\,}{s}\widetilde\Delta_{\textrm{gh}})\,s \ . \end{equation} Yet, in this version, we cannot expand around $g{=}0$; for perturbation theory we must scale back to \begin{equation} A \= \tfrac1g\widetilde A \qquad\Rightarrow\qquad R_g[A] \= \tfrac1g \bigl(\widetilde R_g[\widetilde A]-\smallint\widetilde A\tfrac{\delta}{\delta\widetilde A} \bigr)\ . \end{equation} Note that $\widetilde R_g[gA]\neq gR_g[A]$ but contains an Euler operator w.r.t.~$A$. This is crucial to remove the formal $g{\to}0$ singularity in the above expression, so that in fact $\lim_{g\to0}R_g$ is finite.
We can give an explicit expression for any gauge, but limited to $D{\le}4$: \begin{equation} \overleftarrow{R}_g[A] \= \tfrac{1}{2r} \smallint\!\!\smallint\!\!\smallint\mathrm{tr}\ \overleftarrow{\tfrac{\delta}{\delta A_\mu}}\,P_\mu^{\ \nu}\,\bigl\{ \gamma_\nu\,\bcontraction{}{\bar\lambda}{\ }{\lambda} \bar\lambda\ \lambda\,\gamma^{\rho\sigma} A_\rho (A-2\partial\Box^{-1}\partial{\cdot}A)_\sigma \bigr\}_{\alpha\alpha} +\ \smallint\!\!\smallint\mathrm{tr}\ \overleftarrow{\tfrac{\delta}{\delta A_\mu}}\,A_\mu\,\Box^{-1}\partial{\cdot}A \ +\ O(\mathcal{G}) \end{equation} with the non-Abelian transversal projector \begin{equation} P_\mu^{\ \nu} \= \delta_\mu^{\ \nu}\mathbbm{1}\ -\ D_\mu \bcontraction{}{c}{\ }{{\bar c}}c\ \bar c\,\tfrac{\partial\mathcal{G}}{\partial A_\nu} \qquad\Rightarrow\qquad \tfrac{\partial\mathcal{G}}{\partial A_\mu}\,P_\mu^{\ \nu} = 0 = P_\mu^{\ \nu} D_\nu \end{equation} forcing the flow onto the gauge surface: $R_g\mathcal{G}\sim\mathcal{G}$. For the Landau gauge, $\mathcal{G}{=}\partial{\cdot}A$, all expressions simplify considerably. We have reversed the direction of the derivatives since acting towards the left is more convenient for the graphical representation. So the upshot of both versions A and~B is that our explicit construction formula~(\ref{universal}) carries over to gauge theory, for $D{\le}4$ in any gauge and for $D{=}6$ and~$10$ in the Landau gauge, \begin{equation} \label{universalA} T_g A\= {\cal P} \exp \Bigl\{-\!\int_0^g\!\!\mathrm{d} h\ R_h[A]\Bigr\}\ A \= \sum_{\bf n} g^n\,c_{\bf n}\,r_{n_s}[A]\ldots r_{n_2}[A]\,r_{n_1}[A]\ A \end{equation} from a decomposition into homogeneous pieces \begin{equation} R_g[A] \= r_1[A]\ +\ g\,r_2[A]\ +\ g^2 r_3[A]\ +\ldots \qquad\textrm{with}\qquad \smallint A\tfrac{\delta}{\delta A}\,r_k[A] = k\,r_k[A]\ . \end{equation} Let us finally look at the diagrammatics in the Landau gauge~\cite{FL}.
With the solid line representing the free fermion propagator $(\mathrm{i}\slashed\partial)^{-1}$ and the dashed line standing for the free ghost propagator $\Box^{-1}$, we obtain the tree expansion \\[2pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram5.pdf}} \\ Iterating this in the universal formula~(\ref{universalA}) produces (with rules analogous to the scalar case) \\[4pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram6.pdf}} \\ where the color structure follows the graphical one, and we have suppressed the Lorentz and spinor indices. In fact, performing the spin traces creates various contractions of Lorentz indices on the gauge-field legs and on the propagators due to $(\mathrm{i}\slashed{\partial})^{-1}\!=\mathrm{i}\gamma^\mu\partial_\mu\Box^{-1}$, so that the number of terms at $O(g^n)$ grows rapidly with~$n$, namely $1, 3, 34, 344, \ldots$. Nevertheless, the expansion is algorithmic and may be implemented on a computer. Explicit computations were performed to order~$g^3$ in~\cite{ALMNPP} and to order~$g^4$ in~\cite{MN}. For first evaluations of correlators, see~\cite{DL2,NP}. \section{Application to the supermembrane} \noindent In the last part of this talk I would like to describe a recent application~\cite{LN} of the Nicolai map towards a quantization of the maximal supersymmetric membrane, an outstanding unsolved problem.\footnote{ See also H.~Nicolai's talk at the Humboldt Kolleg on Quantum Gravity and Fundamental Interactions, which was part of the same Corfu Summer Institute 2021.} The $D{=}11$ supermembrane~\cite{BST} can be obtained as an $N{\to}\infty$ limit of a maximally supersymmetric (so-called BFSS) matrix model~\cite{CH,BRR,Flume,dWHN,BFSS}. More concretely, in a Minkowski background in the light-cone gauge, the supermembrane can be viewed as a one-dimensional gauge theory of area-preserving diffeomorphisms (APD), which is regularized by the SU($N$) BFSS matrix model. This matrix model arises also in two other ways. 
Firstly, it can be seen as the worldline theory of a large number of $D0$-branes in type IIA~string theory (the double dimensional reduction of the supermembrane). Secondly, it appears as the Kaluza--Klein compactification of super Yang--Mills theory from $1{+}9$ to $1{+}0$ dimensions. The Yang--Mills, matrix-model and APD coupling~$g$ can be seen to be proportional to the membrane tension~$T$, which combines the two key parameters of string theory via $T=g_s^{-2/3}(\alpha')^{-1}$. Hence, a perturbative quantization of the BFSS matrix model (in powers of~$g$) can serve as a low-$T$ expansion of the quantum supermembrane. Here, we attempt to set this up via the Nicolai map, by dimensionally reducing its $D{=}10$ SU($N$) super Yang--Mills version to a map for the matrix quantum mechanics and finally (in the $N{\to}\infty$ limit) for the APD gauge theory. The Nicolai map for super Yang--Mills theory was described in the previous section. Let us allow for $D=3,4,6$ or~$10$. The dimensional reduction from $\mathds R^{1,D-1}$ to $\mathds R^{1,0}$ effects \begin{equation} \partial_\mu \to (\partial_t, 0)\ ,\quad A_\mu^A \to (\omega^A, X_a^A)\ ,\quad \lambda_\alpha^A \to \theta_\alpha^A\ ,\quad D_\mu \to (D_t{=}\partial_t{+}g\omega{\times}\,,\ g X_a^A{\times}) \end{equation} where $\mu=(0,a)=(0,1,\ldots,D{-}1)$, $\alpha=1,\ldots,r$ and $A=1,\ldots,N^2{-}1$. We use the $\times$ symbol to hide the SU($N$) structure constants, as in $(\omega{\times})^{AB}\equiv f^{ACB}\omega^C$. The spinor index notation is a bit sloppy here: while $\lambda^A$ is an SO($D$) Majorana spinor, the SO($D{-}1$) Majorana $\theta^A$ has only half as many components (the other half gets projected out). The non-dynamical Lagrange multiplier~$\omega^A$ enforces the Gau\ss\ constraint. 
The Lorenz gauge simplifies to \begin{equation} \mathcal{G}(A)=\partial{\cdot}A \quad\longrightarrow\quad \mathcal{G}(\omega) = \dot\omega \equiv \partial_t\omega = D_t\omega\ , \end{equation} thus $\mathcal{G}{=}0$ forces $\omega$ to be constant in time. Interestingly, the reduced temporal gauge $\omega{=}0$ implies the reduced Lorenz gauge $\dot\omega{=}0$. Hiding color, Lorentz and spin indices, and integrating out the auxiliary $D$~field, the Yang--Mills lagrangian reduces as follows, \begin{equation} \begin{aligned} {\cal L}_{\textrm{YM}} &\= -\tfrac14 F^2-\tfrac{1}{2\xi}\mathcal{G}(A)^2 -\tfrac{\mathrm{i}}{2}\bar\lambda\slashed{D}\lambda + \bar{c}\,\tfrac{\partial\mathcal{G}}{\partial A}Dc \quad\longrightarrow\\ {\cal L}_{\textrm{MM}} &\= \tfrac12(D_t X)^2 -\tfrac14 g^2(X{\times}X)^2 - \tfrac{\mathrm{i}}{2}\theta\,(D_t+g\gamma{\cdot}X{\times})\,\theta - \tfrac{1}{2\xi}\dot\omega^2 + \bar{c}\,\partial_tD_t c\ . \end{aligned} \end{equation} To construct the coupling flow operator for the matrix model, we may either employ version~A of the previous section or directly dimensionally reduce the Yang--Mills flow operator already given there. Either way, one arrives at \begin{equation} \overleftarrow{R}_g \= -\tfrac{1}{r}\smallint\!\!\smallint\!\!\smallint \overleftarrow{\tfrac{\delta}{\delta X_a}} \Bigl[ \bigl(\underbrace{\gamma_a\mathbbm{1} - g X_a{\times}D_t^{-1}}_{\textrm{from}\ \ P_\mu^{\ \nu}}\bigr)\, \bcontraction{}{\theta}{\ }{\theta}\theta\ \theta\, \bigl(\underbrace{\tfrac12\gamma^{cd} X_c{\times}X_d + \gamma^d \omega{\times} X_d}_{\textrm{from}\ \ \gamma^{\rho\sigma}A_\rho A_\sigma}\bigr) \Bigr]_{\alpha\alpha} \end{equation} where the Euclidean indices $a,b,\ldots=1,\ldots,D{-}1$ and the spin trace $[\ldots]_{\alpha\alpha}$ have been exhibited but color and the temporal argument in $X^A_a(t)$ are suppressed. This operator is to be iterated on~$X$ to yield~$(T_gX)^A_a(t)$. 
Since no $\tfrac{\delta}{\delta\omega}$ appears, $R_g\omega{=}0$, and hence $T_g\omega=\omega$ respects the gauge slice. For simplicity, we pass to the temporal subgauge~$\omega{\equiv}0$. Then, only odd powers of~$g$ show up in the perturbative expansion of~$R_g$.\footnote{ At least up to eighth order, where a nonzero contribution $\sim(\gamma_{a_1}\cdots\gamma_{a_9})_{\alpha\alpha}\sim\varepsilon_{a_1\cdots a_9}$ seems possible.} With a solid line now depicting the one-dimensional propagator $\partial_t^{-1}=\tfrac12\textrm{sgn}(t)=:\epsilon(t)$ up to a constant (and a linear term in case of a zero mode on a circle), the diagrammatical expansion of the flow operator reads\\ \centerline{\includegraphics[width=0.7\paperwidth]{diagram7.pdf}} \\ giving rise to the branched-tree expansion\\[4pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram8.pdf}} \\ Remarkably, this expression passes all tests. The free-action condition is met for any value of~$D$,\\[4pt] \centerline{\includegraphics[width=0.7\paperwidth]{diagram9.pdf}} \\ where $\#$ stands for the various $g$-powers in the sum. It is nontrivial that all but one term cancel in the infinite sum over double trees. The determinant matching, in contrast, works only for $D\in\{3,4,6,10\}$,\\ \centerline{\includegraphics[width=0.7\paperwidth]{diagram10.pdf}} \\ Here, it is amazing that (with the help of the Jacobi identity) all loops with trees attached cancel out, leaving only the standard one-loop graphs. 
The $N{\to}\infty$ limit leads to the area-preserving-diffeomorphism (APD) gauge theory, \begin{equation} X^A_a(t) \to X_a(\vec{\sigma},t) \quad\quad\textrm{and}\quad\quad f^{ABC} \to \smallint\mathrm{d}^2\!\sigma\,\sqrt{w(\vec\sigma)}\ Y^A(\vec\sigma)\,\bigl\{Y^B(\vec\sigma)\,,Y^C(\vec\sigma)\bigr\}\ , \end{equation} with membrane coordinates $\vec{\sigma}=(\sigma^1,\sigma^2)$, a complete orthonormal basis $\bigl\{Y^A(\vec\sigma)\bigr\}$ of functions on the membrane, and an irrelevant reference density~$w(\vec\sigma)$, which cancels when inserting the APD bracket \begin{equation} \bigl\{A(\vec\sigma)\,,B(\vec\sigma)\bigr\} \= \tfrac{1}{\sqrt{w(\vec\sigma)}}\, \bigl( \partial_{\sigma^1} A(\vec\sigma)\,\partial_{\sigma^2} B(\vec\sigma) - \partial_{\sigma^2} A(\vec\sigma)\,\partial_{\sigma^1} B(\vec\sigma) \bigr)\ . \end{equation} Using the $Y$~basis, the $N{\to}\infty$ limit of the color summation is converted into an integral over~$\vec\sigma$, and the matrix interaction gets encoded in the APD bracket, e.g. \begin{equation} f^{ABC} X^B_b(t) X^C_c(t) \quad\longrightarrow\quad \bigl\{ X_b(\vec{\sigma},t)\,, X_c(\vec{\sigma},t) \bigr\}\ . \end{equation} This limit carries some subtleties. In particular, the APD bracket produces derivative (in $\sigma$) interactions, which may require a point-splitting regularization. In contrast, the absence of $\sigma$~derivatives in the quadratic part of the APD action renders the latter ultralocal. This leads to singular $\delta(\vec{\sigma}{-}\vec{\sigma})$ factors in the fermion determinant which, however, cancel against like factors in the Jacobian of the Nicolai map. 
Therefore, the map remains well-defined in the large-$N$ limit because supersymmetry reigns!\footnote{ Conversely, it explains why this limit does not exist for the purely bosonic matrix model, and why the bosonic membrane is `non-renormalizable'.} More annoyingly, when the Nicolai map is employed in the perturbative computation of APD correlators (e.g.~for membrane vertex operators), the ultralocal free propagator will lead to singularities $\sim\delta(\vec{0})^{-1}$.\footnote{ We thank J.~Plefka for this remark.} This suggests that a partial resummation is needed to pass from a worldline propagator to a membrane world-volume propagator, in analogy with a geometric sum over mass insertions to shift from a massless propagator to a massive one. In APD language and suppressing the common $\vec\sigma$ arguments, the $N{\to}\infty$ limit of the above Nicolai map takes the form \begin{equation} \begin{aligned} T_g X_a (t) \=\ X_a(t) &\ -\ \tfrac12 g^2 \int\!\!\!\!\int\!\!\mathrm{d} s\,\mathrm{d} u\; \varepsilon(t{-}s)\,\varepsilon(s{-}u)\, \Big\{ X_b(s) \,,\,\bigl\{ X_b(u) , X_a(u) \bigr\} \Big\} \\ &\ +\ \tfrac18 g^4 \int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int\!\!\mathrm{d} s\,\mathrm{d} u\,\mathrm{d} v\,\mathrm{d} w\; \varepsilon(t{-}s)\,\varepsilon(s{-}u)\,\varepsilon(u{-}v)\,\varepsilon(v{-}w) \\ &\qquad\times\ \Biggl[ \ 6\ \biggl\{ X_b(s)\,,\,\Bigl\{ X_c(u)\,,\,\bigl\{ X_{[a}(v)\,,\,\{ X_b(w),X_{c]}(w)\}\bigr\}\Bigr\}\biggr\} \\ &\qquad\ \ +\ 2\ \biggl\{ X_b(s)\,,\,\Bigl\{ X_{[b}(u)\,,\,\bigl\{ X_{|c|}(v)\,,\,\{ X_{a]}(w),X_c(w)\}\bigr\}\Bigr\}\biggr\} \\ &\qquad\ \ +\ 2\ \biggl\{ X_a(s)-X_a(t)\,,\,\Bigl\{ X_b(u)\,,\,\bigl\{ X_c(v)\,,\,\{ X_b(w),X_c(w)\}\bigr\}\Bigr\}\biggr\}\, \Biggr] \\ &\quad +\ \tfrac18 g^4 \int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int\!\!\mathrm{d} s\,\mathrm{d} u\,\mathrm{d} v\,\mathrm{d} w\; \varepsilon(t{-}s)\,\varepsilon(s{-}u)\,\varepsilon(s{-}v)\,\varepsilon(v{-}w) \\ &\qquad\quad\times\ 
\biggl\{\bigl\{X_a(u),X_b(u)\bigr\}\,,\,\Bigl\{X_c(v)\,,\,\bigl\{X_b(w),X_c(w)\bigr\}\Bigr\}\biggr\} \ \ +\ O(g^6)\ . \end{aligned} \end{equation} By computer, this expression can easily be continued to any desired order in the coupling. \section{Outlook} \noindent We have proposed a new angle of attack on the supermembrane, based on the Nicolai map for the APD gauge theory. The perturbative small-tension expansion offers a path to quantization. A distant goal is to establish quantum target-space Lorentz invariance for the supermembrane. Closer in reach appears a computation of physically relevant correlation functions, e.g.~of graviton-emission vertex operators~\cite{DNP} \begin{equation} \begin{aligned} V_h[X,\theta;k] \,=\, h^{ab} \Bigl[ & D_t X_a\,D_t X_b - \{X_a,\!X_c\}\{X_b,\!X^c\} - \mathrm{i}\bar\theta\gamma_a\{X_b,\!\theta\} \\ - \tfrac12 & D_t X_a\bar\theta\gamma_{bc}\theta k^c - \tfrac12\{X_a,\!X^c\}\bar\theta\gamma_{bcd}\theta\,k^d + \tfrac12\bar\theta\gamma_{ac}\theta\,\bar\theta\gamma_{bd}\theta\,k^c\!k^d \Bigr] \,\mathrm{e}^{-\mathrm{i} \vec{k}{\cdot}\vec{X} +\mathrm{i} k^-t} \end{aligned} \end{equation} with graviton polarization~$h_{ab}$. Another perspective is a control over the convergence of the perturbation series with the help of the universal formula~(\ref{universal}) for the map. Puzzling is the special r\^ole of the Landau gauge for spacetime dimensions beyond four. Finally, it would be marvellous to detect traces of ``integrability'' for maximally supersymmetric Yang--Mills theory in four dimensions.
\section{Introduction} One of the approaches to the problem of understanding quantum entanglement is to look for functions on the state space which are invariant under the action of the local unitary (LU) group and enable us to distinguish between different types of states. For a multipartite quantum system with distinguishable subsystems, this group is the product of the unitary groups acting on the Hilbert spaces of the individual subsystems. The problem of separating the orbits can be reduced to finding the set of polynomial invariants \cite{MW}, which form an algebra. Unfortunately, a description in terms of generators and relations is available only in the case of some special Hilbert space dimensions and particle numbers; in other cases only partial results exist, see e.g. \cite{Verstraete, Brylinski, LT, LTT}. In \cite{HW} it was pointed out that the dimension of the space of LU-invariant homogenous polynomials of a fixed degree stabilizes as the dimensions of the Hilbert spaces of the subsystems increase. Based on this observation, one can introduce an algebra which can be thought of as gluing together the algebras of LU-invariant polynomials of various finite dimensional quantum systems \cite{Vrana}, much like one studies the algebra of symmetric polynomials independently of the number of variables. The outline of the paper is as follows. In section~\ref{sec:LUinv} we summarize the construction of the inverse limit of the algebras of LU-invariant polynomials over finite dimensional state spaces of quantum systems with a fixed number of subsystems. We call this object the algebra of local unitary invariants \cite{Vrana}. In section~\ref{sec:cover} we collect some facts about finite coverings of a graph. In particular, we describe a bijection between conjugacy classes of finite index subgroups of a free group, finite coverings of a certain graph and orbits of tuples of permutations under simultaneous conjugation following ref. \cite{Kwak}.
In section~\ref{sec:alggen} we prove that the algebra of local unitary invariants is free by giving an algebraically independent generating set. Our proof makes use of the invariants introduced in ref. \cite{HWW}. Section~\ref{sec:conclusion} contains some concluding remarks, including the interpretation of our result in the context of LU-invariants of mixed states. \section{The algebra of local unitary invariants}\label{sec:LUinv} Let $k\in\mathbb{N}$ and for every $k$-tuple $n=(n_1,\ldots,n_k)\in\mathbb{N}^k$ let us consider the complex Hilbert space $\mathcal{H}_n=\mathbb{C}^{n_1}\otimes\cdots\otimes\mathbb{C}^{n_k}$ describing the pure states of a composite system with $k$ distinguishable subsystems. The group of local unitary transformations, $LU_n=U(n_1,\mathbb{C})\times\cdots\times U(n_k,\mathbb{C})$, acts on $\mathcal{H}_n$ in the obvious way, i.e. regarding $\mathbb{C}^{n_i}$ as the standard representation of $U(n_i,\mathbb{C})$. Let $I_{k,n}$ denote the algebra of $LU_n$-invariant polynomial functions over $\mathcal{H}_n$, regarded as a real vector space. Polynomial functions (with respect to any fixed basis) are in bijection with elements in $S(\mathcal{H}_n\oplus\mathcal{H}_n^{*})$, the symmetric algebra on $\mathcal{H}_n\oplus\mathcal{H}_n^{*}$ on which an action of $LU_n$ is induced, and we have \begin{equation} I_{k,n}=S(\mathcal{H}_n\oplus\mathcal{H}_n^{*})^{LU_n} \end{equation} Note that in $I_{k,n}$ the polynomials are of the same degree in the coefficients and their complex conjugates, therefore we find it convenient to use a grading which differs from the usual one by a factor of two, and call homogenous of degree $m$ the polynomials which are of degree $m$ both in the coefficients and their conjugates.
For $n\le n'\in\mathbb{N}^k$ with respect to the componentwise order, we have the inclusion $\iota_{n,n'}:\mathcal{H}_n\hookrightarrow\mathcal{H}_{n'}$ which is the tensor product of the usual inclusions $\mathbb{C}^{n_i}\hookrightarrow\mathbb{C}^{n'_i}$ sending an $n_i$-tuple to the first $n_i$ components. Similarly, we regard $LU_n$ as a subgroup of $LU_{n'}$ which stabilizes the image of $\iota_{n,n'}$, and thus $\iota_{n,n'}$ is an $LU_n$-equivariant linear map, inducing a morphism of graded algebras $\varrho_{n,n'}:I_{k,n'}\to I_{k,n}$. $((I_{k,n})_{n\in\mathbb{N}^k},(\varrho_{n,n'})_{n\le n'\in\mathbb{N}^k})$ is an inverse system of graded algebras, the inverse limit of which will be denoted by $I_k$ and called the algebra of LU-invariants: \begin{equation} I_k:=\mathop{\varprojlim}\limits_{n\in\mathbb{N}^k}I_{k,n}=\left\{(f_n)_{n\in\mathbb{N}^k}\in\prod_{n\in\mathbb{N}^k}I_{k,n}\Bigg|\forall n\le n':f_n=\varrho_{n,n'}f_{n'}\right\} \end{equation} Note that $I_{k,(n_1,\ldots,n_k)}$ is a quotient of $I_k$ and the restriction of the quotient map to the subspace of elements of degree at most $\min\{n_1,\ldots,n_k\}$ is an isomorphism. The dimension of the homogenous degree $m$ subspace of $I_k$ is given by \begin{equation}\label{eq:stabdim} d_{k,m}=\sum_{a\Vdash m}\left(\prod_{i=1}^{m}i^{a_i}a_i!\right)^{k-2} \end{equation} and the Hilbert series of $I_k$ is \cite{Vrana} \begin{equation} \sum_{m\ge 0}d_{k,m}t^m = \prod_{d\ge 1}(1-t^d)^{-u_d(F_{k-1})} \end{equation} where $u_d(F_{k-1})$ denotes the number of conjugacy classes of index $d$ subgroups of $F_{k-1}$, the free group on $k-1$ generators. Our aim is to prove that $I_k$ is free, and that the number of degree $d$ invariants in an algebraically independent generating set equals the number of conjugacy classes of index $d$ subgroups in the free group on $k-1$ generators, as the Hilbert series suggests.
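The formula (\ref{eq:stabdim}) can be checked by brute force for small $k$ and $m$: as the following sections make precise, $d_{k,m}$ also counts the orbits of $(k-1)$-tuples of permutations in $S_m$ under simultaneous conjugation. A small illustrative script (not part of the paper):

```python
from itertools import product, permutations
from math import factorial

def conj(pi, sigma):
    """Simultaneous-conjugation action: (pi.sigma)(pi(x)) = pi(sigma(x))."""
    out = [0] * len(pi)
    for x, px in enumerate(pi):
        out[px] = pi[sigma[x]]
    return tuple(out)

def orbit_count(k, m):
    """Number of orbits of S_m^(k-1) under simultaneous conjugation."""
    Sm = list(permutations(range(m)))
    seen, count = set(), 0
    for tup in product(Sm, repeat=k - 1):
        if tup in seen:
            continue
        count += 1
        for pi in Sm:
            seen.add(tuple(conj(pi, s) for s in tup))
    return count

def d_formula(k, m):
    """d_{k,m} = sum over partitions a of m of (prod_i i^{a_i} a_i!)^(k-2),
    where a_i is the number of parts equal to i."""
    def partitions(n, largest):
        if n == 0:
            yield []
            return
        for i in range(min(n, largest), 0, -1):
            for rest in partitions(n - i, i):
                yield [i] + rest
    total = 0
    for p in partitions(m, m):
        prod = 1
        for i in set(p):
            a_i = p.count(i)
            prod *= i ** a_i * factorial(a_i)
        total += prod ** (k - 2)
    return total
```

For instance, `orbit_count(3, 3)` and `d_formula(3, 3)` both give $11$ orbits of pairs of permutations in $S_3$, and for $k=2$ the formula reduces to the number of partitions of $m$, i.e. the number of conjugacy classes of $S_m$.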
\section{Graph coverings}\label{sec:cover} Let $G=(V,E)$ be a connected graph with coloured and directed edges (possibly multiple edges and/or loops). A graph $\tilde{G}=(\tilde{V},\tilde{E})$ together with a projection $p:\tilde{G}\to G$ is said to be a covering of $G$ if $p_V:\tilde{V}\to V$ and $p_E:\tilde{E}\to E$ are surjections such that the image of the head (tail) of an edge is the head (tail) of its image, $p_E$ respects colours, and the indegree and outdegree of every vertex $\tilde{v}\in\tilde{V}$ are the same as those of $p_V(\tilde{v})$ in each subgraph determined by the colours. A covering $p:\tilde{G}\to G$ is said to be finite if $|p_V^{-1}(v)|<\infty$ and $m$-fold if $|p_V^{-1}(v)|=m$ for all $v\in V$. Two coverings $p_1:\tilde{G}_1\to G$ and $p_2:\tilde{G}_2\to G$ are said to be isomorphic if there exists an isomorphism $\varphi:\tilde{G}_1\to\tilde{G}_2$ making the following diagram commute: \begin{equation} \xymatrix{\tilde{G}_1 \ar@{->>}[dr]_{p_1}\ar[rr]^{\varphi} & & \tilde{G}_2 \ar@{->>}[dl]^{p_2} \\ & G} \end{equation} The set of isomorphism classes of finite coverings of a graph $G$ will be denoted by $\Iso(G)$, the set of isomorphism classes of $m$-fold coverings by $\Iso(G,m)$, and the connected ones by $\Isoc(G)$ and $\Isoc(G,m)$, respectively. Let $G$ be the graph with a single vertex and $k-1$ directed coloured loops. It is well-known that its fundamental group $\pi_1(G)$ is the free group of rank $k-1$, and a set of generators may be identified with the directed loops. There exists a bijection between $\Isoc(G,m)$ and conjugacy classes of subgroups of index $m$ of $\pi_1(G)$. Let $S_m$ denote the group of bijections from $\{1,\ldots,m\}$ to itself.
This group acts on $S_m^{k-1}=S_m\times S_m\times\cdots\times S_m$ by simultaneous conjugation: \begin{equation} \pi\cdot(\sigma_1,\ldots,\sigma_{k-1})=(\pi\sigma_1\pi^{-1},\ldots,\pi\sigma_{k-1}\pi^{-1}) \end{equation} Let us denote the orbit of $(\sigma_1,\ldots,\sigma_{k-1})\in S_m^{k-1}$ by \begin{equation} [\sigma_1,\ldots,\sigma_{k-1}]=\{\pi\cdot(\sigma_1,\ldots,\sigma_{k-1})|\pi\in S_m\} \end{equation} To a (not necessarily connected) $m$-fold covering $\tilde{G}$ of $G$ we can associate an element in \begin{equation} S_m^{k-1}/S_m=\{[\sigma_1,\ldots,\sigma_{k-1}]|\forall i:\sigma_i\in S_m\} \end{equation} as follows. Label the vertices of $\tilde{G}$ arbitrarily with the numbers $\{1,\ldots,m\}$ using each label exactly once. Let $\sigma_i$ be the permutation which sends $a$ to $b$ if there is a directed edge from $a$ to $b$ of colour $i$ in $\tilde{G}$. Note that this indeed gives a $k-1$-tuple of permutations as the indegree and outdegree of every vertex in $\tilde{G}$ is $1$ in the subgraph determined by any colour. As relabelling corresponds to simultaneous conjugation, we have indeed a well-defined map $\Phi:\Iso(G,m)\to S_m^{k-1}/S_m$. Now let $\tilde{G}_1$ and $\tilde{G}_2$ be two coverings of $G$ where $\tilde{G}_1$ is $m_1$-fold and $\tilde{G}_2$ is $m_2$-fold. The disjoint union $\tilde{G}:=\tilde{G}_1\sqcup\tilde{G}_2$ is then an $m_1+m_2$-fold covering of $G$. We would like to relate the orbits of $k-1$-tuples of permutations of the three coverings. Let us choose the numbering of the vertices of $\tilde{G}$ so that $\tilde{G}_1$ is labelled with $\{1,\ldots,m_1\}$ and $\tilde{G}_2$ is labelled with $\{m_1+1,\ldots,m_1+m_2\}$. Let $(\sigma^{(j)}_1,\ldots,\sigma^{(j)}_{k-1})$ be the representative of the orbit corresponding to $\tilde{G}_j$ and $(\sigma_1,\ldots,\sigma_{k-1})$ be that of $\tilde{G}$ which can be read off from the above-chosen labelling (after subtracting $m_1$ in the $j=2$ case).
It is easy to see that for all $1\le i\le k-1$ we have \begin{equation}\label{eq:star} \sigma_i(a)=\left\{\begin{array}{ll} \sigma^{(1)}_i(a) & \textrm{if $a\le m_1$} \\ \sigma^{(2)}_i(a-m_1)+m_1 & \textrm{if $a> m_1$} \\ \end{array}\right. \end{equation} given by the usual homomorphism $S_{m_1}\times S_{m_2}\hookrightarrow S_{m_1+m_2}$. This map clearly induces a map $\star:S_{m_1}^{k-1}/S_{m_1}\times S_{m_2}^{k-1}/S_{m_2}\to S_{m_1+m_2}^{k-1}/S_{m_1+m_2}$ on the orbits (we will use infix notation, i.e. the map sends $(a,b)\mapsto a\star b$). One can see immediately that $\star$ turns the set $\bigsqcup_{m=1}^{\infty}S_m^{k-1}/S_m$ into a commutative semigroup. Also, $\Iso(G)$ can be equipped with a semigroup structure induced by disjoint union, and $\Phi$ is an isomorphism. \section{Algebraically independent generators of the algebra of LU-invariants}\label{sec:alggen} To an orbit in $S_m^{k-1}/S_m$ we can associate an element in $I_k$ as follows. It was shown in ref. \cite{Vrana} that every element of $I_k$ is represented in some $I_{k,n}$. We will give a representative in $I_{k,n}$ where $n=(n_1,\ldots,n_k)\ge(m,\ldots,m)$ following ref. \cite{HWW}. A vector in $\mathcal{H}_{n}$ is of the form \begin{equation} \psi=\sum_{i_1,\ldots,i_k}\psi_{i_1,\ldots,i_k}e_{i_1}\otimes\cdots\otimes e_{i_k} \end{equation} where in the sum $1\le i_j \le n_j$ for all $1\le j\le k$. Let $(\sigma_1,\ldots,\sigma_{k-1})\in S_m^{k-1}$ be a representative. The value of the associated polynomial on $\psi$ is \begin{equation}\label{eq:f} f_{[\sigma_1,\ldots,\sigma_{k-1}]}(\psi)=\sum_{i^{1}_{1},\ldots,i^{m}_{k}}\psi_{i^{1}_1,\ldots,i^{1}_k}\cdots\psi_{i^{m}_1,\ldots,i^{m}_k}\overline{\psi_{i^{\sigma_1(1)}_1,\ldots,i^{\sigma_{k-1}(1)}_{k-1},i^{1}_k}}\cdots\overline{\psi_{i^{\sigma_1(m)}_1,\ldots,i^{\sigma_{k-1}(m)}_{k-1},i^{m}_k}} \end{equation} where the sum is over all $k\cdot m$-tuples of integers where $1\le i^l_j \le n_j$ for all $1\le j\le k$ and $1\le l\le m$.
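Numerically, (\ref{eq:f}) is a single tensor contraction. The sketch below (an illustration only, with $0$-based permutations) evaluates it with \texttt{numpy.einsum} and checks one consequence: for $k=2$, $m=2$ and $\sigma_1$ a transposition the invariant reduces to $\mathrm{Tr}(\varrho^2)$, with $\varrho$ the reduced density matrix of the first subsystem.

```python
import string
import numpy as np

def loop_invariant(psi, sigmas):
    """f_{[sigma_1,...,sigma_{k-1}]}(psi) of eq. (f); psi has shape (n_1,...,n_k),
    sigmas is a list of k-1 permutations of {0,...,m-1} (0-based)."""
    k, m = psi.ndim, len(sigmas[0])
    letters = string.ascii_letters            # enough for k*m <= 52 indices
    idx = lambda l, j: letters[l * k + j]     # the index i^l_j
    subs = [''.join(idx(l, j) for j in range(k)) for l in range(m)]  # psi factors
    subs += [''.join(idx(sigmas[j][l], j) for j in range(k - 1)) + idx(l, k - 1)
             for l in range(m)]                                      # conjugated factors
    ops = [psi] * m + [psi.conj()] * m
    return np.einsum(','.join(subs) + '->', *ops)

rng = np.random.default_rng(1)
psi = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))   # k = 2
val = loop_invariant(psi, [[1, 0]])                            # m = 2, sigma_1 a transposition
rho = psi @ psi.conj().T                                       # reduced density matrix
```

Here `val` equals $\mathrm{Tr}(\varrho^2)$, and applying independent unitaries on the two tensor factors leaves it unchanged.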
Note that the expression defining $f_{[\sigma_1,\ldots,\sigma_{k-1}]}$ is independent of the choice of the representative, justifying the notation. An important observation is the following: \begin{lem} Let $[\sigma^{(1)}_1,\ldots,\sigma^{(1)}_{k-1}]\in S_{m_1}^{k-1}/S_{m_1}$ and $[\sigma^{(2)}_1,\ldots,\sigma^{(2)}_{k-1}]\in S_{m_2}^{k-1}/S_{m_2}$. Then \begin{equation} f_{[\sigma^{(1)}_1,\ldots,\sigma^{(1)}_{k-1}]\star[\sigma^{(2)}_1,\ldots,\sigma^{(2)}_{k-1}]}= f_{[\sigma^{(1)}_1,\ldots,\sigma^{(1)}_{k-1}]}f_{[\sigma^{(2)}_1,\ldots,\sigma^{(2)}_{k-1}]} \end{equation} \end{lem} \begin{proof} Let the representative of the orbit $[\sigma^{(1)}_1,\ldots,\sigma^{(1)}_{k-1}]\star[\sigma^{(2)}_1,\ldots,\sigma^{(2)}_{k-1}]$ given by eq. (\ref{eq:star}) be $(\sigma_1,\ldots,\sigma_{k-1})\in S_{m_1+m_2}^{k-1}$ and let us denote $m_1+m_2$ by $m$. Then \begin{equation} \begin{split} & \sum_{i^{1}_{1},\ldots,i^{m}_{k}}\psi_{i^{1}_1,\ldots,i^{1}_k}\cdots\psi_{i^{m}_1,\ldots,i^{m}_k}\overline{\psi_{i^{\sigma_1(1)}_1,\ldots,i^{\sigma_{k-1}(1)}_{k-1},i^{1}_k}}\cdots\overline{\psi_{i^{\sigma_1(m)}_1,\ldots,i^{\sigma_{k-1}(m)}_{k-1},i^{m}_k}} \\ = & \sum_{i^{1}_{1},\ldots,i^{m_1}_{k}}\sum_{i^{m_1+1}_{1},\ldots,i^{m}_{k}}\psi_{i^{1}_1,\ldots,i^{1}_k}\cdots\psi_{i^{m}_1,\ldots,i^{m}_k}\overline{\psi_{i^{\sigma_1(1)}_1,\ldots,i^{\sigma_{k-1}(1)}_{k-1},i^{1}_k}}\cdots\overline{\psi_{i^{\sigma_1(m)}_1,\ldots,i^{\sigma_{k-1}(m)}_{k-1},i^{m}_k}} \\ = & \sum_{i^{1}_{1},\ldots,i^{m_1}_{k}}\psi_{i^{1}_1,\ldots,i^{1}_k}\cdots\psi_{i^{m_1}_1,\ldots,i^{m_1}_k}\overline{\psi_{i^{\sigma^{(1)}_1(1)}_1,\ldots,i^{\sigma^{(1)}_{k-1}(1)}_{k-1},i^{1}_k}}\cdots\overline{\psi_{i^{\sigma^{(1)}_1(m_1)}_1,\ldots,i^{\sigma^{(1)}_{k-1}(m_1)}_{k-1},i^{m_1}_k}} \\ \cdot & 
\sum_{i^{1}_{1},\ldots,i^{m_2}_{k}}\psi_{i^{1}_1,\ldots,i^{1}_k}\cdots\psi_{i^{m_2}_1,\ldots,i^{m_2}_k}\overline{\psi_{i^{\sigma^{(2)}_1(1)}_1,\ldots,i^{\sigma^{(2)}_{k-1}(1)}_{k-1},i^{1}_k}}\cdots\overline{\psi_{i^{\sigma^{(2)}_1(m_2)}_1,\ldots,i^{\sigma^{(2)}_{k-1}(m_2)}_{k-1},i^{m_2}_k}} \\ \end{split} \end{equation} \end{proof} In other words, the map $\bigsqcup_{m=1}^{\infty}S_m^{k-1}/S_m\to I_k$ given by $s\mapsto f_s$ is a semigroup-homomorphism. Now we are ready to prove our main theorem: \begin{thm} $I_k$ is freely generated by the set \begin{equation} F:=\{f_{\Phi(\tilde{G})}|\tilde{G}\in\Isoc(G)\} \end{equation} \end{thm} \begin{proof} In ref. \cite{HWW} it was shown that the set \begin{equation} \{f_s|s\in S_m^{k-1}/S_m\}=\{f_{\Phi(\tilde{G})}|\tilde{G}\in\Iso(G,m)\} \end{equation} forms a basis of the degree $m$ homogenous subspace of $I_{k,n}$ (when represented as polynomials) if $n\ge(m,\ldots,m)$. Therefore, it is also a basis of the degree $m$ homogenous subspace of $I_k$. As $I_k$ is the direct sum of its homogenous subspaces, we conclude that $\{f_{\Phi(\tilde{G})}|\tilde{G}\in\Iso(G)\}$ is a basis of $I_k$. Note that this also implies that the map $\tilde{G}\mapsto f_{\Phi(\tilde{G})}$ is injective. An element of the form $f_s$ where $s\in S_m^{k-1}/S_m$ can be uniquely written as the product of some elements of $F$. 
Indeed, $\Phi^{-1}(s)$ is a covering of $G$, which can be uniquely written as a disjoint union of connected coverings $\tilde{G}_1,\ldots,\tilde{G}_d$ (up to isomorphism and ordering), and therefore \begin{equation} f_s = f_{\Phi(\tilde{G}_1\sqcup\cdots\sqcup\tilde{G}_d)} = f_{\Phi(\tilde{G}_1)\star\cdots\star\Phi(\tilde{G}_{d})} = f_{\Phi(\tilde{G}_1)}\cdots f_{\Phi(\tilde{G}_{d})} \end{equation} \end{proof} \section{Conclusion}\label{sec:conclusion} We have shown that the inverse limit $I_k$ of the algebras of LU-invariant polynomials of pure states of $k$-partite quantum systems with finite dimensional Hilbert spaces is free, and an algebraically independent generating set can be given in terms of finite connected coverings of a graph with a single vertex and $k-1$ loops. The number of homogenous degree $2d$ polynomials in the algebraically independent generating set equals the number of isomorphism classes of $d$-fold connected coverings, which in turn equals the number of conjugacy classes of index $d$ subgroups of a free group on $k-1$ generators. In light of the close relationship between LU-equivalence classes of mixed states of a $k$-particle quantum system and those of pure states of a $k+1$-particle quantum system \cite{Albeverio,Vrana}, one should be able to interpret our result in the context of mixed states. This can be done as follows. Observe that each term on the right hand side of eq. (\ref{eq:f}) depends only on the reduced density matrix obtained when we trace over the last subsystem. Therefore it is easy to translate the result to the case of mixed state local unitary invariants. Let $I^{mixed}_{k}$ denote the inverse limit of the algebras of LU-invariants of mixed states over $k$-partite quantum systems with finite dimensional Hilbert space as in \cite{Vrana}. Let $G$ be the graph with a single vertex and $k$ directed labelled edges.
To a connected covering $\tilde{G}\in\Isoc(G,m)$ we associate the following invariant with $[\sigma_1,\ldots,\sigma_{k}]=\Phi(\tilde{G})$. \begin{equation} f_{[\sigma_1,\ldots,\sigma_{k}]}(\varrho)=\sum_{i^{1}_{1},\ldots,i^{m}_{k}}\varrho_{i^{1}_1,\ldots,i^{1}_k,i^{\sigma_1(1)}_1,\ldots,i^{\sigma_{k}(1)}_{k}}\cdots\varrho_{i^{m}_1,\ldots,i^{m}_k,i^{\sigma_1(m)}_1,\ldots,i^{\sigma_{k}(m)}_{k}} \end{equation} where \begin{equation} \varrho=\sum_{\substack{i_1,\ldots,i_k \\ j_1,\ldots,j_k}}\varrho_{i_1,\ldots,i_k,j_1,\ldots,j_k}e_{i_1}\otimes\cdots\otimes e_{i_k}\otimes e^*_{j_1}\otimes\cdots\otimes e^*_{j_k} \end{equation} is an arbitrary mixed state. Explicit descriptions of the algebras $I_{k,n}$ are known in only a limited number of cases, including $k=2$ with $n$ arbitrary, $k=3$, $n=(2,2,2)$ \cite{MW} and $k=4$, $n=(2,2,2,2)$ \cite{Wallach}. It should be noted that for any $k\in\mathbb{N}$ and $n\in\mathbb{N}^k$, $I_{k,n}$ is a quotient of $I_k$. It would be interesting to determine the kernels of the quotient maps in each case. We would like to emphasize that in spite of our lack of knowledge about the structure of every single $I_{k,n}$, fortunately the generators of $I_k$ can be directly interpreted as generators of the algebras of invariants of pure states of arbitrary $k$-partite quantum systems.
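These mixed-state invariants can be evaluated by the same kind of contraction. The sketch below (illustrative only, with $0$-based permutations) checks the simplest instances: for $k=1$ and a $3$-cycle one obtains $\mathrm{Tr}(\varrho^3)$, and for $m=1$ one obtains the trace.

```python
import string
import numpy as np

def mixed_invariant(rho, sigmas):
    """f_{[sigma_1,...,sigma_k]}(rho); rho has shape (n_1,...,n_k,n_1,...,n_k),
    sigmas is a list of k permutations of {0,...,m-1} (0-based)."""
    k, m = rho.ndim // 2, len(sigmas[0])
    letters = string.ascii_letters
    idx = lambda l, j: letters[l * k + j]     # the index i^l_j
    subs = [''.join(idx(l, j) for j in range(k))
            + ''.join(idx(sigmas[j][l], j) for j in range(k))
            for l in range(m)]
    return np.einsum(','.join(subs) + '->', *([rho] * m))

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T                      # k = 1: a single 4-level subsystem
rho /= np.trace(rho)
val3 = mixed_invariant(rho, [[1, 2, 0]])  # m = 3, sigma_1 a 3-cycle
```

Here `val3` reproduces $\mathrm{Tr}(\varrho^3)$, as expected from the single loop of colour $1$ running through the three sheets.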
\section{Introduction} In recent years there has been much interest in various loop models. Loop models are graphical models defined by drawing closed loops along the bonds of the underlying lattice. The loops may come in $n$ different flavours (colours). No two loops can share a bond, while sharing a vertex is generally allowed. Explicitly, the bond configurations are such that each vertex houses an even number -- possibly zero -- of bonds of each colour. Each loop configuration is assigned a ``weight'' that depends on the number of participating vertices of each type. In the cases of interest these weights are actually positive; hence, at least in finite volume, they define a {\em probability measure\/} on the set of all loop configurations. Thus, for a finite lattice the loop partition function may be written as: \begin{equation} \label{part} Z = \sum_{\mathcal{G}}R^{b} \nu_1^{m_1} \nu_2^{m_2} \ldots \nu_V^{m_V} \, , \end{equation} with the sum running over all allowed loop configurations $\mathcal{G}$. Here $b$ is the total number of participating bonds, $m_i$ ($i=1,\ldots,V$) is the number of vertices of type $i$ and $\nu_i$ is the corresponding vertex factor.\footnote{Many authors consider an additional factor of the form $F_1^{l_1} F_2^{l_2} \ldots F_n^{l_n}$ where $F_i$ is a ``loop fugacity'' and $l_i$ is the number of loops of the $i$-th colour. Although the objects $l_i$ are unambiguous when self-intersections are forbidden, in the general case they are not easily defined. Nevertheless, the essence of such a term -- at least in the case of integer $F_i$ -- is captured by the introduction of additional colours.} This definition is slightly different from the one typically found in the literature ({\em cf.}\ Refs.~\cite{Kondev-96:loops,Warnaar-93}) since it also includes the bond fugacity $R$.
Although strictly speaking it is not needed (since the bond fugacity can always be incorporated into the vertex factors), we find it convenient to keep $R$ as a separate parameter. We remark that by relabeling the empty bonds as an additional colour, these models may be formally regarded as ``fully packed''. The reason loop models have been extensively studied is that they appear quite naturally as representations (often approximate) of various statistical-mechanical models. These include, among others, the Ising model (this approach dates back to Kramers and Wannier~\cite{KW} and was later used to solve the model exactly~\cite{KacWard,Vdov}), the Potts model (polygon expansion~\cite{Baxter}), $\mathrm O(n)$ spin models~\cite{Domany-81,Nienhuis:On,Nienhuis:On_square,BN:On,BatNW:On}, 1-D quantum spin models~\cite{Aizenman-94}, a supersymmetric spin chain~\cite{SuperSpin}, the $q$-colouring problem~\cite{Baxter:col,Kondev:4col} and polymer models~\cite{JK-98,JK-99}. Here we consider the loop models explicitly related to the high-temperature expansions of the standard $\mathrm O(n)$, corner-cubic (AKA diagonal-cubic) and face-cubic spin models. This is, in fact, the same set of models that was treated in Ref.~\cite{Domany-81}. However, in this paper, we provide a careful treatment of the large $n$ cases -- and we treat the standard $d$-dimensional lattices. As a result, we arrive at quite unexpected results concerning the behaviour of these models in the high fugacity region. In particular, despite the considerable attention the subject has received, most authors (with certain exceptions, e.g.\ \cite{KW,KacWard,Vdov,Nienhuis:On_square,SuperSpin}) chose to consider models where only loops of {\em different\/} colours are allowed to cross each other (if at all). On the other hand, spin systems (in the high-temperature approximation) naturally generate self-intersecting loops.
In order to avoid this issue, an exorbitant amount of work has been done on lattices with coordination number $z=3$ (e.g.\ the honeycomb lattice), where loop intersections simply cannot occur. Overall this approach appears to be justified since one is usually interested in the critical properties of the underlying spin systems. Indeed, consider the archetypal $n$-component spin system with $\abs{\mathbf S_i} \equiv 1$ and let us write $\exp \bigl(\lambda \sum_{\langle i, j \rangle} \mathbf{ S}_i \! \cdot \mathbf{ S}_j \bigr) \sim \prod_{\langle i, j \rangle} \left(1 + \lambda \mathbf{ S}_i \! \cdot \mathbf{ S}_j \right)$. Although as a spin system the right hand side makes strict sense only if $\abs{\lambda} \leq 1$ (the ``physical regime''), the associated loop model turns out to be well defined for all $\lambda$. Since the systems can be identified for $\abs{\lambda} \ll 1$, it can be argued that the critical properties of the spin system and those of the loop model are the same and are independent of the underlying lattice. Notwithstanding, for $n \gg 1$ no phase transition in the actual spin system is anticipated until temperatures of order $1/n$ (i.e. $\lambda \sim n$), which we note is well outside the physical regime of the loop model. At first glance this appears to be borne out: the natural parameter in the loop model (as well as in the spin system) seems to be $\lambda/n$. Thus, the loop model could, in principle, capture the essential features of the spin system up to -- and including -- the critical point. We have found such a picture to be overly optimistic. Indeed, depending on the specific details, e.g.\ the lattice structure, there may be a phase transition in the region $1\ll \abs{\lambda} \ll n$ (specifically, $\lambda \sim n^{3/4}$), well outside the physical regime but well before the validity of the approximation was supposed to break down.
Furthermore, it would seem that both the temperature scale and the nature of the transition (not to mention the existence of the transition) depend on such details. Finally, we shall demonstrate that in contrast to their spin system counterparts, the large-$n$ models have {\em no\/} phase transition -- for any value of bond fugacity -- associated with the formation of large loops (i.e.\ divergent loop correlations). The structure of this paper is as follows. \Sref{sec:models} is dedicated to the description of the spin models and their connection to the loop models. Specific results for those models with the two-dimensional spin variable ($n=2$) are presented in \Sref{sec:n=2}. Finally, \Sref{sec:n_large} contains the discussion of reflection positivity as well as some results concerning phase transitions in the large $n$ case. \vfill \section{$n$-component models} \label{sec:models} \subsection{$\mathrm O(n)$ model} \label{sec:On} Let us start by considering the $\mathrm O(n)$ model on some finite lattice $\Lambda \subset {\mathbb Z}^d$ {\em defined\/} by the following {partition function}: \begin{equation} Z = \mathrm{Tr}\prod_{\langle i, j \rangle} \left(1 + \lambda \mathbf{S}_i \cdot \mathbf{S}_j \right) \label{O(n)_part} \end{equation} with $\mathbf{S}_i \in {\mathbb R}^n$, $\abs{\mathbf{S}_i} = 1$ and $\mathrm{Tr}$ denoting normalised summation (integration) over all possible spin configurations. The corresponding loop model is readily obtained along the lines of a typical ``high-temperature'' expansion. We write $\mathbf{S}_i \cdot \mathbf{S}_j = S_i^{(1)} S_j^{(1)} + \ldots + S_i^{(n)} S_j^{(n)}$ and define $n$ different colours (each associated with a coordinate direction of the $\mathrm O(n)$-spins). Expanding the product, we have $n$ choices for each bond plus a possibility of a vacant bond. 
Thus, various terms are represented by $n$-coloured bond configurations: $\mathcal{G} = (\mathcal{G}_1, \ldots, \mathcal{G}_n)$ with $\mathcal{G}_\ell$ denoting those bonds where the term $S_i^{(\ell)} S_j^{(\ell)}$ has been selected. Clearly, the various $\mathcal{G}_\ell$'s are pairwise (bond) disjoint. Thus, for each $\mathcal{G}$ we obtain the weight \begin{equation} W_\mathcal{G} = \mathrm{Tr} \prod_{\langle i, j \rangle \in \mathcal{G}_1} \lambda S_i^{(1)} S_j^{(1)} \ldots \prod_{\langle i, j \rangle \in \mathcal{G}_n} \lambda S_i^{(n)} S_j^{(n)}\,. \label{bond_weight} \end{equation} On the basis of elementary symmetry considerations it is clear that $W_\mathcal{G} \neq 0$ if and only if each vertex houses an even number (which could be zero) of bonds of each colour. Once this constraint is satisfied, we get an overall factor of $\lambda^{b(\mathcal{G})}$ -- with $b(\mathcal{G})$ being the total number of participating bonds -- times the product of the {\em vertex factors\/} obtained by performing the appropriate $\mathrm O(n)$ integrals. The details and results of these calculations are presented in Appendix~\ref{sec:KMF}. In general, it is seen that the vertex factors depend only on the number of participating colours and the number of bonds of each colour emanating from a given vertex, i.e.\ not on the particular colours that were involved nor on the directions of these bonds. In the case of a square lattice ($d=2$) we have only three main types of (non-empty) vertices: those where two bonds of the same colour join together, those with two pairs of bonds of two different colours and those with four bonds of the same colour. These have weights of $1/n$, $1/n(n+2)$ and $3/n(n+2)$, respectively. Rescaling the bond fugacity from $R=\lambda$ to $R=\lambda/{n}$ we arrive at the vertex weights $\nu_1=1$, $\nu_2 =n/(n+2)$ and $\nu_3 = 3n/(n+2)$.
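Appendix~\ref{sec:KMF} is not reproduced in this excerpt, but the three quoted factors are just low-order moments of the uniform measure on the sphere $S^{n-1}$: $\langle (S^{(1)})^2\rangle = 1/n$, $\langle (S^{(1)})^2 (S^{(2)})^2\rangle = 1/n(n+2)$ and $\langle (S^{(1)})^4\rangle = 3/n(n+2)$. A short Monte Carlo sanity check of ours (illustrative only), here for $n=3$:

```python
import numpy as np

# Uniform samples on S^(n-1): normalise standard Gaussian vectors.
n, N = 3, 200_000
rng = np.random.default_rng(0)
g = rng.standard_normal((N, n))
S = g / np.linalg.norm(g, axis=1, keepdims=True)

m2 = np.mean(S[:, 0] ** 2)                  # expect 1/n
m22 = np.mean(S[:, 0] ** 2 * S[:, 1] ** 2)  # expect 1/(n(n+2))
m4 = np.mean(S[:, 0] ** 4)                  # expect 3/(n(n+2))

assert abs(m2 - 1 / n) < 5e-3
assert abs(m22 - 1 / (n * (n + 2))) < 5e-3
assert abs(m4 - 3 / (n * (n + 2))) < 5e-3
```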
The factor of 3 relating $\nu_2$ to $\nu_3$ has an interesting interpretation (which, as shown in Appendix~\ref{sec:KMF}, turns out to be quite general). Indeed, each vertex of the third type may be decomposed into three different vertices as shown in \fref{O(n)_vertices}. Each of the new vertices is now assigned the same weight, namely $\nu_2$. We thus split each $\mathcal{G}$ into $3^{m_3}$ different graphs -- each of equal weight -- in which every vertex with four bonds now provides explicit instructions relating outgoing and incoming directions of an individual walk. \begin{figure}[hbt] \begin{center} \includegraphics[width=12cm]{On_vertices.eps} \caption{ Decomposition of a type 3 vertex into three new vertices in the two-dimensional O(n) loop model.} \label{O(n)_vertices} \end{center} \end{figure} Hence, in every such graph (now defined with the walking instructions encoded at every vertex) the individual loops are well defined. Furthermore, changing the colour of any loop does not change the weight of the graph. Thus we may write \begin{equation} Z = \sum_{\mathcal{K}} \left(\frac{\lambda}{n}\right)^{b(\mathcal{K})} \left(\frac{n}{n+2}\right)^{m(\mathcal{K})} n^{\ell(\mathcal{K})} \label{O(n)_loops} \end{equation} where the summation now takes place over all configurations $\mathcal{K}$ of colourless loop graphs in which every vertex housing four bonds is resolved by ``walking instructions'', and $\ell$ is the number of such loops (now defined completely unambiguously). In addition to the advantages of a manifestly colourless expression, the above permits continuation to non-integer $n$. We conclude this subsection with the following series of remarks and observations. \begin{itemize} \item{As shown in Appendix~\ref{sec:KMF}, such vertex decomposition works, in fact, for an arbitrary lattice in an arbitrary number of spatial dimensions (with the proper weights for vertices housing 6, 8, etc.
bonds).} \item{Notice that only the $\abs{\lambda} \leq 1$ region of the parameter space is ``physical'' (in the sense of the underlying Hamiltonian: $-\beta H=\sum_{\langle i, j \rangle}\ln [1 + \lambda \mathbf{S}_i \cdot \mathbf{S}_j ]$), while for $\abs{\lambda} > 1$, presumably, no spin Hamiltonian can be written at all. The corresponding loop model, however, makes perfect sense in the entire parameter space.} \item{If we consider a 2D XY model ($n=2$), we notice that the factor $2^{\ell(\mathcal{K})}$ in \eref{O(n)_loops} can be obtained by assigning directions to the colourless loops. The above decomposition of type 3 vertices makes this procedure unambiguous. Having done that, we can turn this model into a random surface model by assigning heights to the plaquettes in such a way that a plaquette to the right of a directed bond is always one step higher than the plaquette to the left. Not surprisingly, this random surface model turns out to be identical to the one obtained by the standard means of Fourier-transforming the original weights in \eref{O(n)_part} ({\em cf.}\ Refs.~\cite{JKKN-77,Knops-77:XY-SOS}).} \item{Finally, it is worth mentioning that in the fully packed limit $R \to \infty$, the $n=2$ loop model on the square lattice (with the colour degrees of freedom being replaced by assigning directions to the loops) turns out to be nothing but the square ice model (i.e. the six-vertex model with all six weights being equal -- see Ref.~\cite{Baxter} for a definition of this model). The mapping between the vertices of these models is shown in \fref{6-vertex}. We remark that the perspective of the ice model (and, for that matter, other six-vertex models) as a two-colour loop model provides additional flexibility in the analysis of these systems.
These issues will be pursued in a future publication.} \end{itemize} \begin{figure}[htb] \begin{center} \includegraphics[width=13cm]{6-vertex.eps} \caption{ Mapping of a fully packed $\mathrm O(2)$ model onto the square ice model.} \label{6-vertex} \end{center} \end{figure} \subsection{Corner-cubic model} \label{sec:corner-cubic} We now consider the following ``discretised'' modification of the above $\mathrm O(n)$ model (given by \eref{O(n)_part}): \begin{equation} Z = \mathrm{Tr}\prod_{\langle i, j \rangle} \left[1 + \frac{\lambda}{n} \left({\sigma_i^{(1)}\sigma_j^{(1)}+ \sigma_i^{(2)}\sigma_j^{(2)}+ \ldots + \sigma_i^{(n)}\sigma_j^{(n)}}\right)\right] \label{corner_part} \end{equation} with $\sigma_i^{(k)} = \pm 1$. For small values of $\lambda$ this model may be viewed as a high-temperature limit of a corner-cubic model. Indeed, it describes an interaction of the type in \eref{O(n)_part} where spins $\mathbf{S}_i$ are allowed to point at the corners of an $n$-dimensional hypercube (with the origin being placed at the centre of the cube). Mapping it onto an $n$-colour loop model is almost identical to the $\mathrm O(n)$ case, with the only difference being the vertex factor: $\left({\sigma_i^{(1)}}\right)^{2k_1} \ldots \left({\sigma_i^{(n)}}\right)^{2k_n} = 1$. We can choose to associate the weight of $R = {\lambda}/{n}$ with each bond, thus making all vertex weights $\nu_i$ equal to unity. In other words, the resulting loops in this model do not interact with each other via vertices (there is still a hard-core bond repulsion, however). The partition function is then simply \begin{equation} Z = \sum_{\mathcal{G}} \left(\frac{\lambda}{n}\right)^{b(\mathcal{G})}. \label{bonds-corner} \end{equation} \begin{figure}[htb] \begin{center} \includegraphics[width=13cm]{3cross.eps} \caption{ A fragment of a possible two-dimensional loop configuration for an intersecting loop model.
Three different possible colourings (out of the total of 32 in the case of $n=2$) for a loop model of {\em corner-cubic\/} type are represented here as (a), (b) and (c). For the {\em face-cubic\/} type, only monochromatic clusters like (a) remain allowed.} \label{cross-cluster} \end{center} \end{figure} \subsection{Face-cubic model} \label{sec:face-cubic} Finally, let us examine a different model with cubic symmetry given by the following partition function: \begin{equation} Z = \mathrm{Tr}\prod_{\langle i, j \rangle} \left[1 + {\lambda}\left( u_i^{(1)} u_j^{(1)} + u_i^{(2)} u_j^{(2)} + \ldots + u_i^{(n)} u_j^{(n)}\right)\right]\,. \label{face-cubic} \end{equation} Here $u_i^{(k)} = 0,\pm 1$, and for a given site $i$ exactly one of $u_i^{(k)}$ ($k = 1, 2, \ldots, n$) has a non-zero value. In fact, one may think of the $u$'s as components of an $n$-dimensional unit vector that is only allowed to point along the coordinate axes (or from the centre to the faces of an $n$-dimensional hypercube, hence the name face-cubic). While the corner-cubic model described earlier had $2^n$ degrees of freedom per site (the number of corners of a hypercube), the present model has only $2n$ such degrees of freedom (the number of faces). Once again, the corresponding loop model is obtained by expanding the product in \eref{face-cubic} and then summing the resulting terms over all possible values of the $u$'s. But since for each site $i$ only one of the spin components $u_i^{(k)} \neq 0$ at a time, we notice that no terms that mix different $k$'s survive. In terms of the resulting loops this means hard-core repulsion of different colours: only loops of the same colour can share a vertex. The vertex factors are now: zero for any vertex with multiple colours and $1/n$ for vertices with two or more bonds of the same colour; the bond fugacity is given by $R = \lambda$.
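Both cubic loop representations can be checked against their parent spin traces on a single plaquette. The sketch below (ours, for illustration; $n = 2$ on a $4$-cycle) compares the traces \eref{corner_part} and \eref{face-cubic} with the corresponding loop sums: bond weight $\lambda/n$ and unit vertex factors in the corner-cubic case, and bond weight $\lambda$ with a factor $1/n$ per occupied single-colour vertex (zero for mixed vertices) in the face-cubic case:

```python
from itertools import product

lam, n = 0.7, 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a single square plaquette

def degrees(colours):
    """Per-site, per-colour bond counts; colours[e] in {None, 0, 1}."""
    deg = [[0] * n for _ in range(4)]
    for c, (i, j) in zip(colours, edges):
        if c is not None:
            deg[i][c] += 1
            deg[j][c] += 1
    return deg

def loop_sum(R, vertex_factor):
    """Sum over n-coloured bond configurations with even degrees."""
    Z = 0.0
    for colours in product((None, 0, 1), repeat=len(edges)):
        deg = degrees(colours)
        if any(d % 2 for site in deg for d in site):
            continue
        w = R ** sum(c is not None for c in colours)
        for site in deg:
            w *= vertex_factor(site)
        Z += w
    return Z

def face_vertex(site):
    used = [c for c in range(n) if site[c] > 0]
    if len(used) > 1:
        return 0.0          # hard-core colour repulsion at vertices
    return 1.0 / n if used else 1.0

def trace(states, R_bond):
    """Normalised trace of prod over bonds of (1 + R_bond * S_i . S_j)."""
    Z = 0.0
    for cfg in product(states, repeat=4):
        w = 1.0
        for i, j in edges:
            w *= 1.0 + R_bond * sum(cfg[i][k] * cfg[j][k] for k in range(n))
        Z += w
    return Z / len(states) ** 4

corner_states = list(product((-1, 1), repeat=n))  # (sigma, tau) per site
face_states = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # axis unit vectors

# corner-cubic: R = lam/n, all vertex weights equal to one
assert abs(trace(corner_states, lam / n) - loop_sum(lam / n, lambda s: 1.0)) < 1e-12
# face-cubic: R = lam, factor 1/n per occupied single-colour vertex
assert abs(trace(face_states, lam) - loop_sum(lam, face_vertex)) < 1e-12
```

On a single plaquette both sums happen to equal $1 + \lambda^4/8$, since the only non-empty even configurations are the two monochromatic loops.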
\section{Results for the $n=2$ case} \label{sec:n=2} \subsection{The $n=2$ models with cubic symmetry, Ashkin--Teller and random surface models} \label{sec:AT} In this section we shall restrict our attention to the models with cubic symmetries. First, let us slightly change notation for convenience: let $\sigma_i^{(1)}=\sigma_i$ and $\sigma_i^{(2)}=\tau_i$ for the corner-cubic model, while $u_i^{(1)}=u_i$ and $u_i^{(2)}=v_i$ for the face-cubic model. The corresponding partition functions are then written as \begin{equation} Z_\mathrm{CC} = \mathrm{Tr}\prod_{\langle i, j \rangle} \left[1 + \frac{\lambda}{2} \left({\sigma_i \sigma_j + \tau_i \tau_j}\right)\right] \label{CC2} \end{equation} and \begin{equation} Z_\mathrm{FC} = \mathrm{Tr}\prod_{\langle i, j \rangle} \left[1 + {\lambda}\left( u_i u_j + v_i v_j\right)\right]\,. \label{FC2} \end{equation} While we have seen that the loop models generated by these partition functions are very different, the spin models themselves turn out to be identical. Indeed, \eref{FC2} is obtained from \eref{CC2} by the following transformation: $u_i = (\sigma_i + \tau_i)/2$, $v_i = (\sigma_i - \tau_i)/2$. This is equivalent to a $45^\circ$ rotation in the spin space (along with a $\sqrt{2}/2$ rescaling) and is very specific to the $n=2$ case. In turn, both models are equivalent to the Ashkin--Teller model~\cite{AT} with a particular choice of parameters that will be detailed in \Sref{sec:AT_duality}. The two-colour loop models generated by \eref{CC2} and \eref{FC2} are given by the following sets of parameters in \eref{part}: for the corner-cubic model, $R=\lambda/2$ with all vertices having weight one; for the face-cubic model, $R=\lambda$ with all multi-colour vertices given weight zero and all other non-empty vertices given weight one half.
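This bond-by-bond equivalence is trivial to verify exhaustively (an illustrative check of ours): under $u_i=(\sigma_i+\tau_i)/2$, $v_i=(\sigma_i-\tau_i)/2$ one has $u_iu_j+v_iv_j=(\sigma_i\sigma_j+\tau_i\tau_j)/2$, so the bond factors in \eref{CC2} and \eref{FC2} coincide term by term, and the image of every corner state is a valid face state:

```python
from itertools import product

# u = (sigma + tau)/2, v = (sigma - tau)/2: a 45-degree rotation (with
# a sqrt(2)/2 rescaling) mapping corner states onto face states.
for s1, t1, s2, t2 in product((-1, 1), repeat=4):
    u1, v1 = (s1 + t1) / 2, (s1 - t1) / 2
    u2, v2 = (s2 + t2) / 2, (s2 - t2) / 2
    # the face-cubic bond term is half the corner-cubic one
    assert u1 * u2 + v1 * v2 == (s1 * s2 + t1 * t2) / 2
    # exactly one of (u, v) is nonzero: a valid face-cubic state
    assert (u1 == 0) != (v1 == 0)
```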
Turning our attention to the particular case of two spatial dimensions, we remark that in the former model one can sum over all possible colourings to obtain the following result for the partition function: \begin{equation} Z = \sum_{\mathcal{G'}} \left(\frac{\lambda}{2}\right)^{b(\mathcal{G'})} 2^{f(\mathcal{G'})} \label{FK-corner} \end{equation} with ${b(\mathcal{G'})}$ being the total number of occupied (colourless) bonds and ${f(\mathcal{G'})}$ being the total number of {\em faces\/} in the clusters they form. The number of faces is the minimum number of bonds that one must remove in order for the remaining clusters to be tree-like. For example, the cluster in \fref{cross-cluster} has five {\em faces\/}, while it can at most consist of four {\em loops\/}. Curiously enough, this result appears to have no simple generalisation for $n>2$. Moreover, the two-dimensional loop model derived from the corner-cubic model apparently cannot be mapped directly onto a random surface model. However, the other loop representation (the one obtained via expansion of the face-cubic model) does correspond to a random surface model. Indeed, consider the following ``recipe'': take a loop configuration generated by a particular term in the expansion of \eref{FC2}. Let red be the colour of loops originating from $u$'s, while blue corresponds to $v$'s. Take all plaquettes in the outermost region to be at height zero (these plaquettes are said to form a substrate). On this substrate we have clusters of loops. The outermost boundaries of these clusters are themselves closed loops. The plaquettes immediately adjacent to these boundaries are assigned the height of $+1$ if the loop forming the boundary is red, or $-1$ if it is blue.
These plaquettes, along with any other plaquette accessible from them without crossing coloured bonds, are said to form a plateau and thus have the same height.\footnote{The strict definition is as follows: plaquettes A and B are said to belong to the same plateau if and only if there exists an unbroken path along the bonds of a {\em dual\/} lattice that connects the centre of plaquette A to the centre of plaquette B without crossing a single coloured bond of the direct lattice.} Inside such a plateau region there may be other loop clusters, which may or may not touch the boundary of a plateau (only corners are allowed to touch, since no bond sharing between the loops is possible). Every such cluster is now treated in the same way: its boundary defines the ``secondary'' plateau with the height being that of the ``primary'' plateau $\pm 1$ depending on the colour of the boundary. This procedure is repeated until all plaquettes are assigned their heights. As an example, consider the cluster in \fref{cross-cluster}(a), which may now only consist of the bonds of a single colour. If this were a red cluster, the heights would be $+1$ for the plaquettes 1-4 and $+2$ for the plaquette 5. In fact, this description is essentially identical to that given in Ref.~\cite{Abraham-88} in the context of the wetting transition, the only difference being that we allow two-coloured clusters instead of single-coloured ones, and therefore the heights in our case may be both positive and negative.\footnote{ This random surface model, however, has a few important differences from those of a more conventional type (like the one obtained for the $\mathrm O(2)$ case). Firstly, due to an irreducible four-leg vertex factor, it cannot be described by a nearest-neighbour Hamiltonian (i.e. a Hamiltonian that depends only on the height difference of neighbouring plaquettes).
Secondly, since no directions are assigned to the loops separating the plateaux, there is no way of deciding on the {\em sign\/} of their relative height difference without going through the necessary construction steps starting from the {\em outside\/}. By contrast, the $\mathrm O(2)$-related random surface model can be constructed starting from {\em any\/} plaquette -- a particular choice simply determines the overall additive constant. In this sense the mapping between the present random surface and the loop model is {\em nonlocal\/}.} The important feature of this random surface model is that it must have a phase transition whenever the underlying Ashkin--Teller model undergoes a transition. \subsection{Random cluster representation} Let us derive yet another graphical representation for the $n=2$ (Ashkin--Teller) model considered in the previous section; this time it will be a {\em random cluster\/} representation closely resembling the FK representation for the Potts model. We start from \eref{corner_part} (with $n=2$), and with the help of the identity ${\sigma_i} {\sigma_j} = 2 \delta_{{\sigma_i} {\sigma_j}} - 1$ rewrite it as follows: \begin{equation} Z \propto \sum_{\sigma}\prod_{\langle i, j \rangle} \left[1 + {v} \left({\delta_{{\sigma_i} {\sigma_j}} + \delta_{{\tau_i} {\tau_j}}}\right)\right] \label{AT_part} \end{equation} with $v = {\lambda}/{(1-\lambda)}$. The random cluster representation is generated by evaluating the product over all bonds in \eref{AT_part} and then summing over the possible values of the $\sigma$'s and $\tau$'s. If we think of the bonds originating from the $\sigma$ variables as green (g), and the bonds originating from the $\tau$ variables as orange (o), then each of the resulting terms in the partition function can be graphically represented as a collection of green and orange clusters as well as empty sites. The clusters of different colours may share sites, but not bonds.
Denoting the configurations of green and orange bonds as $\omega_\mathrm{g}$ and $\omega_\mathrm{o}$ respectively, we can then write the partition function as \begin{equation} Z \propto \sum_{\omega} {v}^{b({\omega_\mathrm{g}})} {v}^{b({\omega_\mathrm{o}})} 2^{c({\omega_\mathrm{g}})}2^{c({\omega_\mathrm{o}})} \label{AT_FK} \end{equation} with ${b({\omega})}$ being the total number of bonds (of a specified colour), and ${c({\omega})}$ being the number of corresponding connected components. The rule for counting the connected components is as follows: every site that is not a part of a green cluster is considered to be a separate connected component for the purposes of ${c({\omega}_\mathrm{g})}$, even if this site is a part of an orange cluster, and vice versa. In particular, the quantities ${v}^{b({\omega_\mathrm{g}})}2^{c({\omega_\mathrm{g}})}$ and ${v}^{b({\omega_\mathrm{o}})}2^{c({\omega_\mathrm{o}})}$ are to be interpreted exactly as in the usual random cluster models. \subsection{Self-duality and criticality at $\lambda = 1$} The duality relations for such a random cluster representation of the standard AT model in two dimensions were established in Refs.~\cite{CM1} and \cite{PfV}. First, let us write the generic AT Hamiltonian as \begin{equation} -\beta H = \sum_{\langle i, j \rangle} \left[K \left({\delta_{{\sigma_i} {\sigma_j}} + \delta_{{\tau_i} {\tau_j}}}\right) + L{\delta_{{\sigma_i} {\sigma_j}}\delta_{{\tau_i} {\tau_j}}}\right]. \label{AT_hamlt} \end{equation} The graphical representation for the partition function is then obtained along the lines of the previous section. The only difference is that this time double-coloured (i.e. green and orange at the same time) bonds are also allowed.
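The counting conventions behind \eref{AT_FK} can be verified by brute force on a single plaquette (an illustrative sketch of ours; a $4$-cycle with $\lambda = 0.4$, so $v = \lambda/(1-\lambda) = 2/3$):

```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a single plaquette
lam = 0.4
v = lam / (1 - lam)

def n_components(bond_list):
    """Connected components on the 4 sites; isolated sites count."""
    parent = list(range(4))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in bond_list:
        parent[find(i)] = find(j)
    return len({find(s) for s in range(4)})

# Spin side: the product over bonds of [1 + v(delta_sigma + delta_tau)]
spin = 0.0
for sig in product((-1, 1), repeat=4):
    for tau in product((-1, 1), repeat=4):
        w = 1.0
        for i, j in edges:
            w *= 1.0 + v * ((sig[i] == sig[j]) + (tau[i] == tau[j]))
        spin += w

# Cluster side: every bond is vacant, green or orange
cluster = 0.0
for colours in product((None, 'g', 'o'), repeat=len(edges)):
    green = [e for c, e in zip(colours, edges) if c == 'g']
    orange = [e for c, e in zip(colours, edges) if c == 'o']
    b = len(green) + len(orange)
    cluster += v ** b * 2 ** n_components(green) * 2 ** n_components(orange)

assert abs(spin - cluster) < 1e-6 * cluster
```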
The graphical weight of a given bond configuration $\omega$ is \begin{equation} W(\omega) = {A}^{b({\omega_\mathrm{g}}\vee{\omega_\mathrm{o}})} {B}^{b({\omega_\mathrm{g}}\wedge{\omega_\mathrm{o}})} 2^{c({\omega_\mathrm{g}})}2^{c({\omega_\mathrm{o}})} \label{AT_random} \end{equation} where \begin{equation} A = \mathrm{e}^K - 1\:\:\:\:\: \mbox{ and }\:\:\:\:\: B = \frac{\mathrm{e}^{L+2K} - 2\mathrm{e}^K + 1}{\mathrm{e}^K - 1}\,. \label{notations} \end{equation} Observe that \eref{AT_FK} describes a particular case of this model provided that $A = v \equiv \lambda / (1 - \lambda)$ while $B=0$. \label{sec:AT_duality} The dual model is obtained by placing an orange bond between the sites of the dual lattice whenever it does not cross a green bond of the original lattice. Correspondingly, the green bonds on the dual lattice are dual to the original orange bonds. The duality relations are given by: \begin{equation} A^* = 2 B^{-1}\:\:\:\:\: \mbox{ and }\:\:\:\:\: B^* = 2 A^{-1}, \label{duality} \end{equation} and the model becomes self-dual when $AB = 2$. This suggests that our model becomes self-dual at $\lambda = 1$ (or $v = \infty$). In order to show that it is indeed {\em exactly\/} self-dual, we shall apply the above duality transformation to the orange bonds only (${\omega_\mathrm{o}} \to {\omega_\mathrm{o}^*}$), leaving the green bonds intact. This results in having green bonds on both the original and the dual lattices. The green bonds on the dual lattice can then be split into those transversal to the original green bonds and those transversal to the previously vacant bonds, or symbolically ${\omega_\mathrm{o}^*} = {\Omega_\mathrm{g}} \vee \Omega_{\varnothing}$. Here ${\omega_\mathrm{o}^*}$ is the configuration of (green) bonds {\em dual\/} to the orange bonds while ${\Omega_\mathrm{g}}$ and $\Omega_{\varnothing}$ are the configurations of bonds {\em transversal\/} to the original green and vacant bonds respectively.
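Before proceeding, the bookkeeping in \eref{AT_random}--\eref{duality} admits a quick numerical check (ours, for illustration): given the spins, a bond contributes $1 + A(\delta_\sigma + \delta_\tau) + AB\,\delta_\sigma\delta_\tau$ (the vacant, green, orange and double-coloured bond states), which must resum to $\mathrm{e}^{K(\delta_\sigma+\delta_\tau)+L\delta_\sigma\delta_\tau}$; moreover, the map \eref{duality} is an involution with $AB=2$ as its fixed-point condition:

```python
import math

K, L = 0.8, 0.5
A = math.exp(K) - 1
B = (math.exp(L + 2 * K) - 2 * math.exp(K) + 1) / (math.exp(K) - 1)

# Per-bond check: the four graphical bond states resum to the
# Boltzmann factor of the Ashkin--Teller bond energy.
for ds in (0, 1):        # delta_{sigma_i sigma_j}
    for dt in (0, 1):    # delta_{tau_i tau_j}
        graphical = 1 + A * (ds + dt) + A * B * ds * dt
        assert abs(graphical - math.exp(K * (ds + dt) + L * ds * dt)) < 1e-12

# Duality is an involution, and AB = 2 is its fixed point:
A_star, B_star = 2 / B, 2 / A
assert abs(2 / B_star - A) < 1e-12 and abs(2 / A_star - B) < 1e-12
assert abs(A_star * B_star - 4 / (A * B)) < 1e-12  # A*B* = 2  <=>  AB = 2
```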
The corresponding weight is now given by: \begin{equation} W(\omega) = {v}^{b({\omega_\mathrm{g}})} \left({\frac{2}{v}}\right)^{b({\Omega_\mathrm{g}}) + b({\Omega_{\varnothing}})} 2^{c({\omega_\mathrm{g}})}\,2^{c({\Omega_\mathrm{g}} \vee \Omega_{\varnothing})}. \label{mixed_weight} \end{equation} We now observe that ${b({\Omega_\mathrm{g}})} = {b({\omega_\mathrm{g}})}$, and also that $b({\Omega_{\varnothing}}) \to 0$ as $v \to \infty$ (the original random cluster model becomes fully packed according to \eref{AT_FK}). Then the weight in this limit becomes simply \begin{equation} W = {2}^{b({\omega_\mathrm{g}})} \, 2^{c({\omega_\mathrm{g}})} \, 2^{c({\Omega_\mathrm{g}})}. \label{limiting_weight} \end{equation} The model described by such weights is manifestly self-dual since ${\omega_\mathrm{g}^*} = {\Omega_\mathrm{o}} \vee \Omega_{\varnothing} \to {\Omega_\mathrm{o}}$ and ${\Omega_\mathrm{g}^*} = {\omega_\mathrm{o}} \vee \omega_{\varnothing} \to {\omega_\mathrm{o}}$ as $v \to \infty$, and there is a symmetry between the green and the orange bonds. It is tempting to speculate that a phase transition occurs exactly at the self-dual point. Although this is plausible, it is not the only possibility. In particular, there may be a phase transition at some $\lambda_\mathrm{t} < 1$. However, we can say the following: If at $\lambda=1$ there is no magnetisation (i.e. percolation of green or orange bonds) then the theorem proved by two of us~\cite{CS:self-dual} applies; $\lambda=1$ is a critical point in the sense of infinite correlation length and infinite susceptibility. The only other possibility is positive magnetisation at $\lambda=1$ which implies a magnetic transition -- which could be continuous or first order -- at some $\lambda_\mathrm{t} \leq 1$. (In particular, this is shown to happen, with a first order transition for the large-$q$ versions of these models~\cite{BC}). 
Although we find these alternative scenarios unlikely, we have, in any case, established the existence of a transition in this model for some value of $\lambda$ between zero and one. \subsection{Speculative remarks on relation to the critical 4-state Potts model} As mentioned above, in our opinion the most likely scenario is that a phase transition occurs precisely at $\lambda=1$. The interesting question then is that of the universality class. Without any supporting mathematical statements, we suggest that {\em at\/} this point our model behaves similarly to the 4-state Potts ferromagnet at its critical point. In order to substantiate this claim, let us first recall the random cluster representation for a $q$-state Potts model: \begin{equation} Z = \sum_{\omega} {K}^{b({\omega})} q^{c({\omega})}. \label{Potts_FK} \end{equation} For $q \leq 4$ this model is universally accepted to have a continuous transition at the self-dual point $K=\sqrt{q}$. Thus for the $q=4$ model at the self-dual point we have \begin{equation} Z = \sum_{\omega} {2}^{b({\omega})} \,4^{c({\omega})}. \label{4-state} \end{equation} On the other hand, we can use \eref{limiting_weight} to rewrite the partition function of our model at $\lambda=1$ as follows: \begin{equation} Z \propto \sum_{\omega} {2}^{b({\omega})} \, 4^{c({\omega})} \, 2^{c({\Omega})-c({\omega})}. \label{stupid} \end{equation} The difference between the two models is in the last factor of $2^{c({\Omega})-c({\omega})}$ in \eref{stupid}. It is, however, reasonable to speculate that it can be neglected. Indeed, on average ${c({\Omega})=c({\omega})}$, and therefore one would expect the typical value of the difference ${c({\Omega})-c({\omega})}$ to be sublinear in the system size. By contrast, the individual terms $c({\Omega})$ and $c({\omega})$ indeed scale linearly with the size of the system so this correction may be ``unimportant''. 
This, however, does not mean that the two models approach the self-dual point in a similar fashion. In other words, we expect the exponents associated with the critical point itself (such as $\eta$ and $\delta$) of the two models to be the same, while this need not be true for the exponents associated with the {\em approach\/} to the critical point (such as $\alpha$, $\beta$ and $\nu$). In fact, the $\lambda=1$ point of our model may well be an edge of a critical Kosterlitz--Thouless phase, in which case the approach exponents would take on extreme values (zero or infinity). \section{Reflection positivity and phase transitions in the large $n$ limit} \label{sec:n_large} \subsection{Reflection positivity} \label{sec:RP} This section concerns the reflection positivity property of the loop models defined by \eref{part}, which in turn permits the analysis of their large $n$ limit. Let $\Lambda$ denote a $d$-dimensional torus. Here, and for the remainder of this paper, it will be assumed that the linear dimensions of $\Lambda$ are all the same, and are of the form $L = 2^{\ell}$. We denote by $N = L^{d}$ the number of sites in the torus. Let $\mathcal G$ denote the set of all possible loop configurations on $\Lambda$. Finally, let $\mathcal P$ denote a hyperplane perpendicular to one of the coordinate axes which cuts through the bonds parallel to this axis, dividing the torus into two equal parts. Let $\mathcal{G}_{\mathcal{L}}$ and $\mathcal{G}_{\mathcal{R}}$ be the bond configurations on the two sides of the ``cut'', with the bonds intersected by $\mathcal P$ belonging to {\em both\/} sets. Thus $\mathcal{G} = \mathcal{G}_{\mathcal{L}} \cup \mathcal{G}_{\mathcal{R}}$, while $\mathcal G_{\mathcal{P}} \equiv \mathcal{G}_{\mathcal{L}} \cap \mathcal{G}_{\mathcal{R}}$ contains only the intersected bonds.
We now define a map $\vartheta_{\!\mathcal{P}}: \mathcal{G}_{\mathcal{L}} \to \mathcal{G}_{\mathcal{R}}$ such that it simply reflects the configuration on the ``left'' to that on the ``right''. Let $f: \mathcal{G}_{\mathcal{R}} \to {\mathbb R}$ be a function that depends only on the bond configuration on the right and define $\vartheta_{\!\mathcal{P}} f({\mathfrak g_{\mathcal{ L}}}) \equiv f(\vartheta_{\!\mathcal{P}} (\mathfrak g_{\mathcal{L}}))$ for any $\mathfrak g_{\mathcal{L}} \in \mathcal{G}_{\mathcal{L}}$. Similarly, we can use $\vartheta_{\!\mathcal{P}}$ to map $\mathcal{G}_{\mathcal{R}} \to \mathcal{G}_{\mathcal{L}}$; with this convention $\vartheta_{\!\mathcal{P}}^2$ is the identity. A probability measure $\mu$ on the set $\mathcal{G}$ is said to be {\em reflection positive\/} if for every such $\mathcal{P}$ and any functions $f$ and $h$ as described above $\langle f \,\vartheta_{\!\mathcal{P}} f \rangle_{\mu} \geq 0$ and $\langle h \;\vartheta_{\!\mathcal{P}} f \rangle_{\mu} = \langle f \;\vartheta_{\!\mathcal{P}} h \rangle_{\mu}$. \begin{thm} \label{Theorem_4_1} The measures $\mu$ determined by the weights in \eref{part} are reflection positive on any even $d$-dimensional torus. \end{thm} \begin{pf*}{Proof.} Let $\Lambda$ denote one such torus and $\mathcal{P}$ denote one of the above described planes. Let $\mathfrak g_{\mathcal{P}} \in \mathcal{G}_{\mathcal{P}}$ be a configuration of bonds going through this plane. Assuming that $\mu(\mathfrak g_{\mathcal{P}}) \neq 0$, let us consider the measure $\mu(\cdot \mid\mathfrak g_{\mathcal{P}})$. Our claim is that this splits into two measures, which we will call $\mu_{\mathcal{L}}(\cdot \mid\mathfrak g_{\mathcal{P}})$ and $\mu_{\mathcal{R}}(\cdot \mid\mathfrak g_{\mathcal{P}})$, defined on $\mathcal{G}_{\mathcal{L}}$ and $\mathcal{G}_{\mathcal{R}}$ which are independent and identical under the reflection $\vartheta_{\!\mathcal{P}}$. 
Indeed, it is not hard to see that $\mu(\mathfrak g_{\mathcal{P}}) \neq 0$ if and only if $\mathfrak g_{\mathcal{P}}$ has an even number of bonds of each colour. In each half of the torus, the endpoints of these bonds serve as ``sources/sinks'' for bond configurations. In other words, a configuration in, say, $\mathcal G_{\mathcal{L}}$ must contain lines of the appropriate colour that pair up these sources. But, aside from having to satisfy these ``boundary conditions'', the weights are the same as given in \eref{part}. These two measures, defined accordingly on $\mathcal G_{\mathcal{L}}$ and $\mathcal G_{\mathcal{R}}$, are the above-mentioned $\mu_{\mathcal{L}}(\cdot \mid\mathfrak g_{\mathcal{P}})$ and $\mu_{\mathcal{R}}(\cdot \mid\mathfrak g_{\mathcal{P}})$ respectively. It is clear that if $\mathfrak g_{\mathcal{L}}\in \mathcal{G}_{\mathcal{L}}$ then \begin{equation} \mu_{\mathcal{L}}\left(\mathfrak g_{\mathcal{L}} \mid\mathfrak g_{\mathcal{P}}\right) = \mu_{\mathcal{R}}\left(\vartheta_{\!\mathcal{P}} (\mathfrak g_{\mathcal{L}})\mid\mathfrak g_{\mathcal{P}}\right)\,. \label{RP1} \end{equation} Furthermore, if $\mathfrak g_{\mathcal{P}}$ is a configuration and $\mathfrak g_{\mathcal{L}}$ is any configuration that agrees with $\mathfrak g_{\mathcal{P}}$ and has non-zero weight, then for every $\mathfrak g_{\mathcal{R}} \in \mathcal{G}_{\mathcal{R}}$ we see that \begin{equation} \mu_{\mathcal{R}}(\mathfrak g_{\mathcal{R}}\mid\mathfrak g_{\mathcal{P}}) = \mu(\mathfrak g_{\mathcal{R}}\mid \mathfrak g_{\mathcal{L}})\,.
\label{RP2} \end{equation} Thence, for every $f$ that is determined on $\mathcal{G}_{\mathcal{R}}$ we have \begin{equation} \begin{split} \langle f\;\vartheta_{\!\mathcal{P}}f \rangle_{\mu}& = \sum_{\mathfrak g_{\mathcal{P}}}\mu(\mathfrak g_{\mathcal{P}}) \langle f\mid \mathfrak g_{\mathcal{P}} \rangle_{\mu_{\mathcal{R}}} \langle\vartheta_{\!\mathcal{P}}f\mid \mathfrak g_{\mathcal{P}} \rangle_{\mu_{\mathcal{L}}} \\ & =\sum_{\mathfrak g_{\mathcal{P}}}\mu(\mathfrak g_{\mathcal{P}}) \langle f\mid \mathfrak g_{\mathcal{P}} \rangle_{\mu_{\mathcal{R}}}^{2} \end{split} \label{RP3} \end{equation} which cannot be negative. Similarly we get that $\langle h \;\vartheta_{\!\mathcal{P}} f \rangle_{\mu} = \langle f \;\vartheta_{\!\mathcal{P}} h \rangle_{\mu}$.\qed \end{pf*} One of the important consequences of reflection positivity is a Cauchy--Schwarz-type inequality: \begin{equation} \langle f \;\vartheta_{\!\mathcal{P}} h \rangle_{\mu} \leq \sqrt{\langle f \;\vartheta_{\!\mathcal{P}} f \rangle_{\mu} \langle h \;\vartheta_{\!\mathcal{P}} h \rangle_{\mu}}\,, \label{RP:CS} \end{equation} which in turn leads to the {\em chessboard estimates\/} to be described below. (The reader interested in a more detailed description of reflection positivity is referred to the review \cite{Shlosman:RP} and the references therein.) \subsection{Uniform exponential decay for large $n$} In this subsection we will consider some $n$--colour models with $n \gg 1$ and vertex factors that are uniformly bounded above and below independently of $n$: $0< c \leq \nu_1, \dots, \nu_m \leq C$. Examples include the $\mathrm O(n)$-type models and the corner-cubic models discussed in \Sref{sec:models}. However, the face-cubic model does not fall into this category since all the multi-coloured vertex factors vanish. It is no coincidence that we cannot treat these models: such models plainly have a colour-symmetry-broken phase at a high enough value of the bond fugacity (for brevity we omit a formal proof).
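As an aside, the factorisation identity \eref{RP3} and the Cauchy--Schwarz inequality \eref{RP:CS} admit a compact numerical illustration. The sketch below is a toy construction (all names and sizes are hypothetical, not part of the model): it builds a measure with exactly the conditional-product structure derived in the proof of Theorem~\ref{Theorem_4_1} -- identical conditionals on the two halves given the plane configuration -- and checks positivity of $\langle f\,\vartheta_{\!\mathcal P} f\rangle_\mu$ together with \eref{RP:CS}.

```python
import math
import random

random.seed(1)
S, P = 6, 4  # toy sizes: S half-configurations, P plane configurations

# weights mu(g_P) and a conditional q[p][g] shared by both halves,
# mimicking mu(gL, gR) = sum_P mu(g_P) mu_L(gL | g_P) mu_R(gR | g_P)
w = [random.random() for _ in range(P)]
tot = sum(w)
w = [x / tot for x in w]
q = [[random.random() for _ in range(S)] for _ in range(P)]
q = [[x / sum(row) for x in row] for row in q]

def cross(f, h):
    # <f theta h> = sum_P w[P] (sum_g q[P][g] f[g]) (sum_g q[P][g] h[g])
    return sum(
        w[p] * sum(q[p][g] * f[g] for g in range(S))
             * sum(q[p][g] * h[g] for g in range(S))
        for p in range(P))

f = [random.uniform(-1, 1) for _ in range(S)]
h = [random.uniform(-1, 1) for _ in range(S)]

assert cross(f, f) >= 0                        # reflection positivity
assert math.isclose(cross(f, h), cross(h, f))  # symmetry of the pairing
# the Cauchy-Schwarz inequality of \eref{RP:CS}
assert abs(cross(f, h)) <= math.sqrt(cross(f, f) * cross(h, h)) + 1e-12
```

Positivity here is manifest for the same reason as in \eref{RP3}: the pairing is a sum of squares weighted by $\mu(\mathfrak g_{\mathcal P})$, i.e.\ a Gram form.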
The suppression of long contours will be established by showing that long lines of any particular colour are exponentially rare in the length of the line. To prove this we will need the so-called chessboard estimate, which in the present context reads as follows: \begin{prop} For $x \in \Lambda$ let $\omega_1, \dots, \omega_k$ denote indicator functions for bond events that are determined by the bonds emanating from the site $x$. (The $\omega_j$ need not all be distinct.) For any of these $\omega_j$, cover the lattice with (multiple) reflections of the corresponding event and let $Z_j$ denote the partition function constrained so that at each site, the appropriately reflected event is satisfied. Then \begin{equation} \langle \prod_{j}\omega_j(x) \rangle_{\mu} \leq \left(\frac{Z_1}Z \right)^{\frac 1N} \dots \left(\frac{Z_k}Z \right)^{\frac 1N}. \label{chess-board} \end{equation} \end{prop} \begin{pf*}{Proof.} See Section 2.4 of Ref.~\cite{Shlosman:RP}. \end{pf*} Our principal result of this subsection is the following: \begin{thm} \label{Theorem_4_3} Consider an $n$-colour loop model as described by \eref{part} on the torus $\Lambda$ (which is taken to be ``sufficiently large'') and suppose that the vertex factors are bounded below by $c > 0$ and above by $C < \infty$ uniformly in $n$. For sites $x$ and $y$ in $\Lambda$, let $\mathcal L_{x,y}$ denote the probability that these sites belong to the same loop. Then, provided $n$ is sufficiently large, there is a $\xi_{n} > 0$ such that for all values of $R$, \begin{equation} {\mathcal L}_{x,y} \leq K\mathrm{e}^{-\abs{x - y}/\xi_n} \end{equation} where $\abs{x-y}$ denotes the minimum length of a walk between $x$ and $y$ and $K$ is a constant. \end{thm} \begin{rmk} For conceptual clarity, we will start with a proof of the case $d = 2$; all of the essential ideas are contained in this case. The problems in $d > 2$ involve some minor technicalities and the general proof can be omitted on a preliminary reading.
\end{rmk} \begin{pf*}{Proof of Theorem~\ref{Theorem_4_3} ($d = 2$).} Let us focus on a particular colour -- red -- and show the statement is true for red loops; this only amounts to a factor of $n$ in the prefactor. We define the ``red event'' as an event where at least two red bonds are connected to the site in question. It is clear that there are two main types of red events: those where the red bonds attached to a given site line up along a straight line and those where they form a right angle. These two types are shown in Figures~\ref{reflection}(I-a) and \ref{reflection}(II-a) respectively. \begin{figure}[hbt] \begin{center} \includegraphics[width=14cm]{reflection.eps} \caption{ Chess-board estimate on the ``red'' events of types I and II. The events are shaded in (a). Parts (b) and (c) represent the results of the first two reflections with respect to the dashed lines. The resulting tilings of the entire plane (torus) are shown in part (d).} \label{reflection} \end{center} \end{figure} We will denote by $\omega_{I}^{(\alpha)}$, $\alpha = 1,2$ the events of the first type and $\omega_{II}^{(\beta)}$, $\beta = 1, \dots, 4$ those of the second type. Obviously there are only two distinct constrained partition functions, which we respectively denote by $Z_{I}$ and $Z_{II}$. \Fref{reflection} shows these single events (a), the results of the first two reflections with respect to the dashed lines (b), (c) and finally the configurations obtained by applying each reflection $\ell d = \log_2 N$ times in order to completely tile the surface of the torus. The grey lines correspond to as-yet unidentified bonds -- these are the degrees of freedom left after the process of tiling has been completed. Let us perform the estimate on $Z_{II}/Z$ first.
We claim that if $\mathfrak r_{II}$ is any legitimate configuration that contributes to $Z_{II}$, each of the red squares -- of which there are $N/4$ -- can be independently replaced by vacant bonds or a square loop of any other colour or left as red. Of course this may cost us an exchange of the ``best'' for the ``worst'' vertex factor but even so, the result is \begin{equation} \frac{Z_{II}}Z \leq \left( \frac{R^4}{1 + nR^4} \right)^{\frac N4} \left( \frac Cc \right)^{N} \leq \left ( \frac1{n^{\frac 14}}\frac Cc \right )^N. \label{eq:Z_II} \end{equation} Let us now turn attention to the $Z_I$ estimate. We start with a factor of $R^N$ for the red bonds already in place -- as well as another worst case scenario of $C^N$. As for the lines orthogonal to the red ones (horizontal in \fref{reflection}(I-d)): once started in any colour, they must continue until they wrap the torus, a total length of $L = \sqrt N$. There are $n$ possible choices of colour for each line as well as the possibility of no bonds at all. Since there are $L$ such lines altogether, this gives \begin{equation} Z_I \leq C^N R^N(1 + nR^{L})^{L}. \label{eq:Z_I} \end{equation} To obtain our estimate on $Z$, we simply pick the even (or odd) sublattice of dual sites and surround each site with one of $n$ coloured elementary loops or with a ``loop'' of vacant bonds. Folding in the worst case scenario for the vertex factors this gives \begin{equation} Z \geq c^N(1 + nR^4)^{\frac{N}{2}}. \label{partition_bound} \end{equation} The ratio may be expressed as a product of two terms, namely $[R/(1 + nR^4)^{1/4}]^N$ and $(1 + nR^{L})^{L}/(1 + nR^4)^{N/4}$ -- times an additional $(C/c)^N$. Clearly the first term is bounded by $n^{-N/4}$. As for the second ratio, if $R < 1$ we may neglect $nR^{L}$ for $N \gg 1$ and the ratio is bounded by one. On the other hand, if $R > 1$, we may neglect the 1 and we get, modulo a factor of $n^{L}$, another $n^{-N/4}$; we will settle for the bound of 1.
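The two elementary facts just used can be checked numerically. The sketch below (helper names are ours, purely for illustration) verifies that $R^4/(1+nR^4) < 1/n$ for every fugacity -- which bounds the first term by $n^{-N/4}$ -- and that the logarithm of the second ratio, $(1+nR^{L})^{L}/(1+nR^4)^{N/4}$ with $N = L^2$, is non-positive over a wide range of $R$ once $L$ is moderately large.

```python
import math

def log1p_exp(a):
    """Numerically stable log(1 + e^a)."""
    return a + math.log1p(math.exp(-a)) if a > 0 else math.log1p(math.exp(a))

def log_ratio_I(R, n, L):
    """log of (1 + n R^L)^L / (1 + n R^4)^(N/4), with N = L^2."""
    N = L * L
    return (L * log1p_exp(math.log(n) + L * math.log(R))
            - (N / 4) * log1p_exp(math.log(n) + 4 * math.log(R)))

n, L = 100, 64
grid = [10 ** (k / 20 - 2) for k in range(81)]  # R from 1e-2 to 1e2

# first term: R/(1+nR^4)^(1/4) <= n^(-1/4), i.e. R^4/(1+nR^4) < 1/n for all R
assert all(R ** 4 / (1 + n * R ** 4) < 1 / n for R in grid)

# second term stays bounded by one once N = L^2 is large
assert all(log_ratio_I(R, n, L) <= 0 for R in grid)
```

For $R>1$ the leading behaviour of both numerator and denominator is $L^2\log R$, so the deficit is $\log n\,(L - L^2/4) < 0$ for $L > 4$, in line with the "modulo a factor of $n^L$" remark above.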
Thus we have \begin{equation} \lim_{N \to \infty}\left (\frac {Z_{I}} Z \right )^{\frac 1N} \leq \left ( \frac1{n^{\frac 14}}\frac Cc \right ) \label{eq:therm_limit} \end{equation} with the same upper bound for $(\frac {Z_{II}} Z )^{1/N}$ valid for all $N$. We thus denote the mutual upper bound by $\epsilon_n \sim n^{-1/4}$. The desired result now follows from a standard Peierls argument: If $x$ and $y$ are part of the same loop, some subset of this loop must be a self-avoiding walk of length at least $2\abs{x-y}$. We enumerate all such walks and use the chessboard estimate on each particular walk. Then if $\epsilon_n \lambda_2 < 1$, where $\lambda_2$ is the two-dimensional connective constant, we write \begin{equation} \epsilon_n \lambda_2 = \exp\{-1/(2\xi_n)\} \end{equation} (the factor of 2 because we must go there and back) and the stated result follows. \qed \end{pf*} \begin{pf*}{Proof of Theorem~\ref{Theorem_4_3} ($d>2$).} The preliminary steps are the same as in the two-dimensional case: There are again only two types of $\omega$'s (but with more indices) and two constrained partition functions which we again denote by $Z_I$ and $Z_{II}$. The pattern for $Z_{II}$ is the two-dimensional pattern in \fref{reflection}(II-d) reflected in all directions orthogonal to the plane visualised. Noting that each reflection doubles the number of red squares, we see that in the $Z_{II}$ patterns there are a total of $(2^{\ell})^{d-2}\times\frac 14 L^{2} = \frac 14 N$ squares altogether. Repeating the argument leading to \eref{eq:Z_II} we end up with exactly the same bound. What is a little harder are the estimates on $Z_I$ and on the partition function itself. We first claim that $Z$ can be estimated by \begin{equation} Z \geq c^N(1 + nR^4)^{\frac d4 N}. \label{eq:Z_bound} \end{equation} To achieve this, we assert that the following holds: There is a set of plaquettes of the lattice with the property that each bond of the lattice belongs to exactly one plaquette.
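This combinatorial claim can be verified directly on small even tori. The sketch below (function names are ours) implements the parity-indexed selection of plaquettes $[\pi_j(\mathbf x)\mathbf{\hat e}_j \diamond -\pi_{j+1}(\mathbf x)\mathbf{\hat e}_{j+1}]_{\mathbf x}$ constructed in the proof that follows, and checks both that each selected plaquette is listed once by each of its four corners and that every bond of the torus lies in exactly one selected plaquette.

```python
from collections import Counter
from itertools import product

def plaquette(x, j, sj, k, sk, L):
    """Bond set of the plaquette [sj*e_j <> sk*e_k]_x on the torus (Z/L)^d."""
    def shift(p, i, s):
        q = list(p)
        q[i] = (q[i] + s) % L
        return tuple(q)
    c0 = x
    c1 = shift(c0, j, sj)
    c2 = shift(c1, k, sk)
    c3 = shift(c0, k, sk)
    corners = [c0, c1, c2, c3]
    return frozenset(frozenset((corners[i], corners[(i + 1) % 4]))
                     for i in range(4))

def check_tiling(d, L):
    assert L % 2 == 0 and L >= 4  # even torus, no degenerate plaquettes
    instructions = []
    for x in product(range(L), repeat=d):
        for j in range(d):
            k = (j + 1) % d       # convention e_{d+1} = e_1
            instructions.append(
                plaquette(x, j, (-1) ** x[j], k, -((-1) ** x[k]), L))
    N = L ** d
    counts = Counter(instructions)
    # each selected plaquette appears on the lists of exactly its 4 corners
    assert all(c == 4 for c in counts.values())
    assert len(counts) == N * d // 4
    bond_cover = Counter(b for p in counts for b in p)
    assert len(bond_cover) == N * d                   # every bond is covered...
    assert all(c == 1 for c in bond_cover.values())   # ...exactly once
    return len(counts)

print(check_tiling(2, 4), check_tiling(3, 4))
```

The counts returned are $Nd/4$, matching the number of plaquettes used in the lower bound \eref{eq:Z_bound}.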
Once this claim is established, it is clear that \eref{eq:Z_bound} holds; indeed there are just $\frac{1}{4} Nd$ plaquettes in question, and we consider those configurations in which each of them is independently left vacant or traversed with an elementary loop in one of the $n$ possible colours. Let us turn to a proof of the above assertion. Let $\mathbf{\hat e}_1, \dots, \mathbf{\hat e}_d$ denote the elementary unit vectors. We adopt the following notation for plaquettes: If, starting at $\mathbf{x} \in \mathbb{Z}^d$ we first move in the $\mathbf{\hat e}_j$ direction, then in the $\mathbf{\hat e}_k$ direction, and then complete the circuit, we will denote this plaquette by $[\mathbf{\hat e}_j \diamond \mathbf{\hat e}_k]_\mathbf{x}$. In general we can have $[\pm \mathbf{\hat e}_j \diamond \pm \mathbf{\hat e}_k]_\mathbf{x}$ and it is noted that $[\mathbf{\hat e}_j \diamond \mathbf{\hat e}_k]_\mathbf{x} = [\mathbf{\hat e}_k \diamond \mathbf{\hat e}_j]_\mathbf{x}$. Starting with the origin, consider the following list of instructions for plaquettes: \begin{equation} [\mathbf{\hat e}_1 \diamond -\mathbf{\hat e}_2]_{0}, \ [\mathbf{\hat e}_2 \diamond -\mathbf{\hat e}_3]_{0}, \dots , [\mathbf{\hat e}_d \diamond -\mathbf{\hat e}_1]_{0}. \label{eq:plaquette_instructions} \end{equation} So far so good -- each bond emanating from the origin belongs to exactly one plaquette. If $\mathbf{x} = (x_1, \dots, x_d)$ let us define $\pi_j(\mathbf{x}) = (-1)^{x_j}$ to be the parity of the $j^{\mathrm{th}}$ coordinate. Then at the site $\mathbf{x}$, select the following: $[\pi_1(\mathbf{x})\mathbf{\hat e}_1 \diamond -\pi_2(\mathbf{x})\mathbf{\hat e}_2]_\mathbf{x}, \dots, [\pi_d(\mathbf{x})\mathbf{\hat e}_d \diamond -\pi_1(\mathbf{x})\mathbf{\hat e}_1]_\mathbf{x}$. Taking the union of all these lists, it is clear that indeed each bond belongs to {\em at least\/} one plaquette; the only question is whether there has been an over-counting.
To settle this issue we establish the following assertion: Let $\mathbf{x}$ denote a site. Then a plaquette is specified by the instructions at $\mathbf{x}$ if and only if the same plaquette is specified by the neighbours of $\mathbf{x}$ on that plaquette. Indeed, consider one such plaquette, namely $[\pi_j(\mathbf{x}) \mathbf{\hat e}_j \diamond -\pi_{j+1}(\mathbf{x})\mathbf{\hat e}_{j+1}]_\mathbf{x}$. (If necessary we use the convention $\mathbf{\hat e}_{d+1} = \mathbf{\hat e}_1$.) One of the relevant neighbours is $\mathbf{x^\prime} = \mathbf{x} + \pi_j(\mathbf{x})\mathbf{\hat e}_j$. But at $\mathbf{x^\prime}$, we have instructions for the plaquette $[\pi_j(\mathbf{x^\prime})\mathbf{\hat e}_j \diamond -\pi_{j+1}(\mathbf{x^\prime})\mathbf{\hat e}_{j+1}]_\mathbf{x^\prime}$. Since $\pi_j(\mathbf{x^\prime}) = -\pi_j(\mathbf{x})$ and $\pi_{j+1}(\mathbf{x^\prime}) = \pi_{j+1}(\mathbf{x})$, this is seen to be the same plaquette. The other neighbour, at $\mathbf{\tilde x} = \mathbf{x} - \pi_{j+1}(\mathbf{x})\mathbf{\hat e}_{j+1}$, follows from the same argument. Reversing the r\^oles of $\mathbf{x}$ and $\mathbf{x^\prime}$ (as well as $\mathbf{x}$ and $\mathbf{\tilde x}$) establishes the `only if' part of the assertion. It is thus evident that no bond can belong to two plaquettes: If $\mathbf{x}$ and $\mathbf{x^\prime}$ are neighbours and there are instructions coming from $\mathbf{x}$ to make some particular plaquette that includes the bond $\langle \mathbf{x},\mathbf{x^\prime} \rangle$ then: \noindent (a) These are the only instructions coming from $\mathbf{x}$ pertaining to this bond. \noindent (b) The list at $\mathbf{x^\prime}$ (and the other corners of this plaquette) also includes this plaquette. \noindent (c) If $\mathbf{y}$ is another neighbour of $\mathbf{x}$ which is {\em not\/} in this plaquette there cannot be instructions at $\mathbf{y}$ to include the plaquette containing $\mathbf{x}$, $\mathbf{x^\prime}$ and $\mathbf{y}$. Items (a) and (b) are immediate.
To see item (c), note that by the assertion, such instructions would also have to appear on the list at $\mathbf{x}$. But, by item (a), they don't. \Eref{eq:Z_bound} is now established. Finally, let us estimate $Z_{I}$ from above. First it is noted that the $Z_{I}$ patterns consist of parallel lines (pointing in the direction of the original red bonds) that pass through every site. Hence the only loops we can draw are confined to the various $(d-1)$--dimensional hyperplanes orthogonal to these lines. Modulo vertex factors -- which we estimate by $C^{N}$ -- each hyperplane yields the $(d-1)$--dimensional partition function. We thus have \begin{equation} Z_{I} \leq R^N C^N \,\Xi^{L} \label{eq:Z_II_d} \end{equation} where $\Xi$ is the $(d-1)$--dimensional partition function with all vertex factors equal to unity. (This observation is not of crucial importance -- the key ingredient is the number of available bonds -- but it serves to compartmentalise the argument.) Let us turn to the estimate of $\Xi$. We denote by $\Xi_M$ the contribution to $\Xi$ that comes about when there are exactly $M$ bonds in the configuration. Given the placement of the bonds, let us count (estimate) the number of ways that they can be organised into self-returning walks and then independently colour each walk. Throwing in a combinatoric factor for the placement of the $M$ bonds and the fact that there cannot conceivably be more than $M/4$ loops to colour, we arrive at \begin{equation} \Xi_M \leq {{(d-1)L^{d-1}}\choose M} R^{M}n^{\frac M4}\times Y(M) \label{eq:Xi_bound} \end{equation} where $Y(M)$ is defined as follows: For $M$ bonds, consider their placement on the lattice chosen so as to maximise the number of ways in which the resulting (colourless) graph can be decomposed into distinct loops. (Such decompositions are done by ``resolving'' all intersecting vertices into connected pairs of bonds, similar to \fref{O(n)_vertices}.)
Then the quantity $Y(M)$ denotes this maximum possible number of decompositions. Let $\mathfrak z$ (which equals $2(d-1)-1 = 2d - 3$) denote the greatest possible number of local options for an on-going self-avoiding walk. We claim that \begin{equation} Y(M) \leq (2\mathfrak z)^M. \label{eq:Y_bound} \end{equation} Indeed, take this optimal placement of $M$ bonds and, starting at some predetermined bond (and moving in some predetermined direction) draw a loop of length $P$. There are no more than $\mathfrak z^P$ possibilities for this loop. Then what is left over cannot yield a total better than $Y(M - P)$. This must be done for every possible value of $P$ and summed. Defining $Y(0) = 1$, we arrive at the recursive inequality \begin{equation} Y(M) \leq \sum_{0 < P \leq M}\mathfrak z^PY(M-P) \end{equation} and the bound in \eref{eq:Y_bound} can be established inductively. Putting together equations \eref{eq:Xi_bound} and \eref{eq:Y_bound} we find \begin{equation} \Xi \leq \sum_M {{(d-1)L^{d-1}}\choose M} R^{M}n^{\frac M4}(2\mathfrak z)^M = (1 + 2\mathfrak zRn^{\frac 14})^{(d-1)L^{d-1}}. \label{eq:Xi} \end{equation} And thus \begin{equation} Z_{I} \leq C^NR^{N}[(1 + 2\mathfrak zRn^{\frac 14})^{(d-1)L^{d-1}}]^L = [CR(1 + 2\mathfrak zRn^{\frac 14})^{(d-1)}]^N. \label{eq:Z_I_bound} \end{equation} An estimate for the straight segments is nearly complete. We write \begin{equation} \left[\frac {Z_I}Z\right]^{\frac 1N} \leq \frac Cc\frac {R(1 + 2\mathfrak zRn^{\frac 14})^{(d-1)}} {(1 + nR^4)^{d/4}}\,. \label{eq:Z_I/Z} \end{equation} Let us split the power of the denominator: $d/4 = 1/4 + (d-1)/4$; the first term will handle the $R$ in the numerator and this ratio is less than $n^{-1/4}$. As for what is left over, it is easy to see that \begin{equation} \frac{1 + 2\mathfrak zRn^{\frac 1 4}} {(1 + nR^4)^{\frac 1 4}} = \frac{1}{(1 + nR^4)^{\frac 1 4}} + 2\mathfrak z \frac{Rn^{\frac 1 4}} {(1 + nR^4)^{\frac 1 4}} \leq 1 + 2\mathfrak z\,.
\end{equation} Thus we have \begin{equation} \left[\frac {Z_I}Z\right]^{\frac 1N} \leq \frac Cc (1 + 2\mathfrak z)^{d-1} \frac 1{n^{1/4}} \label{eq:Z_I/Z_bound} \end{equation} so again all of the $\omega$'s have estimates with the scaling of $n^{-1/4}$. The remainder of the argument follows the same course as the two-dimensional argument with modifications where appropriate. \qed \end{pf*} \subsection{Translational symmetry breaking and related phase transitions} \label{sec:transl_sym} So far we have shown that for sufficiently large $n$, there are no phase transitions associated with a divergent loop correlation length. However, this does not rule out the possibility of phase transitions of a totally different nature -- in fact we shall see that such transitions do indeed take place. The type of these transitions happens to be sensitive to the lattice structure and the vertex factors that correspond to loop intersections. This leads to a variety of critical phenomena which have not been observed in the case of a honeycomb lattice \cite{Domany-81}. We shall consider the case of all vertex factors being positive and uniformly bounded. Let the number of colours $n$ be very large and consider the high fugacity limit of $R \to \infty$. The model becomes fully-packed in this limit, i.e.\ every bond is occupied. But since $n$ is large, according to the estimates of the previous subsection the partition function is dominated by configurations in which there are as many loops as possible. In two dimensions, it is clear that these ``maximum entropy states'' break the translational symmetry: the loops of different colours may run around either odd or even plaquettes of the lattice. (We remark that in $d>2$, the situation is considerably more complicated and is currently under study.) On the other hand, if the bond fugacity is low, the system must be translationally invariant.
Therefore one would anticipate that at some large but finite value of $R$ a translational symmetry-breaking transition takes place. Due to the doubly degenerate positional nature of the resulting high-density state, we find it natural to expect that such a transition is of the Ising type. Indeed, it appears to be very similar to the transition in the Ising antiferromagnet. Let us now turn to the rigorous arguments supporting this qualitative picture. \begin{thm} \label{thm:symmetry-breaking} Consider an $n$-colour loop model on $\mathbb{Z}^2$ with vertex factors bounded below by $c>0$ and above by $C < \infty$ uniformly in $n$. Then, if $n$ is sufficiently large there is a phase transition characterised by the breaking of translational symmetry. In particular, there exist $\epsilon$ (sufficiently small) and $\Delta$ (sufficiently large) such that for $R^4 n < \epsilon$ all states that emerge as limits of torus states are translation invariant. On the other hand, if $R^4 n > \Delta$, there are (at least) two states with broken translational symmetry. \end{thm} \begin{pf*}{Proof.} Let us start with the formal proof of translation-invariance at low fugacity. Any given site belongs to zero, two or four bonds. Let $\odot$, $\ominus$ and $\oplus$ denote the corresponding events and $Z_\odot$, $Z_\ominus$ and $Z_\oplus$ the constrained partition functions in which every site is of the stated type. We will estimate $\mathrm{Prob}(\ominus) \leq \left(Z_\ominus/Z_\odot \right)^{1/N}$ and $\mathrm{Prob}(\oplus) \leq \left(Z_\oplus/Z_\odot \right)^{1/N}$; obviously $Z_\odot = 1$. In calculating $Z_\oplus$ we notice that in all configurations, each bond must be coloured and there are at most $N/2$ separate loops. This gives \begin{equation} \label{Z-four} Z_\oplus \leq \left[R^2 n^{1/2}\right]^N Y(2N) \end{equation} where $Y(M)$ is the same quantity that appears in \eref{eq:Xi_bound}.
Similarly, in $Z_\ominus$ half the bonds are used (resulting in a factor of $R^N$) and there are no more than $N/4$ loops. We get \begin{equation} \label{Z-two} Z_\ominus \leq \left[R n^{1/4}\right]^N Y(N) \end{equation} -- essentially the square root of the above. In order to show that there is translation invariance it is sufficient to establish the following: Let $A$ and $B$ denote local events, let $T_{\mathbf x}$ be the translation operator to the point $\mathbf x \in \Lambda$ and let $\mathbf{\hat e}$ denote any unit vector. Then we must show that for all ${\mathbf x}$ with $\abs{{\mathbf x}}$ large the probabilities $\mu_\Lambda \left(A\cap T_{\mathbf x} B\right)$ and $\mu_\Lambda \left(A\cap T_{\mathbf x + \mathbf{\hat e}} B\right)$ are essentially the same. We note that either the supports of $A$ and $T_{\mathbf x} B$ are attached by a $\ast$-connected path of sites of type $\ominus$ or $\oplus$, or they are separated by a connected circuit of sites of type $\odot$. (Two sites are considered $\ast$-connected if they share the same plaquette.) However, in case the support of $A$ is surrounded by a connected circuit of empty sites, it is clear that for any $\mathbf{\hat e}$, $A$ and $T_{\mathbf{\hat e}} A$ have the same probability. Thus we arrive at \begin{equation} \label{eq:mu_difference} \abs{ \mu_\Lambda \left(A\cap T_{\mathbf x} B\right) - \mu_\Lambda \left(A\cap T_{\mathbf x + \mathbf{\hat e}} B\right)} \leq \mu_\Lambda \left(\mathbb C_{A, T_\mathbf x B} \right) \end{equation} where $\mathbb C_{A, T_\mathbf x B}$ is the event that there is a $\ast$-connected path of non-$\odot$ sites between the support of $A$ and that of $T_{\mathbf x} B$. Now, provided $R n^{1/4}$ is small, $\mu_\Lambda \left(\mathbb C_{A, T_\mathbf x B} \right) \leq \mathrm{e}^{-\kappa \abs{\mathbf x}}$ with $\mathrm{e}^{-\kappa} \sim \alpha R n^{1/4}$ for some constant $\alpha$. Thence translation invariance (among the states that emerge from the torus) is established.
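The quantity $Y(M)$ entering \eref{Z-four} and \eref{Z-two} is easy to control explicitly: the recursion of the previous subsection, taken with equality, solves in closed form to $2^{M-1}\mathfrak z^M$, which is how the bound $Y(M) \leq (2\mathfrak z)^M$ arises. The sketch below checks this closed form and then evaluates the resulting per-site factors for sample values; the choice $\mathfrak z = 3$ for walks on $\mathbb Z^2$ and the sample values of $R$ and $n$ are our own illustrative assumptions, not taken from the text.

```python
def Y_upper(M_max, z):
    """Solve Y(M) = sum_{0 < P <= M} z^P Y(M-P), Y(0) = 1 -- the equality case
    of the recursive inequality, which dominates the true Y(M)."""
    vals = [1]
    for m in range(1, M_max + 1):
        vals.append(sum(z ** P * vals[m - P] for P in range(1, m + 1)))
    return vals

for z in (1, 3, 5):  # z = 2d - 3 for d = 2, 3, 4
    vals = Y_upper(14, z)
    # closed form 2^(M-1) z^M, hence Y(M) <= (2z)^M as claimed
    assert all(vals[M] == 2 ** (M - 1) * z ** M for M in range(1, 15))
    assert all(vals[M] <= (2 * z) ** M for M in range(15))

# sample per-site factors at low fugacity (z = 3 assumed for walks on Z^2):
# (Z_minus)^(1/N) <= 2z R n^(1/4) and (Z_plus)^(1/N) <= (2z)^2 R^2 n^(1/2)
R, n, z = 1e-2, 1e4, 3
two_bond = 2 * z * R * n ** 0.25
four_bond = (2 * z) ** 2 * R ** 2 * n ** 0.5
assert two_bond < 1 and four_bond < 1
```

The induction claimed in the text is visible here: substituting $2^{M-P-1}\mathfrak z^{M-P}$ into the sum reproduces $2^{M-1}\mathfrak z^M$ exactly.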
Let us now turn attention to the opposite limit, namely $R n^{1/4} \gg 1$. We claim that with high probability the plaquettes on the lattice are of one of the two types: they are either surrounded by four bonds of the same colour, or by bonds of four different colours. \begin{figure}[htb] \begin{center} \includegraphics[width=5cm]{AB-type.eps} \caption{ Even and odd sites of A- and B-types.} \label{A-B_sites} \end{center} \end{figure} We will again define site events representative of the two purported ground states. If a site is on the even sublattice, we say it is of the A-type if it has four bonds of two different colours, one colour occupying the positive $\mathbf{\hat e}_1$ and $\mathbf{\hat e}_2$ directions and the other, the negative $\mathbf{\hat e}_1$ and $\mathbf{\hat e}_2$ directions. An odd site of the A-type is defined as the mirror reflection of an even A-type site. A site is said to be of B-type if the r\^oles of the even and the odd sublattices are reversed -- see \fref{A-B_sites}. Any site which is not of one of these two types will be called ``bad''. There are three choices for such bad sites, namely sites housing no bonds, sites with only two bonds, and sites with a ``+'' configuration in which the $\mathbf{\hat e}_1$ and $-\mathbf{\hat e}_1$ directions have the same colour and similarly for the $\pm \mathbf{\hat e}_2$ directions. The first two are the old $\odot$ and $\ominus$ from the preceding portion of this proof, while the constrained partition function for the latter event, denoted (for lack of a better notation) by $Z_{\#}$, can be bounded by \begin{equation} \label{eq:Z_plus} Z_{\#} \leq R^{2N} n^{2 \sqrt{N}}\,. \end{equation} On the other hand, we can always estimate $Z$ from below by $\left[R n^{1/4}\right]^{2N}$ as discussed previously.
Thus we have (as $\abs{\Lambda} \to \infty$) \begin{subequations} \label{eq:mu_estimates} \begin{gather} \mu_\Lambda(\odot) \leq R^{- 2} n^{- 1/2}\,, \\ \mu_\Lambda(\ominus) \leq n^{- 1/2}\,, \\ \mu_\Lambda({\#}) \leq n^{- 1/4}\,. \end{gather} \end{subequations} Thus almost all sites are either A-type or B-type. However, there is no constraint that A and B sites cannot be neighbours. But, as we show below, this possibility is also suppressed for $n \gg 1$. Firstly, let us notice that the reflections through bonds map the even into the odd sites (and vice versa). By definition, the A-even and A-odd events are mirror images, and similarly for B. Thus, under multiple reflections, A-sites map to A-sites and B's to B's. Hence, if (A-B) is the event that the origin is of the A-type and its right nearest neighbour is of the B-type, we have \begin{equation} \label{eq:mu_AB} \mu_\Lambda(\mbox{A-B}) \leq \left(\frac{Z_{\mbox{\scriptsize A-B}}}{Z}\right)^{2/N} \end{equation} where $Z_{\mbox{\scriptsize A-B}}$ is the partition function constrained according to the pattern shown in \fref{A-B_pattern}. \begin{figure}[hbt] \begin{center} \includegraphics[width=7cm]{AB-tiling.eps} \caption{ Tiling of the torus obtained by multiple reflections of the (A-B) event. The sites marked as $\mathrm{A_e}$ are the even sites of the A-type etc. The zigzag lines that form loops wrapping the torus and separating the columns of A- and B-sites are shown black.} \label{A-B_pattern} \end{center} \end{figure} Now, the interior of the A or B columns still consists of the four-bond loops -- just as a pure A or pure B-type tiling would. However, as is easily seen in \fref{A-B_pattern}, a boundary between the columns is formed by a zigzag line of a single colour running around the torus. 
Thus we arrive at \begin{equation} \label{eq:Z_AB} Z_{\mbox{\scriptsize A-B}} = R^{2N} n^{\frac{1}{4} N} n^{\frac{1}{2} \sqrt{N}} \end{equation} and hence, in the large $N$ limit, $\mu_\Lambda(\mbox{A-B}) \leq n^{-1/2}$. The rest of the proof follows easily: A connected cluster of A-type sites has, on its boundary a ``bad'' site or an A-B pair; similarly for the clusters of B-sites. Thus, if $R n ^{1/4}$ and $n$ are large enough, the possibility that any given site is isolated from the origin is small. Thence, given that the origin is of the A-type -- which has probability very near one half -- the population of B-type sites is small and vice versa. Evidently, there are two states, one with an abundance of A-sites and the other with an abundance of B's. It is not hard to see that in these two states, translation symmetry is broken -- both are dominated by the appropriate staggered patterns. \qed \end{pf*} \section{Final remarks and conclusions} So far we have succeeded in proving the following general statements: \begin{itemize} \item {A generic multi-coloured loop model with {\em all\/} vertex factors $\nu_1, \ldots \nu_m$ uniformly bounded from above and below does not have a phase transition corresponding to the divergence of the loop size in {\em any\/} dimension {\em provided\/} that the number of colours $n$ is sufficiently large.} \item {In two dimensions these models undergo a different phase transition, presumably of the Ising type, that is associated with breaking the {\em translational\/} symmetry\footnote{ At present it is not clear whether this statement holds in $d>2$. While the ``ground states'' (the states at $R=\infty$) of the system are {\em not\/} translationally invariant, their degeneracy appears to grow very fast with $d$, and it is not obvious that the resulting ``entropy'' will not destroy the transition at any finite value of $R$.}. 
While the examples of the models that have been considered in this paper all have their vertex factors independent of the particular arrangements of different colours entering the vertex, all these results remain valid even if this is not the case, as long as all the different vertex factors are still reflection-symmetric and bounded from above and below by some positive numbers.} \end{itemize} {A clear example of a model that does not follow this rule is the loop model derived from the face-cubic spin model in \sref{sec:face-cubic}. Here the bonds of different colours are simply not allowed to share a vertex, making the corresponding vertex factor vanish. It is clear that as the bond fugacity $R$ increases, the system will have a phase transition (possibly of the first order) associated with breaking the {\em colour\/} symmetry. Indeed, the only way to ``pack'' more bonds into the system is to force all of them to be of the same colour. This transition is very similar to the Widom--Rowlinson transition.} {Another interesting observation can be made about the two-dimensional loop model with all two- and four-leg vertices having the same weight (this is the model originating from the corner-cubic spin model -- see \sref{sec:corner-cubic}). Comparing the states of the system at $R=\infty$ when all bonds are occupied (the fully packed limit), and at $R=1$ when no additional weight is associated with placing extra loops, we conclude that the $(n+1)$-coloured model at $R=\infty$ is identical to the $n$-coloured model at $R=1$. Indeed, one should simply consider the vacant bonds at $R=1$ as being coloured grey to turn the $n$-coloured loop system into a fully-packed $(n+1)$-coloured system. Now start with large enough $n$ at $R=\infty$. The result of \sref{sec:transl_sym} guarantees the existence of a broken translational symmetry in this case. By the argument presented here this means the existence of a broken symmetry in the $(n-1)$-coloured, $R=1$ case.
Now let us continuously increase the value of the bond fugacity in this $(n-1)$-coloured model. As $R$ grows, there are only two possible scenarios: the broken translational symmetry is either lost via a phase transition or is retained all the way to $R=\infty$. The former case corresponds to an {\em intermediate\/} symmetry-broken phase surrounded by phase transitions at $R \leq 1$ and $R \geq 1$. If the latter scenario is realised, then go to $R=\infty$ and repeat the process of mapping it onto a lower $n$ model. Notice that this latter scenario cannot continue all the way to $n=1$ because the $n=1$, $R=1$ case is nothing but a loop representation for the Ising magnet at $T=0$, which does not have broken translational symmetry.\footnote{Indeed, these loops appearing in the $T=0$ ``high temperature'' expansion are also the domain walls of the Ising model on the dual lattice at $T \to \infty$. These dual spins are assigned their values independently with probability $1/2$, and the contours separating regions of opposite type are manifestly translation invariant.} Therefore one {\em must\/} find an intermediate phase at some not very large value of $n$ (although we cannot rule out the possibility of it being just the point $R=1$).} {From the above we also learn that the $n=2$ (Ashkin--Teller-like) model is {\em not\/} critical in its fully-packed limit (since it is just the $T=0$ Ising model). On the other hand the $\mathrm O(2)$ loop model discussed in \sref{sec:On} {\em is\/} critical in this limit (it maps onto the square ice model). But the only difference between the two is the factor of 3 in the same-colour four-leg vertex factor.} On the basis of this observation (along with the first two remarks of this section) we reiterate that the vertex factors and lattice details are often important for determining the phase diagram of a particular model.
\section{Introduction} CLIO (Cryogenic Laser Interferometer Observatory)~\cite{Ohashi2003,Miyoki2004,Miyoki2006,Ohashi2008} is a prototype for the next Japanese gravitational wave (GW) telescope project, LCGT (Large Scale Cryogenic Gravitational-Wave Telescope)~\cite{LCGT2006}, featuring the use of cryogenic mirrors and a quiet underground site. The main goal of CLIO is to demonstrate an improvement of sensitivity through a reduction of mirror thermal noise by cooling the sapphire mirrors. The design sensitivity is limited by the mirror thermal noise and the suspension thermal noise around 100 Hz, both of which will be reduced after cooling. Through noise-hunting work, we achieved a thermal-noise-limited sensitivity at room temperature. One of the main factors in the sensitivity improvement was the removal of thermal noise due to a conductive coil-holder coupled with the pendulum through magnets. Firstly, we report on the best displacement sensitivity we have achieved and on recent noise hunting. Next, we focus on the verification of the thermal noise due to the coil-holder by theory and experiment. \section{Thermal-noise-limited sensitivity} \subsection{Configuration of CLIO} CLIO is located in the Kamioka mine, 220 km from Tokyo, and lies 1000 m underground from the top of a mountain. LCGT is planned to be constructed in this area. This underground site is suitable for interferometric GW detectors, because the seismic noise is lower than that in an urban area by about two orders of magnitude~\cite{Araya1993,Yamamoto2006}. This advantage helps to achieve the target sensitivity and to obtain stability. Owing to this advantage at low frequencies, observations of the Vela pulsar were performed in 2007~\cite{Akutsu2008}. \begin{figure}[b] \begin{center} \includegraphics[width=13cm,clip]{Figure1_CLIO} \end{center} \caption{\label{fig:CLIO} Schematic view of CLIO; CLIO is a so-called locked Fabry-Perot interferometer, which has two 100 m Fabry-Perot cavities with a mode cleaner.
The optics, like the lenses, the Faraday isolators, and some wave plates, are omitted in this figure. Abbreviations denote: EOM, electro-optic modulator; HWP, half wave plate; QWP, quarter wave plate; MMT, mode matching telescope; and PBS, polarized beam splitter. } \end{figure} Figure~\ref{fig:CLIO} shows a schematic view of CLIO. The laser (Innolight Inc. Mephisto) has a power of 2 W and a wavelength of 1064 nm. The beam is shaped into TEM00 by a mode cleaner (MC) cavity with a length of 9.5 m, and then injected into two 100 m long Fabry-Perot (FP) cavities after being divided by a beam splitter. These cavities are arranged in an L-shape. All of the returned (reflected) beam from each arm cavity is extracted by an optical circulator formed by a $ \lambda /4 $ plate and a polarized beam splitter. Thus, this configuration employs neither optical recombination by a Michelson interferometer nor optical recycling schemes. The cavities are kept on resonance (``locked'') by servo systems. The Pound-Drever-Hall technique~\cite{Drever1983} is employed as the readout method for displacement signals of the mirrors. For that purpose, phase modulations at 15.8 MHz and 11.97 MHz are used for the arm cavities and the mode cleaner cavity, respectively. A multistage control system~\cite{Nagano2003} is applied for laser frequency stabilization, with two cascaded loops for the MC and an arm cavity. The inline arm is locked by controlling the laser frequency. The perpendicular arm is locked by controlling the mirror. The differential displacement between the two arm cavities, which corresponds to GW signals, can be obtained from the feedback signal to the coil-magnet actuators. The four mirrors of the two arm cavities are individually suspended by 6-stage pendulums, which include 4-stage blade springs and 2-stage wire suspensions for isolation from seismic vibration. The mirrors, made of sapphire, weigh 1.8 kg (end) and 1.9 kg (front).
They are suspended by bolfur~\cite{Bolfur} wires at the last stage, whose resonant frequency is 0.79 Hz. The pendulum has a primary-mode resonant frequency of about 0.5 Hz. The angular alignments of all mirrors are tuned by movable stages on the suspensions. Coil-magnet actuators are set for the front mirror in the perpendicular cavity only, so as to keep the optical path length of the cavity locked. Until now, cryogenic systems have been developed~\cite{Tomaru2004,Tomaru2005,Li2005} and improved~\cite{Uchiyama2006}, and cooling of the mirrors was realized~\cite{Yamamoto2007}. All sapphire mirrors were cooled down to the required temperature of under 20 K. However, a thermal noise due to a conductive coil-holder, which appeared from 20 Hz to 300 Hz with a slope of $ f^{-2} $ ($ f $ is the frequency), prevented the sensitivity from reaching the thermal noises of the suspensions and mirrors, at both room and cryogenic temperatures~\cite{Yamamoto2007}. We removed this extra thermal noise by noise hunting in recent work, and reached the design sensitivity at room temperature. \subsection{Progress of noise hunting} CLIO displacement noise reached the predicted thermal-noise levels. Figure~\ref{fig:Best} shows the improvement of the displacement noise in 2008. The current best floor sensitivity is $ 2.5 \times 10^{-19} $\,m/$ \sqrt{\rm{Hz}} $ at 250 Hz. In the frequency region of 20 Hz to 80 Hz, the spectrum is close to the suspension thermal noise, which comes from wire-material dissipation of structure damping (internal damping~\cite{Saulson1990}). The theoretical prediction was calculated using a pendulum quality factor of $ 10^5 $, which was estimated from the measured violin-mode Q. The mirror thermal noise, estimated from the bulk thermoelastic damping, is also close to the sensitivity. Details of this progress and these estimates will be described in an independent article~\cite{Uchiyama2010}.
\begin{figure}[h] \begin{center} \includegraphics[width=10cm,clip]{Figure2_Best} \end{center} \caption{\label{fig:Best} Improvement of the sensitivity in 2008; The red and blue solid lines are the measured sensitivities after noise hunting (Current best) and before noise hunting, respectively. The green dotted line is the suspension thermal noise estimated from wire-material dissipations using the Q-factors of the violin modes. The pink dashed line indicates the calculated mirror thermal noise due to thermoelastic damping of the substrate. } \end{figure} The noise hunting of the CLIO interferometer has progressed at room temperature. The improvement over the broad region from 20 Hz to 300 Hz came mainly from avoiding a kind of pendulum thermal noise due to eddy currents in a coil-holder, induced by magnets glued onto the mirror. A beam-centering adjustment also reduced noisy structures in this region. By adjusting the beam centering, we could reduce the length fluctuation of the cavities coupled with mirror-angle fluctuations. In the high-frequency region, repairing a servo-circuit malfunction allowed the sensitivity to reach the shot-noise level. Using thinner suspension wires made of bolfur, whose diameter was reduced from 0.1 mm to 0.05 mm, shifted the violin modes to higher frequencies, so as to separate the skirts of the violin modes from the mirror thermal noise region. One reason for the improvement at around 10 Hz was the removal of magnets on the upper mass (used for extended actuation), which had been shaken by the damping magnets. The largest contribution to the sensitivity improvement was the identification of the pendulum thermal noise due to the coil-holder, with a slope of $ f^{-2} $. Let us emphasize that this is not the suspension thermal noise. The design sensitivity was calculated to be limited by the suspension thermal noise with a slope of $ f^{-5/2} $, which comes from wire-material dissipation of internal damping~\cite{Saulson1990}.
On the other hand, the thermal noise due to the coil-holder was caused by mechanical losses from eddy currents in the conductive coil-holder coupled with the pendulum. The details are explained in the next section. \section{Thermal noise due to a coil-holder} \subsection{Theoretical estimation} Pendulum thermal fluctuation is caused by several kinds of dissipation coupled with the whole pendulum. These dissipations have been studied: internal loss in the materials of suspension wires~\cite{Saulson1990,Gonzalez1995}, clamps of wires~\cite{Dawid1997}, residual air~\cite{Kajima1999}, coil-magnet actuators~\cite{Agatsuma2010_PRL}, and a reference mass with coils~\cite{Cagnoli1998,Frasca1999}. The pendulum thermal noise due to the reference mass with coils (the coil-holder in our case) is the focus of this section. An oscillation of the pendulum gives rise to eddy currents by electromagnetic induction in the coil-holder. These currents generate Joule heat in the material of the coil-holder. According to the fluctuation-dissipation theorem (FDT)~\cite{Callen1951}, the dissipation of the Joule heat causes a thermal fluctuation of the pendulum through coupling with the magnets glued onto the mirror. The quality factor of the pendulum, caused by the losses in conductive materials (reference masses) near the magnets, was derived~\cite{Cagnoli1998} as \begin{equation} Q = \frac{m \omega _0}{2 \pi \sigma \left( \frac{3 \mu _0 \mathcal{M}}{4 \pi} \right)^2 J } , \label{eq:Qh} \end{equation} which was verified not only by Cagnoli~\cite{Cagnoli1998} but also by Frasca~\cite{Frasca1999}.
Here, $ \omega_0 = 2 \pi f_0 $, $ f_0 $ is the pendulum frequency, $ m $ the mirror mass, $ \sigma$ the electrical conductivity of the material, $ \mu_0 $ the permeability, $ \mathcal{M} $ the magnetic dipole moment of the magnet, and $ J $ a geometrical factor that depends on the shape of the conductor, given by \begin{equation} J = \int_{z1}^{z2} \int_{r1}^{r2} \frac{r^3 z^2}{(r^2 + z^2)^{5}} drdz . \label{eq:J} \end{equation} Here, the conductor is assumed to have a center-holed cylindrical shape with an inner radius $ r1 $, outer radius $ r2 $, and length $ z2-z1 $ for simplicity. The center of the magnet is placed at $ z=0 $, and the distance from the magnet to the edge of the conductor is $ z1 $. The cause of the thermal fluctuation is the dissipation of viscous damping, whose loss angle is $ \phi = \omega /( \omega _0 Q ) $, due to eddy currents in the conductor. By using the FDT and applying the Q-factor of Eq.~(\ref{eq:Qh}), the thermal fluctuation of a pendulum (a harmonic oscillator) in the off-resonant region above the resonance is approximately written as \begin{equation} G= \frac{4 k_B T N}{m^2 \omega ^4} 2 \pi \sigma \left( \frac{3 \mu _0 \mathcal{M}}{4 \pi} \right)^2 J . \label{eq:Gh} \end{equation} $ \sqrt{G} $ is the one-sided amplitude spectral density, $ k_B $ the Boltzmann constant, and $ T $ the temperature of the conductors and the pendulum. $ N $ is the number of magnet-conductor pairs, each conductor assumed to have a homogeneous shape around its magnet. \begin{figure}[t] \begin{center} \includegraphics[width=7.1cm,clip]{Figure3_Coil} \end{center} \caption{\label{fig:coordinate} Model of a coil-magnet actuator} \end{figure} In order to estimate the thermal noise due to the coil-holder from Eq.~(\ref{eq:Gh}) and the pendulum Q-factor from Eq.~(\ref{eq:Qh}), one needs to know the magnetic dipole moment, $ \mathcal{M} $. However, the magnetic moment is usually not easy to measure directly. In practice, the coil-magnet actuator coupling can be used to estimate it.
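The double integral of Eq.~(\ref{eq:J}) is easy to evaluate numerically. The sketch below (our own illustrative check, not part of the original analysis) applies a simple midpoint rule to the previous coil-holder dimensions quoted in Table~\ref{Table1} ($r1 = 12$ mm, $r2 = 15$ mm, $z1 = 0$, $z2 = 20$ mm) and reproduces the lower limit of the $J$ band quoted there.

```python
# Midpoint-rule evaluation of the geometrical factor
#   J = int_{z1}^{z2} int_{r1}^{r2} r^3 z^2 / (r^2 + z^2)^5 dr dz
# for a center-holed cylindrical conductor.  Dimensions (in metres) are the
# previous coil-holder values from Table 1: r1=12 mm, r2=15 mm, z1=0, z2=20 mm.

def geometrical_factor(r1, r2, z1, z2, n=400):
    dr = (r2 - r1) / n
    dz = (z2 - z1) / n
    total = 0.0
    for i in range(n):
        r = r1 + (i + 0.5) * dr
        for j in range(n):
            z = z1 + (j + 0.5) * dz
            total += r**3 * z**2 / (r**2 + z**2) ** 5
    return total * dr * dz  # units: 1/m^3

J_lower = geometrical_factor(0.012, 0.015, 0.0, 0.020)
print(f"J = {J_lower:.0f} m^-3")   # close to the quoted lower limit of 5600 m^-3
```

Larger values of $r2$ (up to the 21 mm diagonal of the actual cubic holder) raise the result toward the upper end of the quoted band.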
By using one coil-magnet actuator, a force $ F = \alpha I $ can be applied to the test mass. The coupling factor, $ \alpha $, is the conversion efficiency between the current, $ I $, in the coil circuit and the force, $ F $. This coupling factor is related to the magnetic moment of a magnet by \begin{equation} \alpha = \frac{3\, \mu _0 \mathcal{M}}{ 2 } \sum_{s=0}^{u-1} \sum_{n=0}^{w-1} \frac{(z_1 + d n)(r_1 + d s)^2}{((z_1 + d n)^2 + (r_1 + d s)^2)^{5/2}} . \label{eq:Mu} \end{equation} Here, the solenoidal coil, which consists of a conductive wire with a diameter of $ d $, is wound around a bobbin with a radius of $ r_{1} - d/2 $. We approximately regarded the coil as a bunch of rings for modeling, as shown in Figure~\ref{fig:coordinate}. Starting from the first ring at a distance $ z = z_1 $ from the magnet, $ w $ rings are lined up side by side with a pitch of $ d $ ($ z = z_1 + d n ;\, n = 0,1,2,\cdots ,w-1 $). When the coil is wound in layers outward in the radial direction, the layers can be described as $ r = r_1 + d s ;\, s = 0,1,2,\cdots ,u-1 $, where $ u $ is the number of layers. By measuring $ \alpha $, we can estimate the magnetic dipole moment, $ \mathcal{M} $, using Eq.~(\ref{eq:Mu}). The coupling factor per coil-magnet actuator, $ \alpha $, is obtained from the relation $ \alpha = A_{100} R_{\rm{c}} m (2 \pi \times 100)^2 / N_{\rm{c}} $. $ A_{100} $ is the measured actuator response at 100 Hz, which is a transfer function from the driver input voltage to the mirror displacement. This response at the front mirror is measured using a Michelson interferometer constructed from the front mirrors and the beam splitter. $ R_{\rm{c}} $ ($ R_{\rm{c}} = 50$ $ \Omega $ in CLIO) is the voltage-to-current conversion resistance in the coil driver, and $ m $ is the mass of the test mass ($ m = 1.9 $ kg at the front mirror in CLIO). $ N_{\rm{c}} $ ($ N_{\rm{c}} = 4 $) is the number of magnet-coil pairs. The sensitivity of CLIO was improved by replacing the coil-holder.
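The chain from the measured coupling factor to the predicted noise level can be sketched numerically. The following illustrative calculation (our own cross-check, using the previous coil-holder values quoted in Table~\ref{Table1}: $\alpha = 3.6\times10^{-3}$ N/A, $w=22$, $u=1$, $d=0.5$ mm, $r_1=5.25$ mm, $z_1=0$) inverts Eq.~(\ref{eq:Mu}) for $\mathcal{M}$ and then evaluates $\sqrt{G}$ from Eq.~(\ref{eq:Gh}) at 100 Hz with $N=4$ and the upper geometrical estimate $J=9800~\mathrm{m^{-3}}$.

```python
import math

MU0 = 4.0 * math.pi * 1e-7   # vacuum permeability [N/A^2]
K_B = 1.380649e-23           # Boltzmann constant [J/K]

def ring_sum(z1, r1, d, w, u):
    # double sum of the ring model: w turns per layer, u layers, pitch d
    s_tot = 0.0
    for s in range(u):
        r = r1 + d * s
        for n in range(w):
            z = z1 + d * n
            s_tot += z * r**2 / (z**2 + r**2) ** 2.5
    return s_tot

# Previous coil-holder (Table 1): alpha = 3.6e-3 N/A, w=22, u=1, d=0.5 mm,
# r_1 = 5.25 mm, z_1 = 0.  Invert alpha = (3 mu0 M / 2) * ring_sum for M.
alpha = 3.6e-3
M = 2.0 * alpha / (3.0 * MU0 * ring_sum(0.0, 5.25e-3, 0.5e-3, 22, 1))

# Thermal noise at 100 Hz: N=4 magnet-conductor pairs, m=1.9 kg, T=300 K,
# sigma(Al)=3.6e7 S/m, and the upper geometrical estimate J=9800 m^-3.
omega = 2.0 * math.pi * 100.0
G = (4.0 * K_B * 300.0 * 4 / (1.9**2 * omega**4)
     * 2.0 * math.pi * 3.6e7 * (3.0 * MU0 * M / (4.0 * math.pi)) ** 2 * 9800.0)
print(f"M = {M:.4f} J/T, sqrt(G) = {math.sqrt(G):.2e} m/rtHz")
```

The recovered $\mathcal{M} \approx 0.0165$ J/T and $\sqrt{G} \approx 2.5\times10^{-18}$ m/$\sqrt{\rm{Hz}}$ agree with the values listed in Table~\ref{Table1}.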
We tried to prove that the noise floor with the previous coil-holder came from the above thermal noise. However, the spectrum before noise hunting included some other noises. In order to take these other noises into account, an intermediate spectrum taken before the best sensitivity in Figure~\ref{fig:Best} was used as a background (BG) noise without eddy currents. Figure~\ref{fig:080515} shows a comparison between the measured spectrum before replacing the coil-holder and the calculation from Eq.~(\ref{eq:Gh}). The noise floor almost matched the calculation from 20 Hz to 200 Hz. The parameters for the estimates are given in Table~\ref{Table1}. The spectrum of the ``New coil-holder'' in Figure~\ref{fig:080515}, the first measurement using the new coil-holder, is employed as the BG noise, since it is close to the BG noise of the ``Before noise hunting'' spectrum. For simplicity, we regarded the spectrum of the ``New coil-holder'' from 20 Hz to 100 Hz as the ``BG model'' with a slope of $ f^{-5/2} $, by a fit by eye. The sensitivity in the high-frequency region of Figure~\ref{fig:080515} was improved because the servo circuit had already been repaired. The spectrum of the ``New coil-holder'' includes changes not only to the coil-holder but also to other interferometer settings. Therefore a dedicated experiment, in which only the coil-holder was replaced, is described in section 3.2. \begin{figure} \begin{center} \includegraphics[width=13.6cm,clip]{Figure4_Theory} \end{center} \caption{\label{fig:080515} Comparison between the measured spectrum and the theoretical calculation of the thermal noise due to the coil-holder; The blue and gray solid lines indicate measured spectra with the previous and new coil-holder configurations, respectively. The dashed line denotes a background (BG) model as a part of the sensitivity with the new coil-holder.
The orange band is the sum of the theoretical calculation of the thermal noise due to the coil-holder material ($J$ of Eq.~\ref{eq:J}: from 5600 to 9800) and the BG model. } \end{figure} \begin{table} \caption{\label{Table1} Parameters; $ \alpha $ and $ \mathcal{M} $ are averages over all magnets. For the previous coil-holder, $ r2 $ indicates the distance from the coil center to a side or to a diagonal corner. The $ J $ values were calculated for the two patterns of $ r2 $ (and for the outer frame).} \begin{indented} \lineup \item[] \begin{tabular}{@{}lll} \br Parameter & Previous coil-holder & New coil-holder \\ \mr Coil-holder (bobbin) & Al & Macor \\ $ \sigma \,[\rm{\Omega ^{-1} m^{-1}}]$& \0\0\, $ 3.6 \times 10^7 $ & $ 10^{-13} $ \\ $ r1 $ [mm] & \0\, 12 & --- \\ $ r2 $ [mm] & \0\, 15 -- 21 & --- \\ $ z1 $ [mm] & \0\0\, 0 & --- \\ $ z2 $ [mm] & \0\, 20 & --- \\ \mr Coil & Cu & Cu \\ $ w $ (turns) & \0\, 22 & 15 \\ $ u $ (layers) & \0\0\, 1 & \, 2 \\ $ d $ [mm] & \0\0\, 0.5 & \, 0.5 \\ $ r_1 $ [mm] & \0\0\, 5.25 & \, 8.25 \\ $ z_1 $ [mm] & \0\0\, 0 & \, 5 \\ \mr Magnet & Nd-Fe-B & Sm-Co \\ Magnet size & $ \phi $ 2mm $ \times $ 10mm & $ \phi $ 1mm $ \times $ 10mm \\ $ \alpha $ [N/A]& \0\0\, $ 3.6 \times 10^{-3} $ & \, $ 4.8 \times 10^{-4} $ \\ \mr Estimated value \\ \mr $ \mathcal{M} $ [J/T]& \0\0\, 0.0165 & \, 0.0034 \\ $ J $ [1/$ \rm{m}^3 $]& 5600 -- 9000 (9800) & --- \\ $ Q_{\rm{holder}} $ & \0\0\, $ 4.6 \times 10^{4} $ & --- \\ $ \sqrt{G_{\rm{holder}}} $ at 100Hz [m/$ \sqrt{\rm{Hz}} $] & \0\0\, $ 2.5 \times 10^{-18} $ & --- \\ \br \end{tabular} \end{indented} \end{table} The evaluation did not perfectly agree with the measured spectrum in Figure~\ref{fig:080515}. One reason is a limitation of the coil-holder modeling. The estimated geometrical factor, $ J $, is more uncertain than the other parameters, because the previous coil-holder had a cubic structure around the coil, whereas the calculation of Eq.~(\ref{eq:J}) is limited to a cylindrical shape.
A picture of this coil-holder is shown in Figure~\ref{fig:ExchangeHolder}. We estimated $ J $ as a band with a lower limit of $ J=5600 $ and an upper limit of $ J=9800 $. $ J=5600 $ is an underestimate, because it does not include the volume of the cubic corners beyond the cylindrical shape with $ r2 $ of 15 mm. $ J=9000 $ approximates the whole cube, corresponding to $ r2 $ equal to the distance from the center of the coil to a diagonal corner (21 mm). The upper limit, $ J=9800 $, is obtained by adding the effect of the outer frame (beyond the cube around the coil), modelled as a plate ($r1=21$ mm, $r2=32$ mm, $z1=10$ mm and $z2=20$ mm); this is rather an overestimate, since the actual outer frame is not plate-shaped. A numerical simulation would be needed for a more precise estimate of $ J $ for a coil-holder with such a complex shape. Another reason is the uncertainty of the measured $ \alpha $, which is about 10 \%. A further reason is most likely a difference in the BG noise in the region from 20 Hz to 50 Hz. In the ``Before noise hunting'' spectrum, seismic noise may have been injected via the coil-holder fixed on an optical stage, or angular fluctuations of the mirrors may have been injected via an imperfection of the beam centering, as mentioned in section 2.2. \subsection{Experimental verification} We tested whether the sensitivity could be improved by replacing the coil-holder with one made of an electrical insulator. Figure~\ref{fig:DiflonBobbin} shows the result of that experiment. The sensitivity was improved by replacing the coil-holder. The settings of the CLIO interferometer were not changed in the test, except for the coil-holder. That is why the noise floor with the previous coil-holder can be regarded as being the thermal noise due to the coil-holder coupled with the pendulum. In this experiment, diflon bobbins (not the ``New coil-holder'' in Table~\ref{Table1} and Figure~\ref{fig:080515}) were employed so as to suppress eddy currents.
The spectrum of the diflon bobbin was not employed as the BG noise in Figure~\ref{fig:080515}, because the bobbin was supported by aluminum plates with a complex shape, which did not perfectly suppress eddy currents. Aluminum had been used as the material of the previous coil-holder for cryogenic compatibility, owing to its good thermal conduction. Further, a precise and quantitative identification of the pendulum thermal fluctuation due to viscous damping was performed using coil-magnet actuators~\cite{Agatsuma2010_PRL}. \begin{figure}[htb] \begin{center} \includegraphics[width=7.1cm,clip]{Figure5_Experiment} \end{center} \caption{\label{fig:DiflonBobbin} Improvement of the sensitivity by replacing the coil-holder; The coil-holder made of aluminum was replaced with diflon bobbins, which are electrical insulators. } \end{figure} \newpage \subsection{Solution} The coil-holder was redesigned so that the pendulum could sufficiently avoid mechanical losses due to eddy currents in the coil-holder. The new coil-holder for room-temperature operation is shown in Figure~\ref{fig:ExchangeHolder}. The coil bobbins are made of Macor (a ceramic), which has an electrical conductivity of $ 10^{-13}$ $\rm{\Omega ^{-1} m^{-1}} $. The aluminum frame is kept apart from the magnets. The magnets were also changed from 2 mm to 1 mm in diameter, to reduce their magnetic moment, and their material was changed from Nd-Fe-B to Sm-Co as a precaution against Barkhausen noise. The current best sensitivity at room temperature, shown in Figure~\ref{fig:Best}, was accomplished in this coil-holder configuration. For cryogenic compatibility, the Macor bobbins were replaced with aluminum nitride bobbins, which are electrical insulators with good thermal conductivity. \begin{figure}[hbt] \begin{center} \includegraphics[width=13cm,clip]{Figure6_Holder} \end{center} \caption{\label{fig:ExchangeHolder} Photographs of coil holders; Left: The previous coil-holder.
Coils are surrounded by a coil-holder made of aluminum, and fixed into the coil-holder using Stycast. Right: The new coil-holder. Coils are wound on Macor bobbins fixed on an aluminum frame through diflon holders. } \end{figure} \section{Summary} CLIO is a prototype interferometer for LCGT, located at an underground site in the Kamioka mine. After a few years of commissioning work, we achieved a thermal-noise-limited sensitivity at room temperature. The predicted suspension thermal noise and mirror thermal noise are very close to the measured sensitivity. The main factor in the sensitivity improvement was the removal of thermal noise due to a conductive coil-holder coupled with the pendulum through magnets. The experimental result was supported by the theoretical estimate. This result constitutes a direct verification, in a noise spectrum, of the estimates by Cagnoli and Frasca. We are ready to proceed to the cryogenic experiment, whose main goal is the reduction of thermal noise in CLIO. Currently, we are preparing to cool the mirrors as the next step. \section{Acknowledgments} This work was supported in part by the Global COE Program ``The Physical Sciences Frontier'', MEXT, Japan, and in part by a JSPS Grant-in-Aid for Scientific Research (No. 18204021). \section{References}
\section{Introduction} Following the discovery of the (Brout-Englert-)Higgs boson~\cite{Chatrchyan:2012xdj,Aad:2012tfa}, a central aim of the Large Hadron Collider (LHC) at CERN is to search for new particles beyond the Standard Model of particle physics. Such particles could show up in any one or more of the processes studied by the LHC experiments. The LHC has recently released its first data from proton collisions at a centre-of-mass energy of $\sqrt{s} = 13$~TeV. The most intriguing result of this recent data-release has been the potential observation of a resonance-like feature above the expected continuum background in the diphoton channel, at an invariant mass $m_{\gamma \gamma} \sim 750$~GeV, which has provoked a great deal of excitement amongst the theoretical community \cite{Harigaya:2015ezk,Mambrini:2015wyu,Backovic:2015fnp,Angelescu:2015uiz,Knapen:2015dap,Buttazzo:2015txu,Pilaftsis:2015ycr,Franceschini:2015kwy,DiChiara:2015vdm,Higaki:2015jag,McDermott:2015sck,Ellis:2015oso,Low:2015qep,Bellazzini:2015nxw,Gupta:2015zzs,Petersson:2015mkr,Molinaro:2015cwg,Chao:2015ttq,Fichet:2015vvy,Curtin:2015jcv,Bian:2015kjt,Chakrabortty:2015hff,Ahmed:2015uqt,Agrawal:2015dbf,Csaki:2015vek,Falkowski:2015swt,Aloni:2015mxa,Bai:2015nbs,Dutta:2015wqh,Cao:2015pto,Matsuzaki:2015che,Kobakhidze:2015ldh,Martinez:2015kmn,Cox:2015ckc,Becirevic:2015fmu,No:2015bsn,Demidov:2015zqn,Gabrielli:2015dhk,Benbrik:2015fyz,Kim:2015ron,Alves:2015jgx,Megias:2015ory,Carpenter:2015ucu,Bernon:2015abk,Chakraborty:2015jvs,Ding:2015rxx,Han:2015dlp,Han:2015qqj,Luo:2015yio,Chang:2015sdy,Bardhan:2015hcr,Feng:2015wil,Antipin:2015kgh,Wang:2015kuj,Cao:2015twy,Huang:2015evq,Liao:2015tow,Heckman:2015kqk,Dhuria:2015ufo,Bi:2015uqd,Kim:2015ksf,Berthier:2015vbb,Cho:2015nxy,Cline:2015msi,Bauer:2015boy,Chala:2015cev,Barducci:2015gtd,Kulkarni:2015gzu,Chao:2015nsm,Arun:2015ubr,Han:2015cty,Chang:2015bzc,Boucenna:2015pav,Murphy:2015kag,Hernandez:2015ywg,Dey:2015bur,Pelaggi:2015knk,deBlas:2015hlv,Belyaev:2015hgo,Dev:2015is
x,Huang:2015rkj,Moretti:2015pbj,Patel:2015ulo,Badziak:2015zez,Chakraborty:2015gyj,Cao:2015xjz,Altmannshofer:2015xfo,Cvetic:2015vit,Gu:2015lxj,Allanach:2015ixl,Davoudiasl:2015cuo,Craig:2015lra,Das:2015enc,Cheung:2015cug,Liu:2015yec,Zhang:2015uuo,Casas:2015blx,Hall:2015xds,Han:2015yjk,Park:2015ysf,Salvio:2015jgu,Chway:2015lzg,Li:2015jwd,Son:2015vfl,Tang:2015eko,An:2015cgp,Cao:2015apa,Wang:2015omi,Cai:2015hzc,Cao:2015scs,Kim:2015xyn,Gao:2015igz,Chao:2015nac,Bi:2015lcf,Goertz:2015nkp,Anchordoqui:2015jxc,Dev:2015vjd,Bizot:2015qqo,Ibanez:2015uok,Chiang:2015tqz,Kang:2015roj,Hamada:2015skp,Huang:2015svl,Kanemura:2015bli,Kanemura:2015vcb,Low:2015qho,Hernandez:2015hrt,Jiang:2015oms,Kaneta:2015qpf,Marzola:2015xbh,Ma:2015xmf,Dasgupta:2015pbr,Jung:2015etr,Potter:2016psi,Palti:2016kew,Nomura:2016fzs,Han:2016bus,Ko:2016lai,Ghorbani:2016jdq,Palle:2015vch,Danielsson:2016nyy,Chao:2016mtn,Csaki:2016raa,Karozas:2016hcp,Hernandez:2016rbi,Modak:2016ung,Dutta:2016jqn,Deppisch:2016scs,Ito:2016zkz,Zhang:2016pip,Berlin:2016hqw,Ma:2016qvn,Bhattacharya:2016lyg,D'Eramo:2016mgv}. This has been claimed by both the ATLAS and CMS experiments with the former reporting a significance of up to $3.9\sigma$ locally and $2.3\sigma$ globally \cite{ATLAS}. The global significance represents the statistical preference for a resonance-like signal over the background, incorporating the fact that $\emph{a priori}$ the resonance could have appeared at any value of $m_{\gamma \gamma}$, a correction known as the Look-Elsewhere Effect. In order to give a quantitative statement regarding the preference of their data for a resonance-like feature, the ATLAS collaboration must assume a functional form for their continuum background. The significance of any potential signal then depends crucially on how well this choice was made, and whether it fully captures the uncertainties in the background near the potential resonance. 
This is particularly important in the case of a potential resonance at 750 GeV, since this is located at an invariant mass above which there is little photon data. Because of this, one cannot fit the continuum background as well as in the ideal case, where reliable data on either side of the signal region would make it possible to unambiguously determine the form of the continuum background across the signal region. Instead, one is forced to fit using the low-energy bins and, to some degree, extrapolate the function to higher bins. We seek to understand the motivation for the choice of the continuum background function made by the ATLAS collaboration in their 13~TeV diphoton analysis, and whether their choice introduced a bias into their analysis. Specifically, in this article we quantify to what extent the choice of empirical function used to model the continuum background affects the significance of a resonance-like feature around $m_{\gamma \gamma} \sim 750$~GeV. To do this we repeat the analysis performed by the ATLAS collaboration, using both their form of the empirical background function and our own extension of it. \section{The importance of background uncertainties in the analysis} \begin{figure*}[tb] \centering \includegraphics[trim={3cm 0 0 0},clip,width=0.98\textwidth]{bg_plots_all.pdf} \caption{Comparison of the empirical functions for the continuum diphoton background which best fit the data. We consider the fit function used by the ATLAS collaboration with and without the $\log x$ exponent, and also our own empirical function.} \label{fig:funcs_plot} \end{figure*} The degree to which the latest ATLAS diphoton dataset prefers the presence of a resonance-like feature around 750~GeV depends crucially on our understanding of the continuum background and its uncertainties, particularly in the `signal region', i.e. values of $m_{\gamma \gamma}$ near the potential resonance.
In order to quantify this preference we employ a profile likelihood analysis, which incorporates the uncertainties on both the signal and background distributions as nuisance parameters. This is done by evaluating the likelihood function over a wide range of values of the nuisance parameters, denoted by the symbol $\nu$, and finding the value of the likelihood which is largest over this range. We do this both for the background-only scenario, where there is no signal, and when we introduce a resonance-like signal component. Following the ATLAS collaboration we write the former as $\mathcal{L}(\sigma = 0, \hat{\hat{\nu}})$ and the latter as $\mathcal{L}(\sigma, m_X, \alpha, {\hat{\nu}})$, where $\hat{\hat{\nu}}$ denotes the values of the nuisance parameters which maximise the background-only likelihood, and ${\hat{\nu}}$ means the same but for each non-zero value of the signal amplitude $\sigma$ (and also the central value $m_X$ and width parameter $\alpha$). The preference for a signal over background is then quantified using the test statistic, \begin{equation} q_{\sigma} = -2 \, \mathrm{Log} \left[ \frac{\mathcal{L}(\sigma, m_X, \alpha, {\hat{\nu}})}{\mathcal{L}(\sigma = 0, \hat{\hat{\nu}})} \right] , \label{eqn:q_sigma} \end{equation} where all logarithms in this work are natural (base $e$). The larger the value of $q_{\sigma}$, the greater the statistical preference for a signal feature in the data, compared to fitting with the background alone. To simplify this process the ATLAS collaboration model the background with an empirical function, chosen to resemble the spectrum from Monte Carlo simulations. This function takes the form, \begin{equation} y = (1 - x^{1/3})^b x^{a_0 + a_1 \log x} , \label{eqn:atlas_fit} \end{equation} where $x = m_{\gamma \gamma} / \sqrt{s}$, and in their analysis the ATLAS collaboration set $a_1 = 0$.
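The profiling procedure can be illustrated with a small deterministic toy. The sketch below is our own simplified stand-in for the full analysis: it uses the ATLAS-style shape $(1-x^{1/3})^b x^{a_0}$ with an overall normalisation as the background, a fixed resonance position and width, fabricated counts with an injected bump, and a coarse grid search in place of a proper minimiser, with the sign convention that larger values of the statistic indicate a stronger preference for a signal.

```python
import math

# Deterministic toy of the profile-likelihood construction.  All numbers
# are illustrative stand-ins, not fit results from the real analysis.
edges = [0.03 + 0.002 * i for i in range(41)]             # 40 bins in x
centers = [0.5 * (edges[i] + edges[i + 1]) for i in range(40)]

def expected(norm, b, a0, sig, x0=0.058, width=0.002):
    # background: norm * (1 - x^(1/3))^b * x^a0; signal: Gaussian bump
    return [norm * (1.0 - x ** (1.0 / 3.0)) ** b * x ** a0
            + sig * math.exp(-0.5 * ((x - x0) / width) ** 2)
            for x in centers]

# Fabricated "data": background (norm=0.1, b=10, a0=-4) plus a bump at x0.
counts = [round(m) for m in expected(0.1, 10.0, -4.0, 30.0)]

def lnL(mu):
    # Poisson log-likelihood up to a model-independent constant
    return sum(n * math.log(m) - m for n, m in zip(counts, mu))

def profiled_lnL(sig):
    # crude grid search standing in for profiling the nuisance parameters
    return max(lnL(expected(0.05 + 0.005 * i, 5.0 + j, -5.0 + 0.2 * k, sig))
               for i in range(21) for j in range(11) for k in range(11))

ln_bkg = profiled_lnL(0.0)                                 # sigma = 0
ln_sig = max(profiled_lnL(s) for s in (0.0, 10.0, 20.0, 30.0, 40.0))
q = 2.0 * (ln_sig - ln_bkg)   # larger q => stronger preference for a signal
print(f"q = {q:.1f}")
```

Because the signal grid includes $\sigma = 0$, the statistic is non-negative by construction; with the injected bump it comes out well above zero, while refitting the same toy without the bump drives it back toward zero.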
In the case of the profile likelihood we therefore have, for the nuisance parameters of the background, $\nu = (b,a_0)$ when $a_1 = 0$ and $\nu = (b,a_0,a_1)$ otherwise. The ATLAS collaboration used a Fisher test to justify their choice of function and the setting $a_1 = 0$, since they were able to fit the background adequately with only two parameters. However we show in section~\ref{sec:disscussion} that this conclusion is not correct, as their function fits almost entirely to the low-energy points, which have the smallest error bars, while saying little about the region around $750$~GeV. Hence their functional choice essentially underfits the signal region and so does not capture the full background uncertainties. In figure~\ref{fig:funcs_plot} we plot this function with $a_1 = 0$ and with $a_1$ allowed to vary freely, for the best-fit parameters to the diphoton data. Modelling the uncertainties in the background is then reduced to scanning over the parameters of the empirical function and treating them as the nuisance parameters in the profile likelihood. However, this is only accurate if the variability of the empirical function within its parameter ranges is close to that of the background itself; otherwise the analysis will be biased. Hence in full generality one should also include the uncertainty introduced through \emph{the choice of function itself}. To understand how important this is for the ATLAS diphoton analysis, and whether it has been accounted for properly, we need to choose another suitable function to model the background. This function needs to be suitably different from equation (\ref{eqn:atlas_fit}), but must also resemble as closely as possible the result from Monte Carlo simulations. A logical choice for a new empirical background function is to extend equation~(\ref{eqn:atlas_fit}) to a function with two components, allowing the fit near the resonance more freedom.
Our choice therefore is a function of the form, \begin{equation} y = (1 - x^{1/3})^b (x^{c_0} + \, x^{a_0 + a_1 \mathrm{log} x}) . \label{eqn:our_fit} \end{equation} The two-component nature of our function means that it avoids the problem whereby the low-energy data-points, which have smaller error bars, control the fit of the background in the signal region (see section~\ref{sec:disscussion}). Here for the background-only fit we have that $\nu = (a_0,a_1,b,c_0)$. If the resulting significance of any resonance-like feature differs substantially when using this new function as compared to the one used by the ATLAS collaboration in their fit, then this implies that the latter function does not adequately capture the full uncertainty of the background model. As can be seen from figure~\ref{fig:funcs_plot} our choice of function prefers larger values in the region around $m_{\gamma \gamma} \sim 750$~GeV than the function used by ATLAS, especially compared to the case where $a_1 = 0$. However in order to understand what effect this has on any preference for a resonance we perform a full profile likelihood analysis in the next section, as the best-fit parameters will change with a non-zero signal contribution. In order to confirm the suitability of the above empirical functions we have also run our own Monte Carlo simulations. We study 13~TeV proton-proton collisions with two types of final states, $\gamma\gamma$ and jet + $\gamma\gamma$, obtaining the relevant processes using \textsc{MadGraph} \cite{madgraph}. The diagrams from \textsc{MadGraph} are then passed on to \textsc{Pythia} \cite{pythia8,pythia6} for event generation and showering, using the NNPDF2.3 parton distribution function. We then apply the ATLAS diphoton cuts, following closely the event selection procedure in \cite{ATLAS}. Both photons must satisfy $|\eta| < 2.37$ and have a minimum transverse energy $E_T > 25$ GeV.
There are additional mass dependent cuts $E_T^{\gamma_1} > 0.4 m_{\gamma \gamma}$ and $E_T^{\gamma_2} > 0.3 m_{\gamma \gamma}$ where $\gamma_1$ is the photon with the greatest $E_T$, and $\gamma_2$ is the photon with the next highest $E_T$. There is a final isolation cut on each photon, $E_T^{\text{iso}} < 0.05 E_T^{\gamma} + 6$ GeV, where $E_T^{\text{iso}}$ is defined as the magnitude of the vector sum of the transverse momenta of all stable particles, excluding muons and neutrinos, in a cone of radius $\Delta R = \sqrt{(\Delta \eta)^2 +(\Delta \phi)^2} = 0.4$. These Monte Carlo events are binned and scaled to find the expected number of events for a 13 TeV proton collider with $3.2~\mathrm{fb}^{-1}$ of data, which is \begin{equation} N_{\text{exp}} = 3.2~\mathrm{fb}^{-1} \times \frac{A \, \sigma_{\text{diphoton}} \, N_{\text{bin}}}{N_{\text{total}}}, \label{eqn:bin_scaling} \end{equation} where $A$ is the acceptance ratio for standard model events given the cuts, and $\sigma_{\text{diphoton}}$ is the cross section, in fb, calculated by \textsc{Pythia} for the processes generated in \textsc{MadGraph}. We have confirmed that both forms of the empirical function chosen by the ATLAS collaboration fit well, as does our own function. However the results of the simulation are not precise enough to prefer any particular functional dependence. The simplest function with only $b$ and $a_0$ is a perfectly adequate fit to the simulated data, but we note that the mock data is not a perfect fit to the lower energy event rates reported in the ATLAS diphoton results. Using this simulated data to motivate the choice of background function for the real data would therefore be dangerous.
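The bin scaling above is a one-line computation; the helper below is a sketch, with all argument names being our own placeholders for the quantities in the text:

```python
def expected_events(lumi_fb, acceptance, sigma_fb, n_bin, n_total):
    """N_exp = L * A * sigma_diphoton * N_bin / N_total: rescale a Monte
    Carlo bin count N_bin to the integrated luminosity L (in fb^-1),
    given acceptance A and cross section sigma (in fb)."""
    return lumi_fb * acceptance * sigma_fb * n_bin / n_total
```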
\section{Results for different background functions} \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{likes_plot_all.pdf} \caption{Values of the likelihood resulting from an analysis performed using \textsc{MultiNest} plotted against the amplitude of a potential resonance added to one of three different choices for the continuum diphoton background. The lines give the maximum likelihood for each amplitude value. `Signal Amplitude' refers to the prefactor multiplying the signal contribution, which is a resonance normalised to unity with mass between 700 and 800 GeV.} \label{fig:likes_all} \end{figure} In the previous section we discussed how the modelling of the smooth component of the background with an empirical function, for a search for potential resonances in the diphoton data, is complicated by the need to incorporate not only the uncertainties in the empirical function itself, but also in the choice of function. We seek to understand if this was adequately accounted for in the analysis of the ATLAS collaboration. We show in figure~\ref{fig:likes_all} the amplitude of a resonance-like feature compared with the value of the likelihood function, resulting from a profile likelihood analysis performed using \textsc{MultiNest} \cite{multinest1,multinest2,multinest3}, when this feature is added to each of the background empirical functions. Each dot represents a particular set of nuisance parameters, while the lines mark the maximum likelihood for each amplitude value, used for the profile likelihood analysis. The relative height of the peak compared to the likelihood as the amplitude tends to zero then gives a result proportional to $q_{\sigma}$. We use a Breit-Wigner distribution to model the shape of the resonance-like feature, however our results have been cross-checked using a Crystal Ball distribution instead~\cite{ATLAS}. When using the ATLAS background function i.e.
equation~(\ref{eqn:atlas_fit}) with $a_1 = 0$ there is a clear preference for a resonance-like feature around 750~GeV, in agreement with the results from the analysis of the ATLAS collaboration. Indeed the significance of this preference is $3.9\sigma$ when allowing the width of the resonance to vary freely, as detailed also in table~\ref{table_sigmas}. For the Narrow Width Approximation (NWA) we assume a Crystal Ball function for the signal with a width fixed to the photon energy resolution~\cite{ATLAS}. However when allowing $a_1$ to vary and treating it as an additional nuisance parameter in the profile likelihood, the preference for such a feature drops to $2.9\sigma$ with a freely varying width. The effect is even more drastic when using our own empirical function i.e. equation~(\ref{eqn:our_fit}), where the preference for a resonance is now much lower at approximately $2\sigma$ local significance. The fit now also prefers a smaller resonance, as expected since the best-fit form of this function prefers a larger continuum background in the signal region (see figure~\ref{fig:funcs_plot}). Hence it is clear that the significance of a potential resonance-like feature depends strongly on the choice of empirical function used to model the background, as summarised in table~\ref{table_sigmas}. Indeed the sensitivity of the analysis to a change in the continuum background function gives cause for serious concern, and casts doubt on the statistical significance of this feature in the ATLAS 13~TeV diphoton data. In the next section we address the issue of free parameters in the background function, and show that the function chosen by ATLAS underfits the data in the region around 750 GeV.
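A minimal sketch of the profile-likelihood construction: a Poisson likelihood with the ATLAS-form background ($a_1 = 0$) plus a Breit-Wigner signal, maximised over the nuisance parameters on a coarse grid. The grid scan is a crude stand-in for the \textsc{MultiNest} exploration, and all numerical values here are illustrative:

```python
import numpy as np

SQRT_S = 13000.0  # centre-of-mass energy in GeV

def breit_wigner(m, m_x=750.0, gamma=45.0):
    """Unit-normalised (non-relativistic) Breit-Wigner line shape."""
    half = gamma / 2.0
    return half / (np.pi * ((m - m_x) ** 2 + half ** 2))

def model(m, amp, b, a0):
    """Background (ATLAS form with a_1 = 0) plus a resonance of amplitude amp."""
    x = m / SQRT_S
    return (1.0 - x ** (1.0 / 3.0)) ** b * x ** a0 + amp * breit_wigner(m)

def profile_logL(counts, centers, amp, b_grid, a0_grid):
    """Poisson log-likelihood maximised over the nuisance parameters
    (b, a0) on a coarse grid -- a crude stand-in for a MultiNest scan."""
    best = -np.inf
    for b in b_grid:
        for a0 in a0_grid:
            mu = np.clip(model(centers, amp, b, a0), 1e-12, None)
            best = max(best, float(np.sum(counts * np.log(mu) - mu)))
    return best
```

The test statistic of equation~(\ref{eqn:q_sigma}) is then twice the difference between the profiled log-likelihoods at the best-fit amplitude and at zero amplitude.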
\begin{table}[t] \begin{center} \begin{tabular}{ c || c | c } \normalsize{Background function} & \normalsize{Free width} & \normalsize{NWA} \\ \hline \normalsize{$y = (1 - x^{1/3})^b x^{a_0}$} & \normalsize{$3.9 \sigma$} & \normalsize{$3.6 \sigma$} \\ \normalsize{$y = (1 - x^{1/3})^b x^{a_0 + a_1 \mathrm{log} x}$} & \normalsize{$2.9 \sigma$} & \normalsize{$2.6 \sigma$} \\ \normalsize{$y = (1 - x^{1/3})^b (x^{c_0} + \, x^{a_0 + a_1 \mathrm{log} x})$} & \normalsize{$2.0 \sigma$} & \normalsize{$2.0 \sigma$} \\ \end{tabular} \end{center} \caption{Local significance for a resonance-like signal at $m_{\gamma \gamma} \sim 750$~GeV under different assumptions for the functional dependence of the smooth background. The first function is the one used by ATLAS in their analysis. We either allow the width of the resonance to vary freely, or keep it fixed in the case of the Narrow Width Approximation (NWA).} \label{table_sigmas} \end{table} \section{Discussion of results \label{sec:disscussion}} \begin{figure}[b] \includegraphics[width=0.49\textwidth]{bg_plots_comp_parts.pdf} \caption{Comparison of the best-fit form of the function used by the ATLAS collaboration in their own analysis i.e. equation~(\ref{eqn:atlas_fit}) with both components of our own best-fit function i.e. equation~(\ref{eqn:our_fit}) labelled as `Part 1' and `Part 2' (and the sum of these two as the `Total'), and the diphoton data from the ATLAS 13~TeV run.} \label{fig:bg_func_parts} \end{figure} We have found that the significance of a resonance-like feature in the 13 TeV ATLAS diphoton data around an invariant mass of 750~GeV depends sensitively on the choice of function used to model the continuum background. In this section we seek to understand why the analysis performed by the ATLAS collaboration may have over-estimated the significance of a signal feature. An important issue is the number of parameters on which the background function depends \emph{in the signal region}. 
In their own analysis~\cite{ATLAS} the ATLAS Collaboration justified their rather simple two-parameter function (equation~(\ref{eqn:atlas_fit}) with $a_1 = 0$) using a Fisher test, which showed that adding an additional free parameter (i.e. allowing $a_1$ to take on any value) did not improve the fit to the diphoton data enough to justify its inclusion. However here we show that their choice of function is fit almost entirely to the low-energy region. Hence the result of the Fisher test performed by ATLAS has little relevance for the background in the region around $750$~GeV. We illustrate this with figure~\ref{fig:bg_func_parts}, where we show both components of our function i.e. equation~(\ref{eqn:our_fit}) labelled as `Part 1' and `Part 2', compared to the function the ATLAS collaboration use in their analysis i.e. equation~(\ref{eqn:atlas_fit}) with $a_1 = 0$. The important point to note is that the first component of our best-fit function, obtained by fitting to the data without any signal component, is almost identical to the form of the ATLAS best-fit function. This implies that the latter is determined almost entirely by the low-energy points, as expected, while its form at high energy near the potential resonance does not depend strongly on the data in this region. Hence the function is at best under-fitting the background in the signal region, and at worst hardly fitting to the data in this region at all. Indeed the fit of our function in figure~\ref{fig:bg_func_parts} shows clearly that the low and high energy regions (below and above $\sim 500$~GeV) are in significant tension, since they prefer different background spectra. The issue is exacerbated for this particular data-set due to the lack of data at values of $m_{\gamma \gamma}$ larger than $\sim 750$~GeV. This is because the background function is only effectively fixed at the low-energy end, while its value at higher energies has much more freedom.
If instead there were much more data at energies above $\sim 750$~GeV then it is unlikely the choice of background function would make much difference to the final result of the profile likelihood analysis, since all fits would then give the same continuum fit in the signal region. The Fisher test is not the only method of estimating the required number of free parameters. An alternative metric is the Bayesian Information Criterion (BIC) \cite{BIC}, which takes the form $\mathrm{BIC} = -2\ln\mathcal{L} + k \ln n$, where $k$ is the number of parameters in the model and $n = 27$ is the number of data-points. The model with the lowest value of BIC is the best choice, but only if the difference $\Delta \mathrm{BIC} \gtrsim 2$; otherwise both models provide equally good fits. For the simplified ATLAS model with $a_1 = 0$ we have that $k = 2$ and so $\mathrm{BIC} \approx 36.6$ without any signal component, while when $a_1 \neq 0$ we find $\mathrm{BIC} \approx 33.4$ with $k = 3$ and for our own function (equation~(\ref{eqn:our_fit})) we have $k = 4$ and so $\mathrm{BIC} \approx 29.2$. Hence under the BIC our model for the background is justified despite its increased number of free parameters, as it clearly fits the background better over the whole range of invariant masses. Of course fitting a resonance to the data with the simplest background function only has one more degree of freedom compared to our two extra parameter background function, but the change in $\mathrm{BIC}$ is less favourable in that case. Even if this were not the case, only when the background-only hypothesis becomes severely disfavoured could we be sure that a new resonance has appeared in the data. It goes without saying that we very much hope that such a resonance exists and that this note of caution is unnecessary.
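The BIC comparison above is straightforward to reproduce; below is a sketch in which the log-likelihood values are placeholders, not our fitted numbers:

```python
import math

def bic(logL, k, n=27):
    """Bayesian Information Criterion, BIC = -2 ln L + k ln n, with
    n = 27 data points as in the diphoton spectrum; lower is better."""
    return -2.0 * logL + k * math.log(n)

def prefer_extra_params(logL_simple, k_simple, logL_rich, k_rich,
                        n=27, threshold=2.0):
    """Extra parameters are justified only if they lower the BIC by at
    least the conventional threshold Delta BIC ~ 2."""
    return bic(logL_simple, k_simple, n) - bic(logL_rich, k_rich, n) >= threshold
```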
In summary we have doubts over the application of the Fisher test by the ATLAS collaboration, which justified their choice of simplified background function, as their fits are dominated by data points at much lower invariant mass than the tentative resonance. We showed that an alternative test, the Bayesian Information Criterion, which gives the best-fit model weighted by its number of free parameters, has a clear preference for our own empirical model over either of the ATLAS functions even given its additional degrees of freedom. \section{Conclusion} The first data-release from collisions of protons with a centre-of-mass energy of $\sqrt{s} = 13$~TeV at the LHC has led to claims from both the ATLAS and CMS experiments of a preference in their data from the diphoton channel for a resonance-like feature around a diphoton invariant mass of $m_{\gamma \gamma} \sim 750$~GeV. Indeed the analysis performed by the ATLAS collaboration finds at most a $3.9\sigma$ local preference for a resonance-like feature in their diphoton data around 750~GeV~\cite{ATLAS}. The ATLAS analysis was performed by making an assumption on the form of the continuum background for this search, based on knowledge from Monte Carlo simulations. The significance of a resonance in the data above the background therefore depends crucially on how well this empirical function captures the uncertainties of the background, especially in the region of $m_{\gamma \gamma}$ where the resonance is claimed to be present, and where there is little data compared to lower energies. In this work we have quantified to what extent the preference of the data for a resonance-like feature around 750~GeV depends on the choice of this empirical function. To do this we have written down a new function which allows the background around $750$~GeV to be fit independently of the low-energy region (see figure~\ref{fig:funcs_plot}).
By performing a profile likelihood analysis using our own empirical background function and the one used by the ATLAS collaboration we have calculated the significance of a resonance-like feature around $750$~GeV. The results of this analysis are shown in figure~\ref{fig:likes_all} and table~\ref{table_sigmas}. We find that the results of the analysis are highly sensitive to the choice of background function, and that if we use our own form the preference for a resonance-like feature is only at the level of $2 \sigma$ locally. The reason for this disagreement with the analysis of the ATLAS collaboration~\cite{ATLAS} is that their choice of function fits almost entirely to the data at low energy, while underfitting (or extrapolating) in the region around $750$~GeV. Hence while they found, using a Fisher test, that only two free parameters were needed to adequately describe the full continuum background, this was only true for the low-energy region, and not for diphoton invariant masses near the potential resonance. We showed in figure~\ref{fig:bg_func_parts} that an additional component is needed in the function to describe the region above $500$~GeV, and to fully capture the uncertainties in the background around the potential resonance. Additionally, the fact that the Bayesian Information Criterion gives a clear preference for our own model despite its increased number of free parameters suggests that the background function used by the ATLAS collaboration is not adequate. We attempted to model the standard model diphoton background and, like the ATLAS collaboration, found no evidence requiring a more complicated fit than that of equation (\ref{eqn:atlas_fit}) with $a_1=0$.
However, we understand (and have found both in this situation and in others) that the precise modelling of LHC background events with Monte Carlo generators is very challenging: the ATLAS collaboration do not use their Monte Carlo simulations to fit the background events but rather to motivate the functional form of the fit to the background in the data. If the explanation of the deviation between this simple curve and the data is due to an incorrectly chosen functional form for the background, this may point to new insight into standard model physics. In summary we have found that the background is not known well enough, and the high-energy data not yet precise enough, to make such a strong statement on the presence of such a resonance. Instead we find a local significance for such a feature at the level of only $2 \sigma$ in the ATLAS 13~TeV diphoton data if we assume a rather simple extension of the background model. The fact that different statistical treatments might lead to different interpretations clearly indicates the need for more data. We hope to see the next run of the LHC provide this data and continue its groundbreaking test of high energy physics. \section*{Acknowledgements} The research leading to these results has received funding from the European Research Council through the project DARK HORIZONS under the European Union's Horizon 2020 program (ERC Grant Agreement no.\ 648680). JH and MF are also grateful for funding from the UK Science and Technology Facilities Council (STFC). \begingroup\raggedright
\section{Introduction} \label{sec:intro} We have, at present, strong evidence for Dark Energy (DE) from the large amount of available cosmological data \citep[e.g.,][]{Ade:2015xua}. Nonetheless, this evidence is mostly based on precise constraints from the Cosmic Microwave Background (CMB) epoch extrapolated to the present time. Local, or present-day, constraints on DE are, instead, mostly given by SuperNovae (SN) data, which are not yet precise enough for accurately constraining the properties and time evolution of DE \citep[e.g.,][]{Betoule14}. Thus, it is important to look for alternative local DE probes. In this respect, a promising DE-sensitive measurement is given by the late-time Integrated Sachs-Wolfe effect (ISW) on the CMB \citep{sachs_perturbations_1967}. {This effect is imprinted in the angular pattern of the CMB in the presence of a time-varying cosmological gravitational potential, which appears in the case of a non-flat universe \cite{Kamionkowski:1996ra,Kinkhabwala:1998zj}, as well as for a flat one in the presence of DE, but also for various modified gravity theories \cite[e.g.,][]{Song:2006ej,Barreira12}. Thus, for standard General Relativity (GR) and flat cosmology a non-zero ISW implies the presence of DE.} The effect is very small and cannot be well measured using the CMB alone since it peaks at large angular scales (small multipoles, $\ell \lesssim 40$) which are cosmic-variance limited. On the other hand, it was realized that this effect can be more efficiently isolated by cross-correlating the CMB with tracers of the Large-Scale Structure (LSS) of the Universe at low ($z\lesssim1$) redshift \citep{1998NewA....3..275B,boughn_cross_2001}, with most of the signal lying in the range $z\in [0.3,1.5]$ for a standard $\Lambda$CDM cosmological model \citep{Afshordi:2004kz}.
In the past, many ISW analyses were performed using a large variety of tracers at different redshifts \citep{dupe_measuring_2011,2013arXiv1303.5079P,Planck15_ISW,2004PhRvD..69h3524A,2004Natur.427...45B,Fosalba:2003ge, 2007MNRAS.381.1347C,2006MNRAS.372L..23C,2006PhRvD..74f3520G,2008MNRAS.386.2161R,2007MNRAS.377.1085R,xia09, Ferraro:2014msa,Hernandez-Monteagudo:2013vwa,Shajib:2016bes,Pietrobon:2006gh,McEwen:2006my,Vielva:2004zg}. In a few cases, global analyses were performed combining different LSS tracers, giving the most stringent constraints and evidence for the ISW effect at the level of $\sim 4\ \sigma$ \citep{giannantonio_combined_2008,2012MNRAS.426.2581G,ho_correlation_2008}. A related methodology, explored more recently, consists of stacking CMB patches overlapping with locations of large-scale structures, such as superclusters or voids \cite{Granett:2008ju,Papai2011,Ilic:2013cn,Cai2014,Granett2015,Kovacs:2015bda,Kovacs2017}. A further idea, which was sometimes exploited, is to use the redshift information of a given catalog to divide it into different redshift bins, compute the cross-correlation in each bin, and then combine the information. This tomographic approach was pursued, for example, in the study of 2MASS \citep{francis_integrated_2010} or SDSS galaxies \citep{2003astro.ph.Scranton,sawangwit_cross-correlating_2010}. Typically, the use of tomography does not provide strong improvement over the no-binning case, either because the catalog does not contain a large enough number of objects and splitting them increases the shot-noise, or because the redshift range is not well suited for ISW studies. Nonetheless, in recent years, several catalogs with redshift information and with a very large number of objects have become available thanks to the use of photometric redshifts (photo-$z$s) instead of spectroscopic ones.
Although photo-$z$s\ are not as accurate as their spectroscopic counterparts, the former are sufficient for performing a tomographic analysis of the ISW with coarse $z$ bins. Hence we can exploit these large catalogs, which have the advantage of giving a low shot noise even when divided into sub-samples. In this work, we combine for the first time the two above approaches: we use several datasets covering different redshift ranges, and we bin them into redshift sub-samples to perform a global tomography. We show that in this way we are able to improve the significance of the ISW effect from $\sim 4\ \sigma$ without redshift binning to $\sim 5\ \sigma$ when exploiting the full tomographic information. When combining the various catalogs, we take special care to minimize their overlap both in terms of common sources and the same LSS traced, in order not to use the same information many times. This is done by appropriate data cleaning and masking. We then use this improved measurement of the ISW effect to study deviations of DE from the simplest assumption of a cosmological constant. Finally, the correlation data derived in this work and the associated likelihood will soon be made publicly available in the next release of the {\sc MontePython}\footnote{See \url{http://baudren.github.io/montepython.html} and \\ \url{https://github.com/brinckmann/montepython_public} } package~\citep{Audren:2012wb}. \section{Theory} \label{sec:theory} The expression for the cross-correlation angular power spectrum (CAPS) between two fields $I$ and $J$ is given by: \begin{equation} C_{\ell}^{I,J} =\frac{2}{\pi} \int k^2 P(k) [G_{\ell}^I(k)] [G_{\ell}^J(k)] dk, \label{eq:angularspectrum} \end{equation} where $P(k)$ is the present-day power spectrum of matter fluctuations. In the above expression we have assumed an underlying cosmological model, like $\Lambda$CDM, in which the evolution of density fluctuations is separable in wavenumber $k$ and redshift $z$ on linear scales.
A different expression applies, for example, in the presence of massive neutrinos \citep{Lesgourgues:2007ix}, where the $k$ and $z$ evolution is not separable. {Moreover, in the following, we assume standard GR and a flat $\Lambda$CDM model. For studies of the ISW effect for non-zero curvature or modified gravity see \cite{Kamionkowski:1996ra,Kinkhabwala:1998zj,Song:2006ej,Barreira12}.} For the case $I=c$ of the fluctuation field of a catalog of discrete objects, one has \begin{equation} G_{\ell}^c(k)=\int \frac{dN(z)}{dz} b_c(z) D(z) j_{\ell}[k \chi(z)]dz, \label{eq:crossocorr} \end{equation} where $dN(z)/dz$ and $ b_c(z)$ represent the redshift distribution and the galaxy bias factor of the sources, respectively, $j_{\ell}[k \chi(z)]$ are spherical Bessel functions, $D(z)=(P(k,z)/P(k))^{1/2}$ is the linear growth factor of density fluctuations and $\chi(z)$ is the comoving distance to redshift $z$. For the case of cross-correlation with the temperature fluctuation field obtained from the CMB maps ($J=T$), the ISW effect in real space is given by \citep[e.g.,][]{nishizawa_integrated_2014} \begin{equation} \label{eqISW} \Theta(\hat{n})=-2\int \frac{d\Phi(\hat{n}\chi,\chi)}{d\chi}d\chi, \end{equation} where $\Phi$ represents the gravitational potential. In the expression, we neglect a factor of $\exp ( -\tau)$, which introduces an error of the order of $10 \%$, smaller than the typical accuracy achieved in the determination of the ISW itself. Furthermore, using the Poisson and Friedmann equations, \footnote{{Eqs.~\ref{eqISW}-\ref{eqPoisson} are valid assuming GR. 
For modified gravity different appropriate expressions would apply (see, e.g., \cite{Song:2006ej,Barreira12}).} } and considering scales sufficiently within the horizon \begin{equation} \label{eqPoisson} \Phi(k,z)= - \frac{3}{2\, c^2} \frac{\Omega_m}{a(z)} \frac{H_0^2}{k^2} \ \delta(k,z) \end{equation} where $c$ is the speed of light, $a(z)$ is the cosmological scale factor, $H_0$ is the Hubble parameter today, $\Omega_m=\Omega_b+\Omega_c$ is the fractional density of matter today, and $\delta(k,z)$ is the matter fluctuation field in Fourier space, we can write \begin{equation} G_{\ell}^{T}(k)= \frac{3\ \Omega_m}{c^2} \frac{\mathrm{H}_0^2}{k^2} \int \frac{d}{dz}\left(\frac{D(z)}{a(z)}\right) j_{\ell}[k\chi(z)]dz~. \label{eq:isw} \end{equation} Finally, the equations above can be combined through Eq.~\eqref{eq:angularspectrum} to give the CAPS expected for the ISW effect resulting from the correlation between a catalog of extragalactic objects, tracing the underlying mass distribution, and the CMB. Using the Limber approximation \citep{1953ApJ...117..134L} the correlation becomes \citep{ho_correlation_2008} \begin{equation} \begin{split} C_{\ell}^{cT} = \frac{3\Omega_m\mathrm{H}_0^2}{c^3\left(\ell+\frac{1}{2}\right)^2}\int dz \, &b_c(z)\frac{dN}{dz}H(z)D(z)\frac{d}{dz}\left(\frac{D(z)}{a(z)}\right)\\ &\times P\left(k=\frac{\ell+\frac{1}{2}}{\chi(z)}\right). \end{split} \label{eq:isw2} \end{equation} The Limber approximation is very accurate at $\ell>10$ and accurate at the level of $10\%$ at $\ell<10$ \citep{1953ApJ...117..134L}, which is sufficient for the present analysis. In our study, we use the public code {\sc class}\footnote{See \url{http://class-code.net}}~\citep{blas_cosmic_2011} to compute the linear power spectrum of density fluctuations. As an option, this code can compute internally the spectra $C_{\ell}^{cT}$ and $C_{\ell}^{cc}$, for arbitrary redshift distribution functions, using either the Limber approximation or a full integral in $(k,z)$ space.
We prefer, nonetheless, to use the Limber approximation since CAPS calculations are significantly faster. Also, for better performance and more flexibility, we choose to perform these calculations directly inside our python likelihood, reading only $P(k,z)$ from the {\sc class}{} output. We checked on a few examples that our spectra do agree with those computed internally by {\sc class}. \section{CMB maps} \label{sec:cmbmaps} We use CMB maps from the Planck 2015 data release\footnote{See \url{http://pla.esac.esa.int/pla/\#maps}} \citep{Ade:2015xua} which have been produced using four different methods of foreground subtraction: {\tt Commander}, {\tt NILC}, {\tt SEVEM}, and {\tt SMICA}. Each method provides a confidence mask which defines the region of the sky in which the CMB maps can be used. We construct a combined mask as the union of these four confidence masks. This mask is applied on the CMB maps before calculating the cross-correlation. We will use the {\tt SEVEM} map as the default for the analysis. Nonetheless, we have also tested the other maps to check the robustness of the results. The test is described in more detail in Sec.~\ref{sec:tests}. As the ISW effect is achromatic, for further cross-checks we also use CMB maps at different frequencies. In particular we use maps at 100 GHz, 143 GHz, and 217 GHz. The results using these maps are also described in Sec.~\ref{sec:tests}. \section{Additional cosmological datasets} \label{sec:cosmodata} In the following we will perform parameter fits using the ISW data obtained with the cross-correlation. Beside this, in some setups, we will also use other cosmological datasets in conjunction.
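As a sketch of how the Limber integral above is evaluated in practice, the snippet below applies a trapezoidal rule on a redshift grid; the input arrays, their units, and the toy cosmology in any example usage are purely illustrative and are the user's responsibility:

```python
import numpy as np

def limber_cl_cT(ell, z, dndz, b_c, H, D, dDa_dz, chi, P,
                 Om=0.31, H0=67.7):
    """C_ell^{cT} in the Limber approximation: all quantities are arrays
    tabulated on a common z grid (H and H0 in km/s/Mpc, chi in Mpc);
    P is a callable returning the present-day matter power spectrum.
    Illustrative sketch only, not the code used for the analysis."""
    c = 299792.458  # speed of light in km/s
    k = (ell + 0.5) / chi
    f = b_c * dndz * H * D * dDa_dz * P(k)
    # trapezoidal rule over the redshift grid
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
    return 3.0 * Om * H0 ** 2 / (c ** 3 * (ell + 0.5) ** 2) * integral
```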
In particular, we will employ the Planck 2015 public likelihoods\footnote{See \url{http://pla.esac.esa.int/pla/\#cosmology}} \citep{Ade:2015xua} and the corresponding {\sc MontePython}\ interfaces {\tt Planck\_highl\_lite} (for high-$\ell$ temperature), {\tt Planck\_lowl} (for low-$\ell$ temperature and polarization), and {\tt Planck\_lensing} (CMB lensing reconstruction). {The accuracy of the {\tt Planck\_highl\_lite} likelihood (which performs an internal marginalization over all the nuisance parameters except one) with respect to the full Planck likelihood (where the nuisance parameters are not marginalized) has been tested in \cite{Aghanim:2015xee,Ade:2015rim} where the authors find that the difference in the inferred cosmological parameters is at the level of 0.1 $\sigma$.} Finally we will use BAO data from 6dF \citep{Beutler:2011hx}, SDSS DR7 \citep{Ross:2014qpa} and BOSS DR10\&11 \citep{Anderson:2013zyy}, which are implemented as {\tt bao\_boss} and {\tt bao\_boss\_aniso} in {\sc MontePython}. \section{Catalogs of Discrete Sources} \label{sec:catmaps} For the cross-correlation with the CMB, as tracers of matter distribution we use five catalogs of extragalactic sources. As the ISW is a wide-angle effect, they were chosen to cover as large angular scales as possible, and two of them are all-sky. Furthermore, our study does not require exact, i.e.\ spectroscopic, redshift information, thus photometric samples are sufficient. Except for one case, the datasets employed here include individual photo-$z$s\ for each source, which allows us to perform a tomographic approach by splitting the datasets into redshift bins. The catalogs we use span a wide redshift range; see Fig.~\ref{fig:dNdzs} for their individual redshift distributions. Table \ref{tab:catalogs} quantifies their properties (sky coverage, number of sources, mean projected density) as effectively used for the analysis, i.e., after applying both the catalog and CMB masks. 
For a plot of the sky maps and masks of the catalogs described below, and for their detailed description, see~\cite{Cuoco:2017bpv}. Below we provide a short summary of the properties of the datasets. \begin{figure} \includegraphics[width = 0.48\textwidth]{dndzs_logx_new.pdf} \caption{Photometric redshift distributions for the five catalogs used for the cross-correlation. The $dN/dz$ curves are normalized to a unit integral. For the NVSS case the analytical approximation described in the text is used, since no redshift information is available for the individual catalog objects.} \label{fig:dNdzs} \end{figure} \begin{table} \begin{tabular}{lcrc} \hline source & sky & number & mean surface \\ catalog & coverage & of sources & density [deg$^{-2}$] \\ \hline NVSS & 62.3\% & 431,724 & 67.2 \\ 2MPZ & 64.2\% & 661,060 & 24.9 \\ WISE$\times$SCOS & 64.5\% & 17,695,635 & 665\\ SDSS DR12 & 18.7\% & 23,907,634 & 3095 \\ SDSS DR6 QSO & 15.6\% & 461,093 & 71.8 \\ \hline \end{tabular} \caption{\label{tab:catalogs} Statistics of the catalogs used in the analysis. The numbers refer to the area of the sky effectively employed in the analysis, i.e., applying both the catalog and CMB masks.} \end{table} \subsection{2MPZ} \label{sec:2MPZ} As a tracer of the most local LSS in this study we use the 2MASS Photometric Redshift catalog\footnote{Available from \url{http://ssa.roe.ac.uk/TWOMPZ.html}.} \citep[2MPZ,][]{bilicki14}. This dataset was built by merging three all-sky photometric datasets covering optical, near-infrared (IR), and mid-IR passbands: SuperCOSMOS scans of UKST/POSS-II photographic plates \citep{peacock16}, 2MASS Extended Source Catalog \citep{jarrett2000}, and Wide-field Infrared Survey Explorer \citep[WISE,][]{wright10}. Photo-$z$s were subsequently estimated for all the included sources, by calibrating on overlapping spectroscopic datasets. 2MPZ includes $\sim 935,000$ galaxies over almost the full sky.
Part of this area is, however, undersampled due to the Galactic foreground and instrumental artifacts; we thus applied the mask described in \cite{alonso15}. When combined with the CMB mask, this leaves over 660,000 2MPZ galaxies on $\sim64\%$ of the sky (Table~\ref{tab:catalogs}). 2MPZ provides the best-constrained photo-$z$s\ among the catalogs used in this paper. They are practically unbiased ($\langle \delta z \rangle \sim 0$) and their random errors have RMS scatter $\sigma_{\delta z} \simeq 0.015$, to a good accuracy independent of redshift. We show the 2MPZ redshift distribution in Fig.~\ref{fig:dNdzs} with the dot-dashed green line; the peak is at $z\sim 0.06$ while the mean is $\langle z \rangle \sim 0.08$. The overall surface density of 2MPZ is $\sim25$ sources per square degree. For the tomographic analysis we split the catalog into three redshift bins: $z \in [0.00,0.105]$, $[0.105,0.195]$ and $[0.195, 0.30]$. The first two include the bulk of the distribution, approximately divided into two comparable sub-samples, while the third bin explores the tail of the $dN/dz$ where most of the ISW signal is expected. A precursor of 2MPZ, based on 2MASS and SuperCOSMOS only, was used in a tomographic ISW analysis by \cite{francis_integrated_2010}, while an early application of 2MPZ itself to ISW tomography is presented in \cite{Steward14}. In both cases no significant ISW signal was found, consistent with expectations. Another ISW-related application of 2MPZ is presented in \cite{Planck15_ISW}, where it was used to reconstruct the ISW anisotropies caused by the LSS. \subsection{WISE $\times$ SuperCOSMOS} \label{sec:WIxSC} The WISE~$\times$~SuperCOSMOS photo-$z$\ catalog\footnote{Available from \url{http://ssa.roe.ac.uk/WISExSCOS.html}.} \citep[WI$\times$SC,][]{bilicki16} is an all-sky extension of 2MPZ obtained by cross-matching the WISE and SuperCOSMOS samples. WI$\times$SC\ reaches roughly 3 times deeper than 2MPZ and has an almost 30 times larger surface density.
However, it suffers from more severe foreground contamination, and its useful area is $\sim70\%$ of the sky after applying its default mask. This is further reduced to $\sim65\%$ once the Planck mask is also used; the resulting WI$\times$SC\ sample includes about 17.5 million galaxies. WI$\times$SC\ photo-$z$s\ have overall mean error $\langle \delta z \rangle \sim 0$ and distance-dependent scatter of $\sigma_{\delta z} \simeq 0.033(1+z)$. The redshift distribution is shown in Fig.~\ref{fig:dNdzs} with the dashed orange curve. The peak is at $z\sim0.2$, and the majority of the sources are within $z<0.5$. In the tomographic approach, the WI$\times$SC\ sample is divided into four redshift bins: $z \in [0.00,0.09]$, $[0.09,0.21]$, $[0.21, 0.30]$, and $[0.30, 0.60]$, with approximately equal numbers of galaxies in each bin. As far as we are aware, our study employs the WI$\times$SC\ dataset for an ISW analysis for the first time. Various ISW studies based on WISE have been performed in the past \citep{Goto:2012yc,Kovacs13,Ferraro:2014msa,Shajib:2016bes}. However, the samples used there differed significantly from WI$\times$SC, and none included individual redshift estimates that would allow for redshift binning. \subsection{SDSS DR12 photometric} \label{sec:DR12} Currently there are no all-sky photo-$z$\ catalogs available reaching beyond WI$\times$SC. Therefore, in order to look for the ISW signal at $z>0.5$, we used datasets of smaller sky coverage. The first of them, with the largest number density of all employed in this paper, is based on the Sloan Digital Sky Survey Data Release 12 (SDSS-DR12) photo-$z$\ sample compiled by \cite{beck16}; to our knowledge, our study is its first application to an ISW analysis, although earlier versions (DR6 and DR8) were used in \cite{giannantonio_combined_2008,2012MNRAS.426.2581G} (but without $z$ binning). The parent SDSS-DR12 photo-$z$\ dataset includes over 200 million galaxies.
Here, however, we use a subsample described in detail in \cite{Cuoco:2017bpv}, which was obtained via appropriate cleaning as recommended by \cite{beck16}, together with our own subsequent purification of problematic sky areas. In particular, as the SDSS galaxies are distributed in two disconnected regions in the Galactic south and north, with most of the area in the northern part and uneven sampling in the south, we have excluded the southern region from the analysis. After additionally employing the Planck CMB mask, we were left with about 24 million SDSS DR12 sources with mean $\langle z \rangle = 0.34$ and mostly within $z < 0.6$. The resulting sky coverage is $\sim 19\%$ and the mean surface density is $\sim 3100$ deg$^{-2}$. The redshift distribution is shown in Fig.~\ref{fig:dNdzs} with the solid blue line. Thanks to the very large projected density of objects, we were able to split the SDSS-DR12 sample into several redshift bins while keeping the shot noise low in each shell. For the tomographic analysis we divided the dataset into six bins: $z \in [0.0,0.1]$, $[0.1,0.3]$, $[0.3, 0.4]$, $[0.4, 0.5]$, $[0.5, 0.7]$ and $[0.7, 1.0]$. The range $z \in [0.1,0.3]$ is not subdivided further since this redshift range is best covered by WI$\times$SC, where we already have sub-bins. The photo-$z$\ accuracy of SDSS-DR12 depends on the `photo-$z$\ class' defined by \cite{beck16}, and each class has an associated error estimate. Our specific preselection detailed in \cite{Cuoco:2017bpv} leads to an effective photo-$z$\ scatter of $\sigma_{\delta z} =0.022(1+z)$ based on the overall error estimates from \cite{beck16}. \subsection{SDSS DR6 QSO} \label{sec:QSO} As a tracer of high-$z$ LSS, we use a catalog of photometric quasars (QSOs) compiled by \cite{richards09} from the SDSS DR6 dataset (DR6-QSO in the following), used previously in ISW studies by e.g. \cite{giannantonio_combined_2008,xia09} and \cite{2012MNRAS.426.2581G}.
We apply the same preselections as in \cite{xia09}, and the resulting sample includes $6\times10^5$ QSOs on $\sim25\%$ of the sky. We exclude from the analysis three narrow stripes present in the southern Galactic sky and use only the northern region. The DR6-QSO sources are provided with photo-$z$s\ spanning formally $0<z<5.75$ but with a relatively peaked $dN/dz$ and mean $\langle z \rangle \simeq 1.5$ (dotted red line in Fig.~\ref{fig:dNdzs}). For the tomographic analysis, this QSO dataset will be split into three bins of $z\in [0.5,1.0]$, $[1.0,2.0]$, and $[2.0, 3.0]$, chosen so as to have a similar number of objects in each bin. We excluded the QSOs in the range $z\in [0.0,0.5]$ in order to minimize the overlap with the other catalogs in this redshift range. Nonetheless, there are very few DR6-QSO catalog objects at these redshifts, so this choice has only a very minor impact on the results. The typical photo-$z$\ accuracy of this dataset is $\sigma_{\delta z} \sim 0.24$, as reported by \cite{richards09}, and we will use this number for the extended modeling of the underlying $dN/dz$s per redshift bin in Sec.~\ref{sec:tests}. \subsection{NVSS} \label{sec:nvss} The NRAO VLA Sky Survey \citep[NVSS,][]{1998AJ....115.1693C} is a catalog of radio sources, most of which are extragalactic. This sample has already been used for multiple ISW studies \citep[e.g.][]{boughn_cross_2001,Pietrobon:2006gh,Vielva:2004zg,McEwen:2006my,2008MNRAS.386.2161R}. The dataset covers the whole sky available to the VLA instrument; after appropriate cleanup of likely Galactic entries and artifacts, the NVSS sample includes $\sim5.7\times10^5$ objects flux-limited to $> 10$ mJy, located at declinations $\delta\gtrsim -40^\circ$ and Galactic latitudes $|b| > 5^{\circ}$. This is the only one of the datasets considered in this work which does not provide even crude redshift information for the individual sources.
We thus use it without tomographic binning and, where relevant, assume its $dN/dz$ to follow the model of \cite{dezotti10} (purple short-long-dashed line in Fig.~\ref{fig:dNdzs}). This sample spans the broadest redshift range of all the considered catalogs, namely $0<z<5$. \subsection{Masks} In the correlation of the CMB with each catalog we use the CMB mask, described in Sec.~\ref{sec:cmbmaps}, combined with the specific mask of the given catalog. Besides this, we define specific masks which we use when combining the signal from the different catalogs, in order to avoid including the same information twice and to avoid having to take into account the cross-correlations between various tracers of the same LSS. We proceeded in the following way. \begin{itemize} \item SDSS catalogs (i.e. SDSS DR6 QSOs and SDSS DR12 galaxies) are used without additional masks. When combining the information with other catalogs, however, we exclude the first SDSS DR12 bin, since the region $z \in [0.0,0.1]$ is best covered by 2MPZ. \item To avoid correlations with the SDSS catalogs, when using all the remaining ones (i.e. NVSS, 2MPZ, WI$\times$SC) we apply a mask which is the complement of the joint mask of SDSS DR12 galaxies and SDSS DR6 QSOs (in short, the SDSS mask in the following). \item For 2MPZ and WI$\times$SC\ it is not possible to define mutually exclusive masks since both these datasets cover practically the same part of the sky. Nonetheless, we use them together, since WI$\times$SC\ was built excluding most of the objects already contained in 2MPZ \citep{bilicki16}. The two catalogs thus have practically no common sources. In this way the correlation between the two datasets is significantly suppressed, although not totally, since both trace the same underlying LSS in the overlapping redshift ranges.
We will, however, not consider the first bin, $z \in [0.0,0.1]$, of WI$\times$SC\ in the combined analysis, since in this redshift range 2MPZ has better redshift determination and basically no stellar contamination. Nonetheless, as we will show in Sec.~\ref{sec:method}, the evidence for the ISW in the range $z \in [0.0,0.2]$, where 2MPZ and WI$\times$SC\ have most of their overlap, is very small, so, in practice, this has only a marginal effect on the final ISW significance. \item Similarly, for NVSS, 2MPZ and WI$\times$SC\ it is not possible to define mutually exclusive masks due to the large common area of the sky. In this case, we note that 2MPZ and WI$\times$SC\ cover only the low-redshift tail of NVSS. Thus, the overlap and correlation among them are minimal. \end{itemize} We will thus use the above setup when reporting combined significances of the ISW from the different catalogs. For simplicity, we will use the same setup also to derive the auto-correlations of the single catalogs. In this case the significances could be increased slightly for NVSS, 2MPZ, and WI$\times$SC\ if their proper masks were used, but we checked that the improvement is only marginal. \begin{figure*} \includegraphics[width=0.45\textwidth]{autocorrelation/sdss_bin_1.pdf} \includegraphics[width=0.45\textwidth]{crosscorrelation/sdss_bin_2.pdf} \caption{Left: Example of a measured source-catalog auto-correlation and the best-fit model with free galaxy bias, for the case of SDSS-DR12 in the labeled $z$ bin. Right: example of a measured cross-correlation between sources and CMB temperature and the best-fit model, for the case of SDSS-DR12 in the labeled $z$ bin. Dots refer to the measured individual multipoles, while data points with error bars refer to binned measurements. } \label{fig:biasexample} \end{figure*} \section{Cross-Correlation Analysis} \label{sec:corranalysis} In the previous section we have presented the catalogs of extragalactic objects that we use in the analysis.
Their input format is that of a 2D pixelized map of object counts $n(\hat{\Omega}_i)$, where $\hat{\Omega}_i$ specifies the angular coordinate of the $i$-th pixel. For the cross-correlation analysis we consider maps of normalized counts $n(\hat{\Omega}_i)/\bar{n}$, where $\bar{n}$ is the mean object density in the unmasked area, and CMB temperature maps, also pixelized with a matching angular resolution. In our analysis we compute both the angular 2-point cross-correlation function (CCF), $w^{(cT)}(\theta)$, and its harmonic counterpart, the cross angular power spectrum (CAPS), $\bar C_\ell^{(cT)}$. However, we restrict the quantitative analysis to the CAPS only, since its multipoles are almost uncorrelated, especially after binning; the covariance matrix is therefore close to diagonal, which simplifies the comparison between models and data. Similarly, we also compute the auto-correlation power spectrum of the catalogs (APS) and the related auto-correlation function (ACF).
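The construction of the normalized count maps can be sketched in a few lines. The following pure-Python illustration (the function name and toy inputs are ours, not part of any analysis pipeline) builds $n(\hat{\Omega}_i)/\bar{n}$ from a pixelized count map and a binary mask:

```python
def normalized_counts(counts, mask):
    """Build the normalized count map n(Omega_i)/n_bar.

    counts : per-pixel object counts n(Omega_i)
    mask   : 1 for pixels inside the combined catalog+CMB mask, 0 otherwise
    n_bar is the mean count computed over the unmasked pixels only.
    """
    unmasked = [n for n, m in zip(counts, mask) if m]
    n_bar = sum(unmasked) / len(unmasked)
    # Masked pixels carry no information and are set to zero.
    return [n / n_bar if m else 0.0 for n, m in zip(counts, mask)]
```

In practice the maps are HEALPix-pixelized and the mask is the combination of the CMB and catalog masks described above.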
\begin{table*} \begin{tabular}{cccc|cc|cc} \hline catalog & z & b & $\chi^2_{min}$ & $b_{\rm Halofit}$ & $\chi^2_{min}$& $b_{\rm Halofit+\sigma_{\delta z}}$ & $\chi^2_{min}$\\ \hline SDSS & 0-0.1 & $ 0.70 \pm 0.02 $ & 3.59& $ 0.69 \pm 0.02 $ & 3.76& $ 0.71 \pm 0.02 $ & 4.11 \\ & 0.1-0.3 & $ 1.03 \pm 0.03 $ & 1.71 & $ 1.03 \pm 0.03 $ & 1.68& $ 1.02 \pm 0.03 $ & 1.63 \\ & 0.3-0.4 & $ 0.88 \pm 0.03 $ & 0.64 & $ 0.88 \pm 0.03 $ & 0.63& $ 0.87 \pm 0.03 $ & 0.61 \\ & 0.4-0.5 & $ 0.79 \pm 0.02 $ & 4.84 & $ 0.80 \pm 0.02 $ & 4.65& $ 0.84 \pm 0.03 $ & 4.99 \\ & 0.5-0.7 & $ 1.14 \pm 0.04 $ & 6.16 & $ 1.13 \pm 0.04 $ & 5.86& $ 1.23 \pm 0.04 $ & 6.35 \\ & 0.7-1 & $ 1.02 \pm 0.11 $ & 15.04 & $ 1.03 \pm 0.11 $ & 14.99& $ 1.23 \pm 0.13 $ & 15.16 \\ WIxSC & 0-0.09 & $ 0.62 \pm 0.03 $ & 0.46 & $ 0.60 \pm 0.03 $ & 0.38 & $ 0.57 \pm 0.03 $ & 0.28 \\ & 0.09-0.21 & $ 0.89 \pm 0.03 $ & 2.38 & $ 0.87 \pm 0.03 $ & 2.67 & $ 0.88 \pm 0.03 $ & 2.62 \\ & 0.21-0.3 & $ 0.80 \pm 0.02 $ & 10.07 & $ 0.81 \pm 0.02 $ & 10.14 & $ 0.80 \pm 0.02 $ & 10.09 \\ & 0.3-0.6 & $ 0.96 \pm 0.03 $ & 5.62 & $ 1.03 \pm 0.04 $ & 5.88 & $ 1.24 \pm 0.04 $ & 5.55 \\ QSO & 0-1 & $ 1.55 \pm 0.16 $ & 5.9 & $ 1.56 \pm 0.16 $ & 5.93 & $ 1.45 \pm 0.15 $ & 4.97 \\ & 0.5-1 & $ 1.54 \pm 0.26 $ & 3.09 & $ 1.55 \pm 0.26 $ & 3.07 & $ 1.52 \pm 0.26 $ & 3.07 \\ & 1-2 & $ 2.64 \pm 0.27 $ & 3.61 & $ 2.66 \pm 0.27 $ & 3.59 & $ 2.61 \pm 0.27 $ & 3.6 \\ & 2-3 & $ 3.19 \pm 0.50 $ & 7.08 & $ 3.21 \pm 0.51 $ & 7.05 & $ 3.51 \pm 0.55 $ & 7.08 \\ 2MPZ & 0-0.105 & $ 1.09 \pm 0.03 $ & 4.41 & $ 1.03 \pm 0.03 $ & 1.30 & $ 1.03 \pm 0.03 $ & 1.26 \\ & 0.105-0.195 & $ 1.12 \pm 0.04 $ & 2.00 & $ 1.12 \pm 0.04 $ & 2.07 & $ 1.19 \pm 0.04 $ & 2.17 \\ & 0.195-0.3 & $ 1.84 \pm 0.09 $ & 6.54 & $ 1.86 \pm 0.09 $ & 6.67& $ 2.03 \pm 0.09 $ & 6.34 \\ NVSS & 0-6 & $ 2.18 \pm 0.08 $ & 3.02 & $ 2.04 \pm 0.08 $ & 0.64 & ---&---\\ \hline \hline catalog & z & b & $\chi^2_{min}$ & $b_{\rm Halofit}$ & $\chi^2_{min}$ & $b_{\rm Halofit+\sigma_{\delta z}}$ & $\chi^2_{min}$ \\ \hline SDSS & 0-1 & $ 1.34 \pm 0.04 $ & 1.25& $
1.35 \pm 0.04 $ & 1.27 & $ 1.39 \pm 0.04 $ & 1.59 \\ WIxSC & 0-0.6 & $ 1.08 \pm 0.03 $ & 3.15 & $ 1.07 \pm 0.03 $ & 3.74& $ 1.12 \pm 0.03 $ & 3.94 \\ QSO & 0-3 & $ 2.67 \pm 0.23 $ & 2.77 & $ 2.68 \pm 0.23 $ & 2.76& $ 2.66 \pm 0.23 $ & 2.4 \\ 2MPZ & 0-0.3 & $ 1.23 \pm 0.04 $ & 5.19& $ 1.17 \pm 0.04 $ & 2.04 & $ 1.20 \pm 0.04 $ & 1.98 \\ \hline \end{tabular} \caption{\label{tab:biases}Linear biases for the different redshift bins of the various catalogs fitted for a fixed cosmological model. The reported errors on the bias are derived from the fit of Eq.\ \eqref{eq:chi21}; goodness of fit is quantified in the relevant $\chi^2$ columns. The $\chi^2$ refers to the case of a fit with 4 bins in the multipole range 10-60.} \end{table*} We use the {\it PolSpice}\footnote{See \url{http://www2.iap.fr/users/hivon/software/PolSpice/}} statistical toolkit \citep{szapudi01,chon04,efstathiou04,challinor05} to estimate the correlation functions and power spectra. {\it PolSpice} automatically corrects for the effect of the mask. In this respect, we point out that the effective geometry of the mask used for the correlation analysis is obtained by combining that of the CMB maps with those of each catalog of astrophysical objects. The accuracy of the {\it PolSpice} estimator has been assessed in \cite{xia15} by comparing the measured CCF with the one computed using the popular Landy-Szalay method \citep{ls93}. The two were found to be in very good agreement. {\it PolSpice} also provides the covariance matrix for the angular power spectrum, $\bar V_{\ell\ell'}$ \citep{2004MNRAS.349..603E}. For the case of source catalog APS a further step is required. Contrary to the CAPS, the APS contains shot noise due to the discrete nature of the objects in the map. 
The shot noise is constant in multipole and can be expressed as $C_{\rm N}=4 \pi f_{\rm sky}/N_{\rm gal}$, where $f_{\rm sky}$ is the fraction of sky covered by the catalog in the unmasked area and $N_{\rm gal}$ is the number of catalog objects, again in the unmasked area. This shot noise has been subtracted from our final estimated APS. The Planck point spread function and the map pixelization in principle affect the estimate of the CAPS. However, the CAPS contains information on the ISW only up to $\ell \sim 100$, where these effects are negligible, so we will not consider them further. Finally, to reduce the correlation between nearby multipoles induced by the angular mask, we use an $\ell$-binned version of the measured CAPS. The number of bins and the maximum and minimum $\ell$ used in the analysis will be varied to assess the robustness of the results. We indicate the binned CAPS with the same symbol as the unbinned one, $C_\ell^{(cT)}$; it should be clear from the context which one is used. The $C_\ell^{(cT)}$ in each bin is given by the simple unweighted average of the $C_\ell^{(cT)}$ within the bin. For the binned $C_\ell^{(cT)}$ we build the corresponding covariance matrix as a block average of the unbinned covariance matrix $V_{\ell\ell'}$, i.e., $\sum_{\ell\ell'} V_{\ell\ell'}/(\Delta\ell\,\Delta\ell')$, where $\Delta\ell, \Delta\ell'$ are the widths of the two multipole bins, and $\ell, \ell'$ run over the multipoles of the first and the second bin, respectively. The binning procedure is very efficient in removing the correlation among nearby multipoles, resulting in a block covariance matrix that is, to a good approximation, diagonal. We will nonetheless use the full block covariance matrix in the following, although we have checked that using the diagonal only gives minor differences.
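The shot-noise level and the block averaging of the covariance matrix can be sketched as follows (a minimal pure-Python illustration; the function names and toy inputs are ours):

```python
import math

def shot_noise(f_sky, n_gal):
    """Constant shot-noise level C_N = 4*pi*f_sky / N_gal (unmasked area)."""
    return 4.0 * math.pi * f_sky / n_gal

def bin_covariance(cov, bins):
    """Block-average the unbinned covariance V_{ll'} over multipole bins.

    cov  : square matrix (list of lists) with cov[l][lp] = V_{l l'}
    bins : list of lists of multipole indices defining each bin
    Returns sum_{l in b, l' in b'} V_{l l'} / (dl * dl') for each bin pair.
    """
    out = [[0.0] * len(bins) for _ in bins]
    for i, bi in enumerate(bins):
        for j, bj in enumerate(bins):
            total = sum(cov[l][lp] for l in bi for lp in bj)
            out[i][j] = total / (len(bi) * len(bj))
    return out
```

For a nearly diagonal unbinned covariance, the off-diagonal blocks average close to zero, which is why the binned matrix is approximately diagonal.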
When showing CAPS plots, however, we use the diagonal terms to plot the errors on the $C_\ell$, $\left(\Delta C_{\ell} \right)^2=\sum_{\ell \ell'} V_{\ell\ell'}/\Delta\ell^2$, where the sum runs over the multipoles of the bin contributing to $C_\ell$. \begin{table*} \begin{tabular}{lllrrrr} \hline catalog & z & $A_{\rm ISW}$ & $\frac{A}{\sigma_A}$ & $\chi^2_0$ & $\chi^2_{min}$ & $\Delta{\chi^2}$ \\ \hline SDSS & 0-0.1 & $ 0.23 \pm 3.35 $ & 0.07 & 1.224 & 1.219 & 0.005\\ & 0.1-0.3 & $ 0.90 \pm 1.03 $ & 0.87 & 3.89 & 3.12 & 0.76 \\ & 0.3-0.4 & $ 1.94 \pm 1.24 $ & 1.57 & 4.47 & 2.01 & 2.45 \\ & 0.4-0.5 & $ 2.77 \pm 1.36 $ & 2.03 & 6.57 & 2.45 & 4.12 \\ & 0.5-0.7 & $ 2.59 \pm 1.13 $ & 2.28 & 9.28 & 4.06 & 5.22 \\ & 0.7-1 & $ 1.00 \pm 2.72 $ & 0.37 & 6.76 & 6.62 & 0.13 \\ WIxSC & 0-0.09 & $ 5.24 \pm 4.86 $ & 1.08 & 2.84 & 1.68 & 1.16 \\ & 0.09-0.21 & $ 0.34 \pm 1.01 $ & 0.33 & 4.63 & 4.52 & 0.11 \\ & 0.21-0.3 & $ 1.04 \pm 0.94 $ & 1.1 & 3.62 & 2.4 & 1.21 \\ & 0.3-0.6 & $ 1.33 \pm 0.94 $ & 1.41 & 4.91 & 2.92 & 1.99 \\ QSO & 0-1 & $ 2.50 \pm 1.64 $ & 1.52 & 5.95 & 3.64 & 2.31 \\ & 0.5-1 & $ 2.39 \pm 1.65 $ & 1.45 & 7.46 & 5.34 & 2.11 \\ & 1-2 & $ 2.49 \pm 1.64 $ & 1.52 & 3.99 & 1.68 & 2.31 \\ & 2-3 & $ 1.83 \pm 4.80 $ & 0.38 & 3.11 & 2.96 & 0.14 \\ 2MPZ & 0-0.105 & $ 1.25 \pm 3.43 $ & 0.36 & 1.26 & 1.13 & 0.13 \\ & 0.105-0.195 & $ 0.53 \pm 1.77 $ & 0.3 & 1.12 & 1.03 & 0.09 \\ & 0.195-0.3 & $ 1.04 \pm 1.47 $ & 0.71 & 1.66 & 1.16 & 0.5 \\ NVSS & 0-6 & $ 1.70 \pm 0.57 $ & 2.97 & 14.9 & 6.11 & 8.79 \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{llrrrr} \hline catalog & $A_{\rm ISW}$ & $\frac{A}{\sigma_A}$ & $\chi^2_0$ & $\chi^2_{min}$ & $\Delta{\chi^2}$ \\ \hline SDSS & $ 1.89 \pm 0.57 $ & 3.29 & 30.96 & 20.11 & 8.46 \\ WIxSC & $ 0.93 \pm 0.56 $ & 1.67 & 13.16 & 10.39 & 2.76 \\ Quasars & $ 2.41 \pm 1.13 $ & 2.13 & 14.55 & 10.01 & 2.99 \\ 2MPZ & $ 0.87 \pm 1.07 $ & 0.81 & 4.04 & 3.38 & 0.65 \\ SDSS+WIxSC & $ 1.39 \pm 0.40 $ & 3.49 & 44.12 & 31.94 & 11.21 \\ 
SDSS+Quasars & $ 1.99 \pm 0.51 $ & 3.9 & 45.51 & 30.28 & 11.45 \\ SDSS+WIxSC+Quasars & $ 1.51 \pm 0.38 $ & 4 & 58.67 & 42.66 & 14.2 \\ SDSS+WIxSC+Quasars+NVSS+2MPZ & $ 1.51 \pm 0.30 $ & 5 & 77.61 & 52.61 & 22.16 \\ \hline SDSS+WIxSC+Quasars+NVSS & $ 1.56 \pm 0.31 $ & 4.97 & 73.57 & 48.85 & 21.52 \\ SDSS+WIxSC+NVSS+2MPZ & $ 1.44 \pm 0.31 $ & 4.6 & 63.06 & 41.92 & 19.17 \\ SDSS+Quasars+NVSS+2MPZ & $ 1.75 \pm 0.36 $ & 4.88 & 64.45 & 40.67 & 19.41 \\ SDSS+WIxSC+Quasars+2MPZ & $ 1.44 \pm 0.36 $ & 4.04 & 62.71 & 46.35 & 14.85 \\ WIxSC+Quasars+NVSS+2MPZ & $ 1.36 \pm 0.35 $ & 3.84 & 46.65 & 31.9 & 13.71 \\ \hline \end{tabular} \caption{\label{tab:AISW}Summary of the measured ISW and related significances for the single redshift bins of each catalogs (top table) and for various combinations of the catalogs, where, in the latter case, also the individual redshift bins of each catalog were combined (bottom table). {The last five rows give the cases in which a single catalog is excluded from the fit each time.} The $\chi^2$ refers to the case of a fit with 4 bins in the multipole range 4-100.} \end{table*} \begin{table*} \begin{tabular}{llrrrr} \hline catalog & $A_{\rm ISW}$ & $\frac{A}{\sigma_A}$ & $\chi^2_0$ & $\chi^2_{min}$ & $\Delta{\chi^2}$ \\ \hline SDSS & $ 0.96 \pm 0.65 $ & 1.49 & 5.3 & 3.09 & 2.21 \\ WIxSC & $ 0.62 \pm 0.61 $ & 1.02 & 5.28 & 4.24 & 0.65 \\ Quasars & $ 1.28 \pm 0.63 $ & 2.03 & 5.55 & 1.41 & 3.94 \\ 2MPZ & $ 0.90 \pm 2.32 $ & 0.39 & 0.87 & 0.72 & 0.15 \\ NVSS & $ 1.70 \pm 0.57 $ & 2.97 & 14.9 & 6.11 & 8.79 \\ SDSS+WIxSC & $ 0.94 \pm 0.42 $ & 2.23 & 18.47 & 13.48 & 4.96 \\ SDSS+Quasars & $ 1.32 \pm 0.56 $ & 2.35 & 19.85 & 14.33 & 5.2 \\ SDSS+WIxSC+Quasars & $ 1.12 \pm 0.40 $ & 2.84 & 33.02 & 24.97 & 7.95 \\ SDSS+WIxSC+Quasars+NVSS & $ 1.31 \pm 0.33 $ & 4.02 & 47.91 & 31.76 & 15.27 \\ SDSS+WIxSC+Quasars+NVSS+2MPZ & $ 1.27 \pm 0.31 $ & 4.08 & 51.95 & 35.28 & 15.92 \\ \hline \end{tabular} \caption{\label{tab:AISWnocatbin}Summary of the measured ISW and 
related significances for the case of no redshift binning of the catalogs. Various combinations of the catalogs are shown. } \end{table*} \begin{table} \begin{center} \begin{tabular} { l c} Parameter & 68\% limits\\ \hline {\boldmath$10^{-2}\omega_{b }$} & $2.226\pm 0.019 $\\ {\boldmath$\omega_{cdm } $} & $0.1187\pm 0.0012 $\\ {\boldmath$n_{s } $} & $0.9674\pm 0.0043 $\\ {\boldmath$10^{-9}A_{s } $} & $2.152\pm 0.052 $\\ {\boldmath$h $} & $0.6780\pm 0.0053 $\\ {\boldmath$\tau_{\rm reio } $} & $0.068\pm 0.013 $\\ {\boldmath$10^{-2}A_{\rm Planck }$} & $100.01\pm 0.25 $\\ \hline {\boldmath$\Omega_{\Lambda }$} & $0.6916\pm 0.0071 $\\ \end{tabular} \end{center} \caption{\label{tab:PlanckBAO}Results of the {\sc MontePython}{} fit to Planck + BAO data only. Here $\Omega_{\Lambda }$ is a derived parameter and $A_{\rm Planck }$ a Planck nuisance parameter.} \end{table} \section{Derivation of the ISW significance} \label{sec:method} In this section we illustrate the two methods we use to quantify the significance of the ISW. For the first method we will assume a flat $\Lambda$CDM model with cosmological parameters $\Omega_{\rm b} h^2 = 0.022161$, $\Omega_{\rm c} h^2 = 0.11889$, $\tau= 0.0952$, $h = 0.6777$, $\ln{10^{10}A_{\rm s}} = 3.0973$ at $k_0=0.05$ Mpc$^{-1}$, and $n_{\rm s} =0.9611$, in accordance with the most recent Planck results \citep{Ade:2015xua}. \subsection{Method 1} This is the usual method employed in previous publications to study the significance of the ISW. In this case we fix the cosmological model to the best-fit one measured by Planck, and we derive with {\sc class}{} the matter power spectrum $P(k,z)$, which is used to calculate the expected auto-correlation $C_{\ell}$ for each catalog in the appropriate redshift bins. The measured auto-correlation is then used to fit the linear bias $b$, which enters the predicted $C_{\ell}$ as the overall factor $b^2$. An example of this fit is shown in the left panel of Fig.~\ref{fig:biasexample}.
A simple $\chi^2$ over the bins of the auto-correlation is used for the fit: \begin{equation} \chi^2_{AC} \equiv \chi^2(b^2)=\sum_{\rm \ell~bins} {\frac{(\hat{C}^{\rm c}_{\ell}(b^2) - C^{\rm c}_{\ell})^2}{(\Delta C^{\rm c}_{\ell})^2} } \, , \label{eq:chi21} \end{equation} where $\hat{C}^{\rm c}_{\ell}$ and $C^{\rm c}_{\ell}$ represent the model and the measured APS, respectively, and the sum is over all $\ell$ bins. As mentioned in Sec.~\ref{sec:corranalysis}, we tested that using the full covariance matrix instead of the diagonal expression in the $\chi^2$ above does not give appreciable differences. Table~\ref{tab:biases} summarizes the various measured biases and the default binning used for the auto-correlations. We tested the robustness of the fitted biases by changing the number of bins from 4 to 6 and the maximum $\ell$ from 40 to 80, and we found stable results, with variations of the order of 10\%. A maximum $\ell$ of 40-80 is chosen since above this range non-linear effects typically become significant. As the default case, we use 4 bins in the range 10-60. As a further test we checked the impact of using non-linear corrections to the matter power spectrum to model the auto-correlation of the catalogs. The non-linear corrections were implemented through the version of Halofit~\citep{Takahashi:2012em} included in {\sc class}{} v2.6.1. The last 2 columns of Table~\ref{tab:biases} show the bias and the best-fit $\chi^2$ obtained using the non-linear model. It can be seen that the biases obtained with and without non-linear corrections are fully compatible. The only exception is the first redshift bin of 2MPZ, where the best-fit bias changes at the $2\ \sigma$ level. More importantly, the fit shows a visible improvement, from $\chi^2\sim4.4$ to $\chi^2\sim1.3$. This is expected, since at these low redshifts even low $\ell$s correspond mostly to small, non-linear physical scales.
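Since the model enters Eq.~\eqref{eq:chi21} linearly through $b^2$, the minimization has a closed-form solution. A minimal sketch, assuming a diagonal covariance and hypothetical input arrays (the function name and inputs are ours):

```python
import math

def fit_bias(template, data, sigma):
    """Fit b^2 in the model C_hat_ell = b^2 * template_ell.

    template : predicted APS for unit bias, one entry per ell bin
    data     : measured APS per ell bin
    sigma    : 1-sigma error per ell bin (diagonal covariance)
    Returns (b, sigma_b, chi2_min).
    """
    w = [1.0 / s ** 2 for s in sigma]
    den = sum(wi * t * t for wi, t in zip(w, template))
    b2 = sum(wi * t * d for wi, t, d in zip(w, template, data)) / den
    b = math.sqrt(b2)
    sigma_b = (1.0 / math.sqrt(den)) / (2.0 * b)  # propagate from b^2 to b
    chi2 = sum(wi * (b2 * t - d) ** 2 for wi, t, d in zip(w, template, data))
    return b, sigma_b, chi2
```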
As we show below, however, 2MPZ presents little or no imprint of the ISW effect, so we conclude that the use of the linear $P(k,z)$ has a negligible impact on the study of the ISW effect in this analysis. As an additional comment on the galaxy biases reported in Table~\ref{tab:biases}, we note that the $\sim$10\% variation quoted above is typically larger than the statistical errors given in that Table, the latter being sometimes only a few \%; this means that the bias errors are systematics- rather than statistics-limited. Also, in some cases, most notably in the $z\in [0.7, 1.0]$ bin of SDSS DR12 galaxies, the minimum $\chi^2$ is quite large, indicating a poor quality of the fit. This is also visible in some of the AC plots provided in Appendix \ref{acplots}. It is likely related to non-uniformities of the catalogs, which are more severe in the tails of the redshift distribution and in some cases lead to excessively large measured low-$\ell$ AC power. Therefore, in such instances, the small statistical errors on $b$ should be taken with care. In general, we stress that the precise determination of the bias error is not crucial in this analysis, which is, instead, focused on the determination of the significance of the ISW effect. For this purpose, the error, and even the value, of the bias have only a limited impact; see the further discussion below. In the second step, all the galaxy biases are fixed to the best-fit values derived above, and only the measured cross-correlations are used.
At this point only a single parameter $A_{\rm ISW}$ is fitted, using as data either a single measured cross-correlation or a combination of them, with the $\chi^2$ statistic: \begin{equation} \chi^2_{CC} \equiv \chi^2(A_{\rm ISW})=\sum_{z-\rm{bins} } \sum_{\rm cat.} \sum_{\rm \ell~bins} {\frac{(A_{\rm ISW} \hat{C}^{\rm T c}_{\ell} - C^{\rm T c}_{\ell})^2}{(\Delta C^{\rm T c}_{\ell})^2} } \, , \label{eq:chi22} \end{equation} where $\hat{C}^{\rm T c}_{\ell}$ and $C^{\rm T c}_{\ell}$ represent the model (for the standard $\Lambda$CDM cosmological model considered) and the measured catalog -- CMB temperature cross-correlation for a given redshift bin, respectively; the sums run over the $\ell$ bins, the catalogs, and the redshift bins. The linear parameter $A_{\rm ISW}$ quantifies the agreement with the above standard model expectation. In the denominator we use the error provided by {\it PolSpice}, discussed in the previous section. In principle one should use an error that takes the model into account; for binned data, however, this is a small effect (see for example the discussion in \cite{Fornasa:2016ohl}). An example of a measured cross-correlation and the fit to the model is shown in the right panel of Fig.~\ref{fig:biasexample}. Table~\ref{tab:AISW} summarizes the results of the fit for each single $z$-bin of each catalog, for each catalog combining the different $z$-bins, and for different combinations of the catalogs, where, again, for each catalog $z$-binning has been used. For the default case we use four multipole bins between $\ell$ of 4 and 100, but, again, we have verified that the results are stable when changing the number of bins from 4 to 6 and the maximum $\ell$ from 60 to 100. This is expected, since the ISW effect decreases rapidly with $\ell$, and not much signal is expected beyond $\ell \sim 60$.
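Because Eq.~\eqref{eq:chi22} is linear in $A_{\rm ISW}$, the fit again reduces to a weighted least squares with a closed-form solution. The sketch below (our own illustration, assuming a diagonal covariance) also returns the null-hypothesis $\chi^2$ entering the test statistic discussed next:

```python
import math

def fit_A_isw(model, data, sigma):
    """Fit the single amplitude A_ISW by weighted least squares.

    model, data, sigma : flattened over all ell bins, z bins and catalogs
    Returns (A, sigma_A, chi2_min, chi2_null), with chi2_null = chi^2(A=0).
    """
    w = [1.0 / s ** 2 for s in sigma]
    den = sum(wi * m * m for wi, m in zip(w, model))
    A = sum(wi * m * d for wi, m, d in zip(w, model, data)) / den
    sigma_A = 1.0 / math.sqrt(den)
    chi2_min = sum(wi * (A * m - d) ** 2 for wi, m, d in zip(w, model, data))
    chi2_null = sum(wi * d * d for wi, d in zip(w, data))
    return A, sigma_A, chi2_min, chi2_null

def significance(ts):
    """Significance in sigma for a TS with one fitted parameter."""
    return math.sqrt(ts)
```

For instance, a test statistic of 22.16 (the combined-fit value quoted in the text) corresponds to a significance of about 4.7$\sigma$.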
To quantify the significance of the measurement we use as test statistic the quantity \begin{equation} {\rm TS} = \chi^2(0)-\chi_{min}^2 \; , \end{equation} where $\chi_{min}^2$ is the minimum $\chi^2$, and $\chi^2(0)$ is the $\chi^2$ of the null hypothesis of no ISW effect, i.e.\ of the case $A_{\rm ISW} = 0$. The TS is expected to behave asymptotically as a $\chi^2$ distribution with a number of degrees of freedom equal to the number of fitted parameters, allowing us to derive the significance level of a measurement from the measured TS. In this case, since there is only one fitted parameter, the significance in sigma is simply $\sqrt{\rm TS}$. From Table~\ref{tab:AISW} one can see that the maximum significance achieved with Method 1 when using all the catalogs in combination is $\sqrt{22.16}=4.7 \sigma$. From the different results it can also be seen that the main contribution is given by NVSS and SDSS DR12 galaxies. We recall that the cross-correlation with NVSS is calculated after masking the area of the sky used to calculate the correlation with SDSS; the two are thus completely independent. Smaller, comparable contributions are given by WI$\times$SC\ and SDSS-QSO. 2MPZ instead shows basically no sign of the ISW, which is expected given its very low $z$ range. In the Table we also include a column with the signal-to-noise ratio (S/N $=A/\sigma_A$) of the ISW measurement for comparison with other works, since this quantity is often reported in the literature. We can see that the global fit reaches a S/N of 5. We also show in Table~\ref{tab:AISWnocatbin} the result of the fit when no redshift binning is used. It is clear that without such binning the significance of the ISW is significantly reduced, especially for SDSS-DR12 and WI$\times$SC, while the significance of SDSS-QSO is almost unchanged.
Overall, when no redshift binning is used, the significance of the ISW effect combining all the catalogs is 4.0 $\sigma$, which is significantly reduced with respect to the 4.7 $\sigma$ achieved with the redshift binning. As mentioned above, the derived significance depends very weakly on the exact values of the biases used. For the case of a single catalog redshift bin, this is clear from Eqs.~\eqref{eq:isw2} \& \eqref{eq:chi22}, which show that the ISW signal is linear in $b$. The fit to the cross-correlation thus constrains the quantity $b A_{\rm ISW}$, and the value of $b$ is not important for the determination of the significance, although it is clearly relevant in determining the value of $A_{\rm ISW}$. When several redshift bins and catalogs are used, the above argument is no longer exact, but it remains approximately valid. We checked, indeed, that using different biases, derived from the auto-correlation fits with different $\ell_{\rm max}$ and different numbers of $\ell$ bins, leaves the significances unchanged. {We can see that the preferred $A_{\rm ISW}$ value from the combined fit is slightly larger than 1, at a bit more than 1 $\sigma$. In the single-catalog fits, NVSS, QSOs, and SDSS all seem to drive the $A_{\rm ISW}$ value above 1. This is confirmed in the last 5 rows of Table~\ref{tab:AISW}, where different fits are performed, each time excluding one catalog and combining the remaining four. All the fits give compatible results, with $A_{\rm ISW}$ above 1 at around 1 $\sigma$ or a bit more. This result is further scrutinized in Section~\ref{sec:DE}, where we investigate whether this slight deviation of $A_{\rm ISW}$ from 1 can be interpreted as an indication of a departure of DE from the simple case of a cosmological constant.
} \begin{table*} \begin{center} \begin{tabular}{l ||ccc} \hline Parameter & AC+CC&CC&PL+AC+CC\\ \hline {\boldmath$10^{-2}\omega_{b }$} & $2.230\pm 0.014$ & $2.229\pm 0.013 $& $2.228\pm 0.020$ \\ {\boldmath$\omega_{cdm } $} & $0.1060\pm 0.0062$ & $0.1045^{+0.0093}_{-0.023} $& $0.1185\pm 0.0012$ \\ {\boldmath$n_{s } $} & $0.9670\pm 0.0039$ & $0.9667\pm 0.0036 $& $0.9678\pm 0.0043$ \\ {\boldmath$10^{-9}A_{s } $} & $2.132\pm 0.049$ & $2.142\pm 0.044 $& $2.149\pm 0.051$ \\ {\boldmath$h $} & $0.6770\pm 0.0044$ & $0.6775\pm 0.0044 $& $0.6790\pm 0.0053$ \\ {\boldmath$\tau_{\rm reio } $} &--- &--- & $0.068\pm 0.013$ \\ {\boldmath$10^{-2}A_{\rm Planck }$} &--- &--- & $100.01\pm 0.25 $\\ {\boldmath$A_{\rm ISW} $} & $1.53\pm 0.29 $ & $1.57\pm 0.29 $& $1.62\pm 0.30 $\\ {\boldmath$b_{0,{\rm 2MPZ} } $} & $1.276\pm 0.059 $& $1.37^{+0.29}_{-0.24} $& $1.194\pm 0.028 $\\ {\boldmath$b_{1,{\rm 2MPZ} } $} & $1.243\pm 0.049 $ & $1.31^{+0.22}_{-0.31} $& $1.188\pm 0.030 $\\ {\boldmath$b_{2,{\rm 2MPZ} } $} & $1.795\pm 0.080 $ & $1.89\pm 0.27 $& $1.743\pm 0.070 $\\ {\boldmath$b_{0,{\rm SDSS} } $} & $1.104\pm 0.043 $ & $1.11^{+0.15}_{-0.24} $& $1.060\pm 0.030 $\\ {\boldmath$b_{1,{\rm SDSS} } $} & $0.904\pm 0.030 $ & $0.887^{+0.11}_{-0.089} $& $0.883\pm 0.025 $\\ {\boldmath$b_{2,{\rm SDSS} } $} & $0.820\pm 0.027 $& $0.84^{+0.21}_{-0.13} $& $0.800\pm 0.023 $\\ {\boldmath$b_{3,{\rm SDSS} } $} & $1.178\pm 0.038 $& $1.11^{+0.22}_{-0.14} $& $1.160\pm 0.034 $\\ {\boldmath$b_{4,{\rm SDSS} } $} & $1.12^{+0.12}_{-0.11}$& $0.99^{+0.22}_{-0.25} $& $1.11^{+0.12}_{-0.10} $\\ {\boldmath$b_{0,{\rm WISC} } $} & $0.951\pm 0.040 $& $1.01^{+0.15}_{-0.23} $& $0.914\pm 0.030 $\\ {\boldmath$b_{1,{\rm WISC} } $} & $0.851\pm 0.032 $& $0.85^{+0.16}_{-0.21} $& $0.828\pm 0.026 $\\ {\boldmath$b_{2,{\rm WISC} } $} & $1.005\pm 0.038 $& $0.99^{+0.16}_{-0.20} $& $0.988\pm 0.034 $\\ {\boldmath$b_{0,{\rm QSO} } $} & $1.44^{+0.25}_{-0.22}$& $1.26^{+0.45}_{-0.32} $& $1.40^{+0.27}_{-0.22} $\\ {\boldmath$b_{1,{\rm QSO} } $} & 
$2.46^{+0.26}_{-0.22}$& $1.90^{+0.60}_{-0.41} $& $2.47^{+0.27}_{-0.24} $\\ {\boldmath$b_{2,{\rm QSO} } $} & $3.35^{+0.41}_{-0.33}$& $2.68^{+0.60}_{-0.52} $& $3.34^{+0.46}_{-0.39} $\\ {\boldmath$b_{\rm NVSS } $} & $2.54\pm 0.11 $& $2.31\pm 0.39 $& $2.479\pm 0.097 $\\ \hline {\boldmath$\Omega_{\Lambda }$} & $0.720\pm 0.014 $ & $0.722^{+0.050}_{-0.022} $&$0.694\pm 0.005$ \\ {\boldmath TS }&$22.0$&$26.5$&$24.9$\\ {\boldmath $\sigma$ }&$4.7$&$5.1$&$5.0$\\ \hline {\boldmath $\Delta \log({\rm ev})$ }&$ 11.9$&$ 11.5$&$12.7$\\ \hline \end{tabular} \end{center} \caption{Result of the {\sc MontePython}{} fits in the $\Lambda$CDM model using several combinations of Planck data, AC data and CC data. When the Planck data are not used, Gaussian priors on the cosmological parameters except $\omega_{cdm}$ are assumed. Here $\Omega_{\Lambda }$ is a derived parameter. The third-to-last row gives the test statistic (TS), which is equal to $\Delta\chi^2$ for the fits in the first two columns and $- 2\, \Delta \log {\cal L}$ for the fit in the third column. The second-to-last row gives the significance $\sigma=\sqrt{\rm TS}$. {Finally, the last row gives the logarithm of the Bayes factor, representing the evidence for non-zero $A_{\rm ISW}$ in Bayesian terms.} } \label{tab:lcdm_fit} \end{table*} \subsection{Method 2} The first method is, in principle, not fully self-consistent, because the auto-correlations, and hence the biases, are sensitive to the underlying matter power spectrum. We fixed the matter power spectrum to the Planck $\Lambda$CDM best-fitting model, but this may not be the best fit to the auto-correlation data. The induced error should be negligible when CMB and BAO data are also used, since they force $P(k)$ to be very close to the fiducial model. More importantly, however, the cross-correlation determines a given amount of ISW, and this in principle has an effect on cosmology, since a different ISW means a different Dark Energy model and thus also a different $P(k)$. 
For these reasons it is more consistent to fit the bias parameters, the cosmological parameters, and the $A_{\rm ISW}$ parameter used to assess the detection significance to the data simultaneously. We perform such a fit using the {\sc MontePython}{} environment. The fit typically involves many parameters ($>15$), which can present degeneracies that are not known in advance. To scan this parameter space efficiently we run {\sc MontePython}{} in the Multinest mode~\citep{Feroz:2008xx}. In this way we can robustly explore the posterior with typically $\sim 10^6$ likelihood evaluations and efficiencies of the order of 10\%. We consider two cases. In the first case, we only use cross-correlation and auto-correlation measurements. We call this dataset AC+CC, and we fit a total of 22 parameters, i.e., 15 biases, $A_{\rm ISW}$, and the six $\Lambda$CDM parameters ($\omega_b$, $\omega_{cdm}$, $n_s$, $h$, $A_s$, $\tau_{\rm reio}$). When Planck data are used, we also include the nuisance parameter $A_{\rm Planck }$ \citep{Ade:2015xua}. For all cosmological parameters except $\omega_{cdm}$, we use Gaussian priors derived from a fit of Planck+BAO summarized in Table \ref{tab:PlanckBAO}, which are consistent with those published in \cite{Ade:2015xua}. The error bars from Planck+BAO are so small that we find essentially the same result for $A_{\rm ISW}$ when fixing these five parameters to their best-fit values instead of marginalizing over them with Gaussian priors. Our results for this fit are shown in the first column of Table~\ref{tab:lcdm_fit}. As expected, the constraint on $\omega_{cdm}$ coming from the AC+CC data is weaker than that from Planck+BAO data, by about a factor of 6. Also, the $\omega_{cdm}$ best fit of the AC+CC analysis is lower than the Planck+BAO one, by about 2$\sigma$. 
The fitted galaxy biases are typically compatible with those of Method 1, although in several cases they are 10--20\% larger, which can be understood as a consequence of the lower $\omega_{cdm}$, resulting in a lower $P(k)$ normalization. Indeed, the measured auto-correlations basically fix the product of the squared biases and of the overall $P(k)$ amplitude. Comparing the case with free $A_{\rm ISW}$ to the one with $A_{\rm ISW}=0$, we find TS $=\Delta \chi^2= 22$, giving a significance of 4.7 $\sigma$, identical to that found in \mbox{Method 1}. With the same setup we also perform a fit using CC data only. The results are shown in the second column of Table~\ref{tab:lcdm_fit}. In this case the biases are determined from the cross-correlation only, without relying on the autocorrelation. It is interesting to see that, even in this case, good constraints on the biases can be achieved, although, clearly, the errors are much larger (by a factor of $\sim$ 4--5) than when including the AC data. We find for this case TS=26.5, corresponding to a significance of 5.1 $\sigma$, thus reaching the 5 $\sigma$ threshold. The increase in significance seems to be due to the larger freedom in the fit of the biases, which allows an overall better best fit to the CC data than in the case where the biases are constrained by the AC data. In the second case, we fit the same parameters to the data, but we now include the full Planck+BAO likelihoods instead of Gaussian priors on five parameters. Formally, we use the Planck and BAO likelihoods combined with the $\chi^2$ from the AC+CC data: \begin{equation} \log L = \log L_{\rm PL} + \log L_{\rm BAO} - \chi^2_{AC}/2 - \chi^2_{CC}/2. \end{equation} It should be noted that the use of other data besides AC+CC does not affect the ability to derive the significance of the ISW detection, which is only encoded in the parameter $A_{\rm ISW}$ entering the AC+CC likelihood. 
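The combined likelihood above can be written as a small helper; here the individual likelihood terms are placeholders standing in for the actual Planck, BAO, and correlation-data evaluations:

```python
def total_log_likelihood(params, log_L_PL, log_L_BAO, chi2_AC, chi2_CC):
    """log L = log L_PL + log L_BAO - chi2_AC/2 - chi2_CC/2,
    with each term a callable evaluated at the current parameter point.
    The callables are hypothetical stand-ins for the real likelihoods."""
    return (log_L_PL(params) + log_L_BAO(params)
            - 0.5 * chi2_AC(params) - 0.5 * chi2_CC(params))
```

In the actual fit, this quantity is what the sampler maximizes (or explores) over the 24-dimensional parameter space.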
Results of this fit are shown in the third column of Table~\ref{tab:lcdm_fit}. The main difference with respect to the previous fit is the value of $\omega_{cdm }$, now driven back to the Planck best fit. This upward shift in $\omega_{cdm }$ results, again, in a global downward shift of the biases, by about 10--20\%, giving now a better compatibility with the results of Method 1. In general, apart from the small degeneracy with $\omega_{cdm }$ resolved by the inclusion of Planck+BAO data, the biases are well constrained by the fit. This means that the sub-space of biases is approximately orthogonal to the rest of the global parameter space, which simplifies the fit and speeds up its convergence. To measure the significance, in this case we define the test statistic as TS $= - 2\, \Delta \log {\cal L}$, which shares the same properties as the TS defined in terms of the $\chi^2$. Comparing the case with free $A_{\rm ISW}$ to the one with $A_{\rm ISW}=0$, we now get TS $= - 2\, \Delta \log {\cal L} = 24.9$, which gives a significance of 5.0 $\sigma$. Since the cosmology is basically fixed by the Planck+BAO data to a point in parameter space very close to the fiducial model of Method 1, this improvement in significance comes, apparently, from fitting the biases and $A_{\rm ISW}$ jointly (while in Method 1 the biases were kept fixed using the results of the first step of the method). The joint fit explores the correlations which exist between the biases and $A_{\rm ISW}$. This results in a better global fit, and also in a slightly enhanced $A_{\rm ISW}$ significance, reaching the 5 $\sigma$ threshold. {Finally, since the fit performed with Multinest automatically also provides the {\it evidence} of the posterior, in the last row of Table \ref{tab:lcdm_fit} we additionally report the logarithm of the Bayes factor, i.e., the logarithm of the ratio of the evidences for the two fits where $A_{\rm ISW}$ is free and where it is fixed to $A_{\rm ISW}=0$. 
We find in all cases values around $\sim$12. A logarithm of the Bayes factor larger than 5 represents {\it strong evidence} according to Jeffreys' scale \cite{Trotta:2017wnx}.} \section{Robustness tests} \label{sec:tests} In this section we describe some further tests performed to verify the robustness of the results. As mentioned in Sec.~\ref{sec:cmbmaps}, several CMB maps are available from Planck, resulting from different foreground-cleaning methods. In Fig.~\ref{fig:2mpz_cmb} we show the results of the cross-correlation using four CMB maps cleaned with four different methods. We take as an example the cross-correlation with the full 2MPZ catalog, without subdivision into redshift bins. It clearly appears that the use of different maps has no appreciable impact on the result. Another important aspect is the possible frequency dependence of the correlation. In particular, while the ISW effect is expected to be achromatic, some secondary effects, like a correlation due to a Sunyaev-Zel'dovich \citep{SZeffect} or Rees-Sciama \citep{RSeffect} imprint in the CMB map, are expected to be frequency dependent. To test this possibility, we use the available Planck CMB maps at 100 GHz, 143 GHz and 217 GHz. Again the full 2MPZ catalog is used as an example, since these effects are expected to peak at low redshift. Fig.~\ref{fig:2mpz_freq} shows the result of the correlation at different frequencies. We observe a very small trend of the CAPS with frequency, especially for the first $\ell$ bin, but this effect is negligible with respect to the error bars of the data points. Results are similar for the other catalogs, showing no frequency dependence. 
\begin{figure} \includegraphics[width = 0.48\textwidth]{2mpz_cmb.pdf} \caption{Measured cross-correlation of 2MPZ in one single redshift bin with CMB maps from {\tt Commander}, {\tt NILC}, {\tt SEVEM}, and {\tt SMICA}.} \label{fig:2mpz_cmb} \end{figure} \begin{figure} \includegraphics[width = 0.48\textwidth]{2mpz_freq.pdf} \caption{Measured cross-correlation of 2MPZ in one single redshift bin with the CMB at 100 GHz, 143 GHz and 217 GHz.} \label{fig:2mpz_freq} \end{figure} Finally, we tested the effect of photo-$z$\ errors. In the basic setup, the theoretical predictions for the auto- and cross-correlation functions per redshift bin are modeled by assuming that the true redshift distribution is well approximated by the photo-$z$\ one, i.e.\ $dN/dz_\mathrm{true} \simeq dN/dz_\mathrm{phot}$. In reality, sharp cuts in $dN/dz_\mathrm{phot}$ will correspond to more extended tails in $dN/dz_\mathrm{true}$, because the photo-$z$s\ are smeared out in the radial direction. However, we can easily take photo-$z$\ errors into account if we know their statistical properties. In the case of 2MPZ, the photo-$z$\ error is basically constant in $z$ and has a roughly Gaussian scatter of $\sigma_{\delta z} \simeq 0.015$ centered at $\langle \delta z \rangle =0$, while for WI$\times$SC\ the scatter is $\sigma_{\delta z}(z)=0.033(1+z)$, also with an approximately zero mean in $\delta z$. For the SDSS QSOs it is also approximately constant in $z$ and equal to 0.24. Finally, for SDSS DR12 the error is $\sigma_{\delta z}(z)=0.022(1+z)$ (see Sec.~\ref{sec:catmaps}). We thus derive the effective true redshift distribution of a given bin by convolving the measured photo-$z$ selection function in that bin with a $z$-dependent Gaussian of width $\sigma_{\delta z}(z)$. The resulting true-$z$ distribution is a smoothed version of the photo-$z$ distribution, presenting tails outside the edges of the bin. We then use this distribution to fit the auto- and cross-correlation data again. 
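Under the stated Gaussian error model, the convolution of the photometric $dN/dz$ with a $z$-dependent Gaussian can be sketched as follows (a minimal grid-based version, not the actual pipeline; the grid and widths are illustrative):

```python
import math

def true_dndz(z_grid, dndz_phot, sigma_of_z):
    """Smear a photometric dN/dz with a z-dependent Gaussian photo-z error:
    dN/dz_true(z) ~= sum_z' dN/dz_phot(z') * N(z - z'; sigma(z')) * dz,
    approximated on a uniform grid. `sigma_of_z` gives the scatter at z',
    e.g. lambda z: 0.033 * (1 + z) for WIxSC."""
    dz = z_grid[1] - z_grid[0]
    out = []
    for z in z_grid:
        total = 0.0
        for zp, n in zip(z_grid, dndz_phot):
            if n == 0.0:
                continue
            s = sigma_of_z(zp)
            total += n * math.exp(-0.5 * ((z - zp) / s) ** 2) \
                       / (s * math.sqrt(2.0 * math.pi))
        out.append(total * dz)
    return out
```

A sharp top-hat bin fed through this smearing acquires the Gaussian tails outside its edges described above, while its total normalization is preserved.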
The results are shown in the last column of Table~\ref{tab:biases}. We find that the photo-$z$\ errors have some impact on the determination of the biases. The effect is most important in the high-$z$ tails of the various catalogs, in particular WI$\times$SC\ and SDSS DR12. This is not surprising since, in these cases, the photo-$z$\ errors increase with redshift and are largest at high-$z$. The effect is at the level of 10--20\%, corresponding to a decrease in $A_{\rm ISW}$ of the same amount in these bins. Nonetheless, since the above bins carry only a limited weight in the combined fit, the impact on the final $A_{\rm ISW}$ determined from the global fit of all bins and catalogs is basically negligible. \begin{figure} \includegraphics[width=0.45\textwidth]{triangle_w0_wa_comparison.pdf} \caption{Marginalized posterior in the $w_0-w_a$ plane for the three different fits, Planck+BAO, Planck+BAO+CC and CC only.} \label{fig:wowazoom} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{triangle_w0_wa_comparison_AC.pdf} \caption{Marginalized posterior in the $w_0-w_a$ plane for the three different fits, Planck+BAO, CC only, and AC only.} \label{fig:wowazoomAC} \end{figure} \begin{table} \begin{center} \begin{tabular} { l c} Parameter & 68\% limits\\ \hline {\boldmath$10^{-2}\omega_{b }$} & $2.224\pm 0.021 $\\ {\boldmath$\omega_{cdm } $} & $0.1190\pm 0.0017 $\\ {\boldmath$n_{s } $} & $0.9668\pm 0.0051 $\\ {\boldmath$10^{-9}A_{s } $} & $2.137\pm 0.063 $\\ {\boldmath$h $} & $0.639^{+0.018}_{-0.029} $\\ {\boldmath$w_0 $} & $-0.58^{+0.30}_{-0.25} $\\ {\boldmath$w_a $} & $-1.10\pm 0.76 $\\ {\boldmath$\Omega_{0,fld }$} & $0.650^{+0.024}_{-0.029} $\\ \hline \end{tabular} \end{center} \caption{\label{tab:wowa_planck}Results of the {\sc MontePython}{} fit using Planck+BAO data.} \end{table} \section{Dark energy fit} \label{sec:DE} In this section we investigate the power of the cross-correlation data to constrain DE, in a framework similar to that 
presented in \cite{Corasaniti:2005pq,Pogosian:2005ez} and \cite{Pogosian:2004wa}. For this purpose, we do not use the $A_{\rm ISW}$ parameter employed in Sec.~\ref{sec:method}, since it is only an artificial quantity introduced to evaluate the ISW significance from the cross-correlation data. However, as shown in Sec.~\ref{sec:method}, there is an indication that the best-fit value of $A_{\rm ISW}$ is above 1 at slightly more than 1$\sigma$. This suggests (although with low statistical significance) that DE could differ from a simple cosmological constant. To investigate this in more detail, we perform a fit with Method 2 of Sec.~\ref{sec:method}, but with $A_{\rm ISW}=1$, and with extra parameters accounting for dynamical Dark Energy. For simplicity, we use the $w_0-w_a$ empirical parametrization \citep{Linder:2002et,Chevallier:2000qy} and the parameterized post-Friedmann framework of \cite{hu_parameterized_2007} and \cite{fang_crossing_2008}, which are implemented in {\sc class}, to study models with $w<-1$. We test several different fit setups. In particular, since the AC dataset is a cosmological probe with its own sensitivity to the cosmological parameters, we test various combinations in which the AC and CC data are used separately. One reason to study the AC data separately from the CC data is that the APS of extragalactic objects are typically difficult to model accurately, even at small $\ell$, due to the non-linearity and possible stochasticity of the galaxy bias with respect to matter. Separate fits to the AC and CC data could then reveal inconsistencies that might be associated with our minimal assumption that the bias is linear and scale-independent. A further reason is that the AC data are more prone to possible systematic effects present in the catalogs, such as non-uniform calibration across the sky. 
These systematics would more severely bias the AC-based cosmological inference, while the CC measurements are more robust in this respect, since systematic offsets or mis-calibrations across the sky do not generally correlate with either the LSS or the CMB. We perform the following fits: (a) Planck+BAO, (b) Planck+BAO+CC+AC, (c) Planck+BAO+CC, (d) CC only, (e) AC only, (f) AC+CC. Case (a) has the standard 6 $\Lambda$CDM parameters, plus $w_a$, $w_0$, and one Planck nuisance parameter, $A_{\rm Planck }$, required for the evaluation of the Planck likelihood \citep{Ade:2015xua}, thus 9 parameters in total. The results of this baseline fit are shown in Table~\ref{tab:wowa_planck}. Case (b) includes the CC and AC datasets, and additionally uses 15 bias parameters (24 parameters in total). Case (c) is similar to (b) but without AC data. Since the biases are still needed for the CC fitting, they are kept in the fit, but with Gaussian priors coming from fit (b). We verified that simply fixing the biases to the best-fit values of (b), instead of including them in the fit with Gaussian priors, does not actually change the results. Similarly, the results do not change if the biases are taken from a fit other than (b), such as (e) or (f). For fit (d), featuring only CC data, all cosmological parameters except ($w_a$, $w_0$, $\omega_{cdm }$) and all bias parameters are either fixed or marginalized over with Gaussian priors. For fit (e), featuring only AC data, all cosmological parameters except ($w_a$, $w_0$, $\omega_{cdm }$) are fixed to best-fit values, while the biases are left free, since they are constrained by the AC data. Finally, fit (f) combines AC and CC data, and uses the same setup as fit (e). 
\begin{table*} \begin{center} \begin{tabular} {l |ccccc } Parameter & AC+CC&CC+bias priors&AC& PL+AC+CC&PL+CC+bias priors\\ \hline {\boldmath$10^{-2}\omega_{b }$} & $2.222\pm 0.021$ & $2.222\pm 0.022 $& $2.222\pm 0.021 $ & $2.232\pm 0.022 $ &$2.227\pm 0.022 $\\ {\boldmath$\omega_{cdm } $} & $0.1134\pm 0.0075 $ & $0.111^{+0.016}_{-0.029} $& $0.114\pm 0.011 $ & $0.1179\pm 0.0018 $& $0.1185\pm 0.0018 $ \\ {\boldmath$n_{s } $} & $0.9652\pm 0.0055 $ & $0.9642\pm 0.0057 $& $0.9647\pm 0.0055 $ & $0.9691\pm 0.0056 $& $0.9681\pm 0.0054 $\\ {\boldmath$10^{-9}A_{s } $} & $2.162\pm 0.076 $ & $2.187\pm 0.080 $& $2.183\pm 0.077 $ & $2.151\pm 0.065 $& $2.152\pm 0.064 $ \\ {\boldmath$h $} & $0.624^{+0.023}_{-0.029} $ & $0.641\pm 0.031 $& $0.592\pm 0.058 $ & $0.625^{+0.026}_{-0.030} $& $0.625^{+0.028}_{-0.031} $\\ {\boldmath$\tau_{\rm reio } $} &--- &--- &--- & $0.069\pm 0.017 $& $0.068\pm 0.016 $\\ {\boldmath$\Omega_{\Lambda }$} & $0.650\pm 0.029 $ & $0.672^{+0.068}_{-0.048} $& $0.605^{+0.069}_{-0.049}$ &$0.639\pm 0.038 $& $0.635^{+0.037}_{-0.032} $ \\ {\boldmath$w_0$} & $0.97^{+0.57}_{-0.44} $ & $0.39^{+0.57}_{-0.46} $& $1.46^{+0.55}_{-0.27} $ & $-0.37\pm 0.33 $& $-0.43^{+0.32}_{-0.36} $\\ {\boldmath$w_a$} & $-3.6^{+1.2}_{-1.5} $ & $-3.2^{+1.4}_{-1.9} $& $-4.47^{+0.59}_{-1.4} $ & $-1.63^{+1.0}_{-0.86} $& $-1.44^{+1.0}_{-0.81} $\\ {\boldmath$10^{-2}A_{\rm Planck }$} &--- &--- &--- & $100.02\pm 0.25 $ & $100.02\pm 0.25 $\\ {\boldmath$b_{0,{\rm 2MPZ} } $} & $1.56^{+0.13}_{-0.12} $ & $1.2220^{+0.0073}_{-0.021} $& $1.68^{+0.11}_{-0.042} $ & $1.240\pm 0.040 $& $1.2220^{+0.0076}_{-0.021} $\\ {\boldmath$b_{1,{\rm 2MPZ } } $} & $1.46\pm 0.11 $ & $1.188\pm 0.030 $ & $1.56^{+0.10}_{-0.056} $ & $1.228\pm 0.041 $& $1.188\pm 0.030 $\\ {\boldmath$b_{2,{\rm 2MPZ } } $} & $1.94\pm 0.15 $ & $1.743\pm 0.070 $& $2.04^{+0.14}_{-0.11} $ & $1.773\pm 0.076 $& $1.744\pm 0.069 $\\ {\boldmath$b_{0,{\rm SDSS } } $} & $1.195\pm 0.090 $ & $1.060\pm 0.030 $& $1.254^{+0.081}_{-0.056} $& $1.078\pm 0.033 $& 
$1.060\pm 0.030 $\\ {\boldmath$b_{1,{\rm SDSS } } $} & $0.861^{+0.065}_{-0.086} $ & $0.882\pm 0.030 $& $0.879^{+0.057}_{-0.065} $& $0.880\pm 0.027 $& $0.884\pm 0.030 $\\ {\boldmath$b_{2,{\rm SDSS } } $} & $0.743^{+0.057}_{-0.082} $ & $0.800\pm 0.025 $& $0.747^{+0.052}_{-0.065} $& $0.792\pm 0.024 $& $0.801\pm 0.025 $\\ {\boldmath$b_{3,{\rm SDSS } } $} & $1.016^{+0.074}_{-0.12} $ & $1.161\pm 0.035 $& $1.004^{+0.069}_{-0.11} $& $1.141\pm 0.036 $& $1.161\pm 0.035 $\\ {\boldmath$b_{4,{\rm SDSS } } $} & $0.935^{+0.098}_{-0.13} $ & $1.110\pm 0.020 $& $0.902^{+0.089}_{-0.13} $& $1.09^{+0.11}_{-0.10} $& $1.110\pm 0.020 $\\ {\boldmath$b_{0,{\rm WISC } } $} & $1.085\pm 0.083 $ & $0.913\pm 0.030 $& $1.155^{+0.078}_{-0.053} $& $0.940\pm 0.035 $& $0.913\pm 0.030 $\\ {\boldmath$b_{1,{\rm WISC } } $} & $0.884^{+0.068}_{-0.077} $ & $0.828\pm 0.030 $& $0.924^{+0.062}_{-0.055} $& $0.840\pm 0.029 $& $0.828\pm 0.031 $\\ {\boldmath$b_{2,{\rm WISC } } $} & $0.981^{+0.078}_{-0.097} $ & $0.987\pm 0.041 $& $1.008\pm 0.070 $ & $0.990\pm 0.036 $& $0.988\pm 0.040 $\\ {\boldmath$b_{0,{\rm QSO } } $} & $1.14\pm 0.22 $ & $1.401\pm 0.030 $& $1.10\pm 0.20 $ & $1.40\pm 0.22 $& $1.401\pm 0.030 $\\ {\boldmath$b_{1,{\rm QSO } } $} & $1.77\pm 0.26 $ & $2.470\pm 0.030 $& $1.67^{+0.23}_{-0.32} $ & $2.44^{+0.26}_{-0.23} $& $2.470\pm 0.030 $\\ {\boldmath$b_{2,{\rm QSO } } $} & $2.47^{+0.35}_{-0.40} $ & $3.341\pm 0.050 $& $2.34^{+0.27}_{-0.49} $ & $3.34^{+0.41}_{-0.35} $& $3.339\pm 0.049 $\\ {\boldmath$b_{\rm NVSS } $} & $2.36^{+0.17}_{-0.24} $ & $2.47\pm 0.10 $& $2.38^{+0.17}_{-0.22} $ & $2.484\pm 0.099 $& $2.487\pm 0.098 $\\ \hline \end{tabular} \end{center} \caption{Result of the {\sc MontePython}{} fits in the $\Lambda$CDM +$w_0$ +$w_a$ model using several combinations of Planck + BAO (PL) data, AC data and CC data. 
When the Planck data are not used, Gaussian priors on all cosmological parameters except ($\omega_{cdm}$, $w_0$, $w_a$) are assumed.} \label{tab:lcdm_w0wa_fit} \end{table*} Figs.~\ref{fig:wowazoom}-\ref{fig:wowazoomAC} show the results for $w_0$ and $w_a$ (marginalized over all the remaining parameters) for some of these fits. Table~\ref{tab:lcdm_w0wa_fit} gives the confidence intervals on all the parameters for all our fits. The most evident result is that the AC-only fit selects a region of parameter space significantly in tension with the Planck+BAO constraints, basically excluding the standard case $(w_0, w_a) = (-1, 0)$ at more than 3 $\sigma$. This is either a consequence of the linear bias model not being accurate enough to provide reliable cosmological constraints, or an indication of some systematic effects in some of the catalogs. Problems in the modeling of the bias might be particularly relevant for the auto-correlation of the catalogs in the highest redshift bins, which are the most sensitive to deviations from a standard cosmological constant, but also the ones lying in the tail of the redshift distribution of the catalog, where different populations of galaxies are probably selected, requiring more accurate modeling. A more sophisticated approach to the modeling of the catalog auto-correlations might thus be required to address this issue properly. Various bias models beyond the linear one have been proposed, for instance models based on the halo occupation distribution of the catalog objects (see e.g. \cite{Ando:2017wff}). We leave a systematic study of this subject for future work. Intrinsic artifacts in the catalogs, like non-uniformity in the sky coverage or large errors in the photo-$z$\ determination, are also a likely issue. These problems can become more evident especially in the tails of the redshift distribution. 
Indeed, the largest $\chi^2$ values for the AC fits in Table~\ref{tab:biases} occur for the $z$-bins in the tail of the distribution, especially for SDSS DR12 and the QSOs, indicating a poor match between the model and the data. This can also be seen more explicitly in the related plots in Appendix \ref{acplots}. Hence, in deriving DE constraints it is more conservative to discard the information from AC and focus on CC only. We see that the constraints from the CC data are compatible with the Planck+BAO results. However, given the relatively low significance of the ISW effect, the former are about three times weaker than the latter for each parameter. The direction of the degeneracy between $w_0$ and $w_a$ is approximately the same in the two fits, which was not obvious a priori, since the two data sets are sensitive to Dark Energy through different physical effects {(the ISW effect in the CMB temperature angular spectrum for the CC fit, and the constraint on the BAO scale for the Planck+BAO fit)}. It appears that the valley of well-fitting models with $w_0>-1$ always corresponds to $w(z)$ crossing $-1$ in the range $0.0<z<1.5$, but with very different derivatives $w'(z)$. Even when $w_0$ is very large, all models in this valley do feature accelerated expansion of the Universe in the recent past, but not necessarily today. In fact, when $w_0$ increases while $w_a$ decreases simultaneously, the stage of accelerated expansion is preserved but translated backward in time. Since the CC data are less sensitive than Planck and do not feature a different direction of degeneracy, the joint constraints from Planck+BAO+CC are basically unchanged with respect to Planck+BAO only. 
\section{Discussion and Conclusions} \label{sec:discussion} We derived an updated measurement of the ISW effect through cross-correlations of the cosmic microwave background with several galaxy surveys, namely the 2MASS Photometric Redshift catalog (2MPZ), NVSS, SDSS QSOs, the SDSS DR12 photometric redshift dataset, and WISE~$\times$~SuperCOSMOS; the latter two are used here for the first time in an ISW analysis. We also improved on previous analyses by performing tomography within each catalog, i.e., exploiting the photometric redshifts and dividing each catalog into redshift bins. We found that the current cross-correlation data provide strong evidence for the ISW effect, and thus for Dark Energy, at the 5 $\sigma$ level. However, current catalogs are still not optimal for deriving cosmological constraints from the ISW, for two main reasons. First, the clustering of objects requires complicated modeling, probably beyond the simple linear bias assumption. On this point, improvements are possible using more sophisticated modeling, but at the price of introducing more nuisance parameters. Also, the tails of the redshift distributions of the objects might be more strongly affected by catalog systematics such as uneven sampling or large photo-$z$\ errors. Second, the data used in this paper are sensitive mostly to the redshift range $0<z< 0.6$, while the ISW effect is expected to be important for $0.3<z< 1.5$. Several planned or forthcoming wide-angle galaxy surveys will cover this redshift range and should thus bring major improvements to ISW detection via cross-correlation with the CMB. For the Euclid satellite, the predicted significance of such a signal is $\sim 8 \sigma$ \citep{Euclid_cosmol}, and one should expect similar figures from the Large Synoptic Survey Telescope \citep{LSST} and the Square Kilometre Array \citep{SKA}. 
The very high S/N of the ISW effect from these deep and wide future catalogs will not only allow for much stronger constraints on dark energy than we obtained here, but also for constraints on some modified gravity models, which often predict very different ISW signatures from $\Lambda$CDM \citep[e.g.][]{Renk17}. \section*{Acknowledgments} Simulations were performed with computing resources granted by RWTH Aachen University under project thes0263. MB is supported by the Netherlands Organization for Scientific Research, NWO, through grant number 614.001.451, and by the Polish National Science Center under contract \#UMO-2012/07/D/ST9/02785. Some of the results in this paper have been derived using the HEALPix package\footnote{\url{http://healpix.sourceforge.net/}} \citep{2005ApJ...622..759G}. This research has made use of data obtained from the SuperCOSMOS Science Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. 
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Some of the results in this paper have been derived using the GetDist package\footnote{\url{https://github.com/cmbant/getdist}}.
\section{The Complexity of SPR} In this section, we study the computational complexity of the problems defined below. \noindent\begin{definition}[Computational problems]~ \begin{itemize} \item \textsc{SPR/FA}: Given an SPR instance and a matching $Y$, does an allocation $\mu$ exist such that $(Y,\mu)$ is a feasible matching? \item \textsc{SPR/Nw/Verif}: Given an SPR instance and a feasible matching $(Y,\mu)$, is it nonwasteful? \item \textsc{SPR/Nw/Find}: Given an SPR instance, find a nonwasteful matching $(Y,\mu)$. \item \textsc{SPR/Stable/Verif}: Given an SPR instance and a feasible matching, is it stable? \item \textsc{SPR/Stable/Exist}: Given an SPR instance, does a stable matching exist? \end{itemize} \end{definition} \begin{reminder}[Computational Complexity] We assume the following common knowledge: (decision) problem, length function, classes P, NP, complementation, hardness and completeness. An SPR instance has length $\Theta(|S||P|+|P||R|)$. A number problem is said to be \emph{strongly} hard if its hardness holds even when restricted to instances whose numbers are polynomially bounded. For instance, the NP-complete problem $\textsc{Partition}$ (as well as $\textsc{SubsetSum}$ or $\textsc{Knapsack}$) admits an algorithm polynomial in its largest number; hence, it is not strongly hard. However, problem $\textsc{4-Partition}$ is NP-hard even when its numbers are polynomially bounded \cite{garey1979computers}; therefore, it is a \emph{strongly} NP-hard problem. While a decision problem only allows for one $\{0,1\}$ (no/yes) output, a function problem allows for an entire $\{0,1\}$-word (hence, any finite discrete object, or w.l.o.g. an integer). A function problem in class $\text{FP}^\text{NP}[\text{poly}]$ (resp. $\text{FP}^\text{NP}[\text{log}]$) can be solved by a polynomial (resp. logarithmic) number of calls to an NP-oracle. 
Typically, any optimization problem whose decision version (whether a solution better than a threshold exists) is in NP belongs to $\text{FP}^\text{NP}[\text{poly}]$ or $\text{FP}^\text{NP}[\text{log}]$: one finds the optimum by a binary search that calls the decision version. The binary search is usually polynomial in the number of bits encoding the numbers, but when instances contain no numbers, it is typically logarithmic. Hardness in these classes is induced by metric reductions from a function problem $\Pi$ to a function problem $\Pi'$, in which an output for $\Pi'$, computed in polynomial time, yields an output for $\Pi$ in polynomial time. Class NP is the class of problems whose yes-instances admit a certificate (e.g. a solution) that can be verified in polynomial time. When the verification procedure requires an NP-oracle, the problem is in class $\text{NP}^\text{NP}$. Class coNP (resp. $\text{coNP}^\text{NP}$) is the complement of class NP (resp. $\text{NP}^\text{NP}$). \end{reminder} Let us start by simply observing how brute-force methods depend on the parameters of these problems. At first glance, there are $O(|P|^{|S|})$ matchings and $O(|P|^{|R|})$ resource allocations. Whether a matching $Y$ is made feasible by some allocation can be decided using dynamic programming on subproblems $T_k(\kappa_1,\ldots,\kappa_{|P|})\in\{\text{false},\text{true}\}$ (for integers $0\leq k\leq |R|$ and $0\leq \kappa_p\leq |S|$) which ask whether some allocation can provide $\kappa_p$ seats for each $p\in P$, using only resources $\{r_1,\ldots,r_k\}$. There are $O\left(|R||S|^{|P|}\right)$ subproblems. Each subproblem can be solved in time $O(|P|)$ by the following recurrence. First, $T_0(\bm{\kappa})=\left\{\text{true if }\bm{\kappa}\equiv 0, \text{false otherwise}\right\}$; second, for $k>0$ and $\bm{\kappa}\in[0,n]^p$, $T_k(\bm{\kappa})=\bigvee\nolimits_{{i=1}\mid{\kappa_i\geq q_{r_k}}}^{p} T_{k-1}(\kappa_1,\ldots,\kappa_i-q_{r_k},\ldots,\kappa_p)$.
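As a purely illustrative aside (the code and its names are ours, not part of the formal development), the dynamic program can be sketched in Python. For simplicity, the sketch keeps the set of reachable capacity vectors, capped componentwise at the demand, instead of memoizing $T_k$; capping is sound because the final query only asks for \emph{at least} the required numbers of seats. It also accounts for the compatibility lists $T_r$, which the displayed recurrence leaves implicit.

```python
def spr_feasible(demand, resources):
    """Decide whether some allocation of resources to projects covers
    `demand` (number of seats needed per project).

    demand    -- demand[p] = number of students matched to project p
    resources -- list of (capacity, compatible_projects) pairs; each
                 resource is allocated to exactly one compatible project

    We iterate over resources and keep the set of reachable capacity
    vectors, capped componentwise at `demand`; capping is sound since
    the query asks for *at least* the required number of seats.
    """
    m = len(demand)
    reachable = {(0,) * m}
    for capacity, compatible in resources:
        nxt = set()
        for vector in reachable:
            for p in compatible:
                new = list(vector)
                new[p] = min(new[p] + capacity, demand[p])
                nxt.add(tuple(new))
        reachable = nxt
    return tuple(demand) in reachable
```

For instance, with demand $(2,2)$, two dedicated resources of capacity $2$ suffice, whereas resources of capacities $3$ and $1$, each compatible with both projects, do not.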
Therefore, the dynamic program takes time $O(|S|^{|P|}|P||R|)$, including a last iteration that queries for an allocation with at least the required numbers of seats (rather than exactly). Consequently: \begin{itemize} \item \textsc{SPR/FA} can be decided in time $O(|S|^{|P|}|P||R|)$, \item \textsc{SPR/Nw/Find} can be solved in time $O(|S|^{|P|+1}|P|^2|R|)$ by mechanism SD, and \end{itemize} these two problems are XP-tractable with respect to parameter $|P|$ (while stability seems harder to decide). If the verification problem \textsc{SPR/FA} were decidable in polynomial time, our problems would belong to class NP. However, this is not the case in general (unless P${}={}$NP), as the following theorem shows. \begin{theorem} \label{thm:feasibility} \textsc{SPR/FA} is NP-complete. \end{theorem} \begin{proof} Since an allocation $\mu$ that makes $(Y,\mu)$ a feasible matching is an efficiently verifiable certificate for yes-instances, \textsc{SPR/FA} belongs to NP. For hardness, any instance of \textsc{4-Partition}, defined by a positive integer multiset $W=\{w_1,\ldots,w_{4m}\}$ and target $\theta\in\mathbb{N}$ (with $\sum_{w\in W}w=m\theta$ and $\forall i\in[4m],\frac{\theta}{5}<w_i<\frac{\theta}{3}$), is reduced to an instance of \textsc{SPR/FA} with $m$ projects $p_1,\ldots,p_m$. In the given matching $Y$, $\theta$ students are matched to each project. Resources $R$ are identified with weights $W$: $q_R=(w_1,\ldots,w_{4m})$ and $T_r= P$ for every $r\in R$. The correspondence is straightforward between a partition of $W$ into $m$ subsets of size $4$ that each sum to $\theta$, and an allocation with capacity for $\theta$ students on $m$ projects (hence $4$ resources per project). Crucially, since \textsc{4-Partition} is NP-hard even when its integers are polynomially bounded, the number of students is polynomially bounded as well, and the reduction is polynomial. \end{proof} Intricate complexity results follow from the hardness of feasibility.\footnote{It tends to push problems to be strictly harder than NP.} Also, in Th.
\ref{thm:feasibility}, the \emph{strong} NP-hardness of \textsc{4-Partition} is necessary: a similar construction from \textsc{Partition} with two projects would require exponentially many students, hence the reduction would not be polynomial. Therefore, we introduce \textsc{ParetoPartition} and \textsc{$\forall\exists$-4-Partition} and show that they are \emph{strongly} hard, so that our reductions involve only polynomially many students. \noindent\begin{definition}[New fundamental problems]~ \begin{itemize} \item \textsc{ParetoPartition}:\\ Given positive integer multiset $W=\{w_1,\ldots,w_{|W|}\}$, a number $m\in\mathbb{N}$ of desired subsets, and target $\theta\in\mathbb{N}$, any partition of $W$ into a list $V_1,\ldots,V_{m}$ of $m$ subsets is mapped to a deficit vector $\bm{\delta}\in\mathbb{Z}^m$, defined for every\footnote{$[m]$ is shorthand for $\{1, \ldots, m\}$.} $i\in[m]$ by: $$\delta_i=\min\left\{w(V_i)-\theta,0\right\},$$ where $w(V_i)=\sum_{w\in V_i}w$. (Subset $V_i$ has a negative deficit if it sums below $\theta$, and deficit zero if it reaches $\theta$ or more.) The problem is to find one partition of $W$ into $m$ subsets whose deficit vector $\bm{\delta}$ is Pareto efficient\footnote{Given two vectors $\delta,\delta'\in\mathbb{Z}^m$, vector $\delta$ Pareto dominates $\delta'$ if and only if: $\forall i\in[m],\delta_i\geq\delta'_i$ and $\exists i\in[m],\delta_i>\delta'_i$. For a set of vectors $\Delta$ and $\delta \in \Delta$, $\delta$ is Pareto efficient in $\Delta$ when no other vector $\delta' \in \Delta$ Pareto dominates it.} among the deficit vectors of all partitions of $W$. \item \textsc{$\forall\exists$-4-Partition}:\\ Given target $\theta\in\mathbb{N}$, a list of integers $W=(w_1,\ldots,w_{4m})$ s.t.
$\frac{\theta}{5}\!<\!w_i\!<\!\frac{\theta}{3}$, and a list of disjoint couples $\mathcal{L}=(u_1,v_1),\ldots,(u_\ell,v_\ell)$ from $W$, for a map $\sigma:[\ell]\rightarrow\{0,1\}$, a partition of $W$ into $m$ subsets $V_1,\ldots,V_m$ is $\sigma$-satisfying if and only if: \begin{itemize} \setlength{\itemsep}{0.25em} \item $\forall i\in[m]$, $|V_i|=4$ and $w(V_i)=\theta$, \item $\forall i\in[\ell]$, $u_i\in V_i$ \enskip and\enskip $\forall i\in[\ell]$, $v_i\in V_i$ if and only if $\sigma(i)=1$. \end{itemize} (Thus, $u_i$ and $v_i$ are together in $V_i$ if and only if $\sigma(i)=1$.) The question is: Does, for every map $\sigma:[\ell]\rightarrow\{0,1\}$, a $\sigma$-satisfying partition of $W$ into $m$ subsets exist? \end{itemize} \end{definition} \subsection{The Complexity of Nonwastefulness} Here we first show that no natural verification procedure places the computation of a nonwasteful matching\footnote{whose existence is guaranteed by mechanism SD} in NP. We then show that \textsc{SPR/Nw/Find} is $\text{FP}^{\text{NP}}[\text{log}]$-hard: one can embed a logarithmic number of calls to SAT in a single call to \textsc{SPR/Nw/Find}, which is presumably strictly harder than NP.
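To fix intuition before the hardness proofs, here is a small, purely expository Python illustration (ours) of deficit vectors and Pareto dominance in \textsc{ParetoPartition}:

```python
def deficit_vector(partition, theta):
    """Deficit vector: each subset's shortfall below theta, capped at 0."""
    return tuple(min(sum(subset) - theta, 0) for subset in partition)

def pareto_dominates(d, d_prime):
    """d dominates d_prime: >= componentwise and > on some component."""
    return (all(a >= b for a, b in zip(d, d_prime))
            and any(a > b for a, b in zip(d, d_prime)))

# W = {3, 3, 2, 2}, m = 2 subsets, target theta = 5:
balanced = deficit_vector([[3, 2], [3, 2]], theta=5)  # (0, 0)
skewed = deficit_vector([[3, 3], [2, 2]], theta=5)    # (0, -1)
```

Here $(0,0)$ Pareto dominates $(0,-1)$, so among these two partitions only the balanced one can be Pareto efficient.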
\begin{figure}[t] \centering \begin{tikzpicture}[scale=1.1, every node/.style={scale=0.9}] \node at (0,-0.8) [rectangle,draw, fill=black!10] (p1) {$\theta$ students}; \node at (0,-1.2) [] (p2) {$\vdots$}; \node at (0,-1.8) [rectangle,draw, fill=black!10] (pm) {$\theta$ students}; \node at (0.25,-2.6) [rectangle,draw, fill=black!10] (pm1) {\enskip $m\theta$ students \quad~}; \node at (0.5,-3.4) [rectangle,draw, fill=black!10] (pm2) {\enskip $m\theta+m$ students \quad~}; \node[above = 0cm of p1] {Projects:}; \node[left = 0cm of p1] {$p_1$}; \node[left = 0cm of pm] {$p_m$}; \node[left = 0cm of pm1] {$p_{m+1}$}; \node[left = 0cm of pm2] {$p_{m+2}$}; \node at (5,-0.8) [rectangle,draw] (r1) {$\theta+1$}; \node at (5,-1.2) [] (r2) {$\vdots$}; \node at (5,-1.8) [rectangle,draw] (rm) {$\theta+1$}; \node at (4.85,-2.6) [rectangle,draw] (rm11) {$w_1$}; \node at (5.4,-2.6) [] (rm12) {$\ldots$}; \node at (6.1,-2.6) [rectangle,draw] (rm1m) {$w_{4m}$}; \node at (5.4,-3.4) [rectangle,draw] (rz) {$m\theta+m-1$}; \node[above = 0cm of r1] {Resources:}; \node[right = 0cm of r1] {$r_{x_1}$}; \node[right = 0cm of rm] {$r_{x_m}$}; \node[right = 0cm of rm1m] {$r_i$}; \node[right = 0cm of rz] {$r_z$}; \draw[->] (r1) -- (p1); \draw[->] (rm) -- (pm); \draw[->] (rm11) -- (pm1); \draw[->] (rz) -- (pm2); \draw[->,dashed,thin] (r1) to [out = 180, in = 0] (pm2); \draw[->,dashed,thin] (rm) to [out = 180, in = 0] (pm2); \draw[->,dashed,thick] (rm11) to [out = 180, in = 0] (p1); \draw[->,dashed,thick] (rm11) to [out = 180, in = 0] (0.8,-1.3); \draw[->,dashed,thick] (rm11) to [out = 180, in = 0] (pm); \draw[->,dashed,thin] (rz) to [out = 180, in = 0] (pm1); \node at (2.2,-1.3) [rectangle,draw,dashed,fill=black!05] {4-Partition?}; \end{tikzpicture} \caption{Reducing \textsc{4-Partition} to \textsc{SPR/Nw/Verif}. Students specified in project boxes are the students that are acceptable for each project. 
While the horizontal resource allocation makes almost all capacity requirements feasible, one more student can be matched to $p_{m+2}$ if and only if the dashed resource allocation (with a solution to \textsc{4-Partition}) is feasible.}\label{fig:1} \end{figure} \begin{theorem} \textsc{SPR/Nw/Verif} is coNP-complete, even if each student only has one acceptable project. \end{theorem} \begin{proof} A claiming pair $(s,p)$ together with an allocation $\mu'$ that makes it feasible is an efficiently verifiable no-certificate. Hence, \textsc{SPR/Nw/Verif} is in coNP. To show coNP-hardness, any instance $W=\{w_1,\ldots,w_{4m}\}$ of \textsc{4-Partition} with target $\theta$ (assuming $\sum_{w\in W} w = m\theta$ and $\frac{\theta}{5}<w_i<\frac{\theta}{3}$) is reduced to the following co-instance, whose yes-answer corresponds to the existence of a claiming pair (see Fig. \ref{fig:1}). There are $m+2$ projects. For $i\in[m]$, $\theta$ students only consider $p_i$ acceptable, $m\theta$ students only consider $p_{m+1}$ acceptable, and $m\theta+m$ students only consider $p_{m+2}$ acceptable. Each project likewise ranks its corresponding students as acceptable, in arbitrary order. In matching $Y$, all students are matched except one student $s^\ast$, who wanted $p_{m+2}$. In allocation $\mu$, for every $i\in[m]$, project $p_i$ receives resource $r_{x_i}$ with capacity $q_{r_{x_i}}=\theta+1$ and $T_{r_{x_i}}=\{p_i,p_{m+2}\}$. Project $p_{m+1}$ receives $4m$ resources $r_i$ identified with integer set $W=\{w_1,\ldots,w_{4m}\}$: for every $i\in[4m]$, resource $r_i$ has capacity $q_{r_{i}}=w_i$ and $T_{r_{i}}=\{p_j\mid j\in[m+1]\}$. Project $p_{m+2}$ receives resource $r_z$ with capacity $q_{r_z}=m\theta+m-1$ and $T_{r_z}=\{p_{m+1},p_{m+2}\}$. Since integers $w_i$ and $\theta$ are polynomially bounded, so is the number of students, and the reduction is polynomial-time. There exists a solution $V_1,\ldots,V_m$ to \textsc{4-Partition} if and only if allocation $\mu'$ (dashed in Figure \ref{fig:1}) is feasible, i.e.
$(s^\ast,p_{m+2})$ is a claiming pair. \end{proof} \begin{theorem} \textsc{SPR/Nw/Find} belongs to $\text{FP}^{\text{NP}}[\text{poly}]$ and is $\text{FP}^{\text{NP}}[\text{log}]$-hard, even if each student only has a single acceptable project. \end{theorem} \begin{proof} Mechanism SD shows that \textsc{SPR/Nw/Find} belongs to $\text{FP}^{\text{NP}}[\text{poly}]$. Hardness follows from Lemmas \ref{lem:1} and \ref{lem:2} below. \end{proof} \begin{lemma}\label{lem:1} \textsc{ParetoPartition} is $\text{FP}^{\text{NP}}[\text{poly}]$-complete and strongly $\text{FP}^{\text{NP}}[\text{log}]$-hard. \end{lemma} \begin{proof} \textsc{ParetoPartition} (a partition into $m$ subsets targeting $\theta$) belongs to $\text{FP}^{\text{NP}}[\text{poly}]$. Indeed, a Leximax partition (thus Pareto efficient) can be found by making a polynomial number of calls to an NP-oracle on the following subproblem: Given one deficit per subset ${\delta}_1,\ldots,{\delta}_{m}$, decide whether a mapping from $W$ to subsets $V_1,\ldots,V_{m}$ exists such that the deficits are greater than or equal to ${\delta}_1,\ldots,{\delta}_{m}$. A Leximax partition can be found by iterating on $V_i$ from $V_1$ to $V_{m}$. Assuming the first components ${\delta}_1,\ldots,{\delta}_{i-1}$ of a Leximax Pareto efficient partition were previously fixed by iterations $V_1$ to $V_{i-1}$ and ${\delta}_{i+1}=\ldots={\delta}_{m}=-\theta$, we set ${\delta}_{i}$ to the best feasible deficit for $V_i$ by a binary search in $[-\theta,0]$ using the NP-oracle on the subproblem above. Let any instance of \textsc{Max3DM} (maximum-weight 3-dimensional matching) be defined by finite sets $A,B,C$ with $|A|=|B|=|C|=d$ and a set of triplets $N\subseteq A\times B\times C$, $|N|=n$. Triplet $t=(a,b,c)\in N$ is mapped to payoff $v_t\in\mathbb{N}$. In a (partial) 3-dimensional matching (3DM) $N'\subseteq N$, any element of $A\cup B\cup C$ occurs at most once. The goal is to maximize $\sum_{t\in N'}v_t$ over all (partial) 3-dimensional matchings $N'\subseteq N$.
Note that maximizing $-\sum_{t\in N\setminus N'}v_t$ is an equivalent goal. This problem is $\text{FP}^{\text{NP}}[\text{poly}]$-complete \cite[Th. 3.5]{gasarch1995optp}. For every $a\in A$ (resp. $b\in B$, $c\in C$), let $\#a$ (resp. $\#b$, $\#c$) denote the number of occurrences of $a$ (resp. $b$, $c$) in $N$: the number of triplets that contain $a$ (resp. $b$, $c$). Let $v_N$ denote the total $\sum_{t\in N}v_t$. Elements $a_i\in A,b_j\in B,c_k\in C$ and triplets $t\in N$ are identified with integers $i,j,k\in[d]$ and $t\in[n]$. We reduce this problem to the following instance of \textsc{ParetoPartition}, for which finding a Pareto efficient solution produces the optimum for the given \textsc{Max3DM} instance. Set $W$ contains $8n$ integers that must be partitioned into $m=n+1$ subsets (of various cardinalities). Given base $\beta\in\mathbb{N}_{\geq 2}$ and integer sequence $(z_i)_{i\in\mathbb{N}}$, we define integer $\langle \ldots z_2~z_1~z_0\rangle$ by $\sum_{i\geq 0}z_i\beta^i$. Let $\beta$ be an integer large enough for such a representation in base $\beta$ to never have carryovers, even when one adds all the integers in $W$. Choosing $\beta=\max\{30n^3d,nv_N\}+1$ amply suffices. Let $\Sigma_n$ denote $\sum_{t=1}^{n}t=\frac{n(n+1)}{2}$. The integers in set $W$ are represented below. For each $t=(a_i,b_j,c_k)\in N$, there is a \emph{triplet}-integer $w(t)$. For each $a_i\in A$, we introduce one \emph{actual}-integer $w(a_i)$, representing the actual element intended to go with the triplets in a (partial) 3DM, and $\#a_i-1$ \emph{dummies}, present in triplets that are not in the 3DM. Similarly, we introduce $\#b_j$ (resp. $\#c_k$) integers for every $b_j\in B$ (resp. $c_k\in C$). For each $t\in N$, there are four \emph{value}-integers $w(v_t)$. Target $\theta$ is below. We also indicate the values $\theta-w(t)$, which will be useful later. $$ \begin{array}{lrcccccccccl} & & z_7 & z_6 & z_5 & z_4 & z_3 & z_2 & z_1 & z_0\\[1ex] \hline \forall t\!\in\!
N, & w(t\!=\!(a_ib_jc_k)) = \langle & {3n\!-\!4} & {24n\!-\!15} & -i & -j & -k & {3\Sigma_n\!-\!t} & 3d\!+\!3n\!-\!3 & v_N&\rangle\\[2ex] \forall a_i\!\in\! A,& ^{\text{one actual,}}_{\#a_i-1\text{ dum.}} ~ w(a_i) = \langle& 1 & 1 & i & 0 & 0 & 0 & ^{1\text{ (actual)}}_{0\text{ (dum.)}} & 0 &\rangle\\[2ex] \forall b_j\!\in\! B,& ^{\text{one actual,}}_{\#b_j-1\text{ dum.}} ~ w(b_j) = \langle& 1 & 2 & 0 & j & 0 & 0 & ^{1\text{ (actual)}}_{0\text{ (dum.)}} & 0 &\rangle\\[2ex] \forall c_k\!\in\! C,& ^{\text{one actual,}}_{\#c_k-1\text{ dum.}} ~ w(c_k) = \langle& 1 & 4 & 0 & 0 & k & 0 & ^{1\text{ (actual)}}_{0\text{ (dum.)}} & 0 &\rangle\\[2ex] \forall t\!\in\! N,& ^{^{\text{``zero''}}_{\text{``one''}}}_{^{\text{``two''}}_{\text{``three''}}} ~ w(v_t) = \bigg\langle& 1 & 8 & 0 & 0 & 0 & t & ^{^{0\text{ (zero)}}_{1\text{ (one)}}}_{^{2\text{ (two)}}_{3\text{ (three)}}} & ^{^{-v_t}_{0}}_{^{0}_{0}} &\bigg\rangle\\[2ex] \hline \text{Target } & \theta = \langle& 3n & 24n & 0 & 0 & 0 & 3\Sigma_n & 3d+3n & 0 & \rangle\\[2ex] \hline \textit{Remark:} & \theta-w(t) = \langle& 4 & 15 & i & j & k & t & 3 & -v_N & \rangle \end{array} $$ Since every subset has the same target $\theta$, given a partition $\left(V_i\mid i\in[m]\right)$ with deficits $\bm{\delta}\in\mathbb{Z}^m$ and any permutation $\sigma:[m]\leftrightarrow[m]$, deficits $(\delta_{\sigma(i)}\mid i\in[m])$ are also feasible by the permuted partition $(V_{\sigma(i)}\mid i\in[m])$. On every column but $z_0$, total offer (weights) equals total demand (targets). For instance, on column $z_1$, it holds that $n(3d+3n-3)+3d+6n=(n+1)(3d+3n)$.
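In detail, the offer on column $z_1$ decomposes as follows: the $n$ triplet-integers contribute $3d+3n-3$ each, the $3d$ actual element-integers contribute $1$ each (dummies contribute $0$), and the four value-integers of each of the $n$ triplets contribute $0+1+2+3=6$ in total, while the $m=n+1$ subsets demand $3d+3n$ each; indeed,
\begin{align*}
n(3d+3n-3)+3d+6n &= 3nd+3n^2-3n+3d+6n\\
&= 3nd+3n^2+3n+3d\\
&= (n+1)(3d+3n).
\end{align*}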
Given any maxim\underline{al} 3DM $N'\subseteq N$, one can make partitions such that one arbitrary subset $V_{(\ast)}$ has deficit $\delta_{(\ast)}=-\sum_{t\in N\setminus N'}v_t$ and the $n$ other subsets $V_{(t)}$ have deficit $\delta_{(t)}=0$, as follows: \begin{itemize} \item For every $t=(a_i,b_j,c_k)\in N'$, we make a subset $V_{(t)}$ that contains $w(t)$, the three actuals $w(a_i),w(b_j),w(c_k)$ and integer $w(v_t)$ ``zero''. Integers $w(v_t)$ ``one, two and three'' are sent to $V_{(\ast)}$ without the $-v_t$ deficit from ``zero''. \item For every $t=(a_i,b_j,c_k)\in N\setminus N'$, we make a subset $V_{(t)}$ that contains $w(t)$, actual or dummy integers $w(a_i),w(b_j),w(c_k)$;\footnote{A complete 3DM may not exist. A partial 3DM may leave some actuals in $N\setminus N'$.} and, if subset $V_{(t)}$ contains resp. one, two or three dummies,\footnote{By maxim\underline{al}ity of $N'$, zero dummies is not possible.} integer $w(v_t)$ respectively ``one, two or three''. The other integers $w(v_t)$, which include deficit $-v_t$, are sent to $V_{(\ast)}$. \end{itemize} From any optimal 3DM $N'$ and $i\in[m]$, let $\bm{\delta}^{\text{opt}(i)}$ be the deficit vector with ${\delta}^{\text{opt}(i)}_{i}=-\sum_{t\in N\setminus N'}v_t$ and $\bm{\delta}^{\text{opt}(i)}_{-i}\equiv 0$,\footnote{Given a vector $\bm{\delta}\in\mathbb{Z}^{n+1}$ and $i\in[n+1]$, $\bm{\delta}_{-i}\in\mathbb{Z}^{n}$ denotes the same vector where the $i$th component is removed.} where $V_{(\ast)}=V_i$. Below, we show that this family of $m$ deficit vectors dominates all the others; hence, these are the only Pareto efficient ones. The idea is that every subset $V_i$ (whose objective is to maximize $\delta_i$ up to zero) has a column-wise lexicographic preference on integers, from the heaviest column $z_7$ (weight $\beta^7$) to the lightest $z_0$ (weight $\beta^0$).
Indeed, since in each column (but $z_0$) total offer (weights) equals total demand (targets), an unbalanced partition is always dominated: at efficiency, each column's deficit is exactly zero, and a deficit on a column cannot be compensated on lighter ones. And, sums of integers in $W$ never have carryovers from a column to a heavier one. By reasoning from $z_7$ to $z_1$, any partition which does not satisfy all the following conditions is clearly Pareto dominated by some $\bm{\delta}^{\text{opt}(i)}$ because of one huge deficit in multiples of $\beta$ on some component $\delta_i$. \begin{description} \setlength{\itemsep}{1em} \item[$z_7$: ] No subset contains two triplet-integers. Therefore, $n$ subsets (among $m\!=\!n\!+\!1$) can be identified from the triplet-integer $w(t)$ contained by $V_{(t)}$; and we identify the last one by $V_{(\ast)}$. These subsets can be ordered indifferently. For a subset $V_{(t)}$, the remaining deficit $\theta-w(t)$ is: $$ \arraycolsep=6.0pt \begin{array}{rcccccccc} \theta_{(t)}=\langle 4 & 15 & i & j & k & t & 3 & -v_N \rangle \end{array} $$ Thus, subsets $V_{(t)}$ must contain four other integers to cancel the deficit 4 at $z_7$. Then, $V_{(\ast)}$ must contain the remainder of the $n$ integers; the deficit on column $z_7$ becomes $0$. \item[$z_6$:] Subset $V_{(\ast)}$ contains $3n$ \emph{value}-integers $w(v_t)$, so its value at $z_6$ is $24n$ (i.e., no deficit). Subsets $V_{(t)}$ must contain one of each in integers $w(a)$, $w(b)$, $w(c)$ and $w(v)$ to cancel the deficit 15 at $z_6$. \item[$z_{5}$--$z_{2}$:] To cancel the deficits from $z_5$ to $z_2$, for $t\!=\!(a_i,b_j,c_k)\!\in\!N$, subset $V_{(t)}$ contains \emph{precisely} one of each in integers $w(a_i)$, $w(b_j)$, $w(c_k)$, and $w(v_t)$. Also, $V_{(\ast)}$ has deficit $3\Sigma_n$ at column $z_2$. Thus, it needs \emph{exactly} three $w(v_t)$ of every triplet $t$ to cancel the deficit; otherwise, some $V_{(t)}$ would be missing its own.
\item[$z_{1}$--$z_{0}$:] Again, due to the tightness of offer against demand on column $z_1$, subset $V_{(t)}$ must contain either (i) three actual elements and integer $w(v_t)$ ``zero'' or (ii) one, two or three dummy elements and integer $w(v_t)$ respectively ``one, two or three''. In case (i), integers $w(v_t)$ ``one, two and three'' go to $V_{(\ast)}$ without degrading it. In case (ii), three integers $w(v_t)$ which include integer $w(v_t)$ ``zero'' go to $V_{(\ast)}$ and degrade it by $-v_t$. \end{description} All in all, Pareto efficiency constrains partitions to be structured as in the mapping from a 3-dimensional matching $N'$ given above: the only possible Pareto efficient deficit vectors are $\bm{\delta}^{\text{opt}(i)}$ for $i\in[m]$, and they thus provide the optimum for \textsc{Max3DM}. Consequently, this reduction is metric. Since weighted \textsc{Max3DM} is $\text{FP}^{\text{NP}}[\text{poly}]$-hard \cite{gasarch1995optp}, so is \textsc{ParetoPartition}. Since \emph{unweighted} \textsc{Max3DM} is $\text{FP}^{\text{NP}}[\text{log}]$-hard and for $v_t\!\in\!\{0,1\}$ no integer exceeds the polynomial $\beta^8$, \textsc{ParetoPartition} is also \emph{strongly} $\text{FP}^{\text{NP}}[\text{log}]$-hard. \end{proof} \begin{lemma}\label{lem:2} If the numbers in problem \textsc{ParetoPartition} are polynomially bounded, then the reduction \textsc{ParetoPartition} $\leq_p$ \textsc{SPR/Nw/Find} holds. \end{lemma} \begin{proof} We reduce any instance $W=\{w_1,\ldots,w_{|W|}\}$, $m\in\mathbb{N}$, $\theta\in\mathbb{N}$ of \textsc{ParetoPartition} to an instance of \textsc{SPR/Nw/Find}. There are $m$ projects $p_1,\ldots,p_m$; for every project $p_i$ there is a disjoint set of $\theta$ students who consider only $p_i$ acceptable (and reciprocally). Project $p_i$ ranks these students arbitrarily. Resources $R$ are identified with set $W$: any resource is compatible with any project and $q_R=(w_1,\ldots,w_{|W|})$.
Crucially, with numbers in \textsc{ParetoPartition} polynomially bounded, there are only polynomially many students. Computing a nonwasteful matching $(Y,\mu)$ outputs a partition $V_1,\ldots,V_m\equiv \mu^{-1}(p_1),\ldots,\mu^{-1}(p_m)$ with Pareto efficient deficits. Indeed, by definition, a claiming pair would exist if and only if there were an allocation (resp. partition) where the number of unmatched students per project (resp. deficit vector) Pareto dominated the ``deficit vector'' of allocation/partition $V_1,\ldots,V_m$. \end{proof} \subsection{The Complexity of Stability} A matching that is both nonwasteful and fair (i.e., stable) may not exist. In this section, we settle the complexity of deciding whether such a matching exists in a given SPR as $\text{NP}^{\text{NP}}$-complete, a class believed to be strictly more intractable than NP. \begin{theorem}\label{th:stable:verif} \textsc{SPR/Stable/Verif} is coNP-complete, even if students only have one acceptable project. \end{theorem} \begin{proof} The construction is the same as for \textsc{SPR/Nw/Verif}. Assuming that in the given matching project $p_{m+2}$ has its $m\theta+m-1$ top-preferred students, no envious pair can arise in this construction; hence, stability amounts to nonwastefulness. Therefore, the same proof holds. \end{proof} \begin{theorem} \textsc{SPR/Stable/Exist} is $\text{NP}^{\text{NP}}$-complete. \end{theorem} \begin{proof} A stable matching is a yes-certificate verifiable by an NP-oracle (Theorem \ref{th:stable:verif}); hence, \textsc{SPR/Stable/Exist} belongs to $\text{NP}^{\text{NP}}$. Hardness follows from Lemmas \ref{lem:3} and \ref{lem:4} below. \end{proof} \begin{lemma}\label{lem:3} \textsc{$\forall\exists$-4-Partition} is strongly $\text{coNP}^{\text{NP}}$-hard.
\end{lemma} \begin{proof} Let any instance of \textsc{$\forall\exists$-3DM} be defined by finite sets $A,B,C$ with $|A|=|B|=|C|=d$ and two disjoint triplet sets $M,N\subseteq A\times B\times C$, with $|M|=n'$ and $|N|=n$. This decision problem asks the following question: $$ \forall M'\subseteq M,\quad \exists N'\subseteq N,\quad M'\cup N'\text{ is a 3DM,} $$ where ``$M'\cup N'$ is a 3DM'' means that any element of $A\cup B\cup C$ occurs exactly once in $M'\cup N'$. This is a $\text{coNP}^{\text{NP}}$-complete problem \citep{mcloughlin1984}. For every $a_i\in A$ (resp. $b_j\in B$, $c_k\in C$), let $\#a_i$ (resp. $\#b_j$, $\#c_k$) denote the number of occurrences of $a_i$ (resp. $b_j$, $c_k$) in $M\cup N$, i.e., the number of triplets that contain $a_i$ (resp. $b_j$, $c_k$). We identify elements and triplets with integers $i,j,k\in[d]$ and $t\in[n'+n]$. We reduce this instance to the following \textsc{$\forall\exists$-4-Partition} instance. List $W$ contains the $4(n'+n)$ integers depicted below in base $\beta=4(n'+n)d+1$ (notation as in the proof of Lemma \ref{lem:1}). For every triplet $t=(a_i,b_j,c_k)\in M\cup N$, there is one ``triplet'' integer $w(a_i,b_j,c_k)\in\mathbb{N}$. For every element $a\in A$, we introduce one \emph{actual} integer $w(a)$ that represents the actual element intended to go with the triplets in the 3DM, and $\#a-1$ \emph{dummies} that will go with the triplets that are not in the 3-dimensional matching. Similarly, we introduce $\#b$ integers for each $b\in B$ and $\#c$ integers for each $c\in C$. Target $\theta=4\beta^5+15\beta^4$ is also depicted below. Numbers are polynomially bounded by $\beta^6$.
$$ \arraycolsep=10pt \begin{array}{lrcccccccl} \forall t\in M\cup N,& w({t=(a_ib_jc_k)}) = \langle & 1 & 1 & -i & -j & -k & 0& \rangle\\[1ex] \forall a_i\in A,& ^{\quad\text{one actual}}_{\#a_i\!-\!1\text{ dum.}} ~w(a_i) = \langle& 1 & 2 & i & 0 & 0 & ^{-2\text{ (actual)}}_{~~0\text{ (dummy)}}& \rangle\\[1ex] \forall b_j\in B,& ^{\quad\text{one actual}}_{\#b_j\!-\!1\text{ dum.}} ~w(b_j) = \langle& 1 & 4 & 0 & j & 0 & ^{+1\text{ (actual)}}_{~~0\text{ (dummy)}}& \rangle\\[1ex] \forall c_k\in C,& ^{\quad\text{one actual}}_{\#c_k\!-\!1\text{ dum.}} ~w(c_k) = \langle& 1 & 8 & 0 & 0 & k & ^{+1\text{ (actual)}}_{~~0\text{ (dummy)}}& \rangle\\[1ex] \hline \textbf{target } & \theta = \langle& 4 & 15 & 0 & 0 & 0 & 0 & \rangle \end{array} $$ List $\mathcal{L}$ has length $\ell=|M|$: every triplet $t=(a_i,b_j,c_k)\in M$ is reduced to couple $(u_t,v_t)$ between ``triplet'' integer $u_t=w(a_i,b_j,c_k)$ and ``actual'' integer $v_t=w(a_i)$. First, since $\beta$ is large enough and column-wise offer (weights) equals demand (targets), additions in $W$ never have carryovers. Therefore, subsets must hit the target on each column of this representation. Consequently, in any 4-partition of $W$, each subset contains four integers, one of each kind: a ``triplet'' integer, an element-$a$ integer, an element-$b$ integer and an element-$c$ integer. Moreover, each ``triplet'' integer $w(a_i,b_j,c_k)$ lies with ``its'' elements $w(a_i)$, $w(b_j)$ and $w(c_k)$. Also, within a subset, the three element-integers are either all \emph{actuals} or all dummies. Therefore, any 3-dimensional matching $M'\cup N'$ is in correspondence with such a 4-partition. Validity follows from the correspondence between $M'$ (which triplets of $M$ are taken) and $\sigma$ (which integers $w(t)$, $t\in M$, are enforced into the same subset as their actual elements $w(a_i)$, $w(b_j)$ and $w(c_k)$). (yes$\Rightarrow$yes) Assume the 3DM instance is a yes one, and let $\sigma:[\ell]\rightarrow\{0,1\}$ be any couple enforcement/forbidding function.
We construct a $\sigma$-satisfying 4-partition in correspondence with the following 3-dimensional matching $M'\cup N'$: for $t\in[\ell]\!\equiv\!M$, triplet $t$ is in $M'$ if and only if $\sigma(t)=1$; then the assumption gives $N'$ such that $M'\cup N'$ is a 3DM. We construct the corresponding 4-partition (see paragraph above), and it is $\sigma$-satisfying. (yes$\Leftarrow$yes) Assume the partition instance is a yes one, and let us show that $\forall M'\subseteq M, \exists N'\subseteq N$ s.t. $M'\cup N'$ is a 3DM. Given $M'$, let $\sigma$ be defined as $\sigma(t)=1$ if and only if $t\in M'$. A $\sigma$-satisfying 4-partition exists, and is in correspondence with some 3DM $M'\cup N'$, by construction, as above. \end{proof} \begin{lemma}\label{lem:4} \textsc{$\forall\exists$-4-Partition} $\leq_p$ \textsc{co-SPR/Stable/Exist} \end{lemma} \begin{figure}[t] \centering \begin{tikzpicture} \node at (0,1.0) [rectangle,draw] (p1p) {$p'_1:\overline{s_{v_1}}$}; \node at (0,0.4) [] (pdp) {$\vdots$}; \node at (0,-0.4) [rectangle,draw] (plp) {$p'_\ell:\overline{s_{v_\ell}}$}; \node at (4,1.0) [rectangle,draw] (rv1) {$r_{v_1}:v_1$}; \node at (4,0.4) [] (rdv) {$\vdots$}; \node at (4,-0.4) [rectangle,draw] (rvl) {$r_{v_\ell}:v_\ell$}; \node at (-4,1.0) [rectangle,draw] (sv1) {$\overline{s_{v_1}}:p'_1\succ p_1$}; \node at (-5.5,1.0)[]{\footnotesize $v_1$}; \node at (-4,0.4) [] (sd1) {$\vdots$}; \node at (-5.5,0.4) [] (sd1) {$\vdots$}; \node at (-4,-0.4) [rectangle,draw] (svl) {$\overline{s_{v_\ell}}:p'_\ell\succ p_\ell$}; \node at (-5.5,-0.4)[]{\footnotesize $v_\ell$}; \node at (0,-2.2) [rectangle,draw,minimum width=2.1cm] (p1) {$p_1:\overline{s_{v_1}} \succ \overline{s_1}$}; \node at (0,-2.8) [] (pd1) {$\vdots$}; \node at (0,-3.6) [rectangle,draw,minimum width=2.1cm] (pl) {$p_\ell:\overline{s_{v_\ell}} \succ \overline{s_\ell}$}; \node at (-4,-2.2) [rectangle,draw,minimum width=1.8cm] (s1) {$\overline{s_1}:p_1$}; \node at (-5.55,-2.2)[]{\footnotesize $\theta\!-\!u_1\!-\!v_1$}; 
\node at (-4,-2.8) [] (sd1) {$\vdots$}; \node at (-5.55,-2.8) [] (sd1) {$\vdots$}; \node at (-4,-3.6) [rectangle,draw,minimum width=1.8cm] (sl) {$\overline{s_\ell}:p_\ell$}; \node at (-5.55,-3.6)[]{\footnotesize $\theta\!-\!u_\ell\!-\!v_\ell$}; \node at (0,-4.4) [rectangle,draw,minimum width=2.1cm] (pl1) {~~$p_{\ell+1}:\overline{s_{\ell+1}}$~~}; \node at (0,-5.0) [] (pd2) {$\vdots$}; \node at (0,-5.8) [rectangle,draw,minimum width=2.1cm] (pm) {~~~~$p_m:\overline{s_m}$~~~~}; \node at (-4,-4.4) [rectangle,draw,minimum width=1.8cm] (sl1) {$\overline{s_{\ell+1}}:p_{\ell+1}$}; \node at (-5.55,-4.4)[]{\footnotesize $\theta$}; \node at (-4,-5.0) [] (sd1) {$\vdots$}; \node at (-5.55,-5.0) [] (sd1) {$\vdots$}; \node at (-4,-5.8) [rectangle,draw,minimum width=1.8cm] (sm) {~~~$\overline{s_m}:p_m$~~~}; \node at (-5.55,-5.8)[]{\footnotesize $\theta$}; \node at (3.25,-4) [rectangle,double, draw,inner sep=0.5em] (rw) {$\{r_w:w\!\mid\!w\!\in\!W\!\setminus\!\mathcal{L}\}$}; \node at (0,-7.4) [rectangle,draw] (pa) {$p_a:s_b\succ s_a$}; \node at (0,-8.2) [rectangle,draw] (pb) {$p_b:s_a\succ s_b$}; \node at (2.5,-7.8) [rectangle,draw] (rd) {$r_1:1$}; \node at (-4,-7.4) [rectangle,draw] (sa) {$s_a:p_a\succ p_b$}; \node at (-4,-8.2) [rectangle,draw] (sb) {$s_b:p_b\succ p_a$}; \path[->] (sv1) edge[out=0,in=180] (p1p) (sv1) edge[out=330,in=150] (p1) (svl) edge[out=0,in=180] (plp) (svl) edge[out=330,in=160] (pl) (s1) edge[out=0,in=180] (p1) (sl) edge[out=0,in=180] (pl) (sl1) edge[out=0,in=180] (pl1) (sm) edge[out=0,in=180] (pm) (sa) edge[out=0,in=180] (pa) (sa) edge[out=350,in=170] (pb) (sb) edge[out=10,in=190] (pa) (sb) edge[out=0,in=180] (pb); \path[->] (rv1) edge[out=180,in=0] (p1p) (rv1) edge[dashed] (1.4,-2.8) (rvl) edge[out=180,in=0] (plp) (rvl) edge[dashed] (1.5,-3.6) (rw) edge[out=120,in=0,dashed] (p1) (rw) edge[dashed] (pl) (rw) edge[dashed] (pl1) (rw) edge[out=240,in=0,dashed] (pm) (rd) edge[dashed] (1.5,-5.0) (rd) edge (pa) (rd) edge (pb); \draw[draw=black!25,very thick, rounded 
corners] (-6.3,1.5) rectangle (5,-0.9); \draw[draw=black!25,very thick, rounded corners] (-6.3,-1.65) rectangle (5,-6.3); \draw[draw=black!25,very thick, rounded corners] (-6.3,-7) rectangle (5,-8.6); \node at (2.0,0.3) [circle,draw,black!25,very thick] {\huge $\forall$}; \node at (3.6,-2.8) [circle,draw,black!25,very thick] {\huge $\exists$}; \node at (4.1,-7.6) [rectangle, rounded corners,draw,black!25,very thick] {\large yes/no}; \node at (-5.5,2) {(number)}; \node at (-4,2) {students}; \node at (0,2) {projects}; \node at (4,2) {resources}; \end{tikzpicture} \caption{From \textsc{$\forall\exists$-4-Partition} to \textsc{co-SPR/Stable/Exist}. Left-to-right arrows depict acceptable projects and right-to-left arrows, compatible projects. Dashed arrows go to any project among $p_1\ldots p_m$, except $p_j$ for resource $r_{v_j}$.}\label{fig:2} \end{figure} \begin{proof} Given a \textsc{$\forall\exists$-4-Partition} instance defined by $m\in\mathbb{N}$, list $W=\{w_1,\ldots,w_{4m}\}$, target $\theta\in\mathbb{N}$, and list of couples $\mathcal{L}=(u_1,v_1),\ldots,(u_\ell,v_\ell)$ from $W$, we construct a \textsc{co-SPR/Stable/Exist} instance depicted in Figure \ref{fig:2}.
It contains: \begin{itemize} \item $\ell+m+2$ projects $p'_1,p'_2,\ldots,p'_\ell$,\enskip $p_1,p_2,\ldots,p_m$\enskip and $p_a,p_b$, \item $\ell$ subsets of students $\overline{s_{v_1}},\overline{s_{v_2}},\ldots,\overline{s_{v_\ell}}$ where each subset $\overline{s_{v_i}}$ contains $v_i$ students who all have preference $\overline{s_{v_i}}:p'_i\succ p_i\succ \emptyset$, \item $m$ subsets of students $\overline{s_1},\overline{s_2},\ldots,\overline{s_\ell},\overline{s_{\ell+1}},\ldots,\overline{s_m}$ where each subset $\overline{s_{i}}$ for $i\in[\ell]$ contains $\theta-u_i-v_i$ students, each subset $\overline{s_{i}}$ for $i\in[\ell+1,m]$ contains $\theta$ students, and in every subset $\overline{s_{i}}$ all students have preference $\overline{s_{i}}:p_i\succ\emptyset$, and \item two students $s_a,s_b$ who have preferences $s_a:p_a\succ p_b\succ\emptyset$ and $s_b:p_b\succ p_a\succ\emptyset$. \item For every $i\in[\ell]$, project $p'_i$ has preference $p'_i:\overline{s_{v_i}}\succ\emptyset$, and project $p_i$ has preference $p_i:\overline{s_{v_i}}\succ\overline{s_{i}}\succ\emptyset$. For every $i\in[\ell+1,m]$, project $p_i$ has preference $p_i:\overline{s_{i}}\succ\emptyset$. Project $p_a$ has preference $p_a:s_b\succ s_a$ and $p_b$ has preference $p_b:s_a\succ s_b$ (as in Example \ref{counterex:stable}). \end{itemize} Since \textsc{$\forall\exists$-4-Partition} is hard in the strong sense, we can assume that its numbers are polynomially bounded (e.g. w.r.t. $m$); hence, there is a polynomial number of students. There are $|W|-\ell+1$ resources: \begin{itemize} \item for every $i\in[\ell]$, resource $r_{v_i}$ has capacity $q_{r_{v_i}}=v_i$ and is compatible with $\{p'_i\}\cup\{p_j\mid j\neq i\}$, \item for every weight $w\in W\setminus\mathcal{L}$, resource $r_w$ has capacity $q_{r_w}=w$ and $T_{r_w}=\{p_1,\ldots,p_m\}$, and \item resource $r_1$ has capacity $q_{r_1}=1$ and compatibilities $T_{r_1}=\{p_a,p_b\}\cup\{p_1,\ldots,p_m\}$.
\end{itemize} The idea is that the capacity requirements of projects $p_1,\ldots,p_m$ model the $m$ targets of a 4-partition. Since the integers $u_1,\ldots,u_\ell$ belong to $V_1,\ldots,V_\ell$ in any case, we subtract them in advance from the capacity requirements of $p_1,\ldots,p_\ell$. The universal quantifier is encoded as follows. \begin{itemize} \setlength{\itemindent}{5mm} \item[\underline{$\sigma(i)\!=\!1$:}] Enforcing that $u_i$ and $v_i$ appear together in a 4-partition will correspond to letting the capacity requirement of project $p_i$ be $\theta-u_i-v_i$ (as if $u_i$ and $v_i$ were already included): students $\overline{s_{v_i}}$ are matched with $p'_i$ and resource $r_{v_i}$ is allocated to $p'_i$. \item[\underline{$\sigma(i)\!=\!0$:}] Conversely, forbidding $u_i$ and $v_i$ to be together in a 4-partition will correspond to matching $\overline{s_{v_i}}$ with $p_i$, hence bringing its capacity requirement to $\theta-u_i$, while resource $r_{v_i}$ cannot be allocated to $p_i$. \end{itemize} We are now set to formally prove the validity of this reduction. (yes$\Rightarrow$yes) Assume that for each $\sigma:[\ell]\rightarrow\{0,1\}$ there is a $\sigma$-satisfying 4-partition $V_1,\ldots,V_{\ell},V_{\ell+1},\ldots,V_m$. For the sake of contradiction, let us assume that there exists a stable matching $(Y,\mu)$. By definition, for each resource $r_{v_i}, i\in[\ell]$, either (1) $\mu(r_{v_i})=p'_i$ or (2) $\mu(r_{v_i})\in\{p_j\mid j\neq i\}$. Let us consider the particular mapping $\sigma$ defined by $\sigma(i)=1$ in case (1), and $\sigma(i)=0$ in case (2). By assumption, there exists a $\sigma$-satisfying 4-partition $V_1,\ldots,V_{\ell},V_{\ell+1},\ldots,V_m$: for every $i\in[\ell]$, first $u_i\in V_i$ and second $v_i\in V_i$ if and only if $\sigma(i)=1$.
From this $\sigma$-satisfying 4-partition, there exists an allocation of $\{r_{v_i}\mid \sigma(i)=0\}$ and $\{r_w\mid w\in W\setminus\mathcal{L}\}$ to projects $p_1,\ldots,p_m$ that makes the full matching $Y(\overline{s_i})=p_i,\forall i\in[m]$ and $Y(\overline{s_{v_i}})=p_i,\forall i\in[\ell]\text{ s.t. }\sigma(i)=0$ feasible. Therefore it would be wasteful to use resource $r_1$ on projects $\{p_1,\ldots,p_m\}$, contradicting stability, and consequently $r_1$ is allocated to $p_a$ or $p_b$. But then the SPR defined by $s_a,s_b,p_a,p_b,r_1$ cannot be stable (as in Example \ref{counterex:stable}). Consequently, a stable matching is impossible. (no$\Rightarrow$no) Assume that there exists a mapping $\sigma$ such that no $\sigma$-satisfying 4-partition exists, and let us build a stable matching $(Y,\mu)$ as follows. For every $i\in[\ell]$: \begin{itemize} \item if $\sigma(i)=1$, then $Y(\overline{s_{v_i}})=\{p'_i\}$ and $\mu(r_{v_i})=p'_i$; \item if $\sigma(i)=0$, then $Y(\overline{s_{v_i}})=\{p_i\}$ and $\mu(r_{v_i})\!\in\!\{p_j\mid j\!\neq\!i\}$. \end{itemize} Then, we allocate the other resources ($\{r_w\mid w\in W\setminus\mathcal{L}\}$ \emph{and} $r_1$) in a way that minimizes the number of unmatched students in $\overline{s_1},\ldots,\overline{s_m}$. The students from $\overline{s_{v_1}},\ldots,\overline{s_{v_\ell}}$ cannot be involved in a claiming (or envious) pair: if matched to $p'_i$ they obtain their top choice, and a claiming pair moving a student from $p_i$ to $p'_i$ would deprive $p_1,\ldots,p_m$ of resource $r_{v_i}$ (currently allocated to some $p_j$, $j\neq i$), which is not feasible. Since the number of unmatched students in $\overline{s_1},\ldots,\overline{s_m}$ is minimized, no claiming pair can create an additional seat there. Since no $\sigma$-satisfying 4-partition exists, some projects among $p_1,\ldots,p_m$ would lose a seat without resource $r_1$; hence $r_1$ cannot be re-allocated to $p_a$ or $p_b$ without canceling a seat.
The remaining SPR instance defined by $s_a,s_b,p_a,p_b$ has no resource at all and is therefore stable. \end{proof} \section{Related Work} This paper follows a stream of works dealing with constrained matching. Two-sided matching has been attracting considerable attention from AI and TCS researchers~\citep{aziz2017stable,hamada2017weighted,hosseini2015manipulablity,kawase2017near}. A standard market deals with maximum quotas, i.e., capacity limits that cannot be exceeded. However, many real-world matching markets are subject to a variety of distributional constraints~\citep{kty:2014}, including regional maximum quotas, which restrict the total number of students assigned to a set of schools~\citep{kamakoji-basic}, minimum quotas, which guarantee that a certain number of students are assigned to each school~\citep{fragiadakis::2012,Goto:aamas:2014,kurata:aaams2016,sonmez_switzer2013,sonmez_rotc2011}, and diversity constraints~\citep{hafalir2013effective,ehlers::2012,kojima2012school,kurata:jair2017}. Other works examine the computational complexity of finding a matching with desirable properties under distributional constraints, including~\citep{biro:tcs:2010,Fleiner16,hamda:esa:2011}. A similar model was recently considered by \citet{ismaili2018prima}, but with a compact representation scheme which handles exponentially many students and induces intrinsically different computational problems. There exist several works on three-sided matching problems~\cite{alkan1988,NgH91,huang2007} where three types of players/agents, e.g., males, females, and pets, are matched. Although their model might look superficially similar to our model, they are fundamentally different. In the student-project allocation problem of \citet{abraham2007two}, students are matched to projects, while each project is offered by a lecturer. A student has a preference over projects, and a lecturer has a preference over students. Each lecturer has her capacity limit.
This problem can be considered as a standard two-sided matching problem with distributional constraints. More specifically, this problem is equivalent to a two-sided matching problem with regional maximum quotas~\citep{kty:2014}. A 3/2-approximation algorithm exists for the student-project allocation problem \cite{CooperM18}, and one can also obtain super-stability, despite ties \cite{OlaosebikanM18}. In our model, a resource is not an agent/player; it has no preference over projects/students. Also, a project/student has no preference over resources; a project just needs to be allocated enough resources to accommodate applying students. \section{Model} In this section, we introduce necessary definitions and notations. \begin{definition}[Student-Project-Resource (SPR) Instance] \label{def:SPRinstance} It is a tuple $(S,P,R,\succ_S,\succ_P,T_R,q_R)$. \begin{itemize} \item $S=\{s_1, \ldots, s_{|S|}\}$ is a set of students. \item $P=\{p_1, \ldots, p_{|P|}\}$ is a set of projects. \item $R=\{r_1, \ldots, r_{|R|}\}$ is a set of resources. \item $\succ_S = (\succ_s)_{s \in S}$ are the students' preferences over set $P\cup\{\emptyset\}$. \item $\succ_P =(\succ_p)_{p \in P}$ are the projects' preferences over set $S\cup\{\emptyset\}$. \item Resource $r$ has capacity $q_r\in\mathbb{N}_{>0}$, and $q_R=(q_r)_{r\in R}$. \item Resource $r$ is compatible with $T_r\subseteq P$, and $T_R=(T_r)_{r\in R}$. \end{itemize} For soundness,\footnote{Without these properties, this work is still valid, though a claiming or envious pair $(s,p)$ may not necessarily make sense.} every preference $\succ_p$ may extend to $2^S$ in a non-specified manner such that: \begin{itemize} \setlength{\itemsep}{0.5em} \item $\forall s,s'\in S, \forall S'\subseteq S\setminus\{s,s'\}, s \succ_p s' \Leftrightarrow S'\cup \{s\} \succ_p S'\cup \{s'\}$ (responsiveness) and \item $\forall s\in S,\forall S'\subseteq S\setminus\{s\}, s\succ_p\emptyset \Leftrightarrow S'\cup \{s\}\succ_pS'$ (separability).
\end{itemize} \end{definition} Contract $(s,p)\in S \times P$ means that student $s$ is matched to project $p$. Contract $(s,p)$ is acceptable for student $s$ (resp. project $p$) if $p \succ_s \emptyset$ holds (resp. $s \succ_p \emptyset$). The contract is acceptable when both hold. W.l.o.g., we define the set of contracts $X\subseteq S\times P$ by $(s, p) \in X$ if and only if it is acceptable for $p$.\footnote{For designing a strategyproof mechanism, we assume each $\succ_s$ is private information of $s$, while the rest of the parameters are public. Thus, $X$ does not need to be part of the input, since it is characterized by the projects' preferences.} \begin{definition}[Matching] A matching is a subset $Y\subseteq X$, where for every student $s\in S$, subset $Y_s=\{(s,p)\in Y\mid p\in P\}$ satisfies $|Y_s|\leq 1$, and one of the following holds: \begin{itemize} \item $Y_s=\emptyset$, or \item $Y_s=\{(s,p)\}$ with $p\succ_s\emptyset$. \end{itemize} For a matching $Y$, let $Y(s)\in P\cup\{\emptyset\}$ denote the project to which $s$ is matched, and $Y(p)\subseteq S$ denote the set of students assigned to project $p$. \end{definition} \begin{definition}[Allocation] An allocation $\mu:R\rightarrow P$ maps each resource $r$ to a project $\mu(r)\in T_r$. (A resource is indivisible.) Let $q_{\mu}(p)=\sum_{r \in \mu^{-1}(p)} q_r$.\footnote{For $\mu^{-1}(p)=\emptyset$, we assume that an empty sum equals zero.} \end{definition} \begin{definition}[Feasibility] A feasible matching $(Y,\mu)$ is a couple of a matching and an allocation where for every project $p\in P$, it holds that $|Y(p)| \leq q_{\mu}(p)$. \end{definition} In other words, matching $Y$ is feasible with allocation $\mu$ if each project $p$ is allocated enough resources by $\mu$ to accommodate $Y(p)$. We say $Y$ is feasible if there exists $\mu$ such that $(Y, \mu)$ is feasible. Traditionally (e.g.
with fixed quotas), for a feasible matching $(Y, \mu)$ and $(s,p)\in X\setminus Y$, we say student $s$ \emph{claims an empty seat} of $p$ if $p \succ_s Y(s)$ and matching $Y \setminus \{(s,Y(s))\} \cup \{(s, p)\}$ is feasible with the \emph{same} allocation $\mu$. However, in our setting \cite{Goto:AEJ-micro:2016}, since the distributional constraint is endogenous and as flexible as the allocation itself, the definition of nonwastefulness exploits this flexibility, as follows. \begin{definition}[Nonwastefulness] Given a feasible matching $(Y,\mu)$, a contract $(s,p)\in X\setminus Y$ is a claiming pair if and only if: \begin{itemize} \item student $s$ has preference $p \succ_s Y(s)$, and \item matching $Y\setminus \{(s,Y(s))\}\cup\{(s,p)\}$ is feasible with some \emph{possibly new} allocation $\mu'$. \end{itemize} A feasible matching $(Y,\mu)$ is nonwasteful if it has no claiming pair. \end{definition} In other words, $(s, p)$ is a claiming pair if it is possible to move $s$ to a more preferred project $p$ while keeping the assignment of other students unchanged under some allocation $\mu'$. Note that $\mu'$ can be different from $\mu$. Thus, $(s, p)$ can be a claiming pair even if moving her to $p$ is impossible with the current allocation $\mu$, but becomes possible with a different/better allocation $\mu'$. \begin{definition}[Fairness]\label{def:fair} Given a feasible matching $(Y,\mu)$, contract $(s,p)\in X\setminus Y$ is an envious pair if and only if: \begin{itemize} \item student $s$ has preference $p \succ_s Y(s)$, and \item there exists a student $s'\in Y(p)$ such that $p$ prefers $s \succ_p s'$.\footnote{Note that matching $(Y\setminus \{(s,Y(s)),(s',Y(s'))\})\cup\{(s,p)\}$ is still feasible with the same allocation $\mu$.} \end{itemize} We also say $s$ has justified envy toward $s'$ when the above conditions hold. A feasible matching $(Y,\mu)$ is fair if it has no envious pair (equivalently, no student has justified envy).
\end{definition} In other words, student $s$ has justified envy toward $s'$ if $s'$ is assigned to project $p$, although $s$ prefers $p$ over her current project $Y(s)$ and project $p$ also prefers $s$ over $s'$. \begin{definition}[Stability] A feasible matching $(Y,\mu)$ is stable if it is nonwasteful and fair (no claiming/envious pair). \end{definition} \begin{definition}[Pareto Efficiency] Matching $Y$ is Pareto dominated by $Y'$ if all students weakly prefer $Y'$ over $Y$ and at least one student strictly prefers $Y'$. A feasible matching is Pareto efficient if no feasible matching Pareto dominates it. \end{definition} Pareto efficiency implies nonwastefulness (not vice versa). \begin{definition}[Mechanism] Given any SPR instance, a mechanism outputs a feasible matching $(Y, \mu)$. If a mechanism always obtains a feasible matching that satisfies property A (e.g., fairness), we say this mechanism is A (e.g., fair). A mechanism is strategyproof if no student gains by reporting a preference different from her true one. \end{definition} An SPR instance belongs to a general class of problems in which distributional constraints satisfy a condition called \emph{heredity}\footnote{Heredity means that if matching $Y$ is feasible, then so is any of its subsets. An SPR instance satisfies this property.} \citep{Goto:AEJ-micro:2016}. Two general strategyproof mechanisms exist in this context~\citep{Goto:AEJ-micro:2016}. \emph{First}, Serial Dictatorship (SD) obtains a Pareto efficient (thus also nonwasteful) matching. {SD matches students one by one, based on a fixed ordering. Let $Y$ denote the current (partial) matching.
For the next student $s$ in the fixed order, SD chooses $(s,p) \in X$ and adds it to $Y$, where $p$ is her most preferred project s.t. $Y\cup\{(s,p)\}$ is feasible with some allocation $\mu'$.} Unfortunately, SD is computationally expensive\footnote{It requires solving \textsc{SPR/FA} (see below) $O(|X|)$ times.} and unfair. \emph{Second}, Artificial Caps Deferred Acceptance (ACDA) obtains a fair matching in polynomial time. The idea is to fix a resource allocation $\mu$ and run the well-known Deferred Acceptance (DA) algorithm~\citep{Gale:AMM:1962}. In DA, each student first applies to her most preferred project. Then each project tentatively accepts applicants up to its capacity limit based on its preference, and the rest of the students are rejected. Then a rejected student applies to her second choice, and so on.\footnote{Each project tentatively accepts applying students, without distinguishing newly applying students from those already tentatively accepted.} However, ACDA is inefficient since {$\mu$ is chosen independently of students' preferences.} \begin{example}\label{counterex:stable} Nonwastefulness and fairness are incompatible, since there exists an instance with no stable matching. Let us show a simple example with two students $s_a, s_b$, two projects $p_a, p_b$, and a unit-capacity resource compatible with both projects. The students' preferences are $p_a\succ_{s_a}p_b$ and $p_b\succ_{s_b}p_a$. The projects' preferences are $s_b\succ_{p_a}s_a$ and $s_a\succ_{p_b}s_b$. By symmetry, assume the resource is allocated to $p_a$. From fairness, $s_b$ must be matched to $p_a$. Then $(s_b, p_b)$ becomes a claiming pair.\footnote{We use this example as a building block in the next section.} \end{example}
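The instance in this example is small enough that the nonexistence of a stable matching can be confirmed by exhaustive search. The following sketch is our own illustration (all identifiers are ours, not from the paper); it enumerates every matching and allocation and tests the claiming-pair and envious-pair conditions defined above:

```python
from itertools import product

# Illustrative encoding of the two-student example: two students, two
# projects, one resource of capacity 1 compatible with both projects.
students = ["sa", "sb"]
projects = ["pa", "pb"]
s_pref = {"sa": ["pa", "pb", None], "sb": ["pb", "pa", None]}  # None = unmatched
p_pref = {"pa": ["sb", "sa"], "pb": ["sa", "sb"]}

def feasible(Y, mu):
    # Every project hosts at most as many students as its allocated capacity;
    # mu names the project holding the single unit-capacity resource.
    return all(sum(Y[s] == p for s in students) <= (1 if mu == p else 0)
               for p in projects)

def stable(Y, mu):
    for s, p in product(students, projects):
        if s_pref[s].index(p) >= s_pref[s].index(Y[s]):
            continue  # s does not strictly prefer p to her assignment
        # Claiming pair: moving s to p is feasible under SOME allocation mu2.
        Y2 = dict(Y, **{s: p})
        if any(feasible(Y2, mu2) for mu2 in projects):
            return False
        # Envious pair: p hosts a student s2 whom it ranks below s.
        if any(Y[s2] == p and p_pref[p].index(s) < p_pref[p].index(s2)
               for s2 in students):
            return False
    return True

# Exhaustive search over all matchings and resource allocations.
stable_found = any(
    feasible(Y, mu) and stable(Y, mu)
    for assign in product(projects + [None], repeat=2)
    for mu in projects
    for Y in [dict(zip(students, assign))]
)
print(stable_found)  # → False: no feasible matching is nonwasteful and fair
```

Under these assumptions the search confirms the argument of the example: every feasible matching admits either a claiming pair or an envious pair.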
\section{Introduction and summary} \label{sec:intro} The tidal interaction between neutron stars in a close binary system has recently been the subject of intense investigation, following the observation \cite{flanagan-hinderer:08} that the tidal deformation of each body could have a measurable impact on the emitted gravitational waves. The effect depends on the tidal deformability of each neutron star, and a large effort has been deployed to the computation of this quantity for realistic models of neutron stars, and to ascertain the importance of the tidal deformation on the gravitational-wave signal \cite{hinderer-etal:10, baiotti-etal:11, vines-flanagan-hinderer:11, pannarale-etal:11, lackey-etal:12, damour-nagar-villain:12, read-etal:13, vines-flanagan:13, maselli-gualtieri-ferrari:13, lackey-etal:14, favata:14, yagi-yunes:14, wade-etal:14, bernuzzi-etal:15, lackey-etal:17, ferrari-gualtieri-maselli:12, maselli-etal:12, maselli-etal:13, hinderer-etal:16, steinhoff-etal:16, xu-lai:17, cullen-etal:17, harry-hinderer:18}. The recent observation of GW170817 by the LIGO and Virgo instruments \cite{GW170817:17}, with its first attempt to measure the tidal deformability of neutron stars, inaugurated a new era of gravitational-wave astronomy that is likely, in the fullness of time, to reveal some aspects of the nuclear-matter equation of state and of neutron-star internal structure. Detailed modeling of the tidal dynamics of compact objects in general relativity requires the precise specification of the tidal environment in which the compact object resides. In a context in which the tidal field is weak and varies slowly compared with the dynamical timescale of the compact body, the tidal environment can be described in terms of a number of tidal multipole moments. These come in two guises. The gravitoelectric moments $\E_{ab}$, $\E_{abc}$, $\E_{abcd}$, and so on, are produced by mass densities external to the compact body, and have direct analogues in Newtonian gravity.
The gravitomagnetic moments $\B_{ab}$, $\B_{abc}$, $\B_{abcd}$, and so on, are produced by external mass currents, and have no analogues in Newtonian gravity. A specification of the tidal environment amounts to a determination of the tidal moments in terms of the state of motion of the two-body system. This task is the central concern of this paper. A method to determine tidal moments, in a context in which the compact object is a member of a post-Newtonian binary system, was developed by Taylor and Poisson in Ref.~\cite{taylor-poisson:08}, the first paper in this sequence --- hereafter referred to as Paper I. The method, a generalization to compact objects of a previous implementation limited to weakly self-gravitating bodies \cite{damour-soffel-xu:91, damour-soffel-xu:92, damour-soffel-xu:93}, relies on the matching of two distinct metrics in an overlapping domain of validity. The situation is illustrated in Fig.~\ref{fig:domains}. \begin{figure} \includegraphics[width=0.6\linewidth]{fig1} \caption{A post-Newtonian system consisting of a black hole (left, black) and a normal star (right, yellow online). The post-Newtonian domain is pictured as an ellipse (blue online), and it excludes the fuzzy white region surrounding the black hole. The black-hole domain is pictured as the dark fuzzy region (red online), which extends all the way down to the black hole. The matching of the black-hole and post-Newtonian metrics is carried out in the overlap between the black-hole and post-Newtonian domains.} \label{fig:domains} \end{figure} The first metric is that of a tidally deformed compact object, presented as a perturbed version of the Schwarzschild metric, and expressed in terms of the tidal multipole moments. This metric is valid in a small domain that surrounds the compact body, and the tidal moments appear in it as freely specifiable functions of time. 
These cannot be determined by integrating the Einstein field equations in the small domain, because the companion body is excluded and the integration cannot incorporate its precise influence on the gravitational field of the compact object. The second metric is a post-Newtonian metric given in a large domain that includes both bodies, but leaves out a small region around the compact object, in which gravity is too strong to be adequately captured by a post-Newtonian approximation. In this way, the internal gravity of the compact object is allowed to be strong, while the mutual gravity between bodies is assumed to be weak. Because the domain excludes the compact body, the post-Newtonian metric also contains freely specifiable functions of time. There exists an overlap between the small domain of the first metric and the large domain of the second metric. Matching the metrics in this overlap determines the tidal moments and the missing details of the post-Newtonian metric. In this manner, the tidal environment of the compact body is determined, up to a desired number of moments, and up to a desired post-Newtonian order. In Paper I \cite{taylor-poisson:08} the method was exploited to calculate the leading order tidal moments, $\E_{ab}$ and $\B_{ab}$, through the first post-Newtonian ($1\pn$) approximation. In this paper we extend this calculation to the next two tidal moments, $\E_{abc}$ and $\E_{abcd}$, as well as $\B_{abc}$ and $\B_{abcd}$; this calculation also is carried out to $1\pn$ order. The multipole moments are defined precisely in Sec.~\ref{sec:moments}, and their scaling with the mass $M_2$ of the companion body, the interbody distance $b$, and the orbital velocity $V$ is also described in this section. For concreteness we choose the compact object to be a nonrotating black hole of mass $M_1$. 
The precise nature of the body, however, is of no consequence in the determination of the tidal moments at $1\pn$ order: the effects of spin enter at $1.5\pn$ order, and finite-size effects enter at $5\pn$ order. Our moments, therefore, determine the tidal environment of a neutron star just as well as that of a black hole, and these objects are allowed to rotate. The construction of the metric of a tidally deformed black hole begins in Sec.~\ref{sec:potentials} with the introduction of tidal potentials constructed from the tidal moments. The metric is next obtained in $(v,r,\theta,\phi)$ coordinates in Sec.~\ref{sec:metric1}, with $v$ denoting advanced time, and transformed to $(t,r,\theta,\phi)$ coordinates in Sec.~\ref{sec:metric2}. The goal is to obtain a metric accurate through fourth order in an expansion in powers of $r/b$, where $r$ is the distance to the black hole and $b$ is the interbody distance. This metric incorporates terms that involve the tidal quadrupole moments $\E_{ab}$ and $\B_{ab}$ and their time derivatives, the octupole moments $\E_{abc}$ and $\B_{abc}$ and their time derivatives, and the hexadecapole moments $\E_{abcd}$ and $\B_{abcd}$. The nonlinearity of the field equation implies that terms at order $(r/b)^4$ also include bilinear combinations of $\E_{ab}$ and $\B_{ab}$. To achieve all this we rely heavily on the formalism of Poisson and Vlasov \cite{poisson-vlasov:10}, which provides an essential foundation for this work. In fact, the metric of a tidally deformed black hole, accurate through order $(r/b)^4$, was already constructed by Poisson and Vlasov, and in principle, this metric could have been imported directly without having to perform the additional work described in Secs.~\ref{sec:metric1} and \ref{sec:metric2}. The need for this work comes from the fact that the Poisson-Vlasov metric is given in a form that does not facilitate a matching with the post-Newtonian metric. 
First, the metric is expressed in light-cone coordinates, and its post-Newtonian expansion does not reduce to the standard post-Newtonian form that is required for matching. Second, the Poisson-Vlasov metric is expressed in terms of tidal moments that were specifically defined to simplify the description of the deformed event horizon; these definitions make the post-Newtonian expression of the metric more complicated than it has to be. In our developments in Secs.~\ref{sec:metric1} and \ref{sec:metric2}, we endeavor to arrive at a form for the black-hole metric that will simplify, to the fullest extent possible, the task of matching it to a post-Newtonian metric. And we aim to achieve this not just at the $1\pn$ order of the calculations carried out in this paper, but also at higher post-Newtonian orders, in preparation for future work. In this regard, the most important property of the black-hole metric is that it becomes compatible with the standard form of the post-Newtonian metric after expansion through $1\pn$ order. This is achieved with two technical devices. First, the metric of the unperturbed black hole, given by the Schwarzschild solution, is presented in the harmonic radial coordinate $\bar{r}$, related to the usual areal radius $r$ by $\bar{r} = r - M$. This ensures that the unperturbed metric reduces to the standard post-Newtonian metric after expansion. Second, the tidal perturbation is presented in Regge-Wheeler gauge, which happily produces a perturbed metric that continues to respect the standard post-Newtonian form. Such simplicity could not be achieved with the Poisson-Vlasov metric. It also was not achieved in Paper I \cite{taylor-poisson:08}, with a perturbed metric presented initially in the light-cone gauge. Matching with the post-Newtonian metric required a transformation of the black-hole metric to harmonic coordinates, a technically demanding step that was (in retrospect) unnecessary. 
Our new construction allows us to avoid this step altogether, and provides a solid infrastructure that will facilitate future extensions of this work. As stated previously, the metric of a tidally deformed black hole can be expressed in terms of tidal multipole moments that appear as freely specifiable functions of time. The moments, however, are not uniquely defined, and they admit redefinitions that leave the form of the metric unchanged up to integration constants. It was this freedom that was exploited by Poisson and Vlasov \cite{poisson-vlasov:10} to simplify the description of the perturbed horizon. In this work we exploit it differently, to simplify the post-Newtonian expansion of the metric. The redefinition of tidal moments is explored in Sec.~\ref{sec:calibrations}, and sets of tidal moments that belong to different ``calibrations'' are related to one another. The task of matching the black-hole and post-Newtonian metrics begins in Sec.~\ref{sec:PN} with the post-Newtonian expansion of the black-hole metric and the extraction of the gravitational potentials (a Newtonian potential $U$, a vector potential $U_a$, and a post-Newtonian potential $\Psi$). The matching itself is carried out in Sec.~\ref{sec:matching}. The final products are the tidal moments expressed in terms of external potentials that represent the gravitational field of the companion body. These are evaluated concretely in Sec.~\ref{sec:2body}, and the tidal moments are finally obtained in terms of the orbital degrees of freedom of the two-body system. At this stage our task is complete: the quadrupole, octupole, and hexadecapole tidal moments are all determined through the first post-Newtonian order. We note that the octupole tidal moments were previously given in Ref.~\cite{johnsonmcdaniel-etal:09} for the specific case of circular orbits; our expressions agree with theirs. The hexadecapole moments are presented here for the first time. 
As an application of our results, in Sec.~\ref{sec:horizon} we examine the intrinsic geometry of the tidally deformed event horizon. To describe the deformation it is useful to introduce a fictitious two-dimensional surface embedded in a three-dimensional flat space, to describe this surface by the equation $r = 2M_1(1 + \varepsilon)$, and to choose the displacement function $\varepsilon$ in such a way that the embedded surface possesses the same intrinsic geometry as a two-dimensional cross-section of the event horizon. In relativistic units in which $G = c =1$, used throughout the paper, we find that $\varepsilon$ is given by \begin{align} \varepsilon &= \frac{1}{2} q_1^2 q_2 \biggl(\frac{M}{b}\biggr)^3 \Biggl\{ -\biggl[ 1 + \frac{1}{2} q_1 V^2 \biggr] (3\cos^2\theta-1) + 3 \biggl[ 1 + \frac{1}{2} (q_1 - 4)V^2 \biggr] \sin^2\theta \cos(2\psi) \nonumber \\ & \quad \mbox{} - \frac{3}{5} q_1 V^2 \sin\theta(5\cos^2\theta-1) \cos(\psi) + q_1 V^2 \sin^3\theta\cos(3\psi) + 1.5\pn \Biggr\}, \end{align} where $M := M_1 + M_2$, $q_1 := M_1/M$, $q_2 := M_2/M$, $V = (M/b)^{1/2}$ is the orbital velocity, and $(\theta,\phi)$ are polar angles that specify a position on the embedded surface, with the polar axis taken to be normal to the orbital plane. The phase function is defined by $\psi := \phi - \bar{\omega} v$, with $v$ representing advanced time on the horizon, and \begin{equation} \bar{\omega} = \sqrt{\frac{M}{b^3}} \biggl[ 1 - \frac{1}{2}(3+q_1 q_2) V^2 + 2\pn \biggr] \end{equation} is the angular frequency of the tidal field. The displacement function can be decomposed into a quadrupole term that is independent of the phase $\psi$, and another quadrupole term that oscillates at twice the frequency $\bar{\omega}$; both terms originate from $\E_{ab}$.
The remaining terms at $1\pn$ order consist of octupole deformations that oscillate at one and three times the tidal frequency; both terms originate from $\E_{abc}$, and they were omitted in the earlier statement of this result given in Paper I. The hexadecapole tidal moment $\E_{abcd}$ makes no appearance in $\varepsilon$ at $1\pn$ order, and the gravitomagnetic moments are excluded on the grounds that they have the wrong parity to contribute to a scalar quantity such as $\varepsilon$. The tidal displacement represents a bulge aligned with $\phi = \bar{\omega} v$. Some of the calculations presented in the main text rely on tables and technical developments relegated to three appendices. In Appendix~\ref{sec:irreducible} we provide decompositions of Cartesian tensors into irreducible pieces, and in Appendix~\ref{sec:distance} we record various derivatives of a distance function that appears in gravitational potentials. In Appendix~\ref{sec:tables} we display a large number of tables that contain definitions of various quantities used in our calculations. \section{Tidal moments and scales} \label{sec:moments} We consider a nonrotating black hole of mass $M_1$ immersed in a tidal environment described by the gravitoelectric tidal moments $\E_{ab}$, $\E_{abc}$, $\E_{abcd}$, and the gravitomagnetic tidal moments $\B_{ab}$, $\B_{abc}$, $\B_{abcd}$; all tidal moments are defined as three-dimensional Cartesian tensors, and are functions of time only. The gravitational field of the tidally deformed black hole is described by a metric $g_{\alpha\beta}$ presented as an expansion in powers of $r/b \ll 1$, where $r$ is the distance to the black hole and $b$ is the characteristic length scale of the tidal field --- the typical distance to the remote bodies responsible for the tidal environment.
The tidal moments determine the behavior of the metric when $r \gg M_1$, and their time dependence is arbitrary; it cannot be determined by integrating the field equations locally, in a neighborhood of the black hole limited to $r \ll b$. To determine the tidal moments, the local metric $g_{\alpha\beta}$ must be matched to a global metric that extends beyond $r = b$ and includes the remote bodies. The tidal moments can be loosely interpreted as describing the Weyl curvature of the tidally deformed black hole in a distance interval $M_1 \ll r \ll b$, in which the tidal field dominates over the hole's own gravity. Formally, however, the tidal moments are defined in terms of the Weyl tensor of a different, but related, spacetime. The metric of this spacetime is the limit of $g_{\alpha\beta}$ when $M_1$ is taken to zero while keeping the tidal moments fixed. The Weyl tensor and its covariant derivatives are then computed for this metric, and their components in local Lorentzian coordinates $(x^0, x^a)$ are evaluated at $r = 0$, now describing a regular world line of the spacetime. Additional details regarding the definition of the tidal moments are provided in Sec.~II of Ref.~\cite{poisson-vlasov:10}. The tidal quadrupole moments are defined by \begin{subequations} \label{tidalmoment_2} \begin{align} \E_{ab} &:= \bigl( C_{0a0b} \bigr)^{\rm STF}, \\ \B_{ab} &:= \frac{1}{2} \bigl( \epsilon_{apq} C^{pq}_{\ \ b0} \bigr)^{\rm STF} \end{align} \end{subequations} in terms of the Weyl tensor $C_{\alpha\beta\gamma\delta}$ of the related spacetime. Here $\epsilon_{abc}$ is the permutation symbol, and the STF sign instructs us to symmetrize all free indices and remove all traces. This operation is also indicated in an angular-bracket notation, such as $A_\stf{abcd} := (A_{abcd})^{\rm STF}$. The tidal moments are functions of coordinate time $x^0$ (either $t$ or advanced-time $v$), and we use overdots to indicate differentiation with respect to this coordinate. 
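For a two-index tensor the STF operation amounts to symmetrizing and subtracting the trace; the following minimal sketch (with made-up entries, purely to illustrate the notation) implements $A_{\stf{ab}}$:

```python
def stf(A):
    """Symmetric-tracefree projection of a 3x3 Cartesian tensor, A -> A_<ab>."""
    S = [[0.5*(A[a][b] + A[b][a]) for b in range(3)] for a in range(3)]
    tr = S[0][0] + S[1][1] + S[2][2]
    return [[S[a][b] - (tr/3.0 if a == b else 0.0) for b in range(3)]
            for a in range(3)]

A = [[1.0, 2.0, 0.0],
     [4.0, 5.0, 1.0],
     [0.0, 3.0, 6.0]]
E = stf(A)
# E is symmetric and tracefree, as required of a tidal moment such as E_ab.
```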
The tidal octupole moments are defined by \begin{subequations} \label{tidalmoment_3} \begin{align} \E_{abc} &:= \bigl( C_{0a0b|c} \bigr)^{\rm STF}, \\ \B_{abc} &:= \frac{3}{8} \bigl( \epsilon_{apq} C^{pq}_{\ \ b0|c} \bigr)^{\rm STF}, \end{align} \end{subequations} in which a vertical stroke indicates covariant differentiation. These also are functions of coordinate time, and time derivatives are again indicated with overdots. The tidal hexadecapole moments are defined by \begin{subequations} \label{tidalmoment_4} \begin{align} \E_{abcd} &:= \frac{1}{2} \bigl( C_{0a0b|cd} \bigr)^{\rm STF}, \\ \B_{abcd} &:= \frac{3}{20} \bigl( \epsilon_{apq} C^{pq}_{\ \ b0|cd} \bigr)^{\rm STF}. \end{align} \end{subequations} The numerical factors inserted in these equations are inherited from Zhang's choice of normalization for the tidal moments \cite{zhang:86}. The tidal environment is characterized by an external mass scale $M_2$, a distance scale $b$, a velocity scale $V \sim \sqrt{M/b}$, where $M := M_1 + M_2$, and an angular-velocity scale $\omega \sim V/b$. The gravitoelectric moments scale as \begin{equation} \E_{ab} \sim \frac{M_2}{b^3}, \qquad \E_{abc} \sim \frac{M_2}{b^4}, \qquad \E_{abcd} \sim \frac{M_2}{b^5}, \end{equation} and the gravitomagnetic moments scale as \begin{equation} \B_{ab} \sim \frac{M_2 V}{b^3}, \qquad \B_{abc} \sim \frac{M_2 V}{b^4}, \qquad \B_{abcd} \sim \frac{M_2 V}{b^5}. \end{equation} Time derivatives of the gravitoelectric tidal moments scale as \begin{equation} \dot{\E}_{ab} \sim \frac{M_2 V}{b^4}, \qquad \ddot{\E}_{ab} \sim \frac{M_2 V^2}{b^5}, \qquad \dot{\E}_{abc} \sim \frac{M_2 V}{b^5}, \end{equation} while the derivatives of the gravitomagnetic moments scale as \begin{equation} \dot{\B}_{ab} \sim \frac{M_2 V^2}{b^4}, \qquad \ddot{\B}_{ab} \sim \frac{M_2 V^3}{b^5}, \qquad \dot{\B}_{abc} \sim \frac{M_2 V^2}{b^5}. \end{equation} We assume that $b \gg M_1$, so that the source of the tidal field is situated far from the black hole. 
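The hierarchy of scales just described can be made concrete with illustrative numbers (an assumed equal-mass system in geometrized units; a sketch, not part of the calculation): each overdot costs a factor of $\omega \sim V/b$, each additional multipole index a factor of $1/b$, and every gravitomagnetic moment carries an extra factor of $V$.

```python
import math

# Illustrative numbers: equal masses, wide separation with b >> M1.
M1, M2 = 1.0, 1.0
b = 100.0
M = M1 + M2
V = math.sqrt(M/b)        # orbital velocity scale
omega = V/b               # tidal angular-velocity scale

E2, E3, E4 = M2/b**3, M2/b**4, M2/b**5     # gravitoelectric scales
B2 = M2*V/b**3                             # gravitomagnetic scale, V-suppressed
E2dot, E2ddot = M2*V/b**4, M2*V**2/b**5    # each overdot costs a factor omega
```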
This implies that $V^2 \ll (1+M_2/M_1)$, so that the orbital velocity is much smaller than the speed of light. It also follows that \begin{equation} M_1 \omega \sim (1 + M_2/M_1)^{1/2} (M_1/b)^{3/2} \ll 1, \end{equation} so that the time scale $\omega^{-1}$ associated with variations of the tidal field is much longer than $M_1$, the time scale associated with the black hole. The tidal field is thus assumed to be weak and to vary slowly. The metric of a tidally deformed black hole shall be presented as an expansion in powers of $r/b$, assuming that $r$, the distance to the black hole, is much smaller than $b$, the distance scale of the tidal field. The metric shall be expanded through order $(r/b)^4$, and expressed in terms of tidal potentials constructed from the tidal moments. \section{Tidal potentials} \label{sec:potentials} The metric of a tidally deformed black hole can be expressed in terms of a number of tidal potentials that are constructed from the tidal moments. Details are provided in Sec.~II of Ref.~\cite{poisson-vlasov:10}, and we summarize the main points here. The tidal moments are combined with $\Omega^a := x^a/r$, with $r$ denoting the usual Euclidean distance, so as to form scalar and vector potentials that form an irreducible representation of the rotation group labeled by multipole order $\ell$. (Tensor potentials are not required, by virtue of the gauge choice adopted in Secs.~\ref{sec:metric1} and \ref{sec:metric2}.) Each vector potential is required to be purely transverse, in the sense of being orthogonal to $\Omega^a$. The required potentials are displayed in Table~\ref{tab:potentials}. A transformation from Cartesian coordinates $x^a$ to spherical coordinates $(r,\theta,\phi)$ is effected by $x^a = r \Omega^a(\theta^A)$, in which $\Omega^a$ is now parameterized by two polar angles $\theta^A = (\theta,\phi)$. Explicitly, we have that $\Omega^a = [\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta]$. 
The Jacobian matrix is given by \begin{equation} \frac{\partial x^a}{\partial r} = \Omega^a, \qquad \frac{\partial x^a}{\partial \theta^A} = r \Omega^a_A, \label{Jacob} \end{equation} with $\Omega^a_A := \partial \Omega^a/\partial \theta^A$. We have that $\Omega_{AB} := \delta_{ab} \Omega^a_A \Omega^b_B = \mbox{diag}[1,\sin^2\theta]$ is the metric on the unit two-sphere, and $\Omega^{AB}$ is its inverse. The inverse of the Jacobian matrix is \begin{equation} \frac{\partial r}{\partial x^a} = \Omega_a, \qquad \frac{\partial \theta^A}{\partial x^a} = \frac{1}{r} \Omega^A_a, \label{Jacob_inverse} \end{equation} where $\Omega^A_a := \Omega^{AB} \delta_{ab} \Omega^b_B$. We introduce $D_A$ as the covariant-derivative operator compatible with $\Omega_{AB}$, and $\epsilon_{AB}$ as the Levi-Civita tensor on the unit two-sphere (with nonvanishing components $\epsilon_{\theta\phi} = -\epsilon_{\phi\theta} = \sin\theta$). We adopt the convention that uppercase Latin indices are raised and lowered with $\Omega^{AB}$ and $\Omega_{AB}$, respectively. Finally, we note that $D_C \Omega_{AB} = D_C \epsilon_{AB} = 0$. We convert the vector potentials from their initial Cartesian forms to angular-coordinate versions by making use of the matrix $\Omega^a_A$. We thus define \begin{equation} \BB{q}_A := \BB{q}_a \Omega^a_A, \label{conversion} \end{equation} and apply the same rule to all other vector potentials. After this conversion the tidal potentials become scalar and vector fields on the unit two-sphere, and they become independent of $r$. It is easy to show that the conversion is reversed with $\BB{q}_a = \BB{q}_A \Omega^A_a$. The tidal potentials can all be expressed in terms of spherical harmonics. Let $Y^{\ell m}$ be real-valued spherical-harmonic functions (as defined in Table~\ref{tab:Ylm}). 
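The relations of Eqs.~(\ref{Jacob}) are easy to verify numerically; the sketch below (with assumed sample angles) builds $\Omega^a_A$ by central differences and checks that $\Omega_{AB} = \mbox{diag}[1,\sin^2\theta]$ and that $\Omega^a$ is a unit vector.

```python
import math

def Omega_vec(theta, phi):
    return [math.sin(theta)*math.cos(phi),
            math.sin(theta)*math.sin(phi),
            math.cos(theta)]

def Omega_A(theta, phi, h=1.0e-6):
    """The vectors Omega^a_theta and Omega^a_phi by central differences."""
    d_th = [(p - m)/(2*h) for p, m in zip(Omega_vec(theta + h, phi),
                                          Omega_vec(theta - h, phi))]
    d_ph = [(p - m)/(2*h) for p, m in zip(Omega_vec(theta, phi + h),
                                          Omega_vec(theta, phi - h))]
    return d_th, d_ph

dot = lambda x, y: sum(u*v for u, v in zip(x, y))
theta, phi = 0.7, 1.2
d_th, d_ph = Omega_A(theta, phi)

# Induced metric on the unit two-sphere and the unit-norm check:
g_thth = dot(d_th, d_th)
g_phph = dot(d_ph, d_ph)
g_thph = dot(d_th, d_ph)
norm = dot(Omega_vec(theta, phi), Omega_vec(theta, phi))
sin2 = math.sin(theta)**2
```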
The relevant vectorial and tensorial harmonics of even parity are \begin{subequations} \label{Ylm_even} \begin{align} & Y^{\ell m}_A := D_A Y^{\ell m}, \\ & Y^{\ell m}_{AB} := \Bigl[ D_A D_B + \frac{1}{2} \ell(\ell+1) \Omega_{AB} \Bigr] Y^{\ell m}; \end{align} \end{subequations} notice that $Y^{\ell m}_{AB}$ is tracefree, $\Omega^{AB} Y^{\ell m}_{AB}= 0$, by virtue of the eigenvalue equation satisfied by the spherical harmonics. The vectorial and tensorial harmonics of odd parity are \begin{subequations} \label{Ylm_odd} \begin{align} & X^{\ell m}_A := -\epsilon_A^{\ B} D_B Y^{\ell m}, \\ & X^{\ell m}_{AB} := -\frac{1}{2} \Bigl( \epsilon_A^{\ C} D_B + \epsilon_B^{\ C} D_A \Bigr) D_C Y^{\ell m}; \end{align} \end{subequations} the tensorial harmonics $X^{\ell m}_{AB}$ also are tracefree: $\Omega^{AB} X^{\ell m}_{AB} = 0$. The decomposition of the tidal potentials in spherical harmonics is presented in Tables~\ref{tab:E_ang}, \ref{tab:B_ang}, \ref{tab:EE_ang}, \ref{tab:BB_ang}, \ref{tab:EBeven_ang}, and \ref{tab:EBodd_ang}. \section{Metric in $(v,r,\theta,\phi)$ coordinates} \label{sec:metric1} In this section and the next we construct the metric of a tidally deformed black hole, expanded through order $(r/b)^4$. In these sections and the ones that follow, we denote the mass of the black hole by $M$ instead of $M_1$. The original practice will be resumed in Sec.~\ref{sec:2body}. In the standard $(v,r,\theta^A)$ coordinates, the metric of an unperturbed black hole is given by the Schwarzschild solution, which has the nonvanishing components \begin{equation} g_{vv} = -f, \qquad g_{vr} = 1, \qquad g_{AB} = r^2 \Omega_{AB}, \label{Schw_vr} \end{equation} where $f := 1-2M/r$. The tidal perturbation introduces additional terms in the metric. These are constructed by inserting into the metric a tidal potential such as $\EE{q}(v,\theta^A)$ multiplied by a radial function such as $\ee{q}{vv}(r)$.
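The tracefree property of $Y^{\ell m}_{AB}$ quoted above rests on the eigenvalue equation $\Omega^{AB} D_A D_B Y^{\ell m} = -\ell(\ell+1)\, Y^{\ell m}$, which can be spot-checked by finite differences; the sketch below does so for the axisymmetric $\ell = 2$ harmonic (the normalization is irrelevant here, and the sample angles are assumptions).

```python
import math

ell = 2
Y20 = lambda th, ph: 3.0*math.cos(th)**2 - 1.0   # Y^{2,0} up to normalization

def sphere_laplacian(f, th, ph, h=1.0e-5):
    """Omega^{AB} D_A D_B f = (1/sin)d_th(sin d_th f) + (1/sin^2)d_ph^2 f."""
    d_th = lambda t: (f(t + h, ph) - f(t - h, ph))/(2*h)
    term_th = (math.sin(th + h)*d_th(th + h)
               - math.sin(th - h)*d_th(th - h))/(2*h*math.sin(th))
    term_ph = (f(th, ph + h) - 2*f(th, ph) + f(th, ph - h))/(h**2*math.sin(th)**2)
    return term_th + term_ph

th, ph = 0.9, 0.4
lap = sphere_laplacian(Y20, th, ph)
expected = -ell*(ell + 1)*Y20(th, ph)
```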
All potentials listed in Table~\ref{tab:potentials} participate, along with their derivatives with respect to $v$. Including all such combinations produces a complete metric ansatz that can then be inserted into the Einstein field equations to determine the radial functions. The perturbation is presented in the Regge-Wheeler gauge (see, for example, Ref.~\cite{martel-poisson:05} for definitions and a review of black-hole perturbation theory), which requires even-parity terms to be confined to the $vv$, $vr$, $rr$, and $AB$ components of the metric, and odd-parity terms to be confined to the $vA$ and $rA$ components of the metric. In addition, the Regge-Wheeler gauge requires $g_{AB}$ to be proportional to $\Omega_{AB}$. The gauge is uniquely defined for $\ell \geq 2$, but it is not defined when $\ell = 0$ or $\ell = 1$. The choices made in the monopole and dipole cases will be described below. The metric of a tidally deformed, nonrotating black hole takes the form \begin{subequations} \label{blackhole_metric_vr} \begin{align} g_{vv} &= -f + r^2 \ee{q}{vv}\, \EE{q} + r^3 \eedot{q}{vv}\, \EEd{q} + r^4 \eeddot{q}{vv}\, \EEdd{q} + r^3 \ee{o}{vv}\, \EE{o} + r^4 \eedot{o}{vv}\, \EEd{o} + r^4 \ee{h}{vv}\, \EE{h} + r^4 \bigl( \pp{m}{vv}\, \PP{m} + \qq{m}{vv}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{vv}\, \PP{q} + \qq{q}{vv}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{vv}\, \PP{h} + \qq{h}{vv}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{vv}\, \GG{d} + \g{o}{vv}\, \GG{o} \bigr) + O(r^5), \\ g_{vr} &= 1 + r^2 \ee{q}{vr}\, \EE{q} + r^3 \eedot{q}{vr}\, \EEd{q} + r^4 \eeddot{q}{vr}\, \EEdd{q} + r^3 \ee{o}{vr}\, \EE{o} + r^4 \eedot{o}{vr}\, \EEd{o} + r^4 \ee{h}{vr}\, \EE{h} + r^4 \bigl( \pp{m}{vr}\, \PP{m} + \qq{m}{vr}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{vr}\, \PP{q} + \qq{q}{vr}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{vr}\, \PP{h} + \qq{h}{vr}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{vr}\, \GG{d} + \g{o}{vr}\, \GG{o} \bigr) + O(r^5), \\ g_{rr} &=
r^2 \ee{q}{rr}\, \EE{q} + r^3 \eedot{q}{rr}\, \EEd{q} + r^4 \eeddot{q}{rr}\, \EEdd{q} + r^3 \ee{o}{rr}\, \EE{o} + r^4 \eedot{o}{rr}\, \EEd{o} + r^4 \ee{h}{rr}\, \EE{h} + r^4 \bigl( \pp{m}{rr}\, \PP{m} + \qq{m}{rr}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{rr}\, \PP{q} + \qq{q}{rr}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{rr}\, \PP{h} + \qq{h}{rr}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{rr}\, \GG{d} + \g{o}{rr}\, \GG{o} \bigr) + O(r^5), \\ g_{vA} &= r^3 \bb{q}{v}\, \BB{q}_A + r^4 \bbdot{q}{v}\, \BBd{q}_A + r^5 \bbddot{q}{v}\, \BBdd{q}_A + r^4 \bb{o}{v}\, \BB{o}_A + r^5 \bbdot{o}{v}\, \BBd{o}_A + r^5 \bb{h}{v}\, \BB{h}_A + r^5 \bigl( \hh{q}{v}\, \HH{q}_A + \hh{h}{v}\, \HH{h}_A \bigr) + O(r^6), \\ g_{rA} &= r^3 \bb{q}{r}\, \BB{q}_A + r^4 \bbdot{q}{r}\, \BBd{q}_A + r^5 \bbddot{q}{r}\, \BBdd{q}_A + r^4 \bb{o}{r}\, \BB{o}_A + r^5 \bbdot{o}{r}\, \BBd{o}_A + r^5 \bb{h}{r}\, \BB{h}_A + r^5 \bigl( \hh{q}{r}\, \HH{q}_A + \hh{h}{r}\, \HH{h}_A \bigr) + O(r^6), \\ g_{AB} &= r^2 \Omega_{AB} \Bigl[ 1 + r^2 \ee{q}{}\, \EE{q} + r^3 \eedot{q}{}\, \EEd{q} + r^4 \eeddot{q}{}\, \EEdd{q} + r^3 \ee{o}{}\, \EE{o} + r^4 \eedot{o}{}\, \EEd{o} + r^4 \ee{h}{}\, \EE{h} + r^4 \bigl( \pp{m}{}\, \PP{m} + \qq{m}{}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{}\, \PP{q} + \qq{q}{}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{}\, \PP{h} + \qq{h}{}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{}\, \GG{d} + \g{o}{}\, \GG{o} \bigr) + O(r^5) \Bigr], \end{align} \end{subequations} where, for example, $\EE{q}$ is the tidal potential introduced in Table~\ref{tab:E_ang}, and $\ee{q}{vv}$, $\ee{q}{vr}$, $\ee{q}{rr}$, and $\ee{q}{}$ are the radial functions that come with it. An overdot on a tidal potential indicates differentiation with respect to $v$ --- the time dependence is contained in the tidal moments --- and the radial functions attached to time-differentiated potentials are also adorned with overdots (though these functions are not differentiated with respect to $v$). 
The radial functions are determined by integrating the Einstein field equations in vacuum. The solutions are presented in Tables \ref{tab:e_functions_vr}, \ref{tab:b_functions_vr}, \ref{tab:pq_functions_vr}, and \ref{tab:gh_functions_vr}. Integration for $\ell = 0$ and $\ell = 1$ requires some gauge choices. For $\ell = 0$ we let $\pp{m}{vv}$ and $\pp{m}{}$ be the two independent functions, and we set $\pp{m}{vr} = -\pp{m}{vv}/f$ and $\pp{m}{rr} = 2\pp{m}{vv}/f^2$, a choice inspired by the structure of the solutions for $\ell \geq 2$. For $\ell = 1$ we set $\g{d}{}=0$, and find that $\g{d}{vv}=0$ as a consequence of the field equations. The radial functions depend on a number of integration constants, which are denoted with a sans-serif symbol such as $\ce{q}{1}$. (Other constants of integration are fixed by demanding that the metric be well behaved at $r=2M$. An exception to this rule concerns the terms proportional to $\cp{m}$ and $\cq{m}$; this point is discussed below.) These constants correspond to the freedom to redefine the tidal moments according to \begin{subequations} \label{moment-transf1} \begin{align} \E_{ab} &\to \E_{ab} + \ce{q}{1}\, M \dot{\E}_{ab} + \ce{q}{2}\, M^2 \ddot{\E}_{ab} + \cp{q}\, M^2 \E_{p\langle a} \E^p_{\ b\rangle} + \cq{q}\, M^2 \B_{p\langle a} \B^p_{\ b\rangle}, \\ \B_{ab} &\to \B_{ab} + \cb{q}{1}\, M \dot{\B}_{ab} + \cb{q}{2}\, M^2 \ddot{\B}_{ab} + \ch{q}\, M^2 \E_{p\langle a} \B^p_{\ b\rangle}, \\ \E_{abc} &\to \E_{abc} + \ce{o}{1}\, M \dot{\E}_{abc} + \cg{o}\, M \epsilon_{pq\langle a} \E^p_{\ b} \B^q_{\ c\rangle}, \\ \B_{abc} &\to \B_{abc} + \cb{o}{1}\, M \dot{\B}_{abc} \end{align} \end{subequations} and \begin{subequations} \label{moment-transf2} \begin{align} 2 \E_{abcd} &\to 2 \E_{abcd} + \cp{h}\, \E_{\langle ab} \E_{cd \rangle} + \cq{h}\, \B_{\langle ab} \B_{cd \rangle}, \\ \frac{10}{3} \B_{abcd} &\to \frac{10}{3} \B_{abcd} + \ch{h}\, \E_{\langle ab} \B_{cd \rangle}, \end{align} \end{subequations} as well as the freedom to
redefine the mass parameter according to \begin{equation} M \to M + \cp{m} M^5 \E_{pq} \E^{pq} + \cq{m} M^5 \B_{pq} \B^{pq}. \label{mass_transf} \end{equation} More precisely stated, the radial functions presented in the Tables can be obtained from bare radial functions --- the same functions with all integration constants set to zero --- by applying the redefinitions of Eqs.~(\ref{moment-transf1}), (\ref{moment-transf2}), and (\ref{mass_transf}). The freedom described by Eqs.~(\ref{moment-transf1}) can be used to calibrate the tidal moments in a number of ways. For example, in Ref.~\cite{poisson-vlasov:10} the freedom to redefine the tidal moments was exploited to ensure that in the light-cone gauge employed there, the event horizon of the deformed black hole continues to be situated at $r=2M$. It was also exploited to ensure that the intrinsic geometry of the deformed horizon is independent of all $v$-derivatives of the tidal moments. In this ``horizon calibration'', the integration constants are fixed to \begin{subequations} \label{Hcal1} \begin{align} & \ce{q}{1} = -\frac{92}{15}, \qquad \ce{q}{2} = \frac{5569}{225}, \qquad \ce{o}{1} = -\frac{188}{21}, \qquad \cb{q}{1} = -\frac{76}{15}, \qquad \cb{q}{2} = \frac{18553}{1050}, \qquad \cb{o}{1} = -\frac{919}{105}, \\ & \cp{q} = -\frac{2}{7}, \qquad \cq{q} = \frac{18}{7}, \qquad \cg{o} = -10, \qquad \ch{q} = -\frac{44}{7}. \end{align} \end{subequations} The redefinitions of Eqs.~(\ref{moment-transf2}) do not involve the black-hole mass $M$, and the freedom contained in these equations must be exploited to ensure that the tidal moments $\E_{abcd}$ and $\B_{abcd}$ that appear in the metric are those that are actually defined by Eqs.~(\ref{tidalmoment_4}). This is achieved with \begin{equation} \cp{h} = \frac{355}{3}, \qquad \cq{h} = -\frac{5}{3}, \qquad \ch{h} = -10. 
\label{Hcal2} \end{equation} Finally, the freedom described by Eq.~(\ref{mass_transf}) was also used by Poisson and Vlasov \cite{poisson-vlasov:10} to simplify the description of the event horizon. In this ``horizon calibration'' of the mass parameter, \begin{equation} \cp{m} = 0, \qquad \cq{m} = 0. \label{Hcal3} \end{equation} The results of Eqs.~(\ref{Hcal1}), (\ref{Hcal2}), and (\ref{Hcal3}) can all be established by showing that the perturbation presented here in Regge-Wheeler gauge can be obtained from the light-cone gauge of Poisson and Vlasov by a gauge transformation; this requires these specific values for the integration constants. The ``horizon calibration'' is convenient for the purposes of examining the intrinsic geometry of the deformed horizon, but it may not be convenient for other purposes. An alternative calibration, designed to simplify the form of the radial functions, would be obtained by setting all integration constants (except for $\cp{h}$, $\cq{h}$, and $\ch{h}$) to zero. Yet another choice is the ``post-Newtonian calibration'' to be introduced in Sec.~\ref{sec:metric2}. Some of the radial functions listed in the Tables have factors of $f := 1-2M/r$ appearing in denominators, and this might suggest that these functions are not regular at $r=2M$. Closer scrutiny, however, reveals that the functions are in fact regular. It can indeed be shown that $\eeddot{q}{vr} = O(f)$, $\eeddot{q}{rr} = O(1)$, and $\bbddot{q}{r} = O(1)$ when $r \to 2M$. The functions $\pp{m}{vr}$, $\pp{m}{rr}$, $\qq{m}{vr}$, and $\qq{m}{rr}$, however, are genuinely singular at $r=2M$. The singular terms in the metric come with the integration constants $\cp{m}$ and $\cq{m}$, which represent the shift in mass parameter described by Eq.~(\ref{mass_transf}). These contributions to the monopole perturbation can easily be recast in a nonsingular form with a gauge transformation. 
But the (singular) gauge adopted here emerges as a natural choice when the metric is expressed in the $(t,r,\theta^A)$ coordinates of Sec.~\ref{sec:metric2}. In any event, the choices of Eq.~(\ref{Hcal3}) ensure that the singular terms are eliminated from the metric. \section{Metric in $(t,r,\theta,\phi)$ coordinates} \label{sec:metric2} The metric in $(t,r,\theta^A)$ coordinates can be obtained from the metric of Eq.~(\ref{blackhole_metric_vr}) by performing the coordinate transformation \begin{equation} v = t + r \Delta, \qquad \Delta := 1 + \frac{2M}{r} \ln\biggl( \frac{r}{2M} - 1 \biggr), \end{equation} which implies that $dv = dt + f^{-1}\, dr$. In Sec.~\ref{sec:metric1}, a tidal potential such as $\EE{q}$ was considered to be a function of $v$, and it must now be re-expressed in terms of $t$. Given our assumption that the tidal moments vary slowly, this can be done with the help of a Taylor expansion, \begin{equation} \EE{q}(v) = \EE{q}(t) + r \Delta \EEd{q}(t) + \frac{1}{2} r^2 \Delta^2 \EEdd{q}(t) + O(r^3). 
\end{equation} After making such substitutions and performing the coordinate transformation, we find that the metric of a tidally deformed, nonrotating black hole can also be expressed as \begin{subequations} \label{blackhole_metric_tr} \begin{align} g_{tt} &= -f + r^2 \ee{q}{tt}\, \EE{q} + r^3 \eedot{q}{tt}\, \EEd{q} + r^4 \eeddot{q}{tt}\, \EEdd{q} + r^3 \ee{o}{tt}\, \EE{o} + r^4 \eedot{o}{tt}\, \EEd{o} + r^4 \ee{h}{tt}\, \EE{h} + r^4 \bigl( \pp{m}{tt}\, \PP{m} + \qq{m}{tt}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{tt}\, \PP{q} + \qq{q}{tt}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{tt}\, \PP{h} + \qq{h}{tt}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{tt}\, \GG{d} + \g{o}{tt}\, \GG{o} \bigr) + O(r^5), \\ g_{tr} &= r^3 \eedot{q}{tr}\, \EEd{q} + r^4 \eeddot{q}{tr}\, \EEdd{q} + r^4 \eedot{o}{tr}\, \EEd{o} + r^4 \bigl( \g{d}{tr}\, \GG{d} + \g{o}{tr}\, \GG{o} \bigr) + O(r^5), \\ g_{rr} &= f^{-1} + r^2 \ee{q}{rr}\, \EE{q} + r^3 \eedot{q}{rr}\, \EEd{q} + r^4 \eeddot{q}{rr}\, \EEdd{q} + r^3 \ee{o}{rr}\, \EE{o} + r^4 \eedot{o}{rr}\, \EEd{o} + r^4 \ee{h}{rr}\, \EE{h} + r^4 \bigl( \pp{m}{rr}\, \PP{m} + \qq{m}{rr}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{rr}\, \PP{q} + \qq{q}{rr}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{rr}\, \PP{h} + \qq{h}{rr}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{rr}\, \GG{d} + \g{o}{rr}\, \GG{o} \bigr) + O(r^5), \\ g_{tA} &= r^3 \bb{q}{t}\, \BB{q}_A + r^4 \bbdot{q}{t}\, \BBd{q}_A + r^5 \bbddot{q}{t}\, \BBdd{q}_A + r^4 \bb{o}{t}\, \BB{o}_A + r^5 \bbdot{o}{t}\, \BBd{o}_A + r^5 \bb{h}{t}\, \BB{h}_A + r^5 \bigl( \hh{q}{t}\, \HH{q}_A + \hh{h}{t}\, \HH{h}_A \bigr) + O(r^6), \\ g_{rA} &= r^4 \bbdot{q}{r}\, \BBd{q}_A + r^5 \bbddot{q}{r}\, \BBdd{q}_A + r^5 \bbdot{o}{r}\, \BBd{o}_A + r^5 \bigl( \hh{q}{r}\, \HH{q}_A + \hh{h}{r}\, \HH{h}_A \bigr) + O(r^6), \\ g_{AB} &= r^2 \Omega_{AB} \Bigl[ 1 + r^2 \ee{q}{}\, \EE{q} + r^3 \eedot{q}{}\, \EEd{q} + r^4 \eeddot{q}{}\, \EEdd{q} + r^3 \ee{o}{}\, \EE{o} + r^4 \eedot{o}{}\, \EEd{o} + r^4 \ee{h}{}\, 
\EE{h} + r^4 \bigl( \pp{m}{}\, \PP{m} + \qq{m}{}\, \QQ{m} \bigr) \nonumber \\ & \quad \mbox{} + r^4 \bigl( \pp{q}{}\, \PP{q} + \qq{q}{}\, \QQ{q} \bigr) + r^4 \bigl( \pp{h}{}\, \PP{h} + \qq{h}{}\, \QQ{h} \bigr) + r^4 \bigl( \g{d}{}\, \GG{d} + \g{o}{}\, \GG{o} \bigr) + O(r^5) \Bigr], \end{align} \end{subequations} where all tidal potentials are now given as functions of $t$. The new radial functions are listed in Tables \ref{tab:e_functions_tr}, \ref{tab:b_functions_tr}, \ref{tab:pq_functions_tr}, and \ref{tab:gh_functions_tr}. The freedom to redefine the tidal moments was introduced in Eqs.~(\ref{moment-transf1}), (\ref{moment-transf2}), and (\ref{mass_transf}), and the ``horizon calibration'' of these moments was introduced in Eqs.~(\ref{Hcal1}) and (\ref{Hcal3}). For our purposes in Sec.~\ref{sec:PN}, it is convenient to adopt an alternative ``post-Newtonian calibration'' that ensures that all radial functions begin an expansion in powers of $M/r$ with the largest power possible. For example, with a generic value for $\ce{q}{1}$, the function $\eedot{q}{tt}$ possesses an expansion that begins at order $M/r$. Setting $\ce{q}{1} = -52/15$ eliminates this leading term, as well as terms of order $(M/r)^2$ and $(M/r)^3$, and leaves an expansion that begins at order $(M/r)^6$. This choice of integration constant therefore defines the post-Newtonian calibration for this radial function. Adapting the procedure to all other radial functions, we find that the post-Newtonian calibration is achieved with \begin{equation} \ce{q}{1} = -\frac{52}{15}, \qquad \ce{o}{1} = -\frac{30}{7}, \qquad \cb{q}{1} = -\frac{29}{10}, \qquad \cb{o}{1} = -\frac{293}{70}, \qquad \cg{o} = 0. \label{PNcal1} \end{equation} The values of $\cp{h}$, $\cq{h}$, and $\ch{h}$ continue to be fixed by Eq.~(\ref{Hcal2}), \begin{equation} \cp{h} = \frac{355}{3}, \qquad \cq{h} = -\frac{5}{3}, \qquad \ch{h} = -10. 
\label{PNcal2} \end{equation} And because the remaining constants do not affect the leading terms in expansions of the radial functions in powers of $M/r$, we set them to zero for simplicity: \begin{equation} \ce{q}{2} = \cb{q}{2} = \cp{m} = \cp{q} = \cq{m} = \cq{q} = \ch{q} = 0. \label{PNcal3} \end{equation} \section{Relation between tidal moments} \label{sec:calibrations} Tidal moments corresponding to a calibration $\bm{\mu}$, where $\bm{\mu}$ collectively denotes a specific set of integration constants $\{\ce{q}{1}, \ce{q}{2}, \cdots, \ch{q}, \ch{h}\}$, are obtained from bare tidal moments through the redefinitions of Eqs.~(\ref{moment-transf1}). We have that \begin{subequations} \begin{align} \E_{ab}(\bm{\mu}) &= \E_{ab}(\bm{0}) + \ce{q}{1}\, M \dot{\E}_{ab}(\bm{0}) + \ce{q}{2}\, M^2 \ddot{\E}_{ab}(\bm{0}) + \cp{q}\, M^2 \E_{p\langle a} \E^p_{\ b\rangle}(\bm{0}) + \cq{q}\, M^2 \B_{p\langle a} \B^p_{\ b\rangle}(\bm{0}), \\ \B_{ab}(\bm{\mu}) &= \B_{ab}(\bm{0}) + \cb{q}{1}\, M \dot{\B}_{ab}(\bm{0}) + \cb{q}{2}\, M^2 \ddot{\B}_{ab}(\bm{0}) + \ch{q}\, M^2 \E_{p\langle a} \B^p_{\ b\rangle}(\bm{0}),\\ \E_{abc}(\bm{\mu}) &= \E_{abc}(\bm{0}) + \ce{o}{1}\, M \dot{\E}_{abc}(\bm{0}) + \cg{o}\, M \epsilon_{pq\langle a} \E^p_{\ b} \B^q_{\ c\rangle}(\bm{0}),\\ \B_{abc}(\bm{\mu}) &= \B_{abc}(\bm{0}) + \cb{o}{1}\, M \dot{\B}_{abc}(\bm{0}), \end{align} \end{subequations} where $\E_{ab}(\bm{\mu})$ and so on stand for the calibrated moments, while $\E_{ab}(\bm{0})$ and so on stand for the bare moments. It follows from these relations that tidal moments corresponding to different calibrations can be related to each other, independently of the bare moments.
We have that \begin{subequations} \begin{align} \E_{ab}(\bm{\mu}_2) &= \E_{ab}(\bm{\mu}_1) + [ \ce{q}{1}(2)-\ce{q}{1}(1) ] M \dot{\E}_{ab}(\bm{\mu}_1) + \bigl\{ \ce{q}{2}(2)-\ce{q}{2}(1) - \ce{q}{1}(1) [ \ce{q}{1}(2)-\ce{q}{1}(1) ] \bigr\} M^2 \ddot{\E}_{ab}(\bm{\mu}_1) \nonumber \\ & \quad \mbox{} + [ \cp{q}(2)-\cp{q}(1) ] M^2 \E_{p\langle a} \E^p_{\ b\rangle}(\bm{\mu}_1) + [ \cq{q}(2)-\cq{q}(1) ] M^2 \B_{p\langle a} \B^p_{\ b\rangle}(\bm{\mu}_1), \\ \B_{ab}(\bm{\mu}_2) &= \B_{ab}(\bm{\mu}_1) + [ \cb{q}{1}(2)-\cb{q}{1}(1) ] M \dot{\B}_{ab}(\bm{\mu}_1) + \bigl\{ \cb{q}{2}(2)-\cb{q}{2}(1) - \cb{q}{1}(1) [ \cb{q}{1}(2)-\cb{q}{1}(1) ] \bigr\} M^2 \ddot{\B}_{ab}(\bm{\mu}_1) \nonumber \\ & \quad \mbox{} + [ \ch{q}(2)-\ch{q}(1) ] M^2 \E_{p\langle a} \B^p_{\ b\rangle}(\bm{\mu}_1), \\ \E_{abc}(\bm{\mu}_2) &= \E_{abc}(\bm{\mu}_1) + [ \ce{o}{1}(2)-\ce{o}{1}(1) ] M \dot{\E}_{abc}(\bm{\mu}_1) + [ \cg{o}(2)-\cg{o}(1) ] M \epsilon_{pq\langle a} \E^p_{\ b} \B^q_{\ c\rangle}(\bm{\mu}_1), \\ \B_{abc}(\bm{\mu}_2) &= \B_{abc}(\bm{\mu}_1) + [ \cb{o}{1}(2)-\cb{o}{1}(1) ] M \dot{\B}_{abc}(\bm{\mu}_1), \end{align} \end{subequations} where constants such as $\ce{q}{1}(1)$ are elements of $\bm{\mu}_1$, while constants such as $\ce{q}{1}(2)$ are elements of $\bm{\mu}_2$.
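As an arithmetical check on these composition rules, the coefficients relating the horizon and post-Newtonian calibrations can be computed with exact rational arithmetic from the constants quoted in Eqs.~(\ref{Hcal1}), (\ref{PNcal1}), and (\ref{PNcal3}); this is a sketch in which the variable names are ad hoc, with $\bm{\mu}_2$ taken to be the horizon calibration and $\bm{\mu}_1$ the post-Newtonian one.

```python
from fractions import Fraction as F

# Horizon-calibration constants ...
e1_H, e2_H = F(-92, 15), F(5569, 225)
b1_H, b2_H = F(-76, 15), F(18553, 1050)
eo1_H, bo1_H = F(-188, 21), F(-919, 105)

# ... and post-Newtonian-calibration constants.
e1_PN, e2_PN = F(-52, 15), F(0)
b1_PN, b2_PN = F(-29, 10), F(0)
eo1_PN, bo1_PN = F(-30, 7), F(-293, 70)

# Composition rule with mu_2 = H and mu_1 = PN:
c_Edot  = e1_H - e1_PN
c_Eddot = e2_H - e2_PN - e1_PN*(e1_H - e1_PN)
c_Bdot  = b1_H - b1_PN
c_Bddot = b2_H - b2_PN - b1_PN*(b1_H - b1_PN)
c_Eodot = eo1_H - eo1_PN
c_Bodot = bo1_H - bo1_PN
```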
With these rules it follows from Eqs.~(\ref{Hcal1}), (\ref{Hcal2}), (\ref{Hcal3}), (\ref{PNcal1}), (\ref{PNcal2}), and (\ref{PNcal3}) that the tidal moments in the horizon and post-Newtonian calibrations are related by \begin{subequations} \label{H-PNcalibration} \begin{align} \E_{ab}(\mbox{H}) &= \E_{ab}(\mbox{PN}) - \frac{8}{3} M \dot{\E}_{ab}(\mbox{PN}) + \frac{1163}{75} M^2 \ddot{\E}_{ab}(\mbox{PN}) - \frac{2}{7} M^2 \E_{p\langle a} \E^p_{\ b\rangle}(\mbox{PN}) + \frac{18}{7} M^2 \B_{p\langle a} \B^p_{\ b\rangle}(\mbox{PN}), \\ \B_{ab}(\mbox{H}) &= \B_{ab}(\mbox{PN}) - \frac{13}{6} M \dot{\B}_{ab}(\mbox{PN}) + \frac{23911}{2100} M^2 \ddot{\B}_{ab}(\mbox{PN}) - \frac{44}{7} M^2 \E_{p\langle a} \B^p_{\ b\rangle}(\mbox{PN}), \\ \E_{abc}(\mbox{H}) &= \E_{abc}(\mbox{PN}) - \frac{14}{3} M \dot{\E}_{abc}(\mbox{PN}) - 10 M \epsilon_{pq\langle a} \E^p_{\ b} \B^q_{\ c\rangle}(\mbox{PN}), \\ \B_{abc}(\mbox{H}) &= \B_{abc}(\mbox{PN}) - \frac{137}{30} M \dot{\B}_{abc}(\mbox{PN}). \end{align} \end{subequations} We recall that $\E_{abcd}$ and $\B_{abcd}$ have a fixed calibration determined by the definitions of Eqs.~(\ref{tidalmoment_4}). \section{Post-Newtonian potentials of a tidally deformed black hole} \label{sec:PN} In this section we obtain the post-Newtonian limit of the metric of a tidally deformed black hole. We shall show that with a shift of the radial coordinate and a transformation to Cartesian coordinates, it can be expressed in the standard form (see, for example, Sec.~8.2 of Ref.~\cite{poisson-will:14}) \begin{subequations} \label{metric_PN_cartesian1} \begin{align} g_{tt} &= -1 + 2U + 2(\Psi-U^2) + 2\pn, \\ g_{ta} &= -4U_a + 2\pn, \\ g_{ab} &= (1 + 2U) \delta_{ab} + 2\pn, \end{align} \end{subequations} in terms of a Newtonian potential $U$, a vector potential $U_a$, and a post-Newtonian potential $\Psi$. This exercise prepares the way for the matching calculation of Sec.~\ref{sec:matching}. 
The integration constants of Eqs.~(\ref{PNcal1}), (\ref{PNcal2}), and (\ref{PNcal3}) --- which define the ``post-Newtonian calibration'' of the tidal moments --- are inserted in the metric of Eqs.~(\ref{blackhole_metric_tr}), which is then expressed as a post-Newtonian expansion truncated after the first post-Newtonian ($1\pn$) order. The gravitoelectric tidal moments are themselves given post-Newtonian expansions of the form \begin{subequations} \begin{align} \E_{ab} &= \E_{ab}(0\pn) + \E_{ab}(1\pn) + 2\pn, \\ \E_{abc} &= \E_{abc}(0\pn) + \E_{abc}(1\pn) + 2\pn, \\ \E_{abcd} &= \E_{abcd}(0\pn) + \E_{abcd}(1\pn) + 2\pn, \end{align} \end{subequations} while the gravitomagnetic moments are truncated at their leading, $1\pn$ order. The metric is expressed in terms of the harmonic radial coordinate \begin{equation} \bar{r} = r - M, \end{equation} where $r$ is the usual areal coordinate employed previously; this shift by $-M$ is a post-Newtonian correction. With all this the metric becomes \begin{subequations} \label{metric_PN_angular} \begin{align} g_{tt} &= -1 + \biggl[ \frac{2M}{\bar{r}} - \bar{r}^2\, \EE{q}(0\pn) - \frac{1}{3} \bar{r}^3\, \EE{o}(0\pn) - \frac{1}{12} \bar{r}^4\, \EE{h}(0\pn) \biggr] \nonumber \\ & \quad \mbox{} + \biggl[ -\frac{2M^2}{\bar{r}^2} + 2M\bar{r}\, \EE{q}(0\pn) + \frac{2}{3} M \bar{r}^2\, \EE{o}(0\pn) + \frac{1}{6} M \bar{r}^3\, \EE{h}(0\pn) - \frac{11}{42} \bar{r}^4\, \EEdd{q}(0\pn) \nonumber \\ & \quad \mbox{} - \bar{r}^2\, \EE{q}(1\pn) - \frac{1}{3} \bar{r}^3\, \EE{o}(1\pn) - \frac{1}{12} \bar{r}^4\, \EE{h}(1\pn) - \frac{1}{15} \bar{r}^4\, \PP{m} - \frac{2}{7} \bar{r}^4\, \PP{q} + \frac{1}{3} \bar{r}^4\, \PP{h} \biggr] + O(\bar{r}^5, 2\pn), \\ g_{tr} &= -\frac{2}{3} \bar{r}^3\, \EEd{q}(0\pn) - \frac{1}{6} \bar{r}^4\, \EEd{o}(0\pn) + O(\bar{r}^5, 2\pn), \\ g_{rr} &= 1 + \biggl[ \frac{2M}{\bar{r}} - \bar{r}^2\, \EE{q}(0\pn) - \frac{1}{3} \bar{r}^3\, \EE{o}(0\pn) - \frac{1}{12} \bar{r}^4\, \EE{h}(0\pn) \biggr] + O(\bar{r}^5, 2\pn), \\ g_{tA} 
&= \frac{2}{3} \bar{r}^3\, \BB{q}_A + \frac{1}{4} \bar{r}^4\, \BB{o}_A + \frac{1}{15} \bar{r}^5\, \BB{h}_A + O(\bar{r}^6, 2\pn), \\ g_{rA} &= 2\pn, \\ g_{AB} &= r^2 \Omega_{AB} + r^2 \Omega_{AB} \biggl[ \frac{2M}{\bar{r}} - \bar{r}^2\, \EE{q}(0\pn) - \frac{1}{3} \bar{r}^3\, \EE{o}(0\pn) - \frac{1}{12} \bar{r}^4\, \EE{h}(0\pn) \biggr] + O(\bar{r}^7, 2\pn). \end{align} \end{subequations} In $g_{tt}$, the first set of terms within square brackets are of Newtonian ($0\pn$) order, while the second set of terms are $1\pn$ corrections. It is understood that in the post-Newtonian terms, the tidal potentials $\PP{m}$, $\PP{q}$, and $\PP{h}$ are constructed from $\E_{ab}(0\pn)$. Conversion to Cartesian coordinates $x^a = \bar{r} \Omega^a(\theta^A)$ gives rise to the standard post-Newtonian form of the metric, as given by Eqs.~(\ref{metric_PN_cartesian1}), with \begin{subequations} \label{metric_PN_cartesian2} \begin{align} U &= \frac{M}{\bar{r}} - \frac{1}{2} \bar{r}^2\, \EE{q}(0\pn) - \frac{1}{6} \bar{r}^3\, \EE{o}(0\pn) - \frac{1}{24} \bar{r}^4\, \EE{h}(0\pn) + O(\bar{r}^5) , \\ \Psi-U^2 &= -\frac{M^2}{\bar{r}^2} + M\bar{r}\, \EE{q}(0\pn) + \frac{1}{3} M \bar{r}^2\, \EE{o}(0\pn) + \frac{1}{12} M \bar{r}^3\, \EE{h}(0\pn) - \frac{11}{84} \bar{r}^4\, \EEdd{q}(0\pn) \nonumber \\ & \quad \mbox{} - \frac{1}{2} \bar{r}^2\, \EE{q}(1\pn) - \frac{1}{6} \bar{r}^3\, \EE{o}(1\pn) - \frac{1}{24} \bar{r}^4\, \EE{h}(1\pn) - \frac{1}{30} \bar{r}^4\, \PP{m} - \frac{1}{7} \bar{r}^4\, \PP{q} + \frac{1}{6} \bar{r}^4\, \PP{h} + O(\bar{r}^5), \\ U_a &= -\frac{1}{6} \bar{r}^2\, \BB{q}_a - \frac{1}{16} \bar{r}^3\, \BB{o}_a - \frac{1}{60} \bar{r}^4\, \BB{h}_a + \frac{1}{6} \bar{r}^3\Omega_a\, \EEd{q}(0\pn) + \frac{1}{24} \bar{r}^4\Omega_a\, \EEd{o}(0\pn) + O(\bar{r}^5). 
\end{align} \end{subequations} That the standard post-Newtonian form of the metric is recovered in our calculation is the reason for employing $\bar{r}$ as a radial coordinate; this form would {\it not} be achieved in the original radial coordinate $r$. It should be noted that in spite of our use of $\bar{r}$, the coordinates $x^a$ are {\it not} harmonic: the potentials do not satisfy the harmonic condition $\partial_t U + \partial_a U^a = 0$. The reason for this, of course, is that the metric perturbation is presented in Regge-Wheeler gauge instead of a harmonic gauge. The square of the Newtonian potential $U$ can be calculated with the help of the identity $\EE{q} \EE{q} = \frac{2}{15} \PP{m} + \frac{4}{7} \PP{q} + \PP{h}$, and this reveals that the post-Newtonian potential is given by \begin{equation} \Psi = -\frac{1}{2} \bar{r}^2\, \EE{q}(1\pn) - \frac{1}{6} \bar{r}^3\, \EE{o}(1\pn) - \frac{1}{24} \bar{r}^4\, \EE{h}(1\pn) -\frac{11}{84} \bar{r}^4\, \EEdd{q}(0\pn) + \frac{5}{12} \bar{r}^4\, \PP{h} + O(\bar{r}^5). \end{equation} Notice that $\Psi$ does not involve $\PP{m}$ and $\PP{q}$, which have cancelled out in these manipulations. To obtain the final form of the potentials we insert the definitions of the tidal potentials in terms of the tidal moments. 
This yields \begin{subequations} \label{PNpotentials} \begin{align} \bar{U} &= \frac{M}{\bar{r}} - \frac{1}{2} \bar{\E}_{ab}(0\pn)\, \bar{x}^a \bar{x}^b - \frac{1}{6} \bar{\E}_{abc}(0\pn)\, \bar{x}^a \bar{x}^b \bar{x}^c - \frac{1}{12} \bar{\E}_{abcd}(0\pn)\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5), \\ \bar{U}_a &= -\frac{1}{6} \epsilon_{apq} \bar{x}^p \bar{\B}^q_{\ b} \bar{x}^b - \frac{1}{12} \epsilon_{apq} \bar{x}^p \bar{\B}^q_{\ bc} \bar{x}^b \bar{x}^c - \frac{1}{18} \epsilon_{apq} \bar{x}^p \bar{\B}^q_{\ bcd} \bar{x}^b \bar{x}^c \bar{x}^d \nonumber \\ & \quad \mbox{} + \frac{1}{6} \bar{x}_a \dot{\bar{\E}}_{bc}(0\pn) \bar{x}^b \bar{x}^c + \frac{1}{24} \bar{x}_a \dot{\bar{\E}}_{bcd}(0\pn) \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5), \\ \bar{\Psi} &= - \frac{1}{2} \bar{\E}_{ab}(1\pn)\, \bar{x}^a \bar{x}^b - \frac{1}{6} \bar{\E}_{abc}(1\pn)\, \bar{x}^a \bar{x}^b \bar{x}^c - \frac{1}{12} \bar{\E}_{abcd}(1\pn)\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d \nonumber \\ & \quad \mbox{} -\frac{11}{84} \bar{r}^2 \ddot{\bar{\E}}_{ab}(0\pn) \bar{x}^a \bar{x}^b + \frac{5}{12} \bar{\E}_{\langle ab}\bar{\E}_{cd \rangle}(0\pn) \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5). \end{align} \end{subequations} We have placed overbars on the potentials, tidal moments, and coordinates in anticipation of the developments of Sec.~\ref{sec:matching}; in this notation the tidal moments are functions of time $\bar{t}$. The overbars indicate that the potentials, tidal moments, and coordinates refer to the black hole's reference frame, which is to be distinguished from the post-Newtonian barycentric frame to be introduced below. 
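Each angular term in the Newtonian potential of Eqs.~(\ref{PNpotentials}) is a harmonic polynomial, because the tidal moments are symmetric trace-free tensors. The following sympy sketch (an illustration added here, not part of the original derivation) verifies this for the quadrupole term; the octupole and hexadecapole terms work in the same way.

```python
# Verify that -1/2 E_ab x^a x^b is harmonic when E_ab is symmetric and
# trace-free (STF).  The components below are arbitrary symbols; E33 is
# fixed by the trace-free condition.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]

E11, E12, E13, E22, E23 = sp.symbols('E11 E12 E13 E22 E23')
E = [[E11, E12, E13],
     [E12, E22, E23],
     [E13, E23, -E11 - E22]]   # trace-free: E33 = -E11 - E22

U_quad = -sp.Rational(1, 2) * sum(
    E[a][b] * X[a] * X[b] for a in range(3) for b in range(3))

lap = sum(sp.diff(U_quad, X[a], 2) for a in range(3))
print(sp.simplify(lap))   # -> 0: the quadrupole term is harmonic
```

The same computation with a rank-3 or rank-4 STF tensor confirms harmonicity of the octupole and hexadecapole terms.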
It is straightforward to show that the potentials satisfy the post-Newtonian field equations \begin{subequations} \begin{align} 0 &= \bar{\nabla}^2 \bar{U}, \\ 0 &= \bar{\nabla}^2 \bar{\Psi} + 3 \partial_{\bar{t}\bar{t}} \bar{U} + 4 \partial_{\bar{t}\bar{a}} \bar{U}^a, \\ 0 &= \bar{\nabla}^2 \bar{U}_a - \partial_{\bar{t}\bar{a}} \bar{U} - \partial_{\bar{a}\bar{b}} \bar{U}^b. \end{align} \end{subequations} These reduce to the standard post-Newtonian equations (Sec.~8.2 of Ref.~\cite{poisson-will:14}) when the potentials satisfy the harmonic condition $\partial_{\bar t} \bar{U} + \partial_{\bar{a}} \bar{U}^a = 0$. \section{Matching to a post-Newtonian metric} \label{sec:matching} The metric of a tidally deformed black hole was constructed in Secs.~\ref{sec:metric1} and \ref{sec:metric2}, and it was expressed in terms of tidal moments $\bar{\E}_{ab}$, $\bar{\E}_{abc}$, $\bar{\E}_{abcd}$, $\bar{\B}_{ab}$, $\bar{\B}_{abc}$, $\bar{\B}_{abcd}$. (See the remark regarding the overbar at the end of Sec.~\ref{sec:PN}.) The tidal moments occur in the metric as freely specifiable functions of time, and these cannot be determined by solving the Einstein field equations in a domain limited to the black hole's immediate neighborhood. Their determination must instead rely on matching the black-hole metric to another metric that incorporates all relevant information regarding the black hole's remote environment. We achieve this in this section, taking the black hole to be a member of a post-Newtonian system containing any number of external bodies. In this treatment, the black hole's internal gravity is allowed to be strong, but the mutual gravity between black hole and external bodies is assumed to be sufficiently weak to be adequately described by a post-Newtonian expansion of the metric. The matching between the black-hole and post-Newtonian metrics will determine the tidal moments, which are calculated accurately through the first post-Newtonian ($1\pn)$ approximation. 
\subsection{Barycentric potentials} \label{subsec:barycentric} The black hole is taken to be a member of a post-Newtonian system of gravitating bodies. The metric is given by Eq.~(\ref{metric_PN_cartesian1}), in terms of harmonic coordinates $(t,x^a)$ attached to the system's barycenter, and in terms of barycentric potentials $U$, $U_j$, and $\Psi$. The post-Newtonian metric is valid in a domain that contains all bodies but leaves out a sphere of radius $\bar{r} \gg M$ surrounding the black hole; this region is excluded because the black hole's internal gravity is too strong to be adequately captured by a post-Newtonian expansion. As illustrated in Fig.~\ref{fig:domains}, there exists an overlap region in which the black-hole metric of Sec.~\ref{sec:metric2} and the post-Newtonian metric are both valid; the matching of the metrics takes place in this region. It is helpful to define new post-Newtonian potentials $\psi$ and $X$ by the relation $\Psi = \psi + \frac{1}{2} \partial_{tt} X$. In the vacuum region between bodies, the potentials satisfy the post-Newtonian field equations \begin{equation} \nabla^2 U = 0, \qquad \nabla^2 U_j = 0, \qquad \nabla^2 \psi = 0, \qquad \nabla^2 X = 2 U, \label{PNeqns} \end{equation} as well as the harmonic condition \begin{equation} \partial_t U + \partial_j U^j = 0. \label{harmonic} \end{equation} Each equation is linear, and a solution describing a black hole and a collection of external bodies can be obtained by linear superposition. We model the black hole as a post-Newtonian monopole of mass $M$ at position $\bm{x} = \bm{r}(t)$ in the barycentric frame. We let $\bm{v} := d\bm{r}/dt$ be the black hole's velocity vector, and $\bm{a} := d\bm{v}/dt$ be its acceleration vector. 
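The vacuum equations (\ref{PNeqns}) and the harmonic condition (\ref{harmonic}) admit elementary point-mass solutions. As a quick consistency check (a sympy sketch, not part of the original text), the monopole potentials $M/s$, $M v^j/s$, and $Ms$, with $s = |\bm{x} - \bm{r}(t)|$, satisfy all of them away from the worldline:

```python
# Point-mass potentials U = M/s, U^j = M v^j/s, X = M s with
# s = |x - r(t)|: check Laplace's equation, nabla^2 X = 2U, and the
# harmonic gauge condition d_t U + d_j U^j = 0.
import sympy as sp

t, x, y, z, M = sp.symbols('t x y z M')
r = [sp.Function(f'r{i}')(t) for i in range(3)]   # worldline r(t)
X = [x, y, z]
s = sp.sqrt(sum((X[a] - r[a])**2 for a in range(3)))

U = M / s
Uj = [M * sp.diff(r[a], t) / s for a in range(3)]   # U^j = M v^j / s
Xpot = M * s                                        # superpotential X

lap = lambda f: sum(sp.diff(f, X[a], 2) for a in range(3))

print(sp.simplify(lap(U)))            # -> 0
print(sp.simplify(lap(Xpot) - 2*U))   # -> 0, i.e. nabla^2 X = 2 U
harm = sp.diff(U, t) + sum(sp.diff(Uj[a], X[a]) for a in range(3))
print(sp.simplify(harm))              # -> 0
```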
The potentials are written as \begin{subequations} \label{PNpotentials1} \begin{align} U(t,\bm{x}) &= \frac{M}{s} + U_{\rm ext}(t,\bm{x}), \\ U^j(t,\bm{x}) &= \frac{M v^j}{s} + U^j_{\rm ext}(t,\bm{x}), \\ \psi(t,\bm{x}) &= \frac{M \mu}{s} + \psi_{\rm ext}(t,\bm{x}), \\ X(t,\bm{x}) &= M s + X_{\rm ext}(t,\bm{x}), \end{align} \end{subequations} where $s := |\bm{x}-\bm{r}|$ is the Euclidean distance between the black hole and a field point at $\bm{x}$, while $U_{\rm ext}$, $U^j_{\rm ext}$, $\psi_{\rm ext}$, and $X_{\rm ext}$ are the potentials created by the external bodies; these separately satisfy the field equations of Eq.~(\ref{PNeqns}) and the harmonic condition of Eq.~(\ref{harmonic}). The arbitrary function $\mu(t)$ represents a post-Newtonian correction to the mass parameter. It cannot be determined by integrating the post-Newtonian field equations in a domain that excludes the black hole, and must instead be obtained by matching the post-Newtonian metric with the black-hole metric of Sec.~\ref{sec:metric2}. Inserting $\psi$ and $X$ in the expression for $\Psi$ returns \begin{equation} \Psi(t,\bm{x}) = -\frac{M}{2s^3} (\bm{v} \cdot \bm{s})^2 + \frac{M}{s} \biggl( \mu + \frac{1}{2} v^2 \biggr) - \frac{M}{2s} \bm{a} \cdot \bm{s} + \Psi_{\rm ext}(t,\bm{x}), \label{PNpotentials2} \end{equation} where $\bm{s} := \bm{x}-\bm{r}$, $v^2 := \bm{v} \cdot \bm{v}$, and $\Psi_{\rm ext} := \psi_{\rm ext} + \frac{1}{2} \partial_{tt} X_{\rm ext}$. \subsection{Transformation to the black-hole frame} The post-Newtonian metric of the preceding subsection was presented in coordinates $(t, x^a)$ attached to the barycentric frame of the entire post-Newtonian system. On the other hand, the black-hole metric of Sec.~\ref{sec:metric2} was presented in coordinates $(\bar{t},\bar{x}^a)$ that are attached to the black hole's own reference frame, which is moving relative to the barycentric frame. 
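The monopole part of Eq.~(\ref{PNpotentials2}) follows from $\Psi = \psi + \frac{1}{2} \partial_{tt} X$ by straightforward differentiation of $X = Ms$. A sympy sketch confirming the algebra (illustrative only; the external terms are set aside):

```python
# Insert psi = M*mu/s and X = M*s into Psi = psi + (1/2) d^2 X/dt^2 and
# compare with the monopole part of Eq. (PNpotentials2),
#   Psi = -M (v.s)^2/(2 s^3) + (M/s)(mu + v^2/2) - (M/2s) a.s .
import sympy as sp

t, x, y, z, M = sp.symbols('t x y z M')
mu = sp.Function('mu')(t)
r = [sp.Function(f'r{i}')(t) for i in range(3)]
v = [sp.diff(ri, t) for ri in r]
acc = [sp.diff(vi, t) for vi in v]
svec = [c - rc for c, rc in zip([x, y, z], r)]   # s = x - r(t)
s = sp.sqrt(sum(c**2 for c in svec))

Psi = M*mu/s + sp.Rational(1, 2)*sp.diff(M*s, t, 2)

vdots = sum(v[b]*svec[b] for b in range(3))    # v . s
adots = sum(acc[b]*svec[b] for b in range(3))  # a . s
v2 = sum(vb**2 for vb in v)

target = -M*vdots**2/(2*s**3) + (M/s)*(mu + v2/2) - M*adots/(2*s)
print(sp.simplify(Psi - target))   # -> 0
```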
To match the metrics we must transform the post-Newtonian potentials to the coordinates $(\bar{t},\bar{x}^a)$, and compare the expressions with Eqs.~(\ref{PNpotentials}). The systems $(t,x^a)$ and $(\bar{t},\bar{x}^a)$ are both compatible with the standard form of the post-Newtonian metric, and the coordinate transformation relating two such systems is presented in Sec.~8.3 of Ref.~\cite{poisson-will:14} (a summary of work previously carried out in Ref.~\cite{racine-flanagan:05}). It should be noted that while $(t, x^a)$ is a system of harmonic coordinates, the $(\bar{t},\bar{x}^a)$ coordinates are not harmonic. The transformation described in Ref.~\cite{poisson-will:14} applies to such situations, but the reader should be warned that the summary of Box 8.2 refers strictly to two systems of harmonic coordinates. The coordinate transformation is characterized by arbitrary functions $A(\bar{t})$, $H^a(\bar{t})$, $R^a(\bar{t})$, and $\beta(\bar{t},\bar{x}^a)$ in addition to the black-hole position vector $r^a(\bar{t})$, here expressed as a function of the barred time coordinate. The transformation is given by \begin{equation} t = \bar{t} + \bigl( A + v_a \bar{x}^a \bigr) + \beta + 3\pn, \qquad x^a = \bar{x}^a + r^a + \Bigl( H^a + H^a_{\ b}\, \bar{x}^b + \frac{1}{2} H^a_{\ bc}\, \bar{x}^b \bar{x}^c \Bigr) + 2\pn, \end{equation} where $H_{ab} := \epsilon_{abc} R^c + \frac{1}{2} v_a v_b - (\dot{A} - \frac{1}{2} v^2) \delta_{ab}$ and $H_{abc} := -\delta_{ab} a_c - \delta_{ac} a_b + \delta_{bc} a_a$, with $v^a := dr^a/d\bar{t}$ and $a^a := dv^a/d\bar{t}$; an overdot indicates differentiation with respect to $\bar{t}$. The bracketed terms in the equation for $t$ represent a $1\pn$ adjustment to the time coordinate that impacts the Newtonian potential $U$, while $\beta$ represents a $2\pn$ adjustment that impacts the vector and post-Newtonian potentials. The bracketed terms in the equation for $x^a$ are of $1\pn$ order. 
In the following the acceleration vector is decomposed into Newtonian and post-Newtonian terms, \begin{equation} a^a = a^a(0\pn) + a^a(1\pn) + 2\pn; \end{equation} the acceleration that appears in $H_{abc}$ can be truncated at Newtonian order. The transformed potentials $\bar{U}$, $\bar{U}^j$, and $\bar{\Psi}$, those of the black-hole frame, are expressed in terms of ``hatted potentials'' $\hat{U}$, $\hat{U}^j$, and $\hat{\Psi}$, related to the original potentials $U$, $U^j$, and $\Psi$ by equations of the form \begin{equation} \hat{U}(\bar{t},\bm{\bar{x}}) := U \bigl(t = \bar{t}, \bm{x}=\bm{\bar{x}} + \bm{r}(\bar{t}) \bigr); \end{equation} the hatted potentials are therefore equal to the original potentials evaluated at the time $\bar{t}$ and position $\bm{\bar{x}} + \bm{r}$. Because of the time dependence contained in $\bm{r}(\bar{t})$, the time derivative of a hatted potential is related by \begin{equation} \partial_{\bar{t}} \hat{U} = \partial_t U + v^a \partial_a U \label{diff_rule} \end{equation} to derivatives of the original potential; it is understood that the right-hand side of this equation is evaluated at $t=\bar{t}$ and $x^a = \bar{x}^a + r^a(\bar{t})$. Spatial derivatives are related by $\partial_{\bar{a}} \hat{U} = \partial_a U$, where again the right-hand side is evaluated at the new time and position. By virtue of these differentiation rules, the harmonic condition satisfied by the hatted potentials takes the form \begin{equation} \partial_{\bar{t}} \hat{U} - v^a \partial_{\bar{a}} \hat{U} + \partial_{\bar{a}} \hat{U}^a = 0. 
\label{harmonic_hatted} \end{equation} The field equations satisfied by the Newtonian and vector potentials become \begin{equation} \bar{\nabla}^2 \hat{U} = 0, \qquad \bar{\nabla}^2 \hat{U}^j = 0, \label{PNeqn_hatted1} \end{equation} while the field equation satisfied by $\Psi$, $\nabla^2 \Psi - \partial_{tt} U = 0$, takes the new form \begin{equation} \bar{\nabla}^2 \hat{\Psi} - \partial_{\bar{t}\bar{t}} \hat{U} + 2 v^a \partial_{\bar{t}\bar{a}} \hat{U} + a^a \partial_{\bar{a}} \hat{U} - v^a v^b \partial_{\bar{a}\bar{b}} \hat{U} = 0. \label{PNeqn_hatted2} \end{equation} We recall that the harmonic condition and the field equations are satisfied separately by the external potentials $\hat{U}_{\rm ext}$, $\hat{U}^j_{\rm ext}$, and $\hat{\Psi}_{\rm ext}$. We take advantage of the large separation between the black hole and all external bodies to express each external potential as a Taylor expansion about $\bar{x}^a = 0$. We write, for example, \begin{align} \hat{U}_{\rm ext}(\bar{t},\bm{\bar{x}}) &= \hat{U}_{\rm ext}(\bar{t},\bm{0}) + \bar{x}^a\, \partial_{\bar{a}} \hat{U}_{\rm ext}(\bar{t},\bm{0}) + \frac{1}{2} \bar{x}^a \bar{x}^b\, \partial_{\bar{a}\bar{b}} \hat{U}_{\rm ext}(\bar{t},\bm{0}) + \frac{1}{6} \bar{x}^a \bar{x}^b \bar{x}^c\, \partial_{\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext}(\bar{t},\bm{0}) \nonumber \\ & \quad \mbox{} + \frac{1}{24} \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d\, \partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext}(\bar{t},\bm{0}) + O(\bar{r}^5), \end{align} where $\bar{r}:= |\bm{\bar{x}}|$. In all equations that appear below, the external potentials and their derivatives shall be evaluated at $\bar{x}^a = 0$. 
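The differentiation rules and the transformed harmonic condition of Eq.~(\ref{harmonic_hatted}) can be exercised on a concrete example. In the sympy sketch below (not part of the original text) the pair $U = t\, x^1 x^2$, $U^1 = -\frac{1}{2} (x^1)^2 x^2$ is an arbitrary choice made only to satisfy the barycentric harmonic condition $\partial_t U + \partial_j U^j = 0$:

```python
# Verify Eq. (diff_rule) and Eq. (harmonic_hatted) on a sample pair
# (U, U^j) obeying the barycentric harmonic condition.
import sympy as sp

tb = sp.Symbol('tbar')
Xb = sp.symbols('xbar1 xbar2 xbar3')
r = [sp.Function(f'r{i}')(tb) for i in range(3)]   # worldline r(tbar)
v = [sp.diff(ri, tb) for ri in r]

t = sp.Symbol('t')
X = sp.symbols('x1 x2 x3')
U = t * X[0] * X[1]                                # sample potential
Uj = [-X[0]**2 * X[1] / 2, sp.Integer(0), sp.Integer(0)]
assert sp.diff(U, t) + sum(sp.diff(Uj[a], X[a]) for a in range(3)) == 0

# hatted potentials: evaluate at t = tbar, x = xbar + r(tbar)
sub = {t: tb, **{X[a]: Xb[a] + r[a] for a in range(3)}}
Uhat = U.subs(sub)
Ujhat = [uj.subs(sub) for uj in Uj]

# Eq. (diff_rule): d Uhat/d tbar = [d_t U + v^a d_a U] at the new point
lhs = sp.diff(Uhat, tb)
rhs = sp.diff(U, t).subs(sub) + sum(v[a] * sp.diff(U, X[a]).subs(sub)
                                    for a in range(3))
print(sp.simplify(lhs - rhs))    # -> 0

# Eq. (harmonic_hatted): d_tbar Uhat - v^a d_a Uhat + d_a Uhat^a = 0
harm = (sp.diff(Uhat, tb)
        - sum(v[a] * sp.diff(Uhat, Xb[a]) for a in range(3))
        + sum(sp.diff(Ujhat[a], Xb[a]) for a in range(3)))
print(sp.simplify(harm))         # -> 0
```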
In a similar way we express the function $\beta$ that appears in the coordinate transformation as a Taylor expansion of the form \begin{equation} \beta(\bar{t},\bm{\bar{x}}) = \mbox{}_0 \beta(\bar{t}) + \mbox{}_1 \beta_a(\bar{t})\, \bar{x}^a + \frac{1}{2} \mbox{}_2 \beta_{ab}(\bar{t})\, \bar{x}^a \bar{x}^b + \frac{1}{6} \mbox{}_3 \beta_{abc}(\bar{t})\, \bar{x}^a \bar{x}^b \bar{x}^c + \frac{1}{24} \mbox{}_4 \beta_{abcd}(\bar{t})\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + \frac{1}{120} \mbox{}_5 \beta_{abcde}(\bar{t})\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d \bar{x}^e + O(\bar{r}^6), \end{equation} in which the expansion coefficients are fully symmetric tensors; the number that appears before each tensor symbol indicates the associated power of $\bar{r}$ in the expansion. The expansion is carried to fifth order because $\beta$ contributes to the vector potential through its spatial gradient, so that $\mbox{}_5 \beta_{abcde}$ enters at order $\bar{r}^4$. With all these ingredients in place, a long but straightforward computation reveals that the equations listed in Secs.~8.3.2 and 8.3.3 of Ref.~\cite{poisson-will:14} become \begin{subequations} \label{PNpotentials_transf} \begin{align} \bar{U} &= \frac{M}{\bar{r}} + \mbox{}_0 \bar{U} + \mbox{}_1 \bar{U}_a\, \bar{x}^a + \frac{1}{2} \mbox{}_2 \bar{U}_{ab}\, \bar{x}^a \bar{x}^b + \frac{1}{6} \mbox{}_3 \bar{U}_{abc}\, \bar{x}^a \bar{x}^b \bar{x}^c + \frac{1}{24} \mbox{}_4 \bar{U}_{abcd}\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5), \\ \bar{U}_j &= \mbox{}_0 \bar{U}_j + \mbox{}_1 \bar{U}_{ja}\, \bar{x}^a + \frac{1}{2} \mbox{}_2 \bar{U}_{jab}\, \bar{x}^a \bar{x}^b + \frac{1}{6} \mbox{}_3 \bar{U}_{jabc}\, \bar{x}^a \bar{x}^b \bar{x}^c + \frac{1}{24} \mbox{}_4 \bar{U}_{jabcd}\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5), \\ \bar{\Psi} &= -\frac{M}{\bar{r}^3} F_a \bar{x}^a + \frac{M}{\bar{r}} ( \dot{A} - 2v^2 + \mu ) + \mbox{}_0 \bar{\Psi} + \mbox{}_1 \bar{\Psi}_a\, \bar{x}^a + \frac{1}{2} \mbox{}_2 \bar{\Psi}_{ab}\, \bar{x}^a \bar{x}^b + \frac{1}{6} \mbox{}_3 \bar{\Psi}_{abc}\, \bar{x}^a \bar{x}^b \bar{x}^c \nonumber \\ & \quad \mbox{} + \frac{1}{24} \mbox{}_4 \bar{\Psi}_{abcd}\, \bar{x}^a \bar{x}^b \bar{x}^c \bar{x}^d + O(\bar{r}^5), \end{align} \end{subequations} where $F_a :=
H_a - A v_a$ and the remaining expansion coefficients are given by \begin{subequations} \begin{align} \mbox{}_0 \bar{U} &= \hat{U}_{\rm ext} - \dot{A} + \frac{1}{2} v^2, \\ \mbox{}_1 \bar{U}_a &= \partial_{\bar{a}} \hat{U}_{\rm ext} - a_a(0\pn), \\ \mbox{}_2 \bar{U}_{ab} &= \partial_{\bar{a}\bar{b}} \hat{U}_{\rm ext}, \\ \mbox{}_3 \bar{U}_{abc} &= \partial_{\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext}, \\ \mbox{}_4 \bar{U}_{abcd} &= \partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext}, \end{align} \end{subequations} \begin{subequations} \begin{align} 4\, \mbox{}_0 \bar{U}_j &= 4 \hat{P}_j + (2\dot{A} - v^2) v_j - \dot{H}_j + \epsilon_{jpq} v^p R^q + \mbox{}_1 \beta_j, \\ 4\, \mbox{}_1 \bar{U}_{ja} &= 4 \partial_{\bar{a}} \hat{P}_j + \frac{3}{2} v_j a_a + \frac{1}{2} a_j v_a + (\ddot{A} - 2 v_p a^p) \delta_{ja} - \epsilon_{jap} \dot{R}^p + \mbox{}_2 \beta_{ja}, \\ 4\, \mbox{}_2 \bar{U}_{jab} &= 4 \partial_{\bar{a}\bar{b}} \hat{P}_j + 2\delta_{j(a} \dot{a}_{b)} - \delta_{ab} \dot{a}_j + \mbox{}_3 \beta_{jab}, \\ 4\, \mbox{}_3 \bar{U}_{jabc} &= 4 \partial_{\bar{a}\bar{b}\bar{c}} \hat{P}_j + \mbox{}_4 \beta_{jabc}, \\ 4\, \mbox{}_4 \bar{U}_{jabcd} &= 4 \partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P}_j + \mbox{}_5 \beta_{jabcd}, \end{align} \end{subequations} with $\hat{P}_j := \hat{U}^{\rm ext}_j - v_j \hat{U}^{\rm ext}$ and $a_a \equiv a_a(0\pn)$, and \begin{subequations} \begin{align} \mbox{}_0 \bar{\Psi} &= \hat{P} + A \partial_{\bar{t}} \hat{U}_{\rm ext} + F^p \partial_{\bar{p}} \hat{U}_{\rm ext} + \frac{1}{2} \dot{A}^2 - \dot{A} v^2 + \frac{1}{4} v^4 + \dot{H}_p v^p - \mbox{}_0 \dot{\beta}, \\ \mbox{}_1 \bar{\Psi}_a &= \partial_{\bar{a}} \hat{P} + A \partial_{\bar{t}\bar{a}} \hat{U}_{\rm ext} + v_a \partial_{\bar{t}} \hat{U}_{\rm ext} + F^p \partial_{\bar{p}\bar{a}} \hat{U}_{\rm ext} + \Bigl( \dot{A} - \frac{1}{2} v^2 \Bigr) \bigl[ a_a(0\pn) - \partial_{\bar{a}} \hat{U}_{\rm ext} \bigr] - \frac{1}{2} v_a v^p \partial_{\bar{p}} \hat{U}_{\rm ext} \nonumber \\ & 
\quad \mbox{} + \epsilon_{pqa} R^p \partial^{\bar{q}} \hat{U}_{\rm ext} - a_a(1\pn) - \Bigl[ \ddot{A} - \frac{3}{2} v_p a^p(0\pn) \Bigr] v_a - \epsilon_{apq} v^p \dot{R}^q - \mbox{}_1 \dot{\beta}_a, \\ \mbox{}_2 \bar{\Psi}_{ab} &= \partial_{\bar{a}\bar{b}} \hat{P} + A \partial_{\bar{t}\bar{a}\bar{b}} \hat{U}_{\rm ext} + 2 v_{(a} \partial_{\bar{t}\bar{b})} \hat{U}_{\rm ext} + F^p \partial_{\bar{p}\bar{a}\bar{b}} \hat{U}_{\rm ext} - 2 \Bigl( \dot{A} - \frac{1}{2} v^2 \Bigr) \partial_{\bar{a}\bar{b}} \hat{U}_{\rm ext} - v^p v_{(a} \partial_{\bar{b})\bar{p}} \hat{U}_{\rm ext} \nonumber \\ & \quad \mbox{} + 2\epsilon_{pq(a} R^p \partial^{\bar{q}}_{\ \bar{b})} \hat{U}_{\rm ext} - 2 a_{(a}(0\pn) \partial_{\bar{b})} \hat{U}_{\rm ext} + \delta_{ab}\, a^{p}(0\pn) \partial_{\bar{p}} \hat{U}_{\rm ext} + a_a(0\pn) a_b(0\pn) \nonumber \\ & \quad \mbox{} - 2 v_{(a} \dot{a}_{b)}(0\pn) + \delta_{ab}\, v_p \dot{a}^p(0\pn) - \mbox{}_2 \dot{\beta}_{ab}, \\ \mbox{}_3 \bar{\Psi}_{abc} &= \partial_{\bar{a}\bar{b}\bar{c}} \hat{P} + A \partial_{\bar{t}\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext} + 3 v_{(a} \partial_{\bar{t}\bar{b}\bar{c})} \hat{U}_{\rm ext} + F^p \partial_{\bar{p}\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext} - 3 \Bigl( \dot{A} - \frac{1}{2} v^2 \Bigr) \partial_{\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext} - \frac{3}{2} v^p v_{(a} \partial_{\bar{b}\bar{c})\bar{p}} \hat{U}_{\rm ext} \nonumber \\ & \quad \mbox{} + 3\epsilon_{pq(a} R^p \partial^{\bar{q}}_{\ \bar{b}\bar{c})} \hat{U}_{\rm ext} - 6 a_{(a}(0\pn) \partial_{\bar{b}\bar{c})} \hat{U}_{\rm ext} + 3 \delta_{(ab}\, a^{p}(0\pn) \partial_{\bar{c})\bar{p}} \hat{U}_{\rm ext} - \mbox{}_3 \dot{\beta}_{abc}, \\ \mbox{}_4 \bar{\Psi}_{abcd} &= \partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P} + A \partial_{\bar{t}\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext} + 4 v_{(a} \partial_{\bar{t}\bar{b}\bar{c}\bar{d})} \hat{U}_{\rm ext} + F^p \partial_{\bar{p}\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext} - 4 \Bigl( \dot{A} - \frac{1}{2} v^2 \Bigr) 
\partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext} - 2 v^p v_{(a} \partial_{\bar{b}\bar{c}\bar{d})\bar{p}} \hat{U}_{\rm ext} \nonumber \\ & \quad \mbox{} + 4\epsilon_{pq(a} R^p \partial^{\bar{q}}_{\ \bar{b}\bar{c}\bar{d})} \hat{U}_{\rm ext} - 12 a_{(a}(0\pn) \partial_{\bar{b}\bar{c}\bar{d})} \hat{U}_{\rm ext} + 6 \delta_{(ab}\, a^{p}(0\pn) \partial_{\bar{c}\bar{d})\bar{p}} \hat{U}_{\rm ext} - \mbox{}_4 \dot{\beta}_{abcd}, \end{align} \end{subequations} with $\hat{P} := \hat{\Psi}_{\rm ext} - 4 v_p \hat{U}^p_{\rm ext} + 2 v^2 \hat{U}_{\rm ext}$. As stated previously, it is understood that all external potentials and their derivatives are evaluated at $\bar{x}^a = 0$. \subsection{Matching} \label{subsec:matching} The potentials of Eq.~(\ref{PNpotentials_transf}) are the barycentric potentials transformed to the black-hole frame, and they must agree with the potentials of Eq.~(\ref{PNpotentials}), obtained in the post-Newtonian expansion of the black-hole metric. A precise match between the expressions shall reveal the details of the coordinate transformation, the identity of the metric function $\mu(\bar{t})$, and the tidal moments. The Newtonian potentials match at order $\bar{r}^{-1}$, and a match at order $\bar{r}^0$ implies \begin{equation} \dot{A} = \hat{U}_{\rm ext} + \frac{1}{2} v^2. \label{Adot} \end{equation} A match at order $\bar{r}^1$ further reveals that $a_a(0\pn) = \partial_{\bar{a}} \hat{U}^{\rm ext}$. Examining now the post-Newtonian potentials $\bar{\Psi}$, a match at order $\bar{r}^{-3}$ implies that $F_a = 0$, so that $H_a = A v_a$. A match at order $\bar{r}^{-1}$ further produces $\mu = \frac{3}{2} v^2 - \hat{U}_{\rm ext}$. A match at order $\bar{r}^0$ then yields \begin{equation} \mbox{}_0 \dot{\beta} = \hat{\Psi}_{\rm ext} - 4 v_p \hat{U}^p_{\rm ext} + \frac{1}{2} \hat{U}^2_{\rm ext} + \frac{5}{2} v^2 \hat{U}_{\rm ext} + \frac{3}{8} v^4 + A \bigl[ \partial_{\bar{t}} \hat{U}_{\rm ext} + v_p a^p(0\pn) \bigr]. 
\label{beta_0} \end{equation} Turning next to the vector potentials $\bar{U}_j$, a match at order $\bar{r}^0$ reveals that \begin{equation} \mbox{}_1 \beta_a = -4 \hat{U}_a^{\rm ext} + \Bigl( 3 \hat{U}_{\rm ext} + \frac{1}{2} v^2 \Bigr) v_a + A a_a(0\pn) - \epsilon_{apq} v^p R^q. \label{beta_1} \end{equation} A match at order $\bar{r}^1$ produces \begin{equation} 4 \partial_{\bar{b}} \hat{P}_a + \frac{3}{2} v_a a_b(0\pn) + \frac{1}{2} a_a(0\pn) v_b - \delta_{ab}\, \partial_{\bar{p}} \hat{U}^p_{\rm ext} - \epsilon_{abp} \dot{R}^p + \mbox{}_2 \beta_{ab} = 0 \end{equation} after making use of the harmonic condition of Eq.~(\ref{harmonic_hatted}). Taking the symmetric part of this equation yields \begin{equation} \mbox{}_2 \beta_{ab} = -4 \partial_{(\bar{a}} \hat{U}^{\rm ext}_{b)} + 2 v_{(a} a_{b)}(0\pn) + \delta_{ab}\, \partial_{\bar{p}} \hat{U}^p_{\rm ext}, \label{beta_2} \end{equation} while taking the antisymmetric part implies \begin{equation} \epsilon_{abp} \dot{R}^p = -4 \partial_{[\bar{a}} \hat{U}^{\rm ext}_{b]} - 3 v_{[a} a_{b]}(0\pn). \label{Rdot} \end{equation} Returning to $\bar{\Psi}$, we find that a match at order $\bar{r}^1$ requires \begin{equation} a_a(1\pn) = \partial_{\bar{a}} \hat{\Psi}_{\rm ext} - 4 v_p \partial_{\bar{a}} \hat{U}^p_{\rm ext} + 4 \partial_{\bar{t}} \hat{U}^{\rm ext}_a + (v^2 - 4 \hat{U}_{\rm ext}) a_a(0\pn) - \bigl[ 3\partial_{\bar{t}} \hat{U}_{\rm ext} + v_p a^p(0\pn) \bigr] v_a. \end{equation} At this stage the acceleration of the black hole in the barycentric frame, $a^a = a^a(0\pn) + a^a(1\pn)$, is determined, and some important details of the coordinate transformation are provided by Eqs.~(\ref{Adot}) and (\ref{Rdot}). We next match the terms in the potentials that occur at order $\bar{r}^2$ and beyond.
The Newtonian potential returns \begin{equation} \bar{\E}_{ab}(0\pn) = -\partial_{\bar{a}\bar{b}} \hat{U}_{\rm ext}, \qquad \bar{\E}_{abc}(0\pn) = -\partial_{\bar{a}\bar{b}\bar{c}} \hat{U}_{\rm ext}, \qquad 2\bar{\E}_{abcd}(0\pn) = -\partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{U}_{\rm ext}, \label{E_0pn} \end{equation} where (as stated previously) the hatted potentials are evaluated at $\bar{x}^a = 0$ after differentiation. From the vector potential at order $\bar{r}^2$ we get the matching condition \begin{equation} -\frac{2}{3} \bigl( \epsilon_{ja}^{\ \ \, p} \bar{\B}_{pb} + \epsilon_{jb}^{\ \ \, p} \bar{\B}_{pa} \bigr) = 4 \partial_{\bar{a}\bar{b}} \hat{P}_j + \delta_{ja} \dot{a}_b + \delta_{jb} \dot{a}_a - \delta_{ab} \dot{a}_j + \mbox{}_3 \beta_{jab}, \end{equation} in which $a_a \equiv a_a(0\pn)$ --- the same shorthand will be employed in all equations below. To extract the consequences of this equation we decompose $\mbox{}_3 \beta_{jab}$ into irreducible pieces according to Eq.~(\ref{Sdecomp3}), and the remaining terms of the right-hand side are decomposed according to Eq.~(\ref{Adecomp3}). Equating all this with the left-hand side reveals that \begin{equation} \mbox{}_3 \beta_\stf{abc} = -4 \partial_{\langle \bar{a}\bar{b}} \bigl( \hat{U}^{\rm ext}_{c\rangle} - v_{c\rangle} \hat{U}^{\rm ext} \bigr),\qquad \mbox{}_3 \beta_a = \dot{a}_a, \label{beta_3} \end{equation} and \begin{equation} \bar{\B}_{ab} = 2 \epsilon^{pq}_{\ \ (a} \partial_{\bar{b})\bar{p}} \bigl( \hat{U}^{\rm ext}_{q} - v_{q} \hat{U}^{\rm ext} \bigr). \label{Bab} \end{equation} To arrive at these results we made use of the harmonic condition of Eq.~(\ref{harmonic_hatted}) and the field equations (\ref{PNeqn_hatted1}). 
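A structural check on Eq.~(\ref{Bab}) (a sympy sketch, not part of the original derivation): the gravitomagnetic quadrupole is automatically trace-free, because its trace contracts the antisymmetric Levi-Civita symbol with the symmetric second-derivative operator. Below $A_q$ stands for the combination $\hat{U}^{\rm ext}_q - v_q \hat{U}_{\rm ext}$, left as arbitrary functions:

```python
# Trace of B_ab = 2 eps^{pq}_(a d_{b)p} A_q vanishes identically,
# for any vector field A_q.
import sympy as sp
from sympy import LeviCivita

X = sp.symbols('x1 x2 x3')
A = [sp.Function(f'A{q}')(*X) for q in range(3)]

def B(a, b):
    # B_ab = 2 eps^{pq}_(a  d_{b)p} A_q   [Eq. (Bab)]
    term = lambda i, j: sum(LeviCivita(p, q, i) * sp.diff(A[q], X[j], X[p])
                            for p in range(3) for q in range(3))
    return term(a, b) + term(b, a)

trace = sp.simplify(sum(B(a, a) for a in range(3)))
print(trace)   # -> 0: B_ab is trace-free by construction
```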
At the next order we get \begin{equation} -\frac{2}{3} \bigl( \epsilon_{ja}^{\ \ \, p} \bar{\B}_{pbc} + \epsilon_{jc}^{\ \ \, p} \bar{\B}_{pab} + \epsilon_{jb}^{\ \ \, p} \bar{\B}_{pca} \bigr) + \frac{4}{3} \bigl( \delta_{ja}\, \dot{\bar{\E}}_{bc} + \delta_{jc}\, \dot{\bar{\E}}_{ab} + \delta_{jb}\, \dot{\bar{\E}}_{ca} \bigr) = 4\partial_{\bar{a}\bar{b}\bar{c}} \hat{P}_j + \mbox{}_4 \beta_{jabc}, \end{equation} and we decompose $\mbox{}_4 \beta_{jabc}$ according to Eq.~(\ref{Sdecomp4}), and $4\partial_{\bar{a}\bar{b}\bar{c}} \hat{P}_j$ according to Eq.~(\ref{Adecomp4}). After simplifying the results with the harmonic condition, the field equations, and Eq.~(\ref{E_0pn}), we arrive at \begin{equation} \mbox{}_4 \beta_\stf{abcd} = -4 \partial_{\langle \bar{a}\bar{b}\bar{c}} \bigl( \hat{U}^{\rm ext}_{d\rangle} - v_{d\rangle} \hat{U}^{\rm ext} \bigr), \qquad \mbox{}_4 \beta_\stf{ab} = \frac{8}{3} \dot{\bar{\E}}_{ab}, \qquad \mbox{}_4 \beta = 0, \label{beta_4} \end{equation} and \begin{equation} \bar{\B}_{abc} = \frac{3}{2} \epsilon^{pq}_{\ \ (a} \partial_{\bar{b}\bar{c})\bar{p}} \bigl( \hat{U}^{\rm ext}_{q} - v_{q} \hat{U}^{\rm ext} \bigr). \label{Babc} \end{equation} Matching the vector potentials at order $\bar{r}^4$ produces \begin{equation} -\frac{4}{3} \bigl( \epsilon_{ja}^{\ \ \, p} \bar{\B}_{pbcd} + \epsilon_{jd}^{\ \ \, p} \bar{\B}_{pabc} + \epsilon_{jc}^{\ \ \, p} \bar{\B}_{pdab} + \epsilon_{jb}^{\ \ \, p} \bar{\B}_{pcda} \bigr) + \bigl( \delta_{ja}\, \dot{\bar{\E}}_{bcd} + \delta_{jd}\, \dot{\bar{\E}}_{abc} + \delta_{jc}\, \dot{\bar{\E}}_{dab} + \delta_{jb}\, \dot{\bar{\E}}_{cda} \bigr) = 4\partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P}_j + \mbox{}_5 \beta_{jabcd}. 
\end{equation} After decomposing $\mbox{}_5 \beta_{jabcd}$ according to Eq.~(\ref{Sdecomp5}), $4\partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P}_j$ according to Eq.~(\ref{Adecomp4}), and simplifying, we arrive at \begin{equation} \mbox{}_5 \beta_\stf{abcde} = -4 \partial_{\langle \bar{a}\bar{b}\bar{c}\bar{d}} \bigl( \hat{U}^{\rm ext}_{e\rangle} - v_{e\rangle} \hat{U}^{\rm ext} \bigr), \qquad \mbox{}_5 \beta_\stf{abc} = 2 \dot{\bar{\E}}_{abc}, \qquad \mbox{}_5 \beta_a = 0 \label{beta_5} \end{equation} and \begin{equation} \bar{\B}_{abcd} = \frac{3}{5} \epsilon^{pq}_{\ \ (a} \partial_{\bar{b}\bar{c}\bar{d})\bar{p}} \bigl( \hat{U}^{\rm ext}_{q} - v_{q} \hat{U}^{\rm ext} \bigr). \label{Babcd} \end{equation} At this stage the gravitomagnetic tidal moments are all determined, as well as the details of the coordinate transformation contained in the function $\beta(\bar{t},\bm{\bar{x}})$. We finally return to the post-Newtonian potential. Matching at order $\bar{r}^2$ implies that \begin{equation} \bar{\E}_{ab}(1\pn) = -\partial_{\bar{a}\bar{b}} \hat{P} - 2 \hat{U}_{\rm ext}\, \bar{\E}_{ab} - v_{(a} \bar{\E}_{b)p} v^p + a_a a_b - \delta_{ab} \bigl( a^2 + v_p \dot{a}^p \bigr) + \mbox{}_2 \dot{\beta}_{ab} + A \dot{\bar{\E}}_{ab} + 2 \epsilon_{pq(a} R^p \bar{\E}^q_{\ b)}, \end{equation} where $\hat{P} := \hat{\Psi}_{\rm ext} - 4 v_p \hat{U}^p_{\rm ext} + 2 v^2 \hat{U}_{\rm ext}$, $a_a \equiv a_a(0\pn)$, and $\bar{\E}_{ab} \equiv \bar{\E}_{ab}(0\pn)$ on the right-hand side of the equation --- a similar shorthand will be employed in all equations below. 
Making use of Eq.~(\ref{beta_2}) and decomposing all tensors into irreducible pieces, we arrive at \begin{equation} \bar{\E}_{ab}(1\pn) = -\partial_\stf{\bar{a}\bar{b}} \hat{P} - 4 \partial_{\bar{t} \langle \bar{a}} \hat{U}^{\rm ext}_{b\rangle} - 2 \hat{U}_{\rm ext}\, \bar{\E}_{ab} - v_{\langle a} \bar{\E}_{b\rangle p} v^p + 3 a_{\langle a} a_{b \rangle} + 2 v_{\langle a} \dot{a}_{b\rangle} + A \dot{\bar{\E}}_{ab} + 2 \epsilon_{pq(a} R^p \bar{\E}^q_{\ b)}. \label{Eab} \end{equation} At order $\bar{r}^3$ we get \begin{equation} \bar{\E}_{abc}(1\pn) = -\partial_{\bar{a}\bar{b}\bar{c}} \hat{P} + 3 v_{(a} \dot{\bar{\E}}_{bc)} - 3 \hat{U}_{\rm ext}\, \bar{\E}_{abc} - \frac{3}{2} v_{(a} \bar{\E}_{bc)p} v^p - 6 a_{(a} \bar{\E}_{bc)} + 3 \delta_{(ab} \bar{\E}_{c)p} a^p + \mbox{}_3 \dot{\beta}_{abc} + A \dot{\bar{\E}}_{abc} + 3 \epsilon_{pq(a} R^p \bar{\E}^q_{\ bc)}, \end{equation} and substitution of Eq.~(\ref{beta_3}) and decomposition into irreducible pieces yields \begin{equation} \bar{\E}_{abc}(1\pn) = -\partial_\stf{\bar{a}\bar{b}\bar{c}} \hat{P} - 4 \partial_{\bar{t} \langle \bar{a}\bar{b}} \hat{U}^{\rm ext}_{c\rangle} - v_{\langle a} \dot{\bar{\E}}_{bc\rangle} - 10 a_{\langle a} \bar{\E}_{bc\rangle} - 3 \hat{U}_{\rm ext}\, \bar{\E}_{abc} - \frac{3}{2} v_{\langle a} \bar{\E}_{bc\rangle p} v^p + A \dot{\bar{\E}}_{abc} + 3 \epsilon_{pq(a} R^p \bar{\E}^q_{\ bc)}. 
\label{Eabc} \end{equation} At order $\bar{r}^4$ the matching condition is \begin{align} & 2 \bar{\E}_{abcd}(1\pn) + \frac{11}{21} \bigl( \delta_{ab} \ddot{\bar{\E}}_{cd} + \mbox{all symmetric permutations} \bigr) - 10 \bar{\E}_{\langle ab} \bar{\E}_{cd \rangle} = -\partial_{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P} + 4 v_{(a} \dot{\bar{\E}}_{bcd)} - 8 \hat{U}_{\rm ext}\, \bar{\E}_{abcd} \nonumber \\ & \quad \mbox{} - 4 v_{(a} \bar{\E}_{bcd)p} v^p - 12 a_{(a} \bar{\E}_{bcd)} + 6 \delta_{(ab} \bar{\E}_{cd)p} a^p + \mbox{}_4 \dot{\beta}_{abcd} + 2A \dot{\bar{\E}}_{abcd} + 8 \epsilon_{pq(a} R^p \bar{\E}^q_{\ bcd)}, \end{align} and this yields \begin{align} 2 \bar{\E}_{abcd}(1\pn) &= -\partial_\stf{\bar{a}\bar{b}\bar{c}\bar{d}} \hat{P} - 4 \partial_{\bar{t} \langle \bar{a}\bar{b}\bar{c}} \hat{U}^{\rm ext}_{d\rangle} + 10 \bar{\E}_{\langle ab} \bar{\E}_{cd \rangle} - 16 a_{\langle a} \bar{\E}_{bcd\rangle} \nonumber \\ & \quad \mbox{} - 8 \hat{U}_{\rm ext}\, \bar{\E}_{abcd} - 4 v_{\langle a} \bar{\E}_{bcd\rangle p} v^p + 2 A \dot{\bar{\E}}_{abcd} + 8 \epsilon_{pq(a} R^p \bar{\E}^q_{\ bcd)} \label{Eabcd} \end{align} after substituting Eq.~(\ref{beta_4}) and decomposing all terms into irreducible pieces. At this stage the gravitoelectric tidal moments are all determined, and the matching procedure has come to a close. \subsection{Barycentric tidal moments} The tidal moments obtained in the preceding subsection are defined in the black-hole frame $(\bar{t},\bar{x}^a)$. For our purposes in Sec.~\ref{sec:2body}, it is convenient to follow Racine and Flanagan \cite{racine-flanagan:05} and introduce barycentric versions of these moments.
We do so with the transformations \begin{subequations} \label{moment_transf1} \begin{align} \E_{ab}(t) &:= {\cal M}_a^{\ j}(\bar{t}) {\cal M}_b^{\ k}(\bar{t})\, \bar{\E}_{jk}(\bar{t}), \\ \E_{abc}(t) &:= {\cal M}_a^{\ j}(\bar{t}) {\cal M}_b^{\ k}(\bar{t}) {\cal M}_c^{\ m}(\bar{t})\, \bar{\E}_{jkm}(\bar{t}), \\ \E_{abcd}(t) &:= {\cal M}_a^{\ j}(\bar{t}) {\cal M}_b^{\ k}(\bar{t}) {\cal M}_c^{\ m}(\bar{t}) {\cal M}_d^{\ n}(\bar{t})\, \bar{\E}_{jkmn}(\bar{t}), \end{align} \end{subequations} as well as similar ones relating $\B_{ab\cdots}(t)$ to $\bar{\B}_{ab\cdots}(\bar{t})$. The transformation matrix is defined by \begin{equation} {\cal M}_{aj}(\bar{t}) := \delta_{aj} + \epsilon_{ajp} R^p(\bar{t}) + 2\pn, \label{moment_transf2} \end{equation} with $R^p$ determined by Eq.~(\ref{Rdot}). Because the tidal moments are tensors defined at $\bar{x}^a=0$, the relation between the time coordinates is given by \begin{equation} t = \bar{t} + A(\bar{t}) + 2\pn, \label{moment_transf3} \end{equation} with $A$ determined by Eq.~(\ref{Adot}). We apply Eq.~(\ref{moment_transf1}) to the tidal moments obtained previously, and express the results in terms of the original potentials $U$, $U_j$, and $\Psi$ (instead of the hatted ones). 
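In carrying this out it is useful to note that the matrix of Eq.~(\ref{moment_transf2}) is an infinitesimal rotation: ${\cal M} {\cal M}^T$ differs from the identity only at second order in $R^p$, which is of $2\pn$ order. A quick sympy check (illustrative, not part of the original text):

```python
# M_{aj} = delta_{aj} + eps_{ajp} R^p is orthogonal to linear order in R.
# The symbol 'epsilon' is a formal book-keeping parameter for powers of R.
import sympy as sp
from sympy import LeviCivita, KroneckerDelta, Matrix, eye, zeros

R = sp.symbols('R1 R2 R3')
eps = sp.Symbol('epsilon')

Mcal = Matrix(3, 3, lambda a, j: KroneckerDelta(a, j)
              + eps * sum(LeviCivita(a, j, p) * R[p] for p in range(3)))

residual = (Mcal * Mcal.T - eye(3)).applyfunc(sp.expand)
linear = residual.applyfunc(lambda e: e.coeff(eps, 1))
print(linear)   # zero matrix: deviations from orthogonality are O(R^2)
```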
Recalling the differentiation rule of Eq.~(\ref{diff_rule}), we find that the barycentric versions of the gravitoelectric tidal moments are given by \begin{subequations} \label{Eab_bary} \begin{align} \E_{ab} &= \E_{ab}(0\pn) + \E_{ab}(1\pn) + 2\pn, \\ \E_{ab}(0\pn) &= -\partial_{ab} U_{\rm ext}, \\ \E_{ab}(1\pn) &= -\partial_\stf{ab} \Psi_{\rm ext} - 4 \partial_{t \langle a} U^{\rm ext}_{b\rangle} + 4 \bigl( \partial_\stf{ab} U^{\rm ext}_p - \partial_{p \langle a} U^{\rm ext}_{b\rangle} \bigr) v^p + 2 (v^2 - U_{\rm ext}) \E_{ab} \nonumber \\ & \quad \mbox{} - v_{\langle a} \E_{b\rangle p} v^p + 3 a_{\langle a} a_{b \rangle} + 2 v_{\langle a} \dot{a}_{b\rangle}, \end{align} \end{subequations} \begin{subequations} \label{Eabc_bary} \begin{align} \E_{abc} &= \E_{abc}(0\pn) + \E_{abc}(1\pn) + 2\pn, \\ \E_{abc}(0\pn) &= -\partial_{abc} U_{\rm ext}, \\ \E_{abc}(1\pn) &= -\partial_\stf{abc} \Psi_{\rm ext} - 4 \partial_{t \langle ab} U^{\rm ext}_{c\rangle} + 4 \bigl( \partial_\stf{abc} U^{\rm ext}_p - \partial_{p \langle ab} U^{\rm ext}_{c\rangle} \bigr) v^p + (2v^2 - 3U_{\rm ext}) \E_{abc} \nonumber \\ & \quad \mbox{} - \frac{3}{2} v_{\langle a} \E_{bc\rangle p} v^p - v_{\langle a} \dot{\E}_{bc\rangle} - 10 a_{\langle a} \E_{bc\rangle}, \end{align} \end{subequations} and \begin{subequations} \label{Eabcd_bary} \begin{align} \E_{abcd} &= \E_{abcd}(0\pn) + \E_{abcd}(1\pn) + 2\pn, \\ 2\E_{abcd}(0\pn) &= -\partial_{abcd} U_{\rm ext}, \\ 2\E_{abcd}(1\pn) &= -\partial_\stf{abcd} \Psi_{\rm ext} - 4 \partial_{t \langle abc} U^{\rm ext}_{d\rangle} + 4 \bigl( \partial_\stf{abcd} U^{\rm ext}_p - \partial_{p \langle abc} U^{\rm ext}_{d\rangle} \bigr) v^p + 4(v^2 - 2U_{\rm ext}) \E_{abcd} \nonumber \\ & \quad \mbox{} - 4 v_{\langle a} \E_{bcd\rangle p} v^p - 16 a_{\langle a} \E_{bcd \rangle} + 10 \E_{\langle ab} \E_{cd\rangle}.
\end{align} \end{subequations} The external potentials are evaluated at $x^a = r^a(t)$ after differentiation, and it is understood that in the expressions for the post-Newtonian terms, $a_a \equiv a_a(0\pn) = \partial_a U_{\rm ext}$, $\E_{ab} \equiv \E_{ab}(0\pn)$, and so on. Because the gravitomagnetic moments are quantities of $1\pn$ order, the transformations analogous to those of Eq.~(\ref{moment_transf1}) have a trivial effect on them. The barycentric versions of these moments are therefore given by \begin{subequations} \label{Bmoments_bary} \begin{align} \B_{ab} &= 2 \epsilon^{pq}_{\ \ (a} \partial_{b) p} \bigl( U^{\rm ext}_{q} - v_{q} U^{\rm ext} \bigr), \label{Bab_bary} \\ \B_{abc} &= \frac{3}{2} \epsilon^{pq}_{\ \ (a} \partial_{bc) p} \bigl( U^{\rm ext}_{q} - v_{q} U^{\rm ext} \bigr), \label{Babc_bary} \\ \B_{abcd} &= \frac{3}{5} \epsilon^{pq}_{\ \ (a} \partial_{bcd) p} \bigl( U^{\rm ext}_{q} - v_{q} U^{\rm ext} \bigr). \label{Babcd_bary} \end{align} \end{subequations} Here also the external potentials are evaluated at $x^a = r^a(t)$ after differentiation. \section{Tidal moments for a two-body system} \label{sec:2body} The tidal moments obtained in Sec.~\ref{sec:matching} are expressed in terms of generic external potentials that could describe any post-Newtonian spacetime. In this section we specialize them to the case in which the black hole is a member of a two-body system. \subsection{External potentials} We take the spacetime to contain a body of mass $M_2$ at position $\bm{r}_2(t)$ in addition to the black hole. We let $\bm{v}_2(t) := d\bm{r}_2/dt$ and $\bm{a}_2(t) := d\bm{v}_2/dt$. We refine the notation employed in Sec.~\ref{sec:matching}: the black hole's mass shall again be denoted $M_1$ instead of $M$, and its position in the barycentric frame shall be $\bm{r}_1(t)$ instead of $\bm{r}$; we also set $\bm{v}_1(t) := d\bm{r}_1/dt$ (previously $\bm{v}$) and $\bm{a}_1(t) := d\bm{v}_1/dt$ (previously $\bm{a}$).
We take the external body to be another post-Newtonian monopole, and write the external potentials as \begin{equation} U_{\rm ext} = \frac{M_2}{s}, \qquad U^j_{\rm ext} = \frac{M_2 v_2^j}{s}, \qquad \psi_{\rm ext} = \frac{M_2 \mu_2}{s}, \qquad X_{\rm ext} = M_2 s, \end{equation} where $s$ now stands for $|\bm{x}-\bm{r}_2|$, the Euclidean distance between the field point $\bm{x}$ and the external body. We recall that $\Psi_{\rm ext} = \psi_{\rm ext} + \frac{1}{2} \partial_{tt} X_{\rm ext}$. According to our findings in Sec.~\ref{subsec:matching}, the post-Newtonian correction to the mass parameter is given by $\mu_2 = \frac{3}{2} v_2^2 - M_1/b$, where $b$ is the inter-body distance. We let $\bm{b} := \bm{r}_1 - \bm{r}_2$ be the separation between the bodies and $b := |\bm{b}|$ its magnitude; $\bm{n} := \bm{b}/b$ is a unit vector directed from the external body to the black hole. The relative velocity is $\bm{v} := \bm{v}_1 - \bm{v}_2$, and $\dot{b} = \bm{v} \cdot \bm{n}$ is its radial component. The post-Newtonian equations of motion imply that $\bm{a}_1 = -(M_2/b^2) \bm{n} + 1\pn$ and $\bm{a}_2 = (M_1/b^2) \bm{n} + 1\pn$. They also imply that $\bm{v}_1 = q_2 \bm{v} + 1\pn$ and $\bm{v}_2 = -q_1 \bm{v} + 1\pn$, where \begin{equation} q_1 := \frac{M_1}{M_1 + M_2}, \qquad q_2 := \frac{M_2}{M_1 + M_2} \end{equation} are the mass ratios, constrained by the identity $q_1 + q_2 = 1$. The various derivatives of the external potentials are evaluated with the help of results collected in Appendix~\ref{sec:distance}. The position $\bm{x}$ is set equal to $\bm{r}_1$ after differentiation, and in this limit the vector $\bm{s} := \bm{x} - \bm{r}_2$ becomes $\bm{b}$; similarly, $\bm{\hat{s}} \to \bm{n}$ and $s \to b$. The equations of motion are used to eliminate the accelerations $\bm{a}_1$ and $\bm{a}_2$ from all expressions, and to express the velocities $\bm{v}_1$ and $\bm{v}_2$ in terms of the relative velocity $\bm{v}$.
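As a sample of these evaluations (a sketch of ours; the systematic results are collected in Appendix~\ref{sec:distance}), the derivatives of $1/s$ that produce the Newtonian moments are

```latex
\partial_{ab} \frac{1}{s} = \frac{3 \hat{s}_a \hat{s}_b - \delta_{ab}}{s^3}
  = \frac{3}{s^3}\, \hat{s}_{\langle a} \hat{s}_{b\rangle}, \qquad
\partial_{abc} \frac{1}{s} = -\frac{15}{s^4}\,
  \hat{s}_{\langle a} \hat{s}_b \hat{s}_{c\rangle},
```

so that, setting $\bm{x} = \bm{r}_1$, the Newtonian quadrupole moment evaluates to $\E_{ab}(0\pn) = -\partial_{ab} U_{\rm ext} = -3 (M_2/b^3)\, n_{\langle a} n_{b\rangle}$.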
\subsection{Barycentric tidal moments} A long but straightforward calculation returns \begin{subequations} \label{Eab_2body} \begin{align} \E_{ab} &= \E_{ab}(0\pn) + \E_{ab}(1\pn) + 2\pn, \\ \E_{ab}(0\pn) &= -3 \frac{M_2}{b^3} n_\stf{ab}, \\ \E_{ab}(1\pn) &= -3 \frac{M_2}{b^3} \Biggl\{ \biggl[ 2 v^2 - \frac{5}{2} q_1^2\, \dot{b}^2 - \frac{1}{2} (6-q_1) \frac{M}{b} \biggr] n_\stf{ab} - (3-q_1^2)\, \dot{b}\, v_{\langle a} n_{b \rangle} + v_{\langle a} v_{b\rangle} \Biggr\}, \end{align} \end{subequations} \begin{subequations} \label{Eabc_2body} \begin{align} \E_{abc} &= \E_{abc}(0\pn) + \E_{abc}(1\pn) + 2\pn, \\ \E_{abc}(0\pn) &= 15\frac{M_2}{b^4} n_\stf{abc}, \\ \E_{abc}(1\pn) &= 15 \frac{M_2}{b^4} \Biggl\{ \biggl[ 2 v^2 - \frac{7}{2} q_1^2\, \dot{b}^2 - (5-3q_1) \frac{M}{b} \biggr] n_\stf{abc} - \frac{1}{2} (5-3q_1^2)\, \dot{b}\, v_{\langle a} n_b n_{c \rangle} + v_{\langle a} v_b n_{c\rangle} \Biggr\}, \end{align} \end{subequations} \begin{subequations} \label{Eabcd_2body} \begin{align} \E_{abcd} &= \E_{abcd}(0\pn) + \E_{abcd}(1\pn) + 2\pn, \\ \E_{abcd}(0\pn) &= -\frac{105}{2} \frac{M_2}{b^5} n_\stf{abcd}, \\ \E_{abcd}(1\pn) &= -\frac{15}{2} \frac{M_2}{b^5} \Biggl\{ \biggl[ 14 v^2 - \frac{63}{2} q_1^2\, \dot{b}^2 - \frac{25}{2} (4-3q_1) \frac{M}{b} \biggr] n_\stf{abcd} - 14 (1-q_1^2)\, \dot{b}\, v_{\langle a} n_b n_c n_{d \rangle} + 6 v_{\langle a} v_b n_c n_{d\rangle} \Biggr\} \end{align} \end{subequations} for the barycentric version of the gravitoelectric tidal moments, where $n_\stf{ab\cdots c}$ is shorthand for $n_{\langle a} n_b \cdots n_{c\rangle}$. The gravitomagnetic moments are given by \begin{subequations} \label{Bmoments_2body} \begin{align} \B_{ab} &= -6 \frac{M_2}{b^3} \epsilon_{pq(a} n_{b)} n^p v^q, \\ \B_{abc} &= \frac{45}{2} \frac{M_2}{b^4} \epsilon_{pq(a} \Bigl( n_b n_{c)} - \frac{1}{5} \delta_{bc)} \Bigr) n^p v^q, \\ \B_{abcd} &= -63 \frac{M_2}{b^5} \epsilon_{pq(a} \Bigl( n_b n_c n_{d)} - \frac{3}{7} \delta_{bc} n_{d)} \Bigr) n^p v^q. 
\end{align} \end{subequations} Alternative expressions are obtained if the relative velocity vector is decomposed according to \begin{equation} \bm{v} = \dot{b}\, \bm{n} + v_\perp\, \bm{\lambda}, \end{equation} in terms of radial and perpendicular components; $\bm{\lambda}$ is a unit vector orthogonal to $\bm{n}$. We recall that the post-Newtonian motion takes place in a fixed orbital plane, with a vanishing normal component for the velocity. The tidal moments become \begin{subequations} \label{Emoments_decomp} \begin{align} \E_{ab}(1\pn) &= 3 \frac{M_2}{b^3} \Biggl\{ \biggl[ \frac{3}{2} q_1^2\, \dot{b}^2 - 2 v_\perp^2 + \frac{1}{2} (6-q_1) \frac{M}{b} \biggr] n_\stf{ab} + (1-q_1^2)\, \dot{b} v_\perp \, \lambda_{\langle a} n_{b \rangle} - v_\perp^2 \lambda_{\langle a} \lambda_{b\rangle} \Biggr\}, \\ \E_{abc}(1\pn) &= 15 \frac{M_2}{b^4} \Biggl\{ \biggl[ \frac{1}{2}(1- 4q_1^2)\, \dot{b}^2 + 2 v_\perp^2 - (5-3q_1) \frac{M}{b} \biggr] n_\stf{abc} - \frac{1}{2} (1-3q_1^2)\, \dot{b} v_\perp\, \lambda_{\langle a} n_b n_{c \rangle} + v_\perp^2 \lambda_{\langle a} \lambda_b n_{c\rangle} \Biggr\}, \\ \E_{abcd}(1\pn) &= -\frac{15}{2} \frac{M_2}{b^5} \Biggl\{ \biggl[ \frac{1}{2} (12 - 35 q_1^2)\, \dot{b}^2 + 14 v_\perp^2 - \frac{25}{2} (4-3q_1) \frac{M}{b} \biggr] n_\stf{abcd} \nonumber \\ & \quad \mbox{} - 2 (1-7q_1^2)\, \dot{b} v_\perp\, \lambda_{\langle a} n_b n_c n_{d \rangle} + 6 v_\perp^2 \lambda_{\langle a} \lambda_b n_c n_{d\rangle} \Biggr\} \end{align} \end{subequations} and \begin{subequations} \label{Bmoments_decomp} \begin{align} \B_{ab} &= -6 \frac{M_2 v_\perp}{b^3} \epsilon_{pq(a} n_{b)} n^p \lambda^q, \\ \B_{abc} &= \frac{45}{2} \frac{M_2 v_\perp}{b^4} \epsilon_{pq(a} \Bigl( n_b n_{c)} - \frac{1}{5} \delta_{bc)} \Bigr) n^p \lambda^q, \\ \B_{abcd} &= -63 \frac{M_2 v_\perp}{b^5} \epsilon_{pq(a} \Bigl( n_b n_c n_{d)} - \frac{3}{7} \delta_{bc} n_{d)} \Bigr) n^p \lambda^q. 
\end{align} \end{subequations} The tidal moments in the black-hole frame are obtained by inverting the transformation of Eqs.~(\ref{moment_transf1}), (\ref{moment_transf2}), and (\ref{moment_transf3}). Making the relevant substitutions in Eq.~(\ref{Adot}), we find that $A$ is determined by \begin{equation} \frac{dA}{dt} = \frac{1}{2} (1-q_1)^2 \bigl( \dot{b}^2 + v_\perp^2 \bigr) + (1-q_1) \frac{M}{b}, \label{Adot_decomp} \end{equation} and \begin{equation} \frac{dR_a}{dt} = -\frac{1}{2} (1-q_1)(3+q_1) \frac{M v_\perp}{b^2} \epsilon_{abc} n^b \lambda^c \label{Rdot_decomp} \end{equation} follows from Eq.~(\ref{Rdot}). \subsection{Circular motion} Setting $\dot{b} = 0$ specializes the motion to a circular orbit with $b = \mbox{constant}$. The equations of motion imply (see, for example, Sec.~10.1.2 of Ref.~\cite{poisson-will:14}) \begin{equation} V \equiv v_\perp = \sqrt{\frac{M}{b}} \biggl[ 1 - \frac{1}{2}(3-q_1 q_2) \frac{M}{b} + 2\pn \biggr], \end{equation} and this equation reveals that $M/b = V^2 + 1\pn$. Making these substitutions in Eqs.~(\ref{Emoments_decomp}) produces \begin{subequations} \label{Emoments_circ} \begin{align} \E_{ab}(1\pn) &= 3 \frac{M_2 V^2}{b^3} \biggl[ \frac{1}{2} (2-q_1) n_\stf{ab} - \lambda_{\langle a} \lambda_{b\rangle} \biggr], \\ \E_{abc}(1\pn) &= -15 \frac{M_2 V^2}{b^4} \Bigl[ 3(1-q_1) n_\stf{abc} - \lambda_{\langle a} \lambda_b n_{c\rangle} \Bigr], \\ \E_{abcd}(1\pn) &= \frac{15}{2} \frac{M_2 V^2}{b^5} \biggl[ \frac{1}{2} (72 - 75q_1) n_\stf{abcd} - 6 \lambda_{\langle a} \lambda_b n_c n_{d\rangle} \biggr]. \end{align} \end{subequations} Equations~(\ref{Bmoments_decomp}) remain unchanged.
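As a consistency check (ours, not part of the original derivation), the quadrupole result is recovered by setting $\dot{b} = 0$, $v_\perp = V$, and $M/b = V^2 + 1\pn$ in the first of Eqs.~(\ref{Emoments_decomp}):

```latex
\E_{ab}(1\pn) = 3 \frac{M_2 V^2}{b^3}
  \Bigl[ -2 + \tfrac{1}{2}(6-q_1) \Bigr] n_\stf{ab}
  - 3 \frac{M_2 V^2}{b^3}\, \lambda_{\langle a} \lambda_{b\rangle}
  = 3 \frac{M_2 V^2}{b^3}
  \Bigl[ \tfrac{1}{2}(2-q_1)\, n_\stf{ab}
  - \lambda_{\langle a} \lambda_{b\rangle} \Bigr],
```

since $-2 + \frac{1}{2}(6-q_1) = \frac{1}{2}(2-q_1)$.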
In the case of a circular orbit the basis vectors can be given the explicit representation \begin{equation} \bm{n} = [ \cos(\omega t), \sin(\omega t), 0 ], \qquad \bm{\lambda} = [-\sin(\omega t), \cos(\omega t), 0], \label{vectorbasis1} \end{equation} where \begin{equation} \omega := \frac{V}{b} = \sqrt{\frac{M}{b^3}} \biggl[ 1 - \frac{1}{2}(3-q_1 q_2) \frac{M}{b} + 2\pn \biggr] \end{equation} is the orbital angular velocity in the barycentric frame. It is useful to complete the vector basis with \begin{equation} \bm{\ell} := [0, 0, 1], \label{vectorbasis2} \end{equation} a unit vector normal to the orbital plane. Equations (\ref{Adot_decomp}) and (\ref{Rdot_decomp}) become \begin{equation} \frac{dA}{dt} =\frac{1}{2} (1-q_1)(3-q_1) V^2, \qquad \frac{dR^a}{dt} = -\frac{1}{2}(1-q_1)(3+q_1) \sqrt{\frac{M}{b^3}} V^2\,\ell^a \end{equation} in the case of circular motion. The equations integrate to \begin{equation} A = k t, \qquad k = \frac{1}{2} (1-q_1)(3-q_1) V^2 \label{A_circular} \end{equation} and \begin{equation} R^a = -\Omega t\, \ell^a, \qquad \Omega := \frac{1}{2}(1-q_1)(3+q_1) \sqrt{\frac{M}{b^3}} V^2, \label{R_circular} \end{equation} with $\Omega$ denoting the precessional angular velocity of the black-hole frame relative to the barycentric frame. \subsection{Transformation to the black-hole frame} The transformation from the barycentric frame of the post-Newtonian spacetime to the black hole's moving frame is effected by inverting Eqs.~(\ref{moment_transf1}), (\ref{moment_transf2}), and (\ref{moment_transf3}). Under the inverse transformation, a vector $p_a(t)$ defined in the barycentric frame becomes \begin{equation} \bar{p}_a(\bar{t}) = \bigl[ {\cal M}^{-1}(t) \bigr]_a^{\ j}\, p_j(t) \end{equation} in the black-hole frame, with \begin{equation} \bigl[ {\cal M}^{-1}(t) \bigr]_{aj} = \delta_{aj} - \epsilon_{ajp} R^p(t) + 2\pn \end{equation} and $\bar{t} = t - A(t) + 2\pn$. 
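It is easy to verify (a one-line check of our own) that this matrix inverts the transformation of Eq.~(\ref{moment_transf2}) through $1\pn$ order:

```latex
\bigl[ {\cal M}^{-1} \bigr]_{aj}\, {\cal M}_{jk}
 = \bigl( \delta_{aj} - \epsilon_{ajp} R^p \bigr)
   \bigl( \delta_{jk} + \epsilon_{jkq} R^q \bigr)
 = \delta_{ak} + O(R^2) = \delta_{ak} + 2\pn ,
```

because $R^p$ is itself a quantity of $1\pn$ order, so that the quadratic remainder contributes at $2\pn$.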
In the case of circular motion, Eqs.~(\ref{A_circular}) and (\ref{R_circular}) imply that the transformation takes the form of \begin{equation} \bm{\bar{p}} = \bm{p} - (\Omega t)\, \bm{\ell} \times \bm{p}, \end{equation} with $t = (1+k) \bar{t}$ inserted onto the right-hand side. Applying this rule to our vectorial basis yields \begin{equation} \bm{\bar{n}} = \bm{n} - (\Omega t)\, \bm{\lambda}, \qquad \bm{\bar{\lambda}} = \bm{\lambda} + (\Omega t)\, \bm{n}, \qquad \bm{\bar{\ell}} = \bm{\ell}. \end{equation} With the representation of Eqs.~(\ref{vectorbasis1}) and (\ref{vectorbasis2}), and with $\Omega$ recognized as a post-Newtonian correction to the angular velocity $\omega$, we have that \begin{equation} \bm{\bar{n}} = [\cos(\bar{\omega} \bar{t}), \sin(\bar{\omega} \bar{t}), 0], \qquad \bm{\bar{\lambda}} = [-\sin(\bar{\omega} \bar{t}), \cos(\bar{\omega} \bar{t}), 0], \qquad \bm{\bar{\ell}} = [0,0,1], \end{equation} where $\bar{\omega} = (1+k)\omega - \Omega$, or \begin{equation} \bar{\omega} = \sqrt{\frac{M}{b^3}} \biggl[ 1 - \frac{1}{2}(3+q_1 q_2) V^2 + 2\pn \biggr]. \end{equation} This is the angular frequency of the tidal field as measured in the black-hole frame. It differs from $\omega$, the orbital angular velocity in the barycentric frame, because of the mismatch in the time coordinates (measured by $k$), and also because of the relative precession of the two frames (measured by $\Omega$). The tidal moments in the black-hole frame are obtained directly from the barycentric moments by replacing $\bm{n}$ with $\bm{\bar{n}}$, and $\bm{\lambda}$ with $\bm{\bar{\lambda}}$. In this transcription, however, we shall also take the opportunity to modify our convention for the basis vectors. We recall that $\bm{n}$ is proportional to $\bm{r}_1 - \bm{r}_2$, and is therefore directed from the external body to the black hole. 
In the black-hole frame it is convenient to reverse this direction, and we therefore let $\bm{\bar{n}} \to -\bm{\bar{n}}$ in our expressions for the tidal moments. Similarly, we recall that $\bm{\lambda}$ is proportional to $\bm{v}_1 - \bm{v}_2$, and choose to reverse this direction as well by letting $\bm{\bar{\lambda}} \to -\bm{\bar{\lambda}}$. With these changes accounted for, we find that the tidal moments in the black-hole frame are given by \begin{subequations} \label{Eab_BH} \begin{align} \bar{\E}_{ab} &= \bar{\E}_{ab}(0\pn) + \bar{\E}_{ab}(1\pn) + 2\pn, \\ \bar{\E}_{ab}(0\pn) &= -3 \frac{M_2}{b^3} \bar{n}_\stf{ab}, \\ \bar{\E}_{ab}(1\pn) &= 3 \frac{M_2 V^2}{b^3} \biggl[ \frac{1}{2} (2-q_1) \bar{n}_\stf{ab} - \bar{\lambda}_{\langle a} \bar{\lambda}_{b\rangle} \biggr], \end{align} \end{subequations} \begin{subequations} \label{Eabc_BH} \begin{align} \bar{\E}_{abc} &= \bar{\E}_{abc}(0\pn) + \bar{\E}_{abc}(1\pn) + 2\pn, \\ \bar{\E}_{abc}(0\pn) &= -15\frac{M_2}{b^4} \bar{n}_\stf{abc}, \\ \bar{\E}_{abc}(1\pn) &= 15 \frac{M_2 V^2}{b^4} \Bigl[ 3(1-q_1) \bar{n}_\stf{abc} - \bar{\lambda}_{\langle a} \bar{\lambda}_b \bar{n}_{c\rangle} \Bigr], \end{align} \end{subequations} \begin{subequations} \label{Eabcd_BH} \begin{align} \bar{\E}_{abcd} &= \bar{\E}_{abcd}(0\pn) + \bar{\E}_{abcd}(1\pn) + 2\pn, \\ \bar{\E}_{abcd}(0\pn) &= -\frac{105}{2} \frac{M_2}{b^5} \bar{n}_\stf{abcd}, \\ \bar{\E}_{abcd}(1\pn) &= \frac{15}{2} \frac{M_2 V^2}{b^5} \biggl[ \frac{1}{2} (72 - 75q_1) \bar{n}_\stf{abcd} - 6 \bar{\lambda}_{\langle a} \bar{\lambda}_b \bar{n}_c \bar{n}_{d\rangle} \biggr],
\end{align} \end{subequations} and \begin{subequations} \label{Bmoments_BH} \begin{align} \bar{\B}_{ab} &= 6 \frac{M_2 V}{b^3} \epsilon_{pq(a} \bar{n}_{b)} \bar{n}^p \bar{\lambda}^q, \\ \bar{\B}_{abc} &= \frac{45}{2} \frac{M_2 V}{b^4} \epsilon_{pq(a} \Bigl( \bar{n}_b \bar{n}_{c)} - \frac{1}{5} \delta_{bc)} \Bigr) \bar{n}^p \bar{\lambda}^q, \\ \bar{\B}_{abcd} &= 63 \frac{M_2 V}{b^5} \epsilon_{pq(a} \Bigl( \bar{n}_b \bar{n}_c \bar{n}_{d)} - \frac{3}{7} \delta_{bc} \bar{n}_{d)} \Bigr) \bar{n}^p \bar{\lambda}^q. \end{align} \end{subequations} \subsection{Harmonic components of the tidal moments} \label{subsec:harmonic} The harmonic components of the tidal moments are defined in Tables~\ref{tab:E_ang} and \ref{tab:B_ang}. Making use of Eqs.~(\ref{Eab_BH}), (\ref{Eabc_BH}), (\ref{Eabcd_BH}), and (\ref{Bmoments_BH}), we find that in the case of circular motion, the nonvanishing components are \begin{subequations} \label{Eab_harmonic} \begin{align} \EEbar{q}_0 &= -\frac{1}{2} \frac{M_2}{b^3} \Biggl[ 1 + \frac{1}{2} q_1 V^2 + 2\pn \Biggr], \\ \EEbar{q}_{2c} &= -\frac{3}{2} \frac{M_2}{b^3} \Biggl[ 1 + \frac{1}{2} (q_1 - 4) V^2 + 2\pn \Biggr] \cos(2\bar{\omega} \bar{t}), \\ \EEbar{q}_{2s} &= -\frac{3}{2} \frac{M_2}{b^3} \Biggl[ 1 + \frac{1}{2} (q_1 - 4) V^2 + 2\pn \Biggr] \sin(2\bar{\omega} \bar{t}), \end{align} \end{subequations} \begin{subequations} \label{Eabc_harmonic} \begin{align} \EEbar{o}_{1c} &= -\frac{3}{2} \frac{M_2}{b^4} \biggl[ 1 + \frac{1}{3} (9q_1 - 8) V^2 + 2\pn \biggr] \cos(\bar{\omega} \bar{t}), \\ \EEbar{o}_{1s} &= -\frac{3}{2} \frac{M_2}{b^4} \biggl[ 1 + \frac{1}{3} (9q_1 - 8) V^2 + 2\pn \biggr] \sin(\bar{\omega} \bar{t}), \\ \EEbar{o}_{3c} &= -\frac{15}{4} \frac{M_2}{b^4} \biggl[ 1 + ( 3q_1 - 4 ) V^2 + 2\pn \biggr] \cos(3\bar{\omega} \bar{t}), \\ \EEbar{o}_{3s} &= -\frac{15}{4} \frac{M_2}{b^4} \biggl[ 1 + ( 3q_1 - 4 ) V^2 + 2\pn \biggr] \sin(3\bar{\omega} \bar{t}), \end{align} \end{subequations} \begin{subequations} \label{Eabcd_harmonic} \begin{align} 
\EEbar{h}_{0} &= -\frac{9}{4} \frac{M_2}{b^5} \biggl[ 1 + \frac{1}{14}(75q_1 - 68) V^2 + 2\pn \biggr], \\ \EEbar{h}_{2c} &= -\frac{15}{2} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14}(25q_1 - 24) V^2 + 2\pn \biggr] \cos(2\bar{\omega} \bar{t}), \\ \EEbar{h}_{2s} &= -\frac{15}{2} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14}(25q_1 - 24) V^2 + 2\pn \biggr] \sin(2\bar{\omega} \bar{t}), \\ \EEbar{h}_{4c} &= -\frac{105}{8} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14}(25q_1 - 28) V^2 + 2\pn \biggr] \cos(4\bar{\omega} \bar{t}), \\ \EEbar{h}_{4s} &= -\frac{105}{8} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14}(25q_1 - 28) V^2 + 2\pn \biggr] \sin(4\bar{\omega} \bar{t}), \end{align} \end{subequations} and \begin{subequations} \label{Bab_harmonic} \begin{align} \BBbar{q}_{1c} &= 3 \frac{M_2 V}{b^3} \cos(\bar{\omega} \bar{t}), \\ \BBbar{q}_{1s} &= 3 \frac{M_2 V}{b^3} \sin(\bar{\omega} \bar{t}), \end{align} \end{subequations} \begin{subequations} \label{Babc_harmonic} \begin{align} \BBbar{o}_{0} &= 3 \frac{M_2 V}{b^4}, \\ \BBbar{o}_{2c} &= 5 \frac{M_2 V}{b^4} \cos(2\bar{\omega} \bar{t}), \\ \BBbar{o}_{2s} &= 5 \frac{M_2 V}{b^4} \sin(2\bar{\omega} \bar{t}), \end{align} \end{subequations} \begin{subequations} \label{Babcd_harmonic} \begin{align} \BBbar{h}_{1c} &= \frac{45}{4} \frac{M_2 V}{b^5} \cos(\bar{\omega} \bar{t}), \\ \BBbar{h}_{1s} &= \frac{45}{4} \frac{M_2 V}{b^5} \sin(\bar{\omega} \bar{t}), \\ \BBbar{h}_{3c} &= \frac{105}{8} \frac{M_2 V}{b^5} \cos(3\bar{\omega} \bar{t}), \\ \BBbar{h}_{3s} &= \frac{105}{8} \frac{M_2 V}{b^5} \sin(3\bar{\omega} \bar{t}). \end{align} \end{subequations} The harmonic components of the gravitoelectric moments can be substituted within the tidal potentials introduced in Sec.~\ref{sec:potentials}. 
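As an illustration of such a substitution at Newtonian order (a sketch of ours), the quadrupole potential follows from a single contraction: with the radial unit vector $\bar{\Omega}^a = [\sin\bar{\theta}\cos\bar{\phi}, \sin\bar{\theta}\sin\bar{\phi}, \cos\bar{\theta}]$ and $\bar{\psi} := \bar{\phi} - \bar{\omega}\bar{t}$, we have $\bar{n}\cdot\bar{\Omega} = \sin\bar{\theta}\cos\bar{\psi}$, and therefore

```latex
\bar{\E}_{ab}(0\pn)\, \bar{\Omega}^a \bar{\Omega}^b
 = -3 \frac{M_2}{b^3} \Bigl[ (\bar{n}\cdot\bar{\Omega})^2 - \tfrac{1}{3} \Bigr]
 = \frac{1}{2} \frac{M_2}{b^3} \bigl( 3\cos^2\bar{\theta} - 1 \bigr)
 - \frac{3}{2} \frac{M_2}{b^3} \sin^2\bar{\theta} \cos(2\bar{\psi}) ,
```

which reproduces the Newtonian terms of the full expression for the potential $\EEbar{q}$ displayed next.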
With $\bar{\Omega}^a = [\sin\bar{\theta} \cos\bar{\phi}, \sin\bar{\theta}\sin\bar{\phi},\cos\bar{\theta}]$, we have that \begin{subequations} \label{Epotentials} \begin{align} \EEbar{q} &:= \bar{\E}_{ab} \bar{\Omega}^a \bar{\Omega}^b = \frac{1}{2} \frac{M_2}{b^3} \biggl[ 1 + \frac{1}{2} q_1 V^2 + 2\pn \biggr] (3\cos^2\bar{\theta} - 1) - \frac{3}{2} \frac{M_2}{b^3} \biggl[ 1 + \frac{1}{2} (q_1 - 4) V^2 + 2\pn\biggr] \sin^2\bar{\theta} \cos(2\bar{\psi}), \\ \EEbar{o} &:= \bar{\E}_{abc} \bar{\Omega}^a \bar{\Omega}^b \bar{\Omega}^c = \frac{9}{4} \frac{M_2}{b^4} \biggl[ 1 + \frac{1}{3} (9q_1 - 8) V^2 + 2\pn \biggr] \sin\bar{\theta} (5\cos^2\bar{\theta}-1) \cos(\bar{\psi}) \nonumber \\ & \quad \mbox{} - \frac{15}{4} \frac{M_2}{b^4} \biggl[ 1 + (3q_1 - 4) V^2 + 2\pn \biggr] \sin^3\bar{\theta} \cos(3\bar{\psi}), \\ \EEbar{h} &:= \bar{\E}_{abcd} \bar{\Omega}^a \bar{\Omega}^b \bar{\Omega}^c \bar{\Omega}^d = -\frac{9}{16} \frac{M_2}{b^5} \biggl[ 1 + \frac{1}{14} (75q_1 - 68) V^2 + 2\pn \biggr] (35\cos^4\bar{\theta} - 30\cos^2\bar{\theta} + 3) \nonumber \\ & \quad \mbox{} + \frac{15}{4} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14} (25q_1 - 24) V^2 + 2\pn \biggr] \sin^2\bar{\theta} (7\cos^2\bar{\theta}-1) \cos(2\bar{\psi}) \nonumber \\ & \quad \mbox{} - \frac{105}{16} \frac{M_2}{b^5} \biggl[ 1 + \frac{3}{14} (25q_1 - 28) V^2 + 2\pn \biggr] \sin^4\bar{\theta} \cos(4\bar{\psi}), \end{align} \end{subequations} where $\bar{\psi} := \bar{\phi} - \bar{\omega} \bar{t}$. \section{Geometry of a tidally deformed horizon} \label{sec:horizon} The induced metric on the event horizon of a tidally deformed black hole was constructed in Sec.~IV B of Ref.~\cite{poisson-vlasov:10} --- refer to their Eq.~(4.3). This metric is expressed in terms of tidal moments defined in a ``horizon calibration'' that differs from the ``post-Newtonian calibration'' adopted in this work.
The transformation between these calibrations was detailed in Sec.~\ref{sec:calibrations}, and Eqs.~(\ref{H-PNcalibration}) reveal that the tidal moments are identical up to corrections of order $v^3$ and beyond, \begin{equation} \E_{ab}(\mbox{H}) = \E_{ab}(\mbox{PN}) + 1.5\pn, \qquad \E_{abc}(\mbox{H}) = \E_{abc}(\mbox{PN}) + 1.5\pn, \end{equation} and \begin{equation} \B_{ab}(\mbox{H}) = \B_{ab}(\mbox{PN}) + 1.5\pn, \qquad \B_{abc}(\mbox{H}) = \B_{abc}(\mbox{PN}) + 1.5\pn. \end{equation} These corrections are of no concern in the determination of the horizon's geometry through $1\pn$ order. We recall that $\E_{abcd}$ and $\B_{abcd}$ have a fixed calibration, and are therefore the same in the horizon and post-Newtonian calibrations. We begin with an examination of the quadrupole, octupole, and hexadecapole contributions to the induced metric, neglecting all contributions that are bilinear in the tidal moments. We have \begin{equation} \gamma_{AB} = (2M_1)^2 \biggl[ \Omega_{AB} - \frac{2}{3} M_1^2 \bigl( \EE{q}_{AB} + \BB{q}_{AB} \bigr) - \frac{2}{15} M_1^3 \bigl( \EE{o}_{AB} + \BB{o}_{AB} \bigr) - \frac{2}{105} M_1^4 \bigl( \EE{h}_{AB} + \BB{h}_{AB} \bigr) \biggr], \end{equation} where the tidal potentials are given in the horizon calibration. This metric reflects a choice of coordinates on the horizon: $\theta^A$ is defined to be constant on the horizon's null generators, and the tidal moments are given as functions of the advanced-time coordinate $v$. To keep the notation uncluttered, we no longer make use of the overbar on the coordinates and tidal moments; it is now understood that all expressions refer to the black-hole frame. 
Continuing to neglect all terms that are bilinear in the tidal moments, the Ricci scalar associated with the induced metric is given by \begin{equation} {\cal R} = \frac{1}{2M_1^2} \biggl( 1 - 4 M_1^2 \EE{q} - \frac{4}{3} M_1^3 \EE{o} - \frac{2}{7} M_1^4 \EE{h} \biggr), \end{equation} where $\EE{q}$, $\EE{o}$, and $\EE{h}$ are the tidal potentials introduced in Sec.~\ref{sec:potentials} and evaluated for circular motion in Sec.~\ref{subsec:harmonic}. There is no gravitomagnetic scalar potential, and $\B_{ab}$, $\B_{abc}$ and $\B_{abcd}$ do not appear in the Ricci scalar. It is convenient to relate the horizon's intrinsic geometry to that of a fictitious two-dimensional surface embedded in a three-dimensional flat space. The surface is described by \begin{equation} r = 2M_1 \bigl[ 1 + \varepsilon(\theta^A) \bigr], \end{equation} where $\varepsilon$ is a displacement function that represents the tidal deformation. With \begin{equation} \varepsilon = -M_1^2 \EE{q} - \frac{2}{15} M_1^3 \EE{o} - \frac{1}{63} M_1^4 \EE{h}, \end{equation} the embedded surface possesses the same intrinsic Ricci scalar as the horizon. 
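The coefficients in this displacement function can be checked with a standard linear-perturbation argument (sketched here in our own words). To first order in $\varepsilon$ the induced metric on the surface is conformal to the round sphere, $\gamma_{AB} = R^2 (1+2\varepsilon) \Omega_{AB}$ with $R := 2M_1$, and the Ricci scalar of a conformally deformed sphere is

```latex
{\cal R} = \frac{2}{R^2} \bigl( 1 - 2\varepsilon - D^2 \varepsilon \bigr),
\qquad
D^2 \varepsilon_\ell = -\ell(\ell+1)\, \varepsilon_\ell
\quad \mbox{(harmonic of degree $\ell$)},
```

where $D^2$ is the Laplacian on the unit sphere. A deformation of degree $\ell$ therefore shifts $2M_1^2 {\cal R}$ by $(\ell-1)(\ell+2)\varepsilon_\ell$; with $(\ell-1)(\ell+2) = 4$, $10$, $18$ for $\ell = 2$, $3$, $4$, matching against the Ricci scalar of the horizon reproduces $\varepsilon = -M_1^2 \EE{q} - \frac{2}{15} M_1^3 \EE{o} - \frac{1}{63} M_1^4 \EE{h}$.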
Looking separately at each multipole component of the displacement function, we substitute Eqs.~(\ref{Epotentials}) and obtain \begin{subequations} \begin{align} \varepsilon[\ell=2] &= -\frac{M_1^2 M_2}{2b^3} \biggl\{ \biggl[ 1 + \frac{1}{2} q_1 V^2 + 2\pn \biggr] (3\cos^2\theta - 1) \nonumber \\ & \quad \mbox{} - 3 \biggl[ 1 + \frac{1}{2} (q_1 - 4) V^2 + 2\pn\biggr] \sin^2\theta \cos(2\psi) + 1.5\pn \biggr\}, \\ \varepsilon[\ell=3] &= -\frac{M_1^3 M_2}{10 b^4} \biggl\{ 3\biggl[ 1 + \frac{1}{3} (9q_1 - 8) V^2 + 2\pn \biggr] \sin\theta (5\cos^2\theta-1) \cos(\psi) \nonumber \\ & \quad \mbox{} - 5\biggl[ 1 + (3q_1 - 4) V^2 + 2\pn \biggr] \sin^3\theta \cos(3\psi) + 1.5\pn \biggr\}, \\ \varepsilon[\ell=4] &= -\frac{M_1^4 M_2}{336 b^5} \biggl\{ -3 \biggl[ 1 + \frac{1}{14} (75q_1 - 68) V^2 + 2\pn \biggr] (35\cos^4\theta - 30\cos^2\theta + 3) \nonumber \\ & \quad \mbox{} + 20\biggl[ 1 + \frac{3}{14} (25q_1 - 24) V^2 + 2\pn \biggr] \sin^2\theta (7\cos^2\theta-1) \cos(2\psi) \nonumber \\ & \quad \mbox{} -35 \biggl[ 1 + \frac{3}{14} (25q_1 - 28) V^2 + 2\pn \biggr] \sin^4\theta \cos(4\psi) + 2\pn \biggr\}, \end{align} \end{subequations} where $\psi := \phi - \bar{\omega} v$. The quadrupole displacement scales as $M_1^2 M_2/b^3$, which we take to be of Newtonian order, and it features a $1\pn$ correction of fractional order $V^2$. The octupole displacement comes with an additional factor of order $M_1/b = q_1 V^2$ and therefore represents a $1\pn$ correction to the quadrupole deformation. The hexadecapole displacement, with its additional factor of order $(M_1/b)^2$, represents a $2\pn$ correction to the quadrupole displacement. The neglected bilinear terms would scale as $M_1^4 M_2^2/b^6$ and therefore represent a $3\pn$ correction to the leading term.
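This bookkeeping can be made explicit (our own summary), using $M/b = V^2 + 1\pn$ and $M_1 = q_1 M$:

```latex
\frac{\varepsilon[\ell=3]}{\varepsilon[\ell=2]} \sim
\frac{M_1^3 M_2/b^4}{M_1^2 M_2/b^3} = \frac{M_1}{b} = q_1 V^2 + 2\pn ,
\qquad
\frac{\varepsilon[\ell=4]}{\varepsilon[\ell=2]} \sim
\Bigl( \frac{M_1}{b} \Bigr)^2 = O(V^4),
```

so that the octupole and hexadecapole terms enter at $1\pn$ and $2\pn$ relative order, respectively.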
If we truncate the displacement function to an overall accuracy of $1\pn$ order, we obtain \begin{align} \varepsilon &= \frac{M_1^2 M_2}{2 b^3} \Biggl\{ -\biggl[ 1 + \frac{1}{2} q_1 V^2 \biggr] (3\cos^2\theta-1) + 3 \biggl[ 1 + \frac{1}{2} (q_1 - 4)V^2 \biggr] \sin^2\theta \cos(2\psi) \nonumber \\ & \quad \mbox{} - \frac{3}{5} q_1 V^2 \sin\theta(5\cos^2\theta-1) \cos(\psi) + q_1 V^2 \sin^3\theta\cos(3\psi) + 1.5\pn \Biggr\}. \end{align} As discussed in Sec.~\ref{sec:intro}, the displacement function describes a tidal bulge aligned with $\phi = \bar{\omega} v$. A consequence of the black hole's tidal interaction is that its mass slowly increases. The equation that describes this tidal heating is derived in Sec.~IV D of Ref.~\cite{poisson-vlasov:10}. According to their Eqs.~(4.14) and (4.16), the rate of change of the mass is given by \begin{equation} \dot{M_1} = \dot{M_1}[\ell = 2] + \dot{M_1}[\ell = 3] + \mbox{higher order}, \end{equation} where \begin{subequations} \begin{align} \dot{M_1}[\ell = 2] &= \frac{16}{45} M_1^6 \bigl( \dot{\E}_{ab} \dot{\E}^{ab} + \dot{\B}_{ab} \dot{\B}^{ab}\bigr), \\ \dot{M_1}[\ell = 3] &= \frac{16}{4725} M_1^8 \biggl( \dot{\E}_{abc} \dot{\E}^{abc} + \frac{16}{9} \dot{\B}_{abc} \dot{\B}^{abc} \biggr). \end{align} \end{subequations} The influence of the hexadecapole moments on the mass is not currently known, and we therefore exclude it from the expression. We insert our previous expressions for the quadrupole and octupole contributions, and obtain \begin{subequations} \begin{align} \dot{M_1}[\ell = 2] &= \frac{32}{5} q_1^6 q_2^2 \biggl( \frac{M}{b} \biggr)^9 \Bigl[ 1 + (q_1^2 - 6) V^2 + 1.5\pn \Bigr], \\ \dot{M_1}[\ell = 3] &= \frac{64}{35} q_1^8 q_2^2 \biggl( \frac{M}{b} \biggr)^{11} \biggl[ 1 + \Bigl( q_1^2 + 5q_1 - \frac{175}{18} \Bigr) V^2 + 1.5\pn \biggr].
\end{align} \end{subequations} The leading (quadrupole) term scales as $(M/b)^9 = V^{18}$, and it represents a correction of $4\pn$ order to the energy lost to gravitational waves. The octupole term scales as $(M/b)^{11} = V^{22}$; it represents a $2\pn$ correction to the quadrupole contribution, and a $6\pn$ correction to the radiated energy. \begin{acknowledgments} This work was supported by the Natural Sciences and Engineering Research Council of Canada. \end{acknowledgments}
\subsection*{Introduction}\label{intro} Two-dimensional quantum gravity is not much of a gravity theory in the sense that there are no propagating gravitons. It has nevertheless been a fertile playground when it comes to testing various aspects of diffeomorphism-invariant theories, and it is potentially important for string theory, which can be viewed as two-dimensional quantum gravity coupled to specific, conformally invariant matter fields. The 2d quantum gravity aspect has been particularly important in the study of non-critical string theories. Most of the studies where the quantum gravity aspect has been emphasized have considered two-dimensional Euclidean quantum gravity with compact space-time. The study of 2d Euclidean quantum gravity with non-compact space-time was initiated by the Zamolodchikovs (ZZ) \cite{zz} when they showed how to use the conformal bootstrap and the cluster-decomposition properties to quantize Liouville theory on the pseudo-sphere (the Poincar\'e disk). Martinec \cite{martinec} and Seiberg et al.\ \cite{ss} showed how the work of ZZ fitted into the framework of non-critical string theory, where the ZZ-theory could be reinterpreted as special branes, now called ZZ-branes. Let $W_{\tilde{\La}}({\tilde{X}})$ be the ordinary, so-called disk amplitude for 2d Euclidean gravity on a compact space-time. ${\tilde{X}}$ denotes the boundary cosmological constant of the disk and ${\tilde{\La}}$ the cosmological constant. It was found that the ZZ-brane of 2d Euclidean gravity was associated with the zero of \begin{equation}\label{0.0} W_{\tilde{\La}}({\tilde{X}}) = ({\tilde{X}} -\frac{1}{2} \sqrt{{\tilde{\La}}})\sqrt{{\tilde{X}} + \sqrt{{\tilde{\La}}}}. \end{equation} At first sight this is somewhat surprising since from a world-sheet point of view the disk is compact while the Poincar\'e disk is non-compact.
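For orientation (this remark is ours), the zero in question can be read off directly from Eq.~\rf{0.0}:

```latex
W_{\tilde{\La}}({\tilde{X}}) = 0 \quad \mbox{at} \quad
{\tilde{X}} = \frac{1}{2} \sqrt{{\tilde{\La}}} ,
```

a simple zero; the second factor vanishes only at the branch point ${\tilde{X}} = -\sqrt{{\tilde{\La}}}$.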
In \cite{aagk} it was shown how it could be understood in terms of world-sheet geometry, i.e.\ from a 2d quantum gravity point of view: when the boundary cosmological constant ${\tilde{X}}$ reaches the value ${\tilde{X}} \!=\! \sqrt{{\tilde{\La}}}/2$ where the disk amplitude $W_{\tilde{\La}}({\tilde{X}}) \!=\! 0$, the geodesic distance from a generic point on the disk to the boundary diverges, in this way effectively creating a non-compact space-time. In this article we show that the same phenomenon occurs for a different two-dimensional theory of quantum gravity called quantum gravity from causal dynamical triangulations (CDT for short) \cite{al}. This theory has been generalized to higher dimensions where potentially interesting results have been obtained \cite{ajl5} using computer simulations. However, here we will concentrate on the two-dimensional theory, which can be solved analytically. \section*{CDT} The idea of CDT, i.e.\ quantum gravity defined via causal dynamical triangulations, is two-fold: firstly, inspired by Teitelboim \cite{teitelboim}, we insist, starting in a space-time with a Lorentzian signature, that only causal histories contribute to the quantum gravity path integral, and secondly, we assume a global time-foliation. ``Dynamical triangulation'' (DT) provides a simple regularization of the sum over {\it geometries} in terms of a grid of piecewise linear geometries constructed from building blocks ($d$-dimensional simplices if we want to construct a $d$-dimensional geometry, see \cite{book,leshouches} for reviews). The ultraviolet cut-off is the length of the side of the building blocks. CDT uses DT as the regularization of the path integral (see \cite{al,ajl5} for detailed descriptions of which causal geometries are included in the grid). In two dimensions it is natural to study the proper-time ``propagator'', i.e.\ the amplitude for two space-like boundaries to be separated by a proper time (or geodesic distance) $T$.
While this is a somewhat special amplitude, it has the virtue that other amplitudes, like the disk amplitude or the cylinder amplitude, can be calculated if we know the proper-time propagator \cite{kawai0,kn,gk,al}. When the path integral representation of this propagator is defined using CDT we can further, for each causal piecewise linear Lorentzian geometry, make an explicit rotation to a related Euclidean geometry. After this rotation we perform the sum over geometries in this Euclidean regime. This sum is now different from the full Euclidean sum over geometries, leading to an alternative quantization of 2d quantum gravity (CDT). Eventually we can perform a rotation back from Euclidean proper time to Lorentzian proper time in the propagator if needed. In the following we will use continuum notation. A derivation of the continuum expressions from the regularized (lattice) expressions can be found in \cite{al}. We assume space-time has the topology $S^1\times [0,1]$. The action (rotated to Euclidean space-time) is: \begin{equation}\label{2.a} S[g] = \Lambda \int \int \d x \d t \sqrt{g(x,t)} + X \oint \d l_1 + Y \oint \d l_2, \end{equation} where $\Lambda$ is the cosmological constant, $X,Y$ are two boundary cosmological constants, $g$ is a metric describing a geometry of the kind mentioned above, and the line integrals refer to the length of the boundaries, induced by $g$. The propagator $G_\Lambda(X,Y;T)$ is defined by \begin{equation}\label{2.a0} G_\Lambda (X,Y;T) = \int {\cal D} [g] \; e^{-S[g]}, \end{equation} where the functional integration is over all ``causal'' geometries $[g]$ such that the ``exit'' boundary with boundary cosmological constant $Y$ is separated a geodesic distance $T$ from the ``entry'' boundary with boundary cosmological constant $X$. 
As shown in \cite{al}, calculating the path integral \rf{2.a0} using the CDT regularization and taking the continuum limit where the side-length $a$ of the simplices goes to zero leads to the following expression\footnote{The asymmetry between $X$ and $Y$ is just due to the convention that the entrance boundary contains a marked point. Symmetric expressions where the boundaries have no marked points or both have marked points can be found in \cite{alnr}.}: \begin{equation}\label{2.a3} G_\Lambda (X,Y;T) = \frac{{\bar{X}}^2(T,X)-\Lambda}{X^2-\Lambda} \; \frac{1}{{\bar{X}}(T,X)+Y}, \end{equation} where ${\bar{X}}(T,X)$ is the solution of \begin{equation}\label{2.3} \frac{\d {\bar{X}}}{\d T} = -({\bar{X}}^2-\Lambda),~~~{\bar{X}}(0,X)=X, \end{equation} or \begin{equation}\label{2.4} {\bar{X}}(t,X)= \sqrt{\La} \coth \sqrt{\La}(t+t_0),~~~~X=\sqrt{\La} \coth \sqrt{\La} \,t_0. \end{equation} Viewing $G_\Lambda(X,Y;T)$ as a propagator, one can regard ${\bar{X}}(T)$ as a ``running'' boundary cosmological constant, $T$ being the scale. If $X > - \sqrt{\La}$ then ${\bar{X}}(T) \to \sqrt{\La}$ for $T \to \infty$, $\sqrt{\La}$ being a ``fixed point'' (a zero of the ``$\b$-function'' $-({\bar{X}}^2-\Lambda)$ in eq.\ \rf{2.3}). Let $L_1$ denote the length of the entry boundary and $L_2$ the length of the exit boundary. Rather than consider a situation where the boundary cosmological constant $X$ is fixed we can consider $L_1$ as fixed. We denote the corresponding propagator $G_\Lambda (L_1,Y;T)$. Similarly we can define $G_\Lambda(X,L_2;T)$ and $G_\Lambda(L_1,L_2;T)$. They are related by Laplace transformations. For instance, \begin{equation}\label{2.a5} G_\Lambda(X,Y;T)= \int_0^\infty \d L_2 \int_0^\infty \d L_1\; G_\Lambda(L_1,L_2;T) \;\mbox{e}^{-XL_1-YL_2}, \end{equation} and one has the following composition rule for the propagator: \begin{equation}\label{2.a6} G_\Lambda (X,Y;T_1+T_2) = \int_0^\infty \d L \; G_\Lambda (X,L;T_1)\,G_\Lambda(L,Y;T_2). 
\end{equation} We can now calculate the expectation value of the length of the spatial slice at proper time $t \in [0,T]$: \begin{equation}\label{2.a7} \langle L(t)\rangle_{X,Y,T} = \frac{1}{G_\Lambda (X,Y;T)} \int_0^\infty \d L\; G_\Lambda (X,L;t) \;L\; G_\Lambda (L,Y;T-t). \end{equation} In general there is no reason to expect $\langle L(t) \rangle$ to have a classical limit. Consider for instance the situation where $X$ and $Y$ are larger than $\sqrt{\La}$ and where $T \gg 1/\sqrt{\La}$. The average boundary lengths will be of order $1/X$ and $1/Y$. But for $ 0 \ll t \ll T $ the system has forgotten everything about the boundaries and the expectation value of $L(t)$ is, up to corrections of order $e^{-2\sqrt{\La} t}$ or $e^{-2\sqrt{\La} (T-t)}$, determined by the ground state of the effective Hamiltonian $H_{eff}$ corresponding to $G_\Lambda(X,Y;T)$ (see \cite{al} for details and \cite{alnr} for a discussion of various forms of $H_{eff}$; here we do not need the explicit expression for $H_{eff}$). One finds for this ground state $\langle L \rangle = 1/\sqrt{\La}$. This picture is confirmed by an explicit calculation using eq.\ \rf{2.a7} as long as $X,Y > \sqrt{\La}$. The system is thus, except for boundary effects, entirely determined by the quantum fluctuations of the ground state of $H_{eff}$. We will here be interested in a different and more interesting situation where a non-compact space-time is obtained as a limit of the compact space-time described by \rf{2.a7}. Thus we want to take $T \to \infty$ and at the same time also take the length of the boundary corresponding to proper time $T$ to infinity. Since $T \to \infty$ forces ${\bar{X}}(T,X) \to \sqrt{\La}$ it follows from \rf{2.a3} that the only choice of boundary cosmological constant $Y$ independent of $T$ for which the length $\langle L(T)\rangle_{X,Y,T}$ goes to infinity for $T \to \infty$ is $Y\!=\! \!-\! 
\sqrt{\La}$ since we have: \begin{equation}\label{2.a8} \langle L(T)\rangle_{X,Y,T} = -\frac{1}{G_\Lambda (X,Y;T)} \, \frac{\partial G_\Lambda (X,Y;T)}{\partial Y} = \frac{1}{{\bar{X}}(T,X)+Y}. \end{equation} With the choice $Y \!=\! \!-\! \sqrt{\La}$ one obtains from \rf{2.a7} in the limit $T \to \infty$: \begin{equation}\label{2.a9} \langle L(t) \rangle_{X} = \frac{1}{\sqrt{\La}} \; \sinh (2\sqrt{\La}(t+t_0(X))), \end{equation} where $t_0(X)$ is define in eq.\ \rf{2.4}. We have called $L_2$ the (spatial) length of the boundary corresponding to $T$ and $\langle L(t) \rangle_X$ the spatial length of a time-slice at time $t$ in order to be in accordance with earlier notation \cite{nakayama,al}, but starting from a lattice regularization and taking the continuum limit $L$ is only determined up to a constant of proportionality which we fix by comparing with a continuum effective action. In the next section we will show that such a comparison leads to the identification of $L$ as $L_{cont}/\pi$ and we are led to the following \begin{equation}\label{2.a10} L_{cont}(t) \equiv \pi \langle L(t)\rangle_X = \frac{\pi}{\sqrt{\La}} \; \sinh (2\sqrt{\La}(t+t_0(X))). \end{equation} Consider the classical surface where the intrinsic geometry is defined by proper time $t$ and spatial length $L_{cont}(t)$ of the curve corresponding to constant $t$. It has the line element \begin{equation}\label{2.a11} \d s^2 = \d t^2 + \frac{L_{cont}^2}{4\pi^2}\; \d \theta^2 = \d t^2 + \frac{\sinh^2 (2\sqrt{\La} (t+t_0(X)))}{4 \Lambda} \;\d \theta^2, \end{equation} where $t \ge 0$ and $t_0(X)$ is a function of the boundary cosmological constant $X$ at the boundary corresponding to $t \!=\! 0$ (see eq.\ \rf{2.4}). What is remarkable about the formula \rf{2.a11} is that the surfaces for different boundary cosmological constants $X$ can be viewed as part of the same surface, the Poincare disk with curvature $R= -8\Lambda$, since $t$ can be continued to $t=\!-\! t_0 $. 
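As a consistency check (ours, not part of the original derivation), one can verify symbolically that ${\bar{X}}(t,X)$ of eq.\ \rf{2.4} solves the flow equation \rf{2.3}, and that the metric \rf{2.a11} indeed has constant scalar curvature $R=-8\Lambda$; for a metric $\d s^2 = \d t^2 + a(t)^2 \d\theta^2$ the Gaussian curvature is $K=-a''/a$ and $R=2K$. A short SymPy sketch:

```python
import sympy as sp

t, t0, Lam = sp.symbols('t t_0 Lambda', positive=True)

# Running boundary cosmological constant, eq. (2.4)
Xbar = sp.sqrt(Lam) * sp.coth(sp.sqrt(Lam) * (t + t0))

# It satisfies the flow equation (2.3): dXbar/dT = -(Xbar^2 - Lambda)
assert sp.simplify(sp.diff(Xbar, t) + (Xbar**2 - Lam)) == 0

# Metric (2.a11): ds^2 = dt^2 + a(t)^2 dtheta^2
a = sp.sinh(2 * sp.sqrt(Lam) * (t + t0)) / (2 * sp.sqrt(Lam))

# Gaussian curvature K = -a''/a, scalar curvature R = 2K
R = sp.simplify(-2 * sp.diff(a, t, 2) / a)
assert sp.simplify(R + 8 * Lam) == 0
```

Both assertions reduce to the hyperbolic identity $\coth^2 x - \mathrm{csch}^2\, x = 1$ and the fact that $a'' = 4\Lambda\, a$ for this profile.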
The Poincare disk itself is formally obtained in the limit $X \to \infty$ since an infinite boundary cosmological constant will contract the boundary to a point. \section*{The classical effective action} Consider the non-local ``induced'' action of 2d quantum gravity, first introduced by Polyakov \cite{polyakov}, \begin{equation}\label{3.1} S[g]= \int \d t \d x \sqrt{g} \left( \frac{1}{16} R_g \frac{1}{-\Delta_g} R_g +\Lambda \right), \end{equation} where $R_g$ is the scalar curvature corresponding to the metric $g$, $t$ denotes ``time'' and $x$ the ``spatial'' coordinate. Nakayama \cite{nakayama} analyzed the action \rf{3.1} in the proper-time gauge, assuming the manifold had the topology of the cylinder with a foliation in proper time $t$, i.e.\ the metric was assumed to be of the form \begin{equation}\label{3.3} g = \begin{pmatrix} 1& 0 \\ 0 & \gamma(t,x)\end{pmatrix}. \end{equation} It was shown that in this gauge the classical dynamics is described entirely by the following one-dimensional action: \begin{equation}\label{3.4} S_\kappa = \int_0^T \d t \left(\frac{\dot{l}^2(t)}{4l(t)} + \Lambda l(t)+ \frac{\kappa}{l}\right), \end{equation} where \begin{equation}\label{3.5} l(t) = \frac{1}{\pi}\int \d x \sqrt{\gamma}, \end{equation} and where $\kappa$ is an integration constant coming from solving the constraint $T_{01}=0$ and inserting the solution in \rf{3.1}. Thus $\pi l(t)$ is precisely the length of the spatial curve corresponding to a constant value of $t$, calculated in the metric \rf{3.3}. 
The classical solutions corresponding to the action \rf{3.4} are \begin{eqnarray} l(t) &=& \frac{\sqrt{\kappa}}{\sqrt{\La}} \; \sinh 2\sqrt{\La} t,~~~~~~~~~~\kappa>0~~ \mbox{elliptic case}, \label{3.6a}\\ l(t) &=& \frac{\sqrt{-\kappa}}{\sqrt{\La}} \; \cosh 2\sqrt{\La} t,~~~~~~~\kappa<0~~\mbox{hyperbolic case}, \label{3.6b}\\ l(t) &=& \mbox{e}^{2\sqrt{\La} t},~~~~~~~~~~~~~~~~~~~~~~~\kappa=0 ~~\mbox{parabolic case}, \label{3.6c} \end{eqnarray} all corresponding to cylinders with constant negative curvature $-8 \Lambda$. In the elliptic case, where $t$ must be larger than zero, there is a conical singularity at $t =0$ unless $\kappa = 1$. For $\kappa \!=\! 1$ the geometry is regular at $t=0$ and this value of $\kappa$ corresponds precisely to the Poincare disk, $t=0$ being the ``center'' of the disk. Nakayama quantized the actions $S_\kappa$ for $\kappa = (m+1)^2$, $m$ a non-negative integer, and for $m \!=\! 0$ he obtained precisely the propagator derived in the CDT path integral approach. \section*{Quantum fluctuations} In many ways it is more natural to fix the boundary cosmological constant than to fix the length of the boundary. However, one pays the price that the fluctuations of the boundary size are large, in fact of the order of the average length of the boundary itself\footnote{This is true also in Liouville quantum theory, the derivation being essentially the same as that given in \rf{5.1}, as is clear from \cite{aagk}.}: from \rf{2.a8} we have \begin{equation}\label{5.1} \langle L^2(T) \rangle_{X,Y;T} - \langle L(T) \rangle^2_{X,Y;T} = -\frac{\partial \langle L(T) \rangle_{X,Y;T}}{\partial Y} = \langle L(T) \rangle^2_{X,Y;T}. \end{equation} Such large fluctuations are also present around $\langle L(t)\rangle_{X,Y;T}$ for $t< T$. From this point of view it is even more remarkable that $\langle L(t)\rangle_{X,Y=- \sqrt{\La};T=\infty}$ has such a nice semiclassical interpretation. Let us now by hand fix the boundary lengths $L_1$ and $L_2$. 
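One can check symbolically (again our verification, not taken from the paper) that the three solutions \rf{3.6a}--\rf{3.6c} extremize the action \rf{3.4}, whose Euler-Lagrange equation reads $\ddot{l}/(2l) - \dot{l}^2/(4l^2) - \Lambda + \kappa/l^2 = 0$:

```python
import sympy as sp

t, Lam, kap = sp.symbols('t Lambda kappa', positive=True)

def el_residual(l, kappa):
    # Euler-Lagrange equation of the action (3.4) with Lagrangian
    #   L = l'^2/(4 l) + Lambda l + kappa / l,
    # written as l''/(2 l) - l'^2/(4 l^2) - Lambda + kappa/l^2 = 0
    res = (sp.diff(l, t, 2) / (2 * l)
           - sp.diff(l, t)**2 / (4 * l**2)
           - Lam + kappa / l**2)
    return sp.simplify(res.rewrite(sp.exp))

l_ell = sp.sqrt(kap) / sp.sqrt(Lam) * sp.sinh(2 * sp.sqrt(Lam) * t)  # (3.6a), kappa = kap > 0
l_hyp = sp.sqrt(kap) / sp.sqrt(Lam) * sp.cosh(2 * sp.sqrt(Lam) * t)  # (3.6b), kappa = -kap < 0
l_par = sp.exp(2 * sp.sqrt(Lam) * t)                                 # (3.6c), kappa = 0

assert el_residual(l_ell, kap) == 0
assert el_residual(l_hyp, -kap) == 0
assert el_residual(l_par, 0) == 0
```

All three profiles satisfy $\ddot{l} = 4\Lambda l$, which is why they describe the same constant-curvature geometry with different values of the integration constant $\kappa$.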
This is done in the Hartle-Hawking Euclidean path integral where the geometries $[g]$ are fixed at the boundaries \cite{hh}. For our one-dimensional boundaries the geometries at the boundaries are uniquely fixed by specifying the lengths of the boundaries, and the relation between the propagator with fixed boundary cosmological constants and with fixed boundary lengths is given by a Laplace transformation as shown in eq.\ \rf{2.a5}. Let us for simplicity analyze the situation where we take the length $L_1$ of the entrance loop to zero by taking the boundary cosmological constant $X \to \infty$. Using the composition rule \rf{2.a6} one can calculate the connected ``loop-loop'' correlator for fixed $L_2$ and $0< t \leq t+\Delta < T$: \begin{equation}\label{5.a1} \langle L(t)L(t+\Delta)\rangle^{(c)}_{L_2,T} \equiv \langle L(t+\Delta)L(t)\rangle_{L_2,T}-\langle L(t)\rangle_{L_2,T} \langle L(t+\Delta)\rangle_{L_2,T}. \end{equation} One finds \begin{eqnarray}\label{5.a2} \langle L(t)L(t\!+\!\Delta)\rangle^{(c)}_{L_2,T} &=& \frac{2}{\Lambda} \frac{\sinh^2 \sqrt{\La} t \sinh^2 \sqrt{\La} (T\!-\! (t\!+\!\Delta))}{\sinh^2 \sqrt{\La} T}+ \\ && \frac{2L_2}{\sqrt{\La}} \frac{\sinh^2 \sqrt{\La} t \sinh\sqrt{\La} (t\!+\!\Delta) \sinh\sqrt{\La}(T \!-\!(t\!+\!\Delta))}{\sinh^3 \sqrt{\La} T}.\nonumber \end{eqnarray} We also note that \begin{equation}\label{5.b2} \langle L(t)\rangle_{L_2,T}= \frac{2}{\sqrt{\La}} \frac{\sinh \sqrt{\La} t \sinh \sqrt{\La} (T\!-\! t)}{\sinh \sqrt{\La} T}+ L_2\frac{\sinh^2 \sqrt{\La} t}{\sinh^2 \sqrt{\La} T}. \end{equation} For fixed $L_2$ and $T \to \infty$ we obtain \begin{equation}\label{5.a3} \langle L(t)L(t+\Delta)\rangle^{(c)}_{L_2} = \frac{1}{2\Lambda} \; \mbox{e}^{-2\sqrt{\La} \Delta } \left(1-\mbox{e}^{-2\sqrt{\La} t} \right)^2 \end{equation} and \begin{equation}\label{5.b3} \langle L(t)\rangle_{L_2}=\frac{1}{\sqrt{\La}}\left( 1-\mbox{e}^{-2\sqrt{\La} t}\right). 
\end{equation} Eqs.\ \rf{5.a3} and \rf{5.b3} tell us that except for small $t$ we have $\langle L(t)\rangle_{L_2}\!=\! 1/\sqrt{\La}$. The quantum fluctuations $\Delta L(t)$ of $L(t)$ are defined by $(\Delta L(t))^2 = \langle L(t)L(t)\rangle^{(c)}$. Thus the spatial extension of the universe is just quantum size (i.e.\ $1/\sqrt{\La}$, $\Lambda$ being the only coupling constant) with fluctuations $\Delta L(t)$ of the same size. The time correlation between $L(t)$ and $L(t+\Delta)$ is also dictated by the scale $1/\sqrt{\La}$, telling us that the correlation between spatial elements of size $1/\sqrt{\La}$, separated in time by $\Delta$, falls off exponentially as $e^{-2\sqrt{\La} \Delta}$. The above picture is precisely what one would expect from the action \rf{2.a}: if we force $T$ to be large and choose a $Y$ such that $\langle L_2(T)\rangle$ is not large, the universe will be a thin tube, ``classically'' of zero width, but due to quantum fluctuations of average width $1/\sqrt{\La}$. A more interesting situation is obtained if we choose $Y = -\sqrt{\La}$, the special value needed to obtain a non-compact geometry in the limit $T\to \infty$. To implement this in a setting where $L_2$ is not allowed to fluctuate we fix $L_2(T)$ to the average value \rf{2.a8} for $Y\!=\! \!-\! \sqrt{\La}$: \begin{equation}\label{5.2} L_2(T) = \langle L(T) \rangle_{X,Y= -\sqrt{\La};T} = \frac{1}{\sqrt{\La}} \; \frac{1}{\coth \sqrt{\La} T -1}. \end{equation} From \rf{5.a2} and \rf{5.b2} we have in the limit $T \to \infty$: \begin{equation}\label{5.3} \langle L(t) \rangle = \frac{1}{\sqrt{\La}} \; \sinh 2\sqrt{\La} t \end{equation} in accordance with \rf{2.a9}, and for the ``loop-loop''-correlator \begin{equation}\label{5.4} \langle L(t+\Delta)L(t)\rangle^{(c)}= \frac{2}{\Lambda}\;\sinh^2 \sqrt{\La} t= \frac{1}{\sqrt{\La}} \left( \langle L(t)\rangle -\frac{1}{\sqrt{\La}} \left(1-\mbox{e}^{-2\sqrt{\La} t}\right)\right). 
\end{equation} It is seen that the ``loop-loop''-correlator is independent of $\Delta$. In particular we have for $\Delta \!=\! 0$: \begin{equation}\label{5.5} (\Delta L(t))^2 \equiv \langle L^2(t)\rangle -\langle L(t)\rangle^2 \sim \frac{1}{\sqrt{\La}} \langle L(t)\rangle \end{equation} for $t \gg 1/\sqrt{\La}$. The interpretation of eq.\ \rf{5.5} is in accordance with the picture presented below \rf{5.b3}: we can view the curve of length $L(t)$ as consisting of $N(t) \approx \sqrt{\La} L(t) \approx e^{2\sqrt{\La} t} $ independently fluctuating parts of size $1/\sqrt{\La}$, each with a fluctuation of size $1/\sqrt{\La}$. Thus the total fluctuation $\Delta L(t)$ of $L(t)$ will be of order $1/\sqrt{\La} \times \sqrt{N(t)}$, i.e.\ \begin{equation}\label{5.a5} \frac{\Delta L(t)}{\langle L(t)\rangle} \sim \frac{1}{\sqrt{\sqrt{\La} \langle L(t)\rangle}} \sim \mbox{e}^{-\sqrt{\La} t}, \end{equation} i.e.\ the fluctuation of $L(t)$ around $\langle L(t)\rangle$ is small for $t \gg 1/\sqrt{\La}$. In the same way the independence of the ``loop-loop''-correlator of $\Delta$ can be understood as the combined result of $L(t+\Delta)$ growing exponentially in length by a factor $e^{2\sqrt{\La} \Delta}$ compared to $L(t)$ and, according to \rf{5.a3}, the correlation of ``line-elements'' of $L(t)$ and $L(t+\Delta)$ decreasing by a factor $e^{-2\sqrt{\La} \Delta}$. \section*{Discussion} We have described how the CDT quantization of 2d gravity for a special value of the boundary cosmological constant leads to a non-compact (Euclidean) AdS-like space-time of constant negative curvature dressed with quantum fluctuations. It is possible to achieve this non-compact geometry as a limit of a compact geometry as described above. In particular the assignment \rf{5.2} leads to a simple picture where the fluctuation of $L(t)$ is small compared to the average value of $L(t)$. 
In fact the geometry can be viewed as that of the Poincare disk with fluctuations correlated only over a distance $1/\sqrt{\La}$. Our construction is similar to the analysis of ZZ-branes appearing as a limit of compact 2d geometries in Liouville quantum gravity \cite{aagk}. In the CDT case the non-compactness came when the running boundary cosmological constant ${\bar{X}}(T)$ went to the fixed point $\sqrt{\La}$ for $T \to \infty$. In the case of Liouville gravity, represented by DT (or equivalently matrix models), the non-compactness arose when the running (Liouville) boundary cosmological constant ${{\bar{X}}_{liouville}(T)}$ went to the value where the disk-amplitude $W_{\tilde{\La}}({\tilde{X}}) \!=\! 0$, i.e.\ to ${\tilde{X}} \!=\! \sqrt{{\tilde{\La}}}/2$ (see eq.\ \rf{0.0}). It is the same process in the two cases since the relation between Liouville gravity and CDT is well established and summarized by the mapping \cite{ackl} \begin{equation}\label{6.1} \frac{X}{\sqrt{\La}} = \sqrt{\frac{2}{3}}\; \sqrt{1+\frac{{\tilde{X}}}{\sqrt{{\tilde{\La}}}}}, \end{equation} between the coupling constants of the two theories. The physical interpretation of this relation is discussed in \cite{ackl,al}: one obtains the CDT model by chopping away all baby-universes from the Liouville gravity theory, i.e.\ universes connected to the ``parent-universe'' by a worm-hole of cut-off scale, and this produces the relation \rf{6.1}\footnote{The relation \rf{6.1} is similar to the one encountered in regularized bosonic string theory in dimensions $d\geq 2$ \cite{durhuus,adf,ad}: the world-sheet degenerates into so-called branched polymers. The two-point function of these branched polymers is related to the ordinary two-point function of the free relativistic particle by chopping off (i.e.\ integrating out) the branches, just leaving, for each branched polymer connecting two points in target space, one {\it path} connecting the two points. 
The mass-parameter of the particle is then related to the corresponding parameter in the partition function for the branched polymers as $X/\sqrt{\La}$ to ${\tilde{X}}/\sqrt{{\tilde{\La}}}$ in eq.\ \rf{6.1}.}. It is seen that $X \to \sqrt{\La}$ corresponds precisely to $\tilde{X} \to \sqrt{{\tilde{\La}}}/2$. While the starting point of the CDT quantization was the desire to include only Lorentzian, causal geometries in the path integral, the result \rf{2.a11} shows that after rotation to Euclidean signature this prescription is in a natural correspondence with the Euclidean Hartle-Hawking no-boundary condition, since all of the geometries \rf{2.a11} have a continuation to $t \!=\! -t_0$ where the space-time is regular. It would be interesting if this could be promoted to a general principle also in higher dimensions. The computer simulations reported in \cite{ajl5} seem to be in accordance with this possibility. \section*{Acknowledgment} J.A.\ and R.J.\ were supported by ``MaPhySto'', the Center of Mathematical Physics and Stochastics, financed by the National Danish Research Foundation. All authors acknowledge support by ENRAGE (European Network on Random Geometry), a Marie Curie Research Training Network in the European Community's Sixth Framework Programme, network contract MRTN-CT-2004-005616. R.J.\ was supported in part by Polish Ministry of Science and Information Technologies grant 1P03B04029 (2005-2008).
\section{Introduction} We provide in this paper a new and detailed geometric proof of the removal of boundary singularities of pseudo-holomorphic curves. More precisely, we shall prove the following Theorems \ref{theorem_A} and \ref{theorem_B}, due to Gromov \cite{gromov}, by using a doubling argument. The idea of using a doubling argument is also due to him. Precise statements of Theorems \ref{theorem_A} and \ref{theorem_B} are Theorems \ref{theorem_finite_area} and \ref{theorem_tame} respectively. Let $D^{+}$ be the upper half disk on the complex plane and $\check{D}^{+}$ be the punctured upper half disk $D^{+} - \{0\}$ (see (\ref{half_disk}) and (\ref{pucture_half_disk})). Suppose $M$ is a manifold with almost complex structure $J$ and $W$ is an embedded totally real submanifold of $M$. Assume $f: \check{D}^{+} \rightarrow M$ is a smooth pseudo-holomorphic map such that $f(\partial \check{D}^{+}) \subseteq W$. \begin{theorem}\label{theorem_A} Suppose the image of $f$ is relatively compact and the area of the image is finite (see Definition \ref{definition_image_volume}) for some Riemannian metric. Then $f$ has a smooth extension over $0 \in D^{+}$. \end{theorem} \begin{theorem}\label{theorem_B} Suppose the image of $f$ is relatively compact. Suppose $\alpha$ is a $1$-form on $M$ such that $d \alpha$ tames $J$. Furthermore, assume $\alpha$ is exact on $W$. Then $f$ has a smooth extension over $0 \in D^{+}$. \end{theorem} In Theorem \ref{theorem_B}, the assumption of the exactness of $\alpha$ on $W$ was not stated by Gromov in \cite[1.3.C']{gromov}. However, Theorem \ref{theorem_B} will \textit{no} longer be true if we drop this assumption. We construct the following simple counterexample to show that, without the assumption, there could be \textit{no} continuous extension. (The proof of this example is given in Section \ref{section_main_result}.) 
\renewcommand\thetheorem{\arabic{theorem}} \numberwithin{theorem}{section} \begin{example}\label{example_no_extension} Let $M = \mathbb{C}$ be the complex plane. Choose $J$ to be the standard complex structure. Let $W= \{ z=x+iy \in \mathbb{C} \mid |z|=1 \}$ be the unit circle. Define $\alpha = xdy$. Define a holomorphic function $f: \check{D}^{+} \rightarrow \mathbb{C}$ as \begin{equation} f(z) = \exp \left( -\frac{i}{z} \right). \end{equation} Then $f: \check{D}^{+} \rightarrow (M, W, J, \alpha)$ satisfies every assumption in Theorem \ref{theorem_B} with one exception: $\alpha$ is not exact on $W$. The conclusion of Theorem \ref{theorem_B} does not hold in this example. \end{example} To the best of our knowledge, up to now, there has been no work in the literature to give a correct statement of Theorem \ref{theorem_B}, let alone to prove it. Theorem \ref{theorem_B} does have certain advantages over Theorem \ref{theorem_A}. This will be illustrated by the simple Example \ref{example_advantage}. We shall call the above two theorems the \textit{boundary case} of the removal of singularities. In these theorems, if $D^{+}$ (resp. $\check{D}^{+}$) is replaced by the disk $D = \{ z \in \mathbb{C} \mid |z|<1 \}$ (resp. the punctured disk $\check{D} = D - \{0\}$) and all assumptions related to $W$ are dropped, we will get theorems on the removal of interior singularities (see \cite[1.3 \& 1.4]{gromov} and \cite[p.41, Theorem 2.1]{hummel}). We shall call them the \textit{interior case}. The removal of singularities is important to symplectic geometry. It is a key ingredient in Gromov compactness for pseudo-holomorphic curves. Namely, it enables us to identify pseudo-holomorphic spheres and disks as the obstructions to compactness for pseudo-holomorphic curves. (See \cite[1.5]{gromov} and \cite[p.71]{mcduff_salamon}.) There has been much work in the literature to present detailed proofs of the removal of singularities in both the interior case and the boundary case. 
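The mechanism behind Example \ref{example_no_extension} is elementary: on the boundary $\partial \check{D}^{+}$ (real $z$) we have $|f(z)| = |\exp(-i/z)| = 1$, so $f$ maps the boundary into $W$, while along the imaginary axis $f(iy) = e^{-1/y} \to 0$ as $y \to 0^{+}$; hence $f$ approaches different values at $0$ along different paths and has no continuous extension there. A quick numerical illustration in Python (ours, for the reader's convenience):

```python
import cmath

def f(z):
    # f(z) = exp(-i/z), holomorphic on the punctured upper half disk
    return cmath.exp(-1j / z)

# On the boundary (punctured real interval), f takes values in the
# unit circle W = { |w| = 1 }
for x in (0.5, 0.1, -0.01, 0.003):
    assert abs(abs(f(x)) - 1.0) < 1e-12

# Along the imaginary axis, f(iy) = exp(-1/y) -> 0 as y -> 0+,
# so f approaches 0 there while |f| = 1 on the reals:
# no continuous extension at 0
assert abs(f(0.05j)) < 1e-8
assert abs(f(0.01j)) < 1e-40
```

The essential singularity of $f$ at $0$ is therefore not removable, even though the boundary condition $f(\partial \check{D}^{+}) \subseteq W$ holds.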
They can be roughly divided into two types. The first type is analytic. It involves nontrivial tools of analysis such as Sobolev spaces and PDEs. The strategy is as follows. One first proves that the pseudo-holomorphic map $f$ on $D$ or $D^{+}$ belongs to the Sobolev space $W^{1,p}$ for some $p>2$, or is H\"{o}lder continuous. Here one improves the integrability of the derivative of $f$ by analytic methods. Then the elliptic regularity of PDEs implies that $f$ is smooth on $D$ or $D^{+}$. This method has been used in, e.g.\ \cite{oh}, \cite{parker_wolfson}, \cite{ye}, \cite{mcduff_salamon} and \cite{ivashkovich_shevchishin}. The second one is geometric and stems from the original argument by Gromov \cite{gromov}. Rather than using the above analytic machinery, it is based on geometric insight into the problem. For example, different geometric aspects yield different types of isoperimetric inequalities. The proof relies on the combination and the refinement of these isoperimetric inequalities. The process of this proof is different from that of the analytic one. First, one proves that $f$ has a continuous extension over $0$. Second, one proves that $f$ is Lipschitz continuous at $0$. Then, by a geometric construction, one reduces the study of the derivatives of $f$ to the case of $f$ itself. Thus the $C^{k}$ regularity implies the $C^{k+1}$ regularity, which completes the proof. This method has been used in, e.g.\ \cite{muller} and \cite{hummel} in the interior case. Up to now, there has been no geometric proof of the removal of boundary singularities. This paper provides such a geometric proof. Therefore, our proof is new for both Theorems \ref{theorem_A} and \ref{theorem_B}. The readers of this paper do not need any knowledge of Sobolev spaces and PDEs. Here is a bibliography of the previous work in the boundary case. 
The paper \cite{oh} gives a proof of Theorem \ref{theorem_A} under the additional assumption that $J$ is compatible with a symplectic form and $W$ is Lagrangian. A second proof of Theorem \ref{theorem_A} is given in \cite{ye}. The book \cite[Section 4.5]{mcduff_salamon} presents a proof of Theorem \ref{theorem_A} under the additional assumption that $J$ is tamed by a symplectic form and $W$ is Lagrangian. The paper \cite[Corollary 3.2]{ivashkovich_shevchishin} proves a more general theorem which implies Theorem \ref{theorem_A}, where $W$ is not necessarily embedded and the assumption on the area of the image is weakened. We describe now the main idea of our proof. In \cite[1.3.C]{gromov}, Gromov suggests that a doubling argument would reduce the boundary case to the interior case. Our method follows this idea. We try to find a good doubling map $F: \check{D} \rightarrow (M,J)$ of $f: \check{D}^{+} \rightarrow (M,J)$. The good map $F$ is an extension of $f$ and satisfies two conditions: (1) it is pseudo-holomorphic; (2) it has sufficient symmetry. Because of the symmetry, one expects that $F$ has good properties if $f$ does. If one can find such an $F$, then the boundary case can be easily reduced to the interior case. It turns out that such a good map is difficult to obtain. However, fortunately, we can construct in this paper a doubling map close to such a map. A natural way to define a doubling map is as follows: if the image of $f$ is contained in a tubular neighborhood of $W$, one can reflect the image of $f$ with respect to $W$. More generally, if $f$ maps a smaller punctured half disk into this tubular neighborhood, one can apply the reflection to the restriction of $f$ to this smaller punctured half disk. Since the removal of singularities is a local argument, this is sufficient. There are two difficulties when we do a doubling argument. 
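In the simplest model case, $M = \mathbb{C}$ with the standard complex structure and $W = \mathbb{R}$, the doubling map described above is the classical Schwarz reflection $F(z) = \overline{f(\overline{z})}$, which extends a holomorphic $f$ mapping the real axis into $W$ holomorphically across it. The following Python sketch (our illustration only; the paper must of course handle a general almost complex structure, where no such explicit formula is available) checks this numerically for a sample $f$:

```python
# Model case of the doubling idea: M = C, W = R, J the standard
# complex structure. If f is holomorphic on the upper half disk and
# maps the real axis into W = R, the Schwarz reflection
#   F(z) = conj(f(conj(z)))
# extends it holomorphically to the lower half disk.
def f(z):
    return z**3 + 2*z          # holomorphic, real-valued on the real axis

def F(z):
    if z.imag >= 0:
        return f(z)
    return f(z.conjugate()).conjugate()

# F agrees with f on W = R ...
for x in (-0.5, 0.2, 0.7):
    assert F(complex(x, 0)) == f(complex(x, 0))

# ... and is complex differentiable across the real axis: the difference
# quotients from above and below nearly agree at x = 0.3
h = 1e-6
above = (F(0.3 + h*1j) - F(0.3 + 0j)) / (h*1j)
below = (F(0.3 - h*1j) - F(0.3 + 0j)) / (-h*1j)
assert abs(above - below) < 1e-4
```

The reflection works here because $f$ is real on $W$; for a general almost complex $(M, W, J)$ the reflected map fails to be pseudo-holomorphic, which is exactly the second difficulty discussed in the text.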
First, under the assumption of Theorem \ref{theorem_B}, it is not easy to show that $f$ maps a smaller punctured half disk into a tubular neighborhood of $W$. This makes the construction of a doubling map difficult. Actually, Lemma \ref{lemma_converge_neighborhood} shows that the situation is better in the case of Theorem \ref{theorem_A}. Second, the triple $(M,W,J)$ lacks sufficient symmetry. In fact, the symmetry of $(M,W,J)$ helps a doubling argument. Example \ref{example_pansu} tells us that, if one assumes that $J$ is integrable in a neighborhood of $W$ and $W$ is real analytic, then a holomorphic doubling map is easily obtained. This strong assumption actually gives $(M,W,J)$ more symmetry. (This assumption appears in \cite[2.1.D]{gromov} and \cite[p.244-245]{pansu}.) In order to overcome the first difficulty, we construct the following intrinsic doubling. We pull the geometric data on $M$, such as metrics and forms, back to $\check{D}^{+}$ by using $f$, and then we extend these data symmetrically over $\check{D}$. Instead of constructing a doubling map from $\check{D}$ to $M$, we take $\check{D}$ as our ambient manifold. On $\check{D}$, we can adapt certain arguments which were used by Gromov on $M$ in the interior case. This reduces Theorem \ref{theorem_B} to Theorem \ref{theorem_A}. To make these arguments possible, the extended geometric data on $\check{D}$ need to have good properties. We achieve this by two ingredients: the first one is the fact that the $1$-form $\alpha$ is exact on $W$; the second one is some constructions suggested by Gromov \cite[1.3.C]{gromov} such as finding a Hermitian metric on $M$ which makes $W$ totally geodesic. The above intrinsic doubling is not sufficient because we have to use an extrinsic property of a pseudo-holomorphic curve: it is ``almost minimal'' in the ambient manifold (see comment before Lemma \ref{lemma_isoperimetric_disk}). Thus we also establish an extrinsic doubling. 
Here we do define a doubling map $F: \check{D} \rightarrow M$ by a reflection as mentioned above. However, this map $F$ is not necessarily pseudo-holomorphic because of the second difficulty mentioned above. This difficulty is overcome by the following symmetric construction. We introduce a new almost complex structure $\widetilde{J}$ arising naturally from the reflection. Then $F$ is pseudo-holomorphic with respect to $J$ on $\check{D}^{+}$ and with respect to $\widetilde{J}$ on $\check{D}^{-}$ (the lower half disk). Furthermore, $J$ and $\widetilde{J}$ coincide on $W$. Therefore, our map $F$ is sufficiently close to a pseudo-holomorphic map so that an adaptation of Gromov's approach in the interior case finishes the proof. The outline of this paper is as follows. Section \ref{section_main_result} precisely formulates the main results of this paper. Section \ref{section_set_up} lists some technical results frequently used throughout this paper. In Section \ref{section_reduction_lemma}, we construct our intrinsic doubling which reduces Theorem \ref{theorem_B} to Theorem \ref{theorem_A}. The subsequent sections progressively improve the regularity of $f$ at $0 \in D^{+}$ under the assumption of Theorem \ref{theorem_A}. Section \ref{section_pre-continuity} shows that $f$ maps a smaller punctured half disk into a tubular neighborhood of $W$. In this part, we follow \cite[p.135-136]{oh}. Such a result paves the way for the extrinsic doubling in the next section. In Section \ref{section_continuity}, we establish the continuous extension of $f$ over $0$ by using the extrinsic doubling. The key argument in this step is to establish certain isoperimetric inequalities. This follows from the ``almost minimal'' property of pseudo-holomorphic curves. In Section \ref{section_lipschitz_continuity}, the Lipschitz continuity of $f$ is proved. The proof is a refinement of that in Section \ref{section_continuity}. 
It relies on an improvement of the previous isoperimetric inequalities. Section \ref{section_almost_structure} recalls the fact that the tangent bundle of $M$ is naturally an almost complex manifold. This is needed for a geometric construction in the next section. Finally, in Section \ref{section_higher_order_derivatives}, we study the higher derivatives of $f$ by a bootstrapping argument, which finishes the proof of Theorems \ref{theorem_A} and \ref{theorem_B}. \renewcommand\thetheorem{\arabic{theorem}} \numberwithin{theorem}{section} \section{Main Result}\label{section_main_result} In this paper, all manifolds are without boundary unless we explicitly say otherwise. All manifolds, maps, functions, metrics, almost complex structures, forms and so on are smooth unless we state otherwise. Similarly, all submanifolds are assumed to be smoothly embedded submanifolds unless otherwise mentioned. We say a submanifold is a closed submanifold if it is a \textit{closed subset} of the ambient manifold. \begin{definition}\label{definition_image_volume} Suppose $N$ is an $n$-dimensional $C^{1}$ Riemannian manifold, $S$ is a $k$-dimensional $C^{1}$ manifold, and $h: S \rightarrow N$ is a $C^{1}$ map. Pulling back the Riemannian metric from $N$ to $S$ by $h$, we get a possibly singular $C^{0}$ metric on $S$. The volume of the image of $h$ is defined as the volume of $S$ with respect to this pull-back metric. Denote this volume by $|h(S)|$. When $k=1$ or $2$, we also call it the length or the area of the image respectively. \end{definition} As a subset of $N$, $h(S)$ has its $k$-dimensional Hausdorff measure with respect to the metric of $N$. By Federer's area formula, we know that $|h(S)|$ is no less than the Hausdorff measure of $h(S)$. It could also happen that $|h(S)|$ is strictly greater than the Hausdorff measure, for example, when $h$ is a covering map. 
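For instance (our illustration), consider the double cover $h: S^{1} \rightarrow \mathbb{R}^{2}$, $h(\theta) = (\cos 2\theta, \sin 2\theta)$: in the sense of Definition \ref{definition_image_volume} we get $|h(S^{1})| = 4\pi$, while the $1$-dimensional Hausdorff measure of the image set (the unit circle) is $2\pi$. A numerical check:

```python
import math

# h(theta) = (cos 2*theta, sin 2*theta): the unit circle traversed twice
n = 200000
pts = [(math.cos(2 * th), math.sin(2 * th))
       for th in (2 * math.pi * k / n for k in range(n + 1))]

# |h(S^1)|: the length of S^1 in the pulled-back metric, i.e. the
# integral of |h'(theta)| d(theta), approximated by a polygonal sum
length = sum(math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

assert abs(length - 4 * math.pi) < 1e-3   # twice the Hausdorff measure 2*pi
```

The pulled-back metric counts each sheet of the cover separately, which is why $|h(S)|$ can exceed the Hausdorff measure of the image set.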
\begin{definition} We say a subset $Y$ is relatively compact in a topological space $X$ if the closure of $Y$ in $X$ is compact. \end{definition} Let's recall some basic definitions related to almost complex manifolds. Suppose $M$ is a manifold with an almost complex structure $J$, that is, a field of endomorphisms of $T_{p} M$, defined for all $p \in M$, such that $J^{2} = - \text{Id}$. We call $M$ an almost complex manifold and also denote it by $(M,J)$. The dimension of $M$ is necessarily even. We say a $2$-form $\omega$ on $M$ is a symplectic form if $\omega$ is closed and nondegenerate. \begin{definition} We say a $2$-form $\omega$ tames $J$ if $\omega(v,Jv) > 0$ for any nonzero tangent vector $v$ on $M$. We say $\omega$ is compatible with $J$ if $\omega(\cdot, J \cdot)$ is a Riemannian metric on $M$. \end{definition} Clearly, if $\omega$ is compatible with $J$, then $\omega$ tames $J$. If $\omega$ tames $J$, then $\omega$ is obviously nondegenerate. In the compatible case, $J$ preserves the Riemannian metric $\omega(\cdot, J \cdot)$, and $H(\cdot, \cdot) = \omega(\cdot, J \cdot) - i \omega(\cdot, \cdot)$ defines a Hermitian metric on $M$ with respect to $J$, where $i$ is the imaginary unit, i.e.\ $i^{2}=-1$. \begin{definition}\label{definition_totally_real} A submanifold $W$ of $(M,J)$ is said to be totally real if $\dim (W) = \frac{1}{2} \dim (M)$ and $J(T_{p}W) \cap T_{p}W = \{0\}$ for all $p \in W$. \end{definition} \begin{definition} Suppose $(S, J_{1})$ and $(N, J_{2})$ are manifolds with almost complex structures $J_{1}$ and $J_{2}$. A $C^{1}$ map $h: S \rightarrow N$ is $J$-holomorphic or pseudo-holomorphic if the derivative of $h$ is complex linear with respect to $J_{1}$ and $J_{2}$, i.e. \[ J_{2} \cdot dh = dh \cdot J_{1}. \] If $S$ is a Riemann surface, we call such a map a $J$-holomorphic curve or a pseudo-holomorphic curve in $N$.
\end{definition} We now fix some notation for subsets of the complex plane $\mathbb{C}$ which are frequently used throughout this paper. Denote by \begin{equation}\label{half_disk} D^{+} = \{ z \in \mathbb{C} \mid |z|<1, \text{Im}z \geq 0 \}, \end{equation} the half disk in the complex plane. Define \begin{equation}\label{pucture_half_disk} \check{D}^{+} = D^{+} - \{0\} \end{equation} as the punctured half disk. Clearly, $\check{D}^{+}$ is a Riemann surface with boundary. Its boundary is \begin{equation} \partial \check{D}^{+} = \{ z \in \mathbb{R} \mid 0 < |z| < 1 \}. \end{equation} The main goal of this paper is to present a new and geometric proof of the following two theorems due to Gromov \cite[1.3.C]{gromov}. They are the precise versions of Theorems \ref{theorem_A} and \ref{theorem_B} in the Introduction. \begin{theorem}\label{theorem_finite_area} Suppose $(M,J)$ is a smooth almost complex manifold. Suppose $f: \check{D}^{+} \rightarrow (M,J)$ is a smooth $J$-holomorphic map such that $f(\partial \check{D}^{+}) \subseteq W$, where $W$ is a smoothly embedded totally real submanifold and a closed subset of $M$. Suppose the image of $f$ is relatively compact and the area of the image is finite for some Riemannian metric on $M$. Then $f$ has a smooth extension over $0 \in D^{+}$. \end{theorem} Actually, the finiteness of the area of the image in Theorem \ref{theorem_finite_area} does not depend on the Riemannian metric (see the comment before Observation \ref{claim_metric}). \begin{theorem}\label{theorem_tame} Suppose $(M,J)$ is a smooth almost complex manifold. Suppose $f: \check{D}^{+} \rightarrow (M,J)$ is a smooth $J$-holomorphic map such that $f(\partial \check{D}^{+}) \subseteq W$, where $W$ is a smoothly embedded totally real submanifold and a closed subset of $M$. Denote by $\iota: W \hookrightarrow M$ the inclusion of $W$. Suppose $J$ is tamed by $d \alpha$, where $\alpha$ is a smooth $1$-form on $M$ such that $\iota^{*} \alpha$ is exact on $W$.
Suppose the image of $f$ is relatively compact. Then $f$ has a smooth extension over $0 \in D^{+}$. \end{theorem} Now we give a proof of our counterexample in the Introduction. \begin{proof}[Proof of Example \ref{example_no_extension}] We know that $W$ is compact and $d \alpha = dx \wedge dy$ is even compatible with $J$. However, $\iota^{*} \alpha$ is not exact on the unit circle $W$. Actually, there exists no $1$-form $\beta$ such that $d \beta$ tames $J$ and $\iota^{*} \beta$ is exact. Otherwise, by Stokes' formula, there would be no nonconstant compact $J$-holomorphic curve inside $M$ whose boundary lies on $W$. However, the inclusion of the closed unit disk is such a curve, a contradiction. Clearly, $f$ is $J$-holomorphic. By direct computation, we have $f(\partial \check{D}^{+}) \subseteq W$. Furthermore, $|f(z)| \leq 1$ for all $z \in \check{D}^{+}$, which implies that the image of $f$ is relatively compact. Nevertheless, the limit $\displaystyle \lim_{z \rightarrow 0} f(z)$ does not exist. Indeed, $0$ is an essential singularity of $\exp (\frac{-i}{z})$. Therefore, $f$ has no continuous extension over $0$. \end{proof} \begin{remark} We suggest that readers study Example \ref{example_no_extension} together with Lemma \ref{lemma_form_zero}, Example \ref{example_stokes_disk} and Lemma \ref{lemma_isoperimetric_form}. \end{remark} As mentioned in the Introduction, the following simple example shows that Theorem \ref{theorem_tame} has its advantage over Theorem \ref{theorem_finite_area} in certain cases. \begin{example}\label{example_advantage} Let $f$ be a complex valued function on $\check{D}^{+}$ such that $f$ is smooth and bounded. Suppose $f$ is holomorphic in the interior of $\check{D}^{+}$. Suppose $f$ takes real values on $\partial \check{D}^{+}$. By the Schwarz reflection principle, $f$ extends to a bounded holomorphic function on $\check{D}$, which implies that $0$ is a removable singularity.
This removal of singularities trivially follows from Theorem \ref{theorem_tame} by taking $M = \mathbb{C}$, $W = \mathbb{R}$ and $\alpha = x dy$. However, it's not easy to apply Theorem \ref{theorem_finite_area} because it's nontrivial to show that the area of the image of $f$ is finite. \end{example} \section{Set Up}\label{section_set_up} In this section, we shall give some notation, definitions and results which are frequently used throughout this paper. First, together with (\ref{half_disk}) and (\ref{pucture_half_disk}), we define some subsets of the complex plane $\mathbb{C}$ as \begin{equation} D = \{ z \in \mathbb{C} \mid |z|<1 \}, \qquad \overline{D} = \{ z \in \mathbb{C} \mid |z| \leq 1 \}, \end{equation} \begin{equation}\label{lower_half_disk} D^{-} = \{ z \in \mathbb{C} \mid |z|<1, \text{Im}z \leq 0 \}, \qquad \check{D}^{-} = D^{-} - \{0\}, \end{equation} \begin{equation} D(r) = \{ z \in \mathbb{C} \mid |z|<r \}, \qquad \check{D}^{+}(r) = \{ z \in D(r) \mid z \neq 0, \text{Im}z \geq 0 \}, \end{equation} and \begin{equation} \partial D(r) = \{ z \in \mathbb{C} \mid |z|=r \}. \end{equation} Let's go back to Definition \ref{definition_image_volume}. If $S$ is also a $C^{1}$ Riemannian manifold, then the volume of the image, $|h(S)|$, also equals the Riemannian integral of the (absolute value of the) Jacobian of $h$ on $S$. In particular, suppose $S$ is a Riemann surface and $N$ is an almost complex manifold. Suppose both $S$ and $N$ are equipped with Hermitian metrics. Suppose $h$ is $J$-holomorphic. Then $h$ is conformal and therefore the Jacobian of $h$ equals $\|dh\|^{2}$. Suppose $\varphi: D \rightarrow N$ is a $C^{1}$ $J$-holomorphic map from the unit disk to a $C^{1}$ almost complex manifold $N$. Equip $D$ with the standard Euclidean metric, and equip $N$ with a $C^{1}$ Hermitian metric. For $0< r \leq 1$, define \[ A(r) = |\varphi(D(r))|. \] For $0< r <1$, define \[ L(r) = |\varphi(\partial D(r))|. 
\] By the conformality of $\varphi$, we infer \begin{equation}\label{holomorphic_circle_length} A(r) = \int_{D(r)} \| d \varphi \|^{2}, \qquad \text{and} \qquad L(r) = \int_{\partial D(r)} \| d \varphi \|. \end{equation} We see that $A(r)$ is $C^{1}$ on $[0,1)$ and $L(r)$ is $C^{0}$ on $[0,1)$. Furthermore, we can easily prove the following formula, for $r \in (0,1)$, (see the statement and the proof in the last line of \cite[p.\ 315]{gromov}) \begin{equation}\label{gromov_formula} \frac{d}{dr} A(r) \geq (2 \pi r)^{-1} L(r)^{2}. \end{equation} Indeed, writing $A(r)$ in polar coordinates gives $\frac{d}{dr} A(r) = \int_{\partial D(r)} \| d \varphi \|^{2}$, and the Cauchy--Schwarz inequality yields $L(r)^{2} \leq 2 \pi r \int_{\partial D(r)} \| d \varphi \|^{2}$. This simple and powerful formula will play a key role in our argument. We can obviously make a slight generalization of (\ref{gromov_formula}). Actually, this has been done in the proof of \cite[1.3.B']{gromov}. Suppose the above $\varphi$ is only defined on $D-\{z_{0}\}$ and $A(1)$ is finite. Then $A$ is continuous on $[0,1]$, which follows from the absolute continuity of the integral. Furthermore, $A$ is $C^{1}$ on $[0, |z_{0}|) \cup (|z_{0}|,1)$, (here $z_{0}$ could be $0$), and (\ref{gromov_formula}) still holds in the interior of these intervals. This generalization will be frequently used in this paper. Suppose $(M,J)$ is an almost complex manifold. Let $H$ be a Hermitian metric on $M$. Then $\text{Re}H$, the real part of $H$, is a Riemannian metric on $M$, and $-\text{Im}H$, the negative of the imaginary part of $H$, is a nondegenerate $2$-form on $M$. In order to prove Theorems \ref{theorem_finite_area} and \ref{theorem_tame}, Gromov suggests the following lemma (\cite[1.3.C]{gromov}). This construction has been used in some previous proofs (e.g. \cite[Section 4.3]{mcduff_salamon}). It is important for us as well. \begin{lemma}\label{lemma_hermitian} Suppose $W$ is a closed totally real submanifold of $(M, J)$. Then there exists a Hermitian metric $H$ on $M$ which satisfies the following properties. (1). $TW \perp J(TW)$ with respect to $\text{Re}H$. (2). $W$ is totally geodesic with respect to $\text{Re}H$. (3).
There exists a $1$-form $\alpha_{0}$ in a neighborhood of $W$ such that $\iota^{*} \alpha_{0} = 0$ on $W$, $d \alpha_{0}$ tames $J$ and $d \alpha_{0} = - \text{Im}H$ on $TM|_{W}$, where $\iota: W \hookrightarrow M$ is the inclusion. \end{lemma} \begin{proof} A proof of (1) and (2) is presented in \cite[Lemma 4.3.3]{mcduff_salamon}. One can construct such an $H$ first in a neighborhood $U$ of $W$ and then extend it (using that $W$ is closed) to a Hermitian metric $H$ defined on $M$ satisfying (1) and (2). Now we prove (3). By (1), we know $J(TW)$ is the normal bundle of $W$ inside $M$. By using the exponential map, a neighborhood of $W$ in $M$ is identified with a neighborhood of $W$ (i.e.\ the zero section) in $J(TW)$. By using $\text{Re}H$, we can further identify $J(TW)$ with $T^{*}W$ as follows. For each $v \in J(T_{x}W)$, the map \[ w \rightarrow \text{Re}H (v, Jw) \] is a linear form on $T_{x}W$, where $w \in T_{x}W$. Therefore, a neighborhood $U$ of $W$ in $M$ is identified with a neighborhood $\mathcal{N}$ of $W$ (i.e.\ the zero section) in $T^{*}W$. On the manifold $T^{*}W$, there exists a natural $1$-form $\alpha_{0}$: it is locally written as $-\sum_{j=1}^{n} p_{j} d q_{j}$ in terms of the standard (position, momentum) coordinates $((q_{1}, \cdots, q_{n}),(p_{1}, \cdots, p_{n}))$. By the identification between $U$ and $\mathcal{N}$, we have that $\alpha_{0}$ is defined on $U$ and $\iota^{*} \alpha_{0} = 0$. We can also check that, for $w_{1}, w_{2} \in T_{x}W \subseteq T_{x}U$ and $v_{1}, v_{2} \in J(T_{x}W) \subseteq T_{x}U$, \begin{eqnarray*} & & d \alpha_{0} (w_{1} + v_{1}, w_{2} + v_{2}) \\ & = & - \text{Re}H (v_{1}, Jw_{2}) + \text{Re}H (v_{2}, Jw_{1}) \\ & = & - \text{Im}H (v_{1}, w_{2}) - \text{Im}H (w_{1}, v_{2}) \\ & = & - \text{Im}H (w_{1} + v_{1}, w_{2} + v_{2}). \end{eqnarray*} We infer that $d \alpha_{0} = - \text{Im}H$, i.e.\ $d \alpha_{0}$ is compatible with $J$, on $TM|_{W}$.
Therefore, $d \alpha_{0}$ tames $J$ in a (possibly smaller) neighborhood of $W$. \end{proof} In Theorem \ref{theorem_finite_area}, we assume that the image of $f$ has finite area with respect to a specific Riemannian metric. Actually, the finiteness of the area does not depend on the metric: any two Riemannian metrics on $M$ are equivalent on the closure of $f(\check{D}^{+})$, which is compact. Therefore, we get the following. \begin{observation}\label{claim_metric} Without loss of generality, when we prove Theorem \ref{theorem_finite_area}, we may assume that the metric in the assumption of this theorem is the $\text{Re}H$ in Lemma \ref{lemma_hermitian}. \end{observation} Now we point out that we may assume $f$ is an embedding in the assumption of Theorems \ref{theorem_finite_area} and \ref{theorem_tame} when we prove them. The idea is to replace $f$ by its graph. More precisely, we do the following graph construction. Define \[ \hat{M} = \mathbb{C} \times M, \] which is also an almost complex manifold (with the product almost complex structure). Define $\hat{f}: \check{D}^{+} \rightarrow \hat{M} = \mathbb{C} \times M$ as \[ \hat{f} (z) = (z, f(z)). \] Then $\hat{f}$ is certainly a $J$-holomorphic embedding. Let \[ \hat{W} = \mathbb{R} \times W. \] Then $\hat{W}$ is a closed totally real submanifold of $\hat{M}$ and $\hat{f}(\partial \check{D}^{+}) \subseteq \hat{W}$. It's easy to see that $\hat{f}: \check{D}^{+} \rightarrow \hat{M}$ satisfies the assumption of Theorems \ref{theorem_finite_area} and \ref{theorem_tame} as long as $f$ does. If $\hat{f}$ has a $C^{k}$ extension over $0$, then certainly so does $f$. Therefore, we get the following. \begin{observation}\label{claim_embedding} Without loss of generality, we may assume $f$ is an embedding in the assumption of Theorems \ref{theorem_finite_area} and \ref{theorem_tame} when we prove them. \end{observation} Now we describe a cone construction which is useful to obtain certain isoperimetric inequalities for $J$-holomorphic curves.
Let $\Gamma = \gamma(S^{1})$ be a closed curve in a finite dimensional inner product vector space $V$, where $S^{1}$ is the unit circle and $\gamma: S^{1} \rightarrow V$ is a $C^{1}$ immersion. Define the center of mass of $\Gamma$ as \begin{equation}\label{mass_center} c = \frac{1}{|\Gamma|} \int_{S^{1}} \gamma \|d \gamma\|. \end{equation} The center of mass $c$ does not depend on the parametrization of $\Gamma$. Joining each point in $\Gamma$ to $c$ by a line segment, we construct a cone with vertex $c$ and boundary $\Gamma$. Denote this cone by $K(\Gamma)$. The following classical isoperimetric inequality shows a relation between the length of $\Gamma$ and the area of $K(\Gamma)$. \begin{lemma}\label{lemma_cone_isoperimetric} \[ |\Gamma|^{2} \geq 4 \pi |K(\Gamma)|. \] \end{lemma} The book \cite[Appendix A]{hummel} presents a quick proof of a result more general than Lemma \ref{lemma_cone_isoperimetric}. (Actually, the computation at the top of \cite[p.\ 116]{hummel} is sufficient for proving this lemma.) So it's safe to omit a proof here. \begin{remark} In the above cone construction, it's important to choose the vertex to be the center of mass $c$ in (\ref{mass_center}). Otherwise, Lemma \ref{lemma_cone_isoperimetric} would no longer be true. \end{remark} \section{A Reduction Lemma}\label{section_reduction_lemma} The main goal of this section is to prove the following a priori estimate which reduces Theorem \ref{theorem_tame} to Theorem \ref{theorem_finite_area}. \begin{lemma}\label{lemma_tame_to_area} Under the assumption of Theorem \ref{theorem_tame}, equip $M$ with an arbitrary Riemannian metric and equip $\check{D}^{+}$ with the standard Euclidean metric. Then there exists a constant $C>0$ such that \begin{equation}\label{lemma_tame_to_area_1} \|df(z)\| \leq \frac{C}{|z| \log\frac{1}{|z|}}. \end{equation} In particular $|f(\check{D}^{+}(r))| < + \infty$ for all $r\in(0,1)$.
\end{lemma} Lemma \ref{lemma_tame_to_area} shows that $|f(\check{D}^{+}(r))| < + \infty$ for any $r \in (0,1)$. Therefore, up to a holomorphic reparametrization of $\check{D}^{+}(r)$, we infer that $f: \check{D}^{+}(r) \rightarrow M$ in Theorem \ref{theorem_tame} satisfies the assumption of Theorem \ref{theorem_finite_area}. (More precisely, the map $f_{r}: \check{D}^{+} \rightarrow M$ satisfies the assumption of Theorem \ref{theorem_finite_area}, where $f_{r}(z) = f(rz)$. The map $z \rightarrow rz$ is a holomorphic reparametrization of $\check{D}^{+}(r)$.) Since the removal of singularities at $0 \in \check{D}^{+}$ is a local argument, Theorem \ref{theorem_tame} is reduced to Theorem \ref{theorem_finite_area}. Our strategy to prove (\ref{lemma_tame_to_area_1}) starts from the following observation. An estimate on $\|df\|$ is equivalent to an estimate on the metric on $\check{D}^{+}$ pulled back by $f$. To estimate the pullback metric, we need the intrinsic doubling mentioned in the Introduction. The pullback metric is extended to a metric $G$ on $\check{D}$. Following Gromov's approach in the interior case (see \cite[1.3.A']{gromov}), we bound $G$ in terms of the hyperbolic metric on $\check{D}$, which immediately implies (\ref{lemma_tame_to_area_1}). To bound $G$, we need to estimate the derivatives of the universal covering maps from the Euclidean disk $D$ to $(\check{D},G)$. In this process, we use certain isoperimetric inequalities resulting from the Gaussian curvature and forms on $\check{D}$ (see Lemmas \ref{lemma_isoperimetric_gauss} and \ref{lemma_isoperimetric_form}). We shall first study the metrics, the Gaussian curvature and the forms on $\check{D}$. As in Observation \ref{claim_metric}, we may assume the metric in Lemma \ref{lemma_tame_to_area} is the $\text{Re}H$ in Lemma \ref{lemma_hermitian}. By Observation \ref{claim_embedding}, we may also assume that $f$ is an embedding.
Then the pullback metric $f^{*}\text{Re}H$ makes sense and is conformal on $\check{D}^{+}$. Therefore, we have the following two lemmas given by \cite{gromov}. \begin{lemma}\label{lemma_gauss_upper} The Gaussian (sectional) curvature of $f^{*}\text{Re}H$ on $\check{D}^{+}$ has an upper bound. \end{lemma} Lemma \ref{lemma_gauss_upper} is proved in \cite[1.1.B]{gromov} under the assumption that $M$ is compact. More details can be found in \cite[p.\ 219]{muller}. This argument certainly works for us because $f(\check{D}^{+})$ is relatively compact. \begin{lemma}\label{lemma_geodesic} $\partial \check{D}^{+}$ is totally geodesic in $\check{D}^{+}$ with respect to $f^{*} \text{Re}H$. \end{lemma} As mentioned in \cite[1.3.C]{gromov}, Lemma \ref{lemma_geodesic} follows from (1) and (2) in Lemma \ref{lemma_hermitian} and the fact that $f$ is $J$-holomorphic. We omit a proof here since it is easy. Define $\sigma: \mathbb{C} \rightarrow \mathbb{C}$ as the complex conjugation, i.e.\ $\sigma(z) = \bar{z}$. By (\ref{lower_half_disk}), we have $\check{D}^{-} = \sigma (\check{D}^{+})$. Define a Riemannian metric $G$ on $\check{D}$ as \begin{equation}\label{metric_on_disk} G = \begin{cases} f^{*} \text{Re}H & \text{on $\check{D}^{+}$}, \\ \sigma^{*} f^{*} \text{Re}H & \text{on $\check{D}^{-}$}. \end{cases} \end{equation} The importance of (2) in Lemma \ref{lemma_hermitian}, i.e.\ the fact that $W$ is totally geodesic, lies in the following lemma. \begin{lemma}\label{lemma_metric_on_disk} The metric $G$ in (\ref{metric_on_disk}) is a well defined $C^{2}$ conformal metric on $\check{D}$. In particular, the Gaussian curvature of $G$ makes sense. Furthermore, $G$ is smooth on $\check{D}^{+}$ and $\check{D}^{-}$, and $\sigma$ is an isometry. \end{lemma} \begin{proof} By (1) in Lemma \ref{lemma_hermitian} and the fact that $f$ is $J$-holomorphic, we see that $f^{*} \text{Re}H = \sigma^{*} f^{*} \text{Re}H$ on $\partial \check{D}^{+} = \partial \check{D}^{-}$.
Therefore, $G$ is well defined. Everything else in this lemma is easy to check except perhaps that $G$ is $C^{2}$. Let's prove this. Since $G$ is conformal, using the coordinate $z = x+iy$, we have \[ G = g(x,y) dx \otimes dx + g(x,y) dy \otimes dy. \] We know that $g$ is continuous on $\check{D}$, and smooth on $\check{D}^{\pm}$. Since $\sigma$ is an isometry, we also have \begin{equation}\label{lemma_metric_on_disk_1} g(x,y) = g(x,-y). \end{equation} By Lemma \ref{lemma_geodesic}, we infer that \begin{equation}\label{lemma_metric_on_disk_2} \frac{\partial g}{\partial y^{+}} (x,0) = 0, \end{equation} where $\frac{\partial g}{\partial y^{+}}$ is the upper half partial derivative with respect to $y$. By (\ref{lemma_metric_on_disk_1}) and (\ref{lemma_metric_on_disk_2}), we infer that $\frac{\partial g}{\partial y} (x,0)$ exists. Then $\frac{\partial^{2} g}{\partial y^{2}} (x,0)$ automatically exists because (\ref{lemma_metric_on_disk_1}) tells us $g$ is an even function of $y$. Now it's easy to check that $g$ is $C^{2}$ on $\check{D}$. \end{proof} Lemmas \ref{lemma_gauss_upper} and \ref{lemma_metric_on_disk} immediately imply the following. \begin{lemma}\label{lemma_gauss} The Gaussian curvature of $G$ on $\check{D}$ has an upper bound. \end{lemma} Now we consider differential forms. We shall construct a bounded $1$-form $\alpha'$ on $\check{D}$ which is a piecewise primitive of a symplectic form $\eta$ on $\check{D}$. Both forms $\alpha'$ and $\eta$ are obtained by a symmetric construction involving the pullbacks of $\alpha$ and $d\alpha$ to $\check{D}$. The exactness of $\iota^*\alpha$ enables us to establish a good property for $(\alpha',\eta)$ which ensures a Stokes' formula (see Example \ref{example_stokes_disk}). Such a property will be useful in Lemma 4.9 for carrying out certain arguments on $\check{D}$ which are similar to arguments Gromov used on $M$.
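Before normalizing $\alpha$, let us verify these conditions in the simplest case, that of Example \ref{example_advantage}, where $M = \mathbb{C}$ with $J = i$, $W = \mathbb{R}$ and $\alpha = x \, dy$. Then $d\alpha = dx \wedge dy$, and for any nonzero tangent vector $v = a \partial_{x} + b \partial_{y}$, \[ d\alpha(v, iv) = (dx \wedge dy)(a \partial_{x} + b \partial_{y}, -b \partial_{x} + a \partial_{y}) = a^{2} + b^{2} > 0, \] so $d\alpha$ tames $i$. Moreover, $\iota^{*} \alpha = 0$ already, since $dy$ vanishes along $\mathbb{R}$; in this example no modification of $\alpha$ is needed.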
\begin{lemma}\label{lemma_form_zero} There exists a $1$-form $\alpha_{1}$ on $M$ such that $\iota^{*} \alpha_{1} = 0$ on $W$ and $d \alpha_{1} = d \alpha$. \end{lemma} \begin{proof} Since $\iota^{*} \alpha$ is exact, there exists a function $\mu$ on $W$ such that $\iota^{*} \alpha = d \mu$. Since $W$ is closed, we can extend $\mu$ to be a function $\mu_{1}$ on $M$ such that $\iota^{*} \mu_{1} = \mu$. We finish the proof by defining $\alpha_{1} = \alpha - d \mu_{1}$. \end{proof} Lemma \ref{lemma_form_zero} tells us that we may replace $\alpha$ in Theorem \ref{theorem_tame} by $\alpha_{1}$. Therefore, from now on, we assume that $\iota^{*} \alpha = 0$. Define a $2$-form $\eta$ on $\check{D}$ as \begin{equation}\label{2_form_disk} \eta = \begin{cases} f^{*} d \alpha & \text{on $\check{D}^{+}$}, \\ -\sigma^{*} f^{*} d \alpha & \text{on $\check{D}^{-}$}. \end{cases} \end{equation} It's easy to check that $f^{*} d \alpha|_{\partial \check{D}^{+}} = -\sigma^{*} f^{*} d \alpha|_{\partial \check{D}^{+}}$. Therefore, $\eta$ is well defined. We infer that $\eta$ is smooth on $\check{D}^{+}$ and $\check{D}^{-}$, and continuous on $\check{D}$. Similarly, define a $1$-form $\alpha'$ on $\check{D}$ as \begin{equation}\label{1_form_disk} \alpha' = \begin{cases} f^{*} \alpha & \text{on $\check{D}^{+}$}, \\ -\sigma^{*} f^{*} \alpha & \text{on $\check{D}^{-}$}. \end{cases} \end{equation} Let $\iota_{0}: \partial \check{D}^{+} \hookrightarrow \check{D}$ be the inclusion. By the assumptions that $\iota^{*} \alpha = 0$ and $f(\partial \check{D}^{+}) \subseteq W$, we get \[ \iota_{0}^{*} f^{*} \alpha = 0. \] Therefore, it's easy to check that $\alpha'$ is well defined and continuous on $\check{D}$. It's important to observe that, if the assumption $\iota^{*} \alpha = 0$ is dropped, one cannot expect that $\alpha'$ is continuous on $\check{D}$. Clearly, on $\check{D}^{+}$ and $\check{D}^{-}$, $\alpha'$ is smooth and \begin{equation}\label{differential_disk} d \alpha' = \eta. 
\end{equation} By Example \ref{example_no_extension}, Theorem \ref{theorem_tame} will no longer be true if we drop the assumption that $\iota^{*} \alpha = 0$ (or more generally that $\iota^{*} \alpha$ is exact). How does this assumption help our proof? The following is a quintessential example. \begin{example}\label{example_stokes_disk} Suppose $\Omega$ is a closed disk in $\check{D}$. Then $\partial \check{D}^{+}$ divides $\Omega$ into two parts $\Omega^{+}$ and $\Omega^{-}$. (See Figure \ref{figure_1}. The shaded part is $\Omega$.) \begin{figure}[!htbp] \centering \includegraphics[scale=0.24]{Figure1.pdf} \caption{} \label{figure_1} \end{figure} By (\ref{differential_disk}), we have \[ \int_{\Omega^{\pm}} \eta = \int_{\Omega^{\pm}} d \alpha' = \int_{\partial \Omega^{\pm}} \alpha' \] and \[ \int_{\Omega} \eta = \int_{\Omega^{+}} \eta + \int_{\Omega^{-}} \eta = \int_{\partial \Omega^{+}} \alpha' + \int_{\partial \Omega^{-}} \alpha'. \] Since $\iota_{0}^{*} \alpha' = 0$, the integrals of $\alpha'$ along the real line vanish. Therefore, we get the important formula \begin{equation}\label{example_stokes_disk_1} \int_{\Omega} \eta = \int_{\partial \Omega} \alpha'. \end{equation} Actually, as long as $\alpha'$ is continuous on $\check{D}$, we still get (\ref{example_stokes_disk_1}) because the integrals along $\partial \check{D}^{+}$ and $\partial \check{D}^{-}$ cancel each other. However, as mentioned before, $\alpha'$ as defined above would not be continuous on $\check{D}$ if $\iota^{*}\alpha \neq 0$. \end{example} \begin{lemma}\label{lemma_form_disk} There exist constants $C_{1}>0$ and $C_{2}>0$ such that \[ G(v,v) \leq C_{1} \eta(v, iv) \] and \[ \|\alpha'\|_{G} \leq C_{2}, \] where $v$ is any tangent vector on $\check{D}$, $i$ is the complex structure on $\check{D}$ and $\|\alpha'\|_{G}$ is the norm of $\alpha'$ with respect to $G$.
\end{lemma} \begin{proof} Since $f(\check{D}^{+})$ is relatively compact and $d \alpha$ tames $J$, we infer there exist $C_{1}>0$ and $C_{2}>0$ such that, on $f(\check{D}^{+})$, \[ H(w,w) \leq C_{1} d \alpha(w, Jw) \] and \[ \|\alpha\|_{\text{Re}H} \leq C_{2}, \] where $w$ is any tangent vector tangent to $f(\check{D}^{+})$. By the definitions of $G$, $\eta$ and $\alpha'$, the lemma follows. \end{proof} Let $\varphi: D \rightarrow N$ be a $C^{1}$ $J$-holomorphic map, where $N$ is a $C^{1}$ almost complex manifold. Let $A(r)= |\varphi(D(r))|$ and $L(r) = |\varphi(\partial D(r))|$. Suppose $A(r) \rightarrow 0$ as $r \rightarrow 0$. As we can see in \cite[1.2 and 1.3]{gromov} and in this paper, studying the decay rate of $A(r)$ gives very important geometric information. How does one obtain such a decay rate? Gromov tells us it's sufficient to use two tools. The first one is the formula (\ref{gromov_formula}). The second one is an isoperimetric inequality. Following \cite[1.2 and 1.3]{gromov}, we shall build two types of isoperimetric inequalities in this paper. The first one is $L^{2} \geq C A$ or more generally $L^{2} \geq C A - C_{1}A^{2}$. The second one is $L^{2} \geq C A^{2}$. Comparing them, the first one is better when $A$ is small, while the second one becomes better when $A$ is large. The second one is important when we cannot bound $A(1)$, which lies at the heart of how the form $\alpha$ plays a role in Theorem \ref{theorem_tame} and Lemma \ref{lemma_tame_to_area} (see also Remark \ref{remark_isoperimetric_decay}). For $z_{0} \in \check{D}$, let $\varphi_{z_{0}}: D \rightarrow \check{D}$ be a holomorphic universal covering such that $\varphi_{z_{0}}(0) = z_{0}$.
It's well-known that $D$ carries a hyperbolic metric $2 (1-|z|^{2})^{-1} |dz|$ and $\check{D}$ carries a hyperbolic metric $\left( |z| \log \frac{1}{|z|} \right)^{-1} |dz|$, and \begin{equation}\label{hyperbolic_covering} \varphi_{z_{0}}: \left( D, \frac{2 |dz|}{1-|z|^{2}} \right) \rightarrow \left( \check{D}, \frac{|dz|}{|z| \log \frac{1}{|z|}} \right) \end{equation} is a local isometry. Although we used the hyperbolic metrics above to describe the map $\varphi_{z_0}$, unless otherwise indicated, the metric on $D$ is the Euclidean metric and the metric on $\check{D}$ is $G$. In particular, the subsequent isoperimetric inequalities are with respect to the metric $G$ on $\check{D}$. (Compare Definition \ref{definition_image_volume}.) By Lemma \ref{lemma_gauss} and the classical isoperimetric inequality in terms of Gaussian curvature \cite[Theorem (1.2)]{barbosa_carmo} (see also \cite[p.\ 1206, (4.25)]{osserman}), we obtain the following isoperimetric inequality. \begin{lemma}\label{lemma_isoperimetric_gauss} There exists a constant $C>0$ such that, for all $r \in (0,1)$ and $z_{0} \in \check{D}$, \[ |\varphi_{z_{0}}(\partial D(r))|^{2} \geq 4 \pi \left( |\varphi_{z_{0}}(D(r))| - C |\varphi_{z_{0}}(D(r))|^{2} \right). \] \end{lemma} In the proof of the following isoperimetric inequality, we shall see the importance of the assumption on $\alpha$ in Theorem \ref{theorem_tame}. (Compare \cite[p.\ 317, (8)]{gromov}.) \begin{lemma}\label{lemma_isoperimetric_form} There exists a constant $C>0$ such that, for all $r \in (0,1)$ and $z_{0} \in \check{D}$, \[ |\varphi_{z_{0}}(\partial D(r))|^{2} \geq 2 \pi C |\varphi_{z_{0}}(D(r))|^{2}. \] \end{lemma} \begin{proof} We know that $\varphi_{z_{0}}^{-1}(\partial \check{D}^{+})$ is a countable union of circular arcs whose ends are on the boundary of $D$. (Figure \ref{figure_2} shows some of these circular arcs. They are in fact geodesics of the hyperbolic metric on $D$.)
\begin{figure}[!htbp] \centering \includegraphics[scale=0.24]{Figure2.pdf} \caption{} \label{figure_2} \end{figure} We consider the integral $\int_{D(r)} \varphi_{z_{0}}^{*} \eta$. Clearly, $\varphi_{z_{0}}^{-1}(\partial \check{D}^{+})$ divides $D(r)$ into several domains $\Omega_{1}, \cdots, \Omega_{k}$, which are compact submanifolds with corners inside $D$. (This is illustrated by Figure \ref{figure_2}. The shaded part is $D(r)$.) In each $\Omega_{i}$, by (\ref{differential_disk}), we have \[ \varphi_{z_{0}}^{*} \eta = d \varphi_{z_{0}}^{*} \alpha'. \] Applying Stokes' formula, we get \[ \int_{\Omega_{i}} \varphi_{z_{0}}^{*} \eta = \int_{\partial \Omega_{i}} \varphi_{z_{0}}^{*} \alpha'. \] As in Example \ref{example_stokes_disk}, the line integral of $\varphi_{z_{0}}^{*} \alpha'$ vanishes on $\varphi_{z_{0}}^{-1}(\partial \check{D}^{+})$. Thus, summing up these integrals, we get \[ \int_{D(r)} \varphi_{z_{0}}^{*} \eta = \int_{\partial D(r)} \varphi_{z_{0}}^{*} \alpha'. \] By Lemma \ref{lemma_form_disk}, we get \[ |\varphi_{z_{0}}(D(r))| \leq C_{1} \int_{D(r)} \varphi_{z_{0}}^{*} \eta = C_{1} \int_{\partial D(r)} \varphi_{z_{0}}^{*} \alpha' \leq C_{1} C_{2} |\varphi_{z_{0}}(\partial D(r))|, \] where $C_{1}$ and $C_{2}$ are the constants in Lemma \ref{lemma_form_disk}. This finishes the proof. \end{proof} \begin{lemma}\label{lemma_bound_disk_covering} There exists a constant $C>0$ such that $\| d \varphi_{z_{0}} (0) \| \leq C$ for all maps $\varphi_{z_{0}}$. \end{lemma} \begin{proof} Let $A(t)= |\varphi_{z_{0}}(D(t))|$ and $L(t) = |\varphi_{z_{0}}(\partial D(t))|$. By (\ref{gromov_formula}) and Lemmas \ref{lemma_isoperimetric_form} and \ref{lemma_isoperimetric_gauss}, there exist constants $C_{3}>0$ and $C_{4}>0$ independent of $\varphi_{z_{0}}$ such that \begin{equation}\label{lemma_bound_disk_covering_1} \dot{A} \geq t^{-1} C_{3} A^{2}, \end{equation} and \begin{equation}\label{lemma_bound_disk_covering_2} \dot{A} \geq 2 t^{-1} (A - C_{4} A^{2}).
\end{equation} Since $A>0$ for $t \in (0,1)$, by (\ref{lemma_bound_disk_covering_1}), we infer \[ A^{-2} \dot{A} \geq C_{3} t^{-1}. \] So, for $0 < r < \frac{1}{2}$, we have \[ \int_{r}^{\frac{1}{2}} A^{-2} \dot{A} dt \geq \int_{r}^{\frac{1}{2}} C_{3} t^{-1} dt \] or \begin{equation}\label{lemma_bound_disk_covering_3} A(r) \leq \frac{1}{A(\frac{1}{2})^{-1} - C_{3} \log 2r}, \end{equation} where $A(\frac{1}{2}) < + \infty$. Choose $r_{0}$ such that \[ - C_{3} \log 2 r_{0} = 2 C_{4}, \] we get \begin{equation}\label{lemma_bound_disk_covering_4} r_{0} = \frac{1}{2} e^{-\frac{2C_{4}}{C_{3}}}. \end{equation} By (\ref{lemma_bound_disk_covering_3}) and (\ref{lemma_bound_disk_covering_4}), we get \begin{equation}\label{lemma_bound_disk_covering_5} A(r_{0}) \leq \frac{1}{A(\frac{1}{2})^{-1} - C_{3} \log 2r_{0}} \leq \frac{1}{- C_{3} \log 2r_{0}} = \frac{1}{2 C_{4}}. \end{equation} For $0 < r \leq r_{0}$, since $A(r) \leq A(r_{0})$, we have \begin{equation}\label{lemma_bound_disk_covering_6} A(r) - C_{4} A(r)^{2} \geq A(r)(1- C_{4} A(r_{0})) \geq 2^{-1} A(r) > 0. \end{equation} By (\ref{lemma_bound_disk_covering_2}) and (\ref{lemma_bound_disk_covering_6}), we get, for $0 < r \leq r_{0}$, \begin{equation}\label{lemma_bound_disk_covering_7} (A - C_{4} A^{2})^{-1} \dot{A} \geq 2 t^{-1}. \end{equation} Integrating both sides of (\ref{lemma_bound_disk_covering_7}) on $[r,r_{0}]$, we get \[ \log \frac{A(r_{0})}{1 - C_{4} A(r_{0})} - \log \frac{A(r)}{1 - C_{4} A(r)} \geq \log \left( \frac{r_{0}}{r} \right)^{2} \] or \begin{equation}\label{lemma_bound_disk_covering_8} \frac{A(r)}{1 - C_{4} A(r)} \leq \frac{A(r_{0})}{1 - C_{4} A(r_{0})} \left( \frac{r}{r_{0}} \right)^{2}. 
\end{equation} By (\ref{lemma_bound_disk_covering_4}), (\ref{lemma_bound_disk_covering_5}) and (\ref{lemma_bound_disk_covering_8}), we get, for $r \leq r_{0}$, \begin{eqnarray}\label{lemma_bound_disk_covering_9} A(r) & \leq & \frac{A(r)}{1 - C_{4} A(r)} \leq \frac{A(r_{0})}{1 - C_{4} A(r_{0})} \left( \frac{r}{r_{0}} \right)^{2}\\ \nonumber & \leq & \frac{(2C_{4})^{-1}}{1 - C_{4} (2C_{4})^{-1}} \left( \frac{1}{2} e^{-\frac{2C_{4}}{C_{3}}} \right)^{-2} r^{2} = 4 C_{4}^{-1} e^{\frac{4C_{4}}{C_{3}}} r^{2}. \end{eqnarray} Therefore, \[ \|d\varphi_{z_{0}}(0)\|^{2} = \lim_{r \rightarrow 0} (\pi r^{2})^{-1} A(r) \leq 4 \pi^{-1} C_{4}^{-1} e^{\frac{4C_{4}}{C_{3}}}, \] which finishes the proof. \end{proof} \begin{remark}\label{remark_isoperimetric_decay} In the proof of Lemma \ref{lemma_bound_disk_covering}, by using Lemma \ref{lemma_isoperimetric_form}, we get the uniform estimate (\ref{lemma_bound_disk_covering_5}) for $A(r_{0})$, where both (\ref{lemma_bound_disk_covering_5}) and $r_{0}$ are independent of the maps $\varphi_{z_{0}}$. Based on this estimate, Lemma \ref{lemma_isoperimetric_gauss} yields the decay rate (\ref{lemma_bound_disk_covering_9}). \end{remark} We are ready to conclude this section. Since the finiteness of $|f(\check{D}^{+}(r))|$ in Lemma \ref{lemma_tame_to_area} follows from (\ref{lemma_tame_to_area_1}), it suffices to prove the following lemma which implies (\ref{lemma_tame_to_area_1}) immediately. \begin{lemma}\label{lemma_metric_hyperbolic_bound} Using the standard coordinate $z=x+iy$ on $\check{D}$, the metric $G$ has the form $g dx \otimes dx + g dy \otimes dy$. There exists a constant $C>0$ such that \[ \sqrt{g(z)} \leq \frac{C}{|z| \log\frac{1}{|z|}}. \] \end{lemma} \begin{proof} Following the proof of \cite[1.3.A']{gromov}, the idea of this proof is to compare $G$ with the hyperbolic metric on $\check{D}$. (See also \cite[p.\ 227]{muller} and \cite[p.\ 42]{hummel}.)
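Let us make this comparison metric explicit; the normalization below is an assumption, chosen to be consistent with the formulas used in this proof. The complete hyperbolic metric on the punctured disk $\check{D}$ is
\[
\frac{|dz|^{2}}{\left( |z| \log \frac{1}{|z|} \right)^{2}},
\]
namely the push-forward of the Poincar\'{e} metric $\frac{4 |dz|^{2}}{(1-|z|^{2})^{2}}$ on $D$ under the covering map (\ref{hyperbolic_covering}). Comparing the densities of the two metrics at $0 \in D$ and at $z_{0} \in \check{D}$, a local isometry must satisfy
\[
2 = \frac{|\varphi_{z_{0}}'(0)|}{|z_{0}| \log \frac{1}{|z_{0}|}},
\]
and the conclusion of the lemma says precisely that $G$ is dominated by a constant multiple of this hyperbolic metric.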
Consider the above $\varphi_{z_{0}}: D \rightarrow \check{D}$ with $\varphi_{z_{0}}(0) = z_{0}$. When $\varphi_{z_{0}}$ is viewed as a holomorphic function, we denote its derivative by $\varphi_{z_{0}}'(z)$. Clearly, \[ \| d\varphi_{z_{0}}(0) \| = \sqrt{g(z_{0})} |\varphi_{z_{0}}'(0)|. \] Since (\ref{hyperbolic_covering}) is a local isometry, we infer \[ |\varphi_{z_{0}}'(0)| = 2 |z_{0}| \log \frac{1}{|z_{0}|}. \] By Lemma \ref{lemma_bound_disk_covering}, there exists a constant $C_{0}>0$ such that \[ \| d\varphi_{z_{0}}(0) \| \leq C_{0}. \] Then we get \[ \sqrt{g(z_{0})} \leq \frac{C_{0}}{2} \frac{1}{|z_{0}| \log \frac{1}{|z_{0}|}}, \] which finishes the proof. \end{proof} \section{Pre-continuity}\label{section_pre-continuity} The goal of this section is to prove the following lemma. It is the first step toward improving the regularity of $f$ at $0 \in D^{+}$ under the assumption of Theorem \ref{theorem_finite_area}. The proof follows \cite[p.\ 135-136]{oh} (compare \cite[p.\ 41-42]{hummel}) which is a combination of the Courant-Lebesgue Lemma and a monotonicity lemma. \begin{lemma}\label{lemma_converge_neighborhood} Under the assumption of Theorem \ref{theorem_finite_area}, suppose $U$ is a neighborhood of $W$. Then there exists $r \in (0,1]$ such that \[ f(\check{D}^{+}(r)) \subseteq U. \] \end{lemma} In this section, we assume that the metric on $M$ is the $\text{Re}H$ in Lemma \ref{lemma_hermitian} and that $f$ is an embedding. \begin{remark} As one can check, the argument in this section does not use the manifold structure on $W$. If we only assume $W$ is a closed subset of $M$, Lemma \ref{lemma_converge_neighborhood} will still be true. \end{remark} First, we formulate a monotonicity lemma which describes an important local property of a $J$-holomorphic curve. \begin{lemma}\label{lemma_monotone} Let $K$ be a compact subset of $M$. There exist $\epsilon_{0} > 0$ and $C>0$ such that the following holds.
Suppose $h: S \rightarrow M$ is an arbitrary immersed compact $J$-holomorphic curve with boundary. Suppose $q \in h(S) \cap K$. Suppose $r \in (0, \epsilon_{0})$ and $h(\partial S)$ is contained in the complement of $B(q,r) \subseteq M$, where $B(q,r)$ is the open ball with center $q$ and radius $r$. Then \begin{equation}\label{lemma_monotone_1} |h(S)| \geq C r^{2}. \end{equation} \end{lemma} Lemma \ref{lemma_monotone} is a slight generalization of \cite[p.\ 21, Theorem 1.3]{hummel}. (See also \cite[4.2.1]{muller}.) It is proved in \cite[p.\ 26-28]{hummel} under the assumption that $M$ is compact and $K=M$. The proof is based on an isoperimetric inequality \cite[p.\ 23, Lemma 3.1]{hummel}. The book \cite[4.2.2]{muller} gives a quick proof of such an isoperimetric inequality. Their argument actually works in our case. Essentially, the compactness of $M$ is used in \cite{muller} and \cite{hummel} for proving two facts. First, the injectivity radius of $M$ is positive. Second, $M$ is covered by finitely many open sets such that $J$ is tamed by an exact form in each of these open sets. The second fact can be found in \cite[p.\ 224]{muller}, and will be addressed again in Section \ref{section_lipschitz_continuity} in this paper (see Remark \ref{remark_monotone}). In our case, $K$ also satisfies the above two properties because of its compactness. Therefore, the argument in \cite{hummel} implies Lemma \ref{lemma_monotone}. We shall omit a proof here. Now we prove a Courant-Lebesgue Lemma for our map $f: \check{D} \rightarrow M$. A much more general result can be found in Courant's book \cite[p.\ 101, Lemma\ 3.1]{courant} and \cite[p.\ 239]{d_h_k_w}. Our proof follows \cite{courant}. (See also \cite[Lemma 4.2]{oh}.) For $t \in (0,1)$, define \[ \Gamma_{t} = \{ z \in \check{D}^{+} \mid |z|=t \}. 
\] \begin{lemma}[Courant-Lebesgue Lemma]\label{lemma_courant_lebesgue} For all $r \in (0,1)$, there exists $t_{0} \in [r^{2},r]$ such that \[ |f(\Gamma_{t_{0}})|^{2} \leq \frac{\pi |f(\check{D}^{+})|}{\log \frac{1}{r}}. \] \end{lemma} \begin{proof} \begin{eqnarray*} |f(\check{D}^{+})| & \geq & \int_{\check{D}^{+}(r) - \check{D}^{+}(r^{2})} \| df \|^{2} \\ & = & \int_{r^{2}}^{r} \left( \int_{\Gamma_{t}} \|df\|^{2} \right) dt \\ & \geq & \int_{r^{2}}^{r} \left( \int_{\Gamma_{t}} \|df\| \right)^{2} \left( \int_{\Gamma_{t}} 1 \right)^{-1} dt \\ & = & \int_{r^{2}}^{r} |f(\Gamma_{t})|^{2} (\pi t)^{-1} dt. \end{eqnarray*} Since $|f(\Gamma_{t})|$ is continuous with respect to $t \in [r^{2},r]$, there exists $t_{0} \in [r^{2},r]$ such that $|f(\Gamma_{t_{0}})|$ attains the minimum. Therefore, \[ |f(\check{D}^{+})| \geq (\pi)^{-1} |f(\Gamma_{t_{0}})|^{2} \int_{r^{2}}^{r} t^{-1} dt = (\pi)^{-1} \left( \log \frac{1}{r} \right) |f(\Gamma_{t_{0}})|^{2}, \] which completes the proof. \end{proof} \begin{remark} More generally, if $f$ is not $J$-holomorphic, we still have a Courant-Lebesgue Lemma. However, we have to replace the $|f(\check{D}^{+})|$ in Lemma \ref{lemma_courant_lebesgue} by the energy (or the Dirichlet integral) of $f$. See \cite{courant}. \end{remark} \begin{proof}[Proof of Lemma \ref{lemma_converge_neighborhood}] Let $\overline{f(\check{D}^{+})}$ be the closure of $f(\check{D}^{+})$. Then $\overline{f(\check{D}^{+})}$ is compact. By Lemma \ref{lemma_monotone}, there exists $\epsilon_{0} > 0$ such that the conclusion of Lemma \ref{lemma_monotone} holds for $K = \overline{f(\check{D}^{+})}$. Since $|f(\check{D}^{+})| < + \infty$, by Lemma \ref{lemma_courant_lebesgue}, there exists a decreasing sequence $\{r_{n}\}$ such that $r_{n} \rightarrow 0$ and $|f(\Gamma_{r_{n}})| \rightarrow 0$ when $n \rightarrow \infty$. Define \[ \Omega_{n} = \{ z \in \check{D}^{+} \mid r_{n+1} \leq |z| \leq r_{n} \}. \] We prove this lemma by contradiction. 
We may assume $U$ is an open subset. Suppose this lemma is not true. Then, choosing a subsequence of $\{r_{n}\}$ if necessary, there exist $z_{n} \in \Omega_{n}$ such that $f(z_{n}) \notin U$. Denoting $f(z_{n})$ by $p_{n}$, we have $p_{n} \in \overline{f(\check{D}^{+})} - U$. Since $\overline{f(\check{D}^{+})}$ is compact and $U$ is open, we infer $\overline{f(\check{D}^{+})} - U$ is compact. Choosing a subsequence of $\{r_{n}\}$ if necessary, we get a sequence $\{p_{n}\}$ such that $p_{n} \in f(\Omega_{n})$ and $p_{n} \rightarrow p \notin U$ when $n \rightarrow \infty$. Since $W$ is closed and $W \subseteq U$, we infer that the distance $d_{0} = d(p,W) > 0$. Since the boundary of $f(\Gamma_{r_{n}})$ is in $W$ and $|f(\Gamma_{r_{n}})| \rightarrow 0$, we get \[ \max_{q \in f(\Gamma_{r_{n}})} d(q,W) \rightarrow 0, \] when $n \rightarrow \infty$. Clearly, \[ f(\partial \Omega_{n}) \subseteq W \cup f(\Gamma_{r_{n}}) \cup f(\Gamma_{r_{n+1}}). \] Thus, for $n$ large enough, we have \[ d(p, f(\partial \Omega_{n})) > \frac{2d_{0}}{3}. \] Deleting the first finitely many terms of $\{p_{n}\}$ if necessary, we have $p_{n} \in B(p, \frac{d_{0}}{3})$ for all $n$. We infer that \begin{equation}\label{lemma_converge_neighborhood_1} d(p_{n}, f(\partial \Omega_{n})) > \frac{d_{0}}{3}. \end{equation} Choose $\epsilon_{1} > 0$ such that $\epsilon_{1} < \min \{ \epsilon_{0}, \frac{d_{0}}{3} \}$. Since $\epsilon_{1} < \frac{d_{0}}{3}$, by (\ref{lemma_converge_neighborhood_1}), there exists $S_{n} \subseteq \Omega_{n}$ such that $z_{n} \in S_{n}$ and $f(\partial S_{n})$ is outside of $B(p_{n}, \epsilon_{1})$, where $S_{n}$ is a compact surface with boundary. Since $\epsilon_{1} < \epsilon_{0}$, taking $h=f$, $S = S_{n}$ and $r = \epsilon_{1}$ in (\ref{lemma_monotone_1}), we get \[ |f(S_{n})| \geq C \epsilon_{1}^{2}.
\] Thus, \[ |f(\check{D}^{+})| \geq \sum_{n=1}^{\infty} |f(\Omega_{n})| \geq \sum_{n=1}^{\infty} |f(S_{n})| \geq \sum_{n=1}^{\infty} C \epsilon_{1}^{2} = + \infty, \] which contradicts the assumption that $|f(\check{D}^{+})| < + \infty$. \end{proof} \section{Continuity}\label{section_continuity} The main goal of this section is to prove that, under the assumption of Theorem \ref{theorem_finite_area}, $f$ has a continuous extension over $0 \in D^{+}$. This will follow from the stronger Lemma \ref{lemma_continuity}, which gives a continuous extension of the extrinsic doubling map $F$. The extrinsic doubling map $F$ pulls back important information from $M$ to $\check{D}$, which makes our intrinsic doubling more efficient. In particular, this helps us take advantage of the ``almost minimal'' property of $J$-holomorphic curves (see the comment before Lemma \ref{lemma_isoperimetric_disk}). First of all, the combination of the intrinsic doubling and Lemmas \ref{lemma_hermitian} and \ref{lemma_converge_neighborhood} already gives us some information. We assume $M$ is equipped with $\text{Re}H$ and $f$ is an embedding. By using Lemma \ref{lemma_hermitian} and the exponential map for the bundle $J(TW) \rightarrow W$, there exists an open neighborhood $U$ of $W$ such that the following holds: (1).\ $U$ is identified with a neighborhood of the zero section of the bundle $J(TW) \rightarrow W$. (2).\ The $1$-form $\alpha_{0}$ in (3) of Lemma \ref{lemma_hermitian} is defined in $U$. Consider the natural reflection \begin{eqnarray}\label{reflection} \tau: J(TW) & \rightarrow & J(TW), \\ \tau(p,v) & = & (p,-v), \nonumber \end{eqnarray} where $p$ is a base point on $W$ and $v \in J(T_{p}W)$. Then $\tau^{2} = \text{Id}$ and $\tau$ is a diffeomorphism with the fixed point set $W$. Shrinking $U$ if necessary, we obtain \[ \tau(U) = U. \] Define a new almost complex structure $\widetilde{J}$ on $U$ as \begin{equation}\label{new_complex_structure} \widetilde{J} = - d \tau J d \tau.
\end{equation} Then $U$ has two almost complex structures $J$ and $\widetilde{J}$, and $\tau: (U, J) \rightarrow (U, \widetilde{J})$ is an anti-$J$-holomorphic isomorphism, i.e.\ $d \tau J = - \widetilde{J} d \tau$. For all $p \in W$, $v_{1} \in T_{p} W \subseteq T_{p} U$ and $v_{2} \in J(T_{p} W) \subseteq T_{p} U$, we have \[ d \tau v_{1} = v_{1}, \qquad \text{and} \qquad d \tau v_{2} = - v_{2}. \] Therefore, $\widetilde{J}$ equals $J$ on $TU|_{W}$. Then, choosing $U$ sufficiently small, we get the following. \begin{lemma}\label{lemma_tame_both} $d \alpha_{0}$ tames both $J$ and $\widetilde{J}$. \end{lemma} Clearly, $\tau^{*}(\text{Re}H)$ is also a Riemannian metric on $U$. By (1) of Lemma \ref{lemma_hermitian}, we infer that $\tau^{*}(\text{Re}H)$ equals $\text{Re}H$ on $TU|_{W}$. It's easy to see that $\widetilde{J}$ preserves $\tau^{*}(\text{Re}H)$. Thus we can construct a Hermitian metric $\widetilde{H}$ with respect to $\widetilde{J}$ on $U$ such that \begin{equation} \text{Re}\widetilde{H} = \tau^{*}(\text{Re}H). \end{equation} Then we get the following. \begin{lemma}\label{lemma_structrue_in_u} $\widetilde{H}$ is a Hermitian metric with respect to $\widetilde{J}$. We have $J = \widetilde{J}$ and $H=\widetilde{H}$ on $TU|_{W}$. \end{lemma} \begin{remark} We would like to point out that $\widetilde{H} \neq \tau^{*} H$ because $\tau$ is \textit{anti-$J$-holomorphic}. Actually, we have $\widetilde{H}(v_{1},v_{2}) = \overline{\tau^{*}H(v_{1},v_{2})}$, where $\overline{\tau^{*}H(v_{1},v_{2})}$ is the complex conjugate of $\tau^{*}H(v_{1},v_{2})$. \end{remark} Since $W$ is a closed subset of $M$ and $M$ is a metric space, there exists a neighborhood $U_{1}$ of $W$ such that the closure of $U_{1}$ is contained in $U$. By Lemma \ref{lemma_converge_neighborhood}, there exists $0 < r \leq 1$ such that $f(\check{D}^{+}(r)) \subseteq U_{1}$. Since $f(\check{D}^{+})$ is relatively compact in $M$, we infer that $f(\check{D}^{+}(r))$ is relatively compact in $U$.
Therefore, without loss of generality, we may assume \[ f(\check{D}^{+}) \subseteq U, \] and it is relatively compact in $U$. We need the metric $G$ on $\check{D}$ defined in (\ref{metric_on_disk}) again. Since $d \alpha_{0}$ tames $J$ in $U$, we know $f$ satisfies the assumption of Theorem \ref{theorem_tame}. Therefore, all facts about $G$ in Section \ref{section_reduction_lemma} can be used now. By Lemma \ref{lemma_metric_hyperbolic_bound}, we get the following. \begin{lemma}\label{lemma_circle_zero} The length of $\partial D(r)$ with respect to $G$ converges to $0$ when $r$ converges to $0$. \end{lemma} Let $\sigma$ be the complex conjugation on $\check{D}$ as in Section \ref{section_reduction_lemma}. We double $f: \check{D}^{+} \rightarrow U$ to be $F: \check{D} \rightarrow U$, where \begin{equation} F = \begin{cases} f & \text{on $\check{D}^{+}$},\\ \tau f \sigma & \text{on $\check{D}^{-}$}. \end{cases} \end{equation} As we shall see later, Lemma \ref{lemma_circle_zero} implies that the length of $F(\partial D(r))$ shrinks to $0$ when $r$ goes to $0$. In order to obtain the continuous extension of $F$, following Gromov's idea in the interior case, it suffices to show that the diameter of $F(\check{D}(r))$ is bounded in terms of the length of $F(\partial D(r))$. This is the main task of this section. \begin{lemma} $F$ is a well defined $C^{1}$ immersion. It is smooth on $\check{D}^{+}$ and $\check{D}^{-}$. Moreover, $F(\check{D})$ is relatively compact. \end{lemma} \begin{proof} Since $\partial \check{D}^{+}$ and $W$ are the fixed point sets of $\sigma$ and $\tau$ respectively, and $f(\partial \check{D}^{+}) \subseteq W$, we have $f|_{\partial \check{D}^{+}} = \tau f \sigma|_{\partial \check{D}^{+}}$. Therefore, $F$ is well defined and continuous. Clearly, $F$ is smooth on $\check{D}^{+}$ and $\check{D}^{-}$. Consider the coordinate $z=x+iy$ on $\check{D}$.
Since $f$ is $J$-holomorphic, and $f(\partial \check{D}^{+}) \subseteq W$, we infer that, for $z \in \partial \check{D}^{+}$, \[ df(z) \frac{\partial}{\partial y} \in J(T_{f(z)} W). \] By (\ref{reflection}), we get \[ d (\tau f \sigma)(z) \frac{\partial}{\partial y} = df(z) \frac{\partial}{\partial y}. \] Then it's easy to check that $F$ is $C^{1}$. Since $f$ is an embedding, $F$ is an immersion. Finally, since $F(\check{D}) = f(\check{D}^{+}) \cup \tau f(\check{D}^{+})$ and $f(\check{D}^{+})$ is relatively compact, we see that $F(\check{D})$ is also relatively compact. \end{proof} \begin{remark}\label{remark_c_1} There is some inconvenience caused by the modest $C^{1}$ regularity of $F$. For example, if $\alpha$ is a smooth form on $U$, then $F^{*} \alpha$ is only a $C^{0}$ form. This causes trouble when applying Stokes' formula, but it can always be remedied by the piecewise smoothness of $F$. \end{remark} A natural question is whether $F: \check{D} \rightarrow (U,J)$ or $F: \check{D} \rightarrow (U,\widetilde{J})$ is $J$-holomorphic. Unfortunately, we cannot give an affirmative answer. However, fortunately enough, we still have the above Lemma \ref{lemma_structrue_in_u} and the following lemma. \begin{lemma}\label{lemma_structure_F} $F: \check{D}^{+} \rightarrow (U,J)$ and $F: \check{D}^{-} \rightarrow (U,\widetilde{J})$ are $J$-holomorphic. Furthermore, \[ G|_{\check{D}^{+}} = F^{*} \text{Re}H \qquad \text{and} \qquad G|_{\check{D}^{-}} = F^{*} \text{Re}\widetilde{H}. \] \end{lemma} The following are two special examples to illustrate the above construction. \begin{example}\label{example_schwarz} Take $M = \mathbb{C}$ and $W = \mathbb{R}$. Then $f$ is a holomorphic function which takes real values on $\partial \check{D}^{+}$. Define the above $\tau$ as the complex conjugation. Then, on $\check{D}^{-}$, we have $F(z) = \overline{f(\overline{z})}$. By the Schwarz reflection principle, $F$ is holomorphic on $\check{D}$.
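Holomorphicity on $\check{D}^{-}$ can also be verified by a direct computation. Writing $g(z) = \overline{f(\overline{z})}$ and letting $w$ denote the variable of $f$, the identity $\partial_{\bar{z}} \overline{h} = \overline{\partial_{z} h}$ and the chain rule give
\[
\partial_{\bar{z}} g(z) = \overline{\partial_{z} \big( f(\overline{z}) \big)} = \overline{(\partial_{\bar{w}} f)(\overline{z})} = 0,
\]
since $f$ is holomorphic. Thus $g$ satisfies the Cauchy-Riemann equation on $\check{D}^{-}$.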
\end{example} \begin{example}\label{example_pansu} More generally than in Example \ref{example_schwarz}, suppose there exists a neighborhood $U_{0}$ of $W$ such that the following holds. The almost complex structure is integrable in $U_{0}$, and $W$ is real analytic with respect to this complex structure. Then \cite[p.\ 244-245]{pansu} (see also \cite[2.1.D]{gromov}) defines a reflection $\tau_{M}$ in $U_{0}$ slightly different from (\ref{reflection}). Choose complex coordinate charts such that the coordinates take pieces of $W$ into $\mathbb{R}^{n} \subseteq \mathbb{C}^{n}$. Define $\tau_{M}$ as the complex conjugation in these charts. This definition globally makes sense because the transition functions of these charts are holomorphic. Suppose $f(\check{D}^{+}) \subseteq U_{0}$. Similar to Example \ref{example_schwarz}, a holomorphic doubling map $F: \check{D} \rightarrow U_{0}$ is obtained. \end{example} In the above examples, one gets a $J$-holomorphic doubling $F$ because $(M,W,J)$ has sufficient symmetry. Our case is a further extension of Example \ref{example_pansu}. We have difficulty getting a $J$-holomorphic $F$ unless $J=\widetilde{J}$. However, we can consider the pair $(J,\widetilde{J})$ together. The idea comes from the following phenomenon. Consider a group action on a space. If $X$ is a fixed point, then it has sufficient symmetry such that a symmetric argument can be applied to it. On the other hand, if $X$ is not a fixed point, then we can still apply such an argument to its orbit. It's more convenient to consider $(D,G)$ than $(\check{D},G)$. More precisely, we shall consider $G$ as a metric on $D$ with a singularity at $0$ where $G$ is undefined. For clarity, we shall use $|\cdot|_{G}$ to denote the length or area with respect to $G$. Define $\Omega$ as a closed disk inside $D$, \begin{equation}\label{disk_in_disk} \Omega = \{z \in D \mid |z-z_{0}| \leq r, z_{0} \in D, |z_{0}| + r < 1 \}.
\end{equation} We shall consider the integral $\int_{\Omega} F^{*} d \alpha_{0}$. As $d \alpha_{0}$ tames both $J$ and $\widetilde{J}$, this integral is actually an integral of a positive function on $\Omega-\{0\}$. So it makes sense. \begin{lemma}\label{lemma_form_integral} There exists a constant $C>0$ independent of $\Omega$ such that \begin{equation}\label{lemma_form_integral_1} |\Omega|_{G} \leq C \int_{\Omega} F^{*} d \alpha_{0}, \end{equation} where $|\Omega|_{G}$ is the area of $\Omega - \{0\}$ with respect to $G$. Furthermore, if $0 \notin \partial \Omega$, then \begin{equation}\label{lemma_form_integral_2} \int_{\Omega} F^{*} d \alpha_{0} = \int_{\partial \Omega} F^{*} \alpha_{0}. \end{equation} \end{lemma} \begin{proof} By Lemma \ref{lemma_structure_F}, we get \[ |\Omega|_{G} = |\Omega \cap \check{D}^{+}|_{G} + |\Omega \cap \check{D}^{-}|_{G} = \int_{\Omega \cap \check{D}^{+}} F^{*} (-\text{Im}H) + \int_{\Omega \cap \check{D}^{-}} F^{*} (-\text{Im}\widetilde{H}). \] Since $F(\check{D})$ is relatively compact, by Lemma \ref{lemma_tame_both}, there exists a constant $C>0$ such that $- \text{Im}H(v,Jv) \leq C d \alpha_{0}(v,Jv)$ and $- \text{Im}\widetilde{H}(v,\widetilde{J}v) \leq C d \alpha_{0}(v,\widetilde{J}v)$ on $F(\check{D})$. Then we obtain \[ \int_{\Omega \cap \check{D}^{+}} F^{*} (-\text{Im}H) + \int_{\Omega \cap \check{D}^{-}} F^{*} (-\text{Im}\widetilde{H}) \leq C \int_{\Omega} F^{*} d \alpha_{0}. \] Since $C$ only depends on $F(\check{D})$, we proved (\ref{lemma_form_integral_1}). For (\ref{lemma_form_integral_2}), we only prove the case where $0 \in \Omega$. The proof for the case where $0 \notin \Omega$ is similar and even easier. Since $0 \notin \partial \Omega$, we have $D(\epsilon) = \{ z \mid |z| < \epsilon \} \subseteq \Omega$ when $\epsilon >0$ is small enough. Consider the domain \[ \Omega_{\epsilon} = \Omega - D(\epsilon).
\] Define $\Omega_{\epsilon}^{+} = \Omega_{\epsilon} \cap \check{D}^{+}$ and $\Omega_{\epsilon}^{-} = \Omega_{\epsilon} \cap \check{D}^{-}$. Since $F$ is smooth on $\Omega_{\epsilon}^{\pm}$, and $\Omega_{\epsilon}^{\pm}$ is a compact manifold with corners, applying Stokes' formula, we get \[ \int_{\Omega_{\epsilon}^{\pm}} F^{*} d \alpha_{0} = \int_{\Omega_{\epsilon}^{\pm}} d F^{*} \alpha_{0} = \int_{\partial \Omega_{\epsilon}^{\pm}} F^{*} \alpha_{0}. \] Since $F$ is $C^{1}$ on $\check{D}$, we infer that $F^{*} \alpha_{0}$ is continuous on $\check{D}$. Therefore, the integrals of $F^{*} \alpha_{0}$ on the real line cancel each other, i.e. \[ \int_{\Omega_{\epsilon}} F^{*} d \alpha_{0} = \int_{\partial \Omega_{\epsilon}} F^{*} \alpha_{0} = \int_{\partial \Omega} F^{*} \alpha_{0} - \int_{\partial D(\epsilon)} F^{*} \alpha_{0}. \] Since $F(\check{D})$ is relatively compact, we infer $\| \alpha_{0} \|_{\text{Re}H}$ and $\| \alpha_{0} \|_{\text{Re}\widetilde{H}}$ are bounded on $F(\check{D})$. By Lemma \ref{lemma_circle_zero}, $|\partial D(\epsilon)|_{G} \rightarrow 0$ when $\epsilon \rightarrow 0$. Thus $\int_{\partial D(\epsilon)} F^{*} \alpha_{0} \rightarrow 0$. Furthermore, $\int_{\Omega} F^{*} d \alpha_{0}$ is actually an integral of a positive function on $\Omega-\{0\}$. Since the integral $\int_{\Omega_{\epsilon}} F^{*} d \alpha_{0}$ is monotonically increasing as $\epsilon \rightarrow 0$, we get \[ \int_{\Omega} F^{*} d \alpha_{0} = \lim_{\epsilon \rightarrow 0} \int_{\Omega_{\epsilon}} F^{*} d \alpha_{0} = \int_{\partial \Omega} F^{*} \alpha_{0}. \] \end{proof} In \cite[1.3.B \& 1.3.B']{gromov}, Gromov makes the following crucial observation: A $J$-holomorphic curve is ``almost minimal'', which results in isoperimetric inequalities. The idea is as follows. Suppose $J$ is tamed by an exact form. (This is true in our case by Lemma \ref{lemma_tame_both}.) Let $S$ be a compact $J$-holomorphic curve with boundary.
Then the area of $S$ is controlled by the area of any other compact surface $K$ with $\partial K = \partial S$. If a special $K$ satisfies an isoperimetric inequality, then we can in turn establish an isoperimetric inequality for $S$. This is indeed the case if $\partial S$ lies in a suitable coordinate chart. Gromov's choice of $K$ is a minimal surface with respect to the Euclidean metric of the chart. However, as shown in \cite[p.\ 25]{hummel}, there is an even more elementary choice. That is the cone constructed at the end of Section \ref{section_set_up}. Applying Gromov's idea, we obtain the following isoperimetric inequality which is the most important one for this section. \begin{lemma}\label{lemma_isoperimetric_disk} There exists a constant $\mu >0$ such that the following holds. For all disks $\Omega$ defined in (\ref{disk_in_disk}) such that $0 \notin \partial \Omega$, we have \[ |\partial \Omega|_{G}^{2} \geq 2 \pi \mu |\Omega|_{G}. \] \end{lemma} \begin{proof} Since $F(\check{D})$ is relatively compact, $F(\check{D})$ can be covered by finitely many subsets $B_{j}$ ($j=1, \dots, k$) of $U$ which have the following properties: (1).\ Each $B_{j}$ is in a coordinate chart. (2).\ By these coordinates, each $B_{j}$ is identified with a closed ball in $\mathbb{R}^{2n}$. (3).\ In each $B_{j}$, $\text{Re}H$ and $\text{Re}\widetilde{H}$ are equivalent to the Euclidean metric on $\mathbb{R}^{2n}$, and $d \alpha_{0}$ is bounded by the Euclidean metric. (4).\ There exists a constant $C_{1} > 0$ such that, if $\Gamma$ is a curve in $\check{D}$ with $|\Gamma|_{G} \leq C_{1}$, then $F(\Gamma)$ is contained in one of these $B_{j}$. If $|\partial \Omega|_{G} > C_{1}$, we get \[ |\partial \Omega|_{G}^{2} =\frac{|\partial \Omega|_{G}^{2}}{|\Omega|_{G}} |\Omega|_{G} > \frac{C_{1}^{2}}{|\check{D}|_{G}} |\Omega|_{G}. \] Since $|\check{D}|_{G} = 2 |\check{D}^{+}|_{G} < + \infty$, we already obtained the isoperimetric inequality.
Therefore, it remains to check the case where $|\partial \Omega|_{G} \leq C_{1}$. By the above property (4), we may assume $F(\partial \Omega) \subseteq B_{1}$. To make the idea clear, we first prove the case where either $\Omega \subseteq D^{+}$ or $\Omega \subseteq D^{-}$. Since $0 \notin \partial \Omega$, by Lemma \ref{lemma_form_integral}, we get \begin{equation}\label{lemma_isoperimetric_disk_1} |\Omega|_{G} \leq C \int_{\partial \Omega} F^{*} \alpha_{0} = C \int_{F(\partial \Omega)} \alpha_{0}. \end{equation} Let $\gamma: S^{1} \rightarrow F(\partial \Omega)$ be a reparametrization of $F(\partial \Omega)$. Since $\Omega \subseteq D^{+}$ or $\Omega \subseteq D^{-}$, and $0 \notin \partial \Omega$, we infer that $F$ is smooth on $\partial \Omega$. Thus we can also require that $\gamma$ is smooth. Since $F$ is an immersion, we can also require that $\gamma$ is an immersion. As in Section \ref{section_set_up}, using the Euclidean metric $\| \cdot \|_{\text{Eu}}$ of this chart, we construct the cone $K(\gamma(S^{1}))$ with vertex $c$ in (\ref{mass_center}) and the boundary $\gamma(S^{1}) = F(\partial \Omega)$. Define a smooth function $\rho: [0,1] \rightarrow [0,1]$ such that $\rho|_{[0,\frac{1}{2}]}=0$ and $\rho$ strictly increases from $0$ to $1$ on $[\frac{1}{2}, 1]$. Denote the closed unit disk by $\overline{D}$. Use the angle coordinate $\theta$ for $S^{1}$. Define $\xi: \overline{D} \rightarrow B_{1}$ as \begin{equation}\label{lemma_isoperimetric_disk_2} \xi(r e^{i \theta}) = \begin{cases} c + \rho(r) (\gamma(\theta) - c) & r \geq \frac{1}{2}, \\ c & r \leq \frac{1}{2}. \end{cases} \end{equation} Then $\xi(\overline{D})$ is $K(\gamma(S^{1}))$. Since $B_{1}$ is convex and closed in $\mathbb{R}^{2n}$, by (\ref{mass_center}), $c$ and the cone $K(\gamma(S^{1}))$ are in $B_{1}$. Therefore, the map $\xi$ is well defined.
By Lemma \ref{lemma_cone_isoperimetric}, we get \begin{equation}\label{lemma_isoperimetric_disk_3} |\xi(\overline{D})|_{\text{Eu}} \leq (4 \pi)^{-1} |\gamma(S^{1})|_{\text{Eu}}^{2}. \end{equation} Since $\gamma$ and $\rho$ are smooth, we know that $\xi$ is smooth. Therefore, \begin{equation}\label{lemma_isoperimetric_disk_4} \int_{F(\partial \Omega)} \alpha_{0} = \int_{\xi(\partial \overline{D})} \alpha_{0} = \int_{\xi(\overline{D})} d \alpha_{0}. \end{equation} Since $d \alpha_{0}$ is bounded by the Euclidean metric in $B_{1}$, there exists $C_{2} > 0$ such that \begin{equation}\label{lemma_isoperimetric_disk_5} \int_{\xi(\overline{D})} d \alpha_{0} \leq C_{2} |\xi(\overline{D})|_{\text{Eu}}. \end{equation} Furthermore, since $\text{Re}H$ and $\text{Re}\widetilde{H}$ are equivalent to the Euclidean metric in $B_{1}$, by Lemma \ref{lemma_structure_F}, we infer that there exists $C_{3}>0$ such that \begin{equation}\label{lemma_isoperimetric_disk_6} |\gamma(S^{1})|_{\text{Eu}}^{2} \leq C_{3} |\partial \Omega|_{G}^{2}. \end{equation} By (\ref{lemma_isoperimetric_disk_1}), (\ref{lemma_isoperimetric_disk_4}), (\ref{lemma_isoperimetric_disk_5}), (\ref{lemma_isoperimetric_disk_3}) and (\ref{lemma_isoperimetric_disk_6}), we get \[ |\Omega|_{G} \leq C \int_{F(\partial \Omega)} \alpha_{0} = C \int_{\xi(\overline{D})} d \alpha_{0} \leq C C_{2} (4\pi)^{-1} C_{3} |\partial \Omega|_{G}^{2}, \] where the constant $C C_{2} (4\pi)^{-1} C_{3}$ only depends on $B_{1}$. Since there are only finitely many $B_{j}$, we infer that there exists $C_{4} > 0$ independent of $\Omega$ such that \begin{equation}\label{lemma_isoperimetric_disk_7} |\partial \Omega|_{G}^{2} \geq C_{4} |\Omega|_{G}. \end{equation} Now we deal with the case where $|\partial \Omega|_{G} \leq C_{1}$ and $\Omega$ is contained in neither $D^{+}$ nor $D^{-}$. We shall prove that (\ref{lemma_isoperimetric_disk_7}) still holds.
The proof is the same as that of the previous case, with one exception: the maps $F|_{\partial \Omega}$ and $\xi$ in (\ref{lemma_isoperimetric_disk_2}) are only $C^{1}$ now. (See Remark \ref{remark_c_1}.) Checking the above argument, it suffices to show that (\ref{lemma_isoperimetric_disk_4}) is still true. Since $F$ is smooth on $\check{D}^{\pm}$ and $C^{1}$ on $\check{D}$, we can require that the reparametrization $\gamma: S^{1} \rightarrow F(\partial \Omega)$ satisfies the following properties: $\gamma$ is $C^{1}$, $\gamma([0, \theta_{0}]) \subseteq F(\check{D}^{+})$, $\gamma([\theta_{0}, 2 \pi]) \subseteq F(\check{D}^{-})$, and $\gamma$ is smooth on $[0, \theta_{0}]$ and $[\theta_{0}, 2 \pi]$. Define \[ Q = \{ r e^{i \theta} \in \overline{D} \mid \tfrac{1}{2} \leq r \leq 1 \}, \] \[ Q^{+} = \{ r e^{i \theta} \in Q \mid 0 \leq \theta \leq \theta_{0} \} \qquad \text{and} \qquad Q^{-} = \{ r e^{i \theta} \in Q \mid \theta_{0} \leq \theta \leq 2 \pi \}. \] Since $\xi(D(\frac{1}{2})) = \{c\}$, we see that $\xi^{*} d \alpha_{0}$ vanishes on $D(\frac{1}{2})$ and \[ \int_{\xi(\overline{D})} d \alpha_{0} = \int_{\overline{D}} \xi^{*} d \alpha_{0} = \int_{Q} \xi^{*} d \alpha_{0}. \] Now $Q^{\pm}$ is a compact manifold with corners, and $\xi$ is smooth on $Q^{\pm}$. Applying Stokes' formula on $Q^{\pm}$, canceling line integrals on $\partial Q^{+} \cap \partial Q^{-}$, we get \[ \int_{\xi(\overline{D})} d \alpha_{0} = \int_{\partial Q} \xi^{*} \alpha_{0} = \int_{\partial \overline{D}} \xi^{*} \alpha_{0} - \int_{\partial D(\frac{1}{2})} \xi^{*} \alpha_{0}. \] Since $\xi^{*} \alpha_{0}$ vanishes on $\partial D(\frac{1}{2})$, we get (\ref{lemma_isoperimetric_disk_4}) again. In summary, we get \[ |\partial \Omega|_{G}^{2} \geq \min \left \{ \frac{C_{1}^{2}}{|\check{D}|_{G}}, C_{4} \right \} |\Omega|_{G}. \] Defining $\mu = \frac{1}{2\pi} \min \left \{ \frac{C_{1}^{2}}{|\check{D}|_{G}}, C_{4} \right \}$, we finish the proof.
\end{proof} For the closed unit disk $\overline{D}$ and $\Omega$ as in (\ref{disk_in_disk}), let \begin{equation}\label{holomorphic_isomorphism} \varphi: \overline{D} \rightarrow \Omega \end{equation} be an arbitrary holomorphic isomorphism. Let $r_{*}= |\varphi^{-1}(0)|$. Since a M\"{o}bius transformation maps a disk to a disk, we infer that $\varphi(D(t))$ is a disk in $\Omega$. Furthermore, if $t \neq r_{*}$, then $0 \notin \partial [\varphi(D(t))]$ and Lemma \ref{lemma_isoperimetric_disk} can be applied to $\varphi(D(t))$. Note that $G$ is a conformal $C^{2}$ metric on $\Omega - \{0\}$. The function $\| d \varphi \|$ is undefined at $\varphi^{-1} (0)$ and $C^{2}$ elsewhere. Therefore, by the comment below (\ref{gromov_formula}), we can apply (\ref{gromov_formula}) to $\varphi$. Similar to the proof of Lemma \ref{lemma_bound_disk_covering}, we obtain the following decay rate (compare \cite[p.\ 91]{mcduff_salamon}). \begin{lemma}\label{lemma_decay_rate} For $0< r_{1} \leq r_{2} \leq 1$ and $\varphi$ in (\ref{holomorphic_isomorphism}), we have \[ |\varphi (D(r_{1}))|_{G} \leq \left( \frac{r_{1}}{r_{2}} \right)^{\mu} |\varphi (D(r_{2}))|_{G}, \] where $\mu >0$ is the constant in Lemma \ref{lemma_isoperimetric_disk}. \end{lemma} \begin{proof} Denote $A(t) = |\varphi (D(t))|_{G}$ and $L(t) = |\varphi (\partial D(t))|_{G}$. For $t \neq r_{*}$ and $t > 0$, by (\ref{gromov_formula}) and Lemma \ref{lemma_isoperimetric_disk}, we get \[ \dot{A} \geq \mu t^{-1} A. \] Since $A > 0$ for $t > 0$, we get \begin{equation}\label{lemma_decay_rate_1} A^{-1} \dot{A} \geq \mu t^{-1}. \end{equation} Since $A$ is continuous on $[0,1]$, the fundamental theorem of calculus still holds for $\frac{d}{d t} \log A = A^{-1} \dot{A}$ on $[r_{1}, r_{2}]$. Integrating both sides of (\ref{lemma_decay_rate_1}) on $[r_{1}, r_{2}]$, we finish the proof. \end{proof} Consider $\|d\varphi (te^{i \theta})\|_{G}$ as a function of $(t,\theta)$.
We shall estimate the integral of $\|d\varphi (te^{i \theta})\|_{G}$ on $(t,\theta) \in [0,r] \times [0, 2\pi]$. Using this estimate, we will derive useful geometric consequences. Before doing this, we shall prove a lemma which is its analytic translation. \begin{lemma}\label{lemma_analytic} Suppose $\lambda(t)$ is a nonnegative measurable function on $[0,a]$. Suppose there exist constants $C>0$ and $\nu >0$ such that, for all $r_{1}$ and $r_{2}$ with $0 < r_{1} \leq r_{2} \leq a$, we have \[ \int_{0}^{r_{1}} \lambda^{2} t dt \leq C \left( \frac{r_{1}}{r_{2}} \right)^{\nu} \int_{0}^{r_{2}} \lambda^{2} t dt. \] Then, for all $r \in [0,a]$, \[ \int_{0}^{r} \lambda dt \leq C^{\frac{1}{2}} (1 - 2^{-\frac{\nu}{2}})^{-1} (\log 2)^{\frac{1}{2}} \left( \int_{0}^{r} \lambda^{2} t dt \right)^{\frac{1}{2}}. \] \end{lemma} \begin{proof} For all integers $k \geq 0$, we have \begin{eqnarray*} \left( \int_{2^{-k-1}r}^{2^{-k}r} \lambda dt \right)^{2} & \leq & \int_{2^{-k-1}r}^{2^{-k}r} \lambda^{2} t dt \cdot \int_{2^{-k-1}r}^{2^{-k}r} t^{-1} dt \\ & = & (\log 2) \int_{2^{-k-1}r}^{2^{-k}r} \lambda^{2} t dt. \end{eqnarray*} Since \[ \int_{2^{-k-1}r}^{2^{-k}r} \lambda^{2} t dt \leq \int_{0}^{2^{-k}r} \lambda^{2} t dt \leq C (2^{-k})^{\nu} \int_{0}^{r} \lambda^{2} t dt, \] we get \[ \left( \int_{2^{-k-1}r}^{2^{-k}r} \lambda dt \right)^{2} \leq C (\log 2) (2^{-k})^{\nu} \int_{0}^{r} \lambda^{2} t dt. \] Therefore, \begin{eqnarray*} \int_{0}^{r} \lambda dt & = & \sum_{k=0}^{\infty} \int_{2^{-k-1}r}^{2^{-k}r} \lambda dt \\ & \leq & \sum_{k=0}^{\infty} C^{\frac{1}{2}} (\log 2)^{\frac{1}{2}} (2^{-\frac{\nu}{2}})^{k} \left( \int_{0}^{r} \lambda^{2} t dt \right)^{\frac{1}{2}} \\ & = & C^{\frac{1}{2}} (1 - 2^{-\frac{\nu}{2}})^{-1} (\log 2)^{\frac{1}{2}} \left( \int_{0}^{r} \lambda^{2} t dt \right)^{\frac{1}{2}}. 
\end{eqnarray*} \end{proof} \begin{lemma}\label{lemma_derivative_area} For the $\varphi$ in (\ref{holomorphic_isomorphism}), we have \[ \int_{0}^{r} \int_{0}^{2 \pi} \|d \varphi (te^{i \theta}) \|_{G} d \theta dt \leq C(\mu) |\varphi(D(r))|_{G}^{\frac{1}{2}}, \] where \[ C(\mu) = (2 \pi)^{\frac{1}{2}} (1 - 2^{-\frac{\mu}{2}})^{-1} (\log 2)^{\frac{1}{2}}, \] and $\mu >0$ is the constant in Lemma \ref{lemma_isoperimetric_disk}. \end{lemma} \begin{proof} Define \[ \lambda(t) = \left( \int_{0}^{2 \pi} \|d \varphi (te^{i \theta}) \|_{G}^{2} d \theta \right)^{\frac{1}{2}}, \] then \[ \int_{0}^{r} \lambda^{2} t dt = \int_{0}^{r} \int_{0}^{2 \pi} \|d \varphi\|_{G}^{2} t d \theta dt = \int_{D(r)} \|d \varphi\|_{G}^{2}. \] By Lemma \ref{lemma_decay_rate}, $\lambda(t)$ satisfies the assumption of Lemma \ref{lemma_analytic} with $C=1$ and $\nu = \mu$. Applying Lemma \ref{lemma_analytic}, we get \begin{eqnarray*} \int_{0}^{r} \int_{0}^{2 \pi} \|d \varphi\|_{G} d \theta dt & \leq & \int_{0}^{r} \left( \int_{0}^{2 \pi} \|d \varphi\|_{G}^{2} d \theta \right)^{\frac{1}{2}} (2 \pi)^{\frac{1}{2}} dt \\ & \leq & (2 \pi)^{\frac{1}{2}} (1 - 2^{-\frac{\mu}{2}})^{-1} (\log 2)^{\frac{1}{2}} \left( \int_{D(r)} \|d \varphi\|_{G}^{2} \right)^{\frac{1}{2}}, \end{eqnarray*} which finishes the proof. \end{proof} \begin{corollary}\label{corollary_ray} For $\varphi$ defined in (\ref{holomorphic_isomorphism}), there exists a measurable subset $\Theta \subseteq [0,2 \pi]$ of positive measure such that, for each $\theta_{0} \in \Theta$, the length of the corresponding radial curve has the bound \[ |\varphi(\{ te^{i \theta_{0}} \mid 0 \leq t \leq r \})|_{G} \leq (2 \pi)^{-1} C(\mu) |\varphi(D(r))|_{G}^{\frac{1}{2}}, \] where $C(\mu)$ is the constant in Lemma \ref{lemma_derivative_area}. \end{corollary} \begin{proof} Denote $|\varphi(\{ te^{i \theta} \mid 0 \leq t \leq r \})|_{G}$ by $\mathfrak{l}(\theta)$.
Then \[ \int_{0}^{2 \pi} \mathfrak{l}(\theta) d \theta = \int_{0}^{2 \pi} \int_{0}^{r} \|d \varphi \|_{G} dt d \theta. \] Thus Lemma \ref{lemma_derivative_area} immediately implies this corollary. \end{proof} The above lemmas lead to the following important geometric consequence which is motivated by lines 2-3 in \cite[p.\ 318]{gromov}. In particular, it implies that the diameter of $F(\check{D}(r))$ is bounded in terms of the length of $F(\partial D(r))$. \begin{lemma}\label{lemma_diameter_boundary} There exists a constant $C>0$ such that the following holds. For any disk $\Omega$ in (\ref{disk_in_disk}), if $0 \notin \partial \Omega$, then \[ d(\Omega)_{G} \leq C |\partial \Omega|_{G}, \] where $d(\Omega)_{G}$ is the diameter of $\Omega - \{0\}$ with respect to $G$. \end{lemma} \begin{proof} Let $z_{0}$ be an interior point of $\Omega$ such that $z_{0} \neq 0$. Choose $\varphi$ in (\ref{holomorphic_isomorphism}) such that $\varphi(0) = z_{0}$. Since $\varphi^{-1}(0) \neq 0$, by Corollary \ref{corollary_ray}, we can find a curve $\gamma(t) = \varphi(te^{i \theta_{0}})$, $t \in [0,1]$, such that $0 \notin \gamma([0,1])$ and its length has the bound \[ |\gamma([0,1])|_{G} \leq (2 \pi)^{-1} C(\mu) |\varphi(D(1))|_{G}^{\frac{1}{2}} = (2 \pi)^{-1} C(\mu) |\Omega|_{G}^{\frac{1}{2}}. \] Since $0 \notin \partial \Omega$, by Lemma \ref{lemma_isoperimetric_disk} and the above inequality, we have \[ |\gamma([0,1])|_{G} \leq (2 \pi)^{-\frac{3}{2}} C(\mu) \mu^{- \frac{1}{2}} |\partial \Omega|_{G}. \] Since $0 \notin \gamma([0,1])$, we know that $\gamma$ is a curve in $\Omega - \{0\}$ and it connects $z_{0}$ with $\partial \Omega$. Thus, for every $z_{0} \in \Omega - \{0\}$, the distance between $z_{0}$ and $\partial \Omega$ is bounded by $(2 \pi)^{-\frac{3}{2}} C(\mu) \mu^{- \frac{1}{2}} |\partial \Omega|_{G}$. Since $\partial \Omega$ is connected, we get \[ d(\Omega)_{G} \leq 2 (2 \pi)^{-\frac{3}{2}} C(\mu) \mu^{- \frac{1}{2}} |\partial \Omega|_{G} + |\partial \Omega|_{G}. 
\] Since the constant $\mu$ is independent of $\Omega$, this proves the lemma. \end{proof} \begin{lemma}\label{lemma_continuity} The map $F: \check{D} \rightarrow U$ has a continuous extension over $0 \in D$. \end{lemma} \begin{proof} Since $F(\check{D})$ is relatively compact, $\text{Re}H$ and $\text{Re}\widetilde{H}$ are equivalent on $F(\check{D})$. By Lemma \ref{lemma_structure_F}, there exists a constant $C_{1} > 0$ such that the diameter of $F(\check{D}(r))$ with respect to $\text{Re}H$ is bounded by $C_{1} d(D(r))_{G}$, where $\check{D}(r) = D(r)-\{0\}$ and $d(D(r))_{G}$ is the diameter of $\check{D}(r)$ with respect to $G$. By Lemma \ref{lemma_diameter_boundary}, $d(D(r))_{G}$ is bounded by $C |\partial D(r)|_{G}$. By Lemma \ref{lemma_circle_zero}, $|\partial D(r)|_{G} \rightarrow 0$ when $r \rightarrow 0$. Thus the diameter of $F(\check{D}(r))$ shrinks to $0$ when $r \rightarrow 0$. Since $F(\check{D})$ is relatively compact, by Cauchy's criterion, the limit of $F(z)$ exists when $z \rightarrow 0$, which completes the proof. \end{proof} \section{Lipschitz Continuity}\label{section_lipschitz_continuity} The goal of this section is to prove the Lipschitz continuity of $f$ near $0 \in D^{+}$. This step is one of the main differences between a geometric proof and an analytic proof. More precisely, we shall prove the following lemma. \begin{lemma}\label{lemma_lipschitz} Under the assumption of Theorem \ref{theorem_finite_area}, consider the standard Euclidean metric on $\check{D}^{+}$. Then $\|df\|$ is bounded in $\check{D}^{+}(r)$ for some $r \in (0,1)$. In particular, $f$ has a Lipschitz continuous extension over $0 \in D^{+}$. \end{lemma} As seen from the above lemma, the main task in this section is to estimate derivatives. We shall apply an argument similar to the proof of Lemma \ref{lemma_bound_disk_covering}. In particular, we need an isoperimetric inequality stronger than Lemma \ref{lemma_isoperimetric_disk}.
In order to get such an inequality, we need another simple and powerful observation due to Gromov \cite[1.3.B]{gromov}: An almost complex manifold is locally tamed by an exact form which is ``almost" compatible with the almost complex structure. The idea is as follows. Let $H$ be a Hermitian metric on $(M,J)$. Denote by $H_{p}$ the Hermitian metric at $p$, where $p \in B$ and $B$ is contained in a coordinate chart. Fixing $p$ and using the coordinates in $B$, we can also consider $-\text{Im}H_{p}$ as a differential form in $B$. As a constant form, $-\text{Im}H_{p}$ is exact in $B$. Since $-\text{Im}H_{p}$ is compatible with $J$ at $p$, by the continuity of $J$, the form $-\text{Im}H_{p}$ tames and is even ``almost" compatible with $J$ near $p$. \begin{remark}\label{remark_monotone} The comment below Lemma \ref{lemma_monotone} mentions that $J$ is locally tamed by exact forms. One can use the above construction to obtain these exact forms. \end{remark} In this section, we use the same assumption as in Section \ref{section_continuity}. Therefore, every result in Section \ref{section_continuity} can be used now. By Lemma \ref{lemma_continuity}, we know that $F(0)$ is defined and $F(0)$ is in $U$. Inside a chart, we can choose a neighborhood $B \subseteq U$ of $F(0)$ which is identified through the coordinates with a closed ball in $\mathbb{R}^{2n}$. Denote by $J_{p}$, $\widetilde{J}_{p}$, $H_{p}$ and $\widetilde{H}_{p}$ the almost complex structures and Hermitian metrics at $p \in B$. They are functions on $B$ whose values are matrices (operators and bilinear forms). Since $B \subseteq \mathbb{R}^{2n}$, the tangent spaces $T_{p}B$ and $T_{q}B$ are both naturally identified with $\mathbb{R}^{2n}$ for $q \in B$. Therefore, $J_{p}$, $\widetilde{J}_{p}$, $H_{p}$ and $\widetilde{H}_{p}$ are also defined on $T_{q}B$. Denote the Euclidean metric on $\mathbb{R}^{2n}$ by $\langle \cdot, \cdot \rangle_{\text{Eu}}$ and $\| \cdot \|_{\text{Eu}}$.
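The exactness of the constant form $-\text{Im}H_{p}$ can be made explicit. Writing it in the coordinates of $B$ as $-\text{Im}H_{p} = \sum_{i<j} a_{ij}\, dx^{i} \wedge dx^{j}$ with constant coefficients $a_{ij}$, a primitive is given by the linear form
\[
\alpha_{p} = \sum_{i<j} a_{ij}\, x^{i}\, dx^{j}, \qquad d \alpha_{p} = \sum_{i<j} a_{ij}\, dx^{i} \wedge dx^{j} = -\text{Im}H_{p}.
\]
(The name $\alpha_{p}$ is introduced only for this remark; a primitive of this kind appears as $\alpha_{p_{0}}$ in the proof of Lemma \ref{lemma_isoperimetric_diameter} below.)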
We consider $H_{p}(v, J_{q}v)$ and $\widetilde{H}_{p}(v, \widetilde{J}_{q}v)$ as smooth functions of $(p,q,v)$, where $v \in \mathbb{R}^{2n}$. Since $-\text{Im}H_{p}(v, J_{p} v) > 0$ and $-\text{Im}\widetilde{H}_{p}(v, \widetilde{J}_{p} v) > 0$ for $v \neq 0$, we can choose $B$ so small that \[ -\text{Im}H_{p}(v, J_{q} v) \geq C_{0} \langle v, v \rangle_{\text{Eu}} \] and \[ -\text{Im}\widetilde{H}_{p}(v, \widetilde{J}_{q} v) \geq C_{0} \langle v, v \rangle_{\text{Eu}} \] for some constant $C_{0}>0$ and all $p$ and $q$ in $B$. \begin{lemma}\label{lemma_lipschitz_form} There exists a constant $C>0$ such that, for all $p$, $q \in B$ and $v \in \mathbb{R}^{2n}$, we have \[ -\text{Im}H_{q} (v, J_{q} v) \leq (1 + C \|p-q\|_{\text{Eu}}) (-\text{Im}H_{p}) (v, J_{q} v), \] \[ -\text{Im}\widetilde{H}_{q} (v, \widetilde{J}_{q} v) \leq (1 + C \|p-q\|_{\text{Eu}}) (-\text{Im}\widetilde{H}_{p}) (v, \widetilde{J}_{q} v), \] \[ \text{Re}H_{q} (v, v) \leq (1 + C \|p-q\|_{\text{Eu}}) \text{Re}H_{p} (v, v), \] and \[ \text{Re}\widetilde{H}_{q} (v, v) \leq (1 + C \|p-q\|_{\text{Eu}}) \text{Re}\widetilde{H}_{p} (v, v). \] \end{lemma} \begin{proof} Since $B$ is compact, the derivative of $-\text{Im}H_{p}$ with respect to $p$ is bounded on $B$. Since $B$ is also convex, applying the fundamental theorem of calculus to $-\text{Im}H_{p}$, we infer that there exists a constant $C_{1} > 0$ such that \[ |\text{Im}H_{p} (v, J_{q} v) - \text{Im}H_{q} (v, J_{q} v)| \leq C_{1} \|p-q\|_{\text{Eu}} \|J_{q}\|_{\text{Eu}} \|v\|_{\text{Eu}}^{2}. \] Clearly, $\|J_{q}\|_{\text{Eu}}$ is also bounded on $B$. As mentioned above, \[ -\text{Im}H_{p}(v, J_{q} v) \geq C_{0} \|v\|_{\text{Eu}}^{2}. \] Therefore, there exists a constant $C>0$ such that \[ |\text{Im}H_{p} (v, J_{q} v) - \text{Im}H_{q} (v, J_{q} v)| \leq C \|p-q\|_{\text{Eu}} (-\text{Im}H_{p}) (v, J_{q} v). 
\] Hence \begin{eqnarray*} -\text{Im}H_{q} (v, J_{q} v) & \leq & -\text{Im}H_{p} (v, J_{q} v) + |\text{Im}H_{p} (v, J_{q} v) - \text{Im}H_{q} (v, J_{q} v)| \\ & \leq & (1 + C \|p-q\|_{\text{Eu}}) (-\text{Im}H_{p}) (v, J_{q} v), \end{eqnarray*} which proves the first inequality. By similar arguments, one can prove the other inequalities. \end{proof} By Lemma \ref{lemma_continuity}, $F$ is continuous. Since $B$ is a neighborhood of $F(0)$, there exists $r_{0}>0$ such that $F(D(r_{0})) \subseteq B$ and the diameter of $F(D(r_{0}))$ with respect to $\|\cdot\|_{\text{Eu}}$ is less than $1$. We shall improve Lemma \ref{lemma_isoperimetric_disk} to the following lemma on disks inside $D(r_{0})$ (compare \cite[p.\ 317, (12)]{gromov}). The proof is a refinement of that of Lemma \ref{lemma_isoperimetric_disk}. Besides Gromov's idea described above, Lemma \ref{lemma_structrue_in_u} needs to be fully employed. \begin{lemma}\label{lemma_isoperimetric_diameter} There exists a constant $C>0$ such that the following holds. If $\Omega$ is a disk in (\ref{disk_in_disk}) such that $\Omega \subseteq D(r_{0})$ and $0 \notin \partial \Omega$, then \[ |\partial \Omega|_{G}^{2} \geq 4 \pi (1 - C d(\Omega)_{\text{Eu}}) |\Omega|_{G}, \] where $d(\Omega)_{\text{Eu}}$ is the diameter of $F(\Omega-\{0\})$ with respect to $\| \cdot \|_{\text{Eu}}$. \end{lemma} \begin{proof} We only give a detailed proof for the case that $\Omega$ is contained in neither $D^{+}$ nor $D^{-}$. The proof for the remaining case is similar and even easier. Define $\Omega^{+} = \Omega \cap \check{D}^{+}$ and $\Omega^{-} = \Omega \cap \check{D}^{-}$. By Lemma \ref{lemma_structure_F}, we get \[ |\Omega|_{G} = \int_{F(\Omega^{+})} -\text{Im}H + \int_{F(\Omega^{-})} -\text{Im}\widetilde{H}. \] Choose an arbitrary point $p_{0} \in F(\Omega-\{0\}) \cap W$ and fix it.
By the first two inequalities of Lemma \ref{lemma_lipschitz_form}, there exists a constant $C_{1}>0$ such that \begin{eqnarray*} & & \int_{F(\Omega^{+})} -\text{Im}H + \int_{F(\Omega^{-})} -\text{Im}\widetilde{H} \\ & \leq & (1 + C_{1} d(\Omega)_{\text{Eu}}) \left( \int_{F(\Omega^{+})} -\text{Im}H_{p_{0}} + \int_{F(\Omega^{-})} -\text{Im}\widetilde{H}_{p_{0}} \right). \end{eqnarray*} Since $p_{0} \in W$, by Lemma \ref{lemma_structrue_in_u}, we get $-\text{Im}H_{p_{0}} = -\text{Im}\widetilde{H}_{p_{0}}$. Therefore, \begin{equation}\label{lemma_isoperimetric_diameter_1} |\Omega|_{G} \leq (1 + C_{1} d(\Omega)_{\text{Eu}}) \int_{F(\Omega)} -\text{Im}H_{p_{0}}. \end{equation} Since $-\text{Im}H_{p_{0}}$ is a constant form in $B$, it has a primitive form $\alpha_{p_{0}}$. Since $0 \notin \partial \Omega$, similar to the proof of (\ref{lemma_form_integral_2}), we get \begin{equation}\label{lemma_isoperimetric_diameter_2} \int_{F(\Omega)} -\text{Im}H_{p_{0}} = \int_{F(\partial \Omega)} \alpha_{p_{0}}. \end{equation} Similar to the proof of Lemma \ref{lemma_isoperimetric_disk}, we construct the cone $K$ described in Section \ref{section_set_up}. The boundary of $K$ is $F(\partial \Omega)$. As in (\ref{mass_center}), the vertex of $K$ is the center of mass of $F(\partial \Omega)$ with respect to $\text{Re}H_{p_{0}}$. Construct a map $\xi: \overline{D} \rightarrow K$ as in (\ref{lemma_isoperimetric_disk_2}). Then by Lemma \ref{lemma_cone_isoperimetric}, we get \begin{equation}\label{lemma_isoperimetric_diameter_3} |\xi(\overline{D})|_{\text{Re}H_{p_{0}}} \leq (4 \pi)^{-1} |F(\partial \Omega)|_{\text{Re}H_{p_{0}}}^{2}. \end{equation} Similar to the argument in Lemma \ref{lemma_isoperimetric_disk}, we also get \begin{equation}\label{lemma_isoperimetric_diameter_4} \int_{F(\partial \Omega)} \alpha_{p_{0}} = \int_{\xi(\overline{D})} d \alpha_{p_{0}} = \int_{\xi(\overline{D})} -\text{Im}H_{p_{0}}.
\end{equation} Since $-\text{Im}H_{p_{0}}$ is compatible with $J_{p_{0}}$ and $\text{Re}H_{p_{0}}$, we have Wirtinger's inequality \[ |-\text{Im}H_{p_{0}} (v_{1}, v_{2})| \leq \|v_{1}\|_{\text{Re}H_{p_{0}}} \|v_{2}\|_{\text{Re}H_{p_{0}}}. \] Applying Wirtinger's inequality, we get \begin{equation}\label{lemma_isoperimetric_diameter_5} \int_{\xi(\overline{D})} -\text{Im}H_{p_{0}} \leq |\xi(\overline{D})|_{\text{Re}H_{p_{0}}}. \end{equation} By the third and fourth inequalities in Lemma \ref{lemma_lipschitz_form}, for the same constant $C_{1}$ as above, we have \begin{eqnarray*} |F(\partial \Omega \cap \check{D}^{+})|_{\text{Re}H_{p_{0}}} & \leq & (1 + C_{1} d(\Omega)_{\text{Eu}})^{\frac{1}{2}} |F(\partial \Omega \cap \check{D}^{+})|_{\text{Re}H} \\ & = & (1 + C_{1} d(\Omega)_{\text{Eu}})^{\frac{1}{2}} |\partial \Omega \cap \check{D}^{+}|_{G}. \end{eqnarray*} Similarly, \[ |F(\partial \Omega \cap \check{D}^{-})|_{\text{Re}\widetilde{H}_{p_{0}}} \leq (1 + C_{1} d(\Omega)_{\text{Eu}})^{\frac{1}{2}} |\partial \Omega \cap \check{D}^{-}|_{G}. \] By Lemma \ref{lemma_structrue_in_u} again and the above two inequalities, we get \begin{eqnarray}\label{lemma_isoperimetric_diameter_6} |F(\partial \Omega)|_{\text{Re}H_{p_{0}}} & = & |F(\partial \Omega \cap \check{D}^{+})|_{\text{Re}H_{p_{0}}} + |F(\partial \Omega \cap \check{D}^{-})|_{\text{Re}\widetilde{H}_{p_{0}}} \\ & \leq & (1 + C_{1} d(\Omega)_{\text{Eu}})^{\frac{1}{2}} |\partial \Omega|_{G}. \nonumber \end{eqnarray} Combining (\ref{lemma_isoperimetric_diameter_1}), (\ref{lemma_isoperimetric_diameter_2}), (\ref{lemma_isoperimetric_diameter_4}), (\ref{lemma_isoperimetric_diameter_5}), (\ref{lemma_isoperimetric_diameter_3}) and (\ref{lemma_isoperimetric_diameter_6}), we get \[ |\Omega|_{G} \leq (1 + C_{1} d(\Omega)_{\text{Eu}})^{2} (4 \pi)^{-1} |\partial \Omega|_{G}^{2}.
\] Here \[ (1 + C_{1} d(\Omega)_{\text{Eu}})^{2} = 1 + 2 C_{1} d(\Omega)_{\text{Eu}} + C_{1}^{2} d(\Omega)_{\text{Eu}}^{2} \leq 1+ (2 C_{1} + C_{1}^{2}) d(\Omega)_{\text{Eu}}, \] where the last inequality comes from the fact that the diameter of $F(D(r_{0})-\{0\})$ with respect to $\| \cdot \|_{\text{Eu}}$ is less than $1$. Letting $C = 2 C_{1} + C_{1}^{2}$, we get \[ |\partial \Omega|_{G}^{2} \geq 4 \pi (1 + C d(\Omega)_{\text{Eu}})^{-1} |\Omega|_{G} \geq 4 \pi (1 - C d(\Omega)_{\text{Eu}}) |\Omega|_{G}, \] which proves the lemma for the case that $\Omega$ is contained in neither $D^{+}$ nor $D^{-}$. If $\Omega$ is contained in $D^{+}$ (resp.\ $D^{-}$), then we choose an arbitrary point $p_{0} \in F(\Omega)$ and compare $H$ (resp.\ $\widetilde{H}$) with $H_{p_{0}}$ (resp.\ $\widetilde{H}_{p_{0}}$). By a similar and easier argument, we finish the proof. \end{proof} By Lemma \ref{lemma_isoperimetric_diameter}, we easily get the following isoperimetric inequality, which is the most important one for this section. (Compare \cite[p.\ 317, (12')]{gromov}.) \begin{lemma}\label{lemma_isoperimetric_boundary} There exists a constant $C>0$ such that the following holds. For every disk $\Omega$ in Lemma \ref{lemma_isoperimetric_diameter}, we have \[ |\partial \Omega|_{G}^{2} \geq 4 \pi (1 - C |\partial \Omega|_{G}) |\Omega|_{G}. \] \end{lemma} \begin{proof} Since $B$ is compact, $\text{Re}H$, $\text{Re}\widetilde{H}$ and $\| \cdot \|_{\text{Eu}}$ are equivalent on $B$. Since $F(D(r_{0})) \subseteq B$, by Lemma \ref{lemma_structure_F}, there exists $C_{1}>0$ such that \[ d(\Omega)_{\text{Eu}} \leq C_{1} d(\Omega)_{G}, \] where $d(\Omega)_{\text{Eu}}$ is defined in Lemma \ref{lemma_isoperimetric_diameter} and $d(\Omega)_{G}$ is defined in Lemma \ref{lemma_diameter_boundary}. Thus Lemmas \ref{lemma_diameter_boundary} and \ref{lemma_isoperimetric_diameter} finish the proof.
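Explicitly, writing $C_{2}$ for the constant of Lemma \ref{lemma_diameter_boundary} and $C_{3}$ for the constant of Lemma \ref{lemma_isoperimetric_diameter} (these names are used only in this computation), the bounds chain as
\[
d(\Omega)_{\text{Eu}} \leq C_{1} d(\Omega)_{G} \leq C_{1} C_{2} |\partial \Omega|_{G},
\]
hence
\[
|\partial \Omega|_{G}^{2} \geq 4 \pi (1 - C_{3} d(\Omega)_{\text{Eu}}) |\Omega|_{G} \geq 4 \pi (1 - C_{3} C_{1} C_{2} |\partial \Omega|_{G}) |\Omega|_{G},
\]
so the lemma holds with $C = C_{3} C_{1} C_{2}$.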
\end{proof} The following lemma immediately implies Lemma \ref{lemma_lipschitz}, just as Lemma \ref{lemma_metric_hyperbolic_bound} implies Lemma \ref{lemma_tame_to_area}. The proof follows lines 4-6 in \cite[p.\ 318]{gromov}. It needs an argument similar to that of Lemma \ref{lemma_bound_disk_covering}. \begin{lemma}\label{lemma_metric_bound} Using the standard coordinate $z=x+iy$ on $\mathbb{C}$, the metric $G$ on $\check{D}$ has the form $g dx \otimes dx + g dy \otimes dy$, and $g$ is a bounded function on $D(\frac{r_{0}}{2}) - \{0\}$. \end{lemma} \begin{proof} Let $z_{0}$ be a point in $D(\frac{r_{0}}{2})$ such that $z_{0} \neq 0$. Consider a holomorphic isomorphism $\varphi_{z_{0}}: \overline{D} \rightarrow D(r_{0})$ such that $\varphi_{z_{0}}(0) = z_{0}$. Let $r_{*} = |\varphi_{z_{0}}^{-1}(0)|$. Define $A(t) = |\varphi_{z_{0}}(D(t))|_{G}$ and $L(t) = |\varphi_{z_{0}}(\partial D(t))|_{G}$. The following computation is similar to the proof of Lemma \ref{lemma_decay_rate}. By (\ref{gromov_formula}) and Lemma \ref{lemma_isoperimetric_boundary}, there exists $C_{1} > 0$ such that, for $t \in (0,1)$ and $t \neq r_{*}$, \[ \dot{A} \geq 2 t^{-1} (1- C_{1}L) A. \] Since $A > 0$ when $t >0$, we have \[ A^{-1} \dot{A} \geq 2 t^{-1}- 2 C_{1} t^{-1} L. \] Integrating both sides of the above inequality on $[r,1]$ for $r>0$, we get \begin{equation}\label{lemma_metric_bound_1} A(r) \leq r^{2} A(1) \exp \left( 2 C_{1} \int_{r}^{1} t^{-1} L dt \right). \end{equation} We have \begin{eqnarray*} \int_{r}^{1} t^{-1} L dt & = & \int_{r}^{1} t^{-1} \int_{0}^{2 \pi} \|d \varphi_{z_{0}} \|_{G}\ t d \theta dt \\ & \leq & \int_{0}^{1}\int_{0}^{2 \pi} \| d \varphi_{z_{0}} \|_{G} d \theta dt. \end{eqnarray*} Applying Lemma \ref{lemma_derivative_area} to the above inequality, we obtain \begin{equation}\label{lemma_metric_bound_2} \int_{r}^{1} t^{-1} L dt \leq C(\mu) |\varphi_{z_{0}}(D)|_{G}^{\frac{1}{2}}.
\end{equation} By (\ref{lemma_metric_bound_1}) and (\ref{lemma_metric_bound_2}), we get \begin{equation}\label{lemma_metric_bound_3} A(r) \leq r^{2} |D(r_{0})|_{G} \exp \left( 2 C_{1} C(\mu) |D(r_{0})|_{G}^{\frac{1}{2}} \right). \end{equation} Since $\displaystyle \| d \varphi_{z_{0}} (0) \|_{G}^{2} = \lim_{r \rightarrow 0} (\pi r^{2})^{-1} A(r)$, by (\ref{lemma_metric_bound_3}), we infer that $\| d \varphi_{z_{0}} (0) \|_{G}$ is bounded for all $z_{0} \in D(\frac{r_{0}}{2})$. When we consider $\varphi_{z_{0}}$ as a holomorphic function on $D$, denote by $\varphi_{z_{0}}'(0)$ the derivative of $\varphi_{z_{0}}$ at $0$. Then \[ \|d \varphi_{z_{0}} (0)\|_{G} = \sqrt{g(z_{0})} |\varphi_{z_{0}}'(0)|. \] Since $z_{0} \in D(\frac{r_{0}}{2})$, it is easy to check that $|\varphi_{z_{0}}'(0)|^{-1}$ is bounded for all $z_{0}$. Therefore, $g$ is bounded on $D(\frac{r_{0}}{2}) - \{0\}$. \end{proof} \section{An Almost Complex Structure}\label{section_almost_structure} Suppose $(N,J)$ is an almost complex manifold. Then its tangent bundle $TN$ is also a manifold. In this section, we shall describe a natural structure on $TN$ which makes $TN$ an almost complex manifold. This almost complex structure will be used in the next section. This structure was constructed in \cite[Proposition 6.7]{yano_kobayashi} and its properties have been extensively studied in \cite{lempert_szoke}. Denote by $P$ the projection $P: TN \rightarrow N$. For each $q \in N$, $P^{-1}(q)$ is a complex linear space. Certainly, $P^{-1}(q)$ is a complex manifold. For $v \in P^{-1}(q)$, we shall use the pair $(q,v)$ to denote this point in $TN$. Suppose $h: \Omega \rightarrow N$ is a $J$-holomorphic map, where $\Omega$ is an open subset of $\mathbb{C}$. There are liftings of $h$, defined as $\frac{\partial h}{\partial x}: \Omega \rightarrow TN$ and $\frac{\partial h}{\partial y}: \Omega \rightarrow TN$, which are the derivatives of $h$, i.e.
\[ \frac{\partial h}{\partial x} = dh \cdot \frac{\partial}{\partial x} \qquad \text{and} \qquad \frac{\partial h}{\partial y} = dh \cdot \frac{\partial}{\partial y}, \] where $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ are the standard vector fields on $\Omega$. We shall define an almost complex structure $J^{(1)}$ on $TN$ such that $\frac{\partial h}{\partial x}$ and $\frac{\partial h}{\partial y}$ are $J$-holomorphic maps. For any coordinate chart $B$ of $N$, using its coordinates, $TB$ is identified with $B \times \mathbb{R}^{2n}$, and $T^{2} B = TT B$ is identified with $B \times \mathbb{R}^{2n} \times \mathbb{R}^{2n} \times \mathbb{R}^{2n}$. Therefore, a point in $T^{2} B$ is $(q, v, w_{1}, w_{2})$, where $(q,v) \in TB \subseteq TN$, $w_{1} \in T_{q} B = \mathbb{R}^{2n}$, and $w_{2} \in T_{v} \mathbb{R}^{2n} = \mathbb{R}^{2n}$. By this identification, the almost complex structure on $B \subseteq N$ becomes a map $J: B \rightarrow \text{GL}(\mathbb{R}^{2n})$ such that $J(q)^{2} = - \text{Id}$ and the action of $J$ on $T B$ is \[ J(q,v) = (q, J(q) v). \] For convenience, we introduce the notation $d_{v}J$, which is the directional derivative in the direction $v$, i.e.\ $d_{v} J = dJ \cdot v$. Clearly, $d_{v} J $ is a linear transformation on $\mathbb{R}^{2n}$. The definition of $J^{(1)}$ on $T^{2} B$ is (see also \cite[p.\ 76, (3.5)]{lempert_szoke}) \begin{equation} J^{(1)}(q, v, w_{1}, w_{2}) = (q, v, J(q) w_{1}, (d_{v} J)(q) w_{1} + J(q) w_{2} ). \end{equation} By the definition of $J^{(1)}$, we have \begin{eqnarray*} \left( J^{(1)} \right)^{2} (q, v, w_{1}, w_{2}) & = & J^{(1)} (q, v, J w_{1}, (d_{v} J) w_{1} + J w_{2} ) \\ & = & (q, v, J^{2} w_{1}, (d_{v} J) J w_{1} + J (d_{v} J) w_{1} + J^{2} w_{2}). \end{eqnarray*} Clearly, $J^{2} = - \text{Id}$, $J^{2} w_{1} = - w_{1}$ and $J^{2} w_{2} = - w_{2}$. We also have \[ (d_{v} J) J w_{1} + J (d_{v} J) w_{1} = (d_{v} J^{2}) w_{1} = (- d_{v} \text{Id}) w_{1} = 0.
\] Therefore, $\left( J^{(1)} \right)^{2} (q, v, w_{1}, w_{2}) = (q, v, -w_{1}, -w_{2})$. We infer that $J^{(1)}$ defines an almost complex structure in a coordinate chart of $TN$. Actually, the definition in a particular chart is enough for our proof of Theorems \ref{theorem_finite_area} and \ref{theorem_tame}. However, for the interest of the reader, we would like to point out that this definition makes sense globally. (See \cite[Section 3]{lempert_szoke}.) Moreover, we have the following useful proposition (see \cite[Theorem 3.2]{lempert_szoke}). \begin{proposition}\label{proposition_almost_complex} The above $J^{(1)}$ is a well-defined almost complex structure on the manifold $TN$ which satisfies the following properties. (1). The projection $P: TN \rightarrow N$ is $J$-holomorphic. (2). The inclusion of the fibre $P^{-1}(q) \hookrightarrow TN$ is $J$-holomorphic. (3). If $\Omega$ is an open subset of $\mathbb{C}$ and $h: \Omega \rightarrow N$ is $J$-holomorphic, then $\frac{\partial h}{\partial x}: \Omega \rightarrow TN$ and $\frac{\partial h}{\partial y}: \Omega \rightarrow TN$ are $J$-holomorphic. \end{proposition} \begin{proof} (1) and (2) are obviously true in a coordinate chart. Therefore, they are true globally. (3) follows immediately from (c) of \cite[Theorem 3.2]{lempert_szoke}. One can also easily check it by differentiating the equation $\frac{\partial h}{\partial y} = J \frac{\partial h}{\partial x}$. \end{proof} \begin{remark} In \cite[1.4]{gromov}, Gromov describes an almost complex structure which is slightly different from the above $J^{(1)}$. But these two structures are actually related. They also play the same role in the proof of Theorems \ref{theorem_finite_area} and \ref{theorem_tame}. Let $PTN$ be the projectivization of $TN$. The tangent bundle of $PTN$ contains a subbundle $\Theta$. (Here $\Theta$ is the notation in \cite[p.\ 318, 1.4]{gromov}.)
The paper \cite{gromov} defines a complex structure on the vector bundle $\Theta \rightarrow PTN$. In fact, this complex structure comes from the above $J^{(1)}$ on $T^{2}N \rightarrow TN$. Define $TN-N = \{ (q,v) \mid q \in N, v \in T_{q}N, v \neq 0 \}$. Denote the natural projection by $P^{(1)}: (TN-N) \rightarrow PTN$. We try to descend $J^{(1)}$ on $TN-N$ to $PTN$. More precisely, suppose $w \in T_{(q,v)} (TN-N)$, $u \in T_{P^{(1)}(q,v)} PTN$, and $d P^{(1)} w = u$; then we try to define $J^{(1)} u = d P^{(1)} (J^{(1)} w)$. This definition does not work in general because it depends on the choice of $w$ for a fixed $u$. However, it does work when $u \in \Theta$. This induces the complex structure in \cite{gromov} on $\Theta$. \end{remark} \section{Higher Order Derivatives}\label{section_higher_order_derivatives} The goal of this section is to finish the proof of Theorems \ref{theorem_finite_area} and \ref{theorem_tame}. By Lemma \ref{lemma_lipschitz}, it remains to study the higher order derivatives of $f$ near $0 \in D^{+}$. The proof follows Gromov's approach in \cite[1.4]{gromov}. The idea is as follows. Roughly speaking, based on Lemma \ref{lemma_lipschitz}, the derivatives of $f$ are also $J$-holomorphic maps which satisfy the same assumption as $f$ does. Therefore, a bootstrapping argument finishes the proof. This argument is remarkably different from that of an analytic proof. In an analytic proof, higher order regularity is reduced to the problem of the elliptic regularity of PDEs (see e.g. \cite[p.\ 92 \& Appendix B.4]{mcduff_salamon}). Our argument here, however, relies on a geometric construction. First, we recall the definition of a differentiable map on $D^{+}(r)= \{z \mid |z|<r, \text{Im}z \geq 0 \}$, where $0 < r \leq 1$.
A map $h: D^{+}(r) \rightarrow \mathbb{R}^{m}$ is said to be $C^{k}$ ($1 \leq k \leq +\infty$) if, for every $z \in D^{+}(r)$, there exists an open neighborhood $U_{z}$ of $z$ in $\mathbb{R}^{2}$ and a $C^{k}$ function $h_{z}: U_{z} \rightarrow \mathbb{R}^{m}$ such that $h|_{U_{z} \cap D^{+}(r)} = h_{z}|_{U_{z} \cap D^{+}(r)}$. This is the standard definition of a $C^{k}$ map on a manifold with boundary. We shall use the following Lemmas \ref{lemma_derivative_extension} and \ref{lemma_smooth} about $C^{k}$ maps on $D^{+}(r)$. Their proofs are given in the Appendix. \begin{lemma}\label{lemma_derivative_extension} Suppose $h: D^{+}(r) \rightarrow \mathbb{R}^{m}$ is $C^{k}$ and $h: \check{D}^{+}(r) \rightarrow \mathbb{R}^{m}$ is $C^{k+1}$ for some $k$ such that $0 \leq k < + \infty$. Suppose $d^{k+1}h$ on $\check{D}^{+}(r)$ has a continuous extension over $0 \in D^{+}(r)$. Then $h$ is $C^{k+1}$ in $D^{+}(r)$. \end{lemma} For a function $h$ defined on an open domain of $\mathbb{R}^{n}$, we say $h$ is $C^{\infty}$, or smooth, if $h$ is $C^{k}$ for all $k$ such that $0 \leq k < + \infty$. However, on $D^{+}(r)$ the situation becomes slightly subtle. If $h$ is $C^{k}$ on $D^{+}(r)$ for all $k$ such that $0 \leq k < + \infty$, then is $h$ a $C^{\infty}$ function? Or can we find a $C^{\infty}$ function $h_{z}$ on $U_{z}$ such that $h|_{U_{z} \cap D^{+}(r)} = h_{z}|_{U_{z} \cap D^{+}(r)}$ for all $z$? The following lemma gives an affirmative answer. \begin{lemma}\label{lemma_smooth} Suppose $h: D^{+}(r) \rightarrow \mathbb{R}^{m}$ is $C^{k}$ for all $k$ such that $0 \leq k < + \infty$. Then $h$ is $C^{\infty}$. \end{lemma} By Lemma \ref{lemma_lipschitz}, $f$ is continuous at $z=0$. Since $f(\partial \check{D}^{+}) \subseteq W$ and $W$ is closed, we have $f(0) \in W$.
We can find an open neighborhood $U_{0}$ of $f(0)$ in $M$ satisfying the following properties: (1).\ By (3) in Lemma \ref{lemma_hermitian}, we can require that a $1$-form $\alpha_{0}$ is defined in $U_{0}$ which vanishes on $W \cap U_{0}$ and satisfies \begin{equation}\label{tame_U_0} d \alpha_{0}(v,Jv) \geq C_{0} \|v\|^{2} \end{equation} for some constant $C_{0} > 0$ and all tangent vectors on $U_{0}$. (2).\ $U_{0}$ is a coordinate chart; via its coordinates, $U_{0}$ is identified with $B_{1} \times B_{2} \subseteq \mathbb{R}^{n} \times \mathbb{R}^{n}$, where $B_{1}$ and $B_{2}$ are open subsets of $\mathbb{R}^{n}$, and $W \cap U_{0} = B_{1} \times \{0\}$. (3).\ There exists a complex frame $\{ e_{1}, \cdots, e_{n} \}$ of $TU_{0}$ such that $e_{j}(q) \in T_{q}W$ ($1 \leq j \leq n$) for all $q \in W \cap U_{0}$. Using the above frame, $TU_{0}$ is identified with $U_{0} \times \mathbb{C}^{n}$, \begin{equation}\label{T_2_U_0} T(U_{0} \times \mathbb{C}^{n})= U_{0} \times \mathbb{C}^{n} \times \mathbb{R}^{2n} \times \mathbb{C}^{n} \end{equation} and a point in $T(U_{0} \times \mathbb{C}^{n})$ is represented by a tuple $(q, v, w_{1}, w_{2})$, where $q \in U_{0}$, $v \in \mathbb{C}^{n}$, $w_{1} \in T_{q} U_{0} = \mathbb{R}^{2n}$ and $w_{2} \in T_{v} \mathbb{C}^{n} = \mathbb{C}^{n}$. We use the notation \begin{equation} W_{0} = W \cap U_{0}. \end{equation} Then $W_{0} = B_{1} \times \{0\}$, \begin{equation}\label{T_W_0} TW_{0} = (B_{1} \times \{0\}) \times \mathbb{R}^{n} \subseteq U_{0} \times \mathbb{C}^{n}, \end{equation} and $J(TW_{0}) = (B_{1} \times \{0\}) \times J_{0} \mathbb{R}^{n} \subseteq U_{0} \times \mathbb{C}^{n}$, where $\mathbb{R}^{n} \hookrightarrow \mathbb{C}^{n}$ is the standard inclusion and $J_{0}$ is the standard complex structure on $\mathbb{C}^{n}$. Define the projections \begin{equation} P_{1}: U_{0} \times \mathbb{C}^{n} \rightarrow U_{0} \qquad \text{and} \qquad P_{2}: U_{0} \times \mathbb{C}^{n} \rightarrow \mathbb{C}^{n}.
\end{equation} Let's consider the almost complex structure $J^{(1)}$ on $U_{0} \times \mathbb{C}^{n} = TU_{0}$ which is defined in Proposition \ref{proposition_almost_complex}. By (1) and (2) of Proposition \ref{proposition_almost_complex}, we know that $P_{1}$ and the fibre inclusion $\{q\} \times \mathbb{C}^{n} \hookrightarrow U_{0} \times \mathbb{C}^{n}$ are $J$-holomorphic. Thus, by (\ref{T_2_U_0}), $J^{(1)}$ on $T^{2} U_{0}$ has the form \begin{equation}\label{J_1} J^{(1)} (q, v, w_{1}, w_{2}) = (q, v, J(q) w_{1}, \phi(q,v) w_{1} + J_{0} w_{2}), \end{equation} where $\phi$ is a function on $U_{0} \times \mathbb{C}^{n}$ whose values are real linear maps from $\mathbb{R}^{2n}$ to $\mathbb{C}^{n}$. By Lemma \ref{lemma_lipschitz}, we know that there exists $\epsilon >0$ such that the following holds: (1).\ $f(D^{+}(\epsilon)) \subseteq U_{0}$, where $D^{+}(\epsilon) = \{ z \in D^{+} \mid |z| < \epsilon \}$. (2).\ There exists an open ball $B_{3}$ with finite radius in $\mathbb{C}^{n}$ such that the image of $\frac{\partial f}{\partial x}$ (resp. $\frac{\partial f}{\partial y}$) $: \check{D}^{+}(\epsilon) \rightarrow TU_{0} = U_{0} \times \mathbb{C}^{n}$ is actually contained in $U_{0} \times B_{3}$, i.e.\ we get the map \begin{equation} \frac{\partial f}{\partial x} \ \left( \text{resp.}\ \frac{\partial f}{\partial y} \right): \check{D}^{+}(\epsilon) \rightarrow U_{0} \times B_{3} \subseteq TU_{0}. \end{equation} (3).\ The images of $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are relatively compact in $U_{0} \times B_{3}$. Since $B_{3}$ is relatively compact in $\mathbb{C}^{n}$, shrinking $U_{0}$ if necessary, we get \begin{equation}\label{vertical_bound} \|\phi(q,v)\| \leq C_{1} \end{equation} for some constant $C_{1}>0$, $\phi$ in (\ref{J_1}) and all $(q,v) \in U_{0} \times B_{3}$. 
Here we consider $\phi(q,v)$ as a real linear map from $T_{q}U_{0}$ to $\mathbb{C}^{n}$, where $T_{q}U_{0}$ is equipped with the Hermitian metric $H_{q}$, $\mathbb{C}^{n}$ is equipped with the standard metric, and the norm of $\phi$ comes from these two metrics. Define \begin{equation} W_{x} = TW_{0} \cap (U_{0} \times B_{3}) \qquad \text{and} \qquad W_{y} = J(TW_{0}) \cap (U_{0} \times B_{3}). \end{equation} \begin{lemma} $W_{x}$ and $W_{y}$ are closed totally real submanifolds of $U_{0} \times B_{3}$. \end{lemma} \begin{proof} It suffices to show that $TW_{0}$ and $J(TW_{0})$ are closed totally real submanifolds of $U_{0} \times \mathbb{C}^{n}$. Clearly, they are closed submanifolds. It remains to check that they are totally real (see Definition \ref{definition_totally_real}). Obviously, we have $\dim (TW_{0}) = \dim (J(TW_{0})) = \frac{1}{2} \dim (U_{0} \times \mathbb{C}^{n})$. By (\ref{T_2_U_0}), (\ref{T_W_0}) and (\ref{J_1}), at the base point $(q,v) \in TW_{0}$, the elements in $T^{2} W_{0}$ and $J^{(1)} (T^{2} W_{0})$ have the form \[ (q, v, w_{1}, w_{2}) \in T^{2} W_{0} \] and \[ (q, v, J(q) u_{1}, \phi(q,v)u_{1} + J_{0} u_{2}) \in J^{(1)}(T^{2} W_{0}), \] where $w_{1}$, $u_{1} \in \mathbb{R}^{n} \times \{0\} \subseteq \mathbb{R}^{2n}$, and $w_{2}$, $u_{2} \in \mathbb{R}^{n} \subseteq \mathbb{C}^{n}$. Then $J(q) u_{1} \in J(q) \mathbb{R}^{n}$ and $J_{0} u_{2} \in J_{0} \mathbb{R}^{n}$. If \[ (q, v, w_{1}, w_{2}) = (q, v, J(q) u_{1}, \phi(q,v)u_{1} + J_{0} u_{2}), \] then, by the fact $J(q) \mathbb{R}^{n} \cap \mathbb{R}^{n} = \{0\}$ and $J_{0} \mathbb{R}^{n} \cap \mathbb{R}^{n} = \{0\}$, we infer that the vectors $w_{1}$, $w_{2}$, $u_{1}$ and $u_{2}$ all vanish. This implies that $TW_{0}$ is totally real. By a similar argument, one sees that $J(TW_{0})$ is also totally real.
\end{proof} By the fact that $f(\partial \check{D}^{+}) \subseteq W$ and $f$ is $J$-holomorphic, we know that $\frac{\partial f}{\partial x} (\partial \check{D}^{+}(\epsilon))$ $\subseteq W_{x}$ and $\frac{\partial f}{\partial y} (\partial \check{D}^{+}(\epsilon)) \subseteq W_{y}$. By (3) of Proposition \ref{proposition_almost_complex}, we infer $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are $J$-holomorphic in the interior of $\check{D}^{+}(\epsilon)$. By the smoothness of $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$, we know that they are $J$-holomorphic on $\check{D}^{+}(\epsilon)$. Now we define a $1$-form on $U_{0} \times B_{3}$. In $U_{0}$, we already have the $1$-form $\alpha_{0}$ satisfying (\ref{tame_U_0}). Using the standard coordinate $(x_{1}+iy_{1}, \cdots, x_{n}+iy_{n}) \in \mathbb{C}^{n}$, define a $1$-form in $\mathbb{C}^{n}$ as \begin{equation} \beta = \sum_{j=1}^{n} x_{j} d y_{j}. \end{equation} For $\lambda > 0$, define a $1$-form on $U_{0} \times B_{3}$ as \begin{equation}\label{form} \alpha = P_{1}^{*} \alpha_{0} + \lambda P_{2}^{*} \beta. \end{equation} \begin{lemma} The $1$-form $\alpha$ in (\ref{form}) vanishes on $W_{x}$ and $W_{y}$. Furthermore, there exists a $\lambda > 0$ such that $d \alpha$ tames $J^{(1)}$ on $U_{0} \times B_{3}$. \end{lemma} \begin{proof} Since $\alpha_{0}$ vanishes on $W_{0}$ and $\beta$ vanishes on $\mathbb{R}^{n}$ and $J_{0} \mathbb{R}^{n}$ in $\mathbb{C}^{n}$, we know that $\alpha$ vanishes on $W_{x}$ and $W_{y}$. 
By (\ref{tame_U_0}), (\ref{vertical_bound}) and the fact that $d \beta (w_{2}, J_{0} w_{2}) = \| w_{2} \|^{2}$ for $w_{2} \in \mathbb{C}^{n}$, we have \begin{eqnarray*} & & d \alpha ((w_{1}, w_{2}), J^{(1)}(w_{1}, w_{2})) \\ & = & d \alpha_{0} (dP_{1}(w_{1}, w_{2}), dP_{1} \cdot J^{(1)}(w_{1}, w_{2})) + \lambda d \beta (dP_{2}(w_{1}, w_{2}), dP_{2} \cdot J^{(1)}(w_{1}, w_{2})) \\ & = & d \alpha_{0} (w_{1}, Jw_{1}) + \lambda d \beta (w_{2}, \phi w_{1} + J_{0} w_{2}) \\ & = & d \alpha_{0} (w_{1}, Jw_{1}) + \lambda d \beta (w_{2}, \phi w_{1}) + \lambda d \beta (w_{2}, J_{0} w_{2}) \\ & \geq & C_{0} \|w_{1}\|^{2} - \lambda \|w_{2}\| \|\phi\| \|w_{1}\| + \lambda \|w_{2}\|^{2} \\ & \geq & C_{0} \|w_{1}\|^{2} - \lambda C_{1} \|w_{1}\| \|w_{2}\| + \lambda \|w_{2}\|^{2} \\ & \geq & C_{0} \|w_{1}\|^{2} - \lambda C_{1} \left( \tfrac{1}{2} C_{1} \|w_{1}\|^{2} + \tfrac{1}{2} C_{1}^{-1} \|w_{2}\|^{2} \right) + \lambda \|w_{2}\|^{2} \\ & = & \left( C_{0} - \tfrac{1}{2} \lambda C_{1}^{2} \right) \|w_{1}\|^{2} + \tfrac{1}{2} \lambda \|w_{2}\|^{2}. \end{eqnarray*} We finish the proof by choosing $\lambda = C_{0} C_{1}^{-2}$. \end{proof} In summary, the maps $\frac{\partial f}{\partial x}: (\check{D}^{+}(\epsilon), \partial \check{D}^{+}(\epsilon)) \rightarrow (U_{0} \times B_{3}, W_{x})$ and $\frac{\partial f}{\partial y}: (\check{D}^{+}(\epsilon),$ $\partial \check{D}^{+}(\epsilon)) \rightarrow (U_{0} \times B_{3}, W_{y})$ satisfy the assumption of Theorem \ref{theorem_tame}. Applying Lemma \ref{lemma_tame_to_area} to these two maps, we obtain the following. \begin{lemma}\label{lemma_boot_strap} There exists $\epsilon_{0} > 0$ such that $\frac{\partial f}{\partial x}: (\check{D}^{+}(\epsilon_{0}), \partial \check{D}^{+}(\epsilon_{0})) \rightarrow (U_{0} \times B_{3}, W_{x})$ and $\frac{\partial f}{\partial y}: (\check{D}^{+}(\epsilon_{0}), \partial \check{D}^{+}(\epsilon_{0})) \rightarrow (U_{0} \times B_{3}, W_{y})$ are well-defined $J$-holomorphic maps. 
Here $W_{x}$ and $W_{y}$ are closed totally real submanifolds of $U_{0} \times B_{3}$. The images of $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are relatively compact and have finite areas with respect to any metric on $U_{0} \times B_{3}$. \end{lemma} Now we are in a position to finish the proof of Theorems \ref{theorem_finite_area} and \ref{theorem_tame}. The comment after Lemma \ref{lemma_tame_to_area} tells us that, by Lemma \ref{lemma_tame_to_area}, Theorem \ref{theorem_finite_area} immediately implies Theorem \ref{theorem_tame}. Therefore, it suffices to prove Theorem \ref{theorem_finite_area}. \begin{proof}[Proof of Theorem \ref{theorem_finite_area}] Following \cite[1.4.B]{gromov}, this proof is a bootstrapping argument. By Lemma \ref{lemma_boot_strap}, we know that $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are well-defined on $\check{D}^{+}(\epsilon_{0})$. The proof consists of two main steps. First, we prove that, if $f$ has a $C^{k}$ extension for some $k$ such that $0 \leq k < + \infty$, then $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ also have $C^{k}$ extensions. By comparing the conclusion of Lemma \ref{lemma_boot_strap} and the assumption of Theorem \ref{theorem_finite_area}, up to a holomorphic reparametrization of the domain, we infer that $\frac{\partial f}{\partial x}: \check{D}^{+}(\epsilon_{0}) \rightarrow U_{0} \times B_{3}$ and $\frac{\partial f}{\partial y}: \check{D}^{+}(\epsilon_{0}) \rightarrow U_{0} \times B_{3}$ can be viewed as special cases of the map $f$ in Theorem \ref{theorem_finite_area}. Therefore, if $f$ has a $C^{k}$ extension, then, as special cases, so do $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$. Second, we prove that $f$ is $C^{k}$ on $D^{+}$ for all $k$ such that $0 \leq k < + \infty$. By Lemma \ref{lemma_continuity}, we know $f$ has a continuous extension over $0 \in D^{+}$.
By taking a coordinate chart near $f(0)$, we may assume that $f$ maps $D^{+}(r)$ into $\mathbb{R}^{m}$ for some $r \in (0, \epsilon_{0})$. By the continuity of $f$ on $D^{+}$ and the result of the first step, we infer that $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ also have $C^{0}$ extensions. Clearly, $f$ is smooth on $\check{D}^{+}$. By Lemma \ref{lemma_derivative_extension}, we know that $f$ has a $C^{1}$ extension. In general, if we know that $f$ has a $C^{k}$ extension over $0$, then, by the result of the first step, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ also have $C^{k}$ extensions. Since $f$ is smooth on $\check{D}^{+}$, by Lemma \ref{lemma_derivative_extension}, we infer that $f$ has a $C^{k+1}$ extension. By induction on $k$, this completes the second step. Finally, Lemma \ref{lemma_smooth} and the result of the second step finish the proof. \end{proof}
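As a side note on the taming lemma above: once the bound $\|\phi\| \leq C_{1}$ of (\ref{vertical_bound}) is available, the choice $\lambda = C_{0} C_{1}^{-2}$ works because of the elementary inequality $C_{0}a^{2} - \lambda C_{1} a b + \lambda b^{2} \geq \tfrac{1}{2}C_{0}a^{2} + \tfrac{1}{2}\lambda b^{2}$ for $a, b \geq 0$, whose difference is the complete square $\tfrac{1}{2}C_{0}(a - b/C_{1})^{2}$. A small numerical sanity check of this inequality (purely illustrative; constants and sample values are arbitrary):

```python
import random

def taming_lower_bound_holds(C0, C1, a, b):
    """Check that, with lam = C0 / C1**2 (the choice made in the lemma),
    C0*a**2 - lam*C1*a*b + lam*b**2 >= 0.5*C0*a**2 + 0.5*lam*b**2,
    i.e. the difference is the complete square 0.5*C0*(a - b/C1)**2 >= 0."""
    lam = C0 / C1**2
    lhs = C0 * a**2 - lam * C1 * a * b + lam * b**2
    rhs = 0.5 * C0 * a**2 + 0.5 * lam * b**2
    return lhs >= rhs - 1e-12  # tiny slack for floating point

random.seed(0)
assert all(
    taming_lower_bound_holds(
        C0=random.uniform(0.1, 10.0),
        C1=random.uniform(0.1, 10.0),
        a=random.uniform(0.0, 10.0),
        b=random.uniform(0.0, 10.0),
    )
    for _ in range(10_000)
)
```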
\section{Introduction} Stochastic linear models are instruments of paramount importance for describing physical, social or economic systems: despite being simple enough to be analytically tractable, they make it possible to describe accurately a large number of qualitatively different systems. In this work, our focus will be the study of linear models in the regime of high dimensionality, and the analysis of the effects induced by the collective regime of interactions on the overall stability and on the first and second-order properties of the system. Such analysis is related to multiple fields of activity previously considered in the literature: \begin{itemize} \item The properties of large ecosystems close to equilibrium have been studied with similar methods, finding that universality controls the stability of large food webs~\cite{May:1972aa}. Our analysis extends in particular the results obtained in~\cite{Jirsa:2004aa}, which consider the \emph{dynamic} version of May's seminal model~\cite{May:1972aa}, to the case of a symmetric interaction kernel (see Sec.~\ref{sec:goe}). \item Linear models for self-exciting marked point-processes are customarily used in order to model seismic activity \cite{Ogata:1988aa} (the mark being used to account for the intensity of the activity). This work generalizes this approach by suggesting that it is possible to see the components $i \in \{1,\dots,N\}$ as a set of \emph{interacting} marks. From this perspective, our results suggest that the critical relaxation measured in these systems might be endogenously generated by the interaction among a large number of marks. \item Multivariate linear models are standard tools in order to account for the cross-correlation among economic variables~\cite{Sims:1980aa}, and in particular linear self-exciting point processes can be used in order to assess the risk of contagion in a financial network~\cite{Aitsahalia:2010aa}.
\item Linearly interacting point processes are used in order to model neural networks~\cite{Reynaud:2013aa} made of a large number of components (although non-linear generalizations are customarily used in order to model inhibition). \end{itemize} \vskip .3cm In all these cases, one is interested in characterizing the statistical properties of a system composed of a large number $N\gg 1$ of linearly interacting entities $X(t) = \{ X_i(t) \}_{i=1}^N$, $t$ being interpreted as the time and $i$ as an index labeling the different entities. A major concern is the limiting behavior of the system in the regime of high-dimensionality, as the limit $N\to\infty$ is expected to be fixed by universality. We shall address this issue from three angles, namely {\em Stability}, {\em Endogeneity} and {\em Relaxation}. Let us describe these perspectives more clearly: {\bf Stability.} An exogenous noise can drive the system towards the instability point characterized by the divergence of one or more components of $X(t)$. Indeed, the interaction network generates feedback loops which may enhance the susceptibility of the system to external perturbations. Hence, we want to determine the critical interaction level beyond which such instability effects arise, dominating the behavior of the system. {\bf Endogeneity.} We will be interested in characterizing what proportion of the \emph{average intensity}\footnote{ With a slight abuse of language, we will refer to $\Lambda$ as the vector of average intensities for both processes that we will consider (VAR and Hawkes), see also the footnote of page~\pageref{fnt:abuse_not}. } \begin{equation} \label{eq:2} \Lambda_i=\frac{\langle dX_i(t)\rangle}{dt} \end{equation} is related to exogenous factors and what proportion derives from endogenous cross-contamination effects among the components.
{\bf Relaxation.} As we will consider the components of the system to be individually characterized by short memory and fast relaxation, it will be interesting to study whether or not the lagged cross-correlations \begin{equation} \label{eq:xcorr_intro} c_{ij}(\tau) = \frac{\avx{dX_i(t)dX_j(t+\tau)} -\avx{dX_i(t)}\avx{dX_j(t+\tau)}}{dt^2} \end{equation} display slow relaxation in the limit of large $\tau$ as a consequence of the collective regime of interactions. \footnote{ While in principle one could quantify the amount of endogenous interaction by focusing on the cross-correlations, we choose to adopt the average intensity as a proxy. The results of Secs.~\ref{sec:det},~\ref{sec:goe} and~\ref{sec:rrg} show that both quantities appropriately signal the overall degree of self-interaction in the system. } Strongly endogenous effects are commonly found in applications along with slow relaxation of the auto-correlation (between the same components of $X(t)$) or the cross-correlation (between different components of $X(t)$) functions. This is, for instance, the case of high-frequency dynamics in financial markets. A growing number of empirical works describing this high-frequency dynamics by means of linear models reveal that financial correlations are characterized by long-range memory, signaled by power-law relaxations with small exponents \cite{Bacry:2012aa,Filimonov:2012aa,Hardiman:2013aa,Bacry:2013ac}. This behavior can in principle be justified on the basis of the self-reflexive character of financial markets, by arguing that, as a consequence of the strongly endogenous regime in which markets operate, slow relaxation can be induced in the system. From this perspective, the results of Refs.~\cite{Bremaud:2001aa,Jaisson:2013aa} can be used in order to investigate analytically the onset of such a critical regime under the implicit assumption that each of the entities composing the system is \emph{individually} poised at criticality.
Our aim is to discuss the complementary scenario in which long-range correlations in the system are induced \emph{as an effect of its interaction network}, by assuming that no single component of the system exhibits critical behavior on its own. More precisely, we shall suppose that individual units are fast-relaxing and try to characterize the influence of the network structure on the long-time behavior ($\tau \rightarrow +\infty$) of the correlations $c_{ij}(\tau)$ in the limit $N \rightarrow +\infty$. The limit in which the system can collectively encode \emph{long memory}, signaled by the divergence of the quantity $\hat c_{ij}(0) = \int_0^\infty d\tau \,c_{ij}(\tau)$, will be referred to as the issue of {\bf Criticality} of the system. Let us notice that, as a consequence of the assumption that individual units are fast-relaxing, the regime in which the $\tau\to\infty$ limit is taken before the $N\to\infty$ limit becomes trivial. On the other hand, taking the $N\to\infty$ limit before any other limit will allow us to access a very rich phenomenological behavior. \vskip .2cm The organization of the paper is the following. We introduce our main framework for linear systems in continuous time in Section~\ref{sec:lin_mod}. VAR models are first introduced, followed by Hawkes processes. Then, we restrict our study to the particular case of factorizable linear systems. At the end of this Section, we present the methodology for analyzing these systems that will be used on each model throughout the paper. Section~\ref{sec:det} studies the case of a deterministic regular network of interactions, whereas Sections~\ref{sec:goe} and~\ref{sec:rrg} study the case of random models within the framework of two tractable random matrix ensembles: the Gaussian Orthogonal Ensemble and the Regular Random Graph Ensemble. We draw our conclusions in Section~\ref{sec:conclusions}, while the more technical parts of the discussion are relegated to the Appendices.
\section{Linear systems in continuous time} \label{sec:lin_mod} In this preliminary section we define the classes of linear models on which we focus our analysis. Besides fixing the notations and the conventions that we will follow throughout the paper, we aim to highlight the main similarities between the two linear models that will be presented. In particular, we want to show that the relations defining their behavior are formally identical and are essentially due to their linear nature. More general linear models can be expected to share the same behavior, as long as relations such as~(\ref{eq:psi}),~(\ref{eq:av}) and~(\ref{eq:xcorr}) hold. \subsection{Vector Autoregressive model} The first type of model that we are going to consider is an $N$-dimensional Vector AutoRegressive Model (VAR), a widespread model introduced in~\cite{Sims:1980aa} and customarily used in econometrics in order to describe the dynamic relationship among the different components of an economic time-series (for a more exhaustive account of the vast literature concerning VAR processes, we address the reader to the surveys~\cite{Hamilton:1994aa,Watson:1994aa}). The model is defined by a set of $N$ processes $X(t)=\{X_i(t)\}_{i=1}^N$ evolving in discrete time, driven by Gaussian noises $\eta(t) = \{ \eta_i(t) \}_{i=1}^N$, and interacting through a linear matrix kernel $\Phi(\tau)$. In vector notation, the process is defined by the relation \begin{equation} \label{eq:ar} X(t) = \sum_{t^\prime=-\infty}^{t-1} \Phi(t-t^\prime) X(t^\prime) + \eta(t) \; , \end{equation} where we assume the interaction kernel $\Phi(\tau)$ to admit the discrete Fourier transform \begin{equation} \label{eq:disc_fourier} \hat \Phi(\omega) = \sum_{\tau=-\infty}^{+\infty} e^{-i\omega \tau} \Phi(\tau) \, , \end{equation} with components $\hat \Phi(\omega) = \{ \hat \Phi_{ij}(\omega) \}_{ij=1}^N$.
We restrict ourselves to systems for which the interaction kernel $\Phi(\tau) $ is causal, i.e., \begin{equation} \label{causality} \Phi(\tau) = 0,~~~~ \forall \tau < 0. \end{equation} \vskip .3cm \noindent {\bf Stability of VAR models.} Most of the time we shall consider that $\Phi$ satisfies the following so-called stability assumption: \begin{itemize} \item {\bf (H1) Stability Assumption.} The spectral radius of $\hat \Phi(0)$ is smaller than 1, which we can indicate by $|| \hat \Phi(0) || < 1$. Equivalently, all the eigenvalues of $\hat \Phi(0)$ have modulus smaller than one. \end{itemize} \vskip .2cm Indeed, under this last assumption, it is possible to prove (see e.g.~\cite{Hamilton:1994aa}) that the infinite sum \begin{equation} \label{psi} \Psi(\tau) = \mathbb{I}\, \delta(\tau) + \Phi(\tau) + (\Phi * \Phi)(\tau) + (\Phi * \Phi * \Phi)(\tau) + \dots \end{equation} is well-defined, and its Fourier transform can be written as the matrix \begin{equation} \label{eq:psi} \hat \Psi(\omega) = (\mathbb{I} - \hat \Phi(\omega))^{-1} \; , \end{equation} where $\mathbb{I}$ denotes the $N$-dimensional identity matrix. The process $X(t)$ can then be written as the convolution \begin{equation} \label{eq:conv_ar} X(t) = \Psi(t) * \eta(t) \; , \end{equation} where, by convention, the $*$ operator refers to the regular matrix product where all multiplications have been replaced by discrete convolutions, and $\delta(\tau)$ denotes the discrete delta, equal to one for $\tau=0$ and to zero otherwise. The process $X(t)$ is proved to admit a stationary state, and expectations with respect to the associated stationary measure will be denoted by the symbol $\avx{\dots}$.
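As a quick illustration of the role of {\bf (H1)}: when the spectral radius of $\hat \Phi(0)$ is smaller than one, the Neumann series defining $\hat \Psi(0)$ converges geometrically, and its sum agrees with the closed form $(\mathbb{I} - \hat \Phi(0))^{-1}$. A minimal numerical sketch (the matrix and its rescaling are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary 3x3 matrix playing the role of Phi_hat(0),
# rescaled so that its spectral radius is 0.8 < 1.
Phi0 = rng.standard_normal((3, 3))
Phi0 *= 0.8 / np.max(np.abs(np.linalg.eigvals(Phi0)))

# Truncated Neumann series I + Phi0 + Phi0^2 + ...  (geometric convergence under (H1)).
Psi0_series = sum(np.linalg.matrix_power(Phi0, k) for k in range(200))

# Closed form Psi_hat(0) = (I - Phi0)^{-1}.
Psi0_exact = np.linalg.inv(np.eye(3) - Phi0)

assert np.allclose(Psi0_series, Psi0_exact)
```

When the spectral radius exceeds one, the partial sums diverge instead, which is the linear-algebra counterpart of the instability discussed above.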
We will be interested in computing first and second order properties of the process under such measure whenever the stability assumption holds.\\ \vskip .3cm \noindent {\bf Continuous time VAR models.} In order to emphasize the analogy with a Hawkes process, in the next parts of the discussion we will be considering the continuous-time version of this model, also employed in~\cite{Potters:2005aa,Mastromatteo:2011aa} in order to model high-frequency financial data. In this case, Eq.~(\ref{eq:conv_ar}) can be generalized to the continuous case by replacing the discrete convolution with a continuous one. In particular, we can define the continuous-time VAR by substituting the Gaussian noises $\eta(t)$ with a Wiener process with increments $d\eta(t)$. The continuous time process satisfies, by construction, the same properties as its discrete-time counterpart once we define the continuous-time Fourier transform as \begin{equation} \label{eq:fourier} \hat \Phi(\omega) = \int_{-\infty}^{+\infty} d\tau\, e^{-i\omega \tau} \Phi(\tau) \, . \end{equation} In particular, the analytical formulae for the average $\Lambda$ and the cross-correlation matrix $c(\tau)$ are given by the equations below. \vskip .3cm \noindent {\bf Endogeneity of VAR models.} The expectation $\Lambda dt = \avx{dX(t)} $ can be expressed as \begin{equation} \label{eq:av} \Lambda = \hat \Psi(0) \mu \; , \end{equation} where we write $\avx{d\eta(t) } = \mu \, dt $. The relation between endogenous and exogenous effects is determined by the matrix $\hat \Psi(0)$, whose spectral norm determines the maximum output intensity $\Lambda$ as a function of the norm of the driving vector $\mu$.
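As an illustration of Eq.~(\ref{eq:av}), consider a discrete-time VAR with a single-lag kernel $\Phi(1) = A$, so that $X_t = A X_{t-1} + \eta_t$ and $\hat\Psi(0)\mu = (\mathbb{I}-A)^{-1}\mu$. The sketch below (all parameter values are arbitrary and purely illustrative) compares the empirical mean of a simulated path with this prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-lag discrete VAR: X_t = A X_{t-1} + mu + noise, spectral radius of A < 1.
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
mu = np.array([1.0, 0.5])

T = 200_000
X = np.zeros(2)
running_sum = np.zeros(2)
for _ in range(T):
    X = A @ X + mu + 0.1 * rng.standard_normal(2)
    running_sum += X
empirical_mean = running_sum / T

# Prediction of Eq. (av): Lambda = Psi_hat(0) mu = (I - A)^{-1} mu.
predicted = np.linalg.solve(np.eye(2) - A, mu)
assert np.allclose(empirical_mean, predicted, atol=0.02)
```

Here the stationary mean $(1.75, 1.125)$ is substantially larger than the bare drift $\mu=(1.0,0.5)$: the excess is the endogenous amplification encoded in $\hat\Psi(0)$.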
\vskip .3cm \noindent {\bf Correlation of VAR models.} The cross-correlation matrix $$c(t-t^\prime) dt dt^\prime = \avx{ dX(t) dX^T(t^\prime) } - \avx{ dX(t) } \avx{ dX^T(t^\prime) }$$ can be obtained through the Fourier transform $\hat c(\omega)$, which is given by \begin{equation} \label{eq:xcorr} \hat c(\omega) = \hat \Psi^\star(\omega) \Sigma \hat \Psi^T(\omega) \; , \end{equation} where the symbol $^T$ denotes matrix transposition and $^\star$ indicates complex conjugation. We have assumed the covariance term $\Sigma$ to be defined by the relation \begin{equation} \label{sigmavar} dt dt^\prime \,\Sigma \delta(t-t^\prime) =\avx{ d\eta(t) d\eta^T(t^\prime) } - \avx{ d\eta(t)} \avx{ d\eta^T(t^\prime) }, \end{equation} where $\delta(\tau)$ indicates the Dirac delta function. Moreover, we have assumed the matrix $\Sigma$ to be diagonal. \subsection{Hawkes process} The second class of models that we consider is that of Hawkes processes, interacting point processes customarily used to describe self- and cross-excitation phenomena \cite{Hawkes:1971lc,Hawkes:1971nq}. For a long time, Hawkes models have been extensively used to describe the occurrence of earthquakes in some given region \cite{ogata99,helsor02}. They are getting more and more popular in many other applications in which tracking how information diffuses through different ``agents'' is the main concern, e.g., neurobiology (neuron activity) \cite{Rey2}, sociology (spread of terrorist activity) \cite{emhawkes,mo11} or processes on the internet (viral diffusion across social networks) \cite{cranesor08,yangzha}. Their application to finance can be traced back to Refs.~\cite{Chavez-Demoulin:2005aa,Hewlett:2006aa}, and has been followed by a still-ongoing spree of activity~\cite{Bowsher:2007aa,Bauwens:2009aa,Giesecke:2011aa,Toke:2011aa,Embrechts:2011aa,Bacry:2012aa,Filimonov:2012aa,Bacry:2013aa,Bacry:2013ab,Bacry:2013ac,Hardiman:2013aa}.
An $N$-dimensional Hawkes process is defined by a set of $N$ counting processes evolving in continuous time $X(t) = \{ X_i(t) \}_{i=1}^N$\footnote{\label{fnt:abuse_not}With an abuse of notation, we are denoting quantities describing the VAR model with the same symbols adopted for the Hawkes process. We choose to do so in order to show more transparently the close relation between the two models, which satisfy extremely similar relations. We will specify explicitly which of the two frameworks we are considering whenever this notation results ambiguous.}. The probability for an event to be triggered is expressed by a stochastic intensity function $\Lambda(t)=\{\Lambda_i(t) \}_{i=1}^N$ which evolves according to the dynamics: \begin{equation} \label{eq:ar_cont} \Lambda(t) = \mu + \int_{-\infty}^{t} \Phi(t-t^\prime) dX(t^\prime) \; , \end{equation} where the components of $\mu$ are commonly referred to as \emph{exogenous intensities} (or {\em baseline intensities}), and $\Phi(t)$ is a component-wise non-negative, causal (in the sense of \eqref{causality}), locally $L^1$ matrix kernel. Notice that, unlike in the VAR case, in order for the event probabilities $\Lambda(t)dt$ to be well-defined, we need to assume that \begin{equation} \nonumber \Phi(\tau) {\mbox {~~~satisfies component-wise positivity}}. \end{equation} {\bf Stability of Hawkes processes.} If a Hawkes process satisfies the stability assumption {\bf (H1)} specified above for the VAR model, then one can show that $X(t)$ is stationary and stable \cite{Hawkes:1971lc,Hawkes:1971nq}. Moreover, as in the VAR model, this condition implies that the infinite sum $\Psi(\tau)$ (Eq. \eqref{psi}) is well-defined, and its Fourier transform $\hat \Psi(\omega)$ is given by Eq.~(\ref{eq:psi}). \vskip .3cm \noindent {\bf Endogeneity of Hawkes processes.} As for the VAR model, the mean intensity $\avx{ \Lambda(t) } dt = \avx{ dX(t) } = \Lambda\, dt$ is expressed by Eq.~(\ref{eq:av}).
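Eq.~(\ref{eq:av}) can also be checked directly on a simulated Hawkes process. The sketch below uses the branching (cluster) representation of a bivariate Hawkes process with kernels $\Phi_{ij}(\tau) = \alpha_{ij}\,\beta e^{-\beta\tau}$: immigrants arrive as Poisson processes with rates $\mu_i$, and each event of type $j$ triggers a Poisson number (of mean $\alpha_{ij}$) of type-$i$ offspring at exponentially distributed delays. Since here $\hat\Phi(0)=\alpha$, the empirical rates should approach $(\mathbb{I}-\alpha)^{-1}\mu$; all parameter values are arbitrary and purely illustrative:

```python
import heapq
import numpy as np

rng = np.random.default_rng(7)

# Kernel Phi_ij(t) = alpha_ij * beta * exp(-beta t): each type-j event
# generates, on average, alpha_ij offspring of type i.
alpha = np.array([[0.3, 0.2],
                  [0.1, 0.4]])
mu = np.array([0.5, 1.0])
beta, T = 2.0, 20_000.0

events = []  # (time, type) of every realized event
queue = []   # min-heap of events whose offspring are still to be generated
# Immigrants: independent Poisson processes with rates mu_i on [0, T).
for i in range(2):
    n = rng.poisson(mu[i] * T)
    for t in rng.uniform(0.0, T, size=n):
        heapq.heappush(queue, (t, i))

while queue:
    t, j = heapq.heappop(queue)
    events.append((t, j))
    # Offspring of type i triggered by this type-j event.
    for i in range(2):
        for dt in rng.exponential(1.0 / beta, size=rng.poisson(alpha[i, j])):
            if t + dt < T:
                heapq.heappush(queue, (t + dt, i))

counts = np.zeros(2)
for t, i in events:
    counts[i] += 1
empirical_rate = counts / T

predicted = np.linalg.solve(np.eye(2) - alpha, mu)  # (I - alpha)^{-1} mu
assert np.allclose(empirical_rate, predicted, rtol=0.05)
```

Note that the baseline rates $\mu=(0.5, 1.0)$ are amplified to $\Lambda=(1.25, 1.875)$: more than half of the activity is endogenously generated by the feedback encoded in $\alpha$.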
Again, the ratio of endogeneity versus exogeneity is determined by the matrix $\hat \Psi(0)$, which sets the relation between the endogenous intensity $\Lambda$ and the exogenous intensity $\mu$. \vskip .3cm \noindent {\bf Correlation of Hawkes processes.} Again, as for the VAR model, the cross-correlation matrix $$c(t-t^\prime) dt dt^\prime=\avx{ dX(t) dX^T(t^\prime) } - \avx{ dX(t) } \avx{ dX^T(t^\prime) }$$ can be expressed by Eq.~(\ref{eq:xcorr}) in which $\Sigma$ is the diagonal matrix defined by (see~\cite{Bacry:2012aa}) \begin{equation} \label{sigmahawkes} \Sigma_{ij}=\Lambda_i \delta_{ij}, \end{equation} where $\delta_{ij}$ stands for the Kronecker symbol, equal to one if $i=j$ and zero otherwise. \subsection{Hawkes processes versus VAR processes.} \label{sec:hawkesvar} The previous results indicate that a Hawkes process is very reminiscent of a VAR process formulated in continuous time with non-negative $\Phi(\tau)$, although some important differences need to be emphasized. First, while the $\mu$ in the autoregressive case identify the average increments of a multivariate Wiener process, in the Hawkes case they emerge as the exogenous component of the average intensity. Secondly, in the autoregressive case the covariance of the noise $\Sigma$ controlling the cross-correlations is independent of $\Phi(\tau)$ and $\mu$, while in the Hawkes case it is \emph{endogenously generated} and is thus fixed to the values $\Sigma_{ii}=\Lambda_i$. Finally, the cross-correlation $c(\tau)$ of a Hawkes process is always singular: as we are considering counting processes with jumps of size one, the relation $dX_i(t) =(dX_i(t))^2$ holds. Then, the cross-correlations can be decomposed as $c_{ij}(\tau) = \Lambda_i \delta_{ij}\delta(\tau) + c^{(reg)}_{ij}(\tau)$ where $c^{(reg)}_{ij}(\tau)$ is regular around zero.
For VAR processes, such a singular contribution to the correlations emerges only if $\avx{ d\eta(t) d\eta^T(t^\prime) } - \avx{ d\eta(t)} \avx{ d\eta^T(t^\prime) }$ contains a singular component centered at zero. Let us point out that, with a slight abuse of language, we will refer to $\Lambda$ and $\mu$ respectively as mean intensities and exogenous intensities even in the case of a VAR model. \subsection{Factorizable linear systems} \subsubsection{An isotropic factorizable interaction kernel} \label{iso} As we are interested in studying how heterogeneity in the dynamic behavior of the system may emerge as an effect of the interaction network, we consider models for which the interaction kernel $\Phi(\tau)$ can be factorized as \begin{equation} \label{eq:factasumpt} \Phi(\tau) = \alpha \phi(\tau) \; , \end{equation} where $\alpha$ is a matrix and $\phi(\tau)$ is a scalar function of $\tau$ satisfying (without loss of generality) $$\hat \phi(0) = \int_0^{\infty} d\tau \phi(\tau)= 1.$$ We additionally suppose that $$ \int_0^{\infty} d\tau \, \tau \phi(\tau) < \infty,$$ as we want to focus on the problem of whether long-range memory in correlations can be \emph{endogenously} induced by a system whose interactions have short-range memory. Hence, we are supposing the different components of the system to react homogeneously in time (i.e., with the same speed) to innovations, while we confine the heterogeneity of the system to the interaction strengths, which are specified by the matrix $\alpha$. Moreover, the $\alpha$ matrix is naturally interpreted as a weighted adjacency matrix specifying the overall strength of the interaction between the different components of the system\footnote{Let us point out that a factorization assumption like~(\ref{eq:factasumpt}) is very common in space-time Hawkes models for earthquakes~\cite{ogata99}.}.
We will consider exclusively cases for which the matrix $\alpha$ can be diagonalized, so that we will always be able to write \begin{equation} \label{eq:decomp} \alpha = U \lambda U^{-1} \; , \end{equation} where $\lambda$ is a diagonal matrix of eigenvalues with elements equal to $ \{ \lambda_a\}_{a=1}^N$ and $U$ is a suitable change of basis matrix, with entries denoted by $\{U_{ia}\}_{ia=1}^N$. Note that we are adopting the convention of using the indices $i,j,\dots$ to denote the components of $X(t)$ and $a,b,\dots$ for the eigenvalues of the $\alpha$ matrix. For simplicity, we will further require some supplementary conditions to hold for the system, namely \vskip .1cm \begin{itemize}[resume] \item {\bf (H2) Unitarity Assumption.} The matrix $U$ is assumed to be unitary (i.e. $U^\dagger=U^{-1}$, where $^\dagger$ denotes Hermitian conjugation). \item {\bf (H3) Homogeneity Assumption.} All the components of $\Sigma$ (defined by \eqref{sigmavar} in the case of the VAR model and by \eqref{sigmahawkes} in the case of the Hawkes model) are assumed to be equal to $\bar \Sigma$, and the components of the mean intensity vector are equal to $\bar \Lambda$.\footnote{While in the translationally invariant case discussed in Sec.~\ref{sec:det} the system will fulfill by construction the homogeneity assumption defined above, in the random cases presented in Secs.~\ref{sec:goe} and~\ref{sec:rrg} this assumption will hold on average, allowing the system to enjoy the same properties.} \end{itemize} \noindent \vskip 0.3cm The diagonalization assumption \eqref{eq:decomp} allows us to write the components of the matrix $\hat \Psi(\omega)$ as \begin{eqnarray} \label{eq:decomp_psi} \hat \Psi(\omega) = \hat \phi(\omega)^{-1} U \left( \hat \phi(\omega)^{-1} - \lambda\right)^{-1} U^{-1} \; .
\end{eqnarray} We will often employ this decomposition of $\hat\Psi(\omega)$ in order to link the distribution of eigenvalues of the matrix $\alpha$ with the value of the observables $\Lambda$ and $c(\omega)$. The unitarity {\bf (H2)} and homogeneity {\bf (H3)} assumptions above allow us to disregard the effect on the system of the heterogeneity in the angular components of the interaction matrix, permitting us to focus on the collective effect induced by the isotropic part of the interaction. Even though the global effect of the inhomogeneity of the system is an interesting problem on its own, the simpler case that we consider is a necessary first step in order to understand the collective effects of the large $N$ limit on this type of system. \vskip .3cm In the homogeneous case, it will be useful to introduce the notation \begin{eqnarray} \avcd{\Lambda} &=&\frac 1 N \sum_i \Lambda_i \\ \avcd{c(\tau)} &=& \frac 1 N \sum_i c_{ii}(\tau) \\ \avcm{c(\tau)}&=& \frac 1 {N^2} \sum_{ij} c_{ij}(\tau) \;, \end{eqnarray} as all the information about the vector of average intensities and the lagged cross-correlation matrix is encoded in these averages over components. Under the unitarity {\bf (H2)} and homogeneity {\bf (H3)} assumptions, it is easy to show that Eq.~(\ref{eq:xcorr}) leads to \begin{equation} \label{eq:acorr_factor} \avcd{\hat c(\omega)} = \frac 1 N \sum_i \hat c_{ii}(\omega) = \frac{\bar \Sigma}{N} ||\hat \phi(\omega)||^{-2} \sum_a \frac 1 {||\hat \phi(\omega)^{-1} -\lambda_a ||^2} \; , \end{equation} so that the information about the eigenvectors encoded in $U$ is not required in order to understand the behavior of the average over components of the autocorrelation function. {\bf The behavior of the autocorrelations is completely specified by the function $\phi(t)$ and the spectrum of the matrix $\alpha$}.
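As a quick numerical sanity check of the statement in bold, one can verify that the diagonal average of the correlation matrix depends on $\alpha$ only through its eigenvalues. The matrix form of the spectral correlation used below, $\hat c(\omega) = (\mathbb{I}-\hat\phi(\omega)\alpha)^{-1}\Sigma(\mathbb{I}-\hat\phi(\omega)\alpha)^{-\dagger}$, is an assumption of this sketch (it stands in for Eq.~(\ref{eq:xcorr}) of the text); a real symmetric $\alpha$ is used so that {\bf (H2)} holds with an orthogonal $U$:

```python
import numpy as np

# Sanity check: the average autocorrelation depends on alpha only through its
# eigenvalues. The matrix form c(w) = (I - phi*alpha)^-1 Sigma (I - phi*alpha)^-H
# is an assumption of this sketch (standing in for Eq. (xcorr) of the text).
rng = np.random.default_rng(0)
N, beta, Sigma_bar, omega = 50, 1.0, 2.0, 0.7

A = rng.standard_normal((N, N))
alpha = (A + A.T) / 2.0
alpha *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(alpha)))   # enforce ||alpha|| < 1

phi = 1.0 / (1.0 + 1j * omega / beta)          # exponential kernel, Fourier space
M = np.linalg.inv(np.eye(N) - phi * alpha)
lhs = Sigma_bar / N * np.trace(M @ M.conj().T).real

lam = np.linalg.eigvalsh(alpha)                # alpha symmetric -> orthogonal U (H2)
rhs = Sigma_bar / (N * abs(phi) ** 2) * np.sum(1.0 / np.abs(1.0 / phi - lam) ** 2)

assert np.isclose(lhs, rhs)
```

The two sides agree to machine precision, confirming that the eigenvector information carried by $U$ drops out of the average over components.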
\subsubsection{Case of an exponential factorizable kernel} Finally, we want to introduce a useful benchmark case for the kernel function $\phi(\tau)$, which will be employed in order to characterize the long-time behavior of the system. Specifically, we will often particularize our results to the special case in which the interaction kernel has an exponential shape of the form $$\phi(\tau) = \beta e^{-\beta \tau} \mathds{1}_{\mathbb{R}^+}(\tau),$$ or equivalently $$\hat \phi(\omega) = (1 + i \omega/\beta)^{-1},$$ where $\mathds{1}_{\mathbb{R}^+}(\tau)$ denotes the indicator function on $\mathbb{R}^+$. This particularly simple case will allow us to explore the main qualitative features of the results without losing analytical control over the solution. In this framework, the locations of the poles appearing in the Fourier transforms of the correlation function~(\ref{eq:xcorr}) are trivially dictated by the eigenvalues of the matrix $\alpha$. The poles can be written as \begin{equation} \label{eq:poles} \omega_a = i \beta (1-\lambda_a) \quad \quad \omega_a^\star = -i \beta (1-\lambda_a^\star) \; , \end{equation} where the $\{\lambda_a\}_{a=1}^N$ denote as usual the eigenvalues of $\alpha$. \vskip .3cm \noindent {\bf Relaxation.} In the exponential case, inverting the Fourier transforms appearing in Eq.~(\ref{eq:xcorr}) becomes straightforward, provided one supposes the eigenvalues to be non-degenerate. One obtains in fact \begin{eqnarray} \label{eq:xcorr_real} c_{ij}(\tau) &=& \beta \theta(-\tau) \sum_{abk} \frac{\lambda^\star_a (2-\lambda^\star_a) }{2-\lambda_b -\lambda^\star_a} U^\star_{ia} U^{\star \,-1}_{ak} \Sigma_k U^{-1}_{bk} U_{jb} e^{\beta [1-\lambda^\star_a]\tau} \\ &+& \beta \theta(\phantom{-}\tau) \sum_{abk} \frac{\lambda_b (2-\lambda_b) }{2-\lambda_b -\lambda^\star_a} \nonumber U^\star_{ia} U^{\star \,-1}_{ak} \Sigma_k U^{-1}_{bk} U_{jb} e^{-\beta [1-\lambda_b]\tau} \; .
\end{eqnarray} Under {\bf (H2)} and {\bf (H3)}, the above equation leads to a simplified form of the average autocorrelation, which -- just as Eq.~(\ref{eq:acorr_factor}) above -- is independent of the eigenvectors of $\alpha$: \begin{eqnarray} \label{eq:acorr_factor_real} \avcd{c(\tau)} = \frac 1 N \sum_i c_{ii}(\tau) &=& \frac{ \beta \bar \Sigma}{N} \sum_a \frac{\lambda_a (2-\lambda_a) }{(2-\lambda_a-\lambda_a^\star )} e^{-\beta [1-\lambda_a] |\tau|} \; . \end{eqnarray} Each eigenvalue of $\alpha$ generates a decay mode indexed by $a$, whose associated speed depends upon the distance between the corresponding $\lambda_a$ and 1. In particular, the slowest mode $\lambda_{max} < 1$ controls the behavior of the correlations at large times, fixing their scaling to be $c_{ii}(\tau) \sim \exp (-\beta (1-\lambda_{max})|\tau|)$. The \emph{critical} regime in which the support of the distribution of $\lambda_a$ touches the instability point $\lambda=1$ is of particular interest, and will be considered in specific sections (Secs.~\ref{sec:crit_dim_d},~\ref{sec:crit_dim_1},~\ref{sec:crit_goe} and~\ref{sec:crit_rrg}). \vskip .3cm \noindent {\bf Genericity of the exponential kernel.} The main interest in considering the exponential kernel lies in the fact that it allows one to explore the long-time behavior of \emph{any} short-range kernel. In fact, consider a generic kernel satisfying the \emph{short-range} assumption \begin{equation} \label{eq:short_range_kern} \int_0^\infty d\tau \, \tau \phi(\tau) < \infty \; . \end{equation} This condition corresponds in Fourier space to the differentiability at zero of the function $\hat \phi(\omega)$, implying that it is possible to expand its inverse as \begin{equation} \label{eq:short_range_four} \hat \phi(\omega)^{-1}= 1 + i\omega/\beta + o(\omega) \; , \end{equation} with $\lim_{\omega\to 0} o(\omega)/\omega = 0$.
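The dominance of the slowest mode can be illustrated with a minimal numerical sketch (the spectrum of $\alpha$ is taken real and equally spaced here, an arbitrary choice): the logarithmic slope of $\avcd{c(\tau)}$ at large times converges to $\beta(1-\lambda_{max})$.

```python
import numpy as np

# Check that the slowest mode lambda_max fixes the large-time decay rate
# beta*(1 - lambda_max) of the average autocorrelation (real spectrum assumed).
beta, Sigma_bar = 1.0, 1.0
lam = np.linspace(0.1, 0.95, 20)               # assumed spectrum of alpha

def c_avg(tau):
    """Average autocorrelation for real eigenvalues (exponential kernel)."""
    w = lam * (2 - lam) / (2 * (1 - lam))      # per-mode weights
    return beta * Sigma_bar / lam.size * np.sum(w * np.exp(-beta * (1 - lam) * abs(tau)))

# Effective decay rate measured between tau = 200 and tau = 210
rate = -(np.log(c_avg(210.0)) - np.log(c_avg(200.0))) / 10.0
assert np.isclose(rate, beta * (1 - lam.max()), rtol=1e-2)
```

At $\tau=200$ the subleading modes are suppressed by several orders of magnitude, so the measured rate matches $\beta(1-\lambda_{max})$ to well below one percent.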
By back-transforming Eq.~(\ref{eq:xcorr}) to real space while taking into account this expansion, one finds that the leading order term of the large time expansion of the cross-correlations is given by Eq.~(\ref{eq:xcorr_real}). Intuitively, as the limit of large times $\tau\to\infty$ corresponds to the small $\omega$ regime in Fourier space, this indicates that at large times one can neglect the $o(\omega)$ term in the expansion of the kernel, retaining only $\hat \phi(\omega)^{-1} \approx 1 + i\omega/\beta$, which corresponds in real space to the exponential kernel \begin{equation} \label{eq:exp_kern} \phi(\tau) = \beta e^{-\beta \tau}\mathds{1}_{\mathbb{R}^+}(\tau) \; . \end{equation} \subsection{Methodology for analyzing factorizable linear systems} The next sections will analyze the behavior of factorizable linear systems (as defined in Section \ref{iso}), addressing systematically, as explained in the introduction, the issues of stability, endogeneity and relaxation by focusing on a small set of relevant scalars encoding the collective information about the state of the system. In order to do so, we shall study the behavior of different observables. More precisely: \vskip .1cm \begin{itemize} \item {\bf Stability: The spectral norm of $\alpha$.} Due to Eq.~(\ref{eq:av}), the value of the average intensities $\Lambda_i$ for a fixed value of the input $\mu_i$ is determined by the largest eigenvalue of the matrix $\hat \Psi(0)$, which is precisely equal to $1/(1-||\alpha||)$. In particular, for $||\alpha||=1$ (i.e., when the largest eigenvalue of $\alpha$ is equal to 1), endogenous effects spoil the validity of {\bf (H1)}, compromising the stationarity of the system and inducing a divergence of the averages. \item {\bf Endogeneity: $\avcd{\Lambda}$.} We will be able to express $\avcd{\Lambda}$ as a linear function of $\avcd{\mu}$ thanks to the homogeneity hypothesis {\bf (H3)}, encoding the information about $\Psi(0)$ in the ratio $ \avcd{\Lambda}/\avcd{\mu}$.
Hence, we will be able to interpret $\avcd{\Lambda}/\avcd{\mu}$ as a susceptibility of the system with respect to its driving input, while $\avcd{\Lambda}/\avcd{\mu}-1$ can be thought of as the ratio of the endogenous intensity to the exogenous one. \item {\bf Relaxation: $\avcd{ c(\tau)}$ and $\avcm{c(\tau)}$.} Slow relaxation will be characterized by the behavior at large times of the auto-correlations $\avcd{ c(\tau)}$ and of the cross-correlations $\avcm{c(\tau)}$, distinguishing the cases in which these functions decay exponentially from those in which they develop broad tails. These quantities can be thought of as quantifying the response of the system to noise, as they are proportional to the noise covariance matrix $\Sigma$. \end{itemize} All these quantities will be analyzed in the $N\to\infty$ regime of large dimensionality, in which non-trivial dynamic effects are expected to emerge. We shall focus in particular on the issue of {\bf Criticality}, which may induce long-range correlations (divergence of the $L^1$ norm of the self-correlation function). Accordingly, the scaling in $N$ that we have introduced for the above observables is such that they all attain a finite limit for $N\to\infty$. Nevertheless, specifying how such a limit is approached requires a prescription encoding the dependence upon $N$ of the interaction kernels $\Phi(\tau)$, of the components $\mu$ and, if needed, of the parameter $\Sigma$. In the next section we will use a deterministic prescription in order to scale the system with $N$, and we will explore the large $N$ regime of a regular lattice of finite dimension (i.e., $\alpha$ is invariant under translation and the coefficients $\mu$ and $\Sigma$ are constants). Only in Secs.~\ref{sec:goe} and~\ref{sec:rrg} will we study the case in which the coefficients defining the model are free to fluctuate within two given statistical ensembles.
\section{The deterministic case} \label{sec:det} \subsection{A translationally-invariant network} In this section we will discuss the case of a translationally-invariant network of interactions for the process $X(t)$, whose components are arranged on the vertices of a regular lattice. This prescription is used in order to explore the effect of the interactions in a completely homogeneous network, disregarding the presence of irregularities in the system. Moreover, this framework allows us to discuss the effect of varying the connectivity of the system, interpolating from the complete network (corresponding to the infinite dimensional limit, in which the interactions are broadly diluted through the system) to the low-dimensional case (in which strong fluctuations are induced as an effect of the topology), with the main advantage of controlling analytically the behavior of the system throughout the crossover. In order to define the notion of spatial dimension, we preliminarily need to assume that a notion of geometry emerges naturally in the system, as its structure needs to be invariant under translations along a set of $D$ directions. In particular, we consider the case in which each of the components is located on the vertices of a hypercube of dimension $D>0$ (see Fig.~\ref{fig:2d}). \begin{figure} \centering \includegraphics[width=2in]{fig/2d} \caption{Sketch of a translationally-invariant network for $D=2$. In this case the components of $X(t)$ are arranged on the vertices of a square lattice of length $L$.} \label{fig:2d} \end{figure} We will further assume each side of the hypercube to have length $L$, so that we will be able to label the components by using a vector index $\mathbf{i} \in \{0,\dots , L-1\}^D$, and to identify the size of the system $N$ with the volume of the hypercube $N=L^D$. \footnote{These models can be thought of as discretized versions of continuous-space ones such as the one employed in~\cite{Ogata:1998aa} in order to model the occurrence of earthquakes.
In our language the continuous-space limit is recovered in the limit $N\to\infty$. Moreover, the results shown in Sec.~\ref{sec:crit_dim_d} allow us to explore the regime in which long-range behavior is endogenously induced from the structure of the interactions, rather than enforced by construction in the parametric specification of the interaction model as it is assumed in~\cite{Ogata:1998aa}. } We will finally assume $\mu_{\mathbf{i}}$ and $\Sigma_{\mathbf{i}\veci}$ to be constants, i.e., $$\mu_{\mathbf{i}} = \bar \mu ~~{\mbox {and}}~~ \Sigma_{\mathbf{i}\veci} = \bar \Sigma,~~~\forall\,\mathbf{i} \in [0,L-1]^D.$$ The periodicity condition will be enforced through the assumption $$ \alpha_{\mathbf{i} \mathbf{j}}=\alpha_{\mathbf{i} - \mathbf{j}}~~~\forall \, \mathbf{i},\mathbf{j} \in [0,L-1]^D.$$ Let us point out that the diagonalization assumption \eqref{eq:decomp} as well as the Unitarity assumption {\bf (H2)} and the Homogeneity assumption {\bf (H3)} are all satisfied. Indeed, straightforward computations lead to \begin{eqnarray} \label{eq:periodic_decomp} U_{\mathbf{i} \mathbf{a}} &=& \frac{1}{\sqrt{N}} e^{\frac{2\pi i}{L} \mathbf{i} \cdot \mathbf{a} } \; ,\\ \lambda_{\mathbf{a}} &=& \sum_{\mathbf{i}} e^{\frac{2\pi i}{L} \mathbf{i} \cdot \mathbf{a} } \alpha_{\mathbf{i}}\;, \end{eqnarray} where $\mathbf{a} \in \{0,\dots, L-1\}^D$. \subsection{Stability of translationally invariant systems} The condition of stability of the system (i.e.\ assumption {\bf (H1)}) is equivalent to the condition $$||\alpha|| = \max_\mathbf{a} | \lambda_\mathbf{a} | <1,$$ which we shall assume to be fulfilled both in the VAR and in the Hawkes case. In the latter case, or in the VAR model when all the entries of $\alpha$ are positive, this condition simplifies to $\lambda_{\mathbf{0}}< 1$. Let us point out that, in this case, the stability of the system is controlled by the parameter $||\alpha||=\lambda_{\mathbf{0}}$, which measures the degree of self-interaction of the system.
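Concretely, the spectrum of a translationally-invariant system can be obtained by a fast Fourier transform of the kernel; a minimal $D=1$ sketch follows (note that \texttt{numpy.fft.fft} uses the conjugate phase convention with respect to Eq.~(\ref{eq:periodic_decomp}), which yields the same set of eigenvalues for a real kernel).

```python
import numpy as np

# For a translationally-invariant (circulant) interaction matrix, the
# eigenvalues are the discrete Fourier transform of the kernel alpha_i
# (Eq. (periodic_decomp)); minimal D = 1 sketch with a random kernel.
rng = np.random.default_rng(1)
L = 64
row = rng.random(L)
row *= 0.8 / row.sum()                         # non-negative kernel, lambda_0 = 0.8

alpha = np.array([[row[(i - j) % L] for j in range(L)] for i in range(L)])
lam_fft = np.fft.fft(row)

# lambda_0 = sum_i alpha_i is the stability parameter for non-negative kernels
assert np.isclose(lam_fft[0].real, 0.8)
assert np.isclose(np.abs(lam_fft).max(), 0.8)  # ||alpha|| = lambda_0 < 1: stable

# the DFT values coincide with the eigenvalues of the full circulant matrix
lam_direct = np.linalg.eigvals(alpha)
assert np.allclose(np.sort(np.abs(lam_fft)), np.sort(np.abs(lam_direct)))
```

This also makes the stability criterion operational: for a non-negative kernel one only needs to check $\sum_\mathbf{i}\alpha_\mathbf{i}<1$.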
In particular, a strongly susceptible behavior is detected in the regime of $\lambda_{\mathbf{0}}$ close to 1, where the ratio $\avcd{\Lambda}/\avcd{\mu} -1 $ of endogenously-generated versus exogenously-generated intensity becomes extremely large (see~\eqref{eq:mean_lambda_lattice}). \subsection{Endogeneity and Relaxation of translationally invariant systems} One can easily prove that \begin{eqnarray} \label{eq:mean_lambda_lattice} \bar \Lambda &=& \frac {\bar \mu} {1 - \lambda_{\mathbf{0}}} \\ \avcd{\hat c(\omega)} &=& \frac {\bar \Sigma} N ||\hat \phi(\omega)||^{-2} \sum_{\mathbf{a}} \frac 1 {||\hat \phi(\omega)^{-1} - \lambda_{\mathbf{a}}||^2 } \\ \avcm{ \hat c(\omega)} &=& \frac {\bar \Sigma} N ||\hat \phi(\omega)||^{-2}\frac 1 {||\hat \phi(\omega)^{-1} - \lambda_{\mathbf{0}}||^2 } \; . \end{eqnarray} Also notice that the expressions of $\bar \Lambda$ and $\avcm{ \hat c(\omega)}$ depend just upon the single eigenvalue $\lambda_\mathbf{0}$. This is because the corresponding eigenvector is $U_{\cdot\, \mathbf{0} }=N^{-1/2}(1,\dots,1)$, so that by orthogonality one has $\sum_\mathbf{i} U_{\mathbf{i} \mathbf{a}}=\sqrt{N} \delta_{\mathbf{a} \mathbf{0}}$. An identical phenomenon will occur in Sec.~\ref{sec:rrg}, due to the absence of fluctuations in the row sums of the $\alpha$ matrix.\\ The autocorrelation function is more conveniently analyzed by rewriting it as \begin{equation} \label{eq:ac_spectrum} \avcd{ \hat c(\omega)} = \bar \Sigma\, ||\hat \phi(\omega)||^{-2} \int_{-\infty}^{+\infty} d\lambda\, \frac { \rho(\lambda)} {||\hat \phi(\omega)^{-1} - \lambda||^2}\; , \end{equation} after introducing the \emph{spectral density} \begin{equation} \label{eq:spectrum} \rho(\lambda) = \frac 1 N \sum_{\mathbf{a}} \delta(\lambda - \lambda_{\mathbf{a}}) \; .
\end{equation} The advantage of introducing the spectral density is that $\rho(\lambda)$ is expected to have a well-behaved limit for $L\to\infty$, which we can exploit in order to find the shape of the correlations in the large size limit. In particular, if the maximum of the support of the spectrum is $\lambda_{max}<1$, the auto-correlations decay exponentially at large times, while for $\lambda_{max}=1$, the behavior of the spectrum close to $\lambda=1$ may lead to a non-exponential behavior of the correlations in the $L\to\infty$ limit. These scenarios are illustrated by means of some examples in the next section. \subsection{Criticality for a non-negative $\alpha$} \label{sec:crit_dim_d} We first want to investigate the behavior of the model in the vicinity of $\lambda_{max} = 1$ in the case in which $\alpha_\mathbf{i}$ is non-negative for all $\mathbf{i}$. Eq.~(\ref{eq:mean_lambda_lattice}) indicates that, for $\lambda_{max}\to 1$, the average intensity $\avcd{\Lambda}$ diverges, and the ratio of endogenous-to-exogenous events explodes. This type of divergence will be common to all the critical cases which we will be investigating, and can be reabsorbed into a suitable definition of $\avcd{\mu}$, similar to what is done in~\cite{Bremaud:2001aa}\footnote{Unlike in~\cite{Bremaud:2001aa}, where it is proven that the existence of a well-behaved $\lambda_{max}\to 1$ limit for the correlation in one dimension requires the kernel $\phi(\tau)$ to be long-ranged, we find that in the multidimensional setting even a short-ranged $\phi(\tau)$ may lead to a well-behaved limit for the correlations. Intuitively, for a fixed value of $\lambda_{max}$, a more densely wired system can redistribute potentially dangerous fluctuations among the components of the system, thus avoiding the divergence of $c_{ii}(\tau)$.}. Physically, this corresponds to the fact that a critical system operates in a regime where small input signals are translated into large outputs.
Interestingly, the average autocorrelation $\avcd{c(\tau)}$ can have a finite limit even though the average intensity does not. Calculating it requires determining the behavior of the spectrum $\lambda_{\mathbf{a}}$ close to the maximum eigenvalue, which is equal to $||\alpha||=\lambda_{\mathbf{0}}$. Then we can set $\mathbf{k}=2\pi \mathbf{a}/L$, and calculate the limit \begin{equation} \lambda(\mathbf{k})=\lim_{L\to\infty}\lambda_{\frac{\mathbf{k} L}{2\pi}} \;, \end{equation} which is maximal for $\mathbf{k}=\mathbf{0}$, so that the system is stable for $\lambda_{max} = \lambda(\mathbf{0}) < 1$. \subsubsection{Case of a parabolic spectrum} \label{sec:parabolic} If the function $\lambda(\mathbf{k})$ is twice differentiable around zero, then its gradient vanishes at $\mathbf{k}=\mathbf{0}$ and its Hessian is generically negative definite. In this case it is easy to estimate (see App.~\ref{app:fin_dim}) the $L\to\infty$ limit of the density $\rho(\lambda)$ close to the point $\lambda=\lambda(\mathbf{0})$, which reads \begin{equation} \label{eq:density_multidim} \rho(\lambda(\mathbf{0})-\epsilon) \approx \det (-H[\lambda(\mathbf{0})])^{-1/2} \frac{\epsilon^{D/2-1} }{\Gamma(D/2)}\left( \frac 1 {2\pi} \right)^{D/2} \;, \end{equation} where $H[\lambda(\mathbf{0})]$ is the Hessian of $\lambda(\mathbf{k})$ evaluated at $\mathbf{k}=\mathbf{0}$. Eq.~(\ref{eq:acorr_factor_real}) allows us to relate the exponent of $\epsilon$ in the above expansion to the limiting behavior of the autocorrelations when $\lambda(\mathbf{0})$ approaches 1. In particular, the autocorrelations diverge approaching the instability point for $D=1,2$, while for $D>2$ one obtains \begin{equation} \label{eq:acorr_multidim} \avcd{ c(\tau)} \sim \tau^{1-D/2} \;. \end{equation} This implies that \begin{itemize} \item For $D=3,4$ the process can develop long range memory when $\lambda(\mathbf{0})$ tends to the instability point $\lambda(\mathbf{0})=1$.
\item For $D\to\infty$ one finds instead that the system loses its power-law behavior, as the tails of the correlations become increasingly steeper. \end{itemize} Notice that the above result does not depend on the specific form of the interaction that we have chosen, but simply emerges from the non-negativity of $\alpha_\mathbf{i}$ and the differentiability of $\lambda(\mathbf{k})$ around zero. \vskip .3cm \noindent {\bf An example: next-neighbor interaction.} As an example, one can study the behavior of a $D$-dimensional system with next-neighbor interactions, defined through $\alpha_{\mathbf{i}}={\bar \alpha} (2D)^{-1} \sum_{d=1}^D \left( \delta_{\mathbf{i} - \mathbf{e}_d}+\delta_{\mathbf{i} + \mathbf{e}_d}\right)$, with $\mathbf{e}_d$ denoting the $d$-th element of the canonical basis of $\mathbb{R}^D$. The spectrum of such a system is given by \begin{equation} \label{eq:spectrum_multidim} \lambda(\mathbf{k}) = \frac{{\bar \alpha}}{D} \sum_{d=1}^D \cos\left( k_d \right) \; . \end{equation} For this functional form of $\lambda(\mathbf{k})$ one can explicitly calculate the limiting value of the spectral density $\rho(\lambda)$ for $N\to\infty$ (App.~\ref{app:fin_dim}), which reads \begin{equation} \label{eq:spec} \rho(\lambda) = \frac D {{\bar \alpha}} \int_{-\infty}^{+\infty} \frac{dz}{2\pi} \, e^{iz\lambda D/{\bar \alpha}} J_0^D(z), \end{equation} where $J_n(z)$ is the Bessel function of the first kind of order $n$ evaluated at $z$. One can verify by expanding $\rho(\lambda)$ close to the point $\lambda={\bar \alpha} = 1$ that its limiting behavior is described by Eq.~(\ref{eq:density_multidim}). Indeed, the procedure that we have given simply requires the calculation of $\lambda(\mathbf{k})$ around $\mathbf{k} = \mathbf{0}$, and hence can be used in order to tackle a larger class of problems.
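The edge behavior~(\ref{eq:density_multidim}) can be probed numerically without any fitting: for the $D=3$ next-neighbor spectrum, the integrated density of eigenvalues within $\epsilon$ of the top of the spectrum should scale as $\epsilon^{D/2}$, so a factor of 4 in $\epsilon$ should multiply the counts by $4^{3/2}=8$. A minimal sketch (the grid size and the values of $\epsilon$ are arbitrary choices):

```python
import numpy as np

# Counting check of rho(lambda(0)-eps) ~ eps^(D/2-1) for the D = 3
# next-neighbor system: the number of modes within eps of the top of the
# spectrum grows as the integrated density N(eps) ~ eps^(D/2).
abar, L = 1.0, 100
k = 2 * np.pi * np.arange(L) / L
lam = abar / 3 * (np.cos(k)[:, None, None]
                  + np.cos(k)[None, :, None]
                  + np.cos(k)[None, None, :])  # Eq. (spectrum_multidim), D = 3

assert np.isclose(lam.max(), abar)             # maximum of lambda(k) at k = 0

eps = abar - lam.ravel()
n1, n2 = np.sum(eps < 0.02), np.sum(eps < 0.08)
# N(eps) ~ eps^(3/2): a factor 4 in eps gives a factor 4^(3/2) = 8 in counts
assert abs(n2 / n1 - 8.0) < 1.0
```

The residual deviation from the factor 8 comes from the anharmonic ($k^4$) corrections to the dispersion, which vanish as the counting window shrinks.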
The spectrum $\rho(\lambda)$ for a system with next-neighbor interactions is represented in Fig.~\ref{fig:correl_lattice} together with the average over components of the autocorrelation function $\avcd{c(\tau)}$. \begin{figure} \centering \includegraphics{fig/correl_lattice.pdf} \caption{\emph{(Left panel)} Spectrum $\rho(\lambda)$ of a translationally invariant system with next-neighbor interactions for an interaction strength ${\bar \alpha}=1$ and different values of the dimension $D$ (see the example of Section \ref{sec:parabolic}). While for $D=1$ the spectrum diverges at the point $\lambda={\bar \alpha}$, for $D=2$ it tends to a constant, and for $D\geq 3$ it vanishes. The thick lines show the exact spectrum~(\ref{eq:spec}), while the soft dashed lines indicate the asymptotic predictions on the basis of Eq.~(\ref{eq:density_multidim}). \emph{(Right panel)} Average over components of the autocorrelation function for the same system at the critical value ${\bar \alpha}=1$. Since for $D\leq 2$ the correlation has no finite limit for ${\bar \alpha}\to 1$, we have represented its limiting value only for $D=3,4,5$. While the heavy curves refer to the predictions of Eq.~(\ref{eq:spec}), the soft dashed lines plotted for comparison are the asymptotic results of Eq.~(\ref{eq:density_multidim}), predicting a decay of the type $\tau^{1-D/2}$.} \label{fig:correl_lattice} \end{figure} We remark that the above results are valid not only in the regime in which the large size limit $N\to\infty$ is taken before the ${\bar \alpha}\to 1$ one, but also when ${\bar \alpha}$ tends to the critical value ${\bar \alpha} = 1$ more slowly than ${\bar \alpha} = 1-K/N$ (see App.~\ref{app:fin_dim}). This scaling induces a maximum decay time for the correlations, as it bounds the characteristic times of the various decay modes to be smaller than $N/(K \beta)$.
Then, we expect the range of times over which the power-law decay of correlations is observed in a linear system with short range interactions to be at most of the order of the system size $N$. \subsubsection{Case of a non-parabolic spectrum ($D=1$)} \label{sec:crit_dim_1} We want to stress here that a non-trivial behavior may also emerge as a consequence of the non-analyticity of the spectrum around its maximum. In fact, in the preceding section we argued that if the spectrum of a one-dimensional system $\lambda(\mathbf{k})$ is twice-differentiable, the $L\to\infty$, $\bar\alpha\to 1$ limit of the autocorrelation function is infinite (where we recall that, in order not to obtain trivial results, we are taking the $L\to\infty$ limit before the other ones). Suppose instead that the one-dimensional interaction matrix has tails proportional to $\alpha_\mathbf{i} \sim |\mathbf{i}|^{-1-\gamma}$, as for example in the case \begin{equation} \label{eq:long_range_kern} \alpha_\mathbf{i} = \bar \alpha \left( \frac {(1+|\mathbf{i}| \bmod L)^{-\gamma-1} + (1+(L-|\mathbf{i}|) \bmod L)^{-\gamma-1} }{2 \zeta(1+\gamma)} \right)\; , \end{equation} where $\zeta(1+\gamma)$ is the Riemann zeta function evaluated at $1+\gamma$, for $\gamma>0$. Then the large $L$ limit of the spectrum can be calculated explicitly, and reads \begin{equation} \label{eq:spectrum_long_range_kernel} \lambda(\mathbf{k}) = \frac{\bar \alpha}{2\zeta(1+\gamma)} \left( e^{-ik}\mathcal{P}[1+\gamma,e^{ik}] + e^{ik}\mathcal{P}[1+\gamma,e^{-ik}] \right) \; , \end{equation} where $\mathcal{P}[1+\gamma,z] $ is the polylogarithm of order $1+\gamma$ evaluated at $z$. Its maximum $\lambda(\mathbf{0})$ is equal to $\bar\alpha$, but its first derivative at $\mathbf{k}=0$ is not defined for $0<\gamma<1$.
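A short numerical check of this mechanism (the truncated sum below stands in for $\zeta(1+\gamma)$, and the lattice size is an arbitrary choice): the periodized power-law kernel indeed gives $\lambda(\mathbf{0})\simeq\bar\alpha$, and the exponent $\gamma$ of the spectral gap can be read off from the ratio of the gaps at $k$ and $2k$.

```python
import numpy as np

# 1D kernel with power-law tails alpha_i ~ |i|^(-1-gamma), symmetrically
# periodized: check that lambda(0) -> alpha_bar and that the spectral gap
# behaves as lambda(0) - lambda(k) ~ |k|^gamma for 0 < gamma < 1.
L, gamma, abar = 4096, 0.7, 0.9
zeta_val = np.sum(np.arange(1, 2_000_001, dtype=float) ** (-1 - gamma))  # ~ zeta(1.7)

i = np.arange(L)
alpha = abar * ((1.0 + i) ** (-1 - gamma)
                + (1.0 + (L - i) % L) ** (-1 - gamma)) / (2 * zeta_val)

lam = np.fft.fft(alpha).real                   # spectrum of the circulant matrix
assert np.isclose(lam[0], abar, rtol=1e-2)     # lambda(0) = sum_i alpha_i -> abar
assert lam.argmax() == 0                       # maximum attained at k = 0

gap = lam[0] - lam
gamma_est = np.log2(gap[8] / gap[4])           # exponent from gaps at k and 2k
assert abs(gamma_est - gamma) < 0.15
```

The estimate of $\gamma$ is biased by finite-size and analytic ($k^2$) corrections, which shrink as the lattice grows and the probed $k$ decreases.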
The analysis of the behavior of the spectrum in the vicinity of $\mathbf{k}=0$ in that case reveals that the leading term in the expansion of $\lambda(\mathbf{k})$ around $\mathbf{0}$ is proportional to $||\mathbf{k}||^\gamma$, and thus in the limit $\bar\alpha\to 1$ induces a long-time behavior of the autocorrelations of the type \begin{equation} \label{eq:1} \avcd{c(\tau)} \sim \tau^{-(1-\gamma)/\gamma} \;. \end{equation} Intuitively, by diluting the interactions among a larger number of components, it is possible to tame the strong fluctuations arising in one-dimensional systems, obtaining a finite limit for the correlations. This also implies that long-range correlations are present for $1>\gamma>1/2$, when the dilution is less severe. \section{The stochastic case} While the definition of a periodic system such as the one analyzed above requires the existence of a notion of geometry in the space of the coordinates, we now want to focus our analysis on the disordered case in which no particular geometry is present in the system. This is typically a more realistic scenario for complex systems in which the interactions are not thought to be organized according to a peculiar spatial structure. In order to account for this lack of regularity, we model the parameters of the system as random variables extracted from a given \emph{statistical ensemble}. In this type of framework, characterizing the behavior of a linear model requires understanding how the ensemble fluctuations in the parameters defining the model are inherited by the intensities $\Lambda$ and the cross-correlations $c(\tau)$. The first type of ensemble that we are going to consider is suitable for modeling disordered realizations of a VAR model, while it cannot be used to model Hawkes processes, as it assigns negative weights to the entries of the interaction matrix $\alpha$ with finite probability. Indeed, many results can be proved rigorously in this framework.
Moreover, many of the following results can be used in order to gain an intuitive grasp of more general scenarios, in which an analytical solution is not necessarily available. \subsection{The Gaussian Orthogonal Ensemble} \label{sec:goe} \subsubsection{Definition} We first consider a statistical ensemble in which the $\alpha$ matrices are drawn from the Gaussian Orthogonal Ensemble (GOE), each $\alpha$ being drawn according to the weight \begin{equation} \label{eq:GOE} P_N(\alpha) \propto \exp \left( -\frac{N}{4\sigma_\alpha^2} {\textrm{tr}}\left[ \left(\alpha - \mathbb{J} \frac {\bar \alpha} N\right)\left(\alpha^T - \mathbb{J} \frac {\bar \alpha} N \right)\right] \right) \; , \end{equation} where $\mathbb{J}$ is the $N$-dimensional matrix with all components equal to one. This choice implies \begin{eqnarray} \av{\mal} &=& \frac{{\bar \alpha}}{N} \\ \var{\mal} &=& \left\{ \begin{array}{ccc} 2 \sigma_\alpha^2 /N &\textrm{if} & i = j \\ &&\\ \sigma_\alpha^2 / N &\textrm{if} & i \neq j \end{array} \right. \end{eqnarray} where we are using the symbol $\av{\dots}$ in order to denote averages with respect to the measure defined by $P_N(\alpha)$ and $\var{\dots}$ in order to denote the variances with respect to that same measure. The factors of $N$ appearing in the definition of the ensemble have been chosen in order for both the mean and the fluctuations of each of the $N$ quantities $\{ \sum_{j=1}^N \mal \}_{i=1}^N$ not to depend explicitly on $N$. Intuitively, this implies that if ${\bar \alpha}$ and $\sigma_\alpha^2$ are finite in the large $N$ limit, then the interaction strength on each of the $N$ components is also finite.
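A minimal sampling sketch for this ensemble (the construction as a symmetrized Gaussian matrix plus a rank-one mean shift is one convenient realization of the weight above):

```python
import numpy as np

# Sampling sketch for the (shifted) GOE weight: entries have mean alpha_bar/N,
# variance sigma^2/N off the diagonal and 2*sigma^2/N on the diagonal.
rng = np.random.default_rng(2)
N, alpha_bar, sigma = 400, 0.5, 1.0

G = rng.standard_normal((N, N)) * sigma / np.sqrt(2 * N)
alpha = alpha_bar / N + G + G.T                # rank-one mean shift + GOE noise

off = alpha[~np.eye(N, dtype=bool)]
assert np.isclose(off.mean(), alpha_bar / N, atol=1e-3)
assert np.isclose(off.var(), sigma**2 / N, rtol=0.05)
assert np.isclose(np.diag(alpha).var(), 2 * sigma**2 / N, rtol=0.3)
```

The empirical moments match the stated means and variances within sampling error, including the factor-of-two enhancement on the diagonal.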
Moreover, it is possible to prove (see~\cite{Mehta:2004aa}) that the matrix $\alpha$ almost surely admits the decomposition \begin{equation} \label{eq:decomp_goe} \alpha = \mathbb{J} \frac{{\bar \alpha}}{N} + O \lambda O^T \; , \end{equation} with $OO^T=O^TO=\mathbb{I}$, which will be more useful than the decomposition~(\ref{eq:decomp}) formerly introduced, due to the better symmetry properties of the $O$ matrices. As usual, $\lambda$ is a diagonal matrix of eigenvalues $\lambda = \{ \delta_{ab} \lambda_a \}_{ab=1}^N$, whose joint probability can be written as \begin{equation} p(\lambda_1,\dots , \lambda_N) \propto e^{-\frac N {4\sigma_\alpha^2} \sum_a \lambda_a^2} \prod_{a< b} |\lambda_a - \lambda_b| \; , \end{equation} and $O$ is an orthogonal matrix sampled, independently of $\lambda$, according to the Haar measure on the orthogonal group~\cite{Mehta:2004aa}. Once the statistical ensemble for the interaction matrix $\alpha$ is fixed, it is necessary to prescribe a rule for the statistics of the vector quantities $\mu_i$, $\Lambda_i$ and $\Sigma_{ii}$ appearing in the model. In particular, we will assume the $\mu_i$ to be independent and identically distributed variables, with mean $\av{\mu_i}=\bar \mu$ and variance $\var{\mu_i}=\sigma_\mu^2$. The parameters $\Lambda_i$ will be indirectly fixed by the relation $\Lambda = \hat \Psi(0) \mu$. Finally, we additionally need to specify the statistics for the $\Sigma_{ii}$. We will take them to be independent and identically distributed with mean $\av{\Sigma_{ii}}=\bar \Sigma$ and variance $\var{\Sigma_{ii}}=\sigma_\Sigma^2$ in the VAR case, while in the Hawkes model the values of $\Sigma$ will be indirectly fixed by the relation $\Sigma_{ii}=\Lambda_i$.
\subsubsection{Stability in the GOE ensemble} The statistical ensemble defined above assigns strictly positive probability to $\alpha$ matrices associated with unstable systems (i.e., with finite probability it can be that $||\alpha|| >1$), spoiling the assumption \textbf{(H1)}. In order to focus our analysis on the stable realizations sampled from this ensemble, it is necessary to restrict the expectation $\av{\dots}$ to the set of matrices for which the largest eigenvalue is smaller than one. In particular, it will be useful to consider the probability measure $\overline{P_N}(\alpha)$, defined as: \begin{equation} \label{eq:stable_cond} \overline{P_N} (\alpha) = P_N\Big(\alpha \, \Big|\, ||\alpha|| < 1\Big) = \frac{P_N(\alpha)}{P_N( || \alpha || < 1)} \mathds{1}_{[0,1)}(|| \alpha ||) \; . \end{equation} We denote averages taken with respect to this measure by $\avs{\dots}$. As we are interested in studying the stability of these linear models in the regime of large $N$, we will characterize the large $N$ behavior of $P_N( || \alpha || < 1)$, verifying whether it tends to a non-vanishing constant $P_\infty( || \alpha || < 1)$ in the limit of large $N$. Indeed, the problem of stability in the GOE can be solved exhaustively by exploiting the results of Refs.~\cite{Furedi:1981aa,Feral:2007aa,Baik:2005aa} (see also Ref.~\cite{Arous:2011aa} for a review), where it is shown that the behavior of the largest eigenvalue of the matrix $\alpha$ is dictated by the ratio between ${\bar \alpha}$ and $\sigma_\alpha$. \begin{itemize} \item For ${\bar \alpha} / \sigma_\alpha > 1$, the largest eigenvalue $\lambda_{max}$ is a Gaussian variable of mean ${\bar \alpha} + \sigma_\alpha^2/{\bar \alpha}$ and variance $\sigma_\alpha^2 / N$. \item If ${\bar \alpha} / \sigma_\alpha < 1$, the distribution of the rescaled random variable $( \lambda_{max}/\sigma_\alpha - 2 ) N^{2/3} $ converges to the Tracy-Widom distribution \cite{Tracy:1994aa,Tracy:2000aa}.
\end{itemize} This implies that, if ${\bar \alpha} / \sigma_\alpha > 1$ and $1-{\bar \alpha} - \sigma_\alpha^2/{\bar \alpha} \gg N^{-1/2} $, then drawing an unstable sample matrix $\alpha$ becomes a large deviation event, and $P_\infty(||\alpha || < 1) = 1$. In the opposite case, if ${\bar \alpha} / \sigma_\alpha < 1$, it is enough that $1-2\sigma_\alpha \gg N^{-2/3}$ in order to have $P_\infty(||\alpha || < 1) = 1$ (for an exhaustive discussion accounting for this kind of large deviations see Refs.~\cite{Dean:2008aa,Dean:2006aa}). In the special case in which the maximum eigenvalue is close to the instability point ($\lambda_{max}=1+O(N^{-1/2})$ in the case of ${\bar \alpha}/\sigma_\alpha$ large, or $\lambda_{max}=1+O(N^{-2/3})$ for ${\bar \alpha}/\sigma_\alpha$ small), $P_N(||\alpha || < 1)$ tends to a non-vanishing constant depending upon the precise values of ${\bar \alpha}$ and $\sigma_\alpha$. This behavior is summarized in Fig.~\ref{fig:phase_diagram_goe}, where we show the shape of the stability region in the $({\bar \alpha},\sigma_\alpha^2)$ plane. Notice that this type of transition is precisely the one discovered in Ref.~\cite{May:1972aa}, which was later related to the onset of a third order phase transition between the \emph{pulled} and \emph{pushed} phases of a Coulomb gas~\cite{Majumdar:2014aa}.\\ \begin{figure} \centering \includegraphics{fig/phase_diagram_goe} \caption{Phase space for the VAR model as a function of the parameters defining the statistical ensemble (the GOE ensemble) for the matrix $\alpha$ (see Section \ref{sec:goe}). A region of stability (in which $P_\infty(||\alpha || < 1)=1$) is separated from an unstable region (where $P_\infty(||\alpha || < 1)=0$) by a critical line on which $P_\infty(||\alpha || < 1)$ is finite.
While for ${\bar \alpha} > \sigma_\alpha$ the maximum eigenvalue is isolated, along the ${\bar \alpha} < \sigma_\alpha$ portion of the critical line (in bold) the maximum eigenvalue corresponds to the edge of the support for the density of eigenvalues.} \label{fig:phase_diagram_goe} \end{figure} Additionally, an important qualitative difference emerges between the phases ${\bar \alpha} < \sigma_\alpha$ and ${\bar \alpha} > \sigma_\alpha$: while in the former case at large $N$ the largest eigenvalue coincides with the edge of the support of the spectral density, in the latter there is a gap between the support of the eigenvalue distribution and the largest eigenvalue (see Sec.~\ref{sec:crit_goe} and Fig.~\ref{fig:spectrum_goe}). We will see that this difference plays a central role in determining the limiting behavior of correlations in the critical case. \subsubsection{Endogeneity and Relaxation in the GOE ensemble} \label{sec:obs_goe} Having established the stability condition for the matrices in the GOE, we state our results for the averages of $\avcd{\Lambda}$, $\avcd{\hat c(\omega)}$ and $\avcm{\hat c(\omega)}$. The systematic procedure used to obtain these results is illustrated in App.~\ref{app:obs}. 
We get: \begin{eqnarray} \avs{\avcd{\Lambda}} \label{eq:mean_lambda_goe} &=& \frac{\bar \mu}{{\bar \alpha}} \, \left( \avs{\gf{1} ({\bar \alpha},\{ (1-\lambda_a)^{-1}\})}-1 \right) \\ \avs{\avcd{ \hat c(\omega)} } &=& {\bar \alpha}^2 \frac{\bar \Sigma}{2N} ||\hat\phi(\omega)||^{-2} \partial_{x^{(3)}}^2 \avs{\gf{3} (({\bar \alpha},{\bar \alpha},0),\{ z_a^\star(\omega)\},\{ z_a(\omega)\},\{ ||z_a(\omega) ||^2\})} \nonumber \\ &+& {\bar \alpha} \frac{\bar \Sigma}{N} ||\hat\phi(\omega)||^{-2} \partial_{x^{(2)}} \avs{\gf{2}(({\bar \alpha},0),\{ z_a^\star(\omega)\},\{ z_a^\star(\omega) ||z_a(\omega) ||^2 \})} \nonumber \\ &+& {\bar \alpha} \frac{\bar \Sigma}{N} ||\hat\phi(\omega)||^{-2} \partial_{x^{(2)}} \avs{\gf{2} (({\bar \alpha},0),\{ z_a(\omega)\},\{ z_a(\omega) ||z_a(\omega) ||^2\})} \nonumber \\ &+& \frac{\bar \Sigma}{N} ||\hat\phi(\omega)||^{-2} \avs{\sum_a \frac{1}{|| \hat \phi^{-1}(\omega) -\lambda_a||^2}} \label{eq:mean_acorr_goe} \\ \avs{\avcm{ \hat c(\omega)} } &=& \frac{\bar \Sigma}{N} ||\hat\phi(\omega)||^{-2} \partial_{x^{(3)}} \avs{\gf{3} (({\bar \alpha},{\bar \alpha},0),\{ z_a^\star(\omega)\},\{ z_a(\omega)\},\{ ||z_a(\omega) ||^2\})}\nonumber \; , \\ & & \label{eq:mean_xcorr_goe} \end{eqnarray} with $z_a(\omega) = (\hat \phi^{-1}(\omega) -\lambda_a)^{-1}$. 
The generating function $\gf{p}(\vec x,\{z_a^{(k)}\}_{a=1}^N) $ appearing in the above formulae is defined in terms of a $p$-dimensional vector $\vec x =(x^{(1)},\dots , x^{(p)})$ and a matrix $\{ z_a^{(k)}\}_{(a,k) \in (1,\dots , N) \times (1,\dots p)}$, and can be written as: \begin{eqnarray} \gf{p}(\vec x,\{z_a^{(k)}\}_{a=1}^N) &=& \frac{\Gamma(N/2)}{\Gamma(N/2-p)} \int_0^1 dt_1 \dots dt_p \prod_{k=1}^p t_k^{k-1} (1-t_p)^{N/2-p-1} \nonumber \\ &\times & \prod_a \left(1 - \sum_{k=1}^p h^{(k)}(\vec t) x^{(k)} z_a^{(k)} \right)^{-1/2} \;, \end{eqnarray} together with a set of auxiliary functions $h^{(k)}(\vec t)$ given by \begin{eqnarray} h^{(1)} &=& t_1 \dots t_p \\ h^{(2)} &=& (1-t_1) t_2 \dots t_p \\ h^{(3)} &=& (1-t_2) t_3 \dots t_p \\ &\dots&\\ h^{(p)} &=& (1-t_{p-1}) t_p \; . \end{eqnarray} Eqs.~(\ref{eq:mean_lambda_goe}),~(\ref{eq:mean_acorr_goe}) and~(\ref{eq:mean_xcorr_goe}) above allow one to express the observables as averages, over the eigenvalue distribution, of the generating functions $\gf{p}(\vec x,\{z_a^{(k)}\}_{a=1}^N)$. In App.~\ref{app:obs} we illustrate how to express a generic observable within the model in terms of these generating functions, and show how to use this formalism in order to derive a systematic expansion in powers of ${\bar \alpha}$ for the moments of any observable. For example, in the case of the mean intensities, the first term of such an expansion, corresponding to the case ${\bar \alpha}=0$, reads: \begin{eqnarray} \avs{\avcd{\Lambda}} &=& \frac{\bar \mu}{N} \avs{\sum_a \frac 1 {1-\lambda_a}} \label{eq:mean_lambda_goe_al0} \\ \avs{\avcd{ \hat c(\omega)} } &=& \frac{\bar \Sigma}{N} ||\hat\phi(\omega)||^{-2} \avs{\sum_a \frac{1}{|| \hat \phi^{-1}(\omega) -\lambda_a ||^2 }} \\ \avs{\avcm{ \hat c(\omega)}} &=& \frac{\bar \Sigma}{N^2} ||\hat\phi(\omega)||^{-2} \avs{\sum_a \frac{1}{|| \hat \phi^{-1}(\omega) -\lambda_a ||^2 }} \; . 
\end{eqnarray} Even more interestingly, the leading order term of the $1/N$ expansion of $\gf{p}$ can be computed analytically (see App.~\ref{app:obs}), making possible the asymptotic estimation of the observables by means of the $\gf{p}$ generating functions. \\ Eqs.~(\ref{eq:mean_lambda_goe}) and~(\ref{eq:mean_lambda_goe_al0}) state that, unlike in Eq.~(\ref{eq:mean_lambda_lattice}), all modes of $\alpha$ contribute to the average intensity due to the heterogeneity of the interaction network. As in that case, if any of the eigenvalues exceeds the value $\lambda=1$, the system loses its stability, and the ratio of endogenous-to-exogenous intensity $\avcd{\Lambda}/\avcd{\mu} -1$ explodes. The equations for the correlations for ${\bar \alpha}=0$ are similar to the ones found in the deterministic case, except for the fact that the average cross-correlation for the non-diagonal terms $\hat c_{ij}(\omega)$ with $i\neq j$ is exactly equal to zero. We remark moreover that by specializing the value of $\avs{\hat c_{ii}(\omega)}$ to $\omega =0$, we find an expression proportional to the one of $\avs{\Lambda_{i}^2}$. This implies that the \emph{memory} $\hat c_{ii}(0) = \int d\tau \, c_{ii}(\tau)$ is related to the fluctuations of the mean. In particular, whenever the process develops long-range memory (i.e., $\hat c_{ii}(0)$ diverges), the fluctuations of the mean intensities are also bound to diverge. Finally, we recall that in the cases in which $P_\infty(||\alpha||<1)$ is equal to one, $\overline{P_N}(\alpha) \to P_N(\alpha) \mathds{1}_{[0,1)}(|| \alpha ||)$ at large $N$, so that it is possible to replace the conditional measure with the unconditional one in the large $N$ limit. 
This is the case throughout the interior of the stability region (${\bar \alpha}+\sigma_\alpha^2/{\bar \alpha} <1$ for ${\bar \alpha} > \sigma_\alpha$, and $\sigma_\alpha < 1/2$ for ${\bar \alpha} < \sigma_\alpha$), implying that, for example, the ${\bar \alpha}=0$ value of the average auto-correlation function in the exponential kernel case is given by \begin{equation} \avs{\avcd{ c(\tau) }} \label{eq:acorr_goe_wigner} = \beta \bar \Sigma \int_{-2\sigma_\alpha}^{+2\sigma_\alpha} d\lambda \, \rho_W(\lambda) \frac{\lambda (2-\lambda) }{2(1-\lambda )} e^{-\beta [1-\lambda] |\tau|} \; , \end{equation} where \begin{equation} \rho_W(\lambda) = \av{\frac{1}{N} \sum_a \delta(\lambda - \lambda_a)}= \frac{1}{2 \pi \sigma_\alpha^2} \sqrt{4\sigma_\alpha^2-\lambda^2} \end{equation} is the well-known Wigner semicircle law with support in $[-2\sigma_\alpha,2\sigma_\alpha]$ represented in Fig.~\ref{fig:spectrum_goe} (see \cite{Wigner:1955aa}). Fig.~\ref{fig:correl_goe} shows a set of auto-correlation curves obtained by varying $\sigma_\alpha$ along the ${\bar \alpha}=0$ line, comparing the results of Eq.~(\ref{eq:acorr_goe_wigner}) with the results of numerical simulations. \subsubsection{Fluctuations and law of large numbers for $\avcd{\Lambda}$} A natural question arising in the study of systems with random interactions is that of the self-averaging property of the observables: given a sequence of realizations of the system of increasing $N$, does a law of large numbers hold for the quantities $\avcd{\Lambda}$ and $\avcd{ c(\tau)}$? Indeed, the mean intensity $\avcd{\Lambda}$ depends upon the eigenvalues $\{ \lambda_a\}_{a=1}^N$, which are strongly correlated variables. As a consequence, one may expect the natural scaling of fluctuations to be altered by the particular statistics of the eigenvalues. 
The fluctuations of the mean intensity $\avcd{\Lambda}$ can be computed analytically by using the results of App.~\ref{app:obs}, and read \begin{eqnarray} \avs{\avcd{\Lambda\Lambda^T}} &=& \bar \mu^2\left( {\bar \alpha} \partial_{x^{(1)}} + 1 \right) \partial_{x^{(2)}} \avs{\gf{2} (({\bar \alpha},0),\{ (1-\lambda_a)^{-1}\},\{ (1-\lambda_a)^{-2}\})} \nonumber \\ &+& {\bar \alpha}^2 \frac{\sigma^2_\mu}{2N} ({\bar \alpha} \partial_{x^{(1)}} + 1) \partial_{x^{(2)}}^2 \avs{\gf{2} (({\bar \alpha},0),\{ (1-\lambda_a)^{-1}\},\{ (1-\lambda_a)^{-2}\})} \nonumber \\ &+& {\bar \alpha} \frac{2 \sigma^2_\mu} {N} \partial_{x^{(2)}} \avs{\gf{2} (({\bar \alpha},0),\{ (1-\lambda_a)^{-1}\},\{ (1-\lambda_a)^{-3}\})} \nonumber \\ &+& \frac{ \sigma^2_\mu} N \avs{\sum_a \frac 1 {(1-\lambda_a)^2}} \label{eq:fluct_diag_lambda_goe} \\ \avs{\avcm{ \Lambda\Lambda^T}} &=& \frac{\bar \mu^2}{{\bar \alpha}^2} ({\bar \alpha} \partial_{x^{(1)}} - 1) \avs{\gf{1} ({\bar \alpha},\{ (1-\lambda_a)^{-1}\}) - 1 -\frac{{\bar \alpha}}{N}\sum_a \frac 1 {(1-\lambda_a)} } \nonumber \\ &+& \frac{\sigma^2_\mu}{N} ({\bar \alpha} \partial_{x^{(1)}} + 1) \partial_{x^{(2)}} \avs{\gf{2} (({\bar \alpha},0),\{ (1-\lambda_a)^{-1}\},\{ (1-\lambda_a)^{-2}\})} \; , \nonumber \\ && \label{eq:fluct_coll_lambda_goe} \end{eqnarray} While Eq.~(\ref{eq:fluct_diag_lambda_goe}) is useful to estimate the fluctuations of the individual $\Lambda_i$, Eq.~(\ref{eq:fluct_coll_lambda_goe}) can be used in order to estimate the variance of $\avcd{\Lambda}=N^{-1}\sum_i\Lambda_i$. \vskip .3cm \noindent {\bf Considering the case ${\bar \alpha}=0,\sigma_\alpha < 1/2$.} In particular, we are able to estimate analytically the latter quantity along the ${\bar \alpha}=0,\sigma_\alpha < 1/2$ line, since in this region the mean intensity reduces to a linear statistic of the eigenvalues. 
In that case, for $N$ large enough the variance of $\avcd{\Lambda}$ tends to \begin{eqnarray} \label{eq:lin_stat} \vas{ \avcd{\Lambda} } &\to& \frac{\sigma_\mu^2}{N} \int_{-2\sigma_\alpha}^{2\sigma_\alpha} d\lambda \frac {\rho_W(\lambda)} {(1-\lambda)^2} \\ &+&\frac{\bar \mu^2}{N^2} \mathbb{P} \left\{ \int_{-2\sigma_\alpha}^{2\sigma_\alpha} d\lambda d\lambda^\prime \frac{\rho_W(\lambda,\lambda^\prime)}{(1-\lambda)(1-\lambda^\prime)} \right\}\; , \nonumber \end{eqnarray} where $\mathbb{P}$ denotes the Cauchy principal value, and \begin{eqnarray} \label{eq:spectral_corr} \rho_W(\lambda,\lambda^\prime) &=& -\frac 1 {\pi^2} \left( \frac{1}{[(2\sigma_\alpha+\lambda)(2\sigma_\alpha-\lambda)]^{1/2}} \right) \\ &\times& \frac{\partial^2}{\partial \lambda \partial \lambda^\prime} \left([(2\sigma_\alpha+\lambda^\prime)(2\sigma_\alpha-\lambda^\prime)]^{1/2} \log |\lambda-\lambda^\prime| \right) \end{eqnarray} is a universal \emph{spectral correlation}, which encodes the strongly interacting nature of the eigenvalue distribution~\cite{Brezin:1993aa,Beenakker:1994aa}. As both integrals appearing in Eq.~(\ref{eq:lin_stat}) converge, the contributions of the $\sigma_\mu$ and $\bar \mu$ terms scale respectively as $N^{-1}$ and $N^{-2}$. \vskip .3cm Let us point out that, by numerically evaluating Eq.~(\ref{eq:fluct_coll_lambda_goe}), we find that this scaling extends to all values of $({\bar \alpha},\sigma_\alpha)$ in the interior of the stability region. The scaling of fluctuations in the critical regime is instead non-trivial: taking for example the case ${\bar \alpha}=0$, one can check that both integrals appearing in Eq.~(\ref{eq:lin_stat}) are formally divergent at $\sigma_\alpha = 1/2$, indicating the emergence of a different scaling of the fluctuations with $N$. 
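As an illustration (not part of the original derivation), the first integral in Eq.~(\ref{eq:lin_stat}) can be evaluated numerically and checked against the closed form obtained from the Wigner resolvent $g(z) = (z-\sqrt{z^2-4\sigma_\alpha^2})/(2\sigma_\alpha^2)$; the following minimal Python sketch uses a trapezoidal rule and grid size of our own choosing:

```python
import numpy as np

def wigner_integral(sigma, n_grid=200001):
    # I(sigma) = int rho_W(lam) / (1 - lam)^2 dlam over [-2 sigma, 2 sigma];
    # the substitution lam = 2 sigma sin(theta) removes the edge singularities:
    # I = (2/pi) * int cos(theta)^2 / (1 - 2 sigma sin(theta))^2 dtheta
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    f = np.cos(theta) ** 2 / (1.0 - 2.0 * sigma * np.sin(theta)) ** 2
    h = theta[1] - theta[0]
    return (2.0 / np.pi) * h * (f.sum() - 0.5 * (f[0] + f[-1]))

def wigner_integral_exact(sigma):
    # closed form -g'(1), with g the Wigner resolvent; finite only for sigma < 1/2
    return (1.0 / np.sqrt(1.0 - 4.0 * sigma ** 2) - 1.0) / (2.0 * sigma ** 2)

print(wigner_integral(0.3), wigner_integral_exact(0.3))  # both close to 1.3889
```

The closed form diverges as $\sigma_\alpha \to 1/2$, consistently with the breakdown of the $N^{-1}$ scaling at the critical point.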
Finding the exact scaling at the transition requires indeed a more sophisticated analysis, as one needs to take into account that $P_N(\alpha)\neq \overline{P_N}(\alpha)$ in order to compute the fluctuations of the largest eigenvalues dominating the divergence. \subsubsection{Criticality} \label{sec:crit_goe} Similarly to what has been discussed above for a translationally invariant system, even in this random framework the system can develop slow correlations if the spectrum touches the instability point $\lambda=1$. The situation is more delicate in this case, as the maximum eigenvalue may not coincide with the edge of the support of $\rho(\lambda)$, as it did in the former case. In particular, when ${\bar \alpha} > \sigma_\alpha$ and ${\bar \alpha}+\sigma_\alpha^2/{\bar \alpha}=1$, the large time behavior is dictated by an isolated exponential mode whose associated decay rate becomes extremely small. This scenario is similar to the one considered in Ref.~\cite{Bremaud:2001aa}. The phase in which ${\bar \alpha} < \sigma_\alpha$ and $\sigma_\alpha$ is close to the instability point $\sigma_\alpha=1/2$ leads instead to a richer dynamical behavior, and its phenomenology is, to the best of our knowledge, yet unexplored. Fig.~\ref{fig:spectrum_goe} summarizes this description by representing the spectrum in both of these cases. \begin{figure} \centering \includegraphics{fig/spectrum_goe} \caption{Spectrum $\rho(\lambda)$ of the Gaussian Orthogonal Ensemble in the cases ${\bar \alpha}>\sigma_\alpha$ (left panel) and ${\bar \alpha} < \sigma_\alpha$ (right panel) (see Section \ref{sec:goe}). 
We have averaged over 100 realizations of a system of size $N=1000$ in order to obtain the curves depicted in the figure, which at this scale are almost exactly superposed on the Wigner semicircle.} \label{fig:spectrum_goe} \end{figure} \vskip .3cm \noindent {\bf Considering the case ${\bar \alpha}=0,\sigma_\alpha = 1/2$.} In order to be definite, let us consider the case ${\bar \alpha}=0$ and $\sigma_\alpha =1/2$. In that case and in the large $N$ regime the realizations are stable with finite probability $P_\infty(||\alpha|| < 1) = \mathcal{P}_{1}(0) \approx 0.8319$, where $\mathcal{P}_{\beta}(z)$ is the cumulative distribution function of the Tracy-Widom distribution of index $\beta$ evaluated at $z$ (see Refs.~\cite{Dean:2008aa,Dean:2006aa}). The long time regime of the correlations is then dictated by the behavior of the density of eigenvalues close to $\lambda = 1$. In particular, if one considers \begin{equation} \bar \rho_W(\lambda) = \avs{\frac{1}{N} \sum_a \delta(\lambda - \lambda_a)} \;, \end{equation} the behavior of $ \avs{\avcd{ c(\tau) }}$ depends upon the shape of $ \bar \rho_W(\lambda) $ close to 1. Notice that, as $P_\infty(||\alpha||<1) < 1$, one could expect that $ \bar \rho_W(\lambda) \neq \rho_W(\lambda)$. However, at leading order in $N$, the two densities converge to the same limit, and it is possible to use Eq.~(\ref{eq:acorr_goe_wigner}) even in the critical regime \cite{Dean:2008aa,Dean:2006aa}. Intuitively, this indicates that the number of eigenvalues exceeding the instability point $\lambda=1$ is of order smaller than $O(N)$, implying that they give a negligible contribution to $\bar \rho_W(\lambda)$ in the large $N$ limit. 
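The finite stability probability at the critical point can also be probed by direct sampling; the minimal sketch below is our own illustration (matrix size, sample count and seed are arbitrary choices, and finite-$N$ corrections shift the measured fraction away from $\mathcal{P}_1(0)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goe(n, mean=0.0, sigma=0.5):
    # symmetric alpha with entry mean `mean`/n and off-diagonal variance sigma^2/n,
    # so that the bulk converges to a semicircle supported on [-2 sigma, 2 sigma]
    a = rng.normal(size=(n, n))
    return mean / n + sigma * (a + a.T) / np.sqrt(2.0 * n)

n, n_samples = 200, 400
lam_max = np.array([np.linalg.eigvalsh(sample_goe(n))[-1] for _ in range(n_samples)])
# at the critical point (mean = 0, sigma = 1/2) a finite fraction of samples is stable
frac_stable = np.mean(lam_max < 1.0)
```

As $N$ grows the fraction should approach $\mathcal{P}_1(0)\approx 0.83$, and keeping only the stable samples gives direct access to the conditioned density $\bar\rho_W(\lambda)$.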
In the case of the exponential kernel, we find by plugging $\bar \rho_W(\lambda)$ into Eq.~(\ref{eq:acorr_factor_real}) that for $N$ large enough \begin{equation} \label{eq:ac_crit_wigner} \avs{ \avcd{ c(\tau)}} = \frac{\bar \Sigma \beta}{\pi} \int_{-1}^{1}d\lambda \, \lambda (2-\lambda) \sqrt{ \frac{1+\lambda}{1-\lambda}}e^{-\beta|\tau|(1-\lambda)} \sim \tau^{-1/2} \; . \end{equation} Studying the tail behavior of the average autocorrelation is indeed delicate: one can rigorously derive the scaling $c_{ii}(\tau) \sim \tau^{-1/2}$ only in the setting in which the limit of large $N$ is taken before that of large $\tau$. In practice, the finite $N$ corrections establish an upper cutoff on the correlations, which decay exponentially beyond a time $\tau_{cut}$ whose location depends upon the corrections to the limiting law of $ \bar \rho_W(\lambda) $. Moreover, fluctuations are large at any $\tau$: while for $\sigma_\alpha<1/2$ the variance of $\sum_i c_{ii}(\tau) $ has a finite limit for $N\to\infty$, in the case $\sigma_\alpha=1/2$ such a limit becomes infinite. The behavior of the autocorrelations in the critical case ${\bar \alpha}=0,\sigma_\alpha=1/2$ is also represented in Fig.~\ref{fig:correl_goe}, where we have compared the theoretical predictions with the results of numerical simulations. \begin{figure} \centering \includegraphics{fig/correl_goe} \caption{Autocorrelation function of the GOE ensemble for variable $\sigma_\alpha$ in the case ${\bar \alpha}=0$ (see Section \ref{sec:obs_goe}). The solid lines represent the theoretical predictions, while the shaded regions are two-sigma error bars accounting for the results of the numerical estimations (300 realizations of systems of size $N=1000$). 
Notice that the theoretical predictions for $\sigma_\alpha<1/2$ are indistinguishable from the numerical results, while for $\sigma_\alpha=1/2$ strong fluctuations arise in the system as a consequence of the critical regime that we are considering.} \label{fig:correl_goe} \end{figure} The divergence of $\hat c_{ii}(0) = \int d\tau\, c_{ii}(\tau)$ in this regime signals that long-range memory is induced in the system as an effect of the structure of the interaction matrix $\alpha$. Notice that the behavior of $\hat c_{ii}(\omega)$ at $\omega=0$ and that of the fluctuations of the mean intensities $\Lambda_i$ are related. In particular, if $\hat c_{ii}(0)$ diverges, Eq.~(\ref{eq:fluct_diag_lambda_goe}) implies that the mean intensity $\Lambda_i$ has infinite variance. \vskip .3cm We remark that, as in the translationally-invariant case discussed above, the slow relaxation of the correlations is independent of the specific shape of the kernel $\phi(\tau)$, as long as the integral $\int_0^\infty d\tau\,\tau \phi(\tau)$ is finite. \subsection{Random graph ensembles} \label{sec:rrg} \subsubsection{Definition} In this section we consider ensembles in which the interaction matrix $\alpha$ is generated by randomly assigning positive weights to the edges of a graph whose vertices are the $N$ coordinates of the system. As in this setting the matrix $\alpha$ is by construction non-negative, we are free to use the following ensembles in order to build random realizations of a multivariate Hawkes process.\\ There is indeed an important difference to underline between VAR processes and multivariate Hawkes models. In fact, while the $\Sigma_{ii}$ appearing in a VAR model are independent of the $\Lambda_i$ variables, in the Hawkes model their values need to coincide. This implies in particular that the expressions of the correlations in the two cases can in general be different. 
Summarizing, while at large times a deterministic Hawkes process is always identical to an appropriately tuned continuous-time VAR, in the random case they can be different, as the noise in the Hawkes case is endogenously generated. As the ensemble for non-negative realizations of $\alpha$, we take that of the \emph{$c$-regular random graphs}, in which each $\alpha$ is associated with an adjacency matrix $\mathcal{G}$ uniformly chosen in the space of random undirected graphs of $N$ vertices, each of which is connected to exactly $c$ other vertices. We then define $\alpha$ as $\alpha=({\bar \alpha} / c) \mathcal G$. According to these conventions, the first two moments of each entry are \begin{eqnarray} \av{\mal} &=& \frac{{\bar \alpha}}{N} \\ \var{\mal} &=& \frac{{\bar \alpha}^2}{cN} \left(1- \frac c N \right) \; . \end{eqnarray} We will further suppose $c$ to be an $N$-independent constant. In fact, if $c$ grows with $N$, the bulk of eigenvalues of $\alpha$ shrinks with $N$, leading in the large $N$ limit to an asymptotic spectrum consisting of $N-1$ degenerate eigenvalues equal to zero and a unique eigenvalue equal to ${\bar \alpha}$ \cite{Kesten:1959aa,McKay:1981aa}. This type of behavior has been proved for several other ensembles of non-negative $\alpha$ matrices (e.g., \cite{Erdos:2012aa,Erdos:2013aa,Bordenave:2011aa}), and can be informally summarized by the statement that \emph{ensembles of non-negative $\alpha$ matrices are expected to have a non-degenerate behavior in the large $N$ limit just in the sparse case}. By \emph{sparse}, we mean that given a random row $1\leq i\leq N$, a number of entries independent of $N$ should account for a finite fraction of the sum $\sum_j \mal$ with large probability. 
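For illustration, a realization of this ensemble can be built and probed numerically. The sketch below uses a permutation-based construction of our own choosing rather than a uniform sample of simple $c$-regular graphs: it may occasionally produce self-loops or double edges, but it fixes every row and column sum of the adjacency exactly to $c$, which is the property used in the stability discussion that follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def regular_alpha(n, c, abar):
    # superpose c/2 random permutation matrices with their transposes:
    # every row and column of the adjacency then sums exactly to c,
    # and alpha = (abar / c) * adjacency is symmetric and non-negative
    assert c % 2 == 0
    adj = np.zeros((n, n))
    for _ in range(c // 2):
        p = rng.permutation(n)
        pm = np.zeros((n, n))
        pm[np.arange(n), p] = 1.0
        adj += pm + pm.T
    return (abar / c) * adj

n, c, abar = 400, 4, 0.9
alpha = regular_alpha(n, c, abar)
lam = np.linalg.eigvalsh(alpha)
# the top eigenvalue equals abar exactly (uniform Perron eigenvector),
# while the bulk stays within roughly [-2 abar sqrt(c-1)/c, 2 abar sqrt(c-1)/c]
```

Since all row sums equal ${\bar \alpha}$, the uniform vector is always an eigenvector with eigenvalue ${\bar \alpha}$, which is the largest one for a non-negative symmetric matrix.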
\subsubsection{Stability in the regular random graph ensemble} The issue of stability in this ensemble is trivial, as for ${\bar \alpha}<1$ the system is always stable (the maximum eigenvalue does not fluctuate because the degree of each node of the graph is fixed), and the measure associated with the averages $\avs{\dots}$ coincides with the unrestricted one $\av{\dots}$. Indeed, the maximum eigenvalue of the matrix $\alpha$ is equal to ${\bar \alpha}$ and is associated with the uniform eigenvector $N^{-1/2} (1,\dots,1)^\top$. This scenario is analogous to the one analyzed in Sec.~\ref{sec:det}, in which the absence of fluctuations for the maximum eigenvalue led the Perron-Frobenius eigenvalue to be associated with the same eigenvector. \subsubsection{Endogeneity and Relaxation in the regular random graph ensemble} \label{sec:observ-regul-rand} The expressions for the averages of the various observables in the regular random graph are similar to the ones calculated in Sec.~\ref{sec:det} due to the absence of fluctuations in the largest eigenvalue of $\alpha$. By using the decomposition~(\ref{eq:decomp}) we can in fact show that \begin{eqnarray} \avs{\avcd{\Lambda}} &=& \frac{\bar \mu}{1-{\bar \alpha}} \\ \label{eq:ac_rg} \avs{\avcd{ \hat c(\omega)}} &=& \frac{\bar \Sigma}{N} ||\hat \phi(\omega)||^{-2} \avs{\sum_{a} \frac 1 {||\hat \phi(\omega)^{-1} - \lambda_{a}||^2 }} \\ \avs{\avcm{\hat c(\omega)}} &=& \frac{\bar \Sigma}{N} ||\hat \phi(\omega)||^{-2}\frac 1 {||\hat \phi(\omega)^{-1} - {\bar \alpha}||^2 } \; . \end{eqnarray} \\ These expressions are valid for both the VAR and the Hawkes model, once one identifies $\Sigma_{ii}$ with $\Lambda_i$. This is because, even though the correlations should be calculated differently in each of the two cases, the absence of fluctuations for the maximum eigenvalue induces the same expression for the correlations.\footnote{Including degree fluctuations in the graph associated to $\alpha$ would lead to a different expression of the correlations. 
In particular, the value of auto- and cross-correlations in the Hawkes case would be sensitive to the \emph{third} power of $\hat \Psi(\omega)$, as opposed to the \emph{second} power of $\hat \Psi(\omega)$ which appears in the expressions which we have reported.} In the $N\to\infty$ limit, the form of the spectrum $\rho(\lambda)$ for this ensemble has been calculated in Refs.~\cite{Kesten:1959aa,McKay:1981aa}. It reads: \begin{equation} \label{eq:spectrum_rg} \rho_c(\lambda) = \av{\frac{1}{N} \sum_a \delta(\lambda - \lambda_a)}= \frac{1}{2 \pi} \frac{\sqrt{4 {\bar \alpha}^2(c-1)-c^2\lambda^2}}{{\bar \alpha}^2-\lambda^2} \;, \end{equation} with support $\lambda\in [-2{\bar \alpha}\sqrt{c-1}/c,2{\bar \alpha}\sqrt{c-1}/c]$. This explicit form of the spectral density can be used in order to evaluate numerically the diagonal terms $\avcd{\Lambda\Lambda^T}$ and $\avcd{\hat c(\omega)}$. \subsubsection{Criticality} \label{sec:crit_rrg} The behavior of the system in the vicinity of the point ${\bar \alpha}=1$ can be analyzed easily thanks to the explicit expressions provided for the shape of the bulk (Eq.~(\ref{eq:spectrum_rg})) and the value of the maximum eigenvalue, which is always equal to ${\bar \alpha}$. As in the previous cases, we will refer explicitly to the case in which the kernel $\phi(\tau)$ is an exponential function ($ \phi(\tau) = \beta e^{-\beta \tau}\mathds{1}_{\mathbb{R}^+}(\tau)$), noting that this choice doesn't induce any loss of generality at large times as long as the kernel is assumed to be short-ranged.\\ For $c>2$ the edge of the bulk never touches the point ${\bar \alpha}$, indicating that the leading term of the large-time expansion of the auto-correlations is always proportional to $e^{-\beta (1-{\bar \alpha})|\tau| }$. The ${\bar \alpha}\to 1 $ limit then has the same type of behavior as the ${\bar \alpha} > \sigma_\alpha$ phase of the GOE ensemble already discussed. 
For $c=2$, the edge of the spectrum touches the point $\lambda={\bar \alpha}$, as the density takes the form \begin{equation} \rho_{c=2}(\lambda) = \frac{1}{\pi} ( {\bar \alpha}^2-\lambda^2)^{-1/2} \;. \end{equation} By inserting the above density into Eq.~(\ref{eq:ac_rg}) one finds the type of divergence already encountered in the case of a one-dimensional system: in order to have a finite limit for ${\bar \alpha}=1$, the leading behavior close to that point should be of the type $(1-\lambda)^\gamma$ with $\gamma>0$. This indicates that for $c=2$, in the limit ${\bar \alpha}\to 1$, the correlations are bound to become infinite. It would nevertheless be interesting to find an ensemble of non-negative matrices in which (i) the fluctuations of the maximum eigenvalue tend to zero in the large $N$ limit, (ii) the maximum eigenvalue touches the bulk of the spectrum and (iii) the spectral density vanishes at the edge of the spectrum. This would allow the model to describe the onset of non-trivial correlations at the critical point ${\bar \alpha}=1$ even for Hawkes processes. A natural candidate for this purpose would be the Erd\H{o}s-R\'enyi ensemble of $\alpha$ matrices (informally, it can be thought of as an analytical continuation of the regular random graph to non-integer, fluctuating $c$). Moreover, the spectral density of the matrices sampled from that ensemble has been shown to vanish close to the edge of the bulk as in the Wigner case \cite{Bray:1988aa,Rodgers:1988aa,Biroli:1999aa,Semerjian:2002aa,Dorogovtsev:2003aa,Kuhn:2008aa}, allowing in principle non-exponential correlations to arise in the critical case. The problem with the Erd\H{o}s-R\'enyi ensemble lies in points (i) and (ii), as the degrees of the nodes of the matrix $\alpha$ can have potentially large fluctuations, leading to a slow divergence of the maximum eigenvalue in the large $N$ limit~\cite{Krivelevich:2003aa}, spoiling the stability of the ensemble. 
Is it possible to generalize the Erd\H{o}s-R\'enyi ensemble by reducing the amount of degree fluctuations, so as to control the maximal eigenvalue of $\alpha$? We leave this issue as an interesting open question. \section{Conclusions} \label{sec:conclusions} In this work we have investigated the first and second order properties of two linear models (VAR and Hawkes processes) in the regime of high dimensionality, providing the values of the observables characterizing their behavior under different setups. We have addressed in particular the issues of stability, endogeneity, and slow relaxation for both deterministic and stochastic realizations of these models, showing that under the assumptions of factorizability {\bf (H2)} and homogeneity {\bf (H3)}, all the information about these systems can be related to the spectrum $\{\lambda_a \}_{a=1}^N$ of the matrix encoding their interaction network. We have shown that, as opposed to the univariate setup considered in~\cite{Bremaud:2001aa}, collective effects can trigger slow relaxation of the correlation functions even in systems with short-memory interactions, provided that the edge of such a spectrum \emph{smoothly} touches the instability point $\lambda=1$. More generally, we find that in systems characterized by a strong degree of endogeneity, the dynamic properties are also expected to slow down, akin to what is observed for glassy systems~\cite{Kurchan:1996aa}, for which the proximity to a static transition point is signaled by a slow relaxation of the correlations. We additionally relate the memory of the system to the self-averaging behavior of the intensity $\Lambda$: in the stochastic case, long-range correlations are measured only when the variance of the observed signal diverges on a realization-per-realization basis. On a more general footing, we find that random realizations of large linear systems are always self-averaging unless the system is exactly poised at the critical point. 
Our study also illustrates the tight relation between the VAR and the Hawkes processes, whose first- and second-order properties are shown to coincide. A Hawkes process is in fact reminiscent of a VAR model in which the noise term is endogenously generated by the system. Such a fundamental difference also explains why random realizations of a Hawkes process differ from the ones sampled from a VAR: in the former case the endogenous nature of the noise may enhance the level of fluctuations of the system, while in the latter the noise can be taken as completely exogenous. Another fundamental difference between the two models is the excitatory nature of the interactions for a Hawkes system as opposed to a VAR model, in which the sign of the interactions is arbitrary: our study supports the hypothesis that a certain degree of frustration (i.e., negative interactions) is required in order to observe slow relaxations in short-memory linear systems. Finally, from the perspective of empirical calibration, our study shows that the parametric fit of a system characterized by power-law relaxations with a short-range kernel naturally leads to a critical point, the only one which can accommodate such a slow behavior of the correlations in data. \section*{Acknowledgements} We warmly thank F.~Lillo for long and fruitful discussions all along the preparation of this manuscript. We also thank J.-P.~Bouchaud, J.~Bun, S.~Majumdar, and P.~Vivo for useful discussions. This research benefited from the support of the ``Chair Markets in Transition'', under the aegis of ``Louis Bachelier Finance and Sustainable Growth'' laboratory, a joint initiative of \'Ecole Polytechnique, Universit\'e d'\'Evry Val d'Essonne and F\'ed\'eration Bancaire Fran\c{c}aise.
\section{Introduction} In this paper, we address in a distributed way stochastic big-data convex optimization problems involving \emph{strongly convex} (possibly nonsmooth) local objective functions, by means of the Distributed Block Proximal Method~\cite{farina2019randomized,farina2019subgradient}. Problems with this structure naturally arise in many learning and control problems in which the decision variable is extremely high dimensional and large datasets are employed. Relevant examples include: direct policy search in reinforcement learning~\cite{recht2019reinforcement}, dynamic problems involving stochastic functions generated from collected samples to be processed online~\cite{xiao2010dual}, learning problems involving massive datasets in which sample average approximation techniques are used~\cite{kleywegt2002sample}, and settings in which only noisy subgradients of the objective functions can be computed at each time instant~\cite{ram2010distributed}. Distributed algorithms for solving stochastic problems have been widely studied~\cite{ram2010distributed,agarwal2011distributed,srivastava2011distributed,nedic2016stochastic,li2018stochastic,ying2018performance}. On the other side, distributed algorithms for big-data problems through block communication have started to appear only recently~\cite{necoara2013random,arablouei2013distributed,wang2017coordinate,notarnicola2018distributed, FARINA2019243}. The Distributed Block Proximal Method solves problems that are simultaneously non-smooth, stochastic and big-data, thus distinguishing it from the above works (see~\cite{farina2019randomized} for a comprehensive literature review). This algorithm evolves through block-wise communication and updates (involving subgradients of the local functions and proximal mappings induced by some distance generating functions) and it has already been shown to achieve a sublinear convergence rate on problems with non-smooth convex objective functions. 
The contribution of this paper is to extend this result by showing that, under strongly convex (possibly non-smooth) local objective functions and constant stepsizes, the Distributed Block Proximal Method exhibits a linear convergence rate (with a constant error term) to the optimal cost in expected value. The main challenge in the linear-rate analysis lies in the block-wise nature of the algorithm. \section{Set-up and preliminaries}\label{sec:problem} \subsection{Notation, definitions and preliminary results} Given a vector $x\in\mathbb{R}^n$, we denote by $x_\ell$ the $\ell$-th block of $x$, i.e., given a partition of the identity matrix $I=[U_1,\dots,U_B]$, with $U_\ell\in\mathbb{R}^{n\times n_\ell}$ for all $\ell$ and $\sum_{\ell=1}^B n_\ell=n$, it holds $x = \sum_{\ell=1}^B U_\ell x_\ell$ and $x_\ell=(U_\ell)^\top x$. In general, given a vector $x_i\in\mathbb{R}^n$, we denote by $x_{i,\ell}$ the $\ell$-th block of $x_i$. Given a matrix $A$, we denote by $a_{ij}$ the element of $A$ located at row $i$ and column $j$. Given two vectors $a,b\in\mathbb{R}^n$ we denote by $\langle a,b\rangle$ their scalar product. Given a discrete random variable $r\in\until{R}$, we denote by $P(r=\bar{r})$ the probability of $r$ being equal to $\bar{r}$, for all $\bar{r}\in\until{R}$. Given a nonsmooth function $f$, we denote by $\partial f(x)$ its subdifferential at $x$. The following preliminary result will be used in the paper. \begin{lemma}\label{lemma:series} Given two scalars $\delta$ and $\gamma$, with $\delta\neq\gamma$ and $\delta,\gamma\neq 1$, it holds that \begin{enumerate}[label=(\roman*)] \item\label{item:1} $\sum_{s=r}^t\delta^s = \frac{\delta^r-\delta^{t+1}}{1-\delta}$ \item\label{item:3} $\sum_{s=0}^t \delta^{t-s}\gamma^s=\frac{\delta^{t+1}-\gamma^{t+1}}{\delta-\gamma}.$\oprocend \end{enumerate} \end{lemma} \subsection{Distributed stochastic optimization set-up} Let us start by formalizing the optimization problem addressed in this paper. 
We consider problems of the form \begin{equation}\label{pb:problem} \begin{aligned} & \mathop{\textrm{minimize}}_{x\in X} & & \sum_{i=1}^N \mathbb{E}[h_i(x;\xi_i)], \end{aligned} \end{equation} where $\xi_i$ is a random variable and $x\in\mathbb{R}^n$, $n\gg 1$, has a block structure, i.e., $x=[x_1^\top,\dots,x_B^\top]^\top$, with $x_\ell\in\mathbb{R}^{n_\ell}$ for all $\ell$ and $\sum_{\ell=1}^B n_\ell = n$. The decision variable $x$ can be very high-dimensional, which calls for block-wise algorithms. Let $f_i(x)\triangleq\mathbb{E}[h_i(x;\xi_i)]$. Moreover, let $g_{i}(x;\xi_i)\in\partial h_i(x;\xi_i)$ (resp. $\textsl{g}_i(x)\in\partial f_i(x)$) be a subgradient of $h_i(x;\xi_i)$ (resp. $f_i(x)$) computed at $x$. The following assumption is made on problem~\eqref{pb:problem}. \begin{assumption}[Problem structure]\label{assumption:problem_structure} \hspace{1ex} \begin{enumerate}[label=(\Alph*)] \item The constraint set $X$ has the block structure $X = X_1\times \dots\times X_B$, where, for $\ell=1,\dots,B$, the set $X_\ell\subseteq\mathbb{R}^{n_\ell}$ is closed and convex, and $\sum_{\ell=1}^B n_\ell = n$. \item\label{assumption:strong_convexity} The function $h_i(x;\xi_i):\mathbb{R}^n\to\mathbb{R}$ is continuous, strongly convex, and possibly nonsmooth on $X$ for every $\xi_i$, for all $i\in\until{N}$. In particular, there exists a constant $m>0$ such that $f_i(a)\geq f_i(b) - \langle \textsl{g}_i(b), b-a\rangle + \frac{m}{2}\| a- b \|^2$, for all $a,b\in X$ and all $i\in\until{N}$. \item\label{subg} Every subgradient $g_{i}(x;\xi_i)$ is an unbiased estimator of a subgradient of $f_i$, i.e., $\mathbb{E}[g_{i}(x;\xi_i)]=\textsl{g}_i(x)$.
Moreover, there exist constants $G_{i}\in[0,\infty)$ and $\bar{G}_{i}\in[0,\infty)$ such that $\mathbb{E}[\|g_i(x;\xi_i)\|]\leq G_{i}$ and $\mathbb{E}[\|g_i(x;\xi_i)\|^2]\leq \bar{G}_{i}$, for all $x\in X$ and all $i\in\until{N}$.\oprocend \end{enumerate} \end{assumption} Let us denote by $g_{i,\ell}(x;\xi_i)$ the $\ell$-th block of $g_{i}(x;\xi_i)$ and let $\textsl{g}(x)\in\partial f(x)$ be a subgradient of $f\triangleq\sum_{i=1}^N f_i$ computed at $x$. Then, Assumption~\ref{assumption:problem_structure}\ref{subg} implies that $\mathbb{E}[\|g_{i,\ell}(x;\xi_i)\|]\leq G_i$ for all $\ell$ and that $\|\textsl{g}_i(x)\|\leq G_i$. Moreover, let $\bar{G}\triangleq\sum_{i=1}^N \bar{G}_i$ and $G\triangleq\sum_{i=1}^N G_i$. Then, $\|\textsl{g}(x)\|{\leq} G$ and $\|\textsl{g}_i(x)\|{\leq} G$ for all $i$. \vspace{2ex} Problem~\eqref{pb:problem} is to be cooperatively solved by a network of $N$ agents. Each agent locally knows only a portion of the entire optimization problem. Namely, agent $i$ knows only $g_i(x;\xi_i)$ for any $x$ and $\xi_i$, and the constraint set $X$. The communication network is assumed to satisfy the following assumption. \begin{assumption}[Communication structure]\label{assumption:communication} \hspace{1ex} \begin{enumerate}[label=(\Alph*)] \item The network is modeled through a weighted \emph{strongly connected} directed graph $\mathcal{G}=(\VV,\mathcal{E}, {W})$ with $\VV=\until{N}$, $\mathcal{E}\subseteq\VV\times\VV$ and ${W}\in\mathbb{R}^{N\times N}$ being the weighted adjacency matrix. We define $\NN_{i,out}\triangleq\{j\mid (i,j)\in\mathcal{E}\}\cup\{i\}$ and $\NN_{i,in}\triangleq\{j\mid (j,i)\in\mathcal{E}\}\cup\{i\}$.
\item For all $i,j\in\until{N}$, the weights $w_{ij}$ of the weight matrix ${W}$ satisfy \begin{enumerate}[label=(\roman*)] \item $w_{ij}>0$ if and only if $j\in\NN_{i,in}$; \item there exists a constant $\eta>0$ such that $w_{ii}\geq\eta$ and, if $w_{ij}>0$, then $w_{ij}\geq\eta$; \item $\sum_{j=1}^N w_{ij}=1$ and $\sum_{i=1}^N w_{ij}=1$.\oprocend \end{enumerate} \end{enumerate} \end{assumption} In order to solve problem~\eqref{pb:problem}, agents employ suitable \emph{proximal mappings} (see, e.g.,~\cite{dang2015stochastic}). In particular, a function $\omega_{\ell}$ is associated with the $\ell$-th block of the optimization variable for all $\ell$. Let the function $\omega_{\ell}:X_\ell\to\mathbb{R}$ be continuously differentiable and $\sigma_{\ell}$-strongly convex. The functions $\omega_{\ell}$ are sometimes referred to as distance generating functions. Then, we define the \emph{Bregman's divergence} associated to $\omega_{\ell}$ as \begin{equation*} \nu_{\ell}(a,b)=\omega_{\ell}(b)-\omega_{\ell}(a)-\langle \nabla \omega_{\ell} (a), b-a\rangle, \end{equation*} for all $a,b\in X_\ell$. Moreover, given $a\in X_\ell$, $b\in\mathbb{R}^{n_\ell}$ and $c\in\mathbb{R}$, the proximal mapping associated to $\nu_{\ell}$ is defined as \begin{equation}\label{eq:prox} \text{prox}_{\ell} (a,b,c)=\arg\min_{u\in X_\ell}\Bigg(\langle b, u \rangle+\frac{1}{c} \nu_{\ell}(a, u)\Bigg). \end{equation} We make the following assumption on the functions $\nu_{\ell}$. \begin{assumption}[Bregman's divergences properties]\label{assumption:proximal_functions} \hspace{1ex} \begin{enumerate}[label=(\Alph*)] \item\label{assumption:quadratic_growth} There exists a constant $Q>0$ such that \begin{equation}\label{eq:assumption_growth} \nu_{\ell}(a,b)\leq\frac{Q}{2}\|a-b\|^2,\quad\forall a,b\in X_\ell, \end{equation} for all $\ell\in\until{B}$.
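For intuition, in the common special case $\omega_{\ell}(u)=\frac{1}{2}\|u\|^2$ (so that $\sigma_{\ell}=1$), the Bregman's divergence becomes $\nu_{\ell}(a,u)=\frac{1}{2}\|u-a\|^2$ and the mapping~\eqref{eq:prox} reduces to a projected gradient-like step, $\text{prox}_{\ell}(a,b,c)=\Pi_{X_\ell}(a-cb)$. A minimal sketch of ours (not from the paper), taking $X_\ell$ to be a box for illustration:

```python
import numpy as np

def prox_quadratic(a, b, c, lo=-1.0, hi=1.0):
    """Proximal mapping prox_l(a, b, c) for omega_l(u) = 0.5*||u||^2,
    i.e. argmin_{u in X_l} <b, u> + (1/c) * 0.5*||u - a||^2,
    with X_l = [lo, hi]^n assumed for illustration: proj_X(a - c*b)."""
    return np.clip(a - c * b, lo, hi)

a = np.array([0.2, -0.5])        # current block estimate
b = np.array([1.0, -2.0])        # e.g. a stochastic subgradient block
u = prox_quadratic(a, b, c=0.1)  # -> [0.1, -0.3], inside the box
```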
\item\label{assumption:separate} For all $\ell\in\until{B}$, the function $\nu_{\ell}$ satisfies \begin{equation}\label{eq:separable} \hspace{-3ex}\nu_{\ell}\left(\sum_{j=1}^N \theta_j a_j,b\right){\leq} \sum_{j=1}^N \theta_j \nu_{\ell}(a_j,b), \; \forall a_1,\dots,a_N,b{\in} X_\ell, \end{equation} where $\sum_{j=1}^N \theta_j=1$ and $\theta_j\geq 0$ for all $j$.\oprocend \end{enumerate} \end{assumption} Notice that Assumption~\ref{assumption:proximal_functions}\ref{assumption:quadratic_growth} implies that, given any two points $a,b\in X$, \begin{equation} \sum_{\ell=1}^B \nu_{\ell}(a_\ell,b_\ell)\leq\frac{Q}{2}\sum_{\ell=1}^B\|a_\ell-b_\ell\|^2=\frac{Q}{2}\|a-b\|^2. \end{equation} Moreover, Assumption~\ref{assumption:proximal_functions}\ref{assumption:separate} is satisfied by many functions (such as the quadratic function and the exponential function) and conditions on $\omega_{\ell}$ guaranteeing~\eqref{eq:separable} can be provided~\cite{bauschke2001joint}. \section{Distributed Block Proximal Method}\label{sec:algorithm} Let us now recall the Distributed Block Proximal Method for solving problem~\eqref{pb:problem} in a distributed way. The pseudocode of the algorithm is reported in Algorithm~\ref{alg:DBS}, where, for notational convenience, we defined $g_{i,\ell}(t)\triangleq g_{i,\ell}(y_i(t);\xi_i(t))$. We refer to~\cite{farina2019randomized} for all the details. 
\begin{algorithm} \small \begin{algorithmic} \init $x_i(0)$ \evol for $t=0,1,\dots$ \State \textsc{Update} for all $j\in\NN_{i,in}$ \begin{equation*}\label{eq:xl_update} x_{j,\ell{\mid}i}(t) = \begin{cases} x_{j,\ell}(t), &\text{if }\ell=\ell_j(t-1) \; \text{and} \; s_j(t-1)=1\\ x_{j,\ell{\mid}i}(t-1), &\text{otherwise} \end{cases} \end{equation*} \If{$s_i(t)=1$} \State\textsc{Pick} $\ell_i(t)\in\until{B}$ with $P(\ell_i(t)=\ell)=p_{i,\ell}>0$ \State\textsc{Compute} \begin{equation*}\label{eq:y_update_l} y_i(t) = \sum_{j\in\NN_{i,in}} w_{ij} x_{j{\mid}i}(t) \end{equation*} \State\textsc{Update} \begin{equation*}\label{eq:x_update_l} x_{i,\ell}(t+1) = \begin{cases} \text{prox}_{\ell}\bigg(y_{i,\ell}(t),g_{i,\ell}(t),\alpha_i\bigg),&\text{if } \ell=\ell_i(t)\\ x_{i,\ell}(t),&\text{otherwise} \end{cases} \end{equation*} \State\textsc{Broadcast} $x_{i,\ell_i(t)}(t+1)$ to $j\in\NN_{i,out}$ \Else{ $x_i(t+1)=x_i(t)$} \EndIf \end{algorithmic} \caption{Distributed Block Proximal Method }\label{alg:DBS} \end{algorithm} The algorithm works as follows. Each agent $i$ maintains a local solution estimate $x_i(t)$ and a local copy of the estimates of its in-neighbors (namely, $x_{j{\mid}i}(t)$ denotes the copy of the solution estimate of agent $j$ held by agent $i$). The local estimates are initialized with random (bounded) values $x_i(0)$, which are shared with neighbors. At each iteration, agents can be awake or idle, thus modeling a possible asynchrony in the network. The probability of agent $i$ being awake is denoted by $p_{i,on}\in (0,1]$.
If agent $i$ is awake at iteration $t$, it randomly picks a block $\ell_i(t)\in\until{B}$ and a sample $\xi_i(t)$, and performs two updates: \begin{enumerate}[label=(\roman*)] \item it computes a weighted average of its in-neighbors' estimates $x_{j{\mid}i}(t)$, $j\in\NN_{i,in}$; \item it computes $x_i(t+1)$ by updating the $\ell_i(t)$-th block of $x_i(t)$ through a proximal mapping step (with a constant stepsize $\alpha_i$), leaving the other blocks unchanged. \end{enumerate} Finally, it broadcasts $x_{i,\ell_i(t)}(t+1)$ to its out-neighbors. The status (awake or idle) of node $i$ at iteration $t$ is modeled as a random variable $s_i(t)\in\{0,1\}$ which is $1$ with probability $p_{i,on}$ and $0$ with probability $1-p_{i,on}$. As already stated in~\cite{farina2019randomized}, it is worth remarking that all the quantities involved in the Distributed Block Proximal Method are local to each node. In fact, each node has locally defined probabilities (both of awakening and of block drawing) and local stepsizes. Moreover, it is worth recalling that, from~\cite[Lemma~5]{farina2019randomized}, we have $x_{j{\mid}i}(t)=x_j(t)$ for all $t$ and, hence, Algorithm~\ref{alg:DBS} can be compactly rewritten as follows. For all $i\in\until{N}$ and all $t$, if $s_i(t)=1$, \begin{align} y_i(t) &= \sum_{j=1}^N w_{ij} x_j(t),\label{eq:y_update}\\ x_{i,\ell}(t+1) &= \begin{cases} \textup{prox}_{\ell}\bigg(y_{i,\ell}(t),g_{i,\ell}(t),\alpha_i\bigg),&\text{if } \ell=\ell_i(t),\\ x_{i,\ell}(t),&\text{otherwise},\label{eq:x_update} \end{cases} \end{align} else, $x_i(t+1)=x_i(t)$. In the following analysis, we will use~\eqref{eq:y_update}--\eqref{eq:x_update} in place of Algorithm~\ref{alg:DBS}.
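For concreteness, one synchronous sweep of~\eqref{eq:y_update}--\eqref{eq:x_update} can be sketched as follows (our illustrative code, not from the paper: quadratic distance generating functions and an unconstrained block step are assumed, so the proximal update reduces to $y_{i,\ell}-\alpha_i g_{i,\ell}$; \texttt{subgrad(i, y)} is a hypothetical stochastic subgradient oracle):

```python
import numpy as np

def dbpm_iteration(X, W, blocks, alphas, subgrad, p_on=1.0, rng=None):
    """One iteration of the compact updates: X is (N, n), row i = x_i(t);
    `blocks` is a list of index arrays partitioning {0, ..., n-1}.
    Idle nodes (with probability 1 - p_on) keep x_i(t+1) = x_i(t)."""
    rng = rng or np.random.default_rng()
    X_new = X.copy()
    for i in range(X.shape[0]):
        if rng.random() >= p_on:                 # node i is idle
            continue
        y_i = W[i] @ X                           # consensus step (eq. y_update)
        g_i = subgrad(i, y_i)                    # stochastic subgradient at y_i
        idx = blocks[rng.integers(len(blocks))]  # pick one block uniformly
        X_new[i, idx] = y_i[idx] - alphas[i] * g_i[idx]  # block prox step
        # all other blocks keep the old values of x_i(t) (eq. x_update)
    return X_new
```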
\section{Algorithm analysis and convergence rate}\label{sec:analysis} Let $\bm{x}(\tau)\triangleq[x_1(\tau)^\top, \dots, x_N(\tau)^\top]^\top$ and let $\mS(t)\triangleq \{\bm{x}(\tau)\mid \tau\in\{0,\dots,t\}\}$ be the set of estimates generated by the Distributed Block Proximal Method up to iteration $t$. Moreover, define the probability of node $i$ both being awake and picking block $\ell$ as $$\pi_{i,\ell}\triangleq p_{i,on} p_{i,\ell}$$ and define $a\triangleq[\alpha_1,\dots,\alpha_N]^\top$, $a_M\triangleq\max_{i}\alpha_i$ and $a_m\triangleq\min_{i}\alpha_i$. Moreover, define the average (over the agents) of the local estimates at $t$ as \begin{equation}\label{eq:x_mean} \bar{x}(t)\triangleq\frac{1}{N}\sum_{i=1}^N x_i(t). \end{equation} Finally, let us make the following assumption on the random variables involved in the algorithm. \begin{assumption}[Random variables]\label{assumption:random_variables} \hspace{1ex} \begin{enumerate}[label=(\Alph*)] \item\label{assumption:iid} The random variables $\ell_i(t)$ and $s_i(t)$ are independent and identically distributed over $t$, for all $i\in\until{N}$. \item\label{assumption:indepentent} For any given $t$, the random variables $s_i(t)$, $\ell_i(t)$ and $\xi_i(t)$ are independent of each other, for all $i\in\until{N}$. \item\label{assumption:initial} There exist constants $C_i\in[0,\infty)$ such that $\mathbb{E}[\|x_i(0)\|]\leq C_i$ for all $i\in\until{N}$ and, hence, $\mathbb{E}[\|\bm{x}(0)\|]\leq C\triangleq\sum_{i=1}^N C_i$. \oprocend \end{enumerate} \end{assumption} In the following, we analyze the convergence properties of the Distributed Block Proximal Method with constant stepsizes under the previous assumptions. We start by showing that consensus is achieved in the network, by specializing the results in~\cite{farina2019randomized}. Then, we show that optimality is also achieved, in expected value and up to a constant error, by studying the properties of an ad-hoc Lyapunov-like function.
Finally, we show how the main result implies a linear convergence rate for the algorithm. \subsection{Reaching consensus} The following lemma characterizes the expected distance of $x_i(t)$ and $y_i(t)$ from the average $\bar{x}(t)$ (defined in~\eqref{eq:x_mean}). \begin{lemma}\label{lemma:xi-bx} Let Assumptions~\ref{assumption:problem_structure},~\ref{assumption:communication} and~\ref{assumption:random_variables} hold. Then, there exist constants $M\in(0,\infty)$ and $\mu_M\in(0,1)$ such that \begin{align} \mathbb{E}[\|x_i(t)-\bar{x}(t)\|] &\leq \mu_M^{t-1} \bar{R}+\bar{S},\label{eq:xi-bx}\\ \mathbb{E}[\|y_i(t)-x_i(t)\|] &\leq 2 \mu_M^{t-1} \bar{R} + 2\bar{S} \end{align} for all $i\in\until{N}$ and all $t\geq 1$, with $\bar{R}= MB\left(C- \frac{a_M G}{\sigma(1-\mu_M)}\right)$ and $\bar{S}= a_M \frac{MBG}{\sigma}\frac{2-\mu_M}{1-\mu_M}$. \end{lemma} \begin{proof} The proof follows by using constant stepsizes in~\cite[Lemma~7 and Lemma~8]{farina2019randomized}. \end{proof} In the next section, in order to prove convergence to the optimal cost with a linear rate, we will need the following result ensuring the boundedness of a particular quantity. In particular, given a scalar $c\in(0,1)$, let us define \begin{equation} \beta(t)\triangleq\sum_{\tau=0}^t c^{t-\tau} \mathbb{E}[\|x_i(\tau)-\bar{x}(\tau)\|]. \end{equation} Then, the next lemma provides a bound on $\beta(t)$ for all $t$. \begin{lemma}\label{lemma:x_constant} Let Assumptions~\ref{assumption:problem_structure},~\ref{assumption:communication} and~\ref{assumption:random_variables} hold.
Then, for any scalar $c\in (0,1)$, \begin{enumerate}[label=(\roman*)] \item if $c\neq\mu_M$, \begin{align} \beta(t)&\leq c^{t}\left(C + \frac{\bar{R}}{c-\mu_M}\right) +\frac{1-c^t}{1-c}\bar{S}, \label{eq:xbound_sub} \end{align} \item if $c=\mu_M$, \begin{align} \beta(t)&\leq c^{t}\left(C + \frac{t\bar{R}}{c}\right) +\frac{1-c^t}{1-c}\bar{S},\label{eq:xbound_sub_eq} \end{align} \end{enumerate} for all $i\in\until{N}$ and all $t$. \end{lemma} \begin{proof} By using Assumption~\ref{assumption:random_variables}\ref{assumption:initial}, for $\tau=0$, one has \begin{align} \mathbb{E}&[\|x_i(0)-\bar{x}(0)\|]\leq \mathbb{E}[\|x_i(0)\|] +\mathbb{E}[\|\bar{x}(0)\|]\nonumber\\ &\leq C_i +\frac{1}{N}\sum_{j=1}^N C_j \leq C_i +\max_j C_j \leq C.\label{eq:x0} \end{align} Hence, $\beta(t)\leq c^t C +\sum_{\tau=1}^t c^{t-\tau}\mathbb{E}[\|x_i(\tau)-\bar{x}(\tau)\|]$ and, from Lemma~\ref{lemma:xi-bx}, we have \begin{align} \beta(t)\leq c^t C + \bar{R} \sum_{\tau=1}^t c^{t-\tau}\mu_M^{\tau-1}+\bar{S}\sum_{\tau=1}^t c^{t-\tau}.\label{eq:start} \end{align} Let us consider the case $c\neq \mu_M$. By using Lemma~\ref{lemma:series}, one easily gets \begin{align*} \beta(t)&\leq c^{t}C + \frac{c^t-\mu_M^t}{c-\mu_M}\bar{R} +\frac{1-c^t}{1-c}\bar{S}\\ &\leq c^{t}\left(C + \frac{\bar{R}}{c-\mu_M}\right) +\frac{1-c^t}{1-c}\bar{S}, \end{align*} where in the second line we have removed the negative term depending on $\mu_M^t$. For the case $c= \mu_M$, we have \begin{equation} \sum_{\tau=1}^t c^{t-\tau}\mu_M^{\tau-1} = \sum_{\tau=1}^{t} c^{t-1} = t c^{t-1},\label{eq:p1_eq} \end{equation} and~\eqref{eq:xbound_sub_eq} is obtained by substituting~\eqref{eq:p1_eq} in~\eqref{eq:start} and using Lemma~\ref{lemma:series}. \end{proof} \subsection{Reaching optimality} Let us start by defining the Lyapunov-like function \begin{equation}\label{eq:V} V_i(\tau) \triangleq \sum_{\ell=1}^B \pi_{i,\ell}^{-1}\nu_{\ell}(x_{i,\ell}(\tau),x_{\ell}^\star), \end{equation} where $x^\star$ denotes the (unique) optimal solution of problem~\eqref{pb:problem}, and let $V(t) \triangleq \sum_{i=1}^N V_i(t)$.
Moreover, define \begin{equation} f_\textsl{best}(\bar{x}(t))\triangleq\min_{\tau\leq t} \mathbb{E}[f(\bar{x}(\tau))] \end{equation} and $\pi_m\triangleq\min_{i,\ell}\pi_{i,\ell}$. Then, the following result holds and will be key in proving the linear convergence rate of the Distributed Block Proximal Method under the previous assumptions. \begin{lemma}\label{lemma:strongly} Let Assumptions~\ref{assumption:problem_structure},~\ref{assumption:communication},~\ref{assumption:proximal_functions} and~\ref{assumption:random_variables} hold. Moreover, let $\alpha_i\leq\frac{Q}{m}$ for all $i$. Then, for all $t$, \begin{align} \mathbb{E}[V(t&+1)]\leq \left(1-\frac{m a_m \pi_m}{Q}\right)\mathbb{E}[V(t)]- \sum_{i=1}^N \alpha_i\left( \mathbb{E}[f_i(y_i(t))] - f_i(x^\star)\right) + \frac{a_M^2 \bar{G}}{2\sigma}.\label{eq:lemma_strongly} \end{align} \end{lemma} \begin{proof} In order to simplify the notation, let us denote $\textsl{g}_i(t)=\textsl{g}_i(y_i(t))$. By using the same arguments as in the proof of~\cite[Theorem~1]{farina2019randomized}, we have \begin{align} &\mathbb{E}[V_i(t+1)\mid\mS(t)]\leq V_i(t) - \sum_{\ell=1}^B\nu_{\ell}(x_{i,\ell}(t),x_{\ell}^\star) + \sum_{\ell=1}^B\nu_{\ell}(y_{i,\ell}(t),x_{\ell}^\star) -\alpha_i \langle \textsl{g}_i(t),y_i(t)-x^\star\rangle+\frac{\alpha_i^2 \bar{G}_i}{2\sigma}. \label{eq:start2} \end{align} Now, by exploiting Assumptions~\ref{assumption:problem_structure}\ref{assumption:strong_convexity} and~\ref{assumption:proximal_functions}\ref{assumption:quadratic_growth} (cf.~\eqref{eq:assumption_growth}), one has that, for all $t$, \begin{align} \alpha_i \langle \textsl{g}_i(t),y_i(t)-x^\star\rangle &\geq \alpha_i \left( f_i(y_i(t))-f_i(x^\star) + \frac{m}{2}\|y_i(t)-x^\star\|^2 \right)\nonumber\\ &\geq \alpha_i \left( f_i(y_i(t))-f_i(x^\star)\right) + \frac{m\alpha_i}{Q}\sum_{\ell=1}^B\nu_\ell(y_{i,\ell}(t),x_{\ell}^\star).
\label{eq:p1} \end{align} Now, by using~\eqref{eq:p1} in~\eqref{eq:start2} and by exploiting the fact that $\alpha_i\leq\frac{Q}{m}$, we get \begin{align} \mathbb{E}[V_i(t+1)\mid\mS(t)] &\leq V_i(t) - \sum_{\ell=1}^B\nu_\ell(x_{i,\ell}(t),x_{\ell}^\star) + \left(1-\frac{m\alpha_i}{Q}\right)\sum_{\ell=1}^B\nu_\ell(y_{i,\ell}(t),x_{\ell}^\star)\nonumber\\ &\hspace{3ex}-\alpha_i \left( f_i(y_i(t))-f_i(x^\star)\right)+\frac{\alpha_i^2 \bar{G}_i}{2\sigma}\nonumber\\ &\leq V_i(t) - \sum_{\ell=1}^B\nu_\ell(x_{i,\ell}(t),x_{\ell}^\star) + \left(1-\frac{m\alpha_i}{Q}\right)\sum_{j=1}^N w_{ij}\sum_{\ell=1}^B\nu_\ell(x_{j,\ell}(t),x_{\ell}^\star)\nonumber\\ &\hspace{3ex}-\alpha_i \left( f_i(y_i(t))-f_i(x^\star)\right)+\frac{\alpha_i^2 \bar{G}_i}{2\sigma}, \end{align} where in the second inequality we used Assumption~\ref{assumption:proximal_functions}\ref{assumption:separate}. If we now sum over $i$, we obtain \begin{align} \sum_{i=1}^N\mathbb{E}[V_i(t+1)\mid\mS(t)]\leq &\sum_{i=1}^N V_i(t) - \sum_{i=1}^N \sum_{\ell=1}^B\nu_\ell(x_{i,\ell}(t),x_{\ell}^\star) \nonumber\\ &+ \sum_{i=1}^N \left(1-\frac{m\alpha_i}{Q}\right)\sum_{j=1}^N w_{ij}\sum_{\ell=1}^B\nu_\ell(x_{j,\ell}(t),x_{\ell}^\star)\nonumber\\ &-\sum_{i=1}^N\alpha_i \left( f_i(y_i(t))-f_i(x^\star)\right)+\sum_{i=1}^N\frac{\alpha_i^2 \bar{G}_i}{2\sigma}.
\end{align} Now, by using the fact that $a_m\leq \alpha_i\leq a_M$ for all $i$, the double stochasticity of $W$ from Assumption~\ref{assumption:communication}, and the definition of $\bar{G}$, one easily obtains \begin{align} \sum_{i=1}^N\mathbb{E}[V_i(t+1)\mid\mS(t)] &\leq \sum_{i=1}^N V_i(t) -\frac{m a_m}{Q}\sum_{i=1}^N \sum_{\ell=1}^B\nu_\ell(x_{i,\ell}(t),x_{\ell}^\star)-\sum_{i=1}^N\alpha_i \left( f_i(y_i(t))-f_i(x^\star)\right)+\frac{a_M^2 \bar{G}}{2\sigma}.\label{eq:V_part} \end{align} Moreover, by using~\eqref{eq:V} we can rewrite \begin{align} \sum_{i=1}^N V_i(t) -\frac{m a_m}{Q}\sum_{i=1}^N \sum_{\ell=1}^B\nu_{\ell}(x_{i,\ell}(t),x_{\ell}^\star) &= \sum_{i=1}^N \sum_{\ell=1}^B \left(\pi_{i,\ell}^{-1}\nu_{\ell}(x_{i,\ell}(t),x_{\ell}^\star) -\frac{m a_m}{Q}\nu_{\ell}(x_{i,\ell}(t),x_{\ell}^\star)\right)\nonumber\\ &\leq\left(1- \frac{m \pi_m a_m}{Q}\right) V(t),\label{eq:Vdec} \end{align} where we have used the fact that $\sum_{\ell=1}^B\pi_m^{-1}\nu_{\ell}(a,b)\geq\sum_{\ell=1}^B\pi_{i,\ell}^{-1}\nu_{\ell}(a,b)$. Finally, by plugging~\eqref{eq:Vdec} into~\eqref{eq:V_part} and by using the tower property of conditional expectation, one gets~\eqref{eq:lemma_strongly}. \end{proof} Thanks to the previous results, we are now ready to state and prove the main result of this paper. \begin{theorem}\label{theorem:bound} Let Assumptions~\ref{assumption:problem_structure},~\ref{assumption:communication},~\ref{assumption:proximal_functions} and~\ref{assumption:random_variables} hold. Moreover, let $\alpha_i\leq\frac{Q}{m}$ for all $i$ and let $c \triangleq \left(1-\frac{m a_m \pi_m}{Q}\right)$.
Then, \begin{enumerate} \item if $c\neq\mu_M$, \begin{align} f_\textsl{best}(\bar{x}(t))-f(x^\star)&\leq\frac{c^{t}}{1-c^{t+1}}(\bar{Q}+R_1)+S,\label{eq:f_bound} \end{align} \item if $c=\mu_M$, \begin{align} f_\textsl{best}(\bar{x}(t))-f(x^\star)&\leq \frac{c^{t}}{1-c^{t+1}} \left(\bar{Q} +tR_2\right) +S,\label{eq:f_bound_eq} \end{align} \end{enumerate} where $\bar{Q}\triangleq(1-c)\left(\frac{\mathbb{E}[V(0)]}{a_m} + 3GC\right)$, $R_1\triangleq \frac{(1-c)3G\bar{R}}{c-\mu_M}$, $R_2\triangleq \frac{(1-c)3G\bar{R}}{c}$, and $S\triangleq\frac{ a_M^2 \bar{G}}{2\sigma a_m} + 3G\bar{S}$. \end{theorem} \begin{proof} By recursively applying~\eqref{eq:lemma_strongly}, one has \begin{align} \sum_{\tau=0}^{t}c^{t-\tau}&\sum_{i=1}^N \alpha_i\left( \mathbb{E}[f_i(y_i(\tau))] - f_i(x^\star)\right) \leq c^{t+1} \mathbb{E}[V(0)] + \sum_{\tau=0}^{t}c^{t-\tau} \frac{a_M^2 \bar{G}}{2\sigma}\nonumber \end{align} Moreover, since $a_m\leq \alpha_i$ for all $i$, \begin{align} \sum_{\tau=0}^{t}c^{t-\tau} a_m \sum_{i=1}^N\left( \mathbb{E}[f_i(y_i(\tau))] - f_i(x^\star)\right) &\leq\sum_{\tau=0}^{t}c^{t-\tau}\sum_{i=1}^N \alpha_i\left( \mathbb{E}[f_i(y_i(\tau))] - f_i(x^\star)\right)\nonumber\\ &\leq c^{t+1} \mathbb{E}[V(0)] + \sum_{\tau=0}^{t}c^{t-\tau} \frac{a_M^2 \bar{G}}{2\sigma}\nonumber\\ &= c^{t+1} \mathbb{E}[V(0)] + \frac{a_M^2 \bar{G}}{2\sigma}\frac{1-c^{t+1}}{1-c},\label{eq:fsum} \end{align} where in the last line we used Lemma~\ref{lemma:series}, which applies since the assumption $\alpha_i\leq\frac{Q}{m}$ guarantees $c\in(0,1)$. Then, from the convexity of $f$ we have that, at any iteration $t$, \begin{align} \sum_{\tau=0}^t c^{t-\tau} a_m \left( \mathbb{E}[f(\bar{x}(\tau))]-f(x^\star) \right) &\geq \left(a_m\sum_{\tau=0}^t c^{t-\tau} \right)\left(\min_{\tau\leq t} \mathbb{E}[f(\bar{x}(\tau))]-f(x^\star)\right)\nonumber\\ &=\left(a_m\frac{1-c^{t+1}}{1-c} \right)\left(f_\textsl{best}(\bar{x}(t))-f(x^\star)\right),\label{eq:xstar} \end{align} where we used Lemma~\ref{lemma:series} and the definition of $f_\textsl{best}$.
Now, by manipulating the term $\mathbb{E}[f(\bar{x}(\tau))]-f(x^\star)=\mathbb{E}[f(\bar{x}(\tau))-f(x^\star)]$ as in~\cite[Theorem~1]{farina2019randomized}, we get \begin{align} \mathbb{E}[f(\bar{x}(\tau))-f(x^\star)]\leq\sum_{i=1}^N \mathbb{E}[\left(f_i(y_i(\tau))-f_i(x^\star)\right)] +\sum_{i=1}^N G_i\left(\mathbb{E}[\|y_i(\tau)-x_i(\tau)\|]+\mathbb{E}[\|x_i(\tau)-\bar{x}(\tau)\|]\right).\label{eq:x_i-xstar} \end{align} In the case $c\neq \mu_M$, by substituting~\eqref{eq:x_i-xstar} in~\eqref{eq:xstar} and by using~\eqref{eq:fsum} and Lemma~\ref{lemma:x_constant}, one has \begin{align*} \left(a_m\frac{1-c^{t+1}}{1-c} \right)&(f_\textsl{best}(\bar{x}(t))-f(x^\star))\leq c^{t+1} \mathbb{E}[V(0)] + \frac{a_M^2 \bar{G}}{2\sigma}\frac{1-c^{t+1}}{1-c}+ 3 a_m G\left( c^{t}\left(C + \frac{\bar{R}}{c-\mu_M}\right) +\frac{1-c^t}{1-c}\bar{S} \right). \end{align*} Now, by dividing both sides by $a_m$ and rearranging terms, one has \begin{align*} \left(\frac{1-c^{t+1}}{1-c} \right)(f_\textsl{best}(\bar{x}(t))-f(x^\star)) &\leq c^{t+1} \frac{\mathbb{E}[V(0)]}{a_m} + c^{t} 3G\left(C + \frac{\bar{R}}{c-\mu_M}\right)+ \frac{1-c^{t+1}}{1-c} \frac{a_M^2 \bar{G}}{2\sigma a_m} +\frac{1-c^t}{1-c}3G\bar{S} \nonumber\\ & \leq c^{t} \left(\frac{\mathbb{E}[V(0)]}{a_m} + 3GC + \frac{3G\bar{R}}{c-\mu_M}\right) + \frac{1-c^{t+1}}{1-c} \left(\frac{a_M^2 \bar{G}}{2\sigma a_m} + 3G\bar{S}\right), \end{align*} where in the second line we used the fact that $c\leq1$. Finally,~\eqref{eq:f_bound} is obtained by dividing both sides by $\frac{1-c^{t+1}}{1-c}$. The case $c=\mu_M$ can be proven in a similar way.
In fact, by using the same arguments as before, we have \begin{align*} \left(\frac{1-c^{t+1}}{1-c} \right)(f_\textsl{best}(\bar{x}(t))-f(x^\star)) &\leq c^{t} \frac{\mathbb{E}[V(0)]}{a_m} + c^{t} 3G\left(C + \frac{t\bar{R}}{c}\right) + \frac{1-c^{t+1}}{1-c} \left(\frac{a_M^2 \bar{G}}{2\sigma a_m} + 3G\bar{S}\right)\nonumber\\ & \leq c^{t} \left(\frac{\mathbb{E}[V(0)]}{a_m} + 3GC \right) + t c^{t}\frac{3G\bar{R}}{c} + \frac{1-c^{t+1}}{1-c} \left(\frac{a_M^2 \bar{G}}{2\sigma a_m} + 3G\bar{S}\right), \end{align*} thus leading to~\eqref{eq:f_bound_eq} after dividing both sides by $\frac{1-c^{t+1}}{1-c}$. \end{proof} Notice that Theorem~\ref{theorem:bound} implies that convergence up to a constant error is attained, i.e., defining $\tilde{f}^\star \triangleq f(x^\star) + S$, one has \begin{equation} \lim_{t\to\infty} f_\textsl{best}(\bar{x}(t))-\tilde{f}^\star = 0. \end{equation} Moreover, the convergence rate is linear. In fact, recall that $c\in(0,1)$. Then, if $c\neq \mu_M$, one has \begin{align*} \lim_{t\to\infty}\frac{f_\textsl{best}(\bar{x}(t+1))-\tilde{f}^\star}{f_\textsl{best}(\bar{x}(t))-\tilde{f}^\star} &\leq \lim_{t\to\infty}\frac{\frac{c^{t+1}}{1-c^{t+2}}}{\frac{c^{t}}{1-c^{t+1}}} = c, \end{align*} while, if $c=\mu_M$, \begin{align*} \lim_{t\to\infty}\frac{f_\textsl{best}(\bar{x}(t+1))-\tilde{f}^\star}{f_\textsl{best}(\bar{x}(t))-\tilde{f}^\star} &\leq \lim_{t\to\infty}\frac{\frac{c^{t+1}}{1-c^{t+2}}\left(\bar{\beta} +(t+1)\bar{\eta}\right)}{\frac{c^{t}}{1-c^{t+1}}\left(\bar{\beta} +t\bar{\eta}\right)} = c, \end{align*} for suitable positive constants $\bar{\beta}$ and $\bar{\eta}$ (cf.~\eqref{eq:f_bound_eq}). \begin{remark} Our block-wise algorithm has two main benefits, in terms of communication and computation respectively. First, when a limited bandwidth is available in the communication channels, data exceeding the communication bandwidth must be transmitted sequentially in classical algorithms. For example, if only one block fits in the communication channel, our algorithm performs an update at each communication round, while classical ones need $B$ communication rounds per update.
Second, in general, solving the minimization problem in~\eqref{eq:x_update} over the entire optimization variable or over a single block results in completely different computational times. \end{remark} \begin{figure}[t!] \centering \includegraphics[width=0.5\columnwidth]{results} \caption{Numerical example: Evolution of the cost error normalized by the number of blocks.} \label{fig:cost} \end{figure} \section{Numerical example}\label{sec:experiment} As a numerical example, we consider a learning problem in which agents have to classify samples belonging to two clusters. Formally, each agent $i\in\until{N}$ has $m_i$ training samples $q_i^1,\dots,q_{i}^{m_i}\in\mathbb{R}^d$, each of which has an associated binary label $b_{i}^r\in\{-1,1\}$, $r\in\until{m_i}$. The goal of the agents is to compute in a distributed way a linear classifier from the training samples, i.e., to find a hyperplane of the form $\{z\in\mathbb{R}^{d}\mid \langle \theta, z\rangle + \theta_0=0\}$, with $\theta\in\mathbb{R}^d$ and $\theta_0\in\mathbb{R}$, that best separates the training data. For notational convenience, let $x=[\theta^\top, \theta_0]^\top\in\mathbb{R}^{d+1}$ and $\hat{q}_{i}^r=[(q_{i}^r)^\top, 1]^\top$. Then, this problem can be addressed by solving the following convex optimization problem, in which a regularized hinge loss is used as cost function, \begin{equation*}\label{pb:regression} \begin{aligned} &\mathop{\textrm{minimize}}_{x\in\mathbb{R}^{d+1}} & & \sum_{i=1}^N\frac{1}{m_i}\sum_{r=1}^{m_i}\max\left(0, 1-b_{i}^r\langle x,\hat{q}_{i}^r\rangle\right)+\frac{\lambda}{2}\|x\|^2, \end{aligned} \end{equation*} where $\lambda>0$ is the regularization weight.
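For this cost, a stochastic subgradient is easy to write down: sample one local training pair uniformly and differentiate the regularized hinge term. A hedged Python sketch (our own helper, not part of DISROPT):

```python
import numpy as np

def hinge_subgradient(x, q_hat, b, lam, N):
    """Subgradient at x of max(0, 1 - b*<x, q_hat>) + (lam/(2N))*||x||^2
    for one training pair (q_hat, b), b in {-1, +1}; this is the
    per-sample term of h_i evaluated on a uniformly drawn local sample."""
    g = (lam / N) * x                  # gradient of the regularizer
    if 1.0 - b * (x @ q_hat) > 0.0:    # hinge active: margin violated
        g = g - b * q_hat              # subgradient of the hinge term
    return g

x = np.zeros(3)                        # [theta, theta_0] stacked
g = hinge_subgradient(x, np.array([1.0, -1.0, 1.0]), b=1, lam=1.0, N=48)
# at x = 0 the margin is violated, so g = -b * q_hat
```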
This problem can be written in the form of~\eqref{pb:problem} by defining $\xi_i^r=(\hat{q}_{i}^r,b_i^r)$ and \begin{align*} \mathbb{E}[h_i(x;\xi_i)] &=\frac{1}{m_i}\sum_{r=1}^{m_i}\left(\max\left(0, 1-b_{i}^r\langle x,\hat{q}_{i}^r\rangle\right)+\frac{\lambda}{2N}\|x\|^2\right) \end{align*} for all $i\in\until{N}$. In fact, as long as each sample $\xi_i^r$ is drawn uniformly from the local dataset, Assumption~\ref{assumption:problem_structure}\ref{subg} is satisfied. We implemented the algorithm in DISROPT~\cite{farina2019disropt} and tested it in this scenario with $N=48$ agents, $x\in\mathbb{R}^{50}$ and different numbers of blocks, namely $B\in\{1,2,5,10,25\}$. We generated a synthetic dataset composed of $480$ points and assigned $10$ of them to each agent, i.e., $m_1=\dots=m_N=10$. Agents communicate according to a connected graph generated according to an Erd\H{o}s-R\'{e}nyi random model with connectivity parameter $p=0.5$. The corresponding weight matrix is built by using the Metropolis-Hastings rule. Finally, we set $\lambda=1$, $p_{i,\ell}=1/B$ for all $i$ and all $\ell$, $p_{i,on}=0.95$ for all $i$, and local (constant) stepsizes $\alpha_i$ randomly chosen according to a normal distribution with mean $0.005$ and standard deviation $10^{-4}$. The evolution of the cost error normalized by the number of blocks is reported in Figure~\ref{fig:cost} for the considered block numbers. The linear convergence rate is clearly visible in the figure and confirms the theoretical analysis. \section{Conclusions}\label{sec:conclusion} In this paper, we studied the behavior of the Distributed Block Proximal Method when applied to problems involving (non-smooth) strongly convex functions and when agents in the network employ constant stepsizes. A linear convergence rate (with a constant error) has been obtained in terms of the expected distance from the optimal cost. A numerical example involving a learning problem confirmed the theoretical analysis.
\bibliographystyle{IEEEtran}
\section{Introduction and summary} A \emph{finite category} over a fixed field $k$, which we assume to be algebraically closed throughout, is an Abelian category enriched over finite-dimensional $k$-vector spaces which has enough projective objects and finitely many isomorphism classes of simple objects; moreover, one requires that every object has finite length. Any finite category $\cat{C}$ comes with a right exact endofunctor \begin{align} \catf{N}^\catf{r} : \cat{C}\longrightarrow\cat{C}\ , \quad X \longmapsto \catf{N}^\catf{r} X:=\int^{Y\in \cat{C}} \cat{C}(X,Y)^* \otimes Y \ , \end{align} the \emph{(right) Nakayama functor}, where $\otimes$ denotes the tensoring of objects in $\cat{C}$ with finite-dimensional vector spaces. This Morita-invariant description was given in \cite{fss}, and it reduces to the usual definition of the (right) Nakayama functor for the category of finite-dimensional modules over a finite-dimensional $k$-algebra. As a consequence of the coend description of $\catf{N}^\catf{r}$, we obtain in Corollary~\ref{cortheiso} natural isomorphisms \begin{align} \cat{C}(P,X) \cong \cat{C}(X,\catf{N}^\catf{r} P)^* \quad \text{for} \quad X\in\cat{C}\ , \quad P\in \operatorname{\catf{Proj}} \cat{C} \ \end{align} turning the subcategory $\operatorname{\catf{Proj}}\cat{C}\subset \cat{C}$ into an $\catf{N}^\catf{r}$-twisted Calabi-Yau category. Through the correspondence between (twisted) Calabi-Yau structures and (twisted) traces, one obtains the trace \begin{align}\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k \quad \text{for}\quad P\in \operatorname{\catf{Proj}} \cat{C} \ . \label{eqngentrace} \end{align} It is now a natural task to relate this rather generically constructed trace (or rather family of traces) to the modified trace \cite{geerpmturaev,mtrace1,mtrace2,mtrace3,mtrace,bbg18}.
Modified traces are not only a concept of independent algebraic interest, but can also be used for the construction of invariants of closed three-dimensional manifolds, see e.g.\ \cite{cgp,bcgpm}. In such constructions, they serve as a non-semisimple replacement for quantum traces. In this article, we prove in Theorem~\ref{thmmtrace} that, for a finite tensor category in the sense of Etingof-Ostrik \cite{etingofostrik}, i.e.\ a finite category with a rigid monoidal product and simple unit, the trace~\eqref{eqngentrace} on the tensor ideal of projective objects indeed produces a twisted modified trace. \begin{reptheorem}{thmmtrace} For any finite tensor category $\cat{C}$, the twisted trace $ (\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k)_{P\in \operatorname{\catf{Proj}}\cat{C}}$ from \eqref{eqngentrace} is (twisted) cyclic, non-degenerate and satisfies a generalized partial trace property. Under the additional assumption that a pivotal structure has been chosen on the finite tensor category $\cat{C}$, the twisted trace $ (\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k)_{P\in \operatorname{\catf{Proj}}\cat{C}}$ can be naturally identified with a right modified $D$-trace, where $D\in\cat{C}$ is the distinguished invertible object of $\cat{C}$. \end{reptheorem} This crucially uses the fact that, by \cite[Theorem~4.26]{fss}, the Nakayama functor of a finite tensor category can be expressed as \begin{align} \catf{N}^\catf{r} \cong D^{-1}\otimes -^{\vee \vee} \end{align} using the distinguished invertible object $D\in \cat{C}$ \cite{eno-d} and the double dual functor $-^{\vee \vee}$. Note that we do not require a pivotal structure to define our traces because the double dual functor can be conveniently absorbed into the Nakayama functor. If $\cat{C}$ is pivotal, however, we recover the usual definitions.
Our motivation for unraveling the connection between the Nakayama functor and the modified trace is topological, but does not directly come from invariants of closed three-dimensional manifolds. Instead, we are motivated by two-dimensional topological conformal field theory, a certain type of differential graded two-dimensional open-closed topological field theory: Suppose that we are given a finite tensor category $\cat{C}$ and a \emph{symmetric Frobenius structure}, by which we mean a certain trivialization of the right Nakayama functor as right $\cat{C}$-module functor relative to a pivotal structure (we give the details in Definition~\ref{defsymfrob}; it will amount to a pivotal structure and a trivialization of the distinguished invertible object). Then the trace coming from \emph{this} particular trivialization of $\catf{N}^\catf{r}$ produces, as discussed above, a Calabi-Yau structure on the tensor ideal $\operatorname{\catf{Proj}} \cat{C}\subset \cat{C}$. To this Calabi-Yau structure on $\operatorname{\catf{Proj}} \cat{C}$, Costello's Theorem \cite{costellotcft} associates a topological conformal field theory $\Phiit_\cat{C}$ that we refer to as the \emph{trace field theory} of the finite tensor category $\cat{C}$ with symmetric Frobenius structure. On a technical level, $\Phiit_\cat{C}:\catf{OC}\longrightarrow\catf{Ch}_k$ is a symmetric monoidal functor from a certain differential graded version of the open-closed bordism category to chain complexes; we recall the details in Section~\ref{sectracefieldtheory}. If we evaluate $\Phiit_\cat{C}$ on the \emph{open} part of the two-dimensional bordism category, $\Phiit_\cat{C}$ provides topological tools to compute with traces, but only captures information that one could have obtained by hand. 
This is drastically different for the \emph{closed} part of the two-dimensional bordism category: On a closed boundary component, i.e.\ on the circle, we obtain, following again \cite{costellotcft}, the Hochschild complex of $\cat{C}$, i.e.\ the homotopy coend $\int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X)$ over the endomorphism spaces of projective objects. On this complex, we have an action of the prop provided by the chains on moduli spaces of Riemann surfaces with closed boundary components. Phrased differently, the trace field theory $\Phiit_\cat{C}$ captures the higher structures induced by the modified trace on the Hochschild complex of $\cat{C}$ while, at the same time, being very accessible through the tools available for computations with Nakayama functors. These higher structures will be developed in detail elsewhere. For the present article, no homotopy theory is needed, and we focus entirely on the purely linear consequences, i.e.\ on structures induced in homological degree zero. \begin{reptheorem}{thmtracefieldtheory} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure and $\Phiit_\cat{C}:\catf{OC}\longrightarrow \catf{Ch}_k$ its trace field theory. 
The evaluation of $\Phiit_\cat{C}$ on the disk with one incoming open boundary interval whose complementing free boundary carries the label $P\in\operatorname{\catf{Proj}}\cat{C}$ \begin{equation} \Phiit_\cat{C} \left( \tikzfig{diskohne} \right)\ : \ \cat{C}(P,P)\longrightarrow k \end{equation} is a right modified trace, while the evaluation of $\Phiit_\cat{C}$ on the cylinder with one incoming open boundary interval with complementing free boundary label $P\in\operatorname{\catf{Proj}}\cat{C}$ and one outgoing closed boundary circle \begin{align} \Phiit_\cat{C} \left( \tikzfig{ht} \right)\ : \ \cat{C}(P,P)\longrightarrow \int_\mathbb{L}^{P\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(P,P) \end{align} agrees, after taking zeroth homology, with the Hattori-Stallings trace of $\cat{C}$. \end{reptheorem} By evaluation of $\Phiit_\cat{C}$ on the pair of pants, we obtain a non-unital multiplication $\star$ on the Hochschild complex (it will generally not have a unit because the bordism that would normally give us a unit is not admitted in Costello's category $\catf{OC}$). From results of Wahl and Westerland \cite{wahlwesterland}, we can conclude that this multiplication is supported, up to homotopy, in degree zero. Moreover, it is homotopy commutative by construction. Besides the connection between the Nakayama functor and the modified trace, the construction of this multiplication, or rather of its degree zero remnant, is one of the main results of this short article and will be one of the key ingredients for future work. In the present article, we prove that the product $\star$ is block diagonal (Proposition~\ref{propdiag}) and provide a formula for $\star$ (when evaluated on identity morphisms) involving the \emph{handle elements} \begin{align} \xi_{P,Q} := \Phiit_\cat{C}\left( \tikzfig{handleelement} \right)\in\cat{C}(P,P) \quad\text{for}\quad P,Q\in\operatorname{\catf{Proj}}\cat{C} \label{eqnhandleelementintro} \end{align} of the trace field theory. 
\begin{reptheorem}{thmhandleelement} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure. \begin{pnum} \item Let $P,Q\in\operatorname{\catf{Proj}}\cat{C}$. Up to boundary in the Hochschild complex of $\cat{C}$, the $\star$-product of $\mathrm{id}_P$ and $\mathrm{id}_Q$ is the handle element $\xi_{P,Q}$ of $P$ and $Q$: \begin{align} \mathrm{id}_P \star \mathrm{id}_Q \simeq \xi_{P,Q} \ . \end{align} \item All handle elements in the sense of~\eqref{eqnhandleelementintro} are central elements in the endomorphism algebras of $\cat{C}$. \item The modified trace of the handle element is given by \begin{align} \catf{t}_P \xi_{P,Q}=\mathrm{dim} \,\cat{C}(P,Q) \ . \label{eqntraceformulai} \end{align} \end{pnum} \end{reptheorem} Formula~\eqref{eqntraceformulai} tells us that the modified traces of the handle elements recover the entries of the Cartan matrix of $\cat{C}$. If $P$ is simple, the handle element can be identified with the number \begin{align} \xi_{P,Q} = \frac{\mathrm{dim}\, \cat{C}(P,Q)}{d^\text{m} (P)} \in k \ , \end{align} where $d^\text{m} (P):=\catf{t}_P(\mathrm{id}_P)\in k^\times$ is the modified dimension of $P$. If, for an endomorphism $f:P\longrightarrow P$ of a projective object $P$, we denote the Hattori-Stallings trace by $\catf{HS}(f)\in HH_0(\cat{C})$, we obtain the following statement in homology: \begin{repcorollary}{corhs} For any finite tensor category $\cat{C}$ with symmetric Frobenius structure, \begin{align} \catf{t} (\catf{HS} (\mathrm{id}_P) \star \catf{HS} (\mathrm{id}_Q) ) = \mathrm{dim}\, \cat{C}(P,Q) \quad \text{for}\quad P,Q \in \operatorname{\catf{Proj}} \cat{C} \ . \end{align} \end{repcorollary} Here we denote the map on $HH_0(\cat{C})$ induced by the modified trace again by $\catf{t}$. \subparagraph{Conventions.} As already mentioned above, for the entire article, we fix an algebraically closed field $k$ (which is not assumed to have characteristic zero). 
Concerning the convention on left and right duality, we follow \cite{egno}: In a \emph{rigid} monoidal category $\cat{C}$, every object $X \in \cat{C}$ has \begin{itemize} \item a \emph{left dual} $X^\vee$ that comes with an evaluation $d_X:X^\vee \otimes X\longrightarrow I$ and a coevaluation $b_X:I\longrightarrow X\otimes X^\vee$ subject to the usual zigzag identities, \item and a \emph{right dual ${^\vee \! X}$} that comes with an evaluation $\widetilde d_X : X\otimes {^\vee \! X} \longrightarrow I$ and a coevaluation $\widetilde b_X : I \longrightarrow {^\vee \! X}\otimes X$ again subject to the usual zigzag identities. \end{itemize} \subparagraph{Acknowledgments.} We would like to thank Jürgen Fuchs, Lukas Müller and Nathalie Wahl for helpful discussions. CS is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306. LW gratefully acknowledges support by the Danish National Research Foundation through the Copenhagen Centre for Geometry and Topology (DNRF151). \subparagraph{Note.} While finalizing this manuscript, the preprint \cite{ss21} appeared, which provides a proof of a connection between the Nakayama functor and modified traces very similar to the one afforded by Theorem~\ref{thmmtrace}. \section{Traces on finite categories\label{sectracesfin}} For any finite category $\cat{C}$, the (right) Nakayama functor $\catf{N}^\catf{r} : \cat{C}\longrightarrow\cat{C}$ is given by \begin{align} \catf{N}^\catf{r} X:= \int^{Y\in \cat{C}} \cat{C}(X,Y)^* \otimes Y \ . \label{defeqnnaka} \end{align} This is the Morita invariant description, given in \cite{fss}, of the usual Nakayama functor for finite-dimensional modules over a finite-dimensional algebra $A$, which is given by \begin{align}\label{eqnnakayamaalg} \catf{N}^\catf{r} X= \mathrm{Hom}_A(X,A)^*\cong A^* \otimes_A X \end{align} for any finite-dimensional $A$-module $X$. 
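As an illustration (a standard special case, not needed in the sequel): if $A$ is a \emph{symmetric} Frobenius algebra, for instance the group algebra of a finite group, then $A\cong A^*$ as $A$-$A$-bimodules, and~\eqref{eqnnakayamaalg} gives a trivialization \begin{align} \catf{N}^\catf{r} X \cong A^* \otimes_A X \cong A \otimes_A X \cong X \ . \end{align} For a general Frobenius algebra $A$, one only has $A\cong A^*$ as one-sided modules, and $\catf{N}^\catf{r}$ amounts to twisting the module structure by the Nakayama automorphism of $A$.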
The Nakayama functor sends projective objects to injective objects. \begin{proposition}\label{propdualhom} For any finite category $\cat{C}$, there is a canonical isomorphism of chain complexes \begin{align} \cat{C}(X,Y_\bullet)^* \cong \cat{C}(Y_\bullet, \catf{N}^\catf{r} X ) \end{align} natural in objects $X,Y\in\cat{C}$, where $Y_\bullet$ is a projective resolution of $Y$. \end{proposition} \begin{proof} Since every finite category is equivalent to the category of finite-dimensional modules over a finite-dimensional algebra, we conclude from the comparison of~\eqref{defeqnnaka} and~\eqref{eqnnakayamaalg} that the right hand side of~\eqref{defeqnnaka} can be modeled as a \emph{finite} colimit. Since $\cat{C}(Y_\bullet,-)$ is exact, the finite colimit used to define $\catf{N}^\catf{r} X$ is preserved, which leads to \begin{align} \cat{C}(Y_\bullet, \catf{N}^\catf{r} X )\cong \int^{Z\in\cat{C}} \cat{C}(Y_\bullet,\cat{C}(X,Z)^*\otimes Z)\cong \int^{Z\in\cat{C}} \cat{C}(Y_\bullet, Z)\otimes \cat{C}(X,Z)^*\cong \cat{C}(X,Y_\bullet)^* \ . \end{align} All coends are computed degree-wise here. In the last step, we have used the Yoneda Lemma. \end{proof} The isomorphisms from Proposition~\ref{propdualhom} can be used to obtain a twisted Calabi-Yau structure on $\operatorname{\catf{Proj}} \cat{C}$. \begin{definition} An \emph{$(F,G)$-twisted Calabi-Yau category} is a linear category $\cat{A}$ with endofunctors $F,G:\cat{A}\longrightarrow\cat{A}$ and isomorphisms $\cat{A}(F(X),Y)\cong \cat{A}(Y,G(X))^*$ natural in $X,Y\in \cat{A}$. \end{definition} In order to avoid overloaded notation, we call a twisted Calabi-Yau category \emph{left $F$-twisted} and \emph{right $G$-twisted} if the twist datum $(F,G)$ is given by $(F,\mathrm{id}_\cat{A})$ and $(\mathrm{id}_\cat{A},G)$, respectively. By a \emph{Calabi-Yau category} (without any mention of twists) we will understand an untwisted Calabi-Yau category in the sense that $F=G=\mathrm{id}_\cat{A}$. 
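To unpack the twisted version of this definition in the smallest example: let $\cat{A}$ have a single object whose endomorphism algebra is $A$, and let the twist datum be $(\mathrm{id}_\cat{A},G)$ with $G$ acting on morphisms by an algebra automorphism $\nu$ of $A$. The natural isomorphism $A\cong A^*$ then amounts to a linear form $\lambda : A\longrightarrow k$ with non-degenerate associated pairing $(a,b)\longmapsto \lambda(ab)$ which, by naturality, satisfies \begin{align} \lambda(ab)=\lambda(\nu(b)a) \quad \text{for}\quad a,b\in A \ , \end{align} i.e.\ (up to conventions) a Frobenius form on $A$ with Nakayama automorphism $\nu$.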
A Calabi-Yau category with one object is a symmetric Frobenius algebra. \begin{corollary}\label{cortheiso} For a finite category $\cat{C}$, there are canonical isomorphisms \begin{align} \cat{C}(P,X) \cong \cat{C}(X,\catf{N}^\catf{r} P)^* \label{eqntheiso} \end{align} natural in $X\in\cat{C}$ and $P\in \operatorname{\catf{Proj}} \cat{C}$. In particular, $\operatorname{\catf{Proj}} \cat{C}$ is a right $\catf{N}^\catf{r}$-twisted Calabi-Yau category. \end{corollary} \begin{proof} For $X\in \cat{C}$ and $P\in \operatorname{\catf{Proj}} \cat{C}$, we find (we denote equivalences of chain complexes, also known as quasi-isomorphisms, by $\simeq$ and isomorphisms, as before, by $\cong$) \begin{align}\begin{array}{rclll} \cat{C}(X,\catf{N}^\catf{r} P)^* &\simeq & \cat{C}(X_\bullet,\catf{N}^\catf{r} P)^* && \text{(because $\catf{N}^\catf{r} P$ is injective)} \\ &\cong& \cat{C}(P,X_\bullet)&& \text{(Proposition~\ref{propdualhom})}\\& \simeq &\cat{C}(P,X) && \text{(because $P$ is projective)} \ . \end{array} \end{align} \end{proof} \begin{definition}[Trace of a finite category]\label{deftracefc} For any finite category $\cat{C}$, we define the pairings \begin{align} \label{eqntheparings} \spr{ -,- } : \cat{C}(P,X)\otimes\cat{C}(X,\catf{N}^\catf{r} P) \ra{ \eqref{eqntheiso} } \cat{C}(X,\catf{N}^\catf{r} P)^*\otimes \cat{C}(X,\catf{N}^\catf{r} P) &\ra{\text{evaluation}} k \\ \quad \text{for}\quad &X\in\cat{C}\ , \ P\in\operatorname{\catf{Proj}}\cat{C} \end{align} and, by considering the case $X=P$ in~\eqref{eqntheparings}, the \emph{twisted trace} \begin{align} \catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \ra{ \spr{\mathrm{id}_P,-} } k \quad \text{for}\quad P\in\operatorname{\catf{Proj}} \cat{C} \label{eqnthetrace} \end{align} on $\cat{C}(P,\catf{N}^\catf{r} P)$. We refer to the family of maps~\eqref{eqnthetrace}, where $P$ runs over all projective objects, as the \emph{twisted trace on $\cat{C}$}. 
By an \emph{untwisting} of the twisted trace, we mean a trivialization $\catf{N}^\catf{r} \cong \mathrm{id}_\cat{C}$ of $\catf{N}^\catf{r}$ (if one exists) and the resulting identification of the maps \eqref{eqnthetrace} with maps $\cat{C}(P,P)\longrightarrow k$ that we then refer to as \emph{untwisted trace}, or just \emph{trace} for brevity. \end{definition} It is important to note that the twisted trace is canonical while the untwisting (if possible) will involve choices. An untwisting of the trace is equivalent to an untwisting of the twisted Calabi-Yau structure from Corollary~\ref{cortheiso}. The usual correspondence between Calabi-Yau structures and traces can be adapted to the present situation and leads to the following: \begin{lemma}\label{lemmatrace} For any finite category $\cat{C}$, the twisted trace \begin{align}\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k \quad \text{for}\quad P\in \operatorname{\catf{Proj}} \cat{C} \label{eqnthetraces} \end{align} has the following properties: \begin{pnum} \item Cyclicity: For $P,Q\in\operatorname{\catf{Proj}} \cat{C}$, $f: P\longrightarrow \catf{N}^\catf{r} Q$ and $g :Q\longrightarrow P$, we have \begin{align} \catf{t}_Q(fg)=\catf{t}_P(\catf{N}^\catf{r}(g)f) \ . \end{align} \item Non-degeneracy: The trace is non-degenerate in the sense that the pairings \begin{align} \cat{C}(P,X)\otimes \cat{C}(X,\catf{N}^\catf{r} P) \longrightarrow k \ , \quad f\otimes g \longmapsto \catf{t}_P (gf) \label{eqnsecondpairing} \end{align} are non-degenerate. In fact, they agree with the pairings~\eqref{eqntheparings}. \end{pnum} \end{lemma} \begin{proof} Let $X,Y \in \cat{C}$ and $P,Q\in\operatorname{\catf{Proj}}\cat{C}$. Naturality of \eqref{eqntheiso} in $X$ means for $a:P\longrightarrow X$, $b :Y\longrightarrow \catf{N}^\catf{r} P$ and $c: X\longrightarrow Y$ \begin{align} \spr{ {a},bc}=\spr{ {ca},b} \ . 
\label{hateqn1} \end{align} Naturality of \eqref{eqntheiso} in $P$ means for $a:Q\longrightarrow X$, $b:P\longrightarrow Q$ and $c:X\longrightarrow \catf{N}^\catf{r} P$ \begin{align} \spr{ {a},\catf{N}^\catf{r}(b)c}=\spr{ ab,c} \ . \label{hateqn2} \end{align} This implies for $f: P\longrightarrow \catf{N}^\catf{r} Q$ and $g :Q\longrightarrow P$ \begin{align} \catf{t}_Q(fg)\stackrel{\eqref{eqnthetrace}}{=}\spr{ {\mathrm{id}_Q},fg} \stackrel{\eqref{hateqn1}}{=} \spr{ {g},f} \stackrel{\eqref{hateqn2}}{=} \spr{ {\mathrm{id}_P},\catf{N}^\catf{r}(g)f} \stackrel{\eqref{eqnthetrace}}{=} \catf{t}_P(\catf{N}^\catf{r}(g)f) \ . \end{align} This proves cyclicity. Non-degeneracy holds by construction because it follows easily from~\eqref{hateqn1} that the pairing~\eqref{eqnsecondpairing} agrees with $\spr{-,-}$. \end{proof} \needspace{5\baselineskip} \section{Traces on finite tensor categories and connection to modified traces\label{tracesonfinitfc}} We will now turn to finite \emph{tensor} categories and connect the construction from Definition~\ref{deftracefc} to modified traces \cite{mtrace1,mtrace2,mtrace3,mtrace}. A (twisted, right) modified trace on the tensor ideal of projective objects in a pivotal finite tensor category is a cyclic, non-degenerate trace that satisfies the right partial trace property as we will discuss in detail below. The first two properties hold very generally for traces constructed from linear trivializations of the Nakayama functor thanks to Lemma~\ref{lemmatrace}. The partial trace property makes use of the monoidal structure. In order to understand when we can formulate and prove such a property for the trace from Definition~\ref{deftracefc}, one needs to understand the Nakayama functor of a finite tensor category: Let $\cat{C}$ be any finite tensor category. 
We denote by $\cat{C}_\cat{C}$ the finite category $\cat{C}$ as the regular right module category over itself and by $\cat{C}^{\vee \vee}$ the finite category $\cat{C}$ as right $\cat{C}$-module category with action given by $X.Y:=X\otimes Y^{\vee \vee}$ for $X,Y\in\cat{C}$. \begin{theorem}[{\cite[Theorem~3.26]{fss}}] \label{thmnakamodule} For any finite tensor category $\cat{C}$, the (right) Nakayama functor is an equivalence $\catf{N}^\catf{r} : \cat{C}_\cat{C} \ra{\simeq} \cat{C}^{\vee \vee}$ of right $\cat{C}$-module categories; in particular, it comes with canonical isomorphisms $ \catf{N}^\catf{r}(-\otimes X)\cong \catf{N}^\catf{r}(-)\otimes X^{\vee\vee}$ for $X\in\cat{C}$. Moreover, $\catf{N}^\catf{r} I\cong D^{-1}$, where $D\in\cat{C}$ is the distinguished invertible object and $D^{-1}$ its dual, and hence \begin{align} \catf{N}^\catf{r} \cong D^{-1}\otimes -^{\vee\vee} \end{align} by a canonical isomorphism. \end{theorem} Together with Proposition~\ref{propdualhom}, this implies: \begin{corollary}\label{cordualhom} For any finite tensor category $\cat{C}$, there are canonical isomorphisms \begin{align} \cat{C}(X,Y_\bullet)^* \cong \cat{C}(Y_\bullet, D^{-1} \otimes X^{\vee \vee} ) \end{align} natural in objects $X,Y\in\cat{C}$, where $Y_\bullet$ is a projective resolution of $Y$. In particular, any pivotal structure on $\cat{C}$ provides canonical isomorphisms \begin{align} \cat{C}(X,Y_\bullet)^* \cong \cat{C}(Y_\bullet, D^{-1} \otimes X ) \ . \end{align} \end{corollary} We now propose a generalization of the partial trace property that does not need a pivotal structure (from our perspective, this will turn out to be more natural): Let $\cat{C}$ be a finite tensor category. 
For $X\in\cat{C}$ and $P\in\operatorname{\catf{Proj}}\cat{C}$, we may use Theorem~\ref{thmnakamodule} to define a map \begin{align} \cat{C}\left(P\otimes X, \catf{N}^\catf{r}(P)\otimes X^{\vee \vee} \right) \longrightarrow \cat{C}(P, \catf{N}^\catf{r} P ) \label{eqnptpart2} \end{align} sending $f: P\otimes X \longrightarrow \catf{N}^\catf{r}(P)\otimes X^{\vee \vee}$ to \begin{align} P \ra{P\otimes b_X} P \otimes X \otimes X^\vee \ra{f\otimes X^\vee} \catf{N}^\catf{r}(P)\otimes X^{\vee \vee} \otimes X^\vee \ra{\catf{N}^\catf{r}(P)\otimes d_{X^\vee}} \catf{N}^\catf{r} P \ . \end{align} \begin{definition} Let $\cat{C}$ be a finite tensor category, $P\in \operatorname{\catf{Proj}} \cat{C}$ and $X\in\cat{C}$. Then we define the \emph{right partial trace} as the composition \begin{align} \catf{tr}_\catf{r}^X: \cat{C}(P\otimes X, \catf{N}^\catf{r}(P\otimes X)) \ra{\text{Theorem~\ref{thmnakamodule}}} \cat{C}(P\otimes X, \catf{N}^\catf{r}(P)\otimes X^{\vee \vee} ) \ra{\eqref{eqnptpart2}} \cat{C}(P, \catf{N}^\catf{r}(P) ) \ . \label{eqnpartialtrace0} \end{align} \end{definition} All of this crucially uses that $P\otimes X$ (and also $X\otimes P$) is projective if $P$ is, i.e.\ the ideal property of $\operatorname{\catf{Proj}} \cat{C}$. \begin{remark}\label{rempartialtrace} A pivotal structure is not needed for the definition given here because the double dual is absorbed into the Nakayama functor. 
In the presence of a pivotal structure $\omega : -^{\vee \vee}\cong \mathrm{id}_\cat{C}$, however, our definition specializes to the usual partial trace property for a \emph{right $D$-trace} in the terminology of \cite{mtrace} in the sense that the composition \begin{align} \cat{C}(D\otimes P\otimes X, P\otimes X) \cong \cat{C}(P\otimes X, \catf{N}^\catf{r}( P\otimes X)) \ra{\catf{tr}_\catf{r}^X} \cat{C}(P, \catf{N}^\catf{r}(P) ) \ , \end{align} where the first isomorphism uses duality, Theorem~\ref{thmnakamodule} and the (inverse) pivotal structure, is the usual partial trace. \end{remark} \begin{proposition}\label{proppartialtrace} For any finite tensor category $\cat{C}$, the twisted trace \begin{align}\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k \quad \text{for}\quad P\in \operatorname{\catf{Proj}} \cat{C} \end{align} from Definition~\ref{deftracefc} satisfies the right partial trace property: For $X\in\cat{C}$, $P\in\operatorname{\catf{Proj}} \cat{C}$ and any morphism $f:P\otimes X\longrightarrow \catf{N}^\catf{r}(P\otimes X)$ \begin{align} \catf{t}_P \catf{tr}_\catf{r}^X (f)=\catf{t}_{P\otimes X}( f) \ .\label{eqnpartialtrace} \end{align} \end{proposition} \begin{proof} For $X,Y\in\cat{C}$, $P\in\operatorname{\catf{Proj}}\cat{C}$ and a projective resolution $Y_\bullet$ of $Y$, consider the following diagram in which all maps are isomorphisms (we explain all parts of the diagram and its commutativity afterwards):\small \begin{equation} \begin{tikzcd} \cat{C}(P\otimes X,Y_\bullet)^*\ar[d,swap,"\vee"] & \ar[l,swap,"\text{YL}"] \ar[d,swap,"\vee"]\ar[rr,"(\diamond)"] \int^{Z\in\cat{C}} \cat{C}(Y_\bullet,Z) \otimes \cat{C}(P\otimes X,Z)^* && \cat{C}(Y_\bullet,\catf{N}^\catf{r} (P\otimes X))\ar[dd,swap,"\catf{N}^\catf{r}(-\otimes X)\cong \catf{N}^\catf{r}(-)\otimes X^{\vee\vee}"] \\ \cat{C}(P,Y_\bullet \otimes X^\vee)^* & \ar[l,swap,"\text{YL}"] \int^{Z\in\cat{C}} \cat{C}(Y_\bullet,Z)\otimes\cat{C}(P,Z\otimes X^\vee)^* \ar[d,"\text{relabeling}"] \\ & 
\ar[lu,"\text{YL}"]\int^{Z' \in \cat{C}} \cat{C}(Y_\bullet\otimes X^\vee,Z')\otimes \cat{C}(P,Z')^* \ar[r,"(\diamond)"] & \cat{C}(Y_\bullet \otimes X^\vee,\catf{N}^\catf{r} P ) \ar[r,"\vee"] & \cat{C}(Y_\bullet,\catf{N}^\catf{r} P \otimes X^{\vee \vee}) \ . \end{tikzcd} \end{equation}\normalsize The isomorphisms labeled `YL' and `$\vee$' come from the Yoneda Lemma and duality, respectively. The isomorphisms $(\diamond)$ pull the coend and the tensoring with vector spaces out of the hom functor using exactness of $\cat{C}(Y_\bullet,-)$ (they follow essentially from the definition~\eqref{defeqnnaka} of the Nakayama functor). The `relabeling' isomorphism $\int^{Z\in\cat{C}} \cat{C}(Y_\bullet,Z)\otimes\cat{C}(P,Z\otimes X^\vee)^*\longrightarrow \int^{Z' \in \cat{C}} \cat{C}(Y_\bullet\otimes X^\vee,Z')\otimes \cat{C}(P,Z')^*$ sends $ f \otimes \alpha \in \cat{C}(Y_\bullet,Z)\otimes\cat{C}(P,Z\otimes X^\vee)^* $ living in the summand indexed by $Z$ of the coend $\int^{Z\in\cat{C}} \cat{C}(Y_\bullet,Z)\otimes\cat{C}(P,Z\otimes X^\vee)^*$ to $(f\otimes X^\vee) \otimes \alpha$ living in the summand indexed by $Z\otimes X^\vee$ of the coend $\int^{Z' \in \cat{C}} \cat{C}(Y_\bullet\otimes X^\vee,Z')\otimes \cat{C}(P,Z')^*$. The vertical isomorphism on the very right uses Theorem~\ref{thmnakamodule}. In fact, the isomorphism $\catf{N}^\catf{r}(-\otimes X)\cong \catf{N}^\catf{r}(-)\otimes X^{\vee\vee}$ can be \emph{obtained} by extracting the isomorphism $\cat{C}(Y_\bullet,\catf{N}^\catf{r} (P\otimes X))\longrightarrow \cat{C}(Y_\bullet,\catf{N}^\catf{r} P\otimes X^{\vee\vee})$ by going in counterclockwise direction in the hexagon on the right (this follows from an analysis of the proof of \cite[Theorem~3.18]{fss}). As a consequence, the hexagon on the right commutes. A direct computation shows that the square and the triangle on the left commute. Therefore, the entire diagram commutes. 
After taking the linear dual of the entire diagram and remembering that the isomorphisms `YL' and $(\diamond)$ combine into the isomorphisms from Proposition~\ref{propdualhom}, we see that the diagram \begin{equation} \begin{tikzcd} \ar[rrr,"\text{Proposition~\ref{propdualhom}}"] \ar[dd,swap,"\vee"] \cat{C}(P\otimes X,Y_\bullet) &&& \cat{C}(Y_\bullet,\catf{N}^\catf{r}( P\otimes X)) ^*\cong \cat{C}(Y_\bullet,\catf{N}^\catf{r} P\otimes X^{\vee\vee}) ^* \ar[dd,"\vee"] \\ \\ \cat{C}(P,Y_\bullet\otimes X^\vee) \ar[rrr,swap,"\text{Proposition~\ref{propdualhom}}"] &&& \cat{C}(Y_\bullet\otimes X^\vee,\catf{N}^\catf{r} P) ^* \\ \end{tikzcd} \end{equation} commutes. Since $P$ is projective (and hence $\catf{N}^\catf{r} P$ injective --- in fact, the projective objects in $\cat{C}$ even coincide with the injective ones), this reduces to the commutative diagram \begin{equation} \begin{tikzcd} \ar[rrr,"\text{Corollary~\ref{cortheiso}}"] \ar[dd,swap,"\vee"] \cat{C}(P\otimes X,Y) &&& \cat{C}(Y,\catf{N}^\catf{r}( P\otimes X)) ^*\cong \cat{C}(Y,\catf{N}^\catf{r} P\otimes X^{\vee\vee}) ^* \ar[dd,"\vee"] \\ \\ \cat{C}(P,Y\otimes X^\vee) \ar[rrr,swap,"\text{Corollary~\ref{cortheiso}}"] &&& \cat{C}(Y\otimes X^\vee,\catf{N}^\catf{r} P) ^* \\ \end{tikzcd} \end{equation} in which the horizontal maps have specialized to the ones from Corollary~\ref{cortheiso}. 
If we spell out the commutativity of this diagram in equations for morphisms $g:P\otimes X\longrightarrow Y$ and $h:Y\otimes X^\vee \longrightarrow \catf{N}^\catf{r} P$, we obtain with the bracket notation from Definition~\ref{deftracefc} (we use here additionally the graphical calculus for morphisms in a monoidal category --- to be read from bottom to top; we refer to \cite{kassel} for a textbook treatment) \begin{equation}{\footnotesize\tikzfig{trace}} \label{eqntrace} \end{equation} As another preparation, recall that the double dual functor $-^{\vee \vee}:\cat{C}\longrightarrow\cat{C}$ is monoidal, hence it preserves the duality pairing $d _{{^\vee X}} : X \otimes {^\vee X} \longrightarrow I$ (we use here the canonical identification ${^\vee (X^\vee)}\cong X$) and therefore sends it to $d_{X^\vee} :X^{\vee \vee} \otimes X^\vee \longrightarrow I$. Using Theorem~\ref{thmnakamodule} we find the equality of morphisms \begin{align} \catf{N}^\catf{r} (P\otimes d _{{^\vee X}})=\catf{N}^\catf{r} P\otimes d_{X^\vee} : \catf{N}^\catf{r} (P) \otimes X^{\vee\vee} \otimes X^\vee \longrightarrow \catf{N}^\catf{r} P \ , \end{align} which implies for a morphism $f:P\otimes X\longrightarrow \catf{N}^\catf{r} (P\otimes X)$ \begin{equation}{\footnotesize\tikzfig{trace2}}\label{trace2} \end{equation} \normalsize The desired equality~\eqref{eqnpartialtrace} now follows from: \footnotesize \begin{equation}\tikzfig{trace3} \end{equation} \normalsize\end{proof} \begin{theorem}\label{thmmtrace} For any finite tensor category $\cat{C}$, the twisted trace $ (\catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k)_{P\in \operatorname{\catf{Proj}}\cat{C}}$ from Definition~\ref{deftracefc} is cyclic, non-degenerate and satisfies the partial trace property in the sense of Proposition~\ref{proppartialtrace}. 
Under the additional assumption that on the finite tensor category $\cat{C}$ a pivotal structure has been chosen, the twisted trace $( \catf{t}_P : \cat{C}(P,\catf{N}^\catf{r} P) \longrightarrow k)_{P\in\operatorname{\catf{Proj}}\cat{C}}$ from Definition~\ref{deftracefc} can be naturally identified with a right modified $D$-trace, where $D\in\cat{C}$ is the distinguished invertible object of $\cat{C}$. \end{theorem} \begin{remark} More precisely, the twisted trace from Definition~\ref{deftracefc} yields a \emph{canonical} right modified $D$-trace and thereby trivializes the $k^\times$-torsor of right modified $D$-traces in a canonical way. \end{remark} \begin{proof}[{\slshape Proof of Theorem~\ref{thmmtrace}}] We use the pivotal structure $\omega:-^{\vee\vee}\cong \mathrm{id}_\cat{C}$ to obtain isomorphisms \begin{align} \cat{C}(P,\catf{N}^\catf{r} P)\stackrel{\text{Theorem~\ref{thmnakamodule}}}{\cong} \cat{C}(P,D^{-1}\otimes P^{\vee \vee}) \stackrel{\omega \ \text{and duality}}{\cong} \cat{C}(D\otimes P,P) \quad \text{for}\quad P\in\operatorname{\catf{Proj}} \cat{C} \ . \label{eqnthemaps} \end{align} As a consequence, the twisted trace from Definition~\ref{deftracefc} gives us maps $\cat{C}(D\otimes P,P)\longrightarrow k$ which are cyclic and non-degenerate (Lemma~\ref{lemmatrace}). Moreover, Proposition~\ref{proppartialtrace} combined with Remark~\ref{rempartialtrace} gives us the usual partial trace property in the presence of a pivotal structure. Note that one really needs a \emph{monoidal} isomorphism $-^{\vee\vee}\cong \mathrm{id}_\cat{C}$ to get the desired maps $\cat{C}(D\otimes P,P)\longrightarrow k$. If $\omega$ is just linear, one would get maps $\cat{C}(D\otimes P,P)\longrightarrow k$, but they would not necessarily satisfy the partial trace property: The proof of the partial trace property in its $\catf{N}^\catf{r}$-twisted version (Proposition~\ref{proppartialtrace}) relies on the monoidal structure of $-^{\vee\vee}$. 
The partial trace property only transfers along the isomorphisms $\cat{C}(P,D^{-1}\otimes P^{\vee \vee})\cong \cat{C}(P,D^{-1}\otimes P)$ if $-^{\vee\vee}$ is replaced by $\mathrm{id}_\cat{C}$ \emph{as monoidal functor}. \end{proof} \needspace{5\baselineskip} \section{The trace field theory\label{sectracefieldtheory}} We now introduce the topological conformal field theory induced by the Calabi-Yau structure appearing in the previous section. To this end, let us recall from \cite{costellotcft} the definition of the (differential graded) open-closed two-dimensional cobordism category; see \cite{egas,wahlwesterland} for models of this symmetric monoidal differential graded category in terms of fat graphs. An \emph{open-closed Riemann surface} is a Riemann surface with the following data: \begin{itemize} \item A subset of its boundary components, the so-called \emph{closed boundary components}. They are para\-metrized and labeled as incoming or outgoing. \item A finite number of embedded intervals in the remaining boundary components, the so-called \emph{open boundary intervals}. They are also parametrized and labeled as incoming or outgoing. \end{itemize} The free boundary components are defined as the complement (in the boundary) of the closed boundary components and the open boundary intervals. It will be required that each connected component of the Riemann surface has at least one free boundary component or at least one incoming closed boundary. An example (that additionally contains certain labels that will be discussed in a moment) is depicted in Figure~\ref{figoc}. 
One can now define the symmetric monoidal differential graded category $\catf{OC}$ of \emph{open-closed cobordisms} for a set $\Lambdait$ of labels (that we will fix later and that will be suppressed in the notation; the set of labels is sometimes referred to as the set of `D-branes'): The objects are pairs of finite sets $O$ and $C$ (that in a moment will play the role of open boundary intervals and closed boundary components of Riemann surfaces) and two maps $s,t : O\longrightarrow \Lambdait$ (that attach a `start' and an `end' label to any open boundary interval). The chain complex of morphisms from $(O,C,s,t)$ to $(O',C',s',t')$ is given by the $k$-chains on the moduli space of Riemann surfaces $\Sigmait$ with \begin{itemize} \item an identification of its set of incoming open and incoming closed boundary components with $(O,C)$, an identification of its set of outgoing open and outgoing closed boundary components with $(O',C')$, \item a label in the set $\Lambdait$ of D-branes for each free boundary component \end{itemize} subject to the following requirement: First observe that any incoming open boundary interval $o\in O$ inherits labels for its start point and its end point, namely the labels of the free boundary components bounding them. We require that this pair of labels agrees with $(s(o),t(o))$; the analogous requirement is imposed for outgoing open boundary intervals. 
Explicitly, for the objects $X=(O,C,s,t)$ and $X'=(O',C',s',t')$, the morphism complexes are given, up to equivalence, by \begin{align} \catf{OC} \left( X,X' \right) \simeq \bigoplus_{S : X \longrightarrow X'} C_*(B \catf{Map}(S);k) \ , \end{align} where the direct sum runs over all topological types of compact oriented open-closed bordisms $S$ with incoming and outgoing boundary described by $X$ and $X'$, respectively, and $\catf{Map}(S)$ is the mapping class group of $S$; we refer to \cite{egas,wahlwesterland} for a description of these morphism complexes by means of classifying spaces of categories of fat graphs. Composition in $\catf{OC}$ is by gluing. Disjoint union provides a symmetric monoidal structure. \begin{figure}[h]\centering \tikzfig{oc} \caption{An open-closed surface with D-brane labels. As a morphism in $\catf{OC}$, we read the surface from left to right, i.e.\ with the source object (constituted by the incoming boundary components) on the left and the target object (constituted by the outgoing boundary components) on the right. We will, however, deviate from the left-to-right drawing convention at times if it simplifies the surface; for this reason, we also indicate by `in' and `out' whether a boundary component is incoming or outgoing. The source object is given by $\{c,o_1,o_2\}$ (where the identification with boundary components is through the dotted arrows in the picture) plus the assignment $s(o_1)=s(o_2)=P$ and $t(o_1)=t(o_2)=Q$. The target object is $\{o'\}$ plus the assignments $s(o')=t(o')= R$. } \label{figoc} \end{figure} \begin{definition}[Costello \cite{costellotcft} following Getzler~\cite{getzler} and Segal~\cite{segal}] For a fixed set $\Lambdait$ of $D$-branes, an \emph{open-closed topological conformal field theory} is a symmetric monoidal functor $\Phiit : \catf{OC} \longrightarrow \catf{Ch}_k$. 
An \emph{open topological conformal field theory} is a symmetric monoidal functor $\catf{O}\longrightarrow\catf{Ch}_k$ defined only on the subcategory $\catf{O}\subset \catf{OC}$ of open bordisms. \end{definition} Open-closed topological conformal field theories are a differential graded generalization of ordinary vector space-valued two-dimensional (open-closed) topological field theories. The latter can be constructed and classified in terms of symmetric and commutative Frobenius algebras; see \cite{kock,laudapfeiffer} for the precise statements. In \cite{costellotcft}, Costello proves that one may construct an \emph{open} topological conformal field theory from a (linear) Calabi-Yau category (Costello actually considers differential graded Calabi-Yau categories, but we just need the linear case). By homotopy left Kan extension, one obtains an open-closed topological conformal field theory: \needspace{5\baselineskip} \begin{theorem}[Costello \cite{costellotcft}]\label{thmcostello}\begin{pnum}\item Any linear Calabi-Yau category $\cat{A}$ gives rise to an open-closed topological conformal field theory $\catf{OC} \longrightarrow \catf{Ch}_k$ with the object set of $\cat{A}$ as the set of D-branes. \item The value of this field theory on the circle is equivalent to the Hochschild complex of $\cat{A}$. \end{pnum} \end{theorem} In particular, if $\cat{M}_{p,q}$ is the moduli space of Riemann surfaces with $p\ge 1$ incoming closed boundary components, $q$ outgoing closed boundary components, and no open or free boundary components, there are maps \begin{align} C_*(\cat{M}_{p,q};k) \otimes \left( \int_\mathbb{L}^{a \in \cat{A}}\cat{A}(a,a)\right)^{\otimes p} \longrightarrow \left( \int_\mathbb{L}^{a \in \cat{A}}\cat{A}(a,a)\right)^{\otimes q} \ . \end{align} We refer to \cite{dva} for details on the homotopy coends appearing here. In the present text, this is only needed to a very limited extent.
It suffices to know that in degree zero the complex $\int_\mathbb{L}^{a \in \cat{A}} \cat{A}(a,a)$ is given by $\bigoplus_{a\in\cat{A}} \cat{A}(a,a)$. \begin{remark} In \cite{costellotcft}, it is actually required that $k$ has characteristic zero, but an extension to fields of arbitrary characteristic is given in \cite{egas,wahlwesterland}. \end{remark} From Corollary~\ref{cortheiso} and Costello's result, we immediately obtain: \begin{corollary}\label{corollarytracefieldtheory} For a finite category $\cat{C}$, any trivialization of the Nakayama functor $ \catf{N}^\catf{r} :\cat{C}\longrightarrow\cat{C}$ yields a Calabi-Yau structure on $\operatorname{\catf{Proj}} \cat{C}$ and hence gives rise to a topological conformal field theory $\Phiit_\cat{C} : \catf{OC} \longrightarrow \catf{Ch}_k$ with set of D-branes given by the set of projective objects of $\cat{C}$. \end{corollary} \begin{remark}\label{remtraces} The evaluation of the field theory $\Phiit_\cat{C}$ on the disk with one incoming open boundary interval whose complementing free boundary carries the D-brane label $P\in\operatorname{\catf{Proj}}\cat{C}$ \begin{equation} \Phiit_\cat{C} \left( \tikzfig{diskohne} \right)\ : \ \cat{C}(P,P)\longrightarrow k \end{equation} is exactly the trace function of the Calabi-Yau structure from Definition~\ref{deftracefc} (this follows directly from Costello's construction), while for $P,Q\in \operatorname{\catf{Proj}}\cat{C}$, the map \begin{equation} \Phiit_\cat{C} \left( \tikzfig{composition} \right)\ : \ \cat{C}(P,Q) \otimes \cat{C}(Q,P) \longrightarrow \cat{C}(P,P) \end{equation} is the composition over $Q$. 
\end{remark} The construction of Corollary~\ref{corollarytracefieldtheory} does not do much: It just translates a trivialization of $\catf{N}^\catf{r}$ to a Calabi-Yau structure and then a topological conformal field theory, and in fact, this construction will only be of limited use to us since we want to treat finite tensor categories and not just linear categories. Fortunately, we can give a natural refinement: The construction from Corollary~\ref{corollarytracefieldtheory} becomes more meaningful in the context of finite tensor categories if $\catf{N}^\catf{r}$ is trivialized not just as a linear functor, \emph{but as a right $\cat{C}$-module functor relative to a pivotal structure}. Let us define what we mean by that: \begin{definition}\label{defsymfrob} For any finite tensor category $\cat{C}$ and a pivotal structure $\omega: -^{\vee \vee} \cong \mathrm{id}_\cat{C}$, denote by $(\mathrm{id}_\cat{C},\omega)$ the identity functor endowed with the structure of a right $\cat{C}$-module functor $\cat{C}_\cat{C}\longrightarrow\cat{C}^{\vee\vee}$ by means of $\omega$. We refer to an isomorphism $\catf{N}^\catf{r} \cong (\mathrm{id}_\cat{C},\omega)$ of right $\cat{C}$-module functors as a trivialization of $\catf{N}^\catf{r}$ as a right $\cat{C}$-module functor relative to $\omega$. We define a \emph{symmetric Frobenius structure} on a finite tensor category $\cat{C}$ as a trivialization of $\catf{N}^\catf{r}$ as right $\cat{C}$-module functor relative to a pivotal structure, where the pivotal structure is part of the data. \end{definition} \begin{remark}\label{remunpacksymFrob} Thanks to Theorem~\ref{thmnakamodule}, a symmetric Frobenius structure on a finite tensor category is a pivotal structure plus a trivialization of $D$. 
We use the term symmetric Frobenius structure not only as a convenient shorthand for the rather clumsy description of `pivotal unimodular finite tensor category with a trivialization of the distinguished invertible object as part of the data', but also for a deeper reason: A symmetric Frobenius structure on a finite tensor category allows us to write $\cat{C}$, as a linear category, as modules over a symmetric Frobenius algebra. This can be read off from Corollary~\ref{cordualhom} because it provides canonical natural isomorphisms \begin{align} \cat{C}(X,Y_\bullet)^* \cong \cat{C}(Y_\bullet, X ) \ , \end{align} where $X\in\cat{C}$ and $Y_\bullet$ is a projective resolution of $Y\in\cat{C}$. However, a finite tensor category with symmetric Frobenius structure requires a compatibility of the monoidal structure with the trivialization of $\catf{N}^\catf{r}$. It is not just an identification of $\cat{C}$, as \emph{linear category}, with modules over a symmetric Frobenius algebra. In the latter sense, the notion is used in \cite{shimizucoend}. \end{remark} \begin{definition}\label{deftracefieldtheory} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure. For the trivialization of the right Nakayama functor $\catf{N}^\catf{r}$ (with which $\cat{C}$ comes equipped by Definition~\ref{defsymfrob}), we refer to the topological conformal field theory $\Phiit_\cat{C} : \catf{OC} \longrightarrow \catf{Ch}_k$ built from \emph{this particular} trivialization in the sense of Corollary~\ref{corollarytracefieldtheory} as the \emph{trace field theory of $\cat{C}$}. \end{definition} The name is chosen because $\Phiit_\cat{C}$ not only recovers the trace of the Calabi-Yau structure by Remark~\ref{remtraces}, but can also be recovered from the trace itself.
\begin{theorem}\label{thmtracefieldtheory} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure and $\Phiit_\cat{C}:\catf{OC}\longrightarrow \catf{Ch}_k$ its trace field theory. The evaluation of $\Phiit_\cat{C}$ on the disk with one incoming open boundary interval whose complementing free boundary carries the label $P\in\operatorname{\catf{Proj}}\cat{C}$ \begin{equation} \Phiit_\cat{C} \left( \tikzfig{diskohne} \right)\ : \ \cat{C}(P,P)\longrightarrow k \end{equation} is a right modified trace, while the evaluation of $\Phiit_\cat{C}$ on the cylinder with one incoming open boundary interval with complementing free boundary label $P\in\operatorname{\catf{Proj}}\cat{C}$ and one outgoing closed boundary circle \begin{align} \Phiit_\cat{C} \left( \tikzfig{ht} \right)\ : \ \cat{C}(P,P)\longrightarrow \int_\mathbb{L}^{P\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(P,P) \label{eqnht} \end{align} agrees, after taking zeroth homology, with the Hattori-Stallings trace of $\cat{C}$. \end{theorem} \begin{proof} The statement about the modified trace can be extracted from Theorem~\ref{thmmtrace} and Remark~\ref{remtraces}. For the second statement, we first observe that by the construction of $\Phiit_\cat{C}$ the map~\eqref{eqnht} is just the inclusion of $\cat{C}(P,P)$ into the direct sum $\bigoplus_{P\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(P,P)$, which is the degree zero term of the Hochschild complex. After taking homology, we get the natural map $\cat{C}(P,P)\longrightarrow HH_0(\cat{C})$, i.e.\ the quotient map projecting to zeroth Hochschild homology, and hence the Hattori-Stallings trace for $\cat{C}$. The connection to the traditional Hattori-Stallings trace \cite{hattori,stallings} uses that by writing $\cat{C}$, as a linear category, as finite-dimensional modules over a finite-dimensional algebra $A$ (which we can always do), the zeroth homology $HH_0(\cat{C})$ is isomorphic to the zeroth Hochschild homology $HH_0(A)=A/[A,A]$ of $A$. 
This is a consequence of the Agreement Principle of McCarthy \cite{mcarthy} and Keller \cite{keller}; see also \cite[Section~3.2]{dva} for this principle in the context of finite tensor categories. \end{proof} We formulate the result in Theorem~\ref{thmtracefieldtheory} topologically (although Theorem~\ref{thmmtrace}, which it relies on, is purely algebraic) because, in Section~\ref{sectrivE2}, we will use the trace field theory, rather than individual traces, as an efficient tool for computations. \begin{remark} The reader should appreciate that the trace field theory $\Phiit_\cat{C}$ is defined through a specific trivialization of the Nakayama functor and not by choosing a modified trace (although Theorem~\ref{thmtracefieldtheory} tells us that we could have done that). This has the advantage that, through the closed formula for $\catf{N}^\catf{r}$, the trace field theory $\Phiit_\cat{C}$ becomes very accessible. In fact, we will rely on the particular definition of $\Phiit_\cat{C}$ given above in future work. \end{remark} \needspace{5\baselineskip} \section{The block diagonal product on Hochschild chains\label{sectrivE2}} We will now see how we can profit from the topological description of traces. First we define a multiplication by evaluation on the pair of pants: \begin{definition}\label{defstarprod} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure and $\Phiit_\cat{C}:\catf{OC}\longrightarrow \catf{Ch}_k$ the associated trace field theory. Then we define the \emph{block diagonal $\star$-product} on the Hochschild complex $\int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X)$ by \begin{align} \star := \Phiit_\cat{C}\left(\tikzfig{pop}\right)\ : \ \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X) \otimes \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X)\longrightarrow \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X) \, .
\label{eqnstarproduct} \end{align} \end{definition} The sense in which the product $\star$ is block diagonal will be discussed in Proposition~\ref{propdiag}. The results of Wahl and Westerland on the product obtained from a topological conformal field theory in \cite[Section~6]{wahlwesterland} imply that, up to homotopy, the multiplication is concentrated in degree zero (they prove it for symmetric Frobenius algebras, but their proof carries over to our situation). They also give a formula for the degree zero part of the homotopy commutative multiplication. Below we will give a slightly different formula which, when working with a Calabi-Yau category instead of a symmetric Frobenius algebra, is a little more convenient. \begin{lemma}\label{lemmadeligneinH0} The degree zero part \begin{align} \star\ : \bigoplus_{P,Q \in \operatorname{\catf{Proj}} \cat{C}} \cat{C}(P,P)\otimes\cat{C}(Q,Q) \longrightarrow \bigoplus_{P\in \operatorname{\catf{Proj}} \cat{C}} \cat{C}(P,P) \end{align} of the product from Definition~\ref{defstarprod} is given on the summand $\cat{C}(P,P)\otimes\cat{C}(Q,Q)$, up to boundary, by the linear map \begin{equation} \Phiit_\cat{C}\left(\tikzfig{multchain}\right) \ : \ \cat{C}(P,P)\otimes\cat{C}(Q,Q) \longrightarrow \cat{C}(P,P) \ . \label{multviaPhieqn} \end{equation} \end{lemma} \begin{proof} Let $P$ and $Q$ be projective objects in $\cat{C}$. The following morphisms in $\catf{OC}$ can be deformed into each other and hence represent homologous zero chains: \begin{align}{\footnotesize \tikzfig{pairofpants}} \end{align} But this means that the square in $\catf{OC}$ \begin{align} \tikzfig{multsquare} \end{align} commutes up to boundary.
If we apply $\Phiit_\cat{C}:\catf{OC}\longrightarrow\catf{Ch}_k$ to the square, we see that the square \begin{equation} \begin{tikzcd} \ar[rrr,"\eqref{multviaPhieqn}"] \ar[dd,swap] \cat{C}(P,P)\otimes\cat{C}(Q,Q) &&& \cat{C}(P,P) \ar[dd] \\ \\ \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X) \otimes \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X) \ar[rrr,swap,"\star"] &&& \int_\mathbb{L}^{X\in\operatorname{\catf{Proj}}\cat{C}} \cat{C}(X,X) \\ \end{tikzcd} \end{equation} commutes up to chain homotopy, where the vertical maps are just the usual embeddings of endomorphism spaces as summands in the Hochschild complex. Since we now recover the linear map \eqref{multviaPhieqn} as the upper horizontal arrow, the assertion follows. \end{proof} In the case of one object, using the Sweedler notation for the coproduct of a symmetric Frobenius algebra, we recover the formula of Wahl and Westerland \cite[Section~6, page~41]{wahlwesterland} up to boundary. In the sequel, it will always be implicit that the $\star$-product is applied in degree zero (because it only contains information in that degree). \begin{proposition}\label{propdiag} For any finite tensor category $\cat{C}$ with symmetric Frobenius structure, the product $\star$ is block diagonal in the sense that it vanishes on two elements in components indexed by projective objects $P$ and $Q$ with vanishing morphism space $\cat{C}(P,Q)$; \begin{align} f\star g= 0 \quad \text{for}\quad f \in \cat{C}(P,P)\ , \quad g \in \cat{C}(Q,Q) \quad \text{if}\quad \cat{C}(P,Q)=0 \quad \text{(or equivalently $\cat{C}(Q,P)=0$)} \ . \end{align} \end{proposition} \begin{proof} From the formula for the $\star$-product given in Lemma~\ref{lemmadeligneinH0} one can see that the map describing $\star$ on $\cat{C}(P,P)\otimes\cat{C}(Q,Q)$ factors through $\cat{C}(P,Q)$ or $\cat{C}(Q,P)$.
\end{proof} We now prove a formula for the $\star$-product of identity endomorphisms of two projective objects. As a preparation, we make the following Definition: \begin{definition}\label{handleelements} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure. For $P,Q \in \cat{C}$, we define the \emph{handle element of $P$ and $Q$} as the endomorphism $\xi_{P,Q} \in \cat{C}(P,P)$ obtained by evaluation of the trace field theory on the annulus: \begin{equation} \xi_{P,Q} := \Phiit_\cat{C}\left( \tikzfig{handleelement} \right)\in\cat{C}(P,P) \ . \end{equation} \end{definition} \begin{remark} The name of the element $\xi_{P,Q}$ is chosen for the following reason: If we were in the situation $P=Q$, the element $\xi_{P,P}$ would be the composition `$\text{multiplication}\circ \text{comultiplication} \circ \text{unit}$' in the symmetric Frobenius algebra $\cat{C}(P,P)$. In \cite[page~128]{kock}, this element is called the \emph{handle element} of the symmetric Frobenius algebra. \end{remark} \begin{theorem}\label{thmhandleelement} Let $\cat{C}$ be a finite tensor category with symmetric Frobenius structure. \begin{pnum} \item For $P,Q\in\operatorname{\catf{Proj}}\cat{C}$, the $\star$-product of $\mathrm{id}_P$ and $\mathrm{id}_Q$ is the handle element $\xi_{P,Q}$ of $P$ and $Q$, up to boundary in the Hochschild complex of $\cat{C}$; \begin{align} \mathrm{id}_P \star \mathrm{id}_Q \simeq \xi_{P,Q} \ . \end{align}\label{tracefieldi} \item All handle elements in the sense of Definition~\ref{handleelements} are central in the endomorphism algebras of $\cat{C}$.\label{tracefieldii} \item The modified trace of the handle element is given by \begin{align} \catf{t}_P \xi_{P,Q}=\mathrm{dim} \,\cat{C}(P,Q) \ . 
\label{eqntraceformula} \end{align}\label{tracefieldiii} \end{pnum} \end{theorem} Of course, the numbers $\mathrm{dim}\,\cat{C}(P,Q)$ on the right hand side of \eqref{eqntraceformula} are the entries of the Cartan matrix of $\cat{C}$, considered here as elements in $k$. If $P$ is simple, the handle element is the number \begin{align} \xi_{P,Q} = \frac{\mathrm{dim}\, \cat{C}(P,Q)}{d^\text{m} (P)} \in k \ , \end{align} where $d^\text{m} (P):=\catf{t}_P(\mathrm{id}_P)\in k^\times$ is the modified dimension of $P$ (note that $\catf{t}_P(\mathrm{id}_P)\neq 0$ is a consequence of the non-degeneracy of the trace). \begin{proof}[{\slshape Proof of Theorem~\ref{thmhandleelement}}] In order to compute $\mathrm{id}_P \star \mathrm{id}_Q$ for $P,Q\in\operatorname{\catf{Proj}}\cat{C}$, we use Lemma~\ref{lemmadeligneinH0} and the functoriality of $\Phiit_\cat{C}$: \begin{equation} \mathrm{id}_P\star \mathrm{id}_Q \simeq \Phiit_\cat{C}\left(\tikzfig{multchain}\right) \circ \Phiit_\cat{C}\left(\tikzfig{twodisks}\right)= \Phiit_\cat{C}\left( \tikzfig{handleelement} \right) =\xi_{P,Q} \end{equation} This proves~\ref{tracefieldi}. For the proof of \ref{tracefieldii}, recall that in the setting of symmetric Frobenius algebras, it is shown in \cite[page~128]{kock} that the handle element is central. A straightforward computation by means of the trace field theory $\Phiit_\cat{C}$ shows that this still holds true in our more general situation: In fact, one can directly see that both the map $\xi_{P,Q}\circ - : \cat{C}(P,P)\longrightarrow\cat{C}(P,P)$ and the map $-\circ \xi_{P,Q}: \cat{C}(P,P)\longrightarrow\cat{C}(P,P)$ are given by \begin{align} \Phiit_\cat{C}\left(\tikzfig{central2}\right) \end{align} in terms of the trace field theory.
For the proof of~\ref{tracefieldiii}, first observe \begin{equation} \Phiit_\cat{C} \left( \tikzfig{diskohne} \right) \circ \Phiit_\cat{C}\left( \tikzfig{handleelement} \right) = \Phiit_\cat{C}\left( \tikzfig{dimension} \right)=\mathrm{dim}\,\cat{C}(P,Q) \ . \end{equation} (This generalizes to Calabi-Yau categories the fact that the counit of a symmetric Frobenius algebra evaluated on the handle element is the linear dimension \cite[page~129]{kock}.) Now we use Theorem~\ref{thmtracefieldtheory}, which asserts that the evaluation of $\Phiit_\cat{C}$ on the disk with one incoming open boundary interval is actually the modified trace. This proves \eqref{eqntraceformula}. \end{proof} Let us also formulate this on the level of homology: for an endomorphism $f: P\longrightarrow P$ of $P\in\operatorname{\catf{Proj}}\cat{C}$, denote by $\catf{HS}(f)\in HH_0(\cat{C})$ its Hattori-Stallings trace. Then Theorem~\ref{thmtracefieldtheory} and Theorem~\ref{thmhandleelement} imply: \begin{corollary}\label{corhs} For any finite tensor category $\cat{C}$ with symmetric Frobenius structure, \begin{align} \catf{t} (\catf{HS} (\mathrm{id}_P) \star \catf{HS} (\mathrm{id}_Q) ) = \mathrm{dim}\, \cat{C}(P,Q) \quad \text{for}\quad P,Q \in \operatorname{\catf{Proj}} \cat{C} \ . \end{align} \end{corollary} Here, by slight abuse of notation, we denote the map on $HH_0(\cat{C})$ induced by the modified trace again by $\catf{t}$. \needspace{5\baselineskip}
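As a computational sanity check (an illustration of ours, outside the categorical framework of the text), the identification $HH_0(A)=A/[A,A]$ underlying the Hattori-Stallings trace can be verified numerically for the matrix algebra $A=M_2(k)$: the commutators span exactly the traceless matrices, so $HH_0(A)$ is one-dimensional and the Hattori-Stallings trace reduces to the ordinary matrix trace.

```python
import numpy as np
from itertools import product

n = 2
# Elementary-matrix basis E_ij of the algebra A = M_n(k)
basis = [np.outer(np.eye(n)[i], np.eye(n)[j]) for i, j in product(range(n), repeat=2)]

# All commutators [a, b] = ab - ba of basis elements, flattened to row vectors
comms = np.array([(a @ b - b @ a).ravel() for a in basis for b in basis])

# Every commutator is traceless ...
assert np.allclose(comms.reshape(-1, n, n).trace(axis1=1, axis2=2), 0.0)

# ... and the commutators span the full space of traceless matrices, so
# dim [A, A] = n^2 - 1 and HH_0(A) = A/[A, A] is one-dimensional.
print(np.linalg.matrix_rank(comms))  # n**2 - 1 = 3
```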
\section{Introduction} \label{sec:intro} We propose a design for two-hop decode-and-forward relaying in cellular networks, primarily for throughput improvement in the low and median throughput regime, on both the downlink (DL) and the uplink (UL) paths. The Backhaul between the base station and the relaying mobile device -- the ``longer'' hop -- is the conventional cellular wide area network (WAN) link, whereas the Access link between the relaying and relayed devices -- the ``shorter'' hop -- is an inband device-to-device (D2D) link. Inband Access links reuse (parts of) the same band that the WAN operates in. D2D discovery and communication are currently being studied in the Standards \cite{3GPP22803}. The network topology is depicted in Fig.~\ref{fig:sysModel}. One of the main motivations for the proposed two-hop architecture is to leverage the large number of UEs that are part of a cellular network but are idle most of the time, and can be co-opted to improve overall system performance by relaying. However, this poses challenges in terms of defining new interference management, relay association, and UE power management techniques. In this paper, we address these challenges and show the gains of the proposed architecture through system simulations using the methodology adopted in 3GPP \cite{3GPP36814}. \begin{figure} \centering \scalebox{1}{\includegraphics[trim = 0mm 7mm 0mm 2mm, clip, width=3.2in]{sysModel.png}} \vspace{-0.1in} \caption{System model. Active UEs connect to the eNB through relay UEs in their vicinity, or directly if no nearby UE is suitable for relaying. The Access link between active and relaying UEs is a D2D link, whereas the Backhaul between the relaying UE and the eNB is the conventional cellular link.} \label{fig:sysModel} \vspace{-0.2in} \end{figure} Layer 3 Relay Nodes were standardized in LTE Release 10 \cite[Sec.~4.7]{3GPP36300Rel10} and further enhancements are under consideration as part of a wider heterogeneous network study.
These Relay Nodes, however, are a fusion of a scaled-down lower power base station (eNB) and an ordinary UE: they appear as an eNB to the UEs being relayed, and they mimic some of the UE functions on the Backhaul to the Donor eNB. Due to cost and power, Relay Nodes are not envisioned as add-ons to ordinary UEs but are designed as stand-alone devices. Moreover, since inband relays create further edges in the network, their deployment typically requires site planning \cite{BulakciSalehHama12} and advanced interference coordination schemes \cite{DamnjanovicMontojoTingfang11}. Both these factors limit the number of Relay Nodes when compared with the ubiquity of UEs. The proposed design exploits that ubiquity and repurposes UEs as relays, thus creating a nearly one-to-one ratio between active devices and relays. The most important gains of the proposed design are summarized in the following: \begin{itemize} \item {\bf Exploiting shadow-diversity through relaying:} a UE's throughput depends to a large extent on the shadowing and pathloss to the serving base station. However, the shadowing is known to be uncorrelated over short distances of a few tens of meters \cite[Sec.~B.1.2.1.1]{3GPP36814}\cite{Gudmundson91}. We exploit this fact to find relay UEs that are a short distance away but have much better geometry to the serving base station. We call this shadow-diverse relaying. In Section \ref{sec:caseOppRly} we show that shadow-diverse relaying can produce an improvement of over $10$ dB in median SINR, which leads to a $200$\% increase in median throughput, and in Section \ref{sec:implementation} we propose power-efficient protocols for finding and associating with such a relay.
\item {\bf Uplink spectrum for Downlink traffic:} in a typical (FDD) cellular system, DL spectrum is more congested than UL spectrum -- motivated by this and constrained by certain regulatory limitations on UEs transmitting in DL spectrum, we propose that the Access links use the UL spectrum for {\em both} directions. Other benefits of reusing UL spectrum, in particular simpler interference management, are discussed in Section \ref{sec:caseUlSpect}. \item {\bf Access$\leftrightarrow$UL interference management:} in a two-hop relay architecture, interference management is needed across Backhaul and Access links as well as among Access links themselves. However, typically the longer Backhaul link proves to be the throughput bottleneck -- motivated by this, we propose an underlay approach for Access links by explicitly managing their interference to the base station (and hence the Backhaul link) via tight power control to the base station, and with minimal signaling overhead to the system. Note that the areas of low throughput (or poor pathloss to the base station) are naturally conducive to the reuse of UL resources. Note also that a power-control-based solution for Access-to-WAN interference management is feasible only because of the proposed architecture of using UL spectrum for Access links; a similar approach would be harder to achieve on the DL spectrum. Lastly, given the spatial and temporal (due to scheduling) sparsity of UL transmissions, the UL-to-Access interference can be left unmanaged, as shown in Section \ref{sec:caseUlSpect} (Fig.~\ref{fig:spatialSigPow}). \end{itemize} The rest of the paper is organized as follows. We demonstrate potential gains of shadow-diverse relaying in Section \ref{sec:caseOppRly}. We justify the reuse of UL spectrum for the Access link and introduce some of the key interference management ideas in Section \ref{sec:caseUlSpect}.
We provide a detailed design for relay association as well as interference management, and provide system simulation results in Section \ref{sec:propDesign}. A brief discussion on implementation and technology aspects is presented in Section \ref{sec:implementation} before concluding the paper in Section \ref{sec:conclusion}. \section{A case for shadow-diverse relaying} \label{sec:caseOppRly} The ubiquity of (D2D-enabled) user devices that can also act as relays -- even if over only a few tens of meters -- can provide an advantage akin to the user walking outside a building for better reception. To demonstrate this quantitatively, we look at the nature of spatial shadowing correlation as well as its impact on DL and UL SINRs. It was experimentally shown \cite{Gudmundson91} that shadowing (on the dB scale) can be modeled as a spatially correlated Gaussian random field, with the normalized autocorrelation between two points at distance $\Delta d$ given by $$ R(\Delta d) = \exp\left(-\frac{\Delta d}{d_{corr}}\right), $$ where the correlation distance $d_{corr}$ depends on the environment. Indeed, 3GPP has adopted the same model for LTE evaluation \cite[Sec.~B.1.2.1.1]{3GPP36814} with $d_{corr}$ ranging from 6m to 50m for various urban and indoor environments. Moreover, at any point, the shadowing values with respect to different base stations have a cross-correlation of $\frac{1}{2}$. Fig.~\ref{fig:concept} illustrates the spatial correlation of shadowing for $d_{corr}=25m$ (and $\sigma=7dB$). Other details of the WAN and D2D channel models are given in the Appendix. \begin{figure} \centering \scalebox{1}{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=3.1in]{shadowing.png}} \caption{Harvesting multiuser diversity of the nearby idle devices, at the scale of shadowing. } \label{fig:concept} \end{figure} \begin{figure*} \centering \scalebox{1}{\includegraphics[trim = 0mm 0mm 5mm 0mm, clip, width=6.4in]{sinrIciUpperBounds.png}} \caption{SINR and ICI CDFs.
Improvement in average (left) DL and (middle) UL SINR if all active devices were replaced by their relays. Improvement in UL predominantly comes from the (right) reduction in ICI.} \label{fig:upperBound} \end{figure*} Next we quantify the potential gains of using a nearby device with better DL SINR as a {\em relay} between the base station and the {\em edge} device\footnotemark. \footnotetext{The term `edge' device will be used to refer to the relayed UE, \emph{i.e.}, the UE at the edge of the two-hop link.} Fig.~\ref{fig:upperBound} shows the improvements in average DL and UL SINRs {\em if the active devices were replaced by their respective neighbor with the best DL SINR.} By `neighbor' we mean any device with $\leq$85dB D2D pathloss to the active device (or equivalently, within about 50m distance). The key observation is that {\em a few tens of meters is a sufficient relay search radius to exploit the spatial variability of shadowing}, improving median DL SINR by 13dB and UL by 7dB. Note that the improvement in UL SINR is predominantly due to the reduction in inter-cell interference (ICI), as shown in Fig.~\ref{fig:upperBound}(right), and to a lesser extent, due to the slightly stronger received signal at the serving eNB when the selected neighbor has better pathloss than the original active device. Indeed, a device can use a different criterion (than DL SINR) for selecting a UL relay, \emph{e.g.}, the ratio of serving to interfering WAN channel pathloss, or an added requirement that the pathloss of the relay to the serving base station be better than that of the edge device itself. Opting for simplicity, we defer such optimizations to future work. An analytical approximation of the shadow-diversity offered by nearby devices, for a specific relay selection scheme, is also provided in \cite{CalcevBonta09}.
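The decorrelation that makes a 50m relay search radius sufficient can be illustrated with a small Monte-Carlo sketch (ours, not from the paper; it uses the $\sigma=7$dB, $d_{corr}=25$m parameters quoted above): two points $\Delta d$ apart are assigned jointly Gaussian shadowing with correlation $R(\Delta d)=\exp(-\Delta d/d_{corr})$, so at the 50m search radius the correlation is already down to $\exp(-2)\approx 0.14$.

```python
import numpy as np

def correlated_shadowing(delta_d, d_corr=25.0, sigma_db=7.0, n=200_000, seed=0):
    """Sample shadowing (in dB) at two points delta_d meters apart, with
    autocorrelation R(delta_d) = exp(-delta_d / d_corr) (Gudmundson model)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-delta_d / d_corr)
    cov = sigma_db**2 * np.array([[1.0, rho], [rho, 1.0]])
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return samples[:, 0], samples[:, 1]

# At the 50 m relay search radius, a neighbor's shadowing is nearly
# independent of the active device's shadowing:
s_active, s_neighbor = correlated_shadowing(delta_d=50.0)
print(np.corrcoef(s_active, s_neighbor)[0, 1])  # close to exp(-2) ~ 0.135
```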
\section{A case for using the UL spectrum for Access links} \label{sec:caseUlSpect} In addition to regulatory restrictions on UEs transmitting in DL spectrum, there is also a technical case for using the UL spectrum, built on two complementary observations: that existing interference in UL spectrum is naturally conducive to spatial reuse by short Access links, and that the interference to the base station introduced by Access links is easily managed through power control to the base station. \subsection{WAN-to-Access-link interference} \label{subsec:wan2accIM} Let us first consider the WAN-to-Access-link interference. In the UL spectrum, the sources of interference are weaker UE transmissions arriving over a weaker D2D channel; by contrast, the interference in DL spectrum would originate from higher-power base station transmissions arriving over a stronger WAN channel (due to antenna placement). For comparison, Fig.~\ref{fig:spatialSigPow} shows a snapshot, in an arbitrary subframe, of the spatial distribution of power in the UL vs the DL spectrum, as seen by a UE. We note that, except for a few bright blobs centered at the UEs {\em scheduled for UL transmission in this subframe}, the UL spectrum is amenable to reuse in the remaining {\em darker} space. As different UEs are scheduled from subframe to subframe, almost every location gets to see low interference for some fraction of the time. \begin{figure*}[!] \centering \scalebox{1}{\includegraphics[trim = 7mm 0mm 40mm 5mm, clip, width=6.4in]{rxSigPowSpatial.png}} \caption{Snapshot of total signal power (left) in UL spectrum; the -90dBm contour curve is plotted in white for reference. One UE per sector is scheduled and allocated the entire 10MHz bandwidth.
(Right) The same in the DL spectrum.} \label{fig:spatialSigPow} \end{figure*} \begin{figure*} \centering \scalebox{1}{\includegraphics[trim = 5mm 0mm 10mm 0mm, clip, width=3.2in]{distNearestIntf.png}} \scalebox{1}{\includegraphics[trim = 5mm 0mm 10mm 0mm, clip, width=3.2in]{rxPowAtRefpointCDF.png}} \caption{(Left) CDF of distance to the nearest UL transmitter on a given RB from point $\left(\mbox{ISD}/3,0\right)$. (Right) CDF of total received power in UL spectrum at point $\left(\mbox{ISD}/3,0\right)$; note the $80^{th}$ percentile is reasonably small (below -90 dBm).} \label{fig:temporalSigPow} \end{figure*} To further illustrate this temporal aspect, Fig.~\ref{fig:temporalSigPow}(left) gives the CDF of the distance to the nearest UL transmitter, on an arbitrary resource block (RB), from reference point $\left(\frac{\mbox{ISD}}{3},0\right)$ in the standard 2-tier, 19-cell deployment with an inter-site distance (ISD) of 500m. We note that the mean is about $0.22\times\mbox{ISD}=110$ meters -- much larger than the Access link `length' needed to exploit shadow-diversity. More specifically, Fig.~\ref{fig:temporalSigPow}(right) gives the CDF of the total received power at reference point $\left(\frac{\mbox{ISD}}{3},0\right)$ from UL transmissions in the 19-cell deployment. We note that even the $80^{th}$ percentile is reasonably small (below -90 dBm). {\em We conclude that, as-is, the UL spectrum is amenable to spatial reuse by Access links.}\footnotemark~ Next we consider the Access-link-to-WAN interference.
\footnotetext{By contrast, spatial reuse of DL spectrum for Access requires dominant-interference cancelling receivers \cite{DamnjanovicMontojoJoonyoung12} and cell range expansion through handover biasing and adaptive resource partitioning \cite{DamnjanovicMontojoTingfang11}.} \subsection{Access-link-to-WAN interference} Just as all connected UEs are power controlled on the UL to have nearly equal signal strength at the base station \cite[pp.~464-471]{SesTouBak09}, all Access link transmissions can also be power controlled to ensure that the interference (at the base station) introduced by each link is at least, say, $20$dB below the UL signal strength. That is to say, the Access link UEs can derive their transmission power (for the Access link) from their UL transmit power with a 20dB backoff. This backoff approach is a simplification of the scheme proposed in \cite{JanisYuDoppler09} for coexistence of D2D communication with cellular, and lends itself favorably to the relay use case: the {\em farther} the UE from the base station (and thus the more in need of a relay), the higher the allowed transmission power for Access link communication. This underlay framework admits multiple optimizations such as: \begin{itemize} \item The transmit power backoff parameter can be dynamically set and broadcast by the base station to measure and control the total interference from Access links. \item With the interference budget per Access link fixed, the total number of simultaneously active Access links can also be implicitly controlled by the rates allocated to relays on the Backhaul, as Access links naturally shut down when the relay buffer is empty (DL) or full (UL). \item Lastly, the base station can calculate its {\em interference price} and precisely decide whether an Access link is too costly for the rate improvement it offers, and if so schedule that edge UE directly without the relay. \end{itemize} Investigation of these optimizations, once again, is deferred to future work. 
\subsection{Access-link-to-Access-link interference} Finally, we comment on Access-link-to-Access-link interference. Access links being short, low power and sparse, they can be allowed to reuse the entire UL spectrum and need not be actively scheduled by the base station (unlike WAN links, which share the UL resources). However, as an implication of the Birthday Problem \cite{Grimmett3rdEd}, it is likely that even in a sparse population a few Access links strongly interfere with each other and therefore prove ineffective without further interference coordination. In Section \ref{sec:simResults} we present simulations with and without interference management (TDMA) among Access links, and note that most of the throughput gain is realizable even without any interference management or sophisticated WAN scheduling. \section{A candidate design for a D2D relay system}\label{sec:propDesign} Having explored the extent of SINR improvements provided by shadow-diverse relaying (Section \ref{sec:caseOppRly}) and the feasibility of reusing UL spectrum for short Access links (Section \ref{sec:caseUlSpect}), we now lay out a candidate system design and investigate its performance through simulation. In picking a candidate design to demonstrate the concept, we opt for simplicity, deferring sophisticated optimizations to future work. \begin{definit} {\em Relay candidate.} A device is a relay candidate of an active device if \begin{enumerate}[i.] \item the two devices are in the same sector, and \item the pathloss between the two devices is smaller than $p^{max}_{acc}$. \end{enumerate} \end{definit} We limit the link budget for the Access link, $p^{max}_{acc}$, to a very conservative value of 85dB (\emph{i.e.}, about a 50m link length). Relay search over 50m yields significant opportunity to exploit shadowing diversity, as seen in Section \ref{sec:caseOppRly}. 
Moreover, a short Access link means that the transmit power can be curtailed, enabling an underlay approach as well as spatial reuse across Access links. From amongst the relay candidates, a relay is selected as follows. \begin{definit}\label{def:rlySelect} {\em Relay selection.} From amongst its relay candidates, an active device selects one that has the highest DL SINR, provided it is also higher than the active device's own DL SINR. \end{definit} A device that does not select a relay (because, \emph{e.g.}, the device itself has the best DL SINR amongst the candidates) naturally connects to the base station directly. Moreover, multiple active devices may sometimes select a common relay, although this is rare given the small search neighborhood and the spatial sparsity of active devices (see Fig.~\ref{fig:sysModel}). Instead of orthogonalizing WAN and Access links, we allow reuse 1 of UL resources on each Access link. That is, each Access link is allowed to be active at all times -- barring a natural half-duplex constraint described later -- in the entire UL spectrum (FDD), or in the entire spectrum during UL subframes (TDD). However, the power for Access link transmission is tightly controlled to limit the interference to the WAN, as follows. \begin{definit}\label{def:txPower} {\em Access link transmit power.} Transmit power on the Access link by any device (edge or relay) is $P_{UL}-\Delta_{acc}$ dBm, where $\Delta_{acc}$ is a design parameter and $P_{UL}$ dBm is the UL transmit power of the device if the device were allocated the entire UL spectrum. \end{definit} We set the backoff parameter $\Delta_{acc}$ to $20$dB in subsequent simulations. Since the devices are power controlled on the UL, the above method of deriving the Access link power from the UL power can be seen as allocating each device an interference budget that the device is allowed to cause to the WAN. 
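The two definitions above can be summarized in a few lines of Python; the function and variable names are illustrative, not part of any specification:

```python
def select_relay(own_dl_sinr_db, candidates):
    """Relay selection (Definition 2): pick the candidate with the highest
    DL SINR, but only if it beats the active device's own DL SINR.
    candidates: dict mapping relay id -> DL SINR in dB."""
    if not candidates:
        return None
    best = max(candidates, key=candidates.get)
    return best if candidates[best] > own_dl_sinr_db else None

def access_tx_power_dbm(p_ul_dbm, delta_acc_db=20.0):
    """Access link power (Definition 3): the device's full-bandwidth UL
    transmit power backed off by the design parameter Delta_acc."""
    return p_ul_dbm - delta_acc_db
```

Since $P_{UL}$ is itself set by UL power control, a cell-edge device (high pathloss, high $P_{UL}$) automatically receives a larger Access link power allowance than a cell-center relay.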
It is worth noting that the Access link transmit power implicitly depends on the device's pathloss {\em to the serving base station}. As a result, the edge UEs, which typically have worse pathloss to the base station than the relay UEs, tend to have a higher transmit power allowance than the relay UEs. Therefore we expect edge-to-relay (\emph{i.e.}, UL) Access links to have better SINR than the relay-to-edge links. A word regarding the half-duplex constraint on Access links: \begin{definit} {\em Half-duplex constraint and relative priority of UL vs Access link.} A device is not allowed to be simultaneously active on the WAN UL and the Access link. Therefore, the Access link is not active in the subframes where the relay or the edge device is scheduled by the base station for an UL transmission. \end{definit} While the Access links are not directly scheduled by the base station, the `on' duration of an Access link implicitly depends on the rate being allocated to the relay on the (base station scheduled) Backhaul. More specifically, \begin{definit} {\em Backhaul and Access link coupling through relay buffer.} Relay UEs are configured with a small (edge-UE-specific) buffer to hold the relayed traffic temporarily. When the buffer is full, the link feeding the buffer shuts down. That is, \begin{itemize} \item if the buffer is associated with UL traffic, then the Access link shuts down (since the UL relay buffer is fed over the Access link by the edge UE); \item if the buffer is associated with DL traffic, then the base station stops scheduling the relay Backhaul. \end{itemize} Similarly, when the buffer is empty, the link draining the buffer naturally shuts down, namely, the Access link for the DL traffic and the Backhaul for the UL. \end{definit} Lastly, we describe a possible interference management scheme amongst Access links. 
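The buffer-coupling and half-duplex rules above can be condensed into a per-subframe gating function; the names and the single shared buffer capacity are illustrative assumptions of this sketch:

```python
def link_gates(buf_dl, buf_ul, cap, ul_grant):
    """Per-subframe link gating at a relay.
    buf_dl / buf_ul: current backlog in the DL / UL relay buffers.
    cap: buffer capacity; ul_grant: relay or edge UE holds a WAN UL grant."""
    gates = {
        "backhaul_dl": buf_dl < cap,  # eNB->relay feed stops when DL buffer is full
        "access_dl":   buf_dl > 0,    # relay->edge drain stops when DL buffer is empty
        "access_ul":   buf_ul < cap,  # edge->relay feed stops when UL buffer is full
        "backhaul_ul": buf_ul > 0,    # relay->eNB drain stops when UL buffer is empty
    }
    if ul_grant:  # half-duplex: no Access activity during a WAN UL transmission
        gates["access_dl"] = gates["access_ul"] = False
    return gates
```

Note that the Backhaul gates are only offered to the eNB scheduler; the Access gates take effect autonomously at the devices.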
\begin{definit} {\em Interference management amongst Access links.} Two Access links are said to interfere with each other if the (pairwise) SIR at either link due to the interference from the other is below $\gamma_{acc}$. Between any two interfering links in a given subframe, one link is chosen uniformly at random to yield to the other. Therefore, an Access link transmits if it does not yield to any interfering link in that subframe. \end{definit} In short, interfering Access links are active in a time division multiple access (TDMA) fashion such that an Access link with $d$ interfering links is active about $\frac{1}{d+1}$ fraction of the time. This interference management/multiple access (MAC) approach is derived from the distributed FlashLinQ MAC algorithm; see \cite[Sec.~I-A]{WuTavildarShakkottai10} for the motivation behind SIR-based yielding and \cite[Sec.~III]{WuTavildarShakkottai10} for details of distributed SIR computation and yielding. Note that if the SIR threshold $\gamma_{acc}$ is set to $-\infty$, no local interference management amongst Access links takes place (this is one of the simulated cases in Section \ref{sec:simResults}). The other case is $\gamma_{acc}=5$dB. To recap, each Access link transmission spans the entire UL spectrum and each Access link is always `on' except when (1) the relay buffer is empty (in case of DL) or full (in case of UL), (2) the relay or the edge device is scheduled for an UL transmission, or (3) the Access link has yielded to another Access link (assuming the Access link interference management scheme is in use). 
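The yielding rule amounts to drawing a fresh random priority order each subframe and letting a link transmit only if it outranks all of its interfering neighbors, which yields the $\frac{1}{d+1}$ duty cycle for a clique of mutually interfering links. A minimal idealized model (in the FlashLinQ spirit of \cite{WuTavildarShakkottai10}, but not the distributed SIR-computation mechanism described there):

```python
import random

def transmitting_links(interferers, rng):
    """One subframe of random yielding among Access links.
    interferers: dict mapping link id -> set of interfering link ids
    (symmetric). A link transmits iff it outranks every interferer."""
    prio = {link: rng.random() for link in interferers}
    return {link for link in interferers
            if all(prio[link] > prio[other] for other in interferers[link])}
```

For a clique of $d+1$ mutually interfering links, each link transmits in about $\frac{1}{d+1}$ of the subframes, as claimed above.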
\begin{figure*} \centering \scalebox{1}{\includegraphics[trim = 10mm 0mm 10mm 0mm, clip, width=6.4in]{dlInfWithoutIm.png}} \caption{Improvement in DL SINR and throughput.} \label{fig:dlwoIM} \vspace{-0.1in} \end{figure*} \begin{figure*} \centering \scalebox{1}{\includegraphics[trim = 10mm 0mm 10mm 0mm, clip, width=6.4in]{ulInfWithoutIm.png}} \caption{Improvement in UL SINR and throughput.} \label{fig:ulwoIM} \vspace{-0.1in} \end{figure*} \subsection{Simulation results and discussion} \label{sec:simResults} We simulate the above system under full-buffer traffic. Scheduling and resource allocation in the base station for both DL and UL are performed by a proportional fair (PF) scheduler. In subframes where the buffer status of the relay does not permit scheduling the relay Backhaul, the proportional fair scheduler instead considers scheduling the UE directly. Therefore, the end-to-end links that are Backhaul limited will have (nearly) all the data flowing through the relay, whereas those that are Access link limited will have as much data flowing through the relay as permitted by the Access link and the remaining share of the UE's data flowing directly to/from the edge UE. Access links can be the bottleneck due to a low transmit power budget (see Definition \ref{def:txPower}), strong interference from another nearby Access link, or being {\em surrounded} by many UL-transmitting UEs. Relays that offer less than a $5\%$ increase in rate over going directly are dropped, and those UEs are scheduled directly by the base station. A relay-buffer-and-channel-aware scheduler that opportunistically decides between the direct and through-relay paths can yield higher throughput; such optimizations once again are deferred to future work. Next we discuss the simulation results, first for the case without interference management (IM) amongst Access links and then for the case with IM. 
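The proportional fair rule used by the eNB scheduler can be sketched as follows; the exponential averaging time constant and all names here are illustrative, not taken from the simulated implementation:

```python
def pf_pick(inst_rate, avg_thr):
    """Serve the UE maximizing instantaneous rate / average throughput."""
    return max(inst_rate, key=lambda u: inst_rate[u] / max(avg_thr[u], 1e-12))

def pf_update(avg_thr, served, inst_rate, tc=100.0):
    """Exponentially averaged throughput update after one subframe."""
    for u in avg_thr:
        r = inst_rate[u] if u == served else 0.0
        avg_thr[u] += (r - avg_thr[u]) / tc
```

In the relaying system, when the relay buffer blocks the Backhaul, the PF metric is simply evaluated for the direct link of that UE instead.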
\subsubsection{Without interference management amongst Access links} First we present results for the system without interference management (IM) amongst Access links, \emph{i.e.}, the case with $\gamma_{acc}=-\infty$. DL-specific user throughput results are summarized in Fig.~\ref{fig:dlwoIM} and UL-specific results in Fig.~\ref{fig:ulwoIM}. We make the following observations: \begin{enumerate} \item Average DL SINR improves significantly with the use of relays, with the median improving by about $10$dB. Note that the average DL SINR of a UE is a convex combination of the UE's and its relay's DL SINRs, with weights determined by how often each was scheduled. In the low SINR regime, the CDF matches well with the upper bound obtained from Fig.~\ref{fig:upperBound} (where all active UEs were replaced by their relays). This shows that these low SINR UEs are Backhaul limited and thus almost exclusively scheduled through their relay. This is because these UEs (and their relays) tend to have poor pathloss to the base station and thus higher Access link transmit power (see Definition \ref{def:txPower} and the subsequent discussion). Overall, the proposed design offers about a $110\%$ increase in both the $5^{th}$ and $50^{th}$ percentiles of average DL rate per UE. \item Average UL SINRs and rates also improve, though not as close to the upper bound as in the case of DL. Recall that the UL upper bound ignores the Access-to-eNB interference (from both the edge-to-relay/DL and relay-to-edge/UL links). Overall, the proposed design offers increases of $240\%$ and $40\%$ in the $5^{th}$ and $50^{th}$ percentiles of average UL rate per UE, respectively. \item Fig.~\ref{fig:intf} provides further insight into the UL results. The figure shows the CDFs of various interferences and signal strengths at the base station antenna (capturing the fluctuation of these quantities from subframe to subframe). Legend keys are listed for the curves from right to left. 
We note that the gain in UL comes from both the reduction in ICI and the increase in signal strength due to the use of relays. But while the ICI is reduced, Access-to-eNB interference is added due to the use of the relays (from both edge-to-relay/DL and relay-to-edge/UL links). Even though the CDF of Access-to-eNB interference is dominated by that of ICI, sometimes it is indeed the Access-to-eNB interference that limits UL SINR. As will be seen in the next section, this Access-to-eNB interference is further weakened by the use of IM amongst Access links. \item Fig.~\ref{fig:acc} provides the CDF of average Access link SINRs, separately for the relay-to-edge (DL) and edge-to-relay (UL) directions. As expected, the edge-to-relay direction has better SINR since edge UEs have higher Access link transmit power (see Definition \ref{def:txPower} and the subsequent discussion). Note that Access links can achieve significant rates even at low SINRs due to reuse 1 of the entire UL spectrum on each link. Links with poor average SINR can be those with a low transmit power allowance, those {\em surrounded} by UL UEs, or those co-located with another Access link (this factor is mitigated by the Access-to-access IM scheme presented in the next section). \item Some other quantities of interest are as follows: $85\%$ of the DL UEs and $83\%$ of the UL ones are (at least partially) scheduled through relays, while the rest are scheduled only directly. On average $4.7$ Access links are simultaneously active per sector, as also reflected in the approximately $13$dB separation between the Access-to-eNB interference CDF and the signal strength CDF in Fig.~\ref{fig:intf}. \end{enumerate} \begin{figure} \centering \scalebox{1}{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=3.2in]{varIntf-Inf.png}} \caption{Various interferences and signal strengths at eNB, with ($\gamma_{acc}=5$dB) and without ($\gamma_{acc}=-\infty$) IM amongst Access links. 
Thin lines denote the case with IM.} \label{fig:intf} \end{figure} \begin{figure} \centering \scalebox{1}{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=3.0in]{accSinrInf.png}} \caption{Access Link SINRs (forward and reverse). Thick curves are without IM amongst Access links, whereas the thin curves are with IM.} \label{fig:acc} \vspace{-0.2in} \end{figure} \begin{figure} \centering \scalebox{1}{\includegraphics[trim = 7mm 0mm 10mm 0mm, clip, width=3.2in]{rateGainsWithIM.png}} \caption{Improvement in DL and UL throughput, with interference management amongst Access links.} \vspace{-0.2in} \label{fig:gainIM} \end{figure} \subsubsection{With interference management amongst Access links} With IM (\emph{i.e.}, $\gamma_{acc}=5$dB), Access links can support higher rates; Fig.~\ref{fig:acc} gives the Access link SINR with and without IM. Improved SINR can result in more traffic routed through relays (for bottleneck Access links) or reduced Access-to-eNB interference (due to a reduced `on' duration). The gains (over the legacy system) with IM are summarized in Fig.~\ref{fig:gainIM}. We see $160\%$ and $135\%$ (vs $110\%$ without IM) increases in the $5^{th}$ and $50^{th}$ percentiles of average DL rate per UE, respectively. In the case of UL, we see $330\%$ and $65\%$ (vs $240\%$ and $40\%$ without IM) increases in the $5^{th}$ and $50^{th}$ percentiles, respectively. This is due to the reduction in ICI and Access-to-eNB interference (shown in Fig.~\ref{fig:intf}) as a by-product of Access link IM. There are now on average $2.5$ Access links simultaneously active per sector (vs $4.7$ links without IM). ICI is also lower with IM; this is because more UL relays are being used ($94\%$ vs $83\%$) and more UL traffic now flows through relays rather than directly from the edge UE to the eNB. The number of DL UEs with relays has also increased, to $96\%$ (vs $85\%$ without IM). 
\section{Implementation and technology aspects} \label{sec:implementation} We discuss some of the signaling and implementation aspects mainly in the context of a 3GPP LTE system. In particular, we discuss protocols for relay discovery, scheduling of access links, and implications for power consumption. \subsection{Signaling for relay selection} Here, we present some protocols for power-efficient discovery of idle mode UEs that can serve as a relay. The proposed design crucially depends on synchronization from the eNB and the ability of idle UEs to measure downlink SINR without connecting to the eNB, both of which are supported in LTE. As an example, consider $2$ UL subframes reserved for relay discovery every second, which amounts to $0.2\%$ of the resource. Within these two UL subframes, PUSCH resources ($44$ RB-pairs per subframe) are reserved for relay discovery signaling. Each idle mode UE selects one out of the $88$ RB-pairs to transmit on, and transmits using a rate-$1/4$ QPSK code carrying 70 information bits. The information bits carry (i) the UE identity, (ii) the UE's DL SINR, and (iii) access link information such as the maximum transmit power on the access link. Note that $88$ resources are more than enough to avoid congestion, since we typically see 20--30 candidate relays in the vicinity. So, even with spatial reuse of the relay discovery resource, a UE should be able to detect most of the candidate relay UEs. Additionally, certain optimizations can be done to stop UEs with low DL SINRs from participating in relay discovery, which can further reduce the congestion as well as the power penalty. Now, an active mode UE that wants to use a relay receives on these two subframes and decodes up to $88$ relay discovery messages. Based on these messages it determines the relays with the highest DL SINR, subject to certain restrictions on the access link such as a maximum pathloss. 
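The resource arithmetic above can be checked directly; the figure of 25 nearby candidate relays is an assumed typical value from the 20--30 range quoted:

```python
# Discovery overhead: 2 UL subframes per second out of 1000 LTE subframes
overhead = 2 / 1000.0                      # = 0.2% of UL resources
resources = 44 * 2                         # 88 reserved RB-pairs per second

# Chance that a given idle UE picks the same RB-pair as at least one of
# n - 1 other nearby candidates choosing uniformly at random
n = 25
p_collide = 1 - (1 - 1 / resources) ** (n - 1)
```

With $\sim$25 candidates per neighborhood the per-UE collision probability is roughly 24\%, so an active UE can still decode most discovery messages; suppressing low-DL-SINR UEs reduces congestion further.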
The active mode UE can select the best idle mode UE to relay its traffic, or alternatively the measurements can be reported to the eNodeB, which can make the final relay association decision. We omit signaling for relay association from this paper. \subsection{Signaling for interference management} Due to the introduction of access links on the uplink spectrum, some signaling is needed for managing interference between access links and uplink UEs. {\em Interference from access link to the WAN} is managed through power control. For signaling, we propose to broadcast a single parameter indicating the maximum tolerable interference level from an access link to the base station -- this information could, for example, be carried on one of the system information broadcast (SIB) channels. It is possible to further optimize this and broadcast the power cap per uplink assignment (as the received signal strength would vary based on the UE scheduled on the uplink). This would require the information to be sent per assignment on the PDCCH and can significantly increase the overhead. {\em Interference from uplink UEs to access links} is managed based on statistical multiplexing as demonstrated in Section \ref{subsec:wan2accIM}, so no new signaling is needed. {\em Interference between access links} can be managed through the eNB. In particular, it is proposed that UEs measure the periodically scheduled SRS signals from other UEs, determine which UEs can cause significant interference to a receiver, and report that to the base station. The base station can then instruct access links to orthogonalize in time and/or frequency. This will be done through per-UE dedicated signaling that would be carried on PUSCH/PDSCH. Alternatively, the interference can be managed in a distributed way as shown in \cite{WuTavildarShakkottai10}. \subsection{Scheduling} The Direct and Backhaul links can each be thought of as a traditional LTE link and are scheduled as such. 
For the access link, scheduling is implicit, except for slow-timescale scheduling if needed to manage access-to-access interference. \subsection{Impact on power consumption} One of the traditional concerns for opportunistic relaying is the power consumed at the relay UE. However, we look at the power consumption from a system perspective, and argue that the increase in power consumption is not significant. In particular, we consider: {\em Probability of relaying:} given the large number of idle UEs in a network at any time, the chance that a UE is selected to relay traffic for another UE is fairly small (about $4\%$ for the numbers used in this paper). Additionally, a UE with a low battery can choose not to participate in the relay discovery protocol. {\em Modem vs device power consumption:} for a typical smartphone, the modem is not the biggest power consumer (the LCD backlight and application processor can consume comparable power \cite{CarrollHeiser10}). However, for a relaying UE only the modem needs to stay on, so the corresponding power drawn is much lower than what the device would consume in a typical scenario. {\em No power amplifier over Access link:} since Access link transmissions are at 3dBm or lower, the UE's power amplifier (PA) can operate in a low-power, high-efficiency mode over the Access link (see \cite[Fig.~7]{JensenLauridsen12}). As before, the high-power, low-efficiency PA operation may be needed only at the device active on the cellular link. {\em Power saving due to higher throughput:} power consumption in the modem is typically dictated by the ON time; with a $50\%$ median improvement in throughput, the ON time, and hence the modem energy, for a given amount of bits exchanged with the network is reduced to about two thirds. \section{Conclusion} \label{sec:conclusion} We presented a new architecture for two-hop relaying in cellular LTE networks, built on D2D communication. 
The key idea proposed was to exploit the shadow diversity of idle UEs and reuse UL spectrum to relay both UL and DL traffic. We argued that reusing UL spectrum facilitates easier interference management between Access and Backhaul links. We validated the proposed architecture through simulations based on the 3GPP methodology, with the proposed scheme showing a 110\% gain in the median DL throughput and a 40\% gain in the median UL throughput. Finally, we presented a preliminary design for realizing the proposed scheme at a system level, including signaling for relay selection and interference management. Throughout the paper we also pointed out various potential optimizations regarding relay selection, adaptive interference management, and scheduling, which will be investigated in future work.
\section{introduction} It is widely accepted that, under ordinary circumstances, the standard Ohm's law, \begin{eqnarray} \vec{J}=\sigma\vec{E}, \label{1} \end{eqnarray} which relates the electric field $\vec{E}$ to the current density $\vec{J}$ flowing in a linear, isotropic and homogeneous medium with electric conductivity $\sigma$, provides a very good description of electrodynamic processes occurring in conductors. However, for gases and plasmas, or even for high frequency electromagnetic phenomena, the possibility of non-Ohmic features has attracted the interest of many researchers \cite{Wannier,Llebot,Krall,Boyd,Jou88,Jou96}. In those situations, a generalized expression for Ohm's law has been proposed, \begin{eqnarray} \left(1+\tau\frac{d}{dt}\right)\vec{J}=\sigma\vec{E}, \label{2} \end{eqnarray} where $\tau$ is a constant with the dimension of time and the derivative is the total time derivative. The above generalized law can be deduced rigorously, either on the basis of kinetic theory \cite{Krall,Boyd} or in the framework of extended irreversible thermodynamics \cite{Jou88,Jou96,Israel}. Clearly, in the limit $\tau\rightarrow0$, the standard Ohm's law is recovered. More interestingly, the current density does not immediately vanish as soon as the electric field is turned off. As a consequence of inertial effects due to the charge carriers in the conductor, it approaches zero on a time scale determined by $\tau$, which thereby quantifies the relaxation time of the current density (see Appendix A). In principle, the above time dependent correction to Ohm's law is negligible under stationary conditions, as well as in low frequency regimes. However, it may have several interesting physical consequences for the behavior of transient electromagnetic fields, and also in high frequency limits. 
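To make the relaxation statement explicit, set $\vec{E}=0$ in Eq. (\ref{2}); the resulting homogeneous equation integrates immediately (this is the elementary calculation behind the appendix referred to above):

```latex
\begin{eqnarray}
\tau\frac{d\vec{J}}{dt}+\vec{J}=0
\quad\Longrightarrow\quad
\vec{J}(t)=\vec{J}(0)\,e^{-t/\tau},\nonumber
\end{eqnarray}
```

so the current decays to $e^{-1}$ of its initial value after a time $\tau$, which identifies $\tau$ as the relaxation time of the current density.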
In particular, as discussed in \cite{Cuevas}, it should considerably modify the classical results for the space attenuation of electromagnetic waves \cite{Jackson} and the time damping of magnetic fields \cite{Landau} in rigid conducting media. In this work, new results for attenuation and damping of electromagnetic fields in rigid conducting media are derived under the combined action of inertia and displacement current. The classical conceptions of poor and good conductors are extended on the basis of an effective electric conductivity, which depends on both the wave frequency and the relaxation time. It is shown that the attenuation in good conductors of high frequency electromagnetic waves depends only on the relaxation time; that is, the penetration depth must saturate to a minimum value at sufficiently high frequencies. Such a result opens the possibility of measuring the relaxation time. It is also found that the influences of displacement current and inertia on the damping of magnetic fields are opposed to each other. That could explain why, under normal conditions, the decay time of magnetic fields is given approximately by the diffusion time. At very small length scales (at the nanoscale, for instance), the decay time could be given either by a fraction of the diffusion time or by the relaxation time, depending on whether displacement current or inertia, respectively, prevails over magnetic diffusion. However, it should be hardly observable in the latter case, under ordinary circumstances, given the smallness of the length scale. The paper is organized as follows. In Sec. II, we cast the basic equations for describing the influence of inertia and displacement current on attenuation and damping. In Sec. III, the attenuation of electromagnetic waves is discussed. The above-mentioned effective conductivity is introduced, and an expression for the ratio of time averaged magnetic to electric energy densities is derived in terms of both the wave frequency and the relaxation time. 
Then, the low and high frequency regimes are explored for the poor and good conductor limits. In Sec. IV, the damping of magnetic fields is discussed. In particular, it is shown that the actions of inertia and displacement current on damping are opposite to each other, and the decay time of magnetic fields is explored at very small length scales. In Sec. V, the main conclusions are summarized. \section{basic equations} In a rigid conductor, apart from the small massive charge carriers, there is no relative motion between its constituent parts. In that situation, the total time derivative in Eq. (\ref{2}) becomes a partial time derivative and the generalized Ohm's law takes the form \begin{eqnarray} \left(1+\tau\frac{\partial}{\partial t}\right)\vec{J}=\sigma\vec{E}. \label{3} \end{eqnarray} By combining Eq. (\ref{3}) with Maxwell's equations (recall that there is no free charge in the bulk of the conducting medium), \begin{eqnarray} \nabla\times\vec{B}&=&\mu\epsilon\frac{\partial\vec{E}}{\partial t}+\mu\vec{J},\;\;\;\;\;\;\;\;\;\;\nabla\cdot\vec{B}=0,\nonumber\\\nabla\times\vec{E}&=&-\frac{\partial\vec{B}}{\partial t},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\nabla\cdot\vec{E}=0, \label{4} \end{eqnarray} where $\mu$ and $\epsilon$ are the magnetic permeability and electric permittivity, respectively, of the medium, one may easily check that the space and time evolution of the magnetic field $\vec{B}$ is governed by \begin{eqnarray} \left(1+\tau\frac{\partial}{\partial t}\right)\nabla^2\vec{B}=\left(1+\tau\frac{\partial}{\partial t}\right)\mu\epsilon\frac{\partial^2\vec{B}}{\partial t^2}+\mu\sigma\frac{\partial\vec{B}}{\partial t}. \label{5} \end{eqnarray} In the limit $\tau\rightarrow0$, the classical space and time evolution of the magnetic field in a rigid conductor is recovered \cite{Jackson}. 
If we assume that both the magnetic and electric fields, as well as the current density, vary in space and time as $\sim\exp{\left(\imath\vec{k}\cdot\vec{x}-\imath\omega t\right)}$, where $\vec{k}$ and $\omega$ are the complex propagation vector and real angular frequency, respectively, of a monochromatic electromagnetic wave, Eqs. (\ref{3}) and (\ref{5}) lead to \begin{eqnarray} \left(1-\imath\omega\tau\right)\vec{J}_0=\sigma\vec{E}_0, \label{6} \end{eqnarray} \begin{eqnarray} \left(1-\imath\omega\tau\right)k^2=\left(1-\imath\omega\tau\right)\omega^2\mu\epsilon+\imath\omega\mu\sigma, \label{7} \end{eqnarray} respectively, where $\vec{J}_0$ and $\vec{E}_0$ are the complex vector amplitudes of the current density and electric field, respectively, and $k=\left(\vec{k}\cdot\vec{k}\right)^{1/2}$ is the complex mode number of the electromagnetic wave. In the limit $\tau\rightarrow0$, the corresponding classical results in Fourier space are recovered \cite{Jackson}. Eqs. (\ref{3}) to (\ref{7}) are the basic equations for describing the influence of inertia and displacement current on the space attenuation of electromagnetic waves and the time damping of magnetic fields in rigid conducting media. \section{space attenuation of electromagnetic waves} \subsection{General considerations} \subsubsection{Effective conductivity and attenuation factor} As it stands, Eq. (\ref{6}) gives a linear algebraic relation between the complex vector amplitudes $\vec{J}_0$ and $\vec{E}_0$. For our purposes, we rewrite it in the more convenient form \begin{eqnarray} \vec{J}_0=\sigma_{\rm eff}\exp{\left(\imath\varphi\right)}\vec{E}_0, \label{8} \end{eqnarray} where we introduce the effective electric conductivity of the rigid medium, \begin{eqnarray} \sigma_{\rm eff}=\frac{\sigma}{\sqrt{1+\omega^2\tau^2}}, \label{9} \end{eqnarray} and the time dephasing angle of the current density $\vec{J}$ with respect to the electric field $\vec{E}$, \begin{eqnarray} \varphi=\tan^{-1}\left(\omega\tau\right). 
\label{10} \end{eqnarray} The usual definitions of poor and good conductors \cite{Jackson} correspond to the limits $\sigma\ll\epsilon\omega$ and $\sigma\gg\epsilon\omega$, respectively. On the basis of Eqs. (\ref{8}) to (\ref{10}), we see that, when inertial effects are taken into account, those limits must be replaced by $\sigma_{\rm eff}\ll\epsilon\omega$ and $\sigma_{\rm eff}\gg\epsilon\omega$, respectively. Indeed, in the limit $\tau\rightarrow0$, the effective electric conductivity takes its ordinary value, and the current density and the electric field have the same time phase angle. As a consequence, the extended limits recover the corresponding classical ones. Moreover, we note that those extensions are in accordance with the observation that the conductivity of the medium generally depends on the frequency of the wave \cite{Jackson}. However, in still more general situations, like those involving anisotropic media, it is well established that the electric conductivity must be treated as a tensor \cite{Bladel}. In these cases, Eq. (\ref{2}) might be extended to \begin{eqnarray} \left(1+\tau\frac{d}{dt}\right)\vec{J}=\stackrel{\leftrightarrow}\sigma\vec{E}, \label{11} \end{eqnarray} where $\stackrel{\leftrightarrow}\sigma$ denotes the electric conductivity tensor. On the basis of Eq. (\ref{11}), we see that all classical frequency regimes and conduction limits would require revision due to the possible introduction of other physically relevant frequency and time scales. In addition, Eq. (\ref{7}) yields the dispersion relation between the complex mode number and real angular frequency of the electromagnetic wave, \begin{eqnarray} k^2=\frac{\omega^2\mu\epsilon}{1+\omega^2\tau^2}\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2+\imath\frac{\sigma}{\epsilon\omega}\right). 
\label{12} \end{eqnarray} Following standard lines \cite{Jackson}, we write the complex mode number as $k=\beta+\imath\alpha/2$, so that its real and imaginary parts are given by \begin{eqnarray} \left. \begin{array}{c} \beta \\ \alpha/2 \end{array} \right\}=\omega\sqrt{\frac{\mu\epsilon/2}{1+\omega^2\tau^2}}\left[\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}\pm\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)\right]^{1/2}. \label{13} \end{eqnarray} The imaginary part of $k$, $\alpha/2$, is known as the attenuation factor of the electromagnetic wave propagating in the rigid conductor \cite {Jackson}. Eqs. (\ref{13}) are the same as Eqs. (11) of \cite{Cuevas}. Those authors have studied the problem of attenuation of electromagnetic waves in rigid conductors for the low frequency regime, $\omega\tau\ll1$. In this article, we rediscuss their results but also explore Eqs. (\ref{13}) in the high frequency regime, $\omega\tau\gg1$, on the basis of the effective electric conductivity, introduced above. Moreover, we include an analysis of the time averaged electromagnetic energy density in the discussion, for all different frequency regimes and conduction limits. \subsubsection{Transverse wave and energy density} To begin with, we observe that the space and time fluctuation of the electromagnetic field, assigned above, together with Maxwell's equations, Eqs. (\ref{4}), lead to \begin{eqnarray} k\hat{n}\times\vec{E}_0=\omega\vec{B}_0,\;\;\;\;\;\;\;\;\;\;\hat{n}\cdot\vec{E}_0=0, \label{14} \end{eqnarray} where $\hat{n}$ is the real unit vector in the direction of the complex propagation vector $\vec{k}$, and $\vec{B}_0$ is the complex vector amplitude of the magnetic field, thereby showing the expected transverse nature of the electromagnetic wave. 
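As a side numerical cross-check (not part of the original derivation; the parameter values below are arbitrary, in natural units), the closed forms in Eqs. (\ref{13}) can be compared against a direct complex square root of the dispersion relation, Eq. (\ref{12}):

```python
import numpy as np

# Illustrative (non-physical) parameter values, chosen only to exercise the formulas.
mu, eps, sigma, tau, omega = 1.3, 0.7, 2.0, 0.4, 5.0

# Complex mode number squared from the dispersion relation, Eq. (12).
k2 = (omega**2 * mu * eps / (1 + omega**2 * tau**2)) * (
    1 - sigma * tau / eps + omega**2 * tau**2 + 1j * sigma / (eps * omega)
)
k = np.sqrt(k2)  # principal branch, so that k = beta + i*alpha/2 with beta >= 0

# Closed forms for beta and alpha/2, Eqs. (13).
A = 1 - sigma * tau / eps + omega**2 * tau**2
B = sigma / (eps * omega)
pref = omega * np.sqrt(mu * eps / 2 / (1 + omega**2 * tau**2))
beta = pref * np.sqrt(np.sqrt(A**2 + B**2) + A)
half_alpha = pref * np.sqrt(np.sqrt(A**2 + B**2) - A)

print(k.real, beta)       # real parts agree
print(k.imag, half_alpha) # imaginary parts agree
```

The principal branch of the complex square root selects $\beta\ge0$, matching the convention $k=\beta+\imath\alpha/2$ adopted in the text.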
By writing the complex mode number as $k=\mid k\mid\exp\left(\imath\phi\right)$, it follows that the ratio of the real scalar amplitude of $\vec{B}$ to that of $\vec{E}$ is given by \begin{eqnarray} \frac{\mid\vec{B}_0\mid}{\mid\vec{E}_0\mid}=\frac{\mid k\mid}{\omega}, \label{15} \end{eqnarray} and $\vec{B}$ lags $\vec{E}$ in time by the phase angle $\phi$. From Eqs. (\ref{13}), one may easily check that \begin{eqnarray} \mid k\mid=\omega\sqrt{\frac{\mu\epsilon}{1+\omega^2\tau^2}}\left[\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2\right]^{1/4}, \label{16} \end{eqnarray} \begin{eqnarray} \phi=\tan^{-1}\frac{\epsilon\omega}{\sigma}\left[\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}-\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)\right]. \label{17} \end{eqnarray} In the limit $\tau\rightarrow0$, Eqs. (\ref{16}) and (\ref{17}) recover the classical results for $\mid k\mid$ and $\phi$ \cite{Jackson}. In this work, we take a step further by considering the time average of magnetic and electric energies per unit volume of the medium, \begin{eqnarray} \bar{u}_B=\frac{\mid\vec{B}_0\mid^2}{2\mu},\;\;\;\;\;\;\;\;\;\;\bar{u}_E=\frac{\epsilon\mid\vec{E}_0\mid^2}{2}, \label{18} \end{eqnarray} respectively. First, from Eq. (\ref{15}), we see that the ratio of time averaged magnetic to electric energy densities is given by \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\frac{\mid k\mid^2}{\omega^2\mu\epsilon}. \label{19} \end{eqnarray} Then, from Eq. (\ref{16}), it follows that Eq. (\ref{19}) can be written as \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\frac{1}{1+\omega^2\tau^2}\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}+\omega^2\tau^2\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}. 
\label{20} \end{eqnarray} In the limit $\tau\rightarrow0$, one may easily check that the expected classical result for the ratio $\bar{u}_B/\bar{u}_E$ is recovered \cite{Jackson}. However, in the high frequency regime, $\omega\tau\gg1$, a crude inspection of Eq. (\ref{20}) suggests that $\bar{u}_B$ always approaches $\bar{u}_E$. Somewhat surprisingly, that ceases to be true in the good conductor limit, as will be shown on the basis of the effective electric conductivity, introduced above. \subsection{Low frequency regime} The low frequency regime, $\omega\tau\ll1$, is attained by setting terms of $\mathcal{O}\left(\omega^2\tau^2\right)$ to zero. In this order of approximation, we see at once that $\sigma_{\rm eff}=\sigma$, although $\vec{J}$ is slightly dephased in time with respect to $\vec{E}$ by the small angle $\varphi=\omega\tau$. The real and imaginary parts of the complex mode number are \begin{eqnarray} \left. \begin{array}{c} \beta \\ \alpha/2 \end{array} \right\}=\omega\sqrt{\frac{\mu\epsilon}{2}}\left[\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}\pm\left(1-\frac{\sigma\tau}{\epsilon}\right)\right]^{1/2}. \label{21} \end{eqnarray} Eqs. (\ref{21}) should be compared with Eqs. (12) of \cite{Cuevas}. Those authors have neglected the term of $\mathcal{O}\left(\tau^2\right)$ under the square root sign. However, this term cannot be neglected, since the square root renders it effectively of $\mathcal{O}\left(\tau\right)$. Indeed, it will be shown that, by consistently retaining it, both classical results in the poor and good conductor limits are corrected by terms of $\mathcal{O}\left(\omega\tau\right)$.
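A hedged numerical illustration of this approximation (arbitrary units; the values below are chosen only so that $\omega\tau\ll1$): the low frequency forms, Eqs. (\ref{21}), differ from the full Eqs. (\ref{13}) only at $\mathcal{O}\left(\omega^2\tau^2\right)$.

```python
import numpy as np

# Compare the full Eqs. (13) with the low frequency forms, Eqs. (21),
# obtained by switching off the O(omega^2 tau^2) terms.
mu, eps, sigma, omega = 1.0, 1.0, 0.5, 1.0

def beta_alpha(tau, keep_w2t2):
    # keep_w2t2 toggles the O(omega^2 tau^2) terms on (Eqs. 13) or off (Eqs. 21).
    w2t2 = omega**2 * tau**2 if keep_w2t2 else 0.0
    A = 1 - sigma * tau / eps + w2t2
    B = sigma / (eps * omega)
    pref = omega * np.sqrt(mu * eps / 2 / (1 + w2t2))
    root = np.sqrt(A**2 + B**2)
    return pref * np.sqrt(root + A), pref * np.sqrt(root - A)

tau = 1e-3
b_full, a_full = beta_alpha(tau, keep_w2t2=True)   # Eqs. (13)
b_low, a_low = beta_alpha(tau, keep_w2t2=False)    # Eqs. (21)
print(abs(b_full - b_low), abs(a_full - a_low))    # both of O(omega^2 tau^2)
```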
The modulus of the complex mode number is \begin{eqnarray} \mid k\mid=\omega\sqrt{\mu\epsilon}\left[\left(1-\frac{\sigma\tau}{\epsilon}\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2\right]^{1/4}, \label{22} \end{eqnarray} the magnetic field lags the electric field in time by the phase angle \begin{eqnarray} \phi=\tan^{-1}\frac{\epsilon\omega}{\sigma}\left[\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}-\left(1-\frac{\sigma\tau}{\epsilon}\right)\right], \label{23} \end{eqnarray} and the ratio of time averaged magnetic to electric energy densities is \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\sqrt{\left(1-\frac{\sigma\tau}{\epsilon}\right)^2+\left(\frac{\sigma}{\epsilon\omega}\right)^2}. \label{24} \end{eqnarray} Next, we discuss the limits of Eqs. (\ref{21}) to (\ref{24}) for poor and good conductors. \subsubsection{Poor conductor limit} Poor conductors at low frequency electromagnetic waves are described by requiring the condition $\sigma\ll\epsilon\omega$ to be satisfied. In this approximation, \begin{eqnarray} \mid k\mid\cong\beta=\omega\sqrt{\mu\epsilon}-\frac{\sigma}{2}\sqrt{\frac{\mu}{\epsilon}}\omega\tau,\;\;\;\;\;\;\;\;\;\;\frac{\alpha}{2}=\frac{\sigma}{2}\sqrt{\frac{\mu}{\epsilon}}. \label{25} \end{eqnarray} We see that, although the small classical value for the attenuation factor is not altered by inertial effects due to charge carriers, the real part of the complex mode number, as well as its modulus, slightly decreases as a consequence of consistently retaining the terms of $\mathcal{O}\left(\tau\right)$ in Eqs. (\ref{21}). The first of Eqs. (\ref{25}) corrects the result of \cite{Cuevas}. As already mentioned, those authors have neglected the term of $\mathcal{O}\left(\tau^2\right)$ under the square root sign in their Eqs. (12). That omission is the origin of their result.
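The role of the retained $\mathcal{O}\left(\tau\right)$ term can be illustrated numerically (a sketch with arbitrary illustrative values, not a fit to any material): the corrected expansion, the first of Eqs. (\ref{25}), tracks Eqs. (\ref{21}) more closely than the classical value $\beta=\omega\sqrt{\mu\epsilon}$ does.

```python
import numpy as np

# Poor conductor, low frequency regime (arbitrary units):
# sigma << eps*omega and omega*tau << 1.
mu, eps, omega = 1.0, 1.0, 1.0
sigma = 0.01 * eps * omega
tau = 0.05 / omega

# beta from Eqs. (21).
A = 1 - sigma * tau / eps
B = sigma / (eps * omega)
beta_eq21 = omega * np.sqrt(mu * eps / 2) * np.sqrt(np.sqrt(A**2 + B**2) + A)

# Classical value versus the O(omega*tau)-corrected first of Eqs. (25).
beta_classical = omega * np.sqrt(mu * eps)
beta_eq25 = beta_classical - 0.5 * sigma * np.sqrt(mu / eps) * omega * tau

print(abs(beta_eq21 - beta_eq25) < abs(beta_eq21 - beta_classical))  # True
```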
The magnetic field lags the electric field in time by the same small classical phase angle, \begin{eqnarray} \phi=\frac{\sigma}{2\epsilon\omega}. \label{26} \end{eqnarray} Interestingly, the time averaged magnetic and electric energy densities are no longer identical, as they were classically, since the ratio $\bar{u}_B/\bar{u}_E$ slightly decreases with respect to unity, \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=1-\frac{\sigma\tau}{\epsilon}. \label{27} \end{eqnarray} In the limit $\tau\rightarrow0$, the first of Eqs. (\ref{25}) and Eq. (\ref{27}) recover the expected classical results \cite{Jackson}. \subsubsection{Good conductor limit} Good conductors at low frequency electromagnetic waves are described by requiring the condition $\sigma\gg\epsilon\omega$ to be satisfied. In this approximation, \begin{eqnarray} \left. \begin{array}{c} \beta \\ \alpha/2 \end{array} \right\}=\sqrt{\frac{\mu\sigma\omega}{2}}\left(1\mp\frac{\omega\tau}{2}\right). \label{28} \end{eqnarray} We see that the classical values of $\beta$ and $\alpha/2$ are decreased and increased, respectively, by the same small number, $\omega\tau/2$. Eqs. (\ref{28}) are essentially the same as Eqs. (13) of \cite{Cuevas}, provided the binomial expansions of the expressions $\sqrt{1\mp\omega\tau}$ are carried out to the term of $\mathcal{O}\left(\omega\tau\right)\ll1$. The classical result for the modulus of the complex mode number, $\mid k\mid=\sqrt{\mu\sigma\omega}$, is not altered by inertial effects. Interestingly, the classical $\pi/4$ time dephasing angle of $\vec{B}$ with respect to $\vec{E}$ is slightly increased by inertial effects, since \begin{eqnarray} \phi=\frac{\pi}{4}+\frac{\omega\tau}{2}. \label{29} \end{eqnarray} As in the classical case, the time averaged electromagnetic energy density is almost entirely magnetic in nature, since the ratio $\bar{u}_B/\bar{u}_E$ is given by the large number \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\frac{\sigma}{\epsilon\omega}.
\label{30} \end{eqnarray} In the limit $\tau\rightarrow0$, Eqs. (\ref{28}) and (\ref{29}) recover the expected classical results \cite{Jackson}. \subsection{High frequency regime} Now, we explore a situation which, to the best of our knowledge, has not been considered elsewhere, namely, the influence of inertia on attenuation at high frequencies. The high frequency regime may be achieved by requiring the condition $\omega\tau\gg1$ to be satisfied. In this approximation, we see at once that the classical electric conductivity is strongly suppressed by inertial effects, given that \begin{eqnarray} \sigma_{\rm eff}=\frac{\sigma}{\omega\tau}, \label{31} \end{eqnarray} whilst $\vec{J}$ and $\vec{E}$ are completely out of phase in time, since \begin{eqnarray} \varphi=\frac{\pi}{2}. \label{32} \end{eqnarray} To simplify the reader's checking of the poor and good conductor limits, the following formulae are written in terms of $\sigma_{\rm eff}$ instead of $\sigma$. The real and imaginary parts of the complex mode number are \begin{eqnarray} \left. \begin{array}{c} \beta \\ \alpha/2 \end{array} \right\}=\sqrt{\frac{\mu\epsilon\omega}{2\tau}}\left[\sqrt{\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)^2+\left(\frac{\sigma_{\rm eff}}{\epsilon\omega}\right)^2}\mp\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)\right]^{1/2}, \label{33} \end{eqnarray} while its modulus is given by \begin{eqnarray} \mid k\mid=\sqrt{\frac{\mu\epsilon\omega}{\tau}}\left[\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)^2+\left(\frac{\sigma_{\rm eff}}{\epsilon\omega}\right)^2\right]^{1/4}.
\label{34} \end{eqnarray} The magnetic field lags the electric field in time by the phase angle \begin{eqnarray} \phi=\tan^{-1}\frac{\epsilon\omega}{\sigma_{\rm eff}}\left[\sqrt{\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)^2+\left(\frac{\sigma_{\rm eff}}{\epsilon\omega}\right)^2}+\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)\right], \label{35} \end{eqnarray} and the ratio of time averaged magnetic to electric energy densities satisfies \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\frac{1}{\omega\tau}\sqrt{\left(\frac{\sigma_{\rm eff}\tau}{\epsilon}-\omega\tau\right)^2+\left(\frac{\sigma_{\rm eff}}{\epsilon\omega}\right)^2}. \label{36} \end{eqnarray} Next, we discuss the limits of Eqs. (\ref{33}) to (\ref{36}) for poor and good conductors. \subsubsection{Poor conductor limit} Poor conductors at high frequency electromagnetic waves are described by requiring the condition $\sigma_{\rm eff}\ll\epsilon\omega$ to be satisfied. In this approximation, \begin{eqnarray} \beta=\omega\sqrt{\mu\epsilon},\;\;\;\;\;\;\;\;\;\;\frac{\alpha}{2}=\frac{1}{\omega^2\tau^2}\frac{\sigma}{2}\sqrt{\frac{\mu}{\epsilon}}. \label{37} \end{eqnarray} We see that the already small attenuation factor for poor conductors at low frequencies is strongly suppressed by inertial effects at high frequencies. The modulus of the complex mode number, on the other hand, is not suppressed: since the term $\left(\omega\tau\right)^2$ dominates inside the bracket of Eq. (\ref{34}), it reduces to \begin{eqnarray} \mid k\mid\cong\beta=\omega\sqrt{\mu\epsilon}, \label{38} \end{eqnarray} in accordance with the first of Eqs. (\ref{37}). The already small time dephasing angle of $\vec{B}$ with respect to $\vec{E}$ for poor conductors at low frequencies is again strongly suppressed by inertial effects at high frequencies, given that \begin{eqnarray} \phi=\frac{1}{\omega^2\tau^2}\frac{\sigma}{2\epsilon\omega}.
\label{39} \end{eqnarray} The time averaged magnetic and electric energy densities are almost identical for poor conductors at high frequencies, since $\bar{u}_B\cong\bar{u}_E$. That is the same result which holds for poor conductors at low frequencies \cite{Jackson}. \subsubsection{Good conductor limit} Good conductors at high frequency electromagnetic waves are described by requiring the condition $\sigma_{\rm eff}\gg\epsilon\omega$ to be satisfied. In this approximation, \begin{eqnarray} \beta=\frac{\sqrt{\mu\sigma/\tau}}{2\omega\tau},\;\;\;\;\;\;\;\;\;\;\mid k\mid\cong\frac{\alpha}{2}=\sqrt{\frac{\mu\sigma}{\tau}}. \label{40} \end{eqnarray} Interestingly, we see that the large attenuation factor depends on the relaxation time only. { The penetration depth of the electromagnetic wave within the conductor, but close to its external surface (skin effect), is defined as $\delta=2/\alpha$ \cite{Jackson}. By neglecting inertial effects, the classical value of $\delta$ for a good conductor scales as $\sim1/\sqrt{\omega}$ [see the second of Eqs. (\ref{28})]. In other words, it should diminish without limit at high frequency electromagnetic waves. However, our new result, the second of Eqs. (\ref{40}), ensures that $\delta$ does not depend on the frequency of the electromagnetic wave, provided inertial effects are taken into account. The physical meaning of this is that $\delta$ must saturate to a minimum value at sufficiently high frequencies. By making the substitutions $\tau\rightarrow\tau_{\rm c}$ and $\sigma\rightarrow\sigma_{\rm D}$ (see Appendix A), we see that, for a typical good metal like copper ($\tau_{\rm c}\cong2.7\times10^{-14}\;{\rm s}$), $\delta\cong1.9\times10^{-6}\;{\rm cm}$. 
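The copper estimate above can be reproduced in a few lines. Note that the conductivity figure used below is a standard textbook value and an assumption of ours, while the collision time is the value quoted in the text.

```python
import math

# Back-of-the-envelope reproduction of the saturated penetration depth
# delta = 2/alpha = sqrt(tau/(mu*sigma)), from the second of Eqs. (40).
mu0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
sigma_cu = 5.96e7               # copper conductivity, S/m (textbook value, assumed)
tau_cu = 2.7e-14                # collision time, s, as quoted in the text

delta_m = math.sqrt(tau_cu / (mu0 * sigma_cu))   # metres
print(delta_m * 1e2)            # in cm: ~1.9e-6, independent of frequency
```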
At this point, we would like to encourage experimentalists to try to determine the inertial relaxation time of the current density by measuring the penetration depth for a good conductor at high frequencies, which is within the limits of present day laboratory capabilities (see Appendix B).} The magnetic and electric fields are almost completely out of phase in time, since $\phi\cong\pi/2$. Since $\varphi\cong\pi/2$ as well, the time dephasing angle of $\vec{J}$ with respect to $\vec{B}$ is approximately an integral multiple of $\pi$. In other words, $\vec{J}$ and $\vec{B}$ are almost either at the same or at opposite time phase angles. As already mentioned, an interesting consequence of the introduction of the notion of effective electric conductivity in terms of wave frequency and relaxation time is that the time averaged electromagnetic energy density is almost entirely magnetic in nature for good conductors at high frequencies, since the ratio $\bar{u}_B/\bar{u}_E$ is given by the large number \begin{eqnarray} \frac{\bar{u}_B}{\bar{u}_E}=\frac{1}{\omega\tau}\frac{\sigma}{\epsilon\omega}. \label{41} \end{eqnarray} Although numerically different, that is the same qualitative result which holds for good conductors at low frequencies \cite{Jackson}. \begin{figure} \resizebox{4in}{4in}{\includegraphics{deltaw.EPS}} \caption{\label{fig_1} Normalized penetration depth as a function of $\hat{\omega}$ and some selected values of $\hat{\sigma}$. Note the tendency for saturation of $\hat{\delta}$ when $\hat{\sigma}$ increases.} \end{figure} \subsection{General case: penetration depth} {In order to discuss the general case, let us introduce dimensionless quantities describing the attenuation in conducting media.
As one may check, by defining \begin{eqnarray} \hat{\sigma}=\frac{\sigma}{\epsilon/\tau},\;\;\;\;\;\;\;\;\;\;\hat{\omega}=\frac{\omega}{1/\tau},\;\;\;\;\;\;\;\;\;\;\hat{\delta}=\frac{\delta}{\sqrt{\tau/\left(\mu\sigma\right)}}, \label{42} \end{eqnarray} the second of Eqs. (\ref{13}) can be rewritten as \begin{eqnarray} \hat{\delta}=\sqrt{\frac{2\hat{\sigma}\left(1+\hat{\omega}^2\right)}{\hat{\omega}}}\left[\sqrt{\left(1-\hat{\sigma}+\hat{\omega}^2\right)^2\hat{\omega}^2+\hat{\sigma}^2}-\left(1-\hat{\sigma}+\hat{\omega}^2\right)\hat{\omega}\right]^{-1/2}. \label{43} \end{eqnarray} In Figure 1 we show the behavior of the normalized penetration depth as a function of $\hat{\omega}$ and some selected values of $\hat{\sigma}$. As remarked earlier, in the high frequency limit the penetration depth saturates in the case of good conductors.} \section{time damping of magnetic fields} In the second part of this paper, we explore a problem which is closely related to the description of space attenuation of electromagnetic waves, namely, the estimate of time damping of magnetic fields in rigid conducting media. The classical approach to the latter starts from Eq. (\ref{5}) in the limit $\tau\rightarrow0$, and in the absence of displacement current. In other words, inertial effects are neglected, at low frequencies. The resulting equation is a diffusion-type equation for the magnetic field, where $\left(\mu\sigma\right)^{-1}$ plays the role of a diffusion coefficient. The physical situation one wants to describe is that of a rigid conducting medium subjected to an external magnetic field, which is suddenly removed. The time dependence of each eigenfunction of the diffusion equation is taken as $\sim\exp\left(-\gamma_lt\right)$, where the associated eigenvalue $\gamma_l$ is interpreted as a damping factor.
The decay time of the damping field is estimated as $T\sim1/\gamma_1$, where $\gamma_1={\rm min}\left\{\gamma_l\right\}$, by simultaneously estimating $\mid\nabla^2\mid\sim1/L^2$, with $L$ denoting a characteristic length scale of the sample. The classical result for the decay time is given by the diffusion time, $T=\mu\sigma L^2$. This approach is commonly referred to as the quasi-static approximation \cite{Landau}. In this article, we follow an alternative path to tackle the above mentioned problem. The idea is that, since the space and time dependence of the external magnetic field may always be Fourier analyzed, we can start at once from Eq. (\ref{7}) by making the substitution $\omega\rightarrow-\imath\gamma$, where $\gamma$ is interpreted as a real damping factor associated with the real mode number $k$ of a given Fourier component. As a consequence, the time dependence of each component takes the desired form, $\sim\exp\left(-\gamma t\right)$, and we get \begin{eqnarray} \left(1-\gamma\tau\right)k^2=\gamma\mu\sigma-\left(1-\gamma\tau\right)\gamma^2\mu\epsilon. \label{45} \end{eqnarray} Thereby, the decay time $T\sim1/\gamma$ of the damping field may be estimated by simultaneously estimating $k^2\sim1/L^2$. As a result, we have \begin{eqnarray} \left(\frac{T}{\mu\sigma L^2}\right)^3-\left(1+\frac{\tau}{\mu\sigma L^2}\right)\left(\frac{T}{\mu\sigma L^2}\right)^2+\frac{\epsilon/\sigma}{\mu\sigma L^2}\frac{T}{\mu\sigma L^2}-\frac{\epsilon/\sigma}{\mu\sigma L^2}\frac{\tau}{\mu\sigma L^2}=0. \label{46} \end{eqnarray} To simplify the notation, we first define the characteristic conduction time scale as \begin{eqnarray} \tau_\ast=\frac{\epsilon}{\sigma}, \label{47} \end{eqnarray} and subsequently normalize all times to the diffusion time, \begin{eqnarray} \hat{\tau}=\frac{\tau}{\mu\sigma L^2},\;\;\;\;\;\;\;\;\;\;\hat{\tau}_\ast=\frac{\tau_\ast}{\mu\sigma L^2},\;\;\;\;\;\;\;\;\;\;\hat{T}=\frac{T}{\mu\sigma L^2}.
\label{48} \end{eqnarray} Therefore, we obtain \begin{eqnarray} \hat{T}^3-\left(1+\hat{\tau}\right)\hat{T}^2+\hat{\tau}_\ast\hat{T}-\hat{\tau}_\ast\hat{\tau}=0. \label{49} \end{eqnarray} Eq. (\ref{49}) is a cubic algebraic equation in $\hat{T}$. However, by considering the transformation \cite{Birkhoff} \begin{eqnarray} \hat{T}=\hat{T}_\ast+\left(\frac{1+\hat{\tau}}{3}\right)+\frac{1}{\hat{T}_\ast}\left[\left(\frac{1+\hat{\tau}}{3}\right)^2-\frac{\hat{\tau}_\ast}{3}\right], \label{50} \end{eqnarray} one may easily check that $\hat{T}_\ast^3$ satisfies the quadratic algebraic equation \begin{eqnarray} \left(\hat{T}_\ast^3\right)^2-2\left[\left(\frac{1+\hat{\tau}}{3}\right)^3-\frac{\hat{\tau}_\ast}{3}\left(\frac{1}{2}-\hat{\tau}\right)\right]\hat{T}_\ast^3+\left[\left(\frac{1+\hat{\tau}}{3}\right)^2-\frac{\hat{\tau}_\ast}{3}\right]^3=0, \label{51} \end{eqnarray} whose general solutions are given by \begin{eqnarray} &&\hat{T}_\ast^3=\left(\frac{1+\hat{\tau}}{3}\right)^3-\frac{\hat{\tau}_\ast}{3}\left(\frac{1}{2}-\hat{\tau}\right)\pm\sqrt{\frac{\hat{\tau}_\ast}{3}} \nonumber\\&&\left\{\frac{\hat{\tau}_\ast}{3}+\frac{1}{3}\left[\left(\frac{5-3\sqrt{3}}{4}-\hat{\tau}\right)\left(\frac{5+3\sqrt{3}}{4}-\hat{\tau}\right)+\left(\frac{1}{4}-2\hat{\tau}\right)^{3/2}\right]\right\}^{1/2}\nonumber\\&&\left\{\frac{\hat{\tau}_\ast}{3}+\frac{1}{3}\left[\left(\frac{5-3\sqrt{3}}{4}-\hat{\tau}\right)\left(\frac{5+3\sqrt{3}}{4}-\hat{\tau}\right)-\left(\frac{1}{4}-2\hat{\tau}\right)^{3/2}\right]\right\}^{1/2}. \label{52} \end{eqnarray} We see that there are three possible pairs of values for $\hat{T}_\ast$. Each pair is given by the plus or minus signs in Eqs. (\ref{52}). However, by substituting a chosen pair in Eq. (\ref{50}), one may easily check that the same value for $\hat{T}$ is obtained \cite{Birkhoff}. Thereby, as expected, all cubic algebraic equations, Eqs. (\ref{49}), (\ref{46}) and (\ref{45}), have three solutions. 
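For readers who prefer a numerical route, the cubic Eq. (\ref{49}) can also be solved directly with a standard root finder. The hedged sketch below tracks the root that tends to $\hat{T}=1$ when both $\hat{\tau}$ and $\hat{\tau}_\ast$ vanish, i.e., the classical diffusion time, and reproduces the limiting cases discussed in the following subsections.

```python
import numpy as np

# Roots of T^3 - (1+tau)T^2 + tau_ast*T - tau_ast*tau = 0 (hats omitted), Eq. (49).
def decay_time(tau_hat, tau_ast_hat):
    coeffs = [1.0, -(1.0 + tau_hat), tau_ast_hat, -tau_ast_hat * tau_hat]
    roots = np.roots(coeffs)
    real = roots[np.isclose(roots.imag, 0.0)].real
    return real.max()   # the root reducing to the classical diffusion time

print(decay_time(0.0, 0.0))      # -> 1.0, the classical diffusion time
print(decay_time(0.0, 1e-3))     # -> ~1 - tau_ast, cf. Eq. (56)
print(decay_time(1e-3, 0.0))     # -> 1 + tau, cf. Eq. (62)
```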
We now pass to the analysis of important particular situations. \subsection{Neglect of inertia and displacement current} By neglecting both inertial effects and the displacement current term, one expects the classical result for the decay time of the magnetic field to be recovered. Indeed, by taking the limits $\hat{\tau}\rightarrow0$ and $\hat{\tau}_\ast\rightarrow0$, simultaneously, in Eqs. (\ref{52}), we get $\hat{T}_\ast=1/3$, so that Eq. (\ref{50}) gives $\hat{T}=1$. In other words, the decay time of the damping field is given by the diffusion time, $T=\mu\sigma L^2$ \cite{Landau}. \subsection{Influence of displacement current} We now relax the approximation $\hat{\tau}_\ast\rightarrow0$ while still keeping the limit $\hat{\tau}\rightarrow0$ in Eqs. (\ref{52}) to get \begin{eqnarray} \hat{T}_\ast=\left\{\left(\frac{1}{3}\right)^3-\frac{\hat{\tau}_\ast}{3}\left(\frac{1}{2}\right)\pm\frac{\hat{\tau}_\ast}{3}\left[\frac{\hat{\tau}_\ast}{3}-\frac{1}{3}\left(\frac{1}{4}\right)\right]^{1/2}\right\}^{1/3}, \label{53} \end{eqnarray} so that Eq. (\ref{50}) gives \begin{eqnarray} \hat{T}=\hat{T}_\ast+\frac{1}{3}+\frac{1}{\hat{T}_\ast}\left[\left(\frac{1}{3}\right)^2-\frac{\hat{\tau}_\ast}{3}\right]. \label{54} \end{eqnarray} Two limiting cases naturally arise. \subsubsection{Small displacement current} First, let us suppose that the influence of the displacement current term is small in comparison with magnetic diffusion effects. In other words, let us assume that the approximation $\hat{\tau}_\ast\ll1$ holds in Eqs. (\ref{53}) to get \begin{eqnarray} \hat{T}_\ast=\frac{1}{3}-\frac{\hat{\tau}_\ast}{2}\pm\imath\frac{\hat{\tau}_\ast}{2}\left(\frac{1}{\sqrt{3}}\right), \label{55} \end{eqnarray} so that Eq. (\ref{54}) gives \begin{eqnarray} \hat{T}=1-\hat{\tau}_\ast.
\label{56} \end{eqnarray} We see that the diffusion time is slightly decreased by the conduction time as a consequence of assuming a small influence of displacement current, \begin{eqnarray} T=\mu\sigma L^2-\frac{\epsilon}{\sigma}. \label{57} \end{eqnarray} For all practical purposes, the conduction time is normally neglected in Eq. (\ref{57}), and the decay time of the damping field is assumed again to be given simply by the diffusion time. \subsubsection{Large displacement current} Second, let us suppose that the influence of displacement current is large in comparison with magnetic diffusion effects. In other words, let us assume that the approximation $\hat{\tau}_\ast\gg1$ holds in Eqs. (\ref{53}) to get \begin{eqnarray} \hat{T}_\ast=\pm\sqrt{\frac{\hat{\tau}_\ast}{3}}, \label{58} \end{eqnarray} so that Eq. (\ref{54}) gives \begin{eqnarray} \hat{T}=\frac{1}{3}. \label{59} \end{eqnarray} We see that the diffusion time is strongly suppressed to a third of its classical value as a consequence of assuming a large influence of displacement current, \begin{eqnarray} T=\frac{\mu\sigma L^2}{3}. \label{60} \end{eqnarray} Note that, although the influence of displacement current is arguably very much larger than magnetic diffusion effects, the decay time does not depend on the conduction time. This interesting situation could arise in the limit of typically very small samples. However, it should be difficult to observe under normal conditions, given the smallness of the length scale. \subsection{Influence of inertia} Finally, we relax the approximation $\hat{\tau}\rightarrow0$ while still keeping the limit $\hat{\tau}_\ast\rightarrow0$ in Eqs. (\ref{52}) to get \begin{eqnarray} \hat{T}_\ast=\frac{1+\hat{\tau}}{3}, \label{61} \end{eqnarray} so that Eq. (\ref{50}) gives \begin{eqnarray} \hat{T}=1+\hat{\tau}.
\label{62} \end{eqnarray} We see that the diffusion time is increased by the relaxation time as a consequence of inertial effects, \begin{eqnarray} T=\mu\sigma L^2+\tau. \label{63} \end{eqnarray} Although this is the same result as in \cite{Cuevas}, we stress that, in order to derive it, the subsidiary condition $\gamma\tau\ll1$ need not be imposed at all, contrary to what is claimed by those authors. Moreover, we note that the effects of the displacement current term on the decay time, Eqs. (\ref{57}) and (\ref{60}), are opposite to those of inertial effects, Eq. (\ref{63}). Such a result could explain why, under ordinary circumstances, the decay time of damping fields scales approximately as the diffusion time. As a matter of fact, Eq. (\ref{63}) does not rule out the possibility of considering the particular situation for which magnetic diffusion effects are negligible in comparison with inertial ones, $\mu\sigma L^2\ll\tau$. In that case, the decay time would scale approximately as the relaxation time, \begin{eqnarray} T=\tau. \label{64} \end{eqnarray} That interesting situation could arise in the limit of typically very small samples (see also \cite{Cuevas}): at the nano scale, for instance, provided quantum mechanical effects were unimportant. \section{conclusion} In this work, new results for space attenuation of electromagnetic waves and time damping of magnetic fields in rigid conducting media have been derived by taking into account the combined effects of inertia, due to charge carriers, and the displacement current term. The influence of inertia has been realized by the introduction of the relaxation time for the current density, through a suitable generalization of Ohm's law. The main conclusions of this article can be summarized as follows: \vspace{.5cm} \noindent(1) The classical notions of poor and good conductors have been extended in the framework of an effective electric conductivity.
That quantity depends on both wave frequency and relaxation time [see Eq. (\ref{9})]. \vspace{.5cm} \noindent(2) It has been found that the space attenuation for good conductors in the high frequency regime depends on the relaxation time only [see the second of Eqs. (\ref{40})]. In other words, the penetration depth must saturate to a minimum value at sufficiently high frequencies. Such a result leads to the possibility of measuring the relaxation time (see Appendix B). \vspace{.5cm} \noindent(3) The overall problem of determining the decay time of a magnetic field which is suddenly removed from a rigid conductor has been solved on very general grounds, that is, by taking into account the combined influences of magnetic diffusion, displacement current and inertia due to charge carriers. The solutions are given by the roots of a cubic algebraic equation involving all the relevant parameters [see Eq. (\ref{46})]. \vspace{.5cm} \noindent(4) We have shown that the actions of the displacement current term and of inertia due to charge carriers on damping of magnetic fields are opposite to each other [compare Eqs. (\ref{57}) and (\ref{60}) with (\ref{63})]. In particular, this result explains why, under normal conditions, the classical decay time of damping fields scales approximately as the diffusion time. \vspace{.5cm} \noindent(5) At very small length scales (at nano scales, for instance), provided effects due to quantum mechanics were unimportant, it is possible that the decay time could be given either by a third of the diffusion time [see Eq. (\ref{60})] or by the relaxation time [see Eq. (\ref{64})], depending on whether the displacement current term or inertia due to charge carriers, respectively, would prevail over magnetic diffusion. However, under ordinary circumstances, the former situation should not be easily observable given the smallness of the length scale. \acknowledgments{The authors are grateful to Rose C. Santos and M. P. M.
Assun\c{c}\~ao for helpful discussions. JASL is partially supported by CNPq and FAPESP (Brazilian Research Agencies) under grants 304792/2003-9 and 04/13668-0, respectively.}
\section{Introduction} Consider the classical problem of supervised learning from $n$ i.i.d. samples $\{(y_i,{\boldsymbol z}_i)\}_{i\le n}$ where ${\boldsymbol z}_i\in {\mathbb R}^{d}$ are covariate vectors and $y_i\in{\mathbb R}$ are labels. A large number of popular techniques fit the following general scheme: \begin{enumerate} \vspace{-2mm} \item Process the covariates through a featurization map ${\boldsymbol \phi}:{\mathbb R}^{d}\to{\mathbb R}^p$ to obtain feature vectors ${\boldsymbol x}_1={\boldsymbol \phi}({\boldsymbol z}_1)$, \dots, ${\boldsymbol x}_n={\boldsymbol \phi}({\boldsymbol z}_n)$. \vspace{-2mm} \item Select a class of functions that depends on $\textsf{k}$ linear projections of the features, with parameters ${\boldsymbol \Theta}= ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_{\textsf{k}})\in{\mathbb R}^{p\times \textsf{k}}$, ${\boldsymbol \theta}_i\in{\mathbb R}^p$. Namely, for a fixed $F:{\mathbb R}^{\textsf{k}}\to {\mathbb R}$, we consider \begin{align} f({\boldsymbol z};{\boldsymbol \Theta}) = F({\boldsymbol \Theta}^{\sT}{\boldsymbol \phi}({\boldsymbol z}))\, . \end{align} \vspace{-8mm} \item Fit the parameters via (regularized) empirical risk minimization (ERM): \begin{align} \mbox{minimize }\;\;\; \widehat{R}_n({\boldsymbol \Theta};{\boldsymbol Z},{\boldsymbol y}) := \frac{1}{n}\sum_{i=1}^n L(f({\boldsymbol z}_i;{\boldsymbol \Theta}),y_i)+r({\boldsymbol \Theta})\, ,\label{eq:FirstERM} \end{align} with $L:{\mathbb R}\times{\mathbb R}\to {\mathbb R}_{\ge 0}$ a loss function, and $r:{\mathbb R}^{p\times \textsf{k}}\to {\mathbb R}$ a regularizer, where ${\boldsymbol Z} = ({\boldsymbol z}_1,\dots,{\boldsymbol z}_n),{\boldsymbol y}=(y_1,\dots,y_n)$. \end{enumerate} This setting covers a large number of approaches, ranging from sparse regression to generalized linear models, from phase retrieval to index models.
Throughout this paper, we will assume that $p$ and $n$ are large and comparable, while $\textsf{k}$ is of order one.\footnote{A slightly more general framework would allow $F(\, \cdot\,;{\boldsymbol a}):{\mathbb R}^{\textsf{k}}\to{\mathbb R}$ to depend on additional parameters ${\boldsymbol a}\in{\mathbb R}^{\textsf{k}'}$, $\textsf{k}'=O(1)$. This can be treated using our techniques, but we refrain from such generalizations for the sake of clarity.} As a motivating example, consider a $3$-layer network with two hidden layers of width $p$ and $\textsf{k}$: \begin{align} f({\boldsymbol z};{\boldsymbol \Theta})= {\boldsymbol a}^{\sT} \sigma\big({\boldsymbol \Theta}^{\sT}\sigma({\boldsymbol W}^{\sT}{\boldsymbol z})\big)\, . \label{eq:3-layer-example} \end{align} Here we denoted by ${\boldsymbol W}\in{\mathbb R}^{d\times p}$ the first-layer weights, by ${\boldsymbol \Theta}\in{\mathbb R}^{p\times \textsf{k}}$ the second layer weights, and by ${\boldsymbol a}$ the output layer weights. Consider a learning procedure in which the first and last layers ${\boldsymbol a}, {\boldsymbol W}$ are not learnt from data, and we learn ${\boldsymbol \Theta}$ by minimizing the logistic loss for binary labels $y_i\in\{+1,-1\}$: \begin{align} \widehat{R}_n({\boldsymbol \Theta};{\boldsymbol Z},{\boldsymbol y}) = \frac{1}{n}\sum_{i=1}^n \log\Big\{1+\exp\Big[-y_i {\boldsymbol a}^{\sT} \sigma\circ{\boldsymbol \Theta}^{\sT}\circ\sigma({\boldsymbol W}^{\sT}{\boldsymbol z}_i)\Big]\Big\} \, . \label{eq:LogisticExample} \end{align} (Here we use $f\circ g(\,\cdot\,)$ instead of $f( g(\,\cdot\,))$ to denote composition.) This example fits in the general framework above, with featurization map ${\boldsymbol \phi}({\boldsymbol z}) = \sigma({\boldsymbol W}^{\sT}{\boldsymbol z})$, function $F({\boldsymbol u}) = {\boldsymbol a}^{\sT} \sigma({\boldsymbol u})$, and loss $L(\widehat{y},y) = \log(1+e^{-y\widehat{y}})$.
We note in passing that the model of Eq.~\eqref{eq:3-layer-example} (with the first and last layer fixed) is not an unreasonable one. If ${\boldsymbol W}$ is random, for instance with i.i.d. columns ${\boldsymbol w}_i\sim{\mathcal{N}}(0,c_d\id_d)$, the first layer performs a random features map in the sense of~\cite{rahimi2007random}. In other words, this layer embeds the data in the reproducing-kernel Hilbert space (RKHS) with (finite width) kernel $H_p({\boldsymbol z}_1,{\boldsymbol z}_2):= p^{-1}\sum_{i=1}^p \sigma(\<{\boldsymbol w}_i,{\boldsymbol z}_1\>)\sigma(\<{\boldsymbol w}_i,{\boldsymbol z}_2\>)$, which approximates the kernel $H_{\infty}({\boldsymbol z}_1,{\boldsymbol z}_2)= \E_{{\boldsymbol w}}[\sigma(\<{\boldsymbol w},{\boldsymbol z}_1\>)\sigma(\<{\boldsymbol w},{\boldsymbol z}_2\>)]$. Fixing the last layer weights ${\boldsymbol a}$ is not a significant reduction of expressivity, since this layer only comprises $\textsf{k}$ parameters, while we are fitting the $p\textsf{k}\gg \textsf{k}$ parameters in ${\boldsymbol \Theta}$. From the point of view of theoretical analysis, we can replace ${\boldsymbol \phi}({\boldsymbol z}_i)$ by ${\boldsymbol x}_i$ in Eq.~\eqref{eq:FirstERM}, and redefine the empirical risk in terms of the feature vectors ${\boldsymbol x}_i$: \begin{align} \widehat{R}_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}) := \frac{1}{n}\sum_{i=1}^n \ell({\boldsymbol \Theta}^{\sT}{\boldsymbol x}_i;y_i)+r({\boldsymbol \Theta})\, ,\label{eq:SecondERM} \end{align} where $\ell({\boldsymbol u};y):= L(F({\boldsymbol u}),y)$. We will remember that ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$ when studying specific featurization maps ${\boldsymbol \phi}$ in Section \ref{section:examples}. A significant line of recent work studies the asymptotic properties of the ERM \eqref{eq:SecondERM} under the proportional asymptotics $n,p\to\infty$ with $n/p\to{\sgamma}\in (0,\infty)$. 
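The kernel approximation $H_p\approx H_\infty$ mentioned above is easy to probe by Monte Carlo: the fluctuations of $H_p$ around its wide-width limit shrink as $p$ grows, at rate roughly $p^{-1/2}$. A minimal sketch, in which the choices $\sigma=\tanh$, $c_d = d^{-1}$, and a very wide layer as a proxy for $H_\infty$ are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 30
sigma = np.tanh
z1, z2 = rng.normal(size=d), rng.normal(size=d)

def H(p):
    """Finite-width kernel H_p(z1, z2) = p^{-1} sum_i sigma(<w_i, z1>) sigma(<w_i, z2>)."""
    W = rng.normal(scale=1 / np.sqrt(d), size=(p, d))   # rows w_i ~ N(0, d^{-1} I_d)
    return np.mean(sigma(W @ z1) * sigma(W @ z2))

# Proxy for H_infty(z1, z2): a very wide random layer
H_inf = H(200_000)

# Typical deviation |H_p - H_infty| decreases as the width p grows
dev_narrow = np.mean([abs(H(100) - H_inf) for _ in range(50)])
dev_wide = np.mean([abs(H(10_000) - H_inf) for _ in range(50)])
```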
A number of phenomena have been elucidated by these studies \cite{BayatiMontanariLASSO,thrampoulidis2015,thrampoulidis2018precise}, including the design of optimal loss functions and regularizers \citep{Donoho2016,el2018impact,celentano2019fundamental,aubin2020generalization}, the analysis of inferential procedures \citep{sur2019likelihood,celentano2020lasso}, and the double descent behavior of the generalization error \citep{hastie2019surprises,deng2019model,montanari2019generalization,gerace2020generalisation}. However, existing proof techniques assume Gaussian feature vectors, or feature vectors with independent coordinates. Needless to say, both the Gaussian assumption and the assumption of independent covariates are highly restrictive. Neither corresponds to an actual nonlinear featurization map ${\boldsymbol \phi}$. However, recent work has unveiled a remarkable phenomenon in the context of random features models, i.e. for ${\boldsymbol \phi}({\boldsymbol z}) = \sigma({\boldsymbol W}^{\sT}{\boldsymbol z})$. Under simple distributions on the covariates ${\boldsymbol z}$ (for instance, ${\boldsymbol z}$ with i.i.d. coordinates) and for certain weight matrices ${\boldsymbol W}$, the asymptotic behavior of the ERM problem \eqref{eq:SecondERM} appears to be identical to that of an equivalent Gaussian model. In the equivalent Gaussian model, the feature vectors ${\boldsymbol x}_i$ are replaced by Gaussian features: \begin{align} {\boldsymbol x}^{G}_i\sim {\mathcal{N}}(0,\boldsymbol{\Sigma}_{{\boldsymbol W}})\, ,\;\;\;\;\; \boldsymbol{\Sigma}_{{\boldsymbol W}} =\E\big[\sigma({\boldsymbol W}^{\sT}{\boldsymbol z})\sigma({\boldsymbol W}^{\sT}{\boldsymbol z})^{\sT}\big| {\boldsymbol W}\big]\, . \end{align} (We refer to the next section for formal definitions.) We stress that ---in the proportional asymptotics $n\asymp p$--- the test error is typically bounded away from zero as $n,p\to\infty$, and so, possibly, is the train error.
Further, train and test error typically concentrate around different values. Existing proof techniques (for ${\boldsymbol x}_i$ Gaussian) allow one to compute the limiting values of these quantities. Insight into the ERM behavior is obtained by studying their dependence on various problem parameters, such as the overparameterization ratio $p/n$ or the noise level. When we say that the non-Gaussian and Gaussian models have the same asymptotic behavior, we mean that the limits of the test and train errors coincide. This allows transferring rigorous results proven in the Gaussian model to similar statements for more realistic featurization maps. We follow the random matrix theory literature~\citep{tao2012topics} and refer to this as a \emph{universality} phenomenon. When universality holds, the ERM behavior is roughly independent of the features distribution, provided their covariances are matched. Universality is a more delicate phenomenon than concentration of the empirical risk around its expectation. Indeed, as emphasized above, it holds in the high-dimensional regime in which test error and train error do not match. Establishing universality requires understanding the dependence of the empirical risk minimizer $\widehat{\boldsymbol \Theta}_n^{\boldsymbol X}$ on the data ${\boldsymbol X},{\boldsymbol y}$, and not just bounding its distance from a population value via concentration. Universality for non-linear random features models was proven for the special case of ridge regression in \cite{hastie2019surprises} and \cite{mei2019generalization}. This corresponds to the ERM problem \eqref{eq:SecondERM} with $\textsf{k}=1$, $\ell(u,y) = (u-y)^2$, and $r({\boldsymbol \Theta}) = \lambda\|{\boldsymbol \Theta}\|_2^2$. At the same time, \cite{goldt2020modeling,goldt2020gaussian} provided heuristic arguments and empirical results indicating that universality holds for other ERM problems as well.
Universality results for ERM were proven in the past for feature vectors ${\boldsymbol x}_i$ with independent entries \citep{korada2011applications,montanari2017universality,panahi2017universal}. Related results for randomized dimension reduction were obtained in \cite{oymak2018universality}. The case of general vectors ${\boldsymbol x}_i$ is significantly more challenging. To the best of our knowledge, the first and only proof of universality beyond independent entries was given in the recent paper of Hu and Lu \cite{hu2020universality}. The result of \cite{hu2020universality} is limited to strongly convex ERM problems, and their proof technique relies substantially on this assumption. However, modern machine learning applications require considering problems that are either convex but not strongly convex, or non-convex, as in the example \eqref{eq:LogisticExample}. Further, from a mathematical standpoint, there is no reason to believe that strong convexity should be the `right' condition for universality. Here we present the following contributions: \begin{enumerate} \item\emph{Universality of training error.} We prove that, under suitable conditions on the features ${\boldsymbol x}_i$, the train error (the asymptotic value of $\min_{{\boldsymbol \Theta}}\widehat{R}_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y})$) is universal for general Lipschitz losses $\ell({\boldsymbol u};y)$ and regularizers $r({\boldsymbol \Theta})$. \item \emph{Universality of test error.} We prove that, under additional regularity conditions, the test error is also universal. We emphasize that these regularity conditions concern the asymptotics of the equivalent Gaussian model. Hence, they can be checked using existing techniques.
\item \emph{Applications.} We prove that our results can be applied to feature vectors ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$ that are obtained from two interesting classes of featurization maps: random feature models (random one-layer neural networks) and neural tangent models (obtained by the first-order Taylor expansion of two-layer neural networks). \end{enumerate} In the next section we state our main results. We prove that they apply to random features and neural tangent models in Section \ref{section:examples}, and outline their proofs in Section \ref{section:proof_outline}. Most of the technical work is presented in the appendices. \section{Main results} \label{section:main_results} \subsection{Definitions and notations} We reserve the \textsf{sans-serif font} for parameters that are considered fixed. We use $\norm{X}_{\psi_2}$ and $\norm{f}_\textrm{Lip}$ to denote the subgaussian norm of a random variable $X$ and the Lipschitz modulus of a function $f$, respectively, and $B_q^p(r)$ to denote the $\ell_q$ ball of radius $r$ in $\R^p$. We denote the feature vectors by ${\boldsymbol x}_i\in\R^p$, and the equivalent Gaussian vectors by ${\boldsymbol g}_i\in\R^p$, and introduce the matrices: \begin{equation} \nonumber {\boldsymbol X} := ({\boldsymbol x}_1,\dots,{\boldsymbol x}_n)^\sT,\quad {\boldsymbol G} := ({\boldsymbol g}_1,\dots,{\boldsymbol g}_n)^\sT\, . \end{equation} Throughout, the vectors $\{{\boldsymbol x}_i\}_{i\le n}$ are i.i.d. and $\{{\boldsymbol g}_i\}_{i\le n} \simiid \cN(0,\boldsymbol{\Sigma}_{\boldsymbol g})$. As mentioned above we assume the proportional asymptotics $p,n\to\infty$ whereby, assuming without loss of generality $p := p(n)$, we have \begin{equation} \nonumber \lim_{n\to\infty} \frac{p(n)}{n} = \sgamma \in(0,\infty).
\end{equation} In fact, most of our statements hold under the slightly more general assumption that $p/n\in [C^{-1},C]$. We assume that the response $y_i$ depends on the feature vector ${\boldsymbol x}_i$ through a low-dimensional projection ${\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i$, where ${\boldsymbol \Theta}^\star = ({\boldsymbol \theta}^\star_1,\dots,{\boldsymbol \theta}^\star_{\textsf{k}^\star})\in \R^{p\times \textsf{k}^\star}$ is a fixed matrix of parameters. Namely, we let ${\boldsymbol \epsilon} := (\eps_1,\dots,\eps_n)$, where $\{\eps_i\}_{i\le n}$ are i.i.d., and set: \begin{equation} \label{eq:form_yi} y_i := \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i ,\eps_i \right) \end{equation} for $\eta: \R^{\textsf{k}^\star+ 1}\to\R$. We write ${\boldsymbol y}({\boldsymbol X})$ or $y_i({\boldsymbol x}_i)$ when we want to make the functional dependence of ${\boldsymbol y}$ on ${\boldsymbol X}$ explicit. We denote the model parameters by ${\boldsymbol \Theta} = ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_\textsf{k})$, where ${\boldsymbol \theta}_{k} \in \R^p$ for $k\in[\textsf{k}]$, and estimate them by minimizing the regularized empirical risk of Eq.~\eqref{eq:SecondERM}, subject to ${\boldsymbol \theta}_k\in {\mathcal C}_p$. Namely, we consider the problem \begin{equation} \label{eq:min_problem} \widehat R^\star_n({\boldsymbol X},{\boldsymbol y}) := \inf_{{\boldsymbol \Theta} \in {\mathcal C}_p^\textsf{k} } \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X}, {\boldsymbol y}), \end{equation} for some ${\mathcal C}_p \subset \R^p$, where ${\mathcal C}_p^\textsf{k} := \prod_{k=1}^\textsf{k} {\mathcal C}_p$.
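For intuition on the constrained problem \eqref{eq:min_problem}, here is a minimal sketch with $\textsf{k}=\textsf{k}^\star=1$, squared loss, no regularizer, the linear labeling function $\eta(u,\eps)=u+\eps$, and ${\mathcal C}_p = B_2^p(\textsf{R})$, solved by projected gradient descent. All of these concrete choices, and the Gaussian design, are illustrative rather than prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, R = 300, 100, 2.0
X = rng.normal(size=(n, p)) / np.sqrt(p)            # rows are the feature vectors x_i

# Labels y_i = eta(theta_star^T x_i, eps_i) with eta(u, e) = u + e
theta_star = rng.normal(size=p)
theta_star *= R / (2 * np.linalg.norm(theta_star))  # place theta_star inside C_p = B_2^p(R)
eps = 0.1 * rng.normal(size=n)
y = X @ theta_star + eps

def project(theta):
    """Euclidean projection onto the constraint set C_p = B_2^p(R)."""
    nrm = np.linalg.norm(theta)
    return theta if nrm <= R else theta * (R / nrm)

def risk(theta):
    return np.mean((X @ theta - y) ** 2)

theta = np.zeros(p)
lr = 2.0
for _ in range(1000):
    grad = (2 / n) * X.T @ (X @ theta - y)
    theta = project(theta - lr * grad)

hat_R_star = risk(theta)    # approximates the constrained minimum value in (2.3)
```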
\subsection{Assumptions} Our assumptions are stated in terms of the positive constants $\textsf{R}$, $\textsf{K}$, $\textsf{k}$, $\textsf{k}^{\star}$, the positive function $\textsf{K}_r:\R_{\ge 0}\to\R_{>0}$, and the sequence of symmetric convex sets $\left\{{\mathcal S}_p\right\}_{p\in\Z_{>0}}$ with ${\mathcal S}_p \subseteq B_2^p(\textsf{R})$ for all $p\in \Z_{>0}$. Denoting by $\sOmega=(\sgamma,\textsf{R}, \textsf{K}, \textsf{k}, \textsf{k}^{\star},\textsf{K}_r)$ the list of these constants, all of our results will be uniform with respect to the class of problems that satisfy the assumptions at a given $\sOmega$. \begin{assumption}[Loss and labeling functions] \label{ass:loss} \label{ass:labeling} \label{ass:loss_labeling} The loss function $\ell:\R^{\textsf{k} + 1} \to \R$ is nonnegative Lipschitz with $\norm{\ell}_\textrm{Lip} \le \textsf{K}$, and the labeling function $\eta:\R^{\textsf{k}^\star +1}\to\R$ is Lipschitz with $\norm{\eta}_\textrm{Lip}\le \textsf{K}$. \end{assumption} \begin{assumption}[Constraint set] \label{ass:set} \label{ass:1} The set ${\mathcal C}_p$ defining the constraint in~\eqref{eq:min_problem} is a compact subset of ${\mathcal S}_p$. \end{assumption} \begin{assumption}[Distribution parameters] \label{ass:ThetaStar} For all $k\in[\textsf{k}^\star]$ and $p\in\Z_{>0}$ we have ${\boldsymbol \theta}_k^\star \in{\mathcal S}_p$. \end{assumption} \begin{assumption}[Noise variables] \label{ass:noise} The random variables $\eps_i$ are i.i.d. subgaussian satisfying \begin{equation} \sup_{i\in[n]} \norm{\eps_i}_{\psi_2} \le \textsf{K}. \end{equation} \end{assumption} \begin{assumption}[Regularizer] \label{ass:regularizer} The regularizer $r({\boldsymbol \Theta})$ is locally Lipschitz in Frobenius norm, uniformly in $p$.
That is, for all $p\in\Z_{>0}$, $B>0$, and ${\boldsymbol \Theta}_0,{\boldsymbol \Theta}_1 \in\R^{p\times \textsf{k}}$ satisfying $\norm{{\boldsymbol \Theta}_0}_F,\norm{{\boldsymbol \Theta}_1}_F \le B$, we have \begin{equation} \nonumber \left|r({\boldsymbol \Theta}_0) - r({\boldsymbol \Theta}_1) \right| \le \textsf{K}_r(B) \norm{{\boldsymbol \Theta}_0 - {\boldsymbol \Theta}_1}_F. \end{equation} \end{assumption} \begin{assumption}[Feature vectors] \label{ass:-1} \label{ass:X} Recall that the random vectors $\{{\boldsymbol x}_i\}_{i\le n}$ are i.i.d. and that the vectors $\{{\boldsymbol g}_i\}_{i\le n} \simiid \cN(0,\boldsymbol{\Sigma}_{\boldsymbol g})$. We assume \begin{equation} \nonumber \sup_{\{{\boldsymbol \theta}\in{\mathcal S}_p: \norm{{\boldsymbol \theta}}_2 \le 1\}} \norm{{\boldsymbol x}^\sT {\boldsymbol \theta}}_{\psi_2} \le \textsf{K}\, ,\;\;\;\; \sup_{\{{\boldsymbol \theta}\in{\mathcal S}_p: \norm{{\boldsymbol \theta}}_2 \le 1\}}\norm{\boldsymbol{\Sigma}_{\boldsymbol g}^{1/2} {\boldsymbol \theta}}_2 \le \textsf{K}. \end{equation} Further, for any bounded Lipschitz function $\varphi:\R\to\R$, \begin{equation} \label{eq:condition_bounded_lipschitz_single} \lim_{p\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_p} \Big|\E\big[\varphi\big({\boldsymbol \theta}^\sT{\boldsymbol x}\big)\big] - \E\big[\varphi\big({\boldsymbol \theta}^\sT{\boldsymbol g}\big)\big]\Big| = 0. \end{equation} \end{assumption} \begin{remark} Eq.~\eqref{eq:condition_bounded_lipschitz_single} states that the projections of ${\boldsymbol x}$ along elements of ${\mathcal S}_p$ are asymptotically Gaussian. This is a minimal assumption for universality to hold. As we will see below, universality of the training error amounts to saying that $\widehat R^\star_n({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ is asymptotically distributed as $\widehat R^\star_n({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$. 
Namely, the two risks are similarly distributed \emph{at their respective, random, minimizers $\widehat{\boldsymbol \Theta}_n^{\boldsymbol X}$ and $\widehat{\boldsymbol \Theta}_n^{\boldsymbol G}$}. A minimal condition for this to happen is that their expectations are close \emph{at a fixed, non-random point ${\boldsymbol \Theta}$}, namely \begin{align*} 0&=\lim_{n\to\infty}\big|\E\widehat{R}_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) -\E\widehat{R}_n({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\big|\\ &=\lim_{n\to\infty}\big|\E \ell({\boldsymbol \Theta}^{\sT}{\boldsymbol x}_1;\eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_1,\eps_1))- \E\ell({\boldsymbol \Theta}^{\sT}{\boldsymbol g}_1;\eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol g}_1,\eps_1))\big|\, . \end{align*} This amounts to saying that the distributions of ${\boldsymbol \Theta}^{\sT}{\boldsymbol x}_1$ and ${\boldsymbol \Theta}^{\sT}{\boldsymbol g}_1$ match when tested against a specific function (defined in terms of $\ell$ and $\eta$). Requiring this to hold for all ${\boldsymbol \Theta}\in{\mathcal S}_p^{\textsf{k}}$ essentially amounts to Assumption~\ref{ass:X}. \end{remark} We additionally provide an alternative to Assumption~\ref{ass:loss_labeling}, which is sufficient for our results to hold, but not as straightforward to check. \ifthenelse{\boolean{arxiv}}{ \begin{numassumption}[1$'$] } { \begin{numassumption}{1$'$} } \label{ass:labeling_prime} \label{ass:loss_prime} \label{ass:loss_labeling_prime} The loss function $\ell:\R^{\textsf{k} + 1}\to \R$ is nonnegative differentiable with $\norm{\grad \ell}_\textrm{Lip} \le \textsf{K}$, and the labeling function $\eta:\R^{\textsf{k}^\star + 1}\to\R$ is differentiable with $\norm{\grad \eta}_\textrm{Lip} \le \textsf{K}$.
Furthermore, for any random variables ${\boldsymbol v}\in\R^\textsf{k},{\boldsymbol v}^\star\in\R^{\textsf{k}^\star},V\in\R$ satisfying $$\norm{{\boldsymbol v}}_{\psi_2}\vee\norm{{\boldsymbol v}^\star}_{\psi_2}\vee\norm{V}_{\psi_2} \le 2(\textsf{R} +1) \textsf{K}$$ and any $\beta > 0$, we have \begin{equation} \label{eq:exp_integrability} \E\left[\exp\left\{\beta\left|\ell\big( {\boldsymbol v}, \eta({\boldsymbol v}^\star,V) \big)\right|\right\}\right] \le C(\beta,\textsf{R},\textsf{K}) \end{equation} for some $C(\beta,\textsf{R},\textsf{K})$ dependent only on $\beta,\textsf{R},\textsf{K}$. \end{numassumption} We remark that if $\ell$ and $\eta$ satisfy Assumption~\ref{ass:labeling}, then it is easy to see that~\eqref{eq:exp_integrability} holds. \subsection{Universality of the training error} \begin{theorem} \label{thm:main_empirical_univ} Suppose that either Assumption~\ref{ass:loss_labeling} or Assumption~\hyperref[ass:loss_labeling_prime]{1'} holds along with assumptions \ref{ass:1}-\ref{ass:-1}. Then, for any bounded Lipschitz function $\psi:\R\to\R$, \begin{equation} \label{eq:main_thm_eq} \lim_{n\to\infty}\left|\E\left[\psi\left(\widehat R^\star_n\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)\right)\right] - \E\left[\psi\left(\widehat R^\star_n\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \right)\right]\right| = 0. 
\end{equation} Hence, for any constant $\rho \in \R$ and $\delta>0$, we have \begin{align} \lim_{n\to\infty}\P\left( \widehat R^\star_n\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \ge \rho + \delta \right) &\le \lim_{n\to\infty}\P\left( \widehat R^\star_n\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \ge \rho \right), \textrm{ and }\nonumber\\ \lim_{n\to\infty}\P\left( \widehat R^\star_n\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \le \rho - \delta \right) &\le \lim_{n\to\infty}\P\left( \widehat R^\star_n\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \le \rho \right). \label{eq:prob_bounds} \end{align} Consequently, for all $\rho \in\R$, \begin{equation} \label{eq:P_limit_iff} \widehat R^\star_n({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\stackrel{\P}{\to} \rho \quad\textrm{if and only if} \quad \widehat R^\star_n({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \stackrel{\P}{\to}\rho. \end{equation} \end{theorem} \begin{remark} Theorem \ref{thm:main_empirical_univ} is the key technical result of this paper. While training error is not as interesting as test error, which is treated next, universality of train error is more robust and we will build on it to establish universality of test error. The mathematical reason for the greater robustness of training error is easy to understand. A small data perturbation, changing $\widehat{R}_n({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y})$ to $\widehat{R}_n({\boldsymbol \Theta},{\boldsymbol X}',{\boldsymbol y}')$, changes the value of the minimum at most by $\sup_{{\boldsymbol \Theta}}|\widehat{R}_n({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y})-\widehat{R}_n({\boldsymbol \Theta},{\boldsymbol X}',{\boldsymbol y}')|$, but can change the minimizer by a large amount. The situation is of course significantly simpler if the cost is strongly convex, since in that case the change of the minimizer is controlled as well. 
\end{remark} \noindent{\bf Proof technique.} We outline the proof of Theorem \ref{thm:main_empirical_univ} in Section~\ref{section:proof_outline}. The proof is based on an interpolation method. Namely, we consider a feature matrix ${\boldsymbol Z}_t = \sin(t) {\boldsymbol X}+\cos(t) {\boldsymbol G}$ that continuously interpolates between the two cases as $t$ goes from $0$ to $\pi/2$. We then bound the change in training error (minimum empirical risk) along this path. This approach is analogous to the Lindeberg method~\citep{lindeberg1922neue,chatterjee2006generalization}, which was used in the context of statistical learning in \cite{korada2011applications} and subsequently in \cite{montanari2017universality,oymak2018universality,hu2020universality}. A direct application of the Lindeberg procedure would require swapping an entire row of ${\boldsymbol X}$ with the corresponding row of ${\boldsymbol G}$ and bounding the effect on the minimum empirical risk (we cannot replace one entry at a time since these are dependent). We find the use of a continuous path more effective. In \cite{hu2020universality}, the effect of a swapping step is controlled by first bounding the change in the minimizer $\widehat{\boldsymbol \Theta}$. This is achieved by assuming strong convexity of the empirical risk. The bound on the change of the minimizer immediately implies a bound on the change of the minimum value. In the non-convex setting, we face the challenge of bounding the change of the minimum without bounding the change of the minimizer. We achieve this by using a differentiable approximation of the minimum, and a novel polynomial approximation trick which we believe can be of more general applicability.
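For ridge regression the minimum of the empirical risk has a closed form, so the interpolation path can be visualized directly. A small sketch, comparing Rademacher features against Gaussian features with matching covariance (a case where universality is already known from the independent-entry works cited above; the sizes and design are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, lam = 200, 100, 0.1
X = rng.choice([-1.0, 1.0], size=(n, p))   # independent-entry features, E[x x^T] = I_p
G = rng.normal(size=(n, p))                # Gaussian features with matching covariance
y = rng.normal(size=n)

def min_ridge_risk(A):
    """min over theta of (1/n)||A theta - y||^2 + lam ||theta||_2^2, in closed form."""
    theta = np.linalg.solve(A.T @ A / n + lam * np.eye(p), A.T @ y / n)
    return np.mean((A @ theta - y) ** 2) + lam * np.sum(theta ** 2)

# Z_t = sin(t) X + cos(t) G goes from G (t = 0) to X (t = pi/2);
# the proof controls the derivative of the minimum value along this path
ts = np.linspace(0.0, np.pi / 2, 25)
vals = [min_ridge_risk(np.sin(t) * X + np.cos(t) * G) for t in ts]
spread = max(vals) - min(vals)             # small when n, p are large
```

Plotting `vals` against `ts` shows the minimum value staying nearly constant along the path, which is what the interpolation bound quantifies.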
\subsection{Universality of the test error} Let us define the test error \begin{equation} \nonumber R_n^{\boldsymbol x}({\boldsymbol \Theta}) := \E\Big[ \ell\big({\boldsymbol \Theta}^\sT {\boldsymbol x}; \eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x},\eps)\big) \Big],\quad R_n^{\boldsymbol g}({\boldsymbol \Theta}) := \E\Big[ \ell\big({\boldsymbol \Theta}^\sT {\boldsymbol g}; \eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol g},\eps)\big) \Big]. \end{equation} The first expectation is with respect to independent random variables ${\boldsymbol x}\sim\P_{{\boldsymbol x}}$ and $\eps\sim\P_{\eps}$, and the second with respect to independent ${\boldsymbol g}\sim{\mathcal{N}}(0,\boldsymbol{\Sigma}_{{\boldsymbol g}})$ and $\eps\sim\P_{\eps}$. As discussed above, it is easy to see that, under Assumption \ref{ass:X}, $\lim_{n\to\infty}|R_n^{\boldsymbol x}({\boldsymbol \Theta})-R_n^{\boldsymbol g}({\boldsymbol \Theta})|=0$ at a \emph{fixed} ${\boldsymbol \Theta}$. Here, however, we are interested in comparing the two at near minimizers of the respective ERM problems. We will state two theorems that provide sufficient conditions for universality of the test error. The first one concerns a scenario in which near interpolators (models achieving very small training error) exist. We are interested in this scenario because of its relevance to deep learning, and because it is very different from the strongly convex one. It is useful to define the set of near empirical risk minimizers: \begin{align} {\normalfont\textrm{ERM}}_{t}({\boldsymbol X}):=\big\{{\boldsymbol \Theta} \in {\mathcal C}_p^\textsf{k} \mbox{ s.t. } \widehat{R}_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le t \big\}\, . \end{align} \begin{theorem} \label{thm:universality_bounds} Assume $\lim_{n\to\infty}\P({\normalfont\textrm{ERM}}_0({\boldsymbol G})\neq\emptyset) = 1$.
Then under Assumptions~\ref{ass:loss_labeling}-\ref{ass:-1}, for all $\delta >0,\alpha>0$ and $\rho\in\R$ we have \begin{align} \nonumber \lim_{n\to\infty}\P\Big( \min_{{\boldsymbol \Theta}\in {{\normalfont\textrm{ERM}}}_{\alpha}({\boldsymbol X})} R_n^{\boldsymbol x}({\boldsymbol \Theta}) \ge \rho + \delta \Big) &\le \lim_{n\to\infty}\P\Big( \min_{{\boldsymbol \Theta}\in{\normalfont\textrm{ERM}}_0({\boldsymbol G})}R_n^{\boldsymbol g}({\boldsymbol \Theta}) > \rho \Big), \textrm{ and }\\ \lim_{t\to 0}\lim_{n\to\infty}\P\Big( \min_{{\boldsymbol \Theta}\in{\normalfont\textrm{ERM}}_t({\boldsymbol X})}R_n^{\boldsymbol x}({\boldsymbol \Theta}) \le \rho - \delta \Big) &\le \lim_{n\to\infty}\P\Big( \min_{{\boldsymbol \Theta}\in{\normalfont\textrm{ERM}}_{\alpha}({\boldsymbol G})} R_n^{\boldsymbol g}({\boldsymbol \Theta}) \le \rho \Big). \nonumber \end{align} \end{theorem} In other words, the minimum test error over all near-interpolators is universal (provided it does not change discontinuously with the accuracy of `near interpolation'). The same theorem holds (with identical proof) for the maximum test error over near interpolators, and if the level $0$ is replaced with any deterministic constant. The next theorem provides alternative sufficient conditions that guarantee the universality of the test error. We emphasize that these are conditions on the Gaussian features only and it is therefore possible to check them on concrete models using existing techniques. 
\begin{theorem} \label{thm:test_error} Suppose one of the following holds: \begin{enumerate}[(a)] \item \label{item:a_test_error} The loss $\ell$ is convex, the regularizer $r$ is $\smu$-strongly convex for some fixed constant $\smu>0$, and, letting $\widehat {\boldsymbol \Theta}_n^{\boldsymbol G}:=\argmin_{{\boldsymbol \Theta}\in{\mathcal C}_p^{\textsf{k}}} \widehat R_n({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$, we have, for some $\rho,\widetilde\rho\in\R$, \begin{equation} \nonumber \widehat R_n^\star\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \stackrel{\P}{\to} \rho,\quad R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \Theta}_n^{\boldsymbol G}\right) \stackrel{\P}{\to} \widetilde\rho\, ; \end{equation} \item \label{item:b_test_error} For some $\rho,\widetilde \rho\in\R$, let ${\mathcal U}_p(\widetilde\rho,\alpha):=\{{\boldsymbol \Theta} \in {\mathcal C}_p^\textsf{k} : \;| R_n^{\boldsymbol g}({\boldsymbol \Theta}) - \widetilde \rho |\ge \alpha\}$. We have $\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \stackrel{\P}{\to} \rho$, and for all $\alpha >0$, there exists $\delta>0$ so that \begin{equation} \nonumber \lim_{n\to\infty}\P\Big(\min_{{\boldsymbol \Theta}\in {\mathcal U}_p(\widetilde\rho,\alpha)} | \widehat R_n({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) - \widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) | \ge \delta\Big) = 1; \end{equation} \item \label{item:c_test_error} There exists a function $\rho(s)$ differentiable at $s=0$ such that for all $s$ in a neighborhood of $0$, \begin{equation} \label{eq:DiffCondition} \min_{{\boldsymbol \Theta}\in{\mathcal C}_p^\textsf{k}} \Big\{ \widehat R_n({\boldsymbol \Theta}; {\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) + s R_n^{\boldsymbol g}({\boldsymbol \Theta}) \Big\} \stackrel{\P}\to \rho(s).
\end{equation} \end{enumerate} Then, under Assumptions~\ref{ass:loss_labeling}-\ref{ass:-1}, \begin{equation} \nonumber \left| R_n^{\boldsymbol x}\left(\widehat{\boldsymbol \Theta}_n^{\boldsymbol X}\right) - R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \Theta}_n^{\boldsymbol G}\right)\right| \stackrel{\P}{\to} 0 \end{equation} for any minimizers $\widehat{\boldsymbol \Theta}_n^{\boldsymbol X}, \widehat{\boldsymbol \Theta}_n^{\boldsymbol G}$ of $\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, $\widehat R_n({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$, respectively. \end{theorem} \noindent{\bf Proof technique.} The proofs of Theorems \ref{thm:universality_bounds} and \ref{thm:test_error} are given in Appendices~\ref{section:proof_univerality_bounds} and~\ref{section:proof_test_error}. The basic technique can be gleaned from condition \eqref{eq:DiffCondition}. We perturb the train error by a term proportional to the test error (this is only a proof device, not an actual algorithm). The test error can be related to the derivative with respect to $s$ of the resulting minimum value. The minimum value is universal by our results in the previous section, and the technical challenge is therefore to control its derivative. \section{Examples} \label{section:examples} \subsection{Two layer (finite width) neural tangent model} \label{section:ntk_example} Consider a two layer neural network with $m$ hidden neurons and fixed second-layer weights $f({\boldsymbol z};{\boldsymbol u}) :=\sum_{i=1}^m a_i\sigma(\<{\boldsymbol u}_i,{\boldsymbol z}\>)$, with input ${\boldsymbol z}\in\R^d$. Under neural tangent (a.k.a.
lazy) training conditions, such a network is well approximated by a linear model with respect to the features \begin{equation} \label{eq:ntk_covariates} {\boldsymbol \phi}({\boldsymbol z}) := \left({\boldsymbol z} \sigma'({\boldsymbol w}_1^\sT {\boldsymbol z}),\dots,{\boldsymbol z}\sigma'({\boldsymbol w}_m^\sT {\boldsymbol z})\right) \in\R^p\,, \end{equation} where ${\boldsymbol w}_i$ are the first-layer weights at initialization, ${\boldsymbol u}_i^{0}={\boldsymbol w}_i$, and $p=md$. As in the rest of the paper, we assume we are given training samples $\{(y_i,{\boldsymbol z}_i)\}_{i\le n}$ and compute feature vectors ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$. We assume a simple covariate distribution: $\{{\boldsymbol z}_i\}_{i\le n}\simiid\cN(0,{\boldsymbol I}_d)$. Further, we assume the network initialization to be given by $\{{\boldsymbol w}_j\}_{j\le m}\simiid\textsf{Unif}\left(\S^{d-1}(1)\right)$, i.e., ${\boldsymbol w}_j$ are uniformly distributed on the sphere of radius one in $\R^d$. Notice that: $(i)$~The weights ${\boldsymbol w}_j$ are fixed and do not change from sample to sample; $(ii)$~Although the covariates ${\boldsymbol z}_i$ have a simple distribution, the vectors ${\boldsymbol x}_i$ are highly non-trivial and have dependent entries (in fact they lie on an $m$-dimensional nonlinear manifold in $\R^p$, with $p\gg m$). We assume the activation function $\sigma$ to be four times differentiable with bounded derivatives and to satisfy $\E[\sigma'(G)] = 0$, $\E[G\sigma'(G)] = 0$, for $G\sim\cN(0,1)$. These conditions yield some mathematical simplifications, and we defer relaxing them to future work. Further, we focus on $m= m(n),d = d(n)\in\Z_{>0}$, and $\lim_{n\to\infty}m(n)/d(n) = \widetilde\sgamma$ for some fixed $\widetilde\sgamma\in(0,\infty)$. In particular, $m,d=\Theta(p^{1/2})$, $n = \Theta(p)$.
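The featurization \eqref{eq:ntk_covariates} is straightforward to implement, and linear functions of the features can be rewritten as ${\boldsymbol \theta}^\sT{\boldsymbol x} = {\boldsymbol z}^\sT {\boldsymbol T}_{\boldsymbol \theta}\, \sigma'({\boldsymbol W}^\sT{\boldsymbol z})$ for the $d\times m$ matrix ${\boldsymbol T}_{\boldsymbol \theta}$ whose columns are the $d$-dimensional blocks of ${\boldsymbol \theta}$; both are checked numerically below. A minimal sketch (using $\sigma'$ equal to the derivative of $\tanh$ purely for shape: it does not satisfy the moment condition $\E[\sigma'(G)]=0$ assumed above, which would require a recentered activation):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 10, 15
p = m * d
sigma_prime = lambda u: 1.0 / np.cosh(u) ** 2   # derivative of tanh (shape illustration only)

W = rng.normal(size=(d, m))
W /= np.linalg.norm(W, axis=0)                  # columns w_j uniform on the unit sphere

def ntk_features(z):
    """phi(z) = (z s'(w_1^T z), ..., z s'(w_m^T z)) in R^{md}."""
    return np.outer(sigma_prime(W.T @ z), z).ravel()   # j-th block is z * s'(w_j^T z)

z = rng.normal(size=d)
x = ntk_features(z)

# theta^T x = z^T T_theta sigma'(W^T z), with T_theta's columns the blocks of theta
theta = rng.normal(size=p)
T_theta = theta.reshape(m, d).T                 # column j is the j-th block theta_(j)
lhs = theta @ x
rhs = z @ (T_theta @ sigma_prime(W.T @ z))
```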
For ${\boldsymbol \theta} =({\boldsymbol \theta}_{\up{1}}^\sT,\dots,{\boldsymbol \theta}_\up{m}^\sT)^\sT\in\R^p$, where ${\boldsymbol \theta}_{\up{j}} \in \R^d$ for $j\in[m]$, let ${\boldsymbol T}_{\boldsymbol \theta}\in\R^{d\times m}$ be the matrix ${\boldsymbol T}_{\boldsymbol \theta} = \left({\boldsymbol \theta}_\up{1} ,\dots, {\boldsymbol \theta}_\up{m}\right)$, so that ${\boldsymbol \theta}^\sT {\boldsymbol x} = {\boldsymbol z}^\sT {\boldsymbol T}_{\boldsymbol \theta} \sigma'({\boldsymbol W}^\sT {\boldsymbol z})$, where ${\boldsymbol W}= ({\boldsymbol w}_1,\dots,{\boldsymbol w}_m)$ and $\sigma':\R\to\R$ is applied entrywise. We define, for $p\in\Z_{>0}$, \begin{equation} \label{eq:ntk_set} {\mathcal S}_{p} := \left\{ {\boldsymbol \theta}\in\R^p : \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \le \frac{ \textsf{R}}{\sqrt{d}} \right\}. \end{equation} We have the following universality result for the neural tangent model~\eqref{eq:ntk_covariates}. We state the result explicitly for the training error; however, universality holds for the test error as well under the additional conditions of Theorem \ref{thm:universality_bounds} and Theorem \ref{thm:test_error}. \begin{corollary} \label{cor:ntk_universality} Let ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$ as per Eq.~\eqref{eq:ntk_covariates} with $\{{\boldsymbol z}_i\}_{i \le n}\stackrel{\textrm{i.i.d.}}{\sim}\cN(0,{\boldsymbol I}_d)$, and~${\mathcal S}_p$ be as defined in~\eqref{eq:ntk_set}. Further, let $(\ell,\eta),{\mathcal C}_p,{\boldsymbol \theta}^\star,\eps,r$ satisfy Assumptions~\ref{ass:loss_labeling},~\ref{ass:set},~\ref{ass:ThetaStar},~\ref{ass:noise} and~\ref{ass:regularizer}, respectively, and ${\boldsymbol g}_i|{\boldsymbol W} \simiid\cN(0,\boldsymbol{\Sigma}_{\boldsymbol W})$ for $\boldsymbol{\Sigma}_{\boldsymbol W} := \E\left[{\boldsymbol x}\bx^\sT| {\boldsymbol W}\right]$.
Then for any bounded Lipschitz function $\psi:\R\to\R$, Eq.~\eqref{eq:main_thm_eq} holds along with its consequences: Eq.~\eqref{eq:prob_bounds} and Eq.~\eqref{eq:P_limit_iff}. \end{corollary} \begin{remark} Corollary \ref{cor:ntk_universality} does not hold if we relax the set ${\mathcal S}_p$ to ${\mathcal S}_p:=B_2^p(\textsf{R})$. Indeed, for ${\boldsymbol T}_{\boldsymbol \theta} = (\textsf{R}/\sqrt{d})\left(\mathbf{1}_d, \mathbf{0},\dots,\mathbf{0} \right)$, the random variable ${\boldsymbol \theta}^\sT{\boldsymbol x} = {\boldsymbol z}^\sT {\boldsymbol T}_{\boldsymbol \theta} \sigma'({\boldsymbol W}^\sT{\boldsymbol z})$ is not asymptotically Gaussian. Clearly, this choice of ${\boldsymbol \theta}$ is not in the set defined in~\eqref{eq:ntk_set}. \end{remark} \noindent{\bf Proof technique.} We prove this as a consequence of Theorem~\ref{thm:main_empirical_univ} in Appendix~\ref{proof:ntk_universality}. The key step is to check Assumption~\ref{ass:-1} on the distribution of the feature vectors. In particular, we check Eq.~\eqref{eq:condition_bounded_lipschitz_single} using Stein's method, as done in \cite{hu2020universality} for the random features model. However, treating the neural tangent features of Eq.~\eqref{eq:ntk_covariates} requires extra care due to the more complex covariance structure. \subsection{Random features} \label{section:rf_example} Consider a two-layer network with $p$ hidden neurons and fixed first-layer weights, $f({\boldsymbol z};{\boldsymbol a}) :=\sum_{i=1}^p a_i\sigma(\<{\boldsymbol w}_i,{\boldsymbol z}\>)$, where ${\boldsymbol z}\in\R^d$. This is a linear model with respect to the features \begin{equation} \label{eq:rf_covariates} {\boldsymbol \phi}({\boldsymbol z}):= \left(\sigma\left({\boldsymbol w}_1^\sT {\boldsymbol z}\right),\dots,\sigma\left({\boldsymbol w}_p^\sT{\boldsymbol z}\right)\right).
\end{equation} As before, we consider $\{{\boldsymbol z}_i\}_{i\le n}\stackrel{\textrm{i.i.d.}}{\sim}\cN(0,{\boldsymbol I}_d)$ and ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$. Further, we assume the first-layer weights to be given by $\{{\boldsymbol w}_j\}_{j\le p}\stackrel{\textrm{i.i.d.}}{\sim}\textsf{Unif}\left(\S^{d-1}(1)\right)$. The activation function $\sigma$ is now assumed to be three times continuously differentiable with bounded derivatives, with $\E\sigma(G) = 0$ for $G\sim \cN(0,1)$. (These are slightly weaker conditions than in the previous section.) We consider $d=d(n)\in\Z_{>0}$ such that, for some fixed $\widetilde\sgamma \in (0,\infty)$, $\lim_{n\to\infty} d(n)/p(n) = \widetilde\sgamma$. Finally, define for $p\in\Z_{>0}$ \begin{equation} \label{eq:rf_set} {\mathcal S}_p := B_\infty^p\left(\frac{\textsf{R}}{\sqrt{p}}\right). \end{equation} Let ${\boldsymbol W}$ be the matrix whose columns are the weights ${\boldsymbol w}_j$. We have the following corollary of Theorem~\ref{thm:main_empirical_univ}. \begin{corollary} \label{cor:rf_universality} Let ${\boldsymbol x}_i={\boldsymbol \phi}({\boldsymbol z}_i)$ as per Eq.~\eqref{eq:rf_covariates} with $\{{\boldsymbol z}_i\}_{i \le n}\stackrel{\textrm{i.i.d.}}{\sim}\cN(0,{\boldsymbol I}_d)$, and~${\mathcal S}_p$ be as defined in~\eqref{eq:rf_set}. Further, let $(\ell,\eta), {\mathcal C}_p, {\boldsymbol \theta}^\star, \eps, r$ satisfy assumptions~\ref{ass:loss_labeling},~\ref{ass:set},~\ref{ass:ThetaStar},~\ref{ass:noise} and~\ref{ass:regularizer} respectively, and ${\boldsymbol g}_i|{\boldsymbol W} \simiid\cN(0,\boldsymbol{\Sigma}_{\boldsymbol W})$ for $\boldsymbol{\Sigma}_{\boldsymbol W} := \E\left[{\boldsymbol x}\bx^\sT| {\boldsymbol W}\right]$. Then for any bounded Lipschitz function $\psi:\R\to\R$, Eq.~\eqref{eq:main_thm_eq} holds along with its consequences: Eq.~\eqref{eq:prob_bounds} and Eq.~\eqref{eq:P_limit_iff}.
\end{corollary} In Appendix~\ref{proof:rf_universality}, we derive this corollary as a consequence of Theorem~\ref{thm:main_empirical_univ}. To do so, we use a result established by~\cite{hu2020universality} implying that the feature vectors ${\boldsymbol x}_i$ satisfy Assumption~\ref{ass:X}, for every ${\boldsymbol W}$ in a high probability set. \subsection{Linear functions of vectors with independent entries} Consider feature vectors ${\boldsymbol x}_i = \boldsymbol{\Sigma}^{1/2}\overline{\boldsymbol x}_i\in\R^p$, where the vectors $\overline{\boldsymbol x}_i$ have $p$ i.i.d.\ subgaussian entries of subgaussian norm bounded by $\textsf{K}$. We assume $\|\boldsymbol{\Sigma}\|_{\mathrm{op}}\le\textsf{K}$. Fix any $\alpha>0$. An application of the Lindeberg central limit theorem (CLT) shows that Eq.~\eqref{eq:condition_bounded_lipschitz_single} of Assumption~\ref{ass:X} holds for \begin{equation} \label{eq:indep_set} {\mathcal S}_p := \big\{ {\boldsymbol \theta}\in\R^p:\; \|\boldsymbol{\Sigma}^{1/2}{\boldsymbol \theta}\|_{\infty}\le \textsf{R} p^{-\alpha}\, , \|{\boldsymbol \theta}\|_{2}\le \textsf{R}\big\}\, . \end{equation} \begin{corollary} \label{cor:indep_universality} Let ${\boldsymbol x}_i = \boldsymbol{\Sigma}^{1/2}\overline{\boldsymbol x}_i\in\R^p$ where $\overline{\boldsymbol x}_i$ has i.i.d.\ subgaussian entries, and~${\mathcal S}_p$ be as defined in~\eqref{eq:indep_set} for some $\alpha>0$. Furthermore, let $(\ell,\eta),{\boldsymbol \theta}^\star,\eps,r$ satisfy assumptions~\ref{ass:loss_labeling},~\ref{ass:ThetaStar},~\ref{ass:noise} and~\ref{ass:regularizer} respectively. Let ${\boldsymbol g}_i \sim\cN(0,\nu \boldsymbol{\Sigma})$, where $\nu := {\rm Var}(\overline{x}_{1,1})$. Then Eq.~\eqref{eq:main_thm_eq} holds for any bounded Lipschitz function $\psi:\R \to \R$ and any compact ${\mathcal C}_p\subseteq {\mathcal S}_p$, along with its consequences:~\eqref{eq:prob_bounds} and~\eqref{eq:P_limit_iff}.
\end{corollary} \section{Proof outline for Theorem~\ref{thm:main_empirical_univ}} \label{section:proof_outline} We redefine the vector $\sOmega$ that appears in our assumptions to include $\smu$ and $\widetilde\sgamma$: $\sOmega := \left(\textsf{k},\textsf{k}^\star,\sgamma,\textsf{R},\textsf{K},\textsf{K}_r(\cdot),\smu,\widetilde\sgamma\right)$. We will use $C,\widetilde C,C',c, C_0,C_1,\dots$ to denote constants that depend only on $\sOmega$, often without explicit definition. If a constant $C$ depends \textit{additionally} on some variable, say $\beta$, we write $C(\beta)$. We prove Eq.~\eqref{eq:main_thm_eq} of Theorem~\ref{thm:main_empirical_univ} under the weaker Assumption~\hyperref[ass:loss_labeling_prime]{1'} instead of Assumption~\ref{ass:loss_labeling}. We begin by approximating the ERM value $\widehat{R}^\star_n({\boldsymbol X},{\boldsymbol y})$, cf. Eq.~\eqref{eq:min_problem}, by a \textit{free energy} defined by a sum over a finite set in $\R^{p\times\textsf{k}}$. Namely, for $\alpha>0$, let $\cN_\alpha$ be a minimal $\alpha$-net of ${\mathcal C}_p$ and define \begin{equation} \label{eq:free_energy_def} f_\alpha(\beta,{\boldsymbol X}):= -\frac{1}{n\beta}\log\sum_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} \exp\left\{-\beta\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \right\}. \end{equation} \begin{lemma}[Universality of the free energy] \label{lemma:softmin_univ} Under Assumption~\hyperref[ass:loss_labeling_prime]{1'} along with Assumptions~\ref{ass:1}-\ref{ass:-1}, for any fixed $\alpha >0$ and any bounded differentiable function $\psi$ with bounded Lipschitz derivative we have \begin{equation} \nonumber \lim_{n\to\infty} \left|\E\left[\psi\left(f_{\alpha}(\beta,{\boldsymbol X})\right)\right] - \E\left[\psi\left(f_{\alpha}(\beta,{\boldsymbol G})\right)\right]\right| = 0.
\end{equation} \end{lemma} Here, we outline the proof of this lemma, deferring several technical details to Appendix~\ref{section:proof_softmin_univ}, where we present the complete proof. A standard estimate bounds the difference between the free energy and the minimum empirical risk (see Appendix): for all $\beta>0$, \begin{equation} \nonumber \Big| f_\alpha(\beta,{\boldsymbol X}) - \min_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} \widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\Big| \le C(\alpha)\,\beta^{-1}. \end{equation} Hence, Theorem~\ref{thm:main_empirical_univ} follows from Lemma~\ref{lemma:softmin_univ} via an approximation argument detailed in Appendix~\ref{section:proof_of_main_results}. \subsection*{Universality of the free energy} We assume, without loss of generality, that ${\boldsymbol X}$ and ${\boldsymbol G}$ are defined on the same probability space and are independent, and define the interpolating paths \begin{equation} \label{eq:slepian} {\boldsymbol u}_{t,i}:= \sin(t) {\boldsymbol x}_i + \cos(t) {\boldsymbol g}_i\quad\textrm{and}\quad \widetilde{\boldsymbol u}_{t,i}:= \cos(t) {\boldsymbol x}_i - \sin(t) {\boldsymbol g}_i \end{equation} for $t\in[0,\pi/2]$ and $i\in[n]$. We use ${\boldsymbol U}_t$ to denote the matrix whose $i$th row is ${\boldsymbol u}_{t,i}$; note that these rows are i.i.d.~since the rows of ${\boldsymbol X}$ and ${\boldsymbol G}$ are so.
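At the level of second moments, the interpolation~\eqref{eq:slepian} is covariance-preserving: for independent centered ${\boldsymbol x}_i$ and ${\boldsymbol g}_i$ with common covariance $\boldsymbol{\Sigma}$, one has $\mathrm{Cov}({\boldsymbol u}_{t,i})=\sin^2(t)\boldsymbol{\Sigma}+\cos^2(t)\boldsymbol{\Sigma}=\boldsymbol{\Sigma}$ and $\mathrm{Cov}({\boldsymbol u}_{t,i},\widetilde{\boldsymbol u}_{t,i})=\sin(t)\cos(t)\boldsymbol{\Sigma}-\cos(t)\sin(t)\boldsymbol{\Sigma}=0$ for every $t$. A minimal matrix-level check of these identities (a numerical sketch assuming NumPy; the function name is ours, and of course only the second moments, not the full distributions, match when ${\boldsymbol x}_i$ is non-Gaussian):

```python
import numpy as np

def interp_covariances(S, t):
    """Second moments of u_t = sin(t) x + cos(t) g and u~_t = cos(t) x - sin(t) g,
    for independent centered x, g with common covariance S."""
    s, c = np.sin(t), np.cos(t)
    cov_u = s**2 * S + c**2 * S    # Cov(u_t), by independence of x and g
    cross = s * c * S - c * s * S  # Cov(u_t, u~_t)
    return cov_u, cross

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = A @ A.T                        # an arbitrary covariance matrix
for t in np.linspace(0.0, np.pi / 2, 7):
    cov_u, cross = interp_covariances(S, t)
    assert np.abs(cov_u - S).max() < 1e-9      # Cov(u_t) = S for all t
    assert np.abs(cross).max() < 1e-9          # u_t and u~_t are uncorrelated
```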
Noting that ${\boldsymbol x}^\sT{\boldsymbol \theta}$ and ${\boldsymbol g}^\sT{\boldsymbol \theta}$ are subgaussian with subgaussian norms bounded by $\textsf{R}\textsf{K}$ uniformly over ${\boldsymbol \theta}\in{\mathcal S}_p$, it is easy to see that $\sup_{t\in[0,\pi/2], {\boldsymbol \theta}\in{\mathcal S}_p} \norm{{\boldsymbol u}_t^\sT {\boldsymbol \theta}}_{\psi_2} \le 2\textsf{R}\textsf{K}.$ The goal is to control the difference $\left|\E\left[\psi(f_{\alpha}(\beta,{\boldsymbol X}))\right] - \E\left[\psi(f_{\alpha}(\beta,{\boldsymbol G}))\right] \right|$ by controlling the expectation of the derivative $\left|\E \left[\partial_t \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right]\right|$. Before computing the derivative involved, we introduce some notation to simplify the exposition. For ${\boldsymbol v}\in\R^\textsf{k},{\boldsymbol v}^\star \in\R^{\textsf{k}^\star},v\in\R$, we define the notation \begin{equation} \nonumber \grad \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) = \left( \frac{\partial}{\partial v_k}\ell\left({\boldsymbol v};\eta({\boldsymbol v}^\star,v)\right)\right)_{k\in[\textsf{k}]}, \quad \grad^\star \ell\left({\boldsymbol v};\eta({\boldsymbol v}^\star,v)\right) = \left( \frac{\partial}{\partial v^\star_k} \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) \right)_{k\in[\textsf{k}^\star]}. \end{equation} Furthermore, we will use the shorthand $\widehat\ell_{t,i}({\boldsymbol \Theta})$ for $\ell\left({\boldsymbol \Theta}^\sT {\boldsymbol u}_{t,i}; \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol u}_{t,i},\eps_i\right)\right)$ and define the term \begin{equation} \label{eq:bd_def} \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta}) :=\left( {\boldsymbol \Theta} \grad\widehat\ell_{t,i}({\boldsymbol \Theta}) + {\boldsymbol \Theta}^\star \grad^\star\widehat\ell_{t,i}({\boldsymbol \Theta}) \right).
\end{equation} It is convenient to define the probability mass function over ${\boldsymbol \Theta}_0\in\cN_\alpha^\textsf{k}$: \begin{equation} \label{eq:inner_def} p^\up{i}({\boldsymbol \Theta}_0;t) := \frac{ e^{-\beta\left(\sum_{j\neq i}\widehat\ell_{t,j}({\boldsymbol \Theta}_0) + n r({\boldsymbol \Theta}_0)\right) }}{\sum_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} e^{-\beta\left(\sum_{j\neq i}\widehat\ell_{t,j}({\boldsymbol \Theta}) + n r({\boldsymbol \Theta})\right) }} \quad\textrm{and}\quad \inner{\;\cdot\;}^\up{i}_{\boldsymbol \Theta}:= \sum_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} (\,\cdot\,)p^\up{i}({\boldsymbol \Theta};t) \end{equation} for $i\in[n]$. With this notation, we can write \begin{align} \label{eq:derivative_explicit} \E\left[\frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right] &= \E\left[\frac{\psi'(f_\alpha(\beta,{\boldsymbol U}_t))}{n} \sum_{i=1}^n \frac{\inner{ \widetilde{\boldsymbol u}_{t,i}^\sT \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta}) e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}} {\inner{e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}}\right]. 
\end{align} Via a leave-one-out argument detailed in Appendix~\ref{section:proof_softmin_univ}, we show that this form allows us to control \begin{equation} \label{eq:upper_bound_on_derivative} \limsup_{n\to\infty}\left|\E\left[\frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right]\right|\hspace{-0.5mm}\le\hspace{-0.5mm} \norm{\psi'}_\infty\hspace{-0.5mm}\E \hspace{-0.5mm} \left[\limsup_{n\to\infty}\sup_{{\boldsymbol \Theta}_0}\left|\E_\up{1}\left[\frac{ \widetilde{\boldsymbol u}_{t,1}^\sT \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}}{\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta}} \right]\right| \right] \end{equation} where $\E_\up{1}$ denotes the expectation conditional on $({\boldsymbol G}^\up{1},{\boldsymbol X}^\up{1},{\boldsymbol \epsilon}^\up{1})$, i.e., the feature and noise vectors with the first sample set to $0$. Meanwhile, the following lemma, whose proof is deferred to Appendix~\ref{section:proof_poly_approx}, allows us to control the right-hand side in~\eqref{eq:upper_bound_on_derivative}. \begin{lemma} \label{lemma:poly_approx} Suppose Assumptions~\hyperref[ass:loss_labeling_prime]{1'} and~\ref{ass:set}-\ref{ass:X} hold.
For any $\delta >0, \beta>0$, there exists a polynomial $P$ of degree and coefficients dependent only on $\delta,\beta$ and $\sOmega$ such that for all ${\boldsymbol \Theta}_0 \in {\mathcal S}_p^\textsf{k}, t\in[0,\pi/2]$ and $n \in \Z_{>0}$ \begin{align} \nonumber &\left|\E_\up{1}\left[ \frac{\widetilde{\boldsymbol u}_{t,1}^\sT\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})} }^\up{1}_{\boldsymbol \Theta}}\right]\right| \le \left|{\E_\up{1}\left[ {\widetilde{\boldsymbol u}_{t,1}^\sT\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)} P\left(\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta} \right)\right]} \right| + \delta. \nonumber \end{align} \end{lemma} This polynomial approximation lemma is crucial in that, via~\eqref{eq:upper_bound_on_derivative}, it allows us to control the derivative in terms of a low-dimensional projection of the interpolating feature vectors. In turn, the term involving these projections is easier to control. 
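The existence of such a polynomial is plausible at a glance: the Gibbs average $\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta}$ takes values in an interval $[s_0,1]$ bounded away from zero (when the loss is effectively bounded on the relevant event), and $s\mapsto 1/s$ admits uniform polynomial approximation on any such interval. A numerical sketch of this one-dimensional fact (assuming NumPy; the interval endpoints and degree are illustrative, not the constants of the lemma):

```python
import numpy as np

# Approximate s -> 1/s uniformly on [s0, 1] with s0 > 0, mimicking the role of
# the polynomial P applied to the Gibbs average, which lives in such an interval.
s0 = np.exp(-1.5)                        # e.g. a loss bound beta*B = 1.5 (illustrative)
s = np.linspace(s0, 1.0, 2000)
P = np.polynomial.Chebyshev.fit(s, 1.0 / s, deg=15)   # degree-15 Chebyshev fit
sup_err = np.max(np.abs(P(s) - 1.0 / s))
assert sup_err < 1e-3                    # |1/s - P(s)| <= delta uniformly on [s0, 1]
```

Since $1/s$ is analytic on a neighborhood of $[s_0,1]$, the uniform error decays geometrically in the degree, so any target $\delta$ can be met with degree and coefficients depending only on $\delta$ and the interval.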
Indeed, letting $P(s) = \sum_{j=0}^M b_js^j$ for degree $M\in\Z_{>0}$ and coefficients $\{b_j\}_{0\le j\le M}$ as in the lemma, we can rewrite \begin{equation} \label{eq:poly_rewritten} {\E_\up{1}\hspace{-1mm}\left[ {\widetilde{\boldsymbol u}_{t,1}^\sT\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)} P\hspace{-1mm}\left(\hspace{-1mm}\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta} \right)\hspace{-1mm}\right]} \hspace{-1.3mm} =\hspace{-1.2mm} \sum_{j=0}^M\hspace{-0.5mm} b_j\hspace{-0.5mm} \inner{\hspace{-0.5mm} \E_\up{1}\hspace{-1mm}\left[ \widetilde{\boldsymbol u}_{t,1}^\sT \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\sum_{l=0}^j\widehat\ell_{t,1}({\boldsymbol \Theta}_l)} \right]\hspace{-1mm} }^\up{1}_{{\boldsymbol \Theta}_1^j} \end{equation} where $\inner{\;\cdot\;}^\up{1}_{{\boldsymbol \Theta}_1^j}$ is the expectation with respect to $\{{\boldsymbol \Theta}_l\}_{l\in[j]}$, seen as independent samples from $p^\up{1}({\boldsymbol \Theta};t)$. The next lemma then states that the right-hand side in~\eqref{eq:poly_rewritten} can be controlled via its Gaussian equivalent. \begin{lemma} \label{lemma:gaussian_approx} Suppose Assumptions~\hyperref[ass:loss_labeling_prime]{1'} and~\ref{ass:X} hold. Let $\widetilde{\boldsymbol g}_1\sim\cN(0,\boldsymbol{\Sigma}_{\boldsymbol g})$ and $\widetilde \eps_1$ be an independent copy of $\eps_1$, both independent of ${\boldsymbol g}_1$, and define ${\boldsymbol w}_{t,1} = \sin(t) \widetilde{\boldsymbol g}_1 + \cos(t){\boldsymbol g}_1$ and $\widetilde{\boldsymbol w}_{t,1} = \cos(t) \widetilde{\boldsymbol g}_{1} - \sin(t){\boldsymbol g}_1$.
For any fixed $\beta>0$, $t\in[0,\pi/2]$ and $J\in\Z_{>0}$ we have, as $p\to\infty$, \begin{align} \label{eq:gaussian_approx_lemma_eq} \hspace{2mm} \sup_{ \mathclap{\substack{ {\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star}\\ \hspace{7mm}{\boldsymbol \Theta}_0,\dots,{\boldsymbol \Theta}_J\in{\mathcal S}_p^\textsf{k} }} } \hspace{2mm} \Bigg| \E\left[ {\widetilde{\boldsymbol u}_{t,1}^\sT\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=0}^J\widehat\ell_{t,1}\left({\boldsymbol \Theta}_l\right)} \right]- \E\left[ {\widetilde{\boldsymbol w}_{t,1}^\sT \widehat{\boldsymbol q}_{t,1}({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=0}^J\ell\left({\boldsymbol \Theta}_l^\sT{\boldsymbol w}_{t,1};\eta\left( {\boldsymbol \Theta}^{\star\sT} {\boldsymbol w}_{t,1}, \widetilde\eps_1\right) \right)} \right] \Bigg| \to 0 \end{align} where\; $\widehat{\boldsymbol q}_{t,1}({\boldsymbol \Theta}_0) := {\boldsymbol \Theta}_0 \grad\ell\left({\boldsymbol \Theta}_0^\sT {\boldsymbol w}_{t,1};\eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol w}_{t,1},\widetilde\eps_1\right)\right) + {\boldsymbol \Theta}^\star \grad^\star\ell\left({\boldsymbol \Theta}_0^\sT {\boldsymbol w}_{t,1};\eta\left( {\boldsymbol \Theta}^{\star\sT}{\boldsymbol w}_{t,1},\widetilde\eps_1\right)\right).$ \end{lemma} The proof of this lemma is deferred to Appendix~\ref{section:proof_gaussian_approx}. Note that ${\boldsymbol w}_{t,1}$ and $\widetilde {\boldsymbol w}_{t,1}$ are jointly Gaussian with cross-covariance $\E\left[\widetilde{\boldsymbol w}_{t,1}{\boldsymbol w}_{t,1}^\sT\right] = 0 $ for all $t\in[0,\pi/2]$, and hence they are independent.
Then, using that $\E[\widetilde {\boldsymbol w}_{t,1}] =0$ and this independence, the expectation involving $\widetilde {\boldsymbol w}_{t,1},{\boldsymbol w}_{t,1}$ in~\eqref{eq:gaussian_approx_lemma_eq} decouples as \begin{equation} \nonumber \E\Big[ \widetilde{\boldsymbol w}_{t,1}\Big]^\sT \E\left[\widehat{\boldsymbol q}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\sum_{l=0}^J\ell\left({\boldsymbol \Theta}_l^\sT{\boldsymbol w}_{t,1};\eta\left( {\boldsymbol \Theta}^{\star\sT} {\boldsymbol w}_{t,1}, \widetilde\eps_1\right) \right)} \right] = 0. \end{equation} From this we conclude that \begin{equation} \nonumber \limsup_{n\to\infty}\left|\E\left[\frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right]\right|= 0, \end{equation} from which the statement of Lemma~\ref{lemma:softmin_univ} follows. \section{Proof of Theorem~\ref{thm:main_empirical_univ}} \label{section:proof_of_main_results} In this section, we complete the proof of Theorem~\ref{thm:main_empirical_univ} by deducing it from Lemma~\ref{lemma:softmin_univ}. \subsection{Universality of optimal empirical risk: Proof of Theorem~\ref{thm:main_empirical_univ}} \label{section:main_thm_proof} Recall that for $\alpha>0$, in Section~\ref{section:proof_outline} we let $\cN_\alpha$ be a minimal $\alpha$-net of $ {\mathcal C}_p \subseteq B_2^p(\textsf{R})$, so that $|\cN_\alpha| \le C(\alpha)^p$ for some $C(\alpha)$ depending only on $\alpha$ and $\sOmega$. Let us define the discretized minimization over ${\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}$ \begin{equation} \label{eq:opt} \mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) := \min_{{\boldsymbol \Theta} \in \cN_\alpha^\textsf{k}} \widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})). \end{equation} We have the following consequence of Lemma~\ref{lemma:softmin_univ}.
\begin{lemma}[Universality of $\mathrm{Opt}_n^\alpha$] \label{prop:eps_net_univ} Under Assumption~\hyperref[ass:loss_labeling_prime]{1'} along with Assumptions~\ref{ass:1}-\ref{ass:-1}, we have for any bounded differentiable function $\psi$ with bounded Lipschitz derivative \begin{equation} \nonumber \lim_{n\to\infty}\left|\E\left[\psi\left(\mathrm{Opt}_n^\alpha\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)\right) \right] - \E\left[\psi\left(\mathrm{Opt}_n^\alpha\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right)\right)\right]\right| = 0. \end{equation} \end{lemma} The proof of this result is deferred to Section~\ref{section:universality_eps_net_proof}. Here, we show that Theorem~\ref{thm:main_empirical_univ}, under the alternative Assumption~\hyperref[ass:loss_labeling_prime]{1'}, is a direct consequence of this lemma. First, we need a few technical lemmas. Let us define the restricted operator norm \begin{equation} \nonumber \norm{{\boldsymbol X}}_{{\mathcal S}_p} := \sup_{\left\{{\boldsymbol \theta}\in{\mathcal S}_p : \norm{{\boldsymbol \theta}}_2 \le 1\right\}} \norm{{\boldsymbol X} {\boldsymbol \theta}}_2. \end{equation} \begin{lemma} \label{lemma:op_norm_x} For ${\boldsymbol X},{\boldsymbol G}$ as in Assumption~\ref{ass:X}, we have for some $C \in(0,\infty)$ depending only on $\sOmega$, \begin{equation} \E\left[\norm{{\boldsymbol X}}_{{\mathcal S}_p}^2\right] \le C p,\quad \E\left[\norm{{\boldsymbol G}}_{{\mathcal S}_p}^2\right] \le C p.
\nonumber \end{equation} \end{lemma} \begin{lemma} \label{lemma:Rn_lipschitz_bound} Under Assumptions~\hyperref[ass:loss_labeling_prime]{1'},~\ref{ass:ThetaStar},~\ref{ass:noise},~\ref{ass:regularizer} and~\ref{ass:X}, we have for all ${\boldsymbol \Theta},\widetilde{\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$ \begin{equation} \left|\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widetilde{\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right| \le C \left( \frac{\norm{{\boldsymbol X}}^2_{{\mathcal S}_p}}{n} + \frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}\norm{{\boldsymbol y}}_2}{n}+ 1 \right) \norm{{\boldsymbol \Theta} -\widetilde {\boldsymbol \Theta}}_F, \nonumber \end{equation} for some $C > 0$ depending only on $\sOmega$. A similar bound also holds for the Gaussian model. \end{lemma} The proofs are deferred to Sections~\ref{proof:op_norm_x} and~\ref{proof:Rn_lipschitz_bound} respectively. Here, we derive Theorem~\ref{thm:main_empirical_univ}. \subsubsection{Proof of Eq.~\eqref{eq:main_thm_eq} of Theorem~\ref{thm:main_empirical_univ} under Assumption~\hyperref[ass:loss_labeling_prime]{1'}} \label{section:opt_to_erm} Let $\widehat{\boldsymbol \Theta}_{\boldsymbol X} := \left(\widehat {\boldsymbol \theta}_{{\boldsymbol X},1},\dots,\widehat{\boldsymbol \theta}_{{\boldsymbol X},\textsf{k}}\right) $ be a minimizer of $\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, and then let\\ $\widetilde{\boldsymbol \Theta}_{\boldsymbol X} := \big(\widetilde {\boldsymbol \theta}_{{\boldsymbol X},1},\dots,\widetilde {\boldsymbol \theta}_{{\boldsymbol X},\textsf{k}}\big)$ where $\widetilde{\boldsymbol \theta}_{{\boldsymbol X},k}$ is the closest point in $\cN_\alpha$ to $\widehat{\boldsymbol \theta}_{{\boldsymbol X},k}$ in $\ell_2$ norm. 
We have \begin{align} \left| \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right| &\stackrel{(a)}{=} \left( \mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R^\star_n({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \right)\nonumber\\ &\stackrel{(b)}{\le} \left( \widehat R_n(\widetilde {\boldsymbol \Theta}_{{\boldsymbol X}};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X}) ) - \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right)\nonumber\\ &\stackrel{(c)}{=}\left|\widehat R_n (\widehat{\boldsymbol \Theta}_{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widetilde {\boldsymbol \Theta}_{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right|\nonumber\\ &\stackrel{(d)}{\le} C_0 \left( \frac{\norm{{\boldsymbol X}}^2_{{\mathcal S}_p}}{n} + \frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}\norm{{\boldsymbol y}}_2}{n}+ 1 \right) \textsf{k}^{1/2} \alpha,\nonumber \end{align} where in $(a)$ we used that $\mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \ge \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, in $(b)$ we used $\mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le \widehat R_n(\widetilde {\boldsymbol \Theta}_{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, in $(c)$ we used $\widehat R_n(\widetilde {\boldsymbol \Theta}_{{\boldsymbol X}};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X}) )\ge \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, and in $(d)$ we used Lemma~\ref{lemma:Rn_lipschitz_bound}. 
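Steps $(b)$--$(d)$ instantiate a generic fact: minimizing a Lipschitz objective over an $\alpha$-net instead of the full domain loses at most $\mathrm{Lip}\cdot\alpha$. A toy one-dimensional check (a sketch assuming NumPy, with illustrative numbers, not the constants of the proof):

```python
import numpy as np

# For an L-Lipschitz objective, the minimum over an alpha-net of the domain is
# within L * alpha of the true minimum (compare steps (b)-(d) above).
L, alpha = 2.0, 0.01
f = lambda x: L * np.abs(x - 0.373)           # L-Lipschitz, true minimum 0 at 0.373
net = np.arange(0.0, 1.0 + alpha / 2, alpha)  # alpha-net of [0, 1]
gap = f(net).min() - 0.0                      # discretization error of the minimum
assert 0.0 <= gap <= L * alpha
```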
Now letting $\psi:\R\to\R$ be a bounded differentiable function with bounded Lipschitz derivative, we have \begin{align*} \Big|\E\left[\psi\left( \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right)\right] - \E \Big[ \psi(\mathrm{Opt}^\alpha_n& ({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})))\Big]\Big|\\ &\le \E\left[\left| \psi\left( \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) - \psi\left(\mathrm{Opt}^\alpha_n ({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right)\right|\right]\nonumber\\ &\le\norm{\psi'}_\infty \E \left| \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right|\\ &\le C_0 \norm{\psi'}_\infty \E\left[\frac{\norm{{\boldsymbol X}}^2_{{\mathcal S}_p}}{n} + \frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}\norm{{\boldsymbol y}}_2}{n}+ 1 \right] \textsf{k}^{1/2} \alpha\\ &\stackrel{(a)}{\le} C_0 \norm{\psi'}_\infty \left(C_1 + C_2^{1/2} \E\left[y_1^2\right]^{1/2} + 1 \right) \textsf{k}^{1/2}\alpha\\ &\stackrel{(b)}\le C_1 \norm{\psi'}_\infty \alpha \end{align*} where in $(a)$ we used Lemma~\ref{lemma:op_norm_x} and in $(b)$ the subgaussianity conditions in Assumptions~\ref{ass:noise} and~\ref{ass:X} along with the condition on $\eta$.
An analogous argument then shows that \begin{align*} \left| \E \psi\left( \widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\right) - \E \psi\left(\mathrm{Opt}^\alpha_n ({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\right)\right| \le C_1 \norm{\psi'}_\infty\alpha, \end{align*} allowing us to write \begin{align*} &\limsup_{n\to\infty}\left| \E \psi\left(\widehat R^\star_n({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) - \E \psi\left(\widehat R^\star_n({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\right) \right|\\ &\hspace{30mm}\le \lim_{n\to\infty}\left| \E \psi\left(\mathrm{Opt}^\alpha_n ({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \right) - \E \psi\left(\mathrm{Opt}^\alpha_n ({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \right) \right| + 2 C_1\norm{\psi'}_\infty \alpha\\ &\hspace{30mm}= 2C_1\norm{\psi'}_\infty\alpha, \end{align*} where the last equality is by Lemma~\ref{prop:eps_net_univ}. Now using that $\norm{\psi'}_\infty < \infty$ and sending $\alpha \to 0$ concludes the proof of Eq.~\eqref{eq:main_thm_eq} for $\psi$ bounded differentiable with bounded Lipschitz derivative. To extend it to $\psi$ bounded Lipschitz, it suffices to find a sequence of bounded differentiable functions with bounded Lipschitz derivative approximating $\psi$ uniformly (see for example Section~\ref{section:1_prime_to_1} below for a similar argument). \subsubsection{Proof of Eq.~\eqref{eq:main_thm_eq} of Theorem~\ref{thm:main_empirical_univ} under Assumption~\ref{ass:loss_labeling}} \label{section:1_prime_to_1} Let $\ell:\R^{\textsf{k}+1} \to\R$ be a nonnegative Lipschitz function and $\eta:\R^{\textsf{k}^\star +1} \to\R$ be Lipschitz as in Assumption~\ref{ass:loss_labeling}.
Then there exists a sequence of functions $\ell_\mu$ such that: \begin{itemize} \item [\textit{(i)}] For all $\mu >0$, $\ell_\mu$ is differentiable with $\norm{\grad \ell_\mu}_{\textrm{Lip}} \le C(\mu) \norm{\ell}_\textrm{Lip}$, where $C(\mu)$ is a constant depending only on $\mu$,\label{item:diff_approx_1} \item [\textit{(ii)}] $\sup_{\mu > 0 } \norm{\ell_\mu}_\textrm{Lip} \le \norm{\ell}_\textrm{Lip}$, and\label{item:diff_approx_2} \item [\textit{(iii)}] $\norm{\ell - \ell_\mu}_{\infty} \le \norm{\ell}_\textrm{Lip} \mu$. \label{item:diff_approx_3} \end{itemize} (See~\cite{evans2010partial}, Appendix C.4, for an example of such a sequence.) Define $\eta_\mu$ analogously to approximate $\eta$. Now note that we can assume, without loss of generality, that $\ell_\mu \ge 0$. Indeed, we can otherwise define $\widetilde \ell_\mu({\boldsymbol v}) := \ell_\mu({\boldsymbol v}) - \inf_{{\boldsymbol v}'\in\R^{\textsf{k}+1}} \ell_\mu({\boldsymbol v}') + \inf_{{\boldsymbol v}'\in \R^{\textsf{k} +1}} \ell({\boldsymbol v}')$, and this function will also satisfy~\hyperref[item:diff_approx_1]{\textit{(i)}},~\hyperref[item:diff_approx_2]{\textit{(ii)}},~\hyperref[item:diff_approx_3]{\textit{(iii)}} if $\ell_\mu$ does, and will be nonnegative (that the infima are finite follows from~\hyperref[item:diff_approx_3]{\textit{(iii)}} and the fact that $\ell \ge 0$). Hence, if $(\ell,\eta)$ satisfy Assumption~\ref{ass:loss_labeling}, then for all $\mu>0$, $(\ell_\mu,\eta_\mu)$ satisfy Assumption~\hyperref[ass:loss_labeling_prime]{1'} since Eq.~\eqref{eq:exp_integrability} holds for Lipschitz functions.
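Property \textit{(iii)} is the standard mollification estimate: averaging a Lipschitz function over a window of radius $\mu$ moves it by at most $\norm{\ell}_\textrm{Lip}\,\mu$ in sup norm. A one-dimensional numerical sketch (assuming NumPy; the uniform-kernel smoothing and the choice $\ell(x)=|x|$, a $1$-Lipschitz loss, are for illustration only):

```python
import numpy as np

# Mollify l by averaging over [x - mu, x + mu]; for a 1-Lipschitz l this stays
# within Lip * mu = mu of the original in sup norm (property (iii)).
l = np.abs                                    # 1-Lipschitz toy loss
mu = 0.1
t = np.linspace(-mu, mu, 2001)                # uniform kernel support
x = np.linspace(-2.0, 2.0, 401)
l_mu = np.array([l(xi + t).mean() for xi in x])
sup_dev = np.max(np.abs(l_mu - l(x)))         # largest deviation, attained near the kink
assert sup_dev <= 1.0 * mu + 1e-9
```

The same smoothing also illustrates \textit{(i)}: the mollified $|x|$ has a derivative with Lipschitz constant of order $1/\mu$, i.e., $C(\mu)$ necessarily blows up as $\mu\to 0$.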
\ifthenelse{\boolean{arxiv}}{ \begin{proof}[{Proof of Theorem~\ref{thm:main_empirical_univ} under Assumption~\ref{ass:loss_labeling}}] }{ \begin{proof}{\textbf{of Eq.~\eqref{eq:main_thm_eq} of Theorem~\ref{thm:main_empirical_univ} under Assumption~\ref{ass:loss_labeling} }} } For $\mu>0$, define the empirical risk \begin{align*} &\widehat R_n^\mu({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) := \frac1n \sum_{i=1}^n \ell_\mu\left({\boldsymbol \Theta}^\sT{\boldsymbol x}_i; \eta_\mu\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i,\epsilon_i\right)\right) + r({\boldsymbol \Theta}), \end{align*} and let $\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ be the risk associated with $\ell$ and $\eta$. Choose minimizers \begin{align*} &\widehat {\boldsymbol \Theta}^{\boldsymbol X}_\mu \in \argmin_{{\boldsymbol \Theta}\in{\mathcal C}_p} \widehat R_n^\mu({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})),\quad\quad \widehat {\boldsymbol \Theta}^{\boldsymbol X} \in \argmin_{{\boldsymbol \Theta}\in{\mathcal C}_p} \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})). \end{align*} Analogously define $\widehat R_n^\mu({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})),\widehat R_n({\boldsymbol \Theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$ and minimizers $\widehat {\boldsymbol \Theta}^{\boldsymbol G}_\mu, \widehat {\boldsymbol \Theta}^{\boldsymbol G}$ for the Gaussian model. 
By the properties of $(\ell_\mu,\eta_\mu),$ we have \begin{align*} &\widehat R_n\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}_\mu^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \le \widehat R_n\left(\widehat{\boldsymbol \Theta}_\mu^{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}_\mu^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)\\ &\hspace{15mm}\le \frac1n \sum_{i=1}^n \left|\ell\left(\widehat{\boldsymbol \Theta}_\mu^{{\boldsymbol X}\sT} {\boldsymbol x}_i; \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i, \epsilon_i\right)\right) - \ell\left(\widehat{\boldsymbol \Theta}_\mu^{{\boldsymbol X}\sT}{\boldsymbol x}_i; \eta_\mu\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i, \epsilon_i\right)\right)\right|\\ &\hspace{25mm}+ \frac{1}{n}\sum_{i=1}^n \left| \ell\left(\widehat{\boldsymbol \Theta}_\mu^{{\boldsymbol X}\sT} {\boldsymbol x}_i; \eta_\mu\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i, \epsilon_i\right)\right) - \ell_\mu\left(\widehat{\boldsymbol \Theta}_\mu^{{\boldsymbol X}\sT}{\boldsymbol x}_i; \eta_\mu\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol x}_i, \epsilon_i\right)\right)\right|\\ &\hspace{15mm}\le \frac1n \sum_{i=1}^n \norm{\ell}_\textrm{Lip} \left|\eta\left({\boldsymbol \Theta}^{\star \sT}{\boldsymbol x}_i, \epsilon_i\right) - \eta_{\mu}\left({\boldsymbol \Theta}^{\star \sT}{\boldsymbol x}_i, \epsilon_i\right) \right|\\ &\hspace{25mm}+ \norm{\ell - \ell_\mu}_\infty\\ &\hspace{15mm}\le \norm{\ell}_\textrm{Lip} \norm{\eta - \eta_\mu}_\infty + \norm{\ell - \ell_\mu}_\infty. 
\end{align*} An analogous argument then shows that we also have \begin{equation} \nonumber \widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X}_\mu;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \le\norm{\ell}_\textrm{Lip} \norm{\eta - \eta_\mu}_\infty + \norm{\ell - \ell_\mu}_\infty. \end{equation} Hence, we have \begin{equation} \label{eq:approx_risk_diff} \left|\widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X}_\mu;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)\right| \le\norm{\ell}_\textrm{Lip} \norm{\eta - \eta_\mu}_\infty + \norm{\ell - \ell_\mu}_\infty. \end{equation} and similarly for the terms involving ${\boldsymbol G}$, so that we can upper bound \begin{align*} &\limsup_{n\to\infty} \left| \E\left[\psi\left( \widehat R_n\left(\widehat{\boldsymbol \Theta}^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \right)\right] - \E\left[\psi\left( \widehat R_n\left(\widehat{\boldsymbol \Theta}^{\boldsymbol G};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \right)\right] \right|\\ &\hspace{15mm}\le 2\norm{\psi}_\textrm{Lip} \left( \norm{\ell}_\textrm{Lip}\norm{\eta - \eta_\mu}_\infty + \norm{\ell - \ell_\mu}_\infty \right)\\ &\hspace{25mm}+ \lim_{n\to\infty}\left|\E\left[\psi\left( \widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}_\mu^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \right)\right] - \E\left[\psi\left( \widehat R_n^\mu\left(\widehat{\boldsymbol \Theta}_\mu^{\boldsymbol G};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \right)\right]\right| \\ &\hspace{15mm}\stackrel{(a)}{=} 2\norm{\psi}_\textrm{Lip} \left( \norm{\ell}_\textrm{Lip}\norm{\eta - \eta_\mu}_\infty + \norm{\ell - \ell_\mu}_\infty \right), 
\end{align*} where $(a)$ follows from Theorem~\ref{thm:main_empirical_univ} under Assumption~\hyperref[ass:loss_labeling_prime]{1'} using the fact that $(\ell_\mu,\eta_\mu)$ satisfy this assumption for fixed $\mu$. Finally, sending $\mu \to 0$ gives the result by property~\hyperref[item:diff_approx_3]{\textit{(iii)}}. \end{proof} \subsubsection{Proof of the bounds in Eq.~\eqref{eq:prob_bounds} of Theorem~\ref{thm:main_empirical_univ}} Having proved that Eq.~\eqref{eq:main_thm_eq} of Theorem~\ref{thm:main_empirical_univ} holds under both assumptions on $(\ell,\eta)$, we show that the bounds in~\eqref{eq:prob_bounds} are a direct consequence. \begin{proof} Fix $\delta>0$ and $\rho\in\R$ and define \begin{equation} \nonumber u_{\delta,\rho}(t) := \begin{cases} 0 & t < \rho \\ (t - \rho)/\delta & t\in[\rho,\rho+\delta)\\ 1 & t \ge \rho + \delta \end{cases}. \end{equation} Note that $u_{\delta,\rho}$ satisfies \begin{equation} \nonumber \mathbf{1}_{\{t \ge \rho + \delta\}} \le u_{\delta,\rho}(t) \le \mathbf{1}_{\{t\ge\rho \}}. \end{equation} Furthermore, since $\norm{u_{\delta,\rho}}_\textrm{Lip} = 1/\delta$, we can apply~\eqref{eq:main_thm_eq} with $\psi := u_{\delta,\rho}$ to conclude \begin{align*} \lim_{n\to\infty}\P\left( \widehat R_n^\star \left({\boldsymbol X},{\boldsymbol y}\left({\boldsymbol X}\right)\right) \ge \rho + \delta \right) &\le \lim_{n\to\infty}\E\left[u_{\delta,\rho}\left(\widehat R_n^\star\left({\boldsymbol X},{\boldsymbol y}\left({\boldsymbol X}\right) \right)\right)\right]\\ &= \lim_{n\to\infty}\E\left[u_{\delta,\rho}\left(\widehat R_n^\star\left({\boldsymbol G},{\boldsymbol y}\left({\boldsymbol G}\right) \right)\right)\right]\\ &\le \lim_{n\to\infty}\P\left( \widehat R_n^\star \left({\boldsymbol G},{\boldsymbol y}\left({\boldsymbol G}\right)\right) \ge \rho \right), \end{align*} which establishes the first bound in~\eqref{eq:prob_bounds}. 
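The only features of $u_{\delta,\rho}$ used in this argument are the sandwich between the two indicators and the Lipschitz constant $1/\delta$. Both are immediate to verify numerically; the following sketch (grid and parameter values arbitrary) does so for the ramp defined above:

```python
import numpy as np

def u(t, rho, delta):
    # piecewise-linear ramp: 0 below rho, slope 1/delta on [rho, rho+delta], 1 above
    return np.clip((t - rho) / delta, 0.0, 1.0)

rho, delta = 0.3, 0.05
ts = np.linspace(-1.0, 2.0, 10001)
vals = u(ts, rho, delta)
assert np.all((ts >= rho + delta) <= vals)   # 1{t >= rho+delta} <= u(t)
assert np.all(vals <= (ts >= rho))           # u(t) <= 1{t >= rho}
slopes = np.abs(np.diff(vals)) / np.diff(ts)
assert slopes.max() <= 1 / delta + 1e-6      # Lipschitz constant 1/delta
```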
The second bound follows via a similar argument after modifying the definition of $u_{\delta,\rho}$ to satisfy \begin{equation} \nonumber \mathbf{1}_{\{t \le \rho - \delta\}} \le u_{\delta,\rho}(t) \le \mathbf{1}_{\{t\le\rho \}}. \end{equation} \end{proof} \subsection{Universality of the minimum over the discretized space: Proof of Lemma~\ref{prop:eps_net_univ}} \label{section:universality_eps_net_proof} Recall the minimization problem over the set $\cN_\alpha^\textsf{k}$ defined in~\eqref{eq:opt}. We show in this section that Lemma~\ref{prop:eps_net_univ} is a direct consequence of Lemma~\ref{lemma:softmin_univ}. \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Lemma~\ref{prop:eps_net_univ}] }{ \begin{proof}{\textbf{Lemma~\ref{prop:eps_net_univ}}} } Fix $\alpha>0$. Let us first bound the derivative of the free energy. Define the probability mass function for ${\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}$, \begin{equation} \nonumber p({\boldsymbol \Theta};{\boldsymbol X}, t) :=\frac{e^{-t n \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))}}{\sum_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} e^{- t n \widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))}} \end{equation} and similarly define $p({\boldsymbol \Theta};{\boldsymbol G},t)$ for the Gaussian model. Recall that the Shannon entropy $H(p(\,\cdot\,;{\boldsymbol X},t)) := -\sum_{{\boldsymbol \Theta}\in\cN_\alpha^\textsf{k}} p({\boldsymbol \Theta}; {\boldsymbol X}, t) \log p({\boldsymbol \Theta} ; {\boldsymbol X}, t)$ of this distribution satisfies \begin{equation} 0 \le H(p(\,\cdot\,;{\boldsymbol X},t)) \le \log \left|\cN_\alpha^\textsf{k} \right| = \log C_0(\alpha,\textsf{R})^{p\textsf{k}},\label{eq:entropy_bound} \end{equation} where $C_0$ depends only on $\alpha,\textsf{R}$ and $\sOmega$.
Therefore, the derivative of the free energy with respect to $t$ can be bounded as \begin{align*} \frac{\partial}{\partial t} f_\alpha(t,{\boldsymbol X}) &=\frac{1}{t} \frac{\sum_{{\boldsymbol \Theta}} \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) e^{ -tn \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))} }{ \sum_{{\boldsymbol \Theta}} e^{-t n \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))} } + \frac{1}{t^2 n} \log\sum_{\boldsymbol \Theta} e^{- t n \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))}\\ &=-\frac{1}{t^2n} \frac{\sum_{{\boldsymbol \Theta}} \log p({\boldsymbol \Theta}; {\boldsymbol X},t) e^{ -tn \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))} }{ \sum_{{\boldsymbol \Theta}} e^{-t n \widehat R_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))} }\\ &=\frac{1}{t^2 n} H\left(p(\,\cdot\,;{\boldsymbol X},t)\right)\\ &\stackrel{(a)}{\le} C_1(\alpha)\frac{p(n)}{n} \frac{1}{t^2} \end{align*} where $(a)$ follows by~\eqref{eq:entropy_bound}. This bound on the derivative implies that $f_\alpha(\beta,{\boldsymbol X})$ approximates $\mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ uniformly: \begin{align*} \left| f_\alpha(\beta,{\boldsymbol X}) - \mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \right| &= \lim_{s\to\infty}\left| f_\alpha(\beta,{\boldsymbol X}) - f_\alpha(s,{\boldsymbol X}) \right|\\ &\le C_1(\alpha) \frac{p(n)}{n} \lim_{s\to\infty} \int_{\beta}^{s} \frac{1}{t^2}\textrm{d} t \\ &= C_1(\alpha) \frac{p(n)}{n} \frac1\beta. \end{align*} Clearly, a similar bound holds with ${\boldsymbol G}$ replacing ${\boldsymbol X}$.
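The two estimates at work here, namely $f_\alpha(\beta,\cdot) \le \mathrm{Opt}$ and $f_\alpha(\beta,\cdot) \ge \mathrm{Opt} - \log|\cN_\alpha^\textsf{k}|/(\beta n)$, hold for any finite family of risk values. A small Python sanity check with placeholder risks (the values below are synthetic and unrelated to the model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 1000                      # sample size and net cardinality
risks = rng.uniform(0.0, 2.0, N)     # stand-in for the empirical risks over the net

def free_energy(beta):
    # f(beta) = -(1/(beta*n)) * log sum_Theta exp(-beta*n*R(Theta)),
    # computed with the usual max-shift for numerical stability
    a = -beta * n * risks
    return -(a.max() + np.log(np.exp(a - a.max()).sum())) / (beta * n)

for beta in (0.5, 2.0, 10.0):
    f = free_energy(beta)
    assert f <= risks.min() + 1e-12                           # soft-min never exceeds the min
    assert f >= risks.min() - np.log(N) / (beta * n) - 1e-12  # entropy-type error term
```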
Hence, we have \begin{align*} &\lim_{n\to\infty} \left|\E\left[\psi\left(\mathrm{Opt}_n^\alpha({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) - \psi\left(\mathrm{Opt}_n^{\alpha}({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\right)\right]\right| \\ &\hspace{35mm}\le \lim_{n\to\infty } \left| \E\left[ \psi( f_{\alpha}(\beta,{\boldsymbol X})) - \psi (f_{\alpha}(\beta,{\boldsymbol G}))\right]\right|+ \frac{2\norm{\psi'}_\infty C_1(\alpha)}{\beta} \lim_{n\to\infty}\frac{p(n)}{n}\\ &\hspace{35mm}\stackrel{(a)}{=} \frac{\norm{\psi'}_\infty C_2(\alpha)}{\beta} \end{align*} where $(a)$ follows from Lemma~\ref{lemma:softmin_univ} along with the assumption that $p(n)/n \to \sgamma$. Sending $\beta\to\infty$ completes the proof. \end{proof} \subsection{Complete proof of universality of the free energy: Proof of Lemma~\ref{lemma:softmin_univ}} \label{section:proof_softmin_univ} Let us recall the interpolating paths ${\boldsymbol u}_{t,i}:= \sin(t) {\boldsymbol x}_i + \cos(t) {\boldsymbol g}_i$ and $\widetilde{\boldsymbol u}_{t,i}:= \cos(t) {\boldsymbol x}_i - \sin(t) {\boldsymbol g}_i$ defined in~\eqref{eq:slepian} for $t\in[0,\pi/2]$ and $i\in[n]$, and the associated matrix ${\boldsymbol U}_t$ whose $i$th row is ${\boldsymbol u}_{t,i}$. 
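A convenient way to see why this path works: for each $t$, the map $({\boldsymbol x}_i,{\boldsymbol g}_i) \mapsto ({\boldsymbol u}_{t,i},\widetilde{\boldsymbol u}_{t,i})$ applies an orthogonal matrix, so in the Gaussian case it preserves the joint law and keeps the two outputs independent (this is the cross-covariance computation in~\eqref{eq:cross_cov_0} below). A quick numerical confirmation of the orthogonality:

```python
import numpy as np

# (u_t, ~u_t) = (sin(t) x + cos(t) g, cos(t) x - sin(t) g) applies M_t to (x, g).
# M_t is orthogonal for every t, so independent standard Gaussians stay
# independent standard Gaussians along the whole path.
for t in np.linspace(0.0, np.pi / 2, 9):
    M = np.array([[np.sin(t), np.cos(t)],
                  [np.cos(t), -np.sin(t)]])
    assert np.allclose(M @ M.T, np.eye(2))
    # cross term sin(t)cos(t) - cos(t)sin(t) vanishes identically
    assert abs(np.sin(t) * np.cos(t) - np.cos(t) * np.sin(t)) == 0.0
```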
Further, recall the gradient notation introduced in Section~\ref{section:proof_outline}: \begin{equation} \nonumber \grad \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) = \left( \frac{\partial}{\partial v_k}\ell\left({\boldsymbol v};\eta({\boldsymbol v}^\star,v)\right)\right)_{k\in[\textsf{k}]},\quad \grad^\star \ell\left({\boldsymbol v};\eta({\boldsymbol v}^\star,v)\right) = \left( \frac{\partial}{\partial v^\star_k} \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) \right)_{k\in[\textsf{k}^\star]} \end{equation} for ${\boldsymbol v}\in\R^\textsf{k},{\boldsymbol v}^\star \in\R^{\textsf{k}^\star},v\in\R$, and the shorthand $\widehat\ell_{t,i}({\boldsymbol \Theta})$ for $\ell\left({\boldsymbol \Theta}^\sT {\boldsymbol u}_{t,i}; \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol u}_{t,i},\epsilon_i\right)\right)$, where we suppress the dependence on ${\boldsymbol \Theta}^\star$ since it is fixed throughout. Now, recall the definition in~\eqref{eq:bd_def}: \begin{equation} \nonumber \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta}) :=\left( {\boldsymbol \Theta} \grad\widehat\ell_{t,i}({\boldsymbol \Theta}) + {\boldsymbol \Theta}^\star \grad^\star\widehat\ell_{t,i}({\boldsymbol \Theta}) \right). \end{equation} Finally, recall the probability mass function and its associated expectation defined in~\eqref{eq:inner_def}: \begin{equation} \nonumber p^\up{i}({\boldsymbol \Theta}_0;t) := \frac{ e^{-\beta\left(\sum_{j\neq i}\widehat\ell_{t,j}({\boldsymbol \Theta}_0) + n r({\boldsymbol \Theta}_0)\right) }}{\sum_{{\boldsymbol \Theta}} e^{-\beta\left(\sum_{j\neq i}\widehat\ell_{t,j}({\boldsymbol \Theta}) + n r({\boldsymbol \Theta})\right) }} \quad\textrm{and}\quad \inner{\;\cdot\;}^\up{i}_{\boldsymbol \Theta}:= \sum_{{\boldsymbol \Theta}} (\,\cdot\,)p^\up{i}({\boldsymbol \Theta};t), \end{equation} where all sums are implicitly over $\cN_\alpha$, the minimal $\alpha$-net of ${\mathcal C}_p$ introduced in Section~\ref{section:proof_outline}.
Before proceeding to the proof of Lemma~\ref{lemma:softmin_univ}, we state the following integrability lemma, whose proof is deferred to Appendix~\ref{proof:dct_bound}. Let us use $\E_\up{i}$ to denote the expectation conditional on $({\boldsymbol G}^\up{i},{\boldsymbol X}^\up{i},{\boldsymbol \epsilon}^\up{i})$, that is, the feature vectors and the noise vector with the $i$th sample set to $0$ (or, equivalently, since the samples are i.i.d., the expectation with respect to $({\boldsymbol x}_i,{\boldsymbol g}_i,\epsilon_i)$). \begin{lemma} \label{lemma:dct_bound} Suppose Assumptions~\hyperref[ass:loss_labeling_prime]{1'} and~\ref{ass:set}-\ref{ass:X} hold. For all $n\in\Z_{>0}, t\in[0,\pi/2]$ and $\beta>0$, we have \begin{equation} \label{eq:square_integrability_main_lemma} \sup_{{\boldsymbol \Theta}_0\in{\mathcal S}_p^\textsf{k}} \E_\up{1}\left[\left( \frac{ \widetilde{\boldsymbol u}_{t,1}^\sT \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right)^2\right]\le C(\beta), \end{equation} for some $C(\beta)$ depending only on $\sOmega$ and $\beta$. In particular, for any fixed $\beta>0$ and bounded differentiable function $\psi:\R\to\R$ with bounded Lipschitz derivative, we have \begin{equation} \label{eq:dct_bound_softmax} \int_{0}^{\pi/2} \sup_{n \in \Z_{>0} }\left|\E\left[\frac{\partial}{\partial t}\psi(f_\alpha(\beta, {\boldsymbol U}_t)) \right]\right| \textrm{d} t < \infty, \end{equation} where $f_\alpha(\beta,\cdot)$ is the free energy defined in~\eqref{eq:free_energy_def}.
\end{lemma} \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Lemma~\ref{lemma:softmin_univ}] }{ \begin{proof}{\textbf{of Lemma~\ref{lemma:softmin_univ}}} } Using the interpolator ${\boldsymbol U}_t$, we can write \begin{align} \lim_{n\to\infty}\left|\E\left[\psi(f_{\alpha}(\beta,{\boldsymbol X}))\right] - \E\left[\psi(f_{\alpha}(\beta,{\boldsymbol G}))\right] \right| &= \lim_{n\to\infty}\left|\E\left[\psi(f_{\alpha}(\beta,{\boldsymbol U}_{\pi/2}))\right] - \E\left[\psi(f_{\alpha}(\beta,{\boldsymbol U}_0))\right] \right| \nonumber \\ &\stackrel{(a)}{=}\left|\int_{0}^{\pi/2}\lim_{n\to\infty} \E \left[\frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right]\textrm{d} t\right| \nonumber \end{align} where $(a)$ follows from the dominated convergence theorem together with Lemma~\ref{lemma:dct_bound}. It is therefore sufficient to show that for all $t\in[0,\pi/2]$, \begin{equation} \label{suff_cond_for_prop_softmax_1} \lim_{n\to\infty}\left|\E \left[\frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t))\right]\right| = 0.
\end{equation} With the notation previously defined, we can compute the derivative of the free energy as \begin{align*} \frac{\partial}{\partial t} \psi(f_\alpha(\beta,{\boldsymbol U}_t)) &= \frac{\psi'(f_\alpha(\beta, {\boldsymbol U}_t))}{n} \sum_{i=1}^n \frac{ \sum_{{\boldsymbol \Theta}} \widetilde {\boldsymbol u}_{t,i}^\sT \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta})e^{-\beta\left(\sum_{j\neq i} \widehat\ell_{t,j}({\boldsymbol \Theta})+ nr({\boldsymbol \Theta})\right)} e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})}} { \sum_{{\boldsymbol \Theta}}e^{ -\beta\left( \sum_{j\neq i} \widehat\ell_{t,j}({\boldsymbol \Theta}) + nr({\boldsymbol \Theta})\right) } e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})} } \\ &= \frac{\psi'(f_\alpha(\beta,{\boldsymbol U}_t))}{n} \sum_{i=1}^n \frac{\inner{ \widetilde{\boldsymbol u}_{t,i}^\sT \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta}) e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}} {\inner{e^{-\beta\widehat\ell_{t,i}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}}. \end{align*} Since our goal is to establish~\eqref{suff_cond_for_prop_softmax_1}, let us fix some $t\in[0,\pi/2]$ and suppress it in the notation.
We use the previous display to bound the expectation of the derivative as \begin{align} \left|\E\left[\frac{\partial}{\partial t}\psi(f_\alpha(\beta,{\boldsymbol U})) \right]\right| &\le \frac1n\sum_{i=1}^n\E\left[\left|{\psi'(f_\alpha(\beta,{\boldsymbol U})) - \psi'\left(f_\alpha\left(\beta,{\boldsymbol U}^\up{i}\right)\right)} \frac{ \inner{ \widetilde{\boldsymbol u}_{i}^\sT \widehat{\boldsymbol d}_i({\boldsymbol \Theta}) e^{-\beta\widehat\ell_i({\boldsymbol \Theta})} }_{\boldsymbol \Theta}^\up{i} } {\inner{e^{-\beta\widehat\ell_i({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}} \right|\right] \label{eq:psi'_softmin_bound} \\ &\hspace{10mm}+ \frac{1}n\sum_{i=1}^n \left|\E\left[\psi'\left(f_\alpha\left(\beta,{\boldsymbol U}^\up{i}\right)\right)\inner{ \E_\up{i}\left[ \frac{ \widetilde{\boldsymbol u}_{i}^\sT \widehat{\boldsymbol d}_i({\boldsymbol \Theta}) e^{-\beta\widehat\ell_i({\boldsymbol \Theta})}} {\inner{e^{-\beta\widehat\ell_i({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{i}}\right]}_{\boldsymbol \Theta}^\up{i} \right]\right|,\label{eq:psi'_leave_one_out_bound} \end{align} where ${\boldsymbol U}^\up{i}$ is obtained by setting the $i$th row in ${\boldsymbol U}$ to $0$. Note that to reach~\eqref{eq:psi'_leave_one_out_bound}, we used the independence of $p^\up{i}({\boldsymbol \Theta};t)$ and $({\boldsymbol x}_i,{\boldsymbol g}_i,\epsilon_i)$ to swap the order of $\E_\up{i}\left[\;\cdot\;\right]$ and $\inner{\;\cdot\;}_{{\boldsymbol \Theta}}^\up{i}$. We control~\eqref{eq:psi'_softmin_bound} and~\eqref{eq:psi'_leave_one_out_bound} separately. The term in~\eqref{eq:psi'_softmin_bound} can be controlled via a simple leave-one-out argument. 
Indeed, since the samples are i.i.d., it is sufficient to control the term $i=1$ in the sum: \begin{align} \left|\psi'(f_\alpha(\beta,{\boldsymbol U})) - \psi'\left(f_\alpha\left(\beta,{\boldsymbol U}^\up{1}\right)\right)\right| &\le \frac{\norm{\psi'}_\textrm{Lip}}{n\beta} \left|\log\frac{\sum_{{\boldsymbol \Theta}}e^{ -\beta \left(\sum_{j\neq 1} \widehat\ell_j({\boldsymbol \Theta}) + n r({\boldsymbol \Theta})\right) } e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}{\sum_{{\boldsymbol \Theta}}e^{ -\beta \left(\sum_{j\neq 1} \widehat\ell_j({\boldsymbol \Theta}) + n r({\boldsymbol \Theta})\right) } } \right|\nonumber\\ &\hspace{0mm}= \frac{\norm{\psi'}_\textrm{Lip}}{n\beta} \left|\log\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}\right|\nonumber\\ &\hspace{0mm}\stackrel{(a)}{=} -\frac{\norm{\psi'}_\textrm{Lip}}{n\beta} \log\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}\nonumber\\ &\hspace{0mm}\stackrel{(b)}{\le}\frac{\norm{\psi'}_\textrm{Lip}}{n} \inner{\widehat\ell_1({\boldsymbol \Theta})}_{\boldsymbol \Theta}^\up{1},\nonumber \end{align} where $(a)$ follows from the nonnegativity of $\ell$ and $\beta$ and $(b)$ follows by Jensen's inequality.
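Steps $(a)$ and $(b)$ use only two generic facts: since $\beta\widehat\ell_1 \ge 0$, the Gibbs average $\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}$ lies in $(0,1]$, so its logarithm is nonpositive; and Jensen's inequality gives $-\log\inner{e^{-\beta\widehat\ell_1}} \le \beta \inner{\widehat\ell_1}$. A minimal numerical check with arbitrary nonnegative losses and uniform weights (synthetic values):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 0.7
losses = rng.uniform(0.0, 3.0, 500)            # nonnegative losses over the net
p = np.full(losses.size, 1.0 / losses.size)    # any pmf works; uniform for simplicity
lhs = -np.log(np.sum(p * np.exp(-beta * losses)))
assert lhs >= 0.0                                # step (a): average of e^{-beta*l} is <= 1
assert lhs <= beta * np.sum(p * losses) + 1e-12  # step (b): Jensen's inequality
```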
Noting that the condition in Eq.~\eqref{eq:exp_integrability} of Assumption~\hyperref[ass:loss_labeling_prime]{1'} guarantees that $\sup_{{\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}}\E_\up{1}\left[\widehat \ell_1({\boldsymbol \Theta})^2\right]\le C_0$ and recalling the bound in~\eqref{eq:square_integrability_main_lemma} of Lemma~\ref{lemma:dct_bound}, an application of Cauchy-Schwarz to~\eqref{eq:psi'_softmin_bound} yields \begin{align*} \limsup_{n\to\infty}\E\left[\left|\left({\psi'(f_\alpha(\beta,{\boldsymbol U})) - \psi'(f_\alpha(\beta,{\boldsymbol U}^\up{1}))}\right) \inner{ \frac{ \widetilde{\boldsymbol u}_{1}^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}) e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}} {\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}}_{\boldsymbol \Theta}^\up{1} \right|\right] \hspace{-0.5mm} &\le \limsup_{n\to\infty}\frac{C_1\norm{\psi'}_\textrm{Lip}}{n}\\ &= 0. \end{align*} Meanwhile, to control the term~\eqref{eq:psi'_leave_one_out_bound}, it is sufficient to establish that \begin{equation} \label{eq:suff_cond_for_prop_softmax_2} \lim_{n\to\infty} \sup_{{\boldsymbol \Theta}_0} \left|\E_\up{1}\left[ \frac{ \widetilde{\boldsymbol u}_{1}^\sT \widehat{\boldsymbol d}_{1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right]\right| = 0 \quad \textrm{a.s.} \end{equation} To see that this is sufficient, note that with~\eqref{eq:suff_cond_for_prop_softmax_2}, we can control~\eqref{eq:psi'_leave_one_out_bound} as \begin{align} &\limsup_{n\to\infty} \frac{1}n\sum_{i=1}^n \left|\E\left[\psi'\left(f_\alpha\left(\beta,{\boldsymbol U}^\up{i}\right)\right)\inner{ \E_\up{i}\left[ \frac{ \widetilde{\boldsymbol u}_{i}^\sT \widehat{\boldsymbol d}_i({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_i({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_i({\boldsymbol \Theta})}}_{{\boldsymbol \Theta}}^\up{i}}\right]}_{\boldsymbol \Theta}^\up{i} \right] \right| \nonumber \\ &\hspace{53mm}\stackrel{(a)}{\le} \norm{\psi'}_\infty \limsup_{n\to\infty}\E\left[\inner{\left|\E_\up{1}\left[\frac{ \widetilde{\boldsymbol u}_{1}^\sT \widehat {\boldsymbol d}_1({\boldsymbol \Theta}_0) e^{ -\beta\widehat\ell_1({\boldsymbol \Theta}_0)}}{\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}} \right]\right|}_{{\boldsymbol \Theta}_0}^\up{1}\right] \nonumber \\ &\hspace{53mm} \stackrel{(b)}{\le} \norm{\psi'}_\infty\E \left[\limsup_{n\to\infty}\sup_{{\boldsymbol \Theta}_0}\left|\E_\up{1}\left[\frac{ \widetilde{\boldsymbol u}_{1}^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0) e^{ -\beta\widehat\ell_1({\boldsymbol \Theta}_0)}}{\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta}} \right]\right| \right] \nonumber \\ &\hspace{53mm}=0, \nonumber \end{align} where $(a)$ follows by the i.i.d.\ assumption on the samples and $(b)$ follows by the reverse Fatou lemma and Lemma~\ref{lemma:dct_bound}. In order to prove~\eqref{eq:suff_cond_for_prop_softmax_2}, fix $\delta>0$ and let $P(s) := \sum_{j=0}^M b_j s^j$ be the polynomial from Lemma~\ref{lemma:poly_approx}, where $b_j$ and $M>0$ depend only on $\beta,\delta$ and $\sOmega$.
This lemma then yields the bound \begin{align} \left|\E_\up{1}\left[ \frac{ \widetilde{\boldsymbol u}_1^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_1({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right]\right| &\stackrel{(a)}{\le}\left|\E_\up{1}\left[ \widetilde{\boldsymbol u}_1^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_1({\boldsymbol \Theta}_0)} \sum_{j=0}^M b_j \left(\inner{e^{-\beta\widehat\ell_1({\boldsymbol \Theta})}} _{\boldsymbol \Theta}^\up{1}\right)^j \right] \right| + \delta \nonumber \\ &\stackrel{(b)}{=}\left| \sum_{j=0}^M b_j \inner{ \E_\up{1}\left[ \widetilde{\boldsymbol u}_1^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_1({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=1}^j\widehat\ell_1({\boldsymbol \Theta}_l)} \right] }^\up{1}_{{\boldsymbol \Theta}_1^j} \right| + \delta \nonumber \\ &\le \sum_{j=0}^M \left| b_j \right| \sup_{{\boldsymbol \Theta}_1,\ldots,{\boldsymbol \Theta}_j\in{\mathcal S}_p^\textsf{k}} \left| \E_\up{1}\left[ { \widetilde{\boldsymbol u}_{1}^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0) } e^{-\beta\sum_{l=0}^j\widehat\ell_1({\boldsymbol \Theta}_l)} \right] \right|+\delta, \label{eq:poly_approx_bound} \end{align} where $(a)$ is the statement of Lemma~\ref{lemma:poly_approx} and in $(b)$ we defined the expectation $\inner{\;\cdot\;}^\up{1}_{{\boldsymbol \Theta}_1^j}$ with respect to $j$ independent samples from $p^\up{1}({\boldsymbol \Theta};t)$. Now recall the definitions of $\widetilde{\boldsymbol g}_1,\widetilde \epsilon_1, {\boldsymbol w}_1, \widetilde {\boldsymbol w}_1$ and $\widehat {\boldsymbol q}_1({\boldsymbol \Theta}_0)$ from Lemma~\ref{lemma:gaussian_approx}.
Note that ${\boldsymbol w}_1$ and $\widetilde {\boldsymbol w}_1$ are jointly Gaussian with cross-covariance \begin{align} \E_\up{1}\left[\widetilde{\boldsymbol w}_{1}{\boldsymbol w}_{1}^\sT\right] = \sin(t)\cos(t)\E_\up{1}\left[\widetilde{\boldsymbol g}_1\widetilde{\boldsymbol g}_1^\sT\right] - \sin(t)\cos(t)\E_\up{1}[{\boldsymbol g}_1 {\boldsymbol g}_1^\sT] = 0,\label{eq:cross_cov_0} \end{align} for all $t\in[0,\pi/2]$, and hence they are independent. And since $\widetilde {\boldsymbol w}_1$ is independent of $\widetilde \epsilon_1$ by definition, the assertion of Lemma~\ref{lemma:gaussian_approx} implies that the summands in~\eqref{eq:poly_approx_bound} converge to $0$. Indeed, for $j\in[M]$: \begin{align} \nonumber &\limsup_{n\to\infty} \quad\sup_{ \mathclap{\substack{ {\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star} \\ {\boldsymbol \Theta}_0,\dots,{\boldsymbol \Theta}_j\in{\mathcal S}_p^\textsf{k}}}}\quad \left| \E_\up{1}\left[ { \widetilde{\boldsymbol u}_1^\sT \widehat{\boldsymbol d}_1({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=0}^j\widehat\ell_1\left({\boldsymbol \Theta}_l \right)} \right]\right|\\ &\hspace{26mm}\stackrel{(a)}\le \limsup_{n\to\infty} \quad\sup_{ \mathclap{\substack{ {\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star} \\ {\boldsymbol \Theta}_0,\dots,{\boldsymbol \Theta}_j\in{\mathcal S}_p^\textsf{k}}}}\quad \left|\E_\up{1}\left[ { \widetilde{\boldsymbol w}_1^\sT \widehat{\boldsymbol q}_1({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=0}^j\ell\left({\boldsymbol \Theta}_l^\sT{\boldsymbol w}_1;\eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol w}_1, \widetilde\epsilon_1 \right) \right) } \right] \right| \nonumber \\ &\hspace{26mm}\stackrel{(b)}\le \limsup_{n\to\infty} \quad\sup_{ \mathclap{\substack{ {\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star} \\ {\boldsymbol \Theta}_0,\dots,{\boldsymbol \Theta}_j\in{\mathcal S}_p^\textsf{k}}}}\quad \left|\E_\up{1} \Big[\widetilde {\boldsymbol w}_1\Big]^\sT 
\E_\up{1}\left[{\widehat{\boldsymbol q}_1({\boldsymbol \Theta}_0)} e^{-\beta\sum_{l=0}^j\ell\left({\boldsymbol \Theta}_l^\sT{\boldsymbol w}_1;\eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol w}_1, \widetilde\epsilon_1\right)\right)} \right]\right|\nonumber\\ &\hspace{26mm}\stackrel{(c)}=0, \nonumber \end{align} where in $(a)$ we applied Lemma~\ref{lemma:gaussian_approx}, in $(b)$ we used the independence of $\widetilde {\boldsymbol w}_1$ and ${\boldsymbol w}_1$ and in $(c)$ we used that the mean of $\widetilde {\boldsymbol w}_1$ is $0$. Combining this with the bound in~\eqref{eq:poly_approx_bound} yields, for all $\delta>0$, \begin{align} \nonumber \limsup_{n\to\infty}\sup_{{\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star},{\boldsymbol \Theta}_0\in{\mathcal S}_p^\textsf{k}}\left|\E_\up{1}\left[ \frac{ \widetilde{\boldsymbol u}_{1}^\sT \widehat{\boldsymbol d}_{1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right]\right| \le\delta. \end{align} Taking $\delta \to0$ then establishes~\eqref{eq:suff_cond_for_prop_softmax_2} for any $t \in[0,\pi/2]$ and concludes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:universality_bounds}} \label{section:proof_univerality_bounds} The arguments in this section are independent of the dimension $\textsf{k}$ as long as it is fixed and constant in $n$. So to simplify notation, let us take $\textsf{k} = 1$ throughout. 
Furthermore, let us assume, without loss of generality, that $\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})),\widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ are nonnegative: otherwise, we can replace the regularizer $r({\boldsymbol \theta})$ with $\widetilde r({\boldsymbol \theta}) := r({\boldsymbol \theta}) - \min_{{\boldsymbol \theta}'\in{\mathcal C}_p} r({\boldsymbol \theta}')$ to obtain a new nonnegative regularizer satisfying Assumption~\ref{ass:regularizer}, and since $\ell$ is assumed to be nonnegative, the minimum empirical risk will then be nonnegative. Define, for $t>0$ and $n\in\Z_{>0}$, the sequence of events \begin{equation} \nonumber {\mathcal G}_{n,t} := \left\{\widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\le t\right\} \cap \left\{ \widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) = 0\right\}. \end{equation} Recall the assumption that $\lim_{n\to\infty}\P\left(\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) =0 \right) =1$ and note that it implies, alongside Theorem~\ref{thm:main_empirical_univ}, that for all $t>0$ we have $\lim_{n\to\infty}\P({\mathcal G}_{n,t}) = 1$.
Working on the extended real numbers $\bar \R$, let us define \begin{equation} \nonumber F_n^{\boldsymbol g}(t,{\boldsymbol X}) := \quad \min_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le t }} } R_n^{\boldsymbol g}({\boldsymbol \theta}),\hspace{10mm} F_n^{\boldsymbol x}(t,{\boldsymbol X}) := \quad \min_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le t }} } R_n^{\boldsymbol x}({\boldsymbol \theta}), \end{equation} and similarly \begin{equation} \nonumber F_n^{\boldsymbol g}(t,{\boldsymbol G}) := \quad \min_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \le t }} } R_n^{\boldsymbol g}({\boldsymbol \theta}) \end{equation} for all $t\ge0$, where we set the value of the minimum to $\infty$ whenever the constraints are not feasible. First we give the following lemma. \begin{lemma} \label{lemma:F_diff} For all $t\ge s > 0$ and any $\delta>0$, we have \begin{equation} \lim_{n\to\infty}\P\left(\left\{\left| F_n^{\boldsymbol x}(t,{\boldsymbol X}) - F_n^{\boldsymbol g}(t,{\boldsymbol X}) \right| > \delta \right\} \bigcap {\mathcal G}_{n,s}\right) = 0. \end{equation} \end{lemma} \begin{proof} Fix $t\ge s> 0$. 
On ${\mathcal G}_{n,s}$, let \begin{equation} \nonumber \widehat {\boldsymbol \theta}_{\boldsymbol x} \in \quad \argmin_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le t }} } R_n^{\boldsymbol x}({\boldsymbol \theta}),\hspace{12mm} \widehat {\boldsymbol \theta}_{\boldsymbol g} \in \argmin_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le t }} } R_n^{\boldsymbol g}({\boldsymbol \theta}) \end{equation} be any minimizers of the respective functions so that $ F_n^{\boldsymbol x}(t,{\boldsymbol X}) = R_n^{\boldsymbol x}(\widehat{\boldsymbol \theta}_{\boldsymbol x})$ and $F_n^{\boldsymbol g}(t,{\boldsymbol X}) = R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol g})$. Then note that we can upper bound \begin{align*} \left(R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol g}) - R_n^{\boldsymbol x}(\widehat {\boldsymbol \theta}_{\boldsymbol x}) \right)\mathbf{1}_{{\mathcal G}_{n,s}} &\stackrel{(a)}\le \left|R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol x}) - R_n^{\boldsymbol x}(\widehat {\boldsymbol \theta}_{\boldsymbol x})\right| \mathbf{1}_{{\mathcal G}_{n,s}}\\ &\le \sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \left|R_n^{\boldsymbol g}({\boldsymbol \theta}) - R_n^{\boldsymbol x}({\boldsymbol \theta})\right| \end{align*} where in $(a)$ we used that $R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol g}) \le R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol x})$ on ${\mathcal G}_{n,s}$. 
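The inequality chain above is an instance of a standard stability fact: minima of two objectives over a common feasible set differ by at most the uniform gap between the objectives. With generic vectors standing in for $R_n^{\boldsymbol x}$ and $R_n^{\boldsymbol g}$ evaluated over the feasible set (values synthetic), this reads:

```python
import numpy as np

rng = np.random.default_rng(3)
Rx = rng.standard_normal(1000)                # R_n^x over a common feasible set
Rg = Rx + 0.05 * rng.standard_normal(1000)    # R_n^g, uniformly close to R_n^x
# evaluating each minimizer in the other objective gives the two-sided bound
assert abs(Rx.min() - Rg.min()) <= np.abs(Rx - Rg).max() + 1e-12
```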
An analogous argument with the roles of ${\boldsymbol x}$ and ${\boldsymbol g}$ exchanged shows that we also have \begin{align} \nonumber \left( R_n^{\boldsymbol x}(\widehat {\boldsymbol \theta}_{\boldsymbol x}) - R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{\boldsymbol g}) \right) \mathbf{1}_{{\mathcal G}_{n,s}}\le \sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \left|R_n^{\boldsymbol g}({\boldsymbol \theta}) - R_n^{\boldsymbol x}({\boldsymbol \theta})\right|. \end{align} Hence, for all $\delta>0$, \begin{align*} &\lim_{n\to\infty}\P\left(\bigg\{\left| F_n^{\boldsymbol x}(t,{\boldsymbol X}) - F_n^{\boldsymbol g}(t,{\boldsymbol X}) \right| > \delta \bigg\} \bigcap {\mathcal G}_{n,s}\right)\\ &\hspace{67mm}\le \lim_{n\to\infty}\P\left(\bigg\{ \sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \left|R_n^{\boldsymbol g}({\boldsymbol \theta}) - R_n^{\boldsymbol x}({\boldsymbol \theta})\right| > \delta \bigg\} \bigcap {\mathcal G}_{n,s}\right) \\ &\hspace{67mm}\stackrel{(a)}=0, \end{align*} where in $(a)$ we used Lemma~\ref{lemma:proj_locally_lip} with Lipschitz $\ell$, and that $\P({\mathcal G}_{n,s}) \to 1$ for all fixed $s$. \end{proof} \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Theorem~\ref{thm:universality_bounds}] }{ \begin{proof}{\textbf{of Theorem~\ref{thm:universality_bounds} }} } Fix $\alpha \ge \alpha_0 >0$. 
Note that since $\ell$ and $\eta$ are Lipschitz, we can bound, for all $n >0$, \begin{equation} \label{eq:bound_on_FG} \sup_{t \ge \alpha_0} F_n^{\boldsymbol g}(t,{\boldsymbol X})\mathbf{1}_{{\mathcal G}_{n,\alpha_0}} \le \sup_{n>0} \sup_{{\boldsymbol \theta}\in{\mathcal C}_p } R_n^{\boldsymbol x}({\boldsymbol \theta}) \stackrel{(a)}\le C' < \infty, \end{equation} where $(a)$ follows from the subgaussianity assumption in Assumptions~\ref{ass:noise} and~\ref{ass:X}, and hence a similar bound holds for $ F_n^{\boldsymbol x}(t,{\boldsymbol X})\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}$ and $F_n^{\boldsymbol g}(t,{\boldsymbol G})\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}$. Now define \begin{equation} \nonumber s := \frac{C'}{\alpha} \end{equation} for the constant $C'$ in~\eqref{eq:bound_on_FG}. We first lower bound the quantity \begin{equation} \nonumber \widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) :=\min_{{\boldsymbol \theta}\in{\mathcal C}_p}\left\{ s\widehat R_n({\boldsymbol \theta}; {\boldsymbol X}, {\boldsymbol y}({\boldsymbol X}))+ R_n^{\boldsymbol g}({\boldsymbol \theta})\right\} \end{equation} on ${\mathcal G}_{n,\alpha_0}$.
Letting $\widehat{\boldsymbol \theta}^{\boldsymbol X}_s$ denote a minimizer of this problem, we write \begin{align*} &\left(s\widehat R_n\left(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s; {\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})\right) + R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s\right) \right)\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}\\ &\hspace{49mm}\ge\left(s\widehat R_n\left(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s; {\boldsymbol X} ,{\boldsymbol y}({\boldsymbol X})\right) +F_n^{\boldsymbol g}\left(\widehat R_n\left(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s; {\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})\right), {\boldsymbol X}\right)\right)\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}\\ &\hspace{49mm}\ge \min_{t \ge 0} \left\{ t s+ F_n^{\boldsymbol g}(t,{\boldsymbol X})\right\} \mathbf{1}_{{\mathcal G}_{n,\alpha_0}}\\ &\hspace{49mm}\stackrel{(a)}{\ge} \min_{t\ge 0}\left\{ \frac{t C'}{\alpha} + F_n^{\boldsymbol g}(\alpha,{\boldsymbol X}) \mathbf{1}_{t \le \alpha}\right\} \mathbf{1}_{{\mathcal G}_{n,\alpha_0}}\\ &\hspace{49mm}\stackrel{(b)}\ge F_n^{\boldsymbol g}(\alpha,{\boldsymbol X})\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}\min_{t\ge 0}\left\{\frac{t}{\alpha} + \mathbf{1}_{t \le \alpha}\right\}\\ &\hspace{49mm}\ge F_n^{\boldsymbol g}(\alpha,{\boldsymbol X})\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}, \end{align*} where in $(a)$ we used that $F_n^{\boldsymbol g}(t,{\boldsymbol X})$ is nonincreasing in $t$ and the definition of $s$, and in $(b)$ that $C' \ge F_n^{\boldsymbol g}(\alpha,{\boldsymbol X})$ by~\eqref{eq:bound_on_FG}.
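For the last step above we used the elementary identity \begin{equation} \nonumber \min_{t\ge 0}\left\{\frac{t}{\alpha} + \mathbf{1}_{t \le \alpha}\right\} = 1, \end{equation} which holds since the objective equals $t/\alpha + 1 \ge 1$ for $t\in[0,\alpha]$ (with equality at $t=0$), while it equals $t/\alpha > 1$ for $t > \alpha$.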
Meanwhile we can obtain an upper bound for \begin{equation*} \widehat R_{n,s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) :=\min_{{\boldsymbol \theta}\in{\mathcal C}_p}\left\{ s\widehat R_n({\boldsymbol \theta}; {\boldsymbol G} ,{\boldsymbol y}({\boldsymbol G}))+ R_n^{\boldsymbol g}({\boldsymbol \theta})\right\} \end{equation*} on ${\mathcal G}_{n,\alpha_0}$ by \begin{align*} \min_{{\boldsymbol \theta}\in{\mathcal C}_p} \left\{s\widehat R_n({\boldsymbol \theta}; {\boldsymbol G}, {\boldsymbol y}({\boldsymbol G})) + R_n^{\boldsymbol g}({\boldsymbol \theta}) \right\}\mathbf{1}_{{\mathcal G}_{n,\alpha_0}} &\le\left( s\alpha_0 + \hspace{6mm} \min_{ \mathclap{\substack{ {\boldsymbol \theta} \in {\mathcal C}_p \\ \widehat R_n({\boldsymbol \theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \le \alpha_0 }} } R_n^{\boldsymbol g}({\boldsymbol \theta})\right)\mathbf{1}_{{\mathcal G}_{n,\alpha_0}} \\ &=\left( \frac{\alpha_0 C'}{\alpha} + F_n^{\boldsymbol g}(\alpha_0,{\boldsymbol G}) \right)\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}. \end{align*} Hence, for $s = C'/\alpha$ and $\alpha_0 \le \alpha$, we have \begin{align} F_n^{\boldsymbol g}(\alpha,{\boldsymbol X}) \mathbf{1}_{{\mathcal G}_{n,\alpha_0}} &\le \widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}, \label{eq:F_bound_X} \\ \widehat R_{n,s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\mathbf{1}_{{\mathcal G}_{n,\alpha_0}} &\le\left( F_n^{\boldsymbol g}(\alpha_0,{\boldsymbol G}) + C'\frac{\alpha_0}{\alpha} \right)\mathbf{1}_{{\mathcal G}_{n,\alpha_0}}. 
\label{eq:F_bound_G} \end{align} For the first assertion of the theorem, setting $s=C'/\alpha$, we write for $\delta>0$ and $\rho\in\R$, \begin{align*} \lim_{n\to\infty}\P\left( F_n^{\boldsymbol x}(\alpha,{\boldsymbol X}) \ge \rho + 3\delta \right) &\le \lim_{n\to\infty}\P\left(\left\{ F_n^{\boldsymbol g}(\alpha,{\boldsymbol X}) \ge \rho + 2\delta \right\}\bigcap {\mathcal G}_{n,\alpha_0} \right) + \lim_{n\to\infty}\P\left({\mathcal G}_{n,\alpha_0}^c\right) \\ &\hspace{10mm}+ \lim_{n\to\infty}\P\left(\left\{ \left|F_n^{\boldsymbol g}(\alpha,{\boldsymbol X}) - F_n^{\boldsymbol x}(\alpha,{\boldsymbol X})\right| \ge \delta \right\}\bigcap {\mathcal G}_{n,\alpha_0} \right)\\ &\stackrel{(a)}= \lim_{n\to\infty}\P\left(\left\{ F_n^{\boldsymbol g}(\alpha,{\boldsymbol X}) \ge \rho + 2\delta \right\}\bigcap {\mathcal G}_{n,\alpha_0} \right) \\ &\stackrel{(b)}\le \lim_{n\to\infty}\P\left(\left\{ \widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \ge \rho +2\delta \right\}\bigcap {\mathcal G}_{n,\alpha_0} \right) \\ &\stackrel{(c)}\le \lim_{n\to\infty}\P\left(\left\{ \widehat R_{n,s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \ge \rho + \delta \right\} \bigcap {\mathcal G}_{n,\alpha_0} \right) \\ &\stackrel{(d)}\le \lim_{n\to\infty}\P\left(\left\{ F_n^{\boldsymbol g}(\alpha_0,{\boldsymbol G}) + C' \frac{\alpha_0}{\alpha} \ge \rho +\delta \right\} \bigcap {\mathcal G}_{n,\alpha_0}\right)\\ &\stackrel{(e)}\le \lim_{n\to\infty}\P\left( F_n^{\boldsymbol g}(0,{\boldsymbol G}) \ge \rho \right) +\P\left( C' \frac{\alpha_0}{\alpha} \ge \delta \right) \end{align*} where $(a)$ follows from Lemma~\ref{lemma:F_diff} and that $\lim_n\P({\mathcal G}_{n,\alpha_0}) =1$, $(b)$ follows from the bound in Eq.~\eqref{eq:F_bound_X} holding on ${\mathcal G}_{n,\alpha_0}$, $(c)$ follows from Theorem~\ref{thm:main_empirical_univ} by absorbing $R^{\boldsymbol g}({\boldsymbol \theta})$ into the regularization term and the positive constant $s$ into the loss, along with 
$\lim_n \P({\mathcal G}_{n,\alpha_0}) = 1$, $(d)$ follows from the bound in~\eqref{eq:F_bound_G}, and $(e)$ follows from the monotonicity of $F_n^{\boldsymbol g}(\cdot,{\boldsymbol G})$. Since $\alpha >0$ and $\delta>0$ were arbitrary, sending $\alpha_0\to 0$ completes the proof of the first statement in the theorem. Using a similar argument with the roles of ${\boldsymbol X}$ and ${\boldsymbol G}$ exchanged gives the second statement. \end{proof} \section{Proof of Theorem~\ref{thm:test_error}} \label{section:proof_test_error} We prove the statement under each condition separately in the subsections that follow. We will use $\textsf{k} = 1$ for simplicity and without loss of generality, since the arguments that follow extend directly to any fixed constant $\textsf{k} > 1$. \subsection{Proof of Theorem~\ref{thm:test_error} under the condition~$\ref{item:a_test_error}$} For $s\in\R$, let us define the modified empirical risks \begin{align} \nonumber \widehat R_{n,s}({\boldsymbol \theta}; {\boldsymbol X} ,{\boldsymbol y}({\boldsymbol X}))&:= \widehat R_n({\boldsymbol \theta};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})) + s R_n^{\boldsymbol g}({\boldsymbol \theta}) \\ \widehat R_{n,s}({\boldsymbol \theta}; {\boldsymbol G}, {\boldsymbol y}({\boldsymbol G}))&:= \widehat R_n({\boldsymbol \theta};{\boldsymbol G} ,{\boldsymbol y}({\boldsymbol G})) + s R_n^{\boldsymbol g}({\boldsymbol \theta}) \label{eq:modified_risk} \end{align} (note the asymmetry: the Gaussian risk $R_n^{\boldsymbol g}$ appears as the penalty in both cases), and use $\widehat{\boldsymbol \theta}_s^{\boldsymbol X},\widehat{\boldsymbol \theta}_s^{\boldsymbol G}$ to denote their unique minimizers, respectively. Furthermore, we write $\widehat R_{n,s}^\star({\boldsymbol X} ,{\boldsymbol y}({\boldsymbol X}))$ and $\widehat R_{n,s}^\star({\boldsymbol G} ,{\boldsymbol y}({\boldsymbol G}))$ for the minima. First, we show that the convexity assumptions imply the following lemma.
\begin{lemma} \label{lemma:theta_lipschitz_in_s} For all $s\in\R$, $n\in\Z_{>0}$, we have \begin{equation} \label{eq:lipschitz_in_s} \norm{\widehat {\boldsymbol \theta}_s^{\boldsymbol X} - \widehat {\boldsymbol \theta}_{-s}^{\boldsymbol X}}_2 \le C |s| \end{equation} for some $C>0$ depending only on $\sOmega$. A similar inequality also holds for $\widehat{\boldsymbol \theta}_{s}^{\boldsymbol G}$. \end{lemma} \begin{proof} We assume without loss of generality that $\ell(u,v)$ is differentiable in $u$ and that $r$ is differentiable in ${\boldsymbol \theta}$. Otherwise, we can replace all derivatives with subgradients in what follows. We prove the statement by upper and lower bounding the quantity \begin{equation} \nonumber \widehat R_{n}\left(\widehat{\boldsymbol \theta}_s^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_{n} \left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right). \end{equation} For the lower bound, we have \begin{align*} \widehat R_n\left(\widehat{\boldsymbol \theta}_s^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}; {\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) &\ge \frac1n \sum_{i=1}^n \partial_1\ell\left({\boldsymbol x}_i^\sT \widehat {\boldsymbol \theta}_0^{{\boldsymbol X}};y_i \right){\boldsymbol x}_i^\sT \left( \widehat{\boldsymbol \theta}_s^{\boldsymbol X} -\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right)\\ &\hspace{6mm}+ \grad r\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right)^\sT \left( \widehat{\boldsymbol \theta}_s^{\boldsymbol X} - \widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) + \frac{\smu}{2} \norm{\widehat{\boldsymbol \theta}_s^{\boldsymbol X} - \widehat {\boldsymbol \theta}_0^{\boldsymbol X} }_2^2\\ &\stackrel{(a)}= \frac{\smu}{2} \norm{\widehat{\boldsymbol \theta}_s^{\boldsymbol X} - \widehat{\boldsymbol 
\theta}_0^{\boldsymbol X}}_2^2 \end{align*} where $(a)$ follows from the KKT conditions for $\widehat R_n$. Meanwhile, for the upper bound we write \begin{align*} \widehat R_{n}\left(\widehat{\boldsymbol \theta}_s^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_{n} \left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})\right) &= \widehat R_{n,s} \left(\widehat{\boldsymbol \theta}_s^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_{n,s}\left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)\\ &\hspace{10mm}+ s\left(R^{\boldsymbol g}_n\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0\right) - R^{\boldsymbol g}_n\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_s\right)\right)\\ &\stackrel{(a)}{\le} \left|s\right| \left| R^{\boldsymbol g}_n\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0\right) - R^{\boldsymbol g}_n\left(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s\right)\right|\\ &\stackrel{(b)}{\le} C_1 |s| \norm{\widehat {\boldsymbol \theta}^{\boldsymbol X}_0 - \widehat {\boldsymbol \theta}^{\boldsymbol X}_s }_2, \end{align*} where $(a)$ follows by noting that $\widehat {\boldsymbol \theta}_s^{\boldsymbol X}$ minimizes $\widehat R_{n,s}({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, and $(b)$ follows since $R^{\boldsymbol g}_n({\boldsymbol \theta})$ is Lipschitz with bounded Lipschitz modulus: Indeed we have \begin{equation} \nonumber \left| R_n^{\boldsymbol g}({\boldsymbol \theta}) - R_n^{\boldsymbol g}({\boldsymbol \theta}') \right| \le \norm{\ell}_\textrm{Lip}\E\left[\left|\left({\boldsymbol \theta} -{\boldsymbol \theta}'\right)^\sT{\boldsymbol g} \right|\right] = \norm{\ell}_\textrm{Lip} \E\left[|G|\right] \norm{{\boldsymbol \theta} - {\boldsymbol \theta}'}_2. 
\end{equation} Combining the upper and lower bounds and rearranging gives \begin{equation} \nonumber \norm{\widehat{\boldsymbol \theta}^{\boldsymbol X}_s - \widehat {\boldsymbol \theta}^{\boldsymbol X}_0}_2 \le C_2 |s|, \end{equation} and hence \begin{equation} \nonumber \norm{\widehat {\boldsymbol \theta}^{\boldsymbol X}_s - \widehat{\boldsymbol \theta}^{\boldsymbol X}_{-s}}_2 \le \norm{\widehat{\boldsymbol \theta}^{\boldsymbol X}_s - \widehat{\boldsymbol \theta}^{\boldsymbol X}_0}_2 + \norm{\widehat{\boldsymbol \theta}^{\boldsymbol X}_{-s} - \widehat{\boldsymbol \theta}^{\boldsymbol X}_0}_2 \le C_3|s|. \end{equation} This proves the lemma for ${\boldsymbol X}$. A similar argument clearly holds for the Gaussian model. \end{proof} Now let us define, for $s \neq 0$, the differences \begin{equation} \label{eq:D_def} D^{\boldsymbol X}(s) := \frac{\widehat R_{n,s}^\star\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_{n}^\star\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)}{s},\hspace{5mm} D^{\boldsymbol G}(s) := \frac{\widehat R_{n,s}^\star\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) - \widehat R_{n}^\star\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right)}{s}. \end{equation} We state the following lemma. \begin{lemma} \label{lemma:D_universality} For all $n\in \Z_{>0}$ and $s >0$, we have \begin{equation} \label{eq:lip_D} D^{\boldsymbol X}(-s) - D^{\boldsymbol X}(s) \le C\,s \quad\textrm{and} \quad D^{\boldsymbol G}(-s) - D^{\boldsymbol G}(s) \le C\,s \end{equation} for $C>0$ depending only on $\sOmega$. Furthermore, for any $t\in\R$, $s > 0$ and $\delta>0$, \begin{align} \label{eq:lim_P_DX} \lim_{n\to\infty}\P\left( D^{\boldsymbol X}(-s) \ge t + \delta \right) &\le \lim_{n\to\infty} \P\left(D^{\boldsymbol G}(-s) \ge t \right)\\ \lim_{n\to\infty}\P\left( D^{\boldsymbol X}(s) \le t - \delta \right) &\le \lim_{n\to\infty} \P\left(D^{\boldsymbol G}(s) \le t \right).
\label{eq:lim_P_DG} \end{align} \end{lemma} \begin{proof} Let us first show~\eqref{eq:lip_D}. We can write \begin{align*} D^{\boldsymbol X}(-s) - D^{\boldsymbol X}(s) &= -\frac{1}{s}\left(\widehat R_n(\widehat {\boldsymbol \theta}_{-s}^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widehat {\boldsymbol \theta}_0^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) \\ &\hspace{10mm}-\frac1s \left(\widehat R_n(\widehat {\boldsymbol \theta}_s^{\boldsymbol X};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widehat {\boldsymbol \theta}_0^{\boldsymbol X};{\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})) \right)\\ &\hspace{10mm}+ R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_{-s}^{\boldsymbol X}\right) - R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}_s^{\boldsymbol X}\right) \\ &\stackrel{(a)}{\le} \left|R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}^{\boldsymbol X}_{-s}) - R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}^{\boldsymbol X}_s )\right|\\ &\stackrel{(b)}{\le} C_0 \norm{\widehat{\boldsymbol \theta}^{\boldsymbol X}_{-s} - \widehat{\boldsymbol \theta}^{\boldsymbol X}_s}_2\\ &\stackrel{(c)}{\le} C_1\,s \end{align*} where in $(a)$ we used $\widehat R_n(\widehat {\boldsymbol \theta}^{\boldsymbol X}_{-s};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\ge \widehat R_n(\widehat {\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ and $\widehat R_n(\widehat {\boldsymbol \theta}^{\boldsymbol X}_{s};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \ge \widehat R_n(\widehat {\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$, in $(b)$ we used that $R_n^{\boldsymbol g}({\boldsymbol \theta})$ is Lipschitz with bounded Lipschitz modulus, and in $(c)$ we used Lemma~\ref{lemma:theta_lipschitz_in_s}. A similar argument then shows the same property for $D^{\boldsymbol G}(s)$. Now let us prove~\eqref{eq:lim_P_DX}.
We have \begin{align*} \lim_{n\to\infty}\P\left( D^{\boldsymbol X}(-s) \ge t + 3\delta \right) &= \lim_{n\to\infty} \P\left( \frac{\widehat R_{n,-s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) }{-s} \ge t + 3\delta \right)\\ &\le \lim_{n\to\infty} \P\left( \frac{\widehat R_{n,-s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \rho }{-s} \ge t + 2\delta \right)\\ &\hspace{10mm}+ \lim_{n\to\infty}\P\left( \left|\widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \rho \right| \ge s\delta \right)\\ &\stackrel{(a)}= \lim_{n\to\infty} \P\left( \frac{\widehat R_{n,-s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \rho }{-s} \ge t + 2\delta \right)\\ &\stackrel{(b)}\le \lim_{n\to\infty} \P\left( \frac{\widehat R_{n,-s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) - \rho }{-s} \ge t + \delta \right)\\ &\le \lim_{n\to\infty} \P\left( \frac{\widehat R_{n,-s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) - \widehat R^\star_n({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) }{-s} \ge t \right) \\ &\hspace{10mm}+ \lim_{n\to\infty}\P\left( \left|\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) - \rho \right| \ge s\delta \right)\\ &\stackrel{(c)}= \lim_{n\to\infty} \P\left( D^{\boldsymbol G}(-s) \ge t \right) \end{align*} where $(a)$ follows from Theorem~\ref{thm:main_empirical_univ} applied to $\widehat R_n^\star$ along with the assumption that $\widehat R_n^\star({\boldsymbol G}) \stackrel{\P}{\to} \rho$, $(b)$ follows from Theorem~\ref{thm:main_empirical_univ} applied to $\widehat R_{n,-s}^\star$ by absorbing the term $-s R^{\boldsymbol g}$ into the regularizer, and $(c)$ follows directly from the assumption $\widehat R_n^\star({\boldsymbol G}) \stackrel{\P}{\to} \rho$. This proves~\eqref{eq:lim_P_DX}. A similar argument establishes the inequality~\eqref{eq:lim_P_DG}. 
\end{proof} \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Theorem~\ref{thm:test_error} under the condition~$\ref{item:a_test_error}$ ] }{ \begin{proof}{\textbf{of Theorem~\ref{thm:test_error} under the condition~$\ref{item:a_test_error}$ }} } First, note that for $s>0$, $R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_0^{\boldsymbol X})$ is sandwiched between $D^{\boldsymbol X}(s)$ and $D^{\boldsymbol X}(-s)$. Indeed, we have \begin{align} \nonumber D^{\boldsymbol X}(s) &\le \frac{\widehat R_{n,s}\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)}{s}\\ &= R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0\right)\nonumber\\ &= \frac{\widehat R_{n,-s}\left(\widehat{\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n(\widehat {\boldsymbol \theta}^{\boldsymbol X}_0;{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))}{-s}\nonumber\\ &\le D^{\boldsymbol X}(-s). \label{eq:DX_upper_bound} \end{align} Analogously, we can derive \begin{equation} \label{eq:DG_lower_bound} D^{\boldsymbol G}(s) \le R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right)\le D^{\boldsymbol G}(-s). \end{equation} Let us first use this to show that $R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right) \stackrel{\P}{\to} \widetilde \rho$.
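Before proceeding, we note that these sandwich bounds express a concavity property: for fixed data, $s \mapsto \widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ is a pointwise minimum of functions affine in $s$ and is therefore concave, with \begin{equation} \nonumber \widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \le \widehat R_{n}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) + s\, R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X}\right) \quad \text{for all } s\in\R, \end{equation} obtained by evaluating the minimand at $\widehat{\boldsymbol \theta}_0^{\boldsymbol X}$; dividing this inequality by $s>0$, and evaluating it at $-s<0$ and dividing by $-s$, recovers the two inequalities in~\eqref{eq:DX_upper_bound}.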
For any $\delta>0$, take $s_\delta\in(0, \delta/(2C))$ where $C$ is the constant appearing in Eq.~\eqref{eq:lip_D} of Lemma~\ref{lemma:D_universality} and write \begin{align*} \lim_{n\to\infty}\P\left(R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) \ge \widetilde \rho + 3\delta \right) &\stackrel{(a)}\le \lim_{n\to\infty}\P\left( D^{\boldsymbol X}(-s_\delta) \ge \widetilde \rho + 3\delta \right)\\ &\stackrel{(b)}\le \lim_{n\to\infty}\P\left( D^{\boldsymbol G}(-s_\delta) \ge \widetilde \rho + \delta \right)\\ &\stackrel{(c)}\le \lim_{n\to\infty}\P\left( D^{\boldsymbol G}(s_\delta) + C s_\delta \ge \widetilde \rho + \delta \right)\\ &\stackrel{(d)}\le \lim_{n\to\infty}\P\left( R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right) \ge \widetilde \rho + \frac{\delta}2 \right)\\ &\stackrel{(e)}= 0, \end{align*} where $(a)$ follows by~\eqref{eq:DX_upper_bound}, $(b)$ and $(c)$ follow by Lemma~\ref{lemma:D_universality}, $(d)$ follows by the lower bound in Eq.~\eqref{eq:DG_lower_bound} and the definition of $s_\delta$, and $(e)$ is by the assumption that $R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right) \stackrel{\P}{\to} \widetilde\rho.$ An analogous argument then shows \begin{equation} \nonumber \lim_{n\to\infty}\P\left(R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) \le \widetilde \rho - 3\delta \right) = 0. \end{equation} Therefore, $R^{\boldsymbol g}_n\left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X}\right) \stackrel{\P}{\to} \widetilde\rho$. To conclude the proof, recall that $\ell$ is assumed to be Lipschitz so that Lemma~\ref{lemma:proj_locally_lip} implies that $$\left|R^{\boldsymbol g}_n\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right) - R^{\boldsymbol x}_n\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right) \right| \to 0$$ almost surely, yielding the statement of the theorem under condition~$\ref{item:a_test_error}$.
\end{proof} \subsection{Proof of Theorem~\ref{thm:test_error} under the condition~$\ref{item:b_test_error}$} Let $\mathcal{A}_{n,\delta,\alpha}$ be the event in condition~$\ref{item:b_test_error}$, namely, \begin{equation} \nonumber \mathcal{A}_{n,\delta,\alpha} := \left\{ \min_{\left\{{\boldsymbol \theta}\in{\mathcal C}_p : \left| R_n^{\boldsymbol g}({\boldsymbol \theta}) - \widetilde\rho \right|\ge \alpha\right\}} \left| \widehat R_n\left({\boldsymbol \theta};{\boldsymbol G}, {\boldsymbol y}({\boldsymbol G})\right) - \widehat R_n^\star\left( {\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \right| \ge \delta\right\}, \end{equation} and take $\widehat{\boldsymbol \theta}_n^{\boldsymbol G}$ and $\widehat{\boldsymbol \theta}_n^{\boldsymbol X}$ to be any minimizers of $\widehat R_n({\boldsymbol \theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$ and $\widehat R_n({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ respectively. First, note that this directly implies \begin{equation} \label{eq:g_test_error} \left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol G}\right) - \widetilde \rho\;\right| \stackrel{\P}{\to} 0. \end{equation} Indeed, we have for all $\alpha >0$, \begin{align*} \P\left( \left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol G}\right) - \widetilde\rho \; \right| \ge \alpha \right) \le \P\left( \left\{\left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol G}\right) - \widetilde\rho \; \right| \ge \alpha\right\} \bigcap \mathcal{A}_{n,\delta,\alpha} \right) + \P\left( \mathcal{A}_{n,\delta,\alpha}^c\right)= \P\left( \mathcal{A}_{n,\delta,\alpha}^c\right) \end{align*} for any $\delta>0$. Now choosing $\delta >0$ so that $\lim_{n\to\infty} \P(\mathcal{A}_{n,\delta,\alpha}^c) =0$ proves~\eqref{eq:g_test_error}. 
Next, we show that \begin{equation} \nonumber \left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol X}\right) - \widetilde\rho\; \right| \stackrel{\P}{\to} 0 \end{equation} as a consequence of Theorem~\ref{thm:main_empirical_univ} along with the assumption that $\widehat R_n\left(\widehat{\boldsymbol \theta}_n^{\boldsymbol G}; {\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right)\stackrel{\P}\to\rho.$ Indeed, assume the contrary and choose for any $\alpha>0$, $\delta := \delta_\alpha$ so that $\P(\mathcal{A}_{n,\delta_\alpha, \alpha}^c) \to 0$. We have \begin{align*} &\P\left( \left| \widehat R_n\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol G}; {\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right)\right| < \delta_\alpha\right)\\ &\hspace{1mm}\le \P\left(\left\{\left| \widehat R_n(\widehat {\boldsymbol \theta}_n^{\boldsymbol X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widehat{\boldsymbol \theta}_n^{\boldsymbol G}; {\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \right| \hspace{-1mm}<\hspace{-1mm} \delta_\alpha\right\} \bigcap \left\{\left| R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_n^{\boldsymbol X}) -\widetilde\rho \; \right| \ge \alpha\right\} \bigcap \mathcal{A}_{n,\delta_\alpha, \alpha} \right)\\ &\hspace{10mm}+ \P\left(\mathcal{A}_{n,\delta_\alpha, \alpha}^c \right) + \P\left(\left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol X}\right) - \widetilde \rho \right| < \alpha \right) \\ &\hspace{1mm}= \P\left(\mathcal{A}_{n,\delta_\alpha, \alpha}^c \right) + \P\left(\left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol X}\right) - \widetilde \rho \; \right| < \alpha \right) \end{align*} Sending $n\to\infty$, we have \begin{align*} &\limsup_{n\to\infty} \P\left( \left| \widehat R_n\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol 
X};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) - \widehat R_n\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol G}; {\boldsymbol G} ,{\boldsymbol y}({\boldsymbol G})\right)\right| < \delta_\alpha\right) \\ &\hspace{70mm}\le \limsup_{n\to\infty} \P\left(\left| R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_n^{\boldsymbol X}\right) -\widetilde\rho \; \right| < \alpha \right) \stackrel{(a)}{<} 1, \end{align*} where $(a)$ follows since we assumed that $\left| R_n^{\boldsymbol g}(\widehat{\boldsymbol \theta}_n^{\boldsymbol X}) - \widetilde\rho\; \right|$ does not converge to $0$ in probability. This directly contradicts $\left|\widehat R_n^\star({\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})) - \widehat R_n^\star( {\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))\right| \stackrel{\P}{\to} 0$, which is a consequence of Theorem~\ref{thm:main_empirical_univ} and the assumption that $\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \stackrel{\P}{\to} \rho$ in condition~$\ref{item:b_test_error}$. Meanwhile, note that by Lemma~\ref{lemma:proj_locally_lip}, $\left| R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}_n^{\boldsymbol X}\right) - R_n^{\boldsymbol x}\left(\widehat{\boldsymbol \theta}_n^{\boldsymbol X}\right) \right| \stackrel{\textrm{a.s.}}{\to} 0$; hence, we have for all $\alpha >0$, \begin{align} \nonumber \lim_{n\to\infty}\P\left(\left| R_n^{\boldsymbol x}\left(\widehat{\boldsymbol \theta}_n^{\boldsymbol X}\right) - \widetilde\rho\; \right| > \alpha \right) \le \lim_{n\to\infty}\P\left( \left| R_n^{\boldsymbol g}\left( \widehat {\boldsymbol \theta}_n^{\boldsymbol X}\right) - \widetilde\rho \;\right| > \frac{\alpha}{2}\right) = 0.
\end{align} \subsection{Proof of Theorem~\ref{thm:test_error} under the condition~$\ref{item:c_test_error}$ } Recall the definitions of the modified risks $\widehat R_{n,s}\left({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)$, $\widehat R_{n,s}({\boldsymbol \theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G}))$ for $s\in\R$ in Eq.~\eqref{eq:modified_risk}, and write $\widehat {\boldsymbol \theta}_{s}^{\boldsymbol X}$ for a minimizer of $\widehat R_{n,s}({\boldsymbol \theta};{\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))$ and $\widehat {\boldsymbol \theta}_{s}^{\boldsymbol G}$ for a minimizer of $\widehat R_{n,s}({\boldsymbol \theta};{\boldsymbol G},{\boldsymbol y}({\boldsymbol G})).$ Further, recall the definitions of $D^{\boldsymbol X}(s) ,D^{\boldsymbol G}(s)$ in Eq.~\eqref{eq:D_def} for $s > 0$, and note that the bounds \begin{equation} \nonumber D^{\boldsymbol G}(s) \le R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{0}^{\boldsymbol G}) \le D^{\boldsymbol G}(-s),\quad D^{\boldsymbol X}(s) \le R_n^{\boldsymbol g}(\widehat {\boldsymbol \theta}_{0}^{\boldsymbol X}) \le D^{\boldsymbol X}(-s) \end{equation} shown in~\eqref{eq:DX_upper_bound} hold generally without the convexity assumption. Hence, using \begin{equation} \nonumber \Delta \rho (t) := \frac{\rho(t) - \rho(0)}{t}, \end{equation} we can write, for any $\delta>0$ and $s>0$, \begin{align*} \P\left( R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) \ge R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right) + 3\delta \right)\hspace{-0.8mm} &\le \P\left( \left| \Delta \rho(-s) - D^{\boldsymbol X}(-s) \right| \ge \delta \right) + \P\left( \left|\Delta \rho(s) - D^{\boldsymbol G}(s) \ \right|\ge \delta \right)\\ & \hspace{10mm}+\P\left( D^{\boldsymbol X}(-s) \ge D^{\boldsymbol G}(s) + 3\delta \right)\\ &\le \P\left( \Delta \rho(-s) \ge \Delta \rho(s) + \delta \right). 
\end{align*} Now recall that by the assumption in condition~\ref{item:c_test_error}, $\widehat R_{n,s}^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \stackrel{\P}\to \rho(s)$ for all $s$ in some neighborhood of $0$. Theorem~\ref{thm:main_empirical_univ} then implies the same for the model with ${\boldsymbol X}$, i.e., $\widehat R_{n,s}^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \stackrel{\P}\to \rho(s)$, so by Slutsky's theorem we have \begin{equation} \nonumber \left|\Delta \rho(-s) - D^{\boldsymbol X}(-s) \right| \stackrel{\P}{\to} 0,\quad \left|\Delta \rho(s) - D^{\boldsymbol G}(s) \right| \stackrel{\P}{\to} 0 \end{equation} for $s$ in some neighborhood of $0$. Combining this with the previous display gives \begin{align*} \lim_{n\to\infty}\P\left( R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) \ge R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right) + 3\delta \right) &\le \lim_{s\to 0} \P\left(\Delta \rho(-s) \ge \Delta\rho(s) + \delta \right)\\ &\stackrel{(a)}= 0 \end{align*} where $(a)$ follows by differentiability of $\rho(s)$ at $s=0$. By exchanging the roles of ${\boldsymbol X}$ and ${\boldsymbol G}$ in this argument we additionally obtain \begin{align} \nonumber \lim_{n\to\infty}\P\left( R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X} \right) \le R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol G}\right) - 3\delta \right) = 0, \end{align} so that $\left|R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}_0^{\boldsymbol X}\right) - R_n^{\boldsymbol g}\left(\widehat{\boldsymbol \theta}_0^{\boldsymbol G}\right) \right| \stackrel{\P}{\to} 0$. Finally, using that $\left|R_n^{\boldsymbol g}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right) - R_n^{\boldsymbol x}\left(\widehat {\boldsymbol \theta}_0^{\boldsymbol X}\right) \right|\stackrel{\textrm{a.s.}}\to 0$ as a consequence of Lemma~\ref{lemma:proj_locally_lip}, we obtain the desired result.
\section{The neural tangent model: Proof of Corollary~\ref{cor:ntk_universality}} \label{proof:ntk_universality} Let us begin by recalling the definitions and assumptions on the model defined in Section~\ref{section:ntk_example}. Recall the activation function $\sigma$ that is assumed to be four times differentiable with bounded derivatives and satisfying $\E[\sigma'(G)] = 0$ and $\E[G\sigma'(G)] = 0$ for $G\sim\cN(0,1)$. Further recall the weight matrix ${\boldsymbol W}$ whose $m$ columns are ${\boldsymbol w}_j\stackrel{\textrm{i.i.d.}}{\sim}\textsf{Unif}\left(\S^{d-1}(1)\right)$, $j\in[m]$. The feature vectors for the neural tangent model were then defined in~\eqref{eq:ntk_covariates} as \begin{equation} \nonumber {\boldsymbol x}_i = \left({\boldsymbol z}_i \sigma'\left({\boldsymbol w}_1^\sT {\boldsymbol z}_i\right),\dots,{\boldsymbol z}_i \sigma'\left({\boldsymbol w}_m^\sT {\boldsymbol z}_i\right)\right) \in\R^p, \end{equation} where ${\boldsymbol z}_i\stackrel{\textrm{i.i.d.}}{\sim}\cN(0,{\boldsymbol I}_d)$ for $i\in[n]$. Additionally, for the Gaussian model we defined ${\boldsymbol g} | {\boldsymbol W} \sim \cN\left(0, \E\left[{\boldsymbol x}\bx^\sT | {\boldsymbol W} \right] \right)$. We assume $m(n)/d(n) \to \widetilde\sgamma$ and $p(n)/n \to \sgamma$ as $n\to\infty$. As we have done so far, we suppress the dependence of these integers on $n$.
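Note that in this model $p = md$, since each feature vector consists of $m$ blocks ${\boldsymbol z}_i \sigma'({\boldsymbol w}_j^\sT{\boldsymbol z}_i)\in\R^d$. The two asymptotic conditions are therefore linked: \begin{equation} \nonumber n = \frac{p}{\sgamma}\,(1+o(1)) = \frac{\widetilde\sgamma}{\sgamma}\, d^2\,(1+o(1)), \end{equation} so the sample size grows quadratically in the input dimension $d$.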
For a given ${\boldsymbol \theta} =\left({\boldsymbol \theta}_{\up{1}}^\sT,\dots,{\boldsymbol \theta}_\up{m}^\sT\right)^\sT\in\R^p$, where ${\boldsymbol \theta}_{\up{j}} \in \R^d$ for $j\in[m]$, we introduced the notation ${\boldsymbol T}_{\boldsymbol \theta}\in\R^{d\times m}$ to denote the matrix ${\boldsymbol T}_{\boldsymbol \theta} = \left({\boldsymbol \theta}_\up{1} ,\dots, {\boldsymbol \theta}_\up{m}\right) \in\R^{d\times m}$ so that we can write ${\boldsymbol \theta}^\sT {\boldsymbol x} = {\boldsymbol z}^\sT {\boldsymbol T}_{\boldsymbol \theta}\, \sigma'\left({\boldsymbol W}^\sT {\boldsymbol z}\right)$, where $\sigma':\R\to\R$ is applied element-wise. Finally, recall the set \begin{equation} \nonumber {\mathcal S}_{p} = \left\{ {\boldsymbol \theta}\in\R^p : \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \le \frac{ \textsf{R}}{\sqrt{d}} \right\}. \end{equation} Note that ${\mathcal S}_p$ is symmetric, convex, and ${\mathcal S}_p\subseteq B_2^p(\textsf{R})$. Furthermore, for all ${\boldsymbol \theta}\in{\mathcal S}_p$ we have $\norm{{\boldsymbol \theta}_\up{j}}_2 \le \textsf{R}/\sqrt{d}$ for all $j\in[m]$. The key to proving Corollary~\ref{cor:ntk_universality} is showing that the distribution of the feature vectors $\{{\boldsymbol x}_i\}_{i\in[n]}$ satisfies, on a high-probability set, Assumption~\ref{ass:X}. Our proof here is analogous to that of~\cite{hu2020universality} for the random features model. Let us begin our treatment by defining the event \begin{equation} \nonumber \mathcal{B} := \left\{\sup_{\{i,j \in [m] : i\neq j\}} \left|{\boldsymbol w}_i^\sT{\boldsymbol w}_j\right| \le C\left(\frac{\log m}{d}\right)^{1/2}\right\} \bigcap \left\{ \norm{{\boldsymbol W} }_\mathrm{op} \le C' \right\} \end{equation} for some $C,C'$ depending only on $\widetilde\sgamma$ so that $\P(\mathcal{B}^c) \to 0$ as $n\to\infty$. The existence of such constants is a standard result (see, for example,~\cite{vershynin2018high}).
However, we include it as Lemma~\ref{lemma:B_tail_bound} of Section~\ref{section:aux_lemmas_ntK} for completeness. \subsection{Asymptotic Gaussianity on a subset of ${\mathcal S}_p$} \label{subsection:bounded_diff_delta} Throughout, we will be working conditionally on ${\boldsymbol W}\in\mathcal{B}$, so let us simplify notation by using $\E[\cdot] := \E[\cdot\, \mathbf{1}_\mathcal{B} \big| {\boldsymbol W}]$. Furthermore, since the initial goal is to establish that the distribution of the feature vectors satisfies Assumption~\ref{ass:X}, we suppress the sample index. For a given $\delta>0$, let us define the set \begin{equation} \nonumber {\mathcal S}_{p,\delta} := \left\{{\boldsymbol \theta}\in{\mathcal S}_p : {\boldsymbol \theta}^\sT \E\left[{\boldsymbol x}\bx^\sT\right]{\boldsymbol \theta} > \delta\right\}. \end{equation} Our goal in this subsection is to prove the following lemma. \begin{lemma} \label{lemma:ntk_bl} For all $\delta>0$ and any differentiable bounded function $\varphi:\R\to\R$ with bounded derivative, we have \begin{equation} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p,\delta}} \left|\E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol x}\right)\mathbf{1}_{\mathcal{B}} \Big| {\boldsymbol W} \right] - \E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol g}\right)\mathbf{1}_{\mathcal{B}}\Big | {\boldsymbol W} \right]\right| = 0. \end{equation} \end{lemma} \vspace{5mm} Define, for ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$, the notation \begin{equation} \nonumber \nu^2 = \nu^2_{\boldsymbol \theta} := {\boldsymbol \theta}^\sT \E\left[{\boldsymbol x}\bx^\sT\right] {\boldsymbol \theta} > \delta.
\end{equation} For a fixed bounded Lipschitz function $\varphi$, let $\chi = \chi_\varphi$ be the solution to Stein's equation for $\varphi$, namely, the function $\chi$ satisfying \begin{equation} \label{eq:stein_solution} \E\left[ \varphi\left(\frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right) - \varphi\left(\frac{{\boldsymbol \theta}^\sT {\boldsymbol g}}{\nu}\right)\right] = \E\left[\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right] \end{equation} (see~\cite{chen2011normal} for more on Stein's method and properties of the solution $\chi$). In order to prove Lemma~\ref{lemma:ntk_bl}, it is sufficient to show that \begin{equation} \label{eq:stein_equiv} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p,\delta}}\left|\E\left[\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right]\right| = 0. \end{equation} To simplify notation, define \begin{equation} \label{eq:Delta_i_def} \Delta_i := \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \frac{1}{\nu}\sum_{j: j\neq i }{\boldsymbol \theta}_\up{j}^\sT{\boldsymbol P}_{i}^\perp{\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT{\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT {\boldsymbol z}), \end{equation} where \begin{equation} \nonumber {\boldsymbol P}_{i}^\perp := {\boldsymbol I} - {\boldsymbol w}_i{\boldsymbol w}_i^\sT,\quad \rho_{ij}:= {{\boldsymbol w}_j^\sT{\boldsymbol w}_i}.
\end{equation} In Section~\ref{section:ntk_actual_proof}, we upper bound the quantity~\eqref{eq:stein_equiv} as \begin{align} \label{eq:decomposition} &\left|\E\left[\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right]\right| \\ &\hspace{15mm}\le \left| \E\left[\left(\frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\Delta_i - 1 \right)\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right] \right|\nonumber\\ &\hspace{25mm}+ \left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right) \right] \right|. \nonumber \end{align} We control the two terms on the right-hand side in Sections~\ref{section:first_term_ntk} and~\ref{section:second_term_ntk}, respectively. Before doing this, we make the following definitions, which will be used throughout.
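Controlling~\eqref{eq:decomposition} uses only standard regularity properties of the Stein solution, which we record here for convenience. For a bounded function $\varphi$ with bounded derivative, the solution $\chi$ of $\chi'(s) - s\chi(s) = \varphi(s) - \E[\varphi(G)]$, with $G\sim\cN(0,1)$, satisfies (see, e.g., Lemma~2.4 of~\cite{chen2011normal})
\begin{equation}
\nonumber
\norm{\chi}_\infty \le \sqrt{\pi/2}\,\norm{\varphi - \E[\varphi(G)]}_\infty,\qquad \norm{\chi'}_\infty \le 2\norm{\varphi - \E[\varphi(G)]}_\infty,\qquad \norm{\chi''}_\infty \le 2\norm{\varphi'}_\infty.
\end{equation}
In particular, $\chi$ and $\chi'$ are bounded and $\chi'$ is Lipschitz; these facts are used implicitly in the bounds below.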
Define $\widetilde {\boldsymbol \theta}_{j,i} := {\boldsymbol P}_{i}^\perp{\boldsymbol \theta}_\up{j},$ along with the matrix notation \begin{align*} {\boldsymbol D}_l &:= \textrm{diag}\left\{{\boldsymbol \sigma}^\up{l}({\boldsymbol W}^\sT{\boldsymbol z})\right\},\quad\quad {\boldsymbol M}:=\textrm{diag}\left\{{\boldsymbol W}^\sT{\boldsymbol z}\right\},\quad\quad \widetilde {\boldsymbol M} := \textrm{diag}\left\{{\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z}\right\},\\ {\boldsymbol A} &:= {\boldsymbol W}^\sT{\boldsymbol T}_{\boldsymbol \theta} - \left({\boldsymbol W}^\sT {\boldsymbol T}_{{\boldsymbol \theta}}\right) \odot {\boldsymbol I}_{m},\quad\quad {\boldsymbol R} := {\boldsymbol W}^\sT{\boldsymbol W} - {\boldsymbol I}_m,\quad\quad {\boldsymbol N}:= \left(\widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z}\right)_{i,j \in[m]}, \end{align*} where we write ${\boldsymbol \sigma}^\up{l}({\boldsymbol v})$ to denote the element-wise application of $\sigma^\up{l}:\R\to\R$, the $l$th derivative of $\sigma$ to a vector ${\boldsymbol v}$. Additionally, here $\odot$ denotes the Hadamard product, and $\textrm{diag}\{{\boldsymbol v}\}$ for a vector ${\boldsymbol v}$ denotes the matrix whose elements on the main diagonal are the elements of ${\boldsymbol v}$, and whose elements off the main diagonal are $0$. We prove the following bounds. 
\begin{lemma} \label{lemma:op_norm_bounds} For ${\boldsymbol W}\in\mathcal{B}$, we have for any fixed integers $k>0$ and $l\le 4 $ \begin{align*} &\norm{{\boldsymbol D}_l}_\mathrm{op} \le C_0,\hspace{17mm} \E\left[\norm{{\boldsymbol M}}_\mathrm{op}^k \right] \le C_1(\log m)^{k/2} , \hspace{17mm} \E\left[\norm{\widetilde{\boldsymbol M}}^k_\mathrm{op}\right]\le C_2 \frac{(\log{m})^{k/2}}{m^{k/2}},\\ &\norm{{\boldsymbol A}}_\mathrm{op} \le \frac{C_3}{m^{1/2}},\hspace{7mm} \norm{{\boldsymbol R}}_\mathrm{op} \le C_4,\hspace{7mm} \E\left[\norm{{\boldsymbol N}\odot {\boldsymbol R} }_\mathrm{op}^k \right] \le C_5\frac{(\log m)^{k/2}}{m^{k/2}},\hspace{6mm} \norm{{\boldsymbol R}\odot {\boldsymbol R}}_\mathrm{op} \le C_6,\\ &\E\left[\norm{{\boldsymbol N}\odot {\boldsymbol R} \odot {\boldsymbol R} }_\mathrm{op}^k \right] \le C_7\frac{(\log m)^{k/2}}{m^{k/2}},\hspace{4mm} \norm{{\boldsymbol A} \odot {\boldsymbol R}}_\mathrm{op} \le C_8 \frac1{\sqrt{m}},\hspace{4mm} \norm{{\boldsymbol A} \odot {\boldsymbol R} \odot {\boldsymbol R}}_\mathrm{op} \le C_{9} \frac1{\sqrt{m}}, \end{align*} for some constants $C_i$ depending only on $\sOmega$. \end{lemma} \begin{proof} Using Lemma~\ref{lemma:subgaussian_max}, the first five inequalities are direct. 
Indeed, recalling that $m/d \to\widetilde\sgamma$, we have \begin{align*} &\norm{{\boldsymbol D}_l}_\mathrm{op} = \sup_{i\in[m]}\left|\sigma^\up{l}({\boldsymbol w}_i^\sT {\boldsymbol z})\right| \le \norm{\sigma^\up{l}}_\infty \le C_0,\\ &\E\left[\norm{{\boldsymbol M}}_\mathrm{op}^k \right]= \E\left[\sup_{i\in[m]} \left|{\boldsymbol w}_i^\sT {\boldsymbol z}\right|^k\right] \le C_1\left(\log m\right)^{k/2},\\ &\E\left[\norm{\widetilde{\boldsymbol M}}^k_\mathrm{op}\right] = \E \left[ \sup_{i\in[m]} \left| {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z}\right|^k\right] \le C_2 \frac{\left(\log{m}\right)^{k/2}}{m^{k/2}},\\ &\norm{{\boldsymbol A}}_\mathrm{op} \le \norm{{\boldsymbol W}}_\mathrm{op}\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} + \sup_{i}\left|{\boldsymbol w}_i^\sT {\boldsymbol \theta}_\up{i}\right| \le C_3 \frac{1}{m^{1/2}},\\ &\norm{{\boldsymbol R}}_\mathrm{op} \le \norm{{\boldsymbol W}}_\mathrm{op}^2 + \norm{{\boldsymbol I}}_\mathrm{op} \le C_4. \end{align*} For the remaining inequalities, let ${\boldsymbol B}\in\R^{m\times m}$ be an arbitrary fixed matrix and note that we have \begin{align} {\boldsymbol N} \odot {\boldsymbol B} &= \left({\boldsymbol \theta}_\up{j}^\sT {\boldsymbol z}\right)_{i,j\in[m]} \odot {\boldsymbol B} \quad-\quad \left({{\boldsymbol \theta}_\up{j}^\sT {\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z}} \right)_{i,j \in[m]}\odot {\boldsymbol B}\nonumber\\ &= {\boldsymbol B} \widetilde {\boldsymbol M} - ({\boldsymbol M} {\boldsymbol W}^\sT {\boldsymbol T}_{\boldsymbol \theta} ) \odot {\boldsymbol B} \nonumber\\ &= {\boldsymbol B} \widetilde {\boldsymbol M} - \left({\boldsymbol W}^\sT{\boldsymbol T}_{\boldsymbol \theta}\right) \odot ({\boldsymbol M} {\boldsymbol B}), \label{eq:hadamard_eq} \end{align} where the last equality holds because ${\boldsymbol M}$ is a diagonal matrix. 
Now recall that for the two square matrices ${\boldsymbol W}^\sT{\boldsymbol T}_{\boldsymbol \theta}$ and ${\boldsymbol M}{\boldsymbol B}$, we have (see for example~\cite{johnson1990matrix}, (3.7.9)) \begin{equation} \label{eq:hadamard_ineq} \norm{\left({\boldsymbol W}^\sT{\boldsymbol T}_{\boldsymbol \theta}\right)\odot \left({\boldsymbol M}{\boldsymbol B}\right) }_\mathrm{op} \le \left(\norm{{\boldsymbol I} \odot {\boldsymbol T}_{\boldsymbol \theta}^\sT{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \norm{{\boldsymbol I} \odot {\boldsymbol W}^\sT{\boldsymbol W}}_\mathrm{op}\right)^{1/2} \norm{{\boldsymbol M}{\boldsymbol B}}_\mathrm{op}. \end{equation} \noindent Combining~\eqref{eq:hadamard_eq} with~\eqref{eq:hadamard_ineq} we can write \begin{align} \E\left[\norm{{\boldsymbol N} \odot {\boldsymbol B}}_\mathrm{op}^k\right] &\le \E\left[\left(\norm{{\boldsymbol B}\widetilde {\boldsymbol M}}_\mathrm{op} + \norm{\left({\boldsymbol W}^\sT{\boldsymbol T}_{\boldsymbol \theta}\right)\odot \left({\boldsymbol M}{\boldsymbol B}\right) }_\mathrm{op} \right)^k\right] \nonumber\\ &\le \E\left[\left(\norm{{\boldsymbol B}\widetilde{\boldsymbol M}}_\mathrm{op} + \sup_{i\in[m]} \norm{{\boldsymbol \theta}_\up{i}}_2 \sup_{i\in[m]} \norm{{\boldsymbol w}_i}_2 \norm{{\boldsymbol M}{\boldsymbol B}}_\mathrm{op} \right)^k\right]\nonumber\\ &\le C_{10} \norm{{\boldsymbol B}}_\mathrm{op}^k \E\left[ \norm{\widetilde{\boldsymbol M}}_\mathrm{op}^k\right] + C_{10}\sup_{i\in[m]} \norm{{\boldsymbol \theta}_\up{i}}_2^k \norm{{\boldsymbol B}}_\mathrm{op}^k \E\left[ \norm{{\boldsymbol M}}_\mathrm{op}^k \right]\nonumber\\ &\le C_{11}\norm{{\boldsymbol B}}^k_\mathrm{op} \left( \frac{ (\log m)^{k/2} }{m^{k/2}} + \frac{ (\log m)^{k/2}}{d^{k/2}} \right)\label{eq:hadamard_ineq_2}. 
\end{align} Hence, using $\norm{{\boldsymbol R}}_\mathrm{op}\le C_4$ and $m/d\to\widetilde \sgamma$, we have $\E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R}}_\mathrm{op}^k \right] \le C_5 (\log m)^{k/2}/{m^{k/2}}$, which establishes the sixth bound. Now, note that \begin{align*} \norm{{\boldsymbol R} \odot {\boldsymbol R}}_\mathrm{op} &= \norm{{\boldsymbol W}^\sT {\boldsymbol W} \odot {\boldsymbol R}}_\mathrm{op}\\ &\stackrel{(a)}{\le} \norm{{\boldsymbol I} \odot {\boldsymbol W}^\sT {\boldsymbol W} }_\mathrm{op} \norm{{\boldsymbol R}}_\mathrm{op}\\ &= \sup_{i\in [m]} \left|{\boldsymbol w}_i^\sT {\boldsymbol w}_i\right| \norm{{\boldsymbol R}}_\mathrm{op}\\ &\le C_6, \end{align*} where $(a)$ follows using the same bound we applied to~\eqref{eq:hadamard_ineq}. This establishes the seventh bound in the lemma. Now, using~\eqref{eq:hadamard_ineq_2} and the bound applied to~\eqref{eq:hadamard_ineq} again gives $\E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}}_\mathrm{op}^k\right] \le C_7 {(\log m)^{k/2}}/{m^{k/2}}$, establishing the eighth bound. For the ninth bound, we first note that by definition of ${\boldsymbol A}$ and ${\boldsymbol R}$, ${\boldsymbol A} \odot {\boldsymbol R} ={\boldsymbol W}^\sT {\boldsymbol T}_{\boldsymbol \theta} \odot {\boldsymbol R}$, so that \begin{align*} \norm{{\boldsymbol A} \odot {\boldsymbol R}}_\mathrm{op} &\le \left(\norm{{\boldsymbol I} \odot {\boldsymbol W}^\sT {\boldsymbol W} }_\mathrm{op} \norm{{\boldsymbol I} \odot {\boldsymbol T}_{\boldsymbol \theta}^\sT{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}\right)^{1/2} \norm{{\boldsymbol R}}_\mathrm{op}\\ &\le \sup_{i\in[m]} \norm{{\boldsymbol \theta}_\up{i}}_2 \norm{{\boldsymbol R}}_\mathrm{op}\\ &\stackrel{(a)}{\le} C_8 \frac{1}{\sqrt{m}}, \end{align*} where in $(a)$ we used that $\norm{{\boldsymbol R}}_\mathrm{op} \le C_4$ and $\norm{{\boldsymbol \theta}_\up{i}}_2\le\textsf{R}/\sqrt{d}$ along with $m/d\to\widetilde\sgamma$.
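The diagonal Hadamard factors appearing in~\eqref{eq:hadamard_ineq} and in the bounds above can be evaluated explicitly, which is where the $1/\sqrt{m}$ rates come from: since the columns of ${\boldsymbol W}$ are unit vectors and $\norm{{\boldsymbol \theta}_\up{i}}_2\le \textsf{R}/\sqrt{d}$ on ${\mathcal S}_p$,
\begin{equation}
\nonumber
\norm{{\boldsymbol I} \odot {\boldsymbol W}^\sT{\boldsymbol W}}_\mathrm{op} = \sup_{i\in[m]} \norm{{\boldsymbol w}_i}_2^2 = 1, \qquad \norm{{\boldsymbol I} \odot {\boldsymbol T}_{\boldsymbol \theta}^\sT{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} = \sup_{i\in[m]} \norm{{\boldsymbol \theta}_\up{i}}_2^2 \le \frac{\textsf{R}^2}{d},
\end{equation}
and the assumption $m/d\to\widetilde\sgamma$ converts each factor of $1/\sqrt{d}$ into $1/\sqrt{m}$ up to a constant.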
Finally, using $\norm{{\boldsymbol R} \odot {\boldsymbol R}}_\mathrm{op} \le C_6$, a similar argument shows that $\norm{{\boldsymbol A} \odot {\boldsymbol R} \odot {\boldsymbol R}}_\mathrm{op} \le C_{9}/\sqrt{m}$, yielding the final bound of the lemma. \end{proof} \subsubsection{Bounding the first term in Eq.~\eqref{eq:decomposition}} \label{section:first_term_ntk} \begin{lemma} \label{lemma:first_term_ntk} For any $\delta>0$, we have \begin{equation} \label{eq:first_term_ntk} \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}}\left| \E\left[\left(\frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\Delta_i - 1 \right)\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right] \right| = 0. \end{equation} \end{lemma} \begin{proof} Fix $\delta>0$ throughout. Define, for convenience, \begin{equation} \nonumber U := \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z})\Delta_i.
\end{equation} Let us compute the expectation of $U$ and control its variance.\\ The expectation can be computed as \begin{align*} \E[U] &= \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} + \Delta_i - \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} \right) \right]\\ &= \E\left[ \frac{1}{\nu^2} \left({{\boldsymbol \theta}^\sT{\boldsymbol x}}\right)^2\right] + \frac{1}{\nu}\sum_{i=1}^m \E\left[{\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z})\left(\Delta_i - \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right]\\ &\stackrel{(a)}{=}1 +\frac{1}{\nu}\sum_{i=1}^m\E\left[{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol P}_i^\perp{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\Delta_i - \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right) \right] + \frac{1}{\nu}\sum_{i=1}^m{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol w}_i\E\left[{\boldsymbol w}_i^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\left(\Delta_i - \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right)\right] \\ &\stackrel{(b)}{=}1 + \frac{1}{\nu}\sum_{i=1}^m\E\left[{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol P}_i^\perp{\boldsymbol z} \left(\Delta_i - \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right) \right] \E\left[ \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \right]\\ &\hspace{10mm}+ \frac{1}{\nu}\sum_{i=1}^m{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol w}_i\E\left[{\boldsymbol w}_i^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\right]\E\left[\left(\Delta_i - \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right)\right] \\ &\stackrel{(c)}{=}1 \end{align*} where $(a)$ follows by the definition of $\nu$, $(b)$ follows by independence of $\Delta_i - {\boldsymbol \theta}^\sT {\boldsymbol x}/\nu$ and ${\boldsymbol w}_i^\sT {\boldsymbol z}$, which can be seen from the definition of $\Delta_i$, and $(c)$ follows by the assumption on $\sigma'$,
namely, that $\E[\sigma'(G)] = \E[ G\sigma'(G)] = 0$ for $G$ standard normal.\\ Now, we control $\Var(U)$. First, note that we can write $\Delta_i$ as \begin{equation} \label{eq:Delta_i_new_form} \Delta_i = \frac{{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})}{\nu} +\frac{1}{\nu}\sum_{j:j\neq i} \left\{{\boldsymbol \theta}_\up{j}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT{\boldsymbol z}) - {\boldsymbol \theta}_\up{j}^\sT {\boldsymbol P}_i^\perp {\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z})\right\}. \end{equation} Taylor expanding $\sigma'$ to the third order gives \begin{align} \nonumber \sigma'({\boldsymbol w}_j^\sT{\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z}) &= \sigma'({\boldsymbol w}_j^\sT{\boldsymbol z}) - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT{\boldsymbol z}) + \frac12 \rho_{ij}^2({\boldsymbol w}_i^\sT{\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT{\boldsymbol z})\\ &\hspace{10mm}-\frac16\rho_{ij}^3({\boldsymbol w}_i^\sT{\boldsymbol z})^3 \sigma^\up{4}(v_{ij}({\boldsymbol z}))\nonumber \end{align} for some $v_{ij}({\boldsymbol z})$ between ${\boldsymbol w}_j^\sT {\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z}$ and ${\boldsymbol w}_j^\sT{\boldsymbol z}$.
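For completeness, \eqref{eq:Delta_i_new_form} follows by substituting ${\boldsymbol \theta}^\sT{\boldsymbol x} = \sum_{j=1}^m {\boldsymbol \theta}_\up{j}^\sT{\boldsymbol z}\,\sigma'({\boldsymbol w}_j^\sT{\boldsymbol z})$ into the definition~\eqref{eq:Delta_i_def} and separating the $j=i$ term of the first sum. Note also the identity
\begin{equation}
\nonumber
{\boldsymbol w}_j^\sT{\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} = {\boldsymbol w}_j^\sT{\boldsymbol P}_i^\perp{\boldsymbol z},
\end{equation}
so that the subtracted sum in~\eqref{eq:Delta_i_def} is a function of ${\boldsymbol P}_i^\perp{\boldsymbol z}$ alone; for Gaussian ${\boldsymbol z}$ this makes it independent of ${\boldsymbol w}_i^\sT{\boldsymbol z}$, which is the independence invoked in step $(b)$ of the computation of $\E[U]$.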
Using this expansion and the notation $\widetilde {\boldsymbol \theta}_{j,i}$ defined earlier, $\Delta_i$ can be re-written as \begin{align} \label{eq:delta_i_expansion} \Delta_i = \frac{1}{\nu} {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) &+ \frac1{\nu}\sum_{j: j\neq i} {\boldsymbol \theta}_\up{j}^\sT {\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z})\\ &+ \frac1{\nu} \sum_{j: j\neq i}\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \left( \rho_{ij} {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) -\frac12\rho_{ij}^2({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT{\boldsymbol z})\right)\nonumber\\ &+ \frac1{6\nu}\sum_{j: j\neq i}\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \rho_{ij}^3 ({\boldsymbol w}_i^\sT{\boldsymbol z})^3 \sigma^\up{4}(v_{ij}({\boldsymbol z})).\nonumber \end{align} \noindent Using the expansion~\eqref{eq:delta_i_expansion} in the expression for $U$ gives \begin{align} U =& \frac{1}{\nu^2}\sum_{i=1}^m \left({\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z})\right)^2\label{eq:u_1_z}\\ &+ \frac1{\nu^2}\sum_{i,j: j\neq i} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}){\boldsymbol \theta}_\up{j}^\sT {\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z})\label{eq:u_2_z}\\ &+ \frac1{\nu^2} \sum_{i,j: j\neq i}{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \left\{ \rho_{ij} {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) -\frac12\rho_{ij}^2({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT{\boldsymbol z})\right\}\label{eq:u_3_z}\\ &+ \frac1{6\nu^2}\sum_{i,j: j\neq i}{\boldsymbol 
\theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \rho_{ij}^3 ({\boldsymbol w}_i^\sT{\boldsymbol z})^3 \sigma^\up{4}(v_{ij}({\boldsymbol z})).\label{eq:u_4_z} \end{align} Let us write $u_1({\boldsymbol z}),u_2({\boldsymbol z}),u_3({\boldsymbol z}),u_4({\boldsymbol z})$ for the terms on the right-hand side on lines~\eqref{eq:u_1_z},~\eqref{eq:u_2_z},~\eqref{eq:u_3_z},~\eqref{eq:u_4_z} respectively. Observe that \begin{equation} \label{eq:var_U_bound_in_terms_of_u} \Var(U)^{1/2} \le \sum_{l=1}^4 \Var(u_l({\boldsymbol z}))^{1/2} \stackrel{(a)}{\le}C_0 \sum_{l=1}^3 \left(\E\left[\norm{\grad u_l({\boldsymbol z})}_2^2\right]\right)^{1/2} + C_0\Var(u_4({\boldsymbol z}))^{1/2} \end{equation} where $(a)$ follows from the Gaussian Poincar\'e inequality. We control each summand directly. In doing so, we will make heavy use of the bounds in Lemma~\ref{lemma:op_norm_bounds}, often without explicit reference. First, let us bound the expected norm of the gradients in the above display.
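For the reader's convenience, we record the form of the Gaussian Poincar\'e inequality used in $(a)$: for ${\boldsymbol z}\sim\cN(0,{\boldsymbol I}_d)$ and any differentiable $f:\R^d\to\R$ with $\E\left[\norm{\grad f({\boldsymbol z})}_2^2\right]<\infty$,
\begin{equation}
\nonumber
\Var\left(f({\boldsymbol z})\right) \le \E\left[\norm{\grad f({\boldsymbol z})}_2^2\right].
\end{equation}
Applying this to $u_1,u_2,u_3$ reduces the variance control to bounding expected squared gradient norms; the remainder term $u_4$ is handled separately since it involves the intermediate points $v_{ij}({\boldsymbol z})$.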
For $\E\left[\norm{\grad u_1({\boldsymbol z})}_2^2\right]$ we have the bound \begin{align*} \E\left[\norm{\grad u_1({\boldsymbol z})}_2^2\right] &= \E\left[\norm{ \frac2{\nu^2} \sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})^2 {\boldsymbol \theta}_\up{i} + \frac2{\nu^2} \sum_{i=1}^m ({\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z})^2 \sigma''({\boldsymbol w}_i^\sT {\boldsymbol z})\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}){\boldsymbol w}_i}_2^2\right]\nonumber\\ &\le \frac8{\nu^4}\E\left[\norm{ \sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})^2 {\boldsymbol \theta}_\up{i}}_2^2\right]\\ &\hspace{10mm}+ \frac8{\nu^4}\E\left[\norm{ \sum_{i=1}^m ({\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z})^2 \sigma''({\boldsymbol w}_i^\sT {\boldsymbol z})\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}){\boldsymbol w}_i}_2^2\right]\nonumber\\ &\le \frac{8}{\nu^4}\E\left[\norm{{\boldsymbol T}_{\boldsymbol \theta}{\boldsymbol D}_1^2{\boldsymbol T}_{\boldsymbol \theta}^\sT{\boldsymbol z}}_2^2\right] + \frac{8}{\nu^4} \E\left[\norm{{\boldsymbol W}{\boldsymbol D}_1{\boldsymbol D}_2\left(({\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z})^2\right)_{i\in[m]}}_2^2\right]\nonumber\\ &\le \frac{8}{\nu^4} \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^4 \E\left[ \norm{{\boldsymbol D}_1}^4_\mathrm{op} \norm{{\boldsymbol z}}_2^2\right] + \frac{8}{\nu^4}\norm{{\boldsymbol W}}^2_\mathrm{op} \E\left[ \norm{{\boldsymbol D}_1{\boldsymbol D}_2}^2_\mathrm{op} \norm{\left( ({\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z})^2 \right)_{i\in[m]}}_2^2\right]\nonumber\\ &\stackrel{(a)}{\le} \frac{C_1}{\nu^4} \frac{1}{d} + \frac{C_2}{\nu^4} \sum_{i=1}^m \E\left[({\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z})^4 \right]\nonumber\\ &= \frac{C_1}{\nu^4} \frac{1}{d} + \frac{C_2}{\nu^4}\E[G^4] \sum_{i=1}^m\norm{{\boldsymbol \theta}_\up{i}}_2^4\\ &\stackrel{(b)}{\le} C_3(\delta) \left(\frac{1}{d}+ 
\frac{m}{d^2}\right) \end{align*} for all ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$, where $C_1,C_2 >0$ depend only on $\sOmega$, $C_3(\delta)>0$ depends on $\sOmega$ and $\delta>0$, and $G$ is a standard normal variable. Here, $(a)$ follows from the bound $\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}\le \textsf{R} /\sqrt{d}$ for ${\boldsymbol \theta}\in{\mathcal S}_p$ along with the bounds in Lemma~\ref{lemma:op_norm_bounds}, and $(b)$ follows from $\nu^2 = \nu^2_{{\boldsymbol \theta}} > \delta$ for ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$, and the bound $\norm{{\boldsymbol \theta}_\up{i}}_2 \le \textsf{R}/\sqrt{d}$. Taking the supremum over ${\mathcal S}_{p,\delta}$ then sending $n\to\infty$ shows that \begin{equation} \label{eq:u_1_var_asymp} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \E\left[\norm{\grad u_1({\boldsymbol z})}_2^2 \right] = 0. \end{equation} Now, the gradient of $u_2({\boldsymbol z})$ can be computed as \begin{align} \grad u_2({\boldsymbol z}) &=\frac{1}{\nu^2} \sum_{i,j:i\neq j} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) {\boldsymbol \theta}_\up{j}^\sT{\boldsymbol w}_i {\boldsymbol w}_i^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z}) {\boldsymbol \theta}_\up{i}\nonumber\\ &\hspace{10mm}+ \frac{1}{\nu^2} \sum_{i,j:i\neq j} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma''({\boldsymbol w}_i^\sT{\boldsymbol z}) {\boldsymbol \theta}_\up{j}^\sT{\boldsymbol w}_i {\boldsymbol w}_i^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z}) {\boldsymbol w}_i\nonumber\\ &\hspace{10mm}+\frac{1}{\nu^2} \sum_{i,j:i\neq j} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) {\boldsymbol \theta}_\up{j}^\sT{\boldsymbol w}_i \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z}){\boldsymbol w}_i\nonumber\\ &\hspace{10mm}+ \frac{1}{\nu^2} \sum_{i,j:i\neq j} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})
{\boldsymbol \theta}_\up{j}^\sT{\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}){\boldsymbol w}_j\nonumber\\ &=\frac1{\nu^2} {\boldsymbol T}_{\boldsymbol \theta} {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) + \frac1{\nu^2} {\boldsymbol W} {\boldsymbol D}_2 \widetilde {\boldsymbol M} {\boldsymbol M} {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) \nonumber\\ &\hspace{10mm}+ \frac1{\nu^2} {\boldsymbol W} {\boldsymbol D}_1 \widetilde{\boldsymbol M} {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) + \frac1{\nu^2} {\boldsymbol W} {\boldsymbol D}_2 {\boldsymbol A} {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z} \label{eq:grad_u_2}, \end{align} where we recall that ${\boldsymbol \sigma}({\boldsymbol v})$ denotes the vector whose $i$th entry is $\sigma(v_i)$. We have the following bounds on the expected norm squared of each term in~\eqref{eq:grad_u_2}: for the first of these terms, \begin{align} \frac{1}{\nu^4} \E\left[ \norm{ {\boldsymbol T}_{\boldsymbol \theta} {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) }_2^2 \right] &\le \frac{1}{\nu^4}\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^2 \norm{{\boldsymbol A}}_\mathrm{op}^2 \E\left[ \norm{{\boldsymbol D}_1}_\mathrm{op}^2 \norm{{\boldsymbol M}}_\mathrm{op}^2 \norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z})}^2_2 \right]\nonumber\\ &\le\frac{C_4}{\nu^4 d m}\E\left[ \norm{{\boldsymbol M}}_\mathrm{op}^4\right]^{1/2}\E\left[\norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z})}^4_2 \right]^{1/2}\nonumber\\ &\le\frac{C_4 \log m}{\nu^4 d m} \E\left[ \left(\sum_{i=1}^m \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \right)^2\right]^{1/2} \nonumber\\ &\stackrel{(a)}{\le}C_5(\delta)\left(\frac{ \log m }{d}\right),\nonumber 
\end{align} for all ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$, where $C_4 >0$ depends only on $\sOmega$, and $C_5>0$ depends only on $\sOmega$ and $\delta$. Note that in $(a)$ we used that $\norm{\sigma'}_\infty$ is finite. Moving on to bound the norm squared of the second term in~\eqref{eq:grad_u_2}, we have \begin{align} &\frac{1}{\nu^4 } \E\left[ \norm{ {\boldsymbol W} {\boldsymbol D}_2\widetilde{\boldsymbol M}\bM {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z}) }_2^2 \right]\nonumber\\ &\hspace{45mm}\le \frac{1}{\nu^4} \norm{{\boldsymbol W}}^2_\mathrm{op} \norm{{\boldsymbol A}}_\mathrm{op}^2 \E\left[ \norm{{\boldsymbol D}_2}_\mathrm{op}^2 \norm{\widetilde {\boldsymbol M}}_\mathrm{op}^2\norm{{\boldsymbol M}}_\mathrm{op}^2 \norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z})}_2^2 \right]\nonumber\\ &\hspace{45mm}\le C_6(\delta) \frac{ 1}{m} \E\left[\norm{\widetilde {\boldsymbol M}}_\mathrm{op}^6 \right]^{1/3} \E\left[ \norm{{\boldsymbol M}}_\mathrm{op}^6 \right]^{1/3} \E\left[ \norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z})}_2^6\right]^{1/3}\nonumber\\ &\hspace{45mm}\le C_7(\delta)\frac{(\log m)^2}{m}.\nonumber \end{align} Similarly, the expected norm squared of the third term in~\eqref{eq:grad_u_2} is bounded as \begin{align} \frac{1}{\nu^4}\E\left[ \norm{{\boldsymbol W} {\boldsymbol D}_1 \widetilde{\boldsymbol M} {\boldsymbol A} {\boldsymbol \sigma}'({\boldsymbol W}^\sT{\boldsymbol z})}_2^2 \right] &\le \frac{C_8}{\nu^4} \norm{{\boldsymbol W}}_\mathrm{op}^2 \norm{{\boldsymbol A}}_\mathrm{op}^2 \E\left[ \norm{\widetilde {\boldsymbol M} }_\mathrm{op}^4\right]^{1/2} \E\left[\norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z})}_2^4\right]^{1/2}\nonumber\\ &\le C_9(\delta)\frac{\log m}{m},\nonumber \end{align} and finally, for the fourth term in~\eqref{eq:grad_u_2} we have \begin{align} \frac{1}{\nu^4}\E\left[ \norm{{\boldsymbol W} {\boldsymbol D}_2 {\boldsymbol A} {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol
T}_{\boldsymbol \theta}^\sT {\boldsymbol z}}_2^2 \right] & \le \frac{1}{\nu^4} \norm{{\boldsymbol W}}_\mathrm{op}^2 \norm{{\boldsymbol A}}_\mathrm{op}^2 \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^2 \E\left[ \norm{{\boldsymbol D}_2}_\mathrm{op}^2 \norm{{\boldsymbol D}_1}_\mathrm{op}^2 \norm{{\boldsymbol M}}_\mathrm{op}^2 \norm{{\boldsymbol z}}_2^2 \right]\nonumber\\ &\le C_{10}(\delta)\frac{\log m}{m}.\nonumber \end{align} Hence, we similarly conclude that \begin{equation} \label{eq:u_2_var_asymp} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \E\left[\norm{\grad u_2({\boldsymbol z})}_2^2 \right] = 0. \end{equation} Now moving on to $u_3({\boldsymbol z})$, we can write \begin{align} \grad u_3({\boldsymbol z})=&\frac1{\nu^2 }\sum_{i,j: i\neq j} \Bigg( \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \left(\rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) - \frac12\rho_{ij}^2 ({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right){\boldsymbol \theta}_\up{i}\nonumber\\ &\hspace{10mm}+ {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma''({\boldsymbol w}_i^\sT {\boldsymbol z}) \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \left(\rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) - \frac12\rho_{ij}^2 ({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right){\boldsymbol w}_i\nonumber\\ &\hspace{10mm}+ {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \left(\rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) - \frac12\rho_{ij}^2 ({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right)\widetilde {\boldsymbol \theta}_{j,i}\nonumber\\ &\hspace{10mm}+{\boldsymbol
\theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \left(\rho_{ij} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) - \rho_{ij}^2 {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right){\boldsymbol w}_i\nonumber\\ &\hspace{10mm}+ {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \left(\rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z} \sigma'''({\boldsymbol w}_j^\sT {\boldsymbol z}) - \frac12\rho_{ij}^2 ({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma^\up{4}({\boldsymbol w}_j^\sT {\boldsymbol z}) \right){\boldsymbol w}_j \Bigg).\nonumber \end{align} This can be rewritten as \begin{align} \grad u_3({\boldsymbol z}) =&\frac1{\nu^2} \Bigg( {\boldsymbol T}_{\boldsymbol \theta} {\boldsymbol D}_1 {\boldsymbol M} \left( ({\boldsymbol N} \odot {\boldsymbol R}) {\boldsymbol \sigma}''({\boldsymbol W}^\sT{\boldsymbol z}) - \frac12{\boldsymbol M} ({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}){\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z}) \right) \label{eq:big_eq_1}\\ &\hspace{10mm}+ {\boldsymbol W}\widetilde{\boldsymbol M} {\boldsymbol D}_2 {\boldsymbol M} \left( ({\boldsymbol N} \odot {\boldsymbol R}) {\boldsymbol \sigma}''({\boldsymbol W}^\sT{\boldsymbol z}) - \frac12{\boldsymbol M}({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R} ){\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z}) \right) \label{eq:big_eq_2}\\ &\hspace{10mm}+ {\boldsymbol T}_{\boldsymbol \theta} \left( {\boldsymbol D}_2 {\boldsymbol R} - \frac12{\boldsymbol D}_3 ({\boldsymbol R} \odot {\boldsymbol R}){\boldsymbol M} \right){\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z} \label{eq:big_eq_3}\\ &\hspace{10mm}+ {\boldsymbol W}\widetilde {\boldsymbol M} {\boldsymbol D}_1 {\boldsymbol M} \left( ({\boldsymbol A} \odot {\boldsymbol
R})\sigma''({\boldsymbol W}^\sT {\boldsymbol z}) - \frac12{\boldsymbol M} ({\boldsymbol A} \odot {\boldsymbol R} \odot {\boldsymbol R}) \sigma'''({\boldsymbol W}^\sT {\boldsymbol z})\right) \label{eq:big_eq_4}\\ &\hspace{10mm}+ {\boldsymbol W} {\boldsymbol D}_1 \widetilde {\boldsymbol M} \left( ({\boldsymbol N} \odot {\boldsymbol R}) \sigma''({\boldsymbol W}^\sT {\boldsymbol z}) - {\boldsymbol M} ({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}) \sigma'''({\boldsymbol W}^\sT {\boldsymbol z}) \right) \label{eq:big_eq_5}\\ &\hspace{10mm}+ {\boldsymbol W} \left( {\boldsymbol D}_3 ({\boldsymbol N} \odot {\boldsymbol R}) - \frac12{\boldsymbol D}_4 ({\boldsymbol N} \odot{\boldsymbol R}\odot {\boldsymbol R}) {\boldsymbol M} \right) {\boldsymbol M} {\boldsymbol D}_1 {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z} \Bigg). \label{eq:big_eq_6} \end{align} Let us again bound the expected norm squared of each of the terms in the previous display. For the terms on lines~\eqref{eq:big_eq_1} and~\eqref{eq:big_eq_2} we have \begin{align*} &\E\left[ \norm{ \left({\boldsymbol T}_{\boldsymbol \theta}{\boldsymbol D}_1 {\boldsymbol M} \right)\left( ({\boldsymbol N} \odot {\boldsymbol R}) {\boldsymbol \sigma}''({\boldsymbol W}^\sT {\boldsymbol z}) - \frac12{\boldsymbol M} ({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}) {\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z}) \right)}_2^2\right]\\ &\hspace{30mm}\le \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^2 \Bigg( \E\left[ \norm{{\boldsymbol D}_1 {\boldsymbol M} \left({\boldsymbol N} \odot {\boldsymbol R} \right) {\boldsymbol \sigma}''\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_\mathrm{op}^2 \right]\\ &\hspace{40mm}+ \frac12\E\left[ \norm{{\boldsymbol D}_1 {\boldsymbol M}^2 \left({\boldsymbol N} \odot {\boldsymbol R} \odot{\boldsymbol R} \right) {\boldsymbol \sigma}'''\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_2^2 \right] \Bigg)\\ &\hspace{30mm}\le C \norm{{\boldsymbol 
T}_{\boldsymbol \theta}}_2^2 \bigg( \E \left[ \norm{{\boldsymbol M}}^6_\mathrm{op}\right]^{1/3} \E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R}}_\mathrm{op}^6\right]^{1/3} \E\left[\norm{{\boldsymbol \sigma}''({\boldsymbol W}^\sT {\boldsymbol z})}_2^6 \right]^{1/3}\\ &\hspace{40mm}+ \E \left[ \norm{{\boldsymbol M}}^{12}_\mathrm{op}\right]^{1/3} \E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R} }_\mathrm{op}^6\right]^{1/3} \E\left[\norm{{\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z})}_2^6 \right]^{1/3} \bigg)\\ &\hspace{30mm}\stackrel{(a)}{\le} C_{11} \frac{(\log m)^3}{d}, \end{align*} where in $(a)$ we used that $\norm{\sigma^\up{l}}_\infty < \infty$. A similar calculation shows that \begin{align*} &\E\left[ \norm{ \left({\boldsymbol W}\widetilde {\boldsymbol M} {\boldsymbol D}_2{\boldsymbol M} \right)\left( ({\boldsymbol N} \odot {\boldsymbol R}) {\boldsymbol \sigma}''({\boldsymbol W}^\sT {\boldsymbol z}) - \frac12{\boldsymbol M} ({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}) {\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z}) \right)}_2^2\right]\\ &\hspace{10mm}\le C_{12}\norm{{\boldsymbol W}}_\mathrm{op}^2 \bigg( \E\left[ \norm{\widetilde {\boldsymbol M} {\boldsymbol D}_2{\boldsymbol M} \left({\boldsymbol N} \odot {\boldsymbol R} \right) {\boldsymbol \sigma}''\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_2^2 \right]\\ &\hspace{20mm}+ \E\left[ \norm{\widetilde{\boldsymbol M} {\boldsymbol D}_2 {\boldsymbol M}^2 \left({\boldsymbol N} \odot {\boldsymbol R} \odot{\boldsymbol R} \right) {\boldsymbol \sigma}'''\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_2^2 \right] \bigg)\\ &\hspace{10mm}\le C_{12} \norm{{\boldsymbol W}}_\mathrm{op}^2 \bigg( \E \left[ \norm{\widetilde{\boldsymbol M}}^8_\mathrm{op}\right]^{1/4} \E \left[ \norm{{\boldsymbol M}}^8_\mathrm{op}\right]^{1/4} \E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R}}_\mathrm{op}^8\right]^{1/4} \E\left[\norm{{\boldsymbol 
\sigma}''({\boldsymbol W}^\sT {\boldsymbol z})}_2^8 \right]^{1/4}\\ &\hspace{20mm}+ \E \left[ \norm{{\boldsymbol M}}^{16}_\mathrm{op}\right]^{1/4} \E \left[ \norm{\widetilde{\boldsymbol M}}^8_\mathrm{op}\right]^{1/4} \E\left[ \norm{{\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R} }_\mathrm{op}^8\right]^{1/4} \E\left[\norm{{\boldsymbol \sigma}'''({\boldsymbol W}^\sT {\boldsymbol z})}_2^8 \right]^{1/4} \bigg)\\ &\hspace{10mm}\le C_{13} \frac{(\log m)^4}{m}. \end{align*}\\ For the term on line~\eqref{eq:big_eq_3}, \begin{align*} &\E\left[\norm{{\boldsymbol T}_{\boldsymbol \theta}\left( {\boldsymbol D}_2 {\boldsymbol R} -\frac12 {\boldsymbol D}_3 \left({\boldsymbol R} \odot {\boldsymbol R}\right){\boldsymbol M} \right) {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z} }_2^2 \right] \\ &\hspace{50mm}\le C \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^4 \Big( \norm{{\boldsymbol R}}_\mathrm{op}^2\E\left[ \norm{{\boldsymbol D}_2}_\mathrm{op}^2 \norm{{\boldsymbol D}_1}_\mathrm{op}^2 \norm{{\boldsymbol M}}_\mathrm{op}^2\norm{{\boldsymbol z}}_2^2 \right]\\ &\hspace{60mm}+ \norm{{\boldsymbol R}\odot {\boldsymbol R}}_\mathrm{op}^2\E\left[ \norm{{\boldsymbol D}_3}_\mathrm{op}^2 \norm{{\boldsymbol M}}_\mathrm{op}^2 \norm{{\boldsymbol D}_1}_\mathrm{op}^2 \norm{{\boldsymbol M}}_\mathrm{op}^2 \norm{{\boldsymbol z}}_2^2 \right] \Big)\\ &\hspace{50mm}\stackrel{(a)}{\le} C_{14} \frac{(\log m)^2}{d}. 
\end{align*} For the term on line~\eqref{eq:big_eq_4}, an analogous calculation shows that \begin{align*} &\E\left[ \norm{{\boldsymbol W}\widetilde {\boldsymbol M} {\boldsymbol D}_1 {\boldsymbol M} {\boldsymbol F}\hspace{-0.6mm} \left(\hspace{-1mm}({\boldsymbol A} \odot {\boldsymbol R})\sigma''({\boldsymbol W}^\sT {\boldsymbol z}) - \frac12{\boldsymbol M} ({\boldsymbol A} \odot {\boldsymbol R} \odot {\boldsymbol R}) \sigma'''({\boldsymbol W}^\sT {\boldsymbol z})\right)}_2^2 \right]\le C_{15}\frac{(\log m)^3}{m}, \end{align*} and then similarly for~\eqref{eq:big_eq_5}, and~\eqref{eq:big_eq_6} we have \begin{align*} \E\left[ \norm{ {\boldsymbol W} {\boldsymbol D}_1 \widetilde {\boldsymbol M} \left( ({\boldsymbol N} \odot {\boldsymbol R}) \sigma''({\boldsymbol W}^\sT {\boldsymbol z}) - {\boldsymbol M} ({\boldsymbol N} \odot {\boldsymbol R} \odot {\boldsymbol R}) \sigma'''({\boldsymbol W}^\sT {\boldsymbol z}) \right) }_2^2\right] \le C_{16}\frac{(\log m)^3}{m}, \end{align*} and \begin{align*} \E\left[ \norm{{\boldsymbol W} \left( {\boldsymbol D}_3 ({\boldsymbol N} \odot {\boldsymbol R}) - \frac12{\boldsymbol D}_4 ({\boldsymbol N} \odot{\boldsymbol R}\odot {\boldsymbol R}) {\boldsymbol M} \right) {\boldsymbol M} {\boldsymbol D}_1 {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z}}_2^2 \right] \le C_{17}\frac{(\log m)^3}{m}, \end{align*} respectively. These bounds then give \begin{equation} \label{eq:u_3_var_asymp} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \E\left[\norm{\grad u_3({\boldsymbol z})}_2^2 \right]= 0. \end{equation} What remains is the term $\Var(u_4({\boldsymbol z}))^{1/2}$. 
However, this can be bounded naively as \begin{align*} \Var(u_4({\boldsymbol z})) &\le \frac1{36\nu^4}\E\left[u_4({\boldsymbol z})^2 \right] \\ &\le \frac{m^4}{36\nu^4} \E\left[\sup_{i\neq j} \left| {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \rho_{i,j}^3 ({\boldsymbol w}_i^\sT {\boldsymbol z})^3 \sigma^\up{4}(v_{ij}({\boldsymbol z}))\right|^2 \right] \\ &\le C_{18} \frac{m^4}{\nu^4} \left( \E\left[\sup_{i\in[m]}\left| {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z}\right|^2 \sup_{i\neq j}\left| \widetilde{\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z}\right|^2 \sup_{i\in[m]}\left| {\boldsymbol w}_i^\sT {\boldsymbol z}\right|^6 \right] \sup_{i\neq j} \left|\rho_{i,j}\right|^6 \right)\\ &\stackrel{(a)}{\le} C_{19} \frac{m^4}{\nu^4} \left( \frac{\log m}{m}\frac{\log m}{m} (\log m)^3\left(\frac{\log m}{d}\right)^3 \right)\\ &\le C_{20} \frac{m^2}{\nu^4} \frac{(\log m)^8}{d^3}, \end{align*} where $(a)$ follows from an application of H\"older's inequality and Lemma~\ref{lemma:op_norm_bounds}. Hence we have \begin{equation} \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \Var(u_4({\boldsymbol z})) = 0. \end{equation} Combining this with \eqref{eq:u_1_var_asymp}, \eqref{eq:u_2_var_asymp}, and \eqref{eq:u_3_var_asymp} gives \begin{equation} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \Var(U)= 0. \end{equation} Therefore, we can control~\eqref{eq:first_term_ntk} as \begin{align*} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \left|\E\left[(U-1)\chi'\left(\frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right)\right]\right| &\le \norm{\chi'}_\infty \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \left( \Var(U)^{1/2} + |\E[U - 1]| \right)\\ &= 0 \end{align*} by the previous display and the computation showing $\E[U] = 1$.
\end{proof} \subsubsection{Bounding the second term in Eq.~\eqref{eq:decomposition}} \label{section:second_term_ntk} \begin{lemma} \label{lemma:second_term_ntk} For any $\delta>0$, we have \begin{equation} \nonumber \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \Bigg(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \Bigg) \right] \right|= 0. \end{equation} \end{lemma} \begin{proof} Let us define the event \begin{align*} \mathcal{A}&:= \left\{ \sup_{i\in[m]} \left| \frac{1}{\norm{{\boldsymbol \theta}_\up{i}}} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\right| \le \left(\log m\right)^{50} \right\} \bigcap\left\{ \sup_{i\in[m]} \left| {\boldsymbol w}_i^\sT{\boldsymbol z}\right| \le \left(\log m\right)^{50} \right\}\\ &\hspace{70mm}\bigcap \left\{\sup_{\{(i,j)\in[m]^2 : i\neq j\}} \left| \frac{1} {\norm{\widetilde{\boldsymbol \theta}_{j,i}}_2 }\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \right| \le \left(\log m\right)^{50} \right\}. \end{align*} Using the fact that, for subgaussian random variables $v_i$ (not necessarily independent) with subgaussian norm $\textsf{K}_v$, \begin{equation*} \P\left(\sup_{i\in[m]}|v_i| > \textsf{K}_v\sqrt{2\log m} + t \right) \le\exp\left\{-\frac{t^2}{2\textsf{K}_v^2}\right\}, \end{equation*} we obtain \begin{align*} \P\left( \mathcal{A}^c \right) \le 3\exp\left\{-\frac{c_0(\log m)^{99}}{2}\right\} \end{align*} for some universal constant $c_0\in(0,\infty)$. Hence, it is sufficient to establish the desired bound on the set $\mathcal{A}$.
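For concreteness, taking $t = \frac12(\log m)^{50}$ in the maximal inequality above makes the threshold at most $(\log m)^{50}$ once $m$ is large enough, so each of the three suprema defining $\mathcal{A}$ satisfies \begin{equation*} \P\left( \sup_i |v_i| > (\log m)^{50} \right) \le \exp\left\{-\frac{(\log m)^{100}}{8\textsf{K}_v^2}\right\} \le \exp\left\{-\frac{c_0(\log m)^{99}}{2}\right\} \end{equation*} for $m$ large (for the third event the supremum runs over $m(m-1)$ pairs, which only changes absolute constants), and the bound on $\P(\mathcal{A}^c)$ follows by a union bound over the three events.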
Indeed, suppose \begin{equation} \label{eq:second_term_ntk_target} \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}}\hspace{-1mm} \left| \E\hspace{-1mm}\left[\hspace{-1mm} \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \Bigg(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \Bigg) \mathbf{1}_{\mathcal{A}} \right] \right|= 0, \end{equation} then \begin{align*} &\lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \Bigg(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \Bigg) \right] \right| \\ &\hspace{20mm}\stackrel{(a)}{\le } \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \frac{C_1 m}{\nu} \left(\norm{\chi}_\infty \vee\norm{\chi'}_\infty\right) \sup_{i\in[m]}\norm{{\boldsymbol \theta}_\up{i}}_2 \E\left[ \norm{{\boldsymbol z}}_2 \Big( 2+ \sup_{i\in[m]}|\Delta_i|\Big) \mathbf{1}_{\mathcal{A}^c} \right]\\ &\hspace{20mm}\le \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \frac{C_1 m}{\nu^2} \left(\norm{\chi}_\infty \vee\norm{\chi'}_\infty\right) \sup_{i\in[m]}\norm{{\boldsymbol \theta}_\up{i}}_2\dots\\ &\hspace{50mm}\dots \E\left[ \norm{{\boldsymbol z}}_2 \Big( 2\nu + \norm{{\boldsymbol \theta}}_2\norm{{\boldsymbol x}}_2 + C_2 m \sup_{j\in[m]}\norm{{\boldsymbol \theta}_\up{j}}_2\norm{{\boldsymbol z}}_2\Big) \mathbf{1}_{\mathcal{A}^c} \right]\\ &\hspace{20mm}\stackrel{(b)}{\le } \lim_{n\to\infty}\sup_{{\boldsymbol 
\theta}\in{\mathcal S}_{p,\delta}} C_4(\nu) m^2 \exp\left\{-\frac{c_0(\log m)^{99}}{2}\right\}\\ &\hspace{20mm}{\le } \lim_{n\to\infty} C_4(\delta) m^2 \exp\left\{-\frac{c_0(\log m)^{99}}{2}\right\}\\ &\hspace{20mm}=0, \end{align*} where $(a)$ follows by a naive bound on $\Delta_i$ and $(b)$ follows by an application of H\"older's inequality. Hence, throughout we work on the event $\mathcal{A}$.

By Lemma 2.4 of~\cite{chen2011normal}, $\chi' = \chi'_\varphi$ is differentiable and $\norm{\chi''}_\infty \le C_0$ since $\varphi$ is assumed to be differentiable with bounded derivative. Hence, \begin{align} \label{eq:2nd_term_taylor} \left|\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) -\chi\left( \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i \chi' \left( \frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right| &\le C_0 \left|\Delta_i \right|^2. \end{align} Using this in~\eqref{eq:second_term_ntk_target} we obtain \begin{align} &\left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right) \mathbf{1}_{\mathcal{A}} \right] \right| \nonumber \\ &\hspace{70mm}\stackrel{(a)}{\le} C_0 \E\left[ \frac{1}{\nu}\sum_{i=1}^m \left|{\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\right| \Delta_i^2 \mathbf{1}_\mathcal{A} \right] \nonumber \\ &\hspace{70mm}\stackrel{(b)}{\le} \frac{C_1}{\nu} \E\left[\sup_{i\in[m]}\left|\frac{1}{\norm{{\boldsymbol \theta}_\up{i}}_2} {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\right| \sum_{i=1}^m \norm{{\boldsymbol \theta}_\up{i}}_2 \Delta_i^2\mathbf{1}_{\mathcal{A}} \right]\nonumber\\ &\hspace{70mm}\stackrel{(c)}{\le} \frac{C_2}{\nu}
\frac{(\log m)^{50}}{m^{1/2}} \sum_{i=1}^m \E\left[ \Delta_i^2\mathbf{1}_{\mathcal{A}} \right], \label{eq:delta_i_square_bound} \end{align} where $(a)$ follows from~\eqref{eq:2nd_term_taylor}, $(b)$ follows from boundedness of $\norm{\sigma'}_\infty$, and $(c)$ follows from $\norm{{\boldsymbol \theta}_\up{i}}_2 \le \textsf{R}/\sqrt{d}$ and the definition of $\mathcal{A}$. Now recall the form of $\Delta_i$ introduced in Eq.~\eqref{eq:Delta_i_new_form} and let us again Taylor expand $\sigma'$ to write \begin{align} \Delta_i &= \frac{1}{\nu} {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) + \frac1{\nu}\sum_{j: j\neq i} {\boldsymbol \theta}_\up{j}^\sT {\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z})\nonumber\\ &\hspace{10mm}+ \frac1{\nu} \sum_{j: j\neq i}\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \rho_{ij} {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z})- \frac1{\nu}\sum_{j:j\neq i} \widetilde {\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \rho_{ij}^2({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \sigma'''(v_{j,i}({\boldsymbol z}))\nonumber\\ &=: d_{1,i} + d_{2,i} + d_{3,i} + d_{4,i} \label{eq:delta_i_to_d_k_i} \end{align} for some $v_{j,i}({\boldsymbol z})$ between ${\boldsymbol w}_j^\sT {\boldsymbol z}$ and ${\boldsymbol w}_j^\sT{\boldsymbol z} - \rho_{ij}{\boldsymbol w}_i^\sT{\boldsymbol z}$. We show that for each $k\in [4]$, \begin{equation} \label{eq:d_ki_bounds} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \frac{1}{\nu} \frac{(\log m)^{50}}{m^{1/2}} \sum_{i=1}^m\E\left[d_{k,i}^2\mathbf{1}_\mathcal{A}\right]=0. 
\end{equation} For the contributions of $d_{1,i}$, we have \begin{align*} \frac{1}{\nu}\frac{(\log m)^{50}}{m^{1/2}}\sum_{i=1}^m \E\left[ d_{1,i}^2\mathbf{1}_\mathcal{A}\right] &\le C_3\frac1{\nu^3} \frac{(\log m)^{50}}{m^{1/2}} \sum_{i=1}^m\E\left[ ({\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z})^2 \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z})^2\mathbf{1}_\mathcal{A} \right]\\ &\le C_4\frac1{\nu^3} \frac{(\log m)^{50}}{m^{1/2}} \E\left[\norm{{\boldsymbol T}_{\boldsymbol \theta} ^\sT {\boldsymbol z}}_2^2 \right]\\ &\le C_4\frac1{\nu^3}\frac{(\log m)^{50}}{m^{1/2}}\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^2 \E\left[\norm{{\boldsymbol z}}_2^2\right]\\ &\le C_5(\delta)\left( \frac{(\log m)^{50} }{m^{1/2}}\right) \end{align*} uniformly over ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$. Taking the supremum over ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$ and sending $n\to\infty$ proves~\eqref{eq:d_ki_bounds} for $k=1$. For $d_{2,i}$, \begin{align*} \frac{1}{\nu}\frac{(\log m)^{50}}{m^{1/2}}\sum_{i=1}^m \E\left[d_{2,i}^2\mathbf{1}_{\mathcal{A}} \right] &\le \frac{C_{6}}{\nu^3} \frac{(\log m)^{50}}{m^{1/2}}\E\left[ \sum_{i=1}^m( {\boldsymbol w}_i^\sT {\boldsymbol z})^2 \left(\sum_{j\neq i} {\boldsymbol \theta}_\up{j}^\sT {\boldsymbol w}_i \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z}) \right)^2 \mathbf{1}_{\mathcal{A}}\right] \\ &\stackrel{(a)}\le \frac{C_{7}}{\nu^3} \frac{ (\log m)^{150}}{m^{1/2}}\E\left[\norm{ {\boldsymbol A}^\sT {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) }_2^2 \right] \\ &\le C_{8}(\delta)\frac{ (\log m)^{150}}{m^{1/2}} \end{align*} uniformly over ${\mathcal S}_{p,\delta}$, where $(a)$ holds by the definition of $\mathcal{A}$. Sending $n\to\infty$ shows~\eqref{eq:d_ki_bounds} for $k=2$.
Similarly, we have \begin{align*} \frac{1}{\nu}\frac{(\log m)^{50}}{m^{1/2}} \sum_{i=1}^m \E\left[d_{3,i}^2\mathbf{1}_\mathcal{A}\right] &\le \frac{C_{9}}{\nu^3}\frac{(\log m)^{50}}{m^{1/2}} \E\left[\sum_{i=1}^m ({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \left(\sum_{j\neq i} \widetilde{\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \rho_{ij} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right)^2 \mathbf{1}_\mathcal{A}\right]\\ &\le \frac{C_{9}}{\nu^3}\frac{(\log m)^{150}}{m^{1/2}} \E\left[\sum_{i=1}^m \left(\sum_{j\neq i} \widetilde{\boldsymbol \theta}_{j,i}^\sT {\boldsymbol z} \rho_{ij} \sigma''({\boldsymbol w}_j^\sT {\boldsymbol z}) \right)^2 \mathbf{1}_\mathcal{A}\right]\\ &= \frac{C_{9}}{\nu^3}\frac{ (\log m)^{150}}{m^{1/2}} \E\left[ \norm{({\boldsymbol N} \odot {\boldsymbol R}) {\boldsymbol \sigma}''({\boldsymbol W}^\sT {\boldsymbol z})}^2 \right]\\ &\le C_{10}(\delta)\frac{(\log m)^{151}}{m^{1/2}} \end{align*} uniformly over ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$, establishing~\eqref{eq:d_ki_bounds} for $k=3$. 
Finally, $d_{4,i}$ can be bounded almost surely on $\mathcal{A}$: \begin{align*} |d_{4,i}|\mathbf{1}_{\mathcal{A}} &\le \frac{C_{11} m}{\nu} \sup_{ i\neq j} \left| \frac{1} {\norm{\widetilde{\boldsymbol \theta}_{j,i}}_2 }\widetilde{\boldsymbol \theta}_{j,i}^\sT{\boldsymbol z} \right| \sup_{i\neq j}\norm{\widetilde {\boldsymbol \theta}_{j,i}}_2 \sup_{i\neq j}\rho_{ij}^2 \sup_{i\in [m]}({\boldsymbol w}_i^\sT {\boldsymbol z})^2 \mathbf{1}_\mathcal{A}\\ &\stackrel{(a)}{\le} \frac{C_{12} (\log m)^{50} }{\nu} \sup_{i\in[m]}\norm{{\boldsymbol P}_i^\perp}_\mathrm{op} \sup_{j\in[m]}\norm{{\boldsymbol \theta}_\up{j}}_2\\ &\stackrel{(b)}{\le} C_{13}(\delta) \frac{(\log m)^{50}}{d^{1/2}} \end{align*} uniformly over ${\mathcal S}_{p,\delta}$, where $(a)$ follows from the definition of the event $\mathcal{A}$ and $(b)$ follows because ${\boldsymbol P}_i^\perp$ is a projection matrix for all $i$ and $\norm{{\boldsymbol \theta}_\up{j}}_2 \le \textsf{R}/\sqrt{d}$. Therefore, we have \begin{equation} \nonumber \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \frac1\nu \frac{(\log m)^{50}}{m^{1/2}}\sum_{i=1}^m \E[d_{4,i}^2\mathbf{1}_{\mathcal{A}}] = 0, \end{equation} establishing~\eqref{eq:d_ki_bounds} for $k=4$.
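For later reference, the decomposition~\eqref{eq:delta_i_to_d_k_i} passes to squares via the elementary inequality \begin{equation*} \Delta_i^2 = \Big( \sum_{k=1}^4 d_{k,i} \Big)^2 \le 4 \sum_{k=1}^4 d_{k,i}^2, \end{equation*} which follows from the Cauchy--Schwarz inequality (equivalently, Jensen's inequality applied to the square).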
Hence we showed \begin{align*} &\lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}}\left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right) \mathbf{1}_{\mathcal{A}} \right] \right|\\ &\hspace{70mm}\stackrel{(a)}{\le} \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}} \frac{C_2}{\nu} \frac{(\log m)^{50}}{m^{1/2}} \sum_{i=1}^m \E\left[ \Delta_i^2\mathbf{1}_{\mathcal{A}} \right]\\ &\hspace{70mm}\stackrel{(b)}\le \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_{p,\delta}}\frac{C_{14}}{\nu} \frac{(\log{m})^{50}}{m^{1/2}} \sum_{k=1}^4\sum_{i=1}^m \E\left[ d_{k,i}^2\mathbf{1}_{\mathcal{A}} \right]\\ &\hspace{70mm}\stackrel{(c)}=0, \end{align*} where $(a)$ follows from~\eqref{eq:delta_i_square_bound}, $(b)$ follows from~\eqref{eq:delta_i_to_d_k_i} and $(c)$ follows from~\eqref{eq:d_ki_bounds} holding for $k\in[4]$. Hence, we have shown~\eqref{eq:second_term_ntk_target} and completed the proof. \end{proof} \subsubsection{Proof of Lemma~\ref{lemma:ntk_bl}} \label{section:ntk_actual_proof} \begin{proof} Recall the definition of $\Delta_i$ in~\eqref{eq:Delta_i_def} and note that for all $i\in[m]$, \begin{equation}\nonumber \frac1\nu{\boldsymbol \theta}^\sT{\boldsymbol x} - \Delta_i = \frac1\nu \sum_{j:j\neq i} {\boldsymbol \theta}_\up{j}^\sT {\boldsymbol P}_i^\perp {\boldsymbol z} \sigma'({\boldsymbol w}_j^\sT {\boldsymbol z} - \rho_{i,j} {\boldsymbol w}_i^\sT {\boldsymbol z}). 
\end{equation} Since ${\boldsymbol z}$ is Gaussian, ${\boldsymbol w}_i^\sT {\boldsymbol z}$ is independent of any function of ${\boldsymbol w}_j^\sT{\boldsymbol z} - \rho_{i,j}{\boldsymbol w}_i^\sT{\boldsymbol z}$ and hence is independent of ${\boldsymbol \theta}^\sT {\boldsymbol x}/\nu - \Delta_i$. Therefore, we have \begin{align} \E\left[ {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right)\right] &= \E\left[ {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol w}_i {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right) \right]\\ &\hspace{10mm}+ \E\left[ {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol P}^\perp_i {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right) \right] \nonumber \\ &= {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol w}_i \E\left[ {\boldsymbol w}_i^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \right] \E\left[ \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right) \right]\nonumber\\ &\hspace{10mm}+ \E\left[ {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol P}^\perp_i {\boldsymbol z} \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right) \right] \E\left[\sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \right] \nonumber \\ &\stackrel{(a)}{=} 0 \label{eq:cross_term_0} \end{align} where $(a)$ follows from the assumption that $\E[\sigma'(G)] = \E[G\sigma'(G)] = 0$ for a standard normal $G$. 
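The independence claim used above can be checked directly. Assuming, consistently with the projection structure of the argument, that $\rho_{i,j} = {\boldsymbol w}_i^\sT {\boldsymbol w}_j/\norm{{\boldsymbol w}_i}_2^2$ and ${\boldsymbol P}_i^\perp = {\boldsymbol I}_d - {\boldsymbol w}_i {\boldsymbol w}_i^\sT/\norm{{\boldsymbol w}_i}_2^2$, and using $\E[{\boldsymbol z}{\boldsymbol z}^\sT] = {\boldsymbol I}_d$, \begin{equation*} \E\left[ ({\boldsymbol w}_i^\sT {\boldsymbol z})\left( {\boldsymbol w}_j^\sT {\boldsymbol z} - \rho_{i,j} {\boldsymbol w}_i^\sT {\boldsymbol z} \right) \right] = {\boldsymbol w}_i^\sT {\boldsymbol w}_j - \rho_{i,j} \norm{{\boldsymbol w}_i}_2^2 = 0 \quad \text{and} \quad \E\left[ ({\boldsymbol w}_i^\sT {\boldsymbol z}) \, {\boldsymbol P}_i^\perp {\boldsymbol z} \right] = {\boldsymbol P}_i^\perp {\boldsymbol w}_i = {\boldsymbol 0}, \end{equation*} and uncorrelated jointly Gaussian random variables are independent.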
Hence, we can write \begin{align} &\left|\E\left[ \varphi\left(\frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu}\right) - \varphi\left(\frac{{\boldsymbol \theta}^\sT {\boldsymbol g}}{\nu}\right)\right]\right|\nonumber\\ &\hspace{15mm}\stackrel{(a)}{=} \left|\E\left[\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) -\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right]\right| \nonumber\\ &\hspace{15mm}= \Bigg| \E\left[\left(\frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\Delta_i - 1 \right)\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right]\nonumber\\ &\hspace{25mm}+ \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - \Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right) \right]\nonumber\\ &\hspace{25mm}+ \E\left[\frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT {\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT {\boldsymbol z}) \chi\left( \frac{{\boldsymbol \theta}^\sT {\boldsymbol x}}{\nu} - \Delta_i \right)\right] \Bigg|\nonumber\\ &\hspace{15mm} \stackrel{(b)}\le\left| \E\left[\left(\frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z} \sigma'({\boldsymbol w}_i^\sT{\boldsymbol z})\Delta_i - 1 \right)\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right)\right] \right|\label{eq:term1_ntk}\\ &\hspace{25mm}+ \left| \E\left[ \frac{1}{\nu}\sum_{i=1}^m {\boldsymbol \theta}_\up{i}^\sT{\boldsymbol z}\sigma'({\boldsymbol w}_i^\sT{\boldsymbol z}) \left(\chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) - \chi\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu} - 
\Delta_i\right) - \Delta_i\chi'\left(\frac{{\boldsymbol \theta}^\sT{\boldsymbol x}}{\nu}\right) \right) \right] \right|\label{eq:term2_ntk} \end{align} where $(a)$ follows by Eq.~\eqref{eq:stein_solution} and $(b)$ follows by Eq.~\eqref{eq:cross_term_0}. Taking the supremum over ${\boldsymbol \theta}\in{\mathcal S}_{p,\delta}$ then $n\to\infty$ and applying Lemmas~\ref{lemma:first_term_ntk} and~\ref{lemma:second_term_ntk} completes the proof. \end{proof} \subsection{Asymptotic Gaussianity on ${\mathcal S}_p$} \label{subsection:bounded_lipschitz_ntk} We give the following consequence of Lemma~\ref{lemma:ntk_bl}. \begin{lemma} \label{lemma:bounded_lip_ntk_no_truncation} For any bounded Lipschitz function $\varphi:\R\to\R$ we have \begin{equation} \nonumber \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p}} \left|\E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol x}\right)\mathbf{1}_{\mathcal{B}} \Big| {\boldsymbol W} \right] - \E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol g}\right)\mathbf{1}_{\mathcal{B}}\Big | {\boldsymbol W} \right]\right| = 0. \end{equation} \end{lemma} \begin{proof} Again, let us use the notation $\E[\cdot] := \E[\cdot \mathbf{1}_\mathcal{B} | {\boldsymbol W}]$. First define \begin{equation} {\mathcal S}_{p,\delta}^c :=\{ {\boldsymbol \theta}\in{\mathcal S}_p : {\boldsymbol \theta}^\sT\E[{\boldsymbol x}\bx^\sT]{\boldsymbol \theta} \le \delta\}, \end{equation} and take $\varphi$ to be bounded differentiable with bounded derivative. 
Then for $\delta>0$ we have \begin{align} &\lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p}} \left|\E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol x}\right) \right] - \E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol g}\right) \right]\right|\nonumber\\ &\hspace{40mm}\stackrel{(a)}\le \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p,\delta}^c} \left|\E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol x}\right) \right] - \E\left[\varphi\left({\boldsymbol \theta}^\sT{\boldsymbol g}\right) \right]\right| \nonumber\\ &\hspace{40mm} \le \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_{p,\delta}^c} \norm{\varphi'}_\infty\left(\E\left[\left({\boldsymbol \theta}^\sT{\boldsymbol x}\right)^2 \right]^{1/2} + \E\left[\left({\boldsymbol \theta}^\sT{\boldsymbol g}\right)^2 \right]^{1/2} \right) \nonumber\\ &\hspace{40mm}\stackrel{(b)}\le 2\norm{\varphi'}_{\infty} \delta^{1/2}\label{eq:ntk_bl_delta_bound} \end{align} where $(a)$ follows from Lemma~\ref{lemma:ntk_bl} and $(b)$ follows from the definition of ${\mathcal S}_{p,\delta}^c$. Now sending $\delta \to 0$ proves the lemma for differentiable Lipschitz functions, which can then be extended to Lipschitz functions via a standard uniform approximation argument. \end{proof} \subsection{Truncation} Let us define ${\mathcal G} := \left\{ \norm{{\boldsymbol z}}_2 \le 2 \sqrt{d}\right\}$ and the random variable $\bar {\boldsymbol x} := {\boldsymbol x} \mathbf{1}_{\mathcal G}.$ The following lemma establishes the subgaussianity condition of Assumption~\ref{ass:X} for $\bar {\boldsymbol x}$. \begin{lemma} \label{lemma:bar_x_subgaussian} Conditional on ${\boldsymbol W}\in\mathcal{B}$ we have \begin{equation} \nonumber \sup_{{\boldsymbol \theta}\in{\mathcal S}_{p}} \norm{\bar{\boldsymbol x}^\sT {\boldsymbol \theta}}_{\psi_2} \le C \end{equation} for some constant $C$ depending only on $\sOmega$. \end{lemma} \begin{proof} Take arbitrary ${\boldsymbol \theta}\in{\mathcal S}_p$.
Let \begin{equation} \nonumber u(t):= \begin{cases} 1 & t \le 2\\ 3 - t & t\in(2,3]\\ 0 & t > 3 \end{cases}, \end{equation} then consider the function $f({\boldsymbol z}) := {\boldsymbol z}^\sT {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) u\left(\norm{{\boldsymbol z}}_2/\sqrt{d}\right)$. Note that $f$ is continuous and differentiable almost everywhere with gradient \begin{equation} \nonumber \grad f({\boldsymbol z}) = \Big({\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z}) + {\boldsymbol W}\textrm{diag}\left\{ {\boldsymbol \sigma}''({\boldsymbol w}_i^\sT {\boldsymbol z}) \right\} {\boldsymbol T}_{\boldsymbol \theta}^\sT {\boldsymbol z}\Big)u\left(\frac{\norm{{\boldsymbol z}}_2}{\sqrt{d}}\right) + u'\left(\frac{\norm{{\boldsymbol z}}_2}{\sqrt{d}}\right) \frac{{\boldsymbol z}}{\sqrt{d}\norm{{\boldsymbol z}}} f({\boldsymbol z}) \end{equation} almost everywhere. Noting that $u'(t) = u'(t) \mathbf{1}_{t \le 3}$ and $u(t) \le \mathbf{1}_{t\le 3}$ we can bound \begin{align} \norm{\grad f({\boldsymbol z})}_2 &\le \left(\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \norm{{\boldsymbol \sigma}'\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_2 + \norm{{\boldsymbol W}}_\mathrm{op} \sup_{i\in[m]}\sigma''\left({\boldsymbol w}_i^\sT {\boldsymbol z}\right) \norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \norm{{\boldsymbol z}}_2\right)\mathbf{1}_{\norm{{\boldsymbol z}}_2 \le 3\sqrt{d}}\nonumber\\ &\hspace{15mm}+ u'\left(\frac{\norm{{\boldsymbol z}}_2}{\sqrt{d}}\right) \frac{\norm{{\boldsymbol z}}_2}{\sqrt{d}}\norm{{\boldsymbol T}_{\boldsymbol \theta}}\norm{{\boldsymbol \sigma}'\left({\boldsymbol W}^\sT {\boldsymbol z}\right)}_2\mathbf{1}_{\norm{{\boldsymbol z}}_2 \le 3\sqrt{d}}\nonumber\\ &\stackrel{(a)}{\le} C_0 \nonumber \end{align} almost everywhere, where $C_0>0$ depends only on $\sOmega$. 
In $(a)$ we used that $\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op} \le \textsf{R}/\sqrt{d}$ for ${\boldsymbol \theta}\in{\mathcal S}_p$. Hence, $\norm{f}_\textrm{Lip} \le C_0$ so that $f({\boldsymbol z})$ is subgaussian with subgaussian norm depending only on $\sOmega$. This implies that \begin{align} \nonumber \P\left( \left| \bar{\boldsymbol x}^\sT {\boldsymbol \theta} \right| \ge t \right) &\stackrel{(a)}\le \P\left( \left| f({\boldsymbol z})\right| \ge t \right) \le C_2\exp\left\{ -{c_0t^2}\right\}, \end{align} where $(a)$ follows by noting that $\mathbf{1}_{t \le 2} \le u(t)$. This shows that $\bar{\boldsymbol x}^\sT {\boldsymbol \theta}$ is subgaussian with subgaussian norm constant in $n$ and ${\boldsymbol \theta}$. Since ${\boldsymbol \theta}\in{\mathcal S}_p$ was arbitrary, this proves the claim. \end{proof} Now, let us show that the condition of Eq.~\eqref{eq:condition_bounded_lipschitz_single} holds for the truncated variables $\bar{\boldsymbol x}$. \begin{lemma} \label{lemma:bar_x_bl} For any bounded Lipschitz function $\varphi :\R\to\R$, we have \begin{equation} \nonumber \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in {\mathcal S}_p} \left| \E \left[\left(\varphi(\bar {\boldsymbol x}^\sT {\boldsymbol \theta}) - \varphi({\boldsymbol g}^\sT {\boldsymbol \theta})\right) \mathbf{1}_\mathcal{B} \big| {\boldsymbol W} \right] \right|=0. \end{equation} \end{lemma} \begin{proof} Let us use the notation $\E[(\cdot) ] := \E[(\cdot)\mathbf{1}_{\mathcal{B}}| {\boldsymbol W}].$ We have \begin{align*} \left| \E \left[\left(\varphi(\bar {\boldsymbol x}^\sT {\boldsymbol \theta}) - \varphi({\boldsymbol x}^\sT {\boldsymbol \theta})\right) \right] \right| &\le \norm{\varphi}_\textrm{Lip} \E\left[\left|{\boldsymbol x}^\sT {\boldsymbol \theta} \right|\mathbf{1}_{{\mathcal G}^c} \right]\\ &\le\norm{\varphi}_\textrm{Lip} \E\left[\left({\boldsymbol x}^\sT {\boldsymbol \theta}\right)^2\right]^{1/2} \P\left({\mathcal G}^c\right)^{1/2}.
\end{align*} where the second inequality is Cauchy--Schwarz. Recalling that $\P\left({\mathcal G}^c\right)^{1/2}\le \exp\{-c_0 d/2\}$ since ${\boldsymbol z}$ is Gaussian, we can write \begin{align*} \lim_{n\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_p}\left| \E \left[\left(\varphi(\bar {\boldsymbol x}^\sT {\boldsymbol \theta}) - \varphi({\boldsymbol g}^\sT {\boldsymbol \theta})\right) \right] \right| &\le \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \left| \E \left[\left(\varphi(\bar {\boldsymbol x}^\sT {\boldsymbol \theta}) - \varphi({\boldsymbol x}^\sT {\boldsymbol \theta})\right) \right] \right|\\ &\hspace{10mm}+ \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_p}\left| \E \left[\left(\varphi({\boldsymbol x}^\sT {\boldsymbol \theta}) - \varphi({\boldsymbol g}^\sT {\boldsymbol \theta})\right) \right] \right|\\ &\stackrel{(a)}{\le} \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \norm{\varphi}_{\textrm{Lip}} \E\left[\left({\boldsymbol x}^\sT {\boldsymbol \theta}\right)^2\right]^{1/2} e^{-c_0 d/2}\\ &\le \lim_{n\to\infty}\sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \norm{\varphi}_{\textrm{Lip}} \E\left[\norm{{\boldsymbol z}}^2_2\norm{{\boldsymbol T}_{\boldsymbol \theta}}_\mathrm{op}^2 \norm{{\boldsymbol \sigma}'({\boldsymbol W}^\sT {\boldsymbol z})}_2^2\right]^{1/2}e^{-c_0 d/2}\\ & = 0. \end{align*} \end{proof} \subsection{Proof of Corollary~\ref{cor:ntk_universality}} \begin{proof} Let ${\mathcal G}_i := \{\norm{{\boldsymbol z}_i}_2 \le 2 \sqrt{d}\}$ where ${\boldsymbol z}_i$ is the Gaussian vector defining the $i$th sample ${\boldsymbol x}_i$ of the neural tangent model. Now let $\bar{\boldsymbol X} := (\bar {\boldsymbol x}_1,\dots,\bar {\boldsymbol x}_n)^\sT$ where $\bar{\boldsymbol x}_i := {\boldsymbol x}_i \mathbf{1}_{{\mathcal G}_i}$. 
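The truncation events ${\mathcal G}_i$ are effective because the norm of a standard Gaussian vector concentrates around $\sqrt d$, so $\norm{{\boldsymbol z}_i}_2 > 2\sqrt d$ is exponentially rare. The following is a purely illustrative numerical sanity check of this concentration, not part of the proof; the dimension and sample size are arbitrary choices (Python with NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 50, 20000

# For z ~ N(0, I_d), ||z||_2 concentrates around sqrt(d),
# so the event ||z||_2 > 2 sqrt(d) behind G_i^c is very rare.
norms = np.linalg.norm(rng.standard_normal((n_samples, d)), axis=1)
frac_exceed = np.mean(norms > 2.0 * np.sqrt(d))
print(np.mean(norms) / np.sqrt(d), frac_exceed)
```

At this scale the empirical exceedance fraction is essentially zero, consistent with the $\exp\{-c_0 d\}$ tail used below.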
Take any compact ${\mathcal C}_p\subseteq {\mathcal S}_p$ and let $\widehat R_n^\star\left(\cdot\right)$ be the optimal empirical risk for a choice of $\ell,\eta,{\boldsymbol \theta}^\star,\epsilon,r$ satisfying assumptions~\ref{ass:loss_labeling},~\ref{ass:ThetaStar},~\ref{ass:noise},~\ref{ass:regularizer}, respectively. Since $\bar{\boldsymbol x}$ verifies Assumption~\ref{ass:X} for ${\boldsymbol W}\in\mathcal{B}$ by Lemmas~\ref{lemma:bar_x_subgaussian} and~\ref{lemma:bar_x_bl}, Theorem~\ref{thm:main_empirical_univ} can be applied to $\bar{\boldsymbol x}$ to conclude that for any bounded Lipschitz $\psi$ \begin{equation} \label{eq:bar_x_universality} \lim_{n\to\infty}\left|\E\left[ \psi\left( \widehat R_n^\star\left(\bar {\boldsymbol X}\right) \right)\mathbf{1}_\mathcal{B} - \psi\left( \widehat R_n^\star\left({\boldsymbol G}\right) \right)\mathbf{1}_\mathcal{B} \Big| {\boldsymbol W} \right] \right|= 0. \end{equation} Now, note that we have for some $C_0,c_0 >0$, \begin{align} \label{eq:G_i_union_bound} \P\left(\bigcup_{i\in[n]}{\mathcal G}^c_i\right) \le n\P\left(\norm{{\boldsymbol z}}_2 > 2\sqrt{d}\right) \le C_0n\exp\{-c_0 d\} \to 0 \end{align} as $n\to\infty$, so that \begin{align} \label{eq:risk_bar_x_diff} \lim_{n\to\infty}\left|\E\left[ \psi\left(\widehat R_n^\star({\boldsymbol X})\right) - \psi\left(\widehat R^\star_n(\bar{\boldsymbol X})\right) \right]\right| &\le 2\norm{\psi}_\infty\lim_{n\to\infty} \P\left( \bigcup_{i\in[n]}{\mathcal G}_i^c \right) = 0. 
\end{align} Meanwhile, \begin{align} \label{eq:risk_bar_g_diff} \left|\E\left[ \psi\left(\widehat R^\star_n(\bar{\boldsymbol X})\right)- \psi\left(\widehat R^\star_n({\boldsymbol G})\right) \right]\right| &\le \left|\E\left[\E\left[ \left(\psi\left(\widehat R^\star_n(\bar{\boldsymbol X})\right)- \psi\left(\widehat R^\star_n({\boldsymbol G})\right) \right)\mathbf{1}_{\mathcal{B}}\Big| {\boldsymbol W}\right]\right]\right| \nonumber\\ &\hspace{10mm}+ 2\norm{\psi}_\infty\P\left(\mathcal{B}^c \right). \end{align} Combining the displays~\eqref{eq:bar_x_universality},~\eqref{eq:risk_bar_x_diff} and~\eqref{eq:risk_bar_g_diff} gives \begin{align} \lim_{n\to\infty}\left|\E\left[ \psi\left(\widehat R^\star_n({\boldsymbol X})\right) - \psi\left(\widehat R^\star_n({\boldsymbol G})\right) \right]\right| &\le \lim_{n\to\infty} \left|\E\left[\E\left[ \left(\psi\left(\widehat R^\star_n(\bar{\boldsymbol X})\right)- \psi\left(\widehat R^\star_n({\boldsymbol G})\right) \right)\mathbf{1}_{\mathcal{B}}\Big| {\boldsymbol W}\right]\right]\right| \nonumber\\ &\hspace{10mm}+ C_1\norm{\psi}_\infty\left(\lim_{n\to\infty} n\exp\{-c_0 d\} + \lim_{n\to\infty} \P(\mathcal{B}^c)\right)\nonumber\\ &\stackrel{(a)}\le \E\left[\lim_{n\to\infty}\left| \E\left[ \left(\psi\left(\widehat R^\star_n(\bar{\boldsymbol X})\right)- \psi\left(\widehat R^\star_n({\boldsymbol G})\right) \right)\mathbf{1}_{\mathcal{B}}\Big| {\boldsymbol W}\right]\right|\right] \nonumber\\ &= 0\nonumber \end{align} where $(a)$ follows by dominated convergence. \end{proof} \subsection{Auxiliary lemmas} \label{section:aux_lemmas_ntK} We include the following auxiliary lemmas for the sake of completeness. \begin{lemma} \label{lemma:subgaussian_max} Let $V_i$ be mean zero subgaussian random variables with $\sup_{i\in[m]}\norm{V_i}_{\psi_2}\le \textsf{K}$. 
We have for all integers $k \ge 1$, \begin{equation} \nonumber \E\left[ \sup_{i\in[m]} |V_i|^k \right] \le \left( C k \textsf{K}^2 \log{m}\right)^{k/2} \end{equation} for some universal constant $C >0$. \end{lemma} \begin{proof} This follows by integrating the bound \begin{equation} \nonumber \P\left(\sup_{i\in [m]} |V_i| \ge \sqrt{2 \textsf{K}^2 \log m} + t \right) \le C_1\exp\left\{- \frac{t^2}{2\textsf{K}^2}\right\}, \end{equation} which holds for subgaussian $V_i$. \end{proof} \begin{lemma} \label{lemma:B_tail_bound} There exist constants $C,C' \in(0,\infty)$ depending only on $\widetilde \sgamma$ such that \begin{equation} \nonumber \lim_{n\to\infty}\P\left(\left\{\sup_{\{i,j \in [m] : i\neq j\}} \left|{\boldsymbol w}_i^\sT{\boldsymbol w}_j\right| > \frac{C(\log m)^{1/2}}{d^{1/2}}\right\} \bigcup \left\{ \norm{{\boldsymbol W} }_\mathrm{op} > C' \right\}\right) =0. \end{equation} \end{lemma} \begin{proof} Let $V_{i,j} = {\boldsymbol w}_i^\sT {\boldsymbol w}_j$ for $i,j\in[m],i\neq j$. Note that $V_{i,j}$ are subgaussian with subgaussian norm $C_1/\sqrt{d}$ for some universal constant $C_1$. Indeed, we have for $\lambda \in\R$, \begin{equation} \nonumber \E\left[ \exp\{\lambda V_{i,j}\} \right] = \E\left[ \E\left[ \exp\{\lambda {\boldsymbol w}_i^\sT {\boldsymbol w}_j\} | {\boldsymbol w}_i\right] \right] \le \exp\left\{C_1\frac{\lambda^2}{d}\right\}, \end{equation} where we used that ${\boldsymbol w}_i$ and ${\boldsymbol w}_j$ are independent for $i\neq j$, $\norm{{\boldsymbol w}_i}_2=1$, and that ${\boldsymbol w}_{j}$ is subgaussian with subgaussian norm $C_0/\sqrt{d}$. Hence, we have \begin{equation} \nonumber \P\left( \sup_{i\neq j} \left|V_{i,j} \right| > 4C_0\left(\frac{ \log m}{d}\right)^{1/2} \right) \le C_2 \exp\left\{ - 2\log m\right\}. \end{equation} This proves the existence of the constant $C$ in the statement of the lemma. Meanwhile, the existence of $C'$ is a consequence of Theorem 4.6.1 in~\cite{vershynin2018high}. 
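As a purely illustrative aside (not part of the proof), the $(\log m / d)^{1/2}$ scale of the maximal off-diagonal inner product is easy to observe by simulation; the constant $4$ and the dimensions below are arbitrary choices, not the constants of the lemma (Python with NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 200, 400

# Columns of W are i.i.d. uniform on the unit sphere S^{d-1}(1):
# normalized standard Gaussian vectors.
W = rng.standard_normal((d, m))
W /= np.linalg.norm(W, axis=0)

# Largest off-diagonal inner product, sup_{i != j} |w_i^T w_j|.
G = np.abs(W.T @ W)
np.fill_diagonal(G, 0.0)
max_offdiag = G.max()

# Compare against the C (log m / d)^{1/2} scale of the lemma,
# with an illustrative constant C = 4.
threshold = 4.0 * np.sqrt(np.log(m) / d)
print(max_offdiag, threshold)
```

With these sizes the observed maximum sits well below the threshold, matching the $\sqrt{\log m}/\sqrt{d}$ prediction.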
\end{proof} \section{The random features model: Proof of Corollary~\ref{cor:rf_universality}} \label{proof:rf_universality} We recall the definitions and assumptions introduced in Section~\ref{section:rf_example}. Recall that the activation function $\sigma$ is assumed to be three times differentiable with bounded derivatives, satisfying $\E[\sigma(G)] =0$ for $G\sim\cN(0,1)$; the covariates are $\{{\boldsymbol z}_{i}\}_{i\in [n]}\simiid\cN(0,{\boldsymbol I}_{d})$; and the matrix ${\boldsymbol W}$ has columns given by the weights $\{{\boldsymbol w}_{j}\}_{j\in[p]}\stackrel{i.i.d.}{\sim}\textsf{Unif}(\S^{d-1}(1))$. We assume $d/p \to \widetilde\sgamma$. Now recall the definition of the feature vectors in~\eqref{eq:rf_covariates}: ${\boldsymbol x}:= \left(\sigma\left({\boldsymbol w}_1^\sT {\boldsymbol z}\right),\dots,\sigma\left({\boldsymbol w}_p^\sT{\boldsymbol z}\right)\right)$, and the set in~\eqref{eq:rf_set}: ${\mathcal S}_p = B_\infty^p\left(\textsf{R}/\sqrt{p}\right)$. Define the event \begin{equation} \nonumber \mathcal{B} := \left\{\sup_{i,j\in[p]: i\neq j} \left|{\boldsymbol w}_i^\sT{\boldsymbol w}_j \right| \le C\left(\frac{\log d}{d}\right)^{1/2}\right\} \bigcap \left\{ \norm{{\boldsymbol W} }_\mathrm{op} \le C' \right\} \end{equation} for universal constants $C,C'>0$ chosen so that $\P(\mathcal{B}^c) \to 0$ as $d\to\infty$ (see Lemma~\ref{lemma:B_tail_bound} for the existence of such $C,C'$). The following lemma is a direct consequence of Theorem 2 and Lemma 8 from~\cite{hu2020universality}. 
\begin{lemma} \label{lemma:rf_ass_x} Let $\boldsymbol{\Sigma}_{\boldsymbol W} := \E\left[{\boldsymbol x}\bx^\sT | {\boldsymbol W}\right]$ and ${\boldsymbol g}\big| {\boldsymbol W} \sim\cN(0,\boldsymbol{\Sigma}_{\boldsymbol W}).$ For any bounded differentiable Lipschitz function $\varphi$ we have \begin{equation} \lim_{p\to\infty} \sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \left| \E\left[ \left(\varphi\left({\boldsymbol x}^\sT {\boldsymbol \theta} \right) - \varphi\left( {\boldsymbol g}^\sT {\boldsymbol \theta} \right) \right) \mathbf{1}_\mathcal{B} \big| {\boldsymbol W} \right] \right| = 0.\label{eq:rf_bl} \end{equation} Furthermore, conditional on ${\boldsymbol W} \in\mathcal{B}$, ${\boldsymbol x}$ is subgaussian with subgaussian norm constant in $n$. \end{lemma} \begin{remark} We remark that the setting of~\cite{hu2020universality} differs slightly from the one considered above. Indeed, they take \begin{enumerate} \item the activation function to be odd and the weight vectors to be $\{{\boldsymbol w}_j\}_{j\le[p]} \simiid \cN(0,{\boldsymbol I}_d/d)$, and \item the ``asymptotically equivalent'' Gaussian vectors to be $\widetilde{\boldsymbol g} := c_1 {\boldsymbol W}^\sT {\boldsymbol z} + c_2 {\boldsymbol h}$ for ${\boldsymbol h}\sim\cN(0,{\boldsymbol I}_p)$ instead of ${\boldsymbol g}$, where $c_1$ and $c_2$ are defined so that \begin{equation} \label{label:asymptotic_cov_rf} \lim_{p\to\infty}\norm{\E\left[\widetilde {\boldsymbol g} \widetilde {\boldsymbol g}^\sT \mathbf{1}_\mathcal{B}|{\boldsymbol W}\right] - \E\left[{\boldsymbol x}\bx^\sT \mathbf{1}_\mathcal{B} |{\boldsymbol W}\right]}_\mathrm{op} = 0. \end{equation} \end{enumerate} However, an examination of their proofs reveals that their results hold when $\sigma$ is assumed to satisfy $\E[\sigma(G)]=0$ for $G\sim\cN(0,1)$ instead of being odd, and $\{{\boldsymbol w}_j\}_{j\le [p]}\simiid {\normalfont\textsf{Unif}(\S^{d-1}(1))}$, provided $\widetilde {\boldsymbol g}$ is replaced with ${\boldsymbol g}$. 
Indeed, the only part where the odd assumption on $\sigma$ is used in their proofs, other than to ensure that $\E\left[\sigma\big({\boldsymbol w}_j^\sT {\boldsymbol z}\big)\big|{\boldsymbol W}\right]=0$, is in showing that~\eqref{label:asymptotic_cov_rf} holds for their setting of $c_1$ and $c_2$ (Lemma 5 of~\cite{hu2020universality}). We circumvent this by our choice of ${\boldsymbol g}$. \end{remark} \begin{remark} Theorem 2 of~\cite{hu2020universality} proves a more general result than the one stated here for their setting. Additionally, they give bounds on the rate of convergence for a fixed ${\boldsymbol \theta}$ in terms of $\norm{{\boldsymbol \theta}}_2,\norm{{\boldsymbol \theta}}_\infty$ and $\norm{\varphi}_\textrm{Lip}$ (and other parameters irrelevant to our setting). However, here we are only interested in the consequence given above. \end{remark} \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Corollary~\ref{cor:rf_universality}] }{ \begin{proof}{\textbf{ of Corollary~\ref{cor:rf_universality} }} } First note that, via a standard argument uniformly approximating Lipschitz functions with differentiable Lipschitz functions, Lemma~\ref{lemma:rf_ass_x} can be extended to hold for $\varphi$ that are bounded Lipschitz. Now note that ${\mathcal S}_p$ as defined in~\eqref{eq:rf_set} is symmetric, convex and a subset of $B_2^p(\textsf{R})$. Let ${\mathcal C}_p$ be any compact subset of ${\mathcal S}_p$ and let $\widehat R_n^\star\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right)$ be the minimum of the empirical risk over ${\mathcal C}_p$, where the empirical risk is defined with a choice of $\ell,\eta,{\boldsymbol \theta}^\star, \epsilon$ and $r$ satisfying assumptions \ref{ass:loss_labeling},~\ref{ass:ThetaStar},~\ref{ass:noise},~\ref{ass:regularizer} respectively. By Lemma~\ref{lemma:rf_ass_x}, ${\boldsymbol x}$ is subgaussian conditional on ${\boldsymbol W}$ and hence satisfies the subgaussianity condition of Assumption~\ref{ass:X}. 
Furthermore, conditional on ${\boldsymbol W} \in\mathcal{B}$, ${\boldsymbol x}$ satisfies the condition in~\eqref{eq:condition_bounded_lipschitz_single} for the given ${\boldsymbol g}$; therefore, Theorem~\ref{thm:main_empirical_univ} implies that for any bounded Lipschitz $\psi$ \begin{equation} \label{eq:rf_conditional_universality} \lim_{n\to\infty}\left|\E\left[ \psi\left( \widehat R_n^\star\left({\boldsymbol X},{\boldsymbol y}({\boldsymbol X})\right) \right)\mathbf{1}_\mathcal{B} - \psi\left( \widehat R_n^\star\left({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})\right) \right)\mathbf{1}_\mathcal{B} \Big| {\boldsymbol W} \right] \right|= 0. \end{equation} \noindent Hence, we can write \begin{align*} &\lim_{n\to\infty}\left| \E\left[ \psi\left( \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) -\psi\left( \widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \right) \right] \right|\\ &\hspace{40mm}\le \lim_{p\to\infty}\left| \E\left[\E\left[ \left(\psi\left( \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) - \psi\left(\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \right)\right)\mathbf{1}_\mathcal{B} \Big| {\boldsymbol W}\right]\right] \right|\\ &\hspace{50mm}+ 2\norm{\psi}_\infty \lim_{p\to\infty}\P\left( \mathcal{B}^c\right)\\ &\hspace{40mm}\stackrel{(a)}{\le} \E\left[ \lim_{p\to\infty}\left| \E\left[ \left(\psi\left( \widehat R_n^\star({\boldsymbol X},{\boldsymbol y}({\boldsymbol X}))\right) - \psi\left(\widehat R_n^\star({\boldsymbol G},{\boldsymbol y}({\boldsymbol G})) \right)\right)\mathbf{1}_\mathcal{B} \Big| {\boldsymbol W}\right]\right|\right] \\ &\hspace{40mm}\stackrel{(b)}{=} 0 \end{align*} where $(a)$ follows by the dominated convergence theorem and $(b)$ follows from Eq.~\eqref{eq:rf_conditional_universality}. 
\end{proof} \section{Deferred proofs} \label{section:deferred_proofs} \subsection{Proof of Lemma~\ref{lemma:op_norm_x}} \label{proof:op_norm_x} The proof is a standard argument following that for bounding $\E\left[\norm{{\boldsymbol Z}}_\mathrm{op}\right]$ for a matrix ${\boldsymbol Z}$ with i.i.d.\ subgaussian rows (see for example Theorem 4.6.1 of~\cite{vershynin2018high}). Note, however, that such a bound, or subgaussian matrix deviation bounds such as Theorem 9.1.1 of~\cite{vershynin2018high} that assume that the rows of ${\boldsymbol X}$ are subgaussian, are not directly applicable in our case, since the projections of ${\boldsymbol X}$ are subgaussian only along the directions of ${\mathcal S}_p$. Indeed, we are interested in cases such as the example in Section~\ref{section:ntk_example} where the feature vectors ${\boldsymbol x}_i$ are not subgaussian. Although the statement of Lemma~\ref{lemma:op_norm_x} is a direct extension of such results, we include its proof here for the sake of completeness. We only need to prove the bound for ${\boldsymbol X}$; indeed, ${\boldsymbol G}$ itself satisfies Assumption~\ref{ass:X}. We begin with the following lemma. \begin{lemma} \label{lemma:X_sp_norm_tail_bound} Assume ${\boldsymbol X}$ satisfies Assumption~\ref{ass:X}. Then there exist constants $C,\widetilde C,c >0$ depending only on $\sOmega$ such that for all $t>0$, \begin{equation*} \P\left(\norm{{\boldsymbol X}}_{{\mathcal S}_p}^2 \ge n \widetilde C \left((\delta_t^2 \vee \delta_t)+ 1\right) \right) \le 2e^{-c t^2} \end{equation*} where \begin{equation*} \delta_t := C \frac{\sqrt {p}}{\sqrt n} + \frac{t}{\sqrt n}. 
\end{equation*} \end{lemma} \begin{proof} First note that we can assume, without loss of generality, that $\E\left[{\boldsymbol x}_i^\sT{\boldsymbol \theta}\right]=0.$ Indeed, by Lemma 2.6.8 of~\cite{vershynin2018high} we have \begin{equation*} \norm{{\boldsymbol \theta}^\sT({\boldsymbol x} - \E[{\boldsymbol x}])}_{\psi_2} \le C\norm{{\boldsymbol \theta}^\sT {\boldsymbol x}}_{\psi_2}. \end{equation*} Recall that ${\mathcal S}_p \subseteq B_2^p(\textsf{R})$, and hence there exists an $\alpha$-net $\cN_{\alpha}$ of ${\mathcal S}_p$ of size $|\cN_{\alpha}| \le C(\textsf{R},\alpha)^p$ for some constant $C(\textsf{R},\alpha)$ depending only on $\textsf{R}$ and $\alpha$. Fix ${\boldsymbol \theta}\in\cN_{\alpha}$ and note that \begin{equation*} \norm{{\boldsymbol X}{\boldsymbol \theta}}_2^2 = \sum_{i=1}^n ({\boldsymbol x}_i^\sT {\boldsymbol \theta})^2. \end{equation*} By Assumption~\ref{ass:X}, $({\boldsymbol x}_i^\sT {\boldsymbol \theta})^2$ are squares of i.i.d.\ subgaussian random variables with subgaussian norm $K_{\boldsymbol \theta} \le \textsf{K}$ uniformly in ${\boldsymbol \theta}$, and with means \begin{equation*} \E\left[\left({\boldsymbol x}_i^\sT {\boldsymbol \theta}\right)^2\right] = \norm{\E\left[{\boldsymbol x}\bx^\sT \right]^{1/2}{\boldsymbol \theta}}_2^2 =: V_{\boldsymbol \theta} \le \textsf{K}, \end{equation*} where the last inequality holds uniformly over ${\boldsymbol \theta}$ (see Proposition 2.5.2 of~\cite{vershynin2018high} for the properties of subgaussian variables). 
Hence, via Bernstein's inequality (Theorem 2.8.3 of~\cite{vershynin2018high}), we have for any $s >0$, \begin{align*} \P\left(\sum_{i=1}^n ({\boldsymbol x}_i^\sT {\boldsymbol \theta})^2 \ge n (s + 1)\textsf{K} \right) &= \P\left(\sum_{i=1}^n ({\boldsymbol x}_i^\sT {\boldsymbol \theta})^2 - n V_{\boldsymbol \theta} \ge n (s +1)\textsf{K} - n V_{\boldsymbol \theta} \right)\\ &\le 2\exp\left\{- c n\min\left\{ \left(\frac{(s+1)\textsf{K} - V_{\boldsymbol \theta}}{K_{\boldsymbol \theta}}\right)^2, \left(\frac{(s+1)\textsf{K} - V_{\boldsymbol \theta}}{K_{\boldsymbol \theta}}\right) \right\} \right\}\\ &\stackrel{(a)}{\le} 2 e^{ -c n(s^2\wedge s )} \end{align*} where for $(a)$ we used that $\sup_{\boldsymbol \theta} V_{\boldsymbol \theta} \le \textsf{K}$ and $\sup_{\boldsymbol \theta} K_{\boldsymbol \theta} \le \textsf{K}$. Taking $C \ge ( \log C(\textsf{R} ,\alpha)/ c )^{1/2}$ and $s = \delta_t^2 \vee \delta_t$, we have via a union bound over $\cN_\alpha$ \begin{align} \P\left(\sup_{{\boldsymbol \theta}\in\cN_\alpha}\sum_{i=1}^n ({\boldsymbol x}_i^\sT {\boldsymbol \theta})^2 \ge \textsf{K} n \left((\delta_t^2 \vee \delta_t) + 1\right) \right) &{\le} 2C(\textsf{R},\alpha)^p e^{-c n (s^2 \wedge s)}\nonumber\\ &\stackrel{(a)}{=} 2C(\textsf{R},\alpha)^pe^{-c n \delta_t^2}\nonumber\\ &\stackrel{(b)}{\le} 2C(\textsf{R},\alpha)^pe^{-c \left(C^2 p + t^2 \right)}\nonumber\\ &\stackrel{(c)}{\le} 2e^{-c t^2},\label{eq:epsilon_not_op_bound} \end{align} where for $(a)$ we used that $s^2\wedge s = \delta_t^2$, for $(b)$ we used the definition of $\delta_t$, and for $(c)$ that $C \ge (\log C(\textsf{R},\alpha) / c)^{1/2}.$ Now via a standard epsilon net argument (see for example the proof of Theorem 4.6.1 in~\cite{vershynin2018high}), one can show that \begin{equation}\nonumber \sup_{{\boldsymbol \theta}\in{\mathcal S}_p}\norm{{\boldsymbol X}{\boldsymbol \theta}}_2^2 \le C_0(\textsf{R},\alpha) \sup_{{\boldsymbol \theta}\in\cN_{\alpha}}\norm{{\boldsymbol X}{\boldsymbol \theta}}_2^2 \end{equation} for 
some $C_0$ depending only on $\textsf{R}$ and $\alpha$. Combining this with~\eqref{eq:epsilon_not_op_bound} gives the desired result. \end{proof} \begin{lemma} \label{lemma:simpler_tail_bound_X_sp_norm} There exist constants $C,c>0$ depending only on $\sOmega$ such that for all $t >0$, \begin{equation} \nonumber \P\left(\norm{{\boldsymbol X}}_{{\mathcal S}_p} > C\left(\sqrt n + \sqrt p + t\right) \right)\le 2e^{-ct^2}. \end{equation} \end{lemma} \begin{proof} Let $\mathcal{A}$ be the high-probability event of Lemma~\ref{lemma:X_sp_norm_tail_bound}, i.e. \begin{equation} \nonumber \mathcal{A} := \left\{ \frac{\norm{{\boldsymbol X}}^2_{{\mathcal S}_p}}{C_0^2 n} - 1 \le (\delta_t^2 \vee \delta_t)\right\} \end{equation} where $C_0 := \sqrt{\widetilde C}$ for the constant $\widetilde C$ appearing in the statement of Lemma~\ref{lemma:X_sp_norm_tail_bound}. Next define the event \begin{equation} \nonumber {\mathcal G} := \left\{\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{ n}} \le 1\right\}. \end{equation} We have on ${\mathcal G}^c\cap \mathcal{A}$, \begin{align*} \max\left\{ \left(\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{n}} - 1 \right)^2, \left|\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{n}}- 1\right| \right\} &\stackrel{(a)}\le \left|\left(\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{n}}\right)^2 - 1\right|\\ &\stackrel{(b)}{=} \left(\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{ n}}\right)^2 - 1\\ &\stackrel{(c)}{\le} \delta_t^2 \vee \delta_t, \end{align*} where $(a)$ follows from \begin{equation} \nonumber \max\left\{(a-b)^2,|a-b|\right\} \le |a^2 -b^2|, \end{equation} holding for $a,b >0$, $a+b \ge 1$. Meanwhile, $(b)$ holds on ${\mathcal G}^c$ and $(c)$ is from the definition of $\mathcal{A}$. 
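The elementary inequality invoked in $(a)$ can also be double-checked numerically; the following randomized test is illustrative only and not part of the argument (Python with NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Check: max{(a-b)^2, |a-b|} <= |a^2 - b^2| for a, b > 0 with a + b >= 1.
# (Indeed |a^2 - b^2| = |a-b|(a+b), and both |a-b| <= |a-b|(a+b)
# and (a-b)^2 <= |a-b|(a+b) hold under these constraints.)
a = rng.uniform(1e-6, 5.0, 200000)
b = rng.uniform(1e-6, 5.0, 200000)
keep = a + b >= 1.0
a, b = a[keep], b[keep]

lhs = np.maximum((a - b) ** 2, np.abs(a - b))
rhs = np.abs(a ** 2 - b ** 2)
print(bool(np.all(lhs <= rhs + 1e-12)), a.size)
```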
Hence, by the definition of $\delta_t$ in Lemma~\ref{lemma:X_sp_norm_tail_bound} we have \begin{equation} \nonumber \mathcal{A}\cap{\mathcal G}^c \subseteq \left\{ \left|\frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}}{C_0\sqrt{n}} - 1 \right| \le C \sqrt{\frac{p}{n}} + \frac{t}{\sqrt n} \right\} \subseteq \left\{\norm{{\boldsymbol X}}_{{\mathcal S}_p} \le C_1 \left(\sqrt{n} + \sqrt{p} + t\right)\right\}. \end{equation} Meanwhile, from the definition of ${\mathcal G}$, we directly have \begin{equation} \nonumber \mathcal{A} \cap {\mathcal G} \subseteq \left\{ \norm{{\boldsymbol X}}_{{\mathcal S}_p} \le \sqrt{n} C_0\right\}, \end{equation} so for some $C_2$ we have \begin{equation} \nonumber \mathcal{A} \subseteq \left\{ \norm{{\boldsymbol X}}_{{\mathcal S}_p} \le C_2\left( \sqrt{n} + \sqrt{p} + t\right)\right\}, \end{equation} implying that \begin{align} \nonumber \P\left(\norm{{\boldsymbol X}}_{{\mathcal S}_p} > C\left(\sqrt n + \sqrt p + t\right) \right) &\le \P(\mathcal{A}^c) \le 2e^{-ct^2} \end{align} by Lemma~\ref{lemma:X_sp_norm_tail_bound}. \end{proof} Finally, we prove Lemma~\ref{lemma:op_norm_x}. \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Lemma~\ref{lemma:op_norm_x}] }{ \begin{proof}{\textbf{of Lemma~\ref{lemma:op_norm_x}}} } By an application of Lemma~\ref{lemma:simpler_tail_bound_X_sp_norm} with $t := \sqrt{s}/ C - \sqrt{n} - \sqrt{p}$, we have for all $s > C^2(\sqrt n + \sqrt p)^2,$ \begin{equation} \nonumber \P\left( \norm{{\boldsymbol X}}_{{\mathcal S}_p} \ge \sqrt{s} \right) \le 2 \exp\left\{ -c \left(\frac{\sqrt{s}}{C} - \sqrt{n} - \sqrt{p}\right)^2 \right\}. 
\end{equation} Hence, we can bound the desired expectation as \begin{align*} \E\left[\norm{{\boldsymbol X}}_{{\mathcal S}_p}^2 \right] &= \int_{0}^\infty \P\left( \norm{{\boldsymbol X}}_{{\mathcal S}_p}^2 > s \right)\textrm{d} s\\ &= \int_{0}^{ C^2\left(\sqrt{n} + \sqrt{p}\right)^2} \P\left( \norm{{\boldsymbol X}}_{{\mathcal S}_p} \ge \sqrt{s} \right)\textrm{d} s + \int_{C^2 \left(\sqrt{n} + \sqrt{p}\right)^2}^\infty \P\left( \norm{{\boldsymbol X}}_{{\mathcal S}_p} \ge \sqrt{s} \right)\textrm{d} s\\ &\le C^2 \left(\sqrt{n} + \sqrt{p} \right)^2 + 2\int_{C^2 ( \sqrt{n} + \sqrt{p})^2}^\infty \exp\left\{ -c\left(\frac{\sqrt{s}}{C} - \sqrt{n} - \sqrt{p}\right)^2\right\}\textrm{d} s\\ &\le C_0 (n+ p) + C_1 (\sqrt{n} + \sqrt{p}) \\ &\le C_2 p \end{align*} for some sufficiently large $C_2 >0$ depending only on $\sOmega$ since $\lim_{n\to\infty} p(n)/n = \sgamma.$ \end{proof} \subsection{Proof of Lemma~\ref{lemma:Rn_lipschitz_bound}} \label{proof:Rn_lipschitz_bound} We prove the bound for the model with ${\boldsymbol X}$. Throughout, we take ${\boldsymbol y} = {\boldsymbol y}({\boldsymbol X})$. 
Let us define $L_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}):= \sum_{i=1}^n \ell({\boldsymbol \Theta}^\sT {\boldsymbol x}_i; y_i)/n$ so that $\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}) = L_n({\boldsymbol \Theta}; {\boldsymbol X},{\boldsymbol y}) + r({\boldsymbol \Theta}).$ We have \begin{align} \left|\widehat R_n({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}) - \widehat R_n(\widetilde{\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}) \right| &\le \left|L_n\left({\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}\right) - L_n\left(\widetilde{\boldsymbol \Theta};{\boldsymbol X},{\boldsymbol y}\right) \right| + \left|r({\boldsymbol \Theta}) - r\left(\widetilde{\boldsymbol \Theta}\right)\right| \nonumber \\ &\stackrel{(a)}\le \sup_{{\boldsymbol \Theta}'\in {\mathcal S}_p^{\textsf{k}}} \left| \inner{\grad_{\boldsymbol \Theta} L_n({\boldsymbol \Theta}'; {\boldsymbol X}, {\boldsymbol y}), \left({\boldsymbol \Theta} - \widetilde {\boldsymbol \Theta} \right)}_F\right|\nonumber\\ &\hspace{10mm}+ \textsf{K}_r\left(\sqrt{\textsf{k}}\textsf{R}\right) \norm{{\boldsymbol \Theta} - \widetilde {\boldsymbol \Theta}}_F,\label{eq:lipschitz_bound_Rn} \end{align} where in $(a)$ we used that the regularizer $r$ is assumed to be locally Lipschitz in Frobenius norm and that $\norm{{\boldsymbol \Theta}}_F \le \sqrt{\textsf{k}}\textsf{R}$ for ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$. 
Now using $\partial_k$ to denote the partial derivative with respect to the $k$th entry of the first argument, we compute the gradient \begin{align} \grad_{{\boldsymbol \Theta}} L_n\left({\boldsymbol \Theta}; {\boldsymbol X}, {\boldsymbol y}\right) &=\frac1n \sum_{i=1}^n \sum_{k=1}^\textsf{k} \partial_k \ell({\boldsymbol \Theta}^\sT {\boldsymbol x}_i;y_i) \grad_{\boldsymbol \Theta} \left({\boldsymbol \theta}_k^\sT{\boldsymbol x}_i\right) \nonumber\\ &=\frac1n {\boldsymbol X}^\sT {\boldsymbol D}({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y})\label{eq:grad_Ln_computation} \end{align} where we defined \begin{equation} \nonumber {\boldsymbol D}({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}) := ({\boldsymbol d}_1({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}), \dots,{\boldsymbol d}_\textsf{k} ({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}))\in\R^{n\times \textsf{k}} \end{equation} for ${\boldsymbol d}_k({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}) := \left(\partial_k \ell\left({\boldsymbol \Theta}^\sT {\boldsymbol x}_i;y_i \right) \right)_{i\in[n]} \in \R^n$. Before applying Cauchy-Schwarz, let us bound the norm $\norm{{\boldsymbol D}({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y})}_F$. Recall the Lipschitz condition on the gradient of the loss in Assumption~\ref{ass:loss} and note that this implies that for all ${\boldsymbol v}\in\R^{\textsf{k}+1}$, we have \begin{equation} \nonumber \norm{\grad\ell({\boldsymbol v})}_2 \le C_1 \norm{{\boldsymbol v}}_2 + C_2 \end{equation} for some $C_1$ and $C_2$ depending only on $\sOmega$. 
Hence, we have \begin{align} \norm{{\boldsymbol D}({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y})}_F^2 &= \sum_{i=1}^n \norm{\grad \ell({\boldsymbol \Theta}^\sT {\boldsymbol x}_i; y_i) }_2^2\nonumber\\ &\le C_3\sum_{i=1}^n\left( \norm{{\boldsymbol \Theta}^\sT{\boldsymbol x}_i}_2^2 + y_i^2 + 1\right)\nonumber\\ &= C_3\left(\sum_{k=1}^\textsf{k} \norm{{\boldsymbol X}{\boldsymbol \theta}_k}_2^2 + \norm{{\boldsymbol y}}_2^2 + n\right) \nonumber \\ &\le C_4 \left(\norm{{\boldsymbol X}}_{{\mathcal S}_p} \norm{{\boldsymbol \Theta}}_F + \norm{{\boldsymbol y}}_2 + \sqrt{n}\right)^2 ,\label{eq:bound_on_norm_d} \end{align} for some $C_3,C_4$ depending only on $\sOmega$. Combining equations~\eqref{eq:grad_Ln_computation} and~\eqref{eq:bound_on_norm_d} allows us to bound the first term in~\eqref{eq:lipschitz_bound_Rn} as \begin{align*} &\sup_{{\boldsymbol \Theta}'\in {\mathcal S}_p^{\textsf{k}}} \left| \inner{\grad_{{\boldsymbol \Theta}}L_n({\boldsymbol \Theta}', {\boldsymbol X}, {\boldsymbol y}({\boldsymbol X})),\left({\boldsymbol \Theta} - \widetilde {\boldsymbol \Theta} \right)}_F\right|\\ &\hspace{50mm}\stackrel{(a)}{=} \frac1n \sup_{{\boldsymbol \Theta}'\in{\mathcal S}_p^\textsf{k}} \left| \inner{{\boldsymbol D}({\boldsymbol \Theta}',{\boldsymbol X},{\boldsymbol y}),{\boldsymbol X}\left({\boldsymbol \Theta} - \widetilde {\boldsymbol \Theta} \right)}_F\right|\\ &\hspace{50mm}\stackrel{(b)}\le \frac1n \sup_{{\boldsymbol \Theta}'\in{\mathcal S}_p^\textsf{k}}\norm{{\boldsymbol D}({\boldsymbol \Theta}',{\boldsymbol X},{\boldsymbol y})}_F \norm{{\boldsymbol X}}_{{\mathcal S}_p} \norm{{\boldsymbol \Theta} - \widetilde{\boldsymbol \Theta}}_F\\ &\hspace{50mm}\stackrel{(c)}{\le} \frac{C_5}{n} \sup_{{\boldsymbol \Theta}'\in{\mathcal S}_p^\textsf{k}} \left(\norm{{\boldsymbol X}}_{{\mathcal S}_p} \norm{{\boldsymbol \Theta}'}_F + \norm{{\boldsymbol y}}_2 + \sqrt{n}\right) \norm{{\boldsymbol X}}_{{\mathcal S}_p} \norm{{\boldsymbol \Theta} - \widetilde {\boldsymbol \Theta}}_F, \end{align*} 
where $(a)$ follows from Eq.~\eqref{eq:grad_Ln_computation}, $(b)$ follows from the assumption that ${\mathcal S}_p$ is symmetric and convex, and $(c)$ follows from~\eqref{eq:bound_on_norm_d}. Finally, combining with~\eqref{eq:lipschitz_bound_Rn} we obtain \begin{align} \nonumber \left|\widehat R_n({\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) - \widehat R_n(\widetilde{\boldsymbol \Theta},{\boldsymbol X},{\boldsymbol y}({\boldsymbol X})) \right| &\le C_6 \left( \frac{\norm{{\boldsymbol X}}^2_{{\mathcal S}_p}}{n} + \frac{\norm{{\boldsymbol X}}_{{\mathcal S}_p}\norm{{\boldsymbol y}}_2}{n}+ 1 \right) \norm{{\boldsymbol \Theta} -\widetilde {\boldsymbol \Theta}}_F \end{align} for some constant $C_6$ depending only on $\sOmega$. This concludes the proof. \subsection{Proof of Lemma~\ref{lemma:dct_bound}} \label{proof:dct_bound} Let us expand $\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT \widetilde{\boldsymbol u}_{t,1}$ via the definition of $\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)$ in~\eqref{eq:bd_def}: \begin{align} \nonumber \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT \widetilde{\boldsymbol u}_{t,1} &= \sum_{k=1}^\textsf{k}\partial_k \ell\left({\boldsymbol \Theta}_0^\sT {\boldsymbol u}_{t,1}; \eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol u}_{t,1}, \epsilon_1) \right) {\boldsymbol \theta}_k^\sT \widetilde{\boldsymbol u}_{t,1}\\ &\hspace{10mm}+\sum_{k=1}^{\textsf{k}^\star}\partial_k^\star \ell\left({\boldsymbol \Theta}_0^\sT {\boldsymbol u}_{t,1}; \eta({\boldsymbol \Theta}^{\star\sT}{\boldsymbol u}_{t,1}, \epsilon_1) \right) {\boldsymbol \theta}_k^{\star\sT} \widetilde{\boldsymbol u}_{t,1}. 
\label{eq:bd_expansion} \end{align} where we defined \begin{equation} \nonumber \partial_k \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) := \frac{\partial}{\partial v_k} \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)),\quad\quad \partial_k^\star \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) := \frac{\partial}{\partial{v^\star_k}} \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) \end{equation} for ${\boldsymbol v}\in\R^\textsf{k},$ ${\boldsymbol v}^\star \in\R^{\textsf{k}^\star}$ and $v\in\R$. Now recall that $\ell$ and $\eta$ are Lipschitz by Assumption~\hyperref[ass:loss_labeling_prime]{1'}, so we can bound \begin{align*} \left|\partial_k \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v))\right| &\le C_0 \left( \norm{{\boldsymbol v}}_2 + \left|\eta({\boldsymbol v}^\star,v) \right| + 1 \right) \le C_1\left( \norm{{\boldsymbol v}}_2 + \norm{{\boldsymbol v}^\star}_2^2 + \left|v\right|^2 + 1 \right) \\ \left| \partial_k^\star \ell({\boldsymbol v};\eta({\boldsymbol v}^\star,v)) \right| &\le C_0 \left( \norm{{\boldsymbol v}}_2+ |\eta({\boldsymbol v}^\star,v)|+ 1 \right)\left|\frac{\partial }{\partial v_{k}^\star}\eta({\boldsymbol v}^\star,v) \right| \le C_2\left( \norm{{\boldsymbol v}}_2^2 + \norm{{\boldsymbol v}^\star}_2^3 + \left|v\right|^3 + 1 \right), \end{align*} for $C_0,C_1,C_2$ depending only on $\sOmega$. Moreover, for any fixed $m>0$ and ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$ we have \begin{align} \nonumber \E\left[\norm{{\boldsymbol \Theta}^\sT{\boldsymbol u}_{t,1}}_2^m\right]\le C_3(\textsf{k})\sum_{k=1}^\textsf{k}\E\left[\left|{\boldsymbol \theta}_k^\sT{\boldsymbol u}_{t,1}\right|^m \right] \le C_4 \end{align} for $C_4$ depending only on $\sOmega$, since $\sup_{t\in[0,\pi/2]}\sup_{{\boldsymbol \theta}\in{\mathcal S}_p} \norm{{\boldsymbol \theta}^\sT {\boldsymbol u}_{t,1}}_{\psi_2} \le 2\textsf{R}\textsf{K}$ by Assumption~\ref{ass:X}. A similar bound clearly holds for $\widetilde{\boldsymbol u}_{t,1}$. 
Hence, using that $\textsf{k},\textsf{k}^\star$ are assumed to be fixed, an application of H\"older's inequality gives \begin{equation} \nonumber \E\left[ \left(\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT \widetilde{\boldsymbol u}_{t,1}\right)^4 \right] \le C_5 \end{equation} for some $C_5$ depending only on $\sOmega$, where we also used that $\epsilon_1$ is assumed to be subgaussian by Assumption~\ref{ass:noise}. Therefore, we have \begin{align*} \E_\up{1}\left[\left( \frac{ \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) ^\sT\widetilde{\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right)^2\right] &\stackrel{(a)}{\le} C_5^{1/2}\E_\up{1}\left[ \frac{1} {\left(\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}\right)^4}\right]^{1/2} \\ &\stackrel{(b)}{\le} C_5^{1/2} \left(\inner{\E_\up{1}\left[e^{4\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}\right]}_{\boldsymbol \Theta}^\up{1} \right)^{1/2}\\ &\stackrel{(c)}{\le} C_5^{1/2} C(\beta)^{1/2} \end{align*} for $C_5$ depending only on $\sOmega$ and $C(\beta)$ depending only on $\sOmega$ and $\beta$. Here, in $(a)$ we used the Cauchy-Schwarz inequality together with the bound $e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)} \le 1$, which holds since $\ell$ and $\beta$ are nonnegative, in $(b)$ we used Jensen's inequality and that $p^\up{1}({\boldsymbol \Theta};t)$ as defined in~\eqref{eq:inner_def} is independent of $({\boldsymbol x}_1,{\boldsymbol g}_1,\epsilon_1)$, and in $(c)$ we used the integrability condition of Assumption~\hyperref[ass:loss_prime]{1'}. Recalling that ${\boldsymbol \Theta}^\star \in{\mathcal S}_p^{\textsf{k}^\star}$ by Assumption~\ref{ass:ThetaStar} and taking the supremum over ${\boldsymbol \Theta}_0 \in {\mathcal S}_p^\textsf{k}$ establishes the first inequality in the statement of the lemma. To establish the second inequality, recall the explicit form of the derivative from~\eqref{eq:derivative_explicit} and note that ${\boldsymbol u}_{t,i}$ are i.i.d.
for different $i$ so that \begin{align*} \left| \E\left[\frac{\partial}{\partial t}\psi(f_\alpha(\beta,{\boldsymbol U}_t)) \right]\right| &\le \norm{\psi'}_\infty \E\left[\sup_{{\boldsymbol \Theta}_0 \in{\mathcal C}_p^\textsf{k}} \E_{\up{1}}\left[ \left( \frac{ \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) ^\sT\widetilde{\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right)^2 \right]^{1/2} \right]\\ &\le \norm{\psi'}_\infty C_6(\beta) \end{align*} for $C_6(\beta)$ depending only on $\sOmega$ and $\beta$, where we used that ${\mathcal C}_p \subseteq {\mathcal S}_p$ by Assumption~\ref{ass:set}. Hence, the bound holds uniformly in $t\in[0,\pi/2]$ and $n\in\Z_{>0}$ as desired. \subsection{Proof of Lemma~\ref{lemma:poly_approx}} \label{section:proof_poly_approx} Recall the definitions of ${\boldsymbol u}_{t,i}, \widetilde {\boldsymbol u}_{t,i}, \widehat{\boldsymbol d}_{t,i}({\boldsymbol \Theta})$ and $\inner{\,\cdot\,}_{\boldsymbol \Theta}^\up{i}$ in equations~\eqref{eq:slepian},~\eqref{eq:bd_def} and~\eqref{eq:inner_def} respectively. Further, recall the shorthand notation $\widehat \ell_{t,i}({\boldsymbol \Theta})$ for the loss, and $\E_\up{i}$ for the conditional expectation, all defined in Section~\ref{section:proof_outline}. Throughout, we fix $i=1$ as in the statement of the lemma. 
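The rest of this subsection controls the reciprocal of the Gibbs normalization by the truncated geometric series $P_M(x) = \sum_{l=0}^M (1-x)^l$ for $1/x$, with remainder $R_M(x) = 1/x - P_M(x) = (1-x)^{M+1}/x$, analyzed in Lemma~\ref{lemma:P_M} below. A quick numerical sanity check of the remainder identity and of the decay of $\sup_{[s,1]}|R_M|$ (an illustration only, not part of the proof):

```python
import numpy as np

def P(M, x):
    # truncated geometric series for 1/x centered at x = 1
    return sum((1.0 - x) ** l for l in range(M + 1))

s = 0.2
x = np.linspace(s, 1.0, 1001)  # the interval [s, 1]

for M in (1, 5, 20):
    # closed-form remainder: R_M(x) = 1/x - P_M(x) = (1-x)^(M+1)/x
    assert np.allclose(1.0 / x - P(M, x), (1.0 - x) ** (M + 1) / x)

# sup over [s,1] is attained at x = s and equals (1-s)^(M+1)/s -> 0 as M grows
sups = [np.max(np.abs(1.0 / x - P(M, x))) for M in (1, 5, 20)]
print(sups)
```
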
Define the event \begin{align} \label{def:G_Theta,B} {\mathcal G}_{{\boldsymbol \Theta},B} &:= \left\{\left|{{\boldsymbol \theta}_k^\sT{\boldsymbol u}_1}\right| \vee \left|{{\boldsymbol \theta}_k^\sT\widetilde{\boldsymbol u}_1}\right| \le B \textrm{ for all } k\in[\textsf{k}]\right\} \bigcap \left\{\left|{{\boldsymbol \theta}_k^{*\sT}{\boldsymbol u}_1}\right| \vee \left|{{\boldsymbol \theta}_k^{*\sT}\widetilde{\boldsymbol u}_1}\right| \le B \textrm{ for all } k\in[\textsf{k}^*]\right\}\nonumber\\ &\hspace{10mm}\bigcap \Big\{|\epsilon_1|\le B \Big\}, \end{align} defined for ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k},{\boldsymbol \Theta}^*\in{\mathcal S}_p^{\textsf{k}^\star}$ and $B>0$. The notation in ${\mathcal G}_{{\boldsymbol \Theta},B}$ indicates that, throughout, we think of ${\boldsymbol \Theta}^*$ as fixed. The following lemma follows from standard subgaussian tail bounds. \begin{lemma} \label{lemma:P_G_c} For any $B>0$, we have constants $C,C' >0$ depending only on~$\sOmega$ such that \begin{equation} \nonumber \sup_{{\boldsymbol \Theta} \in{\mathcal S}_p^\textsf{k}} \P\left( {\mathcal G}_{{\boldsymbol \Theta},B}^c \right) \le C e^{-C' B^2}. \end{equation} \end{lemma} \begin{proof} From the definition of ${\boldsymbol u}_{t,1}$ along with Assumption~\ref{ass:X}, we have $\norm{{\boldsymbol \theta}^\sT{\boldsymbol u}_{t,i}}_{\psi_2} \le 2 \textsf{R}\textsf{K}$, and similarly for $\widetilde {\boldsymbol u}_{t,1}.$ Furthermore, Assumption~\ref{ass:noise} asserts that $\norm{\epsilon_1}_{\psi_2} \le \textsf{K}$. 
So a union bound directly gives \begin{align} \P({\mathcal G}_{{\boldsymbol \Theta},B}^c) &= \P \left( \bigcup_{k\le \textsf{k}} \left\{ \left|{{\boldsymbol \theta}_k^\sT{\boldsymbol u}_1}\right| \vee \left|{{\boldsymbol \theta}_k^\sT\widetilde{\boldsymbol u}_1}\right| > B \right\} \bigcup_{k\le \textsf{k}^*} \left\{ \left|{{\boldsymbol \theta}_k^{*\sT}{\boldsymbol u}_1}\right| \vee \left|{{\boldsymbol \theta}_k^{*\sT}\widetilde{\boldsymbol u}_1}\right| > B \right\} \bigcup \Big\{ |\epsilon_1| > B \Big\} \right)\nonumber \\ &\le \sum_{k\le \textsf{k}} \left(\P\left(|{\boldsymbol \theta}_k^\sT {\boldsymbol u}_1| > B\right) + \P\left(|{\boldsymbol \theta}_k^\sT \widetilde{\boldsymbol u}_1| > B\right)\right)\nonumber\\ &\hspace{10mm}+ \sum_{k\le \textsf{k}^*}\left( \P\left(|{\boldsymbol \theta}_k^{*\sT} {\boldsymbol u}_1| > B\right) +\P\left(|{\boldsymbol \theta}_k^{*\sT} \widetilde{\boldsymbol u}_1| > B\right)\right) + \P\Big(|\epsilon_1| > B\Big) \nonumber\\ &\stackrel{(a)}{\le} C_0(\textsf{k} + \textsf{k}^* +1) \exp\left\{-\frac{C_1 B^2}{(2\textsf{R} + 1)^2\textsf{K}^2}\right\} \label{eq:P_G_c} \end{align} for some universal constants $C_0,C_1\in(0,\infty).$ \end{proof} Let us now consider the power series of $x\mapsto 1/x$ centered at 1, and its associated remainder \begin{equation} \nonumber P_M(x):= \sum_{l=0}^M (1-x)^l,\quad R_M(x):= \frac1x - P_M(x). \end{equation} We have the following properties of $P_M$ and $R_M$, whose proofs are elementary and are included here for the sake of completeness. \begin{lemma} \label{lemma:P_M} For $M > 0$, we have \begin{enumerate}[(i)] \item \label{item:P_M_1} $R_M(x) = (1-x)^{M+1}/{x}$ for $x\neq 0$; \item $R_M(x)^2$ is convex on $(0,1]$; \label{item:P_M_4} \item For any $s\in(0,1)$ and $\delta>0$, there exists $M>0$ such that $\sup_{t\in[s,1]}\left|R_M(t)\right| < \delta$. 
\label{item:P_M_5} \end{enumerate} \end{lemma} \begin{proof} For~\ref{item:P_M_1}, we write for $x \neq 0$, \begin{align*} R_M(x) &= \frac1{x} \left(1 - \Big(1 - (1-x)\Big)\sum_{l=0}^M(1-x)^l \right)\\ &= \frac1{x} \left(1 - \sum_{l=0}^M(1-x)^l + \sum_{l=1}^{M+1}(1-x)^{l} \right)\\ &=\frac{(1-x)^{M+1}}{x} \end{align*} as desired. For~\ref{item:P_M_4}, the convexity of $R_M(x)^2$ can be shown by noting that~\ref{item:P_M_1} gives \begin{equation} \nonumber \frac{\textrm{d}^2}{\textrm{d} x^2}\left(R_M(x)\right)^2= \frac {2(1-x)^{2M}\left(M(2M -1)x^2 + (4M -2)x + 3\right)} {x^4} \ge 0 \end{equation} for all $x\in(0,1]$ and $M>0$. Finally,~\ref{item:P_M_5} follows from~\ref{item:P_M_1}: for $t\in[s,1]$ we have $\left|R_M(t)\right| = (1-t)^{M+1}/t \le (1-s)^{M+1}/s$, which tends to $0$ as $M\to\infty$. \end{proof} The following lemma bounds the error in the approximation, and is the key for proving Lemma~\ref{lemma:poly_approx}. \begin{lemma} \label{lemma:R_M_bound} For any $\delta>0$ and $\beta>0$, there exists some finite integer $M_{\beta,\delta}>0$, depending only on $\beta,\delta$ and $\sOmega$ such that \begin{equation} \nonumber \E_\up{1}\left[ R_{M_{\beta,\delta}}\left(\inner{ e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta}\right)^2\right] < \delta \end{equation} uniformly in $n$.
\end{lemma} \begin{proof} Recall the definition of ${\mathcal G}_{{\boldsymbol \Theta},B}$ in~\eqref{def:G_Theta,B} for $B>0$ and ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$ and write for arbitrary integer $M>0$, \begin{align} \E_\up{1} \left[R_M\left(\inner{ e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta}) } }_{\boldsymbol \Theta}^\up{1}\right)^2\right] &\stackrel{(a)}\le \inner{\E_\up{1}\left[ R_M\left( e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta}) } \right)^2 \right]}^\up{1}_{\boldsymbol \Theta}\nonumber\\ &= \inner{\E_\up{1}\left[ R_M\left( e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta})} \right)^2 \mathbf{1}_{{\mathcal G}_{{\boldsymbol \Theta},B}} \right]}^\up{1}_{\boldsymbol \Theta} \label{eq:P_M_approx_bound} \\ &\hspace{9mm}+ \inner{\E_\up{1}\left[ R_M\left( e^{ -\beta\widehat\ell_{t,1}({\boldsymbol \Theta})} \right)^2 \mathbf{1}_{{\mathcal G}^c_{{\boldsymbol \Theta}, B}} \right]}^\up{1}_{\boldsymbol \Theta} \label{eq:PM_approx_proof_2nd_term} \end{align} where $(a)$ follows from Jensen and point$~\ref{item:P_M_4}$ of Lemma~\ref{lemma:P_M} asserting the convexity of $R_M^2$ on $(0,1]$. The expectation in the second term can be bounded uniformly over ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$, namely \begin{align*} \E_\up{1}\left[ R_M\left(e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}\right)^2 \mathbf{1}_{{\mathcal G}_{{\boldsymbol \Theta},B}^c} \right] &\le \E_\up{1}\left[ R_M\left(e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}\right)^4\right]^{1/2}\P\left({\mathcal G}_{{\boldsymbol \Theta},B}^c\right)^{1/2} \\ &\stackrel{(a)}{\le} \E_\up{1}\left[\left(\frac{1}{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}\right)^4 \right]^{1/2} C_0 e^{- C_1 B^2}\\ &\stackrel{(b)}{\le} {C_2(\beta) C_0e^{-C_1 B^2}} \end{align*} for some constant $C_2(\beta)$ depending only on $\beta$ and $\sOmega$, and $C_0,C_1$ depending only on $\sOmega$. 
Here, $(a)$ follows from point~\ref{item:P_M_1} of Lemma~\ref{lemma:P_M} along with the tail bound from Lemma~\ref{lemma:P_G_c}, and that $\ell$ is assumed to be nonnegative, and $(b)$ follows from the integrability condition~\eqref{eq:exp_integrability} of Assumption~\hyperref[ass:loss_labeling_prime]{1'}. For a given $\delta\in(0,1)$, we can find some $B_{\beta,\delta}>0$ sufficiently large, depending only on $\beta,\delta$ and $\sOmega$ such that $C_2(\beta) C_0e^{-{C_1B_{\beta,\delta}^2}} < \delta/2$, thus bounding the term in~\eqref{eq:PM_approx_proof_2nd_term} by $\delta/2$. Then for this fixed $B_{\beta,\delta}$, since on ${\mathcal G}_{{\boldsymbol \Theta},B_{\beta,\delta}}$ the arguments $\left({\boldsymbol \Theta}^\sT{\boldsymbol u}_{t,1}, {\boldsymbol \Theta}^{*\sT}{\boldsymbol u}_{t,1},\epsilon_1\right)$ lie in a fixed compact set and the composition of $\ell$ and $\eta$ is continuous, there exists some $\widetilde B_{\beta,\delta}>0$ such that, for any ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$, on the event ${\mathcal G}_{{\boldsymbol \Theta},B_{\beta,\delta}}$, \begin{equation*} \widehat \ell_{t,1}({\boldsymbol \Theta}) = \ell\left({\boldsymbol \Theta}^\sT{\boldsymbol u}_{t,1}; \eta({\boldsymbol \Theta}^{*\sT}{\boldsymbol u}_{t,1},\epsilon_1)\right) \in \left[0,\widetilde B_{\beta,\delta}\right]. \end{equation*} Therefore, for any ${\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}$, on ${\mathcal G}_{{\boldsymbol \Theta},B_{\beta,\delta}}$, \begin{equation*} e^{-\beta\widehat\ell_{t,1}\left({\boldsymbol \Theta}\right)} \in \left[e^{-\beta\widetilde B_{\beta,\delta}}, 1\right]. \end{equation*} Then, by point~\ref{item:P_M_5} of Lemma~\ref{lemma:P_M}, we can choose $M = M_{\beta,\delta}$ a sufficiently large integer so that \begin{equation} \nonumber |R_{M_{\beta,\delta}}(t)| < \sqrt{\delta/2} \quad \textrm{ for all } t\in \left[e^{-\beta \widetilde B_{\beta,\delta}} , 1\right].
\end{equation} This gives the bound on~\eqref{eq:P_M_approx_bound}: \begin{align*} \inner{\E_\up{1}\left[ R_M\left(e^{-\beta\widehat\ell_{t,1}\left({\boldsymbol \Theta}\right)}\right)^2\mathbf{1}_{{\mathcal G}_{{\boldsymbol \Theta},{B_{\beta,\delta}}}}\right]}^\up{1}_{\boldsymbol \Theta} \le \frac\delta2, \end{align*} which when combined with the bound on $\eqref{eq:PM_approx_proof_2nd_term}$ yields the claim of the lemma. \end{proof} Finally, let us complete the proof of Lemma~\ref{lemma:poly_approx}. \begin{proof} Let $C$ be the constant in Lemma~\ref{lemma:dct_bound} guaranteeing that \begin{equation} \nonumber \left|\E_\up{1}\left[\left( \widetilde{\boldsymbol u}_{t,1}^\sT \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)} \right)^2\right]\right| \le \left|\E_\up{1}\left[\left( \frac{ \widetilde{\boldsymbol u}_{t,1}^\sT \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}_{\boldsymbol \Theta}^\up{1}}\right)^2\right]\right| \le C. 
\end{equation} Fix $\delta>0$, and let $N_{\beta,\delta} := M_{\beta,\delta^2/C}$ so that Lemma~\ref{lemma:R_M_bound} holds with $\delta$ replaced by $\delta^2/C.$ Then, we directly have via an application of the Cauchy-Schwarz inequality \begin{align*} &\left|\E_\up{1}\left[ \frac{\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT\widetilde{\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0)}} {\inner{e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta})}}^\up{1}_{\boldsymbol \Theta}}\right]\right| \\ &\hspace{25mm}\le \left|{\E_\up{1}\left[ \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT\widetilde{\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1}({\boldsymbol \Theta}_0) } P_{N_{\beta,\delta}}\left(\inner{e^{-\beta\widehat\ell_{t,1} ({\boldsymbol \Theta}) }}^\up{1}_{\boldsymbol \Theta}\right) \right]} \right| \\ &\hspace{35mm}+ \E_\up{1} \left[ \left(\widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT \widetilde {\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1} ({\boldsymbol \Theta}_0) } \right)^2 \right]^{1/2} \E_\up{1}\left[ R_{N_{\beta,\delta}}\left(\inner{e^{-\beta\widehat\ell_{t,1} ({\boldsymbol \Theta}) }}^\up{1}_{\boldsymbol \Theta}\right)^2 \right]^{1/2} \\ &\hspace{25mm}\le \left|{\E_\up{1}\left[ \widehat{\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0)^\sT\widetilde{\boldsymbol u}_{t,1} e^{-\beta\widehat\ell_{t,1} ({\boldsymbol \Theta}_0) } P_{N_{\beta,\delta}}\left(\inner{e^{-\beta\widehat\ell_{t,1} ({\boldsymbol \Theta}) }}^\up{1}_{\boldsymbol \Theta}\right) \right]} \right| + \delta \end{align*} as desired. \end{proof} \subsection{Proof of Lemma~\ref{lemma:gaussian_approx}} \label{section:proof_gaussian_approx} This section is dedicated to proving Lemma~\ref{lemma:gaussian_approx}. The first step is extending Eq.~\eqref{eq:condition_bounded_lipschitz_single} as follows. \begin{lemma} \label{lemma:proj} Suppose Assumption~\ref{ass:X} holds.
Let $K >0$ be a fixed integer, and $\widetilde {\boldsymbol g} \sim\cN(0,\boldsymbol{\Sigma}_{\boldsymbol g})$ an independent copy of ${\boldsymbol g}$. Then for any bounded Lipschitz function $\varphi:\R^{2K} \to \R$, we have \begin{equation} \nonumber \lim_{p\to\infty}\sup_{{\boldsymbol H} = ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K)\in{\mathcal S}_p^K }\left|\E\left[\varphi\left({\boldsymbol H}^\sT {\boldsymbol x},{\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right)\right] - \E\left[\varphi \left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right)\right]\right| = 0. \end{equation} \end{lemma} \begin{proof} Let ${\boldsymbol H} = ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K) \in{\mathcal S}_p^K$ be arbitrary. Let $M=2K$ and define \begin{equation} \nonumber \widetilde {\boldsymbol H}:=\left(\begin{array}{@{}cc@{}} \begin{matrix} {\boldsymbol H} \end{matrix} & \mbox{\normalfont\large\bfseries 0} \vspace{2mm} \\ \mbox{\normalfont\large\bfseries 0} & \begin{matrix} {\boldsymbol H} \end{matrix} \end{array}\right) \in\R^{2p\times M}, \quad {\boldsymbol v}:= \left({\boldsymbol x}^\sT ,\widetilde {\boldsymbol g}^\sT \right)^\sT \in\R^{2p}, \quad\textrm{and}\quad {\boldsymbol h}:= \left({\boldsymbol g}^\sT ,\widetilde {\boldsymbol g}^\sT \right)^\sT\in\R^{2p}, \end{equation} so that $\left({\boldsymbol H}^\sT {\boldsymbol x}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) = \widetilde {\boldsymbol H}^\sT {\boldsymbol v}$ and $\left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) = \widetilde {\boldsymbol H}^\sT {\boldsymbol h}$. Consider any bounded Lipschitz function $\varphi:\R^M\to\R$, and define ${\boldsymbol \alpha} \sim \cN(0,\delta^2 {\boldsymbol I}_{M})$ for $\delta >0$.
We can decompose \begin{align} &\left|\E\left[\varphi\left({\boldsymbol H}^\sT {\boldsymbol x},{\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right)\right] - \E\left[\varphi \left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right)\right]\right|\nonumber\\ &\hspace{8mm}= \left|\E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol v}\right)\right] - \E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}\right)\right]\right|\nonumber\\ &\hspace{8mm}\le \left|\E\left[\varphi\left( \widetilde{\boldsymbol H}^\sT {\boldsymbol v}+ {\boldsymbol \alpha}\right)\right] - \E\left[\varphi\left( \widetilde{\boldsymbol H}^\sT {\boldsymbol v} \right)\right]\right| + \left| \E\left[\varphi\left( \widetilde{\boldsymbol H}^\sT {\boldsymbol h} + {\boldsymbol \alpha}\right)\right] -\E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}\right)\right] \right| \label{eq:first_term_proj} \\ &\hspace{18mm}+ \left| \E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol v} + {\boldsymbol \alpha}\right)\right] -\E\left[\varphi\left( \widetilde{\boldsymbol H}^\sT {\boldsymbol h} + {\boldsymbol \alpha}\right)\right] \right|. \label{eq:third_term_proj} \end{align} Both terms on the right-hand side in line~\eqref{eq:first_term_proj} are similar and can be bounded in an analogous manner. Namely, we can write for the second of these \begin{align} \left|\E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h} + {\boldsymbol \alpha} \right) - \varphi\left( \widetilde{\boldsymbol H}^\sT{\boldsymbol h} \right)\right]\right| &\le \E\left[\left|\varphi\left(\widetilde{\boldsymbol H}^\sT{\boldsymbol h} + {\boldsymbol \alpha} \right) - \varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}\right)\right|\right] \nonumber\\ &\le \norm{\varphi}_\textrm{Lip} \E \norm{{\boldsymbol \alpha}}_2\nonumber\\ &\le \sqrt{M} \norm{\varphi}_\textrm{Lip}\delta \label{eq:projections_first_bound} \end{align} and similarly for the first term.
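The display above uses only the Lipschitz constant of $\varphi$ and the bound $\E\norm{{\boldsymbol \alpha}}_2 \le (\E\norm{{\boldsymbol \alpha}}_2^2)^{1/2} = \delta\sqrt{M}$. A small Monte Carlo illustration of this smoothing bound; the test function and the law of the unperturbed vector below are arbitrary choices for illustration, not objects from the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
M, delta, N = 4, 0.3, 200_000

def phi(x):
    # arbitrary bounded 1-Lipschitz test function (illustrative choice)
    return np.minimum(np.linalg.norm(x, axis=-1), 1.0)

w = rng.uniform(-1.0, 1.0, size=(N, M))      # stand-in for the unperturbed vector
alpha = delta * rng.standard_normal((N, M))  # alpha ~ N(0, delta^2 I_M)

# E||alpha||_2 <= (E||alpha||_2^2)^{1/2} = sqrt(M) * delta
mean_norm = np.linalg.norm(alpha, axis=1).mean()

# |E phi(w + alpha) - E phi(w)| <= ||phi||_Lip * E||alpha||_2 <= sqrt(M) * delta
gap = abs(phi(w + alpha).mean() - phi(w).mean())
print(mean_norm, gap, np.sqrt(M) * delta)
```
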
Now for the term on line~\eqref{eq:third_term_proj}, we have for any random variable ${\boldsymbol w} \in\R^{M}$, \begin{align} \nonumber \E\left[\varphi({\boldsymbol w} + {\boldsymbol \alpha})\right] &= \frac{1}{(2\pi)^M}\int\int \varphi({\boldsymbol s})\exp\left\{i {\boldsymbol t}^\sT {\boldsymbol s}- \delta^2\frac{\norm{{\boldsymbol t}}_2^2}{2}\right\} \phi_{\boldsymbol w}({\boldsymbol t}) \textrm{d}{\boldsymbol t} \textrm{d}{\boldsymbol s}, \end{align} where $\phi_{\boldsymbol w}({\boldsymbol t}) := \int \exp\left\{-i {\boldsymbol t}^\sT {\boldsymbol y}\right\} \P_{\boldsymbol w}(\textrm{d} {\boldsymbol y})$ is the (reflected) characteristic function of ${\boldsymbol w}$. Using this representation and denoting the characteristic functions of $\widetilde{\boldsymbol H}^\sT{\boldsymbol v}$ and $\widetilde{\boldsymbol H}^\sT{\boldsymbol h}$ by $\phi_{{\boldsymbol v},{\boldsymbol H}}$ and $\phi_{{\boldsymbol h},{\boldsymbol H}}$ respectively, we have \begin{align} &\left| \E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT{\boldsymbol v} + {\boldsymbol \alpha}\right)\right] -\E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}+ {\boldsymbol \alpha}\right)\right] \right| \nonumber \\ &\hspace{1mm}= \left|\frac{1}{(2\pi)^M}\int\int \varphi({\boldsymbol s})e^{i{\boldsymbol t}^\sT {\boldsymbol s} - \delta^2\norm{{\boldsymbol t}}^2/2}\big(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \big)\textrm{d}{\boldsymbol t} \textrm{d}{\boldsymbol s} \right| \nonumber \\ &\hspace{1mm}\le \frac{1}{(2\pi)^M}\int \left|\varphi({\boldsymbol s})\right|\left(\int e^{2i{\boldsymbol t}^\sT {\boldsymbol s} - \delta^2\norm{{\boldsymbol t}}^2/2} \textrm{d}{\boldsymbol t} \right)^{1/2} \left(\int \big(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \big)^2e^{-\delta^2{\norm{{\boldsymbol t}}^2}/{2}}\textrm{d}{\boldsymbol t}\right)^{1/2} 
\textrm{d}{\boldsymbol s} \nonumber \\ &\hspace{1mm}\stackrel{(a)}{=} \frac{1}{(\delta^2)^{M/4} (2\pi)^{3M/4}}\int \left|\varphi({\boldsymbol s})\right| e^{- {\norm{{\boldsymbol s}}^2}/{\delta^2} }d{\boldsymbol s} \left(\int\left(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right)^2e^{-\delta^2{\norm{{\boldsymbol t}}^2}/{2}}\textrm{d} {\boldsymbol t} \right)^{1/2} \nonumber\\ &\hspace{1mm}= \frac{1}{2^{M/2}}\left(\frac{\delta^2}{2\pi}\right)^{M/4}\E\left[\left|\varphi\left(\frac{{\boldsymbol \alpha}}{\sqrt2}\right)\right|\right] \left( \int\left(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right)^2e^{-\delta^2{\norm{{\boldsymbol t}}^2}/{2}}\textrm{d}{\boldsymbol t} \right)^{1/2}\nonumber\\ &\hspace{1mm}\le \frac{\norm{\varphi}_\infty}{2^{M/2}} \left( \left(\frac{\delta^2}{2\pi}\right)^{M/2} \int\left(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right)^2e^{-\delta^2{\norm{{\boldsymbol t}}^2}/{2}}\textrm{d} {\boldsymbol t} \right)^{1/2}\nonumber\\ &\hspace{1mm}= \frac{\norm{\varphi}_\infty }{2^{M/2}} \E\left[ \Big(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta)\Big)^2 \right]^{1/2}, \label{eq:projections_second_bound} \end{align} where ${\boldsymbol{\tau}}_\delta \sim\cN(0,{\boldsymbol I}_M/\delta^2)$. Note that in $(a)$ we used \begin{align} \nonumber \int \exp\left\{2i{\boldsymbol t}^\sT {\boldsymbol s} - \delta^2\frac{\norm{{\boldsymbol t}}^2}{2}\right\} \textrm{d} {\boldsymbol t} &= \left(\frac{2\pi}{\delta^2}\right)^{M/2} \exp\left\{-2 \frac{\norm{{\boldsymbol s}}_2^2}{\delta^2} \right\}. \end{align} Fix ${\boldsymbol s}\in \R^K$ such that ${\boldsymbol s}\neq0$. 
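Step $(a)$ above rests on the Gaussian integral $\int \exp\{2i{\boldsymbol t}^\sT {\boldsymbol s} - \delta^2\norm{{\boldsymbol t}}^2/2\} \textrm{d}{\boldsymbol t} = (2\pi/\delta^2)^{M/2}\exp\{-2\norm{{\boldsymbol s}}_2^2/\delta^2\}$, obtained by completing the square coordinate-wise. A numerical check of the one-dimensional case, with arbitrary parameter values:

```python
import numpy as np

delta, s = 0.7, 0.9
t = np.linspace(-60.0, 60.0, 1_200_001)
dt = t[1] - t[0]

# one-dimensional integrand; the imaginary part integrates to zero by symmetry
integrand = np.exp(2j * t * s - (delta ** 2) * t ** 2 / 2.0)
numeric = (integrand.sum() * dt).real

# completing the square gives sqrt(2*pi/delta^2) * exp(-2 s^2 / delta^2)
analytic = np.sqrt(2.0 * np.pi / delta ** 2) * np.exp(-2.0 * s ** 2 / delta ** 2)
print(numeric, analytic)
```
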
We have for any ${\boldsymbol H} = ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K) \in {\mathcal S}_p^K$, \begin{equation} \nonumber \frac{{\boldsymbol H}{\boldsymbol s}}{\norm{{\boldsymbol s}}_1} = \sum_{j=1}^K \frac{|s_j|}{\norm{{\boldsymbol s}}_1}\sign\{s_j\} {\boldsymbol \theta}_j. \end{equation} Recalling that ${\mathcal S}_p$ is symmetric, we see that $\sign\{s_j\}{\boldsymbol \theta}_j \in{\mathcal S}_p$ for all $j\in[K]$, and then the convexity of ${\mathcal S}_p$ implies that ${\boldsymbol H}{\boldsymbol s}/\norm{{\boldsymbol s}}_1\in{\mathcal S}_p$ for ${\boldsymbol s} \neq 0$. Letting $\phi_{{\boldsymbol x},{\boldsymbol H}},\phi_{{\boldsymbol g},{\boldsymbol H}}$ be the characteristic functions of ${\boldsymbol H}^\sT{\boldsymbol x}, {\boldsymbol H}^\sT{\boldsymbol g}$ respectively and fixing ${\boldsymbol t} = ({\boldsymbol s},\widetilde {\boldsymbol s}) \in \R^M$, we have if ${\boldsymbol s}\neq 0$, \begin{align} &\limsup_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K}\left|\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right|^2\nonumber\\ &\hspace{10mm}\stackrel{(a)}{=} \limsup_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \left|\phi_{{\boldsymbol g},{\boldsymbol H}}(\widetilde{\boldsymbol s})\right|^2 \left|\phi_{{\boldsymbol x},{\boldsymbol H}}({\boldsymbol s}) - \phi_{{\boldsymbol g},{\boldsymbol H}}({\boldsymbol s}) \right|^2\nonumber\\ &\hspace{10mm}\stackrel{(b)}{\le} 2 \limsup_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \left|\phi_{{\boldsymbol x},{\boldsymbol H}}({\boldsymbol s}) - \phi_{{\boldsymbol g},{\boldsymbol H}}({\boldsymbol s}) \right| \nonumber\\ &\hspace{10mm}\le2 \limsup_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K}\left|\E\left[ \exp\{i{{\boldsymbol x}^\sT{\boldsymbol H}{\boldsymbol s}} \} \right] - \E\left[ \exp\{i{{\boldsymbol g}^\sT {\boldsymbol H}{\boldsymbol s}} \} \right] \right|\nonumber\\ &\hspace{10mm}= 
\limsup_{p\to\infty}\sup_{{\boldsymbol H} \in {\mathcal S}_p^K}\left|\E\left[ \exp\left\{i\norm{{\boldsymbol s}}_1 {\boldsymbol x}^\sT \left(\frac{{\boldsymbol H}{\boldsymbol s}}{\norm{{\boldsymbol s}}_1}\right)\right\} \right] - \E\left[ \exp\left\{i \norm{{\boldsymbol s}}_1 {\boldsymbol g}^\sT \left(\frac{{\boldsymbol H}{\boldsymbol s}}{\norm{{\boldsymbol s}}_1}\right)\right\}\right] \right|\nonumber\\ &\hspace{10mm}\stackrel{(c)}{\le} \limsup_{p\to\infty}\sup_{{\boldsymbol \theta} \in {\mathcal S}_p}\left|\E\left[ \exp\left\{i\norm{{\boldsymbol s}}_1 {\boldsymbol x}^\sT {\boldsymbol \theta}\right\} \right] - \E\left[ \exp\left\{i \norm{{\boldsymbol s}}_1 {\boldsymbol g}^\sT{\boldsymbol \theta}\right\}\right] \right| \nonumber \\ &\hspace{10mm}\stackrel{(d)}=0 \label{eq:s_equal_0} \end{align} where $(a)$ holds because of the independence of $\widetilde {\boldsymbol g}$ and $({\boldsymbol x},{\boldsymbol g})$, $(b)$ holds because $|\phi({\boldsymbol s})| \in[0,1]$ for all ${\boldsymbol s}\in\R^M$, $(c)$ holds since ${\boldsymbol H} {\boldsymbol s}/\norm{{\boldsymbol s}}_1 \in{\mathcal S}_p$ and $(d)$ holds by Assumption~\ref{ass:X} since $x\mapsto \exp({i\norm{{\boldsymbol s}}_1x})$ is bounded Lipschitz on $\R$ for fixed ${\boldsymbol s} \in\R^M$. Further, for ${\boldsymbol s} =0$, by the equality on line~\eqref{eq:s_equal_0} we immediately have $\left|\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right|^2 =0$ and hence for any fixed ${\boldsymbol t}\in\R^M,$ \begin{equation} \label{eq:lim_equal_zero_char} \lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K}\left|\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right|^2 = 0.
\end{equation} In conclusion, we have \begin{align*} &\lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K}\left|\E\left[\varphi\left(\widetilde{\boldsymbol H}^\sT{\boldsymbol v}\right)\right] - \E\left[\varphi(\widetilde{\boldsymbol H}^\sT {\boldsymbol h})\right]\right| \\ &\hspace{30mm}\stackrel{(a)}{\le} 2\sqrt{M}\norm{\varphi}_\textrm{Lip} \delta + \frac{\norm{\varphi}_\infty}{2^{M/2}} \lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \E\left[ \Big(\phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta)\Big)^2 \right]^{1/2}\\ &\hspace{30mm}\le 2\sqrt{M}\norm{\varphi}_\textrm{Lip} \delta + \frac{\norm{\varphi}_\infty}{2^{M/2}} \lim_{p\to\infty} \E\left[ \sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \Big( \phi_{{\boldsymbol v},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol{\tau}}_\delta)\Big)^2 \right]^{1/2}\\ &\hspace{30mm}\stackrel{(b)}= 2\sqrt{M} \norm{\varphi}_\textrm{Lip} \delta, \end{align*} where $(a)$ follows from the decomposition in~\eqref{eq:first_term_proj} and the bounds in~\eqref{eq:projections_first_bound} and~\eqref{eq:projections_second_bound}, and $(b)$ follows from the dominated convergence theorem along with the limit in~\eqref{eq:lim_equal_zero_char} and domination of the integrand $ \sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \left|\phi_{{\boldsymbol v},{\boldsymbol H}} ({\boldsymbol t}) - \phi_{{\boldsymbol h},{\boldsymbol H}}({\boldsymbol t}) \right|^2 \le 4$. Sending $\delta \to 0$ completes the proof. \end{proof} Now, via a truncation argument, we show that this can be extended to square-integrable locally Lipschitz functions. \begin{lemma} \label{lemma:proj_locally_lip} Suppose Assumption~\ref{ass:X} holds.
Let $K>0$ be a fixed integer and $\widetilde {\boldsymbol g} \sim\cN(0,\boldsymbol{\Sigma}_{\boldsymbol g})$ an independent copy of ${\boldsymbol g}$, and let $\varphi:\R^{2K} \to\R$ be a locally Lipschitz function satisfying \begin{align} \sup_{p\in \Z_{>0}}\sup_{{\boldsymbol H}=({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K)\in{\mathcal S}_p^K}\E\left[\left| \varphi\left({\boldsymbol H}^\sT {\boldsymbol x},{\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) \right|^2\right] &< \infty,\textrm{ and}\nonumber\\ \sup_{p\in \Z_{>0}}\sup_{{\boldsymbol H}=({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K)\in{\mathcal S}_p^K}\E\left[\left| \varphi\left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right)\right|^2\right] &< \infty. \label{eq:square_integrability_varphi} \end{align} Then \begin{equation} \nonumber \lim_{p\to\infty} \sup_{{\boldsymbol H}\in {\mathcal S}_p^K} \left|\E\left[ \varphi\left({\boldsymbol H}^\sT {\boldsymbol x},{\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) \right] - \E\left[ \varphi\left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) \right] \right| = 0. \end{equation} \end{lemma} \ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Lemma~\ref{lemma:proj_locally_lip}] }{ \begin{proof}{\textbf{of Lemma~\ref{lemma:proj_locally_lip} }} } Let ${\boldsymbol H} = ({\boldsymbol \theta}_1,\dots,{\boldsymbol \theta}_K) \in{\mathcal S}_p^K$ be arbitrary.
Let $M=2K$ and again define \begin{equation} \nonumber \widetilde {\boldsymbol H}:=\left(\begin{array}{@{}cc@{}} \begin{matrix} {\boldsymbol H} \end{matrix} & \mbox{\normalfont\large\bfseries 0} \vspace{2mm} \\ \mbox{\normalfont\large\bfseries 0} & \begin{matrix} {\boldsymbol H} \end{matrix} \end{array}\right) \in\R^{2p\times M}, \quad {\boldsymbol v}:= \left({\boldsymbol x}^\sT ,\widetilde {\boldsymbol g}^\sT \right)^\sT \in\R^{2p}, \quad\textrm{and}\quad {\boldsymbol h}:= \left({\boldsymbol g}^\sT ,\widetilde {\boldsymbol g}^\sT \right)^\sT\in\R^{2p}, \end{equation} so that $\left({\boldsymbol H}^\sT {\boldsymbol x}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) = \widetilde {\boldsymbol H}^\sT {\boldsymbol v}$ and $\left({\boldsymbol H}^\sT {\boldsymbol g}, {\boldsymbol H}^\sT \widetilde {\boldsymbol g}\right) = \widetilde {\boldsymbol H}^\sT {\boldsymbol h}$. First, we bound the probability of the tail event $\left\{ \norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol v}}_2 >B\right\}$ for $B>0$. We have \begin{align} \P\left(\norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol v}}_2 > B\right) &\le \P\left(\norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol v}}_\infty > \frac{B}{\sqrt{M}} \right)\nonumber \\ &\le \sum_{m=1}^K\left(\P\left(\left|{\boldsymbol x}^\sT {\boldsymbol \theta}_m\right| >\frac{B}{\sqrt{M}}\right) + \P\left(\left|\widetilde {\boldsymbol g}^\sT {\boldsymbol \theta}_m\right| >\frac{B}{\sqrt{M}}\right)\right) \nonumber\\ &\stackrel{(a)}{\le} C_0 M \exp\left\{-\frac{c_0 B^2}{M}\right\}\label{eq:tail_bound_bwx} \end{align} for some universal constants $c_0,C_0 \in (0,\infty)$ since ${\boldsymbol g}$ and ${\boldsymbol x}$ are subgaussian with constant subgaussian norm and ${\mathcal S}_p \subseteq B_2^p(\textsf{R})$. An analogous argument then shows \begin{equation} \nonumber \P\left(\norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol h}}_2 > B \right) \le C_0 M \exp\left\{ -\frac{c_1 B^2}{M}\right\}\label{eq:tail_bound_bwg} \end{equation} for some $c_1>0$. Now fix arbitrary $B>0$ and let \begin{equation} \nonumber u_{B}(t) := \begin{cases} 1 & t < B-1 \\ B-t & t\in[B-1,B)\\ 0 & t \ge B \end{cases}.
\end{equation} and define $\varphi_B({\boldsymbol s}) := \varphi({\boldsymbol s}) u_{B}\left(\norm{{\boldsymbol s}}_2\right)$. Noting that $\mathbf{1}_{\{\norm{{\boldsymbol s}}_2 \le B-1\}} \le u_{B}\left(\norm{{\boldsymbol s}}_2 \right) \le \mathbf{1}_{\{\norm{{\boldsymbol s}}_2 \le B\}}$ and that $u_{B}$ is Lipschitz, we see that $\varphi_B$ is bounded and Lipschitz. To see that it is indeed Lipschitz, take ${\boldsymbol s},{\boldsymbol t}$ with $\norm{{\boldsymbol t}}_2 \le \norm{{\boldsymbol s}}_2$, \begin{align} \left| \varphi_{B}({\boldsymbol t}) - \varphi_B({\boldsymbol s}) \right| &\le |\varphi({\boldsymbol t})|\mathbf{1}_{\left\{\norm{{\boldsymbol t}}_2\le B\right\}}|u_B(\norm{{\boldsymbol t}}_2) - u_B(\norm{{\boldsymbol s}}_2)|\nonumber\\ &\hspace{10mm}+ |u_B(\norm{{\boldsymbol s}}_2)|\mathbf{1}_{\{\norm{{\boldsymbol s}}_2\le B\}} |\varphi({\boldsymbol t}) - \varphi({\boldsymbol s})|\nonumber\\ &\le C_1(B) \norm{ {\boldsymbol t} - {\boldsymbol s}}_2 + C_2(B)\norm{{\boldsymbol t} - {\boldsymbol s}}_2 \label{eq:soft_lip_truncation} \end{align} for $C_1,C_2$ depending only on $B$ since $\varphi$ is locally Lipschitz.
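The properties of the soft truncation $u_B$ used above (the indicator sandwich and the Lipschitz bound) can be checked numerically; the grid and the value $B=3$ below are arbitrary:

```python
import numpy as np

def u_B(t, B):
    # piecewise-linear soft cutoff: 1 below B-1, linear ramp on [B-1, B), 0 above B
    return np.clip(B - t, 0.0, 1.0)

B = 3.0
t = np.linspace(0.0, 5.0, 50_001)
vals = u_B(t, B)

# indicator sandwich: 1{t <= B-1} <= u_B(t) <= 1{t <= B}
assert np.all((t <= B - 1.0).astype(float) <= vals)
assert np.all(vals <= (t <= B).astype(float))

# 1-Lipschitz: grid difference quotients never exceed 1 (up to float noise)
dq = np.abs(np.diff(vals)) / np.diff(t)
print(dq.max())
```
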
Hence, we can write \begin{align*} &\lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \left|\E \left[\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol v}\right) - \varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}\right) \right]\right|\\ &\hspace{10mm}\le \lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \left|\E \left[\varphi_B\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol v}\right) - \varphi_B\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h}\right) \right]\right|\\ &\hspace{15mm}+ \lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \E \left[\left(\left|\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol v} \right)\right|+\left|\varphi\left(\widetilde{\boldsymbol H}^\sT {\boldsymbol h} \right)\right|\right) \left(\mathbf{1}_{\{\norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol v}}_2 > B-1 \}} +\mathbf{1}_{\{\norm{\widetilde{\boldsymbol H}^\sT{{\boldsymbol h}}}_2 > B-1 \}}\right) \right]\\ &\hspace{10mm} \stackrel{(a)}\le C_3\lim_{p\to\infty}\sup_{{\boldsymbol H}\in{\mathcal S}_p^K} \E\left[\left|\varphi\left(\widetilde{\boldsymbol H}^\sT{\boldsymbol v}\right)\right|^2 + \left|\varphi\left(\widetilde{\boldsymbol H}^\sT{\boldsymbol h}\right)\right|^2\right]^{1/2} \hspace{-2mm}\P\left(\norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol v}}_2 \vee \norm{\widetilde{\boldsymbol H}^\sT {\boldsymbol h}}_2 > B-1 \right)^{1/2} \\ &\hspace{10mm}\stackrel{(b)}\le C_4 M \exp\left\{ -\frac{c_1 (B-1)^2}{M}\right\} \end{align*} for some $C_3,C_4,c_1> 0$ depending only on $\sOmega$. Here, the first inequality uses the decomposition $\varphi({\boldsymbol s}) = \varphi_B({\boldsymbol s}) + \varphi({\boldsymbol s})\left(1-u_B\left(\norm{{\boldsymbol s}}_2\right)\right)$ along with $1-u_B\left(\norm{{\boldsymbol s}}_2\right) \le \mathbf{1}_{\{\norm{{\boldsymbol s}}_2 > B-1\}}$, $(a)$ follows from Lemma~\ref{lemma:proj} applied to the bounded Lipschitz function $\varphi_B$ together with the Cauchy--Schwarz inequality, and $(b)$ follows from the tail bounds in equations~\eqref{eq:tail_bound_bwx} and~\eqref{eq:tail_bound_bwg} along with the square integrability assumption on $\varphi$. Sending $B\to\infty$ completes the proof. \end{proof} Now, we establish Lemma~\ref{lemma:gaussian_approx}.
\ifthenelse{\boolean{arxiv}}{ \begin{proof}[Proof of Lemma~\ref{lemma:gaussian_approx}] }{ \begin{proof}{\textbf{of Lemma~\ref{lemma:gaussian_approx} }} } Recall equations~\eqref{eq:slepian} and~\eqref{eq:bd_def} defining ${\boldsymbol u}_{t,1},\widetilde{\boldsymbol u}_{t,1}$ and $\widehat{\boldsymbol d}_{t,1}$, respectively, in terms of ${\boldsymbol x}_1$ and ${\boldsymbol g}_1$. Further, recall the definitions of $\widetilde {\boldsymbol g}_1,{\boldsymbol w}_{t,1},\widetilde {\boldsymbol w}_{t,1},\widetilde \epsilon_1$ and $\widehat {\boldsymbol q}_{t,1}$ in the statement of the lemma. Define ${\boldsymbol H} := ({\boldsymbol \Theta}^\star,{\boldsymbol \Theta}_0,{\boldsymbol \Theta}_1,\dots,{\boldsymbol \Theta}_J)$ and the function $\varphi$ by \begin{equation} \nonumber \varphi\left( {\boldsymbol H}^\sT{\boldsymbol x}_1, {\boldsymbol H}^\sT{\boldsymbol g}_1 \right) := \E\left[ \widetilde {\boldsymbol u}_{t,1}^\sT \widehat {\boldsymbol d}_{t,1}({\boldsymbol \Theta}_0) \exp\left\{-\beta \sum_{l=0}^J \ell\left({\boldsymbol \Theta}_l^\sT {\boldsymbol u}_{t,1}; \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol u}_{t,1},\epsilon_1 \right)\right)\right\} \Bigg| {\boldsymbol x}_1,{\boldsymbol g}_1 \right], \end{equation} i.e., the expectation is with respect to $\epsilon_1$. Since $\widetilde \epsilon_1$ has the same distribution as $\epsilon_1$, we have \begin{equation} \nonumber \varphi\left( {\boldsymbol H}^\sT\widetilde{\boldsymbol g}_1, {\boldsymbol H}^\sT{\boldsymbol g}_1 \right) = \E\left[ \widetilde {\boldsymbol w}_{t,1}^\sT \widehat {\boldsymbol q}_{t,1}({\boldsymbol \Theta}_0) \exp\left\{-\beta \sum_{l=0}^J \ell\left({\boldsymbol \Theta}_l^\sT {\boldsymbol w}_{t,1}; \eta\left({\boldsymbol \Theta}^{\star\sT}{\boldsymbol w}_{t,1},\widetilde\epsilon_1 \right)\right)\right\} \Bigg| \widetilde {\boldsymbol g}_1, {\boldsymbol g}_1 \right].
\end{equation} Now note that $\varphi$ is locally Lipschitz, by the Lipschitz assumption on the derivatives of $\ell$ and $\eta$ in Assumption~\hyperref[ass:loss_labeling_prime]{1'}. Additionally, \begin{equation} \nonumber \sup_{{\boldsymbol H} \in{\mathcal S}_p^{\textsf{k}^\star + (J+1)\textsf{k}}}\E\left[\left| \varphi\left({\boldsymbol H}^\sT {\boldsymbol x}_1,{\boldsymbol H}^\sT {\boldsymbol g}_1\right) \right|^2\right] \le \sup_{{\boldsymbol \Theta}^\star\in{\mathcal S}_p^{\textsf{k}^\star}, {\boldsymbol \Theta}\in{\mathcal S}_p^\textsf{k}}\E\left[\left( \widetilde {\boldsymbol u}_{t,1}^\sT \widehat {\boldsymbol d}_{t,1}({\boldsymbol \Theta})\right)^2 \right] \le C \end{equation} by Lemma~\ref{lemma:dct_bound} and the nonnegativity of $\ell$ and $\beta$. Since Assumption~\ref{ass:X} is also satisfied with $\widetilde {\boldsymbol g}_1$ in place of ${\boldsymbol x}_1$, we likewise have \begin{equation} \nonumber \sup_{{\boldsymbol H} \in{\mathcal S}_p^{\textsf{k}^\star + (J+1)\textsf{k}}}\E\left[\left| \varphi\left({\boldsymbol H}^\sT \widetilde{\boldsymbol g}_1,{\boldsymbol H}^\sT {\boldsymbol g}_1\right) \right|^2\right] \le C. \end{equation} Therefore, $\varphi$ satisfies the square integrability condition in~\eqref{eq:square_integrability_varphi} of Lemma~\ref{lemma:proj_locally_lip}. An application of this lemma then yields the claim of Lemma~\ref{lemma:gaussian_approx}. \end{proof} \subsection*{Acknowledgements} This work was supported by the NSF through award DMS-2031883, the Simons Foundation through Award 814639 for the Collaboration on the Theoretical Foundations of Deep Learning, the NSF grant CCF-2006489, the ONR grant N00014-18-1-2729, and an NSF GRFP award. \ifthenelse{\boolean{arxiv}}{ \bibliographystyle{amsalpha}
\section{Introduction} \label{section intro Introduction} TORCH (Time Of internally Reflected CHerenkov light) is a novel detector\,\cite{Charles_2011_TORCH, Gys_2016_RICH_proceedings}, under development to provide time-of-flight (ToF) measurements over a large area, up to around 30\,m$^2$. The detector provides charged-particle identification between 2 and 10~GeV/$c$ momentum over a flight distance of 10~m, and expands on the DIRC concept pioneered by the BaBar DIRC (Detection of Internally Reflected Cherenkov light) \cite{Adam_2005_DIRC-PID-for-BaBar} and Belle-II iTOP \cite{Abe_2010_Belle-II-TDR} collaborations. TORCH combines fast timing information with DIRC-type reconstruction, aiming to achieve a ToF resolution of approximately 10--15\,ps per incident charged track. TORCH uses a thin 10\,mm quartz sheet as the radiator, utilizing the fast signal from prompt Cherenkov radiation. Total internal reflection is used to propagate the photons to the perimeter of the radiator, where they are focused onto an array of Micro-Channel Plate photomultiplier tube (MCP-PMT) photon detectors, which measure photon angles and arrival times. \\ The time difference between a pion and a kaon over a 10\,m flight path is 35\,ps at 10\,GeV/$c$; a per-track time resolution of 10--15\,ps is therefore necessary to achieve a three-sigma pion/kaon separation. This leads to a required single-photon time resolution of 70\,ps, given the expectation of about 30 detected Cherenkov photons per incident track. To attain this level of performance, simulation has shown that a 1\,mrad resolution is required on the measurement of the photon angle\,\cite{Charles_2011_TORCH}. To meet this requirement, MCP-PMTs with a 53$\times$53~mm$^2$ active area and a pixel granularity of 128$\times$8 are necessary.
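The kinematics behind these numbers can be cross-checked with a short numerical sketch (illustrative only; the charged-pion and kaon masses are the standard PDG values, which are not quoted in the text):

```python
import math

# PDG masses in GeV/c^2 (assumption; not quoted in the text)
M_PI, M_K = 0.13957, 0.49368
C = 0.299792458  # speed of light in m/ns

def tof_ns(mass, p_gev, path_m):
    """Time of flight over path_m for a particle of given mass and momentum."""
    beta = p_gev / math.hypot(p_gev, mass)
    return path_m / (beta * C)

# Pion-kaon flight-time difference at 10 GeV/c over 10 m (quoted as ~35 ps)
dt_ps = (tof_ns(M_K, 10.0, 10.0) - tof_ns(M_PI, 10.0, 10.0)) * 1e3
print(f"pi/K ToF difference: {dt_ps:.1f} ps")

# Scaling from single-photon to per-track resolution with ~30 photons
per_track_ps = 70.0 / math.sqrt(30)
print(f"70 ps / sqrt(30) = {per_track_ps:.1f} ps per track")
```

The sketch reproduces the quoted flight-time difference to within a few ps, and shows that 70\,ps single-photon precision with about 30 independent photons lands in the stated 10--15\,ps per-track range.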
Such detectors have been custom-developed for the TORCH application by an industrial partner, Photek Ltd.\\ A five-year R\&D programme has been undertaken, culminating in the design and construction of a small-scale prototype TORCH module. This module consists of a quartz plate of dimensions 120\,mm width, 350\,mm length, and 10\,mm thickness, read out by a single customised MCP-PMT with 32$\times$32 pixels filling a 26.5$\times$26.5~mm$^2$ square, a quarter of the active area of the final tube. The prototype was tested at the CERN Proton Synchrotron in 2015 and 2016 in a 5~GeV/$c$ pion/proton mixed beam, and the results were compared to those measured with a commercial Planacon MCP-PMT. As a result of the testbeam studies, the full functionality of the TORCH design and its timing properties have been verified. \\ The small-scale demonstrator is a precursor to a full-scale TORCH module (660$\times$1250$\times$10~mm$^3$), read out by ten MCP-PMTs, which is currently under construction. The TORCH detector has been proposed to complement the particle identification of the LHCb experiment at CERN for its future upgrade\,\cite{LHCb_2011_LOI_upgrade, LHCb_2017_EOI_upgrade}.\\ In this paper, an overview is first given of the design of the TORCH detector and the principles of photon reconstruction. The optical system, the MCP-PMT detectors and the electronics readout system are described. The testbeam setup is then discussed, as well as the operating configurations. The method of data analysis, the algorithms for reconstruction, pattern recognition and data calibration are detailed. The single-photon time resolution is presented and the photon detection efficiency is compared to Monte Carlo expectations. Finally, a summary and an overview of future work are given. \section{Design of TORCH} \label{section Design of TORCH} The TORCH detector concept involves precision timing of Cherenkov photons that are emitted by a charged particle as it passes through a solid radiator.
The chosen radiator material is synthetic fused silica due to its suitable refractive index, good transmission, and radiation resistance. The radiator takes the form of a highly polished plate, with nominal thickness of 10~mm, chosen as a compromise between providing sufficient yield of detected photons and limiting the material budget of the detector. A large fraction of the photons generated are trapped within the plate by total internal reflection, and propagate to the edges where they are detected with a focusing system equipped with finely-pixellated fast photodetectors. These are located in the focal plane of the cylindrical focusing optics. After correcting for the time of propagation of each photon in the optical system, the photon provides a measurement of the time at which the particle crossed the plate. By combining the information from the different photons emitted by the particle, a high precision measurement can be made of its arrival time. In order to achieve an overall resolution of 70~ps per photon, a time resolution of 50~ps per photon from the photodetector (including the associated readout electronics) is needed, with a similar precision from the reconstruction of the time of propagation. With 30 detected photons from each charged particle, this would provide a timing resolution per particle of 15~ps, assuming the individual photon measurements are independent.\\ Provided that the impact point of the charged particle on the radiator is known, the position of the detected photon along the coordinate $x$ at the plate top or bottom surface (see Figure \ref{figure intro TORCH thx and thz definition}) can provide a precise determination of the photon angle of propagation in the plate, $\theta_x$. 
The angle of the photon in the second projection $\theta_z$ is determined using a focusing system \cite{Castillo-Garcia_2014_NDIP-proceedings}, which takes the form of a block of synthetic fused silica with a cylindrical mirrored surface, shown in Figure \ref{figure intro focusing block}. This converts the photon angle $\theta_z$ into a position along the local $y_{detector}$ axis, defined in Figure \ref{figure intro focusing block}, which is hereon referred to as the {\it vertical} coordinate of the photon detector (i.e. the focussing coordinate). Similarly the local $x_{detector}$ axis, which lies also along the $x$ axis, is referred to as the {\it horizontal} (non-focussing) coordinate of the photon detector. Monte Carlo studies have shown a precision of about 1~mrad is required on the angle of the photon in both $\theta_x$ and $\theta_z$ projections, to achieve the required resolution on the time of propagation of the photon as it reflects within the plate \cite{Charles_2011_TORCH}. \\ The largest commercially available size of MCP-PMTs with proven technology, the Photonis Planacon, is 60$\times$60~mm$^2$ with an active area of 53$\times$53~mm$^2$ \cite{Photonis_2014_Planacon_datasheet}. The lower limit on $\theta_z$ of 0.45 rad is set by the largest vertical track angle (about 250 mrad, for a 2.5~m high radiator at 10~m distance) plus the largest Cherenkov angle for which light can be detected (about 900 mrad at 7~eV photon energy), generated by a track which is undeviated from the interaction point. The upper limit on $\theta_z$ of 0.85 rad is set by the smallest angle that will still give total internal reflection. In the vertical detector direction, dividing this 400~mrad range into 128~pixels allows for the 1~mrad requirement on $\theta_z$ to be achieved, given that the resolution of a pixel scales with the pixel size as $1/\sqrt{12}$. 
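This pixel-count arithmetic can be checked directly (a back-of-the-envelope sketch of the numbers quoted above, not the full simulation):

```python
import math

# Angular acceptance in theta_z and vertical pixel count (values from the text)
theta_range_mrad = 850 - 450   # acceptance from 0.45 to 0.85 rad
n_pixels = 128

pixel_mrad = theta_range_mrad / n_pixels      # angular size of one pixel
resolution_mrad = pixel_mrad / math.sqrt(12)  # RMS of a uniform distribution

print(f"pixel: {pixel_mrad:.3f} mrad, resolution: {resolution_mrad:.2f} mrad")
```

Each pixel then subtends about 3.1\,mrad, giving roughly 0.9\,mrad RMS, just inside the 1\,mrad requirement.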
In $\theta_x$, assuming the Cherenkov photons can propagate at least 2~m after generation, the 1~mrad requirement is met by employing 8~pixels per detector in the horizontal direction. This gives the final design requirement on the effective pixel size to be 6.625$\times$0.414~${\rm mm^2}.$\\ \begin{figure} \begin{center} \includegraphics[width=0.99\columnwidth]{Roger_Coordinates_v3.pdf} \caption{Schematic view of the TORCH radiator plate from the front ($xy$, left) and side ($yz$, right); the angles $\theta_x$ and $\theta_z$ are defined for a Cherenkov photon generated at an angle $\theta_C$ from a charged particle track \cite{Charles_2011_TORCH}.} \label{figure intro TORCH thx and thz definition} \end{center} \end{figure} \begin{figure}[!hbt] \centerline{\includegraphics[width=0.7\columnwidth]{Roger_Focusing_v3.pdf}} \caption{Cross-section of the TORCH focusing block \cite{Castillo-Garcia_2014_NDIP-proceedings}; the right-hand side is a cylindrical mirror, and the paths of photons entering at the accepted range of angles are shown. The vertical detector coordinate is oriented along the focal plane. } \label{figure intro focusing block} \end{figure} Figure \ref{figure intro path length calculation} shows an illustration of the photon path length calculation, by unfolding the multiple reflections the photon undergoes. The path length $L$ can be calculated by projecting the initial direction vector of the photon over the difference in height between track impact point and photon detector, $H$. 
$L$ is then given by the geometrical projection: \begin{figure}[!hbt] \centerline{\includegraphics[width=0.7\columnwidth]{Roger_Geometry_v2.pdf}} \caption{Illustration of the photon path length calculation: the photon path is shown as the blue line within the radiator plate (shaded), which is unfolded into the green dashed line.} \label{figure intro path length calculation} \end{figure} \begin{equation} L = \sqrt{\frac{H^2}{\cos^2{\theta_x}} + \frac{H^2}{\cos^2{\theta_z}}-H^2}\quad . \label{equation intro path length calculation} \end{equation} Due to chromatic dispersion in the radiator, photons with different energies $E$ propagate at different speeds, which needs to be corrected for. The speed is governed by the group refractive index $n_g$, which can be derived from the phase refractive index $n_p$ of the material: \begin{equation} \label{equation intro nphase ngroup} n_g = n_p + E \frac{{\rm d} n_p}{{\rm d} E}\quad . \end{equation} \noindent The phase and group refractive indices for fused silica as a function of photon energy \cite{Corning_2014_HPFS_7980} are shown in Figure \ref{figure intro refractive indices}. \\ \begin{figure}[!hbt] \centerline{\includegraphics[width=0.7\columnwidth]{FIG_intro_refractive_indices_v3.pdf}} \caption{Phase and group refractive indices of synthetic fused silica as a function of photon energy, as calculated from Ref.~\cite{Corning_2014_HPFS_7980}.} \label{figure intro refractive indices} \end{figure} The measured Cherenkov emission angle $\theta_C$ is used to correct for chromatic dispersion. Given the track and photon unit direction vectors $\mathbf{\hat{v}_t}$ and $\mathbf{\hat{v}_p}$: \begin{equation} \mathbf{\hat{v}_t} \cdot \mathbf{\hat{v}_p} = \cos\, \theta_C = \frac{1}{n_p\,\beta}\quad , \label{equation intro cherenkov reconstruction} \end{equation} \noindent which allows $n_p$ to be determined. 
Then the group refractive index $n_g$ can be calculated using Eq.~\ref{equation intro nphase ngroup} and the known dispersion relations. This derivation of the refractive index also requires knowledge of $\beta$, the speed of the charged particle expressed as a fraction of the speed of light. Assuming the particle momentum is measured, $\beta$ can be calculated for each particle mass in turn, and propagated through the subsequent analysis, allowing the preferred mass hypothesis to be selected.\\ There are several contributions to the path length of each photon that need to be taken into account, namely the effects of a bevel at the top of the radiator plate, the focusing block and the photon detector window. The bevel (visible in Figure~\ref{figure intro focusing block}) is introduced to simplify the construction of the focusing block, but also adds an ambiguity in the photon path, since light propagating towards negative $z$ at this point will undergo an extra reflection off the front surface of the radiator plate. In addition, for practical reasons it is not feasible to make the full TORCH detector from a single radiator plate, and instead the detector will be subdivided into modules in the $x$ direction. For a large plate, the mapping of the Cherenkov cone through the optics gives rise to a roughly hyperbolic pattern of photon hits on the detectors but, when the plate is subdivided into modules, this pattern is folded on itself due to reflections at the vertical sides. This is illustrated in Figure~\ref{figure intro double side reflection} (left), where the path of a single photon is shown schematically, reflecting twice off a vertical side: once in the radiator plate and once in the focusing block. The folded pattern at the photodetector plane is shown in Figure \ref{figure intro double side reflection} (right) for a module 66~cm wide with a radiator height of 2.5~m, which corresponds to the dimensions of a module in the final layout.
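The reconstruction chain just described (photon path length, Cherenkov angle, and dispersion correction) can be sketched numerically. The Sellmeier parametrisation below is Malitson's fit for fused silica, an assumption standing in for the cited Corning dispersion data, and the example photon energy, angles and height difference are arbitrary illustrative values:

```python
import math

EV_UM = 1.2398  # photon energy [eV] times wavelength [um]

def n_phase(e_ev):
    """Phase index of fused silica from Malitson's Sellmeier fit (assumption)."""
    l2 = (EV_UM / e_ev) ** 2  # wavelength squared in um^2
    n2 = (1.0
          + 0.6961663 * l2 / (l2 - 0.0684043**2)
          + 0.4079426 * l2 / (l2 - 0.1162414**2)
          + 0.8974794 * l2 / (l2 - 9.896161**2))
    return math.sqrt(n2)

def n_group(e_ev, de=1e-4):
    """n_g = n_p + E dn_p/dE, with the derivative evaluated numerically."""
    dnde = (n_phase(e_ev + de) - n_phase(e_ev - de)) / (2 * de)
    return n_phase(e_ev) + e_ev * dnde

def path_length(h, theta_x, theta_z):
    """Unfolded photon path length for height difference h between track
    impact point and detector (the expression for L above)."""
    return math.sqrt(h**2 / math.cos(theta_x)**2
                     + h**2 / math.cos(theta_z)**2 - h**2)

# Example photon: 4 eV, arbitrary angles, 0.3 m height difference
e, length = 4.0, path_length(0.30, 0.2, 0.6)
t_ns = length * n_group(e) / 0.299792458  # propagation time (c in m/ns)
print(f"n_p = {n_phase(e):.4f}, n_g = {n_group(e):.4f}, "
      f"L = {length:.3f} m, t = {1e3 * t_ns:.0f} ps")
```

In the full detector the same chain runs in reverse: the measured pixel position gives $\theta_x$ and $\theta_z$, the track direction gives $\theta_C$ and hence $n_p$ and $n_g$ per mass hypothesis, and the propagation time is then subtracted from the measured arrival time.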
The reflections at the module sides introduce ambiguities in the reconstructed path length. Whilst for a sufficiently wide module there are several path solutions consistent with a physical value of the Cherenkov angle, these ambiguities can be resolved in the reconstruction.\\ \begin{figure}[!hbt] \centerline{\includegraphics[width=0.99\columnwidth]{Roger_Module_v3.pdf}} \caption{(Left) Isometric view of a TORCH module of reduced size showing the path of a single photon propagating through the optics (red dashed line) and showing the definition of horizontal ($x_{detector}$) and vertical direction ($y_{detector}$); (Right) the folded pattern of photons expected to arrive at the photodetector plane for a full-sized TORCH module, derived from GEANT simulation.} \label{figure intro double side reflection} \end{figure} The requirements for the optics, photon detector and readout electronics are now discussed in turn. \subsection{Optics requirements} \label{subsection Requirement on optics} The yield of detected photons in the TORCH detector is limited by the optical components in two ways: scattering from surface roughness and Rayleigh scattering. Rayleigh scattering is a fundamental property that cannot be avoided. A Rayleigh scattering length of 500~m is assumed at an energy of 2.805~eV \cite{Cohen-Tanugi_2003_BABAR_DIRC_optical_properties}, with the scattering probability scaling with the photon energy as $E^4$ (and hence the scattering length as $E^{-4}$). \\ In order to limit losses from surface roughness, it is required that the large flat plate surfaces are polished to a roughness of less than 0.5~nm. Assuming this surface roughness, simulations show that about 14\% of the total number of photons that would otherwise propagate to the detector are lost \cite{Van_Dijk_2016_Thesis}. If this parameter is relaxed to 1~nm, the expected losses increase to about 32\%. \\ For manufacturing reasons the focusing optics and radiator plate are produced independently, and need to be optically coupled.
Candidate glues have been tested~\cite{Castillo_Garcia_2016_thesis, Castillo-Garcia_2014_NDIP-proceedings}, including Epotek~301-2, Epotek~305 and Pactan~8030. Epotek 301-2 was used in the BaBar DIRC \cite{Adam_2005_DIRC-PID-for-BaBar} and was found to be mechanically strong, stable and radiation hard, with a well-known refractive index~\cite{Montecchi_2001_CMS-glue-paper}. However, its transmission cuts off at a photon energy of about 4~eV. Epotek~305 transmits up to about 5~eV, is appropriately radiation hard~\cite{Shetter_1979_Epotek-305-radiation-hardness} but limited information is available on its refractive index~\cite{Light_1978_Epotek-305-refractive-index}. Pactan~8030 is a silicone-based adhesive with even better transmission characteristics, up to 6~eV, although little information is available on its other physical properties. Unlike the other epoxy-based glues, Pactan~8030 allows for disassembly because it does not set rigidly. It is therefore suitable for the prototyping phase and has been used here.\\ A further loss of photons in the TORCH optics comes from imperfect reflectivity of the cylindrical mirrored surface of the focusing block. The reflectivity of a representative aluminized sample has been measured, and is shown in Figure~\ref{figure intro mirror reflectivity}. The reflectivity is typically above 85\% for the photon energy range of interest. \begin{figure}[!hbt] \centerline{\includegraphics[width=0.8\columnwidth]{Roger_Reflectivity.pdf}} \caption{Reflectivity of the mirror surface of the focusing block as a function of photon energy, measured on a 1~mm thick sample of Suprasil fused silica coated with $\sim$120~nm of aluminium, at an angle of incidence of 30 degrees. The measurement error is estimated at 0.5\% (absolute), indicated by the shaded band. 
} \label{figure intro mirror reflectivity} \end{figure} \subsection{TORCH Photodetectors} \label{subsection TORCH photodetectors} To meet the TORCH requirements, the photodetectors require a very good intrinsic time resolution (20--30~ps), low dark noise and fine spatial granularity at the anode, along with a high active-to-dead area ratio. Micro-Channel Plate photomultiplier (MCP-PMT) technology (for a review, see \cite{Gys_2015_MCP_overview}) meets these requirements and has also been adopted by other DIRC-type detectors \cite{Abe_2010_Belle-II-TDR,Cowie_2009_PANDA-Disc-DIRC}. The main drawbacks are the relatively low detection efficiency compared to alternative technologies, a limitation on the lifetime, and the restricted granularity of commercial devices. \\ To address the issues of lifetime and granularity, a three-phase R\&D programme was instigated with an industrial partner, Photek Ltd. The first phase addressed the lifetime issues of the MCP-PMT on a small, circular MCP-PMT device (25~mm diameter) \cite{Gys_2016_RICH_proceedings}. The second phase demonstrated the granularity required for TORCH, implementing a square pixellated anode in a circular MCP-PMT device (40~mm diameter), which was used for the testbeam in 2015. The third phase\footnote{These final tubes were delivered in Summer 2017 and are currently undergoing tests.} combines all requirements in a square 60$\times$60~mm$^2$ MCP-PMT with a sensitive area of 53$\times$53~mm$^2$. The testbeam programmes in 2015 and 2016 considered both a Photek Phase-2 tube with an S20 multi-alkali photocathode and a commercially available tube from Photonis, the XP85122 \cite{Photonis_2014_Planacon_datasheet} with a bi-alkali photocathode. Both these MCP-PMTs employ micro-channel plates with a pore size of 10~$\mu$m. \\ The Photek PMTs have a double set of MCPs in a ``Chevron'' configuration.
Photoelectrons can reflect off the front face of the first MCP, giving rise to secondary signals, typically referred to as backscattering \cite{Korpar_2008_MCP-timing-crosstalk}. The signals from these photoelectrons are translated in space and arrive later in time, with the typical spread in translation and delay set by the distance between the photocathode and the first MCP. \\ The photon counting efficiency of the photodetector is determined by the collection efficiency and the quantum efficiency. The collection efficiency, here estimated to be 65\%, is defined as the ratio of detected to generated photoelectrons after photon conversion. The quantum efficiency is the ratio of the number of photoelectrons generated to the number of photons incident on the front face of the photocathode, and is highly dependent on the incident photon energy. The measured quantum efficiency of the two deployed phototube types is shown in Figure \ref{figure intro QE curves}. \\ \begin{figure}[!hbt] \centerline{ \includegraphics[width=0.95\columnwidth]{FIG_intro_QE_plot_v4.pdf}} \caption{Measured quantum efficiency for a representative Photek Phase-2 tube featuring a multi-alkali S20 photocathode and a Photonis Planacon (XP85122) featuring a bi-alkali photocathode.} \label{figure intro QE curves} \end{figure} During the development programme it became apparent that, with the 8$\times$128 pixel requirement in a 53$\times$53~mm$^2$ active area, it would be difficult to fit this number of pixels within the envelope of the detector in the vertical direction. This was solved by halving the number of pixels in the vertical direction to 64 and sharing the collected charge over multiple pixels. The electronics (further detailed in section \ref{subsection Electronics for TORCH}) perform a simultaneous charge and timing measurement for the PMT signal, and this information can be used in a charge-weighting algorithm to achieve a resolution significantly better than would be expected based on pixel size \cite{Conneely_2015_PSD_proceedings}.
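The charge-weighting step can be illustrated with a minimal sketch. The pixel pitch and cluster charges below are assumptions chosen for illustration; the actual algorithm in the cited reference also folds in the NINO charge-to-width calibration:

```python
# Hypothetical illustration of charge-weighted position reconstruction:
# a single photon spreads its charge over several neighbouring pixels,
# and the centroid recovers the hit position to better than the pixel pitch.

PITCH_MM = 0.828  # example vertical pixel pitch (assumption)

def charge_centroid(hits):
    """hits: list of (pixel_index, charge); returns position in mm."""
    q_tot = sum(q for _, q in hits)
    return sum((i + 0.5) * PITCH_MM * q for i, q in hits) / q_tot

# Charge shared asymmetrically over three adjacent pixels around pixel 12
cluster = [(11, 0.2), (12, 1.0), (13, 0.4)]
print(f"centroid: {charge_centroid(cluster):.3f} mm")
```

The asymmetry of the shared charge shifts the centroid away from the central pixel's midpoint, which is how the interpolation beats the naive pitch$/\sqrt{12}$ resolution.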
\\ To take advantage of the charge-sharing method, a novel technology was developed, which combined aspects from direct and capacitively coupled readout of the Photek tube~ \cite{Conneely_2015_PSD_proceedings}. The readout pads are directly coupled to electrodes buried beneath a thin dielectric layer. These electrodes pick up the charge induced by the electron shower emanating from the MCP stack and collected on the anode resistive layer. Varying the thickness of this resistive layer allows the degree of charge sharing between the pixels to be tuned. \\ The size of the electron shower generated in the MCP-PMT is governed by the tube electrostatics, the layout of the MCP stack and the gap from the rear of the MCP stack to the anode. From earlier measurements of the Photek tube \cite{Van_Dijk_2016_Thesis, Castillo_Garcia_2016_thesis} it is known that the size of the avalanche generated by the MCP-PMT is large compared to an individual pixel. Combined with the charge sharing between pixel pads it is expected that each single photon will have about 3--4 pixel hits. For the Phase-2 generation of MCP-PMTs, the MCP-anode gap is nominally 4.5~mm, and the thickness of the dielectric layer burying the anode contact pads is 0.5~mm. \subsection{TORCH electronics} \label{subsection Electronics for TORCH} The electronics readout system is a key component in achieving the timing resolution required for the TORCH ToF measurement and has gone through an extensive programme of development\,\cite{Gao_2014_TWEPP2013_proceedings, Gao_2015_TWEPP2014_proceedings, Gao_2016_TWEPP2015_proceedings}. The readout to digitize the signals from the MCP-PMT is based on the NINO \cite{Anghinolfi_2004_NINO-for-ALICE} and the HPTDC \cite{Akindinov_2004_HPTDC-for-ALICE} chip-sets, both employed by the ALICE experiment. The NINO ASIC was originally developed as an 8-channel device, with the later 32-channel version \cite{Despeisse_2011_NINO32} utilized for TORCH. 
A front-end PCB containing two NINO chips reads out 64 MCP-PMT channels, which then connects into a second PCB containing two HPTDC chips, each of which operates as a 32-channel device with 97.7\,ps time binning. The NINO provides amplification and discrimination: it takes as input a signal from the MCP-PMT and converts it into an LVDS output pulse, the width of which is a measure of the charge in the signal. The HPTDC then digitises the LVDS pulse by time-stamping the leading and falling edges. This combination of ASICs gives rise to several calibrations which need to be performed in order to reach optimal timing performance: \begin{itemize} \item The charge-to-width calibration of the NINO; \item A time-walk correction of the NINO leading edge (since this is a single-threshold discrimination device); \item An Integral Non-Linearity (INL) correction to the HPTDC, which is a well-documented feature\,\cite{Schambach_2003_HPTDC_at_STAR}. \end{itemize} These calibrations and their impact will be discussed in Section \ref{section Calibrations}. \section{The TORCH prototype} \label{section The TORCH prototype} A small-scale TORCH prototype has been constructed featuring optical components of reduced size. Specialized mechanics have been produced for mounting the MCP-PMTs and the accompanying electronics. The prototype serves to demonstrate the feasibility of the TORCH concept and to determine the performance of the components used. \subsection{Prototype optics} \label{subsection Prototype optics} The optical components of the TORCH prototype were produced from fused silica (specifically, Corning~7980) by Schott\footnote{SCHOTT Schweiz AG, St. Josefen-Strasse 20, 9001 St. Gallen, Switzerland.}. These components followed the design outlined in Section \ref{section Design of TORCH}, but with scaled-down dimensions: a radiator plate of 120$\times$350$\times$10~mm$^3$ (width$\times$height$\times$thickness) and focussing optics of matching width.
The radiator plate was polished to a surface roughness of about 1.5~nm. While this is significantly less stringent than the requirement placed on the full-sized radiator plate (0.5~nm), the number of reflections that individual photons undergo is reduced due to the smaller size of the radiator; the requirement was therefore relaxed on cost grounds. Photographs of both components are shown in Figure \ref{figure PS2015 focusing optics}. \\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.563\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_TORCH_radiatorplate.jpg} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.387\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_TORCH_focusing_block.jpg} \end{minipage} \caption{Optical components procured from Schott for the TORCH testbeam prototype. (Left) The radiator plate of size 120$\times$350$\times$10~mm$^3$ showing the bevelled edge, of which the acute angle is 36$^\circ$. (Right) Matching focusing block of width 120~mm, with a cylindrical surface with focal length 260~mm, following the design shown in Figure \ref{figure intro focusing block}.} \label{figure PS2015 focusing optics} \end{center} \end{figure} The focusing block was manufactured to focus 2~mm beyond the exit surface onto the photocathode of the detector. The block was aluminized (see Figure \ref{figure intro mirror reflectivity}), and the quartz components glued together using Pactan~8030. \subsection{MCP-PMTs and electronics} \label{subsection MCP-PMTs and electronics} Two independent detectors were used for the testbeam campaigns in 2015 and 2016: a single Photek Phase-2 MCP-PMT and a Photonis Planacon (model XP85122 \cite{Photonis_2014_Planacon_datasheet}), respectively. The detector assemblies can be seen in their associated holding mechanics in Figure \ref{figure testbeam chariots}. Both MCP-PMTs had their input windows spaced 0.5~mm from the focusing block, with an air gap in between.
\\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.5074\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_chariot.jpg} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.4425\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2016_chariot.jpg} \end{minipage} \caption{Holding mechanics for the TORCH photodetector and electronics, incorporating (left) a Photek Phase-2 MCP-PMT and (right) a Photonis Planacon XP85122. The Photek tube features a square pixellated readout area embedded within a larger circular area.} \label{figure testbeam chariots} \end{center} \end{figure} The Photek Phase-2 MCP-PMT has a 9~mm thick quartz entrance window. Whilst this is not ideal given the design of the focusing block, which was fabricated with the expectation of a thinner window, the added distance can be corrected for in reconstruction. Additionally, there is a defocussing effect, which has been demonstrated to be small \cite{Castillo_Garcia_2016_thesis}. Further effects found in the Phase-2 MCP-PMT were a degraded quantum efficiency and non-homogeneity in the connection of the detector to the readout. Whilst these effects were detrimental to the photon-counting performance of the tube, they were not ultimately problematic for its operation. \\ Both the Photek tube and the Photonis tube feature an array of 32$\times$32 pixels, the latter array contained within four times the area of the former. In order to closely match the granularity requirement of TORCH, pixels were electronically connected in the horizontal direction using a mating board, in groups of eight for the Photek Phase-2 MCP-PMT and in groups of four for the Planacon, each group defining a single readout channel. For the Planacon XP85122, some difficulties were encountered in fabricating a connection between the MCP-PMT and the electronics. This meant that complete data were only obtained from four out of the eight columns of pixels. The data analysis was therefore restricted to this area.
These are denoted columns 0--3 in increasing x-coordinate. \\ The relevant characteristics of the MCP-PMTs are shown in Table \ref{table MCP-PMT characteristics}. \begin{table}[!htbp]\footnotesize \begin{center} \begin{tabular}{| l || c | c |} \hline & Testbeam period 2015 & Testbeam period 2016 \\ \hline \hline MCP-PMT employed & Photek Phase-2 & Photonis XP85122 \\ \hline Number of pixels & 4$\times$32 & 8$\times$32 \\ \hline Pixel size & 6.625$\times$0.828~mm$^2$ & 6.4$\times$1.6~mm$^2$ \\ \hline Instrumented area & 26.5$\times$26.5~mm$^2$ & 51.2$\times$51.2~mm$^2$ \\ \hline Window material & Quartz & Sapphire \\ \hline Window thickness & 9~mm & 1~mm \\ \hline Photocathode & Multi-alkali (S20) & Bi-alkali \\ \hline Window-MCP gap & 0.2~mm & 4.9~mm \\ \hline Operated gain & 1,600,000 & 650,000 \\ \hline \end{tabular} \end{center} \caption{Characteristics of the MCP-PMTs employed in the 2015 and 2016 testbeams. The quantum efficiency curves are shown in Figure \ref{figure intro QE curves}.} \label{table MCP-PMT characteristics} \end{table} \subsection{Mechanical structure} \label{subsection Mechanical structure} After gluing, the optical components were mounted in a rigid mechanical structure that allows rotation around the horizontal $x$-axis (perpendicular to the beam direction) to provide variation of the incident particle angle through the radiator. The structure was placed inside a light-tight box, which was then mounted on a translation table, allowing free movement in both directions perpendicular to the beam direction. A photograph of the holding mechanics, the optics, the MCP-PMT and the electronics is shown in Figure \ref{figure PS2015 full miniTORCH}. For the testbeam configuration, the full assembly was tilted at an angle of 5$^\circ$, with the top face in the downstream direction, to improve light collection from incident charged particles. 
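As a consistency check, the effective channel sizes in Table \ref{table MCP-PMT characteristics} follow directly from the instrumented areas and the pixel ganging described above (groups of eight for the Photek tube, groups of four for the Planacon); a short sketch:

```python
def channel_size_mm(area_mm, n_pixels=32, gang=1):
    """Effective channel width after electronically ganging `gang` pixels."""
    return area_mm / (n_pixels / gang)

# Photek Phase-2: 26.5 x 26.5 mm^2, 32x32 pixels, ganged horizontally in groups of eight
photek = (channel_size_mm(26.5, gang=8), channel_size_mm(26.5))    # 6.625 x 0.828 mm^2
# Planacon XP85122: 51.2 x 51.2 mm^2, 32x32 pixels, ganged horizontally in groups of four
planacon = (channel_size_mm(51.2, gang=4), channel_size_mm(51.2))  # 6.4 x 1.6 mm^2
print(photek, planacon)
```

This reproduces the pixel sizes and the 4$\times$32 and 8$\times$32 channel counts quoted in the table.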
\begin{figure}[!hbt] \centerline{ \includegraphics[width=0.8\columnwidth]{FIG_PS2015_miniTORCH_full_assembly_side_v3.pdf}} \caption{The TORCH prototype module with all components mounted. The radiator, focusing block and the holding mechanics (labelled MCP-PMT) for the photodetector and electronics can be seen.} \label{figure PS2015 full miniTORCH} \end{figure} \section{The TORCH testbeam configuration} \label{section The TORCH testbeam configuration} Measurements were taken at the PS/T9 beam facility at CERN in 2015 and 2016, to test the TORCH prototype with positively charged particles at a nominal momentum of 5~GeV/c. A trigger system and a facility to generate high-resolution reference times for the beam particles were also deployed. Depending on the collimation and momentum settings, the charged hadron beam is mostly populated with pions and protons, with a small admixture of kaons ($\sim$1\%). \\ Two timing stations were implemented in the testbeam configuration, located approximately 10~m upstream and 1~m downstream of the TORCH prototype. Each was constructed from a bare borosilicate bar (8$\times$8$\times$100~mm$^3$) connected to a single channel MCP-PMT (Photonis PP0365G) \cite{Castillo_Garcia_2013_PP0365G_testing}. Each bar was placed in the beam at an angle close to the relevant Cherenkov angle, such that part of the generated light propagated directly towards the MCP-PMT. The corresponding signals were fed into constant fraction discriminators and transmitted via coaxial cables to the TORCH readout electronics. This gave a unified dataset incorporating both the signals from the TORCH prototype and the two time reference stations. \\ In the 2015 period, the trigger was formed by a pair of scintillators perpendicular to the beam, each with an area of 8$\times$8~mm$^2$, and each connected to a PMT (Hamamatsu R1635-02) by a Perspex light guide. A schematic overview of the arrangement is shown in Figure \ref{figure testbeam area schematic}. 
The scintillators were located close to the time reference stations, ensuring incident particles passed through both borosilicate bars, and hence reducing the angular spread. However, it was found subsequently that there was a class of triggers for particles passing through the light guide, degrading the achievable time resolution and beam definition. Hence, in the 2016 testbeam period, this was remedied by using two scintillators in each station, with the scintillators and light guides perpendicular to each other. \\ \begin{figure}[!hbt] \centerline { \includegraphics[width=0.75\columnwidth]{FIG_PS2015_testbeam_area_schematic_v3.pdf}} \caption{Schematic overview of the beamline area showing the positioning of the timing stations T1 and T2 and the scintillators relative to the TORCH prototype. } \label{figure testbeam area schematic} \end{figure} The dual T1 and T2 time references provide redundancy of measurement and also allow for independent particle identification at relatively low momentum. Over the 11~m flight path at 5~GeV/c, the time of flight difference between protons and pions is about 0.6~ns. The time of flight also allows determination of the momentum of the beam by measuring the average time of flight difference between pions and protons. \section{Calibrations} \label{section Calibrations} To perform the data analysis, several calibrations have been incorporated: the relation of the width of a signal measured with the NINO and HPTDC to its collected charge, the Integral Non-Linearity (INL) of the HPTDC, and the time-walk of the leading edge of the input pulse due to amplitude variation. It was found during laboratory testing that the behaviour of the NINO chip is strongly dependent on the input capacitance, indicating that calibrations do not carry over from one detector to another for the 2015 and 2016 datasets. The calibrations performed will therefore be described separately for the two detectors used.
\subsection{Photek Phase-2 MCP-PMT (2015)} \label{subsection Photek Phase-2 MCP-PMT} The Photek Phase-2 detector was operated at an average gain of 1.6$\times$10$^6$ to ensure that the signals would be reliably detected by the electronics. As outlined in Section \ref{subsection TORCH photodetectors}, the shape of the avalanche at the anode is Gaussian, with a standard deviation of about 0.75~mm \cite{Van_Dijk_2016_Thesis, Castillo_Garcia_2016_thesis}, meaning that each single photon cluster is expected to have 3--4 hits (giving a double pulse separation of around 4~mm). \\ The position of each cluster is derived by charge-weighting the individual pixels. Due to differences found in the input capacitance between the laboratory and testbeam setups, pre-calibrations could not be used directly. The best available laboratory calibration was initially used, and the resulting charge recorded on each pixel was multiplied by a scaling factor to set the average observed charge to the same value for each channel. Based on the gain of the tube and knowledge of the charge distribution, this value was set to 80~fC.\\ The correction for INL is solely dependent on the HPTDC chips used and is expected to remain constant over time \cite{Schambach_2003_HPTDC_at_STAR}. The contribution to the time resolution from INL, for individual signals, can be as high as 100~ps. The correction can be calculated using a dataset with high statistics. Unfortunately, the datasets available for the 2015 testbeam period were not large enough to perform this calibration reliably. Therefore correction for INL was only performed on the 2016 data. \\ The most significant correction stems from time-walk. The TORCH electronics discriminates signals of varying size using a fixed threshold. The effect of time-walk is directly correlated to the size of the signal; the smaller a signal, the longer it will take to cross the threshold, even up to a nanosecond. 
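As a toy illustration of this effect (a linear rising edge crossing a fixed threshold; the real NINO response is more complex, and the numbers here are invented):

```python
def crossing_time_ns(amplitude_mv, v_th_mv=50.0, rise_ns=2.0):
    """Threshold-crossing time for a pulse rising linearly to its amplitude.

    For a linear edge, t = rise * V_th / A, so smaller pulses cross later.
    Returns None if the pulse never reaches the threshold.
    """
    if amplitude_mv <= v_th_mv:
        return None
    return rise_ns * v_th_mv / amplitude_mv

# Walk between a large pulse and one barely above threshold:
walk = crossing_time_ns(110.0) - crossing_time_ns(1000.0)
print(f"{walk:.2f} ns")  # the small pulse crosses most of a nanosecond later
```

The $1/A$ scaling is why the correction must be parameterized against a measured amplitude proxy, here the signal width.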
The correction for time-walk is made on a per-channel basis before the hits making up a cluster are combined. The assumption is that within a cluster, the time difference between hits is zero, since the individual signals represent various fractions of the same avalanche. Since it is known that the time-walk only varies with the amplitude of the input signal (here represented by the signal width), the time difference between any pair of hits within a cluster is a measure of the relative time-walk between two channels. For a given combination of two channels, the average relative time-walk can be computed as a function of the signal width measured in those two channels. Parameterizing this three-dimensional distribution then allows a derivation of the shape of the time-walk distribution for individual channels as a function of the width measured in that channel. \\ The simplest choice is to derive the time-walk distributions from neighbours in the finely-pixellated vertical direction, since these have the largest chance of simultaneously being present in a cluster. However, in the case of the Photek Phase-2 MCP-PMT, the behaviour of the relative time difference is influenced by effects that derive from the coupling board between the MCP-PMT and the electronics. It was found during laboratory testing that the input capacitance differs significantly between channels due to longer track-lengths and/or routing on different layers. This implies that neighbouring pixels cannot be used directly to derive the time-walk correction since they show systematically different behaviour. Because of the large average cluster-size (3--4 hits), the option is available to use next-to-nearest neighbours, which electronically have similar behaviour. 
For each pair of these, the relative time difference distribution as a function of the width in both channels is created and then fitted; the time-walk correction on an individual channel is then derived from its correlation with the other channels. By way of an example, the average time difference as a correlated quantity between hits in two next-to-nearest neighbours is shown in Figure \ref{figure PS2015 time-walk demo}. Additionally, a histogram is shown of the time difference between next-to-nearest hits in clusters before and after applying the derived correction, for a single next-to-nearest neighbouring pair.\\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_time_walk_demo_2D.pdf} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_time_walk_demo_effect_v3.pdf} \end{minipage} \caption{(Left) average time difference between hits associated to the same cluster for a single set of two next-to-nearest-neighbour channels, expressed as a function of the width of the signal in both channels (1~HPTDC bin = 97.7~ps). The correlation between the two is extracted and fitted. (Right) Histogram of the time difference observed between the same two next-to-nearest neighbour channels, before (green) and after (blue) applying the computed time-walk correction. } \label{figure PS2015 time-walk demo} \end{center} \end{figure} Since each channel has two next-to-nearest neighbours, the time-walk correction is improved further by averaging the fits from both sides. Finally, static offsets between channels (for example, caused by differing track-lengths) are corrected for. These are found by taking the mean time offset between two channels after applying the relative time-walk correction.
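A minimal numerical sketch of the pairwise procedure described above, with invented time-walk curves and binned profiles standing in for the fits used in the analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented time-walk curves (ns versus signal width) for two channels that
# see fractions of the same avalanche, plus Gaussian timing jitter.
walk_a = lambda w: 0.9 / w
walk_b = lambda w: 0.6 / w

n, jitter = 50_000, 0.05
wa = rng.uniform(1.0, 4.0, n)          # measured widths, arbitrary units
wb = rng.uniform(1.0, 4.0, n)
dt = (walk_a(wa) - walk_b(wb)          # true time difference is zero, so dt
      + rng.normal(0.0, jitter * np.sqrt(2), n))  # is pure walk plus jitter

# Profile the mean dt against the width in each channel; since the walk is
# separable, each profile recovers that channel's curve up to a constant.
edges = np.linspace(1.0, 4.0, 31)
ia, ib = np.digitize(wa, edges) - 1, np.digitize(wb, edges) - 1
prof_a = np.array([dt[ia == k].mean() for k in range(30)])
prof_b = np.array([(-dt)[ib == k].mean() for k in range(30)])

# Apply the per-channel corrections, then remove the residual static offset.
corrected = dt - prof_a[ia] + prof_b[ib]
corrected -= corrected.mean()
print(round(float(corrected.std()), 3))  # close to the jitter floor of ~0.07 ns
```

After the correction, the spread of the within-cluster time differences collapses to the jitter floor, mirroring the before/after histograms in the figure.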
\subsection{Photonis Planacon MCP-PMT (2016)} \label{subsection Photonis Planacon MCP-PMT} The individual pixel pads of the Planacon are nearly twice as large in the vertical direction as those of the Photek Phase-2 detector. It is assumed that the avalanche size is similar for both detectors, hence each photon cluster recorded with the Planacon is expected to have 1--2 hits. With relatively small clusters, and a high contribution from single-hit clusters, the added benefit from charge-weighting the positions of the pixels in a cluster is not as significant as for the Photek Phase-2. Hence for the 2016 dataset, the choice was made not to perform charge calibration, and for multi-hit clusters the position was simply averaged. The Planacon was operated at an average gain of 6.5$\times$10$^5$. \\ During the 2016 testbeam, a high statistics dataset was recorded and subsequently used for deriving both the INL and time-walk corrections. In the construction of the mating board between the Planacon and the electronics, special care was taken to make every channel as similar as possible (in contrast to the Photek Phase-2 mating board). The time-walk calibration was performed on neighbouring channels using the same method as described above. \section{Data analysis} \label{section Data analysis} For both the 2015 and 2016 datasets, the beam (as defined by the scintillator trigger) was focused very close to one of the vertical sides of the radiator. This meant that the path of light reflecting off that side differed only minimally from the light propagating directly to the MCP-PMT. As such, the number of possibilities for paths taken by the photons through the radiator reduces by a factor of two in this configuration. In the vertical direction, the beam was focused slightly below the centre of the radiator plate, by 6.4~cm (2015) and 3.6~cm (2016), respectively.
\subsection{Clustering} \label{subsection Clustering} The clustering algorithm associates hits within a vertical column of pixels which are close in time and space. This is defined to be within 2.5~HPTDC time bins (each 97.7~ps) after applying calibrations, and missing at most a single pixel. The columns are numbered from zero, from negative to positive horizontal detector coordinate. Clustering is not performed in the horizontal coarse pixel direction. In both the 2015 and 2016 datasets, four columns of 32~pixels are analysed. The distribution of cluster sizes for two columns in the 2015 dataset for the Photek Phase-2 MCP-PMT is shown in Figure \ref{figure PS2015 clustersize}, and is expected to follow a Poisson distribution. It can be seen that single-hit clusters are significantly enhanced. These are hits that have not been associated to the correct cluster, are an incomplete cluster, or are simply noise, and are suppressed in further data analysis. In the 2016 dataset, the cluster size of the Planacon is on average about 1.3, and all clusters are accepted.\\ \begin{figure}[!hbt] \centerline{ \includegraphics[width=0.75\columnwidth]{FIG_PS2015_N_hits_per_cluster_v7.pdf}} \caption{Distribution of cluster sizes observed in the 2015 dataset (Photek Phase-2 MCP-PMT) for pixel columns 0 and 1. The large number of single hit clusters (relative to the expected Poisson distribution) is attributed to hits not correctly associated to a cluster, and accordingly single-hit clusters are suppressed in further data analysis.} \label{figure PS2015 clustersize} \end{figure} Two different methods for calculating the timestamp and position of a cluster are used. For the 2015 dataset, the weighted charges from individual pixels are used to make the best possible position estimate of the true photon hit. The cluster time is further improved by charge-weighting the individual timestamps, to account for the poorer time resolution of signals with lower charge. 
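The 2015 cluster estimator described above is a charge-weighted mean over the hits; a minimal sketch with invented hit values:

```python
import numpy as np

# A hypothetical three-hit cluster in one finely-pixellated column.
pixel_y = np.array([10.0, 11.0, 12.0])   # pixel index along the column
charge  = np.array([20.0, 80.0, 35.0])   # calibrated charge per hit, fC
t_hit   = np.array([1.10, 0.95, 1.02])   # calibrated hit time, ns

# Charge-weighting pulls the position towards the highest-charge pixel and
# down-weights the timestamps of low-charge hits, which time less precisely.
y_cluster = np.average(pixel_y, weights=charge)
t_cluster = np.average(t_hit, weights=charge)
print(round(float(y_cluster), 3), round(float(t_cluster), 3))
```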
In the case of the 2016 dataset, the position and timestamps of the pixels are simply averaged. Cluster counting measurements will be discussed in Section \ref{subsection Photon counting}. \subsection{Particle identification} \label{subsection Particle Identification} The time of flight difference between the T1 and T2 stations measured with the TORCH electronics is shown in Figure \ref{figure T1-T2 particle identification} for both the 2015 and 2016 data, with the pion and proton peaks clearly seen. The proton peak, in terms of the time T1 minus T2, arrives earlier than the pion peak. This is a consequence of a long cable for T1 and a short cable for T2 effectively inverting the underlying distribution in time. The standard deviations of the fitted data are given in Table \ref{table T1-T2 PID sigmas} and demonstrate the combined quality of the time reference signals. It is expected that there is also a small admixture of kaons ($\sim$1\%); however, this contribution cannot be distinguished in either case.
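The quoted peak separation, and the invisibility of the kaon component, are consistent with a simple time-of-flight estimate (particle masses and the roughly 11~m T1--T2 baseline are assumed here; this is an illustration, not the analysis itself):

```python
import math

C = 2.99792458e8                              # speed of light, m/s
M_PI, M_K, M_P = 0.13957, 0.493677, 0.93827   # masses, GeV/c^2

def tof_ns(p_gev, mass, length_m):
    """Time of flight in ns over length_m metres; 1/beta = sqrt(1 + (m/p)^2)."""
    return length_m / C * math.sqrt(1.0 + (mass / p_gev) ** 2) * 1e9

p, L = 5.0, 11.0
dt_p_pi = tof_ns(p, M_P, L) - tof_ns(p, M_PI, L)   # ~0.6 ns: well separated
dt_k_pi = tof_ns(p, M_K, L) - tof_ns(p, M_PI, L)   # ~0.16 ns: within the peak widths
print(f"{dt_p_pi * 1e3:.0f} ps, {dt_k_pi * 1e3:.0f} ps")
```

With fitted peak widths of order 100~ps, a kaon-pion shift of only $\sim$0.16~ns merges into the pion peak, while protons are cleanly resolved.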
The figures for 2016 show that the INL correction is performed to good effect, significantly reducing the time spread.\\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_T1-T2_TORCH_v5.pdf} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2016_T1-T2_TORCH_v5.pdf} \end{minipage} \caption{Time difference measured between time reference stations T1 and T2 in 2015 (left) and in 2016 before (right, red) and after INL correction (right, blue), showing the proton peak (left in both plots) and the pion peak (right in both plots).} \label{figure T1-T2 particle identification} \end{center} \end{figure} \begin{table}[!htbp]\footnotesize \begin{center} \begin{tabular}{| l || l | l |} \hline & Proton peak & Pion peak \\ \hline \hline 2015 & 134.0$\pm$0.9~ps & 156.0$\pm$0.9~ps \\ \hline 2016 (before INL) & 119$\pm$1~ps & 112$\pm$1~ps \\ \hline 2016 (after INL) & 87$\pm$1~ps & 84.0$\pm$0.7~ps \\ \hline \end{tabular} \end{center} \caption{Standard deviations of fits to the data shown in Figure \ref{figure T1-T2 particle identification}.} \label{table T1-T2 PID sigmas} \end{table} In 2015, the time of flight difference measured between pions and protons is 601$\pm$2~ps, with the uncertainty derived from the error on the means of the Gaussian fits to the data. From the time of flight difference, a momentum of 5.14$\pm$0.01~GeV/c is calculated, deviating slightly from the nominal beam settings (5~GeV/c). In 2016, the time of flight measured between pions and protons is 592$\pm$2~ps, giving a momentum of 5.18$\pm$0.01~GeV/c, again deviating slightly from nominal. \subsection{Timing performance} \label{subsection Timing performance} The position and timestamp for a given cluster are calculated after applying the calibrations to the data. 
The clusters are then separated into pion and proton contributions according to the T1-T2 time of flight (Figure \ref{figure T1-T2 particle identification}). The ambiguous regions between the proton and pion peaks (in 2015 set to 19.4--19.75 ns, in 2016 set to 29.8--30.1 ns) are removed from further analysis. \\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_timeproj_pions_T2_v6_v4.pdf} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_timeproj_pions_T1_v6_v4.pdf} \end{minipage} \end{center} \vspace{0.1mm} \begin{center} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_timeproj_protons_T2_v6_v4.pdf} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_timeproj_protons_T1_v6_v4.pdf} \end{minipage} \caption{Data taken in 2015: detected vertical position versus timestamp for clusters detected in column 0, after selecting for pions (top) and protons (bottom), relative to timing signal T2 (left) and T1 (right). The overlaid lines represent the simulated patterns for direct light (red) and light undergoing a single (green), double (blue) or triple (black) reflection off the vertical side faces of the optics (see Figure \ref{figure intro double side reflection}). } \label{figure PS2015 time projections} \end{center} \end{figure} \begin{figure}[!hbt] \centerline{ \includegraphics[width=0.75\columnwidth]{FIG_PS2016_v7_col1_timeproj_pions_T2_v4.pdf}} \caption{Pion-selected data taken in 2016: detected vertical position versus timestamp for clusters detected in column 1 relative to time reference station T2. The overlaid lines represent the simulated patterns for direct light (red) and light undergoing a single (green), double (blue) or triple (black) reflection off the vertical sides of the optics. 
} \label{figure PS2016 time projections} \end{figure} The time relative to T1 and T2 as a function of measured vertical (finely-pixellated) position for clusters detected in column 0 of the Photek MCP-PMT in the 2015 dataset is shown in Figure \ref{figure PS2015 time projections}. The expected patterns from simulation are overlaid. The pattern folding (see Section \ref{section Design of TORCH}) is clearly visible; multiple patterns are observed. The overlaid patterns appear in closely-spaced paired groups, both from the direct light and from pattern folding off the vertical side close to where the charged particle beam impinges on the radiator. Comparing top and bottom plots, there is a shift in the position of the patterns between pions and protons caused by the difference in the Cherenkov angles, expected to be 14.4~mrad at 5.14~GeV/c (equivalent to a shift of 2.3~pixels). A deterioration is also visible in the timing resolution of the T1 plots relative to the T2 plots; this is due to signal degradation over the length of the cable transporting the T1 signal to the TORCH electronics. A slight discontinuity exists at the centre of each pixel column, when comparing Figure \ref{figure PS2015 time projections} (top, left) and (bottom, left). This position corresponds to the boundary between two NINO chips; the discontinuity indicates that constructing the time-walk correction across this boundary could be further optimized. \\ Data taking was improved in several respects in 2016. Firstly, the charged particle beam was focused on a number of different positions on the radiator plate, allowing for alignment of the detector from data. Secondly, recording of a very large dataset allowed for improvements of the calibration, especially the INL. \\ Figure \ref{figure PS2016 time projections} shows the MCP-PMT time measurement relative to T2, detected on column 1 of the Planacon, as a function of vertical position for selected pions.
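The quoted pion-proton Cherenkov-angle shift can be cross-checked under an assumed phase refractive index of about 1.48 for fused silica (an approximation; the full chromatic treatment is not reproduced here):

```python
import math

N_PHASE = 1.48                 # assumed phase index of fused silica (approximate)
M_PI, M_P = 0.13957, 0.93827   # masses, GeV/c^2
P = 5.14                       # GeV/c, as derived from the T1-T2 time of flight

def cherenkov_mrad(p, m, n=N_PHASE):
    """Cherenkov angle in mrad: cos(theta) = 1/(n * beta), beta = p/E."""
    beta = p / math.sqrt(p * p + m * m)
    return 1e3 * math.acos(1.0 / (n * beta))

shift = cherenkov_mrad(P, M_PI) - cherenkov_mrad(P, M_P)
print(f"{shift:.1f} mrad")   # ~15 mrad, in line with the quoted 14.4 mrad
```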
As in the 2015 testbeam period, the observed pattern closely agrees with Monte Carlo expectations. Comparing Figures \ref{figure PS2015 time projections} and \ref{figure PS2016 time projections}, it should be remembered that the vertical pixel width of the Planacon is twice as large as that of the Photek Phase-2. Also the overall size of the Planacon detector is vertically twice as large. \\ The prompt part of the pion signal relative to time reference station T2 (Figure \ref{figure PS2015 time projections} (top, left) and Figure \ref{figure PS2016 time projections}) is now used to benchmark the timing performance. For this measurement the prompt part of the pattern is used, composed of light with either no reflection off the vertical side face or with just a single reflection off the side close to where the charged particle beam traversed the radiator. Residuals of the measured times relative to the predicted curves are shown for two columns of the Photek Phase-2 tube and the Planacon in Figure \ref{figure td-tdr}. Table \ref{table td-tdr results} lists the standard deviations of Gaussian fits to these timing residuals, which is indicative of the single-photon timing resolution achieved. Note that variation due to smearing from the time reference station has not yet been corrected for. The gap between the entrance window and the first micro-channel plate of the Photek Phase-2 MCP-PMT is small (0.2~mm), hence the tail at later times seen in Figure \ref{figure td-tdr} (left) cannot be attributed to backscattering. The most likely cause is a non-optimal time-walk correction. In the case of the Planacon, the input gap is large (4.9~mm), which is expected to displace the backscattering peak to about 0.5--1~ns after the main peak. Hence the tail in Figure \ref{figure td-tdr} (right) is attributed to backscattering. 
Despite the much coarser granularity of the Planacon, the timing performance has been maintained in that particular TORCH configuration, mainly due to improvements in the calibration techniques. \\ \begin{figure} \begin{center} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2015_td-tdr_pions_T2_v3.pdf} \end{minipage} \hspace{0.1cm} \begin{minipage}[!hbt]{0.47\columnwidth} \includegraphics[width=\textwidth]{FIG_PS2016_td-tdr_pions_order0_v3.pdf} \end{minipage} \caption{Difference between observed and predicted times for two out of four columns deployed in 2015 (left) and in 2016 (right), for prompt photons from pions. The tail to the right is attributed to non-optimal time-walk corrections (left) and backscattered photo-electrons (right).} \label{figure td-tdr} \end{center} \end{figure} \begin{table}[!htbp]\footnotesize \begin{center} \begin{tabular}{| l || l | l |} \hline & 2015 & 2016 \\ & $\sigma$ of fit & $\sigma$ of fit \\ \hline \hline Column 0 & 110$\pm$2~ps & 124$\pm$4~ps \\ \hline Column 1 & 120$\pm$3~ps & 94$\pm$3~ps \\ \hline Column 2 & 137$\pm$3~ps & 103$\pm$3~ps \\ \hline Column 3 & 111$\pm$3~ps & 99$\pm$4~ps \\ \hline \end{tabular} \end{center} \caption{Standard deviation of Gaussian fits to the timing residuals for all columns, for 2015 and 2016 datasets.} \label{table td-tdr results} \end{table} To derive the intrinsic single-photon time resolution of TORCH, an estimate needs to be made of the time resolution of the time reference stations. As T1 and T2 have identical construction, it is assumed that the time resolution is the same, but that T1 suffers extra smearing from the long cable over which the signal is propagated. In 2015, insufficient data were collected to perform this subtraction reliably. However in 2016 data, the contribution from signal propagation can be factored out, and is found to be 56$\pm$14~ps, where the error is statistical. 
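The quadrature bookkeeping behind these estimates can be reproduced from the numbers quoted in the text (the 84~ps INL-corrected pion width of T1$-$T2, the 56~ps cable contribution, and the per-column residuals of Table \ref{table td-tdr results}); a sketch assuming two identical stations:

```python
import math

# T1-T2 pion width after INL correction (84 ps) = two identical stations
# plus the extra smearing of the long T1 cable (56 ps), added in quadrature.
station = math.sqrt((84.0 ** 2 - 56.0 ** 2) / 2.0)
print(round(station))         # single-station resolution, ~44 ps

# Removing one station's contribution from the best and worst per-column
# residuals (94 and 124 ps) brackets the intrinsic TORCH resolution.
lo = math.sqrt(94.0 ** 2 - station ** 2)
hi = math.sqrt(124.0 ** 2 - station ** 2)
print(round(lo), round(hi))   # ~(83-116) ps, close to the quoted (83-115) ps
```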
From the INL-corrected pion distribution shown in Figure \ref{figure T1-T2 particle identification} (right, blue) it is then estimated that the intrinsic time resolution of a single time reference station is 44$\pm$9~ps. Subtracting this contribution in quadrature from the resolutions quoted in Table \ref{table td-tdr results} gives a range of (83--115)$\pm$6~ps for the time resolution of the TORCH prototype. \\ \subsection{Photon counting} \label{subsection Photon counting} A photon counting efficiency measurement was performed only on data from the Planacon MCP-PMT. A photocathode degradation issue with the Photek Phase-2 MCP-PMT made the data for that tube less reliable. \\ The photon counting efficiency of the TORCH prototype is calculated by comparing the number of photons detected per event to a GEANT4 simulation \cite{Geant4_2003_main_paper, Van_Dijk_2016_Thesis}. The simulation accounts for losses due to Rayleigh scattering, and a Lambertian model is used for losses due to microscopic surface roughness of reflective faces. The simulation also accounts for Fresnel reflections at the air gap between the exit surface of the focusing block and the detector window. The resulting photon spectrum is then modified using the transmission curve of the glue used between the radiator plate and the focusing block, the reflectivity of the aluminium surface of the focusing block and the quantum efficiency of the Planacon (see Figures \ref{figure intro mirror reflectivity} and \ref{figure intro QE curves}). To account for the collection efficiency of the tube, an efficiency of 65\% was applied. \\ The final applied efficiency factor derives from the threshold of the NINO chip. Two types of cluster inefficiency are considered: those for which the charge measured is so low that the signal does not exceed the threshold, and those straddling the border between two pixels, dividing the charge in such a way that neither meets the threshold.
Measurements had previously been performed at several NINO settings and the charge threshold was found to lie in the range 30--60~fC. Following estimates from testbeam data, it is assumed that an average threshold level of 42~fC is representative. Assuming a Gaussian distribution with representative average gain and geometrical spread, it is then estimated that 12.7\% of the total number of generated photoelectrons are lost on average. To simulate this loss, an additional random cut is placed on the number of photoelectrons. \\ Detector patterns for 10k protons and 10k pions were generated, with the effects described above applied. To derive the average expected number of photons over all events, the pion and proton distributions were weighted and combined according to their relative fractions from the integrals under the pion and proton distributions (see Figure \ref{figure T1-T2 particle identification}), namely 61$\pm$3\% pions and 39$\pm$2\% protons. The resulting yields from data and simulation are shown in Figure \ref{figure PS2016 photon counting}. \\ \begin{figure}[!hbt] \centerline{ \includegraphics[width=0.75\columnwidth]{FIG_PS2016_photon_counting_v5.pdf}} \caption{Measured photon counting statistics per event in 2016 testbeam data (red) and expected from simulation (blue).} \label{figure PS2016 photon counting} \end{figure} The mean number of photons expected from simulation is 4.89$\pm$0.02, compared to 3.23$\pm$0.01 observed in data (statistical errors only). Therefore, on average about 34\% fewer photons are observed than expected, indicating that additional factors remain to be accounted for. This will be studied in future developments planned for the TORCH project. \section{Summary and future plans} \label{section Conclusion} TORCH is a DIRC-type detector, designed to achieve high-precision time-of-flight over large areas.
In order to provide a $K-\pi$ separation up to 10\,GeV/$c$ momentum over a 10\,m flight path, a ToF resolution of $\sim$15\,ps is required. This translates to a per-photon resolution of 70\,ps, given around 30 detected photons per track.\\ A small-scale TORCH demonstrator, with a quartz plate of dimensions 120$\times$350$\times$10~mm$^3$, has been constructed. The detector is read out by a single customised Photek MCP-PMT with 32$\times$32 pixels contained within an area of 26.5$\times$26.5~mm$^2$, where a charge-division technique has been used to improve the spatial granularity. Testbeam results are compared to those from a commercial 2-inch square Planacon 32$\times$32 pixellated MCP-PMT.\\ The data analysis methods employ a data-driven approach to correct simultaneously for time-walk, charge-to-width calibration, and integral non-linearities of the electronics readout. Following significant improvements to the triggering and calibration techniques, a range of (83--115)$\pm$6~ps is measured for the single-photon time resolution of TORCH. Hence the single-photon timing performance is approaching the required 70\,ps per photon. The single-photon counting performance is around 34\% lower than expected from simulation. Improvements in the electronics calibration techniques and threshold control are expected in the future. In conclusion, the testbeam measurements have demonstrated the principle of operation of TORCH, with a timing resolution that approaches the requirement for the final detector. \\ The small-scale demonstrator is a precursor to a full-scale TORCH module (660$\times$1250$\times$10~mm$^3$), which is currently under construction. The module will be equipped with ten full-sized 2-inch Photek Phase-3 64$\times$64 pixel MCP-PMTs. The MCP-PMTs, optics and electronics to equip this module have been delivered and are currently under test. All components, including the mechanical structure and housings, will be ready for testbeam operation in 2018.
\section*{Acknowledgements} The support of the European Research Council is gratefully acknowledged in the funding of this work through an Advanced Grant under the Seventh Framework Programme FP7 (ERC-2011-AdG 299175-TORCH). The authors wish to express their gratitude to Simon Pyatt of the University of Birmingham for wire bonding the NINO ASICs, and the CERN EP-DT-EF and TE-MPE-EM groups for their efforts in the coupling PCB design and bonding. We are grateful to Nigel Hay, Dominic Kent and Chris Slatter of Photek for their work on the MCP-PMT development.
\section{Entropy production of harmonically coupled drift-diffusion particles on a circle: Harmonic trawlers}\seclabel{HarmonicTrawlers} \paragraph*{Abstract} In this section we consider the entropy production of two harmonically coupled distinguishable drift-diffusion particles on the circle of circumference $L$, \Fref{trawlers}. We calculate the entropy production first using simple physical reasoning, \Erefs{def_density_r} and \eref{N2_expected_entropy_production}, and second from first principles using the field-theoretic methods outlined in the main text, \Eref{N2_final_entropy_production}, to illustrate the formalism with a pedagogical example. In \Sref{generalised_trawlers} the results are re-derived using the more general multi-particle framework of \Sref{MultipleParticles}. \subsection{Entropy production from physical reasoning} \seclabel{HarmonicTrawlers_simple_physics} Two particles, indexed as $1$ and $2$, are placed on a circle only so as to ensure that the marginal distributions of their positions $x_1$, $x_2$, and in fact their centre of mass $(x_1+x_2)/2$, reach a steady-state distribution, which is uniform. The particles both diffuse with diffusion constant $D$ and self-propel with velocities $w_1$ and $w_2$ respectively. In addition, they interact by an attractive, harmonic pair potential, which may be thought of as a spring. This spring, pulling the particles towards each other, may stretch around the circle, implying that the interaction potential is not periodic; for simplicity, one may think of $L$ as being so large that the particles never end up stretching the spring beyond one circumference. A cartoon of the setup is shown in \fref{trawlers}.
The Langevin dynamics of the particles is \begin{subequations}{\elabel{trawlers_in_R}} \begin{align} \dot{x}_1 &= w_1 - (x_1-x_2)k + \xi_1(t)\\ \dot{x}_2 &= w_2 - (x_2-x_1)k + \xi_2(t) \end{align} \end{subequations} with positive spring constant $k$ and two independent, white Gaussian noise terms, $\xi_1(t)$ and $\xi_2(t)$, so that \begin{align} \ave{\xi_1(t')\xi_1(t)} & = 2 D \delta(t'-t)\\ \ave{\xi_2(t')\xi_2(t)} & = 2 D \delta(t'-t) \ . \end{align} \begin{figure} \centering \begin{tikzpicture} \draw[black,very thick] (0,0) circle (2cm); \draw[very thick,red,decorate,decoration={coil,segment length=4pt}] (-60:2cm) arc (-60:-150:2cm); \draw[draw=none] (0,0); \draw[teal,fill] (-60:2cm) circle (10pt); \draw[draw=none] (0,0); \draw[] (-60:2cm) node {$1$}; \draw[draw=none] (0,0); \draw[thick,->] (-60:1.5cm) arc (-60:-10:1.5cm) node[midway,above,xshift=-5pt,yshift=-3pt] {$w_1$}; \draw[draw=none] (0,0); \draw[teal,fill] (-150:2cm) circle (10pt); \draw[draw=none] (0,0); \draw[] (-150:2cm) node {$2$}; \draw[draw=none] (0,0); \draw[thick,->] (-150:1.5cm) arc (-150:-110:1.5cm) node[midway,above,xshift=4pt,yshift=-2pt] {$w_2$}; \end{tikzpicture} \caption{Cartoon of two harmonically coupled drift-diffusion particles. They are of two different species, such that one drifts with velocity $w_1$ and the other with velocity $w_2$. The two are interacting by a harmonic potential with spring constant $k$. The particles are placed on a circle only so as to maintain a stationary state.} \flabel{trawlers} \end{figure} To implement the stretching of the spring beyond $L$, the coordinates $x_1$ and $x_2$ in \Eref{trawlers_in_R} are \emph{not} periodic.
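The coupled Langevin \Erefs{trawlers_in_R} are straightforward to integrate numerically. The following sketch uses a plain Euler--Maruyama discretisation with illustrative parameter values (not taken from the text) and checks that the mean separation relaxes to $(w_1-w_2)/(2k)$:

```python
import numpy as np

# Euler-Maruyama integration of the two coupled Langevin equations;
# all parameter values are illustrative choices, not taken from the text.
rng = np.random.default_rng(0)
D, k, w1, w2 = 1.0, 1.0, 1.5, 0.5
dt, n_steps, n_burn = 1e-3, 1_000_000, 100_000

noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, 2))
x1 = x2 = 0.0
sep = np.empty(n_steps)          # separation x1 - x2
for i in range(n_steps):
    dx1 = (w1 - k * (x1 - x2)) * dt + noise[i, 0]
    dx2 = (w2 - k * (x2 - x1)) * dt + noise[i, 1]
    x1, x2 = x1 + dx1, x2 + dx2
    sep[i] = x1 - x2

mean_sep = sep[n_burn:].mean()   # expect (w1 - w2) / (2 k) = 0.5
```

Since the spring is not periodic, only the separation $x_1-x_2$ is needed for this check and the centre of mass may drift freely.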
The two Langevin \Erefs{trawlers_in_R} decouple when considering the motion of the centre of mass, \begin{equation} z=\frac{x_1+x_2}{2} \ , \end{equation} and the fluctuations of their distance $x_1-x_2$ about the expected distance $(w_1-w_2)/(2k)$, \begin{equation}\elabel{def_r} {\Delta r}=x_1-x_2-\frac{w_1-w_2}{2k} \ , \end{equation} so that \begin{subequations}{\elabel{eom_z_and_r}} \begin{align} \dot{z} &= \frac{w_1 + w_2}{2} + \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)}\left(\xi_1(t) + \xi_2(t)\right) \elabel{eom_z}\\ \dot{{\Delta r}} &= -2 {\Delta r} k + \xi_1(t) - \xi_2(t) \elabel{eom_r} \ . \end{align} \end{subequations} The two linear combinations of the independent noises $\xi_1$ and $\xi_2$ are uncorrelated and, being Gaussian, therefore independent; in particular \begin{subequations} \begin{align} \ave{ \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)}\left(\xi_1(t') + \xi_2(t')\right) \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)}\left(\xi_1(t) + \xi_2(t)\right) } &= D \delta(t'-t) \elabel{noise_of_z} \\ \ave{ \left(\xi_1(t') - \xi_2(t')\right)\left(\xi_1(t) - \xi_2(t)\right)} &= 4 D \delta(t'-t) \elabel{noise_of_r}\\ \ave{ \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)}\left(\xi_1(t') + \xi_2(t')\right) \left(\xi_1(t) - \xi_2(t)\right) } &= 0 \ . \end{align} \end{subequations} The two degrees of freedom $z$ and ${\Delta r}$ are therefore independent and their joint entropy production is expected to be the sum of the individual entropy productions. The equations of motion \eref{eom_z_and_r} describe a Brownian particle on a circle and an Ornstein-Uhlenbeck process, so that their respective stationary distributions can be written down immediately.
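As a numerical sanity check of the ${\Delta r}$ sector, the Ornstein--Uhlenbeck process \eref{eom_r} can be sampled with its exact discrete-time update (parameter values are again illustrative); its stationary variance should come out as $D/k$:

```python
import numpy as np

# Exact discrete-time sampling of the Ornstein-Uhlenbeck process (eom_r):
# relaxation rate theta = 2 k, noise strength 4 D, so that the
# stationary variance is D / k.  Parameter values are illustrative only.
rng = np.random.default_rng(1)
D, k = 1.0, 2.0
theta, var_inf = 2.0 * k, D / k
dt, n = 0.1, 200_000

decay = np.exp(-theta * dt)
kick = np.sqrt(var_inf * (1.0 - decay**2))
g = rng.normal(size=n)
r = np.empty(n)
r[0] = 0.0
for i in range(n - 1):
    r[i + 1] = r[i] * decay + kick * g[i]

var_est = r[n // 10:].var()      # expect D / k = 0.5
```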
The stationary distribution of $z$ is uniformly $1/L$ on the circle and that of ${\Delta r}$ is a Boltzmann distribution with potential $k{\Delta r}^2$, \Eref{eom_r}, and temperature $2D$, \Eref{noise_of_r}, \begin{equation}\elabel{def_density_r} \rho_{{\Delta r}}({\Delta r}) = \sqrt{\frac{k}{2\pi D}} \exp{-\frac{k{\Delta r}^2}{2D}} \ . \end{equation} Of the two degrees of freedom $z$ and ${\Delta r}$, the latter does not produce entropy, as, being confined by a potential, it cannot generate any stationary probability flux. The other degree of freedom, $z$, ``goes around in circles'' with a net drift of $(w_1+w_2)/2$, \Eref{eom_z}, and diffusion constant $D/2$, \Eref{noise_of_z}, resulting in an entropy production of \cite{CocconiETAL:2020} \begin{equation}\elabel{N2_expected_entropy_production} \dot{S}_{\text{int}} = \frac{(w_1+w_2)^2}{2D} \ . \end{equation} This result is to be reproduced below by field-theoretic means. This is a genuine challenge, as the field theory cannot take the shortcut of rewriting the setup so that only one degree of freedom carries a finite probability current. \Eref{N2_expected_entropy_production} should be independent of the details of the pair potential, which enters only insofar as it confines the relative motion. This independence is indeed confirmed in \SMref{generalised_trawlers}. \subsection{Entropy production using field-theoretic methods} \subsubsection{Action} The non-perturbative part of the action is immediately given by \Eref{def_action0} and the Fokker-Planck operator of drift-diffusion, \Eref{FPE_drift_diffusion} \begin{equation} \FPop_{\gpvec{y}_i,\gpvec{x}_i} = (D \partial_{y_i}^2 - w_i \partial_{y_i}) \delta(y_i-x_i) \end{equation} which gives rise to the bare propagators in real space and direct time \begin{equation}\elabel{bare_propagator_harmonic_trawlers} \ave{\phi_i(y_i,t') \tilde{\phi}_i(x_i,t)} = \frac{\theta(t'-t)}{\sqrt{4\pi D (t'-t)}} \Exp{-\frac{(y_i-x_i-w_i(t'-t))^2}{4D(t'-t)}} \ .
\end{equation} The harmonic interaction via the pair potential \begin{equation} U(x_i-x_j) = \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} k (x_i-x_j)^2 \end{equation} can be implemented as an effective drift in the presence of the other particle species. For example, the constant drift $w_1$ enters into the action as \begin{equation} \AC_0 = \ldots + \int \dint{x_1} \dint{t} \tilde{\phi}_1(x_1,t) (-\partial_{x_1} w_1) \phi_1(x_1,t) + \ldots \end{equation} Correspondingly, the drift $\int\dint{x_2} \rho(x_2) (-U'(x_1-x_2))$ due to the interaction with the other species with density $\rho(x_2)$ enters as \begin{align} \AC_1 &= \int \dint{x_1} \dint{t} \tilde{\phi}_1(x_1,t) \partial_{x_1} \left[\left(\int \dint{x_2} U'(x_1-x_2) \phi^{\dagger}_2(x_2,t) \phi_2(x_2,t) \right) \phi_1(x_1,t)\right]\\ &=-\int \dint{x_1} \dint{t} \left( \partial_{x_1} \tilde{\phi}_1(x_1,t) \right) \left(\int \dint{x_2} U'(x_1-x_2) \phi^{\dagger}_2(x_2,t) \phi_2(x_2,t) \right) \phi_1(x_1,t) \end{align} where $\phi^{\dagger}_2(x_2,t) \phi_2(x_2,t)$ probes for the local density of particles of the second species and $U'(x_1-x_2)$ denotes the derivative of $U(x_1-x_2)$ with respect to its argument. Correspondingly, the contribution to the action due to the effect of particle species $1$ on the drift velocity of particle species $2$ is \begin{align} \AC_2 & = \int \dint{x_2} \dint{t} \tilde{\phi}_2(x_2,t) \partial_{x_2} \left[\left(\int \dint{x_1} U'(x_2-x_1) \phi^{\dagger}_1(x_1,t) \phi_1(x_1,t) \right) \phi_2(x_2,t)\right] \\ & = - \int \dint{x_2} \dint{t} \left( \partial_{x_2} \tilde{\phi}_2(x_2,t) \right) \left(\int \dint{x_1} U'(x_2-x_1) \phi^{\dagger}_1(x_1,t) \phi_1(x_1,t) \right) \phi_2(x_2,t) \ . \end{align} Diagrammatically, each of the actions results in two vertices, as $\phi^{\dagger}_i=1+\tilde{\phi}_i$ by the Doi-shift.
These are from $\AC_1$ \begin{equation}\elabel{pair_vertex1} \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertexOlive{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {}; \node at (-0.5,-0.5) [left,yshift=0pt] {}; \draw[tAsubstrate] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {}; \node at (-0.5,0.5) [left,yshift=0pt] {}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \quad\text{ and }\quad \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertexOlive{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {}; \node at (-0.5,-0.5) [left,yshift=0pt] {}; \draw[tAsubstrate] (0,-0.4) -- (0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {}; \node at (-0.5,0.5) [left,yshift=0pt] {}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \end{equation} and from $\AC_2$ \begin{equation}\elabel{pair_vertex2} \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[olive,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,-0.4} \tgenVertexOlive{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {}; \node at (-0.5,-0.5) [left,yshift=0pt] {}; \draw[tAactivity] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {}; \node at (-0.5,0.5) [left,yshift=0pt] {}; \draw[tAsubstrate] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \quad\text{ and }\quad \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[olive,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,-0.4} \tgenVertexOlive{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {}; \node at 
(-0.5,-0.5) [left,yshift=0pt] {}; \draw[tAactivity] (0,-0.4) -- (0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {}; \node at (-0.5,0.5) [left,yshift=0pt] {}; \draw[tAsubstrate] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \end{equation} where the dash-dotted, vertical, black line accounts for the pair potential, which carries no time-dependence. \subsubsection{Two-particle propagator} The first-order two-particle propagator is thus diagrammatically \begin{equation}\elabel{N2_propagator_in_diagrams} \ave{\phi_1(y_1,t')\phi_2(y_2,t')\tilde{\phi}_1(x_1,t)\tilde{\phi}_2(x_2,t)} \corresponds \quad \tikz[baseline=-2.5pt]{ \draw[tAactivity] (0.5,0.2) -- (-0.5,0.2); \draw[tAsubstrate] (0.5,-0.2) -- (-0.5,-0.2); } + \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.3) -- (0,0.3); \draw[black,thin] (-0.1,0.1) -- (0.1,0.1); \draw[red,thin] (-0.2,0.2) -- (-0.2,0.4); \tgenVertexOlive{0,-0.3} \tgenVertex{0,0.3} \draw[tAsubstrate] (0.5,-0.4) -- (0,-0.3) -- (-0.5,-0.4); \draw[tAactivity] (0.5,0.4) -- (0,0.3) -- (-0.5,0.4); \end{scope} } + \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.3) -- (0,0.3); \draw[black,thin] (-0.1,0.1) -- (0.1,0.1); \draw[olive,thin] (-0.2,0.2) -- (-0.2,0.4); \tgenVertex{0,-0.3} \tgenVertexOlive{0,0.3} \draw[tAactivity] (0.5,-0.4) -- (0,-0.3) -- (-0.5,-0.4); \draw[tAsubstrate] (0.5,0.4) -- (0,0.3) -- (-0.5,0.4); \end{scope} } +\text{h.o.t.} \ . \end{equation} The diagrams are easily translated to mathematical expressions using the bare propagators $\ave{\phi_i\tilde{\phi}_i}$ from \Eref{bare_propagator_harmonic_trawlers}.
In particular \begin{equation}\elabel{N2_diagram_straight_twiddle} \tikz[baseline=-2.5pt]{ \node at (0.5,0.2) [right] {$x_1,t$}; \node at (-0.5,0.2) [left] {$y_1,t'$}; \draw[tAactivity] (0.5,0.2) -- (-0.5,0.2); \node at (0.5,-0.2) [right] {$x_2,t$}; \node at (-0.5,-0.2) [left] {$y_2,t'$}; \draw[tAsubstrate] (0.5,-0.2) -- (-0.5,-0.2); } \corresponds \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1,t)} \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2,t)} \ , \end{equation} and the convolution \begin{align} \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertexOlive{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {$x_2,t$}; \node at (-0.5,-0.5) [left,yshift=0pt] {$y_2,t'$}; \draw[tAsubstrate] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {$x_1,t$}; \node at (-0.5,0.5) [left,yshift=0pt] {$y_1,t'$}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \corresponds& \int \dint{x_1'} \dint{x_2'} \int_t^{t'} \dint{s} \left\{ -\partial_{x_1'} \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1',s)} \right\}\nonumber\\ &\times \ave{\phi_1(x_1',s) \tilde{\phi}_1(x_1,t)} U'(x_1'-x_2') \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2',s)} \ave{\phi_2(x_2',s) \tilde{\phi}_2(x_2,t)} \,, \elabel{pair_pot_effect} \end{align} and similarly for the contribution from the third diagram in \Eref{N2_propagator_in_diagrams}. 
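Composing propagators through intermediate points and times, as in the convolution above, relies on the semigroup (Chapman--Kolmogorov) property of the bare propagator \Eref{bare_propagator_harmonic_trawlers}. A minimal numerical sketch of this property in one dimension, with illustrative parameters:

```python
import numpy as np

# Chapman-Kolmogorov (semigroup) check of the bare drift-diffusion
# propagator: integrating over an intermediate point x' at an
# intermediate time must reproduce the direct propagator.
# Parameter values are illustrative.
D, w = 0.7, 1.3

def G(y, x, dt):
    """Bare propagator: a Gaussian drifting with velocity w."""
    return np.exp(-(y - x - w * dt) ** 2 / (4.0 * D * dt)) \
        / np.sqrt(4.0 * np.pi * D * dt)

x0, y0, t1, t2 = 0.0, 0.8, 0.3, 0.5
xp = np.linspace(-20.0, 20.0, 40_001)      # grid for the x' integral
dxp = xp[1] - xp[0]
composed = np.sum(G(y0, xp, t2) * G(xp, x0, t1)) * dxp
direct = G(y0, x0, t1 + t2)
```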
To calculate \eref{pair_pot_effect} to leading order in $t'-t$, we use \Erefs{trick1} to \eref{trick4}, in particular expanding $U'(x_1'-x_2')$ about $x_1-x_2$, finally resulting in \begin{align}\elabel{pair_pot_effect2} \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertexOlive{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {$x_2,t$}; \node at (-0.5,-0.5) [left,yshift=0pt] {$y_2,t'$}; \draw[tAsubstrate] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {$x_1,t$}; \node at (-0.5,0.5) [left,yshift=0pt] {$y_1,t'$}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \corresponds \left(\partial_{y_1} \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1,t)} \right) U'(x_1-x_2) \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2,t)} \left((t'-t) + \order{(t'-t)^2}\right) \,. \end{align} Similarly, for the right-most diagram in \eref{N2_propagator_in_diagrams} we obtain, \begin{align}\elabel{pair_pot_effect3} \tikz[baseline=-5pt]{ \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[olive,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,-0.4} \tgenVertexOlive{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {$x_1,t$}; \node at (-0.5,-0.5) [left,yshift=0pt] {$y_1,t'$}; \draw[tAactivity] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {$x_2,t$}; \node at (-0.5,0.5) [left,yshift=0pt] {$y_2,t'$}; \draw[tAsubstrate] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \corresponds \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1,t)} U'(x_2-x_1) \left( \partial_{y_2} \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2,t)} \right) \left((t'-t) + \order{(t'-t)^2}\right) \,. 
\end{align} \subsubsection{Entropy production} From \Erefs{N2_propagator_in_diagrams}, \eref{N2_diagram_straight_twiddle}, \eref{pair_pot_effect2} and \eref{pair_pot_effect3} the leading order of the two-particle propagator is \begin{align} \ave{\phi_1(y_1,t') \phi_2(y_2,t') \tilde{\phi}_1(x_1,t) \tilde{\phi}_2(x_2,t)} = & \left( 1 + (t'-t) U'(x_1-x_2) \partial_{y_1} + (t'-t) U'(x_2-x_1) \partial_{y_2} + \order{(t'-t)^2} \right)\nonumber\\ &\times \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1,t)} \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2,t)} \ . \elabel{N2_propagator_1st_order} \end{align} It follows from \Erefs{def_Op} and \eref{def_Ln} that \begin{align} \bm{\mathsf{K}}^{(2)}_{y_1,y_2,x_1,x_2} = & \lim_{t'\downarrow t} \frac{\mathrm{d}}{\mathrm{d} t'} \ave{\phi_1(y_1,t')\phi_2(y_2,t')\tilde{\phi}_1(x_1,t)\tilde{\phi}_2(x_2,t)} \nonumber\\\elabel{N2_op} = & \left( D \partial_{y_1}^2 - w_1 \partial_{y_1} + D \partial_{y_2}^2 - w_2 \partial_{y_2} + U'(x_1-x_2) \partial_{y_1} + U'(x_2-x_1) \partial_{y_2} \right) \delta(y_1-x_1) \delta(y_2-x_2) \ . \end{align} and \begin{align} \operatorname{\bm{\mathsf{Ln}}}^{(2)}_{y_1,y_2,x_1,x_2} =& \lim_{t'\downarrow t} \Bigg\{ \ln \left( \frac{ \ave{\phi_1(y_1,t') \tilde{\phi}_1(x_1,t)} \ave{\phi_2(y_2,t') \tilde{\phi}_2(x_2,t)} }{ \ave{\phi_1(x_1,t') \tilde{\phi}_1(y_1,t)} \ave{\phi_2(x_2,t') \tilde{\phi}_2(y_2,t)} } \right) \nonumber\\&+ \ln \left( \frac{ 1 - U'(x_1-x_2) \frac{y_1-x_1-w_1(t'-t)}{2D} - U'(x_2-x_1) \frac{y_2-x_2-w_2(t'-t)}{2D} + \order{(t'-t)^2} } { 1 - U'(y_1-y_2) \frac{x_1-y_1-w_1(t'-t)}{2D} - U'(y_2-y_1) \frac{x_2-y_2-w_2(t'-t)}{2D} + \order{(t'-t)^2} } \right) \Bigg\} \ . 
\end{align} For suitably small $x_1-y_1$ and $x_2-y_2$ the logarithm can be expanded, \begin{align} \operatorname{\bm{\mathsf{Ln}}}^{(2)}_{y_1,y_2,x_1,x_2} = & (y_1 - x_1) \frac{w_1}{D} + (y_2 - x_2) \frac{w_2}{D} - \frac{y_1-x_1}{2D} \left( U'(x_1-x_2) + U'(y_1-y_2) \right) \nonumber\\&- \frac{y_2-x_2}{2D} \left( U'(x_2-x_1) + U'(y_2-y_1) \right) + \ldots \elabel{Ln_pairpot_expanded} \end{align} before taking the derivatives of \Eref{N2_op} by an integration by parts in \begin{equation} \dot{S}_{\text{int}}[\rho_{12}^{(2)}] = \int \dint{x_1}\dint{x_2}\dint{y_1}\dint{y_2} \rho_{12}^{(2)}(x_1,x_2) \bm{\mathsf{K}}^{(2)}_{y_1,y_2,x_1,x_2} \operatorname{\bm{\mathsf{Ln}}}^{(2)}_{y_1,y_2,x_1,x_2} \ . \end{equation} This integral is messy but straightforward to evaluate by integration by parts, avoiding derivatives of the joint stationary probability density $\rho_{12}^{(2)}(x_1,x_2)=\rho_{{\Delta r}}({\Delta r})/L$, \Eref{def_density_r}. It follows that indeed \begin{equation}\elabel{N2_final_entropy_production} \dot{S}_{\text{int}}[\rho_{12}^{(2)}] = \frac{(w_1+w_2)^2}{2D} \end{equation} confirming \Eref{N2_expected_entropy_production} through field-theoretic means. This concludes the derivation. Reproducing \Eref{N2_expected_entropy_production} shows that the field theory provides a straightforward, systematic path to entropy production even in the presence of interactions that have complex physical implications. In \SMref{generalised_trawlers} we will re-derive \Eref{N2_final_entropy_production} more generally. \section{Entropy production of drift-diffusion particles on a torus with potential}\seclabel{drift_diffusion_on_ring} \paragraph*{Abstract} In this section we consider a drift-diffusion particle with diffusion constant $D$ and drift $w$ on a $d$-dimensional torus with circumference $L$ and external potential $\Upsilon(x)$.
We calculate its entropy production in three different ways, to show that different perturbative expansions produce the same result and to highlight some peculiarities of continuum theories. It is a straightforward exercise to calculate the entropy production from first principles \cite[Sec.~3.11]{CocconiETAL:2020}. This is done in the following within the framework of the main text, first by drawing directly on Wissel's short-time propagator \cite{Wissel:1979} in \SMref{entProd_Wissel}, \Eref{local_entropy_GWissel}, and then field-theoretically in two different setups: in \SMref{FT_about_drift-diffusion}, only the potential is dealt with perturbatively, \Eref{local_entropy_Gw}; in \SMref{FT_about_diffusion}, both the potential and the drift are dealt with perturbatively. In \SMref{Fourier_transformation} we discuss some of the details of continuous space and the particular role of the Fourier transform. The Fokker-Planck equation for the present setup is \begin{equation}\elabel{FPE_drift_diffusion} \partial_t \rho(\gpvec{y},t) = \hat{\LC}_{\gpvec{y}} \rho(\gpvec{y},t) \text{ with } \hat{\LC}_\gpvec{y} = D \nabla_y^2 - \nabla_y \cdot ( \wvec - \Upsilon'(\gpvec{y})) = D \nabla_y^2 - ( \wvec - \Upsilon'(\gpvec{y})) \cdot \nabla_y + \Upsilon''(\gpvec{y}) \end{equation} where $\Upsilon'(\gpvec{y})=\nabla\Upsilon(\gpvec{y})$ and $\Upsilon''(\gpvec{y})=\laplace\Upsilon(\gpvec{y})$ are used to emphasise that a derivative acts only on $\Upsilon(\gpvec{y})$, in contrast to the nabla in front of the bracket, $\nabla_y \cdot ( \wvec - \Upsilon'(\gpvec{y}))$, which acts on everything to the right of it, just like the first term $D \nabla_y^2=D\laplace_\gpvec{y}$.
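The second equality in \Eref{FPE_drift_diffusion} is just the product rule; in one dimension it can be confirmed symbolically, for instance with the following \texttt{sympy} sketch (all symbol names are mere placeholders):

```python
import sympy as sp

# Product-rule check (in one dimension) that the divergence form of the
# Fokker-Planck operator equals its expanded form with Upsilon'':
#   D rho'' - ((w - U') rho)'  =  D rho'' - (w - U') rho' + U'' rho
y, w, D = sp.symbols("y w D")
U = sp.Function("Upsilon")(y)
rho = sp.Function("rho")(y)

divergence_form = D * sp.diff(rho, y, 2) - sp.diff((w - sp.diff(U, y)) * rho, y)
expanded_form = (D * sp.diff(rho, y, 2)
                 - (w - sp.diff(U, y)) * sp.diff(rho, y)
                 + sp.diff(U, y, 2) * rho)
difference = sp.simplify(divergence_form - expanded_form)   # expect 0
```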
After adding a small mass $r\downarrow0$ to maintain causality, the action in real space and direct time, \Eref{def_action0}, is \begin{equation}\elabel{action_drift_diffusion_appendix} \AC = \int \ddint{x}\ddint{y}\ddint{t} \tilde{\phi}(\gpvec{y},t) ( - \delta(\gpvec{y}-\gpvec{x})\partial_t + \FPop_{\gpvec{y},\gpvec{x}} -r) \phi(\gpvec{x},t) = \int \ddint{x}\ddint{t} \tilde{\phi}(\gpvec{x},t) ( - \partial_t + \hat{\LC}_{\gpvec{x}} -r) \phi(\gpvec{x},t) \end{equation} according to \Eref{def_harmonic_action} as $\FPop_{\gpvec{y},\gpvec{x}} = \hat{\LC}^{\dagger}_{\gpvec{x}} \delta(\gpvec{x}-\gpvec{y})$. In one dimension the Fokker-Planck equation has a known stationary solution and the entropy production can readily be calculated \cite{CocconiETAL:2020}. \subsection{Entropy production from the short-time propagator} \seclabel{entProd_Wissel} In the present section we derive the entropy production on the basis of the short-time propagator introduced by Wissel \supplcite{Wissel:1979}. This will serve as a reference for the following sections. The short-time propagator $G_{\text{\tiny Wi}}(\gpvec{x}\to\gpvec{y}; t'-t)$ may be constructed by some very basic physical reasoning, namely that ``the derivative of the potential plays the same role as a drift''. It is the probability density to transition from position $\gpvec{x}$ to $\gpvec{y}$ within time ${\Delta t}=t'-t$, and is given by \cite{Wissel:1979} \begin{equation}\elabel{def_GWissel} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) = \frac{\theta(t'-t)}{(4\pi D (t'-t))^{d/2}} \Exp{-\frac{\Big(\gpvec{y}-\gpvec{x}-(\wvec-\Upsilon'(\gpvec{x}))(t'-t)\Big)^2}{4 D (t'-t)}} \end{equation} which, by inspection, solves the differential equation \begin{equation}\elabel{Wissel_PDE} \partial_{t'} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) = D \nabla_\gpvec{y}^2 G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) - (\wvec-\Upsilon'(\gpvec{x})) \cdot \nabla_\gpvec{y} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) \ .
\end{equation} \Eref{def_GWissel} is therefore \emph{not} the solution of the FP \Eref{FPE_drift_diffusion}, but because $\lim_{t'\downarrow t} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) = \delta(\gpvec{y}-\gpvec{x})$ and \begin{equation}\elabel{actual_op_Wissel_identity} \big(\wvec-\Upsilon'(\gpvec{x})\big) \cdot \nabla_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) = \nabla_\gpvec{y} \cdot \big( (\wvec-\Upsilon'(\gpvec{x})) \delta(\gpvec{y}-\gpvec{x}) \big) = \nabla_\gpvec{y} \cdot \big( (\wvec-\Upsilon'(\gpvec{y})) \delta(\gpvec{y}-\gpvec{x}) \big) \end{equation} \Eref{def_GWissel} produces the correct kernel, \begin{equation}\elabel{Op_drift_diffusion_Wissel} \lim_{t'\downarrow t} \partial_{t'} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) = \hat{\LC}_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) \end{equation} in other words, the full propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ is approximated to first order by the short-time propagator $G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t)$, \begin{equation}\elabel{propagator_approximated_by_Wissel} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) \Big(1 + \order{(t'-t)^2}\Big) \ . \end{equation} The kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$, \Eref{def_Op}, calculated from the short-time propagator \Eref{def_GWissel} therefore reproduces correctly the Fokker-Planck kernel, \begin{equation} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} = \lim_{t'\downarrow t} \partial_{t'} G_{\text{\tiny Wi}}(\gpvec{x}\to \gpvec{y}; t'-t) = \hat{\LC}_\gpvec{y} \delta(\gpvec{y}-\gpvec{x})\ , \end{equation} \Eref{Op_drift_diffusion_Wissel}. 
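That \Eref{def_GWissel} solves \Eref{Wissel_PDE} ``by inspection'' can be confirmed symbolically; since the drift is frozen at the initial position, it enters only as a constant $v=w-\Upsilon'(\gpvec{x})$. A one-dimensional \texttt{sympy} sketch:

```python
import sympy as sp

# One-dimensional check that the short-time propagator G_Wi, a Gaussian
# whose drift is frozen at the initial position, v = w - Upsilon'(x),
# solves  d_s G = D d_y^2 G - v d_y G  with s = t' - t.
y, x, v = sp.symbols("y x v", real=True)
D, s = sp.symbols("D s", positive=True)

G_Wi = sp.exp(-(y - x - v * s) ** 2 / (4 * D * s)) / sp.sqrt(4 * sp.pi * D * s)
residual = sp.simplify(sp.diff(G_Wi, s)
                       - D * sp.diff(G_Wi, y, 2)
                       + v * sp.diff(G_Wi, y))              # expect 0
```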
As for the logarithm $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$, \Eref{def_Ln}, using \Eref{propagator_approximated_by_Wissel} gives \begin{equation}\elabel{Ln_drift_diffusion_Wissel} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}=\lim_{t'\downarrow t} \ln\left( \frac {\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}} {\ave{\phi(\gpvec{x},t')\tilde{\phi}(\gpvec{y},t)}} \right) = \frac{(\gpvec{y}-\gpvec{x})\cdot\bigg(2 \wvec-\Upsilon'(\gpvec{x})-\Upsilon'(\gpvec{y})\bigg)}{2D} \ , \end{equation} by explicit use of \Eref{def_GWissel}, and thus the local entropy production \Eref{def_local_entropy} \begin{equation}\elabel{local_entropy_GWissel} \dot{\sigma}(\gpvec{x})= \int \ddint{y} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} = -\Upsilon''(\gpvec{x}) + \frac{( \wvec - \Upsilon'(\gpvec{x}))^2}{D} \quad\text{so that}\quad \dot{S}_{\text{int}}[\rho]= \int\ddint{x} \rho(\gpvec{x}) \dot{\sigma}(\gpvec{x}) \ . \end{equation} Away from stationarity, the logarithm $\ln(\rho(\gpvec{x})/\rho(\gpvec{y}))$ needs to be added to $\dot{\sigma}$ to capture all entropy production, but this contribution is not considered in the present derivation. The results above are very well known, \latin{e.g.}\@\xspace \cite{Seifert:2012} or \cite[Sec.~3.11]{CocconiETAL:2020}, and are here retraced only to highlight which short-time details enter. \subsection{Entropy production from a perturbation theory about drift diffusion}\seclabel{FT_about_drift-diffusion} In the present section, we calculate the entropy production of a drift-diffusion particle in an external potential in a perturbative field theory about drift-diffusion.
To this end, we split the action \Eref{action_drift_diffusion_appendix} into two terms, $\AC=\AC_0+\AC_\text{pert}$ with \begin{subequations} \elabel{action_pert_over_drift} \begin{equation}\elabel{action0_drift_diffusion} \AC_0 = \int \ddint{x} \int \dint{t} \tilde{\phi}(\gpvec{x},t) (D \nabla_\gpvec{x}^2 - \wvec \cdot \nabla_\gpvec{x} - \partial_t) \phi(\gpvec{x},t) \end{equation} and \begin{equation}\elabel{action_pert_pot_only} \AC_\text{pert} = - \int \ddint{x} \int \dint{t} \left(\Upsilon'(\gpvec{x}) \phi(\gpvec{x},t) \right) \cdot \nabla_{\gpvec{x}} \tilde{\phi}(\gpvec{x},t) \ , \end{equation} \end{subequations} so that any expectation of the full theory can be calculated along the lines of \Eref{perturbative_expansion}. The bare propagator \begin{align}\elabel{def_Gw} G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) &= \frac{\theta(t'-t)}{(4\pi D (t'-t))^{d/2}} \Exp{-\frac{\Big(\gpvec{y}-\gpvec{x}-\wvec(t'-t)\Big)^2}{4 D (t'-t)}}\\ \nonumber &\corresponds \tikz[baseline=7.5pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } \end{align} solves \begin{equation}\elabel{Gw_PDE} \partial_{t'} G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) = \Big(D \nabla_{\gpvec{y}}^2 - \wvec\cdot\nabla_{\gpvec{y}}\Big) G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \ , \end{equation} less vividly denoted by $g(y;x;t'-t)$ in \SMref{simplified_notation}.
The bare propagator may be read off from $\AC_0$ either in its present form or after Fourier transforming \begin{equation}\elabel{def_Gw_k} \tikz[baseline=7.5pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } \corresponds \frac{\delta\mkern-6mu\mathchar'26(\omega'+\omega)L^d\delta_{\gpvec{m}+\gpvec{n},0}} {-\mathring{\imath}\omega'+D \gpvec{k}_\gpvec{m}^2+\mathring{\imath}\wvec\cdot\gpvec{k}_\gpvec{m}+r} \ , \end{equation} with discretised $\gpvec{k}_\gpvec{n}=2\pi\gpvec{n}/L$ and $\gpvec{n}\in\gpset{Z}^d$. Although the Fourier-transform does not add anything crucial to the calculations to come, we discuss it here nevertheless, because of some puzzling implications. \subsubsection{Fourier transformation} \seclabel{Fourier_transformation} To have a guaranteed stationary state, we need the present system to have a finite size of $L^d$ in the following. We therefore need to introduce a Fourier series representation for the spatial coordinates and a Fourier transform for time \begin{subequations} \begin{align} \phi(\gpvec{x},t) & = \frac{1}{L^d}\sum_{\gpvec{n}\in\gpset{Z}^d} \exp{\mathring{\imath} \gpvec{k}_\gpvec{n} \cdot \gpvec{x}} \int \dintbar{\omega} \exp{-\mathring{\imath} \omega t} \phi_\gpvec{n}(\omega) \\ \phi_\gpvec{n}(\omega) & = \int\ddint{x} \exp{-\mathring{\imath} \gpvec{k}_\gpvec{n}\cdot\gpvec{x}} \int \dint{t} \exp{\mathring{\imath} \omega t} \phi(\gpvec{x},t) \ . 
\end{align} \end{subequations} The action \Eref{action_pert_over_drift} may then be written as \begin{subequations} \elabel{actions_Fouriered} \begin{align} \elabel{action_harm_Fouriered} \AC_0 &= \frac{1}{L^d} \sum_{\gpvec{n}\in\gpset{Z}^d} \int \dintbar{\omega} \tilde{\phi}_{-\gpvec{n}}(-\omega) (-D\gpvec{k}_\gpvec{n}^2 - \mathring{\imath} \wvec\cdot\gpvec{k}_\gpvec{n} -r+ \mathring{\imath}\omega) \phi_\gpvec{n}(\omega)\\ \elabel{action_pert_pot_only_again} \AC_\text{pert} &= \frac{1}{L^{3d}} \sum_{\gpvec{n}\gpvec{m}\gpvec{\ell}} \int \dintbar{\omega} \tilde{\phi}_{\gpvec{n}}(-\omega) \gpvec{k}_\gpvec{m}\cdot\gpvec{k}_{\gpvec{\ell}}\Upsilon_{\gpvec{\ell}} \phi_\gpvec{m}(\omega) L^d \delta_{\gpvec{n}+\gpvec{m}+\gpvec{\ell},\zerovec} \ , \end{align} \end{subequations} with \begin{equation} \Upsilon(\gpvec{x}) = \frac{1}{L^d}\sum_{\gpvec{\ell}\in\gpset{Z}^d} \exp{\mathring{\imath} \gpvec{k}_\gpvec{\ell} \cdot \gpvec{x}} \Upsilon_\gpvec{\ell} \quad\text{ and }\quad \Upsilon_{\gpvec{\ell}} = \int\ddint{x} \exp{-\mathring{\imath} \gpvec{k}_\gpvec{\ell}\cdot\gpvec{x}} \Upsilon(\gpvec{x}) \ . \end{equation} The bare propagator then follows immediately, \Eref{def_Gw_k}, \begin{align} G_{\wvec}(\gpvec{k}_\gpvec{n}\to \gpvec{k}_\gpvec{m};\omega\to\omega') = & \ave[0]{\phi(\gpvec{k}_\gpvec{m},\omega')\tilde{\phi}(\gpvec{k}_\gpvec{n},\omega)} \nonumber\\ =& \frac{\delta\mkern-6mu\mathchar'26(\omega'+\omega)L^{d} \delta_{\gpvec{m}+\gpvec{n},\zerovec} }{-\mathring{\imath}\omega'+D \gpvec{k}_\gpvec{m}^2+\mathring{\imath}\wvec\cdot\gpvec{k}_\gpvec{m}+r} \corresponds \tikz[baseline=7.5pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } \ , \end{align} with $\delta_{\gpvec{m}+\gpvec{n},\zerovec}$ enforcing $\gpvec{n}=-\gpvec{m}$ and thereby momentum conservation. 
The diagrammatic expansion produces corrections of the form \begin{align} \ave{\phi(\gpvec{k}_\gpvec{m},\omega')\tilde{\phi}(\gpvec{k}_\gpvec{n},\omega)} \corresponds \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[white,fill=white] (0,-0.3) circle (3pt); \end{scope} } + \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } + \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} \begin{scope}[xshift=0.8cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } + \ldots \ , \end{align} where each "bauble" represents the effect of the external potential that serves as a source for momentum, thereby breaking translational invariance. 
Apart from the technical subtlety that the full propagator is no longer a Gaussian but a Jacobi theta function, which is of little interest in the following, allowing for a spatial Fourier transform has deeper consequences. To see this, we determine the first-order correction \begin{equation}\elabel{first_order_bauble_omega_k} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},\omega$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},\omega'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } \corresponds - \frac{\delta\mkern-6mu\mathchar'26(\omega'+\omega)} {-\mathring{\imath}\omega'+D \gpvec{k}_\gpvec{m}^2+\mathring{\imath}\wvec\cdot\gpvec{k}_\gpvec{m}+r} \gpvec{k}_\gpvec{m}\cdot\gpvec{k}_{\gpvec{m}+\gpvec{n}} \Upsilon_{\gpvec{m}+\gpvec{n}} \frac{1} {-\mathring{\imath}\omega'+D \gpvec{k}_\gpvec{n}^2-\mathring{\imath}\wvec\cdot\gpvec{k}_\gpvec{n}+r} \end{equation} for the spurious potential $\gpvec{k}_\gpvec{\ell}\Upsilon_{\gpvec{\ell}}=\mathring{\imath}\bm{\nu} L^d \delta_{\gpvec{\ell},0}$, which has the same effect as an additional drift by $\bm{\nu}$, as can be verified by direct evaluation in \Eref{action_pert_pot_only_again} and comparison to the corresponding term in \Eref{action_harm_Fouriered}. Such a potential does not converge in real space, but that has no bearing on the arguments that follow.
After inversely Fourier transforming \Eref{first_order_bauble_omega_k} back to direct time, \begin{equation}\elabel{first_order_bauble_t_k} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{k}_\gpvec{n},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{k}_\gpvec{m},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } \corresponds -\mathring{\imath} (t'-t) \theta(t'-t) \gpvec{k}_\gpvec{m}\cdot\bm{\nu} \Exp{-(t'-t) (D \gpvec{k}_\gpvec{m}^2+\mathring{\imath}\wvec\cdot\gpvec{k}_\gpvec{m}+r)} \end{equation} one can see explicitly the linear dependence on $t'-t$. This observation, of "each blob producing an order of $t'-t$", \SMref{appendixWhichDiagramsContribute}, is what simplifies the calculation of the entropy production in the field-theoretic framework so dramatically. The right-hand side of \Eref{first_order_bauble_t_k} can be recognised as the Fourier-transform in space of a gradient, \Erefs{def_Gw} and \eref{def_Gw_k}, which is immediately inverted to real space, \begin{align} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } &\corresponds -(t'-t) \bm{\nu}\cdot\nabla_\gpvec{y} G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \nonumber\\ &= \elabel{first_order_bauble_t_x_final} \bm{\nu}\cdot \frac{\gpvec{y}-\gpvec{x}-\wvec(t'-t)}{2D} G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \end{align} using \Eref{def_Gw}. 
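The equality of the two forms in \Eref{first_order_bauble_t_x_final}, the derivative form $-(t'-t)\,\bm{\nu}\cdot\nabla_\gpvec{y} G_{\wvec}$ and the prefactor form, is easily confirmed numerically. The following sketch is purely illustrative: it specialises to one dimension, uses arbitrary test values for $D$, $w$, $\nu$ and the sample point, and approximates the gradient by a central finite difference.

```python
# Numerical check (1d, illustrative): the first-order "bauble" in real space,
# -(t'-t) * nu * d/dy G_w(x->y; t'-t), equals
# nu * (y - x - w*(t'-t)) / (2D) * G_w(x->y; t'-t).
import math

D, w, nu = 0.7, 0.3, 1.1          # diffusion constant, drift, spurious-drift amplitude
x, y, tau = 0.2, 1.5, 0.4          # positions and time difference t'-t > 0

def G(x, y, tau):
    """Bare drift-diffusion propagator G_w(x->y; tau) in one dimension."""
    return math.exp(-(y - x - w * tau) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)

h = 1e-6                           # step for the central finite difference in y
lhs = -tau * nu * (G(x, y + h, tau) - G(x, y - h, tau)) / (2 * h)
rhs = nu * (y - x - w * tau) / (2 * D) * G(x, y, tau)
assert abs(lhs - rhs) < 1e-7
```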
The key difference between \Erefs{first_order_bauble_t_k} and \eref{first_order_bauble_t_x_final} is the absence of the $t'-t$ pre-factor in the latter. In a perturbation theory of the propagator, terms that are of order $t'-t$ in one representation of the degree of freedom, say $\gpvec{k}$, may no longer seem to be of that order after a Fourier transform. However, the right-hand side of \Eref{first_order_bauble_t_x_final} still vanishes as $t'-t\downarrow0$ for any $\gpvec{y}-\gpvec{x}\ne\zerovec$ due to the exponential in $G_{\wvec}$, \Eref{def_Gw}, and indeed it vanishes much faster than linearly in $t'-t>0$ for any such $\gpvec{y}-\gpvec{x}\ne\zerovec$. For $t'-t<0$ it vanishes for any $\gpvec{y}-\gpvec{x}$ due to the Heaviside $\theta$-function in $G_{\wvec}$, and for $\gpvec{y}-\gpvec{x}=\zerovec$ it vanishes linearly in $t'-t>0$ as the prefactor becomes $-\bm{\nu}\cdot\wvec(t'-t)/(2D)$. This phenomenon, that the order in $t'-t$ is changed by a transformation, is unique to continuous states and physically related to infinite rates being at play in the continuum limit, \SMref{continuum_limit}. If all rates remain finite, as is generally the case for discrete states, it cannot occur; neither does it happen when all ``states'' decouple, as is the case \emph{after} a Fourier transform here. \subsubsection{\texorpdfstring{$\bm{\mathsf{K}}$}{K} and \texorpdfstring{$\operatorname{\bm{\mathsf{Ln}}}$}{Ln} for drift diffusion in a perturbative potential} \seclabel{Kn_Ln_for_drift_diffusion_in_pert_pot} Following the arguments in the main text and in \SMref{appendixWhichDiagramsContribute}, we re-state the key ingredients for calculating the entropy production.
Firstly, the kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ can immediately be read off from the action or the Fokker-Planck operator, \Eref{FPE_drift_diffusion}, \begin{equation}\elabel{Kn_from_FP_drift_diffusion} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} = \hat{\LC}_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) = \Big( D \nabla_y^2 - \nabla_y \cdot ( \wvec - \Upsilon'(\gpvec{y})) \Big) \delta(\gpvec{y}-\gpvec{x}) \ , \end{equation} with $\nabla_y\cdot\Upsilon'\delta(\gpvec{y}-\gpvec{x})$ intended to result in two terms by the product rule and with \Eref{actual_op_Wissel_identity} available to re-arrange the right-hand side. Even though the kernel is extracted easily, we will reproduce it below via the propagator to illustrate our scheme. Secondly, the logarithm $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ is constructed from the propagator to first order. To this end, we state the first order correction \Eref{first_order_bauble_omega_k} in real space and direct time for arbitrary potentials using \Eref{action_pert_pot_only} \begin{equation}\elabel{first_order_bauble_t_x_fullPot} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } \corresponds -\int \ddint{z} \int \dint{s} G_{\wvec}(\gpvec{x}\to\gpvec{z},s-t) \Upsilon'(\gpvec{z})\cdot\nabla_{\gpvec{z}} G_{\wvec}(\gpvec{z}\to\gpvec{y},t'-s) \ , \end{equation} which was previously stated in \Eref{first_order_bauble_t_k} only for the specific choice of the (spurious) potential that has the effect of a uniform drift. 
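Before simplifying \Eref{first_order_bauble_t_x_fullPot} analytically, it is reassuring to evaluate the double integral numerically for the special case of a constant force, $\Upsilon'(\gpvec{z})=c$, where the Taylor expansion about $(\gpvec{x}+\gpvec{y})/2$ terminates and the closed form $-c\,(y-x-w(t'-t))/(2D)\,G_{\wvec}$ holds exactly. The one-dimensional sketch below is illustrative only; all parameter values are arbitrary test choices.

```python
# Numerical check (1d, arbitrary test values): for a constant force
# Upsilon'(z) = c, the double integral
#   -int dz int ds  G_w(x->z; s-t) c d_z G_w(z->y; t'-s)
# collapses to  -c (y - x - w(t'-t)) / (2D) * G_w(x->y; t'-t).
import math

D, w, c = 0.5, 0.8, 0.6
x, y, t, tp = -0.3, 1.0, 0.0, 0.4        # end points and times t < t'

def G(a, b, tau):
    """Bare drift-diffusion propagator G_w(a->b; tau) in one dimension."""
    return math.exp(-(b - a - w * tau) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)

def dG_start(a, b, tau):
    """Derivative of G(a->b; tau) with respect to the starting point a."""
    return (b - a - w * tau) / (2 * D * tau) * G(a, b, tau)

n_s, dz, zmax = 20, 2e-3, 12.0
ds = (tp - t) / n_s
zs = [-zmax + i * dz for i in range(int(round(2 * zmax / dz)) + 1)]
total = 0.0
for k in range(n_s):                      # midpoint rule in the internal time s
    s = t + (k + 0.5) * ds
    total += -sum(G(x, z, s - t) * c * dG_start(z, y, tp - s) for z in zs) * dz * ds

target = -c * (y - x - w * (tp - t)) / (2 * D) * G(x, y, tp - t)
assert abs(total - target) < 1e-6
```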
To simplify \Eref{first_order_bauble_t_x_fullPot} by direct calculation, we draw on four ``tricks'': Firstly, \begin{equation}\elabel{trick1} \nabla_{\gpvec{z}} G_{\wvec}(\gpvec{z}\to\gpvec{y},t'-s) = -\nabla_{\gpvec{y}} G_{\wvec}(\gpvec{z}\to\gpvec{y},t'-s) \end{equation} so that the $\nabla_{\gpvec{z}}$ can be taken outside the integral in \Eref{first_order_bauble_t_x_fullPot}. Secondly, we Taylor-expand $\Upsilon'(\gpvec{z})$ about $(\gpvec{x}+\gpvec{y})/2$, \begin{equation}\elabel{trick2} \nabla_\gpvec{z} \Upsilon(\gpvec{z}) = \Upsilon'(\gpvec{z}) = \Upsilon' \left(\frac{\gpvec{y}+\gpvec{x}}{2}\right) + \left( \frac{\gpvec{z}-\gpvec{x}}{2} - \frac{\gpvec{y}-\gpvec{z}}{2} \right)\cdot\nabla \Upsilon' \left(\frac{\gpvec{y}+\gpvec{x}}{2}\right) + \ldots \end{equation} so that parity in $\gpvec{y}-\gpvec{x}$ is readily determined, in contrast to, say, expanding about $\gpvec{x}$ or $\gpvec{y}$. Thirdly, \Eref{trick2} allows us to use \Eref{def_Gw} in the form \begin{equation} \elabel{trick3} (\gpvec{z}-\gpvec{x}) G_{\wvec}(\gpvec{x}\to\gpvec{z}; s-t) = (s-t) \big( 2D \nabla_{\gpvec{x}} + \wvec \big) G_{\wvec}(\gpvec{x}\to\gpvec{z}; s-t) \ . \end{equation} Finally, the time-uniformity of the bare Markov process of drift-diffusion implies \begin{equation}\elabel{trick4} \int \ddint{z} G_{\wvec}(\gpvec{x}\to\gpvec{z},s-t) G_{\wvec}(\gpvec{z}\to\gpvec{y},t'-s) = \theta(s-t) \theta(t'-s) G_{\wvec}(\gpvec{x}\to\gpvec{y},t'-t) \ , \end{equation} so that the spatial integral in \Eref{first_order_bauble_t_x_fullPot} can be carried out.
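Both \Eref{trick3} and the Chapman-Kolmogorov property \Eref{trick4} are identities of the Gaussian propagator \Eref{def_Gw} and lend themselves to a quick numerical verification. The one-dimensional sketch below uses a central finite difference for the $\nabla_{\gpvec{x}}$ in \Eref{trick3} and a plain Riemann sum for the convolution in \Eref{trick4}; all parameter values are arbitrary test choices.

```python
# Numerical check (1d) of the two propagator identities used above:
# trick 3: (z - x) G_w(x->z; s-t) = (s-t) (2D d/dx + w) G_w(x->z; s-t)
# trick 4 (Chapman-Kolmogorov): int dz G_w(x->z; s-t) G_w(z->y; t'-s) = G_w(x->y; t'-t).
import math

D, w = 0.5, 0.8
x, z, y = -0.3, 0.4, 1.0
t, s, tp = 0.0, 0.3, 0.7           # t < s < t'

def G(a, b, tau):
    return math.exp(-(b - a - w * tau) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)

# trick 3, with the x-derivative as a central finite difference
h = 1e-6
dGdx = (G(x + h, z, s - t) - G(x - h, z, s - t)) / (2 * h)
assert abs((z - x) * G(x, z, s - t) - (s - t) * (2 * D * dGdx + w * G(x, z, s - t))) < 1e-8

# trick 4: integrate over z on a grid wide enough to capture both Gaussians
dz = 1e-3
conv = sum(G(x, x + k * dz, s - t) * G(x + k * dz, y, tp - s) for k in range(-20000, 20001)) * dz
assert abs(conv - G(x, y, tp - t)) < 1e-6
```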
It turns out that only the first order of the expansion \Eref{trick2} is needed, \begin{align} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } &\corresponds (t'-t) \nabla_\gpvec{y}\cdot\Big( \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \Big) + \ldots \nonumber\\ &= \elabel{first_order_bauble_t_x_final2_fullPot} - G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \left( \frac{\gpvec{y}-\gpvec{x}-\wvec(t'-t)}{2D} \right)\cdot\Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) + \ldots \end{align} as higher order terms contribute neither to the kernel nor to the logarithm. In particular, the Laplacian of the external potential is preceded by a factor $t'-t$ and thus vanishes from the logarithm as $t'\downarrow t$. As the logarithm is odd in $\gpvec{y}-\gpvec{x}$ by construction and the highest spatial derivative in the kernel is a second derivative, the logarithm needs to be known only to linear order in $\gpvec{y}-\gpvec{x}$. Similarly, the kernel is a limit of a first derivative in time and thus needs to be known only to linear order in time, related to orders in space via \Eref{trick3}. The propagator may thus be written as \begin{equation}\elabel{st_dd_prop} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}= G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) + (t'-t) \nabla_\gpvec{y}\cdot\Big( \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t) \Big) + \ldots \ .
\end{equation} Applying \Eref{def_Op} and equally \Eref{transition_from_action} to \eref{st_dd_prop} reproduces the kernel \Eref{Kn_from_FP_drift_diffusion}, \begin{equation}\elabel{Kn_from_prop} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} = \Big(D \nabla_{\gpvec{y}}^2 - \wvec\cdot\nabla_{\gpvec{y}}\Big) \delta(\gpvec{y}-\gpvec{x}) + \nabla_\gpvec{y}\cdot\left\{ \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) \delta(\gpvec{y}-\gpvec{x}) \right\} \end{equation} using \Eref{Gw_PDE} and $\lim_{t'\downarrow t} G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t)=\delta(\gpvec{y}-\gpvec{x})$. As $\Upsilon'((\gpvec{x}+\gpvec{y})/2)\delta(\gpvec{y}-\gpvec{x})=\Upsilon'(\gpvec{y})\delta(\gpvec{y}-\gpvec{x})=\Upsilon'(\gpvec{x})\delta(\gpvec{y}-\gpvec{x})$ the gradient of the potential can be taken outside the divergence, \Eref{actual_op_Wissel_identity}, but under an integral, this manipulation makes no difference. The logarithm \Eref{def_Ln} is correspondingly \begin{subequations} \elabel{Ln_from_drift_diffusion} \begin{align} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} &\corresponds \lim_{t'\to t} \ln\left( \frac{ \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } + \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } }{ \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{y},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{x},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } + \tikz[baseline=0pt]{ 
\begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{y},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{x},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[black,potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt); \end{scope} } } \right)\\ \elabel{Ln_from_drift_diffusion_ln_pre_lim} &\corresponds \lim_{t'\to t}\left\{ \ln\left( \frac{G_{\wvec}(\gpvec{x}\to \gpvec{y}; t'-t)}{G_{\wvec}(\gpvec{y}\to \gpvec{x}; t'-t)} \right) + \ln\left( \frac{1- \left( \frac{\gpvec{y}-\gpvec{x}-\wvec(t'-t)}{2D} \right)\cdot\Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) + \ldots } {1- \left( \frac{\gpvec{x}-\gpvec{y}-\wvec(t'-t)}{2D} \right)\cdot\Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) +\ldots } \right) \right\} \\ &= \frac{(\gpvec{y}-\gpvec{x})\cdot\wvec}{D} - \frac{\gpvec{y}-\gpvec{x}}{D} \cdot\Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) + \order{(\gpvec{y}-\gpvec{x})^3}\ , \elabel{Ln_drift} \end{align} \end{subequations} using \Eref{first_order_bauble_t_x_final2_fullPot} to arrive at \Eref{Ln_from_drift_diffusion_ln_pre_lim}. This expression is identical to \Eref{Ln_drift_diffusion_Wissel} based on Wissel's short-time propagator if one allows for corrections of order $(\gpvec{y}-\gpvec{x})^3$, for which it becomes important that $\Upsilon'$ is expanded about $(\gpvec{x}+\gpvec{y})/2$. Together with \Eref{Kn_from_prop} this reproduces \Eref{local_entropy_GWissel} \begin{equation}\elabel{local_entropy_Gw} \dot{\sigma}(\gpvec{x})= \int \ddint{y} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} = -\Upsilon''(\gpvec{x}) + \frac{( \wvec - \Upsilon'(\gpvec{x}))^2}{D} \ . \end{equation} This concludes the present derivation.
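The expansion \Eref{Ln_drift} can be probed numerically: constructing the forward and backward first-order propagators from \Eref{first_order_bauble_t_x_final2_fullPot} at small $t'-t$ and small $\gpvec{y}-\gpvec{x}$ and taking the log-ratio reproduces $(\gpvec{y}-\gpvec{x})\cdot(\wvec-\Upsilon')/D$ up to higher orders. The one-dimensional sketch below is illustrative; the potential and all parameter values are arbitrary test choices.

```python
# Numerical check (1d): the log-term Ln_{y,x} computed from the first-order
# propagator equals (y-x) (w - Upsilon'((x+y)/2)) / D up to O((y-x)^3).
import math

D, w = 0.6, 0.4
Up  = lambda x: x**3 - 0.5 * x          # Upsilon'(x) for an arbitrary test potential
x   = 0.3
eps = 1e-3                               # y - x, kept small
y   = x + eps
tau = 1e-4                               # t' - t, kept small to mimic the limit

def G(a, b):
    return math.exp(-(b - a - w * tau) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)

u = Up((x + y) / 2)
fwd = G(x, y) * (1 - (y - x - w * tau) / (2 * D) * u)   # first-order propagator x -> y
bwd = G(y, x) * (1 - (x - y - w * tau) / (2 * D) * u)   # first-order propagator y -> x
Ln = math.log(fwd / bwd)
assert abs(Ln - eps * (w - u) / D) < 1e-6               # residual is O(eps^3) and O(tau*eps)
```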
\subsection{Entropy production from a perturbation theory about diffusion}\seclabel{FT_about_diffusion} We repeat the above derivation treating both potential and drift perturbatively. Expanding about pure diffusion, the action \Eref{action_drift_diffusion_appendix} is split into two terms, $\AC=\AC_0+\AC_\text{pert}$ with \begin{subequations} \elabel{action_pure_diffusion} \begin{equation}\elabel{action0_pure_diffusion} \AC_0 = \int \ddint{x} \int \dint{t} \tilde{\phi}(\gpvec{x},t) (D \nabla_\gpvec{x}^2 - \partial_t) \phi(\gpvec{x},t) \end{equation} and \begin{equation}\elabel{actionPert_pure_diffusion} \AC_\text{pert} = \int \ddint{x} \int \dint{t} \big( (\wvec - \Upsilon'(\gpvec{x})) \phi(\gpvec{x},t) \big) \cdot \nabla_\gpvec{x} \tilde{\phi}(\gpvec{x},t) \ , \end{equation} \end{subequations} where the drift $\wvec$ now features as a shift of the force exerted by the potential $\Upsilon'$. The bare propagator from \Eref{action0_pure_diffusion} is $G_{\wvec}$ of \Eref{def_Gw} with $\wvec=0$, \begin{equation} G_{D}(\gpvec{x}\to \gpvec{y}; t'-t) = \frac{\theta(t'-t)}{(4\pi D (t'-t))^{d/2}} \Exp{-\frac{(\gpvec{y}-\gpvec{x})^2}{4 D (t'-t)}}\\ \corresponds \tikz[baseline=7.5pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x},t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y},t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \end{scope} } \end{equation} and the two corrections from \Eref{actionPert_pure_diffusion} are \Eref{first_order_bauble_t_x_final2_fullPot} with $\Upsilon'$ replaced by $\Upsilon'-\wvec$, \begin{subequations} \elabel{phiphitilde_in_diagrams_all} \begin{align} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}&\corresponds \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} + \tblobbedDashedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \blobbedDashedPotPropagator{\gpvec{x},t}{\gpvec{y},t'} + \ldots\\ \elabel{first_order_bauble_t_x_pure_diff1} & \corresponds G_{D}(\gpvec{x}\to \gpvec{y}; t'-t) + (t'-t) \nabla_\gpvec{y}\cdot\left( 
\left[ \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) -\wvec \right] G_{D}(\gpvec{x}\to \gpvec{y}; t'-t) \right) + \ldots \\ & = \elabel{first_order_bauble_t_x_pure_diff2} G_{D}(\gpvec{x}\to \gpvec{y}; t'-t) \left\{ 1 - \left( \frac{\gpvec{y}-\gpvec{x}}{2D} \right)\cdot\left[ \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) - \wvec \right] \right\} + \ldots \end{align} \end{subequations} As the total action \Eref{action_pure_diffusion} is identical to \Eref{action_pert_over_drift}, the kernel obtained from the action is of course the same as \Eref{Kn_from_prop}, as confirmed by reading it off from the propagator in the form \Eref{first_order_bauble_t_x_pure_diff1}, \begin{equation}\elabel{Kn_from_prop_pure_diff} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} = D \nabla_{\gpvec{y}}^2 \delta(\gpvec{y}-\gpvec{x}) + \nabla_\gpvec{y}\cdot\left\{ \left[ \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) -\wvec\right] \delta(\gpvec{y}-\gpvec{x}) \right\}\ . \end{equation} Similarly, the logarithmic term \Eref{Ln_from_drift_diffusion} is confirmed, \begin{equation} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} = - \frac{\gpvec{y}-\gpvec{x}}{D} \cdot\left[ \Upsilon'\left(\frac{\gpvec{x}+\gpvec{y}}{2}\right) - \wvec\right] + \order{(\gpvec{y}-\gpvec{x})^3} \end{equation} obviously reproducing \Eref{local_entropy_Gw}. This completes the present supplemental section. We have shown that the short-time propagator \Eref{def_GWissel} by Wissel \cite{Wissel:1979} used in \Erefs{def_Op}, \eref{def_Ln} and \eref{def_local_entropy} reproduces the entropy production \Eref{local_entropy_GWissel} in the literature \cite{CocconiETAL:2020}. We have further shown that a field-theoretic perturbation theory about drift-diffusion, \SMref{FT_about_drift-diffusion}, or about pure diffusion, \SMref{FT_about_diffusion}, equally reproduces these results.
This is not a triviality, given the effect of a spatial Fourier transform in a continuous-state process, \latin{cf.}\@\xspace \Erefs{first_order_bauble_t_k} and \eref{first_order_bauble_t_x_final}. \endinput \section{Entropy production of multiple particles}\seclabel{MultipleParticles} \newcommand{\partial_{t'}}{\partial_{t'}} \paragraph*{Abstract} In the following we derive expressions for the entropy production of a system of $N$ particles. This particle number is fixed, \latin{i.e.}\@\xspace particles do not appear or disappear spontaneously. We treat distinguishable and indistinguishable particles separately. In the case of distinguishable particles, the set of indexed particle coordinates describes a state fully. In the case of indistinguishable particles, all permutations of the indexed particle coordinates correspond to the same state. In the case of sparse occupation, in the sense of never finding more than one particle in the same position, this ambiguity can be efficiently discounted by dividing phase space by $N!$, known as the Gibbs factor. Sparse occupation is commonly found for continuous states, such as particle coordinates in space, and we will assume it in the following. In the following sections we build up our framework step by step, from distinguishable, non-interacting particles, to particles with interactions, to indistinguishable particles, using the derivations for distinguishable particles as a template. First for $N$ distinguishable and then also for indistinguishable particles, we derive the general principles in \SMref{N_distinguishable_particles} and \ref{sec:N_indistinguishable_particles} respectively, before considering more concretely independent particles, \SMref{N_independent_distinguishable_particles} and \ref{sec:N_independent_indistinguishable_particles}, and then generalising to pair-interacting particles, \SMref{N_interacting_distinguishable_particles} and \ref{sec:N_interacting_indistinguishable_particles}.
We apply the present framework to calculate the entropy production \Eref{trawler_final} of two pair-interacting, distinguishable particles in \SMref{generalised_trawlers}, reproducing in a generalised form the ``trawler'' system of \SMref{HarmonicTrawlers}. We further apply this framework to calculate the entropy production \Eref{entropyProduction_interacting_indistinguishable_example} of $N$ indistinguishable, pair-interacting particles in an external potential in \SMref{N_interacting_indistinguishable_particle_extPot}, reproduced without external potential in \Eref{entropyProduction_for_pairPot}. \subsection{\texorpdfstring{$N$}{N} distinguishable particles} \seclabel{N_distinguishable_particles} For \emph{distinguishable} particles, the starting point of the derivation is the entropy production of a single particle \Eref{def_entropyProduction}, with the particle coordinates $\gpvec{x}$ and $\gpvec{y}$ re-interpreted as those of multiple particles, so that, say, components $(i-1)d+1$ to $id$ of $\gpvec{x}$ and $\gpvec{y}$ are the components of $\gpvec{x}_i$ and $\gpvec{y}_i$ of particle $i$ respectively. The one-point probability or density $\rho(\gpvec{x})$ is then rewritten as the $N$-point probability or density $\rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)$ of $N$ \emph{distinguishable} particles. The constraint of being distinguishable comes about because in \Eref{def_entropyProduction} each component of $\gpvec{x}$ and $\gpvec{y}$ refers to distinguishable spatial directions. This ``shortcut'' of deriving the expression for the entropy production of $N$ particles can therefore not be taken in the case of indistinguishable particles, which we treat separately in \SMref{N_indistinguishable_particles}.
In the field theory, distinguishability is implemented by having different species of particles, each represented by a pair of fields $\phi_i$ and $\phi^{\dagger}_i$, whereas indistinguishable particles belong to the same species, and are then represented by a single pair of fields $\phi$ and $\phi^{\dagger}$. The propagator of a single particle $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ that used to make up the kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ and the log-term $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$, \Erefs{def_Op} and \eref{def_Ln}, is correspondingly to be replaced by the joint propagator of all $N$ particle coordinates, $\bigl\langle\phi_1(\gpvec{y}_1,t')\linebreak[1]\phi_2(\gpvec{y}_2,t')\linebreak[1]\ldots\linebreak[1]\phi_N(\gpvec{y}_N,t')\linebreak[1]\phi^{\dagger}_1(\gpvec{x}_1,t)\linebreak[1]\phi^{\dagger}_2(\gpvec{x}_2,t)\linebreak[1]\ldots\linebreak[1]\phi^{\dagger}_N(\gpvec{x}_N,t)\bigr\rangle$, which contains the sum of all diagrams with $N$ incoming and $N$ outgoing legs.
The entropy production \Eref{def_entropyProduction} can then be written as a functional of the $N$-point density $\rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)$ as \begin{equation}\elabel{entropy_production_distinguishableN} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \SumInt_{\substack{\gpvec{x}_1,\ldots,\gpvec{x}_N \\ \gpvec{y}_1,\ldots,\gpvec{y}_N}} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \left\{ \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} + \ln\left( \frac{\rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)}{\rho^{(N)}(\gpvec{y}_1,\ldots,\gpvec{y}_N)} \right) \right\}\ , \end{equation} where we allow for a sum over discrete states or an integral over continuous states, with \begin{equation}\elabel{Op_distinguishableN} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \partial_{t'} \ave{\phi_1(\gpvec{y}_1,t')\ldots\phi_N(\gpvec{y}_N,t')\tilde{\phi}_1(\gpvec{x}_1,t)\ldots\tilde{\phi}_N(\gpvec{x}_N,t)} \end{equation} and \begin{equation}\elabel{Ln_distinguishableN} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \ln\left(\frac{\ave{\phi_1(\gpvec{y}_1,t')\ldots\phi_N(\gpvec{y}_N,t')\tilde{\phi}_1(\gpvec{x}_1,t)\ldots\tilde{\phi}_N(\gpvec{x}_N,t)}}{\ave{\phi_1(\gpvec{x}_1,t')\ldots\phi_N(\gpvec{x}_N,t')\tilde{\phi}_1(\gpvec{y}_1,t)\ldots\tilde{\phi}_N(\gpvec{y}_N,t)}}\right) \ . \end{equation} The density $\rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)$ disappears from the curly bracket in \Eref{entropy_production_distinguishableN} at stationarity. 
Indeed, in the following, \emph{we focus on the entropy production at stationarity}, neglecting the term \begin{equation}\elabel{entropyProductionNeglectedAtStationarity} \Delta \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \SumInt_{\substack{\gpvec{x}_1,\ldots,\gpvec{x}_N \\ \gpvec{y}_1,\ldots,\gpvec{y}_N}} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \left\{ \ln\left( \frac{\rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)}{\rho^{(N)}(\gpvec{y}_1,\ldots,\gpvec{y}_N)} \right) \right\}\ . \end{equation} In \Eref{entropy_production_distinguishableN}, the entropy production at stationarity is written as a functional of $\rho^{(N)}$, which may be ``supplied externally'', to emphasise that the entropy production can be thought of as a spatial average of the \emph{local entropy production} \begin{equation} \elabel{def_entropyProductionDensity} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \SumInt_{\gpvec{y}_1,\ldots,\gpvec{y}_N} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \ , \end{equation} which is a function of $\gpvec{x}_1,\ldots\gpvec{x}_N$ only, so that \begin{equation}\elabel{entropyProduction_as_spave} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \SumInt_{\gpvec{x}_1,\ldots,\gpvec{x}_N} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \end{equation} is a spatial mean. The need to know the full $\rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ is generally a major obstacle. If $N$ is large then little is generally known analytically about it in an interacting system. Even numerical or experimental estimates of the $\rho^{(N)}$ are of limited use, because often the statistics is poor. 
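To make the structure of \Eref{entropy_production_distinguishableN} concrete in its simplest setting, the following sketch evaluates it for a single particle, effectively $N=1$, hopping on a ring of discrete states, which the sum-integral notation explicitly allows for. Here the kernel is the Markov generator, the log-term is the log-ratio of transition rates, and the stationary density is uniform; the closed form $(p-q)\ln(p/q)$ is the standard entropy production of this biased cycle. Rates and system size are arbitrary test values.

```python
# Minimal discrete-state illustration of the stationary entropy production:
# a biased random walk on a ring with hop rates p (right) and q (left).
# The kernel K is the Markov generator, Ln_{y,x} = ln(w_{x->y}/w_{y->x}),
# and the stationary density rho is uniform.
import math

p, q, M = 2.0, 0.5, 6                    # hop rates and number of ring sites
rho = [1.0 / M] * M                      # uniform stationary density

def rate(i, j):
    """Transition rate from site i to site j on the ring."""
    if (i + 1) % M == j:
        return p
    if (i - 1) % M == j:
        return q
    return 0.0

S_dot = 0.0
for i in range(M):
    for j in range(M):
        if i == j or rate(i, j) == 0.0:
            continue                      # the diagonal of K pairs with a vanishing bracket
        bracket = math.log(rate(i, j) / rate(j, i)) + math.log(rho[i] / rho[j])
        S_dot += rho[i] * rate(i, j) * bracket

assert abs(S_dot - (p - q) * math.log(p / q)) < 1e-12
```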
Below, this obstacle is overcome as it turns out that a theory with $n$-point interaction needs at most the $(2n-1)$-point density and, under the assumption of short-rangedness, only the $n$-point density. In the field theory, the \emph{exact, stationary} $N$-point density $\rho^{(N)}$ is \begin{align} \elabel{density_is_propagator} \rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) = \lim_{t_{01},\ldots,t_{0N}\to-\infty} \ave{\phi_1(\gpvec{x}_1,t)\ldots\phi_N(\gpvec{x}_N,t)\tilde{\phi}_1(\gpvec{x}_{01},t_{01})\ldots\tilde{\phi}_N(\gpvec{x}_{0N},t_{0N})} \ , \end{align} independent of the initialisation $\gpvec{x}_{01},\ldots,\gpvec{x}_{0N}$ provided the system is ergodic. The limit of each $t_{0i}\to-\infty$ may be replaced by $t\to\infty$. In principle, the propagator $\langle\phi_1\ldots\tilde{\phi}_N\rangle$ entering into the entropy production \Eref{entropy_production_distinguishableN} via $\bm{\mathsf{K}}^{(N)}$ and $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$, \Erefs{Op_distinguishableN} and \eref{Ln_distinguishableN}, contains a plethora of terms.
Without perturbative terms, however, it is simply the product of $N$ single-particle propagators, \begin{align}\elabel{propagator_factorising} \bigl\langle\phi_1(\gpvec{y}_1,t')\linebreak[1]\phi_2(\gpvec{y}_2,t')\linebreak[1]\ldots\linebreak[1]\phi_N(\gpvec{y}_N,t')\linebreak[1]\phi^{\dagger}_1(\gpvec{x}_1,t)\linebreak[1]\phi^{\dagger}_2(\gpvec{x}_2,t)\linebreak[1]\ldots\linebreak[1]\phi^{\dagger}_N(\gpvec{x}_N,t)\bigr\rangle &= \prod^N_i \ave[0]{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} \nonumber\\& \corresponds \tikz[baseline=-2.5pt,scale=1.0]{ \begin{scope}[yshift=0.45cm] \node at (0.5,0) [right] {$\gpvec{x}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$1$}; \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$2$}; \end{scope} \node at (0,-0.4) {$\vdots$}; \node at (1,-0.4) [right] {$\vdots$}; \node at (-1,-0.4) [left] {$\vdots$}; \begin{scope}[yshift=-0.9cm] \node at (0.5,0) [right] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,below] {$N$}; \end{scope} } \ , \end{align} each propagator being distinguishable by the particle species, as indicated by the additional label on the line. To simplify the diagrammatics, we will omit many of the labels in the following. Throughout this work, we are not considering multiple particles of the same one of many species. We are also not considering any form of branching, so that all diagrams have the same number of incoming and outgoing legs, as discussed in \SMref{branching_vertices}.
Next, allowing for perturbative terms, such as a single "blob", $\tikz[baseline=-2.5pt]{\draw[tAactivity] (0.3,0) -- (-0.3,0); \tgenVertex{0,0};}$, for example, when particle drift or an external potential is implemented perturbatively, produces \begin{multline}\elabel{joint_propagator_plus_first_order} \bigl\langle\phi_1(\gpvec{y}_1,t')\linebreak[1]\phi_2(\gpvec{y}_2,t')\linebreak[1]\ldots\linebreak[1]\phi_N(\gpvec{y}_N,t')\linebreak[1]\phi^{\dagger}_1(\gpvec{x}_1,t)\linebreak[1]\phi^{\dagger}_2(\gpvec{x}_2,t)\linebreak[1]\ldots\linebreak[1]\phi^{\dagger}_N(\gpvec{x}_N,t)\bigr\rangle\\ \corresponds \tikz[baseline=-2.5pt,scale=1.0]{ \begin{scope}[yshift=0.45cm] \node at (0.5,0) [right] {$\gpvec{x}_1$}; \node at (-0.5,0) [left] {$\gpvec{y}_1$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$1$}; \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2$}; \node at (-0.5,0) [left] {$\gpvec{y}_2$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$2$}; \end{scope} \node at (0,-0.4) {$\vdots$}; \node at (1,-0.4) {$\vdots$}; \node at (-1,-0.4) {$\vdots$}; \begin{scope}[yshift=-0.9cm] \node at (0.5,0) [right] {$\gpvec{x}_N$}; \node at (-0.5,0) [left] {$\gpvec{y}_N$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,below] {$N$}; \end{scope} } + \tikz[baseline=-2.5pt,scale=1.0]{ \begin{scope}[yshift=0.45cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{x}_1$}; \node at (-0.5,0) [left] {$\gpvec{y}_1$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.0mm] {$1$}; \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2$}; \node at (-0.5,0) [left] {$\gpvec{y}_2$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$2$}; \end{scope} \node at (0,-0.4) {$\vdots$}; \node at (1,-0.4) {$\vdots$}; \node at (-1,-0.4) {$\vdots$}; \begin{scope}[yshift=-0.9cm] \node at (0.5,0) [right] {$\gpvec{x}_N$}; \node at (-0.5,0) [left] {$\gpvec{y}_N$}; 
\draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,below] {$N$}; \end{scope} } + \tikz[baseline=-2.5pt,scale=1.0]{ \begin{scope}[yshift=0.45cm] \node at (0.5,0) [right] {$\gpvec{x}_1$}; \node at (-0.5,0) [left] {$\gpvec{y}_1$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$1$}; \end{scope} \begin{scope}[yshift=0.0cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{x}_2$}; \node at (-0.5,0) [left] {$\gpvec{y}_2$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above] {$2$}; \end{scope} \node at (0,-0.4) {$\vdots$}; \node at (1,-0.4) {$\vdots$}; \node at (-1,-0.4) {$\vdots$}; \begin{scope}[yshift=-0.9cm] \node at (0.5,0) [right] {$\gpvec{x}_N$}; \node at (-0.5,0) [left] {$\gpvec{y}_N$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,below] {$N$}; \end{scope} } + \ldots + \tikz[baseline=-2.5pt,scale=1.0]{ \begin{scope}[yshift=0.45cm] \node at (0.5,0) [right] {$\gpvec{x}_1$}; \node at (-0.5,0) [left] {$\gpvec{y}_1$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$1$}; \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2$}; \node at (-0.5,0) [left] {$\gpvec{y}_2$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$2$}; \end{scope} \node at (0,-0.4) {$\vdots$}; \node at (1,-0.4) {$\vdots$}; \node at (-1,-0.4) {$\vdots$}; \begin{scope}[yshift=-0.9cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{x}_N$}; \node at (-0.5,0) [left] {$\gpvec{y}_N$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,below] {$N$}; \end{scope} } + \text{h.o.t.} \end{multline} On the right there is a single product of bare propagators without a blob, followed by $N$ terms consisting of a product of $N-1$ bare propagators and a single propagator with a blob. Higher-order terms with multiple blobs do not contribute, \SMref{appendixWhichDiagramsContribute}.
\subsubsection*{Simplified notation and example} \seclabel{simplified_notation} To facilitate the derivations in the following sections, we introduce a simplified notation and an example at this stage. Firstly, a plain, bare propagator of particle species $i$ shall be written as \begin{equation}\elabel{def_g_i} \tikz[baseline=-2.5pt,scale=1.0]{ \node at (0.5,0) [right] {$\gpvec{x}_i,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_i,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=-0.5mm] {$i$}; } \corresponds \ave[0]{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} = g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) = g_i \end{equation} with \Eref{propagator_limit_is_delta} \begin{equation}\elabel{g_i_limit} \lim_{t'\downarrow t} \ave[0]{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} = \lim_{t'\downarrow t} g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) = \delta(\gpvec{y}_i-\gpvec{x}_i) = \delta_i \end{equation} where we have also introduced the shorthand $\delta_i$, whose gradient and higher order derivatives we will denote by dashes. We further denote the time derivative of $g_i$ in the limit of $t'\downarrow t$ by \begin{equation}\elabel{def_gdot} \lim_{t'\downarrow t} \partial_{t'} \ave[0]{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)}= \lim_{t'\downarrow t} \partial_{t'} g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t)= \dot{g}_i(\gpvec{y}_i;\gpvec{x}_i)= \dot{g}_i \ . \end{equation} The latter derives its properties from the Fokker-Planck operator, $\FPop_{\gpvec{y}}$, in \Eref{FPeqn_main} \begin{equation} \dot{g}_i=\hat{\LC}_{\gpvec{y}_i}\delta(\gpvec{y}_i-\gpvec{x}_i) \ . 
\end{equation} The perturbative, generic transmutation-like terms, such as those with a single blob in \Eref{propagator_expansion_app}, will be denoted by \begin{equation}\elabel{def_f_i} \tikz[baseline=5pt,scale=1.0]{ \begin{scope}[yshift=0.25cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{x}_i,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_i,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0) node[midway,above,yshift=0.3mm] {$i$}; \end{scope} } \corresponds f_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) = f_i \ . \end{equation} Such a term may contain a complicated dependence on $t'-t$, but is generally evaluated to first order in $t'-t$. We denote its time derivative in the limit of $t'\downarrow t$ as \begin{equation}\elabel{def_fdot} \lim_{t'\downarrow t} \partial_{t'} f_i(\gpvec{y}_i;\gpvec{x}_i;t'-t)= \dot{f}_i(\gpvec{y}_i;\gpvec{x}_i)= \dot{f}_i \ . \end{equation} The notation of $g_i$ and $f_i$ allows us to succinctly express the full propagator as \begin{equation} \ave{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} = g_i + f_i + \order{(t'-t)^2} \ , \end{equation} where $f_i$ generally vanishes linearly in $t'-t$, \SMref{appendixWhichDiagramsContribute} and \ref{sec:drift_diffusion_on_ring}, so that \begin{equation}\elabel{f_vanishes} \lim_{t'\downarrow t} f_i = 0 \ . \end{equation} The time derivatives of the full propagators that we will need can be succinctly expressed as \begin{equation} \lim_{t'\downarrow t}\elabel{single_propagator_derivative} \partial_{t'} \ave{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} = \dot{g}_i + \dot{f}_i \ . \end{equation} Beyond the narrow definitions above, expanding the propagator can be rather dangerous. For example, it would be wrong to say that $\ave{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)}=\delta(\gpvec{y}_i-\gpvec{x}_i)+(t'-t)(\dot{g}_i + \dot{f}_i) + \order{(t'-t)^2}$, because the $\delta$-function is truly absent from $\ave{\phi_i\tilde{\phi}_i}$ at $t'-t>0$. 
Also, $\dot{g}_i$ and $\dot{f}_i$ are kernels, generally containing derivatives of $\delta$-functions, and therefore unsuitable, for example, for use inside the logarithm. There, we will need limits of the form $\lim_{t'\downarrow t} f_i/g_i$, such as \Eref{f_over_g_example}. It is further useful to introduce a succinct notation for $g_i$ and $f_i$ with reversed arguments \begin{subequations}\elabel{def_barred} \begin{align} \overline{g}_i&= g_i(\gpvec{x}_i;\gpvec{y}_i;t'-t)\\ \overline{f}_i&= f_i(\gpvec{x}_i;\gpvec{y}_i;t'-t) \ . \end{align} \end{subequations} A useful example of $g_i$ and $\dot{g}_i$ is drift-diffusion in $d$ dimensions, \Eref{def_Gw}, \begin{equation}\elabel{example_g_i} g_i=\frac{\theta(t'-t)}{(4\pi D_i (t'-t))^{d/2}} \Exp{-\frac{(\gpvec{y}_i-\gpvec{x}_i-\wvec_i(t'-t))^2}{4 D_i (t'-t)}} \end{equation} with drift velocity $\wvec_i$ and diffusion constant $D_i$ of particle species $i$, so that \begin{equation}\elabel{gi_over_Bgi} \lim_{t'\downarrow t} \frac{g_i}{\overline{g}_i} = \Exp{\frac{\wvec_i\cdot(\gpvec{y}_i-\gpvec{x}_i)}{D_i}} \ . \end{equation} A propagator generally has the convolution property \Eref{trick4}, \begin{equation}\elabel{propoagator_convolution} g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) = \int \ddint{z_i} g_i(\gpvec{y}_i;\gpvec{z}_i;t'-s) g_i(\gpvec{z}_i;\gpvec{x}_i;s-t) \end{equation} for any $s\in(t,t')$. For $s\notin(t,t')$ the integral vanishes, as each $g_i$ enforces causality via a Heaviside-$\theta$ function, \Eref{example_g_i}. The bare propagator in \Eref{example_g_i} solves the FPE~\eref{FPeqn_main}\@\xspace for individual particle species $i$ with operator \begin{equation} \hat{\LC}^{(i)}_{\gpvec{y}_i}= D_i\nabla_{\gpvec{y}_i}^2 - \wvec_i\cdot\nabla_{\gpvec{y}_i}\ , \end{equation} and therefore \begin{equation}\elabel{gdot_example} \dot{g}_i = \left(D_i\nabla_{\gpvec{y}_i}^2 - \wvec_i\cdot\nabla_{\gpvec{y}_i}\right) \delta(\gpvec{y}_i-\gpvec{x}_i) = D_i\delta''(\gpvec{y}_i-\gpvec{x}_i) - \wvec_i\cdot\delta'(\gpvec{y}_i-\gpvec{x}_i) \ .
\end{equation} The perturbative term $f_i$ may be another source of drift, either constant or due to an external potential $\Upsilon_i(\gpvec{x})$. It is constructed via the convolution \Eref{first_order_bauble_t_x_fullPot} \begin{align} \tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0.4) [right,yshift=0pt] {$\gpvec{x}_i,t$}; \node at (-0.5,0.4) [left,yshift=0pt] {$\gpvec{y}_i,t'$}; \draw[tAactivity] (0.5,0.4) -- (-0.5,0.4); \draw[potStyle] (0,-0.2) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,0.4} \draw[black,fill=white] (0,-0.2) circle (3pt) node[right] {$\Upsilon_i'$}; \end{scope} } &\corresponds f_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) \nonumber\\ &= (t'-t) \nabla_{\gpvec{y}_i}\cdot\Big( \Upsilon_i'\left(\frac{\gpvec{x}_i+\gpvec{y}_i}{2}\right) g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) \Big) + \text{h.o.t.} \ , \elabel{f_example} \end{align} where $\Upsilon_i'$ denotes the gradient of $\Upsilon_i$ with respect to its argument. Of the two terms resulting from the gradient acting on the product on the right, only the differentiation of $g_i$ results in a term that eventually enters the entropy production, as discussed after \Eref{first_order_bauble_t_x_final2_fullPot}, \SMref{Kn_Ln_for_drift_diffusion_in_pert_pot}. Using \Eref{def_g_i} explicitly, one finds in particular the ratio $f_i/g_i$, which will be needed in the logarithm, \begin{equation}\elabel{f_over_g_example} \lim_{t'\downarrow t} \frac{f_i}{g_i} = -\frac{\gpvec{y}_i-\gpvec{x}_i}{2 D_i} \cdot \Upsilon'_i\left(\frac{\gpvec{y}_i+\gpvec{x}_i}{2}\right) \ , \end{equation} so that $\delta_i \lim_{t'\downarrow t} f_i/g_i=0$. In general, we will make the weaker assumption \begin{equation}\elabel{f_over_g_delta_limit} \lim_{t'\downarrow t} \delta_i \frac{f_i}{g_i} =0 \ , \end{equation} which is most easily verified by noting that the $\delta_i$ in front of $f_i/g_i$ can greatly simplify this ratio.
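Both \Eref{gi_over_Bgi} and \Eref{f_over_g_example} can be cross-checked symbolically in one dimension. The following sketch does so with \texttt{sympy}, modelling the potential gradient $\Upsilon_i'$ by an arbitrary quadratic polynomial $P$ (an assumption made purely for this check; the result is independent of that choice):

```python
import sympy as sp

# 1d drift-diffusion propagator g_i, Eq. (example_g_i), with t' - t = tau > 0
y, x, w = sp.symbols('y x w', real=True)
tau, D = sp.symbols('tau D', positive=True)
g = sp.exp(-(y - x - w*tau)**2 / (4*D*tau)) / sp.sqrt(4*sp.pi*D*tau)
gbar = g.subs({y: x, x: y}, simultaneous=True)   # reversed arguments

# Eq. (gi_over_Bgi): the ratio is in fact exact for every tau > 0
assert sp.simplify(g/gbar - sp.exp(w*(y - x)/D)) == 0

# perturbative term f_i, Eq. (f_example), with Upsilon' modelled by a
# quadratic polynomial P (assumption for this check only)
p0, p1, p2 = sp.symbols('p0 p1 p2', real=True)
P = lambda z: p2*z**2 + p1*z + p0
f = tau * sp.diff(P((x + y)/2) * g, y)

# Eq. (f_over_g_example): lim_{tau->0+} f/g = -(y-x)/(2 D) * P((y+x)/2)
ratio = sp.simplify(f/g)
assert sp.simplify(sp.limit(ratio, tau, 0, '+')
                   + (y - x)/(2*D)*P((x + y)/2)) == 0
```

Note that the drift $w$ drops out of the $f_i/g_i$ limit, as stated in \Eref{f_over_g_example}.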
As for the kernel, differentiating \Eref{f_example} with respect to $t'$, \Eref{def_fdot}, gives \begin{equation}\elabel{fdot_example} \dot{f}_i = \lim_{t'\downarrow t} \nabla_{\gpvec{y}_i}\cdot\Big( \Upsilon_i'\left(\frac{\gpvec{x}_i+\gpvec{y}_i}{2}\right) g_i(\gpvec{y}_i;\gpvec{x}_i;t'-t) \Big)= \nabla_{\gpvec{y}_i}\cdot\left( \Upsilon_i'\left(\frac{\gpvec{x}_i+\gpvec{y}_i}{2}\right) \delta_i \right) = \Upsilon_i'\left(\frac{\gpvec{x}_i+\gpvec{y}_i}{2}\right) \cdot \delta_i' + \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} \Upsilon''_i\left(\frac{\gpvec{x}_i+\gpvec{y}_i}{2}\right) \delta_i \ , \end{equation} similar to \Eref{Kn_from_prop} and the discussion thereafter. Carrying on with the simplified notation, we also need to introduce the notation for pair interactions. The structure of a pair potential term follows that of the external potential \Eref{f_example}, to leading order in $t'-t$, \begin{align} &\tikz[baseline=0pt]{ \begin{scope}[yshift=0.0cm] \draw[potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {$\gpvec{x}_j,t$}; \node at (-0.5,-0.5) [left,yshift=0pt] {$\gpvec{y}_j,t'$}; \draw[tAactivity] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {$\gpvec{x}_i,t$}; \node at (-0.5,0.5) [left,yshift=0pt] {$\gpvec{y}_i,t'$}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \corresponds h_{ij}(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j; t'-t) \nonumber\\ &= - \int \dint{s} \ddint{z_i}\ddint{z_j} g_i(\gpvec{z}_i; \gpvec{x}_i; s-t) \left(\nabla_{\gpvec{z}_i} U_{ij}(\gpvec{z}_i-\gpvec{z}_j)\right) \cdot \left(\nabla_{\gpvec{z}_i} g_i(\gpvec{y}_i; \gpvec{z}_i; t'-s)\right) g_j(\gpvec{z}_j; \gpvec{x}_j; s-t) g_j(\gpvec{y}_j; \gpvec{z}_j; t'-s) \nonumber\\ &= (t'-t) \left(\nabla_{\gpvec{x}_i} U_{ij}(\gpvec{x}_i-\gpvec{x}_j)\right)\cdot\left(\nabla_{\gpvec{y}_i}g_i(\gpvec{y}_i; \gpvec{x}_i; t'-t)\right) g_j(\gpvec{y}_j; \gpvec{x}_j; t'-t) + \text{h.o.t.} \elabel{h_example} \end{align} where we
have used ``tricks'' similar to \Erefs{trick1} to \eref{trick4}. In brief, we may write $h_{ij}$ as \Eref{pair_pot_effect2} \begin{equation}\elabel{h_example_compact} h_{ij} = h_{ij}(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j; t'-t)= (t'-t)U'_{ij}(\gpvec{x}_i-\gpvec{x}_j)\cdot g_i'(\gpvec{y}_i; \gpvec{x}_i; t'-t) g_j(\gpvec{y}_j; \gpvec{x}_j; t'-t) + \text{h.o.t.} \ . \end{equation} Just like $f_i$, the interaction $h_{ij}$ is only ever evaluated to first order in $t'-t$, and we may therefore occasionally drop higher order terms without further comment. We denote the limit of the time-derivative of $h_{ij}$ by \begin{equation}\elabel{def_hdot} \lim_{t'\downarrow t} \partial_{t'} h_{ij}(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j; t'-t) = \dot{h}_{ij}(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j) =\dot{h}_{ij} \ , \end{equation} and assume that it is $\delta$-like in $\gpvec{y}_j-\gpvec{x}_j$, \begin{equation} \dot{h}_{ij}\propto\delta_j \ . \elabel{hderi_delta} \end{equation} For the example in \Eref{h_example_compact}, this means \begin{equation}\elabel{hdot_example} \dot{h}_{ij}=U'_{ij}(\gpvec{x}_i-\gpvec{x}_j)\cdot \delta_i' \delta_j\ . \end{equation} The interaction term evaluated with inverted arguments is denoted by \begin{align} \overline{h}_{ij} &= h_{ij}(\gpvec{x}_i,\gpvec{x}_j;\gpvec{y}_i,\gpvec{y}_j; t'-t)\ , \end{align} corresponding to the notation introduced above. The properties of $h_{ij}$ are very similar to those of $f_i$, as it affects the motion of particle $i$, while merely sampling the position of particle $j$.
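In the same spirit as \Eref{f_over_g_example}, the ratio $h_{ij}/(g_i g_j)$, which enters the logarithm in the interacting case below, can be checked symbolically in one dimension. A sketch with \texttt{sympy}, using the Gaussian propagator \eref{example_g_i} and a generic polynomial pair force $U'_{ij}$ as a stand-in (an assumption for this check only):

```python
import sympy as sp

yi, xi, xj, wi = sp.symbols('y_i x_i x_j w_i', real=True)
tau, Di = sp.symbols('tau D_i', positive=True)

# 1d Gaussian propagator of particle i, Eq. (example_g_i)
gi = sp.exp(-(yi - xi - wi*tau)**2 / (4*Di*tau)) / sp.sqrt(4*sp.pi*Di*tau)

# pair force U'_{ij}, modelled by a generic quadratic polynomial (assumption)
c0, c1, c2 = sp.symbols('c0 c1 c2', real=True)
Uprime = lambda r: c2*r**2 + c1*r + c0

# compact interaction term, Eq. (h_example_compact): h_ij = tau U' g_i' g_j;
# dividing by g_i g_j removes g_j entirely
h_over_gg = sp.simplify(tau * Uprime(xi - xj) * sp.diff(gi, yi) / gi)

# the limit parallels Eq. (f_over_g_example), with the drift w_i dropping out
expected = -(yi - xi)/(2*Di) * Uprime(xi - xj)
assert sp.simplify(sp.limit(h_over_gg, tau, 0, '+') - expected) == 0
```

This confirms that, to leading order, the pair interaction acts on particle $i$ like an external potential gradient evaluated at the position of particle $j$.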
\subsubsection{\texorpdfstring{$N$}{N} independent, distinguishable particles} \seclabel{N_independent_distinguishable_particles} When particles are independent, the propagator factorises, \begin{align}\nonumber \bigl\langle\phi_1(\gpvec{y}_1,t')\linebreak[1]\ldots\linebreak[1]\phi_N(\gpvec{y}_N,t')\linebreak[1]\phi^{\dagger}_1(\gpvec{x}_1,t)\linebreak[1]\ldots\linebreak[1]\phi^{\dagger}_N(\gpvec{x}_N,t)\bigr\rangle = & \prod^N_i \ave{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)}\\ = & \prod^N_i g_i + \sum_i^N f_i \prod^N_{\substack{j\ne i}} g_j + \order{(t'-t)^2} \ , \elabel{Npropagators_distinguishable} \end{align} so that the kernel becomes, using \Erefs{g_i_limit}, \eref{def_gdot}, \eref{def_fdot} and \eref{f_vanishes}, or simply \Erefs{propagator_factorising} and \eref{single_propagator_derivative}, \begin{align}\elabel{Op_distinguishable_independent} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = & \sum_{i=1}^N \lim_{t'\downarrow t} \left( \partial_{t'} \ave{\phi_i(\gpvec{y}_i,t')\tilde{\phi}_i(\gpvec{x}_i,t)} \right) \prod_{j\ne i}^N \ave{\phi_j(\gpvec{y}_j,t')\tilde{\phi}_j(\gpvec{x}_j,t)} \nonumber \\ = & \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j \end{align} and the logarithm of the ratio of the propagators, \begin{equation}\elabel{Ln_distinguishable_independent} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \ln\left( \frac {\prod^N_i g_i + \sum_i^N f_i \prod^N_{\substack{j\ne i}} g_j + \order{(t'-t)^2}} {\prod^N_i \overline{g}_i + \sum_i^N \overline{f}_i \prod^N_{\substack{j\ne i}} \overline{g}_j + \order{(t'-t)^2}} \right) \ , \end{equation} where we have made use of the barred notation \Eref{def_barred}. There is no need to retain terms of order $(t'-t)^2$, because if the lower order terms vanish, so does the kernel and the entire logarithm does not contribute. 
For a continuous variable, the logarithm is efficiently written as \begin{equation}\elabel{Ln_distinguishable_independent_simplified} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \sum_i^N \ln\left( \frac {g_i} {\overline{g}_i} \right) + \ln\left( \frac {1 + \sum_i^N f_i/g_i } {1 + \sum_i^N \overline{f}_i/\overline{g}_i} \right) \ . \end{equation} If states are discrete, a slightly different approach is needed, and $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ is best kept in the form \Eref{Ln_distinguishable_independent}, as neither $g_i/\overline{g}_i$ nor $f_i/g_i$ need be well-defined in the limit $t'\downarrow t$, while $\delta_i$ itself evaluates to either $0$ or $1$ inside the logarithm. Henceforth, we will focus entirely on continuous states $\gpvec{x}_i$. As the kernel $\bm{\mathsf{K}}^{(N)}$ is expected to be at most second order in spatial derivatives, \SMref{Kn_Ln_for_drift_diffusion_in_pert_pot}, the logarithm $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$, which is odd in $\gpvec{y}-\gpvec{x}$, can be expanded in small $\gpvec{y}-\gpvec{x}$, \begin{equation}\elabel{Ln_distinguishable_independent_simplified2} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \sum_i^N \ln\left( \frac {g_i} {\overline{g}_i} \right) + \sum_i^N \left[ \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right]\ , \end{equation} so that the local entropy production \Eref{def_entropyProductionDensity} at stationarity is \begin{equation}\elabel{entropyProductionDensity_independent_distinguishable_initial} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \int \ddint{y_{1,\ldots,N}} \left\{ \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j \right\} \lim_{t'\downarrow t} \left\{ \sum_k^N \left[ \ln\left(\frac{g_k}{\overline{g}_k}\right) + \frac{f_k}{g_k} - \frac{\overline{f}_k}{\overline{g}_k} \right] \right\}
\ . \end{equation} The two sums in this expression produce $N^2$ terms in total. Under the integral, the product $\prod^N_{\substack{j\ne i}} \delta_j$ forces $g_k/\overline{g}_k$ to converge to $1$ and $f_k/g_k$ to vanish for all $k\ne i$, \Eref{f_over_g_delta_limit}. Of the second sum, only the terms with $k=i$ remain, so that \begin{align} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = & \int \ddint{y_{1,\ldots,N}} \sum_i^N (\dot{g}_i + \dot{f}_i) \left\{ \prod^N_{\substack{j\ne i}} \delta_j \lim_{t'\downarrow t} \left[ \ln\left(\frac{g_i}{\overline{g}_i}\right) + \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right]\right\} \nonumber \\ = & \sum_i^N \dot{\sigma}^{(N)}_i(\gpvec{x}_i) \elabel{entropyProductionDensity_independent_distinguishable_final_sum} \end{align} with \begin{align} \dot{\sigma}^{(N)}_i(\gpvec{x}_i) = \int \ddint{y_i} (\dot{g}_i + \dot{f}_i) \lim_{t'\downarrow t} \left[ \ln\left(\frac{g_i}{\overline{g}_i}\right) + \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right] \ , \elabel{entropyProductionDensity_independent_distinguishable_final} \end{align} which is in fact independent of any other particles, so that $\dot{\sigma}^{(N)}_i(\gpvec{x}_i)=\dot{\sigma}^{(1)}_i(\gpvec{x}_i)$. The local entropy production is therefore the sum of the local entropy productions of the individual particles.
Using this expression in \Eref{entropyProduction_as_spave}, \begin{equation} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \int \ddint{x_{1,\ldots,N}} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \sum_i^N \dot{\sigma}^{(1)}_i(\gpvec{x}_i) \end{equation} allows all integrals except the one over $\gpvec{x}_i$ to be carried out as a marginalisation, \begin{equation}\elabel{marginalisation_densityN_i_distinguishable} \rho_i^{(N)}(\gpvec{x}_i) = \int \ddint{x_{1\ldots i-1,i+1,\ldots,N}} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \ , \end{equation} so that $\rho_i^{(N)}(\gpvec{x}_i)$ is the density of particle species $i$ at $\gpvec{x}_i$ and \begin{align} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \sum_i^N \int \ddint{x_i} \rho_i^{(N)}(\gpvec{x}_i) \dot{\sigma}^{(1)}_i(\gpvec{x}_i) \ , \elabel{entropyProduction_independent_distinguishable_final} \end{align} confirming the overall entropy production as the sum of single particle entropy productions, \latin{i.e.}\@\xspace confirming extensivity. To illustrate \Eref{entropyProductionDensity_independent_distinguishable_final} with an example, \begin{align} \dot{\sigma}^{(1)}_i(\gpvec{x}_i) = & \int \ddint{y_i} \left(D_i\delta''_i - \wvec_i\cdot\delta'_i + \Upsilon'_i\left(\frac{\gpvec{y}_i+\gpvec{x}_i}{2}\right) \cdot \delta'_i + \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} \Upsilon''_i\left(\frac{\gpvec{y}_i+\gpvec{x}_i}{2}\right) \delta_i\right)\nonumber\\ & \times \left[ \frac{\wvec_i\cdot(\gpvec{y}_i-\gpvec{x}_i)}{D_i}- \frac{\gpvec{y}_i-\gpvec{x}_i}{D_i}\cdot \Upsilon'_i\left(\frac{\gpvec{y}_i+\gpvec{x}_i}{2}\right) \right] = -\nabla^2 \Upsilon_i(\gpvec{x}_i) +\frac{\left(\nabla \Upsilon_i(\gpvec{x}_i) - \wvec_i\right)^2}{D_i} \elabel{entropyProductionDensity_distinguishable_non-interacting_example} \end{align} using \Erefs{example_g_i}, \eref{gi_over_Bgi}, \eref{gdot_example}, \eref{f_over_g_example} and \eref{fdot_example}. This result is identical to \Eref{local_entropy_Gw}.
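The $\gpvec{y}_i$-integral in \Eref{entropyProductionDensity_distinguishable_non-interacting_example} can be verified symbolically in one dimension by implementing the $\delta$-kernels as derivative evaluations, $\int\!\dint{y}\,\delta'(y-x)F(y)=-F'(x)$ and $\int\!\dint{y}\,\delta''(y-x)F(y)=F''(x)$. The cubic test potential below is an assumption made for this check only (\texttt{sympy}):

```python
import sympy as sp

x, y, w = sp.symbols('x y w', real=True)
D = sp.symbols('D', positive=True)
a, b, c = sp.symbols('a b c', real=True)

Ups = lambda z: a*z**3 + b*z**2 + c*z    # cubic test potential (assumption)
m = (x + y)/2                            # midpoint argument

Up = sp.diff(Ups(m), y)*2                # Upsilon'(m), chain rule: dm/dy = 1/2
Upp = sp.diff(Ups(m), y, 2)*4            # Upsilon''(m)

# logarithm factor of the displayed example, in 1d
L = (w*(y - x) - (y - x)*Up)/D

d0 = lambda F: F.subs(y, x)                     # int delta(y-x) F(y) dy
d1 = lambda F: -sp.diff(F, y).subs(y, x)        # int delta'(y-x) F(y) dy
d2 = lambda F: sp.diff(F, y, 2).subs(y, x)      # int delta''(y-x) F(y) dy

sigma = D*d2(L) - w*d1(L) + d1(Up*L) + sp.Rational(1, 2)*d0(Upp*L)

expected = -sp.diff(Ups(x), x, 2) + (sp.diff(Ups(x), x) - w)**2/D
assert sp.simplify(sigma - expected) == 0
```

The term $\propto\Upsilon''_i\delta_i$ contributes nothing here, since the logarithm factor vanishes at $\gpvec{y}_i=\gpvec{x}_i$.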
Without drift, it is easy to show that the spatial average of \Eref{entropyProductionDensity_distinguishable_non-interacting_example} vanishes for a Boltzmann density, $\rho_i^{(N)}(\gpvec{x}_i) \propto \exp{-\Upsilon_i(\gpvec{x}_i)/D_i}$, so that there is no entropy production. \Erefs{entropyProductionDensity_independent_distinguishable_final} and \eref{entropyProduction_independent_distinguishable_final} are the central results of this section. Adding interaction, as done in the next section, results in more terms, but a remarkably similar structure. \subsubsection{\texorpdfstring{$N$}{N} pairwise interacting, distinguishable particles} \seclabel{N_interacting_distinguishable_particles} In the presence of interaction, the $n$-point propagator acquires additional terms to order $t'-t$. Diagrammatically, such terms to be added to $\ave{\phi_1(\gpvec{y}_1,t')\ldots\tilde{\phi}_N(\gpvec{x}_N,t)}$, beyond those shown in \Eref{joint_propagator_plus_first_order}, are of the form \Eref{h_example}, \begin{align}\elabel{Npropagators_distinguishable_interaction_diagrams} \ave{\phi_1(\gpvec{y}_1,t')\ldots\tilde{\phi}_N(\gpvec{x}_N,t)} \corresponds \ldots + \tikz[baseline=-20pt]{ \begin{scope}[yshift=0.0cm] \draw[potStyle] (0,-0.25) -- (0,0.25); \draw[black,thin] (-0.1,0.1) -- (0.1,0.1); \draw[red,thin] (-0.2,0.15) -- (-0.2,0.35); \tgenVertex{0,-0.25} \tgenVertex{0,0.25} \node at (0.5,-0.3) [right,yshift=0pt] {$\gpvec{x}_2,t$}; \node at (-0.5,-0.3) [left,yshift=0pt] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.25) -- (-0.5,-0.3); \node at (0.5,0.3) [right,yshift=0pt] {$\gpvec{x}_1,t$}; \node at (-0.5,0.3) [left,yshift=0pt] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0.3) -- (0,0.25) -- (-0.5,0.3); \end{scope} \begin{scope}[yshift=-0.8cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_3,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_3,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \node at (0,-0.45) {$\vdots$}; \node at (1,-0.45) [right] {$\vdots$}; \node at
(-1,-0.45) [left] {$\vdots$}; \end{scope} \begin{scope}[yshift=-1.75cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } + \tikz[baseline=-20pt]{ \begin{scope}[yshift=0.0cm] \draw[potStyle] (0,-0.25) -- (0,0.25); \draw[black,thin] (-0.1,0.1) -- (0.1,0.1); \draw[red,thin] (-0.2,0.15) -- (-0.2,0.35); \tgenVertex{0,-0.25} \tgenVertex{0,0.25} \node at (0.5,-0.3) [right,yshift=0pt] {$\gpvec{x}_3,t$}; \node at (-0.5,-0.3) [left,yshift=0pt] {$\gpvec{y}_3,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.25) -- (-0.5,-0.3); \node at (0.5,0.3) [right,yshift=0pt] {$\gpvec{x}_1,t$}; \node at (-0.5,0.3) [left,yshift=0pt] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0.3) -- (0,0.25) -- (-0.5,0.3); \end{scope} \begin{scope}[yshift=-0.8cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_2,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \node at (0,-0.45) {$\vdots$}; \node at (1,-0.45) [right] {$\vdots$}; \node at (-1,-0.45) [left] {$\vdots$}; \end{scope} \begin{scope}[yshift=-1.75cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } + \tikz[baseline=-20pt]{ \begin{scope}[yshift=-0.5cm] \draw[potStyle] (0,-0.25) -- (0,0.25); \draw[black,thin] (-0.1,0.1) -- (0.1,0.1); \draw[red,thin] (-0.2,0.15) -- (-0.2,0.35); \tgenVertex{0,-0.25} \tgenVertex{0,0.25} \node at (0.5,-0.3) [right,yshift=0pt] {$\gpvec{x}_3,t$}; \node at (-0.5,-0.3) [left,yshift=0pt] {$\gpvec{y}_3,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.25) -- (-0.5,-0.3); \node at (0.5,0.3) [right,yshift=0pt] {$\gpvec{x}_2,t$}; \node at (-0.5,0.3) [left,yshift=0pt] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,0.3) -- (0,0.25) -- (-0.5,0.3); \node at (0,-0.75) {$\vdots$}; \node at (1,-0.75) [right] {$\vdots$}; \node at (-1,-0.75) [left] {$\vdots$}; \end{scope} 
\begin{scope}[yshift=0.3cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_1,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=-1.75cm] \node at (0.5,0) [right,yshift=0pt] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left,yshift=0pt] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } + \ldots \end{align} each particle potentially interacting with any other particle. The underlying vertex, \eref{h_example}, is not symmetric, as the force is exerted on a particle attached via a dashed, red leg by the particle attached via undashed legs. The force is mediated by the dash-dotted line and need not be symmetric. There are therefore $N(N-1)$ such contributions. With each of the interaction terms \Eref{Npropagators_distinguishable_interaction_diagrams} being of order $t'-t$, \Eref{h_example_compact}, they add to the propagator \Eref{Npropagators_distinguishable} in the form \begin{equation}\elabel{Npropagators_distinguishable_with_interaction} \ave{\phi_1(\gpvec{y}_1,t')\ldots\tilde{\phi}_N(\gpvec{x}_N,t)} = \prod^N_i g_i + \sum_i^N f_i \prod^N_{\substack{j\ne i}} g_j + \sum_i^N \sum_{j\ne i}^N h_{ij} \prod_{k\notin\{i,j\}} g_{k} + \order{(t'-t)^2} \ , \end{equation} affecting both the kernel $\bm{\mathsf{K}}^{(N)}$, \Eref{Op_distinguishableN}, and the logarithm $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$, \Eref{Ln_distinguishableN}, of the entropy production \Eref{entropy_production_distinguishableN}. The effect of the interaction term $h_{ij}$ on the kernel is similar to that of $f_i$, \Eref{def_f_i}, as the time derivative of $h_{ij}$ in the limit $t'\downarrow t$ renders it a kernel on $\gpvec{y}_i,\gpvec{x}_i$. 
The coordinates $\gpvec{y}_j$ and $\gpvec{x}_j$ enter into the amplitude of the force, but otherwise, under the limit, enter merely into a $\delta$-function, so that, starting from \Eref{Op_distinguishable_independent} \begin{equation}\elabel{Op_distinguishable_interacting} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j + \sum_i^N \sum_{j\ne i}^N \dot{h}_{ij} \prod_{k\notin\{i,j\}}^N \delta_{k} \ . \end{equation} Because of the $\delta_j$-like nature of $\dot{h}_{ij}$, each term multiplied by it has $\gpvec{y}_j=\gpvec{x}_j$ enforced for all $j$ except $j=i$. The logarithm $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ also acquires $N(N-1)$ new terms, conveniently written in the form \begin{equation}\elabel{Ln_distinguishable_interacting_simplified} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \sum_i^N \ln\left( \frac {g_i} {\overline{g}_i} \right) + \ln\left( \frac {1 + \sum_i^N f_i/g_i + \sum_i^N\sum_{j\ne i}^N h_{ij}/(g_i g_j) } {1 + \sum_i^N \overline{f}_i/\overline{g}_i+ \sum_i^N\sum_{j\ne i}^N \overline{h}_{ij}/(\overline{g}_i \overline{g}_j) } \right) \ . \end{equation} following the steps from \Eref{Ln_distinguishable_independent} to \eref{Ln_distinguishable_independent_simplified}. Again, we expand the terms in the rightmost logarithm, so that \begin{equation}\elabel{Ln_distinguishable_interacting_simplified2} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \sum_i^N \ln\left( \frac {g_i} {\overline{g}_i} \right) + \sum_i^N \left[ \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right] + \sum_i^N \sum_{j\ne i}^N \left[ \frac{h_{ij}}{g_i g_j} - \frac{\overline{h}_{ij}}{\overline{g}_i\overline{g}_j} \right]\ , \end{equation} producing an expression that is more efficiently analysed. 
Using \Eref{Op_distinguishable_interacting} for $\bm{\mathsf{K}}^{(N)}$ and \Eref{Ln_distinguishable_interacting_simplified2} for $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ in \Eref{def_entropyProductionDensity}, we have \begin{align} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) =& \int \ddint{y_{1,\ldots,N}} \left\{ \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j + \sum_i^N \sum_{j\ne i}^N \dot{h}_{ij} \!\!\prod_{k\notin\{i,j\}}\!\!\delta_{k} \right\}\nonumber\\ &\times \lim_{t'\downarrow t} \left\{ \sum_\ell^N \ln\left( \frac {g_\ell} {\overline{g}_\ell} \right) + \sum_\ell^N \left[ \frac{f_\ell}{g_\ell} - \frac{\overline{f}_\ell}{\overline{g}_\ell} \right] + \sum_\ell^N \sum_{m\ne \ell}^N \left[ \frac{h_{\ell m}}{g_\ell g_m} - \frac{\overline{h}_{\ell m}}{\overline{g}_\ell\overline{g}_m} \right] \right\} \ , \elabel{entropyProductionDensity_interacting_distinguishable_initial} \end{align} which contains all the terms of \Eref{entropyProductionDensity_independent_distinguishable_final_sum}, in addition to any terms involving $h_{ij}$, in particular \begin{align}\nonumber \dot{\sigma}^{(N)}_{\text{term 1}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = & \int \ddint{y_{1,\ldots,N}} \Bigg( \sum_i^N \sum_{j\ne i}^N \dot{h}_{ij} \prod_{k\notin\{i,j\}} \delta_{k} \Bigg)\\ &\times \lim_{t'\downarrow t} \Bigg\{ \sum_\ell^N \ln\left( \frac {g_\ell} {\overline{g}_\ell} \right) + \sum_\ell^N \left[ \frac{f_\ell}{g_\ell} - \frac{\overline{f}_\ell}{\overline{g}_\ell} \right] + \sum_\ell^N \sum_{m\ne \ell}^N \left[ \frac{h_{\ell m}}{g_\ell g_m} - \frac{\overline{h}_{\ell m}}{\overline{g}_\ell\overline{g}_m} \right] \Bigg\} \elabel{entropyProductionDensity_interacting_distinguishable_term1} \end{align} and \begin{equation}\elabel{entropyProductionDensity_interacting_distinguishable_term2} \dot{\sigma}^{(N)}_{\text{term 2}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \int \ddint{y_{1,\ldots,N}} \Bigg( \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j 
\Bigg) \lim_{t'\downarrow t} \left\{ \sum_\ell^N \sum_{m\ne \ell}^N \left[ \frac{h_{\ell m}}{g_\ell g_m} - \frac{\overline{h}_{\ell m}}{\overline{g}_\ell\overline{g}_m} \right] \right\}\ , \end{equation} so that with the simplifications from \Erefs{entropyProductionDensity_independent_distinguishable_final_sum} and \eref{entropyProductionDensity_independent_distinguishable_final} \begin{equation}\elabel{entropyProduction_interacting_distinguishable_sum} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \dot{\sigma}^{(N)}_{\text{term 1}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) + \dot{\sigma}^{(N)}_{\text{term 2}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) + \sum_i^N \dot{\sigma}^{(1)}_i(\gpvec{x}_i) \ . \end{equation} Analysing \Eref{entropyProductionDensity_interacting_distinguishable_term1} first, the term $\dot{h}_{ij}\prod_k\delta_k$ enforces $\gpvec{y}_\ell=\gpvec{x}_\ell$ for all $\ell$ except $\ell=i$, because $\dot{h}_{ij}$ is proportional to $\delta_j$. As a result, $\ln(g_\ell/\overline{g}_\ell)$ vanishes for all $\ell\ne i$, as do $f_\ell/g_\ell$ and $\overline{f}_\ell/\overline{g}_\ell$, \Eref{f_over_g_delta_limit}. For the same reason, the terms $h_{\ell m}/(g_\ell g_m)$ and $\overline{h}_{\ell m}/(\overline{g}_\ell \overline{g}_m)$ vanish for all $\ell\ne i$. The only terms in the curly brackets of \Eref{entropyProductionDensity_interacting_distinguishable_term1} that do not vanish by the $\dot{h}_{ij}\prod_k\delta_k$ pre-factor are therefore those with $\ell=i$.
The first set of terms, \Eref{entropyProductionDensity_interacting_distinguishable_term1}, thus simplifies to \begin{align} \nonumber \dot{\sigma}^{(N)}_{\text{term 1}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) &= \int \ddint{y_{1,\ldots,N}} \sum_i^N \sum_{j\ne i}^N \dot{h}_{ij} \prod_{k\notin\{i,j\}} \delta_{k} \lim_{t'\downarrow t} \Bigg\{ \ln\left( \frac {g_i} {\overline{g}_i} \right) + \left[ \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right] + \sum_{m\ne i} \left[ \frac{h_{i m}}{g_i g_m} - \frac{\overline{h}_{i m}}{\overline{g}_i\overline{g}_m} \right] \Bigg\}\\ &= \sum_i^N \sum_{j\ne i}^N \int \ddint{y_{i,j}} \dot{h}_{ij} \left\{ \ln\left( \frac {g_i} {\overline{g}_i} \right) + \left[ \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right] + \left[ \frac{h_{i j}}{g_i g_j} - \frac{\overline{h}_{i j}}{\overline{g}_i\overline{g}_j} \right] \right\} \nonumber \\& \elabel{entropyProductionDensity_interacting_distinguishable_term1_simplified2} \qquad + \sum_i^N \sum_{j\ne i}^N \sum_{m\notin \{i,j\}} \int \ddint{y_{i,j,m}} \dot{h}_{ij}\delta_m \left[ \frac{h_{i m}}{g_i g_m} - \frac{\overline{h}_{i m}}{\overline{g}_i\overline{g}_m} \right] \ , \end{align} where the last term contributes only when particles are sufficiently densely packed on the scale of the potential range. This is because a term of the form \[ \dot{h}_{ij} \left[ \frac{h_{i m}}{g_i g_m} - \frac{\overline{h}_{i m}}{\overline{g}_i\overline{g}_m} \right] \] needs $\gpvec{x}_j$ to be sufficiently close to $\gpvec{x}_i$ such that $\dot{h}_{ij}$ contributes, and $\gpvec{x}_m$ to be sufficiently close to $\gpvec{x}_i$ such that $h_{i m}$ and $\overline{h}_{i m}$ contribute. In other words, this term contributes only if three particles can interact simultaneously via pairwise interactions \cite{SuzukiETAL:2015,ChatterjeeGoldenfeld:2019,ZampetakiETAL:2021}. The second set of terms, \Eref{entropyProductionDensity_interacting_distinguishable_term2}, can be simplified similarly.
Since $\gpvec{y}_\ell=\gpvec{x}_\ell$ for all $\ell\ne i$, all terms $h_{\ell m}/(g_\ell g_m)$ and $\overline{h}_{\ell m}/(\overline{g}_\ell \overline{g}_m)$ with $\ell\ne i$ vanish, \begin{align} \dot{\sigma}^{(N)}_{\text{term 2}}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = & \int \ddint{y_{1,\ldots,N}} \sum_i^N (\dot{g}_i + \dot{f}_i) \prod^N_{\substack{j\ne i}} \delta_j \lim_{t'\downarrow t} \left\{ \sum_{m\ne i}^N \left[ \frac{h_{i m}}{g_i g_m} - \frac{\overline{h}_{i m}}{\overline{g}_i\overline{g}_m} \right] \right\} \nonumber\\ = & \sum_i^N \sum_{m\ne i}^N \int \ddint{y_{i,m}} (\dot{g}_i + \dot{f}_i)\delta_m \lim_{t'\downarrow t} \left[ \frac{h_{i m}}{g_i g_m} - \frac{\overline{h}_{i m}}{\overline{g}_i\overline{g}_m} \right] \ . \elabel{entropyProductionDensity_interacting_distinguishable_term2_simplified} \end{align} The contribution of this term is smaller the more quickly $h_{im}$ drops off as particles $i$ and $m$ move apart. This is because $(\dot{g}_i + \dot{f}_i)\delta_m$ does not provide extra weight for $i$ and $m$ being close to each other, which is the condition for $h_{im}$ and $\overline{h}_{im}$ in the logarithmic term $h_{im}/(g_ig_m)-\overline{h}_{im}/(\overline{g}_i\overline{g}_m)$ to contribute. The local entropy production of interacting, distinguishable particles thus consists of three types of terms: Firstly, $\dot{\sigma}^{(1)}_i(\gpvec{x}_i)$, \Eref{entropyProductionDensity_independent_distinguishable_final}, collects all contributions involving $g_i$ and $f_i$ only, which stem from the free motion of particle $i$ and the effect of any external potential on it.
Secondly, $\dot{\sigma}^{(N)}_{ij}(\gpvec{x}_i,\gpvec{x}_j)=\dot{\sigma}^{(2)}_{ij}(\gpvec{x}_i,\gpvec{x}_j)$, which contains those terms of $\dot{\sigma}^{(N)}_{\text{term 1}}$ and $\dot{\sigma}^{(N)}_{\text{term 2}}$ that depend on the coordinates $\gpvec{x}_i$ and $\gpvec{x}_j$ of only \emph{two distinct} particles $i$ and $j$, \Eref{entropyProductionDensity_interacting_distinguishable_term1_simplified2} and \Eref{entropyProductionDensity_interacting_distinguishable_term2_simplified}, \begin{align}\elabel{entropyProductionDensity_interacting_distinguishable_ij} \dot{\sigma}^{(2)}_{ij}(\gpvec{x}_i,\gpvec{x}_j)= \int \ddint{y_{i,j}} \left( \dot{h}_{ij} \lim_{t'\downarrow t} \left\{ \ln\left( \frac {g_i} {\overline{g}_i} \right) + \left[ \frac{f_i}{g_i} - \frac{\overline{f}_i}{\overline{g}_i} \right] + \left[ \frac{h_{i j}}{g_i g_j} - \frac{\overline{h}_{i j}}{\overline{g}_i\overline{g}_j} \right] \right\} + (\dot{g}_i + \dot{f}_i)\delta_j \lim_{t'\downarrow t} \left\{ \frac{h_{i j}}{g_i g_j} - \frac{\overline{h}_{i j}}{\overline{g}_i\overline{g}_j} \right\}\right) \ . \end{align} It gives the entropy produced by particle $i$ due to its interaction with particle $j$. Thirdly, a term that depends on three coordinates, the last term of \Eref{entropyProductionDensity_interacting_distinguishable_term1_simplified2} for one triplet $i,j,k$ of distinct particles, which contributes only for particle systems so dense that more than two particles might be interacting at once, \begin{equation}\elabel{entropyProductionDensity_interacting_distinguishable_ijk} \dot{\sigma}^{(N)}_{ijk}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)= \dot{\sigma}^{(3)}_{ijk}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)= \int \ddint{y_{i,j,k}} \dot{h}_{ij}\delta_k \lim_{t'\downarrow t} \left[ \frac{h_{i k}}{g_i g_k} - \frac{\overline{h}_{i k}}{\overline{g}_i\overline{g}_k} \right] \ .
\end{equation} This term gives the entropy produced by particle $i$ due to its interaction with particles $j$ and $k$ simultaneously. In general, the local entropy productions are not invariant under index permutations, as the order of indices determines the specific role each particle plays. We calculate $\dot{\sigma}^{(1)}_i$, $\dot{\sigma}^{(2)}_{ij}$ and $\dot{\sigma}^{(3)}_{ijk}$ explicitly in the examples studied in \SMref{generalised_trawlers} and \ref{sec:N_interacting_indistinguishable_particle_extPot}. By construction, an $n$-point vertex will result in a local entropy production depending on up to $2n-1$ locations, namely one location of the particle experiencing the displacement, $n-1$ locations of other particles interacting with it in the kernel and another $n-1$ of particles interacting with it in the logarithm. Under the assumption of short-rangedness, terms depending on more than $n$ locations may be neglected by assuming that if $n$ particles happen to be close enough to interact, the probability of finding more than $n$ particles in such close proximity will be exceedingly low. We are \emph{not} making any such assumption in the present work.
With these local entropy productions of $N$ pair-interacting, distinguishable particles in place, the overall entropy production at stationarity is \begin{align} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = & \sum_i^N \int \ddint{x_i} \rho_i^{(N)}(\gpvec{x}_i) \dot{\sigma}^{(1)}_i(\gpvec{x}_i) + \sum_i^N \sum_{j\ne i}^N \int \ddint{x_i} \ddint{x_j} \rho_{ij}^{(N)}(\gpvec{x}_i,\gpvec{x}_j) \dot{\sigma}^{(2)}_{ij}(\gpvec{x}_i,\gpvec{x}_j) \nonumber\\ &+ \sum_i^N \sum_{j\ne i}^N \sum_{k\notin \{i,j\}} \int \ddint{x_i} \ddint{x_j} \ddint{x_k} \rho_{ijk}^{(N)}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k) \dot{\sigma}^{(3)}_{ijk}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)\ , \elabel{entropyProduction_interacting_distinguishable_final} \end{align} where we have introduced various marginalisations of the density, similar to \Eref{marginalisation_densityN_i_distinguishable}, \begin{subequations} \elabel{marginalisation_densityN_i2orMore_distinguishable} \begin{align} \rho_{ij}^{(N)}(\gpvec{x}_i,\gpvec{x}_j) &= \int \!\prod_{\ell\notin\{i,j\}}^N\! \ddint{x_\ell}\, \rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)\\ \rho_{ijk}^{(N)}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k) &= \int \!\prod_{\ell\notin\{i,j,k\}}^N\! \ddint{x_\ell}\, \rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) \ . \end{align} \end{subequations} The two-point density $\rho_{ij}^{(N)}(\gpvec{x}_i,\gpvec{x}_j)$ is the joint density of particle species $i$ at $\gpvec{x}_i$ and species $j$ at $\gpvec{x}_j$, and similarly for the three-point density $\rho_{ijk}^{(N)}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)$. These densities are invariant under permutations of the indices, say \begin{equation} \rho_{ij}^{(N)}(\gpvec{x}_i,\gpvec{x}_j)=\rho_{ji}^{(N)}(\gpvec{x}_j,\gpvec{x}_i) \quad\text{and}\quad \rho_{ijk}^{(N)}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)=\rho_{kij}^{(N)}(\gpvec{x}_k,\gpvec{x}_i,\gpvec{x}_j) \ . 
\end{equation} Because the local entropy production \Erefs{entropyProductionDensity_independent_distinguishable_final}, \eref{entropyProductionDensity_interacting_distinguishable_ij} and \eref{entropyProductionDensity_interacting_distinguishable_ijk} depends only on a very reduced set of coordinates, the others can be integrated out. These marginalisations, \Erefs{marginalisation_densityN_i_distinguishable} and \eref{marginalisation_densityN_i2orMore_distinguishable}, are what makes the calculation of the entropy production feasible in practice. Having to know the full $N$-point density, as required by \Eref{entropyProduction_as_spave}, is normally an insurmountable obstacle in a theory, and its experimental determination is marred by significant errors. \Eref{entropyProduction_interacting_distinguishable_final}, however, renders this task feasible. Denoting by $\gpvec{x}^{(q)}_i$ the particle locations of species $i$ in measurement $q$ of $Q$ measurements, with the help of \Eref{entropyProduction_interacting_distinguishable_final} the entropy production may then be estimated by \begin{equation}\elabel{entropyProduction_interacting_distinguishable_experimental} \dot{S}_{\text{int}}^{(N)} = \frac{1}{Q} \sum_{q}^{Q} \left\{ \sum_i^N \dot{\sigma}^{(1)}_i(\gpvec{x}^{(q)}_i) + \sum_i^N \sum_{j\ne i}^N \dot{\sigma}^{(2)}_{ij}(\gpvec{x}^{(q)}_i,\gpvec{x}^{(q)}_j) + \sum_i^N \sum_{j\ne i}^N \sum_{k\notin \{i,j\}} \dot{\sigma}^{(3)}_{ijk}(\gpvec{x}^{(q)}_i,\gpvec{x}^{(q)}_j,\gpvec{x}^{(q)}_k) \right\} \ , \end{equation} replacing, for example, $\rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2)$ by the experimental estimate $(1/Q)\sum_q^Q \sum_{i_1,i_2=1\atop i_1\ne i_2}^N \delta(\gpvec{x}_1-\gpvec{x}^{(q)}_{i_1}) \delta(\gpvec{x}_2-\gpvec{x}^{(q)}_{i_2})$. For drift-diffusive particles in pair and external potentials, the expressions derived in this section are \emph{exact}.
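As an illustration (not part of the derivation), the structure of the estimator \Eref{entropyProduction_interacting_distinguishable_experimental} can be sketched in a few lines of Python. The local entropy productions passed in are hypothetical placeholders; as a sanity check, they are chosen as the non-interacting values $\dot{\sigma}^{(1)}_i=w_i^2/D$ with vanishing pair and triplet terms, for which the estimate must equal $\sum_i w_i^2/D$ regardless of the measured positions.

```python
def estimate_entropy_production(snapshots, sigma1, sigma2, sigma3):
    """Sample average over Q measurement snapshots.

    snapshots: list of Q tuples (x_1, ..., x_N) of particle positions,
        one entry per species, one tuple per measurement.
    sigma1, sigma2, sigma3: callables returning the local entropy
        productions (hypothetical placeholders in this sketch).
    """
    Q = len(snapshots)
    total = 0.0
    for xs in snapshots:
        N = len(xs)
        for i in range(N):
            total += sigma1(i, xs[i])
            for j in range(N):
                if j == i:
                    continue
                total += sigma2(i, j, xs[i], xs[j])
                for k in range(N):
                    if k not in (i, j):
                        total += sigma3(i, j, k, xs[i], xs[j], xs[k])
    return total / Q


# Non-interacting check: sigma1 = w_i^2/D, sigma2 = sigma3 = 0, so the
# estimate is sum_i w_i^2/D = 28.0 for any measured positions.
w, D = [1.0, 2.0, 3.0], 0.5
est = estimate_entropy_production(
    [(0.1, 0.2, 0.3), (1.0, -1.0, 2.0)],
    sigma1=lambda i, xi: w[i] ** 2 / D,
    sigma2=lambda i, j, xi, xj: 0.0,
    sigma3=lambda i, j, k, xi, xj, xk: 0.0,
)
```

The same loop structure applies unchanged when the placeholders are replaced by the interacting expressions derived above.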
The qualification to drift-diffusion and potentials is necessary only insofar as assumptions have been made about the properties of $g_i$, $f_i$ and $h_{ij}$ under various limits, such as \Erefs{g_i_limit}, \eref{f_vanishes}, \eref{f_over_g_delta_limit} and \eref{hderi_delta}. This concludes the derivations for distinguishable particles, with the crucial results \Eref{entropyProduction_independent_distinguishable_final} drawing on \Eref{entropyProductionDensity_independent_distinguishable_final}, and \Eref{entropyProduction_interacting_distinguishable_final} drawing on \Erefs{entropyProductionDensity_independent_distinguishable_final}, \eref{entropyProductionDensity_interacting_distinguishable_ij} and \eref{entropyProductionDensity_interacting_distinguishable_ijk}. In the next example, we re-derive the results of \SMref{HarmonicTrawlers} using the present, general framework, and in \SMref{N_indistinguishable_particles}, we extend this framework to indistinguishable particles. \subsubsection{Example: Entropy production of two pair-interacting distinguishable drift-diffusion particles without external potential} \seclabel{generalised_trawlers} To illustrate the framework outlined in \SMref{N_interacting_distinguishable_particles}, we use the example of two pair-interacting drift-diffusion particles on a circle of circumference $L$, which is calculated ``from first principles'' in \SMref{HarmonicTrawlers}, where it is found that the entropy production is $\dot{S}_{\text{int}}=(w_1+w_2)^2/(2D)$, if the particles drift with velocities $w_1$ and $w_2$ respectively and both diffuse with diffusion constant $D$. This result ought to be independent of the details of the pair potential $U$, given the simple physical reasoning in \SMref{HarmonicTrawlers_simple_physics}, as we confirm in the following.
To calculate the entropy production on the basis of \Eref{entropyProduction_interacting_distinguishable_final}, we need the local entropy productions, $\dot{\sigma}^{(1)}_i(x_i)$, \Eref{entropyProductionDensity_independent_distinguishable_final}, and $\dot{\sigma}^{(2)}_{ij}(x_i,x_j)$, \Eref{entropyProductionDensity_interacting_distinguishable_ij}, but in the absence of a third particle, not $\dot{\sigma}^{(3)}_{ijk}$. We further need the obviously uniform one-point densities \begin{align} \rho_1^{(2)}(x_1)=\rho_2^{(2)}(x_2)=1/L \end{align} at stationarity, \SMref{HarmonicTrawlers}, and the two-point densities $\rho_{12}^{(2)}$ and $\rho_{21}^{(2)}$. As it turns out, these do not need to be known explicitly in terms of the interaction potential $U(x_1-x_2)$. Firstly, by translational invariance, the two-point densities factorise into a uniform distribution and a distribution of the distance $r=x_1-x_2$, \begin{equation}\elabel{density_12_trawler_example} \rho_{12}^{(2)}(x_1,x_2)= \rho_{21}^{(2)}(x_2,x_1)= \frac{1}{L} \rho_r(x_1-x_2) \ . \end{equation} Secondly, assuming Newton's third law, so that the force acting on one particle $U'_{12}(x_1-x_2)=U'(x_1-x_2)$ is the negative of the force acting on the other particle, $U'_{21}(x_2-x_1)=-U'(x_1-x_2)$ \cite{LoosKlapp:2020,Garcia-MillanZhang:2022} and further assuming that the potential is even so that $U'$ is odd, we can write the equations of motion \begin{subequations}\elabel{Langevin_x12} \begin{align} \dot{x}_1 &= w_1 - U'(x_1-x_2) + \xi_1(t) \\ \dot{x}_2 &= w_2 - U'(x_2-x_1) + \xi_2(t) \end{align} \end{subequations} where $U(r)=kr^2/2$ in \SMref{HarmonicTrawlers}, but shall be left unspecified here. 
We can then, thirdly, determine the equation of motion of the distance $r$, because the right-hand sides of \Eref{Langevin_x12} are solely a function of $r$, \begin{equation} \dot{r}=\big(w_1-w_2-2U'(r)\big) + \xi_1(t)-\xi_2(t)\ , \end{equation} so that $r$ diffuses with diffusion constant $2D$ and drifts with velocity $w_1-w_2-2U'(r)$, giving rise to a Fokker-Planck equation for the density $\rho_r(r,t)$ of $r$, \begin{equation} \dot{\rho}_r = -\partial_r \bigg(\big(w_1-w_2-2U'(r)\big)\rho_r\bigg) + 2D\partial_r^2\rho_r \ , \end{equation} which determines the probability current $j_r$ via $\dot{\rho}_r=-D\partial_r j_r$ up to a constant. A simplifying assumption that allows simple physical reasoning to reproduce the results below, \SMref{HarmonicTrawlers_simple_physics}, is that one particle ends up towing the other, implying that the particle distance $r$ does not increase indefinitely. We thus demand that $j_r$ vanishes at stationarity, \begin{equation}\elabel{j_r_vanishes} 0=-j_r=2\rho_r'(r)+\frac{1}{D}\big(2U'(r)-(w_1-w_2) \big)\rho_r(r) \ , \end{equation} which, in the presence of a drift $w_1-w_2$, implies that the potential $U(r)$ is binding. The differential equation \Eref{j_r_vanishes} is all we need to know about $\rho_r(r)$ in the following. To calculate the local entropy production $\dot{\sigma}^{(1)}_i(x_i)$ on the basis of \Eref{entropyProductionDensity_independent_distinguishable_final}, we require $f_i$ and $g_i$.
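The stationarity condition \Eref{j_r_vanishes} is solved by $\rho_r(r)\propto\exp\big(((w_1-w_2)r-2U(r))/(2D)\big)$, which can be verified numerically. The following sketch (not part of the derivation) does so by finite differences for an assumed harmonic potential $U(r)=kr^2/2$ with illustrative parameter values.

```python
import math

# Sketch: verify that rho_r(r) ∝ exp(((w1-w2) r - 2 U(r)) / (2 D))
# solves 0 = 2 rho_r' + (2 U'(r) - (w1 - w2)) rho_r / D
# for an assumed harmonic pair potential U(r) = k r^2 / 2.
k, D, dw = 2.0, 0.5, 0.7   # spring constant, diffusion constant, w1 - w2

def rho(r):
    # unnormalised candidate stationary density; 2U(r) = k r^2
    return math.exp((dw * r - k * r * r) / (2.0 * D))

def minus_j_r(r, h=1e-6):
    # -j_r = 2 rho' + (2 U'(r) - dw) rho / D, with U'(r) = k r
    drho = (rho(r + h) - rho(r - h)) / (2.0 * h)   # centred difference
    return 2.0 * drho + (2.0 * k * r - dw) * rho(r) / D

# the current vanishes for all r up to finite-difference error
residual = max(abs(minus_j_r(0.1 * n)) for n in range(-20, 21))
```

The residual is limited only by the accuracy of the centred difference, confirming the vanishing current for this choice of $U$.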
Without an external potential and with the drift being dealt with non-perturbatively, $f_i$ vanishes and $g_i$ is given by \Eref{example_g_i}, so that, from \Erefs{gi_over_Bgi} and \eref{gdot_example}, \begin{subequations} \begin{align} \dot{g}_i &= D \delta_i'' - w_i \delta_i' \ ,\\ \ln\left(\frac{g_i}{\overline{g}_i}\right)&=\frac{(y_i-x_i)w_i}{D} \ , \end{align} \end{subequations} where dashed $\delta$-functions are differentiated with respect to their argument, $\delta_i=\delta(y_i-x_i)$, and therefore \begin{equation}\elabel{entropyProductionDensity_1_example_final} \dot{\sigma}^{(1)}_i(x_i) = \int \dint{y_i} \left( D \delta_i'' - w_i \delta_i' \right) \frac{(y_i-x_i)w_i}{D} = \frac{w_i^2}{D} \ . \end{equation} The interaction term is equally easily determined, \Erefs{h_example_compact} and \eref{hdot_example} give \begin{equation} \dot{h}_{ij}=U'(x_i-x_j) \delta_i' \delta_j \elabel{hdot_example_trawler_final} \end{equation} and by \Eref{trick3} \begin{equation} h_{ij}=-\frac{y_i-x_i}{2D} g_i g_j U'(x_i-x_j) + \text{h.o.t.} \ . \elabel{h_example_trawler_final} \end{equation} Using \Erefs{hdot_example_trawler_final} and \eref{h_example_trawler_final} in the local entropy production in \Eref{entropyProductionDensity_interacting_distinguishable_ij}, \begin{align} \dot{\sigma}^{(2)}_{ij}(x_i,x_j)&= \int \dint{y_{i,j}} \left( U'(x_i-x_j) \delta_i' \delta_j \Bigg\{ \frac{(y_i-x_i)w_i}{D} + \left[ -\frac{y_i-x_i}{2D} U'(x_i-x_j) + \frac{x_i-y_i}{2D} U'(y_i-y_j) \right] \right\}\nonumber\\ &\nonumber\qquad + (D \delta_i'' - w_i \delta_i')\delta_j \left[ -\frac{y_i-x_i}{2D} U'(x_i-x_j) + \frac{x_i-y_i}{2D} U'(y_i-y_j) \right]\Bigg)\\ &= -\frac{U'(x_i-x_j)}{D}\big(2w_i-U'(x_i-x_j)\big) -U''(x_i-x_j) \ . 
\elabel{entropyProductionDensity_2_example_final} \end{align} We proceed to calculate the entropy production by using $\dot{\sigma}^{(1)}_i(x_i)$, \Eref{entropyProductionDensity_1_example_final}, and $\dot{\sigma}^{(2)}_{ij}(x_i,x_j)$, \Eref{entropyProductionDensity_2_example_final}, in \Eref{entropyProduction_interacting_distinguishable_final}, \begin{align} \dot{S}_{\text{int}}^{(2)}[\rho^{(2)}] =& \int_0^L \dint{x_1} \rho_1^{(2)}(x_1) \dot{\sigma}^{(1)}_1(x_1) + \int_0^L \dint{x_2} \rho_2^{(2)}(x_2) \dot{\sigma}^{(1)}_2(x_2) \nonumber\\& + \int_0^L \dint{x_1}\dint{x_2} \left\{ \rho_{12}^{(2)}(x_1,x_2) \dot{\sigma}^{(2)}_{12}(x_1,x_2) + \rho_{21}^{(2)}(x_2,x_1) \dot{\sigma}^{(2)}_{21}(x_2,x_1) \right\} \ . \elabel{entropyProduction_example_distinguishable_first_step} \end{align} Given that $\rho_1^{(2)}(x_1)=\rho_2^{(2)}(x_2)=1/L$ is constant, the first two integrals give simply \begin{equation}\elabel{entropyProduction_example_distinguishable_first_step_first_integral} \int_0^L \dint{x_1} \rho_1^{(2)}(x_1) \dot{\sigma}^{(1)}_1(x_1) + \int_0^L \dint{x_2} \rho_2^{(2)}(x_2) \dot{\sigma}^{(1)}_2(x_2) = \frac{w_1^2}{D}+\frac{w_2^2}{D} \ , \end{equation} which is the entropy production of two independent drift-diffusion particles. The remaining double integrals are \begin{align} &\int_0^L \dint{x_1}\dint{x_2} \left\{ \rho_{12}^{(2)}(x_1,x_2) \dot{\sigma}^{(2)}_{12}(x_1,x_2) + \rho_{21}^{(2)}(x_2,x_1) \dot{\sigma}^{(2)}_{21}(x_2,x_1) \right\}\nonumber\\ &= - 2 \int_0^L \dint{x_1}\dint{x_2} \rho_{12}^{(2)}(x_1,x_2) \left\{ \frac{U'(x_1-x_2)}{D}\big(w_1-w_2-U'(x_1-x_2)\big) +U''(x_1-x_2)\right\} \end{align} where we have used that $U'$ is odd and $U''$ is even.
Inserting \Eref{density_12_trawler_example} and using $\rho_r (U'^2/D - (w_1-w_2)U'/D - U'') = D \rho''_r - (w_1-w_2)^2\rho_r/(4D)$ from \Eref{j_r_vanishes} finally gives \begin{equation}\elabel{entropyProduction_example_distinguishable_second_integral} \int_0^L \dint{x_1}\dint{x_2} \left\{ \rho_{12}^{(2)}(x_1,x_2) \dot{\sigma}^{(2)}_{12}(x_1,x_2) + \rho_{21}^{(2)}(x_2,x_1) \dot{\sigma}^{(2)}_{21}(x_2,x_1) \right\} = - \frac{(w_1-w_2)^2}{2D} \ . \end{equation} The sum of \Erefs{entropyProduction_example_distinguishable_first_step_first_integral} and \eref{entropyProduction_example_distinguishable_second_integral} gives the total entropy production \Eref{entropyProduction_example_distinguishable_first_step}, \begin{equation}\elabel{trawler_final} \dot{S}_{\text{int}}^{(2)}[\rho^{(2)}] = \frac{w_1^2}{D}+\frac{w_2^2}{D} - \frac{(w_1-w_2)^2}{2D} = \frac{(w_1+w_2)^2}{2D} \ , \end{equation} where the two-particle contributions $\dot{\sigma}^{(2)}_{12}$ and $\dot{\sigma}^{(2)}_{21}$ cancel some of the entropy generated by the free case. \Eref{trawler_final} is indeed identical to the result \Eref{N2_final_entropy_production} in \SMref{HarmonicTrawlers}. In contrast to the calculation there, the present result holds for all even, reciprocal \cite{LoosKlapp:2020,Garcia-MillanZhang:2022} interaction potentials as outlined before \Erefs{Langevin_x12}. Whether the particles drag each other by attraction or push each other by repulsion, the entropy production is independent of the details of the potential, provided only that the potential prevents a current in $r=x_1-x_2$, \Eref{j_r_vanishes}.
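This potential-independence can be checked numerically. The following sketch (for an assumed harmonic potential $U(r)=kr^2/2$ and illustrative parameter values) verifies that the pair integrand of \Eref{entropyProductionDensity_2_example_final}, weighted with the stationary $\rho_r$, integrates to $(w_1-w_2)^2/(4D)$, so that the double integral contributes $-(w_1-w_2)^2/(2D)$ as in \Eref{entropyProduction_example_distinguishable_second_integral}, and that the final algebra of \Eref{trawler_final} holds.

```python
import math

# Assumed harmonic pair potential U(r) = k r^2 / 2; illustrative parameters.
k, D, w1, w2 = 1.5, 0.4, 1.0, -0.3
dw = w1 - w2

def rho(r):
    # unnormalised stationary density solving the zero-current condition
    return math.exp((dw * r - k * r * r) / (2.0 * D))

def integrand(r):
    # U'(r)(dw - U'(r))/D + U''(r) with U'(r) = k r, U''(r) = k
    return k * r * (dw - k * r) / D + k

h = 1e-3
rs = [h * n for n in range(-8000, 8001)]
Z = sum(rho(r) for r in rs) * h                         # normalisation
pair_term = sum(rho(r) * integrand(r) for r in rs) * h / Z

predicted = dw * dw / (4.0 * D)   # independent of the details of U(r)

# final algebra: w1^2/D + w2^2/D - dw^2/(2D) = (w1 + w2)^2/(2D)
total = w1 ** 2 / D + w2 ** 2 / D - dw ** 2 / (2.0 * D)
```

Repeating the quadrature with other binding, even potentials leaves `pair_term` unchanged, in line with the argument above.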
\subsection{\texorpdfstring{$N$}{N} indistinguishable particles} \seclabel{N_indistinguishable_particles} Assuming that no particle position is occupied more than once, the integral over the phase space occupied by $N$ indistinguishable particles is correctly captured by the $N$-fold integral over the particle coordinates, as if the particles were distinguishable, but divided by $N!$ to compensate for the $N!$-fold degeneracy, and thus overcounting, of equivalent states. Using the Gibbs factor $1/N!$ to account for indistinguishability is allowable whenever multiple occupation of the same position has vanishing measure, an assumption which we refer to as \emph{sparse occupation}. Sparse occupation re-establishes distinguishability at equal times, so that indistinguishability needs to be accounted for only in transitions: Observing two particles at $\gpvec{y}_1,\gpvec{y}_2$ at one time and $\gpvec{x}_1,\gpvec{x}_2$ a moment later allows for the transitions $(\gpvec{y}_1,\gpvec{y}_2)\to(\gpvec{x}_1,\gpvec{x}_2)$ or $(\gpvec{y}_1,\gpvec{y}_2)\to(\gpvec{x}_2,\gpvec{x}_1)$. Obviously, observables, such as the entropy production, must reflect that particles are indistinguishable and thus must be invariant under permutations of coordinates, but this apparent simplification is difficult to implement. The difference between distinguishable and indistinguishable particles becomes apparent already at the level of the $N$-point density $\rho_N^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)$, which for indistinguishable particles is invariant under permutations of the arguments and is normalised differently.
While the $N$-point density $\rho^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N)$ of distinguishable particles equals the \emph{probability density} to find the different particles $i$ at their respective locations $\gpvec{x}_i$, for indistinguishable particles it is the \emph{number density of any particle} at $\gpvec{x}_1$, \emph{any other} particle at $\gpvec{x}_2$ and so on. As long as all coordinates are distinct, there is no need to re-introduce distinguishability in order to satisfy the requirement of locating \emph{another} particle. Similar to \Eref{density_is_propagator}, the density $\rho^{(N)}$ can be determined elegantly on the basis of the field theory with one pair of fields, $\phi$ and $\phi^{\dagger}$. At stationarity, we introduce \begin{equation}\elabel{density_indistinguishable_from_propagator} \rho_n^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_n) = \lim_{t_{01},\ldots,t_{0N}\to-\infty} \ave{\phi(\gpvec{x}_1,t)\ldots\phi(\gpvec{x}_n,t)\phi^{\dagger}(\gpvec{x}_{01},t_{01})\ldots\phi^{\dagger}(\gpvec{x}_{0N},t_{0N})} \ , \end{equation} as the $n$-point number density of $N$ particles of the same species. Fixing $n-1$ particle coordinates and considering only the dependence of the $n$-point density on $\gpvec{x}_n$, the latter might ``encounter'' any of the ``undetermined, other'' $N-(n-1)$ particles in an integral, so that integrating over $\gpvec{x}_n$ produces \begin{equation}\elabel{marginalisation_densityN} \int \ddint{x_n} \rho_n^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_n) = \big( N-(n-1) \big) \rho_{n-1}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_{n-1}) \ . \end{equation} This marginalisation property is owed to the density accounting for \emph{distinct particles}, \Eref{density_indistinguishable_from_propagator}, as it is constructed using $n$ annihilator operators, which each contribute with a local particle number count and then remove (annihilate) one particle locally, so that it cannot contribute towards further counts.
Using \Eref{marginalisation_densityN} repeatedly gives \begin{equation}\elabel{marginalisation_densityN_i_indistinguishable} \rho_{1}^{(N)}(\gpvec{x}_1) = \frac{1}{(N-1)!} \int \ddint{x_{N,N-1,\ldots,2}} \rho_{N}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) \ , \end{equation} and generally \begin{equation} \elabel{def_rho_N_n_indistinguishable} \rho_n^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_n)=\frac{1}{(N-n)!} \int\ddint{x_{N,N-1,\ldots,n+1}} \rho_N^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) \end{equation} to be contrasted with the one-point density of distinguishable particles, \Eref{marginalisation_densityN_i_distinguishable}. For the special case of $n=1$ in \Eref{marginalisation_densityN} we may define $\rho_0^{(N)}(\emptyset)=1$, so that \begin{equation}\elabel{marginalisation_densityN_consequences} \int \ddint{x} \rho_1^{(N)}(\gpvec{x}) = N \qquad\text{and}\qquad \int \ddint{x_{N,\ldots,1}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = N! \ . \end{equation} Including the Gibbs factor $1/N!$, the integral over the phase space of occupation numbers then suggestively produces \begin{equation} \frac{1}{N!} \int \ddint{x_{N,\ldots,1}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = 1 \ . \end{equation} Propagators in a Doi-Peliti field theory, designed for occupation-number states, naturally implement indistinguishability. Unless different species are specified in the form of different fields, an expression such as \Eref{density_indistinguishable_from_propagator} produces diagrams of all possible permutations of incoming and outgoing coordinates by virtue of Wick's theorem.
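The marginalisation property \Eref{marginalisation_densityN} and the normalisation \Eref{marginalisation_densityN_consequences} can be illustrated with a discrete toy configuration (an aside, not part of the derivation): realising the $n$-point number density as the number of ordered $n$-tuples of \emph{distinct} particles found at the requested sites reproduces both identities by direct counting.

```python
from itertools import permutations, product

# Toy configuration: N = 4 particles on discrete sites, sparsely occupied
# (no site occupied more than once).
positions = [0, 1, 3, 4]
N = len(positions)
sites = range(5)

def rho(xs):
    """n-point number density: ordered n-tuples of distinct particles at xs."""
    n = len(xs)
    return sum(1 for p in permutations(range(N), n)
               if all(positions[p[m]] == xs[m] for m in range(n)))

# integrating out the n-th argument yields (N - (n-1)) * rho_{n-1}
marginalisation_holds = all(
    sum(rho(xs + (x,)) for x in sites) == (N - n + 1) * rho(xs)
    for n in range(1, N + 1)
    for xs in product(sites, repeat=n - 1)
)

# integrating out all N coordinates gives N! -- the origin of the Gibbs factor
full_integral = sum(rho(xs) for xs in product(sites, repeat=N))
```

Here `full_integral` equals $N!=24$, so dividing by the Gibbs factor normalises the phase-space integral to one.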
The propagators used in \Erefs{def_Op} and \eref{def_Ln} are therefore naturally the transition probability densities of occupation number states and the expression for the entropy production rate only needs to account for the phase space being that of indistinguishable particles, \begin{align} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = \frac{1}{(N!)^2} \int&\ddint{x_{1,\ldots,N}}\ddint{y_{1,\ldots,N}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \nonumber\\ &\times \left\{ \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} + \ln\left( \frac{\rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)}{\rho_N^{(N)}(\gpvec{y}_1,\ldots,\gpvec{y}_N)} \right) \right\} \elabel{entropy_production_indistinguishableN} \end{align} with $\bm{\mathsf{K}}^{(N)}$ and $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ given by the expressions for indistinguishable particles corresponding to \Erefs{Op_distinguishableN} and \eref{Ln_distinguishableN} respectively, \begin{equation}\elabel{Op_indistinguishableN} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \partial_{t'} \ave{\phi(\gpvec{y}_1,t')\ldots\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_1,t)\ldots\tilde{\phi}(\gpvec{x}_N,t)} \end{equation} and \begin{equation}\elabel{Ln_indistinguishableN} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \ln\left(\frac{\ave{\phi(\gpvec{y}_1,t')\ldots\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_1,t)\ldots\tilde{\phi}(\gpvec{x}_N,t)}}{\ave{\phi(\gpvec{x}_1,t')\ldots\phi(\gpvec{x}_N,t')\tilde{\phi}(\gpvec{y}_1,t)\ldots\tilde{\phi}(\gpvec{y}_N,t)}}\right) \ . \end{equation} The joint propagator used here accounts for strictly \emph{distinct} particles as we characterise the transition probabilities of \emph{all} particles. 
Allowing for the same particle to be counted several times gives rise to terms containing factors of $\delta(\gpvec{y}_i-\gpvec{y}_j)$ and fewer creator fields. That these terms do not feature in the propagators above is consistent with the assumption of sparse occupation that allows the Gibbs factor $1/N!$ to account for indistinguishability when integrating over all phase space. In keeping with \Erefs{def_entropyProductionDensity} and \eref{entropyProduction_as_spave}, we can write \Eref{entropy_production_indistinguishableN} at stationarity as a weighted average, \begin{equation}\elabel{entropy_production_indistinguishableN_as_density} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = \frac{1}{N!} \int \ddint{x_{1,\ldots,N}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \end{equation} with the local entropy production $\dot{\sigma}^{(N)}$ at stationarity defined as \begin{equation}\elabel{def_entropyProductionDensity_indistinguishableN} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)= \frac{1}{N!} \int \ddint{y_{1,\ldots,N}} \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \ . \end{equation} To ease notation, in the following we use $g_i$, $f_i$, \latin{etc.}\@\xspace, as introduced in \Eref{def_g_i}, for example \begin{equation}\elabel{g_the_same} \ave[0]{\phi(\gpvec{y}_i,t')\tilde{\phi}(\gpvec{x}_i,t)} = g_i = g(\gpvec{y}_i;\gpvec{x}_i;t'-t)\ , \end{equation} adapted to indistinguishable particles by dropping the index from the fields. However, all $g_i$ now are the \emph{same} function evaluated for different variables, namely $\gpvec{y}_i$ and $\gpvec{x}_i$, as suggested by the final $g(\gpvec{y}_i;\gpvec{x}_i;t'-t)$ in \Eref{g_the_same} not carrying an index $i$.
The same applies to $f_i$, $h_{ij}$ and the corresponding functions with inverted arguments $\gpvec{y}_i$ and $\gpvec{x}_i$, for example \begin{equation}\elabel{f_the_same} \overline{f}_i = f(\gpvec{x}_i;\gpvec{y}_i;t'-t) \ . \end{equation} \subsubsection{\texorpdfstring{$N$}{N} independent, indistinguishable particles} \seclabel{N_independent_indistinguishable_particles} The $N$-particle joint propagator $\bigl\langle\phi(\gpvec{y}_1,t')\linebreak[1]\phi(\gpvec{y}_2,t')\linebreak[1]\ldots\linebreak[1]\phi(\gpvec{y}_N,t')\linebreak[1]\tilde{\phi}(\gpvec{x}_1,t)\linebreak[1]\tilde{\phi}(\gpvec{x}_2,t)\linebreak[1]\ldots\linebreak[1]\tilde{\phi}(\gpvec{x}_N,t)\bigr\rangle$ immediately factorises in the absence of interactions. However, rather than resulting in a single product of $N$ propagators like \Eref{propagator_factorising}, it is the sum of the $N!$ distinct products of propagators, each accounting for a particular pairing of Doi-shifted creator and annihilator fields, \begin{align} \bigl\langle\phi(\gpvec{y}_1,t') & \phi(\gpvec{y}_2,t')\ldots\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_1,t)\tilde{\phi}(\gpvec{x}_2,t)\ldots\tilde{\phi}(\gpvec{x}_N,t)\bigr\rangle \nonumber\\ = & \ave{\phi(\gpvec{y}_1,t')\tilde{\phi}(\gpvec{x}_1,t)} \ave{\phi(\gpvec{y}_2,t')\tilde{\phi}(\gpvec{x}_2,t)} \ldots \ave{\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_N,t)} \nonumber\\ &+ \ave{\phi(\gpvec{y}_2,t')\tilde{\phi}(\gpvec{x}_1,t)} \ave{\phi(\gpvec{y}_1,t')\tilde{\phi}(\gpvec{x}_2,t)} \ldots \ave{\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_N,t)} + \ldots \elabel{permutation_propagator} \end{align} As a result of \Eref{permutation_propagator}, both $\bm{\mathsf{K}}^{(N)}$ and $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ contain $N!$ times as many terms as in the case of distinguishable particles. 
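The permutation sum \Eref{permutation_propagator} is nothing but the permanent of the matrix of single-particle propagators $g(\gpvec{y}_i;\gpvec{x}_j;t'-t)$. The following sketch (an illustration with Gaussian nascent $\delta$-functions for $g$, not part of the derivation) shows that for distinct coordinates and $t'\downarrow t$ the diagonal pairing dominates the permanent, which is how effective distinguishability is restored at equal times.

```python
import math
from itertools import permutations

def g(y, x, dt, D=1.0):
    """Free diffusion propagator, a nascent delta function of width sqrt(2 D dt)."""
    var = 2.0 * D * dt
    return math.exp(-(y - x) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def joint_propagator(ys, xs, dt):
    """Permanent over all N! pairings, cf. the permutation sum above."""
    N = len(xs)
    return sum(math.prod(g(ys[p[i]], xs[i], dt) for i in range(N))
               for p in permutations(range(N)))

xs = [0.0, 1.0, 2.5]      # distinct initial positions
ys = [0.01, 1.02, 2.48]   # slightly displaced final positions, still distinct
dt = 1e-4                 # small t' - t

full = joint_propagator(ys, xs, dt)
diagonal = math.prod(g(y, x, dt) for y, x in zip(ys, xs))
# off-diagonal pairings are suppressed by exp(-O(1)/dt) and vanish as t' -> t
rel_err = abs(full - diagonal) / diagonal
```

The relative deviation of the permanent from the single diagonal product is negligible here, in line with the vanishing-measure argument used below.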
As far as $\bm{\mathsf{K}}^{(N)}$ is concerned, \Eref{Op_indistinguishableN}, this seems to barely complicate the expression for the entropy production, because the permutation of the fields can be undone by a permutation of the dummy variables of the integral it is sitting in, so that, say, $\gpvec{y}_i$ is paired with $\gpvec{x}_i$ in every one of the propagators appearing in $\bm{\mathsf{K}}^{(N)}$ according to \Eref{permutation_propagator}. As $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ and the logarithm of the joint density are both invariant under permutations of the $\gpvec{y}_i$, and the joint density $\rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ in the pre-factor is not affected by such a permutation at all, the $N!$ permutations all give equal contributions and therefore cancel the factor of $1/N!$ in \Eref{def_entropyProductionDensity_indistinguishableN}, so that \begin{equation}\elabel{def_entropyProductionDensity_indistinguishableN2} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \int \ddint{y_{1,\ldots,N}} \left\{ \sum_{i=1}^N (\dot{g}_i + \dot{f}_i) \prod_{j\ne i}^N \delta_j \right\} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \ , \end{equation} similar to \Eref{Op_distinguishable_independent}.
Using \Eref{permutation_propagator} in \Eref{Ln_indistinguishableN} to calculate the logarithm, we have \begin{align} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \nonumber &= \lim_{t'\downarrow t} \Bigg\{ \ln\Big( \ave{\phi(\gpvec{y}_1,t')\tilde{\phi}(\gpvec{x}_1,t)} \ave{\phi(\gpvec{y}_2,t')\tilde{\phi}(\gpvec{x}_2,t)} \ldots \ave{\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_N,t)} \\\nonumber &\quad\qquad + \ave{\phi(\gpvec{y}_2,t')\tilde{\phi}(\gpvec{x}_1,t)} \ave{\phi(\gpvec{y}_1,t')\tilde{\phi}(\gpvec{x}_2,t)} \ldots \ave{\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_N,t)} + \ldots \Big)\\\nonumber &\quad - \ln\Big( \ave{\phi(\gpvec{x}_1,t')\tilde{\phi}(\gpvec{y}_1,t)} \ave{\phi(\gpvec{x}_2,t')\tilde{\phi}(\gpvec{y}_2,t)} \ldots \ave{\phi(\gpvec{x}_N,t')\tilde{\phi}(\gpvec{y}_N,t)} \\ &\quad\qquad+ \ave{\phi(\gpvec{x}_2,t')\tilde{\phi}(\gpvec{y}_1,t)} \ave{\phi(\gpvec{x}_1,t')\tilde{\phi}(\gpvec{y}_2,t)} \ldots \ave{\phi(\gpvec{x}_N,t')\tilde{\phi}(\gpvec{y}_N,t)} + \ldots \Big) \Bigg\} \ . \end{align} We will use \Eref{g_i_limit} in the form that any of the propagators $\ave{\phi(\gpvec{y}_i,t')\tilde{\phi}(\gpvec{x}_j,t)}$ in the limit of $t'\downarrow t$ will vanish if $i\ne j$ because $\gpvec{y}_i=\gpvec{x}_j$ for $i\ne j$ is not enforced by the $\delta$-functions in the kernel and therefore $\gpvec{y}_i=\gpvec{x}_j$ has vanishing measure under the integral. As the kernel enforces $\gpvec{y}_i=\gpvec{x}_j$ only for $i=j$, under the integral the logarithm simplifies just like in \Eref{entropyProductionDensity_independent_distinguishable_final_sum}. 
Because all the $g_i$ and $f_i$ are the same function for indistinguishable particles just with different arguments $\gpvec{y}_i$ and $\gpvec{x}_i$, the local entropy production of indistinguishable particles corresponding to \Eref{entropyProductionDensity_independent_distinguishable_final_sum} is invariant under permutations of the arguments $\gpvec{x}_1,\ldots,\gpvec{x}_N$ and the single-particle local entropy production $\dot{\sigma}_i^{(N)}(\gpvec{x}_i)$ is the same for any particle $i$, \latin{i.e.}\@\xspace $\dot{\sigma}_i^{(N)}(\gpvec{x}_i)=\dot{\sigma}_1^{(1)}(\gpvec{x}_i)$, \Eref{entropyProductionDensity_independent_distinguishable_final}. Inserting $\dot{\sigma}^{(N)}$ in \Eref{entropyProductionDensity_independent_distinguishable_final_sum} into the entropy production \Eref{entropy_production_indistinguishableN_as_density}, and using that $\dot{\sigma}^{(N)}$ and $\rho_N^{(N)}$ are invariant under permutations of $\gpvec{x}_1,\ldots,\gpvec{x}_N$, we obtain \begin{equation}\elabel{entropy_production_indistinguishableN_as_density_independent2} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = \frac{N}{N!} \int \ddint{x_{1,\ldots,N}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \dot{\sigma}_1^{(1)}(\gpvec{x}_1) \ . \end{equation} Using \Eref{marginalisation_densityN_i_indistinguishable} to marginalise $\rho_N^{(N)}$ over $\gpvec{x}_2,\ldots,\gpvec{x}_N$ finally gives \begin{equation}\elabel{entropy_production_indistinguishableN_as_density_independent2_final} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = \int \ddint{x_1} \rho_1^{(N)}(\gpvec{x}_1) \dot{\sigma}_1^{(1)}(\gpvec{x}_1) \ , \end{equation} which is $N$ times the entropy production of a single particle, provided $\rho_1^{(N)}(\gpvec{x}_1)=N\rho_1^{(1)}(\gpvec{x}_1)$, \Eref{marginalisation_densityN_consequences}. This is not necessarily the case, in particular not when ``the system is not ergodic'' or not stationary, for example when particles are trapped or their position is not equilibrated. 
If the density, however, obeys $\rho_1^{(N)}(\gpvec{x}_1)=N\rho_1^{(1)}(\gpvec{x}_1)$ we can write \begin{equation}\elabel{entropy_production_indistinguishableN_as_density_independent3} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = N \int \ddint{x_1} \rho_1^{(1)}(\gpvec{x}_1) \dot{\sigma}_1^{(1)}(\gpvec{x}_1) \ . \end{equation} For indistinguishable particles, we use the notation $\dot{\sigma}_n^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_n)$ for the local entropy production depending on $n$ locations in an $N$ particle system. \subsubsection{\texorpdfstring{$N$}{N} pairwise interacting, indistinguishable particles} \seclabel{N_interacting_indistinguishable_particles} In the following, we generalise the result in \Eref{entropy_production_indistinguishableN_as_density_independent2_final} to interacting, indistinguishable particles. In the case of interaction, neither density nor propagator factorise. However, just as in the discussion of interacting distinguishable particles, the propagator can still be expanded systematically, very much along the same lines as \Eref{Npropagators_distinguishable_interaction_diagrams}, with the added benefit of having to draw only on \emph{one} type of interaction, \begin{equation} h_{ij}=h(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j; t'-t) \ , \end{equation} which is, similar to $g$ and $f$, \Erefs{g_the_same} and \eref{f_the_same}, the same function $h$ for any two particles with positions as indicated. 
Using the propagator in \Eref{Npropagators_distinguishable_with_interaction} as the starting point, we may write \begin{align}\elabel{Npropagators_indistinguishable_with_interaction} \ave{\phi(\gpvec{y}_1,t')\ldots\tilde{\phi}(\gpvec{x}_N,t)} = & \prod^N_i g(\gpvec{y}_i;\gpvec{x}_i;t'-t) + \sum_i^N f(\gpvec{y}_i;\gpvec{x}_i;t'-t) \prod^N_{\substack{j\ne i}} g(\gpvec{y}_j;\gpvec{x}_j;t'-t) \nonumber\\ &+ \sum_i^N \sum_{j\ne i}^N h(\gpvec{y}_i,\gpvec{y}_j;\gpvec{x}_i,\gpvec{x}_j; t'-t) \prod_{k\notin\{i,j\}} g(\gpvec{y}_k;\gpvec{x}_k;t'-t) + \text{perm.} + \order{(t'-t)^2} \ , \end{align} where ``perm.''\ refers to distinct permutations of the coordinates, as seen earlier in \Eref{permutation_propagator}. For example, the term $\prod_i g$ exists in $N!$ distinct permutations: one is the first term in \Eref{Npropagators_indistinguishable_with_interaction}, $\prod_i g_i$, while each of the remaining ones contains at least two factors, such as $g(\gpvec{y}_1;\gpvec{x}_2;t'-t)$, that do not adhere to the pattern of the shorthand $g_i=g(\gpvec{y}_i;\gpvec{x}_i;t'-t)$. When $\gpvec{x}_i=\gpvec{y}_i$ is enforced for all $i$ except one, all these additional permutations vanish under the integral, as discussed below. The term $\sum_i f \prod_j g$ exists in $N(N!)$ distinct permutations, as $f(\gpvec{y}_i;\gpvec{x}_j;t'-t)$ exists in $N^2$ permutations and the remaining $\prod_j g$ in a further $(N-1)!$. The term involving $h$ correspondingly comes in $N(N-1)(N!)$ permutations. \Eref{Npropagators_indistinguishable_with_interaction} enters the kernel with a time-derivative and a limit $t'\downarrow t$, producing $N(N!)$ distinct terms of the form $(\dot{g}+\dot{f})\prod \delta$, as seen in the case without interaction, \Eref{def_entropyProductionDensity_indistinguishableN2}.
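The permutation counting above lends itself to a brute-force check. In the sketch below (the helper functions are ours, not part of the text), a term is represented by a bijection pairing each $\gpvec{y}_i$ with some $\gpvec{x}_j$, optionally with one factor marked as an $f$, or with one ordered pair of $\gpvec{y}$- and one ordered pair of $\gpvec{x}$-indices carried by an $h$; enumerating these for small $N$ reproduces the counts $N!$, $N(N!)$ and $N(N-1)(N!)$:

```python
from itertools import permutations
from math import factorial

def count_g_terms(N):
    """All-g terms: one per bijection pairing y_i with x_{sigma(i)}."""
    return len(list(permutations(range(N))))

def count_f_terms(N):
    """One-f terms: f carries an ordered pair (i, j) of y- and x-indices,
    the remaining N-1 indices are paired by a bijection."""
    terms = set()
    for i in range(N):
        for j in range(N):
            rest_y = [a for a in range(N) if a != i]
            rest_x = [b for b in range(N) if b != j]
            for s in permutations(rest_x):
                terms.add(((i, j), frozenset(zip(rest_y, s))))
    return len(terms)

def count_h_terms(N):
    """One-h terms: h carries an ordered pair of y-indices and an ordered
    pair of x-indices, the remaining N-2 pairs form a bijection."""
    terms = set()
    for iy in permutations(range(N), 2):
        for ix in permutations(range(N), 2):
            rest_y = [a for a in range(N) if a not in iy]
            rest_x = [b for b in range(N) if b not in ix]
            for s in permutations(rest_x):
                terms.add((iy, ix, frozenset(zip(rest_y, s))))
    return len(terms)

for N in (2, 3, 4):
    assert count_g_terms(N) == factorial(N)                # N!
    assert count_f_terms(N) == N * factorial(N)            # N(N!)
    assert count_h_terms(N) == N * (N - 1) * factorial(N)  # N(N-1)(N!)
```

The enumeration also confirms that the $N^2\times(N-1)!$ counting for the $f$-terms produces no duplicates, i.\,e.\ all $N(N!)$ terms are distinct.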
Permuting the $\gpvec{y}_1,\ldots,\gpvec{y}_N$ so that every $\gpvec{y}_i$ is paired with $\gpvec{x}_i$ produces $N!$ times the same $N$ terms involving $(\dot{g}+\dot{f})$ and $N(N-1)$ terms involving $h$, specifically, \begin{equation}\elabel{def_entropyProductionDensity_indistinguishable_interacting1} \dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) = \int \ddint{y_{1,\ldots,N}} \left\{ \sum_{i=1}^N (\dot{g}_i + \dot{f}_i) \prod_{j\ne i}^N \delta_j + \sum_{i=1}^N \sum_{j\ne i}^N \dot{h}_{ij} \prod_{k\notin \{i,j\}}^N \delta_k \right\} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \ , \end{equation} similar to \Eref{def_entropyProductionDensity_indistinguishableN2}. As in the previous section, the logarithmic term is \latin{a priori} unaffected by any of the permutations, because, being based on the propagator, it is invariant under any permutations among the $\gpvec{y}_i$ and among the $\gpvec{x}_i$. However, the same argument as in the previous section applies to every term the logarithm is comprised of: wherever a term demands that $\gpvec{y}_i$ be arbitrarily close to $\gpvec{x}_j$, this proximity needs to be enforced by a $\delta$-function in the kernel, as it otherwise occurs only with vanishing measure. All terms entering the logarithm make this demand in $N-1$ of the $N$ pairs of $\gpvec{y}_i$ and $\gpvec{x}_j$, as $h(\gpvec{y}_k,\gpvec{y}_{\ell};\gpvec{x}_i,\gpvec{x}_j; t'-t)$ is $\delta$-like in $\gpvec{y}_{\ell}-\gpvec{x}_j$.
What remains of the logarithm in \Eref{def_entropyProductionDensity_indistinguishable_interacting1} is therefore \begin{equation}\elabel{def_entropyProductionDensity_indistinguishable_interacting1b} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} = \lim_{t'\downarrow t} \ln\left(\frac {\prod^N_\ell g_\ell + \sum_\ell^N f_\ell \prod^N_{\substack{m\ne \ell}} g_m + \sum_\ell^N\sum_{m\ne \ell}^N h_{\ell m} \prod_{n\notin\{\ell,m\}}^N g_n + \ldots} {\prod^N_\ell \overline{g}_\ell + \sum_\ell^N \overline{f}_\ell \prod^N_{\substack{m\ne \ell}} \overline{g}_m + \sum_\ell^N\sum_{m\ne \ell}^N \overline{h}_{\ell m} \prod_{n\notin\{\ell,m\}}^N \overline{g}_n + \ldots} \right) \ , \end{equation} with the terms in $\ldots$ vanishing as some of the proximities are not enforced. After dividing out $\prod^N_\ell g_\ell/\overline{g}_\ell$ from the argument of the logarithm, the resulting expression for $\dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ in \Eref{def_entropyProductionDensity_indistinguishableN} is identical to that for distinguishable particles, \Eref{entropyProductionDensity_interacting_distinguishable_initial}. The factor of $1/N!$ in \Eref{entropy_production_indistinguishableN}, and the factorial factors produced by marginalisation, \Eref{marginalisation_densityN}, further simplify the total entropy production. Moreover, since the functions $g$, $f$ and $h$ are the same for all particles, the resulting expressions simplify considerably. 
Focussing firstly on the overall structure, using $\dot{\sigma}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ in \Eref{entropy_production_indistinguishableN_as_density} with the same simplifications as carried out on \Eref{entropyProductionDensity_interacting_distinguishable_initial} via \Eref{entropyProduction_interacting_distinguishable_sum} to \Eref{entropyProductionDensity_interacting_distinguishable_ijk} in the case of distinguishable particles gives, for indistinguishable particles, \begin{align}\elabel{entropy_production_indistinguishableN_with_interaction} \dot{S}_{\text{int}}^{(N)}[\rho_N^{(N)}] = \frac{1}{N!} \int \ddint{x_{1,\ldots,N}} \rho_N^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \left\{ \sum_i^N \dot{\sigma}^{(1)}_i(\gpvec{x}_i) + \sum_i^N \sum_{j\ne i}^N \dot{\sigma}^{(2)}_{ij}(\gpvec{x}_i,\gpvec{x}_j) + \sum_i^N \sum_{j\ne i}^N \sum_{k\notin \{i,j\}}^N \dot{\sigma}^{(3)}_{ijk}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k) \right\} \end{align} with the local entropy production $\dot{\sigma}^{(1)}_i(\gpvec{x}_i)$, $\dot{\sigma}^{(2)}_{ij}(\gpvec{x}_i,\gpvec{x}_j)$ and $\dot{\sigma}^{(3)}_{ijk}(\gpvec{x}_i,\gpvec{x}_j,\gpvec{x}_k)$ as defined in \Erefs{entropyProductionDensity_independent_distinguishable_final}, \eref{entropyProductionDensity_interacting_distinguishable_ij}, and \eref{entropyProductionDensity_interacting_distinguishable_ijk} respectively. \Eref{entropy_production_indistinguishableN_with_interaction} is essentially \Eref{entropyProduction_interacting_distinguishable_final} but with a different notion of the $N$-point density $\rho_N^{(N)}$, which can be further simplified by marginalisation, \Erefs{marginalisation_densityN}, \eref{marginalisation_densityN_i_indistinguishable} and \eref{def_rho_N_n_indistinguishable}.
Finally, because the functions $g_i$, $f_i$ and $h_{ij}$ depend on the particle index only in as far as the coordinates are concerned, the summations above can all be carried out and the local entropy productions reduce to \begin{subequations} \elabel{def_entropyProductionDensities_indistinguishable} \begin{align} \elabel{def_entropyProductionDensities_indistinguishable_1} \dot{\sigma}^{(1)}_1(\gpvec{x}_1) &= \int \ddint{y_1} (\dot{g}_1 + \dot{f}_1) \lim_{t'\downarrow t} \left[ \ln\left(\frac{g_1}{\overline{g}_1}\right) + \frac{f_1}{g_1} - \frac{\overline{f}_1}{\overline{g}_1} \right] \\ \elabel{def_entropyProductionDensities_indistinguishable_2} \dot{\sigma}^{(2)}_2(\gpvec{x}_1,\gpvec{x}_2) &= \int \ddint{y_{1,2}} \dot{h}_{12} \lim_{t'\downarrow t} \left\{ \ln\left( \frac {g_1} {\overline{g}_1} \right) + \left[ \frac{f_1}{g_1} - \frac{\overline{f}_1}{\overline{g}_1} \right] + \left[ \frac{h_{1 2}}{g_1 g_2} - \frac{\overline{h}_{1 2}}{\overline{g}_1\overline{g}_2} \right] \right\} + (\dot{g}_1 + \dot{f}_1)\delta_2 \lim_{t'\downarrow t} \left\{ \frac{h_{1 2}}{g_1 g_2} - \frac{\overline{h}_{1 2}}{\overline{g}_1\overline{g}_2} \right\} \\ \elabel{def_entropyProductionDensities_indistinguishable_3} \dot{\sigma}^{(3)}_{3}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3)&= \int \ddint{y_{1,2,3}} \dot{h}_{12}\delta_3 \lim_{t'\downarrow t} \left[ \frac{h_{1 3}}{g_1 g_3} - \frac{\overline{h}_{1 3}}{\overline{g}_1\overline{g}_3} \right] \ , \end{align} \end{subequations} using a slightly more suitable notation, where the subscript of $\dot{\sigma}^{(N)}_n$ refers to the number of particles considered rather than the particle index, as in \Erefs{entropyProductionDensity_independent_distinguishable_final}, \eref{entropyProductionDensity_interacting_distinguishable_ij} and \eref{entropyProductionDensity_interacting_distinguishable_ijk}. 
With these definitions, the integrated entropy production is then \begin{align} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] = \int \ddint{x_1} \rho_{1}^{(N)}(\gpvec{x}_1) \dot{\sigma}^{(1)}_1(\gpvec{x}_1) + \int \ddint{x_{1,2}} \rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2) \dot{\sigma}^{(2)}_2(\gpvec{x}_1,\gpvec{x}_2) + \int \ddint{x_{1,2,3}} \rho_{3}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3) \dot{\sigma}^{(3)}_3(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3)\ , \elabel{entropyProduction_interacting_indistinguishable_final2} \end{align} neatly cancelling the factorial pre-factor. \subsubsection{Example: Entropy production of \texorpdfstring{$N$}{N} pair-interacting indistinguishable particles in an external potential} \seclabel{N_interacting_indistinguishable_particle_extPot} The example of a drift-diffusion particle in an external potential has been introduced in \SMref{simplified_notation}, in particular \Eref{entropyProductionDensity_distinguishable_non-interacting_example}: An example for $g$ is shown in \Eref{example_g_i}, $\dot{g}$ in \Eref{gdot_example}, $f$ in \Eref{f_example}, $\dot{f}$ in \Eref{fdot_example}, $h$ in \Eref{h_example_compact} and $\dot{h}$ in \Eref{hdot_example}. We will use those for $\dot{\sigma}^{(1,2,3)}$, \Erefs{def_entropyProductionDensities_indistinguishable}, in \Erefs{entropyProduction_interacting_indistinguishable_final2}. 
The local entropy production $\dot{\sigma}^{(1)}_1$ is given by \Eref{entropyProductionDensity_distinguishable_non-interacting_example} with the same velocity and diffusion for all particles, \begin{equation} \elabel{entropyProductionDensity1_example_final} \dot{\sigma}^{(1)}_1(\gpvec{x}_1) = -\Upsilon''(\gpvec{x}_1) + \frac{1}{D}\left(\wvec - \Upsilon'(\gpvec{x}_1)\right)^2 \ , \end{equation} which is due to self-propulsion with velocity $\wvec$ in the external potential $\Upsilon(\gpvec{x})$, while $\dot{\sigma}^{(2)}_2$ from \Eref{def_entropyProductionDensities_indistinguishable_2} is, using \Erefs{fdot_example} and \eref{hdot_example_trawler_final}, \begin{equation}\elabel{entropyProductionDensity2_example_final} \dot{\sigma}^{(2)}_2(\gpvec{x}_1,\gpvec{x}_2) = \frac{2} {D} U'(\gpvec{x}_1-\gpvec{x}_2)\cdot \big( \Upsilon'(\gpvec{x}_1) - \wvec \big) + \frac{1}{D} U'^2(\gpvec{x}_1-\gpvec{x}_2) - U''(\gpvec{x}_1-\gpvec{x}_2) \end{equation} which originates from pair interactions, and equals \Eref{entropyProductionDensity_2_example_final} when $\Upsilon\equiv0$ and all drifts are the same. Finally, $\dot{\sigma}^{(3)}_3$ in \Eref{def_entropyProductionDensities_indistinguishable_3} is, using \Eref{h_example_trawler_final}, \begin{align} \dot{\sigma}^{(3)}_{3}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3) &= \int \ddint{y_{1,2,3}} U'(\gpvec{x}_1-\gpvec{x}_2)\cdot\delta_1' \delta_2 \delta_3 \left[ -U'(\gpvec{x}_1-\gpvec{x}_3) \cdot \frac{\gpvec{y}_1-\gpvec{x}_1}{2D} +U'(\gpvec{y}_1-\gpvec{y}_3) \cdot \frac{\gpvec{x}_1-\gpvec{y}_1}{2D} \right] \nonumber\\ \elabel{entropyProductionDensity3_example_final} &= \frac{1}{D} U'(\gpvec{x}_1-\gpvec{x}_2) \cdot U'(\gpvec{x}_1-\gpvec{x}_3) \ , \end{align} showing that, for this choice of interactions, $\dot{\sigma}^{(3)}_{3}$ assigns a distinctive role to particle $1$ while particles $2$ and $3$ play the same role.
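A quick numerical plausibility check of $\dot{\sigma}^{(1)}_1$, \Eref{entropyProductionDensity1_example_final}, can be made for a single particle in one dimension (a sketch with an assumed quartic potential $\Upsilon(x)=x^2/2+x^4/4$, not an example taken from the text): for a Boltzmann density $\rho\propto\exp{-\Upsilon/D}$ both $\int\rho(-\Upsilon''+\Upsilon'^2/D)=D\int\rho''$ and $\int\rho\,\Upsilon'=-D\int\rho'$ vanish as boundary terms, so the integrated entropy production is $0$ at $\wvec=0$ and exactly $w^2/D$ at drift $w$:

```python
import numpy as np

D, w = 0.7, 0.9
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

Up  = x + x**3           # Upsilon'(x)  for Upsilon = x^2/2 + x^4/4
Upp = 1.0 + 3.0 * x**2   # Upsilon''(x)
rho = np.exp(-(x**2 / 2 + x**4 / 4) / D)
rho /= rho.sum() * dx    # normalised single-particle density

def entropy_production(drift):
    """Integrate the local entropy production -Upsilon'' + (w - Upsilon')^2/D
    against the (assumed Boltzmann-shaped) density rho."""
    sigma1 = -Upp + (drift - Up) ** 2 / D
    return (rho * sigma1).sum() * dx

# Boltzmann density, no drift: the entropy production vanishes
assert abs(entropy_production(0.0)) < 1e-8
# with drift w the same density gives w^2/D, as the cross terms integrate away
assert abs(entropy_production(w) - w**2 / D) < 1e-8
```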
Collecting all terms to construct the entropy production of $N$ pair-interacting, indistinguishable particles according to \Eref{entropyProduction_interacting_indistinguishable_final2} from $\dot{\sigma}^{(1)}_1$ in \Eref{entropyProductionDensity1_example_final}, $\dot{\sigma}^{(2)}_2$ in \Eref{entropyProductionDensity2_example_final} and $\dot{\sigma}^{(3)}_3$ in \Eref{entropyProductionDensity3_example_final}, then gives \begin{align} \nonumber \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] =& \int \ddint{x_1} \rho_{1}^{(N)}(\gpvec{x}_1) \left( -\Upsilon''(\gpvec{x}_1) + \frac{1}{D}\left(\wvec - \Upsilon'(\gpvec{x}_1)\right)^2 \right)\\ \nonumber &+ \int \ddint{x_{1,2}} \rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2) \left( \frac{2} {D} U'(\gpvec{x}_1-\gpvec{x}_2)\cdot \left\{ \Upsilon'(\gpvec{x}_1) - \wvec \right\} + \frac{1}{D} U'^2(\gpvec{x}_1-\gpvec{x}_2) - U''(\gpvec{x}_1-\gpvec{x}_2) \right)\\ &+ \int \ddint{x_{1,2,3}} \rho_{3}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3) \left( \frac{1}{D} U'(\gpvec{x}_1-\gpvec{x}_2) \cdot U'(\gpvec{x}_1-\gpvec{x}_3) \right) \ . \elabel{entropyProduction_interacting_indistinguishable_example} \end{align} If $U$ is even, then the term $\rho_2^{(N)}(\gpvec{x}_1,\gpvec{x}_2) U'(\gpvec{x}_1-\gpvec{x}_2)\cdot\wvec$ in the second line of \Eref{entropyProduction_interacting_indistinguishable_example} changes sign under exchange of the dummy variables $\gpvec{x}_1$ and $\gpvec{x}_2$ and thus drops out under integration. The corresponding term projecting on $\Upsilon'(\gpvec{x}_1)$ rather than $\wvec$ does not possess the same symmetry. \Eref{entropyProduction_interacting_indistinguishable_example} with external potential vanishing, $\Upsilon\equiv0$, and even pair potential, $U'(\gpvec{x}_1-\gpvec{x}_2)=-U'(\gpvec{x}_2-\gpvec{x}_1)$, is \Eref{entropyProduction_for_pairPot} in the main text.
\subsubsection*{No entropy production of pair-interacting, indistinguishable, diffusive particles without drift}\seclabel{no_EPR_without_drift} As a sanity check of \Eref{entropyProduction_interacting_indistinguishable_example}, we calculate the entropy production of $N$ pair-interacting, indistinguishable particles, which are subject to diffusion but not to drift, assuming stationarity. In this case, the $N$-point density is Boltzmann, \begin{equation}\elabel{density_indistinguishable_example} \rho_{N}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) = \mathcal{N}^{-1} \exp{-\mathcal{H}/D} \quad\text{ with }\quad \mathcal{H}=\sum_{i=1}^N \Upsilon(\gpvec{x}_i) + \sum_{i=1}^N\sum_{j=i+1}^N U(\gpvec{x}_i-\gpvec{x}_j) \end{equation} and suitable normalisation $\mathcal{N}$, such that \Eref{marginalisation_densityN_consequences} holds. Without drift the entropy production should vanish. The Hamiltonian written in the form \Eref{density_indistinguishable_example} assumes an even pair-potential $U$, but this does not amount to a loss of generality, because odd contributions can be shown to cancel in a Hamiltonian invariant under permutations of indices, as is the case for indistinguishable particles.
To show that the entropy production with \Eref{density_indistinguishable_example} vanishes, we use that the integral of $\nabla_{\gpvec{x}_1}^2 \rho_{N}^{(N)}$ over all space vanishes by Gauss' theorem, and calculate it explicitly, \begin{subequations} \begin{align}\elabel{derivative_density_N_example_equilibrium_identity} 0&= \frac{D}{ (N-1)!} \int \ddint{x_{1,\ldots,N}} \nabla_{\gpvec{x}_1}^2 \rho_{N}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\ldots,\gpvec{x}_N) \\ \nonumber &=\int \ddint{x_1} \rho_{1}^{(N)}(\gpvec{x}_1) \left\{ - \Upsilon''(\gpvec{x}_1) + \frac{\Upsilon'^2(\gpvec{x}_1)}{D} \right\} \\ \nonumber &\quad + \int \ddint{x_{1,2}} \rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2) \left\{ - U''(\gpvec{x}_1-\gpvec{x}_2) + \frac{U'^2(\gpvec{x}_1-\gpvec{x}_2)}{D} + \frac{2 \Upsilon'(\gpvec{x}_1)}{D} \cdot U'(\gpvec{x}_1-\gpvec{x}_2) \right\} \\ &\quad + \int \ddint{x_{1,2,3}} \rho_{3}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3) \frac{U'(\gpvec{x}_1-\gpvec{x}_2)}{D}\cdot U'(\gpvec{x}_1-\gpvec{x}_3) \ , \elabel{entropyProduction_indistinguishable_equilibrium_example_identity} \end{align} \end{subequations} where we have used \Erefs{marginalisation_densityN}, \eref{marginalisation_densityN_i_indistinguishable} and \eref{def_rho_N_n_indistinguishable} and the symmetry of the density under permutation of the arguments. By inspection we find that \Eref{entropyProduction_indistinguishable_equilibrium_example_identity} is \Eref{entropyProduction_interacting_indistinguishable_example} at $\wvec=\zerovec$. In other words, the stationary entropy production of $N$ identical particles, subject to a pair- and an external potential, vanishes in the absence of drift, provided the particles are Boltzmann-distributed. Of course, this is what we expect from simple physical reasoning, but the present calculation offers an important sanity check in particular for the somewhat unusual $3$-point term.
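The cancellation behind \Eref{entropyProduction_indistinguishable_equilibrium_example_identity} can also be checked numerically before marginalisation. The sketch below assumes concrete one-dimensional choices, $N=3$, $D=1$, $\Upsilon(x)=x^2/2$ and $U(r)=a\exp{-r^2/2}$ (these choices are ours, not taken from the text), and evaluates the one-, two- and three-body contributions of $D\nabla_{\gpvec{x}_1}^2\rho_{3}^{(3)}$ on a grid; their sum vanishes although the three-body piece alone does not:

```python
import numpy as np

D, a = 1.0, 0.5                      # diffusion constant, pair coupling
x = np.linspace(-7.0, 7.0, 113)
dx = x[1] - x[0]
x1, x2, x3 = np.meshgrid(x, x, x, indexing="ij", sparse=True)

U1 = lambda r: -a * r * np.exp(-r**2 / 2)          # U'(r) for U = a e^{-r^2/2}
U2 = lambda r: a * (r**2 - 1) * np.exp(-r**2 / 2)  # U''(r)

# Boltzmann weight for Upsilon(x) = x^2/2 and pair potential U, N = 3
H = (x1**2 + x2**2 + x3**2) / 2 \
    + a * (np.exp(-(x1 - x2)**2 / 2)
           + np.exp(-(x1 - x3)**2 / 2)
           + np.exp(-(x2 - x3)**2 / 2))
rho = np.exp(-H / D)                 # unnormalised; normalisation is irrelevant

one_body = rho * (x1**2 / D - 1.0)   # Upsilon' = x, Upsilon'' = 1
two_body = rho * sum(2 * x1 * U1(x1 - xj) / D
                     + U1(x1 - xj)**2 / D - U2(x1 - xj)
                     for xj in (x2, x3))
three_body = rho * 2 * U1(x1 - x2) * U1(x1 - x3) / D

vol = dx**3
Z = rho.sum() * vol
total = (one_body + two_body + three_body).sum() * vol
tb = three_body.sum() * vol

# the sum equals D * int d^3x (d/dx1)^2 rho and must vanish ...
assert abs(total) < 1e-6 * Z
# ... although the three-body contribution alone does not
assert abs(tb) > 1e-4 * Z
```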
\endinput \section{Short-time scaling of diagrams}\seclabel{appendixWhichDiagramsContribute} \paragraph*{Abstract} In the following we will consider different types of diagrams that possibly contribute to the propagator up to first order in time ${\Delta t}=t'-t$, which are the diagrams that contribute to the entropy production. \emph{The general rule emerging from the arguments below is that a diagram containing $m$ blobs decays at least as fast as ${\Delta t}^m$ in small ${\Delta t}$. To calculate the entropy production only diagrams with up to one blob are needed.} Diagrams enter the entropy production either through the kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$, \Eref{transition_from_action} or \Eref{Op_in_diagrams}, or through the logarithmic term $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$, \Eref{Ln_from_propagators} or \Eref{Ln_in_diagrams}. By construction, both of these terms draw only on the first order in the small time difference ${\Delta t}=t'-t$ between creation at time $t$ and annihilation at time $t'$. For the kernel, this is established by the limit $\lim_{t'\downarrow t}$ after differentiation taken in \Erefs{def_Op} and \eref{Op_in_diagrams}. Such an operation extracts the first order in ${\Delta t}$ only, reducing it to the single-particle Fokker-Planck operator plus corrections due to interactions and reactions. For the kernel, there is no need to extract any terms beyond linear order in ${\Delta t}$.
In other words, if the linear order vanishes, the kernel vanishes and thus the entropy production. For the logarithm, the reason why diagrams enter only to first order in ${\Delta t}$ is more subtle; although \Erefs{Ln_from_propagators} and \eref{Ln_in_diagrams} contain a limit similar to \Erefs{def_Op} and \eref{Op_in_diagrams}, \latin{a priori}, L'H\^{o}pital's rule might require much higher derivatives: The first order is needed if the zeroth vanishes, the second if the first and zeroth order both vanish and so on. However, if the first order vanishes, then the kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ vanishes too, and the contribution to the entropy production according to \Eref{def_entropyProduction} is nil (\SMref{DP_plus_EP}, remark after \Eref{Ln_from_TransitionRate}). In this section we derive some general principles on the short-time scaling of diagrams firstly in systems with single particles (see \SMref{drift_diffusion_on_ring}) and later also in systems with multiple particles (see \SMref{MultipleParticles}). In \SMref{contribs_to_prop} we present the basic arguments why contributions to the propagator order by order in the perturbation, shown as a ``blob'' in the diagram, are in fact also order by order in the time ${\Delta t}$ that passes between creation and annihilation, \latin{i.e.}\@\xspace between initialisation and measurement. The argument carries through to more complicated objects, such as star-like vertices, \SMref{interaction_vertices}, although the notion of blobs needs to be clarified in the case of an interaction potential, \Eref{interaction_example}, and joint propagators, \SMref{joint_props}. The scaling of diagrams with internal loops follows the pattern above, \SMref{internal_blobs}. We include a discussion about branching and coagulation vertices in the context of particle-conserving interactions, \SMref{branching_vertices}.
We complete this section with a power-counting argument to show that a diagram with $m$ blobs is of order ${\Delta t}^{m}$, \SMref{general_power_counting}. \subsection{Contributions to the full propagator}\seclabel{contribs_to_prop} First, we determine which diagrams in the full propagator contribute to zeroth order in small ${\Delta t}$. Starting with the simplest such diagrams, we consider first the bare propagator like \Eref{orig_propagator_as_an_inverse} (\SMref{bare_propagator_diffusion}) \begin{equation}\elabel{simple_propagator_with_extra} \tbarePropagator{\gpvec{k},\omega}{\gpvec{k}',\omega'} \corresponds \ave[0]{\phi(\gpvec{k}',\omega')\tilde{\phi}(\gpvec{k},\omega)} = \delta\mkern-6mu\mathchar'26(\omega+\omega') \delta\mkern-6mu\mathchar'26(\gpvec{k}+\gpvec{k}') \left\{ \frac{1}{-\mathring{\imath}\omega'+p} + \order{\omega'^{-2}} \right\} \ , \end{equation} where we assume the typical conservation of momentum in the propagator and allow for some implicit dependence of the pole $\omega'=-\mathring{\imath} p$ on the momentum, $p=p(\gpvec{k})$. The momenta are a proxy for any state-dependence and we will not make use of either $\delta\mkern-6mu\mathchar'26(\gpvec{k}+\gpvec{k}')$ or $p(\gpvec{k})$. Poles may be repeated, but that does not matter in the following considerations. For the following discussion, it is helpful to retain the $\delta\mkern-6mu\mathchar'26(\omega+\omega')$ function, even when it can be easily integrated. It is an indicator of time-translational invariance, to be discussed further. All that matters in \Eref{simple_propagator_with_extra} for the following arguments is that the bare propagator decays \emph{at least} as fast as $(-\mathring{\imath}\omega'+p)^{-1}$ in large $\omega'$, as shown for example in the case of a Markov chain in \Eref{orig_propagator_as_an_inverse}. 
However, since it must implement the feature \eref{propagator_limit_is_delta} \begin{equation}\elabel{propagator_limit_is_delta_again_discrete} \lim_{t'\downarrow t} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} =\delta_{\gpvec{y},\gpvec{x}} \end{equation} for discrete states and, say \begin{equation}\elabel{propagator_limit_is_delta_again} \lim_{t'\downarrow t} \ave[0]{\phi(\gpvec{k}',t')\tilde{\phi}(\gpvec{k},t)} =\delta\mkern-6mu\mathchar'26(\gpvec{k}+\gpvec{k}') \end{equation} for continuous states, it is also clear that it cannot decay faster than $\omega^{-1}$. If it were to decay like, say, $((-\mathring{\imath}\omega' + p_1)(-\mathring{\imath}\omega' + p_2))^{-1}$, then \begin{equation} \lim_{t'\downarrow t}\int\dintbar{\omega'} \exp{-\mathring{\imath}\omega' (t'-t)} \frac{1}{-\mathring{\imath}\omega' + p_1}\frac{1}{-\mathring{\imath}\omega' + p_2} = \int\dintbar{\omega'} \frac{1}{-\mathring{\imath}\omega' + p_1}\frac{1}{-\mathring{\imath}\omega' + p_2} =0 \ , \end{equation} as will be discussed in further detail below. As indicated in \Eref{simple_propagator_with_extra}, a bare propagator might thus have contributions that vanish in large $\omega'$ as fast as $\order{\omega'^{-2}}$ or even faster \cite{BothePruessner:2021,ZhangPruessner:2022,Garcia-MillanPruessner:2021}, but it always has one contribution of the form $(-\mathring{\imath}\omega'+p)^{-1}$. Its inverse Fourier transform reads \begin{equation}\elabel{simple_propagator_Fourier} \int\dintbar{\omega}\dintbar{\omega'} \exp{-\mathring{\imath}(\omega t+\omega't')} \frac{\delta\mkern-6mu\mathchar'26(\omega+\omega')}{-\mathring{\imath} \omega'+ p} = \theta(\Re({\Delta t} p))\operatorname{sgn}({\Delta t}) \exp{-{\Delta t} p} \quad\text{ where }\quad {\Delta t}=t'-t\ , \end{equation} and $\Re({\Delta t} p)$ is the real part of ${\Delta t} p$. 
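The inverse Fourier transform \eref{simple_propagator_Fourier} can be checked by direct numerical quadrature. The sketch below assumes a real, positive pole parameter $p$, splits the $\omega'$-integral into its even (cosine) and odd (sine) parts and evaluates both with \texttt{scipy}'s Fourier-weight quadrature; the helper name \texttt{bare\_propagator\_t} is ours:

```python
import numpy as np
from scipy.integrate import quad

def bare_propagator_t(p, dt):
    """(1/2pi) * int dw exp(-i w dt) / (-i w + p) for real p > 0, computed by
    splitting the integrand into even (cos) and odd (sin) parts and using
    scipy's Fourier-weight quadrature (QAWF) on the half-line."""
    adt = abs(dt)
    C, _ = quad(lambda w: p / (p**2 + w**2), 0, np.inf,
                weight='cos', wvar=adt)
    S, _ = quad(lambda w: w / (p**2 + w**2), 0, np.inf,
                weight='sin', wvar=adt)
    return (C + np.sign(dt) * S) / np.pi

p = 1.3
for dt in (0.1, 0.5, 2.0):
    # theta(dt) exp(-p dt): causal decay after creation ...
    assert abs(bare_propagator_t(p, dt) - np.exp(-p * dt)) < 1e-5
    # ... and no signal before the particle is created
    assert abs(bare_propagator_t(p, -dt)) < 1e-5
```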
Causality, \latin{i.e.}\@\xspace that a particle's presence cannot be measured before it is created, is then enforced by demanding that the real-part of $p$ is positive, so that the Heaviside $\theta$-function in \Eref{simple_propagator_Fourier} vanishes for ${\Delta t}=t'-t<0$. It is therefore safe to assume that all poles $\omega'=-\mathring{\imath} p$ of all propagators are located in the lower half-plane. To simplify the following discussion, we shall henceforth assume \begin{equation}\elabel{simple_propagator} \tbarePropagator{\gpvec{k},\omega}{\gpvec{k}',\omega'} \corresponds \ave[0]{\phi(\gpvec{k}',\omega')\tilde{\phi}(\gpvec{k},\omega)} = \delta\mkern-6mu\mathchar'26(\omega+\omega') \delta\mkern-6mu\mathchar'26(\gpvec{k}+\gpvec{k}') \frac{1}{-\mathring{\imath}\omega'+p} \end{equation} and thus ignore the terms $\order{\omega'^{-2}}$ in \Eref{simple_propagator_with_extra}. Are there any other diagrams contributing to the full propagator to zeroth order in ${\Delta t}$? Corrections to the propagator due to the perturbative part of the action, \latin{e.g.}\@\xspace \Eref{propagator_expansion_app} (\SMref{pert_exp_full_prop}), may be written as \begin{equation}\elabel{propagator_first_order_corrected} \tblobbedPropagator{\gpvec{k},\omega}{\gpvec{k}',\omega'} \corresponds \frac{\delta\mkern-6mu\mathchar'26(\omega+\omega')\delta\mkern-6mu\mathchar'26(\gpvec{k}+\gpvec{k}')}{-\mathring{\imath} \omega' + p_2} \BC_{21} \frac{1}{-\mathring{\imath} \omega' + p_1} \end{equation} with $p_1$, $p_2$ real and positive, and generally dependent on $\gpvec{k}=-\gpvec{k}'$. The effect of the blob in the diagram is captured by $\BC_{21}$. If $\BC_{21}$ is a function of $\omega'$, its $\omega'$ dependence has to be analysed in more detail: Firstly, it cannot introduce poles in $\omega'$ that are located in the upper half-plane, as that would break causality. 
Secondly, $\BC_{21}$ may diverge in $\omega'$ but never as fast as $\omega'$ itself, as it would not be captured in a perturbation theory in general, so it is safe to assume that $\lim_{\omega'\to\infty}\BC_{21}(\omega')/\omega'=0$. Thirdly, $\BC_{21}(t)$ might be dependent on absolute time $t$, as if subject to some external forcing \cite{PauschGarcia-MillanPruessner:2020}, which amounts to allowing for sinks and sources of $\omega'$ in diagrams, which then no longer carry a factor of $\delta\mkern-6mu\mathchar'26(\omega+\omega')$. This generalisation of $\BC_{21}$ \emph{does} indeed invalidate the following arguments, because breaking time-translational invariance means that not all propagators might have to ``carry'' the same $\omega'$. To keep what follows simple, we will, however, assume time-translational invariance, as will be manifest by any contribution to the propagator being proportional to $\delta\mkern-6mu\mathchar'26(\omega+\omega')$. To determine to what order in ${\Delta t}$ the diagram in \eref{propagator_first_order_corrected} contributes to the full propagator, we need to calculate its inverse Fourier transform. We may consider more generally an integral similar to \Eref{derivative_integral}, consisting of $n$ propagators and $n-1$ blobs, \begin{equation}\elabel{def_IntMultiProp} I_{n}({\Delta t})=\int \dintbar{\omega'} \frac{\exp{-\mathring{\imath} \omega' {\Delta t}}} {\prod_{j=1}^{n} (-\mathring{\imath} \omega' + p_j)} \Big( \BC_{n\,n-1}(\omega') \BC_{n-1\,n-2}(\omega') \cdots \BC_{21}(\omega') \Big) \ , \end{equation} which corresponds to the contribution of a diagram involving $n$ propagators each carrying $\omega'$, which vanishes most slowly in $\omega'$. If $\BC$ is constant in $\omega'$, the integrand at ${\Delta t}=0$ decays like $\propto\omega'^{-n}$.
If $\BC(\omega')$ diverges in $\omega'$, say $\BC(\omega')\propto\omega'^\mu$, the overall behaviour of the integrand of \Eref{def_IntMultiProp} is $\propto\omega'^{-n+\mu(n-1)}$, with $\mu<1$ as discussed above. Examples of $I_{1}({\Delta t})$ and $I_{2}({\Delta t})$ are the inverse Fourier transforms of \eref{simple_propagator} and \eref{propagator_first_order_corrected}, respectively. Given that all poles of the integrand in \Eref{def_IntMultiProp} are in the lower half-plane, repeated or not, $I_{n}({\Delta t})$ generally vanishes at ${\Delta t}<0$, for the same reason as \Eref{simple_propagator_Fourier} vanishes. At ${\Delta t}=0$ and $n\ge2$, the contour can be closed by the ML lemma \cite{AblowitzFokas:2003} either in the lower or in the upper half-plane, as the integrand vanishes strictly faster than $\omega'^{-1}$ in large $\omega'$. Because all poles are in the lower half-plane, the integral thus vanishes at ${\Delta t}=0$ and $n\ge2$, meaning that a diagram like \Eref{propagator_first_order_corrected} does not contribute at ${\Delta t}=0$. Not much can be said for ${\Delta t}=0$ and $n=1$, because then the integral \Eref{def_IntMultiProp} is logarithmically divergent. Strictly speaking, however, we are interested in the behaviour of diagrams in the \emph{limit} ${\Delta t}\downarrow0$, as required in \Erefs{def_Op} and \eref{def_Ln}, not at ${\Delta t}=0$. To make this connection, we make the following observation: In the limit of ${\Delta t}\downarrow0$ or ${\Delta t}\uparrow0$, the effect of an exponential like the one in \Eref{def_IntMultiProp} is solely that it directs the closure of the auxiliary contour to determine any of the integrals by the ML lemma. Otherwise, the exponential $\exp{-\mathring{\imath} \omega' {\Delta t}}$ has no further effect: its contribution to the residue in the limit ${\Delta t}\to0$ from above or from below is always a factor $1$, as it converges to unity irrespective of the value of the pole in $\omega'$.
Of course, that by itself does not mean that the value of the integral is the same in both limits, as is known, for example, from the Fourier transform of a Heaviside $\theta$-function. However, if the integral exists at ${\Delta t}=0$ (not just its principal value) and if the auxiliary path can be taken in both half-planes without contributing by the ML lemma, then the value of the integral at ${\Delta t}=0$ is independent of the orientation of the auxiliary path. The integral at ${\Delta t}=0$ provides the ``glue'' between the two limits. In other words, if the integral at ${\Delta t}=0$ exists, then it must be identical to the integral in the limit ${\Delta t}\downarrow0$, but in fact also identical to the integral in the limit ${\Delta t}\uparrow0$. The latter vanishes by causality, which means that if the integral at ${\Delta t}=0$ exists, it vanishes as well and so does the one for ${\Delta t}\downarrow0$. In brief: Defining $I_{n}^\pm=\lim_{{\Delta t}\to0^\pm}I_{n}({\Delta t})$, \Eref{def_IntMultiProp}, then the existence of $I_{n}(0)$ and its independence from the orientation of the auxiliary path guarantees $I_{n}(0)=I_{n}^+$ as well as $I_{n}(0)=I_{n}^-$, but $I_{n}^-=0$, which thus implies $I_{n}^+=0$. If $I_{n}(0)$ exists, then $I_{n}({\Delta t})$ is continuous at ${\Delta t}=0$. We can use the argument in the following form: \emph{If a diagram with $n\geq2$ bare propagators can be calculated for ${\Delta t}=0$, then it is identical to its limits ${\Delta t}\to0^{\pm}$ and necessarily vanishes.} Therefore, contributions to the zeroth order in ${\Delta t}$ solely come from the bare propagator, \Erefs{propagator_limit_is_delta_again_discrete}, \eref{propagator_limit_is_delta_again} and \eref{simple_propagator_Fourier}. Next, we determine which other diagrams in the full propagator contribute to first order in small ${\Delta t}$.
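The vanishing of $I_{n}(0)$ for $n\ge2$ can be confirmed numerically. The sketch below assumes constant blobs, $\BC\equiv1$, and arbitrarily chosen poles $p_j>0$ in the lower half-plane; since the imaginary part of the integrand of \Eref{def_IntMultiProp} at ${\Delta t}=0$ is odd in $\omega'$, only the even real part needs to be integrated (the helper name \texttt{I\_n\_at\_zero} is ours):

```python
import numpy as np
from scipy.integrate import quad

def I_n_at_zero(poles):
    """I_n(0) = (1/2pi) * int dw prod_j 1/(p_j - i w) for constant blobs;
    the imaginary part of the integrand is odd in w and integrates to zero,
    so only the even real part is passed to the quadrature."""
    def re_integrand(w):
        val = 1.0 + 0.0j
        for p in poles:
            val = val / (p - 1j * w)
        return val.real
    val, _ = quad(re_integrand, -np.inf, np.inf)
    return val / (2.0 * np.pi)

# with n >= 2 propagators the contour can be closed in the pole-free upper
# half-plane, so the diagram vanishes at dt = 0, irrespective of the poles
assert abs(I_n_at_zero([1.0, 2.5])) < 1e-6
assert abs(I_n_at_zero([0.7, 1.3, 2.1])) < 1e-6
assert abs(I_n_at_zero([0.5, 1.0, 1.5, 2.0])) < 1e-6
```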
All of the following reasoning is done in Fourier time, \latin{i.e.}\@\xspace frequencies $\omega$, because in Fourier time it comes down to mere power counting, although equivalent arguments can of course be made in direct time. Considering the first derivative of $I_{n}({\Delta t})$, \Eref{def_IntMultiProp}, with respect to ${\Delta t}$, differentiation brings down a factor of $-\mathring{\imath}\omega'$. If there are $n$ bare propagators carrying $\omega'$ and thus $n-1$ factors of perturbative $\BC(\omega')$, the integrand vanishes as fast as $\omega'^{1-n+\mu(n-1)}$. By the same arguments as outlined above, this integral vanishes at ${\Delta t}=0$ provided $1-n+\mu(n-1)<-1$, \latin{i.e.}\@\xspace $2-\mu<n(1-\mu)$. For this to hold for all $n\ge3$ we would need to require henceforth $\mu<1/2$. To simplify what follows, we will assume, however, $\BC(\omega')$ to be independent of $\omega'$. For example \begin{equation} \frac{\mathrm{d}}{\mathrm{d} {\Delta t}}I_{n}({\Delta t})=\int \dintbar{\omega'} \frac{-\mathring{\imath} \omega' \exp{-\mathring{\imath} \omega' {\Delta t}}} {\prod_{j=1}^{n} (-\mathring{\imath} \omega' + p_j)} \prod_{i=1}^{n-1} \BC_{i+1\,i} \ , \end{equation} which vanishes by the ML lemma for ${\Delta t}=0$ and $n\ge3$, such that, say \begin{equation} \left.\partial_{t'} \left( \tikz[baseline=1pt]{ \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{k}_1,t$}; \node at (-1.5,0) [left] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (-1.5,0); \tgenVertex{0,0} \tgenVertex{-1,0} \end{scope} } \right) \right|_{{\Delta t}=0} \corresponds 0 \ . \end{equation} Similar arguments apply to higher derivatives, but these are not relevant in the present work. 
\emph{For $\BC(\omega')$ constant in $\omega'$ it follows that a contribution to the propagator with $n$ legs and thus $n-1$ blobs behaves like $I_{n}({\Delta t})\in\order{{\Delta t}^{n-1}}$ in small ${\Delta t}$.} A tadpole diagram like \begin{equation} \elabel{tadpole_diagram} \tikz[baseline=3pt]{ \begin{scope}[yshift=0cm] \tgenVertex{0,0} \draw[tAactivity] (0.5,0) -- (-0.5,0); \draw[tAactivity] (0,0) to [out=60,in=180] (0.5,0.5); \draw[tAactivity] (0,0) to [out=30,in=00] (0.5,0.5); \end{scope} } \in\order{{\Delta t}} \end{equation} does not contribute to order ${\Delta t}^0$, but does so to order ${\Delta t}$, for the same reasons as \Eref{propagator_first_order_corrected}. The additional loop does not carry $\omega'$ and is in fact independent of external frequencies and momenta. The loop provides a constant pre-factor and does not modify the inverse Fourier transform in any other way. Tadpoles are a rather exotic type of diagram in Doi-Peliti field theories, as they require a source, which is, however, generally found in response field theories \cite{Taeuber:2014,WalterSalbreuxPruessner:2021:FieldTheory}. In summary, corrections to the propagator of the form \Eref{propagator_first_order_corrected} with $n\ge2$ bare propagators carrying $\omega'$ vanish at ${\Delta t}=0$ and in the limit ${\Delta t}\downarrow0$ provided $\mu<1$. Similarly, their first derivatives vanish at ${\Delta t}=0$ and in the limit ${\Delta t}\downarrow0$ for all $n\ge3$ provided $\mu<1/2$. 
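For constant blobs and equal masses $p_j=p$, the scaling $I_{n}({\Delta t})\in\order{{\Delta t}^{n-1}}$ can be checked numerically: in direct time, $I_{n}$ is, up to constant vertex factors, the $n$-fold convolution of bare propagators $\theta(t)\Exp{-pt}$, which equals ${\Delta t}^{n-1}\Exp{-p{\Delta t}}/(n-1)!$. A small sketch (all parameter values are illustrative, not taken from the text):

```python
import numpy as np
from math import factorial

# Bare propagator in direct time: g(t) = theta(t) * exp(-p t).
# The n-fold convolution of n such propagators equals
#   t^(n-1) exp(-p t) / (n-1)!,
# which vanishes like t^(n-1) in small t, i.e. I_n(dt) ~ dt^(n-1).
p = 1.0
t = np.linspace(0.0, 5.0, 5001)
h = t[1] - t[0]
g = np.exp(-p * t)

conv = g.copy()
for n in range(2, 5):
    # rectangle-rule approximation of the convolution integral
    conv = np.convolve(conv, g)[: len(t)] * h
    exact = t ** (n - 1) * np.exp(-p * t) / factorial(n - 1)
    err = np.max(np.abs(conv - exact))
    print(f"n={n}: max deviation {err:.2e}")
```

The deviation is of the order of the grid spacing, confirming the closed form and with it the small-${\Delta t}$ scaling.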
\subsection{Interaction vertices}\seclabel{interaction_vertices} Taking the Fourier transform of a star-like diagram \begin{align}\elabel{general_whiskers} \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm] \tgenVertex{0,0} \node at (20:1cm) [right,yshift=0pt] {$t$}; \node at (0:1cm) [right,yshift=0pt] {$t$}; \node at (-40:1cm) [right,yshift=0pt] {$t$}; \node at (-20:-1cm) [left,yshift=0pt] {$t'$}; \node at (0:-1cm) [left,yshift=0pt] {$t'$}; \node at (40:-1cm) [left,yshift=0pt] {$t'$}; \draw[tAactivity,dotted] (-35:0.8cm) arc (-35:-5:0.8cm); \draw[tAactivity,dotted] (35:-0.8cm) arc (35:5:-0.8cm); \draw[tAactivity] (20:1cm) -- (0,0); \draw[tAactivity] (0:1cm) -- (0,0); \draw[tAactivity] (-40:1cm) -- (0,0); \draw[tAactivity] (-20:-1cm) -- (0,0); \draw[tAactivity] (0:-1cm) -- (0,0); \draw[tAactivity] (40:-1cm) -- (0,0); \end{scope} } &\corresponds I_{\mathsf{*}}({\Delta t}) = \int \dintbar{\omega_{1,\ldots,n}} \dintbar{\omega'_{1,\ldots,n}} \exp{-\mathring{\imath}(\omega_1+\ldots+\omega_n){\Delta t}}\\ &\nonumber \times \delta\mkern-6mu\mathchar'26(\omega_1+\ldots+\omega_n+\omega'_1+\ldots+\omega'_n) \left(\prod_{i=1}^n \frac{1}{-\mathring{\imath}\omega_i+p_i}\right) \left(\prod_{i=1}^n \frac{1}{\mathring{\imath}\omega'_i+p'_i}\right) \end{align} one can show that \begin{equation}\elabel{IntStar_explicit} I_{\mathsf{*}}({\Delta t})=\theta({\Delta t}) \frac{\exp{-{\Delta t}\sum_{i=1}^n p'_i}-\exp{-{\Delta t}\sum_{i=1}^n p_i}}{\sum_{i=1}^n p_i-\sum_{i=1}^n p'_i} \ , \end{equation} so that $I_{\mathsf{*}}(0)=\lim_{{\Delta t}\downarrow0}I_{\mathsf{*}}({\Delta t})=\lim_{{\Delta t}\uparrow0}I_{\mathsf{*}}({\Delta t})=0$, and \begin{equation} \elabel{scaling_star_like_diagram} I_{\mathsf{*}}({\Delta t}) \corresponds \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm] \tgenVertex{0,0} \node at (20:1cm) [right,yshift=0pt] {$t$}; \node at (0:1cm) [right,yshift=0pt] {$t$}; \node at (-40:1cm) [right,yshift=0pt] {$t$}; \node at (-20:-1cm) [left,yshift=0pt] {$t'$}; \node at (0:-1cm) 
[left,yshift=0pt] {$t'$}; \node at (40:-1cm) [left,yshift=0pt] {$t'$}; \draw[tAactivity,dotted] (-35:0.8cm) arc (-35:-5:0.8cm); \draw[tAactivity,dotted] (35:-0.8cm) arc (35:5:-0.8cm); \draw[tAactivity] (20:1cm) -- (0,0); \draw[tAactivity] (0:1cm) -- (0,0); \draw[tAactivity] (-40:1cm) -- (0,0); \draw[tAactivity] (-20:-1cm) -- (0,0); \draw[tAactivity] (0:-1cm) -- (0,0); \draw[tAactivity] (40:-1cm) -- (0,0); \end{scope} } \in \order{{\Delta t}} \ . \end{equation} The interaction vertices in \Erefs{pair_vertex1} and \eref{pair_vertex2} of the type \begin{equation}\elabel{interaction_example} \tikz[baseline=0pt]{ \draw [draw=none,fill=red!25] (0,0) ellipse (0.35cm and 0.75cm); \begin{scope}[yshift=0.0cm] \draw[black,potStyle] (0,-0.4) -- (0,0.4); \draw[black,thin] (-0.1,0.2) -- (0.1,0.2); \draw[red,thin] (-0.2,0.3) -- (-0.2,0.5); \tgenVertex{0,-0.4} \tgenVertex{0,0.4} \node at (0.5,-0.5) [right,yshift=0pt] {$\gpvec{k}_1,t$}; \node at (-0.5,-0.5) [left,yshift=0pt] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,-0.5) -- (0,-0.4) -- (-0.5,-0.5); \node at (0.5,0.5) [right,yshift=0pt] {$\gpvec{k}_2,t$}; \node at (-0.5,0.5) [left,yshift=0pt] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,0.5) -- (0,0.4) -- (-0.5,0.5); \end{scope} } \in\order{{\Delta t}} \end{equation} are in fact of the form $I_{\mathsf{*}}({\Delta t})$ with $n=2$ even when \Eref{interaction_example} seems to contain \emph{two blobs}. However, the dash-dotted vertical line, which represents the interaction potential, is not a propagator $\propto\omega^{-1}$ and has no frequency dependence. In \Eref{interaction_example} we show the two blobs suggestively as if located inside a large, faintly drawn blob. 
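That $I_{\mathsf{*}}({\Delta t})\in\order{{\Delta t}}$, \Eref{scaling_star_like_diagram}, can be read off directly from \Eref{IntStar_explicit} by expanding for small ${\Delta t}>0$; writing $P=\sum_{i=1}^n p_i$ and $P'=\sum_{i=1}^n p'_i$ for brevity (with $P\ne P'$, the limit $P'\to P$ being regular),
\begin{equation}
I_{\mathsf{*}}({\Delta t})
= \frac{\big(1-{\Delta t} P' + \tfrac{1}{2}{\Delta t}^2 P'^2\big) - \big(1-{\Delta t} P + \tfrac{1}{2}{\Delta t}^2 P^2\big)}{P-P'} + \order{{\Delta t}^3}
= {\Delta t} - \frac{{\Delta t}^2}{2}\big(P+P'\big) + \order{{\Delta t}^3} \ ,
\end{equation}
independent of the individual masses to leading order.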
\subsection{Joint propagators}\seclabel{joint_props} The same arguments as above apply to joint propagators, which are just products of diagrams, such as \begin{equation} \tikz[baseline=3pt]{ \begin{scope}[xscale=0.7] \begin{scope}[yshift=0.4cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{k}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{k}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \end{scope} } \in\order{{\Delta t}} \text{ and } \tikz[baseline=3pt]{ \begin{scope}[xscale=0.7] \begin{scope}[yshift=0.4cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{k}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.0cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{k}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \end{scope} } \in \order{{\Delta t}^2} \text{ and } \tikz[baseline=3pt]{ \begin{scope}[xscale=0.7] \begin{scope}[yshift=0.4cm] \tgenVertex{0.25,0} \tgenVertex{-0.25,0} \node at (0.5,0) [right] {$\gpvec{k}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.0cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{k}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \end{scope} } \in \order{{\Delta t}^3} \end{equation} in the sense that these diagrams vanish in small ${\Delta t}$ at least like ${\Delta t}$, ${\Delta t}^2$ and ${\Delta t}^3$ respectively. 
Similarly, \begin{equation} \tikz[baseline=-11pt]{ \begin{scope}[yshift=0.0cm] \tgenVertex{0,-0.15} \node at (0.5,0) [right,yshift=3pt] {$\gpvec{k}_1,t$}; \node at (-0.5,0) [left,yshift=3pt] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (0,-0.15) -- (-0.5,0); \node at (0.5,-0.3) [right,yshift=0pt] {$\gpvec{k}_2,t$}; \node at (-0.5,-0.3) [left,yshift=0pt] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.15) -- (-0.5,-0.3); \begin{scope}[yshift=-0.7cm] \node at (0.5,0) [right] {$\gpvec{k}_3,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_3,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \end{scope} } \in \order{{\Delta t}} \qquad\text{ and }\qquad \tikz[baseline=-11pt]{ \begin{scope}[yshift=0.0cm] \tgenVertex{0,-0.15} \node at (0.5,0) [right,yshift=3pt] {$\gpvec{k}_1,t$}; \node at (-0.5,0) [left,yshift=3pt] {$\gpvec{k}'_1,t'$}; \draw[tAactivity] (0.5,0) -- (0,-0.15) -- (-0.5,0); \node at (0.5,-0.3) [right,yshift=0pt] {$\gpvec{k}_2,t$}; \node at (-0.5,-0.3) [left,yshift=0pt] {$\gpvec{k}'_2,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.15) -- (-0.5,-0.3); \begin{scope}[yshift=-0.7cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{k}_3,t$}; \node at (-0.5,0) [left] {$\gpvec{k}'_3,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \end{scope} } \in \order{{\Delta t}^2} \end{equation} where the scaling of the interaction vertex $I_{\mathsf{*}}({\Delta t})$ with $n=2$ is that of \Eref{scaling_star_like_diagram}. 
\subsection{Internal blobs and loops}\seclabel{internal_blobs} The order of more complicated diagrams such as \begin{equation} \tikz[baseline=-7.5pt]{ \begin{scope}[yshift=0.0cm,xscale=1] \tgenVertex{0,-0.15} \tgenVertex{-0.4cm,-0.07} \node at (0.75cm,0.1) [right,yshift=0pt] {$t$}; \node at (-0.75cm,0.1) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (-0.75cm,0) -- (0,-0.15) -- (0.75cm,0); \node at (0.75cm,-0.3) [right,yshift=0pt] {$t$}; \node at (-0.75cm,-0.3) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (0.75cm,-0.3) -- (0,-0.15) -- (-0.75cm,-0.3); \end{scope} } \in\order{{\Delta t}^2} \,, \quad \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm,xscale=1] \tgenVertex{-0.4cm,0} \tgenVertex{0.4cm,0} \node at (0.9cm,0.2cm) [right,yshift=0pt] {$t$}; \node at (-0.9cm,0.2cm) [left,yshift=0pt] {$t'$}; \node at (0.9cm,-0.2cm) [right,yshift=0pt] {$t$}; \node at (-0.9cm,-0.2cm) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (-0.9cm,0.2) -- (-0.4cm,0); \draw[tAactivity] (-0.9cm,-0.2) -- (-0.4cm,0); \draw[tAactivity] (0.9cm,0.2) -- (0.4cm,0); \draw[tAactivity] (0.9cm,-0.2) -- (0.4cm,0); \draw[tAactivity] (-0.4cm,0) to [out=60,in=120] (0.4cm,0); \draw[tAactivity] (-0.4cm,0) to [out=-60,in=-120] (0.4cm,0); \end{scope} } \in\order{{\Delta t}^2} \,,\quad \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm,xscale=1] \tgenVertex{-0.4cm,0} \tgenVertex{0.4cm,0} \node at (0.9cm,0.35cm) [right,yshift=0pt] {$t$}; \node at (-0.9cm,0.35cm) [left,yshift=0pt] {$t'$}; \node at (0.9cm,-0.35cm) [right,yshift=0pt] {$t$}; \node at (-0.9cm,-0.35cm) [left,yshift=0pt] {$t'$}; \node at (0.9cm,0cm) [right,yshift=0pt] {$t$}; \node at (-0.9cm,0cm) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (-0.9cm,0.3) -- (-0.4cm,0); \draw[tAactivity] (-0.9cm,-0.3) -- (-0.4cm,0); \draw[tAactivity] (-0.9cm,0) -- (-0.4cm,0); \draw[tAactivity] (0.9cm,0.3) -- (0.4cm,0); \draw[tAactivity] (0.9cm,-0.3) -- (0.4cm,0); \draw[tAactivity] (0.9cm,0) -- (0.4cm,0); \draw[tAactivity] (-0.4cm,0) to [out=60,in=120] (0.4cm,0); \draw[tAactivity] 
(-0.4cm,0) to [out=-60,in=-120] (0.4cm,0); \end{scope} } \in\order{{\Delta t}^2} \ , \end{equation} can be determined by studying them as variations of the star-like diagram \Eref{general_whiskers}. The key insight is that any additional internal propagator adds a pole \emph{in the same half-plane} as those of the star-like diagrams. Loops result in additional integrals, but do not change the general argument. \subsection{Branching and Coagulation vertices}\seclabel{branching_vertices} The propagators considered in the entropy production of $N$ particles, \Erefs{entropyProduction_multipleParticles} and \eref{def_trans_multi_diag}, are of the form $\ave{\phi(\gpvec{k}'_1,{\Delta t})\ldots\phi(\gpvec{k}'_N,{\Delta t})\phi^{\dagger}(\gpvec{k}_1,0)\ldots\phi^{\dagger}(\gpvec{k}_N,0)}$. After the Doi-shift $\phi^{\dagger}=1+\tilde{\phi}$, these are represented by possibly disconnected diagrams that have $N$ outgoing legs and \emph{at most} $N$ incoming legs. Since we consider here only processes in which the total particle number is conserved, there is no diagram with more outgoing than incoming legs, such as the branching diagram \begin{equation}\elabel{branching_and_coagulation} \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm,xscale=1] \tgenVertex{0,0} \node at (0:0.5) [right,yshift=0pt] {$t$}; \node at (150:0.5) [left,yshift=0pt] {$t'$}; \node at (-150:0.5) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (0:0.5) -- (0,0); \draw[tAactivity] (150:0.5) -- (0,0); \draw[tAactivity] (-150:0.5) -- (0,0); \end{scope} } \ . \end{equation} To have $N$ outgoing legs therefore takes \emph{at least} $N$ incoming legs. All contributions to $\ave{\phi(\gpvec{k}'_1,{\Delta t})\ldots\phi(\gpvec{k}'_N,{\Delta t})\phi^{\dagger}(\gpvec{k}_1,0)\ldots\phi^{\dagger}(\gpvec{k}_N,0)}$, which in principle can contain diagrams with fewer than $N$ incoming legs, therefore have exactly $N$ incoming and $N$ outgoing legs in the processes considered here. 
This constraint, together with the absence of branching vertices, implies that coagulation-like vertices, which have more incoming legs than outgoing legs, such as \begin{equation} \tikz[baseline=-4pt]{ \begin{scope}[yshift=0.0cm,xscale=1] \tgenVertex{0,0} \node at (30:0.5) [right,yshift=0pt] {$t$}; \node at (-30:0.5) [right,yshift=0pt] {$t$}; \node at (180:0.5) [left,yshift=0pt] {$t'$}; \draw[tAactivity] (30:0.5) -- (0,0); \draw[tAactivity] (-30:0.5) -- (0,0); \draw[tAactivity] (180:0.5) -- (0,0); \end{scope} } \end{equation} do not contribute to the propagators needed to calculate the entropy production, even if they are present in the Doi-Peliti field theory as a result of particle interactions, \latin{e.g.}\@\xspace \Erefs{pair_vertex1} and \eref{pair_vertex2}. \subsection{General power counting}\seclabel{general_power_counting} We complete the present discussion with a power-counting argument showing that any diagram containing $m$ blobs, or vertices, scales like ${\Delta t}^{m}$ in the short-time limit ${\Delta t}\to0$. This section is a generalisation of \SMref{contribs_to_prop}, since it includes diagrams possibly involving internal loops. As disconnected diagrams scale like the product of their connected components, we restrict the discussion to connected diagrams. Those are made from propagators and blobs, so that any two blobs are connected by a propagator proportional to $\omega^{-1}$ (with the exception of \Eref{interaction_example}). Since we consider here only \emph{vertices} with as many incoming as outgoing legs, we can further restrict the discussion to \emph{diagrams} with as many incoming as outgoing legs. A connected diagram with $n$ incoming and outgoing legs can be thought of as being made from $n$ disconnected bare propagators which are ``tied together'' by inserting $m$ vertices. Initially, the $n$ propagators each scale like $\omega'^{-1}\delta\mkern-6mu\mathchar'26(\omega'+\omega)$. 
The insertion of an $\ell$-legged vertex, with $\ell$ incoming and $\ell$ outgoing legs, splices $\ell$ propagators, effectively creating $\ell$ ``internal'' ones, where we may ignore any additional internal $\delta\mkern-6mu\mathchar'26$-functions as having cancelled with internal integrals. These internal integrals are trivial, as opposed to the ones discussed below. As the vertex is time-translation invariant, it will further introduce integrals over $\ell$ internal $\omega$ as well as a single $\delta\mkern-6mu\mathchar'26$-function. Each such $\ell_i$-legged vertex $i$, with $i=1,\ldots,m$, effectively introduces $\ell_i$ new propagators by splicing, so that the total count of new propagators is $\mathcal{L}=\sum_{i=1}^m\ell_i$. Each of those gives rise to an internal $\omega$-integral, so that there are $\mathcal{I}=\mathcal{L}$ such internal, non-trivial integrals. There are $m$ vertices inserted, each of which gives rise to a frequency-conserving $\delta\mkern-6mu\mathchar'26$-function, so that, starting with $n$ such $\delta\mkern-6mu\mathchar'26$-functions from the initially disconnected $n$ propagators, there is a total of $\mathcal{K}=n+m$ $\delta\mkern-6mu\mathchar'26$-functions and a total of $\mathcal{N}=n+\mathcal{L}$ propagators. The final diagram is obtained by carrying out the $\mathcal{I}$ internal integrals, using up as many $\delta\mkern-6mu\mathchar'26$-functions as possible, but at most $\mathcal{K}-1$, as the overall diagram has a $\delta\mkern-6mu\mathchar'26$-prefactor. The remaining $\mathcal{I}-(\mathcal{K}-1)\ge0$ integrals are loops. Starting with an integrand that goes like $\omega^{-\mathcal{N}}$ in large $\omega$ and integrating $\mathcal{I}$ times with the help of $\mathcal{K}-1$ $\delta\mkern-6mu\mathchar'26$-functions produces a final diagram that scales like $\omega'^{-\mathcal{N}+\mathcal{I}-(\mathcal{K}-1)}\delta\mkern-6mu\mathchar'26(\omega'_1+\ldots)$. 
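This bookkeeping can be checked mechanically. In the following sketch the function name and the random choice of vertex sizes are ours, purely for illustration:

```python
from itertools import product
from random import randint, seed

def omega_exponent(n, legs):
    """Large-frequency exponent of a connected diagram with n incoming
    (and n outgoing) external legs, built by inserting vertices with
    legs[i] incoming (= outgoing) legs each, following the counting above."""
    m = len(legs)                 # number of inserted vertices
    L = sum(legs)                 # internal propagators created by splicing
    N = n + L                     # total number of propagators
    I = L                         # internal, non-trivial frequency integrals
    K = n + m                     # frequency-conserving delta functions
    return -N + I - (K - 1)

seed(0)
for n, m in product(range(1, 5), range(0, 5)):
    legs = [randint(1, 4) for _ in range(m)]
    # the exponent depends only on n and m, not on the vertex sizes
    assert omega_exponent(n, legs) == -2 * n - m + 1
print("exponent = -2n - m + 1 for all tested diagrams")
```

The cancellation of $\mathcal{L}$ between $\mathcal{N}$ and $\mathcal{I}$ is what makes the exponent independent of the individual vertex sizes.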
From the above \begin{equation} -\mathcal{N}+\mathcal{I}-(\mathcal{K}-1) = - (n+\mathcal{L}) +\mathcal{L} - (n+m-1) =-2n-m+1 \ , \end{equation} \latin{i.e.}\@\xspace the diagram behaves in large, external $\omega'$ like $\omega'^{-2n-m+1}\delta\mkern-6mu\mathchar'26(\omega_1'+\ldots)$. Carrying out the inverse Fourier transform over the $2n$ external $\omega'$, using up the remaining $\delta\mkern-6mu\mathchar'26$-function, thus produces an integral proportional to ${\Delta t}^m$. This is the desired scaling behaviour. As a consistency check, the propagator corrections of \SMref{contribs_to_prop} have $n=1$, so that a diagram with $m$ blobs behaves like $\omega'^{-m-1}$, consistent with $I_{m+1}({\Delta t})\in\order{{\Delta t}^{m}}$. \endinput \section{From Master and Fokker-Planck Equation to Field Theory to Entropy Production} \seclabel{DP_plus_EP} \paragraph*{Abstract} In this section, we derive a Doi-Peliti field theory with arbitrarily many particles from the parameterisation of a single-particle master equation with discrete states. \Eref{time_evolution_operator_op_only} shows that the transition matrix of the master \Eref{rho_i_ME_matrix_rewrite} is identical to the transition matrix in the action of the field theory. It is further shown that any continuum limit that is taken in the master equation in order to obtain a Fokker-Planck equation (FPE) can equivalently be performed in the action of the field theory. As a result, we find a direct mapping from an FPE to a Doi-Peliti action, \Erefs{rho_i_ME_matrix_rewrite_kernel} and \eref{def_harmonic_action}. Rather than taking the canonical but cumbersome route from FPE to discretised master equation to discrete action to continuum action, we therefore determine a direct and very simple route from FPE to action. In \SMref{the_propagator} we construct the bare propagator \Eref{orig_propagator_as_an_inverse} of a Markov chain in $\omega$ and \Eref{bare_propagator_realtime} in direct time. 
Allowing for a perturbative term in the action leads in principle to infinitely many additional terms in the propagator, but crucially to only a single correction of the short-time derivative, \Eref{summary_propagator_first_order_summary_deri}, \SMref{appendixWhichDiagramsContribute}. \SMref{entropy_production} retraces the basic reasoning of Gaspard's formulation \cite{Gaspard:2004} of the (internal) entropy production (rate) of a Markovian system, introducing in particular the kernel $\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}$, \Eref{def_Op_app}, the logarithm $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}}$, \Eref{def_Ln_app}, and the local entropy production $\dot{\sigma}$, \Eref{def_entropyProductionDensity_app}. In \SMref{EP_from_propagators}, kernel and logarithm are expressed in terms of diagrams and thus in terms of the propagator, \Eref{Op_and_Ln_from_diagrams}, and ultimately the action and the master equation, \Erefs{Op_in_diagrams} and \eref{Ln_from_propagator}. In \SMref{EP_cont} this result is extended to continuous degrees of freedom, \Erefs{Op_continuum_Fpop_plus} and \eref{Ln_for_continuous}. \subsection{Master equation} In the following, $\MC_{\gpvec{y}\gpvec{x}}$ for $\gpvec{y}\ne\gpvec{x}$ denotes the non-negative rate of transition of a particle from state $\gpvec{x}$ to state $\gpvec{y}$. The object $\MC$ may be thought of as a ``hopping matrix''. A particle is then being lost from state $\gpvec{y}$ by a hop with rate $\sum_{\gpvec{x},\gpvec{x}\ne\gpvec{y}} \MC_{\gpvec{x}\gpvec{y}}$. With suitable definition of $\MC_{\gpvec{y}\yvec}$, \Eref{def_Fjj}, $\MC_{\gpvec{y}\gpvec{x}}$ has the form of a usual (conservative) Markov-matrix in continuous time. 
If $\rho(\gpvec{x},t)$ is the probability to find an \emph{individual} particle at time $t$ in state $\gpvec{x}$, which might be interpreted as a position in space, then a single degree of freedom evolves according to the master equation of a continuous time Markov chain, \begin{equation}\elabel{rho_i_ME_orig} \dot{\rho}(\gpvec{y},t) = \sum_{\gpvec{x}\atop \gpvec{x}\ne\gpvec{y}} \big( \MC_{\gpvec{y}\gpvec{x}}\rho(\gpvec{x},t) - \MC_{\gpvec{x}\gpvec{y}} \rho(\gpvec{y},t) \big) \ , \end{equation} which is the usual Markovian evolution. The sum in \Eref{rho_i_ME_orig} runs over all states $\gpvec{x}$, excluding $\gpvec{x}=\gpvec{y}$. To cater for the needs of the field theory, we need to break conservation of the Markovian evolution. We therefore amend \Eref{rho_i_ME_orig} by a term representing spontaneous extinction of a particle in state $\gpvec{y}$ with rate $r_\gpvec{y}$, \begin{equation}\elabel{rho_i_ME_mass} \dot{\rho}(\gpvec{y},t) = - r_\gpvec{y} \rho(\gpvec{y},t) + \sum_{\gpvec{x}\atop \gpvec{x}\ne\gpvec{y}} \big( \MC_{\gpvec{y}\gpvec{x}}\rho(\gpvec{x},t) - \MC_{\gpvec{x}\gpvec{y}} \rho(\gpvec{y},t) \big) \ . \end{equation} The mass $r_\gpvec{y}>0$ is necessary to make the Doi-Peliti field theory causal. In the present work, it is a mere technicality and will be taken to $0^+$ whenever convenient. Introducing the additional definition \begin{equation}\elabel{def_Fjj} \MC_{\gpvec{y}\yvec}=-\sum_{\gpvec{x}\atop \gpvec{x}\ne \gpvec{y}} \MC_{\gpvec{x}\gpvec{y}} \ , \end{equation} allows us to rewrite the master equation in terms of a single rate matrix or \emph{kernel} $-r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}}+\MC_{\gpvec{y}\gpvec{x}}$, so that \begin{equation}\elabel{rho_i_ME_matrix_rewrite} \dot{\rho}(\gpvec{y},t) = \sum_{\gpvec{x}} \rho(\gpvec{x},t) \left[ -r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \MC_{\gpvec{y}\gpvec{x}} \right] \ , \end{equation} using the Kronecker $\delta$-function $\delta_{\gpvec{y},\gpvec{x}}$. 
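The structure of the kernel in \Eref{rho_i_ME_matrix_rewrite} is easily illustrated numerically: with the diagonal chosen according to \Eref{def_Fjj}, the columns of $\MC$ sum to zero, so that at $r_\gpvec{y}=0$ the total probability $\sum_\gpvec{y}\rho(\gpvec{y},t)$ is conserved. A small sketch with randomly drawn, illustrative rates:

```python
import numpy as np

rng = np.random.default_rng(1)
S = 6                                   # number of discrete states (illustrative)
M = rng.random((S, S))                  # M[y, x]: rate for hopping x -> y
np.fill_diagonal(M, 0.0)                # rates are defined for y != x only
np.fill_diagonal(M, -M.sum(axis=0))     # Eq. (def_Fjj): M_yy = -sum_{x != y} M_xy

rho = rng.random(S)
rho /= rho.sum()                        # normalised one-particle distribution
drho = M @ rho                          # right-hand side of the master equation
print(drho.sum())                       # vanishes: probability is conserved
```

The extinction term $-r_\gpvec{y}\rho(\gpvec{y},t)$ would break exactly this column-sum property, which is why $r_\gpvec{y}$ is taken to $0^+$ whenever convenient.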
\subsubsection{Continuum limit} \seclabel{continuum_limit} It is instructive to consider the example of discretised drift-diffusion in one dimension, \begin{equation}\elabel{transMatDiffusion} \MC_{b a}= h_r \big(\delta_{a+1,b} - \delta_{a,b}\big) + h_l \big(\delta_{a-1,b} - \delta_{a,b}\big) \end{equation} with $h_l$ and $h_r$ the rates of hopping left and right respectively, and with $a$ and $b$ numbering the positions of origin and destination on a ring. In this case, the continuum limit of the master equation can be taken by introducing the parameterisation $D=(h_r+h_l){\Delta x}^2 /2$ and $w=(h_r-h_l){\Delta x}$, so that $h_r=D/{\Delta x}^2+w/(2{\Delta x})$ and $h_l=D/{\Delta x}^2-w/(2{\Delta x})$. The rates $h_r$ and $h_l$ are guaranteed to be positive for any $D>0$ and sufficiently small ${\Delta x}$. The master \Eref{rho_i_ME_matrix_rewrite} with the rate matrix $\MC_{b a}$ given in \Eref{transMatDiffusion} can now be rewritten as \begin{equation}\elabel{ME_diffusion_example} \dot{\rho}(b,t) = - r_b \rho(b,t) + D \frac{\rho(b+1,t)-2\rho(b,t)+\rho(b-1,t)}{{\Delta x}^2} - w \frac{\rho(b+1,t)-\rho(b-1,t)}{2{\Delta x}} \end{equation} which turns into \begin{equation}\elabel{FPE_diffusion_example} \dot{\tilde{\rho}}(\gpvec{y},t) = \hat{\LC}_\gpvec{y} \tilde{\rho}(\gpvec{y},t) \quad\text{ with }\quad \hat{\LC}_\gpvec{y} = - r_\gpvec{y} + D \partial_\gpvec{y}^2 - w \partial_\gpvec{y} \ , \end{equation} after introducing $\tilde{\rho}(\gpvec{y}=b{\Delta x},t)=\rho(b,t)/{\Delta x}$ and taking the continuum limit ${\Delta x}\to0$ while maintaining $\int\dint{y}\tilde{\rho}(y)=1$. 
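The convergence of the discrete generator to $D\partial_\gpvec{y}^2-w\partial_\gpvec{y}$ can be verified directly. A small sketch (the values of $D$, $w$ and the test profile are illustrative choices, not taken from the text):

```python
import numpy as np

# Continuum-limit check for the drift-diffusion rates
#   h_r = D/dx^2 + w/(2 dx),  h_l = D/dx^2 - w/(2 dx):
# acting on rho(b) they should converge to (D d^2/dx^2 - w d/dx) rho(x).
D, w = 0.7, 0.3

def discrete_generator(rho, dx):
    """Right-hand side of the master equation (r = 0) on a ring."""
    h_r = D / dx**2 + w / (2 * dx)
    h_l = D / dx**2 - w / (2 * dx)
    gain = h_r * np.roll(rho, 1) + h_l * np.roll(rho, -1)
    return gain - (h_r + h_l) * rho

errs = []
for n in (100, 200, 400):
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    rho = np.sin(x)
    exact = -D * np.sin(x) - w * np.cos(x)   # D rho'' - w rho'
    errs.append(np.max(np.abs(discrete_generator(rho, dx) - exact)))

print(errs)  # error shrinks ~4x per grid refinement (second order)
```

The second-order convergence reflects the symmetric choice of rates; an asymmetric splitting of the drift would converge only to first order.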
The continuum limit may be taken directly on the rates $\MC_{b a}$ in \Eref{transMatDiffusion}, \begin{equation} \lim_{{\Delta x}\to0} \frac{h_r}{{\Delta x}} \big(\delta_{a+1,b} - \delta_{a,b}\big) + \frac{h_l}{{\Delta x}} \big(\delta_{a-1,b} - \delta_{a,b}\big) = D \delta''(\gpvec{y}-\gpvec{x})-w \delta'(\gpvec{y}-\gpvec{x}) \end{equation} using \begin{subequations} \begin{align} \lim_{{\Delta x}\to0} {\Delta x}^{-2} \big(\delta_{a+1,b} - \delta_{a-1,b}\big) &=-2 \partial_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) = -2 \delta'(\gpvec{y}-\gpvec{x})\\ \lim_{{\Delta x}\to0} {\Delta x}^{-3} \big(\delta_{a+1,b} - 2\delta_{a,b} + \delta_{a-1,b}\big) &= \partial^2_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) = \delta''(\gpvec{y}-\gpvec{x}) \end{align} \end{subequations} with $\gpvec{y}=b{\Delta x}$ and $\delta(\gpvec{y}-\gpvec{x}) = \lim_{{\Delta x}\to0} {\Delta x}^{-1} \delta_{a,b}$, the Dirac $\delta$-function defined in terms of the Kronecker $\delta$-function. The kernel to be used in \Eref{rho_i_ME_matrix_rewrite} thus becomes a rate density, \begin{align} \lim_{{\Delta x}\to0} {\Delta x}^{-1} \left[ -r_b \delta_{b,a} + \MC_{b a} \right] = & \left[ -r_\gpvec{y} \delta(\gpvec{y}-\gpvec{x}) + D \delta''(\gpvec{y}-\gpvec{x}) - w \delta'(\gpvec{y}-\gpvec{x})\right] \nonumber\\ = & \hat{\LC}_y \delta(y-x) = \FPop_{yx} \elabel{continuum_limit_kernel} \end{align} to be used in the continuum limit of \Eref{rho_i_ME_matrix_rewrite}, which now produces an integral, \begin{equation}\elabel{rho_i_ME_matrix_rewrite_kernel} \dot{\rho}(y,t) = \int\dint{x} \FPop_{yx} \rho(x,t) \ , \end{equation} that turns into the usual FPE \begin{equation}\elabel{FPE_standard} \dot{\rho}(y,t) = \big( D \partial_y^2 - w \partial_y - r\big) \rho(y,t) \ , \end{equation} using \Eref{continuum_limit_kernel}. The procedure above is readily generalised to higher dimensions. 
To summarise this section, a master equation of the form \Eref{rho_i_ME_matrix_rewrite} can be turned into an FPE like \Erefs{rho_i_ME_matrix_rewrite_kernel} or \eref{FPE_standard} via a suitable continuum limit. \Erefs{rho_i_ME_mass}, \eref{rho_i_ME_matrix_rewrite} and later \eref{rho_i_ME_matrix_rewrite_kernel} form the basis of the action to be determined in the following. \subsection{Doi-Peliti field theory} To build a Doi-Peliti field theory on the basis of the hopping matrix $\MC$ and the extinction rates $r_\gpvec{y}$ that parameterise the master \Eref{rho_i_ME_mass}, we need to introduce the probability $\mathcal{P}\left( \{n\}; t \right)$ of occupation numbers $\{n\}=\{n_1,\ldots,n_\gpvec{y},\ldots\}$, which quantify the number of particles on each site $\gpvec{y}$. Each of these particles is concurrently subject to a Poissonian change of state, \begin{align} \ddX{t}{} \mathcal{P}\left( \{n\}; t \right) = & \sum_{\gpvec{y}} \Big\{ (n_\gpvec{y}+1) r_\gpvec{y} \mathcal{P}\left( \{\ldots, n_\gpvec{y}+1, \ldots\}; t \right) - n_\gpvec{y} r_\gpvec{y} \mathcal{P}\left( \{n\}; t \right) \Big\}\nonumber\\ \elabel{Pdot} & + \sum_{\gpvec{y}}\sum_{\substack{\gpvec{x}\\ \gpvec{x}\ne\gpvec{y}}} \Big\{ (n_\gpvec{x}+1) \MC_{\gpvec{y}\gpvec{x}} \mathcal{P}\left( \{\ldots, n_\gpvec{x}+1, \ldots, n_\gpvec{y}-1, \ldots\}; t \right) - n_\gpvec{y} \MC_{\gpvec{x}\gpvec{y}} \mathcal{P}\left( \{n\}; t \right) \Big\} \ . \end{align} The second index $\gpvec{x}$ in the double sum cannot take the value $\gpvec{x}=\gpvec{y}$, as otherwise the configuration $\{\ldots, n_\gpvec{x}+1, \ldots, n_\gpvec{y}-1, \ldots\}$ is ill-defined. That $\MC_{\gpvec{y}\gpvec{x}}$ features on the right only with $\mathcal{P}$ whose arguments $\{n\}$ have identical $\sum_i n_i$ is an expression of the conservation of particles in the dynamics parameterised by $\MC_{\gpvec{y}\gpvec{x}}$. 
The master \Eref{Pdot} on the basis of occupation numbers differs crucially from the master \Eref{rho_i_ME_mass} on the basis of the state of a particle, in that the former tracks many particles simultaneously, while explicitly preserving the particle nature of the degrees of freedom, whereas the latter captures only the one-point density, \latin{i.e.}\@\xspace strictly the probability to find a single particle at a particular point. However, there is nothing in \Eref{rho_i_ME_mass} that forces $\gpvec{x}$ to be the sole degree of freedom of a \emph{particle} and $\rho(\gpvec{x},t)$ to be its probability. In fact, $\rho(\gpvec{x},t)$ may equally be an arbitrarily divisible quantity such as heat and \Eref{rho_i_ME_mass} its evolution. \Eref{rho_i_ME_mass} correctly describes the evolution of the one-point density of a \emph{particle}, but it makes no \emph{demand} on the particle nature and contains no information about higher correlation functions. In \Eref{Pdot}, on the other hand, the occupation numbers are strictly particle \emph{counts}, \latin{i.e.}\@\xspace non-negative integers. In order to arrive at \Eref{Pdot} from \Eref{rho_i_ME_mass} we have to demand that \Eref{rho_i_ME_mass} describes the probabilistic evolution of a single particle and \Eref{Pdot} the corresponding independent evolution of many of them. And yet, because \Eref{Pdot} draws on the same transition matrix as \Eref{rho_i_ME_mass}, we will be able to take in \Eref{Pdot} the same continuum limit that turned \Eref{rho_i_ME_mass} into \eref{FPE_diffusion_example}. We proceed by casting \Eref{Pdot} in a Doi-Peliti action following a well-established procedure \cite{Doi:1976,Peliti:1985,Taeuber:2014,TaeuberHowardVollmayr-Lee:2005,Cardy:2008}. 
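As a check of this correspondence, the first moment $\ave{n_\gpvec{y}}(t)=\sum_{\{n\}} n_\gpvec{y} \mathcal{P}\left( \{n\}; t \right)$ of \Eref{Pdot} can be shown to obey
\begin{equation}
\ddX{t}{} \ave{n_\gpvec{y}} = - r_\gpvec{y} \ave{n_\gpvec{y}} + \sum_{\gpvec{x}\atop \gpvec{x}\ne\gpvec{y}} \big( \MC_{\gpvec{y}\gpvec{x}}\ave{n_\gpvec{x}} - \MC_{\gpvec{x}\gpvec{y}} \ave{n_\gpvec{y}} \big) \ ,
\end{equation}
which is precisely the single-particle master \Eref{rho_i_ME_mass} with $\rho(\gpvec{y},t)$ replaced by the expected occupation $\ave{n_\gpvec{y}}(t)$.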
The temporal evolution of the weighted sum over Fock states $\ket{\{n\}}$, \begin{equation} \ket{\psi}(t) = \sum_{\{n\}} \mathcal{P}\left( \{n\}; t \right) \ket{\{n\}} \end{equation} can be written in terms of ladder operators $\creatX{}=1+\creatDSX{}$ and $\annihX{}$ as \begin{equation}\elabel{time_evolution_operator} \ddX{t} \ket{\psi}(t) = \hat{\AC} \ket{\psi}(t) \end{equation} with a time-evolution operator as simple as \begin{equation}\elabel{time_evolution_operator_op_only} \hat{\AC} = - \sum_{\gpvec{y}} r_\gpvec{y} \creatDSX{\gpvec{y}}\annihX{\gpvec{y}} + \sum_{\gpvec{y}}\sum_{\substack{\gpvec{x}\\ \gpvec{x}\ne\gpvec{y}}} \MC_{\gpvec{y}\gpvec{x}} \Big\{ \creatDSX{\gpvec{y}}-\creatDSX{\gpvec{x}} \Big\} \annihX{\gpvec{x}} \ . \end{equation} The term $\creatDSX{\gpvec{y}}-\creatDSX{\gpvec{x}}$ indicates a conservative particle transition from state $\gpvec{x}$ to state $\gpvec{y}$ parameterised by $\MC_{\gpvec{y}\gpvec{x}}$, whereas $r_\gpvec{y}\creatDSX{\gpvec{y}} \annihX{\gpvec{y}}$ in \Eref{time_evolution_operator_op_only} is the signature of spontaneous particle extinction from state $\gpvec{y}$ with rate $r_\gpvec{y}$. Using \Eref{def_Fjj} to rewrite \Eref{time_evolution_operator_op_only} as \begin{equation} \elabel{time_evolution_operator_op_only_rewrite} \hat{\AC} = \sum_{\gpvec{y}} \creatDSX{\gpvec{y}} \sum_\gpvec{x} \annihX{\gpvec{x}} \bigg[ - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \MC_{\gpvec{y}\gpvec{x}} \bigg] \end{equation} reveals how closely the Fock-space time-evolution operator $\hat{\AC}$ is related to the master \Eref{rho_i_ME_matrix_rewrite}, as the square-bracketed rate matrix $[-r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \MC_{\gpvec{y}\gpvec{x}} ]$ in \Eref{time_evolution_operator_op_only_rewrite} is the same as the one in \Eref{continuum_limit_kernel}. In \Eref{continuum_limit_kernel} we show that its continuum limit is the kernel $\FPop_{yx}$. 
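The step from \Eref{time_evolution_operator_op_only} to \Eref{time_evolution_operator_op_only_rewrite}, spelled out, uses \Eref{def_Fjj} to absorb the loss term into the diagonal,
\begin{equation}
\sum_{\gpvec{y}}\sum_{\substack{\gpvec{x}\\ \gpvec{x}\ne\gpvec{y}}} \MC_{\gpvec{y}\gpvec{x}} \Big\{ \creatDSX{\gpvec{y}}-\creatDSX{\gpvec{x}} \Big\} \annihX{\gpvec{x}}
= \sum_{\gpvec{y}}\sum_{\substack{\gpvec{x}\\ \gpvec{x}\ne\gpvec{y}}} \MC_{\gpvec{y}\gpvec{x}} \creatDSX{\gpvec{y}} \annihX{\gpvec{x}}
+ \sum_{\gpvec{x}} \MC_{\gpvec{x}\gpvec{x}} \creatDSX{\gpvec{x}} \annihX{\gpvec{x}}
= \sum_{\gpvec{y},\gpvec{x}} \creatDSX{\gpvec{y}} \MC_{\gpvec{y}\gpvec{x}} \annihX{\gpvec{x}} \ ,
\end{equation}
as the second sum supplies exactly the missing diagonal terms $\gpvec{x}=\gpvec{y}$.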
Proceeding along the canonical path \cite{TaeuberHowardVollmayr-Lee:2005,Cardy:2008,Taeuber:2014} turns \Eref{time_evolution_operator_op_only_rewrite} into the harmonic action \begin{equation}\elabel{def_harmonic_action_discrete} \AC_0 = \int \dint{t} \sum_\gpvec{y} \tilde{\phi}(\gpvec{y},t) \sum_\gpvec{x} \bigg[ - \partial_t \delta_{\gpvec{y},\gpvec{x}} - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \MC_{\gpvec{y}\gpvec{x}} \bigg] \phi(\gpvec{x},t) \ . \end{equation} Comparing again to the original master \Erefs{rho_i_ME_mass} and \eref{rho_i_ME_matrix_rewrite} shows their simple relationship to the action. Upon taking the continuum limit, just as in \Eref{rho_i_ME_matrix_rewrite_kernel}, the sum over $\gpvec{x}$ turns into an integral. To turn the sum over $\gpvec{y}$ into an integral, the product of the fields $\tilde{\phi}\phi$ is to be rescaled to a density. This is a trivial operation, as the fields are dummy variables, \begin{equation}\elabel{def_harmonic_action} \AC_0 = \int \dint{t} \int \ddint{x}\ddint{y} \tilde{\phi}(\gpvec{y},t) \bigg[ - \partial_t \delta(\gpvec{y}-\gpvec{x}) + \FPop_{yx} \bigg] \phi(\gpvec{x},t) \ . \end{equation} Again, \Eref{def_harmonic_action} bears a striking resemblance to the master \Eref{rho_i_ME_matrix_rewrite_kernel}. Of course, \Eref{def_harmonic_action} simplifies significantly as some of the integrals can easily be carried out in the presence of $\delta$-functions. After turning observables into fields, expectations on the basis of the harmonic action are calculated as \begin{equation}\elabel{def_ave_0} \ave[0]{\bullet} = \int \Dint{\phi}\Dint{\tilde{\phi}} \bullet \Exp{\AC_0} \ . \end{equation} In a Doi-Peliti field theory all terms in the action arise from a master equation. More complicated ones, in particular those that describe interaction and reaction, are generally not bilinear and therefore need to be dealt with perturbatively. 
Yet, they are simply \emph{added} to the action, just as they are added to the master equation, being concurrent Poisson processes. The full action $\AC$ is then a sum of the harmonic part $\AC_0$, whose path integral can be taken, and a perturbative part $\AC_\text{pert}$. After turning observables into fields, expectations are now calculated as \begin{equation}\elabel{perturbation_theory} \ave{\bullet} = \int \Dint{\phi}\Dint{\tilde{\phi}} \bullet \Exp{\AC} = \int \Dint{\phi}\Dint{\tilde{\phi}} \big( \bullet \Exp{\AC_\text{pert}} \big) \Exp{\AC_0} = \ave[0]{ \bullet \Exp{\AC_\text{pert}} } \end{equation} with the full action $\AC=\AC_0+\AC_\text{pert}$; the expectation is calculated perturbatively by expanding in powers of $\AC_\text{pert}$. It is tempting to interpret $\phi(\gpvec{x},t)$ as a particle density with corresponding units and $\tilde{\phi}(\gpvec{y},t)$ as an auxiliary field like the one used in the response field formalism \cite{Taeuber:2014}. In fact, \Eref{def_harmonic_action} looks very much like the Martin-Siggia-Rose-Janssen-De Dominicis ``trick'' \cite{MartinSiggiaRose:1973,Janssen:1976,DeDominicis:1976,Taeuber:2014} applied to the FP \Eref{rho_i_ME_matrix_rewrite_kernel}, but without a noise source, given that the FPE is not a Langevin equation and thus does not carry a noise. There are, however, two crucial differences between Doi-Peliti field theories and response-field field theories: Firstly, in Doi-Peliti field theories the fields are conjugates of operators that obey a commutation relation. The operator formalism guarantees that the particle nature of the particles is maintained. The fields are not densities. Consequently, observables are not simply fields $\phi$. Rather, any observable has to be constructed on the basis of operators. That commutator produces additional terms that spoil any apparent interpretation of $\phi$ as the density.
If $\phi$ were a \emph{particle density} and $\exp{\AC}$ its statistical weight, the path integral would have to be constrained to those paths that correspond to sums of $\delta$-functions. Secondly, observables in a Doi-Peliti field theory generally need to be initialised explicitly, with $\phi^\dagger=1+\tilde{\phi}$ ``generating'' a particle. This ``auxiliary field'' is not the response of the system to an external perturbation. The difference between response field and Doi-Peliti formalism is further illustrated and discussed in \cite{BotheETAL:2022}. The Doi-Peliti formalism provides us with an action $\AC$, a path integral and a commutator that allows us to construct desired observables which can be calculated as an expectation with $\exp{\AC}$ as the apparent weight. The formalism may be seen as a recipe to replace a difficult calculation of observables in a particle system by an easier one in terms of continuously varying, unconstrained fields. But because $\phi(\gpvec{x},t)$ is not the particle density and the path integral not an integral over allowed paths, $\int\Dint{\tilde{\phi}} \exp{\AC[\phi,\tilde{\phi}]}$ is not the weight of a particular density history. This is the reason why the approach in \cite{NardiniETAL:2017} does not apply to Doi-Peliti field theories. \subsection{The propagator}\seclabel{the_propagator} Using the canonical procedure \cite{Taeuber:2014,TaeuberHowardVollmayr-Lee:2005,Cardy:2008}, in the following we will derive some properties of the propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$, which, strictly, is the expected particle number in discrete state $\gpvec{y}$ at time $t'$, given a single particle was initially placed in discrete state $\gpvec{x}$ at time $t$. In this case, because there is only one particle, the expected number of particles at $\gpvec{y}$ is identical to the probability that the particle is at $\gpvec{y}$.
We determine first the bare propagator $\ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ for the discrete state action \Eref{def_harmonic_action_discrete} and then the full propagator perturbatively for an additional generic perturbative action \begin{equation}\elabel{gen_pert_action} \AC_\text{pert} = \int \dint{t} \sum_{\gpvec{y},\gpvec{x}} \tilde{\phi}(\gpvec{y},t) \BC_{\gpvec{y}\gpvec{x}} \phi(\gpvec{x},t) \ , \end{equation} using \Eref{perturbation_theory}. The perturbative expansion of the propagator will feed into an \emph{exact} expression for the entropy production in \SMref{entropy_production}. That \Eref{gen_pert_action} is bilinear might look like a significant loss of generality, yet what matters below is not the precise form of the action, but the expansion of the propagator that results from it. We shall therefore consider $\BC_{\gpvec{y}\gpvec{x}}$ as a generic higher-order correction to the propagator. As qualified further below, we need to make certain assumptions on the time-dependence of $\BC_{\gpvec{y}\gpvec{x}}$. For now, we may think of it as having no time-dependence. Given the conservative nature of the dynamics and general time-homogeneity this is not a strong restriction. In the continuum, a suitable perturbation might be self-propulsion or a potential; in discrete state space, the perturbation could be transitions beyond those convenient for the harmonic part. The discreteness of the state space considered thus far also seems to reduce generality. This is indeed an important constraint, which will require careful resolution in \SMref{drift_diffusion_on_ring}, in particular \SMref{Fourier_transformation}.
Even though we are able to determine the action of a continuous state process from its FPE, \Eref{def_harmonic_action} from \Eref{rho_i_ME_matrix_rewrite_kernel}, and derive an expression for the entropy production in the final \Sref{EP_cont}, in the following we will focus entirely on discrete states and leave the generalisation of the arguments for later. \subsubsection{The bare propagator} \seclabel{bare_propagator_diffusion} The bare propagator $\ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ is most easily calculated after Fourier-transforming the fields, \begin{equation} \phi(\gpvec{y},t)=\int\dintbar{\omega}\exp{-\mathring{\imath}\omega t}\phi(\gpvec{y},\omega) \quad\text{ and }\quad \tilde{\phi}(\gpvec{x},t')=\int\dintbar{\omega'}\exp{-\mathring{\imath}\omega' t'}\tilde{\phi}(\gpvec{x},\omega') \end{equation} with $\dintbar{\omega}=\dint{\omega}/(2\pi)$, so that the harmonic part of the action \Eref{def_harmonic_action_discrete} becomes \begin{equation}\elabel{def_harmonic_action_discrete_omega} \AC_0 = \int \dintbar{\omega} \sum_{\gpvec{y},\gpvec{x}} \tilde{\phi}(\gpvec{y},-\omega) \bigg[ \mathring{\imath}\omega \delta_{\gpvec{y},\gpvec{x}} - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \MC_{\gpvec{y}\gpvec{x}} \bigg] \phi(\gpvec{x},\omega) \end{equation} and correspondingly \begin{equation}\elabel{gen_pert_action_omega} \AC_\text{pert} = \int \dintbar{\omega} \sum_{\gpvec{y},\gpvec{x}} \tilde{\phi}(\gpvec{y},-\omega) \BC_{\gpvec{y}\gpvec{x}} \phi(\gpvec{x},\omega) \ . \end{equation} The bare propagator is \begin{equation}\elabel{orig_propagator_as_an_inverse} \tbarePropagator{\gpvec{x},\omega}{\gpvec{y},\omega'} \corresponds \ave[0]{\phi(\gpvec{y},\omega')\tilde{\phi}(\gpvec{x},\omega)} = \delta\mkern-6mu\mathchar'26(\omega'+\omega) \left( \left[-\mathring{\imath} \omega \mathbf{1} +\diag(\gpvec{r}) - \MC\right]^{-1} \right)_{\gpvec{y}\gpvec{x}} \ , \end{equation} derived, if necessary, using a transformation that diagonalises $\MC$.
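To make this diagonalisation route concrete, the following sketch (Python; the two-state Markov matrix, mass rates and frequency below are arbitrary illustrative choices, not taken from the text) confirms that the matrix inverse above acts mode by mode in the eigenbasis of $\MC-\diag(\gpvec{r})$, each mode being the Fourier transform of a decaying exponential:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-state Markov matrix (columns sum to zero) and mass
# rates r > 0; all numbers are arbitrary choices, not from the text.
MC = np.array([[-1.0, 0.4],
               [ 1.0, -0.4]])
r = np.array([0.3, 0.3])
A = MC - np.diag(r)
omega = 0.8

# Frequency-domain bare propagator: the matrix inverse stated above.
G_omega = np.linalg.inv(-1j * omega * np.eye(2) + np.diag(r) - MC)

# Diagonalising A = V diag(lam) V^-1 makes the inverse act mode by mode.
lam, V = np.linalg.eig(A)
G_modes = V @ np.diag(1.0 / (-1j * omega - lam)) @ np.linalg.inv(V)
assert np.allclose(G_omega, G_modes)

# Each mode 1/(-i omega - lam_k) is the Fourier transform of a decaying
# exponential theta(t) e^{lam_k t} (Re lam_k < 0 because r > 0), so in
# direct time the bare propagator is the matrix exponential of t A.
t = 1.3
G_t = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real
assert np.allclose(G_t, expm(t * A))
```

The mass rates $r>0$ shift all eigenvalues into the left half-plane, which is what makes the time integral underlying the frequency-domain inverse converge.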
Using that $\MC$ is a Markov matrix and $\Re(r_\gpvec{y})>0$, \Eref{orig_propagator_as_an_inverse} may be transformed into direct time \begin{equation}\elabel{bare_propagator_realtime} \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} \corresponds \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \theta(t'-t) \Big(\Exp{(t'-t)\big[\MC-\diag(\gpvec{r})\big]}\Big)_{\gpvec{y}\gpvec{x}} \ . \end{equation} \Eref{bare_propagator_realtime} implies that $\ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ solves the master \Eref{rho_i_ME_matrix_rewrite} for $t'>t$, as \begin{equation}\elabel{propagator_limit_is_delta} \lim_{t'\downarrow t} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \delta_{\gpvec{y},\gpvec{x}} \quad\text{with}\quad \lim_{t'\downarrow t} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \delta(\gpvec{y}-\gpvec{x}) \quad\text{in the continuum} \end{equation} and \begin{equation}\elabel{propagator_deri} \partial_{t'} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \delta_{\gpvec{y},\gpvec{x}}\delta(t'-t) + \sum_\gpvec{z} \left(\MC_{\gpvec{y}\gpvec{z}} - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{z}}\right) \ave[0]{\phi(\gpvec{z},t')\tilde{\phi}(\gpvec{x},t)} \ , \end{equation} and correspondingly the bare propagator will solve the FPE after taking the continuum limit. In other words, the propagator is indeed the Green function of the FPE. The term $\delta_{\gpvec{y},\gpvec{x}}\delta(t'-t)$ is due to the derivative of the Heaviside $\theta$-function, using $\delta(t'-t)\Exp{(t'-t)(\MC-\diag(\gpvec{r}))}=\mathbf{1}\delta(t'-t)$.
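These properties of the bare propagator are easily verified numerically. A minimal sketch (Python; the three-state rates are generated at random purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary 3-state example (not from the text): off-diagonal hopping
# rates >= 0, diagonal chosen so that columns sum to zero, making MC a
# Markov (rate) matrix; r > 0 is a uniform extinction (mass) rate.
rng = np.random.default_rng(0)
MC = rng.uniform(0.1, 1.0, (3, 3))
np.fill_diagonal(MC, 0.0)
MC -= np.diag(MC.sum(axis=0))          # columns now sum to zero
r = 0.5 * np.ones(3)
A = MC - np.diag(r)                     # generator MC - diag(r)

def propagator(dt):
    """Bare propagator for dt = t' - t > 0: matrix exponential of dt*A."""
    return expm(dt * A)

# delta-like initial condition: the propagator tends to the identity.
assert np.allclose(propagator(1e-8), np.eye(3), atol=1e-6)

# It solves the master equation d/dt' G = A G (finite-difference check),
# i.e. it is the Green function of the discrete-state evolution.
dt, h = 0.7, 1e-6
lhs = (propagator(dt + h) - propagator(dt - h)) / (2 * h)
assert np.allclose(lhs, A @ propagator(dt), atol=1e-5)
```

The semigroup property $G(t_1)G(t_2)=G(t_1+t_2)$ holds as well, since the same generator appears in every exponential.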
\subsubsection{Perturbative expansion of the full propagator} \seclabel{pert_exp_full_prop} The full propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ acquires corrections from the perturbative part of the action \Eref{gen_pert_action}, so that, \Eref{propagator_expansion}, \begin{align} \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} & + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \tDblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \ldots\, \corresponds \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} \nonumber\\ =& \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} + \int_{-\infty}^\infty \dint{s} \sum_{\gpvec{a},\gpvec{b}} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{b},s)} \BC_{\gpvec{b}\gpvec{a}} \ave[0]{\phi(\gpvec{a},s)\tilde{\phi}(\gpvec{x},t)} + \ldots \elabel{propagator_expansion_app} \end{align} The bare propagator is stated in \Eref{bare_propagator_realtime} and the first-order correction is easily determined explicitly, \begin{equation}\elabel{first_order_explicitly} \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} \corresponds \int_{t}^{t'} \dint{s} \sum_{\gpvec{a},\gpvec{b}} \left(\Exp{(t'-s)\big[\MC-\diag(\gpvec{r})\big]}\right)_{\gpvec{y}\gpvec{b}} \BC_{\gpvec{b}\gpvec{a}} \left(\Exp{(s-t)\big[\MC-\diag(\gpvec{r})\big]}\right)_{\gpvec{a}\gpvec{x}} \ . \end{equation} This is generally not trivial to evaluate, because the matrix exponentials and $\BC$ generally do not commute. Yet, \Eref{first_order_explicitly} clearly vanishes as $t'\downarrow t$. The derivative of \Eref{first_order_explicitly} with respect to $t'$ produces two terms, one from the differentiation of the integrand and one from the differentiation of the integration limits.
In the limit $t'\downarrow t$ only the latter contributes, as the integral vanishes for $t'\downarrow t$, so that \begin{equation}\elabel{deri_first_order_correction_final} \lim_{t'\downarrow t} \partial_{t'} \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} \corresponds \sum_{\gpvec{a},\gpvec{b}} \delta_{\gpvec{y},\gpvec{b}} \BC_{\gpvec{b}\gpvec{a}} \delta_{\gpvec{a},\gpvec{x}} = \BC_{\gpvec{y}\gpvec{x}} \ . \end{equation} The diagrammatics in terms of perturbative ``blobs'' is further discussed in \SMref{appendixWhichDiagramsContribute}. Based on these arguments, or by direct evaluation of the convolutions using \Eref{bare_propagator_realtime}, one can show that a term to $n$th order in the perturbation vanishes like $(t'-t)^n$. In summary, \begin{subequations} \elabel{summary_propagator_first_order_summary} \begin{align} \elabel{summary_propagator_first_order_summary_nonDeri} \lim_{t'\downarrow t} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} &= \delta_{\gpvec{y},\gpvec{x}}\\ \elabel{summary_propagator_first_order_summary_deri} \lim_{t'\downarrow t} \partial_{t'} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} &= \MC_{\gpvec{y}\gpvec{x}} - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}} + \BC_{\gpvec{y}\gpvec{x}} \ . \end{align} \end{subequations} When we discuss entropy production in the following, we will drop the mass term $r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}}$, as in the present work we treat only conservative dynamics. \subsection{Entropy production}\seclabel{entropy_production} To calculate the entropy production we cannot follow \cite{NardiniETAL:2017} and attempt to derive a ``path density'' in the form $\mathcal{P}([\phi])\propto\int\Dint{\tilde{\phi}}\exp{\AC}$, firstly because this integral generally cannot be sensibly performed as $\tilde{\phi}$ is introduced as the complex conjugate of $\phi$, and secondly because $\phi(\gpvec{x},t)$ is not a particle density, but rather the conjugate of the annihilation operator.
The quantity $\mathcal{P}([\phi])$ therefore does not have the meaning of a probability density of a particular history of particle movements. We will now use the propagator as characterised in \Eref{summary_propagator_first_order_summary} to calculate the entropy production of the continuous time Markov chain \Eref{rho_i_ME_mass} with the interaction \Eref{gen_pert_action} added. The \emph{internal entropy production} (rate) by a single particle, whose sole degree of freedom is its state $\gpvec{x}$, is generally given by \supplcite{Gaspard:2004} \begin{equation} \elabel{def_entropyProduction_Suppl_initial} \dot{S}_{\text{int}}[\rho] = \lim_{{\Delta t}\downarrow0} \frac{1}{2{\Delta t}} \sum_{\gpvec{y}\gpvec{x}} \Big\{ \rho(\gpvec{x})\Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) - \rho(\gpvec{y})\Transition_{\gpvec{x}\gpvec{y}}({\Delta t}) \Big\} \ln \left( \frac {\rho(\gpvec{x})\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})} {\rho(\gpvec{y})\Transition_{\gpvec{x}\gpvec{y}}({\Delta t})} \right) \end{equation} where we define $0\ln(0/0)=0$ to make the expression well-defined even when some transition rates $\Transition_{\gpvec{y}\gpvec{x}}$ vanish. The external entropy production is closely related and identical to the negative of the internal entropy production at stationarity \cite{Gaspard:2004,CocconiETAL:2020}. The \emph{functional} $\dot{S}_{\text{int}}[\rho]$ is the rate of entropy production by the system given $\rho(\gpvec{x})$ as the probability of finding the particle in state $\gpvec{x}$. Compared to \Eref{rho_i_ME_mass} we have dropped the time dependence of $\rho(\gpvec{x})$ to emphasise that in the expression above we consider the density as given. Further, $\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})$ denotes the probability of the particle transitioning from state $\gpvec{x}$ to state $\gpvec{y}$ over the course of time ${\Delta t}$. 
With $\lim_{{\Delta t}\to0}\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})=\delta_{\gpvec{y},\gpvec{x}}$ and \begin{equation}\elabel{Transition_dot} \mathring{\Transition}_{\gpvec{y}\gpvec{x}} = \lim_{{\Delta t}\downarrow0} \ddX{{\Delta t}} \Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) = \lim_{{\Delta t}\downarrow0} \frac{\Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) - \Transition_{\gpvec{y}\gpvec{x}}(0)}{{\Delta t}} \ , \end{equation} we have \begin{equation}\elabel{simplified_curly_bracket} \lim_{{\Delta t}\downarrow0}\frac{1}{{\Delta t}} \Big\{ \rho(\gpvec{x})\Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) - \rho(\gpvec{y})\Transition_{\gpvec{x}\gpvec{y}}({\Delta t}) \Big\} = \rho(\gpvec{x})\mathring{\Transition}_{\gpvec{y}\gpvec{x}} - \rho(\gpvec{y})\mathring{\Transition}_{\gpvec{x}\gpvec{y}} \ . \end{equation} Given that we are studying a continuous time Markov chain, $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$ is a rate matrix, so that \cite{Gaspard:2004} \begin{equation}\elabel{transition_expansion} \Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) = \delta_{\gpvec{y},\gpvec{x}} + {\Delta t} \mathring{\Transition}_{\gpvec{y}\gpvec{x}} + \order{{\Delta t}^2} \ , \end{equation} with $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}\ge0$ for $\gpvec{y}\ne\gpvec{x}$ and $\mathring{\Transition}_{\gpvec{y}\yvec}<0$, accounting for the loss of any state $\gpvec{y}$ into all other accessible states, as normally implemented by the definition of a Markovian rate matrix, \Eref{def_Fjj}. In the Markov chain introduced at the beginning of the present supplement, the rate matrix $\mathring{\Transition}$ of \Eref{Transition_dot} \emph{is} in fact the Markov matrix $\MC$ of the master \Eref{rho_i_ME_mass} with \Eref{def_Fjj}, \latin{i.e.}\@\xspace $\mathring{\Transition}=\MC$. We will keep the notation separate to allow for $\mathring{\Transition}$ to acquire corrections beyond $\MC$ due to perturbations.
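The expansion \Eref{transition_expansion} and the identification $\mathring{\Transition}=\MC$ are easy to confirm numerically. A minimal sketch (Python; the three-state ring and its rates are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state ring with clockwise rate alpha and anti-clockwise
# rate beta (arbitrary values); columns of MC sum to zero.
alpha, beta = 2.0, 1.0
MC = np.array([[-(alpha + beta), beta, alpha],
               [alpha, -(alpha + beta), beta],
               [beta, alpha, -(alpha + beta)]])

def T(dt):
    """Transition probabilities over a finite time interval dt."""
    return expm(dt * MC)

# T(dt) = 1 + dt * MC + O(dt^2): the rate matrix recovered from the
# derivative at dt -> 0 is the Markov matrix MC itself.
dt = 1e-6
T_dot = (T(dt) - T(0.0)) / dt
assert np.allclose(T_dot, MC, atol=1e-4)

# Columns of T(dt) are normalised probabilities for every dt.
assert np.allclose(T(0.37).sum(axis=0), 1.0)
```

Because the columns of $\MC$ sum to zero, probability is conserved by the matrix exponential at all times, not just to first order.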
Below, we will demonstrate that the transition rate matrix $\mathring{\Transition}$ plays the role of a kernel. Indeed, in the continuum, \SMref{EP_cont}, it can be written as the Fokker-Planck operator acting on a Dirac $\delta$-function. To this end, we introduce separately \begin{equation}\elabel{def_Op_app} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} = \lim_{{\Delta t}\downarrow0} \ddX{{\Delta t}} \Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) \end{equation} even though in the present case of a Markov chain we simply have that $\bm{\mathsf{K}}=\mathring{\Transition}$, \Eref{Transition_dot}. This term is the focus of much of this work. Using \Eref{simplified_curly_bracket} in \eref{def_entropyProduction_Suppl_initial}, the entropy production (rate) is \begin{align} \elabel{def_entropyProduction_Suppl_step1} \dot{S}_{\text{int}}[\rho] = & \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} \sum_{\gpvec{y}\gpvec{x}} \Big\{ \rho(\gpvec{x})\mathring{\Transition}_{\gpvec{y}\gpvec{x}} - \rho(\gpvec{y})\mathring{\Transition}_{\gpvec{x}\gpvec{y}} \Big\} \lim_{{\Delta t}\downarrow0} \ln \left( \frac {\rho(\gpvec{x})\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})} {\rho(\gpvec{y})\Transition_{\gpvec{x}\gpvec{y}}({\Delta t})} \right) \nonumber\\ = & \sum_{\gpvec{y}\gpvec{x}} \rho(\gpvec{x})\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \lim_{{\Delta t}\downarrow0} \ln \left( \frac {\rho(\gpvec{x})\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})} {\rho(\gpvec{y})\Transition_{\gpvec{x}\gpvec{y}}({\Delta t})} \right) \end{align} assuming that both limits exist and defining now also $0\ln(0)=0$. The logarithm vanishes for $\gpvec{y}=\gpvec{x}$ and we shall therefore proceed assuming $\gpvec{y}\ne\gpvec{x}$. It may be considered to comprise two terms: the first one, $\ln(\rho(\gpvec{x})/\rho(\gpvec{y}))$, contains only the density $\rho$ and is independent of the time ${\Delta t}$. The contribution from this term to the entropy production vanishes when $\rho(\gpvec{x})$ is stationary.
The second logarithmic term in \Eref{def_entropyProduction_Suppl_step1} we define as \begin{equation}\elabel{def_Ln_app} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} = \lim_{{\Delta t}\downarrow0} \ln \left( \frac {\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})} {\Transition_{\gpvec{x}\gpvec{y}}({\Delta t})} \right) \ . \end{equation} This term generally contributes even at stationarity and is the second term the present work focuses on. With definitions \Erefs{def_Op_app} and \eref{def_Ln_app} we can write the entropy production as \begin{equation} \elabel{def_entropyProduction_Suppl_step2} \dot{S}_{\text{int}}[\rho] = \sum_{\gpvec{y}\gpvec{x}} \rho(\gpvec{x})\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \left\{ \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} + \ln\left( \frac {\rho(\gpvec{x})} {\rho(\gpvec{y})} \right) \right\} = \sum_{\gpvec{x}} \rho(\gpvec{x}) \dot{\sigma}(\gpvec{x}) + \sum_{\gpvec{x}} \rho(\gpvec{x}) \sum_\gpvec{y} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \ln\left( \frac {\rho(\gpvec{x})} {\rho(\gpvec{y})} \right) \end{equation} where we have introduced the \emph{(stationary) local entropy production}, \begin{equation}\elabel{def_entropyProductionDensity_app} \dot{\sigma}(\gpvec{x}) = \sum_\gpvec{y} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} \ . \end{equation} This notation is also a reminder that this expression for the entropy production goes back to Kullback and Leibler \cite{KullbackLeibler:1951}. Focusing now on a Markov chain, the kernel is simply the transition rate matrix, \begin{equation}\elabel{Kn_from_TransitionRate} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}=\mathring{\Transition}_{\gpvec{y}\gpvec{x}} \ , \end{equation} \Eref{def_Op_app} and \eref{Transition_dot}. The logarithm term $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}}$, \Eref{def_Ln_app}, obviously vanishes when $\gpvec{y}=\gpvec{x}$ and is otherwise easily determined using \Eref{transition_expansion} and L'H{\^o}pital's rule.
In principle, this requires higher-order derivatives beyond $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$, if it vanishes. However, in this case $\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}$ in \Erefs{def_entropyProduction_Suppl_step2} and \eref{def_entropyProductionDensity_app} vanishes as well and we thus write \begin{subnumcases}{\elabel{Ln_from_TransitionRate}\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}}=} 0 & for $\gpvec{y}=\gpvec{x}$ \elabel{Ln_yy_vanishes}\\ 0 & for $\gpvec{y}\ne\gpvec{x}$ and $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}=0$\\ \ln \left( \frac {\mathring{\Transition}_{\gpvec{y}\gpvec{x}}} {\mathring{\Transition}_{\gpvec{x}\gpvec{y}}} \right) & otherwise \ , \end{subnumcases} making use of $0\ln(0/0)=0=0\ln(0)$ in case $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$ or both $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$ and $\mathring{\Transition}_{\gpvec{x}\gpvec{y}}$ vanish. Strictly, \Eref{Ln_from_TransitionRate} is thus the limit \Eref{def_Ln_app} only in case of $\gpvec{y}=\gpvec{x}$ or whenever $\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}=\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$ does not vanish. In the present section, we have determined expressions for the entropy production \emph{given} the transition rate matrix $\mathring{\Transition}$. We proceed by showing how the transition rate matrix and thus the entropy production are determined by a field theory. \subsubsection{Expressing the entropy production in terms of propagators} \seclabel{EP_from_propagators} Both $\bm{\mathsf{K}}$ and $\operatorname{\bm{\mathsf{Ln}}}$ are based on the transition probability $\Transition_{\gpvec{y}\gpvec{x}}({\Delta t})$, \Erefs{def_Op_app} and \eref{def_Ln_app}.
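Before expressing these objects field-theoretically, it may help to evaluate the discrete-state formulas directly. The sketch below (Python; the rates and the three-state ring are arbitrary illustrative choices, anticipating \Sref{app_ring}) implements \Eref{def_entropyProduction_Suppl_initial} with the conventions just stated and confirms that the entropy production is non-negative and vanishes under detailed balance:

```python
import numpy as np

def entropy_production(W, rho):
    """Internal entropy production rate of a continuous-time Markov chain.

    W[y, x] is the transition *rate* from state x to state y (off-diagonal,
    W >= 0); rho is the probability of each state. Implements
    S = (1/2) sum_{yx} (rho_x W_yx - rho_y W_xy) ln(rho_x W_yx / rho_y W_xy)
    with the convention 0 ln(0/0) = 0 = 0 ln(0).
    """
    S = 0.0
    for y in range(len(rho)):
        for x in range(len(rho)):
            if y == x:
                continue
            a, b = rho[x] * W[y, x], rho[y] * W[x, y]
            if a > 0 and b > 0:
                S += 0.5 * (a - b) * np.log(a / b)
    return S

# Illustrative example: 3 states on a ring, clockwise rate alpha,
# anti-clockwise rate beta; the stationary density is uniform.
alpha, beta, M = 2.0, 1.0, 3
W = np.zeros((M, M))
for i in range(M):
    W[(i + 1) % M, i] = alpha
    W[(i - 1) % M, i] = beta
rho = np.full(M, 1.0 / M)

S = entropy_production(W, rho)
assert np.isclose(S, (alpha - beta) * np.log(alpha / beta))  # = ln 2 here
assert S >= 0.0

# Detailed balance (equal rates both ways) produces no entropy.
W_db = np.where(W > 0, 1.5, 0.0)
assert np.isclose(entropy_production(W_db, rho), 0.0)
```

At the uniform stationary density the density-dependent logarithm drops out, so the result is the stationary local entropy production summed over states, $(\alpha-\beta)\ln(\alpha/\beta)$ for this ring.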
In a field-theoretic description, the probability to be in state $\gpvec{y}$ having started from state $\gpvec{x}$ is given by \Eref{propagator_expansion_app} \begin{align} \Transition_{\gpvec{y}\gpvec{x}}({\Delta t}) = & \ave{\phi(\gpvec{y},t+{\Delta t})\tilde{\phi}(\gpvec{x},t)}\nonumber\\ \corresponds & \tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \tDblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \ldots\ , \end{align} which is independent of $t$ due to time translational invariance. Using this expression in \Erefs{def_Op_app}, \eref{def_Ln_app} and \eref{summary_propagator_first_order_summary} with $r_\gpvec{y}\downarrow0$ gives \begin{subequations} \elabel{Op_and_Ln_from_diagrams} \begin{align} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} &= \lim_{{\Delta t}\downarrow0} \ddX{{\Delta t}} \ave{\phi(\gpvec{y},t+{\Delta t})\tilde{\phi}(\gpvec{x},t)} = \MC_{\gpvec{y}\gpvec{x}} + \BC_{\gpvec{y}\gpvec{x}} \nonumber\\ \elabel{Op_in_diagrams} &\corresponds \lim_{{\Delta t}\downarrow0} \ddX{{\Delta t}} \left( \tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} \right) \\ \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} &= \lim_{{\Delta t}\downarrow0} \ln \left( \frac {\ave{\phi(\gpvec{y},t+{\Delta t})\tilde{\phi}(\gpvec{x},t)}} {\ave{\phi(\gpvec{x},t+{\Delta t})\tilde{\phi}(\gpvec{y},t)}} \right) \nonumber\\ \elabel{Ln_in_diagrams} &\corresponds \lim_{{\Delta t}\downarrow0} \ln \left( \frac {\tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}} + \tblobbedPropagator{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} \right) \end{align} \end{subequations} where the diagrams are shown only to first order in the perturbation, as higher orders, those $\propto {\Delta t}^2$ and higher, cannot possibly contribute, 
\SMref{pert_exp_full_prop}. \Eref{Op_in_diagrams} is explicitly the first-order contribution in ${\Delta t}$ to the propagator and is determined immediately using \Eref{summary_propagator_first_order_summary_deri} with $r_\gpvec{y}\downarrow0$ to preserve the particle number (unity). In the logarithm, we might first use \Eref{summary_propagator_first_order_summary_nonDeri}, but that produces a meaningful result only for $\gpvec{y}=\gpvec{x}$, in which case indeed $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\yvec}=0$, \Eref{Ln_yy_vanishes}. For $\gpvec{y}\ne\gpvec{x}$ we need to apply L'H{\^o}pital's rule, so that with \Eref{summary_propagator_first_order_summary_deri} for $r_\gpvec{y}\downarrow0$ (conserved particle number), \begin{equation}\elabel{Ln_from_propagator} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} = \ln\left( \frac {\MC_{\gpvec{y}\gpvec{x}} + \BC_{\gpvec{y}\gpvec{x}}} {\MC_{\gpvec{x}\gpvec{y}} + \BC_{\gpvec{x}\gpvec{y}}} \right) \ . \end{equation} In summary, the entropy production of a continuous time Markov chain with density $\rho(\gpvec{x})$ given, stationary or not, is \Eref{def_entropyProduction_Suppl_step2} with kernel $\bm{\mathsf{K}}$ in \Eref{Op_in_diagrams} and $\operatorname{\bm{\mathsf{Ln}}}$ in \Eref{Ln_from_propagator}. \subsubsection{Continuum Limit} \seclabel{EP_cont} As long as states are discrete and rates therefore finite (\SMref{Fourier_transformation}), the logarithm $\operatorname{\bm{\mathsf{Ln}}}$, \Eref{Ln_from_propagator}, is a function of the kernel $\bm{\mathsf{K}}$, \Eref{Op_in_diagrams}. In the continuum this simple relationship breaks down. To find the relevant expressions in the continuum, we return to the propagator in the continuum, replacing rate matrices \latin{etc.}\@\xspace by their continuum counterparts. Much of the following is done in further detail in \SMref{drift_diffusion_on_ring} and illustrated further in \SMref{MultipleParticles}. Below we present only the basic argument.
For continuous states $\gpvec{x},\gpvec{y}$ the probability $\rho(\gpvec{x})$ in \Eref{def_entropyProduction_Suppl_step2} is a \emph{density}, which we denote by the same symbol $\rho(\gpvec{x})$. Similarly, the kernel $\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}$, which for discrete states is a rate, has units of a rate \emph{density} in $\gpvec{y}$, with $\gpvec{x}$ given. Correspondingly, the expression for the entropy production \Eref{def_entropyProduction_Suppl_step2} becomes the double integral \begin{equation} \elabel{def_entropyProduction_Suppl_step2_integral} \dot{S}_{\text{int}}[\rho] = \int\dint{\gpvec{x}}\dint{\gpvec{y}} \rho(\gpvec{x})\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \left\{ \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} + \ln\left( \frac {\rho(\gpvec{x})} {\rho(\gpvec{y})} \right) \right\} = \int\dint{\gpvec{x}} \rho(\gpvec{x}) \dot{\sigma}(\gpvec{x}) + \int\dint{\gpvec{x}} \rho(\gpvec{x}) \int\dint{\gpvec{y}} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \ln\left( \frac {\rho(\gpvec{x})} {\rho(\gpvec{y})} \right) \end{equation} with \Eref{def_entropyProductionDensity_app} replaced by \begin{equation}\elabel{def_entropyProductionDensity_app_continuous} \dot{\sigma}(\gpvec{x}) = \int\dint{\gpvec{y}} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} \ . \end{equation} The continuum limit of the kernel is easy to determine using \Eref{Op_in_diagrams} with \Erefs{continuum_limit_kernel} and \eref{def_harmonic_action}, effectively replacing $\MC_{\gpvec{y}\gpvec{x}} - r_\gpvec{y} \delta_{\gpvec{y},\gpvec{x}}$ in \Eref{Op_in_diagrams} by $\FPop_{\gpvec{y}\gpvec{x}}$, setting again $r_\gpvec{y}=0$ to preserve the particle number.
Using further a definition similar to \Eref{continuum_limit_kernel}, \begin{equation} \hat{\GC}_{\gpvec{y}\gpvec{x}}= \lim_{{\Delta x}\to0}\frac{1}{{\Delta x}} \BC_{\gpvec{y}\gpvec{x}} \corresponds \fullBlobb{\gpvec{x}}{\gpvec{y}} \end{equation} to capture the contribution from the perturbative part in the continuum limit of \Eref{Op_in_diagrams}, we obtain \Eref{transition_from_action}, \begin{equation}\elabel{Op_continuum_Fpop_plus} \bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}=\FPop_{\gpvec{y}\gpvec{x}} + \hat{\GC}_{\gpvec{y}\gpvec{x}} \corresponds \FPop_{\gpvec{y}\gpvec{x}} + \fullBlobb{\gpvec{x}}{\gpvec{y}} \ . \end{equation} The kernel $\bm{\mathsf{K}}_{\gpvec{y}\gpvec{x}}$ turns into an operator acting on $\delta$-functions in the continuum. Using the L'H{\^o}pital route, one might expect the same for the logarithm, but it is hard to see how such ill-defined objects can be evaluated as a ratio within the logarithm. Instead, we assume at this stage and later demonstrate explicitly, \SMref{drift_diffusion_on_ring}, that the following approach is useful.
We write \begin{equation} \frac {\tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}} + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}} + \tblobbedPropagator{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} = \frac {\tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} \quad \frac { 1+\frac {\tblobbedPropagatorS{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorLS{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} } { 1+\frac {\tblobbedPropagatorS{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} {\tbarePropagatorLS{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} } \ , \end{equation} where we assume that \begin{equation}\elabel{small_term} \frac {\tblobbedPropagator{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} \end{equation} is small in a sense further discussed in \SMref{drift_diffusion_on_ring}. Taking the limit of the logarithm of the above expression gives the right hand side of \Eref{Ln_in_diagrams} in the continuum limit, \begin{equation}\elabel{Ln_for_continuous} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}\gpvec{x}} \corresponds \lim_{{\Delta t}\downarrow0} \left\{ \ln\left( \frac {\tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} \right) + \frac {\tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{x},t}{\gpvec{y},t+{\Delta t}}} - \frac {\tblobbedPropagator{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} {\tbarePropagatorL{\gpvec{y},t}{\gpvec{x},t+{\Delta t}}} \right\} \ , \end{equation} as illustrated in \SMref{drift_diffusion_on_ring}, in particular \Eref{Ln_from_drift_diffusion}. The limit of the logarithm of the ratio of bare propagators is in general available, because the bare propagator is known explicitly. 
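To see why this limit is benign, consider as an illustration a one-dimensional drift-diffusion bare propagator, the standard Gaussian Green function with drift $v$ and diffusion constant $D$ (the parameters below are arbitrary, and this sketch is not an expression taken from the text). Its forward/backward log-ratio equals $v(y-x)/D$ for every ${\Delta t}$, so the ${\Delta t}\downarrow0$ limit of the bare contribution exists trivially:

```python
import numpy as np

# Illustrative drift and diffusion constant (arbitrary values).
v, D = 0.7, 0.25

def logG(y, x, dt):
    """Log of the Gaussian drift-diffusion propagator from x to y over dt:
    G = exp(-(y - x - v dt)^2 / (4 D dt)) / sqrt(4 pi D dt)."""
    return -(y - x - v * dt) ** 2 / (4 * D * dt) - 0.5 * np.log(4 * np.pi * D * dt)

x, y = 0.1, 0.6
for dt in (1.0, 1e-3, 1e-9):
    # The forward/backward log-ratio is v (y - x) / D for *every* dt,
    # so its dt -> 0 limit, the bare part of Ln, is finite.
    assert np.isclose(logG(y, x, dt) - logG(x, y, dt), v * (y - x) / D)
```

Working with log-propagators rather than propagators avoids numerical underflow of the Gaussian at small ${\Delta t}$, mirroring how the analytic cancellation happens inside the exponent.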
The ratio of the correction and the bare propagator can be expected to be finite, as the correction builds on the bare propagation. The above continuum limit concludes the present supplement \SMref{DP_plus_EP}. It contains essentially all technical details of how to proceed from a master equation such as \Eref{rho_i_ME_matrix_rewrite} or an FPE such as \Eref{FPE_diffusion_example} or \eref{rho_i_ME_matrix_rewrite_kernel} to an action \Erefs{def_harmonic_action_discrete} or \eref{def_harmonic_action}. Expanding the resulting propagator for short times, \Eref{summary_propagator_first_order_summary}, finally produces expressions for the entropy production, \Eref{def_entropyProduction_Suppl_step2} with \eref{Op_and_Ln_from_diagrams}, and in the continuum \Eref{def_entropyProduction_Suppl_step2_integral} with \eref{Op_continuum_Fpop_plus} and \eref{Ln_for_continuous}. \section{Continuous particle number approximation of biased hopping on a ring}\seclabel{app_ring} \paragraph*{Abstract} Particle systems may be described by a continuous density field $\phi(x,t)$, whose temporal evolution is approximated by a conservative Langevin equation with additive noise. That is the case, for instance, in Active Model B \cite{NardiniETAL:2017}, a far-from-equilibrium extension of Hohenberg and Halperin's Model B \cite{HohenbergHalperin:1977}. In this section we study a simple, exactly solvable system under the same approximation: $N$ particles hopping on a ring-lattice of $M$ states. Our results show that, although spatial correlations are captured correctly, the framework devised in \cite{NardiniETAL:2017} produces an unphysical entropy production, as it is not extensive in the particle number $N$, but instead extensive in the number of states $M$, consistent with those being the degrees of freedom of the description in terms of $\phi(x,t)$.
The outline of our derivation is as follows: In \sref{model_description} we define the model and state its basic properties, such as average particle number, variance and entropy production. In \sref{coarse_graining} we introduce its continuum particle number description with additive noise, which finally produces the Onsager-Machlup functional \Eref{def_OM} in \sref{properties_of_S} and \sref{OM_functional}. We derive the correlation function of the Fourier modes of the density field description in \sref{correlator_a_t} and validate it, before deriving the entropy production under this approximation as outlined in \cite{NardiniETAL:2017}, \Eref{entropy_production_final} in \sref{M_state_Nardini_entropy_production}. \subsection{Biased hopping on a ring} \seclabel{model_description} In the following we consider $N$ independent particles subject to an $M$-state Markov process. The basic setup is sketched in \Fref{M_state_Markov}. States $i=1,2,\ldots,M$ are connected periodically, so that $i=1$ may be thought of as $i=M+1$ and $i=M$ as $i=0$. Transitions from $i$ to $i+1$, clockwise moves, happen with rate $\alpha$ and transitions from $i$ to $i-1$, anti-clockwise moves, with rate $\beta$; we require $M>2$ to render the setup and the notion of clockwise and anti-clockwise meaningful. No other transitions are allowed. The Markovian degree of freedom is $\phi_i(t)$, the number of particles on site $i$ at time $t$. As a count per site, we may refer to $\phi_i(t)$ as a density. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{M_state_Markov.png} \caption{Cartoon of an $M$-state Markov process.
Periodic states $i=1,2,\ldots,M$ are reached by transitions with rate $\alpha$ from $i-1$ and with rate $\beta$ from $i+1$.} \flabel{M_state_Markov} \end{figure} The density and its variance at stationarity are \begin{equation} \elabel{steady_state_density_Nardini} \overline{\densfield}_i \coloneqq \lim_{t\to\infty}\ave{\phi_i(t)}=\frac{N}{M} \ , \quad\text{ and }\quad \lim_{t\to\infty}\ave{\phi_i^2(t)} - \ave{\phi_i(t)}^2 = \frac{N(M-1)}{M^2} \,, \end{equation} by considering the stationary occupation of any site as an $N$-times repeated Bernoulli process with success probability $1/M$. The entropy production of a single particle can be determined by elementary considerations \cite{CocconiETAL:2020} to be $(\alpha-\beta)\ln(\alpha/\beta)$ so that for $N$ particles, \begin{equation}\elabel{M_state_exact_asympotic_result} \dot{S}_{\text{int}} = N (\alpha-\beta)\ln\left(\frac{\alpha}{\beta}\right) \qquad\text{ for }\qquad M>2 \end{equation} at stationarity, whether the particles are distinguishable or not. In the framework discussed in the present work, this is immediately confirmed by \Erefs{EntropyProductionMarkovChain} and \SMref{entropy_production}. \subsection{Continuum particle number description}\seclabel{coarse_graining} Following the approach in \cite{NardiniETAL:2017,WittkowskiETAL:2014,TiribocchiETAL:2015}, we consider the particle density field $\phi_i(t)$ of state $i$ as a function of time $t$ as a continuum, $\phi_i\in\gpset{R}$. Like Eq.~(1) in \cite{NardiniETAL:2017}, the density field $\phi_i(t)$ can be considered as being governed by a conservative Langevin equation with \emph{additive} noise such as \begin{equation}\elabel{Langevin_smallFluctuation_with_fudge} \dot{\pmb{\phi}}(t) = (\alpha S + \beta S^\mathsf{T} ) \pmb{\phi}(t) + \sqrt{f \alpha \frac{N}{M}} S \bm{\xi}_{\alpha}(t) + \sqrt{f \beta \frac{N}{M}} S^\mathsf{T} \bm{\xi}_{\beta}(t) \end{equation} where the column vector $\pmb{\phi}(t)$ has components $\phi_i(t)$.
The matrix \begin{equation}\elabel{def_S} S=\left( \begin{array}{rrrcrr} -1 & 0 & 0 & \cdots & 0 & 1\\ 1 & -1 & 0 & \cdots & 0 & 0\\ 0 & 1 & -1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \end{array} \right) \end{equation} and its transpose $S^\mathsf{T}$, with transposition denoted by the superscript $\mathsf{T}$, result in a conservative movement of particles from site $i$ to $i+1$ and from $i$ to $i-1$ respectively. These matrices possess a number of very useful algebraic properties, discussed in \SMref{properties_of_S}. To ease notation, we further introduce \begin{equation} \elabel{def_sigma} \sigma \coloneqq \alpha S + \beta S^\mathsf{T}\ , \end{equation} for the deterministic part of the Langevin \Eref{Langevin_smallFluctuation_with_fudge}. Without the noise the Langevin equation is the exact master equation of a single particle. \Eref{Langevin_smallFluctuation_with_fudge} may thus be seen as an attempt to, somewhat \latin{ad hoc}, express the master equation on the particle count, by adding to the single particle dynamics a noisy ``bath''. We have introduced a fudge-factor $f$ in \eref{Langevin_smallFluctuation_with_fudge} as a way to trace the noise amplitude through the calculation. It also provides a mechanism to adjust the noise amplitude \emph{a posteriori}. The amplitude $\propto\sqrt{N/M}$ is otherwise chosen to reflect that the variance of the noise is proportional to the mean, as for a Poisson distribution. We will verify below that this choice results in the density field $\pmb{\phi}$ governed by \Eref{Langevin_smallFluctuation_with_fudge} reproducing the single site variance \Eref{steady_state_density_Nardini} for $f=1$.
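These bookkeeping properties are easily verified numerically. The following Python sketch (illustrative only; the variable names are ours and not part of the derivation) checks that the columns of $S$, and hence of $\sigma$, sum to zero, so that the deterministic part of \Eref{Langevin_smallFluctuation_with_fudge} conserves the total particle number and annihilates the uniform state $N/M$:

```python
# Illustrative numerical check (not part of the derivation): every column
# of S, Eq. (def_S), sums to zero, so sigma = alpha*S + beta*S^T conserves
# the total particle number and annihilates the uniform state N/M.
import numpy as np

M, alpha, beta, N = 7, 1.3, 0.4, 100

# S_ij = -delta_ij + delta_{i-1,j} (indices mod M): cyclic shift minus identity.
S = -np.eye(M) + np.roll(np.eye(M), 1, axis=0)
sigma = alpha * S + beta * S.T

assert np.allclose(S.sum(axis=0), 0.0)        # particle conservation
assert np.allclose(sigma.sum(axis=0), 0.0)
# Without noise, the uniform density N/M is stationary: sigma (N/M) e_0 = 0.
assert np.allclose(sigma @ np.full(M, N / M), 0.0)
```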
The two independent noise vectors $\bm{\xi}_{\alpha}(t)$ and $\bm{\xi}_{\beta}(t)$ have vanishing mean and correlation matrices \begin{align}\elabel{M_state_Markov_noise_correlator} \ave{\bm{\xi}_{\alpha}(t) \bm{\xi}^\mathsf{T}_{\alpha}(t')} = \delta(t-t') \mathbf{1} && \text{ and } && \ave{\bm{\xi}_{\beta}(t) \bm{\xi}^\mathsf{T}_{\beta}(t')} = \delta(t-t') \mathbf{1} \end{align} with $\bm{\xi}_{\alpha}$ and $\bm{\xi}_{\beta}$ column vectors and $\mathbf{1}$ an $M\times M$ identity matrix. The noise vectors can, of course, be summed into a single noise term, \begin{equation}\elabel{def_zeta_vec} \bm{\zeta}(t;f) = \sqrt{f \alpha \frac{N}{M}} S \bm{\xi}_{\alpha}(t) + \sqrt{f \beta \frac{N}{M}} S^\mathsf{T} \bm{\xi}_{\beta}(t) \end{equation} with vanishing mean, $\ave{\bm{\zeta}(t;f)}=\zerovec$, and correlator \begin{equation}\elabel{zeta_correlator} \ave{\bm{\zeta}(t;f) \bm{\zeta}^\mathsf{T}(t';f)} = f \frac{N}{M} (\alpha + \beta) SS^\mathsf{T} \delta(t-t') \ . \end{equation} As shown in \SMref{properties_of_S}, one eigenvalue of $S$, say $\lambda_0$, vanishes, which means that the corresponding $0$-mode of $\zeta$ has no variance. To write a path-density $\mathcal{P}[\bm{\zeta}(t;f)]$ for the noise in the current form, we would need to regularise this correlation matrix, as well as the deterministic part of \Eref{Langevin_smallFluctuation_with_fudge}. This is straightforwardly done, but somewhat messy. To avoid this distraction, we change to a basis which allows the removal of the $0$-mode from the path-densities altogether. With the noise defined above and the deterministic part of the original Langevin \Eref{Langevin_smallFluctuation_with_fudge} effectively captured by $\sigma$, \Eref{def_sigma}, it may finally be written as \begin{equation}\elabel{Langevin_simple} \dot{\pmb{\phi}}(t) = \sigma \, \pmb{\phi}(t) + \bm{\zeta}(t;f) \ .
\end{equation} \Eref{Langevin_smallFluctuation_with_fudge} thus contains two approximations: Firstly, $\pmb{\phi}\in\gpset{R}^M$ is a \emph{continuous degree of freedom} that evolves without any constraints other than the conservation imposed by $S$ and $S^\mathsf{T}$, even though it is introduced as a \emph{local, instantaneous particle count}, which requires $\phi_i(t)\in\gpset{N}_0$. Secondly, the additive noise has a (squared) amplitude proportional to the \emph{steady state} $N/M$, rather than the \emph{instantaneous} occupation number $\phi_i(t)$. There is no easy remedy for either of these two approximations, which ultimately will produce the wrong entropy production. \subsection{Properties of the lattice derivative matrix \texorpdfstring{$S$}{S}}\seclabel{properties_of_S} The matrix $S$ as defined in \Eref{def_S} plays the r{\^o}le of a spatial derivative and, unsurprisingly, has corresponding Fourier-mode eigenvectors, which we define as \begin{equation} \gpvec{e}_\mu = \left( \begin{array}{l} \exp{\mathring{\imath} k_\mu \cdot 1}\\ \exp{\mathring{\imath} k_\mu \cdot 2}\\ \exp{\mathring{\imath} k_\mu \cdot 3}\\ \qquad\vdots \\ \exp{\mathring{\imath} k_\mu \cdot M} \end{array} \right) \elabel{def_S_evec} \end{equation} where the $\cdot$ is there to emphasise the multiplication in the exponent. The coefficients $k_{\mu}=2\pi\mu/M$ parameterise the $M$ distinct modes $\mu=0,1,2,\ldots,M-1$ and, in fact $\gpvec{e}_\mu=\gpvec{e}_{M+\mu}$. By definition, the $j$th component of $\gpvec{e}_\mu$ is $(\gpvec{e}_\mu)_j=\exp{\mathring{\imath} k_{\mu} j}$. Writing $S_{ij}=-\delta_{i,j}^M + \delta_{i-1,j}^M$ with \begin{equation}\elabel{def_deltaM} \delta_{i,j}^M= \left\{ \begin{array}{ll} 1 & \text{ for } i=j \mod M \\ 0 & \text{ otherwise } \end{array} \right. 
\end{equation} the eigenvalues are easily determined, $S \gpvec{e}_\mu = \lambda_{\mu} \gpvec{e}_\mu$ with \begin{equation} \elabel{def_lambda} \lambda_\mu=\exp{-\mathring{\imath} k_{\mu}} - 1 \end{equation} and, correspondingly, $S^\mathsf{T} \gpvec{e}_\mu = \lambda^*_{\mu} \gpvec{e}_\mu$, so that the eigenvectors are orthogonal, \begin{equation}\elabel{def_deltaDbar} \gpvec{e}_\mu \cdot \gpvec{e}_\nu =\gpvec{e}^\dagger_{-\mu} \gpvec{e}_\nu = M \delta_{\mu+\nu,0}^M \ . \end{equation} The eigenvalue of the $0$-mode vanishes, $\lambda_M=\lambda_0=0$, and we will henceforth consider only $\mu=1,2,\ldots,M-1$. From \Eref{def_S} it follows by elementary calculation that \begin{equation} \elabel{sum_s_is_product_s} SS^\mathsf{T} = -\left( S+S^\mathsf{T} \right) = S^\mathsf{T} S \ , \end{equation} \latin{i.e.}\@\xspace $S$ and $S^\mathsf{T}$ commute. The $\gpvec{e}_{\mu}$ are also eigenvectors of $\sigma$ introduced in \Eref{def_sigma}, as $\sigma \gpvec{e}_{\mu} = p_{\mu} \gpvec{e}_{\mu}$ with \begin{equation}\elabel{def_p} p_{\mu} = \alpha \lambda_{\mu} + \beta \lambda_{\mu}^* = -(\alpha+\beta) (1-\cos(k_{\mu})) + \mathring{\imath} (\beta-\alpha)\sin(k_{\mu}) \ , \end{equation} which has negative real part for all $k_{\mu}$ as $k_{\mu}\ne0$ for $\mu\ne0$.
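As a quick cross-check of these spectral statements, \Erefs{def_lambda} and \eref{sum_s_is_product_s} may be verified numerically, for instance with the following Python sketch (illustrative only):

```python
# Illustrative check of the spectral properties of S: S e_mu = lambda_mu e_mu
# with lambda_mu = exp(-i k_mu) - 1, Eq. (def_lambda), and
# S S^T = -(S + S^T) = S^T S, Eq. (sum_s_is_product_s).
import numpy as np

M = 6
S = -np.eye(M) + np.roll(np.eye(M), 1, axis=0)

for mu in range(M):
    k = 2 * np.pi * mu / M
    e = np.exp(1j * k * np.arange(1, M + 1))   # (e_mu)_j = exp(i k_mu j)
    lam = np.exp(-1j * k) - 1
    assert np.allclose(S @ e, lam * e)
    assert np.allclose(S.T @ e, np.conj(lam) * e)

assert np.allclose(S @ S.T, -(S + S.T))
assert np.allclose(S @ S.T, S.T @ S)
```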
The eigenvectors $\gpvec{e}_{\mu}$ can be used to re-express $\bm{\zeta}(t;f)$ and $\pmb{\phi}(t)$ in terms of a more suitable basis, omitting the undesired $0$-mode, say, $\bm{\zeta}(t;f) = \frac{1}{M} \sum_{\mu=1}^{M-1} \gpvec{e}_\mu z_\mu(t;f)$, or simply \begin{equation}\elabel{zeta_in_terms_of_z} \bm{\zeta}(t;f) = \left( \begin{array}{c} \zeta_1(t;f)\\ \zeta_2(t;f)\\ \zeta_3(t;f)\\ \vdots\\ \zeta_M(t;f) \end{array} \right) = \frac{1}{M} \mathcal{E} \left( \begin{array}{c} z_1(t;f)\\ z_2(t;f)\\ z_3(t;f)\\ \vdots\\ z_{M-1}(t;f) \end{array} \right) \quad\text{ with }\quad \mathcal{E} = \left( \begin{array}{ccccc} \gpvec{e}_1 & \gpvec{e}_2 & \gpvec{e}_3 & \ldots & \gpvec{e}_{M-1} \end{array} \right) \end{equation} so that the $M\times (M-1)$ matrix $\mathcal{E}$ is composed from the column vectors $\gpvec{e}_1$, \ldots, $ \gpvec{e}_{M-1}$, and is ``essentially unitary'', because \begin{equation}\elabel{ECECdagger} \mathcal{E}^\dagger = \mathcal{E}^{*\mathsf{T}} = \left( \begin{array}{c} \gpvec{e}^\dagger_{1} \\ \gpvec{e}^\dagger_{2} \\ \gpvec{e}^\dagger_{3} \\ \vdots \\ \gpvec{e}^\dagger_{M-1} \end{array} \right) \text{ obeys } \mathcal{E}^\dagger\mathcal{E}=M\ident \ , \end{equation} with $\ident$ an $(M-1)\times(M-1)$ identity matrix, as $\gpvec{e}_{\mu}^*=\gpvec{e}_{-\mu}$ by construction. Of course, not being square $\mathcal{E}$ cannot be unitary and indeed $\mathcal{E}\EC^\dagger$ looks rather grim, \Eref{EEdagger}. It follows that \begin{equation}\elabel{def_z} \gpvec{z}(t;f) = \mathcal{E}^\dagger \bm{\zeta}(t;f) \ . \end{equation} By definition, the elements $(\mathcal{E})_{i\mu}=(\gpvec{e}_\mu)_i=\exp{\mathring{\imath} k_{\mu} i}$ depend only on the product $i\mu$ and therefore $\mathcal{E}$ is symmetric, $(\mathcal{E})_{i\mu}=(\mathcal{E})_{\mu i}$, for $1\le i,\mu\le M-1$.
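The stated properties of $\mathcal{E}$ may likewise be confirmed numerically; the following Python sketch (illustrative only) checks $\mathcal{E}^\dagger\mathcal{E}=M\ident$, the symmetry $(\mathcal{E})_{i\mu}=(\mathcal{E})_{\mu i}$, and that $\mathcal{E}\EC^\dagger$ equals $M$ times the identity minus the all-ones matrix, consistent with the equal-time correlator derived below:

```python
# Illustrative check of the mode matrix E: E^dagger E = M * 1
# (Eq. ECECdagger), E symmetric in its shared indices, and
# E E^dagger = M * 1 - J with J the all-ones matrix.
import numpy as np

M = 8
site = np.arange(1, M + 1)[:, None]        # site index j = 1..M
mode = np.arange(1, M)[None, :]            # mode index mu = 1..M-1
E = np.exp(2j * np.pi * mode * site / M)   # (E)_{j mu} = exp(i k_mu j)

assert np.allclose(E.conj().T @ E, M * np.eye(M - 1))
# (E)_{i mu} depends only on the product i*mu, hence is symmetric:
assert np.allclose(E[:M - 1, :], E[:M - 1, :].T)
assert np.allclose(E @ E.conj().T, M * np.eye(M) - np.ones((M, M)))
```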
One can further show that \begin{equation}\elabel{SE} S \mathcal{E} = \mathcal{E} \Lambda \quad\text{ with }\quad \Lambda = \left( \begin{array}{ccccc} \lambda_1 &&& & \mbox{\huge $0$ } \\ & \lambda_2\\ & & \lambda_3 \\ & & & \ddots \\ \mbox{\huge $0$} & & & & \lambda_{M-1} \end{array} \right) \quad\text{ so that }\quad \mathcal{E}^\dagger S^\mathsf{T} = \Lambda^* \mathcal{E}^\dagger \end{equation} and similarly \begin{equation}\elabel{StransposeE} S^\mathsf{T} \mathcal{E} = \mathcal{E} \Lambda^* \quad\text{ so that }\quad \mathcal{E}^\dagger S = \Lambda \mathcal{E}^\dagger \quad\text{ and }\quad \mathcal{E}^\dagger \sigma = \big(\alpha \Lambda + \beta \Lambda^*\big) \mathcal{E}^\dagger \ . \end{equation} The noise correlator of $\gpvec{z}$ is determined by \Erefs{zeta_correlator} and \eref{def_z} by direct computation \begin{equation}\elabel{z_correlator} \ave{\gpvec{z}(t;f) \gpvec{z}^\dagger(t';f)} = f N (\alpha+\beta) \delta(t-t') \Lambda\Lambda^* \end{equation} using $\mathcal{E}^\dagger S S^\mathsf{T} \mathcal{E} = M \Lambda\Lambda^*$. As opposed to the correlator of $\zeta$, \Eref{zeta_correlator}, the matrix on the right can be inverted and thus be used as the basis of an Onsager-Machlup functional. While the noise $\bm{\zeta}$ is fully determined by the noise $\gpvec{z}$, as neither has a $0$-mode, the density field $\pmb{\phi}$ can have any $0$-mode, only that it does not evolve in time and therefore is not to be considered a degree of freedom. We therefore introduce the modes $a_\mu(t)$ of $\pmb{\phi}$ with an extra parameter $\overline{\densvec}=\overline{\densfield} \gpvec{e}_0$ that is constant in time and has identical components, so that \begin{equation}\elabel{narrow_symmetry} \partial_t \overline{\densvec} = \zerovec = \sigma \overline{\densvec} \ , \end{equation} and $\overline{\densfield}=N/M$, \Eref{steady_state_density_Nardini}. 
Any $M$-component density field may then be written in terms of the background $\overline{\densvec}$ and an $(M-1)$-component vector $\gpvec{a}(t)$ of modes, \begin{equation}\elabel{def_avec} \pmb{\phi}(t) = \overline{\densvec} + \frac{1}{M} \mathcal{E} \gpvec{a}(t) \ , \quad\text{ so that }\quad \gpvec{a}(t) = \mathcal{E}^\dagger \pmb{\phi}(t) \ . \end{equation} With this in place, we can write the Langevin \Eref{Langevin_simple} in diagonal form, by acting from the left with $\mathcal{E}^\dagger$, \begin{equation}\elabel{Langevin_a} \dot{\gpvec{a}}(t) = \big(\alpha \Lambda + \beta \Lambda^*\big) \gpvec{a}(t) + \gpvec{z}(t;f) \ . \end{equation} Via \Eref{def_avec}, \Eref{Langevin_a} provides an alternative equation of motion of the present Langevin dynamics that is more easily analysed. \subsection{The Onsager-Machlup functional}\seclabel{OM_functional} The path density of the noise $\gpvec{z}$ follows from its correlator \Eref{z_correlator} as \begin{equation}\elabel{z_path_density} \mathcal{P}[\gpvec{z}(t;f)] \propto \Exp{-\frac{1}{2 f N (\alpha+\beta)} \int \dint{t} \gpvec{z}^\dagger(t;f) (\Lambda\Lambda^*)^{-1} \gpvec{z}(t;f) } \end{equation} and since a path $\gpvec{z}(t)$ is uniquely determined from that of $\gpvec{a}$ and vice versa via \Eref{Langevin_a}, $\gpvec{z}=\dot{\gpvec{a}}-(\alpha\Lambda+\beta\Lambda^*)\gpvec{a}$, the Onsager-Machlup functional follows immediately, $\mathcal{P}[\gpvec{a}(t)] \propto \exp{\mathcal{G}[\gpvec{a}(t)]}$ with \begin{equation}\elabel{def_OM} \mathcal{G}[\gpvec{a}(t)]= - \frac{1}{2 f N (\alpha+\beta)} \int \dint{t} \big(\dot{\gpvec{a}}-(\alpha\Lambda+\beta\Lambda^*)\gpvec{a}\big)^\dagger (\Lambda\Lambda^*)^{-1} \big(\dot{\gpvec{a}}-(\alpha\Lambda+\beta\Lambda^*)\gpvec{a}\big)\ , \end{equation} as the Jacobian is constant. 
By Fourier transforming the modes, \begin{equation}\elabel{avec_Fourier_transform} \gpvec{a}(\omega) = \int \dint{t} \exp{\mathring{\imath}\omega t}\gpvec{a}(t) \quad\text{ and }\quad \gpvec{a}(t) = \int \dintbar{\omega} \exp{-\mathring{\imath}\omega t}\gpvec{a}(\omega) \end{equation} with $\dintbar{\omega}=\dint{\omega}/(2\pi)$, the Onsager-Machlup functional becomes \begin{equation}\elabel{OM_a_omega} \mathcal{G}[\gpvec{a}(\omega)]= - \frac{1}{2 f N (\alpha+\beta)} \int \dintbar{\omega} \gpvec{a}^\dagger(\omega) \big(\mathring{\imath}\omega\ident+\alpha\Lambda+\beta\Lambda^*\big)^\dagger (\Lambda\Lambda^*)^{-1} \big(\mathring{\imath}\omega\ident+\alpha\Lambda+\beta\Lambda^*\big) \gpvec{a}(\omega) \ . \end{equation} Because all matrices in \Eref{OM_a_omega} are diagonal the following calculations are greatly simplified. For now, we will proceed with an infinite time domain, even when the calculation of the entropy production in \Sref{M_state_Nardini_entropy_production} will require a finite time interval to be taken to infinity systematically. \subsection{Correlator of the density's Fourier modes \texorpdfstring{ $\gpvec{a}(t)$}{a(t)}}\seclabel{correlator_a_t} Above, it was argued that the distribution $\mathcal{P}[\gpvec{z}]$ in \Eref{z_path_density} produces correlator \Eref{z_correlator}. Correspondingly, the distribution $\mathcal{P}[\gpvec{a}]$ with functional in \Eref{def_OM} produces the correlation matrix \begin{equation} \ave{\gpvec{a}(\omega) \gpvec{a}^\dagger(\omega')} = f N(\alpha+\beta) \left[ \left(\mathring{\imath}\omega\ident+\alpha\Lambda+\beta\Lambda^*\right)^\dagger (\Lambda \Lambda^*)^{-1} \left(\mathring{\imath}\omega\ident+\alpha\Lambda+\beta\Lambda^*\right)\right]^{-1} \delta\mkern-6mu\mathchar'26(\omega-\omega')\ . 
\end{equation} To invert the matrix in square brackets we use that $\Lambda$ is diagonal, so \begin{equation} \left[ \left(\mathring{\imath}\omega\mathbf{1}+\alpha\Lambda+\beta\Lambda^*\right)^\dagger (\Lambda \Lambda^*)^{-1} \left(\mathring{\imath}\omega\mathbf{1}+\alpha\Lambda+\beta\Lambda^*\right) \right]_{\mu\nu} = \delta_{\mu,\nu} (-\mathring{\imath}\omega+\alpha\lambda_\mu^*+\beta\lambda_\mu) (\lambda_\mu^*)^{-1} \lambda_\mu^{-1} (\mathring{\imath}\omega+\alpha\lambda_\mu+\beta\lambda^*_\mu) \ . \end{equation} Using \Eref{def_p} to write $\lambda_\mu$ in terms of the poles $p_\mu$, and $a_\mu^*(\omega) = a_{M-\mu}(-\omega)$ via \Erefs{def_avec} and \eref{avec_Fourier_transform}, the correlators become \begin{align}\elabel{a_correlator_elegant} \ave{a_\mu(\omega)a_\nu(\omega')} &= \frac{f N(\alpha+\beta) \lambda^*_\mu\lambda_\mu \delta^M_{\mu+\nu,0} \delta\mkern-6mu\mathchar'26(\omega+\omega') }{ (-\mathring{\imath}\omega-p_\mu) (\mathring{\imath}\omega-p_\mu^*) } \end{align} bound to be real, as every factor is multiplied by its complex conjugate. Both brackets in the denominator are of the form $\pm\mathring{\imath}\omega$ plus a number that has a strictly positive real part, as $\Re(p_\mu)<0$, \Eref{def_p}. The inverse Fourier transform of \Eref{a_correlator_elegant} is \begin{align} \ave{a_\mu(t)a_\nu(t')} &= \int \dintbar{\omega} \dintbar{\omega'} \exp{-\mathring{\imath} (\omega t+\omega't')} \ave{a_\mu(\omega)a_\nu(\omega')}\nonumber\\ &= f N \delta^M_{\mu+\nu,0} \Exp{\mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} (\alpha+\beta)(\lambda_\mu+\lambda_\mu^*)|t-t'|} \Exp{\mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} (\alpha-\beta) (\lambda_\mu-\lambda_\mu^*) (t-t')} \ , \elabel{a_corr} \end{align} using $\lambda_\mu\lambda_\mu^*=-(\lambda_\mu+\lambda_\mu^*)$ with $\Re(\lambda_\mu+\lambda_\mu^*)=-2(1-\cos(k_\mu))<0$.
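The equal-time limit of \Eref{a_corr}, $\ave{a_\mu(t)a_{-\mu}(t)}=fN$, can be cross-checked by integrating the Lorentzian spectral density of \Eref{a_correlator_elegant} over $\omega$, which yields $fN(\alpha+\beta)\lambda_\mu\lambda_\mu^*/(-2\Re(p_\mu))$; the following Python sketch (illustrative only) confirms that this equals $fN$ for every mode:

```python
# Illustrative check of the equal-time limit of Eq. (a_corr): the omega
# integral of Eq. (a_correlator_elegant) gives the stationary variance
# f*N*(alpha+beta)*|lambda_mu|^2 / (-2 Re p_mu), which equals f*N for all mu.
import numpy as np

M, alpha, beta, f, N = 9, 1.7, 0.2, 1.0, 50

for mu in range(1, M):
    k = 2 * np.pi * mu / M
    lam = np.exp(-1j * k) - 1                  # Eq. (def_lambda)
    p = alpha * lam + beta * np.conj(lam)      # Eq. (def_p)
    var = f * N * (alpha + beta) * abs(lam) ** 2 / (-2 * p.real)
    assert np.isclose(var, f * N)
```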
To validate this result, we may calculate the equal-time correlation matrix \begin{equation}\elabel{EEdagger} \ave{(\pmb{\phi}(t)-\overline{\densvec}) (\pmb{\phi}(t)-\overline{\densvec})^\mathsf{T}} =M^{-2} \mathcal{E} \ave{\gpvec{a}(t) \gpvec{a}^\dagger(t)} \mathcal{E}^\dagger =\frac{f N}{M^2} \left( \begin{array}{cccc} M-1 &&& \mbox{\Large $-1$ }\\ & M-1\\ & & \ddots \\ \mbox{\Large $\!\!-1$ } && & M-1 \end{array} \right) \end{equation} via \Eref{def_avec} and the matrix on the far right being $\mathcal{E}\EC^\dagger$, with the diagonal confirming the variance \Eref{steady_state_density_Nardini} with $f=1$. Closer inspection, for example using a Doi-Peliti field theory, shows that the correlator in \Eref{a_corr} is exact if $f=1$. In other words, the setup \Eref{Langevin_smallFluctuation_with_fudge} captures two-point correlations exactly. The entropy production below draws on the symmetric equal-time derivative, which we write here simply as \begin{align} \ave{a_\mu(t)\dot{a}_\nu(t)} &= \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} \lim_{t'\downarrow t}\frac{\mathrm{d}}{\mathrm{d} t'} \ave{a_\mu(t)a_\nu(t')} + \mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} \lim_{t'\uparrow t}\frac{\mathrm{d}}{\mathrm{d} t'} \ave{a_\mu(t)a_\nu(t')} \nonumber\\ &= -\mathchoice{\frac{1}{2}}{(1/2)}{\frac{1}{2}}{(1/2)} f N \delta^M_{\mu+\nu,0} (\alpha-\beta)(\lambda_\mu-\lambda_\mu^*) \ .
\elabel{symmetric_derivative} \end{align} \subsection{Entropy production} \seclabel{M_state_Nardini_entropy_production} Using Seifert's scheme \supplcite{Seifert:2005}, the entropy production of the $M$-state Markov process with path density $\mathcal{P}([\pmb{\phi}];T)$, \Eref{def_OM}, for a path of duration $T$ is \begin{equation}\elabel{entropy_production_step1} \dot{S}_{\text{int}} = \lim_{T\to\infty} \frac{1}{T} \ave{ \ln\left( \frac{\mathcal{P}([\pmb{\phi}];T)}{\mathcal{P}([\pmb{\phi}^R];T)} \right) } = \lim_{T\to\infty} \frac{1}{T} \ave{ \mathcal{G}([\gpvec{a}];T)-\mathcal{G}([\gpvec{a}^R];T) } \ , \end{equation} where the constant Jacobian to transform $\pmb{\phi}$ to $\gpvec{a}$ cancels in the fraction inside the logarithm and $\mathcal{G}([\gpvec{a}];T)$ is the Onsager-Machlup functional in \Eref{def_OM} with integration limits $0$ and $T$. In \Eref{entropy_production_step1}, $\gpvec{a}^R(t)$ denotes the reverse path, $\gpvec{a}^R(t)=\gpvec{a}(T-t)$ so that the Onsager-Machlup functional $\mathcal{G}([\gpvec{a}^R];T)$ can be evaluated by replacing $\dot{\gpvec{a}^R}(t)$ by $-\dot{\gpvec{a}}(T-t)$. The ensemble average in \Eref{entropy_production_step1} needs to be taken over all \emph{allowed, accessible} field configurations, but with $\pmb{\phi}$ being continuous and $\overline{\densvec}$ fixed this poses no restriction. In constructing the Onsager-Machlup functional, a decision had been made implicitly or explicitly about the It{\^o} \latin{vs}.\@\xspace Stratonovich nature of the derivative $\dot{\gpvec{a}}$. 
To avoid ambiguity, we use in the following the symmetrised version of the derivative, \Eref{symmetric_derivative}, so that \begin{align} \nonumber \dot{S}_{\text{int}} &= \lim_{T\to\infty} \frac{1}{T} \int_0^T\dint{t} \frac{1}{ f N (\alpha+\beta)} \left( \ave{\dot{\gpvec{a}}(t)^\dagger (\Lambda\Lambda^*)^{-1} (\alpha\Lambda+\beta\Lambda^*)\gpvec{a}(t)} + \ave{\big((\alpha\Lambda+\beta\Lambda^*)\gpvec{a}(t)\big)^\dagger (\Lambda\Lambda^*)^{-1} \dot{\gpvec{a}}(t)} \right) \\ &= \lim_{T\to\infty} \frac{1}{T} \int_0^T\dint{t} \frac{(\alpha-\beta)^2}{ 2(\alpha+\beta)} \sum_{\mu=1}^{M-1} \left( 2 - \frac{\lambda_\mu}{\lambda_\mu^*} - \frac{\lambda_\mu^*}{\lambda_\mu} \right) \ . \elabel{entropy_production_step1b} \end{align} The fudge-factor $f$ has cancelled because it is the amplitude of the correlator and thus appears with its inverse in the action functional. It is obvious that this type of cancellation will occur in any bilinear action. Gone with the fudge factor is also the particle number $N$. Otherwise, \Eref{entropy_production_step1b} shows all the characteristics of the entropy production rate of drift-diffusion: It is quadratic in the hopping bias, $\alpha-\beta$, inversely proportional to the total hopping rate $\alpha+\beta$, which plays the r{\^o}le of a diffusion constant, and the integrand is independent of $t$, as the system is in the stationary state, so that the integral simply cancels the normalisation $1/T$. With some algebraic manipulation, $\lambda_{\mu}/\lambda^*_{\mu}+\lambda^*_{\mu}/\lambda_{\mu}=-2\cos k_\mu$ and $\sum_{\mu=1}^{M-1} \cos k_\mu = -1$ for $M\ge2$, as the sum over all $M$ modes vanishes and $\cos k_0=1$, \Eref{entropy_production_step1b} becomes \begin{equation}\elabel{entropy_production_final} \dot{S}_{\text{int}} = \frac{(\alpha-\beta)^2}{\alpha+\beta} (M-2) \quad\text{ for }\quad M\ge2 \ .
\end{equation} This is the final result for the entropy production on the basis of the path probability from the Langevin equation \eref{Langevin_smallFluctuation_with_fudge} of the continuum particle number description. Taking the continuum limit $M\to\infty$ while maintaining finite drift and diffusion results in \Eref{entropy_production_final} diverging. Comparing \Eref{entropy_production_final} to the exact expression \Eref{M_state_exact_asympotic_result} immediately reveals some problems: While the logarithm might be recovered by making $\alpha-\beta$ small, the linearity in the particle number $N$ of the exact expression is replaced by a linearity in the number of states $M$ in \Eref{entropy_production_final}. It generally bears all the hallmarks of $M$ rather than $N$ being the number of degrees of freedom entering into this expression of the entropy production. One cannot argue that this $M$ is a proxy for the particle number $N$, as it is not multiplied by the expected particle number per site $\overline{\densfield}$. As a result $\dot{S}_{\text{int}}$ of \Eref{entropy_production_final} diverges as $M\to\infty$, irrespective of whether $\overline{\densfield}$ is held constant or not in the limit. In short, \Eref{entropy_production_final} produces the wrong result, consistent with this expression capturing the \emph{states} as the degrees of freedom, rather than the \emph{particles}. It may not come as a surprise that an approximation scheme that changes the phase space from countable and discrete to uncountable and continuous shows a very different entropy production. In this light, it appears to be anything but a coarse-graining to turn the $N$ individual positional degrees of freedom $i\in\{1,2,\ldots,M\}$, which evolve stochastically in time, into the vastly larger phase space of a density $\phi_i:\{1,2,\ldots,M\}\to\gpset{R}$, similarly evolving stochastically in time.
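The mode sum behind \Eref{entropy_production_final} is readily confirmed numerically; the following Python sketch (illustrative only) checks that $\sum_{\mu=1}^{M-1}(2-\lambda_\mu/\lambda_\mu^*-\lambda_\mu^*/\lambda_\mu)=2(M-2)$ and hence the prefactor $M-2$, extensive in the number of states rather than the number of particles:

```python
# Illustrative check of the mode sum in Eq. (entropy_production_step1b):
# sum_{mu=1}^{M-1} (2 - lambda/lambda* - lambda*/lambda) = 2(M-2), so that
# S_dot = (alpha-beta)^2/(alpha+beta) * (M-2), extensive in the number of
# states M, in contrast to the exact result N*(alpha-beta)*ln(alpha/beta).
import numpy as np

alpha, beta = 2.0, 0.5
for M in range(3, 12):
    lam = np.exp(-2j * np.pi * np.arange(1, M) / M) - 1
    total = np.sum(2 - lam / np.conj(lam) - np.conj(lam) / lam).real
    assert np.isclose(total, 2 * (M - 2))
    S_dot = (alpha - beta) ** 2 / (2 * (alpha + beta)) * total
    assert np.isclose(S_dot, (alpha - beta) ** 2 / (alpha + beta) * (M - 2))
```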
In principle, the path probabilities and thus the entropy production in the present framework can be derived exactly on the basis of Dean's multiplicative noise \cite{Dean:1996}. However, this results in the Onsager-Machlup functional carrying an inverse of the field, which generally poses a challenge, certainly a difficult one in the presence of interaction. Even when that is overcome, the expectation in \Eref{entropy_production_step1} would need to be taken over the set of allowed field configurations, which now would be sums of $\delta$-functions. This concludes the present derivation. Apparently, in this simple setup, the Langevin \Eref{Langevin_smallFluctuation_with_fudge} and the subsequent Onsager-Machlup functional \Eref{def_OM} capture the fluctuations correctly, but cannot be used as the starting point to construct the observable that determines the entropy production from the path probabilities, because they consider $\pmb{\phi}$ as continuous degrees of freedom subject to additive noise, whereas in the original process the components of $\pmb{\phi}$ are non-negative integers, $\phi_i\in\gpset{N}_0$. The relationship between Langevin equation and entropy production as exploited in \cite{NardiniETAL:2017} is in principle exact. But by assuming that the Langevin equation that approximates the dynamics can also be used to approximate the entropy production, it seems the wrong degree of freedom is subsequently considered as the one generating entropy. \endinput \section{Introduction} Active matter has been the focus of much research in statistical mechanics and biophysics over the past decade, because of many surprising theoretical features \cite{TonerTu:1995,Cates:2012,JuelicherGrillSalbreux:2018,MandalLiebchenLoewen:2019}, the rich phenomenology \cite{CatesTailleur:2015,LiebchenLevis:2017} and a plethora of applications \cite{DiLeonardoETAL:2010,XiETAL:2019,PietzonkaETAL:2019}. 
At the heart of active matter lies the conversion of chemical fuel into mechanical work, often in the form of self-propulsion, which leads to sustained non-equilibrium behaviour that is distinctly different from that of relaxing equilibrium thermodynamic systems \cite{Cates:2012}. How different it is, is quantified by the entropy production, which also measures the work performed. If we want to harvest and utilise this work, we need to quantify and control the system at the level it is observed and in the degrees of freedom that can be manipulated, rather than at a coarse-grained or smoothed level. The problem is illustrated by a team of horses observed from high above, where they may look almost like a droplet squeezing through a pore as they push past obstacles. At this level of description it may be difficult to distinguish a forward from a backward movie of the scene. Zooming in on the individual animal, however, reveals the details of their movement \cite{Muybridge:1878} and thus the difference between forward and backward immediately. If the horses are to be hitched to a plough, this is the level of observation needed to assess their utility. Assessing smoothed quasi-horses \cite{Mattuck:1992} does not help. Field theory has been the work-horse of statistical mechanics for many decades \cite{DombGreenLebowitz:1972}, because it allows for an efficient calculation and a systematic approximation of universal and non-universal observables in many-particle systems by means of a powerful machinery that can be cast in an elegant, physically meaningful language in the form of diagrams. To apply this framework to active particle systems, effective field theories have been proposed that use the continuously varying local particle density as the relevant degree of freedom. However, the entropy production of an approximating field theory is not necessarily a good approximation of the microscopic entropy production of the actual particle system.
An exact, fully microscopic framework to calculate the entropy production systematically in active many-particle systems remains a theoretical challenge. In recent years, several exact results have been found \cite{Gaspard:2004}, although those are limited to linear interaction forces \cite{LoosKlapp:2020,GodrecheLuck:2019}, or cases where the full time-dependent particle probability density is known \cite{CocconiETAL:2020}. The entropy production crucially depends on the degrees of freedom used to describe the system state. Coarse-graining by integrating out degrees of freedom or by mapping sets of microstates to mesostates generally underestimates the entropy production \cite{RoldanETAL:2021, Esposito:2012, CocconiSalbreuxPruessner:2022, LoosPersonalCommunication:2021}. In \cite{NardiniETAL:2017} the particle dynamics has instead been approximated by recasting it as a continuously varying density subject to a Langevin equation of motion with additive noise. This approach captures much of the physics well, notably predicting that most of the entropy is produced at interfaces between dense and dilute phases \cite{NardiniETAL:2017,MartinETAL:2021,FodorETAL:2022}. Yet, it does not provide a lower bound of the entropy production, as it replaces the countably many particle degrees of freedom by the uncountably many of a density in space. \SMref{app_ring} illustrates this in a simple, tractable example displaying a \emph{divergent} entropy production. Doi-Peliti field theories \cite{Doi:1976,Peliti:1985}, on the other hand, retain the particle nature of the constituent degrees of freedom, but can be cumbersome to derive, normally requiring discretisation and an explicit derivation of a master equation. Instead, we demonstrate how a Doi-Peliti action can be determined using the Fokker-Planck operator governing the single particle dynamics.
Interactions through external and pair potentials, as well as reactions can be added by virtue of the same Poissonian ``superposition principle'', which allows a master equation to account for concurrent processes by adding corresponding gain and loss terms. We further show how the ensuing perturbation theory and its diagrammatics can be used to derive the entropy production, which turns out to draw only on the bare propagator and the lowest order perturbative vertices, as well as certain correlation functions. The diagrammatics of a field theory provides the small number of terms needed to calculate entropy production \emph{exactly}. Our procedure results in very general formulae that need as system-specific input the details of the interaction potentials and a few low-order correlation functions. In the simplest case of non-interacting particles, the latter reduce to the one-point density, so that the entropy production becomes a spatial average of a local property. In general, if the interaction allows for up to $n$ particles interacting simultaneously, only the $(2n-1)$-point equal-time correlation function needs to be known, effectively quantifying where and how frequently such interactions take place. We thus introduce a generic scheme to derive tractable expressions for the entropy production of complex many-particle systems on the basis of their microscopic, stochastic equation of motion. We illustrate the technique in a number of examples. The present work brings the power of field theory to bear on active matter, while retaining particle entity, by calculating the entropy production of the relevant degrees of freedom using diagrammatics and avoiding approximations altogether. Details of our derivations can be found in the supplemental material. 
We list the key results according to the structure of the article: \begin{itemize} \item[]\hspace{-20pt}\emph{\Sref{DP_from_FP_main}}: We show how a Doi-Peliti field theory is readily derived from a Fokker-Planck Equation, in particular \Eref{def_action0_cont} from \Eref{FPeqn_main} (also \SMref{DP_plus_EP}). \item[]\hspace{-20pt}\emph{\Sref{entropy_production_main}}: We introduce the framework to calculate entropy production, proceeding from the definition \Eref{def_entropyProduction} via \Eref{def_entropyProduction_elegant} to the diagrammatics of \Erefs{transition_from_action} and \eref{Ln_from_propagators} (also \SMref{entropy_production} and \SMref{appendixWhichDiagramsContribute}). \item[]\hspace{-20pt}\emph{\Sref{interaction_main}}: We include interaction, determining the relevant diagrams in \Eref{def_trans_multi_diag}, which immediately simplify to produce general expressions for $N$ pair-interacting indistinguishable particles such as \Eref{entropyProduction_for_pairPot} (also \SMref{MultipleParticles}). A corresponding numerical scheme is readily derived as \Eref{interacting_EPR_numerics}. We find that the entropy production of pair-interacting particles draws at most on the $3$-point density \cite{LynnETAL:2022,ZhangGarcia-Millan:2022}. \item[]\hspace{-20pt}\emph{\Sref{examples_main}}: We give concrete examples: a Markov chain, drift-diffusion of a single particle (also \SMref{drift_diffusion_on_ring}), \Eref{entropy_production_drift_diffusion_main}, and two distinct particles on a circle (also \SMref{HarmonicTrawlers}). \end{itemize} We conclude in \emph{\Sref{summary_outlook}} with a discussion, a summary of our results and an outlook. 
\section{Field Theory from Fokker-Planck equation} \seclabel{DP_from_FP_main} An efficient way to characterise a many-particle system is in terms of occupation numbers, which allows for, in principle, arbitrary particle numbers and species without having to change the parameterisation, as opposed to a description in terms of the individual particle degrees of freedom. Doi-Peliti (DP) field theories provide a framework that readily caters for the spatio-temporal evolution of \emph{particles} in terms of occupation numbers, in contrast to, say, the response field formalism \cite{MartinSiggiaRose:1973,Janssen:1976,DeDominicis:1976,Taeuber:2014}, which needs correction terms in the form of Dean's equation \cite{Dean:1996,BotheETAL:2022}. As the derivation of a DP field theory from a master equation can be cumbersome, in particular in the presence of external fields \cite{TaeuberHowardVollmayr-Lee:2005,Cardy:2008}, we demonstrate in \SMref{DP_plus_EP} that a DP field theory of non-interacting particles inherits the evolution operator of the one-particle Fokker-Planck equation (FPE). In particular, any continuum limit that has to be taken in a lattice-based master equation to derive the continuum FPE can equivalently be applied in the field theory. 
In other words, if the FPE of a density $\rho(\gpvec{y},t)$ reads \begin{equation}\elabel{FPeqn_main} \partial_t \rho(\gpvec{y},t) = \SumInt_{\gpvec{x}} \FPop_{\gpvec{y},\gpvec{x}} \rho(\gpvec{x},t) \end{equation} with Fokker-Planck kernel $\FPop_{\gpvec{y},\gpvec{x}}$, then the Doi-Peliti action reads \begin{equation}\elabel{def_action0} \AC_0 = \int \dint{t} \SumInt_{\gpvec{x},\gpvec{y}} \tilde{\phi}(\gpvec{y},t) (\FPop_{\gpvec{y},\gpvec{x}} - \delta(\gpvec{y}-\gpvec{x})\partial_t) \phi(\gpvec{x},t) \end{equation} with annihilator field $\phi(\gpvec{x},t)$, Doi-shifted creator field \cite{Cardy:2008} $\tilde{\phi}(\gpvec{y},t)$ and observables calculated in the path-integral \cite{TaeuberHowardVollmayr-Lee:2005,Taeuber:2014} \begin{equation}\elabel{def_ave0} \ave[0]{\bullet} = \int\Dint{\phi}\Dint{\tilde{\phi}} \exp{\AC_0} \bullet \ . \end{equation} The simple relationship between \Erefs{FPeqn_main} and \eref{def_action0} is the first key-result of the present work. For continuous degrees of freedom, the kernel in \Eref{FPeqn_main} is usually written as $\FPop_{\gpvec{y},\gpvec{x}} = \hat{\LC}^{\dagger}_{\gpvec{x}} \delta(\gpvec{x}-\gpvec{y})=\hat{\LC}_{\gpvec{y}} \delta(\gpvec{x}-\gpvec{y})$ so that $\SumInt_{\gpvec{x}}\FPop_{\gpvec{y},\gpvec{x}} \rho(\gpvec{x},t) = \hat{\LC}_{\gpvec{y}}\rho(\gpvec{y},t)$ with FP operator $\hat{\LC}_{\gpvec{y}}$ and $\hat{\LC}^\dagger_{\gpvec{y}}$ its adjoint. In this case the action simplifies to \begin{equation}\elabel{def_action0_cont} \AC_0 = \int \dint{t} \int \ddint{y} \tilde{\phi}(\gpvec{y},t) (\hat{\LC}_\gpvec{y} - \partial_t) \phi(\gpvec{y},t) . 
\end{equation} The bare propagator \begin{equation} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} \corresponds \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} \end{equation} of the action \Eref{def_action0} is the Green function of the FPE~\eref{FPeqn_main}\@\xspace, \SMref{DP_plus_EP}, and thus solves $\partial_{t'} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \hat{\LC}_{\gpvec{y}} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ with $\lim_{t'\downarrow t} \ave[0]{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = \delta(\gpvec{y}-\gpvec{x})$. The action \Eref{def_action0} has by construction the same form as the action obtained by formally applying the MSR-trick \cite{MartinSiggiaRose:1973,Janssen:1976,DeDominicis:1976,Taeuber:2014} to the FPE, despite the absence of a noise term. However, the DP field theory retains the particle nature of the constituent degrees of freedom without the need for additional terms, such as Dean's \cite{Dean:1996,BotheETAL:2022}. As a small price, a DP field theory is endowed with a commutator relation that needs to be consulted every time an observable is constructed from operators. As a consequence, unlike an effective Langevin-equation on the density, the annihilator field $\phi$ of a DP field theory is not the particle density \cite{Cardy:2008}, and the action is not the particle density probability functional. Recasting the action in terms of a Langevin equation on the field $\phi$ can produce unexpected features, such as imaginary noise \cite{HowardTaeuber:1997,LefevreBiroli:2007}. In a sense, the fields of a DP field theory are proxies, such that after expressing a desired observable in terms of fields according to the operators, the expectation of these fields is identical to that of the observable. 
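The Green-function property is easily verified in closed form for the simplest continuum kernel used later as an illustration, drift-diffusion with diffusion constant $D$ and drift $w$, whose bare propagator is a Gaussian. A minimal symbolic sketch (the choice of kernel is purely illustrative, not a restriction of the formalism):

```python
import sympy as sp

x, w = sp.symbols('x w', real=True)
y = sp.symbols('y', real=True)
t, D = sp.symbols('t D', positive=True)

# Gaussian bare propagator of drift-diffusion with drift w and diffusion D
G = sp.exp(-(y - x - w*t)**2 / (4*D*t)) / sp.sqrt(4*sp.pi*D*t)

# residual of the FPE  d_t G = (D d_y^2 - w d_y) G; dividing by G leaves a
# rational expression that cancels identically
residual = sp.diff(G, t) - (D*sp.diff(G, y, 2) - w*sp.diff(G, y))
assert sp.simplify(residual / G) == 0   # G is indeed the Green function
```

The short-time limit of $G$ is the initial condition $\delta(y-x)$, completing the Green-function property.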
Drawing on the wealth of knowledge and intuition available for the construction of master equations, it is easy to incorporate into a DP field theory a wide range of terms, including reactions, transmutations, interactions, pair-potentials or external potentials, as the field theory's action inherits the additivity of concurrent Poisson processes in a master equation. Some terms can be incorporated into the FP operator, others have to be treated perturbatively. Henceforth, we will assume that the full action \begin{equation}\elabel{def_action_full} \AC = \AC_0 + \AC_{\text{pert}} \end{equation} may contain perturbative terms such that expectations are calculated by expanding the exponential on the right hand side of \begin{equation}\elabel{perturbative_expansion} \ave{\bullet} = \int\Dint{\phi}\Dint{\tilde{\phi}} \exp{\AC} \bullet = \ave{\ \bullet \ \Exp{\AC_{\text{pert}}}}_0 \ , \end{equation} and taking expectations as in \Eref{def_ave0}. Even without interaction, $\AC_{\text{pert}}$ may absorb terms of $\FPop$ that are not readily integrated in \Eref{def_ave0}, so that the solution of the FPE~\eref{FPeqn_main}\@\xspace becomes in fact a perturbation theory. This is illustrated in \SMref{drift_diffusion_on_ring} for drift-diffusion in an arbitrary, periodic potential and in \cite{Garcia-MillanPruessner:2021} for Run-and-Tumble in a harmonic potential. \section{Entropy production} \seclabel{entropy_production_main} In the present framework, the entropy production can be elegantly expressed in terms of the bare propagators and the perturbative part of the action. We will demonstrate this first for a single particle before generalising to multiple particles. Following the scheme by Gaspard \citep{Gaspard:2004} to calculate entropy production in \emph{Markovian} systems, we draw on the propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ as the probability (density) for a particle to transition from $\gpvec{x}$ at time $t$ to $\gpvec{y}$ at time $t'$. 
The internal entropy production of an evolving degree of freedom may then be written as a functional of the instantaneous probability (density) $\rho(\gpvec{x})$ to find it in state $\gpvec{x}$, namely \begin{equation}\elabel{def_entropyProduction} \dot{S}_{\text{int}}[\rho] = \SumInt_{\gpvec{x},\gpvec{y}} \rho(\gpvec{x}) \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} \left\{ \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} + \ln\left( \frac{\rho(\gpvec{x})}{\rho(\gpvec{y})} \right) \right\} \end{equation} with \begin{equation}\elabel{def_Op} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} = \lim_{t'\downarrow t} \frac{\mathrm{d}}{\mathrm{d} t'} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} \end{equation} and \begin{equation}\elabel{def_Ln} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} = \lim_{t'\downarrow t} \ln\left(\frac{\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}}{\ave{\phi(\gpvec{x},t')\tilde{\phi}(\gpvec{y},t)}}\right) \end{equation} as we show in \SMref{entropy_production}. \Eref{def_entropyProduction} is the starting point for the derivation of the entropy production from a Doi-Peliti field theory. Much of what follows focuses on how to extract $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ and $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ from the action. 
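In the simplest, discrete setting both limits can be evaluated directly, since the full propagator is then the matrix exponential of the rate matrix. A numerical sketch with a hypothetical three-state rate matrix (the matrix and the use of scipy are our illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# hypothetical three-state rate matrix M[y, x]: rate of x -> y,
# columns summing to zero
M = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -3.0,  1.0],
              [ 1.0,  2.0, -3.0]])

dt = 1e-6
P = expm(M * dt)                  # full propagator over a short time dt

# the short-time derivative of the propagator recovers the rate matrix
K = (P - np.eye(3)) / dt
assert np.allclose(K, M, atol=1e-4)

# the log-ratio of forward and backward propagators recovers
# the log-ratio of the rates (off-diagonal)
assert np.isclose(np.log(P[1, 0] / P[0, 1]),
                  np.log(M[1, 0] / M[0, 1]), atol=1e-4)
```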
As the field-theory correctly shows, \SMref{entropy_production}, if the states $\gpvec{y},\gpvec{x}$ are discrete and the process is a simple Markov chain, $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ reduces to the Markov (rate) matrix $\mathring{\Transition}_{\gpvec{y}\gpvec{x}}$ of the process of transitioning from $\gpvec{x}$ to $\gpvec{y}$, and $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ is the logarithm of ratios of these rates, \begin{equation}\elabel{Ln_discrete_case} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} = \ln\left( \frac{\mathring{\Transition}_{\gpvec{y}\gpvec{x}}}{\mathring{\Transition}_{\gpvec{x}\gpvec{y}}} \right) \ , \end{equation} \Erefs{Kn_from_TransitionRate} and \eref{Ln_from_TransitionRate}. If the states $\gpvec{x},\gpvec{y}$ are continuous, then $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ can be cast as a kernel, which in the absence of a perturbative contribution to the action is identical to the FP kernel, $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}=\FPop_{\gpvec{y},\gpvec{x}}$, given that the propagator is the Green function of the FPE, \SMref{entropy_production}. Integrating by parts then gives \begin{equation}\elabel{def_entropyProduction_elegant} \dot{S}_{\text{int}}[\rho] = \SumInt_{\gpvec{x},\gpvec{y}}\rho(\gpvec{x})\delta(\gpvec{y}-\gpvec{x}) \hat{\LC}^\dagger_{\gpvec{y}} \left\{ \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} + \ln\left( \frac{\rho(\gpvec{x})}{\rho(\gpvec{y})} \right) \right\} \ . \end{equation} In principle, the density to use in \Erefs{def_entropyProduction} and \eref{def_entropyProduction_elegant} is given by the propagation from the initial state up until time $t$, in which case it becomes an explicit function of $t$ \begin{equation} \rho(\gpvec{x};t) =\ave{\phi(\gpvec{x},t)\tilde{\phi}(\gpvec{x}_0,t_0)} \ . 
\end{equation} In general, this density might be well approximated by an effective theory that omits the microscopic details entering into the entropy production via \Erefs{def_Op} and \eref{def_Ln}. The entropy production \Erefs{def_entropyProduction} and \eref{def_entropyProduction_elegant} simplifies further if $\rho(\gpvec{x})$ is stationary, \begin{equation} \rho(\gpvec{x}) = \SumInt_{\gpvec{y}} \ave{\phi(\gpvec{x},t')\tilde{\phi}(\gpvec{y},t)} \rho(\gpvec{y}) \end{equation} for any $t'-t>0$, in which case $\ln(\rho(\gpvec{x})/\rho(\gpvec{y}))$ disappears from \Eref{def_entropyProduction} and the expression reduces to the negative of the external entropy production \cite{CocconiETAL:2020}. In that case, \Eref{def_entropyProduction} may be interpreted as the average $\dot{S}_{\text{int}}=\spave{\dot{\sigma}(\gpvec{x})}=\SumInt_{\gpvec{x}} \rho(\gpvec{x}) \dot{\sigma}(\gpvec{x}) $ of the \emph{local} entropy production \begin{equation}\elabel{def_local_entropy} \dot{\sigma}(\gpvec{x}) = \SumInt_{\gpvec{y}} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} \ , \end{equation} which derives from the dynamics only and is independent of the density. We will discuss the formalism in this form in greater detail below, after introducing interaction. A priori, the full propagator is needed in \Erefs{def_Op} and \eref{def_Ln}. However, as it turns out, provided the process is time-homogeneous, generally in the discrete case as well as in continuous perturbation theories about a Gaussian (details in \SMref{drift_diffusion_on_ring}), $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ and $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ draw only on the bare propagator and possibly on the first order perturbative term. 
As detailed in \SMref{entropy_production}, the key argument for this simplification is that the propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ only ever enters in the limit $t'\downarrow t$, either in the form of an explicit derivative, \Eref{def_Op}, or in the form of a ratio, \Eref{def_Ln}, which may also draw on the derivative via L'H{\^o}pital. The propagator therefore needs to be determined only to first order in small $t'-t$. If the full propagator $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ is given by a perturbative expansion of the action \Eref{perturbative_expansion}, diagrammatically written as \begin{widetext} \begin{equation}\elabel{propagator_expansion} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} \corresponds \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} + \tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \tDblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \ldots\ , \end{equation} \end{widetext} in principle every order in the perturbation theory might contribute to $\ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)}$ to first order in $t'-t$. As outlined in the following, closer inspection, however, reveals a simple relationship, namely that \emph{the $n$th order in $t'-t$ is fully given by the first $n$ diagrams} on the right hand side of \Eref{propagator_expansion}. In continuum field theories, this needs careful analysis, but it holds for perturbation theories about drift-diffusion, \SMref{drift_diffusion_on_ring}, where the highest order derivative in $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ is a second derivative, and $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$, necessarily odd in $\gpvec{y}-\gpvec{x}$, therefore does not need to be known beyond second order. 
Leaving the details to the supplement \SMref{entropy_production} and \SMref{appendixWhichDiagramsContribute}, we proceed by demonstrating that the first time derivative of the second order contribution \tDblobbedPropagator{}{} on the right hand side of \Eref{propagator_expansion} vanishes at $t'=t$. This follows from differentiating with respect to $t'$ the inverse Fourier-transform of \tDblobbedPropagator{}{}, which for time-homogeneous processes has the form \begin{equation}\elabel{derivative_integral} \dot{I}(t'-t)=\int \dintbar{\omega'} \frac{-\mathring{\imath} \omega' \exp{-\mathring{\imath} \omega' (t'-t)} C} {\prod_{j=1}^{3} (-\mathring{\imath} \omega' + p_j)} \ , \end{equation} where the three propagators are $(-\mathring{\imath} \omega' + p_j)^{-1}$ with $j=1,2,3$ and $C$ denotes the couplings. The poles $-\mathring{\imath} p_j$ may be repeated, which does not affect the argument. Crucially, all poles are situated in the \emph{lower} half-plane, which is required by causality of each bare propagator entering in \Eref{derivative_integral}. After taking $t'\to t$ the contour can be closed in the \emph{upper} half-plane, as the integrand $\propto 1/\omega'^2$ decays fast enough. It follows that $\dot{I}(0)$, \Eref{derivative_integral}, vanishes. This argument easily generalises to higher derivatives and correspondingly higher orders. Consequently, only the first two diagrams on the right hand side of \Eref{propagator_expansion} contribute to the propagator to first order in $t'-t$. The argument above draws on the structure of the diagrams where bare propagators connect ``blobs''. The diagrams in the propagators of \Erefs{def_Op} and \eref{def_Ln} that end up contributing, contain at most one such blob. How the blob enters into $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ and $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ is explained in the following. 
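The residue argument above can be reproduced in closed form by summing the residues of \Eref{derivative_integral} for generic distinct poles (with the coupling set to $C=1$, an illustrative simplification): a chain of two bare propagators contributes at first order in $t'-t$, a chain of three only at second order. A symbolic sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)             # stands for t' - t > 0
p1, p2, p3 = sp.symbols('p1 p2 p3', positive=True)

def contour_integral(poles):
    """Closed form of int dw/(2 pi) exp(-i w t) / prod_j (-i w + p_j),
    obtained by summing the residues in the lower half-plane."""
    return sum(sp.exp(-pj*t) / sp.Mul(*[pk - pj for pk in poles if pk != pj])
               for pj in poles)

I2 = contour_integral([p1, p2])        # one blob, two bare propagators
I3 = contour_integral([p1, p2, p3])    # two blobs, three bare propagators

# one blob contributes at first order in t'-t ...
s2 = sp.series(I2, t, 0, 2).removeO()
assert sp.simplify(s2.coeff(t, 0)) == 0
assert sp.simplify(s2.coeff(t, 1)) == 1
# ... two blobs only at second order: value and first derivative vanish
s3 = sp.series(I3, t, 0, 3).removeO()
assert sp.simplify(s3.coeff(t, 0)) == 0
assert sp.simplify(s3.coeff(t, 1)) == 0
```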
The blobs can contain tadpole-like loops only in the presence of source terms, such as \Eref{tadpole_diagram}. If such source terms are absent, \emph{the blobs are merely the vertices of the perturbative part of the action}. If the phenomenon studied is not time-homogeneous, $\omega$ might have sinks and sources and the structure of the integrals representing contributions to the propagator is no longer of the form \Eref{derivative_integral}. With these provisos in place, the kernel needed in \Eref{def_Op} reduces to the bare propagator plus the first order correction, \begin{equation}\elabel{transition_from_action} \bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}\corresponds \FPop_{\gpvec{y},\gpvec{x}} + \fullBlobb{\gpvec{x}}{\gpvec{y}} \ , \end{equation} where $\FPop$ refers to the non-perturbative part of the action \Eref{def_action0}, and $\tfullBlobb{}{}$ to the first order, one-particle irreducible, amputated contributions due to the perturbative part of the action, the ``blob''. It may contain perturbative contributions due to the single-particle Fokker-Planck operator, or due to additional processes, such as interactions and reactions. In field-theoretic terms, $\FPop_{\gpvec{y},\gpvec{x}}$ is the inverse bare propagator evaluated at $\omega=0$ and $\tfullBlobb{}{}$ is a contribution to the ``self-energy''. In stochastic particle systems, $\FPop_{\gpvec{y},\gpvec{x}}=(D\partial_y^2-w\partial_y)\delta(y-x)$ would be drift-diffusion (\SMref{drift_diffusion_on_ring}) and $\tfullBlobb{}{}=-r$ an additional extinction with rate $r$. Generally, no higher orders, such as $\tfullDoubleBlobb{}{}$, or any loops carrying $\omega$, such as the middle term in \Eref{Npropagators_oneLoop}, enter (\SMref{appendixWhichDiagramsContribute}). 
While maybe unsurprising as far as the kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$ is concerned, this simplification to one blob carries through to the logarithm on the basis of \Erefs{Ln_discrete_case} and \eref{Ln_from_propagator} if states are discrete, and by an expansion of the form \begin{widetext} \begin{equation}\elabel{Ln_from_propagators} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}} \corresponds \lim_{t'\downarrow t}\left\{ \ln \left( \frac{ \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} }{ \tbarePropagator{\gpvec{y},t}{\gpvec{x},t'} } \right) + \frac {\tblobbedPropagator{\gpvec{x},t}{\gpvec{y},t'}} {\tbarePropagator{\gpvec{x},t}{\gpvec{y},t'}} - \frac {\tblobbedPropagator{\gpvec{y},t}{\gpvec{x},t'}} {\tbarePropagator{\gpvec{y},t}{\gpvec{x},t'}} \right\} \ , \end{equation} \end{widetext} in the continuum, \Eref{Ln_for_continuous} and similarly \SMref{drift_diffusion_on_ring}, \Eref{Ln_from_drift_diffusion}. In summary, what is needed to calculate the entropy production \Eref{def_entropyProduction} of a single degree of freedom is: (a) the density $\rho(\gpvec{x};t)$, which at stationarity may be well approximated by an effective theory, and (b) the \emph{microscopic} action \Eref{def_action_full} to construct kernel $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}$, via \Eref{transition_from_action}, and logarithm $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ via \Eref{Ln_from_propagators} using at most one blob. 
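For plain drift-diffusion, where $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ can be read off the Gaussian bare propagator, this recipe produces the local entropy production $w^2/D$. A minimal numerical sketch (parameter values purely illustrative; the finite-$\tau$ evaluation is exact here only because the Gaussian log-ratio is linear in $y-x$):

```python
import numpy as np

D, w, tau, x0 = 0.7, 1.3, 0.05, 0.0   # illustrative values

def G(y, x, t):
    """Gaussian bare propagator of drift-diffusion."""
    return np.exp(-(y - x - w*t)**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)

# sigma(x0) = (1/tau) int dy G(y,tau|x0) ln[ G(y,tau|x0) / G(x0,tau|y) ]
y, dy = np.linspace(x0 - 5.0, x0 + 5.0, 200001, retstep=True)
forward, backward = G(y, x0, tau), G(x0, y, tau)
sigma_dot = np.sum(forward * np.log(forward / backward)) * dy / tau

assert abs(sigma_dot - w**2 / D) < 1e-6   # local entropy production w^2/D
```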
\subsection{Many conserved particles} \seclabel{entropy_production_multiple_main} In the presence of $N>1$ distinguishable particles, \Eref{def_entropyProduction} remains in principle valid if $\gpvec{x},\gpvec{y}$ are understood to encapsulate all $N$ particle coordinates at once, with the density in \Eref{def_entropyProduction} replaced by the joint density $\rho(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ and the propagator in \Erefs{def_Op} and \eref{def_Ln} replaced by the joint propagator $\bigl\langle\phi_1(\gpvec{y}_1,t')\linebreak[1]\phi_2(\gpvec{y}_2,t')\linebreak[1]\ldots\linebreak[1]\phi_N(\gpvec{y}_N,t')\linebreak[1]\tilde{\phi}_1(\gpvec{x}_1,t)\linebreak[1]\tilde{\phi}_2(\gpvec{x}_2,t)\linebreak[1]\ldots\linebreak[1]\tilde{\phi}_N(\gpvec{x}_N,t)\bigr\rangle$, where the indices of the fields refer to \emph{distinguishable} particle species. Without interaction, the overall entropy production is the sum of the individual entropy productions, \Eref{entropyProductionDensity_independent_distinguishable_final_sum}. If particles are \emph{indistinguishable}, dropping the indices generally results in $N!$ times as many terms from permutations of the fields, as well as the joint density $\rho(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ at stationarity being $N!$ times that of distinguishable particles. At the same time, the phase space summed or integrated over in \Eref{def_entropyProduction} has to be adjusted to reflect that occupation numbers are the degrees of freedom, not particle positions \cite{CocconiETAL:2020,ZhangGarcia-Millan:2022}. 
In the case of sparse occupation, where every site is occupied by at most one particle, a condition usually met in continuum space, this can be done by means of the Gibbs factor \cite{Sethna:2006}, which amounts to dividing the phase space of distinguishable particles by $N!$, \begin{widetext} \begin{equation}\elabel{entropyProduction_multipleParticles} \dot{S}_{\text{int}}^{(N)}[\rho] = \frac{1}{(N!)^2} \int\ddint{x_1}\ldots\ddint{x_N} \int\ddint{y_1}\ldots\ddint{y_N} \rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N) \bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{y}_N,\gpvec{x}_1,\ldots,\gpvec{x}_N} \end{equation} \end{widetext} with $N$-particle kernel $\bm{\mathsf{K}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{x}_N}$ and logarithm $\operatorname{\bm{\mathsf{Ln}}}^{(N)}_{\gpvec{y}_1,\ldots,\gpvec{x}_N}$ defined by using the joint propagator $\bigl\langle\phi(\gpvec{y}_1,t')\linebreak[1]\ldots\linebreak[1]\tilde{\phi}(\gpvec{x}_N,t)\bigr\rangle$ on the right of \Erefs{def_Op} and \eref{def_Ln}. At stationarity, the Gibbs factor precisely cancels the multiplicity of the terms mentioned above, \SMref{N_independent_indistinguishable_particles}. Again, without interaction, the entropy production of $N$ indistinguishable particles is linear in $N$, \Eref{entropy_production_indistinguishableN_as_density_independent3}. 
The diagrams contributing to the joint propagator are generally disconnected, say $ \tikz[baseline=-2.5pt]{ \draw[tAactivity] (0.5,0.12) -- (-0.5,0.12); \draw[tAactivity] (0.5,0.0) -- (-0.5,0.0); \draw[tAactivity] (0.5,-0.12) -- (-0.5,-0.12); } $ and may, in principle, involve any number of vertices, such as $\tfullBlobb{}{}$, say $ \tikz[baseline=-2.5pt]{ \tgenVertex{0,0.2} \draw[tAactivity] (0.5,0.2) -- (-0.5,0.2); \draw[tAactivity] (0.5,0.03) -- (-0.5,0.03); \draw[tAactivity] (0.5,-0.12) -- (-0.5,-0.12); } $ or $ \tikz[baseline=-2.5pt]{ \tgenVertex{0,0.2} \draw[tAactivity] (0.5,0.2) -- (-0.5,0.2); \tgenVertex{0,0.0} \draw[tAactivity] (0.5,0.0) -- (-0.5,0.0); \draw[tAactivity] (0.5,-0.15) -- (-0.5,-0.15); } $. However, as detailed in \SMref{appendixWhichDiagramsContribute}, the argument that reduces contributions to a single-particle propagator to at most one vertex similarly applies to multiple-particle propagators, so that any blob inside a joint propagator raises the order of $t'-t$ by one. Any contribution to the joint propagator $\bigl\langle\phi(\gpvec{y}_1,t)\linebreak[1]\ldots\linebreak[1]\tilde{\phi}(\gpvec{x}_N,t)\bigr\rangle$ in the joint kernel $\bm{\mathsf{K}}_{\gpvec{y}_1,\ldots,\gpvec{x}_N}$ or the joint logarithm $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y}_1,\ldots,\gpvec{x}_N}$ therefore contains at most one vertex. The set of diagrams to be considered can be reduced further with an argument best made after allowing for interaction. \subsection{Interaction} \seclabel{interaction_main} In the presence of interaction, the joint propagator contains contributions of the form $ \tikz[baseline=-2.5pt]{ \tgenVertex{0,0} \draw[tAactivity] (0.5,0.1) -- (0,0) -- (-0.5,0.1); \draw[tAactivity] (0.5,-0.1) -- (0,0) -- (-0.5,-0.1); } $. Each such vertex is also of order $t'-t$, \SMref{appendixWhichDiagramsContribute}. If particles are conserved, each vertex must have at least as many incoming legs as outgoing ones. 
At this stage, the joint propagator entering the $N$ particle kernel $\bm{\mathsf{K}}^{(N)}$ and logarithm $\operatorname{\bm{\mathsf{Ln}}}^{(N)}$ is of the form \newcommand{0.5}{0.5} \begin{widetext} \begin{multline}\elabel{def_trans_multi_diag} \ave{ \phi(\gpvec{y}_1,t') \ldots\phi(\gpvec{y}_N,t')\tilde{\phi}(\gpvec{x}_1,t) \ldots\tilde{\phi}(\gpvec{x}_N,t) }\\ \corresponds \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.3cm] \node at (0.5,0) [right] {$\gpvec{x}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \node at (0,-0.2) {$\vdots$}; \begin{scope}[yshift=-0.65cm] \node at (0.5,0) [right] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } +\text{perm.}+ \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.3cm] \tgenVertex{0,0} \node at (0.5,0) [right] {$\gpvec{x}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.0cm] \node at (0.5,0) [right] {$\gpvec{x}_2,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \node at (0,-0.2) {$\vdots$}; \begin{scope}[yshift=-0.65cm] \node at (0.5,0) [right] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } +\text{perm.}+ \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.3cm] \tgenVertex{0,-0.15} \node at (0.5,0) [right] {$\gpvec{x}_1,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_1,t'$}; \draw[tAactivity] (0.5,0) -- (0,-0.15) -- (-0.5,0); \node at (0.5,-0.3) [right] {$\gpvec{x}_2,t$}; \node at (-0.5,-0.3) [left] {$\gpvec{y}_2,t'$}; \draw[tAactivity] (0.5,-0.3) -- (0,-0.15) -- (-0.5,-0.3); \end{scope} \node at (0,-0.2) {$\vdots$}; \begin{scope}[yshift=-0.65cm] \node at 
(0.5,0) [right] {$\gpvec{x}_N,t$}; \node at (-0.5,0) [left] {$\gpvec{y}_N,t'$}; \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } +\text{perm.} +\order{(t'-t)^2} \ , \end{multline} \end{widetext} each with all distinct permutations of incoming and outgoing particle coordinates, $\gpvec{x}_i$ and $\gpvec{y}_i$ respectively, as indicated by $\text{perm.}$. What does \emph{not} enter (\SMref{appendixWhichDiagramsContribute}) are terms involving more than one vertex, such as \renewcommand{0.5}{0.7} \begin{equation}\elabel{Npropagators_oneLoop} \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.5cm] \tgenVertex{0,0} \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.2cm] \tgenVertex{0,0} \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \begin{scope}[yshift=0.00cm] \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \node at (0,-0.22) {$\vdots$}; \begin{scope}[yshift=-0.65cm] \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } \,\text{ or }\, \tikz[baseline=-2.5pt]{ \begin{scope}[yshift=0.15cm] \tgenVertex{0.35,0.15} \tgenVertex{-0.35,0.15} \draw[tAactivity] (0.5,0.3) -- (0.35,0.15) to[out=-135,in=-45] (-0.35,0.15) -- (-0.5,0.3); \draw[tAactivity] (0.5,0.0) -- (0.35,0.15) to[out=135,in=45] (-0.35,0.15) -- (-0.5,0.0); \end{scope} \begin{scope}[yshift=0.00cm] \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} \node at (0,-0.2) {$\vdots$}; \begin{scope}[yshift=-0.65cm] \draw[tAactivity] (0.5,0) -- (-0.5,0); \end{scope} } \,\text{ or }\, \tikz[baseline=-6pt]{ \begin{scope}[yshift=0.00cm] \tgenVertex{-0.35,0.2} \tgenVertex{0.35,-0.2} \draw[tAactivity] (0.5,0.2) -- (-0.35,0.2) -- (-0.5,0.2); \draw[tAactivity] (0.5,-0.2) -- (0.35,-0.2) -- (-0.35,0.2) -- (-0.5,0.0); \draw[tAactivity] (0.5,0.0) -- (0.35,-0.2) -- (-0.5,-0.2); \end{scope} \node at (0,-0.35) {$\vdots$}; \draw[tAactivity] (0.5,-0.8) -- (-0.5,-0.8); } \ . 
\end{equation} Even with the restriction to a single blob, \Eref{def_trans_multi_diag} contains many diagrams, seemingly involving many permutations of many initial and final coordinates. Similarly, the $N$-point equal time density $\rho^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_N)$ is needed in \Eref{entropyProduction_multipleParticles}, which would be arduous to determine. However, because every bare propagator degenerates into a $\delta$-function as $t'\downarrow t$, the expression for the entropy production simplifies considerably. As discussed in \SMref{MultipleParticles} any bare propagator featuring together with, \latin{i.e.}\@\xspace multiplying, a blobbed diagram, effectively drops out in the limit $t'\downarrow t$. As the bare propagators drop away, so does the need for higher order densities. As a result the entropy production of a system whose ``largest blob'' has $n$ incoming and $n$ outgoing legs can be calculated on the basis of the $(2n-1)$-point joint density, restricting a hierarchy of terms to $2n-1$ rather than $N$ \cite{LynnETAL:2022}. 
For example, $N$ indistinguishable particles with self-propulsion velocity $\wvec$, diffusion $D$ and pair-interaction, $n=2$, via an even potential $U$ have entropy production, \begin{widetext} \begin{subequations} \elabel{entropyProduction_for_pairPot} \begin{align} \dot{S}_{\text{int}}^{(N)}[\rho^{(N)}] =& \elabel{entropyProduction_for_pairPot1} \int \ddint{\gpvec{x}_1} \rho_{1}^{(N)}(\gpvec{x}_1) \left\{\frac{\wvec^2}{D}\right\}\\ &+ \elabel{entropyProduction_for_pairPot2} \int \ddint{\gpvec{x}_{1,2}} \rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2) \left\{ \frac{1}{D} \left(\nabla U(\gpvec{x}_1-\gpvec{x}_2)\right)^2 -\laplace U(\gpvec{x}_1-\gpvec{x}_2) \right\}\\ &+ \elabel{entropyProduction_for_pairPot3} \int \ddint{\gpvec{x}_{1,2,3}} \rho_{3}^{(N)}(\gpvec{x}_1,\gpvec{x}_2,\gpvec{x}_3) \left\{ \frac{1}{D} \nabla U(\gpvec{x}_1-\gpvec{x}_2) \cdot \nabla U(\gpvec{x}_1-\gpvec{x}_3) \right\} \ , \end{align} \end{subequations} \end{widetext} which is \Eref{entropyProduction_interacting_indistinguishable_example} with external potential $\Upsilon\equiv0$ and further simplified by using that the pair potential $U$ is even. It demonstrably vanishes in the absence of drift $\wvec$, as shown in \SMref{no_EPR_without_drift}. \Eref{entropyProduction_for_pairPot} and \eref{entropyProduction_interacting_indistinguishable_example} are \emph{exact} results, assuming that pair interaction is the highest-order interaction and that occupation is sparse, which here means merely that two particles cannot be located at exactly the same point in space, something that in the continuum is practically always fulfilled. The densities $\rho_{n}^{(N)}(\gpvec{x}_1,\ldots,\gpvec{x}_n)$ thus denote the density of $n$ \emph{distinct} particles at positions $\gpvec{x}_1,\ldots,\gpvec{x}_n$, normalising to $N!/(N-n)!$. Each term in curly brackets in \Eref{entropyProduction_for_pairPot} can be cast as a local entropy production, depending on one, two or three coordinates. 
\Eref{entropyProduction_for_pairPot1} is the entropy production or work due to self-propulsion by individual particles, $N\wvec^2/D$, \Eref{entropyProduction_for_pairPot2} is the work due to two particles exerting equal and opposite forces on each other, and \Eref{entropyProduction_for_pairPot3} is the work performed by one particle in the potential of another particle as it is being pushed or pulled by a third particle. In the form of \Eref{entropyProduction_for_pairPot}, entropy production in an experiment or simulation can be estimated efficiently by using $Q$ samples $q=1,2,\ldots,Q$ of $N$ particle coordinates $\gpvec{x}^{(q)}_{i}$ with $i=1,\ldots,N$, \begin{widetext} \begin{equation}\elabel{interacting_EPR_numerics} \dot{S}_{\text{int}}=\frac{1}{Q}\sum_{q=1}^Q \left[ \sum_{i_1=1}^N \dot{\sigma}^{(1)}_1\left(\gpvec{x}^{(q)}_{i_1}\right) + \sum_{i_1,i_2=1\atop i_1\ne i_2}^N \dot{\sigma}^{(2)}_2\left(\gpvec{x}^{(q)}_{i_1},\gpvec{x}^{(q)}_{i_2}\right) + \sum_{i_1,i_2,i_3=1\atop i_1\ne i_2\ne i_3\ne i_1}^N \dot{\sigma}^{(3)}_3\left(\gpvec{x}^{(q)}_{i_1},\gpvec{x}^{(q)}_{i_2},\gpvec{x}^{(q)}_{i_3}\right) \right]\ , \end{equation} \end{widetext} with the $\dot{\sigma}^{(i)}_i$, $i=1,2,3$, given by the three pairs of curly brackets in \Eref{entropyProduction_for_pairPot} and generally in \Eref{def_entropyProductionDensities_indistinguishable}. All sums run over distinct particle indices, so that for example $(1/Q)\sum_q \sum_{i_1,i_2=1\atop i_1\ne i_2}^N \delta(\gpvec{x}_1-\gpvec{x}^{(q)}_{i_1})\delta(\gpvec{x}_2-\gpvec{x}^{(q)}_{i_2})$ estimates $\rho_{2}^{(N)}(\gpvec{x}_1,\gpvec{x}_2)$. Entropy production in a particle system with interaction can thus be estimated on the basis of ``snapshots'' and the microscopic action, without the need to introduce a new measure \cite{RoETAL:2021,TociuETAL:2020}. In the case of $n$-particle interaction, it generally draws on equal-time $(2n-1)$-point densities and the time-evolution terms given by the action.
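The structure of this snapshot estimator can be sketched in a few lines of code. The following minimal example works in one dimension and assumes, purely for illustration, a harmonic pair potential $U(r)=kr^2/2$, so that the three local entropy production densities reduce to $w^2/D$, $(k(x_1-x_2))^2/D - k$ and $k^2(x_1-x_2)(x_1-x_3)/D$; the function name and all parameter values are hypothetical.

```python
from itertools import permutations

def entropy_production_estimate(samples, w, D, k):
    """Estimate the entropy production rate from Q snapshots of N particle
    coordinates in one dimension, mirroring the three-term structure of the
    snapshot estimator: a one-point self-propulsion term, a two-point
    pair-force term and a three-point cross term.  The harmonic pair
    potential U(r) = k r^2 / 2 is an illustrative choice, so that
    U'(r) = k r and U''(r) = k."""
    sigma1 = lambda x1: w**2 / D
    sigma2 = lambda x1, x2: (k * (x1 - x2))**2 / D - k
    sigma3 = lambda x1, x2, x3: k**2 * (x1 - x2) * (x1 - x3) / D

    total = 0.0
    for xs in samples:                   # one snapshot of N coordinates
        total += sum(sigma1(x) for x in xs)
        # ordered pairs and triples of *distinct* particle indices
        total += sum(sigma2(*p) for p in permutations(xs, 2))
        total += sum(sigma3(*p) for p in permutations(xs, 3))
    return total / len(samples)          # average over snapshots
```

The sums over `permutations` implement the restriction to distinct particle indices in the estimator above.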
Neither the full $N$-point density nor the $2N$-point two-time correlation function is needed, contrary to what \Eref{entropyProduction_multipleParticles} might suggest. If the number of particles is not fixed but becomes itself a degree of freedom, the phase space integrated or summed over in \Eref{def_entropyProduction} needs to be adjusted. This case is beyond the scope of the present work. \section{Examples} \seclabel{examples_main} In the following we illustrate the methods introduced above by calculating the entropy production of 1) a continuous-time Markov chain, 2) a drift-diffusion Brownian particle on a torus with potential, and 3) two drift-diffusion particles on a circle interacting via a harmonic pair potential. \paragraph*{Continuous time Markov chain.} The single-particle master equation of a continuous-time Markov chain is \Eref{FPeqn_main} with $\FPop_{\gpvec{y}\gpvec{x}}$ the Markov-matrix $\MC_{\gpvec{y}\gpvec{x}}$ for transitions from discrete state $\gpvec{x}$ to $\gpvec{y}$. Following standard procedure \cite{TaeuberHowardVollmayr-Lee:2005}, \SMref{DP_plus_EP}, the action of the resulting field theory is \Eref{def_action0}, \begin{equation} \AC_0 = \sum_{\gpvec{x}\gpvec{y}} \dint{t} \tilde{\phi}(\gpvec{y},t) (\MC_{\gpvec{y}\gpvec{x}} - \delta_{\gpvec{y},\gpvec{x}}\partial_t) \phi(\gpvec{x},t) \ . \end{equation} From \Erefs{def_entropyProduction}, \eref{Ln_discrete_case} and \eref{transition_from_action}, in the absence of a perturbative term, the entropy production immediately follows, \begin{equation}\elabel{EntropyProductionMarkovChain} \dot{S}_{\text{int}}[\rho] = \sum_{\gpvec{x},\gpvec{y}} \MC_{\gpvec{y}\gpvec{x}}\rho(\gpvec{x}) \ln\left( \frac {\MC_{\gpvec{y}\gpvec{x}}\rho(\gpvec{x})} {\MC_{\gpvec{x}\gpvec{y}}\rho(\gpvec{y})} \right) \ , \end{equation} \SMref{EP_from_propagators}, consistent with \cite{Gaspard:2004,CocconiETAL:2020}.
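As a quick illustration of \Eref{EntropyProductionMarkovChain}, the following sketch evaluates the sum for a biased ring of states with forward rate $p$ and backward rate $q$, an illustrative choice for which the stationary distribution is uniform and the entropy production reduces to $(p-q)\ln(p/q)$, vanishing exactly at detailed balance $p=q$. The function and variable names are hypothetical.

```python
import math

def markov_entropy_production(M, rho):
    """dS/dt = sum_{x != y} M[y][x] rho[x] ln( M[y][x] rho[x] / (M[x][y] rho[y]) ).
    Diagonal entries of the Markov matrix do not contribute to the sum."""
    n = len(rho)
    S = 0.0
    for x in range(n):
        for y in range(n):
            if x != y and M[y][x] > 0.0:
                S += M[y][x] * rho[x] * math.log(
                    M[y][x] * rho[x] / (M[x][y] * rho[y]))
    return S

def biased_ring(n, p, q):
    """Markov matrix of an n-state ring, forward rate p, backward rate q."""
    M = [[0.0] * n for _ in range(n)]
    for x in range(n):
        M[(x + 1) % n][x] = p      # hop x -> x + 1
        M[(x - 1) % n][x] = q      # hop x -> x - 1
        M[x][x] = -(p + q)         # probability conservation
    return M
```

For a three-state ring with $p=2$, $q=1$ and uniform $\rho$, this returns $(p-q)\ln(p/q)=\ln 2$.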
The contributions to first order in the perturbative vertex in \Erefs{transition_from_action}, \eref{Op_in_diagrams} and \eref{Ln_from_propagator} ensure that this expression for the entropy production does not change even when some contributions to $\MC$ are moved to the perturbative part of the action, $\BC$ in \Erefs{Op_in_diagrams} and \eref{Ln_from_propagator}. \paragraph*{Drift-diffusion.} This process is a paradigmatic example of a continuous-space process, as many other, more complicated processes, in particular many active matter models \cite{ZhangPruessner:2022,ZhangETAL:2022,ZhenPruessner:2022}, can be studied as perturbations of it. The continuity of the degree of freedom means that a transform is needed to render the process local in a new variable, here the Fourier mode $\gpvec{k}$, so that the path integral \Eref{perturbative_expansion} can be performed. However, as detailed in \SMref{drift_diffusion_on_ring}, the transform can in principle spoil the relationship between the number of blobs in a diagram and its order in $t'-t$ as discussed around \Eref{derivative_integral}. It turns out, \SMref{Fourier_transformation}, that in the case of drift-diffusion processes, the number of blobs determines the leading order in the distance $\gpvec{y}-\gpvec{x}$ of any contribution finite in the limit $t'\downarrow t$, in fact preserving \Erefs{transition_from_action} and \eref{Ln_from_propagators}. To be general, we allow for an external potential, but to render drift-diffusion stationary even without an external potential, we restrict it to a $d$-dimensional torus of circumference $L$. As detailed in \SMref{drift_diffusion_on_ring}, the drift can be captured either exactly or perturbatively, while an external potential generally has to be treated perturbatively.
The FPE of a particle diffusing with constant $D$ and drifting with velocity $\wvec$ on a torus with periodic external potential $\Upsilon(\gpvec{y})$ is \Eref{FPeqn_main} with $\hat{\LC}_\gpvec{y}=D\nabla_{\gpvec{y}}^2 + \nabla_{\gpvec{y}} (- \wvec +\Upsilon'(\gpvec{y}))$, where the operators act on everything to the right and $\Upsilon'=\nabla\Upsilon$ denotes the gradient of the potential. The propagator to first order is (\SMref{drift_diffusion_on_ring}, \Erefs{phiphitilde_in_diagrams_all}) \begin{widetext} \begin{align}\elabel{entropy_production_drift_diffusion_main} \ave{\phi(\gpvec{y},t')\tilde{\phi}(\gpvec{x},t)} = & \frac{\theta(t'-t)\exp{-\frac{(\gpvec{y}-\gpvec{x})^2}{4D(t'-t)}}}{(4\pi D(t'-t))^{d/2}} \left( 1 + (\gpvec{y}-\gpvec{x})\cdot \frac{\wvec-\nabla\Upsilon(\gpvec{x})}{2 D} + \order{(\gpvec{y}-\gpvec{x})^2} \right) \nonumber\\ \corresponds & \tbarePropagator{\gpvec{x},t}{\gpvec{y},t'} + \tblobbedDashedPropagator{\gpvec{x},t}{\gpvec{y},t'} + \blobbedDashedPotPropagator{\gpvec{x},t}{\gpvec{y},t'}+ \order{(t'-t)^2} \ , \end{align} \end{widetext} so that with \Eref{Ln_from_propagators} \begin{multline} \operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}=\frac{\gpvec{y}-\gpvec{x}}{2D} \cdot \big[2\wvec -\Upsilon'(\gpvec{x}) - \Upsilon'(\gpvec{y})\big] \\ + \order{(\gpvec{y}-\gpvec{x})^3} \ , \end{multline} \Erefs{Ln_drift_diffusion_Wissel} and \eref{Ln_drift}. 
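The first-order expression for $\operatorname{\bm{\mathsf{Ln}}}_{\gpvec{y},\gpvec{x}}$ can be sanity-checked against the exact drift-diffusion kernel. The minimal sketch below is restricted to one dimension, vanishing potential, and the open line rather than the torus, purely for simplicity; there the log-ratio of forward and backward transition densities over displacement $r$ is exactly $rw/D$, in agreement with the formula above at $\Upsilon'\equiv0$.

```python
import math

def drift_kernel(r, t, w, D):
    """Exact 1d drift-diffusion propagator density at displacement r, time t."""
    return math.exp(-(r - w * t)**2 / (4.0 * D * t)) \
        / math.sqrt(4.0 * math.pi * D * t)

# illustrative parameter values
w, D, t, r = 0.7, 1.3, 1e-2, 0.05

# log-ratio of forward (x -> x+r) and backward (x+r -> x) transition densities
ln_ratio = math.log(drift_kernel(r, t, w, D) / drift_kernel(-r, t, w, D))

# first-order formula Ln_{y,x} = (y - x) * 2w / (2D) at vanishing potential
ln_formula = r * w / D
```

The Gaussian prefactors cancel in the ratio, so the agreement is exact for constant drift, not merely to leading order in $r$.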
Using this with $\bm{\mathsf{K}}_{\gpvec{y},\gpvec{x}}=\hat{\LC}_\gpvec{y}\delta(\gpvec{y}-\gpvec{x})$ in \Eref{def_entropyProduction} then correctly produces \Erefs{local_entropy_Gw} \cite{CocconiETAL:2020} \begin{multline}\elabel{entropyProduction_driftdiffusion_main} \dot{S}_{\text{int}}[\rho] = \int_0^L \ddint{x} \Biggl\{ \rho(\gpvec{x}) \left( \frac{(\wvec-\Upsilon'(\gpvec{x}))^2}{D} - \laplace \Upsilon(\gpvec{x})\right)\\ + D \frac{(\rho'(\gpvec{x}))^2}{\rho(\gpvec{x})} + \Upsilon'(\gpvec{x}) \rho'(\gpvec{x}) \Biggr\} \end{multline} with the last two terms, which involve $\rho'(\gpvec{x})=\nabla_{\gpvec{x}} \rho(\gpvec{x})$, cancelling at stationarity, $0=\partial_t\rho=D\rho''-\partial_x\left[(\wvec-\Upsilon')\rho\right]$, and the first two terms involving the potential cancelling at vanishing current $0=\gpvec{j}=-D\rho'+(\wvec-\Upsilon')\rho$. \paragraph*{Harmonic trawlers.} A free particle with diffusion constant $D$, drifting with velocity $w$ on a circle without external potential, produces entropy with rate $w^2/D$ \cite{CocconiETAL:2020}. Entropy production being extensive, without interaction two identical particles produce twice as much entropy. If they have different drift velocities $w_1$ and $w_2$, the total entropy production is $(w_1^2+w_2^2)/D$. If they are coupled by an attractive (binding) pair-potential, say by a spring, they behave like a single particle drifting with velocity $(w_1+w_2)/2$ and diffusing with constant $D/2$, so that the overall entropy production is $(w_1+w_2)^2/(2D)$. If $w_1=w_2$, the entropy production is identical to that of free particles, but if $w_1\ne w_2$, the pair potential becomes ``visible''. While easily derived using physical arguments, determining this expression perturbatively from a field theory that is ``oblivious'' to such physical intricacies is a non-trivial task and a good litmus test for the power of the scheme presented in this work.
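The entropy production functional for drift-diffusion is also straightforward to evaluate numerically. The sketch below discretises a one-dimensional ring; the sinusoidal potential and the Boltzmann density $\rho\propto\exp(-\Upsilon/D)$ are illustrative choices (not taken from the text) that probe the two cancellations mentioned above: without potential and with uniform $\rho$ the functional returns $w^2/D$, while at vanishing current ($w=0$, Boltzmann $\rho$) it returns zero.

```python
import numpy as np

def entropy_production_ring(w, D, a, N=512, L=1.0):
    """Evaluate the 1d drift-diffusion entropy production functional on a ring
    of circumference L, with the illustrative potential
    Upsilon(x) = a sin(2 pi x / L) and the Boltzmann density
    rho ~ exp(-Upsilon / D), which is the zero-current state when w = 0."""
    x = np.linspace(0.0, L, N, endpoint=False)
    h = L / N
    k = 2.0 * np.pi / L
    U = a * np.sin(k * x)               # Upsilon
    dU = a * k * np.cos(k * x)          # Upsilon'
    d2U = -a * k**2 * np.sin(k * x)     # Upsilon''
    rho = np.exp(-U / D)
    rho /= rho.sum() * h                # normalise to unit mass on the ring
    drho = -dU * rho / D                # rho' of the Boltzmann density
    integrand = (rho * ((w - dU)**2 / D - d2U)
                 + D * drho**2 / rho + dU * drho)
    return integrand.sum() * h          # periodic rectangle rule
```

The residual at equilibrium stems only from quadrature error, which is spectrally small for smooth periodic integrands.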
As detailed in \SMref{HarmonicTrawlers}, the entropy production is indeed correctly reproduced, drawing in particular explicitly on \Eref{def_trans_multi_diag}. The process is generalised to arbitrary attractive pair-potentials in \SMref{generalised_trawlers} and further qualified in \SMref{no_EPR_without_drift}, where it is confirmed that in the present framework arbitrarily many identical pair-interacting particles do not produce entropy without drift. \section{Discussion, summary and outlook} \seclabel{summary_outlook} Above we have demonstrated how to construct a Doi-Peliti field theory, \Erefs{def_action0}, \eref{def_ave0} and \eref{def_action0_cont}, from the Fokker-Planck or master equation \eref{FPeqn_main} governing single-particle dynamics, without having to resort to explicit discretisation. The resulting expression for the entropy production, \Eref{def_entropyProduction_elegant}, is of a particularly simple form, indicating that entropy production can be interpreted as a mean of a \emph{local} expression, \Eref{def_local_entropy}. Additional processes, namely reactions and interactions, can be added and, if necessary, treated perturbatively, \Erefs{def_action_full} and \eref{perturbative_expansion}. Expressing the entropy production in terms of propagators, \Erefs{def_entropyProduction}, \eref{def_Op} and \eref{def_Ln}, it turns out that the perturbative contributions enter only to first order, \Erefs{transition_from_action} and \eref{Ln_from_propagators}, because each such perturbation introduces a term of order $t'-t$, \Eref{derivative_integral}. Loops enter into the entropy production only in the presence of external sources (tadpole-like diagrams).
Treating interaction perturbatively, the results are generalised to many interacting particles, \SMref{MultipleParticles}, with significant simplifications taking place, as diagrams with more than one blob do not enter, \latin{e.g.}\@\xspace \Eref{def_trans_multi_diag} (also \SMref{appendixWhichDiagramsContribute}), and disconnected diagrams simplify as bare propagators turn into $\delta$-functions. The resulting stationary entropy production, \Eref{entropyProduction_for_pairPot}, can again be understood as a spatial average involving equal-time densities. If interactions involve at most $n$ particles at once, the highest-order density needed is the $(2n-1)$-point density. Because of this structure, it can be used to estimate the entropy production in experimental and numerical systems, \Eref{interacting_EPR_numerics}, as well as on the basis of effective theories. While the results are derived by means of a Doi-Peliti field theory, they apply universally. Results such as \Erefs{entropyProduction_for_pairPot}, \eref{entropyProduction_interacting_indistinguishable_example} or \eref{entropyProduction_driftdiffusion_main} are exact and can be extended to include higher-order interactions or even reactions. They can be used to answer vital questions in applied and theoretical active matter that have previously been studied using approximative schemes \cite{TociuETAL:2020,RoETAL:2021}, such as the energy dissipation in hair-cell bundles \cite{RoldanETAL:2021}, in neuronal responses to visual stimuli \cite{LynnETAL:2022}, or in Kramers' model \cite{Neri:2022}. The general recipe to calculate entropy production in any system is thus to determine the basic or ``bare'' characteristics, such as the self-propulsion speed or the pair-potential, as well as the $(2n-1)$-point densities, which are used as the weight in an integral like \Eref{entropyProduction_for_pairPot}. Extensions of the formulae derived in the present work are a matter of inserting the new blobs into the field theory.
The present scheme allows the systematic calculation of the entropy production based on the microscopic dynamics of the process, while retaining the particle nature of the degrees of freedom. Calculating the entropy production of particle systems field-theoretically has been attempted before, notably by Nardini \latin{et~al}.\@\xspace \cite{NardiniETAL:2017}. Their approach is based on an effective dynamics, given by Active Model B \cite{WittkowskiETAL:2014}, which describes the particle density as a continuous function in space by means of a Langevin equation with additive noise, in order to smooth or coarse-grain the dynamics. However, \emph{particle systems} necessarily require multiplicative noise, for example in the form of Dean's equation \cite{Dean:1996,BotheETAL:2022}, to allow the density to faithfully capture the dynamics of \emph{particles}. If not endowed with a mechanism to maintain the particle nature of the degrees of freedom, recasting the dynamics in terms of an unconstrained density constitutes a massive increase of the available phase space rather than a form of coarse-graining. There is no reason to assume that the entropy production of such an effective field theory is an approximation of the microscopic entropy production. We are not aware of an example of a \emph{particle system} whose entropy production is correctly captured by an effective field theory based on a Langevin equation on the continuously varying particle density with additive noise \cite{NardiniETAL:2017}.
In fact, applying the coarse-graining scheme of \cite{NardiniETAL:2017} to a most basic, exactly solvable process produces a spurious dependence on the size of the state space and a lack of extensivity in the particle number, \SMref{app_ring}, \Eref{entropy_production_final}, while the present field theory trivially produces the exact expression for the microscopic entropy production, \Eref{M_state_exact_asympotic_result} in \SMref{model_description}. We argue that the observable of entropy production needs to be constructed from the microscopic dynamics, which is partially integrated out or ``blurred'' in effective theories of the particle density. These generally capture correlations effectively and efficiently, but they do so at the expense of smoothing the microscopic details that give rise to entropy production, as they change the description of the dynamics from one in terms of particles to one in terms of space. However, the expression for the entropy production needs to be determined from the microscopics of the particle dynamics, even when eventually calculated from $(2n-1)$-point densities. Effective theories may contain the necessary information to determine these densities, but not to construct the functional for the entropy production in the first place. In future research we may want to further exploit the general expressions for the entropy production of multiple interacting particles, such as \Eref{entropyProduction_for_pairPot} and those derived in \SMref{MultipleParticles}. One may ask, in particular, for bounds on the entropy production of an ensemble of interacting particles and for the shape of the pair-interaction potential that maximises it. The present framework can also be extended to the grand canonical ensemble, where particles are created and annihilated, as they branch and coagulate.
From a theoretical point of view, it might be interesting to consider the case of non-sparse occupation by identical particles. The grand challenge, however, is to extend the present framework to non-Markovian systems, so as to calculate the entropy production in systems where not all degrees of freedom are known, such as the orientation-integrated entropy production of Run-and-Tumble particles in a harmonic potential \cite{Garcia-MillanPruessner:2021}. \begin{acknowledgments} We would like to thank Tal Agranov, Ignacio Bordeu, Michael Cates, Luca Cocconi, {\'E}tienne Fodor, Sarah Loos, Cesare Nardini, Johannes Pausch, Patrick Pietzonka, Guillaume Salbreux, Elsen Tjhung, Benjamin Walter, Fr{\'e}d{\'e}ric van Wijland, Ziluo Zhang and Zigan Zhen for many enlightening discussions of aspects of the present work. RG-M was supported in part by the European Research Council under the EU’s Horizon 2020 Programme (Grant number 740269). RG-M acknowledges support from a St John’s College Research Fellowship, University of Cambridge. \end{acknowledgments}
\section{Introduction} Oscillations and instabilities of neutron stars were always considered among the promising sources of gravitational waves. The systematic study of non-axisymmetric neutron star oscillations began in the 1960s with the pioneering works of Thorne and collaborators (cf. \cite{1967ApJ...149..591T, 1969ApJ...158....1T} and subsequent papers), in which they laid out the equations governing the perturbations of compact stars in general relativity. The numerical solution of these equations has proven highly challenging, and it took nearly two decades before Lindblom and Detweiler found an advantageous formulation of the eigenvalue problem that allowed them to determine the complex frequencies of the acoustic modes of sufficiently realistic stellar models (\cite{1983ApJS...53...73L, 1985ApJ...292...12D}). These results did not conclude the investigation of these modes; in particular, \cite{1991RSPSA.432..247C, 1991RSPSA.434..449C} turned to the perturbations of relativistic stars and studied their oscillations as a scattering problem. In the mid-1980s, inspired by a toy model (\cite{1986GReGr..18..913K}), it was suggested that the dynamic spacetime of a neutron star exhibits its very own class of modes, which were christened $w$-modes (\cite{1992MNRAS.255..119K}). The short damping times of these modes (comparable to those of black holes) pose numerical challenges, but the application of the continued fraction method (\cite{1985RSPSA.402..285L}) and numerical integration along anti-Stokes lines (\cite{1995MNRAS.274.1039A}) considerably improved the accuracy in the calculation of both \emph{fluid} and \emph{spacetime} modes and led to the discovery of a new class of $w$-modes, the so-called $w_{\rm II}$-modes (\cite{1993PhRvD..48.3467L}). This advancement in computational accuracy allowed the study of many neutron star equations of state, leading to the discovery of universal relations involving frequencies and damping times of oscillation modes.
These universal relations provided ways of solving the inverse problem within the so-called \emph{gravitational wave asteroseismology}, allowing one to set constraints on the mass and/or radius of the neutron star, and eventually the nuclear equation of state, via the observation of oscillations (see, e.g., \cite{1996PhRvL..77.4134A, 1998MNRAS.299.1059A, 2001MNRAS.320..307K, 2004PhRvD..70l4015B}). With increasing computational power during the 1990s, the first attempts were made to increase the dimensionality of the hitherto (due to the restriction to spherical symmetry) purely radial problem and include time as a second dimension. The first successful time evolutions of perturbations of relativistic neutron stars were reported by \cite{1998PhRvD..58l4012A}. \cite{2000PhDT.......170R, 2001PhRvD..63f4018R} reformulated those evolution equations by means of the ADM-formalism (\cite{ADMformalism}) and introduced a non-uniform radial grid to ensure stable numerical evolution when using realistic equations of state. \subsection{Asteroseismology} The different oscillation patterns of a neutron star are characterized by their restoring force, e.g.,~ $p$(pressure)-modes, $g$(gravity)-modes, $i$(Coriolis)-modes, $s$(shear)-modes or $w$(wave)-modes.
The $f$-mode is the fundamental mode of the $p$-mode sequence and it is the oscillation mode most likely to be excited in violent processes such as neutron star formation by supernova core collapse (\cite{2018MNRAS.474.5272T, 2019MNRAS.482.3967T}), the pre-merger interaction of neutron stars (see, e.g., \cite{1994MNRAS.270..611L, 1995MNRAS.275..301K, 2011MNRAS.412.1331F, 2012PhRvD..86l1501G, 2017ApJ...837...67C, 2018PhRvD..98j4005C, 2020PhRvD.101h3002S, 2021MNRAS.506.2985K,2021MNRAS.tmp.2385K}), and the early post-merger oscillations of the final object (\cite{2006PhRvD..73f4027S, 2012PhRvL.108a1101B, 2013PhRvD..88d4026H, 2014PhRvD..89j4021B, 2014PhRvD..90b3002B, 2016CQGra..33r4002L, 2017ApJ...850L..34B, PhysRevD.93.124051, 2015PhRvD..91f4001T, 2018PhRvL.120v1101D, 2019PhRvD.100j4029B}). If the merging neutron stars are of relatively small mass and the post-merger object is a fast-spinning neutron star, the unstable $f$-mode oscillations can lead to its spin-down (\cite{2015PhRvD..92j4040D}). The $f$-mode is associated with major density variations and thus can potentially be an emitter of copious amounts of gravitational radiation. The emission of gravitational waves is the primary reason for the mode's rapid damping, at least for newly born neutron stars. The effort to associate the patterns of oscillations with the bulk parameters of the stars, e.g.,~ their mass, radius or equation of state (henceforth EoS), was initiated in the mid-1990s and continued for almost two decades, advancing the field of gravitational wave asteroseismology (\cite{1996PhRvL..77.4134A, 1998MNRAS.299.1059A, 2001MNRAS.320..307K, 2004PhRvD..70l4015B, 2005PhRvL..95o1101T, 2010ApJ...714.1234L}).
To date, very robust empirical relations have been derived for non-rotating neutron stars, connecting observables such as frequency, damping time, or moment of inertia $I$ to the bulk properties; for example, relations of the form $\sigma_0 = \alpha + \beta \sqrt{ M_0/R_0^3 }$ or $M\sigma_0 = F(M_0^3/I)$ (cf. \cite{1996PhRvL..77.4134A, 2010ApJ...714.1234L}) could provide the average density or the moment of inertia of the star if the $f$-mode frequency $\sigma_0$ is known. In the era of gravitational wave astronomy, the various oscillation patterns (traced already in non-linear numerical simulations, e.g., \cite{2004MNRAS.352.1089S,2006PhRvD..74f4024K,Baiotti08,2009PhRvD..79b4002B,2010PhRvD..81h4055Z,2011MNRAS.418..427S,2015PhRvL.115i1101B}), if observed, can provide a wealth of information about the emitting sources, and their effects can leave imprints both in the gravitational and in the electromagnetic spectrum. Moreover, recent studies relate the $f$-mode frequencies to the Love numbers (\cite{2019PhRvC..99d5806W, 2020NatCo..11.2553P, 2021MNRAS.503..533A, 2021PhRvD.104b3005M}) and even to the post-merger short gamma-ray bursts (\cite{2019ApJ...884L..16C}). \subsection{Rotation} Nearly all the previously mentioned studies were concerned with non-rotating stars. In nature, neutron stars will always rotate and their rotation rate may reach extreme values. The inclusion of rotation proves difficult since the extreme rotation rates that neutron stars may (and do) reach do not allow one to neglect the star's oblateness, which removes the spherical symmetry from the system; this in turn makes the mathematical formulation much more involved. As a first approximation, rotation was treated perturbatively, too, which allowed even rotating stars to be considered as spherically symmetric (\cite{1967ApJ...150.1005H}).
In this so-called \emph{slow-rotation approximation}, the perturbation equations gain considerably in complexity and were first written down by \cite{1992PhRvD..46.4289K} in a gauge introduced by \cite{1957PhRv..108.1063R}. Even though the problem remains one-dimensional, its solution is not straightforward as (among other technicalities) the outgoing-wave boundary condition at infinity is elusive. Notwithstanding, \cite{1998ApJ...502..708A} successfully applied this formalism and discovered that $r$-modes are prone to the so-called CFS-instability, named after their discoverers \cite{1970PhRvL..24..611C} and \cite{1975ApJ...199L.157F, 1978ApJ...222..281F}, at any rotation rate. There has been continuing effort using the slow-rotation approximation concerning rotational modes (\cite{2001PhRvD..63b4019L, 2003PhRvD..68l4010L}), also employing different gauges (\cite{2002MNRAS.332..676R, 2008MNRAS.384.1711V, 2008PhRvD..77b4029P}), but with the pressing need for frequencies of \emph{rapidly} rotating neutron stars, the interest slowly faded. Even though the slow-rotation approximation has proven fruitful in the understanding of neutron star physics, it is no longer applicable when considering neutron stars at arbitrary rotation rates, which is essential for nascent neutron stars or post-merger configurations in the immediate aftermath of a binary merger. Without the spherical symmetry of the problem, one has to account for at least two spatial dimensions, which complicates the equations further and amplifies the computational expense; furthermore, it remains elusive how to formulate the outgoing-wave boundary condition at infinity for the spacetime perturbations in two spatial dimensions, which essentially removes the possibility to formulate a corresponding eigenvalue problem.
This issue can be circumvented by adopting the \emph{Cowling approximation} (\cite{1941MNRAS.101..367C}), in which the spacetime is considered static, also leading to a considerable simplification of the perturbation equations. Ignoring the impact of a dynamic spacetime (which is most severe for the quadrupolar $f$-mode), \cite{1997ApJ...490..779Y, 1999ApJ...515..414Y} computed quadrupolar $f$-mode frequencies of rapidly rotating neutron stars and studied the associated CFS-instability in the late 1990s. \cite{2007PhRvD..75d3007B} revived this approach and investigated the general properties of the spectrum of neutron stars regarding the acoustic and Coriolis-driven modes. As a further step toward a more general relativistic treatment, \cite{2012PhRvD..86j4055Y} revisited the problem in the \emph{conformal flatness approximation}. However, with the mathematical difficulties of extending the eigenvalue formulation to include a dynamic spacetime, the focus shifted to studying the oscillation spectra by evolving the perturbation equations in time. Despite their complexity, the perturbation equations for rapidly rotating relativistic stars have been written down by \cite{1992MNRAS.254..435P}, even though they were not approached numerically at that time. \cite{2008PhRvD..78f4063G, 2009PhRvD..80f4026G} worked in the Cowling approximation and successfully extracted $f$- and $g$-mode frequencies of arbitrarily fast, uniformly rotating neutron stars and even of differentially rotating ones (\cite{2010PhRvD..81h4019K}) by adding \emph{artificial viscosity} (also known as Kreiss--Oliger dissipation, \cite{kreiss1973methods}) to their evolution equations in order to stabilise their time evolutions. During the first decade of the new millennium, substantial advances were made in the time evolution of the unperturbed, non-linear Einstein equations, mostly driven by the aim to simulate compact binary mergers but also applicable to isolated neutron stars.
These systems have hardly any symmetries that can be exploited to reduce the complexity of the problem, requiring the time evolutions to be carried out on a three-dimensional grid. The upside is that essentially no constraints have to be placed on the rotational profile when simulating the dynamics of a neutron star. Such codes have been seen as a promising new approach to the calculation of mode frequencies of rapidly (and differentially) rotating neutron stars, and already at the beginning of the decade, the frequencies of axisymmetric modes in the Cowling approximation (\cite{2000MNRAS.313..678F, 2001MNRAS.325.1463F}) and those of (quasi-)radial modes in full general relativity (\cite{2002PhRvD..65h4024F}) had been reported. The non-linear codes kept evolving and were used to generate mode frequencies of $f$-modes in the conformal flatness approximation (\cite{2006MNRAS.368.1609D}) or those of inertial modes in the Cowling approximation (\cite{2008PhRvD..77l4019K}). Not much later, the frequencies of non-axisymmetric modes in full general relativity of non-rotating polytropic neutron stars (\cite{2009PhRvD..79b4002B}) and soon those of rapidly rotating polytropic neutron stars (\cite{2010PhRvD..81h4055Z}) were obtained from fully non-linear simulations. Even though successful, this approach to computing the frequencies of non-axisymmetric modes has not been followed closely, which is also due to the computational expense associated with such numerical simulations and the accompanying limited accuracy. In fact, from the point of view of gravitational wave detectability of oscillation modes, the most relevant scenarios are likely to involve rapidly rotating stars. Unfortunately, the aforementioned empirical relations cannot be trivially extended to rotating stars. Rotation splits the oscillation spectra in a fashion similar to the Zeeman splitting of spectral lines in the presence of magnetic fields.
In rotating stars, the splitting leads to perturbations propagating in the direction of rotation (so-called \emph{co}-rotating modes) and perturbations traveling in the opposite direction (\emph{counter}-rotating modes). The oscillation frequency as observed by an observer at infinity will either increase or decrease depending on the propagation direction of the waves; for slow rotation there will be a shift of the form $\sigma = \sigma_0 \pm \kappa m \Omega + \mathcal{O}(\Omega^2)$, where $m$ is the angular harmonic index, $\kappa$ a mode- and stellar-model-dependent constant, and $\Omega$ the angular rotation rate of the star. If the spin of the star exceeds a critical value, which depends on, e.g.,~ the EoS and its mass---i.e.,~ when the pattern velocity $\sigma/m$ of the backward moving mode becomes smaller than the star's rotation rate $\Omega/2\pi$---then the star becomes unstable to the emission of gravitational radiation; this is the aforementioned CFS instability. This instability is generic (independent of the degree of rotation) for the $r$-modes (\cite{1998ApJ...502..708A, 1998ApJ...502..714F}), while it can be excited only for relatively high spin values ($\Omega \gtrapprox 0.8 \Omega_K$, with $\Omega_K$ the Kepler velocity) for the quadrupolar $f$-modes. An extensive discussion can be found in \cite{2017LRR....20....7P} and \cite{2018ASSL..457..673G}. This review is based on the highlights of four recent articles published by the authors, namely \cite{2020PhRvL.125k1106K,2020PhRvD.102f4026K,2021PhRvD.103h3008V,2021PhRvD.104b3005M}. Throughout this article, we employ units in which $c=G=M_\odot=1$.
\section{Perturbation equations} \label{sec:formulation} \subsection{Background configuration} We are going to work with the Einstein equations along with the law for the conservation of energy-momentum, \begin{equation} G_{\mu\nu} = 8\pi T_{\mu\nu} \quad\text{and}\quad \nabla_\mu T^{\mu\nu} = 0, \label{eq:Einstein} \end{equation} where $G_{\mu\nu}$ is the Einstein tensor and $T_{\mu\nu}$ is the energy-momentum tensor. We restrict ourselves to the study of the dynamics of small perturbations around an equilibrium configuration, which allows us to linearise equations \eqref{eq:Einstein}. We assume an axisymmetric, stationary background configuration for which the metric written in quasi-isotropic coordinates takes the form \begin{align} ds^2 & = g_{\mu\nu}^{(0)} dx^\mu dx^\nu \nonumber \\ & = - e^{2\nu} dt^2 + e^{2\psi} r^2 \sin^2 \theta (d\varphi - \omega dt)^2 + e^{2\mu} (dr^2 + r^2 d\theta^2). \label{eq:metric} \end{align} Here, $\nu$, $\psi$, $\mu$, and $\omega$ are the four unknown metric potentials, depending only on $r$ and $\theta$. We model the neutron star as a perfect fluid without viscosity, for which the corresponding energy-momentum tensor takes the form \begin{equation} T^{\mu\nu} = (\epsilon + p) u^\mu u^\nu + p g^{\mu\nu}, \label{eq:Energy-Momentum} \end{equation} where $\epsilon$ is the energy density, $p$ is the pressure, and $u^\mu$ the 4-velocity of the fluid. The only two non-vanishing components of the 4-velocity are linked via the star's angular rotation rate, $u^\varphi = \Omega u^t$, and the normalisation of the 4-velocity then yields \begin{align} u^t = \frac{1}{\sqrt{e^{2\nu} - e^{2\psi} r^2\sin^2\theta \left( \Omega-\omega \right)^2}}. \end{align} After specifying an EoS, which may be a polytropic or a tabulated one, linking energy density and pressure to each other, we generate uniformly rotating equilibrium configurations using the \texttt{rns}-code (\cite{1995ApJ...444..306S, 1998A&AS..132..431N,rns-v1.1}).
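For completeness, the expression for $u^t$ follows in one step from the normalisation $u_\mu u^\mu = -1$ together with $u^\varphi = \Omega u^t$ and the metric components read off from \eqref{eq:metric}:

```latex
\begin{align*}
 -1 &= g_{tt}\,(u^t)^2 + 2\, g_{t\varphi}\, u^t u^\varphi
       + g_{\varphi\varphi}\, (u^\varphi)^2 \\
    &= (u^t)^2 \left[ - e^{2\nu}
       + e^{2\psi} r^2 \sin^2\theta\, \left( \Omega - \omega \right)^2 \right],
\end{align*}
```

since $g_{tt} = -e^{2\nu} + e^{2\psi} r^2 \sin^2\theta\, \omega^2$, $g_{t\varphi} = -e^{2\psi} r^2 \sin^2\theta\, \omega$ and $g_{\varphi\varphi} = e^{2\psi} r^2 \sin^2\theta$; solving for $u^t$ gives the expression above.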
We considered sequences of neutron stars along which we keep either the central energy density or the baryon mass fixed (the latter are also known as \emph{evolutionary sequences}) with rotation rates up to their respective mass-shedding limit. Our neutron star models were based on polytropic and realistic EoSs. We considered polytropic models with three different polytropic indices $N = 0.6849, 0.7463$, and $1.0$. Furthermore, we employed piecewise-polytropic approximations, introduced by \cite{2009PhRvD..79l4032R}, for the four tabulated EoSs (APR4, H4, SLy, and WFF1) that we used. Our nonrotating configurations have gravitational masses $M \in [1.17,\,2.19] M_\odot$. Even though current astrophysical constraints play a role in our particular choice of EoSs, the selection is largely motivated by our desire to provide robust universal relations by covering a wide part of the parameter space. \subsection{Perturbation Equations} As usual in perturbative studies, we decompose the metric as \begin{align} g_{\mu\nu} & = g_{\mu\nu}^{(0)} + h_{\mu\nu}, \end{align} where $g_{\mu\nu}^{(0)}$ is the background metric and $h_{\mu\nu}$ its perturbation. As we will work in the Hilbert gauge, it will be advantageous to work instead with the trace-reversed metric perturbation, defined by \begin{align} \phi_{\mu\nu} & := h_{\mu\nu} - \frac{1}{2} g_{\mu\nu}^{(0)} h, \end{align} where $h := {h^\mu}_\mu$ is the trace of the metric perturbations. The metric perturbations are not unique but possess gauge freedom which can be utilised in different ways. Often, the gauge freedom is used to eliminate some of the spacetime perturbations, e.g., by using the well-known gauge by \cite{1957PhRv..108.1063R}, and hence to reduce the number of perturbation equations.
In our studies, however, we followed a different approach (for reasons we explain below) and opted for the Hilbert gauge, which is the gravitational equivalent to the well-known Lorenz gauge in electromagnetism, specified by \begin{equation} f_\mu := \nabla^\nu \phi_{\mu\nu} = 0. \label{eq:hilbert_gauge} \end{equation} In the Hilbert gauge, the perturbed Einstein tensor takes the form \begin{align} - 2 \delta G_{\mu\nu} & = \square \phi_{\mu\nu} + 2 R^\alpha{}_\mu{}^\beta{}_\nu \phi_{\alpha\beta} + R \phi_{\mu\nu} \nonumber \\ & \quad - \left({R^\alpha}_{\mu} \phi_{\nu\alpha} + {R^\alpha}_{\nu} \phi_{\mu\alpha}\right) - g_{\mu\nu} R^{\alpha\beta} \phi_{\alpha\beta}, \end{align} where $R^\alpha{}_\mu{}^\beta{}_\nu$, $R^{\alpha\beta}$, and $R$ are the background Riemann tensor, Ricci tensor and scalar curvature, respectively. The advantage of the Hilbert gauge is that the evolution equations for the metric perturbations take the form of ten coupled wavelike equations (note that in the above expression, the d'Alembert operator, defined with respect to the background metric, is the only differential operator acting on the metric perturbations) while the mixing of temporal and spatial derivatives is avoided. This is in contrast to other common gauge choices (\cite{1998PhRvD..58l4012A, 1971NCimB...3..295B, 2001PhRvD..63f4018R, 2002MNRAS.332..676R}) or approaches without any gauge choice (\cite{1992MNRAS.254..435P}), where the field equations split into subsets of hyperbolic and elliptic equations which have to be either solved simultaneously or brought into a hyperbolic form by quite cumbersome manipulations, something that is technically very difficult, if not impossible, for the perturbations of rotating stars. The fully hyperbolic character of the perturbation equations in the Hilbert gauge makes this gauge particularly convenient for the numerical implementation in a time evolution.
Our choice of gauge, namely the Hilbert gauge, does not eliminate any of the metric perturbations; hence, we need 10 variables to describe the spacetime perturbations. As in previous studies of rotating neutron stars (\cite{PhDVavoul2007, 2010PhRvD..81h4019K}), we use 4 variables for the fluid. In total, we need to evolve 14 variables in time. The evolution equations follow in a very straightforward manner from the perturbed Einstein equations \begin{align} \delta G_{\mu\nu} & = 8 \pi \delta T_{\mu\nu}, \label{eq:pert_Einstein} \end{align} and the perturbed law for the conservation of energy-momentum \begin{align} \delta \left( \nabla_\mu T^{\mu\nu} \right) & = 0. \label{eq:pert_conslaw} \end{align} The evolution equations themselves are quite lengthy and not very enlightening, and we refer the reader to \cite{2020PhRvD.102f4026K} for details on their derivation and implementation. \section{Results} \subsection{Universal Relations for single neutron stars} As shown in \cite{2020PhRvD.102f4026K}, our code produces results in excellent agreement with previously published values (\cite{2010PhRvD..81h4055Z, 2018PhRvD..98j4005C}) and our convergence tests demonstrate an accuracy of the obtained frequencies of $1 - 2\%$. In this article, we will provide some highlighted results in order to demonstrate the existence of asteroseismological relations of various types and we lay out the way that one can make use of these relations in analyzing gravitational wave signals. More specifically, we will show different universal relations providing accurate estimates for the $f$-mode frequency given some bulk parameters of the star and vice versa.
First, we observe a universal behavior of the $f$-mode frequency $\sigma_\text{i}$ as observed in the inertial frame as a function of the star's angular rotation rate $\Omega$ along sequences of \emph{fixed central energy density} models when we normalize both frequencies with the $f$-mode frequency $\sigma_0$ of the corresponding non-rotating star. Figure~\ref{fig:sigma-omega-eps} displays this behavior for more than 230 different neutron star models of each of the co-rotating (i.e., stable) and counter-rotating (i.e., potentially unstable) branches of the $f$-mode for seven EoSs and various central energy densities (with corresponding central rest mass densities $\rho_c \in [2.2, 7.3] \rho_0$, where $\rho_0 = 2.7 \times 10^{14}\unit{g/cm}^3$ is the nuclear saturation density); we model the universal behavior using the quadratic function \begin{align} \frac{\sigma_\text{i}}{\sigma_0} = 1 + a_1 \left( \frac{\Omega}{\sigma_0} \right) + a_2 \left( \frac{\Omega}{\sigma_0} \right)^2 . \label{eq:omvsom_model} \end{align} The results of a least squares fit are $a_1^\text{u} = -0.193$ and $a_2^\text{u} = -0.0294$ for the potentially unstable branch and $a_1^\text{s} = 0.220$ and $a_2^\text{s} = -0.0170$ for the stable branch of the $f$-mode. The quadratic fit accounts well for the increasing oblateness of the star with its rotation; however, close to the Kepler limit, deviations from this simple model become visible. As this deviation is most pronounced for the less realistic polytropic EoSs, we do not take them into account for the quadratic fits. The root mean square of the residuals is $0.024$ for the counter-rotating branch and $0.048$ for the co-rotating branch. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{sigma-Omega-eps_c.pdf} \caption{Universal relations for the $l=|m|=2$ $f$-mode frequencies for sequences of constant central energy density as observed in the inertial frame.
The graph shows the results from 21 such sequences (three sequences per EoS; three polytropic and four realistic EoSs). The potentially unstable $f$-mode branch displays a strikingly universal behavior; the largest deviations from a quadratic fit occur close to the mass-shedding limit of the sequences, mainly for the polytropic EoSs. Figure taken from \cite{2020PhRvL.125k1106K} with permission by APS.} \label{fig:sigma-omega-eps} \end{figure} We point out that our model predicts that the unstable branch of the quadrupolar $f$-mode becomes susceptible to the CFS instability once the angular rotation rate of the star exceeds $\Omega \approx \left(3.4 \pm 0.1\right) \sigma_0$ (when considering sequences of constant central energy density); note that the given uncertainty is a bound, not a confidence interval. This finding regarding the critical value complements the well-known threshold of $T/|W| \approx 0.08 \pm 0.01$ in terms of the ratio of rotational to gravitational potential energy (\cite{1998ApJ...492..301S, 1999ApJ...510..854M}), which is confirmed in our simulations and is in contrast to the widely used Newtonian result of $T/|W| \approx 0.14$. The stable branch of the $f$-mode can be fitted more accurately when switching to the comoving frame and considering sequences of constant baryon mass. The frequency $\sigma_\text{c}$ observed in the comoving frame is related to the frequency observed in the inertial frame via $\sigma_\text{c} = \sigma_\text{i} + m\Omega/2\pi$. We show our results for more than 120 different neutron star models using four realistic EoSs in Figure~\ref{fig:sigma-omega-Mo}. We fit our results to the quadratic function \begin{align} \frac{\sigma_\text{c}}{\sigma_0} = 1 + b_1 \left( \frac{\Omega}{\Omega_K} \right) + b_2 \left( \frac{\Omega}{\Omega_K} \right)^2 ; \label{eq:sigma-omega-Mo_model} \end{align} note that we use the Kepler velocity $\Omega_K$ to normalize the star's rotation rate in this formula. 
The results of a least squares fit are $b_1^\text{u} = 0.517$ and $b_2^\text{u} = -0.542$ for the potentially unstable branch (which in the comoving frame exhibits the higher frequencies) and $b_1^\text{s} = -0.235$ and $b_2^\text{s} = -0.491$ for the stable branch of the $f$-mode. The root mean square of the residuals is $0.024$ for the co-rotating branch and $0.051$ for the counter-rotating branch.\footnote{Similar relations were presented in \cite{2011PhRvD..83f4031G,2013PhRvD..88d4052D} but in the Cowling approximation.} \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{sigma-Omega-Mo.pdf} \caption{Universal relations for the $l=|m|=2$ $f$-mode frequencies for sequences of constant baryon mass as observed in the comoving frame. The graph shows the results from 12 such sequences (three sequences per EoS). The stable $f$-mode branch displays universal behavior. Figure taken from \cite{2020PhRvL.125k1106K} with permission by APS.} \label{fig:sigma-omega-Mo} \end{figure} In earlier studies for non-rotating models, fitting relations of the form $\sigma_0=\alpha+\beta\sqrt{M_0/R_0^3}$ were derived (\cite{1998MNRAS.299.1059A,2011PhRvD..83f4031G,2013PhRvD..88d4052D}). Here, $\alpha$ and $\beta$ can be estimated for the EoSs that fulfill the constraints at the time of observation while $M_0$ and $R_0$ correspond to the mass and radius of the non-rotating model. Thus, this relation in combination with Eq.~\eqref{eq:omvsom_model} or \eqref{eq:sigma-omega-Mo_model} connects three fundamental parameters of the sequence, i.e.,~ mass and radius of the non-rotating member with the spin of the observed model. Obviously, from a single observation of the $f$-mode frequency, one cannot extract these values, but one can place constraints among the three of them. Any additionally observed oscillation frequency (e.g.,~ both the co- and counter-rotating frequencies) or knowledge of some parameter of the star, such as its mass, will place more stringent constraints.
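To make the use of these fits concrete, the following Python sketch evaluates both quadratic relations with the quoted coefficients and solves Eq.~\eqref{eq:omvsom_model} for the neutral point $\sigma_\text{i} = 0$ of the counter-rotating branch, reproducing the critical rotation rate $\Omega \approx 3.4\,\sigma_0$ quoted above.

```python
import math

# Evaluate the two universal relations with the fitted coefficients quoted
# in the text and locate the CFS neutral point of the counter-rotating branch.

def sigma_inertial(x, a1=-0.193, a2=-0.0294):
    """sigma_i/sigma_0 of the potentially unstable branch as a function of
    x = Omega/sigma_0 (constant central energy density, inertial frame)."""
    return 1.0 + a1 * x + a2 * x**2

def sigma_comoving(x, b1=0.517, b2=-0.542):
    """sigma_c/sigma_0 of the potentially unstable branch as a function of
    x = Omega/Omega_K (constant baryon mass, comoving frame)."""
    return 1.0 + b1 * x + b2 * x**2

# Neutral point: positive root of a2*x^2 + a1*x + 1 = 0
a1, a2 = -0.193, -0.0294
x_crit = (-a1 - math.sqrt(a1**2 - 4.0 * a2)) / (2.0 * a2)
print(round(x_crit, 2))  # -> 3.41, i.e. Omega ~ 3.4 sigma_0
```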
Another fitting relation which can easily be employed in solving the inverse problem incorporates the \emph{effective compactness} $\eta := \sqrt{\bar{M}^3/I_{45}}$ (which is closely related to the compactness $M/R$), where $\bar{M}:=M/M_\odot$ and $I_{45}:=I/10^{45}\unit{g}\unit{cm}^2$ are the star's scaled gravitational mass and moment of inertia; this choice is inspired by \cite{2010ApJ...714.1234L}. We will be guided by the model employed in the Cowling approximation by \cite{2015PhRvD..92l4004D} which reproduces the $f$-mode frequency of a particular neutron star from its rotation rate, gravitational mass, and effective compactness. We propose the fitting formula \begin{align} \hat{\sigma}_\text{i} = \left( c_1 + c_2 {\hat \Omega} + c_3 {\hat \Omega}^2 \right) + \left( d_1 + d_3 {\hat \Omega}^2 \right) \eta, \label{eq:fit-freq-alter} \end{align} where $\hat{\sigma}_\text{i} := \bar{M}\sigma_\text{i}/\unit{kHz}$ and $\hat{\Omega} := \bar{M}\Omega/\unit{kHz}$; note that we set $d_2 = 0$ as it turns out that this coefficient would be afflicted with a large uncertainty. Using around 100 models based on polytropic as well as around 400 models based on realistic EoSs, the resulting coefficients from a least-squares fit for the counter-rotating branch of the $f$-mode are $(c_1, c_2, c_3)^\text{u}=(-2.14, -0.201, -7.68 \times 10^{-3})$ and $(d_1, d_2, d_3)^\text{u}=(3.42, 0, 1.75 \times 10^{-3})$; for the co-rotating branch, we find the coefficients $(c_1, c_2, c_3)^\text{s}=(-2.14, 0.220, -14.6 \times 10^{-3})$ and $(d_1, d_2, d_3)^\text{s}=(3.42, 0, 6.86 \times 10^{-3})$. The error in the above reported coefficients is less than 10\,\% and the fitting formula recovers the frequencies with a deviation of less than 20\,\%, with considerably higher accuracy (below 5\,\%) where the $f$-mode frequency is larger than $\approx 500\unit{Hz}$.
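A minimal sketch of how Eq.~\eqref{eq:fit-freq-alter} can be evaluated in practice is given below; the coefficients are the fitted values quoted above, while the example model (mass and effective compactness) is purely illustrative.

```python
# Evaluate Eq. (fit-freq-alter) with the quoted coefficients; both branches
# share c_1 and d_1, so they degenerate to the same frequency at Omega = 0.

COEFFS = {
    "unstable": ((-2.14, -0.201, -7.68e-3), (3.42, 1.75e-3)),
    "stable":   ((-2.14,  0.220, -14.6e-3), (3.42, 6.86e-3)),
}

def f_mode_khz(M, Omega_khz, eta, branch):
    """Inertial-frame f-mode frequency in kHz for a star of gravitational
    mass M (in solar masses), rotation rate Omega (in kHz) and effective
    compactness eta."""
    (c1, c2, c3), (d1, d3) = COEFFS[branch]
    Oh = M * Omega_khz                        # hat{Omega} = bar{M} * Omega/kHz
    sigma_hat = c1 + c2 * Oh + c3 * Oh**2 + (d1 + d3 * Oh**2) * eta
    return sigma_hat / M                      # sigma_i = hat{sigma}/bar{M}

# Illustrative (hypothetical) model: M = 1.4 M_sun, non-rotating, eta = 0.9
print(f_mode_khz(1.4, 0.0, 0.9, "unstable"))
print(f_mode_khz(1.4, 0.0, 0.9, "stable"))    # identical at Omega = 0
```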
We show the obtained frequencies along with the predictions from our proposed fitting formula for a few select values of $\hat{\Omega}$, spanning the parameter space up to the Kepler limit, in Fig.~\ref{fig:eta-Msigma}. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{Oh-Ms-eta.pdf} \caption{The scaled $f$-mode frequency of the potentially unstable branch as a function of the effective compactness for different values of $\hat{\Omega} = \bar{M}\Omega/\unit{kHz}$. The straight lines represent the prediction of our fitting formula, cf. Eq.~\eqref{eq:fit-freq-alter}. Figure taken from \cite{2020PhRvL.125k1106K} with permission by APS.} \label{fig:eta-Msigma} \end{figure} Qualitatively, our coefficients for the counter-rotating branch agree in order of magnitude with those published by \cite{2015PhRvD..92l4004D} in the Cowling approximation; in the special case of no rotation, $\hat{\Omega} = 0$, our fitting formula yields roughly $20\,\%$ lower frequencies in our fully general relativistic setup, which is in accordance with expectations. The lines of constant $\hat{\Omega}$ in Figure~\ref{fig:eta-Msigma} may give the impression that the CFS instability operates more easily in stars with low (effective) compactness, seemingly in contrast to the finding that post-Newtonian effects tend to enhance this instability (\cite{1992ApJ...385..630C}). This paradox can be resolved by noting that relativistic effects mainly shift the $f$-mode frequency to lower values while the inclination of the lines of constant $\hat{\Omega}$ is largely unaltered (cf. Figure~3 in \cite{2015PhRvD..92l4004D}). Furthermore, while stars of lower (effective) compactness may indeed reach the neutral point of the $f$-mode at a lower rotation rate, this happens considerably closer to the Kepler limit (if at all) than in more compact stars.
The fitting formula \eqref{eq:fit-freq-alter} has the advantage that it does not rely on specifically defined sequences of neutron stars, along which a particular property is held constant. For example, Eq.~\eqref{eq:sigma-omega-Mo_model} depends on the $f$-mode frequency $\sigma_0$ of the (in a very particular fashion) corresponding non-rotating configuration, which may not even exist in some cases (e.g.,~ for supramassive neutron stars supported by rotation); the latter model, cf.~Eq.~\eqref{eq:fit-freq-alter}, makes do with bulk properties of the very star whose oscillation frequency we want to know, and vice versa. Another benefit of this formulation is that (as demonstrated by \cite{2015PhRvD..92l4004D}) a similar formula can be derived for higher multipoles, i.e.,~ $l \ge 3$. Fitting formula \eqref{eq:fit-freq-alter} can be useful in imposing further constraints on the parameters of the postmerger object since it combines the mass and spin of the resulting object with the $f$-mode frequency and, via $\eta$, the moment of inertia $I$ or the compactness $M/R$. Thus, the latter two can be further constrained by an observation of an $f$-mode signal, as mass and potentially spin can be extracted from the premerger and early postmerger analysis of the signal. The situation becomes more attractive if both co- and counter-rotating modes or another combination of modes are observed since only the mass of the postmerger object will be needed to constrain its parameters by using only the asteroseismological relations (see \cite{2011PhRvD..83f4031G,2015PhRvD..92l4004D,2020PhRvD.101h4039V}). This will be an independent yet complementary constraint in the estimation of the radius in addition to those based on the Love numbers (see \cite{2018PhRvD..98h4061D,2018PhRvL.121p1101A,2018PhRvL.121i1102D,2018ApJ...852L..29R}).
\begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{H4_fmode-lres.png} \caption{The frequency of the ${}^2f_2$-mode across the entire parameter space of EoS H4 is displayed color-coded, together with some contour lines. Each black dot indicates a neutron star model for which we have calculated the nonaxisymmetric mode frequencies. All neutron stars located above the (nearly horizontal) dash-dotted line are supramassive. The red (thick) contour line at $0.0\unit{kHz}$ separates the stable models from those that are susceptible to the CFS-instability. Figure taken from \cite{2020PhRvL.125k1106K} with permission by APS.} \label{fig:H4fmode} \end{figure} As a graphical illustration of the behavior of the $f$-mode frequency across the entire parameter space of stable equilibrium models, we present in Figure~\ref{fig:H4fmode} the frequency of the counter-rotating branch as obtained from the time evolutions, for the representative case of EoS H4; the graph will be qualitatively similar for other EoSs. We constructed several hundred neutron star models across the entire $M$-$R_e$ plane along so-called \emph{evolutionary sequences}; those are sequences of neutron stars with different rotation rates that share the same baryon mass.
The two-dimensional plane of equilibrium models has four distinct boundaries: the static limit at the lower left along which the non-rotating models are located; second, the mass-shedding limit bounds the equatorial radius to the right; next, in the top left of the graph, the limit of stability with respect to quasi-radial perturbations connects the (non-rotating) maximum mass TOV model to (approximately) the heaviest model\footnote{It is well-known that the uniformly rotating equilibrium models with the fastest rotation rate, the highest angular momentum, or the largest gravitational mass may be distinct and it depends on the EoS whether or not those are stable with respect to quasi-radial perturbations (see \cite{1994ApJ...424..823C}).}; equilibrium configurations that are located to the left of this line will collapse to a black hole upon quasi-radial perturbation. Last, in line with current theory and observations (\cite{2018MNRAS.481.3305S,2015ApJ...812..143M}), we limit ourselves to neutron stars with masses $M \gtrsim 1.17 M_\odot$. The dots in Figure~\ref{fig:H4fmode} depict the considered equilibrium models. The dash-dotted line slightly above $M = 2.0\,M_\odot$ is the evolutionary sequence with the baryon mass $M_0 = 2.3\,M_\odot$; it separates the supramassive neutron stars (above that line), which are supported only by centrifugal force, from those (below that line) which may spin down, ending up in a stable non-rotating configuration. Other evolutionary sequences can be imagined by connecting the dots with lines parallel to the dash-dotted one. The graph shows that for this particular EoS each neutron star model may become CFS-unstable if it is sufficiently spun up. For heavier neutron stars, this happens considerably below the mass-shedding limit and sufficiently heavy supramassive models (approximately $M \gtrsim 2.25M_\odot$) will inevitably be CFS-unstable (with respect to the quadrupolar $f$-mode); i.e.
those stars will be stabilized merely by viscous mechanisms counteracting the CFS-instability. \subsection{Universal Relations involving long-lived remnants from BNS mergers} Inspired by the works on universal relations for single neutron stars~(\cite{1998MNRAS.299.1059A,2004PhRvD..70l4015B,2017PhR...681....1Y}), the last five years have given rise to universal relations for BNS: they relate the pre-merger neutron stars to the early post-merger remnant, and have been developed using numerical relativity simulations~(\cite{2015PhRvL.115i1101B,2016PhRvD..93l4051R,2020PhRvD.101h4006K}). These works have primarily focused on relating the tidal deformability of the pre-merger stars, which impacts the dynamics of the pre-merger gravitational waves at leading order through the \emph{reduced tidal deformability} or \emph{binary tidal deformability} $\tilde\Lambda$~(\cite{2008PhRvD..77b1502F,2014PhRvL.112j1101F}), to various stellar parameters of the early remnant. Recently, \cite{2020PhRvD.101h4039V} investigated empirical relations for BNS mergers based on the extensive CoRe data set of numerical relativity gravitational wave simulations~(\cite{2018CQGra..35xLT01D}). Covering a wide range of mass ratios, they find an extensive set of universal relations connecting the various peak frequencies of the post-merger gravitational wave signal with, e.g., the chirp mass and the characteristic radius of a $1.6\,M_\odot$ neutron star. In particular, they also find universal relations between the binary tidal deformability and the primary $f$-mode frequency of the post-merger signal (as in~\cite{2020PhRvD.101h4006K}), however, this time involving the chirp mass of the BNS. Other established universal relations connect the post-merger peak frequency to the tidal coupling constant (\cite{2015PhRvL.115i1101B, 2019PhRvD.100j4029B}).
The universal relation between the binary tidal deformability of the BNS and the stable, co-rotating $f$-mode frequency $\sigma^s$ of the early, differentially rotating remnant proposed in~\cite{2020PhRvD.101h4006K} led us in \cite{2021PhRvD.104b3005M} to derive a similar relation for a potentially long-lived, uniformly rotating remnant: the relation takes the form \begin{equation} \log_{10} \hat\sigma^s = a(q) \cdot \tilde \Lambda^{\frac{1}{5}} + b(q), \label{eq:indirect1} \end{equation} where $\hat \sigma^s = \frac{M}{M_\odot}\frac{\sigma^s}{\text{kHz}}$ is the normalized co-rotating $f$-mode frequency, and $q = \frac{M_1}{M_2} \leq 1$ the gravitational mass ratio of the pre-merger stars. For rapidly rotating, long-lived remnants (with rotation frequency $\bar\Omega \geq 800\,\text{Hz}$), this relation achieves an average relative error of $1.3\,\%$. We also derive a relation for the potentially unstable, counter-rotating $f$-mode frequency of the long-lived remnant, presenting the possibility of predicting the onset of the earlier mentioned CFS-instability. Combining these results with the universal relation, Eq.~\eqref{eq:fit-freq-alter}, for rapidly rotating neutron stars between the stable, co-rotating $f$-mode frequency and the effective compactness $\eta = \sqrt{\bar M^3/I_{45}}$ (as defined in the previous section), we also derived a combined relation of the form \begin{equation} \eta = \frac{10^{a(q) \cdot \tilde \Lambda^{\frac{1}{5}} + b(q)} - \left(c_1 + c_2 \hat \Omega + c_3 \hat \Omega^2\right)}{d_1 + d_3 \hat \Omega^2} \label{eq:indirect2} \end{equation} that relates the pre-merger binary tidal deformability of the BNS with the effective compactness of the long-lived remnant. For rapidly rotating remnants, this relation achieves an average relative error of $2.4\,\%$.
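To illustrate how this chain of relations can be evaluated, the following Python sketch predicts the remnant's effective compactness from the binary tidal deformability. Since the calibrated expressions for $a(q)$ and $b(q)$ are given in the cited reference and not reproduced here, the values used below are purely hypothetical placeholders; only the $c$- and $d$-coefficients are the stable-branch values quoted earlier.

```python
# Sketch of the combined relation: predict log10 of the scaled co-rotating
# f-mode frequency from the tidal deformability, then invert the co-rotating
# branch of the single-star fit for the effective compactness eta.
# WARNING: a and b are hypothetical placeholders, not the calibrated values.

def eta_from_tidal(Lam_tilde, Om_hat, a=-0.15, b=1.2):
    c1, c2, c3 = -2.14, 0.220, -14.6e-3      # stable-branch coefficients
    d1, d3 = 3.42, 6.86e-3
    log_sigma_hat = a * Lam_tilde**0.2 + b   # log10(M*sigma^s/kHz)
    numerator = 10.0**log_sigma_hat - (c1 + c2 * Om_hat + c3 * Om_hat**2)
    return numerator / (d1 + d3 * Om_hat**2)

# Illustrative call: Lambda_tilde = 400, hat{Omega} = 2.0
print(eta_from_tidal(400.0, 2.0))
```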
Finally, by directly relating these quantities without going via the $f$-mode, we obtain a universal relation of the form \begin{equation} \log\left[\bar M^5 \eta\right] = a(q) \left(\bar M^{5}\tilde\Lambda^{-\frac{1}{5}}\right)^2 + b(q) \bar M^{5}\tilde\Lambda^{-\frac{1}{5}}+ c(q). \label{eq:direct} \end{equation} This relation achieves improved accuracy, reaching an average relative error of $\sim 1.5\,\%$ for remnants with any rotation frequency. We show the quadratic fit in Fig.~\ref{fig:prav-quadratic} for the symmetric case of $q = 1$. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{quadratic.pdf} \caption{The quadratic fit between $\bar{M}^5 \tilde{\Lambda}^{-1/5}$ and $\bar{M}^5\eta$ for $q = 1$. The fit corresponds to Eq.~\eqref{eq:direct}. Figure taken from \cite{2021PhRvD.104b3005M} with permission by APS.} \label{fig:prav-quadratic} \end{figure} In addition, we consider a direct relation between the binary tidal deformability and the compactness $C = M/R$ of the long-lived remnant. Such a relation would allow the direct estimation of the remnant's radius $R$ using independent estimates of its gravitational mass. We propose a relation of the form \begin{equation} \bar M^5 C = a(q) \bar M^{5} \tilde \Lambda^{-\frac{1}{5}} + b(q) \label{eq:direct2} \end{equation} which, however, only achieves an accuracy an order of magnitude worse than for the effective compactness relation, reaching an average relative error of $\sim 8.8\,\%$. A graphic representation of this fit along with the data is shown in Figure~\ref{fig:prav-compactness}. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{compactness.pdf} \caption{The linear fit between $\bar{M}^5 \tilde{\Lambda}^{-1/5}$ and $\bar{M}^5C$ for $q = 1$, considering only soft EoSs. The fit corresponds to Eq.~\eqref{eq:direct2}.
Figure taken from \cite{2021PhRvD.104b3005M} with permission by APS.} \label{fig:prav-compactness} \end{figure} The results presented in \cite{2021PhRvD.104b3005M} represent a first step towards finding universal relations between the pre-merger neutron stars and the potential long-lived remnant of a BNS merger using perturbative calculations. Our approach can readily be extended to, e.g., hot EoSs, phase transitions, as well as differential rotation of the remnant, in order to cover, e.g., earlier parts of the post-merger phase. The various functional expressions for $a(q)$, $b(q)$ and $c(q)$ used in equations (\ref{eq:indirect1}), (\ref{eq:indirect2}), (\ref{eq:direct}), (\ref{eq:direct2}) as well as the coefficients $c_1$, $c_2$, $c_3$, $d_1$ and $d_3$ are given in \cite{2021PhRvD.104b3005M}. \section{Approaching the inverse problem} Having laid out the procedure to extract fluid oscillation frequencies from time evolutions in a general relativistic framework and having developed various universal relations between bulk properties of neutron stars and their $f$-mode frequencies, we now turn to one of the possible applications. The field of gravitational wave asteroseismology attempts to invert the above outlined procedure and aims to constrain bulk properties of neutron stars such as mass and radius given one or more oscillation frequencies. The above derived universal relations are a valuable tool for this as they do not depend on the underlying and hitherto poorly understood nuclear equation of state. However, even without universal relations at hand, the inverse problem can be approached. We will summarise an approach to the inverse problem using Bayesian methods as laid out by \cite{2021PhRvD.103h3008V}. Here, the fundamental idea is tested on the $f$-mode frequency, but it may easily be extended to other fluid modes or combined with their damping times.
The use of the Bayesian framework is advantageous (compared to, e.g., an analytical inversion of the non-linear universal relation) as it allows one to incorporate the error bars of the $f$-mode measurement directly into the calculation. Besides the Bayesian approach and universal relations, there are in principle also semi-classical techniques, e.g., WKB theory, which can be used to address the inverse spectrum problem in simplified cases; see \cite{Volkel:2019gpq} for the use of axial $w$-modes of spherically symmetric neutron stars and \cite{Volkel:2017kfj} for a similar approach for ultra compact stars. In a first step, we pick an EoS and a particular (rotating) neutron star model for which we then calculate two $f$-mode frequencies. We assume those to have a relative error of $3\,\%$ (Gaussian), while our prior knowledge on $M$ and $R$ is uninformative. Two data points should in principle suffice to uniquely pinpoint one neutron star model (assuming a cold EoS). We show the result of the Bayesian analysis in Figure~\ref{fig:seb-eos_methods}. The initially chosen model is depicted by the red cross at $M = 1.8\,M_\odot$ and $R = 15\,\text{km}$. The blue shaded area in the big panel shows the correlations of mass and radius during sampling, whose posterior distributions are shown in the side panels. While the radius of the star can be reconstructed with fairly small error bars, the reconstructed mass is considerably less constrained. Nonetheless, the peak of the probability distribution agrees nicely with the initially chosen model. If the $f$-mode frequencies were known more accurately, the corresponding error bars of the reconstructed mass and radius would be smaller, too. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{eos_methods.png} \caption{Here we compare the EoS method assuming the H4 EoS (blue) and the MPA1 EoS (orange).
The diagonal panels show the sampled posterior distributions of $M$ and $R$, while the off-diagonal panel combines a scatter plot with logarithmic contour lines. The red cross and red line indicate the true H4 parameters that belong to the assumed $3\,\%$ $f$-mode data. For both cases we assume that the mass is known to within $10\,\%$. Figure taken from \cite{2021PhRvD.103h3008V} with permission by APS.} \label{fig:seb-eos_methods} \end{figure} This analysis required the assumption of a particular EoS, and so far we used the same EoS for the reconstruction that we also used to generate the underlying neutron star model. If we use a different EoS to perform the Bayesian analysis, we will obviously reconstruct a different model; this is shown in orange in Figure~\ref{fig:seb-eos_methods}. The reconstructed star in this case is considerably smaller and lighter. Note that the apparent cut of the MPA1 EoS posteriors (in particular for the mass) is not an indication that this EoS has not been used for the injection, since a similar behavior can also be found for the H4 EoS if the injected parameters are closer to the edge of the H4 EoS neutron star parameter space. One needs further information, such as the precise mass, the tidal deformability, or perhaps a damping time, in order to rule out this EoS. Instead of making an educated guess for the ``correct'' EoS and reconstructing mass and radius as just described, we may also use the universal relation proposed in Eq.~\eqref{eq:fit-freq-alter}. We then do not need to assume any EoS; instead, the universal relation will provide us---given two $f$-mode frequencies and a good estimate of the star's mass---with an estimate for the star's rotation rate $\Omega$ and its effective compactness $\eta$. We again use the Bayesian method to invert the universal relation and we apply it to the same model as in the previous analysis; the H4 model with $M = 1.8\,M_\odot$ and $R = 15\,\text{km}$.
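A minimal, self-contained sketch of this universal-relation inversion is given below: it injects a hypothetical model (not the H4 model of the text), assumes a $3\,\%$ Gaussian error on both $f$-mode frequencies, and recovers the scaled rotation rate and effective compactness by brute-force likelihood maximisation over a grid, standing in for the actual Bayesian sampler used in the cited work.

```python
# Toy grid-based inversion of the universal relation Eq. (fit-freq-alter):
# two observed f-mode frequencies (3% Gaussian errors) -> (hat{Omega}, eta).
# The injected model below is hypothetical, not the H4 model of the text.

COEFFS = {"u": ((-2.14, -0.201, -7.68e-3), (3.42, 1.75e-3)),
          "s": ((-2.14,  0.220, -14.6e-3), (3.42, 6.86e-3))}

def sigma_hat(Om, eta, branch):
    (c1, c2, c3), (d1, d3) = COEFFS[branch]
    return c1 + c2 * Om + c3 * Om**2 + (d1 + d3 * Om**2) * eta

# Injected (hypothetical) model: hat{Omega} = 1.0, eta = 0.9
obs = {br: sigma_hat(1.0, 0.9, br) for br in ("u", "s")}

def chi2(Om, eta, rel_err=0.03):
    """Chi-square misfit of both branches assuming Gaussian errors."""
    return sum(((sigma_hat(Om, eta, br) - obs[br]) / (rel_err * obs[br]))**2
               for br in ("u", "s"))

# Brute-force maximum likelihood over a coarse grid (flat priors)
grid_Om = [0.05 * i for i in range(61)]            # hat{Omega} in [0, 3]
grid_eta = [0.5 + 0.01 * i for i in range(101)]    # eta in [0.5, 1.5]
best = min((chi2(Om, eta), Om, eta) for Om in grid_Om for eta in grid_eta)
print(best[1], best[2])  # recovers values near the injected 1.0 and 0.9
```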
In Figure~\ref{fig:seb-eos_vs_ur_omega} we show the results of the analysis. On the $x$-axis, we show the star's effective compactness and rotation rate, both normalised to the actual value corresponding to the equilibrium configuration. The two graphs then show the posterior distributions. We ran the same analysis twice, once assuming a $30\,\%$ relative error on the prior mass (solid lines) and once assuming a $10\,\%$ relative error (dashed lines). As expected, with a less informed prior on the mass, the posterior distributions for the desired quantities have a considerably larger variance. The blue and orange curves correspond to the previously described method in which we assume a particular EoS; here we show the resulting posteriors for $\eta$ and $\Omega$. The green curves show the posterior distribution employing the universal relation. It is evident that the H4 EoS method (blue lines) and the universal relation (UR) method (green lines) yield very similar results for the rotation rate $\Omega$, while assuming the MPA1 EoS (orange lines) indicates a value that is larger than the correct one. Note that both observations hold independent of the specific prior knowledge of $M$ assumed here ($30\,\%$ and $10\,\%$). The situation for the effective compactness $\eta$ is qualitatively different. First, the prior knowledge of $M$ plays a big role for the UR method, but is less important for the EoS methods. For those we find that the posterior obtained with the correctly assumed H4 EoS is almost independent of uncertainties in $M$, while the posterior distribution obtained by the MPA1 EoS is shifted. Note that the rather different scaling behavior of the UR method is in agreement with the findings described above. Finally, while the posteriors of $\Omega$ are very smooth, one observes small ``bumps'' for the H4 EoS, e.g., at $\eta/\eta_0 \approx 1.02$.
We have verified that this is likely an artifact of the finite resolution and particular range of the H4 $f$-mode data available to us. This directly sets the scale of how precisely our currently implemented EoS data can resolve the underlying parameters, which is at the percent level. \begin{figure}[htbp] \centering \includegraphics[width=8.6cm]{eos_vs_ur_eta_int.png} \includegraphics[width=8.6cm]{eos_vs_ur_omega_int.png} \caption{Here we show the posterior distributions of the rotation rate $\Omega$ (top panel) and effective compactness $\eta$ (bottom panel) normalized to the injected H4 values ($\Omega_0$, $\eta_0$). Posteriors are obtained by using the EoS method with the H4 EoS (blue) and the MPA1 EoS (orange), as well as the universal relation method (green). Solid lines correspond to a 30\,\% relative error on the prior mass $M$ and dashed lines to 10\,\%. We indicate the mean of each posterior as a vertical line. The $f$-mode relative error is assumed to be 3\,\%. Figure taken from \cite{2021PhRvD.103h3008V} with permission by APS.} \label{fig:seb-eos_vs_ur_omega} \end{figure} \section{Summary and Outlook} We report the first extraction of frequencies of the $l=|m|=2$ $f$-mode of general relativistic, rapidly rotating neutron stars \emph{without} the commonly used \emph{slow-rotation} or \emph{Cowling approximation}, to an extent that allows us to generalize the findings into universal relations. This resolves a long-standing open problem, building upon the efforts of numerous studies over the past five decades. For this, we have derived a set of time evolution equations governing the perturbations of the fluid of a rapidly rotating neutron star as well as its surrounding spacetime, formulated in a perturbative framework in full general relativity. We have opted for the Hilbert gauge in order to arrive at a set of fully hyperbolic equations for the spacetime perturbations whose implementation does not pose major obstacles.
Convergence tests to study the accuracy of our code reveal that in a typical simulation, the obtained frequency usually deviates by less than $1\,\%$--$2\,\%$ from the limiting value or has an accuracy of about $10\unit{Hz}$. As we model axisymmetric configurations only, we can reduce the problem to two spatial dimensions, which drastically lowers the computational expense of our numerical time evolutions in comparison to non-linear codes that perform three-dimensional simulations. The evolution of the perturbations of a neutron star for $15\unit{ms}$ on a grid with $3120 \times 50$ points, which facilitates a decent frequency resolution, requires only a few dozen CPU hours; this enables us to study broad ranges of parameters and various EoSs. We expect to reduce the computational cost even further by deriving a simplified set of perturbation equations. As a result, we provide different universal relations for the frequencies of the $l=|m|=2$ $f$-modes of uniformly rotating neutron stars at zero temperature which are independent of the EoS; the proposed formulae are calibrated to several hundred neutron star models that are constructed using both polytropic and realistic EoSs and are scattered across the entire parameter space of equilibrium solutions. Furthermore, it is possible to link the pre-merger binary tidal deformability to the effective compactness of the late post-merger remnant in a universal manner, too. Such universal relations will be an essential piece of the asteroseismological toolkit once the third-generation GW observatories are able to pick up the ring-down and fluid ringing signal following the merger of a binary neutron star system; they allow one to solve the inverse problem, leading to significantly tighter constraints on the mass and radius of the postmerger object.
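The frequency accuracy quoted above is ultimately limited by the length of the evolved time series: a raw FFT of a $15\unit{ms}$ record has a bin width of $1/T \approx 67\unit{Hz}$. The following toy sketch (with made-up signal parameters, not the paper's actual extraction pipeline) illustrates one standard way to do better, namely zero-padding the FFT so that the spectral peak can be located on a much finer frequency grid.

```python
import numpy as np

# Synthetic surrogate for a mode time series: a single damped sinusoid,
# 15 ms long. Frequency, damping time, and sampling interval below are
# illustrative values, not taken from the paper.
f0 = 1800.0    # Hz, "true" mode frequency
tau = 0.05     # s, damping time
dt = 1.0e-5    # s, sampling interval
t = np.arange(0.0, 0.015, dt)
signal = np.exp(-t / tau) * np.sin(2.0 * np.pi * f0 * t)

# The raw FFT bin width for a 15 ms record is 1/T ~ 67 Hz. Zero-padding
# evaluates the spectrum on a far finer grid, so the peak of the (broad)
# mode lobe can be located to a few Hz.
n_pad = 2 ** 18
spectrum = np.abs(np.fft.rfft(signal, n=n_pad))
freqs = np.fft.rfftfreq(n_pad, d=dt)
f_est = freqs[np.argmax(spectrum)]
print(f_est)
```

Note that zero-padding refines only the interpolation of the spectrum, not its intrinsic resolution; resolving two closely spaced modes still requires a longer evolution, which is one reason the moderate cost of the 2D code is valuable.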
For this task, it is essential to have a smorgasbord of universal relations at hand, which allows one to make a practical choice depending on which observables are available or in which of the star's properties one is interested. We will extend the present list of such universal relations in future articles, utilizing different combinations of bulk properties of the star; while we may obviously be (and already have been) inspired by previously published fitting formulae that were derived using different approximative frameworks, we need to be open-minded about models involving novel combinations of observables. We also report the discovery of an accurate estimate for the onset of the CFS-instability when the $f$-mode frequency of the non-rotating member of the family is known, and we verified the corresponding critical value of $T/|W|$. A natural extension of the present study will be a more comprehensive investigation of the spectrum of neutron stars (i.e., higher multipole $f$-modes, low-order $p$-modes and $g$-modes, as well as $w$-modes) which may be excited in different astrophysical processes. Furthermore, we are going to extend our code to account for differentially rotating neutron stars and hot EoSs, which are particularly relevant for nascent neutron stars or postmerger configurations in the immediate aftermath of a binary merger; both will have a considerable impact on the vibration frequencies and the onset of the CFS-instability (and, via two further scaling parameters, also on the universal relations) during a very short but dynamic interval of their lives. Finally, a precise knowledge of the spectrum of compact objects is invaluable, since isolated neutron stars as well as those in inspiraling binary systems may possess high spin rates, and various oscillation modes, which may be excited, e.g., via glitches or tidal coupling, may impact their electromagnetic emission or the dynamics of the whole system. \section{Acknowledgements} C.K.
acknowledges financial support through DFG research Grant No. 413873357. S.V. acknowledges financial support provided under the European Union's H2020 ERC Consolidator Grant ``GRavity from Astrophysical to Microscopic Scales'' (grant agreement no. GRAMS-815673). Part of the computations was performed on Trillian, a Cray XE6m-200 supercomputer at UNH supported by the NSF MRI program under Grant No. PHY-1229408.
\section{Introduction} \label{S:INTRO} Our main goal in this paper is to develop flexible techniques that advance the theory of shock formation in initially regular solutions to quasilinear hyperbolic PDE systems featuring \emph{multiple speeds} of propagation. Our techniques apply in \emph{more than one spatial dimension}, a setting in which one is forced to complement the method of characteristics with an exceptionally technical ingredient: energy estimates that hold all the way up to the shock singularity. We recall that shock singularities are tied to the intersection of a family of characteristics and are such that the (initially smooth) solution remains bounded but some derivative of it blows up in finite time, a phenomenon also known as wave breaking. Our approach has robust features and could be used to prove shock formation for a large class of systems; see Subsect.\ \ref{SS:EXTENDINGRESULTORELATEDSYSTEMS} for discussion on various types of systems that could be treated with our approach. However, for convenience, we study in detail only systems of pure wave\footnote{For quasilinear wave equations, either the first- or second-order Cartesian coordinate partial derivatives of the solution blow up in finite time, depending on whether the quasilinear terms are of type $\Phi \cdot \partial^2 \Phi$ or $\partial \Phi \cdot \partial^2 \Phi$.} type in the present article. Specifically, our main result is a sharp proof of finite-time shock formation for an open set of nearly plane symmetric solutions to equations \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE}, which form a system of two quasilinear wave equations in two spatial dimensions featuring two distinct (dynamic) wave speeds; see Theorem~\ref{T:ROUGHMAINTHM} on pg.~\pageref{T:ROUGHMAINTHM} for a rough summary of our results, Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY} for our assumptions on the nonlinearities, and Theorem~\ref{T:MAINTHEOREM} in Sect.\ \ref{S:MAINTHEOREM} for the precise statements. 
Here we provide a very rough summary. \begin{theorem}[\textbf{Main theorem} (very rough statement)] \label{T:VERYROUGHMAINTHM} In two spatial dimensions, under suitable assumptions on the nonlinearities\footnote{To ensure that shocks form, we make a genuine nonlinearity-type assumption, which results in the presence of Riccati-type terms that drive the blowup; see Remark~\ref{R:GENUINELYNONLINEAR}.} (described in Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY}), there exists an open\footnote{By open, we mean open with respect to a suitable Sobolev topology.} set of regular, approximately plane symmetric\footnote{By plane symmetric initial data, we mean data that are functions of the Cartesian coordinate $x^1$.} initial data for the quasilinear wave equation system \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} such that the solution variables exhibit the following behavior: some first-order Cartesian coordinate partial derivative of one of the solution variables blows up in finite time while the first-order Cartesian coordinate partial derivatives of the other solution variable remain uniformly bounded, all the way up to the singularity in the first solution variable's derivatives. \end{theorem} Our proof of Theorem~\ref{T:MAINTHEOREM} is not by contradiction but is instead based on giving a \emph{complete description of the dynamics}, all the way up to the first singularity, which is tied to the intersection of a family of outgoing\footnote{Throughout, ``outgoing'' roughly means right-moving, that is, along the positive $x^1$ axis, as is shown in Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME}.} approximately plane symmetric characteristics;\footnote{The characteristics that intersect are co-dimension one and have the topology $\mathbb{I} \times \mathbb{T}$, where $\mathbb{I}$ is an interval of time and $\mathbb{T}$ is the standard torus.} see Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME} for a picture of the setup, in which those characteristics are about to intersect to form a shock. 
It is important to note that our approach here relies on our assumption that the shock-forming wave variable satisfies a scalar equation whose principal part depends \emph{only on the shock-forming variable itself}; see equation \eqref{E:FASTWAVE}.\footnote{In contrast, the principal part of the wave equation \eqref{E:SLOWWAVE} of the non-shock-forming solution variable is allowed to depend on both solution variables.} Especially for this reason (and for others as well), one would need new ideas to prove shock formation results in more than one spatial dimension for more general second-order quasilinear hyperbolic systems, where the coupling of all of the unknowns can occur in the principal part of all of the equations. These more complicated kinds of systems arise, for example, in the study of elasticity and crystal optics, where the unknowns are the scalar components $\Phi^A$ of the ``array-valued'' solution $\Phi$ and the evolution equations can be written in the form\footnote{The principal coefficients $H_{AB}^{\alpha \beta}(\partial \Phi)$ must, of course, verify appropriate technical assumptions to ensure the hyperbolicity (and local well-posedness) of the system.} $H_{AB}^{\alpha \beta}(\partial \Phi) \partial_{\alpha} \partial_{\beta} \Phi^B = 0 $ (with implied summation over $\alpha$, $\beta$, and $B$). 
Prior to this paper, the only constructive proof of shock formation for a quasilinear hyperbolic system in more than one spatial dimension featuring multiple speeds was our work \cites{jLjS2016a,jLjS2016b}, joint with J.~Luk, in which we studied a system of quasilinear wave equations featuring a \emph{single wave operator} coupled to a quasilinear transport equation.\footnote{More precisely, the equations studied in \cites{jLjS2016a,jLjS2016b} were a formulation of the compressible Euler equations with vorticity.} As we explain below, the following basic features of the system studied in \cites{jLjS2016a,jLjS2016b} played a crucial role in the analysis: \textbf{i)} there was precisely one wave operator in the system and \textbf{ii)} transport operators are \emph{first-order}. Thus, the main contribution of the present article is that we allow for additional (second-order) quasilinear wave operators in the system. This requires new geometric and analytic ideas since, as we explain two paragraphs below, a naive approach to analyzing the additional wave operators would lead to commutator error terms that are uncontrollable near the shock. The phenomenon of shock formation is ubiquitous in the study of quasilinear hyperbolic PDEs in the sense that many systems without special structure\footnote{Some systems in one spatial dimension with special structure, such as \emph{totally linearly degenerate} quasilinear hyperbolic systems, are not expected to admit any shock-forming solutions that arise from smooth initial data.} are known, in the case of one spatial dimension, to admit shock-forming solutions, at least for some regular initial conditions. In fact, the theory of solutions to quasilinear hyperbolic PDE systems in one spatial dimension is rather advanced in that it incorporates the formation of shocks starting from regular initial conditions as well as their subsequent interactions, at least for solutions with small total variation. 
Indeed, a key reason behind the advanced status of the $1D$ theory is the availability of estimates within the class of functions of bounded variation; readers can consult the monograph \cite{cD2010} for a detailed account of the one-dimensional theory. In more than one spatial dimension, the theory of solutions to quasilinear hyperbolic PDEs (without symmetry assumptions) is much less developed, owing in part to the fact that bounded variation estimates for hyperbolic systems typically fail in this setting \cite{jR1986}. In fact, in more than one spatial dimension, there are very few works even on the formation of a shock\footnote{One of course needs to make assumptions on the structure of the nonlinearities in order to ensure that shocks can form.} starting from smooth initial conditions, let alone the subsequent behavior of the shock wave\footnote{We note, however, that Majda has solved \cites{aM1981,aM1983a,aM1983b}, in appropriate Sobolev spaces, the \emph{shock front problem}. That is, he proved a local existence result starting from an initial discontinuity given across a smooth hypersurface contained in the Cauchy hypersurface. The data must verify suitable jump conditions, entropy conditions, and higher-order compatibility conditions. Moreover, as we describe in Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY}, Christodoulou recently solved \cite{dC2017} the restricted shock development problem.} or the interaction of multiple shock waves; our work here concerns the first of these problems. Specifically, our result builds on the body of work \cites{sA1995,sA1999a,sA1999b,dC2007,jS2016b,dCsM2014,gHsKjSwW2016,sM2016,sMpY2017,jLjS2016a,jLjS2016b} on shock formation in more than one spatial dimension, the new feature being that in the present article, we have treated wave systems with multiple wave speeds such that all solution variables are allowed to be non-zero at the first singularity.
All prior shock formation results in more than one spatial dimension, with the exception of the aforementioned works \cites{jLjS2016a,jLjS2016b}, concern scalar quasilinear wave equations, which enjoy the following fundamental property: there is only one wave speed, which is tied to the characteristics of the principal part of the equation. As we mentioned earlier, the methods of \cites{jLjS2016a,jLjS2016b} yield similar shock formation results for wave-transport systems in which there is precisely one wave operator. As we mentioned above, many quasilinear hyperbolic systems of mathematical and physical interest have a principal part that is more complicated than that of the scalar wave equations treated in \cites{sA1995,sA1999a,sA1999b,dC2007,jS2016b,dCsM2014,gHsKjSwW2016,sM2016,sMpY2017} and the wave-transport systems treated in \cites{jLjS2016a,jLjS2016b}, which feature precisely one wave operator. It is of interest to understand whether or not shock formation also occurs in solutions to such more complicated systems in more than one spatial dimension. We now explain why our proof of shock formation for quasilinear wave systems with multiple wave speeds, though they are not the most general type of second-order hyperbolic systems of interest, requires new ideas compared to \cites{jLjS2016a,jLjS2016b}. Like all prior works on shock formation in more than one spatial dimension, our approach in \cites{jLjS2016a,jLjS2016b} was fundamentally based on the construction of geometric vectorfields adapted to the single wave operator. The following idea, originating in \cites{sA1995,sA1999a,sA1999b,dC2007}, lay at the heart of our analysis of \cites{jLjS2016a,jLjS2016b}: because the vectorfields were adapted to the wave operator, we were able to commute them \emph{all the way through it} while generating only controllable error terms.
Moreover, commuting all the way through the wave operator seems like an essential aspect of the proof since the special cancellations that one relies on to control error terms seem to be visible only under a covariant second-order formulation of the wave equation; see also Remark~\ref{R:NEEDSECONDORDER}. To handle the presence of a transport operator in the system of \cites{jLjS2016a,jLjS2016b}, we relied on the following key insight: upon introducing a geometric weight,\footnote{Specifically, the weight is the inverse foliation density $\upmu$, which we describe later on in detail (see Def.~\ref{D:FIRSTUPMU}).} one can commute the same geometric vectorfields through an essentially arbitrary,\footnote{More precisely, the transport operator must be transversal to the null hypersurfaces corresponding to the wave operator.} solution-dependent, \emph{first-order} (transport) operator; thanks to the weight, one encounters only error terms that can be controlled all the way up to the shock. However, as we explain at the end of Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY}, it seems that commuting the geometric vectorfields through a typical second-order differential operator, such as a wave operator with a speed different from the one to which the vectorfields are adapted, leads to uncontrollable error terms; see the end of Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY} for further discussion on this point. 
For this reason, our treatment of systems featuring two wave operators with strictly different speeds\footnote{By strictly different speeds, we mean that the characteristics corresponding to the two wave operators are strictly separated; see Subsubsect.\ \ref{SSS:WAVESPEEDASSUMPTIONS} for our precise assumptions.} is based on the following fundamental strategy, which we discuss in more detail in Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY}: \begin{quote} We use a \emph{first-order reformulation} of one of the wave equations (corresponding to the solution variable that does \emph{not} form a shock), which, though somewhat limiting in the precision it affords, \emph{allows us to commute the equations with the geometric vectorfields while avoiding uncontrollable error terms}. \end{quote} For the solutions under consideration, the ``fast wave variable,'' denoted by $\Psi$ throughout, is the one that forms a shock. That is, $\Psi$ remains bounded but some of its first-order partial derivatives with respect to the Cartesian coordinates blow up. In contrast, the ``slow wave variable,'' denoted by $w$ throughout, is more regular in that its first-order partial derivatives with respect to the Cartesian coordinates remain \emph{uniformly bounded}, all the way up to the shock. We can draw an analogy to the little that is known about shock formation in elasticity, a second-order hyperbolic theory in which longitudinal waves propagate at a faster speed than transverse waves \cite{sTZ1998}. In spherical symmetry, there are no transverse elastic waves, and the equations of elasticity reduce to a single scalar quasilinear wave equation that governs the propagation of longitudinal waves. In this setting, under an appropriate genuinely nonlinear assumption, John proved \cite{fJ1984} a small-data finite-time shock formation result. 
Thus, for elastic waves in the simplified setting of spherical symmetry, it is precisely the fastest wave that forms a shock (while the ``transverse wave part'' of the solution remains trivial). In our work here, the slow-moving wave $w$ is allowed to be non-zero at the singularity. For this reason, our proof of the more regular behavior of $w$ is non-trivial and relies on our first-order formulation of the evolution equations for $w$. As will become clear, one would likely need new ideas to treat data such that the slow wave variable forms a shock. This is what one expects to happen, for example, in various solution regimes for the Euler-Einstein equations of cosmology and for the Euler-Maxwell equations of plasma dynamics, where one expects the slow-moving sound waves to be able to drive finite-time shock formation in the ``fluid part'' of the system (for appropriate data). Roughly, the reason that one would need new ideas to prove shock formation for the slow wave is that in the present article, to close the energy estimates, we crucially rely on the fact that the characteristic hypersurfaces of the fast wave operator (which are also known as null hypersurfaces in view of their connection to the Lorentzian notion of a null vectorfield) are \emph{spacelike} relative to the slow wave operator; indeed, this essentially defines what it means for the slow wave operator to be ``slow.'' We denote these fast wave characteristic hypersurfaces by $\mathcal{P}_u^t$ when they are truncated at time $t$; see Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME} for a picture of the $\mathcal{P}_u^t$.
Analytically, the fact that the $\mathcal{P}_u^t$ are spacelike relative to the slow wave operator is reflected in the coerciveness estimate \eqref{E:SLOWNULLFLUXCOERCIVENESS}, which shows that energy for the slow wave variable $w$ along $\mathcal{P}_u^t$ is positive definite in $w$ and \emph{all} of its partial derivatives with respect to the Cartesian coordinates and \emph{does not feature a degenerate $\upmu$ weight}. This non-degenerate coerciveness along $\mathcal{P}_u^t$ appears to be essential for closing the energy estimates. We remark that in one spatial dimension, one can rely exclusively on the method of characteristics (without energy estimates) and thus there are many results in which the slow wave can blow up, \cites{fJ1974,pL1964,dCdRP2016,aRfS2008} to name just a few. \begin{remark}[\textbf{We do not use a first-order formulation for the shock-forming wave equation}] \label{R:NEEDSECONDORDER} As we describe in Subsect.\ \ref{SS:CONTEXTANDPRIORRESULTS}, it does not seem possible to treat the wave equation for $\Psi$ using a first-order formulation in the spirit of the one that we use for $w$; such a formulation of the evolution equations for $\Psi$ would not exhibit the special geometric cancellations found in our second-order formulation of it (see equation \eqref{E:FASTWAVE} and the discussion below it). \end{remark} \subsection{A more precise summary of the main results} \label{SS:SUMMARYOFMAINRESULTS} In this paper, we consider data for the system of wave equations given on the union of a portion of a null hypersurface $\mathcal{P}_0$ and a portion of the two-dimensional spacelike Cauchy hypersurface $\Sigma_0 := \lbrace 0 \rbrace \times \Sigma \simeq \Sigma$, where the ``space manifold'' $\Sigma$ is \begin{align} \label{E:SPACEMANIFOLD} \Sigma := \mathbb{R} \times \mathbb{T}. 
\end{align} Here and throughout, $\mathbb{T}$ is the standard one-dimensional torus (that is, the interval $[0,1]$ with the endpoints identified and equipped with a standard smooth orientation) while ``null'' means null with respect to the Lorentzian metric $g$ corresponding to the wave equation for the shock-forming variable $\Psi$. In fact, \begin{quote} \emph{We tailor all geometric constructions to the fast wave metric $g$}. \end{quote} See Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME} for a picture of the setup. We allow for non-trivial data on $\mathcal{P}_0$ because this setup would be convenient, in principle, for proving that, at least for some of our solutions, $\Psi$ and $w$ are both non-zero at the singularity (roughly because one could place non-zero data on $\mathcal{P}_0$ near the shock). However, we do not explicitly exhibit any data such that both solution variables are guaranteed to be non-zero at the singularity. Our assumption of two spatial dimensions is for technical convenience only; similar results could be proved in three or more spatial dimensions. This assumption allows us to avoid the technical issue of deriving elliptic estimates for the foliations that we use in our analysis, which are needed in three or more spatial dimensions. The elliptic estimates would be somewhat lengthy to derive but have been well-understood in the context of shock formation starting from \cite{dC2007}. Our proof of shock formation could also be extended to different spatial topologies, though changing the spatial topology might alter the kinds of data to which our methods apply.\footnote{The formation of the shock is local in nature. Thus, given any spatial manifold, our approach could be used to exhibit an open set of data on it such that the solution forms a shock in finite time. 
Roughly, there is no obstacle to proving large-data shock formation on general manifolds.} We recall that the shock-forming variable $\Psi$ corresponds to the ``fast speed'' while the less singular variable $w$ corresponds to the ``slow speed.'' We will study solutions with initial data that are (non-symmetric) perturbations of the initial data corresponding to simple outgoing plane symmetric waves with $w \equiv 0$. We discuss the notion of a simple outgoing plane symmetric solution in more detail in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}, in which, for illustration, we provide a proof of our main results for plane symmetric solutions. By plane symmetric solutions, we mean solutions that depend only on the Cartesian coordinates $t$ and $x^1$. In the context of our main results, the factor of $\mathbb{T}$ in the space manifold $ \Sigma = \mathbb{R} \times \mathbb{T} $ corresponds to perturbations away from plane symmetry. The advantage of studying (asymmetric) perturbations of simple outgoing plane symmetric waves is that it allows us to focus our attention on the singularity formation without having to confront additional evolutionary phenomena such as dispersion, which is exhibited, for example, in the \emph{initial} evolutionary phase of small-data solutions with initial data given on $\mathbb{R}^2$. We studied similar nearly plane symmetric solution regimes in our joint work \cite{jSgHjLwW2016} on shock formation for scalar quasilinear wave equations as well as our joint work \cite{jLjS2016b} on the compressible Euler equations with vorticity, which we further describe below. We now give a slightly more precise statement of our main results; see Theorem~\ref{T:MAINTHEOREM} for the precise statements.
\begin{theorem}[\textbf{Main theorem} (slightly more precise statement)] \label{T:ROUGHMAINTHM} Under suitable assumptions on the nonlinearities (described in Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY}), there exists an open set of regular data (belonging to an appropriate Sobolev space) for the system \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} such that $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$ blows up in finite time due to the intersection of the ``fast'' characteristics $\mathcal{P}_u$, while $|\Psi|$, $|w|$, and $\max_{\alpha=0,1,2} |\partial_{\alpha} w|$ remain uniformly bounded, all the way up to the singularity. More precisely, we allow the data for $\Psi$ to be large or small, but they must be close to the data corresponding to a \underline{simple outgoing plane wave}. We furthermore assume that the data for $w$ are relatively small compared to the data for $\Psi$. \end{theorem} \begin{remark}[\textbf{The need for geometric coordinates}] \label{R:NEEDFORGEOMETRICCOORDS} Although Theorem~\ref{T:ROUGHMAINTHM} refers to the Cartesian coordinates, as in all of the prior proofs of shock formation in more than one spatial dimension, we need very sharp estimates, tied to a system of geometric coordinates, to close our proof; the Cartesian coordinates are not adequate for measuring the regularity and boundedness properties of the solutions nor for tracking the intersection of characteristics. \end{remark} \subsection{Paper outline and remarks} \label{SS:PAPEROUTLINE} \begin{itemize} \item In the rest of Sect.\ \ref{S:INTRO}, we place our main result in context, construct some of the basic geometric objects that we use in our analysis, and provide an overview of the proof. \item In the present paper, we often cite identities and estimates proved in the work \cite{jSgHjLwW2016}, which yielded similar shock formation results for scalar wave equations (see, however, Remark~\ref{R:NEWPARAMETER}).
\item In Sect.\ \ref{S:GEOMETRICSETUP}, we construct the remaining geo-analytic objects that we use in our proof and exhibit their main properties. \item In Sect.\ \ref{S:NORMSANDBOOTSTRAP}, we define the norms that we use in our analysis, state our assumptions on the data, and formulate bootstrap assumptions. \item In Sect.\ \ref{S:ENERGIES}, we define our energies and provide the energy identities that we use in our $L^2$ analysis. \item In Sects.\ \ref{S:PRELIMINARYPOINTWISE}-\ref{S:POINTWISEESTIMATES}, we derive a priori $L^{\infty}$ and pointwise estimates for the solution. \item In Sect.\ \ref{S:ENERGYESTIMATES}, we derive a priori energy estimates. This is the most important and technically demanding section of the paper and it relies on all of the geometric constructions and estimates from the prior sections. \item In Sect.\ \ref{S:MAINTHEOREM}, we provide our main theorem. This section is short because the estimates of the preceding sections essentially allow us to quote the proof of \cite{jSgHjLwW2016}*{Theorem~15.1}. \end{itemize} \subsection{Further context and prior results} \label{SS:CONTEXTANDPRIORRESULTS} In his foundational work \cite{bR1860} in which he invented the Riemann invariants, Riemann initiated the rigorous study of finite-time shock formation in initially regular solutions to quasilinear hyperbolic systems in one spatial dimension, specifically the compressible Euler equations of fluid mechanics. An abundance of shock formation results in one spatial dimension were proved in the aftermath of Riemann's work, with important contributions by Lax \cite{pL1964}, John \cite{fJ1974}, Klainerman--Majda \cite{sKaM1980}, and many others, continuing up until the present day \cite{dCdRP2016}. 
The first constructive results on finite-time shock formation in more than one spatial dimension (without symmetry assumptions) were proved by Alinhac \cites{sA1995,sA1999a,sA1999b}, who treated scalar quasilinear wave equations in two and three spatial dimensions that fail to satisfy the null condition. In the case of the quasilinear wave equations of irrotational relativistic fluid mechanics (which are scalar equations), Alinhac's results were remarkably sharpened by Christodoulou \cite{dC2007}, whose fully geometric framework subsequently led to further shock formation results for scalar quasilinear wave equations \cites{jS2016b,dCsM2014,sMpY2017,jSgHjLwW2016} as well as the compressible Euler equations with vorticity \cites{jLjS2016a,jLjS2016b}. In Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY}, we describe some of these works in more detail. As we mentioned at the beginning, our main goal in this paper is to develop techniques for studying shock formation in solutions to quasilinear hyperbolic systems in more than one spatial dimension whose principal part exhibits multiple speeds of propagation. In any attempt to carry out such a program, one must grapple with the following difficulty: the known approaches to proving shock formation in more than one spatial dimension for scalar wave equations are based on geo-analytic constructions that are fully adapted to the principal part of the scalar equation (which corresponds to a dynamic Lorentzian metric), or, more precisely, to a family of characteristic hypersurfaces corresponding to the Lorentzian metric. One might say that in prior works, all coordinate/gauge freedoms for the domain were exhausted in order to understand the intersection of the characteristics corresponding to the scalar equation and the relationship of the intersection to the blowup of the solution's derivatives.
Therefore, those works left open the question of how to prove shock formation for systems featuring multiple unknowns and a more complicated principal part with distinct speeds of propagation; roughly speaking, to treat such more complicated systems, one has to understand how geometric objects adapted to one part of the principal operator (that is, to one wave speed) interact with the remaining part of the principal operator (corresponding to the other speeds). For systems with appropriate structure, one could skirt this difficulty by considering only a subset of initial conditions that lead to the following behavior: only one solution variable is non-zero at the first singularity. For such solutions, the analysis effectively reduces to the study of a scalar equation. A simple but important example of this approach is given by John's aforementioned study \cite{fJ1984} of shock formation in spherically symmetric solutions to the equations of elasticity, in which case the equations of motion, which generally have a complicated principal part with multiple speeds of propagation, reduce to a spherically symmetric \emph{scalar} quasilinear wave equation. However, such a drastically simplified setup is mathematically and physically unsatisfying in that it is typically not stable against nontrivial perturbations with very large spatial support, no matter how small they might be. In our joint works \cites{jLjS2016a,jLjS2016b}, we proved the first shock formation result for a system with more than one speed in which all solution variables can be active (non-zero) at the first singularity.
Specifically, the system (which is actually a new formulation of the compressible Euler equations) featured one wave operator and one transport operator, and we showed that for suitable data, a family of outgoing \emph{wave characteristics} intersect in finite time and cause a singularity in the first-order Cartesian partial derivatives of the wave variables but not the transport variable.\footnote{In \cites{jLjS2016a,jLjS2016b}, the transport variable was the specific vorticity, defined to be vorticity divided by density.} With the result \cite{jLjS2016b} in mind, one might say the main new contribution of this article is to upgrade the framework established in \cite{jLjS2016b} in order to accommodate an additional \emph{second-order} quasilinear hyperbolic scalar equation (see Subsect.\ \ref{SS:EXTENDINGRESULTORELATEDSYSTEMS} for remarks on how to extend our result to additional systems). As we mentioned earlier, our main results apply to initial data such that the ``fast wave'' variable $\Psi$ (corresponding to the strictly faster of the two speeds of propagation) is the one that forms a shock. That is, $\Psi$ remains bounded but some first-order Cartesian coordinate partial derivative $\partial_{\alpha} \Psi$ blows up in finite time. As in all prior works on shock formation, in the equations that we study in this article, the blowup of $\partial \Psi$ is driven by the presence of Riccati-type self-interaction inhomogeneous terms $\partial \Psi \cdot \partial \Psi$ in the wave equation for $\Psi$. As we have already stressed, the ``slow wave'' variable $w$ exhibits much less singular behavior, even though its wave equation is also allowed to contain (when expressed relative to standard Cartesian coordinates) Riccati-type self-interaction terms $\partial w \cdot \partial w$.
Our ability to track the different behaviors of $\Psi$ and $w$ requires new geometric and analytic insights, notably the advantages that arise from our first-order reformulation of the slow wave equation. We now outline why, for the solutions under study, $\partial \Psi$ blows up while $\partial w$ does not, even though the wave equations for $\Psi$ and $w$ can have a similar structure.\footnote{In fact, the wave equation for $w$ is allowed to be even more complicated than that of $\Psi$ since we allow the principal operator for $w$ to depend on $\Psi$, $w$, and $\partial w$, while the principal operator for $\Psi$ is allowed to depend only on $\Psi$; see \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE}.} The main idea is that we consider initial data that lead to \emph{nearly simple outgoing plane wave solutions} in which $\Psi$ has small initial dependence (in appropriate norms) on the Cartesian torus coordinate $x^2$ and in which $w$ and $\partial w$ are initially relatively small and remain so throughout the evolution. By an ``outgoing simple wave,'' we roughly mean a solution such that $w \equiv 0$ and such that the dynamics of $\Psi$ are dominated by a right-moving (along the $x^1$ axis) wave, as opposed to a combination of right- and left-moving waves. Our results show that for (non-symmetric) perturbations of such solutions, $\partial \Psi$ blows up before the small terms $\partial w \cdot \partial w$ are able to drive the blowup of $\partial w$. In particular, in the solution regime under consideration, $w$ and $\partial w$ remain small in $L^{\infty}$ all the way up to the first singularity in $\partial \Psi$. This is a subtle effect in that the wave equation for $w$ is allowed to contain source terms (see RHS~\eqref{E:SLOWWAVE}) that are linear (but not quadratic or higher order!) in the tensorial component of $\partial \Psi$ that blows up. 
Therefore, our work entails the study of non-trivial interactions between a wave that forms a singularity and another wave that, as we must prove, exhibits less singular behavior, even though it is coupled to the ``singular part'' of the singular wave. The set of initial data that we treat in our main results is motivated in part by John's aforementioned work \cite{fJ1974}, in which he proved a blowup result for first-order quasilinear hyperbolic systems with multiple speeds in one spatial dimension whose small-data solutions behave like simple waves\footnote{That is, in \cite{fJ1974}, one non-zero solution component is dominant near the singularity.} near the singularity. John's result was recently sharpened \cite{dCdRP2016} by Christodoulou--Perez using extensions of the framework developed by Christodoulou in \cite{dC2007}. In Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}, to illustrate some of the main ideas, we provide a proof of our main results in the special case that the initial data have exact plane symmetry, that is, for data that are \emph{independent} of the Cartesian coordinate $x^2$. We caution, however, that the assumption of plane symmetry represents a drastic simplification of the full problem; in plane symmetry, we are able to avoid deriving energy estimates, which is the main technical difficulty that one encounters in the general case. As we will explain, our analytic approach, in particular our approach to energy estimates, is based on geometric decompositions adapted to the characteristics corresponding to the principal part of $\Psi$'s wave equation together with a first-order reformulation of the wave equation for $w$ that is compatible with these geometric decompositions. This allows us to track the distinct behavior of the two waves all the way up to the shock.
As we mentioned above, one would likely need new ideas to treat data such that the slow wave derivative $\partial w$ is expected to form the first singularity and, at the same time, to interact with the fast wave $\Psi$ and its derivatives. \subsection{Additional details concerning the most relevant prior works} \label{SS:PRIORWORKSANDSUMMARY} Our work here builds upon the outstanding contributions of Alinhac \cites{sA1995,sA1999a,sA1999b}, who was the first to prove small-data shock formation for solutions to quasilinear wave equations in more than one spatial dimension. Specifically, Alinhac studied scalar quasilinear wave equations of the form \begin{align} \label{E:ALWAVE} (g^{-1})^{\alpha \beta}(\partial \Phi) \partial_{\alpha} \partial_{\beta} \Phi = 0, \end{align} on $\mathbb{R}^{1+n}$ for $n=2,3$. For all such equations that fail to satisfy the null condition, he identified a set of small, regular, compactly supported initial data such that the corresponding solution forms a shock in finite time. The set of initial data to which his main results apply was such that the constant-time hypersurface of first blowup contains \underline{exactly one point} at which $\partial^2 \Phi$ blows up.\footnote{For equations of type \eqref{E:ALWAVE}, a shock singularity is such that $\Phi$ and $\partial \Phi$ remain bounded while $\partial^2 \Phi$ blows up.} The most important ingredient in Alinhac's approach was a dynamically constructed system of geometric coordinates tied to an eikonal function (see Subsubsect.\ \ref{SSS:GEOMETRICINGREDIENTS} for a precise definition), whose level sets are $g$-null hypersurfaces (that is, characteristics). The main idea behind his approach was as follows: show that relative to the geometric coordinates, the solution remains regular up to the singularity, except possibly at the very high (geometric) derivative levels.
This enables one to approach the problem of shock formation from a more traditional perspective in which one derives long-time-existence-type estimates. It turns out that this approach, while viable, is extremely technically demanding to implement. The reason is that the best estimates known allow for the possibility that the high-order geometric energies blow up, which makes it difficult (though possible) to prove that the solution remains regular relative to the geometric coordinates at the lower derivative levels. After one has obtained regular estimates (at the lower derivative levels) relative to the geometric coordinates, one can easily carry out the rest of the proof, namely deriving the singularity formation relative to the Cartesian coordinates, which one obtains by showing that the geometric coordinates degenerate relative to the Cartesian ones. Roughly, both the formation of the shock and the blowup of the solution's Cartesian coordinate partial derivatives are caused by the intersection of the level sets of the eikonal function. The shock-generating initial conditions identified by Alinhac form a set of ``non-degenerate'' initial data, which can be thought of as generic inside the set of all smooth, small, compactly supported initial data. Although Alinhac's use of an eikonal function allowed him to provide a sharp description of the first singularity, his approach to deriving energy estimates was based on a Nash--Moser iteration scheme featuring a free boundary. His iteration scheme fundamentally relied on his non-degeneracy assumptions on the data, which led to solutions whose first singularity is \emph{isolated} in the constant-time hypersurface of first blowup. His reliance on a Nash--Moser scheme is tied to the fact that the regularity theory for the eikonal function is very difficult at the top order. 
Our work also builds upon Christodoulou's groundbreaking sharpening \cite{dC2007} of Alinhac's results for the subclass of (scalar) Euler-Lagrange wave equations corresponding to the irrotational relativistic Euler equations in three spatial dimensions. For these wave equations, Christodoulou proved the following main results: \textbf{i)} He showed that for solutions generated by small,\footnote{In particular, unlike Alinhac's proof, Christodoulou's yields global information about solutions corresponding to an open set of data that contains the trivial data in its interior.} regular, compactly supported data, shocks are the only possible singularities. The formation of a shock exactly corresponds to the intersection of the characteristics, or equivalently, the vanishing of the inverse foliation density of the characteristics, denoted by $\upmu$ (see Def.~\ref{D:FIRSTUPMU}). Put differently, Christodoulou proved that if the characteristics never intersect, then the solution exists globally.\footnote{More precisely, he studied the solution only in a region that is trapped in between an inner null cone and an outer null cone.} \textbf{ii)} He exhibited an open set of data for which shocks do in fact form. Unlike Alinhac's data, Christodoulou's do not have to lead to a solution with an isolated first singularity. Most importantly, \textbf{iii)} Christodoulou gave a complete description of a portion of the maximal development\footnote{Roughly, the maximal development is the largest possible classical solution that is uniquely determined by the data. Readers can consult \cites{jSb2016,wW2013} for further discussion.} of the data, all the way up to the boundary. Although for brevity we have not given a complete description of the maximal development in this article, it is likely that our sharp estimates could be used to obtain such a description, by invoking arguments along the lines of \cite{dC2007}*{Chapter~15}. 
Like Alinhac's approach and the approach of the present article, Christodoulou's framework was based on an eikonal function. However, unlike Alinhac, Christodoulou did not use Nash--Moser estimates when deriving energy estimates. Instead, he used a sharper, more geometric approach that required the full strength of his framework. In particular, to avoid derivative loss, Christodoulou derived sharp information regarding the tensorial regularity properties of eikonal functions in the context of shock formation, in which their level sets intersect. In this endeavor, he was undoubtedly aided by the experience he gained from his celebrated joint proof with Klainerman of the stability of Minkowski spacetime \cite{dCsK1993}. In that work, the authors also had to deeply understand the high-order regularity properties of eikonal functions, though in a less degenerate context in which they remain regular. Christodoulou's sharp description of the maximal development, though of interest in itself, is also important for another reason: it is an essential ingredient for properly setting up the shock development problem, which is the problem of weakly continuing the solution to the relativistic Euler equations past the first singularity under appropriate selection criteria in the form of jump conditions; see \cite{dC2007} for further discussion. We note that the shock development problem was recently solved in spherical symmetry \cite{dCaL2016}. Moreover, in a recent breakthrough work \cite{dC2017}, for the non-relativistic compressible Euler equations without symmetry assumptions, Christodoulou solved the shock development problem in a restricted case (known as the restricted shock development problem) such that the jump in entropy across the shock hypersurface was ignored. 
Christodoulou's sharp, geometric approach has led to further advancements on shock formation in solutions starting from smooth initial conditions, including extensions of his results to larger classes of equations and new types of initial conditions \cites{jS2016b,dCsM2014,gHsKjSwW2016,sM2016,sMpY2017,jLjS2016a,jLjS2016b}; see the survey article \cite{gHsKjSwW2016} for an in-depth discussion of some of these results. However, a crucial feature of both Alinhac's and Christodoulou's frameworks is that they are tailored precisely to a single family of characteristics -- the family whose intersection corresponds to a shock singularity. Thus, all prior works left open the question of whether or not these approaches can be adapted to systems featuring multiple speeds of propagation. As we mentioned above, the first affirmative result in this direction was provided by our joint works \cites{jLjS2016a,jLjS2016b} with J.~Luk, in which we discovered some remarkable geo-analytic structures in the compressible Euler equations with vorticity. Inspired by these structures, we developed an extended version of Christodoulou's framework, and we used it to prove shock formation for solutions to a quasilinear \emph{wave-transport} system. More precisely, the wave-transport system that we studied in \cites{jLjS2016a,jLjS2016b} was a new formulation of the compressible Euler equations, where the velocity components and density satisfied a system of covariant wave equations, \emph{all with the same covariant wave operator $\square_g$} (corresponding to a single Lorentzian metric $g$), and the vorticity satisfied a (first-order) transport equation. There were two speeds in the system: the speed of sound, corresponding to sound wave propagation, and the speed associated with the transport of vorticity.
A particularly remarkable aspect of the equations studied in \cites{jLjS2016a,jLjS2016b}, which is central to the proofs, is that the inhomogeneous terms had a good null structure that did not interfere with the shock formation processes. The null structures, which are fully nonlinear in nature, are a tensorial generalization of the good null structure enjoyed by the standard null form $\mathcal{Q}^g(\partial \Psi,\partial \Psi)$, which is an admissible term in our systems \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} (see just below those equations for further discussion on this point). In our works \cites{jLjS2016a,jLjS2016b}, to control the wave-transport solution's derivatives, we followed the approach of \cite{dC2007} (itself inspired by the Christodoulou--Klainerman proof \cite{dCsK1993} of the stability of Minkowski spacetime) and constructed a family of dynamic geometric objects, including geometric vectorfields, adapted to the characteristics\footnote{More precisely, as we describe in Subsect.\ \ref{SS:OVERVIEWOFPROOF}, the vectorfields were adapted to an eikonal function corresponding to $g$, whose level sets are $g$-null hypersurfaces.} of $g$. A seemingly unavoidable aspect of our approach in \cites{jLjS2016a,jLjS2016b} was that, due to the coupled nature of the system, we were forced to commute the transport equation with the \emph{same geometric vectorfields} in order to obtain estimates for the solution's derivatives. In general, one might expect to encounter crippling error terms from this procedure, since the geometric vectorfields are not adapted to the transport operator. 
What allowed our proof to go through are the following facts: \textbf{i)} transport operators are first-order and \textbf{ii)} the operators $\upmu \partial_{\alpha}$ exhibit good commutation properties with the geometric vectorfields, where $\partial_{\alpha}$ is a Cartesian coordinate partial derivative vectorfield and $\upmu > 0$, mentioned above, is the inverse foliation density of the wave characteristics (which we rigorously define in Subsubsect.\ \ref{SSS:GEOMETRICINGREDIENTS} since $\upmu$ plays a critical role in the present work as well). Therefore, since the transport operator was just a (solution-dependent) linear combination of the $\partial_{\alpha}$, upon multiplying the transport equation by $\upmu$ and commuting it with the geometric vectorfields, we were able to completely avoid the worst imaginable commutator error terms, which enabled us to close the proof. We now stress that our approach in \cites{jLjS2016a,jLjS2016b} does not allow one to commute the geometric vectorfields through typical second-order operators $\upmu \partial_{\alpha} \partial_{\beta}$; this would typically generate crippling commutator error terms\footnote{Using a weight with a different power of $\upmu$, such as $\upmu^2 \partial_{\alpha} \partial_{\beta}$, also seems to lead to insurmountable difficulties.} featuring a factor of $1/\upmu$, which blows up as $\upmu \to 0$ and obstructs the goal of deriving regular estimates. In particular, in itself, the approach of \cites{jLjS2016a,jLjS2016b} does not manifestly allow one to couple an additional quasilinear wave equation with a ``new'' metric $h$ that is different from $g$. Here it makes sense to clarify the following point: the crippling error terms do \emph{not} arise when commuting the geometric vectorfields through $\square_g$ (since the vectorfields are adapted to the characteristics of $g$), but they \emph{do} arise when commuting them through a typical second-order differential operator. 
Thus, the works \cites{jLjS2016a,jLjS2016b} left open the question of how to prove shock formation for solutions to second-order quasilinear systems featuring more than one wave operator. As we have mentioned, in the present article, we prove the first shock formation results for systems of this type. The following key idea, mentioned earlier, lies at the heart of our approach here. \begin{quote} \emph{It is possible to formulate the wave equation for the non-shock-forming slow variable as a first-order system that can be treated using an extension of the approach of \cite{jLjS2016b}}; see equations \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS}. \end{quote} \subsection{Remarks on the nonlinear terms and extending the results to related systems} \label{SS:EXTENDINGRESULTORELATEDSYSTEMS} The formation of shocks exhibited by Theorem~\ref{T:ROUGHMAINTHM} is of course tied to our structural assumption on the nonlinearities, which we precisely describe in Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY}. As we mentioned earlier, the blowup of $\Psi$ is driven by the presence of a Riccati-type interaction term in its wave equation, which is captured by our assumption \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT} below. For this reason, the wave equation of $\Psi$ can be caricatured as\footnote{Throughout, if $V$ is a vectorfield and $f$ is a scalar function, then $V f := V^{\alpha} \partial_{\alpha} f$ denotes the $V$-directional derivative of $f.$} $L_{(Flat)} \partial_1 \Psi \sim (\partial_1 \Psi)^2 + \mbox{\upshape Error}$, where $L_{(Flat)} := \partial_t + \partial_1$ and $\mbox{\upshape Error}$ depends on $\Psi$, $w$, and their derivatives (and in particular $\mbox{\upshape Error}$ contains the quasilinear interaction terms). Although this caricature wave equation suggests that $\partial_1 \Psi$ should blow up in finite time along the integral curves of $L_{(Flat)}$, this is not how our proof works. 
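As an aside for intuition, the finite-time blowup suggested by the caricature equation can be seen from a model Riccati ODE computation; this is only a heuristic of our own (it discards the Error terms and the quasilinear coupling), not how the actual proof proceeds.

```latex
% Heuristic model only: discard the Error terms and set
% y(t) := (\partial_1 \Psi)(t, x^1 + t, x^2), that is, evaluate
% \partial_1 \Psi along an integral curve of L_{(Flat)} = \partial_t + \partial_1.
% The caricature equation then reduces to the Riccati ODE
\[
\frac{\mathrm{d}}{\mathrm{d}t} y = y^2,
\qquad
y(0) = y_0 > 0,
\qquad
\Longrightarrow
\qquad
y(t) = \frac{y_0}{1 - y_0 t},
\]
% which blows up at the finite time t_* = 1/y_0; larger initial data y_0
% lead to earlier blowup.
```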
It seems that in order to close the energy estimates and to show that error terms do not interfere with the blowup, one needs to derive very sharp estimates tailored to the family of characteristics corresponding to $\Psi$, which are in turn influenced by $\Psi$ in view of the quasilinear nature\footnote{The metric $g$ in the wave equation for $\Psi$ is such that $g=g(\Psi)$. In particular, $g$ does not depend on $w$ and thus the characteristics corresponding to $g$ are not directly influenced by $w$.} of the equation. We also stress that, as in prior shock formation results, our proof is more sensitive to perturbations of the equations than typical proofs of global existence. This is not surprising in view of the fact that adding terms of the form, say $\pm (\partial_1 \Psi)^3$, to the RHS of the above caricature equation can drastically alter the global behavior of its solutions. In contrast, since $w$ and $\partial_{\alpha} w$ remain bounded up to the shock, our approach is able to accommodate essentially arbitrary semilinear terms comprised of products of these variables; see Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY} for our precise assumptions on the nonlinearities. For convenience, we have chosen not to treat the most general type of system to which our approach applies. Our approach is flexible in the sense that it could be used to treat systems featuring additional wave equations, transport equations, or symmetric hyperbolic equations. However, the following assumptions play a critical role in our analysis. \begin{itemize} \item The Lorentzian metric $g$ corresponding to the principal part of the wave equation of $\Psi$ depends only on $\Psi$. That is, in the wave equation \eqref{E:FASTWAVE} below, $g = g(\Psi)$. More generally, we could allow for $g = g(\Psi_1,\cdots,\Psi_m)$, as long as the \emph{same metric} $g$ corresponds to the principal part of the wave equation of $\Psi_i$ for $1 \leq i \leq m$. 
This assumption is needed to control the top-order derivatives of the eikonal function corresponding to $g$ (see the discussion of modified quantities in Subsubsect.\ \ref{SSS:ENERGYESTIMATES} for more details on this point). \item The shock-forming variable $\Psi$ corresponds to the fastest speed (in the strict sense) in the system. This assumption implies that the null hypersurfaces $\mathcal{P}_u$ corresponding to the metric $g(\Psi)$ are \emph{spacelike} from the perspective of the principal parts of the remaining equations in the system. This is important because our proof requires the availability of positive definite energies for the slow wave solution variables along $g$-null hypersurfaces. In the present article, the positive definiteness of the energies for the slow wave $w$ along these hypersurfaces is guaranteed by the estimates in equation \eqref{E:SLOWNULLFLUXCOERCIVENESS}. \item See Remark~\ref{R:DIFFERENTFASTEQUATION} below for a discussion of other types of ``fast'' wave equations for which we could prove a stable shock formation result. \end{itemize} \begin{remark} In Subsect.\ \ref{SS:SYSTEMSUNDERSTUDY} we will make further assumptions on the nonlinearities and quantify the assumption that $\Psi$ is the fast wave. \end{remark} \subsection{Basic notational and index conventions} \label{SS:NOTATIONANDINDEXCONVENTIONS} We now summarize some of our notation. Some of the concepts referred to here are defined later in the article. Throughout, $\lbrace x^{\alpha} \rbrace_{\alpha =0,1,2}$ denote the standard Cartesian coordinates on the spacetime $\mathbb{R} \times \Sigma$, where $x^0 \in \mathbb{R}$ is the time variable and $(x^1,x^2) \in \Sigma = \mathbb{R} \times \mathbb{T}$ are the space variables, chosen such that $\partial_2$ is positively oriented. 
We denote the corresponding partial derivative vectorfields by $ \displaystyle \partial_{\alpha} =: \frac{\partial}{\partial x^{\alpha}} $ (which are globally defined and smooth even though $x^2$ is only locally defined), and we often use the alternate notation $t := x^0$ and $\partial_t := \partial_0$. \begin{itemize} \item Lowercase Greek spacetime indices $\alpha$, $\beta$, etc.\ correspond to the Cartesian spacetime coordinates and vary over $0,1,2$. Lowercase Latin spatial indices $a$,$b$, etc.\ correspond to the Cartesian spatial coordinates and vary over $1,2$. We use tilded indices such as $\widetilde{\alpha}$ in the same way that we use their non-tilded counterparts. All lowercase Greek indices are lowered and raised with the fast wave spacetime metric $g$ and its inverse $g^{-1}$, and \emph{not with the Minkowski metric}. \item We use Einstein's summation convention in that repeated indices are summed over their respective ranges. \item We sometimes use $\cdot$ to denote the natural contraction between two tensors (and thus raising or lowering indices with a metric is not relevant for this contraction). For example, if $\xi$ is a spacetime one-form and $V$ is a spacetime vectorfield, then $\xi \cdot V := \xi_{\alpha} V^{\alpha}$. \item If $\xi$ is a one-form and $V$ is a vectorfield, then $\xi_V := \xi_{\alpha} V^{\alpha}$. Similarly, if $W$ is a vectorfield, then $W_V := W_{\alpha} V^{\alpha} = g(W,V)$. \item If $\xi$ is an $\ell_{t,u}$-tangent one-form (as defined in Subsect.\ \ref{SS:PROJECTIONTENSORFIELDANDPROJECTEDLIEDERIVATIVES}), then $\xi^{\#}$ denotes its $g \mkern-8.5mu / $-dual vectorfield, where $g \mkern-8.5mu / $ is the Riemannian metric induced on $\ell_{t,u}$ by $g$. 
Similarly, if $\xi$ is a symmetric type $\binom{0}{2}$ $\ell_{t,u}$-tangent tensor, then $\xi^{\#}$ denotes the type $\binom{1}{1}$ $\ell_{t,u}$-tangent tensor formed by raising one index with $\gsphere^{-1}$ and $\xi^{\# \#}$ denotes the type $\binom{2}{0}$ $\ell_{t,u}$-tangent tensor formed by raising both indices with $\gsphere^{-1}$. \item If $\xi$ is an $\ell_{t,u}$-tangent tensor, then the norm $|\xi|$ is defined relative to the Riemannian metric $g \mkern-8.5mu / $, as we make precise in Def.~\ref{D:POINTWISENORM}. \item Unless otherwise indicated, all quantities in our estimates that are not explicitly under an integral are viewed as functions of the geometric coordinates $(t,u,\vartheta)$ of Def.~\ref{D:GEOMETRICCOORDINATES}. Unless otherwise indicated, integrands have the functional dependence established below in Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}. \item If $Q_1$ and $Q_2$ are two operators, then $[Q_1,Q_2] = Q_1 Q_2 - Q_2 Q_1$ denotes their commutator. \item $A \lesssim B$ means that there exists $C > 0$ such that $A \leq C B$. \item $A \approx B$ means that $A \lesssim B$ and $B \lesssim A$. \item $A = \mathcal{O}(B)$ means that $|A| \lesssim |B|$. \item Constants such as $C$ and $c$ are free to vary from line to line. \textbf{These constants, as well as implicit constants, are allowed to depend in an increasing, continuous fashion on the data-size parameters $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$ from Subsect.\ \ref{SS:DATAASSUMPTIONS}. 
However, the constants can be chosen to be independent of the parameters $\mathring{\upalpha}$, $\mathring{\upepsilon}$, and $\varepsilon$ whenever the following conditions hold: \textbf{i)} $\mathring{\upepsilon}$ and $\varepsilon$ are sufficiently small relative to $1$, sufficiently small relative to $\mathring{\updelta}^{-1}$, and sufficiently small relative to $\mathring{\updelta}_*$, and \textbf{ii)} $\mathring{\upalpha}$ is sufficiently small relative to $1$}, in the sense described in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. \item Constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$ are also allowed to vary from line to line, but unlike $C$ and $c$, the $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$ are \textbf{universal in that, as long as $\mathring{\upalpha}$, $\mathring{\upepsilon}$, and $\varepsilon$ are sufficiently small relative to $1$, they do not depend on $\varepsilon$, $\mathring{\upepsilon}$, $\mathring{\updelta}$, or $\mathring{\updelta}_*$}. \item $A = \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(B)$ means that $|A| \leq C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} |B|$, with $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$ as above. \item For example, $\mathring{\updelta}_*^{-2} = \mathcal{O}(1)$, $2 + \mathring{\upalpha} + \mathring{\upalpha}^2 = \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(1)$, $\mathring{\upalpha} \varepsilon = \mathcal{O}(\varepsilon)$, $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}^2 = \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$, and $C \mathring{\upalpha} = \mathcal{O}(1)$; some of these examples are non-optimal. \item $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ respectively denote the standard floor and ceiling functions. 
\end{itemize} \subsection{The systems under study} \label{SS:SYSTEMSUNDERSTUDY} \subsubsection{Statement of the equations} \label{SSS:STATEMENTOFEQUATIONS} For notational convenience, we introduce the following array associated to the slow wave: \begin{align} \label{E:SLOWWAVEVARIABLES} \vec{W} & := (w,w_0,w_1,w_2), & w_{\alpha} &:= \partial_{\alpha} w, && (\alpha = 0,1,2), \end{align} where we again stress that $\partial_{\alpha}$ denotes a Cartesian coordinate partial derivative vectorfield. \begin{remark}[\textbf{Remark on the pointwise norm of the array $\vec{W}$}] \label{R:POINTWISENORMOFSLOWVARIABLEARRAY} Throughout the article, we view $\vec{W}$ to be an array of scalar functions without tensorial structure. Thus, there should be no danger of confusing the definition $| \vec{W} |^2 := w^2 + \sum_{\alpha = 0}^2 w_{\alpha}^2$ with the definition \eqref{E:POINTWISENORM} below for the pointwise norm $|\cdot|$ of an $\ell_{t,u}$-tangent tensor. \end{remark} \begin{center} {\large \underline{\textbf{The system of wave equations under study}}} \end{center} Our main results concern the following system of two wave equations: \begin{subequations} \begin{align} \square_{g(\Psi)} \Psi & = \mathfrak{M}(\Psi,\vec{W}) \mathcal{Q}^g(\partial \Psi,\partial \Psi) + \mathfrak{N}_1^{\alpha}(\Psi,\vec{W}) \partial_{\alpha} \Psi + \mathfrak{N}_2(\Psi,\vec{W}), \label{E:FASTWAVE} \\ (h^{-1})^{\alpha \beta}(\Psi,\vec{W}) \partial_{\alpha} \partial_{\beta} w & = \widetilde{\mathfrak{M}}(\Psi,\vec{W}) \mathcal{Q}^g(\partial \Psi,\partial \Psi) + \widetilde{\mathfrak{N}}_1^{\alpha}(\Psi,\vec{W}) \partial_{\alpha} \Psi + \widetilde{\mathfrak{N}}_2(\Psi,\vec{W}). 
\label{E:SLOWWAVE} \end{align} \end{subequations} Above and throughout, $g$ and $h$ are, by assumption, Lorentzian metrics for small values of their arguments (see below for our precise assumptions), $\square_{g(\Psi)}$ is the covariant wave operator\footnote{Relative to arbitrary coordinates, $\square_g f = \frac{1}{\mbox{$\sqrt{|\mbox{\upshape det} g|}$}} \partial_{\alpha}\left(\sqrt{|\mbox{\upshape det} g|} (g^{-1})^{\alpha \beta} \partial_{\beta} f \right)$. \label{FN:COVWAVEOPARBITRARYCOORDS}} of $g(\Psi)$, $\mathfrak{M}$, $\mathfrak{N}_1^{\alpha}$, $\cdots$, and $\widetilde{\mathfrak{N}}_2$ are smooth nonlinear terms described below, and \begin{align} \label{E:STANDARDNULLFORM} \mathcal{Q}^g(\partial \Psi,\partial \Psi) := (g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} \Psi \partial_{\beta} \Psi \end{align} is the standard null form associated to $g$. It is important for our proof that the wave operator of the shock-forming variable $\Psi$ is covariant, the reason being that the geometric vectorfields that we construct exhibit good commutation properties with\footnote{More precisely, they exhibit good commutation properties with $\upmu \square_g$, where we define $\upmu$ in Def.~\ref{D:FIRSTUPMU}.} the operator $\square_g$. We stress that $\mathcal{Q}^g(\partial \Psi,\partial \Psi)$ is, from the point of view of closing our estimates, the only allowable term on RHSs~\eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} that is quadratic in $\partial \Psi$. The reason is that $\mathcal{Q}^g(\partial \Psi,\partial \Psi)$ has the following special nonlinear structure: it is \emph{linear} in the tensorial component of $\partial \Psi$ that blows up; see \eqref{E:UPMUTIMESNULLFORMSSCHEMATIC} for the geo-analytic statement of this fact. More precisely, upon decomposing $\mathcal{Q}^g(\partial \Psi,\partial \Psi)$ relative to an appropriate frame, we find that it is linear in a derivative of $\Psi$ in a direction that is transversal to the $g$-characteristics.
The key point is that it is precisely the transversal derivative of $\Psi$ that blows up, while the derivatives of $\Psi$ in directions tangential to the $g$-characteristics remain uniformly bounded\footnote{Except possibly at the high derivative levels, due to the degenerate high-order energy estimates that we derive; see Subsubsect.\ \ref{SSS:ENERGYESTIMATES}.} all the way up to the singularity. For this reason, such terms have only a negligible effect on the dynamics all the way up to the shock, at least compared to the Riccati-type term that is quadratic in the transversal derivatives of $\Psi$ and that drives the singularity formation. Note that this Riccati-type term becomes visible only if we expand the expression $\square_{g(\Psi)} \Psi$ on LHS~\eqref{E:FASTWAVE} relative to Cartesian coordinates. We refer readers to \cite{jLjS2016a} for further discussion of these issues, noting only that the good structure of $\mathcal{Q}^g(\partial \Psi,\partial \Psi)$ is referred to as \emph{the strong null condition} (relative to $g$) in \cite{jLjS2016a}. Note that inhomogeneous terms that are quadratic or higher-order in $\partial \Psi$ typically have the following property: they are at least quadratic in the derivatives of $\Psi$ in directions transversal to the $g$-characteristics. Such terms are too singular to be included on RHSs \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} within our framework and in fact, might introduce instabilities that prevent a shock from forming or, alternatively, that generate a completely different kind of blowup. In particular, like all prior works on shock formation for wave equations, our proof is unstable against the addition of cubic terms $(\partial \Psi)^3$ to the equations, and similarly for terms that are higher-order in $\partial \Psi$. 
\begin{remark}[\textbf{Extending the result to a different type of fast wave equation}] \label{R:DIFFERENTFASTEQUATION} Instead of studying equation \eqref{E:FASTWAVE}, we could alternatively prove a shock-formation result for ``fast'' non-covariant quasilinear wave equations of the form \begin{align} \label{E:NONCOVWAVEEQUATION} (g^{-1})^{\alpha \beta}(\partial \Phi) \partial_{\alpha} \partial_{\beta} \Phi = \mathfrak{N}(\Phi,\partial \Phi,w), \end{align} where $\mathfrak{N}(\cdot)$ is a smooth function of its arguments such that $ \displaystyle \mathfrak{N}(\Phi,\partial \Phi,0) = 0 $. As is explained in \cite{jS2016b}, to treat equations of type \eqref{E:NONCOVWAVEEQUATION}, one could first differentiate equation\footnote{More generally, our approach could be extended to allow for $(g^{-1})^{\alpha \beta}= (g^{-1})^{\alpha \beta}(\Phi,\partial \Phi)$ in equation \eqref{E:NONCOVWAVEEQUATION}.} \eqref{E:NONCOVWAVEEQUATION} with the Cartesian coordinate partial derivatives $\partial_{\nu}$ for $\nu = 0,1,2$ to obtain a system of type \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} in the unknowns $\Phi$, $\vec{\Psi}$, and $\vec{W}$ that obey the semilinear inhomogeneous term assumptions stated in Subsubsect.\ \ref{SSS:ASSUMPTIONSONREMAINNGNONLINEARITIES}, where $\vec{\Psi} := (\Psi_0,\Psi_1,\Psi_2) := (\partial_0 \Phi, \partial_1 \Phi, \partial_2 \Phi)$ and $g = g(\vec{\Psi})$. More precisely, in \cite{jS2016b}, we showed that the scalar functions $\Psi_{\nu}$ satisfy a system of covariant wave equations of type \eqref{E:FASTWAVE}, where the terms that are quadratic in $\partial \vec{\Psi}$ exhibit the same kind of good null structure as the standard $g$-null form \eqref{E:STANDARDNULLFORM}. The assumption $ \displaystyle \mathfrak{N}(\Phi,\partial \Phi,0) = 0 $ guarantees that the system admits simple outgoing plane wave solutions in which $w \equiv 0$ (see Subsubsect.\ \ref{SSS:ASSUMPTIONSONREMAINNGNONLINEARITIES} for further discussion).
This assumption is convenient for our analysis. It could be weakened to allow for a larger class of semilinear terms $\mathfrak{N}$ such that the system no longer admits exact simple outgoing plane wave solutions. However, compared to our main theorem, we generally would have to make different assumptions on the initial data (adapted to $\mathfrak{N}$) to guarantee that a shock forms in finite time; see the discussion below equation \eqref{E:SOMENONINEARITIESARELINEAR} for related remarks. \end{remark} \subsubsection{Assumptions on the remaining nonlinearities} \label{SSS:ASSUMPTIONSONREMAINNGNONLINEARITIES} We assume that relative to the Cartesian coordinates, the nonlinearities $g_{\alpha \beta}(\cdot)$, $h_{\alpha \beta}(\cdot)$, $\mathfrak{M}(\cdot)$, $\mathfrak{N}_1^{\alpha}(\cdot)$, $\cdots$, $\widetilde{\mathfrak{N}}_2(\cdot)$ in the system \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} are given smooth functions of their arguments (for $|\Psi|$ and $|\vec{W}|$ sufficiently small) and that \begin{align} \label{E:SOMENONINEARITIESARELINEAR} \mathfrak{N}_1^{\alpha}(\Psi,0) & = \mathfrak{N}_2(\Psi,0) = \widetilde{\mathfrak{N}}_1^{\alpha}(\Psi,0) = \widetilde{\mathfrak{N}}_2(\Psi,0) = 0, && (\alpha = 0,1,2). \end{align} That is, we assume that the semilinear terms in \eqref{E:SOMENONINEARITIESARELINEAR} vanish when $\vec{W} = 0$. The assumptions \eqref{E:SOMENONINEARITIESARELINEAR} are such that the system \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} admits simple outgoing plane wave solutions in which $w \equiv 0$, $\Psi = \Psi(t,x^1)$, and $\Psi$ is a ``right-moving'' wave, as opposed to being a combination of left- and right-moving waves; see Subsect.\ \ref{SS:EXISTENCEOFDATA} for further discussion on this point. Our main theorem concerns perturbations (without symmetry assumptions) of these simple outgoing plane waves. 
Our results could be extended to allow for additional kinds of semilinear terms on RHSs \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE}, such as a Klein-Gordon term (i.e., a constant multiple of $\Psi$ on RHS~\eqref{E:FASTWAVE}) or products with the schematic structure $\Psi \partial \Psi$. However, in the presence of these semilinear terms, the equations no longer admit simple outgoing plane wave solutions (aside from the trivial zero solution). Consequently, our assumptions on the initial data (see Subsects.\ \ref{SS:DATAASSUMPTIONS} and \ref{SS:SMALLNESSASSUMPTIONS}) that lead to shock formation would generally have to be adjusted to accommodate such new types of semilinear terms.\footnote{A good model equation for understanding the subtleties in this analysis is the inhomogeneous Burgers' equation $\partial_t \Psi + \Psi \partial_1 \Psi = \Psi^2$. Roughly, for data such that $\Psi$ is initially small while $\partial_1 \Psi$ is initially large in some region, the solution is such that $\partial_1 \Psi$ blows up along the characteristics while $\Psi$ remains bounded (at least up to the first singularity in $\partial_1 \Psi$), much like in the case of the homogeneous Burgers' equation. However, unlike the homogeneous Burgers' equation, the inhomogeneous equation also admits the $T$-parameterized family of ODE-type blowup solutions $ \displaystyle \Psi_T(t) := \frac{1}{T-t} $, whose singularity is at the level of $\Psi$ itself. \label{FN:INHOMOGENEOUSBURGERS}} This would lengthen the article and obscure the new ideas that we aim to highlight here; for this reason, we limit our study to semilinear terms that verify \eqref{E:SOMENONINEARITIESARELINEAR}.
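For completeness, we note that the footnoted ODE-type blowup family can be verified by direct substitution: since $\Psi_T$ is spatially constant, $\partial_1 \Psi_T \equiv 0$, and hence
\begin{align*}
\partial_t \Psi_T + \Psi_T \partial_1 \Psi_T & = \frac{d}{dt} \left( \frac{1}{T-t} \right) = \frac{1}{(T-t)^2} = \Psi_T^2,
\end{align*}
so that $\Psi_T$ indeed solves the inhomogeneous Burgers' equation and blows up at time $T$ at the level of $\Psi$ itself.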
Regarding the fast wave metric $g$, we assume that \begin{align} \label{E:LITTLEGDECOMPOSED} g_{\alpha \beta} = g_{\alpha \beta}(\Psi) & := m_{\alpha \beta} + g_{\alpha \beta}^{(Small)}(\Psi), && (\alpha, \beta = 0,1,2), \end{align} where $m_{\alpha \beta} = \mbox{diag}(-1,1,1)$ is the standard Minkowski metric on $\mathbb{R} \times \Sigma$ (where $\Sigma$ is defined in \eqref{E:SPACEMANIFOLD}) and the Cartesian components $g_{\alpha \beta}^{(Small)}(\Psi)$ are given smooth functions of $\Psi$ such that \begin{align} \label{E:METRICPERTURBATIONFUNCTION} g_{\alpha \beta}^{(Small)}(\Psi = 0) & = 0. \end{align} We also introduce the scalar functions \begin{align} \label{E:BIGGDEF} G_{\alpha \beta} = G_{\alpha \beta}(\Psi) & := \frac{d}{d \Psi} g_{\alpha \beta}(\Psi), && G_{\alpha \beta}' = G_{\alpha \beta}'(\Psi) := \frac{d^2}{d \Psi^2} g_{\alpha \beta}(\Psi), \end{align} which appear throughout our analysis. In order to ensure that shocks can form in solutions, including plane symmetric ones that depend only on $t$ and $x^1$, we assume that \begin{align} \label{E:NONVANISHINGNONLINEARCOEFFICIENT} G_{\alpha \beta}(\Psi = 0) L_{(Flat)}^{\alpha} L_{(Flat)}^{\beta} \neq 0, \end{align} where \begin{align} \label{E:LFLAT} L_{(Flat)} : = \partial_t + \partial_1. \end{align} As is explained in \cite{jSgHjLwW2016}, these assumptions are essentially equivalent to the assumption that the null condition \emph{fails to hold} for plane symmetric solutions to the wave equation for $\Psi$. Roughly, these assumptions ensure that for the solutions under study, the coefficient of the main terms driving the blowup is non-zero; as will become clear, the main term is the first product on RHS~\eqref{E:UPMUFIRSTTRANSPORT}. 
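To illustrate the assumption \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT}, consider the model metric perturbation $g_{\alpha \beta}^{(Small)}(\Psi) = \left\lbrace (1 + \Psi)^2 - 1 \right\rbrace \delta_{\alpha}^1 \delta_{\beta}^1$ of \eqref{E:MODELMETRICPERT} below, for which only the component $g_{11}$ depends on $\Psi$. In this case, \eqref{E:BIGGDEF} and \eqref{E:LFLAT} yield
\begin{align*}
G_{11}(\Psi) = \frac{d}{d \Psi} (1 + \Psi)^2 = 2(1 + \Psi), \qquad G_{\alpha \beta}(\Psi = 0) L_{(Flat)}^{\alpha} L_{(Flat)}^{\beta} = G_{00}(0) + 2 G_{01}(0) + G_{11}(0) = 2 \neq 0,
\end{align*}
so that \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT} holds for this metric.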
\begin{remark}[\textbf{Genuinely nonlinear systems}] \label{R:GENUINELYNONLINEAR} Our assumption that the vectorfield \eqref{E:LFLAT} verifies \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT} is similar to the well-known genuine nonlinearity condition for first-order strictly hyperbolic systems. In particular, for plane symmetric solutions with $\Psi$ sufficiently small, the assumption \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT} ensures that there are quadratic Riccati-type terms in the fast wave equation \eqref{E:FASTWAVE}, which become visible if one expands the LHS relative to the Cartesian coordinates. The Riccati-type terms provide essentially the same blowup-mechanism as the one that drives the blowup in solutions to $2 \times 2$ genuinely nonlinear strictly hyperbolic systems, which Lax studied in his well-known work \cite{pL1964}. \end{remark} As we mentioned above, a fundamental aspect of our proof is that we reformulate the slow wave equation \eqref{E:SLOWWAVE} as a first-order system, which allows us to avoid certain top-order commutator error terms that we would have no means to control. Specifically, we study the following first-order system which, under the assumption \eqref{E:ZEROZEROISMINUSONE} below, is easily seen to be a consequence of \eqref{E:SLOWWAVE}, $(i,j =1,2)$: \begin{subequations} \begin{align} \partial_t w_0 & = (h^{-1})^{ab}(\Psi,\vec{W}) \partial_a w_b + 2 (h^{-1})^{0a}(\Psi,\vec{W}) \partial_a w_0 \label{E:SLOW0EVOLUTION} \\ & \ \ - \widetilde{\mathfrak{M}}(\Psi,\vec{W}) \mathcal{Q}^g(\partial \Psi,\partial \Psi) - \widetilde{\mathfrak{N}}_1^{\alpha}(\Psi,\vec{W}) \partial_{\alpha} \Psi - \widetilde{\mathfrak{N}}_2(\Psi,\vec{W}), \notag \\ \partial_t w_i & = \partial_i w_0, \label{E:SLOWIEVOLUTION} \\ \partial_t w & = w_0, \label{E:SLOWEVOLUTION} \\ \partial_i w_j & = \partial_j w_i. 
\label{E:SYMMETRYOFMIXEDPARTIALS} \end{align} \end{subequations} Note that \eqref{E:SYMMETRYOFMIXEDPARTIALS} can be viewed as a constraint representing the symmetry of the mixed partial derivatives of $w$ with respect to the Cartesian coordinates. It is easy to check that the constraint \eqref{E:SYMMETRYOFMIXEDPARTIALS}, if verified at time $0$, is propagated by the flow of equation \eqref{E:SLOWIEVOLUTION}. \subsubsection{Assumptions tied to the wave speeds} \label{SSS:WAVESPEEDASSUMPTIONS} We now quantify our assumption that $\Psi$ is the fast wave and $w$ is the slow wave. \begin{center} \underline{\textbf{Assumption on the wave speeds}} \end{center} We assume that the following holds for non-zero vectors $V$ whenever $|\Psi| + |\vec{W}|$ is sufficiently small: \begin{align} \label{E:VECTORSHCAUSALIMPLIESGTIMELIKE} h_{\alpha \beta} V^{\alpha} V^{\beta} \leq 0 \implies g_{\alpha \beta} V^{\alpha} V^{\beta} < 0. \end{align} Note that \eqref{E:VECTORSHCAUSALIMPLIESGTIMELIKE} is equivalent to the following implication, valid for non-zero co-vectors $\omega$: \begin{align} \label{E:VECTORSGCAUSALIMPLIESHTIMELIKE} (g^{-1})^{\alpha \beta} \omega_{\alpha} \omega_{\beta} \leq 0 \implies (h^{-1})^{\alpha \beta} \omega_{\alpha} \omega_{\beta} < 0. \end{align} From \eqref{E:VECTORSGCAUSALIMPLIESHTIMELIKE} and the fact that $(g^{-1})^{\alpha \beta}(\Psi = 0) = (m^{-1})^{\alpha \beta} = \mbox{\upshape diag}(-1,1,1)$ (see \eqref{E:LITTLEGDECOMPOSED}-\eqref{E:METRICPERTURBATIONFUNCTION}), it follows that $(h^{-1})^{00}(\Psi,\vec{W}) < 0$ whenever $|\Psi|$ and $|\vec{W}|$ are sufficiently small. For convenience, we rescale the metrics and equations by a positive conformal factor so that the following holds relative to the Cartesian coordinates: \begin{align} \label{E:ZEROZEROISMINUSONE} (g^{-1})^{00}(\Psi) & = (h^{-1})^{00}(\Psi,\vec{W}) \equiv - 1. \end{align} The identities assumed in \eqref{E:ZEROZEROISMINUSONE} simplify many calculations but are in no way essential. 
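To sketch the effect of the conformal rescaling behind \eqref{E:ZEROZEROISMINUSONE} on the fast wave equation, set $\widetilde{g} := e^{2 \phi(\Psi)} g$, where $\phi$ is a smooth conformal exponent. In $1+2$ spacetime dimensions, we have $\sqrt{|\mbox{\upshape det} \widetilde{g}|} = e^{3 \phi} \sqrt{|\mbox{\upshape det} g|}$ and $(\widetilde{g}^{-1})^{\alpha \beta} = e^{-2 \phi} (g^{-1})^{\alpha \beta}$, so the coordinate formula for the covariant wave operator from Footnote~\ref{FN:COVWAVEOPARBITRARYCOORDS} gives
\begin{align*}
\square_{\widetilde{g}} \Psi 
= \frac{e^{-3 \phi}}{\sqrt{|\mbox{\upshape det} g|}} \partial_{\alpha} \left( e^{\phi} \sqrt{|\mbox{\upshape det} g|} (g^{-1})^{\alpha \beta} \partial_{\beta} \Psi \right)
= e^{-2 \phi} \left\lbrace \square_g \Psi + \phi'(\Psi) \mathcal{Q}^g(\partial \Psi,\partial \Psi) \right\rbrace.
\end{align*}
That is, the rescaling generates only a semilinear null form term with a coefficient of the allowed form $\mathfrak{M}(\Psi)$.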
Note that in view of the definition of a covariant wave operator, rescaling the metric $g$ introduces an additional semilinear inhomogeneous null form term of the form $\mathfrak{M}(\Psi) \mathcal{Q}^g(\partial \Psi,\partial \Psi)$ on RHS~\eqref{E:FASTWAVE}. It turns out that due to its good null structure, this term does not have a substantial influence on the dynamics of the solutions that we study in our main theorem. Note also that this new term already falls under the scope of the allowable terms on RHS~\eqref{E:FASTWAVE}. \subsection{Overview of the proof of the main result} \label{SS:OVERVIEWOFPROOF} In this subsection, we provide an overview of the proof of our main result, Theorem~\ref{T:MAINTHEOREM}. Our basic geometric setup is similar to the one pioneered by Christodoulou in his study of shock formation in irrotational relativistic fluid mechanics \cite{dC2007}. \subsubsection{Basic geometric ingredients} \label{SSS:GEOMETRICINGREDIENTS} As in all prior works on shock formation in more than one spatial dimension, to follow the solution all the way to the singularity in $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$, we construct an eikonal function adapted to the metric $g(\Psi)$. \begin{definition}[\textbf{Eikonal function}] \label{D:INTROEIKONAL} The eikonal function $u$ solves the eikonal equation initial value problem \begin{subequations} \begin{align} \label{E:INTROEIKONAL} (g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} u \partial_{\beta} u & = 0, \qquad \partial_t u > 0, \\ u|_{\Sigma_0} & = 1 - x^1, \label{E:INTROEIKONALINITIALVALUE} \end{align} \end{subequations} where $\Sigma_0 \simeq \mathbb{R} \times \mathbb{T}$ is the hypersurface of constant Cartesian time $0$. \end{definition} Our choice of initial conditions in \eqref{E:INTROEIKONALINITIALVALUE} is adapted to the approximate plane symmetry of the data that we will consider.
The level sets of $u$ are $g$-null hypersurfaces, which we denote by $\mathcal{P}_u$ (see Def.~\ref{D:HYPERSURFACESANDCONICALREGIONS}) and which we often refer to as the \emph{characteristics}. See Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME} for a depiction of the characteristics, where the characteristics $\mathcal{P}_u^t$ in the figure have been truncated at time $t$. We clarify that even though the system \eqref{E:FASTWAVE} + \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS} features multiple speeds of propagation, we study only the characteristic family $\lbrace \mathcal{P}_u \rbrace_{u \in [0,1]}$ in detail since, for the data under consideration, the intersection of distinct members of this family corresponds to the formation of a shock. Using $u$, we will construct a collection of geometric objects that can be used to derive sharp information about the solution. The most important of these is the \emph{inverse foliation density} $\upmu$. Its vanishing corresponds to the intersection of the characteristics and, as it turns out (see Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK}), the formation of a singularity in $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$. \begin{definition}[\textbf{Inverse foliation density}] \label{D:FIRSTUPMU} We define $\upmu > 0$ as follows: \begin{align} \label{E:FIRSTUPMU} \upmu & := \frac{-1}{(g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} t \partial_{\beta} u}, \end{align} where $t$ is the Cartesian time coordinate. \end{definition} Note that by \eqref{E:LITTLEGDECOMPOSED}-\eqref{E:METRICPERTURBATIONFUNCTION} and the initial conditions \eqref{E:INTROEIKONALINITIALVALUE} for $u$, we have $\upmu|_{\Sigma_0} = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\Psi)$ (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$). 
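For instance, when $\Psi \equiv 0$ (so that $g = m$), the eikonal equation \eqref{E:INTROEIKONAL} and the data \eqref{E:INTROEIKONALINITIALVALUE} imply that along $\Sigma_0$,
\begin{align*}
- (\partial_t u)^2 + (\partial_1 u)^2 + (\partial_2 u)^2 = 0, \qquad \partial_1 u = -1, \qquad \partial_2 u = 0,
\end{align*}
which, together with the sign condition $\partial_t u > 0$, yields $\partial_t u = 1$. Hence \eqref{E:FIRSTUPMU} gives $\upmu|_{\Sigma_0} = - 1/\left\lbrace (m^{-1})^{00} \partial_t u \right\rbrace = 1$ in this case, and the smooth dependence of $(g^{-1})^{\alpha \beta}$ on $\Psi$ accounts for the correction $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\Psi)$.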
Thus, for the data that we consider in this article, in which $|\Psi|$ is initially small, it follows that $\upmu$ is initially near unity. In short, our main goal in the article is to exhibit an open set of data such that $\upmu$ vanishes in finite time, to show that its vanishing is tied to the blowup of $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$, and to show that $|w|$ and $\max_{\alpha=0,1,2} |\partial_{\alpha} w|$ remain bounded. The following spacetime subsets are tied to $u$ and play a fundamental role in our analysis. \begin{definition} [\textbf{Subsets of spacetime}] \label{D:HYPERSURFACESANDCONICALREGIONS} We define the following subsets of spacetime: \begin{subequations} \begin{align} \Sigma_{t'} & := \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ t = t' \rbrace, \label{E:SIGMAT} \\ \Sigma_{t'}^{u'} & := \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ t = t', \ 0 \leq u(t,x^1,x^2) \leq u' \rbrace, \label{E:SIGMATU} \\ \mathcal{P}_{u'} & := \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ u(t,x^1,x^2) = u' \rbrace, \label{E:PU} \\ \mathcal{P}_{u'}^{t'} & := \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ 0 \leq t \leq t', \ u(t,x^1,x^2) = u' \rbrace, \label{E:PUT} \\ \ell_{t',u'} &:= \mathcal{P}_{u'}^{t'} \cap \Sigma_{t'}^{u'} = \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ t = t', \ u(t,x^1,x^2) = u' \rbrace, \label{E:LTU} \\ \mathcal{M}_{t',u'} & := \cup_{u \in [0,u']} \mathcal{P}_u^{t'} \cap \lbrace (t,x^1,x^2) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ 0 \leq t < t' \rbrace. 
\label{E:MTUDEF} \end{align} \end{subequations} \end{definition} We refer to the $\Sigma_t$ and $\Sigma_t^u$ as ``constant time slices,'' the $\mathcal{P}_u$ and $\mathcal{P}_u^t$ as ``characteristics'' or ``null hypersurfaces,'' and the $\ell_{t,u}$ as ``tori.'' Note that $\mathcal{M}_{t,u}$ is ``open-at-the-top'' by construction. To study the solution, we complement $t$ and $u$ with a \emph{geometric torus coordinate} $\vartheta$ to form a geometric coordinate system $(t,u,\vartheta)$ with corresponding partial derivative vectorfields $ \displaystyle \left\lbrace \frac{\partial}{\partial t}, \frac{\partial}{\partial u}, \Theta := \frac{\partial}{\partial \vartheta} \right\rbrace $. To differentiate the equations and obtain estimates for the solution's derivatives, we also construct a related vectorfield frame \begin{align} \label{E:INTROGOODFRAME} \lbrace L, \breve{X}, Y \rbrace, \end{align} which spans the tangent space at each point where $\upmu > 0$. The vectorfield $L$ verifies $ \displaystyle L = \frac{\partial}{\partial t} $ and is null with respect to $g$, while $\breve{X}$ and $Y$ are, respectively, replacements for $ \displaystyle \frac{\partial}{\partial u} $ and $ \Theta $ with better regularity properties that are needed to close the top-order energy estimates; see Subsect.\ \ref{SS:EIKONALFUNCTIONANDRELATED} for the details behind the construction of $\vartheta$ and the vectorfields. We will prove that for the solutions under study, the vectorfields $L$ and $Y$ remain close to their background values, which are respectively $\partial_t + \partial_1$ and $\partial_2$. In contrast, $\breve{X}$ behaves like $- \upmu \partial_1$ and thus shrinks as the shock forms. Moreover, $ \displaystyle X := \frac{1}{\upmu} \breve{X} $ remains close to $- \partial_1$ all the way up to the shock. 
See Figure~\ref{F:FRAME} on pg.~\pageref{F:FRAME} for a schematic depiction of the vectorfields $\lbrace L, \breve{X}, Y \rbrace$ and note in particular that $|\breve{X}|$ is smaller in the region where $\upmu$ is small. Note also that we have displayed the (outgoing) characteristics $\mathcal{P}_u$ of $g(\Psi)$ in the figure but we have not displayed any characteristics of $h$ since they do not play a role in our analysis. Moreover, $L$ and $Y$ are tangential to the characteristics $\mathcal{P}_u$ while $\breve{X}$ is transversal to them. A key aspect of our proof is that we will be able to derive \emph{uniform bounds} for the $L$, $Y$, and $\breve{X}$ derivatives of the solution all the way up to the shock, except near the top derivative level; as we describe in the discussion surrounding \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY}, our high-order geometric energies are allowed to blow up as the shock forms. The fact that we can derive non-singular estimates for the low-level $\breve{X}$ derivatives of the solution is fundamentally tied to the fact that $|\breve{X}|$ shrinks like $\upmu$ as $\upmu \to 0$. Note that this is compatible with the formation of a singularity in $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$. More precisely, our main theorem yields that $ \displaystyle |\breve{X} \Psi| \gtrsim 1 $ near points where $\upmu$ is small and thus the derivative of $\Psi$ with respect to the order-unity vectorfield $ \displaystyle X := \frac{1}{\upmu} \breve{X} $ blows up precisely when $\upmu$ vanishes; see Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK} for a more detailed overview of this aspect of the proof. 
\begin{center} \begin{overpic}[scale=.35]{Frame.pdf} \put (83,55.5) {\large$\displaystyle L$} \put (65.2,44.2) {\large$\displaystyle \breve{X}$} \put (69.5,49.5) {\large$\displaystyle Y$} \put (59,31) {\large$\displaystyle L$} \put (35.8,21.5) {\large$\displaystyle \breve{X}$} \put (46.2,28) {\large$\displaystyle Y$} \put (51,13) {\large$\displaystyle \mathcal{P}_0^t$} \put (37,13) {\large$\displaystyle \mathcal{P}_u^t$} \put (7,13) {\large$\displaystyle \mathcal{P}_1^t$} \put (22,26) {\large$\displaystyle \upmu \approx 1$} \put (64,68) {\large$\displaystyle \upmu \ \mbox{\upshape small}$} \end{overpic} \captionof{figure}{The vectorfield frame from \eqref{E:INTROGOODFRAME} at two distinct points in $\mathcal{P}_u$} \label{F:FRAME} \end{center} \subsubsection{The spacetime regions under study} \label{SSS:SPACETIMEREGION} For convenience, we study only the future portion of the solution that is completely determined by the data lying in the subset $\Sigma_0^{U_0} \subset \Sigma_0$ of thickness $U_0$ and on a portion of the characteristic $\mathcal{P}_0$, where \begin{align} \label{E:FIXEDPARAMETER} 0 < U_0 \leq 1 \end{align} is a parameter, fixed until Theorem~\ref{T:MAINTHEOREM}; see Figure~\ref{F:REGION}. We will study spacetime regions such that $0 \leq u \leq U_0$, where $u$ is the eikonal function from Def.~\ref{D:INTROEIKONAL}. We have introduced the parameter $U_0$ because one would need to allow $U_0$ to vary in order to study the behavior of the solution up to the boundary of the maximal development, as Christodoulou did in \cite{dC2007}*{Chapter 15}. For brevity, we do not pursue this issue in the present article. 
\begin{center} \begin{overpic}[scale=.2]{Region.pdf} \put (37,33) {\large$\displaystyle \mathcal{P}_{U_0}^t$} \put (74,33) {\large$\displaystyle \mathcal{P}_0^t$} \put (18,5) {\large$\displaystyle \mbox{``interesting'' data}$} \put (77,38) {\large \rotatebox{45}{$\displaystyle \mbox{very small data}$}} \put (35,18) {\large$\Sigma_0^{U_0}$} \put (31,10) {\large$\displaystyle U_0$} \put (-.8,16) {\large$\displaystyle x^2 \in \mathbb{T}$} \put (24,-3.2) {\large$\displaystyle x^1 \in \mathbb{R}$} \thicklines \put (-1.1,3){\vector(.9,1){22}} \put (.5,1.8){\vector(100,-4.5){48}} \put (10.5,13.9){\line(.9,1){2}} \put (53.5,11.9){\line(.43,1){1}} \put (11.5,15){\line(100,-4.5){42.5}} \end{overpic} \captionof{figure}{The spacetime region under study} \label{F:REGION} \end{center} In our analysis, we will restrict our attention to times $t$ verifying $0 \leq t < 2 \mathring{\updelta}_*^{-1}$, where $\mathring{\updelta}_* > 0$ is the data-dependent parameter defined by \begin{align} \label{E:INTROCRITICALBLOWUPTIMEFACTOR} \mathring{\updelta}_* & := \frac{1}{2} \sup_{\Sigma_0^1} \left[G_{L L} \breve{X} \Psi \right]_-. \end{align} The quantity \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR} is essentially the main term in the transport equation for $\upmu$ (see \eqref{E:UPMUFIRSTTRANSPORT}) that drives $\upmu$ to $0$ in finite time. In \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR}, $G_{L L} := G_{\alpha \beta} L^{\alpha} L^{\beta}$, where $G_{\alpha \beta}$ is defined in \eqref{E:BIGGDEF} and $L$ is the $g$-null vectorfield mentioned in Subsubsect.\ \ref{SSS:GEOMETRICINGREDIENTS} (see Def.~\ref{D:LUNITDEF} for the precise definition). In our analysis, we take into account only the portion of the data lying in the subset $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$ of the characteristic $\mathcal{P}_0$ since, by domain of dependence considerations, only this portion can influence the solution in the regions under study. 
The parameter $\mathring{\updelta}_*$ is important because under certain assumptions described below, the time of first shock formation is a small perturbation\footnote{For $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ sufficiently small, the time of first shock formation is $\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}) \rbrace \mathring{\updelta}_*^{-1}$, where $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are the data-size parameters described in Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}.} of $\mathring{\updelta}_*^{-1}$. We will clarify the connection between $\mathring{\updelta}_*$ and the time of first shock formation in Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK}. Moreover, in view of the above remarks, we see that to close our bootstrap argument (which we briefly overview in Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}), it is sufficient to control the solution for times up to $2 \mathring{\updelta}_*^{-1}$, which is plenty of time for the shock to form. \subsubsection{A model problem: shock formation for nearly simple outgoing waves under the assumption of plane symmetry} \label{SSS:NEARLYSIMPLEWAVES} In this subsubsection, we illustrate some of the main ideas behind our analysis by sketching a proof of our main results for plane symmetric solutions, that is, solutions that depend only on $t$ and $x^1$. For such solutions, we are able to rely exclusively on the method of characteristics when deriving estimates. In particular, we can avoid energy estimates, which drastically simplifies the proof. Our analysis in this subsubsection can be viewed as a sharpening of the approach of John \cite{fJ1974}, in the spirit of the recent work \cite{dCdRP2016}. 
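Before turning to the PDE system, we illustrate the characteristic-crossing mechanism with a toy numerical experiment (not part of the proof) for the homogeneous Burgers' equation $\partial_t \psi + \psi \partial_1 \psi = 0$. Since $\psi$ is constant along characteristics, the characteristic through $x_0$ is the line $x = x_0 + t \psi_0(x_0)$, and the solution remains smooth exactly as long as the map $x_0 \mapsto x_0 + t \psi_0(x_0)$ is strictly increasing; the first crossing occurs at $t_* = 1/\sup_x [\partial_1 \psi_0(x)]_-$, in analogy with the role of $\mathring{\updelta}_*^{-1}$ above. The following Python sketch (the initial datum $\psi_0(x) = e^{-x^2}$, the grid, and all names are our own illustrative choices) verifies this prediction numerically.

```python
import numpy as np

def burgers_blowup_demo():
    # Illustrative initial datum psi_0(x) = exp(-x^2) on a fine grid.
    x = np.linspace(-4.0, 4.0, 4001)
    psi0 = np.exp(-x**2)

    # Predicted first crossing time: t_* = 1 / sup_x [psi_0']_-,
    # where [p]_- := |min(p, 0)|, as in the convention of the paper.
    dpsi0 = np.gradient(psi0, x)
    t_star = 1.0 / np.max(np.maximum(-dpsi0, 0.0))

    def flow_is_monotone(t):
        # Characteristic flow map x_0 -> x_0 + t * psi_0(x_0); the solution
        # stays smooth exactly as long as this map is strictly increasing.
        X = x + t * psi0
        return bool(np.all(np.diff(X) > 0.0))

    return t_star, flow_is_monotone(0.9 * t_star), flow_is_monotone(1.3 * t_star)

t_star, smooth_before, smooth_after = burgers_blowup_demo()
print(t_star, smooth_before, smooth_after)
```

For this datum one computes analytically that $t_* = \sqrt{e/2} \approx 1.1658$, and the experiment confirms that the characteristics are still distinct slightly before $t_*$ and have crossed slightly after it.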
For convenience, we consider only the case in which the fast wave metric perturbation function from \eqref{E:LITTLEGDECOMPOSED} takes the simple form \begin{align} \label{E:MODELMETRICPERT} g_{\alpha \beta}^{(Small)}(\Psi) = \left\lbrace (1 + \Psi)^2 - 1 \right\rbrace \delta_{\alpha}^1 \delta_{\beta}^1, \end{align} where $\delta_{\alpha}^{\beta}$ is the standard Kronecker delta. Moreover, in this subsubsection only, we use, in addition to the vectorfield $L$, the vectorfield $\breve{\underline{L}}$ defined by \begin{align} \label{E:ULGOODDEF} \breve{\underline{L}} & := \upmu L + 2 \breve{X}. \end{align} It is easy to check that $g(\breve{\underline{L}},\breve{\underline{L}}) = 0$ (that is, that $\breve{\underline{L}}$ is $g$-null) and that the following relations hold (these relations follow easily from Lemma~\ref{L:BASICPROPERTIESOFFRAME}): \begin{align} \label{E:NULLVECGEOMETRICCOMP} L t = 1, \qquad L u = 0, \qquad \breve{\underline{L}} t = \upmu, \qquad \breve{\underline{L}} u = 2. \end{align} From the point of view of the estimates derived in this subsubsection, the vectorfield $\breve{\underline{L}}$ plays a role similar to the one played by the $\mathcal{P}_u$-transversal vectorfield $\breve{X}$ that we use in the rest of the paper. The advantage of $\breve{\underline{L}}$ in this subsubsection is that it is $g$-null and thus the principal part of the fast wave equation takes a simple form in plane symmetry when expressed in terms of $L$ and $\breve{\underline{L}}$ derivatives; see equations \eqref{E:MODELFAST}-\eqref{E:SWITCHEDORDERMODELFAST}. As in the bulk of the paper, we will focus our attention here on nearly simple outgoing waves. By a simple outgoing (that is, right-moving) plane wave, we mean a solution such that $L \Psi \equiv 0$ and $w \equiv 0$.
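For the reader's convenience, we sketch the computation behind the null property of $\breve{\underline{L}}$, using the normalizations $g(L,L) = 0$, $g(L,\breve{X}) = -\upmu$, and $g(\breve{X},\breve{X}) = \upmu^2$ provided by Lemma~\ref{L:BASICPROPERTIESOFFRAME}:
\begin{align*}
g(\breve{\underline{L}},\breve{\underline{L}}) = \upmu^2 g(L,L) + 4 \upmu \, g(L,\breve{X}) + 4 g(\breve{X},\breve{X}) = 0 - 4 \upmu^2 + 4 \upmu^2 = 0.
\end{align*}
Similarly, the relations \eqref{E:NULLVECGEOMETRICCOMP} follow from \eqref{E:ULGOODDEF} and the identities $L t = 1$, $L u = 0$, $\breve{X} t = 0$, and $\breve{X} u = 1$.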
Due to our assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms on RHSs~\eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE}, the systems that we study in this paper admit simple plane wave solutions. In the present subsubsection, we will consider plane symmetric initial data verifying a set of size assumptions. Our assumptions involve the four parameters $\mathring{\upalpha}$, $\mathring{\upepsilon}$, $\mathring{\updelta}$, and $\mathring{\updelta}_*$, which in this subsubsection only have slightly different (but analogous) definitions than they do in the rest of the paper. Specifically, we assume that the initial data for $\Psi$ and $w$ are given along $\Sigma_0^1$, which corresponds to the portion of $\Sigma_0$ with $0 \leq u \leq 1$, as well as $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$, which is the portion of the level set $\lbrace u = 0 \rbrace$ with $0 \leq t \leq 2 \mathring{\updelta}_*^{-1}$, where we define $\mathring{\updelta}_*$ just below. Note that in plane symmetry, $\Sigma_0^1$ can be identified with an orientation-reversed version of the unit interval $[0,1]$ of $x^1$ values.
We assume the following size conditions, where all functions on the LHSs of the inequalities are assumed to be continuous with respect to the geometric coordinates $(t,u)$: \begin{subequations} \begin{align} \breve{\underline{L}} \Psi|_{\Sigma_0^1} & = f(u), \label{E:PLANESYMMETRYTHEONELARGEDATUM} \\ \left\| \Psi \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upalpha}, \label{E:PSIITSLFPLANESYMMETRYSIGMA01} \\ \left\| L \Psi \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \label{E:PLANESYMMETRYLUNITPSISMALLSIGMA01} \\ \left\| w \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| w_0 \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| w_1 \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \label{E:PLANESYMMETRYWSMALLSIGMA01} \\ \left\| \Psi \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})}, \, \left\| L \Psi \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})} & \leq \mathring{\upepsilon}, \label{E:PLANESYMMETRYALLPSISMALLPO} \\ \left\| w \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})}, \, \left\| w_0 \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})}, \, \left\| w_1 \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})} & \leq \mathring{\upepsilon}, \label{E:PLANESYMMETRYALLWSMALLPO} \end{align} \end{subequations} where $f(u)$ is a continuous function and\footnote{In \eqref{E:MODELCASEBLOWUPTIMEPARAMETER} and throughout, $[p]_- := |\min \lbrace p, 0 \rbrace|$.} \begin{align} \label{E:MODELCASEBLOWUPTIMEPARAMETER} \mathring{\updelta}_* & := \sup_{u \in [0,1]} [f(u)]_-. 
\end{align} Above, $\mathring{\upalpha} > 0$ is a parameter that, for our subsequent bootstrap argument to close, must be small in an absolute sense, while $\mathring{\upepsilon} \geq 0$ is a parameter that must be small in an absolute sense, small relative to $\mathring{\updelta}_*$, and small relative to $\mathring{\updelta}^{-1}$, where \begin{align} \mathring{\updelta} &:= \sup_{u \in [0,1]} |f(u)|. \end{align} In the remainder of this subsubsection, we will assume that $\mathring{\updelta}_* > 0$ and $\mathring{\updelta} > 0$. When $\mathring{\upepsilon} = 0$, the corresponding solution is a simple outgoing plane wave; see the end of this subsubsection for further discussion of this point. It is straightforward to see that there exist data that verify the above size assumptions. See Subsect.\ \ref{SS:EXISTENCEOFDATA} for further discussion on this point in the context of our main theorem. For convenience, in this subsubsection, we study only the following specific example of a system of type \eqref{E:FASTWAVE} + \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS} in one spatial dimension, where the metric perturbation function is given by \eqref{E:MODELMETRICPERT} and, to simplify the discussion, we have chosen relatively simple semilinear terms: \begin{subequations} \begin{align} L \breve{\underline{L}} \Psi & = L \Psi \cdot \breve{\underline{L}} \Psi + w_0 \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi, \label{E:MODELFAST} \\ \breve{\underline{L}} L \Psi & = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu (L \Psi)^2 + w_0 \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi, \label{E:SWITCHEDORDERMODELFAST} \\ \upmu \partial_t w_0 & = \frac{1}{4} \upmu \partial_1 w_1 + L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi, \label{E:MODELSLOW0} \\ \upmu \partial_t w_1 & = \upmu \partial_1 w_0 . 
\label{E:MODELSLOW1} \end{align} \end{subequations} We now make some remarks on the structure of equations \eqref{E:MODELFAST}-\eqref{E:MODELSLOW1}. We have multiplied the equations by the inverse foliation density $\upmu$, which will help clarify certain aspects of the analysis.\footnote{Away from plane symmetry, it is critically important to multiply the wave equations by $\upmu$ before commuting them with appropriate vectorfields; the factor of $\upmu$ leads to important cancellations. In contrast, our arguments in this subsubsection do not involve commuting the equations.} The forms of LHSs \eqref{E:MODELFAST}-\eqref{E:SWITCHEDORDERMODELFAST} are a consequence of Prop.~\ref{P:GEOMETRICWAVEOPERATORFRAMEDECOMPOSED}. In \eqref{E:MODELSLOW0}, the factor of $ \displaystyle \frac{1}{4} $ accounts for our assumption that $w$ is the slow wave. Note that equations \eqref{E:MODELSLOW0}-\eqref{E:MODELSLOW1} are semilinear while for the general class of equations that we consider, the analogous equations are typically quasilinear. These facts play very little role in the discussion in this subsubsection. In particular, in proving the main theorem of the paper, we use that $w$ is the slow wave mainly when deriving energy estimates, which we can avoid in this subsubsection by integrating along characteristics. That is, in this subsubsection, it is not fundamentally important that $w$ is the slow wave. 
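As a quick consistency check (a sketch, using only equations \eqref{E:MODELSLOW0}-\eqref{E:MODELSLOW1}), we note that adding and subtracting $\frac{1}{2}$ times \eqref{E:MODELSLOW1} to and from \eqref{E:MODELSLOW0} diagonalizes the principal part of the subsystem:

```latex
% Sketch: eq. (E:MODELSLOW0) -/+ (1/2) x eq. (E:MODELSLOW1) yields
\begin{align*}
\upmu \left( \partial_t + \tfrac{1}{2} \partial_1 \right)
	\left( w_0 - \tfrac{1}{2} w_1 \right)
	& = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi,
	\\
\upmu \left( \partial_t - \tfrac{1}{2} \partial_1 \right)
	\left( w_0 + \tfrac{1}{2} w_1 \right)
	& = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi.
\end{align*}
% The Riemann invariants w_0 -/+ (1/2) w_1 are therefore transported along the
% Cartesian characteristics dx^1/dt = +/- 1/2; the speed 1/2 reflects the
% factor of 1/4 in (E:MODELSLOW0), consistent with w being the slow wave.
```

These equations agree, up to an overall factor of $2$, with the evolution equations for the Riemann invariants that we write down just below.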
To facilitate our analysis via integrating along characteristics, we now replace \eqref{E:MODELSLOW0}-\eqref{E:MODELSLOW1} with the following equations,\footnote{Equations \eqref{E:REVAMPEDMODELSLOW0}-\eqref{E:REVAMPEDMODELSLOW1} are evolution equations for the Riemann invariants of the subsystem \eqref{E:MODELSLOW0}-\eqref{E:MODELSLOW1}.} which are equivalent up to harmless constant factors on the right-hand sides: \begin{subequations} \begin{align} \upmu (2 \partial_t + \partial_1) (w_0 - \frac{1}{2} w_1) & = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi, \label{E:REVAMPEDMODELSLOW0} \\ \upmu (2 \partial_t - \partial_1) (w_0 + \frac{1}{2} w_1) & = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi. \label{E:REVAMPEDMODELSLOW1} \end{align} \end{subequations} In our subsequent analysis, we will rely on the following relations, which are simple consequences of Lemma~\ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES}, our assumption that the metric perturbation is given by \eqref{E:MODELMETRICPERT}, the normalization condition $g(X,X) = 1$ (see \eqref{E:RADIALVECTORFIELDSLENGTHS}), our assumption of plane symmetry, and the fact that (under these assumptions) the vectorfield $Y$ verifies $Y = \partial_2$: \begin{align} \label{E:PLANESYMMETRICCHOV} 2 \partial_t & = L + \frac{1}{\upmu} \breve{\underline{L}}, \qquad 2 \partial_1 = (1 + \Psi) \left\lbrace L - \frac{1}{\upmu} \breve{\underline{L}} \right\rbrace. 
\end{align} Using \eqref{E:PLANESYMMETRICCHOV}, we can replace \eqref{E:REVAMPEDMODELSLOW0}-\eqref{E:REVAMPEDMODELSLOW1} with the following equations, which are again equivalent to \eqref{E:MODELSLOW0}-\eqref{E:MODELSLOW1} up to harmless constant factors on the right-hand sides: \begin{subequations} \begin{align} \left\lbrace (1 - \Psi) \breve{\underline{L}} + \upmu (3 + \Psi) L \right\rbrace (w_0 - \frac{1}{2} w_1) & = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi, \label{E:AGAINREVAMPEDMODELSLOW0} \\ \left\lbrace (3 + \Psi) \breve{\underline{L}} + \upmu (1 - \Psi) L \right\rbrace (w_0 + \frac{1}{2} w_1) & = L \Psi \cdot \breve{\underline{L}} \Psi + \upmu w_0 \cdot \Psi. \label{E:AGAINREVAMPEDMODELSLOW1} \end{align} \end{subequations} In plane symmetry, the most important aspect of LHSs \eqref{E:AGAINREVAMPEDMODELSLOW0}-\eqref{E:AGAINREVAMPEDMODELSLOW1} is that for $|\Psi|$ small, the vectorfields $(1 - \Psi) \breve{\underline{L}} + \upmu (3 + \Psi) L$ and $ (3 + \Psi) \breve{\underline{L}} + \upmu (1 - \Psi) L $ are transversal to the $\mathcal{P}_u$, a simple fact that follows from the identities $\breve{\underline{L}} u = 2$ and $L u = 0$ (see \eqref{E:NULLVECGEOMETRICCOMP}). We now note that $\upmu$ (which is defined in \eqref{E:FIRSTUPMU}) verifies an evolution equation that we can schematically express as follows (see \eqref{E:UPMUFIRSTTRANSPORT} for the precise formula): \begin{align} \label{E:UPMUSIMPLEPLANEWAVESCHEMATICEVOLUTION} L \upmu & = \breve{\underline{L}} \Psi + \upmu L \Psi. 
\end{align} In total, we will study the system \eqref{E:MODELFAST}-\eqref{E:SWITCHEDORDERMODELFAST} + \eqref{E:AGAINREVAMPEDMODELSLOW0}-\eqref{E:AGAINREVAMPEDMODELSLOW1} + \eqref{E:UPMUSIMPLEPLANEWAVESCHEMATICEVOLUTION} and sketch a proof that whenever $\mathring{\upepsilon}$ is sufficiently small (in a manner that is allowed to depend on $\mathring{\updelta}$ and $\mathring{\updelta}_*$) and $\mathring{\upalpha}$ is small relative to $1$, a shock forms in $\Psi$ in finite time. In our analysis, we will rely on the geometric coordinates $(t,u)$. To facilitate our analysis, we find it convenient to make the following bootstrap assumptions for $(t,u) \in [0, T_{(Boot)}) \times [0,1]$, where $0 < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$ is a bootstrap time: \begin{subequations} \begin{align} |\Psi| & \leq \mathring{\upalpha}^{1/2}, \label{E:INTROPSIITSELFBOOTSTRAP} \\ |L \Psi|, \, |w_0|, \, |w_1| & \leq \mathring{\upepsilon}^{1/2}, \label{E:INTROSMALLBOOT} \\ \left| \breve{\underline{L}} \Psi(t,u) - f(u) \right| & \leq \mathring{\upepsilon}^{1/2}, \label{E:INTROTRANSVERSALBOOT} \\ \upmu(t,u) & \leq 1 + 2|f(u)| \mathring{\updelta}_*^{-1} + \mathring{\upalpha}^{1/2} + \mathring{\upepsilon}^{1/2}. \label{E:INTROUPMUBOOT} \end{align} \end{subequations} We also assume that for $(t,u) \in [0, T_{(Boot)}) \times [0,1]$, we have \begin{align} \label{E:INTRONOSHOCKBOOT} \upmu(t,u) > 0, \end{align} which is tantamount to the assumption that a shock has not yet formed on $[0,T_{(Boot)}) \times [0,1]$, though it allows for the possibility that a shock forms exactly at time $T_{(Boot)}$. 
By standard local well-posedness, if the data verify the size assumptions \eqref{E:PLANESYMMETRYTHEONELARGEDATUM}-\eqref{E:PLANESYMMETRYALLWSMALLPO}, if $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small in the manner described above, and if $T_{(Boot)} > 0$ is sufficiently small, then there exists a classical solution for $(t,u) \in [0, T_{(Boot)}) \times [0,1]$ such that the bootstrap assumptions are verified in this region. Using the identities \eqref{E:PLANESYMMETRICCHOV}, we see that if the bootstrap assumptions are not saturated and if $\upmu$ remains uniformly positive on $[0,T_{(Boot)}) \times [0,1]$, then the solution and its $\partial_t$ and $\partial_1$ derivatives remain uniformly bounded in magnitude on $[0,T_{(Boot)}) \times [0,1]$. It is a standard result that under these conditions, the solution can be classically continued past the time $T_{(Boot)}$. Thus, in order to prove that a shock forms, it suffices to \textbf{i)} justify the bootstrap assumptions by deriving a strict improvement of them, a task that we accomplish by showing that they hold with $\mathring{\upepsilon}^{1/2}$ replaced by $C \mathring{\upepsilon}$ (where $\mathring{\upepsilon}$ is chosen to be sufficiently small) and with $\mathring{\upalpha}^{1/2}$ replaced by $\mathring{\upalpha} + C \mathring{\upepsilon}$ (where $\mathring{\upalpha}$ is also chosen to be sufficiently small); \textbf{ii)} show that $\upmu$ can vanish in finite time; and \textbf{iii)} show that the vanishing of $\upmu$ leads to the blowup of $\max \lbrace |\partial_t \Psi|, |\partial_1 \Psi| \rbrace$. Note that by \eqref{E:INTROSMALLBOOT}, our proof of \textbf{i)} implies that $|w_0|$ and $|w_1|$ remain bounded. We now explain how to improve the bootstrap assumptions, starting with \eqref{E:INTROSMALLBOOT}. 
To this end, we find it convenient to introduce (see \eqref{E:MTUDEF} for the definition of $\mathcal{M}_{T_{(Boot)};u}$) \begin{align} q(u) := \sup_{\mathcal{M}_{T_{(Boot)};u}} \left\lbrace |L \Psi| + |w_0| + |w_1| \right\rbrace. \end{align} In the rest of the proof, we silently rely on \eqref{E:NULLVECGEOMETRICCOMP}, which allows us to think of $ \displaystyle L = \frac{d}{dt} $ along the integral curves of $L$ and $ \displaystyle \breve{\underline{L}} = 2 \frac{d}{du} $ along the integral curves of $\breve{\underline{L}}$. Similarly, we have that $ \displaystyle (1 - \Psi) \breve{\underline{L}} + \upmu (3 + \Psi) L = 2 (1 - \Psi) \frac{d}{du} $ along the integral curves of $(1 - \Psi) \breve{\underline{L}} + \upmu (3 + \Psi) L $ and $ \displaystyle (3 + \Psi) \breve{\underline{L}} + \upmu (1 - \Psi) L = 2(3 + \Psi) \frac{d}{du} $ along the integral curves of $ (3 + \Psi) \breve{\underline{L}} + \upmu (1 - \Psi) L $. Using these observations, we integrate equations \eqref{E:SWITCHEDORDERMODELFAST} and \eqref{E:AGAINREVAMPEDMODELSLOW0}-\eqref{E:AGAINREVAMPEDMODELSLOW1} and use \eqref{E:NULLVECGEOMETRICCOMP}, the bootstrap assumptions, and the small-data assumptions \eqref{E:PLANESYMMETRYLUNITPSISMALLSIGMA01}, \eqref{E:PLANESYMMETRYWSMALLSIGMA01}, \eqref{E:PLANESYMMETRYALLPSISMALLPO}, and \eqref{E:PLANESYMMETRYALLWSMALLPO} to obtain \begin{align} \label{E:LITTLEQGRONWALLREADY} q(u) & \leq C \mathring{\upepsilon} + C \int_{u'=0}^u q(u') \, du', \end{align} where here and throughout the paper, all constants $C$ are allowed to depend on $\mathring{\updelta}$ and $\mathring{\updelta}_*$, and similarly for implicit constants hidden in the notations $\lesssim$ and $\mathcal{O}$; see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} for a precise description of the way in which we allow constants to depend on the various parameters in the bulk of the paper. 
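For completeness, we note that the Gronwall step applied to \eqref{E:LITTLEQGRONWALLREADY} can be sketched as follows (a standard argument; the quantity $Q$ introduced in the comments is used only here):

```latex
% Sketch of the Gronwall step: setting
% Q(u) := C \mathring{\upepsilon} + C \int_{u'=0}^u q(u') \, du',
% we have q(u) <= Q(u) and Q'(u) = C q(u) <= C Q(u), whence
\begin{align*}
q(u)
	\leq Q(u)
	\leq Q(0) e^{C u}
	= C \mathring{\upepsilon} e^{C u}
	\lesssim \mathring{\upepsilon},
	\qquad
	u \in [0,1].
\end{align*}
```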
From \eqref{E:LITTLEQGRONWALLREADY} and Gronwall's inequality, we conclude that $ \sup_{u \in [0,1]} q(u) \lesssim \mathring{\upepsilon} $. Next, using the already obtained bound $|L \Psi| \lesssim \mathring{\upepsilon}$, the fundamental theorem of calculus, and the data-size assumptions \eqref{E:PSIITSLFPLANESYMMETRYSIGMA01} for $\Psi$, we deduce that for $(t,u) \in [0, T_{(Boot)}) \times [0,1]$, we have \begin{align} \left| \Psi \right| (t,u) & \leq \mathring{\upalpha} + \int_{s=0}^t |L \Psi|(s,u) \, ds \\ & \leq \mathring{\upalpha} + C \mathring{\updelta}_*^{-1} \mathring{\upepsilon} \leq \mathring{\upalpha} + C \mathring{\upepsilon}. \notag \end{align} We have thus derived the desired improvements of the bootstrap assumptions \eqref{E:INTROPSIITSELFBOOTSTRAP}-\eqref{E:INTROSMALLBOOT} (whenever $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small). Next, using the previously obtained estimates, the bootstrap assumptions, the evolution equation \eqref{E:MODELFAST}, the fundamental theorem of calculus, and the data assumption \eqref{E:PLANESYMMETRYTHEONELARGEDATUM}, we obtain \begin{align} \label{E:ULGOODPSIBOOTSTRAPIMPROVED} \left| \breve{\underline{L}} \Psi(t,u) - f(u) \right| \leq \int_{s=0}^t |L \breve{\underline{L}} \Psi(s,u)| \, ds & \leq C \int_{s=0}^t \mathring{\upepsilon} \, ds \\ & \leq C \mathring{\updelta}_*^{-1} \mathring{\upepsilon} \leq C \mathring{\upepsilon}, \notag \end{align} which yields an improvement of the bootstrap assumption \eqref{E:INTROTRANSVERSALBOOT}. 
Similarly, from the previously obtained estimates, the bootstrap assumptions, equation \eqref{E:UPMUSIMPLEPLANEWAVESCHEMATICEVOLUTION}, and the fact that (by construction) $\upmu|_{t=0} = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\Psi)$, we deduce that \begin{align} \label{E:UPMUSIMPLEPLANEWAVESCHEMATIC} \upmu(t,u) & = 1 + f(u) t + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}), \end{align} where the implicit constants in $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$ do not depend on $\mathring{\updelta}_*$ or $f(u)$. We have therefore improved the bootstrap assumption \eqref{E:INTROUPMUBOOT} (whenever $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small), which completes our proof of the improvement of the bootstrap assumptions. We now show that a shock forms in finite time. We start by setting \[ \upmu_{\star}(t) := \min_{u \in [0,1]} \upmu(t,u). \] From definition \eqref{E:MODELCASEBLOWUPTIMEPARAMETER} and \eqref{E:UPMUSIMPLEPLANEWAVESCHEMATIC}, we find that \begin{align} \label{E:SHOCKWILLFORMUPMUSIMPLEPLANEWAVESCHEMATIC} \upmu_{\star}(t) & = 1 - \mathring{\updelta}_* t + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}). \end{align} From \eqref{E:SHOCKWILLFORMUPMUSIMPLEPLANEWAVESCHEMATIC}, we easily infer that $\upmu_{\star}(t)$ vanishes at the time $ \displaystyle T_{(Shock)} = \mathring{\updelta}_*^{-1} \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}) \right\rbrace $. 
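To see how the vanishing of $\upmu$ drives the blowup, one can use the second identity in \eqref{E:PLANESYMMETRICCHOV} together with the bounds $|L \Psi| \lesssim \mathring{\upepsilon}$ and \eqref{E:ULGOODPSIBOOTSTRAPIMPROVED} to compute, schematically:

```latex
\begin{align*}
2 \partial_1 \Psi
	& = (1 + \Psi)
		\left\lbrace
			L \Psi - \frac{1}{\upmu} \breve{\underline{L}} \Psi
		\right\rbrace
	= - \frac{(1 + \Psi)
			\left\lbrace
				f(u) + \mathcal{O}(\mathring{\upepsilon})
			\right\rbrace}{\upmu}
		+ \mathcal{O}(\mathring{\upepsilon}).
\end{align*}
% At points (t,u) where [f(u)]_- is near its sup value
% \mathring{\updelta}_* > 0 (see (E:MODELCASEBLOWUPTIMEPARAMETER)),
% the numerator is bounded away from 0 for small \mathring{\upalpha}
% and \mathring{\upepsilon}, so |\partial_1 \Psi| is comparable to
% 1/\upmu_{\star}(t) as t approaches T_{(Shock)}.
```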
Finally, from \eqref{E:MODELCASEBLOWUPTIMEPARAMETER}, \eqref{E:PLANESYMMETRICCHOV}, and the bounds $|\Psi| \leq \mathring{\upalpha} + C \mathring{\upepsilon}$, $|L \Psi| \leq C \mathring{\upepsilon}$, and \eqref{E:ULGOODPSIBOOTSTRAPIMPROVED}, we see that if $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small, then as $t \uparrow T_{(Shock)}$, $ \sup_{u \in [0,1]} |\partial_t \Psi(t,u)| $ and $ \sup_{u \in [0,1]} |\partial_1 \Psi(t,u)| $ are equal to non-zero, bounded functions times $ \displaystyle \frac{1}{\upmu_{\star}(t)} $. Hence, $ \sup_{u \in [0,1]} |\partial_t \Psi(t,u)| $ and $ \sup_{u \in [0,1]} |\partial_1 \Psi(t,u)| $ blow up precisely at time $T_{(Shock)}$. We close this subsubsection by highlighting that there exist initial data, compactly supported in $\Sigma_0^1$, such that $\mathring{\upepsilon} = 0$ and such that the shock formation argument given above goes through; see Subsect.\ \ref{SS:EXISTENCEOFDATA} for further discussion. The corresponding solutions are simple outgoing plane waves. This clarifies why in perturbing these simple waves, we can consider initial data such that $\mathring{\upepsilon}$ is positive but small relative to the other relevant quantities in the problem, as the above bootstrap argument required. \subsubsection{The full problem without symmetry assumptions: data-size assumptions, bootstrap assumptions, and $L^{\infty}$ estimates} \label{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS} In our main theorem (Theorem~\ref{T:MAINTHEOREM}), we study (non-symmetric) perturbations of the plane symmetric nearly simple outgoing wave solutions studied in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}. We now outline the size assumptions that we make on the data in proving our main theorem. 
Our assumptions are similar in spirit to our data assumptions from Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES} but are more complicated in view of the additional spatial direction and the necessity of deriving energy estimates away from plane symmetry; see Subsect.\ \ref{SS:DATAASSUMPTIONS} for a precise statement of our assumptions on the data. We study solutions such that the interesting, relatively large portion of the data lies in $\Sigma_0^{U_0}$ when $U_0$ is near $1$ while the data on $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$ are very small; see Figure~\ref{F:REGION} on pg.~\pageref{F:REGION}. Here and in the remainder of the article, $\mathring{\updelta}_* > 0$ denotes the data-dependent parameter defined in \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR}. We consider data for $\Psi$ such that along $\Sigma_0^{U_0}$, $\Psi$ itself is initially of small $L^{\infty}$ size $\mathring{\upalpha}$, the $\mathcal{P}_u$-tangential derivatives of $\Psi$ (that is, its $L$ and $Y$ derivatives) up to top order are of a relatively small size $\mathring{\upepsilon}$ in appropriate norms, while the pure $\breve{X}$ derivatives such as $\breve{X} \Psi$ and $\breve{X} \breve{X} \Psi$ are of a relatively large\footnote{$\mathring{\updelta}$ is allowed to be small in an absolute sense.} size $\mathring{\updelta}$. We assume that all mixed tangential-transversal derivatives such as $L \breve{X} \Psi$ are also of a relatively small size $\mathring{\upepsilon}$. Our size assumptions are such that the energies we use to control the solution are all initially of small size $\mathcal{O}(\mathring{\upepsilon}^2)$; see Subsubsect.\ \ref{SSS:ENERGYESTIMATES} for further discussion on this point. These size assumptions are similar to the ones made in \cites{jSgHjLwW2016,jLjS2016b} and correspond to data close to that of the nearly simple outgoing plane waves studied in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}. 
Roughly, the relative largeness of $\mathring{\updelta}$ is tied to a Riccati-type blowup of $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$. We assume that the slow wave variable array $\vec{W}$ and all of its derivatives up to top order in \emph{all directions}\footnote{Actually, in our proof, we do not need estimates for more than two $\breve{X}$ derivatives of $\vec{W}$.} are initially of small size $\mathring{\upepsilon}$. Finally, we assume that along $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$, the derivatives of $\Psi$ and $\vec{W}$ up to top order in \emph{all directions} are of small size $\mathring{\upepsilon}$. The case $\mathring{\upepsilon} = 0$ corresponds to a simple outgoing (that is, right-moving) plane wave. See Subsect.\ \ref{SS:EXISTENCEOFDATA} for a proof sketch of the existence of data that verify our size assumptions and for discussion on why their existence is tied to our structural assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms. \begin{remark}[\textbf{On the parameter} $\mathring{\upalpha}$] \label{R:NEWPARAMETER} In \cite{jSgHjLwW2016}, the $L^{\infty}$ smallness of $\Psi$ itself (un-differentiated) and the smallness of its $\mathcal{P}_u$-tangential derivatives were captured by the smallness of $\mathring{\upepsilon}$. That is, the parameter $\mathring{\upalpha}$ was not featured in the work \cite{jSgHjLwW2016}. For this reason, in \cite{jSgHjLwW2016}, $\mathring{\upepsilon}$ did not vanish for non-trivial simple outgoing plane wave solutions, which is different than in the present article. 
In this article, we have decided that it is better to introduce $\mathring{\upalpha}$ so that \textbf{i)} our results here apply in particular to non-trivial simple outgoing plane wave solutions and \textbf{ii)} the existence of an open set of initial data (without symmetry assumptions) that verify our size assumptions follows as an easy consequence of the fact that our shock formation results apply to some simple outgoing plane wave solutions (see Subsect.\ \ref{SS:EXISTENCEOFDATA} for further discussion). \end{remark} The data-size assumptions described in the previous paragraph correspond to a pair of waves in which one wave (namely $\Psi$) is nearly simple and outgoing while the other (namely $\vec{W}$) is uniformly small. A key point of our proof is showing how to propagate various aspects of the $\mathring{\upalpha}$-$\mathring{\updelta}$-$\mathring{\upepsilon}$ hierarchy all the way up to the shock, much like in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}. As in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}, to propagate the $\mathring{\upalpha}$-$\mathring{\updelta}$-$\mathring{\upepsilon}$ hierarchy, we find it convenient to make $L^{\infty}$ bootstrap assumptions for $\Psi$, $\vec{W}$, and their geometric derivatives on a bootstrap time interval of the form $[0,T_{(Boot)})$, on which $\upmu > 0$ and on which the solution exists classically. In view of the remarks made below \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR}, we can assume that $T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$. 
Our ``fundamental'' bootstrap\footnote{To close our estimates, we also find it convenient to make additional ``auxiliary'' bootstrap assumptions; see Subsects.\ \ref{SS:AUXILIARYBOOTSTRAP} and \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL}.} assumptions are (see Subsect.\ \ref{SS:PSIBOOTSTRAP}) \begin{align} \label{E:INTROPSIFUNDAMENTALC0BOUNDBOOTSTRAP} \left\| \mathscr{P}^{[1,10]} \Psi \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}^{\leq 10} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon, \end{align} where $\mathscr{P}^{[1,M]}$ denotes an arbitrary differential operator of order in between $1$ and $M$ corresponding to repeated differentiation with respect to the $\mathcal{P}_u$-tangential vectorfields $\mathscr{P} = \lbrace L, Y \rbrace$, $\mathscr{P}^{\leq M}$ is defined similarly but allows for the possibility of zero differentiations, and $\varepsilon$ is a small bootstrap parameter that, at the end of the paper, by virtue of a priori energy estimates and Sobolev embedding, will have been shown to verify $\varepsilon \lesssim \mathring{\upepsilon}$. \begin{remark} Note that the uniform boundedness of $\max_{\alpha=0,1,2} |\partial_{\alpha} w|$ up to the shock is already accounted for in the bootstrap assumption \eqref{E:INTROPSIFUNDAMENTALC0BOUNDBOOTSTRAP}. Note furthermore that the same is \emph{not} true for $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$, which blows up at the shock. \end{remark} Using \eqref{E:INTROPSIFUNDAMENTALC0BOUNDBOOTSTRAP} and our data-size assumptions, we can derive $L^{\infty}$ estimates for $\Psi$ and the low-order pure transversal and mixed transversal-tangential derivatives of $\Psi$ and $\vec{W}$ on the bootstrap region, and for various derivatives of $\upmu$ and the Cartesian component functions $\lbrace L^a \rbrace_{a=1,2}$ for times up to as large as $2 \mathring{\updelta}_*^{-1}$. Roughly, this is the content of Sects.\ \ref{S:PRELIMINARYPOINTWISE} and ~\ref{S:LINFINITYESTIMATESFORHIGHERTRANSVERSAL}. 
The analysis is similar in spirit to that of Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES} but is much more involved. We need the estimates for the derivatives of $\upmu$ and $\lbrace L^a \rbrace_{a=1,2}$ because these quantities arise as error terms when we commute the equations with the vectorfields $\lbrace L, \breve{X}, Y \rbrace$. \subsubsection{Proof sketch of the formation of the shock and the blowup of $\partial \Psi$} \label{SSS:FORMATIONOFSHOCK} Given the $L^{\infty}$ estimates described in Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}, the proofs that $\upmu \to 0$ in finite time and that $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$ blows up are not much more difficult than they were in Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES}. We now sketch the proofs. First, one derives (essentially as a consequence of the eikonal equation \eqref{E:INTROEIKONAL}) the following transport equation for $\upmu$ (see Lemma~\ref{L:UPMUANDLUNITIFIRSTTRANSPORT}): \begin{align} \label{E:SCHEMATICUPMUTRANSPORT} L \upmu(t,u,\vartheta) & = \frac{1}{2} [G_{L L} \breve{X} \Psi](t,u,\vartheta) + \upmu \mathcal{O}(P \Psi)(t,u,\vartheta). \end{align} In \eqref{E:SCHEMATICUPMUTRANSPORT} and throughout, $P$ schematically denotes a differentiation in a direction tangential to the characteristics $\mathcal{P}_u$. Note that equation \eqref{E:SCHEMATICUPMUTRANSPORT} does not involve the slow wave $\vec{W}$. Using bootstrap assumptions and $L^{\infty}$ estimates of the type described in Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}, it is easy to show that $\upmu \mathcal{O}(P \Psi)(t,u,\vartheta) = \mathcal{O}(\varepsilon)$ and that $[G_{L L} \breve{X} \Psi](t,u,\vartheta) = [G_{L L} \breve{X} \Psi](0,u,\vartheta) + \mathcal{O}(\varepsilon) $. 
Inserting these estimates into \eqref{E:SCHEMATICUPMUTRANSPORT}, we find that \begin{align} \label{E:ANOTHERSCHEMATICUPMUTRANSPORT} L \upmu(t,u,\vartheta) & = \frac{1}{2} [G_{L L} \breve{X} \Psi](0,u,\vartheta) + \mathcal{O}(\varepsilon). \end{align} From \eqref{E:ANOTHERSCHEMATICUPMUTRANSPORT}, definition \eqref{E:CRITICALBLOWUPTIMEFACTOR} and the fact that $\varepsilon$ is controlled by $\mathring{\upepsilon}$, we see that there exists $(u_*,\vartheta_*) \in [0,1] \times \mathbb{T}$ such that \begin{align} \label{E:MOSTNEGATIVEANOTHERSCHEMATICUPMUTRANSPORT} L \upmu(t,u_*,\vartheta_*) & = - \mathring{\updelta}_* + \mathcal{O}(\mathring{\upepsilon}). \end{align} Recalling that $ \displaystyle L = \frac{\partial}{\partial t} $ and that $ \upmu|_{t=0} = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\Psi) = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) $ (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding the notation), we see that if $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small, then $\upmu$ vanishes for the first time when $t = \lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}) \rbrace \mathring{\updelta}_*^{-1}$. Moreover, \underline{$\upmu$ vanishes linearly} in that $L \upmu$ is \emph{strictly negative} at the vanishing points; as we describe below in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}, \emph{these are crucially important facts for our energy estimates}. Finally, we note that the above argument also yields that $|\breve{X} \Psi| \gtrsim 1$ at the points where $\upmu$ vanishes and thus $ \displaystyle X \Psi := \frac{1}{\upmu} \breve{X} \Psi $ blows up like $ \displaystyle \frac{1}{\upmu} $ at such points. 
Since we are also able to show that the Cartesian components $X^{\alpha}$ remain close to $- \delta_1^{\alpha}$ throughout the evolution, it follows that $\max_{\alpha=0,1,2} |\partial_{\alpha} \Psi|$ blows up when $\upmu$ vanishes. \subsubsection{Overview of the energy estimates} \label{SSS:ENERGYESTIMATES} Energy estimates are by far the most difficult aspect of the proof. For reasons to be explained, to close our energy estimates, we must commute the evolution equations up to $18$ times with the elements of the $\mathcal{P}_u$-tangential commutation set $\mathscr{P} = \lbrace L, Y \rbrace$ and derive energy estimates for the differentiated quantities. The starting point for these energy estimates is energy identities for $\Psi$ and $\vec{W}$, which we obtain by applying the divergence theorem on the regions depicted in Figure~\ref{F:SOLIDREGION}. We provide the details behind these energy identities in Sect.\ \ref{S:ENERGIES}; here we focus mainly on outlining how to derive a priori energy estimates based on the energy identities. \begin{center} \begin{overpic}[scale=.2]{Solidregion.pdf} \put (54,32.5) {\large$\displaystyle \mathcal{M}_{t,u}$} \put (38,33) {\large$\displaystyle \mathcal{P}_u^t$} \put (74,34) {\large$\displaystyle \mathcal{P}_0^t$} \put (59,15) {} \put (32,17) {\large$\displaystyle \Sigma_0^u$} \put (48.5,13) {\large$\displaystyle \ell_{0,0}$} \put (12,13) {\large$\displaystyle \ell_{0,u}$} \put (90,60.5) {\large$\displaystyle \Sigma_t^u$} \put (93.5,56) {\large$\displaystyle \ell_{t,0}$} \put (87.2,56) {\large$\displaystyle \ell_{t,u}$} \put (-.6,16) {\large$\displaystyle x^2 \in \mathbb{T}$} \put (22,-3) {\large$\displaystyle x^1 \in \mathbb{R}$} \thicklines \put (-.9,3){\vector(.9,1){22}} \put (.7,1.8){\vector(100,-4.5){48}} \end{overpic} \captionof{figure}{The energy estimate region} \label{F:SOLIDREGION} \end{center} We start by describing our energy-null flux quantities. 
In Subsubsect.\ \ref{SSS:ENERGYESTIMATES} only, we denote the energy-null flux quantity\footnote{In our detailed analysis, when constructing $L^2$-controlling quantities, we separately define energies along $\Sigma_t^u$, null fluxes along $\mathcal{P}_u^t$, and spacetime integrals over $\mathcal{M}_{t,u}$. Here, to shorten our explanation of the main ideas, we have grouped them together.} for $\Psi$ by $\mathbb{H}^{(Fast)}$ and the one for $\vec{W}$ by $\mathbb{H}^{(Slow)}$. Schematically, $\mathbb{H}^{(Fast)}$ and $\mathbb{H}^{(Slow)}$ have the following strength (see Sect.\ \ref{S:ENERGIES} for precise statements concerning the energies and their coerciveness), where the integrals are with respect to the geometric coordinates:\footnote{In our schematic overview of the proof, we use the notation $A \sim B$ to imprecisely indicate that $A$ is well-approximated by $B$.} \begin{subequations} \begin{align} \mathbb{H}^{(Fast)}(t,u) & \sim \int_{\Sigma_t^u} \left\lbrace \upmu (L \Psi)^2 + (\breve{X} \Psi)^2 + \upmu (Y \Psi)^2 \right\rbrace \,d \vartheta du' + \int_{\mathcal{P}_u^t} \left\lbrace (L \Psi)^2 + \upmu (Y \Psi)^2 \right\rbrace \, d \vartheta dt' \label{E:INTROPSIENERGY} \\ & \ \ + \int_{\mathcal{M}_{t,u}} [L \upmu]_- (Y \Psi)^2 \, d \vartheta d u' dt', \notag \\ \mathbb{H}^{(Slow)}(t,u) & \sim \int_{\Sigma_t^u} \upmu |\vec{W}|^2 \,d \vartheta du' + \int_{\mathcal{P}_u^t} |\vec{W}|^2 \, d \vartheta dt'. \label{E:INTROSLOWWAVEENERGY} \end{align} \end{subequations} On RHS~\eqref{E:INTROPSIENERGY} and throughout, $ \displaystyle f_- := \max \lbrace -f, 0 \rbrace $. A crucially important feature of the above energies is that some of the integrals on RHSs \eqref{E:INTROPSIENERGY}-\eqref{E:INTROSLOWWAVEENERGY} are $\upmu$-weighted and thus become weak near the shock, that is, in regions where $\upmu$ is near $0$. It turns out that the $\upmu$-weighted integrals are not able to suitably control all of the error terms that arise in the energy identities. 
The reason is that we encounter some error terms that \emph{lack} $\upmu$ weights and are therefore relatively strong. However, it is also true that \emph{all} elements of $\lbrace L \Psi, \breve{X} \Psi, Y \Psi, \vec{W} \rbrace$ are controlled by one of $\mathbb{H}^{(Fast)}$ or $\mathbb{H}^{(Slow)}$ \emph{without a $\upmu$ weight} and thus, in total, we are able to control all error integrals. Let us further comment on the coerciveness of the spacetime integral $ \displaystyle \int_{\mathcal{M}_{t,u}} [L \upmu]_- (Y \Psi)^2 \, d \vartheta d u' dt' $ featured on RHS~\eqref{E:INTROPSIENERGY}. The key point is that $L \upmu$ is quantitatively negative (and thus $[L \upmu]_-$ is quantitatively positive) in the difficult region where $\upmu$ is small, which leads to the coerciveness of the integral. The reasons behind this were outlined in Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK}. In all prior works on shock formation in more than one spatial dimension, similar spacetime integrals were exploited to close the energy estimates. The idea to exploit such a spacetime integral seems to have originated in the works \cites{sA1999a,dC2007}. We now let $\mathbb{H}^{(Fast)}_N$ denote an energy corresponding to commuting the wave equation for $\Psi$ with a string of vectorfields $\mathscr{P}^N$ consisting of precisely $N$ factors of elements of the $\mathcal{P}_u$-tangential commutation set $\mathscr{P} = \lbrace L, Y \rbrace$. We let $\mathbb{H}^{(Slow)}_N$ be an analogous energy for $\vec{W}$. As we alluded to above, in our detailed proof, we will have $1 \leq N \leq 18$ for $\mathbb{H}^{(Fast)}_N$ and $N \leq 18$ for $\mathbb{H}^{(Slow)}_N$. For such $N$ values, our initial data are such that all energies are initially of small size $\mathcal{O}(\mathring{\upepsilon}^2)$, where $\mathring{\upepsilon}$ is the smallness parameter from Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}.
In particular, our energies \emph{completely vanish} for simple outgoing plane wave solutions (in which $L \Psi = Y \Psi = \vec{W} = 0$). We stress that we avoid using the energy $\mathbb{H}^{(Fast)}_0$, which involves the $L^2$ norm of the pure transversal derivative $\breve{X} \Psi$ and is therefore allowed to be of a relatively large size $\mathcal{O}(\mathring{\updelta}^2)$; as we mentioned earlier, in order to close our proof, we do not need to control $\breve{X} \Psi$ in $L^2$, but rather only in $L^{\infty}$. We also need to control $\breve{X} \breve{X} \Psi$ and $\breve{X} \breve{X} \breve{X} \Psi$ in $L^{\infty}$, for reasons that we clarify starting in Sect.\ \ref{S:LINFINITYESTIMATESFORHIGHERTRANSVERSAL}. We can obtain these $L^{\infty}$ estimates by treating the wave equation \eqref{E:FASTWAVE} like a transport equation of the form $L \breve{X} \Psi = \cdots$ (i.e., a transport equation in $\breve{X} \Psi$), where the source terms $\cdots$ are controlled by our energies, and by commuting this equation up to two times with $\breve{X}$; see Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP} for detailed proofs. We now provide a few more details about how to derive the energy identities that form the starting point of our $L^2$-type analysis. To obtain the relevant energy identities for $\Psi$, we multiply the wave equation \eqref{E:FASTWAVE} by $\upmu$, commute\footnote{To avoid uncontrollable error terms, it is essential that we first multiply the equations by $\upmu$ before commuting them.} the resulting equation with $\mathscr{P}^N$, multiply by $T \mathscr{P}^N \Psi$, and then integrate by parts over $\mathcal{M}_{t,u}$. Here, \begin{align} \label{E:MULT} T := (1 + 2 \upmu) L + 2 \breve{X} \end{align} is a ``multiplier vectorfield'' with appropriately chosen $\upmu$ weights.
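For orientation, we recall the standard mechanism underlying such energy identities, stated here schematically for the unweighted operator $\square_g$ (the precise, $\upmu$-weighted versions that we use are derived in Sect.\ \ref{S:ENERGIES}; here $\mathscr{D}$ denotes the Levi--Civita connection of $g$). With $ \displaystyle Q_{\alpha \beta}[\Psi] := \partial_{\alpha} \Psi \partial_{\beta} \Psi - \frac{1}{2} g_{\alpha \beta} (g^{-1})^{\kappa \lambda} \partial_{\kappa} \Psi \partial_{\lambda} \Psi $ denoting the energy-momentum tensor of $\Psi$ and $ {}^{(T)} \pi_{\alpha \beta} := \mathscr{D}_{\alpha} T_{\beta} + \mathscr{D}_{\beta} T_{\alpha} $ the deformation tensor of the multiplier $T$, we have \begin{align*} \mathscr{D}^{\alpha} Q_{\alpha \beta}[\Psi] = (\square_g \Psi) \partial_{\beta} \Psi, \qquad \mathscr{D}^{\alpha} \left( Q_{\alpha \beta}[\Psi] T^{\beta} \right) = (\square_g \Psi) T \Psi + \frac{1}{2} Q^{\alpha \beta}[\Psi] \, {}^{(T)} \pi_{\alpha \beta}. \end{align*} Applying the divergence theorem to the last identity on $\mathcal{M}_{t,u}$ yields the hypersurface integrals along $\Sigma_t^u$ and $\mathcal{P}_u^t$ that constitute the energies and null fluxes, plus spacetime error integrals generated by the inhomogeneous terms and by $ {}^{(T)} \pi$.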
Similarly, to obtain the relevant energy identities for $\vec{W}$, we multiply equations \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS} with $\upmu$, commute them with $\mathscr{P}^N$, multiply by an appropriate quantity, and then integrate by parts over $\mathcal{M}_{t,u}$; see Sect.\ \ref{S:ENERGIES} for the details behind the integration by parts and Sect.\ \ref{S:POINTWISEESTIMATES} for the details behind the pointwise estimates for the semilinear inhomogeneous terms on RHSs \eqref{E:FASTWAVE} and \eqref{E:SLOW0EVOLUTION} and for pointwise estimates for the error terms that we generate upon commuting the equations. In total, we can use these energy identities and pointwise estimates to obtain a system of integral inequalities of the following type, where here we only schematically display a few representative terms: \begin{subequations} \begin{align} \mathbb{H}^{(Fast)}_N(t,u) & \leq C \mathring{\upepsilon}^2 + \int_{\mathcal{M}_{t,u}} (\breve{X} \Psi) (\breve{X} Y^N \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \, d \vartheta d u' dt' \label{E:PSIMODELENERGYINEQUALITY} \\ & \ \ + \int_{\mathcal{M}_{t,u}} (L Y^N \Psi) Y^N \vec{W} \, d \vartheta d u' dt' + \cdots, \notag \\ \mathbb{H}^{(Slow)}_N(t,u) & \leq C \mathring{\upepsilon}^2 + \int_{\mathcal{M}_{t,u}} |Y^N \vec{W}|^2 \, d \vartheta d u' dt' + \int_{\mathcal{M}_{t,u}} (Y^N \vec{W}) (\breve{X} Y^N \Psi) \, d \vartheta d u' dt' + \cdots. \label{E:SLOWWAVEMODELENERGYINEQUALITY} \end{align} \end{subequations} The $C \mathring{\upepsilon}^2$ terms on RHSs \eqref{E:PSIMODELENERGYINEQUALITY}-\eqref{E:SLOWWAVEMODELENERGYINEQUALITY} are generated by the data. The tensorfield $\upchi$ on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} is the null second fundamental form of the co-dimension-two tori $\ell_{t,u}$. 
It is a symmetric type $\binom{0}{2}$ tensorfield with components $\upchi_{\Theta \Theta} = g(\mathscr{D}_{\Theta} L, \Theta)$ (see \eqref{E:CHIUSEFULID} and recall that $ \displaystyle \Theta = \frac{\partial}{\partial \vartheta} $), where $\mathscr{D}$ is the Levi--Civita connection of $g$. Moreover, ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ is the trace of $\upchi$ with respect to the Riemannian metric $g \mkern-8.5mu / $ induced on $\ell_{t,u}$ by $g$. Geometrically, ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ is the null mean curvature of the $g$-null hypersurfaces $\mathcal{P}_u$. Analytically, we have $L^{\alpha} \sim \partial u$ (see \eqref{E:LGEOEQUATION} and \eqref{E:LUNITDEF}) and thus ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \sim \partial^2 u$, where $u$ is the eikonal function. From the point of view of counting derivatives, one might expect to see terms such as $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \sim \partial^{N+2} u$ on the RHS of the $N$-times commuted fast wave equation since the Cartesian components $P^{\alpha}$ of the elements $ P \in \mathscr{P} = \lbrace L, Y \rbrace $ depend on $\partial u$; roughly, terms such as $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ can arise when one commutes operators of the form $\mathscr{P}^N$ through\footnote{As we mentioned above, in practice, we commute through $\upmu \square_g$ since the $\upmu$-weighted wave operator exhibits better commutation properties with the elements of $\lbrace L, \breve{X}, Y \rbrace $.} the expression $\square_g \Psi$ and the maximum number of derivatives falls on the components\footnote{In practice, we mostly rely on geometric decomposition formulas when decomposing the error terms in the commuted equations, rather than working with Cartesian components.} $P^{\alpha} \sim \partial u$. 
Thus, the presence of $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} signifies that the energy estimates for the wave variables are coupled to $L^2$ estimates for the derivatives of the eikonal function. This is a fundamental difficulty that one faces whenever one works with vectorfields constructed from an eikonal function adapted to the characteristics. We already stress here that a naive treatment of the term $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ in the energy estimates would result in the loss of a derivative that would preclude closure of the estimates. This is because crude estimates for $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \sim \partial^{N+2} u$, based on the eikonal equation \eqref{E:INTROEIKONAL}, lead to an $L^2$ estimate for $\partial^{N+2} u$ that depends on $\partial^{N+2} \Psi$, which is one more derivative of $\Psi$ than is controlled by $ \mathbb{H}^{(Fast)}_N$. However, the term $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ has a special tensorial structure, and we can avoid the loss of a derivative through a procedure that we describe below. As we explain below, we use this procedure only at the top order\footnote{More precisely, in treating the most difficult top-order error terms involving $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$, we use this procedure only at the top order. We also encounter less degenerate top-order error terms with factors that behave like $\upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ (see Lemma~\ref{L:LESSDEGENERATEENERGYESTIMATEINTEGRALS} and point (5) of Subsect.\ \ref{SS:OFTENUSEDESTIMATES}), and for these terms, thanks to the helpful factor of $\upmu$, it is permissible to use the procedure at all derivative levels.} because it leads to a rather degenerate top-order energy estimate, and we need improved estimates below top order to close our proof. 
The improved estimates are possible because below top order, one does not need to worry about the loss of a derivative. In fact, as we further explain below, to obtain the improved estimates, it is important that one \emph{should allow the loss of a derivative} in the estimates for $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ below top order. Taken together, these are unusual and technically challenging features of the study of shock formation that were first found in the works \cites{sA1995,sA1999a,sA1999b,dC2007} of Alinhac\footnote{Actually, Alinhac's approach allowed for some loss of differentiability stemming from his use of an eikonal function. As we mentioned in Subsect.\ \ref{SS:PRIORWORKSANDSUMMARY}, to overcome this difficulty, he used Nash--Moser estimates. In contrast, Christodoulou used an approach that avoided the derivative loss altogether, which is the approach we take in the present article.} and Christodoulou. We now provide some additional details on how we derive the a priori energy estimates. In the usual fashion, we must control RHSs~\eqref{E:PSIMODELENERGYINEQUALITY} and \eqref{E:SLOWWAVEMODELENERGYINEQUALITY} in terms of $\mathbb{H}^{(Fast)}_M$ and $\mathbb{H}^{(Slow)}_M$ (for suitable $M$) so that we can use a version of Gronwall's inequality. After a rather difficult Gronwall estimate, one obtains a hierarchy of energy estimates holding up to the shock, which we now explain. The estimates feature the quantity \[ \upmu_{\star}(t,u) := \min \left\lbrace 1 , \min_{\Sigma_t^u} \upmu \right\rbrace, \] which essentially measures the worst case smallness for $\upmu$ along $\Sigma_t^u$. As in all prior works on shock formation in more than one spatial dimension, our proof allows for the possibility that the high-order energies might blow up like negative powers of $\upmu_{\star}$, and, at the same time, guarantees that the energies become successively less singular as one reduces the number of derivatives. 
Moreover, one eventually reaches a level at which the energies remain uniformly bounded, all the way up to the shock; these non-degenerate energy estimates are what allows one to improve, via Sobolev embedding, the $L^{\infty}$ bootstrap assumptions (see Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}) that are crucial for all aspects of the proof. The hierarchy of energy estimates that we derive can be modeled as follows: \begin{subequations} \begin{align} \mathbb{H}_{18}^{(Fast)}(t,u), \, \mathbb{H}_{18}^{(Slow)}(t,u) & \lesssim \mathring{\upepsilon}^2 \upmu_{\star}^{-11.8}(t,u), \label{E:INTROTOPENERGY} \\ \mathbb{H}_{17}^{(Fast)}(t,u), \, \mathbb{H}_{17}^{(Slow)}(t,u) & \lesssim \mathring{\upepsilon}^2 \upmu_{\star}^{-9.8}(t,u), \label{E:INTROJUSTBELOWTOPENERGY} \\ & \cdots, \\ \mathbb{H}_{13}^{(Fast)}(t,u), \, \mathbb{H}_{13}^{(Slow)}(t,u) & \lesssim \mathring{\upepsilon}^2 \upmu_{\star}^{-1.8}(t,u), \\ \mathbb{H}_{12}^{(Fast)}(t,u), \, \mathbb{H}_{12}^{(Slow)}(t,u) & \lesssim \mathring{\upepsilon}^2, \label{E:FIRSTNONDEGENERATEENERGY} \\ & \cdots \\ \mathbb{H}_1^{(Fast)}(t,u), \, \mathbb{H}_1^{(Slow)}(t,u) & \lesssim \mathring{\upepsilon}^2. \label{E:INTROLOWESTENERGY} \end{align} \end{subequations} The estimates \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY} capture in spirit the energy estimates that we prove in this article; we refer the reader to Prop.~\ref{P:MAINAPRIORIENERGY} for the precise statements. 
The precise numerology behind the hierarchy \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY} is intricate, but here are the main ideas: \textbf{i)} The top-order blowup-exponent of $11.8$ found on RHS~\eqref{E:INTROTOPENERGY} is tied to certain universal structural constants in the equations (such as the constant $A$ appearing in \eqref{E:INTROHARDESTERRRORINTEGRAL} below) and is close to optimal within our approach; for example, if one considered data belonging to the Sobolev space $H^{100}$, then our approach would only allow us to conclude that the energy controlling $100$ derivatives of $\Psi$ might blow up at the rate $\upmu_{\star}^{-11.8}(t,u)$. \textbf{ii)} The fact that the estimates become less singular by precisely two powers of $\upmu_{\star}$ at each step in the descent seems to be fundamental. The root of this phenomenon is the following: an integration in time of $\upmu_{\star}^{-B}(t,u)$ reduces the strength of the singularity by one degree to $\upmu_{\star}^{1-B}(t,u)$; see \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR}. \textbf{iii)} To control error terms, it is convenient for the solutions to be such that slightly more than half of the energies are uniformly bounded up to the shock. Then, when we are bounding error term products in $L^2$, we can exploit the fact that all but at most one factor in the product is uniformly bounded in $L^{\infty}$ up to the shock. We now discuss some of the main ideas behind deriving the energy estimate hierarchy \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY}. By far, the most difficult integrals to estimate are the ones on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} involving $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ and some related ones that we have not displayed but that create similar difficulties. In the case of the scalar wave equations treated in \cite{jSgHjLwW2016}, these difficult integrals were handled via extensions of techniques developed in \cite{dC2007}.
In the present article, in treating these integrals, we encounter some new terms stemming from interactions between the fast wave, the eikonal function, and the slow wave; see the proof outline of Prop.~\ref{P:KEYPOINTWISEESTIMATE} for further discussion on this point. Later in this section, we will say a few words about these difficult integrals, but we will not discuss them in detail here since the most challenging aspects of these integrals were handled in \cite{jSgHjLwW2016}. Instead, in this subsubsection, we focus on describing the influence of the $Y^N \vec{W}$-involving integrals from RHSs \eqref{E:PSIMODELENERGYINEQUALITY}-\eqref{E:SLOWWAVEMODELENERGYINEQUALITY} on the a priori estimates \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY}. Roughly, these integrals account for the self-interactions of the slow wave and the interaction of the slow wave with the fast wave up to the shock, which are the main new kinds of interactions accounted for in this paper; the remaining error integrals on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} involve self-interactions of $\Psi$ and the interaction of $\Psi$ with the eikonal function, which were handled in \cite{jSgHjLwW2016}, as well as interactions of $\vec{W}$ with the below-top-order derivatives of the eikonal function, which are relatively easy to treat (see Lemma~\ref{L:STANDARDPSISPACETIMEINTEGRALS}). Our main goal at present is the following: \begin{quote} \emph{We will sketch why the $Y^N \vec{W}$-involving integrals from RHSs \eqref{E:PSIMODELENERGYINEQUALITY}-\eqref{E:SLOWWAVEMODELENERGYINEQUALITY} create only harmless exponential growth in the energies}, which is allowable within our approach in view of our sufficiently good guess about the time of first shock formation (see the discussion following equation \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR}) and our assumed smallness of $\mathring{\upepsilon}$. 
\end{quote} In particular, if the $Y^N \vec{W}$-involving integrals were the only types of error integrals that one encountered in the energy estimates, then at all derivative levels, the energies would remain uniformly bounded by $\lesssim \mathring{\upepsilon}^2$ up to the shock. This shows that the interaction between the two waves is in some sense weak near the shock, \emph{even though the RHS of the slow wave equation \eqref{E:SLOWWAVE} contains $\partial \Psi$ source terms that blow up at the shock.} The weakness of the interaction is very much a ``PDE effect'' (it is not easily modeled by ODE inequalities) that is detectable only because our energies \eqref{E:INTROPSIENERGY}-\eqref{E:INTROSLOWWAVEENERGY} contain non-$\upmu$-degenerate $\mathcal{P}_u^t$ integrals and spacetime integrals. To proceed with our sketch, we let \[ \overline{\mathbb{H}}_N^{(Fast)}(t,u) := \sup_{(t',u') \in [0,t] \times [0,u]}\mathbb{H}^{(Fast)}_N(t',u'), \qquad \overline{\mathbb{H}}_N^{(Slow)}(t,u) := \sup_{(t',u') \in [0,t] \times [0,u]} \mathbb{H}^{(Slow)}_N(t',u'). \] We then note the following simple consequence of \eqref{E:INTROPSIENERGY}-\eqref{E:INTROSLOWWAVEENERGY}, \eqref{E:PSIMODELENERGYINEQUALITY}-\eqref{E:SLOWWAVEMODELENERGYINEQUALITY}, and Young's inequality, where we are ignoring the $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$-involving integral on RHS~\eqref{E:PSIMODELENERGYINEQUALITY}: \begin{align} \overline{\mathbb{H}}_N^{(Fast)}(t,u) & \leq C \mathring{\upepsilon}^2 + C \int_{u'=0}^u \overline{\mathbb{H}}_N^{(Fast)}(t,u') \, d u' + C \int_{u'=0}^u \overline{\mathbb{H}}_N^{(Slow)}(t,u') \, d u' + \cdots, \label{E:GRONWALLREADYPSIMODELENERGYINEQUALITY} \\ \overline{\mathbb{H}}_N^{(Slow)}(t,u) & \leq C \mathring{\upepsilon}^2 + C \int_{u'=0}^u \overline{\mathbb{H}}_N^{(Fast)}(t,u') \, d u' + C \int_{t'=0}^t \overline{\mathbb{H}}_N^{(Slow)}(t',u) \, d t' + \cdots.
\label{E:GRONWALLREADYSLOWWAVEMODELENERGYINEQUALITY} \end{align} Then from \eqref{E:GRONWALLREADYPSIMODELENERGYINEQUALITY}-\eqref{E:GRONWALLREADYSLOWWAVEMODELENERGYINEQUALITY} and Gronwall's inequality in $t$ and $u$, we conclude that as long as the solution exists classically, we have the following estimates for $(t,u) \in [0,2 \mathring{\updelta}_*^{-1}] \times [0,1]$: \begin{align} \label{E:INTROEASYTERMSTOPENERGYGRONWALLED} \overline{\mathbb{H}}_N^{(Fast)}(t,u) & \leq C \mathring{\upepsilon}^2, && \overline{\mathbb{H}}_N^{(Slow)}(t,u) \leq C \mathring{\upepsilon}^2, \end{align} where, as we have mentioned, constants $C$ are allowed to depend on $\mathring{\updelta}_*^{-1}$, the approximate time of first shock formation (see \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR} and the discussion below it). As we mentioned above, in reality, we are not able to prove the non-degenerate estimate \eqref{E:INTROEASYTERMSTOPENERGYGRONWALLED} for large $N$ because the $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$-involving integral on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} leads to a much worse a priori energy estimate in the top-order case $N=18$. This phenomenon is explained in detail in \cite{jSgHjLwW2016} in the case of a homogeneous scalar covariant wave equation $\square_{g(\Psi)} \Psi = 0$; here we only describe the changes in the analysis of $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ compared to \cite{jSgHjLwW2016}, the new feature being the presence of the semilinear coupling terms on RHS~\eqref{E:FASTWAVE}. Let us first describe the estimate. 
Specifically, due to the difficult regularity theory of the eikonal function,\footnote{Recall that $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \sim \partial^{N+2} u$.} the following term is in fact present on RHS~\eqref{E:GRONWALLREADYPSIMODELENERGYINEQUALITY} in the case $N=18$ (see below for more details): \begin{align} \label{E:INTROHARDESTERRRORINTEGRAL} A \int_{t'=0}^t \left( \sup_{\Sigma_{t'}^u} \left| \frac{L \upmu}{\upmu} \right| \right) \mathbb{H}^{(Fast)}_{18}(t',u) \, dt' + \cdots, \end{align} where $A$ is a universal positive constant that is \textbf{independent of the structure of the nonlinearities and the number of times that the equations are commuted} and $\cdots$ denotes similar or less degenerate error terms. By itself, the error term \eqref{E:INTROHARDESTERRRORINTEGRAL} would change the a priori estimate in a way that can roughly be described as follows: \begin{align} \label{E:INTROALLTERMSTOPENERGYGRONWALLED} \overline{\mathbb{H}}^{(Fast)}_{18}(t,u) & \leq C \mathring{\upepsilon}^2 \upmu_{\star}^{-A}(t,u), && \overline{\mathbb{H}}^{(Slow)}_{18}(t,u) \leq C \mathring{\upepsilon}^2 \upmu_{\star}^{-A}(t,u). \end{align} The factor of $A$ on RHS~\eqref{E:INTROALLTERMSTOPENERGYGRONWALLED} is partially responsible for the magnitude of the top-order blowup-exponent $11.8$ on RHS~\eqref{E:INTROTOPENERGY}, though we stress that in a detailed proof, one encounters other types of degenerate error integrals that further enlarge the blowup-exponents. Throughout the paper, we indicate the ``important'' structural constants that substantially contribute to the blowup-exponents by drawing boxes around them (see, for example, the RHS of the estimates of Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES}). 
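To illustrate in a model setting how the error integral \eqref{E:INTROHARDESTERRRORINTEGRAL} drives an estimate of the form \eqref{E:INTROALLTERMSTOPENERGYGRONWALLED}, we present the following caricature, in which, consistent with the schematic behavior \eqref{E:UPMUSTARSCHEMATIC} below, we take $\upmu_{\star}(t',u) \sim 1 - \mathring{\updelta}_* t'$ and $ \displaystyle \sup_{\Sigma_{t'}^u} \left| \frac{L \upmu}{\upmu} \right| \sim \mathring{\updelta}_* (1 - \mathring{\updelta}_* t')^{-1} $. The model inequality \begin{align*} \mathbb{H}(t) \leq C \mathring{\upepsilon}^2 + A \int_{t'=0}^t \frac{\mathring{\updelta}_*}{1 - \mathring{\updelta}_* t'} \mathbb{H}(t') \, dt' \end{align*} implies, by Gronwall's inequality, \begin{align*} \mathbb{H}(t) \leq C \mathring{\upepsilon}^2 \exp \left( A \int_{t'=0}^t \frac{\mathring{\updelta}_*}{1 - \mathring{\updelta}_* t'} \, dt' \right) = C \mathring{\upepsilon}^2 (1 - \mathring{\updelta}_* t)^{-A} \sim C \mathring{\upepsilon}^2 \upmu_{\star}^{-A}(t,u). \end{align*} We stress that this model computation suppresses many additional terms; the actual Gronwall argument, described below, is substantially more involved.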
The derivation of \eqref{E:INTROALLTERMSTOPENERGYGRONWALLED} as a consequence of the presence of the error term \eqref{E:INTROHARDESTERRRORINTEGRAL} is based on a difficult Gronwall estimate that requires having sharp information about the way that $\upmu \to 0$ as well as the behavior of $L \upmu$. Roughly, one can show (see Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK} for a discussion of the main ideas) that \begin{align} \label{E:UPMUSTARSCHEMATIC} \upmu_{\star}(t,u) \sim 1 - \mathring{\updelta}_* t, \qquad \sup_{\Sigma_{t'}^u} |L \upmu| \sim \mathring{\updelta}_*, \end{align} from which one can obtain \eqref{E:INTROALLTERMSTOPENERGYGRONWALLED} by Gronwall's inequality (where it is important that the same factor $\mathring{\updelta}_*$ defined in \eqref{E:INTROCRITICALBLOWUPTIMEFACTOR} appears in both expressions in \eqref{E:UPMUSTARSCHEMATIC}). We now explain the origin of the difficult error integral \eqref{E:INTROHARDESTERRRORINTEGRAL} and its connection to the following aforementioned difficulty: that of avoiding a loss of a derivative at the top order when bounding the error term $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ in $L^2$. To proceed, we first note that using geometric decompositions,\footnote{By this, we essentially mean Raychaudhuri's identity for the component $\mbox{\upshape Ric}_{L L}$ of the Ricci curvature of the metric $g(\Psi)$. 
\label{FN:RAYCH}} one obtains the following evolution equation for $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$, expressed here in schematic form (see the proof outline of Prop.~\ref{P:KEYPOINTWISEESTIMATE} for further discussion): \begin{align} \label{E:TRCHISCHEMATICEVOLUTION} L Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi & = L \mathscr{P}^{N+1} \Psi + {\Delta \mkern-12mu / \, } \mathscr{P}^N \Psi + l.o.t., \end{align} where $ {\Delta \mkern-12mu / \, } $ is the covariant Laplacian induced on $\ell_{t,u}$ by $g(\Psi)$ and $l.o.t.$ denotes lower-order (in the sense of the number of derivatives involved) terms that do \emph{not} involve the slow wave variable $\vec{W}$. In the top-order case $N=18$, equation \eqref{E:TRCHISCHEMATICEVOLUTION} is not useful in its current form because the RHS involves one more derivative of $\Psi$ than we can control by commuting equation \eqref{E:FASTWAVE} $18$ times and deriving energy estimates. To overcome this difficulty, we adopt the following strategy, whose blueprint originates in the proof of the stability of Minkowski spacetime \cite{dCsK1993} and was later used in the context of low-regularity well-posedness for wave equations \cite{sKiR2003} and finally in the context of shock formation \cite{dC2007}: one can decompose the fast wave equation \eqref{E:FASTWAVE} using equation \eqref{E:LONOUTSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED} and then algebraically replace $\upmu {\Delta \mkern-12mu / \, } \mathscr{P}^N \Psi$ (note the crucial factor of $\upmu$ and see \eqref{E:LONOUTSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED}) with $L \breve{X} \mathscr{P}^N \Psi + L (\upmu \mathscr{P}^{N+1} \Psi) + \cdots$ (written in schematic form, where $\cdots$ denotes terms depending on $\leq N + 1$ derivatives of $\Psi$) plus the influence of the semilinear inhomogeneous terms on RHS~\eqref{E:FASTWAVE}, that is, plus $Y^N (\upmu \times \mbox{RHS~\eqref{E:FASTWAVE}})$.
We can then bring these perfect $L$-derivative terms over to LHS~\eqref{E:TRCHISCHEMATICEVOLUTION} to obtain an evolution equation for a ``modified'' version of $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$, denoted here by {\textsf Modified},\footnote{In Prop.~\ref{P:KEYPOINTWISEESTIMATE}, we denote this modified quantity by $\upchifullmodarg{Y^N}$.} which can be written in the following schematic form:\footnote{More precisely, as we describe in our proof outline of Prop.~\ref{P:KEYPOINTWISEESTIMATE}, to control {\textsf Modified}, we first multiply its evolution equation by an integrating factor denoted by $\iota$.} \begin{align} \label{E:MODIFIEDTRCHISCHEMATICEVOLUTION} L \left\lbrace \overbrace{ \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi + \breve{X} \mathscr{P}^N \Psi + \upmu \mathscr{P}^{N+1} \Psi }^{\mbox{\textsf Modified}} \right\rbrace & = Y^N (\upmu \times \mbox{RHS~\eqref{E:FASTWAVE}}) + \cdots; \end{align} see \eqref{E:TOPORDERMODIFIEDTRCHI} for the precise definition of the modified quantity. We stress that if we had allowed $g = g(\Psi,w)$ instead of $g=g(\Psi)$, then our proof of \eqref{E:MODIFIEDTRCHISCHEMATICEVOLUTION} would have broken down in the sense that typically, we would not have been able to derive an analogous evolution equation featuring terms with allowable regularity on the RHS. To handle the first integral on RHS~\eqref{E:PSIMODELENERGYINEQUALITY}, we now algebraically replace $ \displaystyle Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi = \frac{1}{\upmu}\mbox{\textsf Modified} - \frac{1}{\upmu} \breve{X} \mathscr{P}^N \Psi - \mathscr{P}^{N+1} \Psi $, which leads to three error integrals. After a bit of additional work, one finds that the integral corresponding to the second term $ \displaystyle - \frac{1}{\upmu} \breve{X} \mathscr{P}^N \Psi $ leads to the integral in \eqref{E:INTROHARDESTERRRORINTEGRAL}. 
The error integral corresponding to the first term $ \displaystyle \frac{1}{\upmu}\mbox{\textsf{Modified}} $ leads to a similar but more difficult error integral that we treat in inequality \eqref{E:FIRSTSTEPDIFFICULTINTEGRALBOUND} and the discussion just below it (see also the proof outline of Prop.\ \ref{P:KEYPOINTWISEESTIMATE}). Note that in view of RHS~\eqref{E:MODIFIEDTRCHISCHEMATICEVOLUTION}, the $L^2$ estimates for $\textsf{Modified}$ are coupled to the semilinear inhomogeneous terms on the right-hand side of the fast wave equation \eqref{E:FASTWAVE}, which involve the slow wave variable $\vec{W}$. That is, our reliance on a modified version of ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ leads to the coupling of the slow wave variable to the top-order estimates for the null mean curvature of the $\mathcal{P}_u$. However, due to the presence of the factor of $\upmu$ on RHS~\eqref{E:MODIFIEDTRCHISCHEMATICEVOLUTION}, the coupling terms are weak and are among the easier error terms to treat (see the proof outline of Lemma~\ref{L:DIFFICULTTERML2BOUND}). The error integral corresponding to the third term $- \mathscr{P}^{N+1} \Psi$ from the above algebraic decomposition is much easier to treat and is handled in Lemma~\ref{L:STANDARDPSISPACETIMEINTEGRALS}. We have now sketched why the top-order quantity $\mathbb{H}^{(Fast)}_{18}$ from \eqref{E:INTROTOPENERGY} can blow up like $\mathring{\upepsilon}^2 \upmu_{\star}^{-11.8}$ as $\upmu_{\star} \to 0$. To understand why the same can occur for $\mathbb{H}^{(Slow)}_{18}$, we simply consider the integral $ \displaystyle \int_{u'=0}^u \overline{\mathbb{H}}^{(Fast)}_{18}(t,u') \, d u' $ on RHS~\eqref{E:GRONWALLREADYSLOWWAVEMODELENERGYINEQUALITY} (in the case $N=18$); the integration with respect to $u'$ does not ameliorate the strength of the singularity and thus $\mathbb{H}^{(Slow)}_{18}$ can blow up at the same rate as $\mathbb{H}^{(Fast)}_{18}$. 
It remains for us to explain why, in the energy hierarchy \eqref{E:INTROTOPENERGY}-\eqref{E:INTROLOWESTENERGY}, the energy estimates become successively less singular with respect to powers of $\upmu_{\star}^{-1}$ at each stage in the descent. The main ideas are the same as in all prior works on shock formation in more than one spatial dimension. Specifically, at each level of derivatives, the strength of the singularity is driven by the $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$-involving integral on RHS~\eqref{E:PSIMODELENERGYINEQUALITY} and a few other integrals similar to it. The key point is that below top order, we can estimate these integrals in a more direct fashion by integrating the RHS of the evolution equation \eqref{E:TRCHISCHEMATICEVOLUTION} with respect to time (recall that $\displaystyle L = \frac{\partial}{\partial t} $) to obtain an estimate for $\| Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \|_{L^2(\Sigma_t^u)}$. Such an approach involves the loss of one derivative (which is permissible below top order) and thus couples the below-top-order energy estimates to the top-order one. The gain is that the resulting error integrals are less singular with respect to powers of $\upmu_{\star}^{-1}$ compared to the top-order integral \eqref{E:INTROHARDESTERRRORINTEGRAL}. 
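The gain afforded by a time integration can be seen in the following model computation, in which, consistent with \eqref{E:UPMUSTARSCHEMATIC}, we caricature $\upmu_{\star}(t',u) \sim 1 - \mathring{\updelta}_* t'$: for constants $B > 1$, \begin{align*} \int_{t'=0}^t \frac{1}{(1 - \mathring{\updelta}_* t')^B} \, dt' = \frac{1}{\mathring{\updelta}_* (B-1)} \left\lbrace (1 - \mathring{\updelta}_* t)^{1-B} - 1 \right\rbrace \lesssim \upmu_{\star}^{1-B}(t,u), \end{align*} where the implicit constant is allowed to depend on $\mathring{\updelta}_*^{-1}$. That is, in this caricature, each integration in time reduces the strength of the singularity by one power of $\upmu_{\star}^{-1}$.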
We now provide a few more details in the just-below-top-order case $N=17$ to illustrate the main ideas behind this ``descent scheme.'' The key point is that the strength of the singularity is reduced with each integration in time, due to the following estimates,\footnote{The estimates stated in \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR} are a quasilinear version of the model estimates $\int_{s=t}^1 s^{-B} \, ds \lesssim t^{1 - B}$ and $\int_{s=t}^1 s^{-9/10} \, ds \lesssim 1$, where $B>1$ and $0 < t < 1$ in the model estimates and $t=0$ represents the time of first vanishing of $\upmu_\star$.} valid for constants $B > 1$ (see Prop.~\ref{P:MUINVERSEINTEGRALESTIMATES} for the precise statements): \begin{align} \label{E:INTEGRATEMUSTARINTIMELESSSINGULAR} \int_{t'=0}^t \frac{1}{\upmu_{\star}^B(t',u)} \, dt' \lesssim \upmu_{\star}^{1-B}(t,u), \qquad \int_{t'=0}^t \frac{1}{\upmu_{\star}^{9/10}(t',u)} \, dt' \lesssim 1, \end{align} which follow from having sharp information about the way in which $\upmu_{\star} \to 0$ in time (see \eqref{E:UPMUSTARSCHEMATIC}). We now explain the role that the estimates \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR} play in the descent scheme.
Using the above strategy and \eqref{E:GRONWALLREADYPSIMODELENERGYINEQUALITY}, we obtain \begin{align} \label{E:JUSTBELOWTOPORDER} \overline{\mathbb{H}}^{(Fast)}_{17}(t,u) & \leq C \mathring{\upepsilon}^2 + \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \sqrt{\overline{\mathbb{H}}^{(Fast)}_{17}}(t',u) \int_{s=0}^{t'} \frac{1}{\upmu_{\star}^{1/2}(s,u)} \sqrt{\overline{\mathbb{H}}^{(Fast)}_{18}}(s,u) \, ds \, d t' + \cdots, \end{align} where $\cdots$ denotes easier error terms, the $\sqrt{\overline{\mathbb{H}}^{(Fast)}_{18}}(s,u)$ term corresponds to the loss of one derivative that one encounters in estimating $\| Y^{17} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \|_{L^2(\Sigma_{t'}^u)}$, and the factors of $ \displaystyle \frac{1}{\upmu_{\star}^{1/2}} $ arise from the $\upmu$-weights found in the energies \eqref{E:INTROPSIENERGY} along $\Sigma_t^u$ hypersurfaces. In reality, when deriving a priori estimates, one must treat $\overline{\mathbb{H}}^{(Fast)}_{18}$, $\overline{\mathbb{H}}^{(Fast)}_{17}$, $\cdots$ as unknowns in a coupled system of integral inequalities for which one derives a priori estimates via a complicated Gronwall argument. Here, instead of providing the lengthy technical details behind the Gronwall argument (which, as we describe in our proof outline of Prop.~\ref{P:MAINAPRIORIENERGY}, was essentially carried out already in the proof of \cite{jSgHjLwW2016}*{Proposition 14.1}), to illustrate the main ideas, we demonstrate only the \emph{consistency} of the integral inequality \eqref{E:JUSTBELOWTOPORDER} with the less singular estimate \eqref{E:INTROJUSTBELOWTOPENERGY} (less singular compared to \eqref{E:INTROTOPENERGY}, that is). 
Specifically, inserting the estimates \eqref{E:INTROTOPENERGY}-\eqref{E:INTROJUSTBELOWTOPENERGY} into the double time integral on RHS~\eqref{E:JUSTBELOWTOPORDER} and using the first of \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR} two times, we obtain \begin{align} \label{E:CONSISTENCYJUSTBELOWTOPORDER} \overline{\mathbb{H}}^{(Fast)}_{17}(t,u) & \leq C \mathring{\upepsilon}^2 + \mathring{\upepsilon}^2 \int_{t'=0}^t \frac{1}{\upmu_{\star}^{5.4}(t',u)} \int_{s=0}^{t'} \frac{1}{\upmu_{\star}^{6.4}(s,u)} \, ds \, d t' + \cdots \\ & \leq C \mathring{\upepsilon}^2 + \mathring{\upepsilon}^2 \int_{t'=0}^t \frac{1}{\upmu_{\star}^{10.8}(t',u)} \, d t' + \cdots \notag \\ & \leq C \mathring{\upepsilon}^2 + \mathring{\upepsilon}^2 \frac{1}{\upmu_{\star}^{9.8}(t,u)} + \cdots. \notag \end{align} Thus, the strength of the singularity on RHS~\eqref{E:CONSISTENCYJUSTBELOWTOPORDER} is at least consistent with the estimate \eqref{E:INTROJUSTBELOWTOPENERGY} that one aims to prove. Subsequent to obtaining the estimate \eqref{E:INTROJUSTBELOWTOPENERGY}, one can continue the descent scheme, with the energies becoming successively less singular at each step in the descent. Eventually, one reaches a level \eqref{E:FIRSTNONDEGENERATEENERGY} at which, thanks to the second estimate in \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR}, one can show that the energies remain bounded all the way up to the shock. Finally, from the non-degenerate energy estimates \eqref{E:FIRSTNONDEGENERATEENERGY}-\eqref{E:INTROLOWESTENERGY} and Sobolev embedding (see Lemma~\ref{L:SOBOLEV} and Cor.\ \ref{C:IMPROVEDFUNDAMENTALLINFTYBOOTSTRAPASSUMPTIONS}), one can recover the non-degenerate $L^{\infty}$ estimates described in Subsubsect.\ \ref{SSS:DATAASSUMPTIONSANDBOOTSTRAPASSUMPTIONS}, which, in a detailed proof, one needs in order to control various error terms and to show that $\upmu$ vanishes in finite time (as we outlined in Subsubsect.\ \ref{SSS:FORMATIONOFSHOCK}). 
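Schematically, the exponent bookkeeping behind \eqref{E:CONSISTENCYJUSTBELOWTOPORDER} can be summarized as follows: if at some level one has $\sqrt{\overline{\mathbb{H}}^{(Fast)}_{N+1}} \lesssim \mathring{\upepsilon} \upmu_{\star}^{-P}$ with $P > 1$ and one aims to prove $\sqrt{\overline{\mathbb{H}}^{(Fast)}_{N}} \lesssim \mathring{\upepsilon} \upmu_{\star}^{-(P-1)}$, then the analog of the double time integral on RHS~\eqref{E:JUSTBELOWTOPORDER} is bounded by \begin{align*} \mathring{\upepsilon}^2 \int_{t'=0}^t \frac{1}{\upmu_{\star}^{P - 1/2}(t',u)} \int_{s=0}^{t'} \frac{1}{\upmu_{\star}^{P + 1/2}(s,u)} \, ds \, d t' & \lesssim \mathring{\upepsilon}^2 \int_{t'=0}^t \frac{1}{\upmu_{\star}^{2P - 1}(t',u)} \, d t' \lesssim \mathring{\upepsilon}^2 \frac{1}{\upmu_{\star}^{2P - 2}(t,u)}, \end{align*} which is consistent with $\overline{\mathbb{H}}^{(Fast)}_{N} \lesssim \mathring{\upepsilon}^2 \upmu_{\star}^{-2(P-1)}$. That is, each step of the descent reduces the singular power of $\upmu_{\star}^{-1}$ in the energy by two, until the power drops below one and the second estimate in \eqref{E:INTEGRATEMUSTARINTIMELESSSINGULAR} yields non-degenerate bounds. 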
\section{The remaining ingredients in the geometric setup} \label{S:GEOMETRICSETUP} When outlining our proof in Subsect.\ \ref{SS:OVERVIEWOFPROOF}, we defined some basic geometric objects that we use in studying the solution. In this section, we construct most of the remaining such objects, exhibit their basic properties, and give rigorous definitions of most of the quantities that we informally referred to in Sect.\ \ref{S:INTRO}. \subsection{Additional constructions related to the eikonal function} \label{SS:EIKONALFUNCTIONANDRELATED} We recall that we constructed the eikonal function in Def.~\ref{D:INTROEIKONAL}. To supplement the coordinates $t$ and $u$, we now construct a local coordinate function on the tori $\ell_{t,u}$ (which are defined in Def.~\ref{D:HYPERSURFACESANDCONICALREGIONS}). We remark that the coordinate $\vartheta$ plays only a minor role in our analysis. \begin{definition}[\textbf{Geometric torus coordinate}] \label{D:GEOMETRICTORUSCOORDINATE} We define the geometric torus coordinate $\vartheta$ to be the solution to the following initial value problem for a transport equation: \begin{align} \label{E:APPENDIXGEOMETRICTORUSCOORD} (g^{-1})^{\alpha \beta} \partial_{\alpha} u \partial_{\beta} \vartheta & = 0, \\ \vartheta|_{\Sigma_0} & = x^2, \label{E:APPENDIXGEOMETRICTORUSCOORDINITIALCOND} \end{align} where $x^2$ is the (locally defined) Cartesian coordinate function on $\mathbb{T}$. \end{definition} \begin{definition}[\textbf{Geometric coordinates and partial derivatives}] \label{D:GEOMETRICCOORDINATES} We refer to $(t,u,\vartheta)$ as the geometric coordinates, where $t$ is the Cartesian time coordinate. We denote the corresponding geometric coordinate partial derivative vectorfields by \begin{align} \label{E:GEOCOORDPARTDERIVVECTORFIELDS} \left\lbrace \frac{\partial}{\partial t}, \frac{\partial}{\partial u}, \Theta := \frac{\partial}{\partial \vartheta} \right\rbrace. 
\end{align} \end{definition} \begin{remark}[\textbf{Remarks on} $\Theta$] \label{R:REMARKSONCOORDANG} Note that $\Theta$ is positively oriented and globally defined even though $\vartheta$ is only locally defined along $\ell_{t,u}$. \end{remark} \subsection{Important vectorfields, the rescaled frame, and the unit frame} \label{SS:FRAMEANDRELATEDVECTORFIELDS} In this subsection, we construct some vectorfields that we use in our analysis and exhibit their basic properties. We start by defining the gradient vectorfield of the eikonal function: \begin{align} \label{E:LGEOEQUATION} L_{(Geo)}^{\nu} & := - (g^{-1})^{\nu \alpha} \partial_{\alpha} u. \end{align} It is straightforward to see that $L_{(Geo)}$ is future-directed\footnote{By a future-directed vectorfield $V$, we mean that $V^0 > 0$, where $V^0$ is the ``$0$'' Cartesian component of $V$. Similarly, by a future-directed one-form $\xi$, we mean that its $g$-dual vectorfield, which has the Cartesian components $(g^{-1})^{\nu \alpha} \xi_{\alpha}$, is future-directed. We analogously define past-directed vectorfields and one-forms by replacing ``$V^0 > 0$'' with ``$V^0 < 0$,'' etc.} and $g$-null: \begin{align} \label{E:LGEOISNULL} g(L_{(Geo)},L_{(Geo)}) := g_{\alpha \beta} L_{(Geo)}^{\alpha} L_{(Geo)}^{\beta} = 0. \end{align} Moreover, by differentiating the eikonal equation \eqref{E:INTROEIKONAL} with $\mathscr{D}^{\nu} := (g^{-1})^{\nu \alpha} \mathscr{D}_{\alpha}$, where $\mathscr{D}$ is the Levi--Civita connection of $g$, and using that $\mathscr{D}_{\alpha} \mathscr{D}_{\beta} u = \mathscr{D}_{\beta} \mathscr{D}_{\alpha} u$, we infer that $L_{(Geo)}$ is geodesic: \begin{align} \label{E:LGEOISGEODESIC} \mathscr{D}_{L_{(Geo)}} L_{(Geo)} & = 0. \end{align} In addition, it is straightforward to see that $L_{(Geo)}$ is $g$-orthogonal to the characteristics $\mathcal{P}_u$. Hence, the $\mathcal{P}_u$ have $g$-null normals, which justifies our use of the terminology \emph{null hypersurfaces} in referring to them. 
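For completeness, we record the short computations behind \eqref{E:LGEOISNULL} and \eqref{E:LGEOISGEODESIC}. By \eqref{E:LGEOEQUATION} and the eikonal equation \eqref{E:INTROEIKONAL}, we have \begin{align*} g(L_{(Geo)},L_{(Geo)}) & = g_{\alpha \beta} (g^{-1})^{\alpha \mu} \partial_{\mu} u \, (g^{-1})^{\beta \nu} \partial_{\nu} u = (g^{-1})^{\mu \nu} \partial_{\mu} u \, \partial_{\nu} u = 0, \end{align*} while, since $\mathscr{D} g^{-1} = 0$ and $\mathscr{D}_{\alpha} \mathscr{D}_{\beta} u = \mathscr{D}_{\beta} \mathscr{D}_{\alpha} u$, we have \begin{align*} \mathscr{D}_{L_{(Geo)}} L_{(Geo)}^{\nu} & = - (g^{-1})^{\nu \beta} L_{(Geo)}^{\alpha} \mathscr{D}_{\beta} \mathscr{D}_{\alpha} u = \frac{1}{2} (g^{-1})^{\nu \beta} \mathscr{D}_{\beta} \left\lbrace (g^{-1})^{\alpha \mu} \partial_{\alpha} u \, \partial_{\mu} u \right\rbrace = 0. \end{align*} 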
It is convenient to work with a rescaled version of $L_{(Geo)}$ that we denote by $L$. Our proof will show that the Cartesian component functions $\lbrace L^{\alpha} \rbrace_{\alpha = 0,1,2}$ remain uniformly bounded up to the shock. \begin{definition}[\textbf{Rescaled null vectorfield}] \label{D:LUNITDEF} Let $\upmu$ be the inverse foliation density from Def.\ \ref{D:FIRSTUPMU}. We define \begin{align} \label{E:LUNITDEF} L & := \upmu L_{(Geo)}. \end{align} \end{definition} Note that $L$ is $g$-null since $L_{(Geo)}$ is. We also note that by \eqref{E:APPENDIXGEOMETRICTORUSCOORD}, we have $L \vartheta = 0$. We now define the vectorfields $X$, $\breve{X}$, and $N$, which are transversal to the characteristics $\mathcal{P}_u$. For our subsequent analysis, it is important that $\breve{X}$ is rescaled by a factor of $\upmu$. \begin{definition}[$X$, $\breve{X}$, \textbf{and} $N$] \label{D:RADANDXIDEFS} We define $X$ to be the unique vectorfield that is $\Sigma_t$-tangent, $g$-orthogonal to the $\ell_{t,u}$, and normalized by \begin{align} \label{E:GLUNITRADUNITISMINUSONE} g(L,X) = -1. \end{align} We define \begin{align} \label{E:RADDEF} \breve{X} := \upmu X. \end{align} We define \begin{align} \label{E:TIMENORMAL} N & := L + X. \end{align} \end{definition} In our analysis, we find it convenient to use the following two vectorfield frames. \begin{definition}[\textbf{Two frames}] \label{D:RESCALEDFRAME} We define, respectively, the rescaled frame and the non-rescaled frame as follows: \begin{subequations} \begin{align} \label{E:RESCALEDFRAME} & \lbrace L, \breve{X}, \Theta \rbrace, && \mbox{Rescaled frame}, \\ &\lbrace L, X, \Theta \rbrace, && \mbox{Non-rescaled frame}. \label{E:UNITFRAME} \end{align} \end{subequations} \end{definition} We now exhibit some basic properties of the above vectorfields. 
\begin{lemma}\cite{jSgHjLwW2016}*{Lemma~2.1; \textbf{Basic properties of} $L$, $X$, $\breve{X}$, \textbf{and} $N$} \label{L:BASICPROPERTIESOFFRAME} The following identities hold: \begin{subequations} \begin{align} \label{E:LUNITOFUANDT} L u & = 0, \qquad L t = L^0 = 1, \\ \breve{X} u & = 1, \qquad \breve{X} t = \breve{X}^0 = 0, \label{E:RADOFUANDT} \end{align} \end{subequations} \begin{subequations} \begin{align} \label{E:RADIALVECTORFIELDSLENGTHS} g(X,X) & = 1, \qquad g(\breve{X},\breve{X}) = \upmu^2, \\ g(L,X) & = -1, \qquad g(L,\breve{X}) = -\upmu. \label{E:LRADIALVECTORFIELDSNORMALIZATIONS} \end{align} \end{subequations} Moreover, relative to the geometric coordinates, we have \begin{align} \label{E:LISDDT} L = \frac{\partial}{\partial t}. \end{align} In addition, there exists an $\ell_{t,u}$-tangent vectorfield $\Xi = \upxi \Theta$ (where $\upxi$ is a scalar function) such that \begin{align} \label{E:RADSPLITINTOPARTTILAUANDXI} \breve{X} & = \frac{\partial}{\partial u} - \Xi = \frac{\partial}{\partial u} - \upxi \Theta. \end{align} The vectorfield $N$ defined in \eqref{E:TIMENORMAL} is future-directed, $g$-orthogonal to $\Sigma_t$ and is normalized by \begin{align} \label{E:TIMENORMALUNITLENGTH} g(N,N) & = - 1. \end{align} Moreover, relative to Cartesian coordinates, we have (for $\nu = 0,1,2$): \begin{align} \label{E:TIMENORMALRECTANGULAR} N^{\nu} = - (g^{-1})^{0 \nu}. \end{align} Finally, the following identities hold relative to the Cartesian coordinates (for $\nu = 0,1,2$): \begin{align} \label{E:DOWNSTAIRSUPSTAIRSSRADUNITPLUSLUNITISAFUNCTIONOFPSI} X_{\nu} & = - L_{\nu} - \delta_{\nu}^0, \qquad X^{\nu} = - L^{\nu} - (g^{-1})^{0\nu}, \end{align} where $\delta_{\nu}^0$ is the standard Kronecker delta. 
\end{lemma} \subsection{Projection tensorfields, \texorpdfstring{$G_{(Frame)}$}{frame components}, and projected Lie derivatives} \label{SS:PROJECTIONTENSORFIELDANDPROJECTEDLIEDERIVATIVES} Many of our constructions involve projections onto $\Sigma_t$ and $\ell_{t,u}$. \begin{definition}[\textbf{Projection tensorfields}] We define the $\Sigma_t$-projection tensorfield\footnote{In \eqref{E:SIGMATPROJECTION}, we have corrected a sign error that occurred in \cite{jSgHjLwW2016}*{Definition 2.8}.} $\underline{\Pi}$ and the $\ell_{t,u}$-projection tensorfield ${\Pi \mkern-12mu / } \, $ relative to Cartesian coordinates as follows: \begin{subequations} \begin{align} \underline{\Pi}_{\nu}^{\ \mu} &:= \delta_{\nu}^{\ \mu} + N_{\nu} N^{\mu} = \delta_{\nu}^{\ \mu} - \delta_{\nu}^{\ 0} L^{\mu} - \delta_{\nu}^{\ 0} X^{\mu}, \label{E:SIGMATPROJECTION} \\ {\Pi \mkern-12mu / } \, _{\nu}^{\ \mu} &:= \delta_{\nu}^{\ \mu} + X_{\nu} L^{\mu} + L_{\nu} (L^{\mu} + X^{\mu}) = \delta_{\nu}^{\ \mu} - \delta_{\nu}^{\ 0} L^{\mu} + L_{\nu} X^{\mu}, \label{E:LINEPROJECTION} \end{align} \end{subequations} where the second equalities in \eqref{E:SIGMATPROJECTION}-\eqref{E:LINEPROJECTION} follow from \eqref{E:TIMENORMAL} and \eqref{E:DOWNSTAIRSUPSTAIRSSRADUNITPLUSLUNITISAFUNCTIONOFPSI}. 
\end{definition} \begin{definition}[\textbf{Projections of tensorfields}] Given any spacetime tensorfield $\xi$, we define its $\Sigma_t$ projection $\underline{\Pi} \xi$ and its $\ell_{t,u}$ projection ${\Pi \mkern-12mu / } \, \xi$ as follows: \begin{subequations} \begin{align} (\underline{\Pi} \xi)_{\nu_1 \cdots \nu_n}^{\mu_1 \cdots \mu_m} & := \underline{\Pi}_{\widetilde{\mu}_1}^{\ \mu_1} \cdots \underline{\Pi}_{\widetilde{\mu}_m}^{\ \mu_m} \underline{\Pi}_{\nu_1}^{\ \widetilde{\nu}_1} \cdots \underline{\Pi}_{\nu_n}^{\ \widetilde{\nu}_n} \xi_{\widetilde{\nu}_1 \cdots \widetilde{\nu}_n}^{\widetilde{\mu}_1 \cdots \widetilde{\mu}_m}, \\ ({\Pi \mkern-12mu / } \, \xi)_{\nu_1 \cdots \nu_n}^{\mu_1 \cdots \mu_m} & := {\Pi \mkern-12mu / } \, _{\widetilde{\mu}_1}^{\ \mu_1} \cdots {\Pi \mkern-12mu / } \, _{\widetilde{\mu}_m}^{\ \mu_m} {\Pi \mkern-12mu / } \, _{\nu_1}^{\ \widetilde{\nu}_1} \cdots {\Pi \mkern-12mu / } \, _{\nu_n}^{\ \widetilde{\nu}_n} \xi_{\widetilde{\nu}_1 \cdots \widetilde{\nu}_n}^{\widetilde{\mu}_1 \cdots \widetilde{\mu}_m}. \label{E:STUPROJECTIONOFATENSOR} \end{align} \end{subequations} \end{definition} We say that a spacetime tensorfield $\xi$ is $\Sigma_t$-tangent (respectively $\ell_{t,u}$-tangent) if $\underline{\Pi} \xi = \xi$ (respectively if ${\Pi \mkern-12mu / } \, \xi = \xi$). Alternatively, we say that $\xi$ is a $\Sigma_t$ tensor (respectively $\ell_{t,u}$ tensor). \begin{definition}[\textbf{$\ell_{t,u}$ projection notation}] \label{D:STUSLASHPROJECTIONNOTATION} If $\xi$ is a spacetime tensor, then we define \begin{align} \label{E:TENSORSTUPROJECTED} { {\xi \mkern-9mu /} \, } := {\Pi \mkern-12mu / } \, \xi. 
\end{align} If $\xi$ is a symmetric type $\binom{0}{2}$ spacetime tensor and $V$ is a spacetime vector, then we define \begin{align} \label{E:TENSORVECTORANDSTUPROJECTED} \angxiarg{V} & := {\Pi \mkern-12mu / } \, (\xi_V), \end{align} where $\xi_V$ is the spacetime co-vector with Cartesian components $\xi_{\alpha \nu} V^{\alpha}$, $(\nu = 0,1,2)$. \end{definition} We often refer to the following arrays of $\ell_{t,u}$-tangent tensorfields in our analysis. \begin{definition}[\textbf{Components of $G$ and $G'$ relative to the non-rescaled frame}] \label{D:GFRAMEARRAYS} We define \[ G_{(Frame)} := \left(G_{L L}, G_{L X}, G_{X X}, \angGarg{L}, \angGarg{X}, {{G \mkern-12mu /} \, } \right) \] to be the array of components of the tensorfield $G$ defined in \eqref{E:BIGGDEF} relative to the non-rescaled frame \eqref{E:UNITFRAME}. Similarly, we define $G_{(Frame)}'$ to be the analogous array for the tensorfield $G'$ defined in \eqref{E:BIGGDEF}. \end{definition} Throughout, $\mathcal{L}_V \xi$ denotes the Lie derivative of the tensorfield $\xi$ with respect to $V$. If $V$ and $W$ are both vectorfields, then we often use the standard Lie bracket notation $[V,W] := \mathcal{L}_V W$. In our analysis, we often differentiate various quantities with the projected Lie derivatives featured in the following definition. \begin{definition}[$\ell_{t,u}$ \textbf{and} $\Sigma_t$-\textbf{projected Lie derivatives}] \label{D:PROJECTEDLIE} Given a tensorfield $\xi$ and a vectorfield $V$, we define the $\Sigma_t$-projected Lie derivative $\underline{\mathcal{L}}_V \xi$ of $\xi$ and the $\ell_{t,u}$-projected Lie derivative $ { \mathcal{L} \mkern-10mu / } _V \xi$ of $\xi$ as follows: \begin{align} \underline{\mathcal{L}}_V \xi & := \underline{\Pi} \mathcal{L}_V \xi, \qquad { \mathcal{L} \mkern-10mu / } _V \xi := {\Pi \mkern-12mu / } \, \mathcal{L}_V \xi. 
\label{E:PROJECTIONS} \end{align} \end{definition} \subsection{First and second fundamental forms, the trace of a tensorfield, covariant differential operators, and the geometric torus differential} \begin{definition}[\textbf{First fundamental forms}] \label{D:FIRSTFUND} We define the first fundamental form $\underline{g}$ of $\Sigma_t$ and the first fundamental form $g \mkern-8.5mu / $ of $\ell_{t,u}$ as follows: \begin{align} \underline{g} & := \underline{\Pi} g, && g \mkern-8.5mu / := {\Pi \mkern-12mu / } \, g. \label{E:GTANDGSPHERESPHEREDEF} \end{align} We define the corresponding inverse first fundamental forms by raising the indices with $g^{-1}$: \begin{align} (\underline{g}^{-1})^{\mu \nu} & := (g^{-1})^{\mu \alpha} (g^{-1})^{\nu \beta} \underline{g}_{\alpha \beta}, && (g \mkern-8.5mu / ^{-1})^{\mu \nu} := (g^{-1})^{\mu \alpha} (g^{-1})^{\nu \beta} g \mkern-8.5mu / _{\alpha \beta}. \label{E:GGTINVERSEANDGSPHEREINVERSEDEF} \end{align} \end{definition} \begin{definition}[\textbf{$g \mkern-8.5mu / $-trace of a tensorfield}] If $\xi$ is a type $\binom{0}{2}$ $\ell_{t,u}$-tangent tensor, then ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \xi := (\gsphere^{-1})^{\alpha \beta} \xi_{\alpha \beta}$ denotes its $g \mkern-8.5mu / $-trace. \end{definition} \begin{definition}[\textbf{Differential operators associated to the metrics}] \label{D:CONNECTIONS} We use the following notation for various differential operators associated to the spacetime metric $g$ and the Riemannian metric $g \mkern-8.5mu / $ induced on the $\ell_{t,u}$. \begin{itemize} \item $\mathscr{D}$ denotes the Levi--Civita connection of the spacetime metric $g$. \item $ {\nabla \mkern-14mu / \,} $ denotes the Levi--Civita connection of $g \mkern-8.5mu / $. \item If $\xi$ is an $\ell_{t,u}$-tangent one-form, then $\mbox{\upshape{div} $\mkern-17mu /$\,} \xi$ is the scalar-valued function $\mbox{\upshape{div} $\mkern-17mu /$\,} \xi := \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} \xi$. 
\item Similarly, if $V$ is an $\ell_{t,u}$-tangent vectorfield, then $\mbox{\upshape{div} $\mkern-17mu /$\,} V := \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} V_{\flat}$, where $V_{\flat}$ is the one-form $g \mkern-8.5mu / $-dual to $V$. \item If $\xi$ is a symmetric type $\binom{0}{2}$ $\ell_{t,u}$-tangent tensorfield, then $\mbox{\upshape{div} $\mkern-17mu /$\,} \xi$ is the $\ell_{t,u}$-tangent one-form $\mbox{\upshape{div} $\mkern-17mu /$\,} \xi := \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} \xi$, where the two contraction indices in $ {\nabla \mkern-14mu / \,} \xi$ correspond to the operator $ {\nabla \mkern-14mu / \,} $ and the first index of $\xi$. \end{itemize} \end{definition} \begin{definition}[\textbf{Covariant wave operator and Laplacian}] \label{D:WAVEOPERATORSANDLAPLACIANS} We use the following standard notation. \begin{itemize} \item $\square_g := (g^{-1})^{\alpha \beta} \mathscr{D}_{\alpha \beta}^2$ denotes the covariant wave operator corresponding to the spacetime metric $g$. \item $ {\Delta \mkern-12mu / \, } := \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} ^2$ denotes the covariant Laplacian corresponding to $g \mkern-8.5mu / $. \end{itemize} \end{definition} \begin{definition}[\textbf{Geometric torus differential}] \label{D:ANGULARDIFFERENTIAL} If $f$ is a scalar function on $\ell_{t,u}$, then $ {{d \mkern-9mu /} } f := {\nabla \mkern-14mu / \,} f = {\Pi \mkern-12mu / } \, \mathscr{D} f$, where $\mathscr{D} f$ is the gradient one-form associated to $f$. \end{definition} Def.~\ref{D:ANGULARDIFFERENTIAL} allows us to avoid potentially confusing notation such as $ {\nabla \mkern-14mu / \,} L^i$ by instead using $ {{d \mkern-9mu /} } L^i$; the latter notation clarifies that $L^i$ is to be viewed as a scalar Cartesian component function under differentiation. 
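We also note the following simple consequence of Defs.~\ref{D:CONNECTIONS}-\ref{D:ANGULARDIFFERENTIAL}: for scalar functions $f$, the operators fit together in the expected way, that is, \begin{align*} {\Delta \mkern-12mu / \, } f = \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} ^2 f = \gsphere^{-1} \cdot {\nabla \mkern-14mu / \,} ( {{d \mkern-9mu /} } f) = \mbox{\upshape{div} $\mkern-17mu /$\,} {{d \mkern-9mu /} } f. \end{align*} 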
\begin{definition}[\textbf{Second fundamental forms}] We define the second fundamental form $k$ of $\Sigma_t$, which is a symmetric type $\binom{0}{2}$ $\Sigma_t$-tangent tensorfield, by \begin{align} \label{E:SECONDFUNDSIGMATDEF} k &:= \frac{1}{2} \underline{\mathcal{L}}_{N} \underline{g}. \end{align} We define the null second fundamental form $\upchi$, which is a symmetric type $\binom{0}{2}$ $\ell_{t,u}$-tangent tensorfield, by \begin{align} \label{E:CHIDEF} \upchi & := \frac{1}{2} { \mathcal{L} \mkern-10mu / } _{L} g \mkern-8.5mu / . \end{align} \end{definition} \subsection{Identities for various tensorfields} \label{SS:EXPRESSIONSFORCONNECTIONCOEFFICIETNS} We now provide some identities for various $\ell_{t,u}$ tensorfields. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.3; \textbf{Alternate expressions for the second fundamental forms}} We have the following identities: \begin{subequations} \begin{align} \upchi_{\Theta \Theta} & = g(\mathscr{D}_{\Theta} L, \Theta), \label{E:CHIUSEFULID} && \angkdoublearg{X}{\Theta} = g(\mathscr{D}_{\Theta} L, X). \end{align} \end{subequations} \end{lemma} In the next lemma, we decompose some $\ell_{t,u}$-tangent tensorfields that arise in our analysis. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.13; \textbf{Expressions for} $\upzeta$ and $ { {k \mkern-10mu /} \, } $} Let $\upzeta$ be the $\ell_{t,u}$ one-form defined by \begin{align} \label{E:ZETADEF} \upzeta_{\Theta} & := \angkdoublearg{X}{\Theta}. 
\end{align} We have the following identities for the $\ell_{t,u}$ tensorfields $ { {k \mkern-10mu /} \, } $ and $\upzeta$: \begin{subequations} \begin{align} \upzeta & = \upmu^{-1} \upzeta^{(Trans-\Psi)} + \upzeta^{(Tan-\Psi)}, \label{E:ZETADECOMPOSED} \\ { {k \mkern-10mu /} \, } & = \upmu^{-1} { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} + { {k \mkern-10mu /} \, } ^{(Tan-\Psi)}, \label{E:ANGKDECOMPOSED} \end{align} \end{subequations} where \begin{subequations} \begin{align} \upzeta^{(Trans-\Psi)} & := - \frac{1}{2} \angGarg{L} \breve{X} \Psi, \label{E:ZETATRANSVERSAL} \\ { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} & := \frac{1}{2} {{G \mkern-12mu /} \, } \breve{X} \Psi, \label{E:KABTRANSVERSAL} \end{align} \end{subequations} and \begin{subequations} \begin{align} \upzeta^{(Tan-\Psi)} & := \frac{1}{2} \angGarg{X} L \Psi - \frac{1}{2} G_{L X} {{d \mkern-9mu /} } \Psi - \frac{1}{2} G_{X X} {{d \mkern-9mu /} } \Psi, \label{E:ZETAGOOD} \\ { {k \mkern-10mu /} \, } ^{(Tan-\Psi)} & := \frac{1}{2} {{G \mkern-12mu /} \, } L \Psi - \frac{1}{2} \angGarg{L} \otimes {{d \mkern-9mu /} } \Psi - \frac{1}{2} {{d \mkern-9mu /} } \Psi \otimes \angGarg{L} - \frac{1}{2} \angGarg{X} \otimes {{d \mkern-9mu /} } \Psi - \frac{1}{2} {{d \mkern-9mu /} } \Psi \otimes \angGarg{X}. \label{E:KABGOOD} \end{align} \end{subequations} \end{lemma} \subsection{Metric decompositions} \label{SS:METRICEXPRESSIONS} \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.4; \textbf{Expressions for $g$ and $g^{-1}$ in terms of the non-rescaled frame}} \label{L:METRICDECOMPOSEDRELATIVETOTHEUNITFRAME} We have the following identities: \begin{subequations} \begin{align} g_{\mu \nu} & = - L_{\mu} L_{\nu} - ( L_{\mu} X_{\nu} + X_{\mu} L_{\nu} ) + g \mkern-8.5mu / _{\mu \nu} \label{E:METRICFRAMEDECOMPLUNITRADUNITFRAME}, \\ (g^{-1})^{\mu \nu} & = - L^{\mu} L^{\nu} - ( L^{\mu} X^{\nu} + X^{\mu} L^{\nu} ) + (\gsphere^{-1})^{\mu \nu}. 
\label{E:GINVERSEFRAMEWITHRECTCOORDINATESFORGSPHEREINVERSE} \end{align} \end{subequations} \end{lemma} The following scalar function captures the ``$\ell_{t,u}$ part of $g$''. \begin{definition}[\textbf{The metric component} $\upsilon$] \label{D:METRICANGULARCOMPONENT} We define the scalar function $\upsilon > 0$ by \begin{align} \label{E:METRICANGULARCOMPONENT} \upsilon^2 & := g(\Theta,\Theta) = g \mkern-8.5mu / (\Theta,\Theta). \end{align} \end{definition} \subsection{The change of variables map} \label{SS:CHOV} In this subsection, we define the change of variables map between the geometric and Cartesian coordinates and illustrate some of its basic properties. \begin{definition}\label{D:CHOVMAP} We define $\Upsilon: [0,T) \times [0,U_0] \times \mathbb{T} \rightarrow \mathcal{M}_{T,U_0}$, $\Upsilon(t,u,\vartheta) := (t,x^1,x^2)$, to be the change of variables map from geometric to Cartesian coordinates. \end{definition} \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.7; \textbf{Basic properties of the change of variables map}} \label{L:CHOV} We have the following expression for the Jacobian of $\Upsilon$: \begin{align} \label{E:CHOV} \frac{\partial \Upsilon}{\partial (t,u,\vartheta)} & := \frac{\partial (x^0,x^1,x^2)}{\partial (t,u,\vartheta)} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ L^1 & \breve{X}^1 + \Xi^1 & \Theta^1 \\ L^2 & \breve{X}^2 + \Xi^2 & \Theta^2 \\ \end{array} \right). 
\end{align} Moreover, we have\footnote{There was a typo in \cite{jSgHjLwW2016}*{Lemma 2.7} in that the minus sign was missing from the RHS of the analog of \eqref{E:JACOBIAN}.} \begin{align} \label{E:JACOBIAN} \mbox{\upshape{det}} \frac{\partial (x^0,x^1,x^2)}{\partial (t,u,\vartheta)} = \mbox{\upshape{det}} \frac{\partial (x^1,x^2)}{\partial (u,\vartheta)} = - \upmu (\mbox{\upshape{det}} \underline{g}_{ij})^{-1/2} \upsilon, \end{align} where $\upsilon$ is the metric component from Def.~\ref{D:METRICANGULARCOMPONENT} and $(\mbox{\upshape{det}} \underline{g}_{ij})^{-1/2}$ is a smooth function of $\Psi$ in a neighborhood of $0$ with $(\mbox{\upshape{det}} \underline{g}_{ij})^{-1/2}(\Psi=0) = 1$. In \eqref{E:JACOBIAN}, $\underline{g}$ is viewed as the Riemannian metric on $\Sigma_t^{U_0}$ defined by \eqref{E:GTANDGSPHERESPHEREDEF} and $\mbox{\upshape{det}} \underline{g}_{ij}$ is the determinant of the corresponding $2 \times 2$ matrix of components of $\underline{g}$ relative to the Cartesian spatial coordinates. \end{lemma} \subsection{Commutation vectorfields and a basic vectorfield commutation identity} \label{SS:COMMUTATIONVECTORFIELDS} In this subsection, we define the commutation vectorfields that we use when commuting equations to obtain estimates for the solution's derivatives. \begin{definition}[\textbf{The vectorfields} $Y_{(Flat)}$ \textbf{and} $Y$] \label{D:ANGULARVECTORFIELDS} We define the Cartesian components of the $\Sigma_t$-tangent vectorfields $Y_{(Flat)}$ and $Y$ as follows ($i=1,2$): \begin{align} Y_{(Flat)}^i &: = \delta_2^i, \label{E:GEOANGEUCLIDEAN} \\ Y^i & := {\Pi \mkern-12mu / } \, _a^{\ i} Y_{(Flat)}^a = {\Pi \mkern-12mu / } \, _2^{\ i}, \label{E:GEOANGDEF} \end{align} where ${\Pi \mkern-12mu / } \, $ is the $\ell_{t,u}$ projection tensorfield defined in \eqref{E:LINEPROJECTION}. \end{definition} To derive estimates for the solution's derivatives, we commute the equations with the elements of the following two sets of vectorfields. 
\begin{definition}[\textbf{Commutation vectorfields}] \label{D:COMMUTATIONVECTORFIELDS} We define the commutation set $\mathscr{Z}$ as follows: \begin{subequations} \begin{align} \label{E:COMMUTATIONVECTORFIELDS} \mathscr{Z} := \lbrace L, \breve{X}, Y \rbrace, \end{align} where $L$, $\breve{X}$, and $Y$ are respectively defined by \eqref{E:LUNITDEF}, \eqref{E:RADDEF}, and \eqref{E:GEOANGDEF}. We define the $\mathcal{P}_u$-tangent commutation set $\mathscr{P}$ as follows: \begin{align} \label{E:TANGENTIALCOMMUTATIONVECTORFIELDS} \mathscr{P} := \lbrace L, Y \rbrace. \end{align} \end{subequations} \end{definition} We use the following commutation identity throughout our analysis. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.10; $L$, $\breve{X}$, $Y$ \textbf{commute with} $ {{d \mkern-9mu /} }$} \label{L:LANDRADCOMMUTEWITHANGDIFF} For scalar functions $f$ and $V \in \lbrace L, \breve{X}, Y \rbrace$, we have \begin{align} \label{E:ANGLIECOMMUTESWITHANGDIFF} { \mathcal{L} \mkern-10mu / } _V {{d \mkern-9mu /} } f & = {{d \mkern-9mu /} } V f. \end{align} \end{lemma} The following quantities are convenient to study because for the solutions that we study in our main theorem, they are small. \begin{definition}[\textbf{Perturbed part of various vectorfields}] \label{D:PERTURBEDPART} For $i=1,2$, we define the following scalar functions: \begin{align} \label{E:PERTURBEDPART} L_{(Small)}^i & := L^i - \delta_1^i, \qquad X_{(Small)}^i := X^i + \delta_1^i, \qquad Y_{(Small)}^i := Y^i - \delta_2^i. \end{align} The vectorfields $L$, $X$, and $Y$ in \eqref{E:PERTURBEDPART} are defined in Defs.~\ref{D:LUNITDEF}, \ref{D:RADANDXIDEFS}, and \ref{D:ANGULARVECTORFIELDS}. \end{definition} In the next lemma, we characterize the discrepancy between $Y_{(Flat)}$ and $Y$. 
\begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.8; \textbf{Decomposition of} $Y_{(Flat)}$} \label{L:GEOANGDECOMPOSITION} We can decompose $Y_{(Flat)}$ into an $\ell_{t,u}$-tangent vectorfield and a vectorfield parallel to $X$ as follows: there exists a scalar function $\uprho$ such that \begin{subequations} \begin{align} \label{E:GEOANGINTERMSOFEUCLIDEANANGANDRADUNIT} Y_{(Flat)}^i & = Y^i + \uprho X^i, \\ Y_{(Small)}^i & = - \uprho X^i. \label{E:GEOANGSMALLINTERMSOFRADUNIT} \end{align} \end{subequations} Moreover, we have\footnote{In the last term in equation \eqref{E:FLATYDERIVATIVERADIALCOMPONENT}, we have corrected a sign error that occurred in \cite{jSgHjLwW2016}*{Equation (2.55)}.} \begin{align} \label{E:FLATYDERIVATIVERADIALCOMPONENT} \uprho = g(Y_{(Flat)},X) = g_{ab} Y_{(Flat)}^a X^b = g_{2a} X^a = g_{21}^{(Small)} X^1 + g_{22} X_{(Small)}^2. \end{align} \end{lemma} \subsection{Deformation tensors} \label{SS:BASICVECTORFIELDCOMMUTATOR} In this subsection, we provide the standard definition of the deformation tensor of a vectorfield. \begin{definition}[\textbf{Deformation tensor of a vectorfield} $V$] \label{D:DEFORMATIONTENSOR} If $V$ is a spacetime vectorfield, then its deformation tensor $\deform{V}$ (relative to the spacetime metric $g$) is the symmetric type $\binom{0}{2}$ spacetime tensorfield \begin{align} \label{E:DEFORMATIONTENSOR} \deformarg{V}{\alpha}{\beta} := \mathcal{L}_V g_{\alpha \beta} = \mathscr{D}_{\alpha} V_{\beta} + \mathscr{D}_{\beta} V_{\alpha}, \end{align} where the last equality in \eqref{E:DEFORMATIONTENSOR} is a well-known consequence of the torsion-free property of the connection $\mathscr{D}$. 
\end{definition} \subsection{Transport equations for the eikonal function quantities} \label{SS:TRANSPORTEQUATIONSFOREIKONALFUNCTION} In this subsection, we provide the main evolution equations that we use to control $\upmu$, the Cartesian component functions $\lbrace L_{(Small)}^a \rbrace_{a=1,2}$, the $\ell_{t,u}$-tensor $\upchi$, and their derivatives, except at the top derivative level. We sometimes refer to these tensorfields as the \emph{eikonal function quantities} since they are constructed out of the eikonal function. To control their top-order derivatives, one needs to rely on the modified quantities described in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma~2.12; \textbf{The transport equations verified by} $\upmu$ \textbf{and} $L_{(Small)}^i$} \label{L:UPMUANDLUNITIFIRSTTRANSPORT} The inverse foliation density $\upmu$ defined in \eqref{E:FIRSTUPMU} verifies the following transport equation: \begin{align} \label{E:UPMUFIRSTTRANSPORT} L \upmu & = \frac{1}{2} G_{L L} \breve{X} \Psi - \frac{1}{2} \upmu G_{L L} L \Psi - \upmu G_{L X} L \Psi. \end{align} Moreover, the scalar-valued Cartesian component functions $L_{(Small)}^i$, ($i=1,2$), defined in \eqref{E:PERTURBEDPART}, verify the following transport equation: \begin{align} L L_{(Small)}^i & = - \frac{1}{2} G_{L L} (L \Psi) L^i - \frac{1}{2} G_{L L} (L \Psi) (g^{-1})^{0i} - \angGmixedarg{L}{\#} \cdot ( {{d \mkern-9mu /} } x^i) (L \Psi) + \frac{1}{2} G_{L L} (\angdiffuparg{\#} \Psi) \cdot {{d \mkern-9mu /} } x^i. \label{E:LLUNITI} \end{align} \end{lemma} The next lemma provides a useful identity for $\breve{X} L_{(Small)}^i$. 
\begin{lemma}\cite{jSgHjLwW2016}*{Lemma~2.14; \textbf{Formula for} $\breve{X} L_{(Small)}^i$} \label{L:RADLUNITI} We have the following identity for the scalar-valued functions $L_{(Small)}^i$, ($i=1,2$): \begin{align} \breve{X} L_{(Small)}^i & = \left\lbrace - \frac{1}{2} G_{L L} \breve{X} \Psi + \frac{1}{2} \upmu G_{L L} L \Psi + \upmu G_{L X} L \Psi + \frac{1}{2} \upmu G_{X X} L \Psi \right\rbrace L^i \label{E:RADLUNITI} \\ & \ \ + \left\lbrace - \frac{1}{2} G_{L L} \breve{X} \Psi + \frac{1}{2} \upmu G_{L L} L \Psi + \upmu G_{L X} L \Psi + \frac{1}{2} \upmu G_{X X} L \Psi \right\rbrace (g^{-1})^{0i} \notag \\ & \ \ - \left\lbrace \angGmixedarg{L}{\#} \breve{X} \Psi + \frac{1}{2} \upmu G_{X X} \angdiffuparg{\#} \Psi \right\rbrace \cdot {{d \mkern-9mu /} } x^i + (\angdiffuparg{\#} \upmu) \cdot {{d \mkern-9mu /} } x^i. \notag \end{align} \end{lemma} \subsection{Useful expressions for the null second fundamental form} The identities provided by the following lemma are convenient for deriving estimates for $\upchi$ and related quantities. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma 2.15; \textbf{Identities involving} $\upchi$} \label{L:IDFORCHI} We have the following identities: \begin{subequations} \begin{align} \label{E:CHIINTERMSOFOTHERVARIABLES} \upchi & = g_{ab} ( {{d \mkern-9mu /} } L^a) \otimes {{d \mkern-9mu /} } x^b + \frac{1}{2} {{G \mkern-12mu /} \, } L \Psi, \\ {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi & = g_{ab} \gsphere^{-1} \cdot \left\lbrace ( {{d \mkern-9mu /} } L^a) \otimes {{d \mkern-9mu /} } x^b \right\rbrace + \frac{1}{2} \gsphere^{-1} \cdot {{G \mkern-12mu /} \, } L \Psi, \label{E:TRCHIINTERMSOFOTHERVARIABLES} \\ L \ln \upsilon & = {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi, \label{E:LDERIVATIVEOFVOLUMEFORMFACTOR} \end{align} \end{subequations} where $\upchi$ is the $\ell_{t,u}$-tangent tensorfield defined by \eqref{E:CHIDEF} and $\upsilon$ is the metric component from Def.~\ref{D:METRICANGULARCOMPONENT}. 
\end{lemma} \subsection{Arrays of unknowns and schematic notation} \label{SS:ARRAYS} We often use the following arrays for convenient shorthand notation when analyzing quantities that are tied to the fast wave variable and the eikonal function quantities. \begin{definition}[\textbf{Shorthand notation for the fast wave variable and the eikonal function quantities}] \label{D:ABBREIVATEDVARIABLES} We define the following arrays $\upgamma$ and $\underline{\upgamma}$ of scalar functions: \begin{subequations} \begin{align} \upgamma & := \left(\Psi, L_{(Small)}^1, L_{(Small)}^2 \right), \label{E:GOODABBREIVATEDVARIABLES} \\ \underline{\upgamma} & := \left(\Psi, \upmu - 1, L_{(Small)}^1, L_{(Small)}^2 \right). \label{E:BADABBREIVATEDVARIABLES} \end{align} \end{subequations} \end{definition} \begin{notation}[\textbf{Schematic functional dependence}] \label{N:SCHEMATICTENSORFIELDPRODUCTS} In the remainder of the article, we use the notation $\mathrm{f}(\xi_{(1)},\xi_{(2)},\cdots,\xi_{(m)})$ to schematically depict an expression (often tensorial and involving contractions) that depends smoothly on the $\ell_{t,u}$-tangent tensorfields $\xi_{(1)}, \xi_{(2)}, \cdots, \xi_{(m)}$. Note that in general, $\mathrm{f}(0) \neq 0$. \end{notation} \begin{notation}[\textbf{The meaning of the symbol} $P$] Throughout, $P$ schematically denotes a differential operator that is tangent to the characteristics $\mathcal{P}_u$, such as $L$, $Y$, or $ {{d \mkern-9mu /} }$. For example, $P f$ might denote $ {{d \mkern-9mu /} } f$ or $L f$. We use such notation when the precise details of $P$ are not important. \end{notation} Many of the geometric tensorfields that we have defined can be expressed as functions of $\upgamma$, $\underline{\upgamma}$, $\gsphere^{-1}$, and $\lbrace {{d \mkern-9mu /} } x^a \rbrace_{a=1,2}$. When deriving estimates for these tensorfields, it often will suffice for us to have only crude information about the functional dependence. 
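As a simple illustration of the schematic notation, consider the scalar function $\uprho$ from Lemma~\ref{L:GEOANGDECOMPOSITION}: by \eqref{E:FLATYDERIVATIVERADIALCOMPONENT}, we have \begin{align*} \uprho & = g_{21}^{(Small)} X^1 + g_{22} X_{(Small)}^2, \end{align*} where $X^1$ and $g_{22}$ are smooth functions of $\upgamma$, while $g_{21}^{(Small)}$ and $X_{(Small)}^2$ are smooth functions of $\upgamma$ that vanish when $\upgamma = 0$; this is precisely the content of the schematic relation $\uprho = \mathrm{f}(\upgamma) \upgamma$ appearing in \eqref{E:LINEARLYSMALLSCALARSDEPENDINGONGOODVARIABLES} below. 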
The next lemma is the main result in this direction. \begin{lemma}[\textbf{Schematic structure of various scalar functions and tensorfields}] \label{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} We have the following schematic relations for scalar functions: \begin{subequations} \begin{align} \label{E:SCALARSDEPENDINGONGOODVARIABLES} g_{\alpha \beta}, (g^{-1})^{\alpha \beta}, (\underline{g}^{-1})^{\alpha \beta}, g \mkern-8.5mu / _{\alpha \beta}, (\gsphere^{-1})^{\alpha \beta}, G_{\alpha \beta}, G_{\alpha \beta}', {\Pi \mkern-12mu / } \, _{\beta}^{\ \alpha}, L^{\alpha}, X^{\alpha}, Y^{\alpha} & = \mathrm{f}(\upgamma), \\ G_{L L}, G_{L X}, G_{X X}, G_{L L}', G_{L X}', G_{X X}' & = \mathrm{f}(\upgamma), \label{E:GFRAMESCALARSDEPENDINGONGOODVARIABLES} \\ g_{\alpha \beta}^{(Small)}, Y_{(Small)}^{\alpha}, X_{(Small)}^{\alpha}, \uprho & = \mathrm{f}(\upgamma) \upgamma, \label{E:LINEARLYSMALLSCALARSDEPENDINGONGOODVARIABLES} \\ \breve{X}^{\alpha} & = \mathrm{f}(\underline{\upgamma}). \label{E:SCALARSDEPENDINGONBADVARIABLES} \end{align} \end{subequations} Moreover, we have the following schematic relations for $\ell_{t,u}$-tangent tensorfields: \begin{subequations} \begin{align} g \mkern-8.5mu / , \angGarg{L}, \angGarg{X}, {{G \mkern-12mu /} \, }, \angGprimearg{L}, \angGprimearg{X}, {{ {G'} \mkern-16mu /} \, \, } & = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2), \label{E:TENSORSDEPENDINGONGOODVARIABLES} \\ Y & = \mathrm{f}(\upgamma,\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2), \label{E:TENSORSDEPENDINGONGOODVARIABLESANDGINVERSESPHERE} \\ \upzeta^{(Tan-\Psi)}, { {k \mkern-10mu /} \, } ^{(Tan-\Psi)} & = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) P \Psi, \label{E:TENSORSDEPENDINGONGOODVARIABLESGOODPSIDERIVATIVES} \\ \upzeta^{(Trans-\Psi)}, { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} & = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) \breve{X} \Psi, 
\label{E:TENSORSDEPENDINGONGOODVARIABLESBADDERIVATIVES} \\ \upchi & = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) P \upgamma, \label{E:TENSORSDEPENDINGONGOODVARIABLESGOODDERIVATIVES} \\ {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi & = \mathrm{f}(\upgamma,\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) P \upgamma. \label{E:TENSORSDEPENDINGONGOODVARIABLESGOODDERIVATIVESANDGINVERSESPHERE} \end{align} \end{subequations} Finally, the null form $\mathcal{Q}^g(\partial \Psi, \partial \Psi)$ defined in \eqref{E:STANDARDNULLFORM}, upon being multiplied by $\upmu$, has the following schematic structure: \begin{align} \label{E:UPMUTIMESNULLFORMSSCHEMATIC} \upmu \mathcal{Q}^g(\partial \Psi, \partial \Psi) & = \mathrm{f}(\underline{\upgamma},\breve{X} \Psi,P \Psi) P \Psi. \end{align} \end{lemma} \begin{proof} All relations were proved in \cite{jSgHjLwW2016}*{Lemma 2.19} except for \eqref{E:UPMUTIMESNULLFORMSSCHEMATIC}. \eqref{E:UPMUTIMESNULLFORMSSCHEMATIC} follows from the identity $ g^{-1} = - L \otimes L - L \otimes X - X \otimes L + \frac{1}{g_{ab} Y^a Y^b} Y \otimes Y $ (which follows easily from \eqref{E:GINVERSEFRAMEWITHRECTCOORDINATESFORGSPHEREINVERSE}) and the other schematic relations provided by the lemma. \end{proof} \subsection{Frame decomposition of the wave operator} \label{SS:FRAMEDCOMPOFBOX} In the following proposition, we decompose $\upmu \square_{g(\Psi)} f$ relative to the rescaled frame \eqref{E:RESCALEDFRAME}. \begin{proposition}\cite{jSgHjLwW2016}*{Proposition~2.16; \textbf{Frame decomposition of $\upmu \square_{g(\Psi)} f$}} \label{P:GEOMETRICWAVEOPERATORFRAMEDECOMPOSED} Let $f$ be a scalar function. 
Then relative to the rescaled frame $\lbrace L, \breve{X}, \Theta \rbrace$, $\upmu \square_{g(\Psi)} f$ can be expressed in either of the following two forms: \begin{subequations} \begin{align} \label{E:LONOUTSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED} \upmu \square_{g(\Psi)} f & = - L(\upmu L f + 2 \breve{X} f) + \upmu {\Delta \mkern-12mu / \, } f - {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \breve{X} f - \upmu {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } L f - 2 \upmu \upzeta^{\#} \cdot {{d \mkern-9mu /} } f, \\ & = - (\upmu L + 2 \breve{X}) (L f) + \upmu {\Delta \mkern-12mu / \, } f - {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \breve{X} f - \upomega L f + 2 \upmu \upzeta^{\#} \cdot {{d \mkern-9mu /} } f + 2 (\angdiffuparg{\#} \upmu) \cdot {{d \mkern-9mu /} } f, \label{E:LONINSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED} \end{align} \end{subequations} where the $\ell_{t,u}$-tangent tensorfields $\upchi$, $\upzeta$, and $ { {k \mkern-10mu /} \, } $ can be expressed via \eqref{E:CHIINTERMSOFOTHERVARIABLES}, \eqref{E:ZETADECOMPOSED}, and \eqref{E:ANGKDECOMPOSED}. \end{proposition} \subsection{Relationship between Cartesian and geometric partial derivative vectorfields} \label{SS:CARTESIANVSGEOMETRICPARTIALDERIVATIVES} In the next lemma, we provide explicit expressions for the Cartesian coordinate partial derivative vectorfields $\partial_{\nu}$ as (solution-dependent) linear combinations of the commutation vectorfields of Def.~\ref{D:COMMUTATIONVECTORFIELDS}. 
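As a sanity check of the type of decomposition provided by the next lemma, note that the spatial identity is simply the expansion of $\partial_i$ against the $g$-orthogonal pair $\lbrace X, Y \rbrace$ with $g(X,X) = 1$. The following numerical sketch (our own illustration; the randomly generated metric and vectors are hypothetical stand-ins for the actual solution-dependent quantities) verifies this expansion for a generic $2 \times 2$ Riemannian metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 symmetric positive-definite spatial metric g_{ab}.
A = rng.normal(size=(2, 2))
g = A @ A.T + 2.0 * np.eye(2)

# A g-unit vector X and a vector Y that is g-orthogonal to X.
X = rng.normal(size=2)
X = X / np.sqrt(X @ g @ X)
Y = rng.normal(size=2)
Y = Y - (Y @ g @ X) * X  # now g(X, Y) = 0, since g(X, X) = 1

for i in range(2):
    e = np.zeros(2)
    e[i] = 1.0  # the Cartesian coordinate vector corresponding to \partial_i
    alpha = (g @ e) @ X               # g_{ai} X^a
    beta = (g @ e) @ Y / (Y @ g @ Y)  # g_{ai} Y^a / (g_{cd} Y^c Y^d)
    assert np.allclose(e, alpha * X + beta * Y)
```

The coefficients are obtained exactly as in the proof below: one takes the $g$-inner product of $\partial_i = \upalpha_i X + \upbeta_i Y$ with $X$ and with $Y$.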
\begin{lemma}[{\textbf{Expression for} $\partial_{\nu}$ \textbf{in terms of geometric vectorfields}}] \label{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} We can express the Cartesian coordinate partial derivative vectorfields in terms of $L$, $X$, and $Y$ as follows, $(i=1,2)$: \begin{subequations} \begin{align} \partial_t & = L - (g_{\alpha 0} L^{\alpha}) X + \left( \frac{g_{a0} Y^a}{g_{cd} Y^c Y^d} \right) Y, \label{E:PARTIALTINTERMSOFLUNITRADUNITANDGEOANG} \\ \partial_i & = (g_{ai} X^a) X + \left( \frac{g_{ai} Y^a}{g_{cd} Y^c Y^d} \right) Y. \label{E:PARTIALIINTERMSOFRADUNITANDGEOANG} \end{align} \end{subequations} \end{lemma} \begin{proof} We expand $\partial_i = \upalpha_i X + \upbeta_i Y$ for scalars $\upalpha_i$ and $\upbeta_i$. Taking the $g$-inner product of each side with respect to $X$, we obtain $\upalpha_i = g(X,\partial_i) = g_{ab} X^a \delta_i^b = g_{ai} X^a$. Similarly, $\upbeta_i g_{cd} Y^c Y^d = g_{ai} Y^a$. Using these identities to substitute for $\upalpha_i$ and $\upbeta_i$, we conclude \eqref{E:PARTIALIINTERMSOFRADUNITANDGEOANG}. The identity \eqref{E:PARTIALTINTERMSOFLUNITRADUNITANDGEOANG} follows similarly with the help of \eqref{E:DOWNSTAIRSUPSTAIRSSRADUNITPLUSLUNITISAFUNCTIONOFPSI}. \end{proof} \subsection{An algebraic expression for the transversal derivative of the slow wave} \label{SS:EXPRESSIONFORRADSLOW} We will use the following algebraic lemma in order to control $\breve{X} \vec{W}$ in terms of $\mathcal{P}_u$-tangential derivatives of $\vec{W}$ and other simple error terms. 
\begin{lemma}[\textbf{Algebraic expression for} $\breve{X} \vec{W}$] \label{L:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED} Equations \eqref{E:SLOW0EVOLUTION}-\eqref{E:SLOWEVOLUTION} imply the following schematic algebraic relation, where the $\mathrm{f}$ depend smoothly on their arguments whenever $|\upgamma| + |\vec{W}|$ is sufficiently small: \begin{align} \label{E:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED} \breve{X} \vec{W} & = \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi, P \Psi) P \Psi + \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi, P \Psi) P \vec{W} + \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi, P \Psi) \vec{W}. \end{align} \end{lemma} \begin{proof} We first write the sub-system \eqref{E:SLOW0EVOLUTION}-\eqref{E:SLOWEVOLUTION} in the matrix-vector form $\upmu A^{\alpha} \partial_{\alpha} \vec{W} = \upmu F$, where $A^{\alpha} = A^{\alpha}(\Psi,\vec{W})$, $A^0$ is the $4 \times 4$ identity matrix, and $F$ corresponds to the semilinear inhomogeneous terms on the second line of RHS~\eqref{E:SLOW0EVOLUTION}. It is straightforward to check (say, at an arbitrary given point $p$, relative to a frame in which $h|_p = \mbox{\upshape diag}(-1,1,1)$) that if $\xi$ is $h$-timelike,\footnote{By $h$-timelike, we mean that $(h^{-1})^{\alpha \beta} \xi_{\alpha} \xi_{\beta} < 0$. \label{FN:HTIMELIKE}} then the matrix $A^{\alpha} \xi_{\alpha}$ is invertible (we note that in fact, $\mbox{\upshape det} (A^{\alpha} \xi_{\alpha}) = - (\xi_0)^2 (h^{-1})^{\alpha \beta} \xi_{\alpha} \xi_{\beta}$, though we do not use this precise formula in this proof). Decomposing $\upmu A^{\alpha} \partial_{\alpha} \vec{W} = \upmu \upalpha_1 L \vec{W} + \upalpha_2 \breve{X} \vec{W} + \upmu \upalpha_3 Y \vec{W} $, where the $\upalpha_i$ are $4 \times 4$ matrices, we compute, with the help of Lemma~\ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} and \eqref{E:DOWNSTAIRSUPSTAIRSSRADUNITPLUSLUNITISAFUNCTIONOFPSI}, that $ \upalpha_2 = - A^0 L_0 + A^a X_a = - A^{\alpha} L_{\alpha} $. 
Since $L$ is $g$-null, we deduce from \eqref{E:VECTORSGCAUSALIMPLIESHTIMELIKE} that the one-form $L_{\alpha}$ is $h$-timelike. Thus, from the above observations, we see that $\upalpha_2$ is invertible. Moreover, with the help of Lemmas~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} and \ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES}, we deduce that $\upalpha_i = \mathrm{f}(\upgamma,\vec{W})$, $(i=1,2,3)$. The desired relation \eqref{E:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED} is a simple consequence of these facts, the assumptions on the semilinear inhomogeneous terms stated in \eqref{E:SOMENONINEARITIESARELINEAR}, and Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}. \end{proof} \subsection{Geometric integration} \label{SS:GEOMETRICINTEGRATION} We define our geometric integrals in terms of length, area, and volume forms\footnote{Throughout the paper, we blur the distinction between these forms and the corresponding (non-negative) integration measures that they induce. The precise meaning will be clear from context. \label{FN:FORMSBLURRED}} that remain non-degenerate throughout the evolution, all the way up to the shock. 
\begin{definition}[\textbf{Geometric forms and related integrals}] \label{D:NONDEGENERATEVOLUMEFORMS} We define the length form $d \uplambda_{{g \mkern-8.5mu /}}$ on $\ell_{t,u}$, the area form $d \underline{\varpi}$ on $\Sigma_t^u$, the area form $d \overline{\varpi}$ on $\mathcal{P}_u^t$, and the volume form $d \varpi$ on $\mathcal{M}_{t,u}$ as follows (relative to the geometric coordinates): \begin{align} \label{E:RESCALEDVOLUMEFORMS} d \uplambda_{{g \mkern-8.5mu /}} & = d \uplambda_{{g \mkern-8.5mu /}}(t,u,\vartheta) := \upsilon(t,u,\vartheta) d \vartheta, && d \underline{\varpi} = d \underline{\varpi}(t,u',\vartheta) := d \uplambda_{{g \mkern-8.5mu /}}(t,u',\vartheta) du', \\ d \overline{\varpi} & = d \overline{\varpi}(t',u,\vartheta) := d \uplambda_{{g \mkern-8.5mu /}}(t',u,\vartheta) dt', && d \varpi = d \varpi(t',u',\vartheta) := d \uplambda_{{g \mkern-8.5mu /}}(t',u',\vartheta) du' dt', \notag \end{align} where $\upsilon$ is the scalar function from Def.~\ref{D:METRICANGULARCOMPONENT}. If $f$ is a scalar function, then we define \begin{subequations} \begin{align} \int_{\ell_{t,u}} f \, d \uplambda_{{g \mkern-8.5mu /}} & := \int_{\vartheta \in \mathbb{T}} f(t,u,\vartheta) \, \upsilon(t,u,\vartheta) d \vartheta, \label{E:LINEINTEGRALDEF} \\ \int_{\Sigma_t^u} f \, d \underline{\varpi} & := \int_{u'=0}^u \int_{\vartheta \in \mathbb{T}} f(t,u',\vartheta) \, \upsilon(t,u',\vartheta) d \vartheta du', \label{E:SIGMATUINTEGRALDEF} \\ \int_{\mathcal{P}_u^t} f \, d \overline{\varpi} & := \int_{t'=0}^t \int_{\vartheta \in \mathbb{T}} f(t',u,\vartheta) \, \upsilon(t',u,\vartheta) d \vartheta dt', \label{E:PUTINTEGRALDEF} \\ \int_{\mathcal{M}_{t,u}} f \, d \varpi & := \int_{t'=0}^t \int_{u'=0}^u \int_{\vartheta \in \mathbb{T}} f(t',u',\vartheta) \, \upsilon(t',u',\vartheta) d \vartheta du' dt'. 
\label{E:MTUTUINTEGRALDEF} \end{align} \end{subequations} \end{definition} \begin{remark} One can check that the canonical forms associated to $\underline{g}$ and $g$ are, respectively, $\upmu d \underline{\varpi}$ and $\upmu d \varpi$. \end{remark} \subsection{Integration with respect to Cartesian forms} \label{SS:INTEGRATIONWITHRESPECTTOCARTESIAN} In deriving energy \emph{identities} for the slow wave variables $\vec{W}$, it is convenient to carry out calculations relative to the Cartesian coordinates. In this subsection, we define some basic objects that play a role in these identities. \begin{definition}[\textbf{The one-form} $H$] \label{D:EUCLIDEANNORMALTONULLHYPERSURFACE} We define $H$ to be the one-form with the following Cartesian components: \begin{align} \label{E:EUCLIDEANNORMALTONULLHYPERSURFACE} H_{\alpha} & := - \frac{1}{(\delta^{\kappa \lambda} L_{\kappa} L_{\lambda})^{1/2}} L_{\alpha}, \end{align} where $\delta^{\kappa \lambda}$ is the standard inverse Euclidean metric on $\mathbb{R} \times \Sigma$ (that is, $\delta^{\kappa \lambda} = \mbox{\upshape diag (1,1,1)}$ relative to the Cartesian coordinates). Note that $H$ is the Euclidean-unit-length co-normal to $\mathcal{P}_u$. \end{definition} \begin{definition}[\textbf{Cartesian volume and area forms and related integrals}] \label{D:CARTESIANFORMS} We define \[d \mathcal{M} := dx^1 dx^2 dt, \qquad d \Sigma := dx^1 dx^2, \qquad d \mathcal{P} \] to be, respectively, the standard volume form on $\mathcal{M}_{t,u}$ induced by the Euclidean metric\footnote{By definition, the Euclidean metric has the components $\mbox{\upshape diag}(1,1,1)$ relative to the standard Cartesian coordinates $(t,x^1,x^2)$ on $\mathbb{R} \times \Sigma$.} on $\mathbb{R} \times \Sigma$, the standard area form induced on $\Sigma_t^u$ by the Euclidean metric on $\mathbb{R} \times \Sigma$, and the standard area form induced on $\mathcal{P}_u$ by the Euclidean metric on $\mathbb{R} \times \Sigma$. 
\end{definition} \begin{remark}[\textbf{We do not use the Cartesian forms when deriving estimates}] \label{R:NOFLATFORMSINESTIMATES} We could of course provide explicit expressions for the integrals of functions over various domains with respect to the Cartesian forms of Def.~\ref{D:CARTESIANFORMS}. For example, we have $ \int_{\Sigma_t^U} f \, d \Sigma = \int_{\lbrace 0 \leq u(t,x^1,x^2) \leq U \rbrace} f(t,x^1,x^2) \, dx^1 dx^2 $. We avoid providing further detailed expressions because we do not need them. The reason is that we never \emph{estimate} integrals involving the Cartesian volume forms; before deriving estimates, we will always use Lemma~\ref{L:VOLFORMRELATION} below in order to replace the Cartesian forms with the geometric ones of Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}. We use the Cartesian forms only when deriving energy \emph{identities} relative to the Cartesian coordinates, in which the Cartesian forms naturally appear. \end{remark} \subsection{Relationship between the Cartesian integration measures and the geometric integration measures} \label{E:CARTESIANEUCLIDEANFORMCOMPARISON} After we derive energy identities for the slow wave relative to the Cartesian coordinates, it will be convenient for us to express the corresponding integrals in terms of the geometric integration measures. The following lemma provides some identities that are useful in this regard.
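For concreteness, we note that the integrals of Def.~\ref{D:NONDEGENERATEVOLUMEFORMS} are ordinary iterated integrals against the weight $\upsilon$ and can be approximated by standard quadrature. The following minimal sketch is our own illustration: the circumference of $\mathbb{T}$ is treated as a free parameter, and the functions \texttt{f} and \texttt{upsilon} are placeholders rather than quantities from the paper.

```python
import numpy as np

def integrate_M(f, upsilon, t, u, theta_max, n=60):
    """Midpoint-rule approximation of the iterated integral
    int_0^t int_0^u int_T f(t',u',theta) * upsilon(t',u',theta) dtheta du' dt'."""
    tp = (np.arange(n) + 0.5) * (t / n)
    up = (np.arange(n) + 0.5) * (u / n)
    th = (np.arange(n) + 0.5) * (theta_max / n)
    T, U, TH = np.meshgrid(tp, up, th, indexing="ij")
    vals = f(T, U, TH) * upsilon(T, U, TH)
    return vals.sum() * (t / n) * (u / n) * (theta_max / n)

# With f = 1 and upsilon = 1, the result is exactly t * u * theta_max.
one = lambda T, U, TH: np.ones_like(T)
val = integrate_M(one, one, t=2.0, u=1.0, theta_max=2.0 * np.pi)
assert abs(val - 4.0 * np.pi) < 1e-9
```

The same pattern, with one of the outer integrals dropped, yields the $\Sigma_t^u$ and $\mathcal{P}_u^t$ integrals.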
\begin{lemma}[\textbf{Relationship between Cartesian and geometric integration measures}] \label{L:VOLFORMRELATION} There exist scalar functions, schematically denoted by $\mathrm{f}(\upgamma)$, that are smooth for $|\upgamma|$ sufficiently small and such that the following relationship holds between the geometric integration measures corresponding to Def.~\ref{D:NONDEGENERATEVOLUMEFORMS} and the Cartesian integration measures corresponding to Def.~\ref{D:CARTESIANFORMS} (see Footnote~\ref{FN:FORMSBLURRED}): \begin{align} \label{E:VOLFORMRELATION} d \mathcal{M} & = \upmu \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace d \varpi, & d \Sigma & = \upmu \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace d \underline{\varpi}, & d \mathcal{P} & = \left\lbrace \sqrt{2} + \upgamma \mathrm{f}(\upgamma) \right\rbrace d \overline{\varpi}. \end{align} \end{lemma} \begin{proof} We prove only the identity $ d \mathcal{P} = \left\lbrace \sqrt{2} + \upgamma \mathrm{f}(\upgamma) \right\rbrace d \overline{\varpi} $ since the other two identities in \eqref{E:VOLFORMRELATION} are a straightforward consequence of Lemma~\ref{L:CHOV} (in particular, the Jacobian determinant\footnote{Note that the minus sign in equation \eqref{E:JACOBIAN} does not appear in equation \eqref{E:VOLFORMRELATION} since we are viewing \eqref{E:VOLFORMRELATION} as a relationship between integration measures.} expressions in \eqref{E:JACOBIAN}). In the proof, we view $d \overline{\varpi}$ (see \eqref{E:RESCALEDVOLUMEFORMS}) to be the two-form $\upsilon dt \wedge d \vartheta$ on $\mathcal{P}_u$, where $dt \wedge d \vartheta = dt \otimes d \vartheta - d \vartheta \otimes dt$. Similarly, we view $d \mathcal{P}$ to be the two-form induced on $\mathcal{P}_u$ by the standard Euclidean metric $\delta_{\alpha \beta} = \mbox{\upshape diag (1,1,1)}$ on $\mathbb{R} \times \Sigma$. 
Then relative to Cartesian coordinates, we have $d \mathcal{P} = (dx^0 \wedge dx^1 \wedge dx^2) \cdot V$, where $V$ is the future-directed Euclidean normal vectorfield to $\mathcal{P}_u$ and $(dx^0 \wedge dx^1 \wedge dx^2) \cdot V$ denotes contraction of $V$ against the first slot of $dx^0 \wedge dx^1 \wedge dx^2$. Note that $V^{\alpha} = \delta^{\alpha \beta} H_{\beta}$, where $H_{\alpha}$ is defined in \eqref{E:EUCLIDEANNORMALTONULLHYPERSURFACE} and $\delta^{\alpha \beta} = \mbox{\upshape diag (1,1,1)}$ is the standard inverse Euclidean metric on $\mathbb{R} \times \Sigma$. Since $d \overline{\varpi}$ and $d \mathcal{P}$ are proportional and since $dt \wedge d \vartheta \cdot (L \otimes \Theta) = 1$, it suffices to show that $ \left\lbrace \sqrt{2} + \upgamma \mathrm{f}(\upgamma) \right\rbrace \upsilon = (dx^0 \wedge dx^1 \wedge dx^2) \cdot (V \otimes L \otimes \Theta) $. To proceed, we note that $(dx^0 \wedge dx^1 \wedge dx^2) \cdot (V \otimes L \otimes \Theta)$ is equal to the determinant of the $3 \times 3$ matrix $ N: = \left( \begin{array}{ccc} V^0 & L^0 & 0 \\ V^1 & L^1 & \Theta^1 \\ V^2 & L^2 & \Theta^2 \\ \end{array} \right) $. Next, we consider the $3 \times 3$ matrix $M := N^{\top} \cdot g \cdot N$, where we view $g$ as a $3 \times 3$ matrix expressed relative to the Cartesian coordinates. Since \eqref{E:LITTLEGDECOMPOSED}-\eqref{E:METRICPERTURBATIONFUNCTION} imply that $|\mbox{\upshape det} g| = 1 + \upgamma \mathrm{f}(\upgamma)$ relative to the Cartesian coordinates and since $|\mbox{\upshape det} M| = |\mbox{\upshape det} g| (\mbox{\upshape det} N)^2$, the desired conclusion will follow once we prove $|\mbox{\upshape det} M| = \upsilon^2 \left\lbrace 2 + \upgamma \mathrm{f}(\upgamma) \right\rbrace$. 
To obtain this relation, we first compute that $ M = \left( \begin{array}{ccc} g(V,V) & g(V,L) & g(V,\Theta) \\ g(L,V) & 0 & 0 \\ g(\Theta,V) & 0 & \upsilon^2 \\ \end{array} \right) $ and thus $ \mbox{\upshape det} M = - \left(g(L,V) \right)^2 \upsilon^2 $. Finally, from \eqref{E:LITTLEGDECOMPOSED}, \eqref{E:METRICPERTURBATIONFUNCTION}, \eqref{E:PERTURBEDPART}, and \eqref{E:EUCLIDEANNORMALTONULLHYPERSURFACE}, we compute (relative to the Cartesian coordinates) that $g(L,V) = - \sqrt{2} + \upgamma \mathrm{f}(\upgamma)$, which, in conjunction with the above computations, yields $|\mbox{\upshape det} M| = \upsilon^2 \left\lbrace 2 + \upgamma \mathrm{f}(\upgamma) \right\rbrace$ as desired. \end{proof} \section{Norms, initial data, bootstrap assumptions, and smallness assumptions} \label{S:NORMSANDBOOTSTRAP} In this section, we state our size assumptions on the data, formulate appropriate bootstrap assumptions for the solution, and state our smallness assumptions. As we mentioned in the introduction, the solutions that we study in this article are perturbations of simple outgoing plane waves. In Subsect.\ \ref{SS:EXISTENCEOFDATA}, we show the existence of data for the system \eqref{E:FASTWAVE} + \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS} that verify the size assumptions of the present article. In particular, we show that under the assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms, the system admits simple outgoing plane wave solutions, thereby justifying our study of their perturbations. \subsection{Norms} \label{SS:NORMS} In our analysis, we will derive estimates for scalar functions and $\ell_{t,u}$-tangent tensorfields. We use the metric $g \mkern-8.5mu / $ when taking the pointwise norm of $\ell_{t,u}$-tangent tensorfields, a concept that we make precise in the next definition. 
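The contraction pattern in the next definition is dimension-agnostic; specialized to a type $\binom{1}{1}$ tensor, it reads $|\xi|^2 = g \mkern-8.5mu / _{\mu \widetilde{\mu}} (\gsphere^{-1})^{\nu \widetilde{\nu}} \xi_{\nu}^{\mu} \xi_{\widetilde{\nu}}^{\widetilde{\mu}}$. The following sketch (our own illustration, with a generic matrix standing in for the metric) implements this contraction:

```python
import numpy as np

def norm_sq_type_1_1(xi, g):
    """|xi|^2 = g_{mu mut} (g^{-1})^{nu nut} xi^{mu}_{nu} xi^{mut}_{nut},
    where xi[mu, nu] stores the type-(1,1) components."""
    ginv = np.linalg.inv(g)
    return np.einsum("ab,cd,ac,bd->", g, ginv, xi, xi)

# Sanity check: for g = identity, this is the squared Frobenius norm.
xi = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(norm_sq_type_1_1(xi, np.eye(2)), 30.0)
```

For a Riemannian metric $g$, this quantity is positive-definite, which is what makes it a legitimate pointwise norm.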
\begin{definition}[\textbf{Pointwise norms}] \label{D:POINTWISENORM} Let $g \mkern-8.5mu / $ be the Riemannian metric on $\ell_{t,u}$ from Def.\ \ref{D:FIRSTFUND}. If $\xi_{\nu_1 \cdots \nu_n}^{\mu_1 \cdots \mu_m}$ is a type $\binom{m}{n}$ $\ell_{t,u}$ tensor, then we define the norm $|\xi| \geq 0$ by \begin{align} \label{E:POINTWISENORM} |\xi|^2 := g \mkern-8.5mu / _{\mu_1 \widetilde{\mu}_1} \cdots g \mkern-8.5mu / _{\mu_m \widetilde{\mu}_m} (\gsphere^{-1})^{\nu_1 \widetilde{\nu}_1} \cdots (\gsphere^{-1})^{\nu_n \widetilde{\nu}_n} \xi_{\nu_1 \cdots \nu_n}^{\mu_1 \cdots \mu_m} \xi_{\widetilde{\nu}_1 \cdots \widetilde{\nu}_n}^{\widetilde{\mu}_1 \cdots \widetilde{\mu}_m}. \end{align} \end{definition} We use $L^2$ and $L^{\infty}$ norms in our analysis. \begin{definition}[$L^2$ \textbf{and} $L^{\infty}$ \textbf{norms}] In terms of the geometric forms of Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}, we define the following norms for $\ell_{t,u}$-tangent tensorfields: \label{D:SOBOLEVNORMS} \begin{subequations} \begin{align} \label{E:L2NORMS} \left\| \xi \right\|_{L^2(\ell_{t,u})}^2 & := \int_{\ell_{t,u}} |\xi|^2 \, d \uplambda_{{g \mkern-8.5mu /}}, \qquad \left\| \xi \right\|_{L^2(\Sigma_t^u)}^2 := \int_{\Sigma_t^u} |\xi|^2 \, d \underline{\varpi}, \\ \left\| \xi \right\|_{L^2(\mathcal{P}_u^t)}^2 & := \int_{\mathcal{P}_u^t} |\xi|^2 \, d \overline{\varpi}, \notag \end{align} \begin{align} \left\| \xi \right\|_{L^{\infty}(\ell_{t,u})} & := \mbox{ess sup}_{\vartheta \in \mathbb{T}} |\xi|(t,u,\vartheta), \qquad \left\| \xi \right\|_{L^{\infty}(\Sigma_t^u)} := \mbox{ess sup}_{(u',\vartheta) \in [0,u] \times \mathbb{T}} |\xi|(t,u',\vartheta), \label{E:LINFTYNORMS} \\ \left\| \xi \right\|_{L^{\infty}(\mathcal{P}_u^t)} & := \mbox{ess sup}_{(t',\vartheta) \in [0,t] \times \mathbb{T}} |\xi|(t',u,\vartheta). 
\notag \end{align} \end{subequations} \end{definition} \begin{remark}[\textbf{Subset norms}] \label{R:SUBSETNORMS} We sometimes use norms $\| \cdot \|_{L^2(\Omega)}$ and $\| \cdot \|_{L^{\infty}(\Omega)}$, where $\Omega$ is a subset of $\Sigma_t^u$. These norms are defined by replacing $\Sigma_t^u$ with $\Omega$ in \eqref{E:L2NORMS} and \eqref{E:LINFTYNORMS}. \end{remark} \subsection{Strings of commutation vectorfields and vectorfield seminorms} \label{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} We use the following shorthand notation to capture the relevant structure of our vectorfield operators and to schematically depict estimates. \begin{definition}[\textbf{Strings of commutation vectorfields and vectorfield seminorms}] \label{D:VECTORFIELDOPERATORS} \ \\ \begin{itemize} \item $\mathscr{Z}^{N;M} f$ denotes an arbitrary string of $N$ commutation vectorfields in $\mathscr{Z}$ (see \eqref{E:COMMUTATIONVECTORFIELDS}) applied to $f$, where the string contains \emph{at most} $M$ factors of the $\mathcal{P}_u^t$-transversal vectorfield $\breve{X}$. \item $\mathscr{P}^N f$ denotes an arbitrary string of $N$ commutation vectorfields in $\mathscr{P}$ (see \eqref{E:TANGENTIALCOMMUTATIONVECTORFIELDS}) applied to $f$. \item For $N \geq 1$, $\mathscr{Z}_*^{N;M} f$ denotes an arbitrary string of $N$ commutation vectorfields in $\mathscr{Z}$ applied to $f$, where the string contains \emph{at least} one $\mathcal{P}_u$-tangent factor and \emph{at most} $M$ factors of $\breve{X}$. We also set $\mathscr{Z}_*^{0;0} f := f$. \item For $N \geq 1$, $\mathscr{P}_*^N f$ denotes an arbitrary string of $N$ commutation vectorfields in $\mathscr{P}$ applied to $f$, where the string contains \emph{at least one factor} of $Y$ or \emph{at least two factors} of $L$. \item For $\ell_{t,u}$-tangent tensorfields $\xi$, we similarly define strings of $\ell_{t,u}$-projected Lie derivatives such as $ { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{N;M} \xi$. 
\end{itemize} We also define pointwise seminorms constructed out of sums of the above strings of vectorfields: \begin{itemize} \item $|\mathscr{Z}^{N;M} f|$ simply denotes the magnitude of one of the $\mathscr{Z}^{N;M} f$ as defined above (there is no summation). \item $|\mathscr{Z}^{\leq N;M} f|$ is the \emph{sum} over all terms of the form $|\mathscr{Z}^{N';M} f|$ with $N' \leq N$ and $\mathscr{Z}^{N';M} f$ as defined above. When $N=M=1$, we sometimes write $|\mathscr{Z}^{\leq 1} f|$ instead of $|\mathscr{Z}^{\leq 1;1} f|$. \item $|\mathscr{Z}^{[1,N];M} f|$ is the sum over all terms of the form $|\mathscr{Z}^{N';M} f|$ with $1 \leq N' \leq N$ and $\mathscr{Z}^{N';M} f$ as defined above. \item Sums such as $|\mathscr{P}^{\leq N} f|$, $|\mathscr{P}_*^{[1,N]} f|$, $| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{\leq N;M} \xi|$, $|Y^{\leq 1} f|$, $|\breve{X}^{[1,N]} f|$, etc., are defined analogously. For example, $|\breve{X}^{[1,N]} f| = |\breve{X} f| + |\breve{X} \breve{X} f| + \cdots + |\overbrace{\breve{X} \breve{X} \cdots \breve{X}}^{N \mbox{ \upshape copies}} f| $. \end{itemize} \end{definition} \begin{remark} Some operators in Def.~\ref{D:VECTORFIELDOPERATORS} are decorated with a $*$. These operators involve $\mathcal{P}_u$-tangential differentiations that often lead to a gain in smallness in the estimates. More precisely, the operators $\mathscr{P}_*^N$ always lead to a gain in smallness while the operators $\mathscr{Z}_*^{N;M}$ lead to a gain in smallness except perhaps when they are applied to $\upmu$ (because $L \upmu$ and its $\breve{X}$ derivatives are not generally small for the solutions under study). We clarify that for the simple plane wave solutions (whose perturbations we study in our main theorem), the $\mathscr{P}_*^N$ derivatives of all variables in the array $\underline{\upgamma}$ (see \eqref{E:BADABBREIVATEDVARIABLES}) completely vanish, and that the same is true for $\mathscr{Z}_*^{N;M} \vec{W}$ (by our definition of a simple wave). 
\end{remark} \subsection{Assumptions on the data} \label{SS:DATAASSUMPTIONS} In this subsection, we state our size assumptions on the data. Our assumptions involve three parameters: $\mathring{\upalpha} > 0$, $\mathring{\upepsilon} \geq 0$, and $\mathring{\updelta} > 0$, whose sizes we describe in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. As we mentioned in Subsubsect.\ \ref{SSS:SPACETIMEREGION}, our analysis also involves the following data-dependent parameter. \begin{definition}[\textbf{The quantity that controls the time of first shock formation}] \label{D:CRITICALBLOWUPTIMEFACTOR} We define \begin{align} \label{E:CRITICALBLOWUPTIMEFACTOR} \mathring{\updelta}_* & := \frac{1}{2} \sup_{\Sigma_0^1} \left[G_{L L} \breve{X} \Psi \right]_-. \end{align} \end{definition} Our main theorem shows that the reciprocal of $\mathring{\updelta}_*$ is approximately equal to the time of first shock formation. We assume that the data verify the following size assumptions. \medskip \noindent \underline{\textbf{Assumptions along} $\Sigma_0^1$}. 
\begin{subequations} \begin{align} \label{E:PSIL2SMALLDATAASSUMPTIONSALONGSIGMA0} \left\| \mathscr{Z}_{\ast}^{[1,19];1} \Psi \right\|_{L^2(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \\ \left\| \mathscr{P}^{\leq 18} \vec{W} \right\|_{L^2(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \label{E:SLOWL2SMALLDATAASSUMPTIONSALONGSIGMA0} \end{align} \end{subequations} \begin{subequations} \begin{align} \left\| \Psi \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upalpha}, \label{E:PSIITSELFLINFTYSMALLDATAASSUMPTIONSALONGSIGMA0} \\ \left\| \mathscr{Z}_{\ast}^{[1,11];1} \Psi \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \mathscr{Z}_{\ast}^{[1,10];2} \Psi \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| L \breve{X} \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \label{E:PSILINFTYSMALLDATAASSUMPTIONSALONGSIGMA0} \\ \left\| \mathscr{P}^{\leq 10} \vec{W} \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \mathscr{Z}^{\leq 9;1} \vec{W} \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \breve{X} \breve{X} \vec{W} \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\upepsilon}, \label{E:SLOWWAVELINFTYSMALLDATAASSUMPTIONSALONGSIGMA0} \end{align} \end{subequations} \begin{align} \label{E:FASTWAVELINFTYLARGEDATAASSUMPTIONSALONGSIGMA0} \left\| \breve{X}^{[1,3]} \Psi \right\|_{L^{\infty}(\Sigma_0^1)} & \leq \mathring{\updelta}. \end{align} \noindent \underline{\textbf{Assumptions along} $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$}. 
\begin{subequations} \begin{align} \label{E:PSIL2SMALLDATAASSUMPTIONSALONGP0} \left\| \mathscr{P}^{[1,19]} \Psi \right\|_{L^2\left(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}\right)} & \leq \mathring{\upepsilon}, \\ \left\| \mathscr{P}^{\leq 18} \vec{W} \right\|_{L^2\left(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}\right)} & \leq \mathring{\upepsilon}, \label{E:SLOWL2SMALLDATAASSUMPTIONSALONGP0} \end{align} \end{subequations} \begin{subequations} \begin{align} \left\| \mathscr{P}^{\leq 17} \Psi \right\|_{L^{\infty}\left(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}\right)} & \leq \mathring{\upepsilon}, \label{E:PSILINFTYSMALLDATAASSUMPTIONSALONGP0} \\ \left\| \mathscr{P}^{\leq 16} \vec{W} \right\|_{L^{\infty} \left(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}\right)} & \leq \mathring{\upepsilon}. \label{E:SLOWLINFTYSMALLDATAASSUMPTIONSALONGP0} \end{align} \end{subequations} \noindent \underline{\textbf{Assumptions along} $\ell_{t,0}$}. We assume that for $t \in [0,2 \mathring{\updelta}_*^{-1}]$, we have \begin{align} \label{E:LINFTYPSISMALLDATAASSUMPTIONSALONGLT0} \left\| \mathscr{Z}^{\leq 1} \Psi \right\|_{L^{\infty}(\ell_{t,0})} & \leq \mathring{\upepsilon}. \end{align} \noindent \underline{\textbf{Assumptions along} $\ell_{0,u}$}. We assume that for $u \in [0,1]$, we have \begin{subequations} \begin{align} \left\| \mathscr{P}^{[1,18]} \Psi \right\|_{L^2(\ell_{0,u})} & \leq \mathring{\upepsilon}, \label{E:PSIL2SMALLDATAASSUMPTIONSALONGL0U} \\ \left\| \mathscr{P}^{\leq 17} \vec{W} \right\|_{L^2(\ell_{0,u})} & \leq \mathring{\upepsilon}. \label{E:SLOWL2SMALLDATAASSUMPTIONSALONGL0U} \end{align} \end{subequations} \begin{remark}[\textbf{A brief description of the data-size parameters}] \label{R:DATAPARAMETERSBRIEFDESCRIPTION} To prove our main theorem, we assume that $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are small in a sense that we make precise in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. 
The parameters $\mathring{\updelta}_* > 0$ and $\mathring{\updelta} > 0$ are allowed to be small or large in an absolute sense, but the smallness of $\mathring{\upepsilon}$ must be adapted to $\mathring{\updelta}_*^{-1}$ and $\mathring{\updelta}$. The parameter $\mathring{\upepsilon}$ vanishes for simple outgoing plane wave solutions (see Subsect.\ \ref{SS:EXISTENCEOFDATA} for further discussion). \end{remark} \subsection{The data of the eikonal function quantities} \label{SS:INITIALBEHAVIOROFEIKONAL} The data-size assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} determine the initial size of various quantities constructed out of the eikonal function. In this subsection, under appropriate smallness assumptions, we estimate these data-dependent quantities in various norms. The main result is Lemma~\ref{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0}. We start with a simple lemma that provides algebraic identities that hold along $\Sigma_0$. \begin{lemma}\cite{jSgHjLwW2016}*{Lemma~7.2; \textbf{Algebraic identities along} $\Sigma_0$} \label{L:ALGEBRAICIDALONGSIGMA0} The following identities hold along $\Sigma_0$ (for $i=1,2$): \begin{align} \upmu & = \frac{1}{\sqrt{(\underline{g}^{-1})^{11}}}, \qquad L_{(Small)}^i = \frac{(\underline{g}^{-1})^{i1}}{\sqrt{(\underline{g}^{-1})^{11}}} - \delta^{i1} - (g^{-1})^{0i}, \qquad \Xi^i = \frac{(\underline{g}^{-1})^{i1}}{(\underline{g}^{-1})^{11}} - \delta^{i1}, \label{E:INITIALRELATIONS} \end{align} where $\underline{g}$ is viewed as the $2 \times 2$ matrix of Cartesian spatial components of the Riemannian metric on $\Sigma_0$ defined by \eqref{E:GTANDGSPHERESPHEREDEF}, $\underline{g}^{-1}$ is the corresponding inverse matrix, and $\Xi$ is the $\ell_{t,u}$-tangent vectorfield from \eqref{E:RADSPLITINTOPARTTILAUANDXI}. \end{lemma} We now provide the main result of this subsection.
\begin{lemma}[\textbf{Behavior of the eikonal function quantities along} $\Sigma_0^1$ \textbf{and} $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$] \label{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0} For data verifying the assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS}, the following $L^2$ and $L^{\infty}$ estimates hold along $\Sigma_0^1$ and $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$ whenever $\mathring{\upalpha}$ and $\mathring{\upepsilon}$ are sufficiently small, where the constants $C$ are allowed to depend on $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$, the constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$ can be chosen to be independent\footnote{Some of the constants denoted by ``$C$'' can also be chosen to be independent of $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$. We use the symbol ``$C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$'' only when it is important that the constant can be chosen to be independent of $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$.} of $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$, and $i=1,2$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \left\| \mathscr{Z}_*^{[1,19];3} L_{(Small)}^i \right\|_{L^2(\Sigma_0^1)} & \leq C \mathring{\upepsilon}, \label{E:LUNITIDATAL2CONSEQUENCES} \end{align} \begin{align} \label{E:UPMUDATATANGENTIALL2CONSEQUENCES} \left\| \mathscr{P}_*^{[1,19]} \upmu \right\|_{L^2(\Sigma_0^1)} & \leq C \mathring{\upepsilon}, \end{align} \begin{subequations} \begin{align} \left\| L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^1)} & \leq C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}, \label{E:LUNITIITSEFLSMALLDATALINFINITYCONSEQUENCES} \\ \left\| \mathscr{Z}_*^{[1,17];2} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^1)} & \leq C \mathring{\upepsilon}, \label{E:LUNITIDATASMALLLINFTYCONSEQUENCES} \\ \left\| \breve{X}^{[1,2]} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^1)} & 
\leq C, \end{align} \end{subequations} \begin{subequations} \begin{align} \left\| \upmu - 1 \right\|_{L^{\infty}(\Sigma_0^1)} & \leq C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}, \label{E:UPITSELFLINFINITYSIGMA0CONSEQUENCES} \\ \left\| \mathscr{P}_*^{[1,17]} \upmu \right\|_{L^{\infty}(\Sigma_0^1)} & \leq C \mathring{\upepsilon}, \label{E:UPMUDATATANGENTIALLINFINITYCONSEQUENCES} \\ \left\| L \breve{X}^{[0,2]} \upmu \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \breve{X}^{[0,2]} L \upmu \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \breve{X} L \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_0^1)}, \, \left\| \breve{X}^{[1,2]} \upmu \right\|_{L^{\infty}(\Sigma_0^1)} & \leq C, \label{E:UPMUDATARADIALLINFINITYCONSEQUENCES} \end{align} \end{subequations} \begin{subequations} \begin{align} \left\| L_{(Small)}^i \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})} & \leq C \mathring{\upepsilon}, \label{E:LUNITIDATASMALLP0LINFTYCONSEQUENCES} \\ \left\| \upmu - 1 \right\|_{L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})} & \leq C \mathring{\upepsilon}. \label{E:UPITSELFLINFINITYP0CONSEQUENCES} \end{align} \end{subequations} \end{lemma} \begin{proof} Readers can consult \cite{jSgHjLwW2016}*{Lemma~7.3} for the main ideas on how to prove the estimates along $\Sigma_0^1$ based on Lemma~\ref{L:ALGEBRAICIDALONGSIGMA0} and the assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} on the data for $\Psi$ and $ \vec{W}$. The data in \cite{jSgHjLwW2016} were compactly supported in $\Sigma_0^1$, which is different than the present context, but that minor detail does not necessitate any substantial changes in the proof. 
To obtain the $L^{\infty}(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})$ bounds for $L_{(Small)}^i$ and $\upmu - 1$ stated in \eqref{E:LUNITIDATASMALLP0LINFTYCONSEQUENCES}-\eqref{E:UPITSELFLINFINITYP0CONSEQUENCES}, we first use the evolution equations \eqref{E:UPMUFIRSTTRANSPORT}-\eqref{E:LLUNITI} (recall that $ \displaystyle L = \frac{\partial}{\partial t} $), Lemma \ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, and the fundamental theorem of calculus to deduce that for $(t,\vartheta) \in [0,2 \mathring{\updelta}_*^{-1}] \times \mathbb{T}$, we have the following estimate, where $\mathrm{f}$ is smooth in its arguments: \begin{align} \label{E:MUANDLUNITISMALLALONGP0GRONWALLREADY} \left| \myarray[\upmu - 1] {\sum_{a=1}^2 |L_{(Small)}^a|} \right| (t,0,\vartheta) & \leq \left| \myarray[\upmu - 1] {\sum_{a=1}^2 |L_{(Small)}^a|} \right| (0,0,\vartheta) \\ & \ \ + C \int_{s=0}^t \left\lbrace \left| \mathrm{f}(\upmu - 1, L_{(Small)}^1, L_{(Small)}^2,\Psi) \right| \left| \mathscr{Z}^{\leq 1} \Psi \right| \right\rbrace (s,0,\vartheta) \, ds. \notag \end{align} Moreover, from \eqref{E:LITTLEGDECOMPOSED}-\eqref{E:METRICPERTURBATIONFUNCTION} and \eqref{E:INITIALRELATIONS}, we deduce the schematic relations $\upmu|_{\Sigma_0^1} = 1 + \Psi \mathrm{f}(\Psi)$ and $L_{(Small)}^i = \Psi \mathrm{f}(\Psi)$, where the functions $\mathrm{f}$ are smooth. Hence, from the smallness assumption \eqref{E:LINFTYPSISMALLDATAASSUMPTIONSALONGLT0}, we deduce that the first term on RHS~\eqref{E:MUANDLUNITISMALLALONGP0GRONWALLREADY} is $\leq C \mathring{\upepsilon}$. Moreover, from \eqref{E:LINFTYPSISMALLDATAASSUMPTIONSALONGLT0}, we deduce that the time integral on RHS~\eqref{E:MUANDLUNITISMALLALONGP0GRONWALLREADY} is $ \leq C \mathring{\upepsilon} \int_{s=0}^t \left\lbrace \left| \mathrm{f}(\upmu - 1, L_{(Small)}^1, L_{(Small)}^2) \right| \right\rbrace (s,0,\vartheta) \, ds $. 
Hence, from Gronwall's inequality, we conclude that $ \left| \myarray[\upmu - 1] {\sum_{a=1}^2 |L_{(Small)}^a|} \right| (t,0,\vartheta) \leq C \mathring{\upepsilon} $ for $t \in [0,2 \mathring{\updelta}_*^{-1}]$, which yields the desired bounds \eqref{E:LUNITIDATASMALLP0LINFTYCONSEQUENCES}-\eqref{E:UPITSELFLINFINITYP0CONSEQUENCES}. \end{proof} \subsection{\texorpdfstring{$T_{(Boot)}$}{The bootstrap time}, the positivity of \texorpdfstring{$\upmu$}{the inverse foliation density}, and the diffeomorphism property of \texorpdfstring{$\Upsilon$}{the change of variables map}} \label{SS:SIZEOFTBOOT} To control the solution up to the shock and to derive estimates, we find it convenient to rely on a set of bootstrap assumptions. In this subsection, we state some basic bootstrap assumptions. We start by fixing a real number $T_{(Boot)}$ with \begin{align} \label{E:TBOOTBOUNDS} 0 < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}, \end{align} where $\mathring{\updelta}_* > 0$ is defined in \eqref{E:CRITICALBLOWUPTIMEFACTOR}. We assume that on the spacetime domain $\mathcal{M}_{T_{(Boot)},U_0}$ (see \eqref{E:MTUDEF}), we have \begin{align} \label{E:BOOTSTRAPMUPOSITIVITY} \tag{$\mathbf{BA} \upmu > 0$} \upmu > 0. \end{align} Inequality \eqref{E:BOOTSTRAPMUPOSITIVITY} implies that no shocks are present in $\mathcal{M}_{T_{(Boot)},U_0}$. We also assume that \begin{align} \label{E:BOOTSTRAPCHOVISDIFFEO} & \mbox{The change of variables map $\Upsilon$ from Def.~\ref{D:CHOVMAP} is a $C^1$ diffeomorphism from} \\ & [0,T_{(Boot)}) \times [0,U_0] \times \mathbb{T} \mbox{ onto its image}. 
\notag \end{align} \subsection{Fundamental \texorpdfstring{$L^{\infty}$}{essential sup-norm} bootstrap assumptions} \label{SS:PSIBOOTSTRAP} Our fundamental bootstrap assumptions for $\Psi$ and $\vec{W}$ are that the following inequalities hold on $\mathcal{M}_{T_{(Boot)},U_0}$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \label{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP} \tag{$\mathbf{BA}\Psi-\vec{W}$} \left\| \mathscr{P}^{[1,11]} \Psi \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}^{\leq 10} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon, \end{align} where $\varepsilon$ is a small positive bootstrap parameter whose smallness we describe in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. \subsection{Auxiliary \texorpdfstring{$L^{\infty}$}{essential sup-norm} bootstrap assumptions} \label{SS:AUXILIARYBOOTSTRAP} In deriving pointwise estimates, we find it convenient to make the following auxiliary bootstrap assumptions. In Prop.~\ref{P:IMPROVEMENTOFAUX}, we will derive strict improvements of these assumptions.
\medskip \noindent \underline{\textbf{Auxiliary bootstrap assumptions for small quantities}.} We assume that the following inequalities hold on $\mathcal{M}_{T_{(Boot)},U_0}$: \begin{align} \left\| \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \mathring{\upalpha} + \varepsilon^{1/2}, \label{E:PSIITSELFAUXLINFINITYBOOTSTRAP} \tag{$\mathbf{AUX1}\Psi$} \\ \left\| \mathscr{Z}_*^{[1,10];1} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:PSIAUXLINFINITYBOOTSTRAP} \tag{$\mathbf{AUX2}\Psi$} \\ \left\| \mathscr{Z}^{\leq 10;1} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:SLOWAUXLINFINITYBOOTSTRAP} \tag{$\mathbf{AUX}\vec{W}$} \end{align} \begin{align} \left\| L \mathscr{P}^{[1,9]} \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}_*^{[1,9]} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:UPMUBOOT} \tag{$\mathbf{AUX1}\upmu$} \\ \left\| L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \mathring{\upalpha}^{1/2}, \label{E:FRAMECOMPONENTS1BOOT} \tag{$\mathbf{AUX1}L_{(Small)}$} \\ \left\| \mathscr{Z}_*^{[1,9];1} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:FRAMECOMPONENTSIBOOT} \tag{$\mathbf{AUX2}L_{(Small)}$} \\ \left\| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{\leq 8;1} \upchi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}. 
\label{E:CHIBOOT} \tag{$\mathbf{AUX1}\upchi$} \end{align} \medskip \noindent \underline{\textbf{Auxiliary bootstrap assumptions for quantities that are allowed to be large}.} \begin{align} \left\| \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}, && \label{E:PSITRANSVERSALLINFINITYBOUNDBOOTSTRAP} \tag{$\mathbf{AUX3}\Psi$} \end{align} \begin{align} \left\| L \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \frac{1}{2} \left\| G_{L L} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}, \label{E:LUNITUPMUBOOT} \tag{$\mathbf{AUX2}\upmu$} \\ \left\| \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq 1 + \mathring{\updelta}_*^{-1} \left\| G_{L L} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + \mathring{\upalpha}^{1/2} + \varepsilon^{1/2}, \label{E:UPMUTRANSVERSALBOOT} \tag{$\mathbf{AUX3}\upmu$} \\ \left\| \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}. \label{E:LUNITITRANSVERSALBOOT} \tag{$\mathbf{AUX3}L_{(Small)}$} \end{align} \subsection{Smallness assumptions} \label{SS:SMALLNESSASSUMPTIONS} For the remainder of the article, when we say that ``$A$ is small relative to $B$,'' we mean that there exists a continuous increasing function $f :(0,\infty) \rightarrow (0,\infty)$ such that $ \displaystyle A < f(B) $. In principle, the functions $f$ could always be chosen to be polynomials with positive coefficients or exponential functions. However, to avoid lengthening the paper, we typically do not specify the form of $f$. Throughout the rest of the paper, we make the following smallness assumptions. We continually adjust the required smallness in order to close our estimates.
\begin{itemize} \item The bootstrap parameter $\varepsilon$ and the data smallness parameter $\mathring{\upepsilon}$ from Subsect.\ \ref{SS:DATAASSUMPTIONS} are small relative to $1$ (i.e., small in an absolute sense, without regard for the other parameters). \item $\varepsilon$ and $\mathring{\upepsilon}$ are small relative to $\mathring{\updelta}^{-1}$, where $\mathring{\updelta}$ is the data-size parameter from Subsect.\ \ref{SS:DATAASSUMPTIONS}. \item $\varepsilon$ and $\mathring{\upepsilon}$ are small relative to the data-size parameter $\mathring{\updelta}_*$ from Def.\ \ref{D:CRITICALBLOWUPTIMEFACTOR}. \item The data-size parameter $\mathring{\upalpha}$ from Subsect.\ \ref{SS:DATAASSUMPTIONS} is small relative to $1$. \item We assume that\footnote{In the proof of the main theorem (Theorem~\ref{T:MAINTHEOREM}), one sets $\varepsilon = C' \mathring{\upepsilon}$, where $C' > 1$ is chosen to be sufficiently large and $\mathring{\upepsilon}$ is assumed to be sufficiently small. This is compatible with \eqref{E:DATAEPSILONVSBOOTSTRAPEPSILON}.} \begin{align} \label{E:DATAEPSILONVSBOOTSTRAPEPSILON} \mathring{\upepsilon} & \leq \varepsilon. \end{align} \end{itemize} The first two assumptions will allow us, in particular, to treat error terms of size $\varepsilon \mathring{\updelta}^k$ as small quantities, where $k \geq 0$ is an integer. The third assumption is relevant because we only need to control the solution for times $t < 2 \mathring{\updelta}_*^{-1}$, which is plenty of time for us to show that a shock forms; hence, in many estimates, we can consider factors of $t$ as being bounded by the ``constant'' $C = 2 \mathring{\updelta}_*^{-1}$. The assumption \eqref{E:DATAEPSILONVSBOOTSTRAPEPSILON} is convenient for closing our bootstrap argument.
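Schematically, the above assumptions can be summarized by the order in which the parameters are fixed, with each parameter chosen in a manner that is allowed to depend on the ones fixed before it (this chain is only an illustrative summary, not an additional assumption):
\begin{align*}
\underbrace{\mathring{\updelta}, \, \mathring{\updelta}_*}_{\mbox{\upshape no smallness required}}
\ \longrightarrow \
\underbrace{\mathring{\upalpha}}_{\mbox{\upshape small relative to } 1}
\ \longrightarrow \
\underbrace{\mathring{\upepsilon}}_{\mbox{\upshape small relative to } 1, \, \mathring{\updelta}^{-1}, \, \mathring{\updelta}_*}
\ \longrightarrow \
\varepsilon := C' \mathring{\upepsilon}.
\end{align*}
In particular, once $\varepsilon$ is fixed in terms of $\mathring{\upepsilon}$ with $C' > 1$ as in the footnote above, the relation \eqref{E:DATAEPSILONVSBOOTSTRAPEPSILON} holds automatically.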
The smallness assumption on $\mathring{\upalpha}$ allows us to control the size of some key structural coefficients in our estimates (see, for example, RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}) and ensures that the solution remains in the regime of hyperbolicity of the equations. \subsection{Existence of simple plane waves and of admissible initial data} \label{SS:EXISTENCEOFDATA} In this subsection, we show that under the assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms, there exist plane symmetric initial data (i.e., initial data on $\Sigma_0$ that depend only on the Cartesian coordinate $x^1$) for the system \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} that are compactly supported in $\Sigma_0^1$ and such that the solution has the following four properties. \begin{enumerate} \item The solution is plane symmetric, i.e., it depends only on the Cartesian coordinates $t$ and $x^1$, and, since $Y = \partial_2$ in plane symmetry, the $Y$ derivatives of the solution are $0$. \item The solution is simple and outgoing, i.e., $w = 0$ and $L \Psi = 0$. \item The assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} hold with $\mathring{\upepsilon} = 0$, consistent with the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. \item The $L^{\infty}$ assumption \eqref{E:PSIITSELFLINFTYSMALLDATAASSUMPTIONSALONGSIGMA0} holds, where $\mathring{\upalpha}$ can be made arbitrarily small, consistent with the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. \end{enumerate} The key point is that (non-symmetric) perturbations of the above solutions will obey the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS} and hence fall under the scope of our main results. 
\begin{remark}[\textbf{The vanishing of} $\mathring{\upepsilon}$ \textbf{for simple plane symmetric waves}] \label{R:VANISHINGOFUPEPSILON} It is straightforward, though somewhat tedious, to show that if assumptions (1) and (2) above hold, then (3) follows automatically. That is, for simple outgoing plane wave solutions (which by definition verify $L \Psi \equiv 0$ and $\vec{W} \equiv 0$) whose initial data are supported in $\Sigma_0^1$, the assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} hold with $\mathring{\upepsilon} = 0$. The model problem of Subsubsect.\ \ref{SSS:NEARLYSIMPLEWAVES} is a good starting point for readers interested in working out the details. \end{remark} \begin{remark}[\textbf{Ignoring the $x^2$ direction in this subsection}] \label{R:IGNORINGX2} In this subsection only, we ignore the $x^2$ direction (since we are considering plane symmetric solutions). For example, here we identify $\Sigma_s$ with the constant-time hypersurface $\lbrace (t,x^1) \in \mathbb{R} \times \mathbb{R} \ | \ t = s \rbrace$. \end{remark} To show that solutions with the properties (1)-(4) exist, we first show that for plane symmetric initial data that are compactly supported in $\Sigma_0^1$ and such that $w|_{\Sigma_0^1} = \partial_t w|_{\Sigma_0^1} = L \Psi|_{\Sigma_0^1} = 0$, the solution to \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} is also plane symmetric (i.e., it depends only on $t$ and $x^1$) and moreover, that $w = \partial_t w = L \Psi = 0$, as long as the solution remains smooth. The fact that plane symmetric initial data lead to plane symmetric solutions with $Y = \partial_2$ is a standard consequence of the fact that equations \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} and the eikonal equation \eqref{E:APPENDIXGEOMETRICTORUSCOORD} are invariant under the Cartesian coordinate translations $x^2 \rightarrow x^2 + a$ (with $a \in [0,1)$ and $x^2 + a$ interpreted mod $\mathbb{T}$), and we will not discuss these facts further here.
Next, we note that in plane symmetry, equations \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} can be written in the following (schematic) form: \begin{subequations} \begin{align} (\upmu L + 2 \breve{X}) (L \Psi) & = \mathrm{f} \cdot L \Psi + \mathrm{f} \cdot (w,\partial w), \label{E:FASTWAVEINPLANENSYMMETRY} \\ (h^{-1})^{\alpha \beta}(\Psi,\vec{W}) \partial_{\alpha} \partial_{\beta} w & = \mathrm{f} \cdot L \Psi + \mathrm{f} \cdot (w,\partial w), \label{E:SLOWWAVEINPLANENSYMMETRY} \end{align} \end{subequations} where the $\mathrm{f}$ are smooth functions of $\Psi$, $w$, $\partial w$, etc.\ (the precise details of the $\mathrm{f}$ are not important here). Equations \eqref{E:FASTWAVEINPLANENSYMMETRY}-\eqref{E:SLOWWAVEINPLANENSYMMETRY} follow from equations \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE}, \eqref{E:CHIINTERMSOFOTHERVARIABLES}, and \eqref{E:LONINSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED}, Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, our assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms, and the fact that all $ {{d \mkern-9mu /} }$ derivatives of all scalar unknowns vanish in plane symmetry. We will now sketch a proof that if the (plane symmetric) initial data for \eqref{E:FASTWAVEINPLANENSYMMETRY}-\eqref{E:SLOWWAVEINPLANENSYMMETRY} verify $w|_{\Sigma_0^1} = \partial_t w|_{\Sigma_0^1} = L \Psi|_{\Sigma_0^1} = 0$, then $w = \partial_t w = L \Psi = 0$, as long as the solution remains smooth. To start, we assume that we have a smooth plane symmetric solution to \eqref{E:FASTWAVEINPLANENSYMMETRY}-\eqref{E:SLOWWAVEINPLANENSYMMETRY} with $\upmu > 0$. 
Multiplying equation \eqref{E:FASTWAVEINPLANENSYMMETRY} by $L \Psi$, integrating by parts over $\Sigma_t$, and using the assumption $L \Psi|_{\Sigma_0^1} = 0$, we obtain the a priori energy estimate $ \int_{\Sigma_t} (L \Psi)^2 \, dx^1 \lesssim \int_{s=0}^t \int_{\Sigma_s} (L \Psi)^2 \, dx^1 \, ds + \int_{s=0}^t \int_{\Sigma_s} |L \Psi|(|w| + |\partial w|) \, dx^1 \, ds $, where the implicit constants depend on the solution (in particular they can depend on an upper bound for $1/\upmu$). Similarly, using standard energy estimates for the wave equation \eqref{E:SLOWWAVEINPLANENSYMMETRY} (in the spirit of the estimates of Prop.\ \ref{P:SLOWWAVEDIVTHM}) and the assumption $w|_{\Sigma_0} = \partial_t w|_{\Sigma_0} = 0$, we obtain the a priori energy estimate $ \int_{\Sigma_t} \left\lbrace |\partial w|^2 + w^2 \right\rbrace \, dx^1 \lesssim \int_{s=0}^t \int_{\Sigma_s} \left\lbrace |\partial w|^2 + w^2 \right\rbrace \, dx^1 \, ds + \int_{s=0}^t \int_{\Sigma_s} |L \Psi|(|w| + |\partial w|) \, dx^1 \, ds $. Hence, setting $ Q(t) := \int_{\Sigma_t} \left\lbrace (L \Psi)^2 + |\partial w|^2 + w^2 \right\rbrace \, dx^1 $ and adding the two a priori energy estimates, we find that $Q(t) \lesssim \int_{s=0}^t Q(s) \, ds$. From Gronwall's inequality, we conclude that $Q(t) = 0$ (for times $t$ such that the solution is smooth), which is the desired result. We have therefore shown that equations \eqref{E:FASTWAVE}-\eqref{E:SLOWWAVE} admit simple plane wave solutions with $w = 0$ and $L \Psi = 0$, starting from any smooth initial data that are compactly supported in $\Sigma_0^1$ and such that $w|_{\Sigma_0^1} = \partial_t w|_{\Sigma_0^1} = L \Psi|_{\Sigma_0^1} = 0$. To complete our discussion, it remains for us to show that there exist plane symmetric initial data for $\Psi$ that satisfy our smallness assumptions and such that $L \Psi|_{\Sigma_0^1} = 0$. 
In our construction, we will think of $\Psi|_{\Sigma_0}$ and $\partial_t \Psi|_{\Sigma_0}$ as the initial data that we are free to prescribe. To proceed, we let $\phi = \phi(x^1)$ be any smooth, non-trivial function of the Cartesian coordinate $x^1$ that is compactly supported in $[0,1]$, and we set $\Psi|_{\Sigma_0} := \phi$. We now construct data for $\partial_t \Psi$ such that $L \Psi|_{\Sigma_0} = 0$. Using \eqref{E:PERTURBEDPART} and the second identities in \eqref{E:LUNITOFUANDT} and \eqref{E:INITIALRELATIONS}, we see that $L \Psi|_{\Sigma_0} = 0$ is equivalent to $\partial_t \Psi|_{\Sigma_0} = - L^1 \partial_1 \Psi|_{\Sigma_0} = - L^1 \partial_1 \phi = - \left\lbrace \sqrt{(\underline{g}^{-1})^{11}\circ \phi} - (g^{-1})^{01}\circ \phi \right\rbrace \partial_1 \phi $. Put differently, we can prescribe $\partial_t \Psi|_{\Sigma_0}$ in terms of $\phi$ to achieve the desired result $L \Psi|_{\Sigma_0} = 0$. We have therefore constructed plane symmetric initial data such that $L \Psi|_{\Sigma_0} = 0$. To complete the discussion in this subsection, it remains only for us to point out that by rescaling $\phi \rightarrow \uplambda \phi$ (where $\phi$ is as in the previous paragraph) and choosing the real number $\uplambda$ to be sufficiently small but non-zero, we can ensure that $\| \Psi \|_{L^{\infty}(\Sigma_0^1)}$ is as small as we want, consistent with our assumption \eqref{E:PSIITSELFLINFTYSMALLDATAASSUMPTIONSALONGSIGMA0} and with the smallness assumption for $\mathring{\upalpha}$ that we imposed in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}. \section{Energies, null fluxes, and energy-null flux identities} \label{S:ENERGIES} In this section, we define the energies and null fluxes that we use in our $L^2$ analysis. We also provide the basic energy-null flux identities for solutions to the fast wave equation \eqref{E:FASTWAVE} and to the slow wave equation, in the first-order form \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS}.
\subsection{Definitions of the energies and null fluxes} \label{SS:ENERGYDEFINITIONS} \subsubsection{Energies and null fluxes for the fast wave} \begin{definition}[\textbf{Energy and null flux for the fast wave}] \label{D:ENERGYFLUX} In terms of the geometric forms of Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}, we define the fast wave energy functional $\mathbb{E}_{(Fast)}[\cdot]$ and the fast wave null flux functional $\mathbb{F}_{(Fast)}[\cdot]$ as follows: \begin{subequations} \begin{align} \label{E:ENERGYORDERZEROCOERCIVENESS} \mathbb{E}_{(Fast)}[f](t,u) & = \int_{\Sigma_t^u} \left\lbrace \frac{1}{2} (1 + 2 \upmu) \upmu (L f)^2 + 2 \upmu (L f) \breve{X} f + 2 (\breve{X} f)^2 + \frac{1}{2} (1 + 2 \upmu)\upmu | {{d \mkern-9mu /} } f|^2 \right\rbrace \, d \underline{\varpi}, \\ \mathbb{F}_{(Fast)}[f](t,u) & = \int_{\mathcal{P}_u^t} \left\lbrace (1 + \upmu)(L f)^2 + \upmu | {{d \mkern-9mu /} } f|^2 \right\rbrace \, d \overline{\varpi}. \label{E:NULLFLUXENERGYORDERZEROCOERCIVENESS} \end{align} \end{subequations} \end{definition} \subsubsection{Energies and null fluxes for the slow wave} Let \begin{align} \label{E:ALTSLOWARRAY} \vec{V} & := (v,v_0,v_1,v_2) \end{align} be an array comprising four scalar functions. In later applications, the entries of $\vec{V}$ will be derivatives of the entries of the slow wave array $\vec{W}$ defined in \eqref{E:SLOWWAVEVARIABLES}. To construct energies and null fluxes for the slow wave, we will rely on the compatible current vectorfield $J = J[\vec{V}]$, which we define relative to the Cartesian coordinates as follows: \begin{align} \label{E:SLOWCURRENT} J^{\alpha}[\vec{V}] & := 2 Q^{\alpha \beta}[\vec{V}] \delta_{\beta}^0 - v^2 (h^{-1})^{\alpha \beta} \delta_{\beta}^0.
\end{align} In \eqref{E:SLOWCURRENT}, \begin{align} \label{E:ENMOMENTUMSLOWWAVE} Q^{\alpha \beta}[\vec{V}] := (h^{-1})^{\alpha \kappa} (h^{-1})^{\beta \lambda} v_{\kappa} v_{\lambda} - \frac{1}{2} (h^{-1})^{\alpha \beta} (h^{-1})^{\kappa \lambda} v_{\kappa} v_{\lambda} \end{align} is the type $\binom{2}{0}$ energy-momentum tensorfield of the slow wave equation \eqref{E:SLOWWAVE}. \begin{remark}[\textbf{Suppression of some arguments of} $Q$] \label{R:ENMOMEMSUPPRESSIONOFARGUMENTS} Although $Q^{\alpha \beta}[\vec{V}]$ depends on $(\Psi,\vec{W})$ through the Cartesian component functions $(h^{-1})^{\alpha \beta} = (h^{-1})^{\alpha \beta}(\Psi,\vec{W})$, we typically suppress this dependence. \end{remark} Note that \eqref{E:VECTORSGCAUSALIMPLIESHTIMELIKE} and our assumption $(g^{-1})^{\alpha \beta}(\Psi = 0) = (m^{-1})^{\alpha \beta} = \mbox{\upshape diag}(-1,1,1)$ together imply that when $|\upgamma| + |\vec{W}|$ is sufficiently small, the one-form with Cartesian components $\delta_{\alpha}^0$ is past-directed (by \eqref{E:ZEROZEROISMINUSONE}) and $h$-timelike. Thus, the product $2 Q^{\alpha \beta}[\vec{V}] \delta_{\beta}^0$ on RHS~\eqref{E:SLOWCURRENT} is the contraction of the energy-momentum tensorfield of the slow wave with a past-directed $h$-timelike (see Footnote~\ref{FN:HTIMELIKE}) one-form. We also note the following explicit formulas, the first of which relies on the second relation in \eqref{E:ZEROZEROISMINUSONE}, where $(i=1,2)$: \begin{subequations} \begin{align} \label{E:SLOWCURRENTTIMECARTESIANCOMPONENT} J^0[\vec{V}] & = 2 (h^{-1})^{\alpha 0} (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} + (h^{-1})^{\alpha \beta} v_{\alpha} v_{\beta} + v^2, \\ J^i[\vec{V}] & = 2 (h^{-1})^{\alpha i} (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} - (h^{-1})^{i0} (h^{-1})^{\alpha \beta} v_{\alpha} v_{\beta} - (h^{-1})^{i0} v^2. 
\label{E:SLOWCURRENTSPATIALCARTESIANCOMPONENTS} \end{align} \end{subequations} We now again consider the past-directed one-form with Cartesian components $\delta_{\alpha}^0$, which we have already shown to be $h$-timelike when $|\upgamma| + |\vec{W}|$ is sufficiently small. Since, on RHS~\eqref{E:SLOWCURRENT}, this one-form is contracted against the energy-momentum tensorfield $Q[\vec{V}]$, the well-known dominant energy condition for $Q[\vec{V}]$ implies that if $\omega$ is any non-trivial past-directed, $h$-timelike one-form and if $\vec{V} \neq 0$, then $J^{\alpha} \omega_{\alpha} > 0$. It follows that $h(J,J) \leq 0$ and thus, by definition, $J$ is $h$-causal. Moreover, taking $\omega_{\alpha} := \delta_{\alpha}^0$, we find that $J^0 > 0$ whenever $\vec{V} \neq 0$. Thus, by \eqref{E:VECTORSHCAUSALIMPLIESGTIMELIKE} (with $J[\vec{V}]$ in the role of $V$ in \eqref{E:VECTORSHCAUSALIMPLIESGTIMELIKE}), the following holds when $|\upgamma| + |\vec{W}|$ is sufficiently small: \begin{align} \label{E:JISGTIMELIKE} \vec{V} \neq 0 \implies \mbox{\upshape the vectorfield $J[\vec{V}]$ is future-directed and $g$-timelike.} \end{align} We will use these facts in the proof of Lemma~\ref{L:COERCIVENESSOFSLOWWAVEENERGIESANDFLUXES}. We find it convenient to \emph{define} the slow wave energy and the slow wave null flux relative to the Cartesian forms. However, when describing their coerciveness properties and deriving energy estimates, we use the geometric forms; see Lemma~\ref{L:COERCIVENESSOFSLOWWAVEENERGIESANDFLUXES}. \begin{definition}[\textbf{Energy and null flux for the slow wave}] \label{D:SLOWWAVEENERGYFLUX} Let $J^{\alpha}[\vec{V}]$ be the compatible current vectorfield with Cartesian components given by \eqref{E:SLOWCURRENTTIMECARTESIANCOMPONENT}-\eqref{E:SLOWCURRENTSPATIALCARTESIANCOMPONENTS}. 
In terms of the Cartesian forms of Def.~\ref{D:CARTESIANFORMS} and the one-form $H_{\alpha}$ defined in \eqref{E:EUCLIDEANNORMALTONULLHYPERSURFACE}, we define the slow wave energy functional $\mathbb{E}_{(Slow)}[\cdot]$ and the slow wave null flux functional $\mathbb{F}_{(Slow)}[\cdot]$ as follows: \begin{align} \label{E:SLOWWAVEENERGYFLUX} \mathbb{E}_{(Slow)}[\vec{V}](t,u) & := \int_{\Sigma_t^u} J^0[\vec{V}] \, d \Sigma, && \mathbb{F}_{(Slow)}[\vec{V}](t,u) := \int_{\mathcal{P}_u^t} J^{\alpha}[\vec{V}] H_{\alpha} \, d \mathcal{P}. \end{align} \end{definition} \subsection{Coerciveness of the energy and null flux for the slow wave} \label{SS:ENERGYCOERCIVENESSSLOWWAVE} The coerciveness properties of the energy and null flux for the fast wave are fairly apparent from Def.~\ref{D:ENERGYFLUX}. In contrast, it takes some effort to reveal the coerciveness of the energy and null flux for the slow wave. The next lemma yields the desired coerciveness. \begin{lemma}[\textbf{Coerciveness of the energy and null flux for the slow wave}] \label{L:COERCIVENESSOFSLOWWAVEENERGIESANDFLUXES} In terms of the geometric forms of Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}, the slow wave energy $\mathbb{E}_{(Slow)}[\cdot]$ and the slow wave null flux $\mathbb{F}_{(Slow)}[\cdot]$ from Def.~\ref{D:SLOWWAVEENERGYFLUX} enjoy the following coerciveness properties, valid when $|\upgamma| + |\vec{W}|$ is sufficiently small: \begin{subequations} \begin{align} \mathbb{E}_{(Slow)}[\vec{V}](t,u) & \approx \int_{\Sigma_t^u} \upmu \left\lbrace v^2 + \sum_{\alpha = 0}^2 v_{\alpha}^2 \right\rbrace \, d \underline{\varpi}, \label{E:SLOWSIGMATENERGYCOERCIVENESS} \\ \mathbb{F}_{(Slow)}[\vec{V}](t,u) & \approx \int_{\mathcal{P}_u^t} \left\lbrace v^2 + \sum_{\alpha = 0}^2 v_{\alpha}^2 \right\rbrace \, d \overline{\varpi}. 
\label{E:SLOWNULLFLUXCOERCIVENESS} \end{align} \end{subequations} \end{lemma} \begin{remark}[\textbf{On the necessity of the null fluxes and the necessity of the slow speed of the slow wave}] \label{R:NEEDNULLFLUX} It is critically important for our analysis that the null flux $\mathbb{F}_{(Slow)}[\vec{V}]$ controls $\vec{V}$ \emph{without any degenerate factor of} $\upmu$, as RHS~\eqref{E:SLOWNULLFLUXCOERCIVENESS} shows. Similar remarks apply to the fast wave null flux $\mathbb{F}_{(Fast)}[f](t,u)$ defined in \eqref{E:NULLFLUXENERGYORDERZEROCOERCIVENESS}, which controls $(L f)^2$ with a weight that does not degenerate as $\upmu \to 0$. We also recall that, as we described in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}, we use the spacetime integrals from Def.\ \ref{D:COERCIVEINTEGRAL} to obtain non-degenerate control of the $ {{d \mkern-9mu /} }$ derivatives of the fast wave. We also note that to derive the estimate \eqref{E:SLOWNULLFLUXCOERCIVENESS}, we fundamentally rely on the assumption that the wave speed of $\vec{W}$ is slower than the wave speed of $\Psi$; this is the main place in the article where we use that assumption. \end{remark} \begin{proof}[Proof of Lemma~\ref{L:COERCIVENESSOFSLOWWAVEENERGIESANDFLUXES}] Just above equation \eqref{E:JISGTIMELIKE}, we showed that $J^0 = J^0[\vec{V}]> 0$ as long as $\vec{V} \neq 0$ and $|\upgamma| + |\vec{W}|$ is sufficiently small. Since $J^0[\vec{V}]$ is precisely quadratic in its arguments $\vec{V}$, the desired estimate \eqref{E:SLOWSIGMATENERGYCOERCIVENESS} follows from this fact, the second identity in \eqref{E:VOLFORMRELATION}, and the first definition in \eqref{E:SLOWWAVEENERGYFLUX}. To prove \eqref{E:SLOWNULLFLUXCOERCIVENESS}, we first note that the one-form $H$ defined in \eqref{E:EUCLIDEANNORMALTONULLHYPERSURFACE} is past-directed (in view of the last identity in \eqref{E:LUNITOFUANDT}) and $g$-null.
Thus, by \eqref{E:JISGTIMELIKE}, $J^{\alpha} H_{\alpha} > 0$ whenever $\vec{V} \neq 0$ and $|\upgamma| + |\vec{W}|$ is sufficiently small. Since $J^{\alpha} H_{\alpha}$ is precisely quadratic in $\vec{V}$, the desired estimate \eqref{E:SLOWNULLFLUXCOERCIVENESS} follows from the last identity in \eqref{E:VOLFORMRELATION} and the second definition in \eqref{E:SLOWWAVEENERGYFLUX}. \end{proof} \subsection{Energy-null flux identities} \label{SS:ENERGYIDENTITIES} In this subsection, we derive energy-null flux identities that we will later use to control solutions to the system \eqref{E:FASTWAVE} + \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS}. \subsubsection{Energy-null flux identities for the fast wave} \label{SSS:FASTWAVEENERGYID} \begin{proposition}\cite{jSgHjLwW2016}*{Proposition~3.5; \textbf{Fundamental energy-null flux identity for the fast wave}} \label{P:DIVTHMWITHCANCELLATIONS} For solutions $f$ to the inhomogeneous wave equation \begin{align*} \upmu \square_{g(\Psi)} f & = \mathfrak{F}, \end{align*} we have the following identity for $t \geq 0$ and $u \in [0,U_0]$: \begin{align} \label{E:E0DIVID} & \mathbb{E}_{(Fast)}[f](t,u) + \mathbb{F}_{(Fast)}[f](t,u) \\ & = \mathbb{E}_{(Fast)}[f](0,u) + \mathbb{F}_{(Fast)}[f](t,0) - \int_{\mathcal{M}_{t,u}} \left\lbrace (1 + 2 \upmu) (L f) + 2 \breve{X} f \right\rbrace \mathfrak{F} \, d \varpi \notag \\ & \ \ - \frac{1}{2} \int_{\mathcal{M}_{t,u}} [L \upmu]_- | {{d \mkern-9mu /} } f|^2 \, d \varpi + \sum_{i=1}^5 \int_{\mathcal{M}_{t,u}} \basicenergyerrorarg{T}{i}[f] \, d \varpi, \notag \end{align} where $f_+ := \max \lbrace f,0 \rbrace$, $f_- := \max \lbrace -f, 0 \rbrace$, and \begin{subequations} \begin{align} \basicenergyerrorarg{T}{1}[f] & := (L f)^2 \left\lbrace - \frac{1}{2} L \upmu + \breve{X} \upmu - \frac{1}{2} \upmu {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi - {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} - \upmu {\mbox{\upshape{tr}}_{\mkern-2mu
\gsphere}} { {k \mkern-10mu /} \, } ^{(Tan-\Psi)} \right\rbrace, \label{E:MULTERRORINTEG1} \\ \basicenergyerrorarg{T}{2}[f] & := - (L f) (\breve{X} f) \left\lbrace {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi + 2 {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} + 2 \upmu {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } ^{(Tan-\Psi)} \right\rbrace, \label{E:MULTERRORINTEG2} \\ \basicenergyerrorarg{T}{3}[f] & := \upmu | {{d \mkern-9mu /} } f|^2 \left\lbrace \frac{1}{2} \frac{[L \upmu]_+}{\upmu} + \frac{\breve{X} \upmu}{\upmu} + 2 L \upmu - \frac{1}{2} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi - {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } ^{(Trans-\Psi)} - \upmu {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} { {k \mkern-10mu /} \, } ^{(Tan-\Psi)} \right\rbrace, \label{E:MULTERRORINTEG3} \\ \basicenergyerrorarg{T}{4}[f] & := (L f)(\angdiffuparg{\#} f) \cdot \left\lbrace (1 - 2 \upmu) {{d \mkern-9mu /} } \upmu + 2 \upzeta^{(Trans-\Psi)} + 2 \upmu \upzeta^{(Tan-\Psi)} \right\rbrace, \label{E:MULTERRORINTEG4} \\ \basicenergyerrorarg{T}{5}[f] & := - 2 (\breve{X} f)(\angdiffuparg{\#} f) \cdot \left\lbrace {{d \mkern-9mu /} } \upmu + 2 \upzeta^{(Trans-\Psi)} + 2 \upmu \upzeta^{(Tan-\Psi)} \right\rbrace. \label{E:MULTERRORINTEG5} \end{align} \end{subequations} The tensorfields $\upchi$, $\upzeta^{(Trans-\Psi)}$, $ { {k \mkern-10mu /} \, } ^{(Trans-\Psi)}$, $\upzeta^{(Tan-\Psi)}$, and $ { {k \mkern-10mu /} \, } ^{(Tan-\Psi)}$ from above are as in \eqref{E:CHIINTERMSOFOTHERVARIABLES}, \eqref{E:ZETATRANSVERSAL}, \eqref{E:KABTRANSVERSAL}, \eqref{E:ZETAGOOD}, and \eqref{E:KABGOOD}, while the symbol ``$T$'' in \eqref{E:MULTERRORINTEG1}-\eqref{E:MULTERRORINTEG5} merely signifies that the energies and error terms are tied to the multiplier vectorfield from \eqref{E:MULT}, as is shown by the proof of \cite{jSgHjLwW2016}*{Proposition~3.5}. 
\end{proposition} \subsubsection{Energy-null flux identities for the slow wave} \label{SSS:SLOWWAVEENERGYID} We now derive an analog of Prop.~\ref{P:DIVTHMWITHCANCELLATIONS} for the slow wave variable. \begin{proposition}[\textbf{Fundamental energy-null flux identity for the slow wave}] \label{P:SLOWWAVEDIVTHM} Solutions $\vec{V} := (v,v_0,v_1,v_2)$ to the inhomogeneous system \begin{subequations} \begin{align} \upmu \partial_t v_0 & = \upmu (h^{-1})^{ab} \partial_a v_b + 2 \upmu (h^{-1})^{0a} \partial_a v_0 + F_0, \label{E:SLOWTIMEINHOM} \\ \upmu \partial_t v_i & = \upmu \partial_i v_0 + F_i, \label{E:SLOWSPACEINHOM} \\ \upmu \partial_t v & = \upmu v_0 + F, \label{E:SLOWINHOM} \\ \upmu \partial_i v_j & = \upmu \partial_j v_i + F_{ij} \label{E:MIXEDPARTIALSINHOM} \end{align} \end{subequations} verify the following energy identity for $t \geq 0$ and $u \in [0,U_0]$: \begin{align} \label{E:SLOWENERGYID} & \mathbb{E}_{(Slow)}[\vec{V}](t,u) + \mathbb{F}_{(Slow)}[\vec{V}](t,u) \\ & = \mathbb{E}_{(Slow)}[\vec{V}](0,u) + \mathbb{F}_{(Slow)}[\vec{V}](t,0) \notag \\ & \ \ + \int_{\mathcal{M}_{t,u}} \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\vec{V}] \, d \varpi \notag \\ & \ \ + \int_{\mathcal{M}_{t,u}} \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \left\lbrace 4 (h^{-1})^{\alpha 0} (h^{-1})^{\beta 0} v_{\alpha} F_{\beta} + 2 (h^{-1})^{\alpha \beta} v_{\alpha} F_{\beta} + 2 (h^{-1})^{\alpha a} (h^{-1})^{b 0} v_{\alpha} F_{ab} + 2 v F \right\rbrace \, d \varpi, \notag \end{align} where the schematically denoted functions $\upgamma \mathrm{f}(\upgamma)$ are smooth and vanish when $\upgamma = 0$, and \begin{align} \label{E:SLOWWAVEBASICENERGYINTEGRAND} \mathfrak{W}[\vec{V}] & := 4 \upmu (\partial_t (h^{-1})^{\alpha 0}) (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} + \upmu (\partial_t(h^{-1})^{\alpha \beta}) v_{\alpha} v_{\beta} \\ & \ \ + 2 \upmu (\partial_a (h^{-1})^{\alpha a}) (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} + 2 \upmu 
(h^{-1})^{\alpha a} (\partial_a (h^{-1})^{\beta 0}) v_{\alpha} v_{\beta} \notag \\ & \ \ - \upmu (\partial_a (h^{-1})^{a0}) (h^{-1})^{\alpha \beta} v_{\alpha} v_{\beta} - \upmu (h^{-1})^{a0} (\partial_a (h^{-1})^{\alpha \beta}) v_{\alpha} v_{\beta} - \upmu (\partial_a (h^{-1})^{a0}) v^2 \notag \\ & \ \ - 2 \upmu (h^{-1})^{\alpha 0} v v_{\alpha}. \notag \end{align} \end{proposition} \begin{remark} Note that the above expressions depend on $(\Psi,\vec{W})$ through $(h^{-1})^{\alpha \beta} = (h^{-1})^{\alpha \beta}(\Psi,\vec{W})$. \end{remark} \begin{proof}[Proof of Prop.\ \ref{P:SLOWWAVEDIVTHM}] We consider the vectorfield $J = J[\vec{V}]$ defined relative to the Cartesian coordinates by \eqref{E:SLOWCURRENT}. Using the second relation in \eqref{E:ZEROZEROISMINUSONE}, the identities \eqref{E:SLOWCURRENTTIMECARTESIANCOMPONENT}-\eqref{E:SLOWCURRENTSPATIALCARTESIANCOMPONENTS}, and equations \eqref{E:SLOWTIMEINHOM}-\eqref{E:MIXEDPARTIALSINHOM}, we compute that \begin{align} \label{E:UPMUDIVOFSLOWJ} \partial_{\alpha} J^{\alpha} & = 4 (\partial_t (h^{-1})^{\alpha 0}) (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} + (\partial_t(h^{-1})^{\alpha \beta}) v_{\alpha} v_{\beta} \\ & \ \ + 2 (\partial_a (h^{-1})^{\alpha a}) (h^{-1})^{\beta 0} v_{\alpha} v_{\beta} + 2 (h^{-1})^{\alpha a} (\partial_a (h^{-1})^{\beta 0}) v_{\alpha} v_{\beta} \notag \\ & \ \ - (\partial_a (h^{-1})^{a0}) (h^{-1})^{\alpha \beta} v_{\alpha} v_{\beta} - (h^{-1})^{a0} (\partial_a (h^{-1})^{\alpha \beta}) v_{\alpha} v_{\beta} - (\partial_a (h^{-1})^{a0}) v^2 \notag \\ & \ \ - 2 (h^{-1})^{\alpha 0} v v_{\alpha} \notag \\ & \ \ + \frac{4}{\upmu} (h^{-1})^{\alpha 0} (h^{-1})^{\beta 0} F_{\alpha} v_{\beta} + \frac{2}{\upmu} (h^{-1})^{\alpha \beta} F_{\alpha} v_{\beta} + \frac{2}{\upmu} (h^{-1})^{\alpha a} (h^{-1})^{b 0} v_{\alpha} F_{ab} + \frac{2}{\upmu} v F. 
\notag \end{align} We now apply the divergence theorem to the vectorfield $J$ on the region $\mathcal{M}_{t,u}$, where we use the Cartesian coordinates, the Euclidean metric $\delta^{\alpha \beta} := \mbox{\upshape diag}(1,1,1)$ on $\mathbb{R} \times \Sigma$, and the Cartesian forms of Def.~\ref{D:CARTESIANFORMS} in all computations. As the final step, we use the identities in \eqref{E:VOLFORMRELATION} to express all integrals as integrals with respect to the geometric integration measures corresponding to Def.~\ref{D:NONDEGENERATEVOLUMEFORMS}. Also taking into account Def.~\ref{D:SLOWWAVEENERGYFLUX}, we arrive at the desired identity \eqref{E:SLOWENERGYID}. Note that the one-form $H_{\alpha}$ on RHS~\eqref{E:SLOWWAVEENERGYFLUX} is the Euclidean unit-length co-normal to the hypersurfaces $\mathcal{P}_u$, which is the reason that $J^{\alpha}[\vec{V}] H_{\alpha}$ arises when we apply the standard divergence theorem relative to the Cartesian coordinates on $\mathcal{M}_{t,u}$. \end{proof} \section{Preliminary pointwise estimates} \label{S:PRELIMINARYPOINTWISE} In this section, we use the data-size assumptions, smallness assumptions, and bootstrap assumptions of Sect.\ \ref{S:NORMSANDBOOTSTRAP} to derive preliminary $L^{\infty}$ and pointwise estimates for the solution. These estimates serve as the starting point for related estimates that we derive in Sects.\ \ref{S:LINFINITYESTIMATESFORHIGHERTRANSVERSAL}-\ref{S:POINTWISEESTIMATES}. \begin{remark}[\textbf{Many estimates were proved in} \cite{jSgHjLwW2016}] \label{R:ESTIMATESPROVEDINOTHERPAPER} Many of the estimates involving the fast wave variable $\Psi$ and the eikonal function are independent of the slow wave variable $\vec{W}$ and were proved in \cite{jSgHjLwW2016}; thus, we cite \cite{jSgHjLwW2016} for many of the estimates.
\end{remark} \subsection{Notation for repeated differentiation} \label{SS:NOTATIONFORREPEATED} \begin{definition}[\textbf{Notation for repeated differentiation}] \label{D:REPEATEDDIFFERENTIATIONSHORTHAND} Recall that the commutation vectorfield sets $\mathscr{Z}$ and $\mathscr{P}$ are defined in Def.~\ref{D:COMMUTATIONVECTORFIELDS}. We label the three vectorfields in $\mathscr{Z}$ as follows: $Z_{(1)} = L, Z_{(2)} = Y, Z_{(3)} = \breve{X}$. Note that $\mathscr{P} = \lbrace Z_{(1)}, Z_{(2)} \rbrace$. We define the following vectorfield operators: \begin{itemize} \item If $\vec{I} = (\iota_1, \iota_2, \cdots, \iota_N)$ is a multi-index of order $|\vec{I}| := N$ with $\iota_1, \iota_2, \cdots, \iota_N \in \lbrace 1,2,3 \rbrace$, then $\mathscr{Z}^{\vec{I}} := Z_{(\iota_1)} Z_{(\iota_2)} \cdots Z_{(\iota_N)}$ denotes the corresponding $N^{th}$ order differential operator. We write $\mathscr{Z}^N$ rather than $\mathscr{Z}^{\vec{I}}$ when we are not concerned with the structure of $\vec{I}$, and we sometimes omit the superscript when $N=1$. \item Similarly, $ { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{\vec{I}} := { \mathcal{L} \mkern-10mu / } _{Z_{(\iota_1)}} { \mathcal{L} \mkern-10mu / } _{Z_{(\iota_2)}} \cdots { \mathcal{L} \mkern-10mu / } _{Z_{(\iota_N)}}$ denotes an $N^{th}$ order $\ell_{t,u}$-projected Lie derivative operator (see Def.~\ref{D:PROJECTEDLIE}), and we write $ { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^N$ when we are not concerned with the structure of $\vec{I}$. \item If $\vec{I} = (\iota_1, \iota_2, \cdots, \iota_N)$, then $\vec{I}_1 + \vec{I}_2 = \vec{I}$ means that $\vec{I}_1 = (\iota_{k_1}, \iota_{k_2}, \cdots, \iota_{k_m})$ and $\vec{I}_2 = (\iota_{k_{m+1}}, \iota_{k_{m+2}}, \cdots, \iota_{k_N})$, where $1 \leq m \leq N$ and $k_1, k_2, \cdots, k_N$ is a permutation of $1,2,\cdots,N$. \item Sums such as $\vec{I}_1 + \vec{I}_2 + \cdots + \vec{I}_M = \vec{I}$ have an analogous meaning.
\item $\mathcal{P}_u$-tangential vectorfield operators such as $\mathscr{P}^{\vec{I}}$ are defined analogously, except in this case we have $\iota_1, \iota_2, \cdots, \iota_N \in \lbrace 1,2 \rbrace$. We write $\mathscr{P}^N$ rather than $\mathscr{P}^{\vec{I}}$ when we are not concerned with the structure of $\vec{I}$, and we sometimes omit the superscript when $N=1$. \end{itemize} \end{definition} \subsection{Basic assumptions, facts, and estimates that we use silently} \label{SS:OFTENUSEDESTIMATES} For the reader's convenience, we present here some basic assumptions, facts, and estimates (similar to those from \cite{jSgHjLwW2016}*{Section 8.2}) that we silently use throughout the rest of the paper when deriving estimates. \begin{enumerate} \item All of the estimates that we derive hold on the bootstrap region $\mathcal{M}_{T_{(Boot)},U_0}$. Moreover, in deriving estimates, we rely on the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:AUXILIARYBOOTSTRAP}, the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, and in particular the estimates for the data of the eikonal function quantities provided by Lemma~\ref{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0}. Moreover, when we refer to ``the bootstrap assumptions,'' we mean the fundamental bootstrap assumptions of Subsect.\ \ref{SS:PSIBOOTSTRAP} and the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:AUXILIARYBOOTSTRAP}. \item We often use the assumption \eqref{E:DATAEPSILONVSBOOTSTRAPEPSILON} without explicitly mentioning it. \item All quantities that we estimate can be controlled in terms of the quantities $\underline{\upgamma} = \lbrace \Psi, \upmu - 1, L_{(Small)}^1, L_{(Small)}^2 \rbrace$, $\vec{W}$, and their derivatives. Note that many of these quantities are small, but for the solutions under consideration, the $\breve{X}$ derivatives of $\underline{\upgamma}$ do not have to be small, nor do $\upmu - 1$ or $L \upmu$. 
\item We typically use the Leibniz rule for the operators $ { \mathcal{L} \mkern-10mu / } _Z$ and $ {\nabla \mkern-14mu / \,} $ when deriving pointwise estimates for the $ { \mathcal{L} \mkern-10mu / } _Z$ and $ {\nabla \mkern-14mu / \,} $ derivatives of tensor products of the schematic form $\prod_{i=1}^m v_i$, where the $v_i$ are scalar functions or $\ell_{t,u}$-tangent tensors. Our derivative counts are such that all $v_i$ except at most one are uniformly bounded in $L^{\infty}$ on $\mathcal{M}_{T_{(Boot)},U_0}$. Thus, our pointwise estimates often explicitly feature (on the right-hand sides) only the factor with the most derivatives on it, multiplied by a constant (often implicit) that uniformly bounds the other factors. In some estimates, the right-hand sides also gain a smallness factor, such as $\mathring{\upalpha}$ or $\varepsilon^{1/2}$, generated by the remaining $v_i$'s. \item The operators $ { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^N$ commute through $ {{d \mkern-9mu /} }$, as is shown by Lemma~\ref{L:LANDRADCOMMUTEWITHANGDIFF}. \item For scalar functions $f$, we have $ \left| Y f \right| = \left\lbrace 1 + \mathcal{O}(\upgamma) \right\rbrace \left| {{d \mkern-9mu /} } f \right| = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}^{1/2}) + \mathcal{O}(\varepsilon^{1/2}) \right\rbrace \left| {{d \mkern-9mu /} } f \right| $, a fact which follows from the proofs of Lemmas~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS} and \ref{L:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} and the bootstrap assumptions. Hence, for scalar functions $f$, we sometimes schematically depict $ {{d \mkern-9mu /} } f$ as $\left(1 + \mathcal{O}(\upgamma)\right) P f$ or $P f$ when the factor $1 + \mathcal{O}(\upgamma)$ is not important.
Similarly, Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, the schematic identity $ {\nabla \mkern-14mu / \,} _{Y} \xi = { \mathcal{L} \mkern-10mu / } _{Y} \xi + \sum \xi \cdot {\nabla \mkern-14mu / \,} Y $, and the proof of Lemma~\ref{L:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} imply that we can depict $ {\Delta \mkern-12mu / \, } f$ by\footnote{In \cite{jSgHjLwW2016}, we schematically denoted $ {\Delta \mkern-12mu / \, } f$ by $\mathrm{f}(\mathscr{P}^{\leq 1} \upgamma,\gsphere^{-1}) \mathscr{P}_*^{[1,2]} f$. Here we note that, in fact, the dependence of $\mathrm{f}$ on $\gsphere^{-1}$ is not needed.} $ \mathrm{f}(\mathscr{P}^{\leq 1} \upgamma) \mathscr{P}_*^{[1,2]} f $ (see Subsubsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the notation $\mathscr{P}_*^{[1,2]} f$) or $\mathscr{P}_*^{[1,2]} f$ when the factor $\mathrm{f}(\mathscr{P}^{\leq 1} \upgamma)$ is not important, and, for type $\binom{0}{n}$ $\ell_{t,u}$-tangent tensorfields $\xi$, $ {\nabla \mkern-14mu / \,} \xi$ by $ \mathrm{f}(\mathscr{P}^{\leq 1} \upgamma,\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq 1} \xi $ (or $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq 1} \xi$ when the factor $\mathrm{f}(\mathscr{P}^{\leq 1} \upgamma,\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2)$ is not important\footnote{In the analogous discussion in \cite{jSgHjLwW2016}, the dependence of $\mathrm{f}$ on $ {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2$ was mistakenly omitted.}). \item The constants $C$ and the implicit constants in our estimates are allowed to depend on the data-size parameters $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$. In contrast, the constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$ can be chosen to be independent of $\mathring{\updelta}$ and $\mathring{\updelta}_*^{-1}$.
See Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} for a precise description of the way in which we allow constants to depend on the various parameters. \end{enumerate} \subsection{Omission of the independent variables in some expressions} \label{SS:OMISSION} We use the following notational conventions in the rest of the article. \begin{itemize} \item Many of our pointwise estimates are stated in the form \[ |f_1| \lesssim F(t)|f_2| \] for some function $F$. Unless we otherwise indicate, it is understood that both $f_1$ and $f_2$ are evaluated at the point with geometric coordinates $(t,u,\vartheta)$. \item Unless we otherwise indicate, in integrals $\int_{\ell_{t,u}} f \, d \uplambda_{{g \mkern-8.5mu /}}$, the integrand $f$ and the length form $d \uplambda_{{g \mkern-8.5mu /}}$ are viewed as functions of $(t,u,\vartheta)$ and $\vartheta$ is the integration variable. \item Unless we otherwise indicate, in integrals $\int_{\Sigma_t^u} f \, d \underline{\varpi}$, the integrand $f$ and the area form $d \underline{\varpi}$ are viewed as functions of $(t,u',\vartheta)$ and $(u',\vartheta)$ are the integration variables. \item Unless we otherwise indicate, in integrals $\int_{\mathcal{P}_u^t} f \, d \overline{\varpi}$, the integrand $f$ and the area form $d \overline{\varpi}$ are viewed as functions of $(t',u,\vartheta)$ and $(t',\vartheta)$ are the integration variables. \item Unless we otherwise indicate, in integrals $\int_{\mathcal{M}_{t,u}} f \, d \varpi$, the integrand $f$ and the volume form $d \varpi$ are viewed as functions of $(t',u',\vartheta)$ and $(t',u',\vartheta)$ are the integration variables. \end{itemize} \subsection{Differential operator comparison estimates} \label{SS:DIFFOPCOMPARISON} Our main goal in this subsection is to derive differential operator comparison estimates. We start with a simple lemma in which we show that the pointwise norm $|\cdot|$ of $\ell_{t,u}$-tangent tensors can be controlled by contractions against the vectorfield $Y$. 
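To illustrate the mechanism in the simplest case, consider an $\ell_{t,u}$-tangent one-form $\xi$ (the case $n = 1$ of the lemma below). Since the curves $\ell_{t,u}$ are one-dimensional, the inverse metric admits the decomposition $(\gsphere^{-1})^{ij} = \frac{1}{|Y|^2} Y^i Y^j$, and thus
\begin{align*}
	|\xi|^2
	& = (\gsphere^{-1})^{ij} \xi_i \xi_j
	= \frac{1}{|Y|^2} (Y^a \xi_a)(Y^b \xi_b)
	= \frac{1}{|Y|^2} |\xi_Y|^2.
\end{align*}
That is, $|\xi| = |Y|^{-1} |\xi_Y|$, and the comparison constant is then controlled by a pointwise estimate for $|Y|$.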
\begin{lemma}[\textbf{The norm of $\ell_{t,u}$-tangent tensors can be measured via $Y$ contractions}] \label{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS} Let $\xi_{\alpha_1 \cdots \alpha_n}$ be a type $\binom{0}{n}$ $\ell_{t,u}$-tangent tensor with $n \geq 1$. Under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:AUXILIARYBOOTSTRAP} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, we have \begin{align} \label{E:TENSORSIZECONTROLLEDBYYCONTRACTIONS} |\xi| = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}^{1/2}) + \mathcal{O}(\varepsilon^{1/2}) \right\rbrace |\xi_{Y Y \cdots Y}|. \end{align} The same result holds if $|\xi_{Y Y \cdots Y}|$ is replaced with $|\xi_{Y \cdot}|$, $|\xi_{Y Y \cdot}|$, etc., where $\xi_{Y \cdot}$ is the type $\binom{0}{n-1}$ tensor with components $Y^{\alpha_1} \xi_{\alpha_1 \alpha_2 \cdots \alpha_n}$, and similarly for $\xi_{Y Y \cdot}$, etc. \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. Estimate \eqref{E:TENSORSIZECONTROLLEDBYYCONTRACTIONS} is easy to derive relative to the Cartesian coordinates by using the decomposition $(\gsphere^{-1})^{ij} = \frac{1}{|Y|^2} Y^i Y^j$ and the estimate $|Y| = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}^{1/2}) + \mathcal{O}(\varepsilon^{1/2})$. This latter estimate follows from the identity $|Y|^2 = g_{ab} Y^a Y^b = (\delta_{ab} + g_{ab}^{(Small)}) (\delta_2^a + Y_{(Small)}^a)(\delta_2^b + Y_{(Small)}^b)$, the fact that $g_{ab}^{(Small)} = \mathrm{f}(\upgamma) \upgamma$ with $\mathrm{f}$ smooth and similarly for $Y_{(Small)}^a$ (see Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}), and the bootstrap assumptions. \end{proof} We now provide the main result of this subsection.
\begin{lemma}[\textbf{Controlling $ {\nabla \mkern-14mu / \,} $ derivatives in terms of $Y$ derivatives}] \label{L:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} Let $f$ be a scalar function on $\ell_{t,u}$. Under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:AUXILIARYBOOTSTRAP} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following comparison estimates hold on $\mathcal{M}_{T_{(Boot)},U_0}$: \begin{align} \label{E:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} | {{d \mkern-9mu /} } f| & \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}^{1/2} + C \varepsilon^{1/2})\left| Y f \right|, \qquad | {\nabla \mkern-14mu / \,} ^2 f| \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}^{1/2} + C \varepsilon^{1/2})\left| {{d \mkern-9mu /} }(Y f) \right| + C \varepsilon^{1/2} | {{d \mkern-9mu /} } f|. \end{align} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. The first inequality in \eqref{E:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} follows directly from Lemma~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS}. 
To prove the second, we first use Lemma~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS}, the identity $ {\nabla \mkern-14mu / \,} _{Y Y}^2 f = Y \cdot {{d \mkern-9mu /} } (Y f) - {\nabla \mkern-14mu / \,} _{Y} Y \cdot {{d \mkern-9mu /} } f$, and the estimate $|Y| = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}^{1/2}) + \mathcal{O}(\varepsilon^{1/2})$ noted in the proof of Lemma~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS} to deduce that \begin{align} \label{E:ANGDSQUAREFUNCTIONFIRSTBOUNDINTERMSOFGEOANG} | {\nabla \mkern-14mu / \,} ^2 f| & \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}^{1/2} + C \varepsilon^{1/2})| {\nabla \mkern-14mu / \,} _{Y Y}^2 f| \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}^{1/2} + C \varepsilon^{1/2}) \left\lbrace | {{d \mkern-9mu /} }(Y f)| + | {\nabla \mkern-14mu / \,} _{Y} Y|| {{d \mkern-9mu /} } f| \right\rbrace. \end{align} Next, we use Lemma~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS} and the identity $ \angdeformarg{Y}{Y}{Y} = {\nabla \mkern-14mu / \,} _{Y} (g \mkern-8.5mu / (Y, Y)) = Y (g_{ab} Y^a Y^b) $ to deduce that \begin{align} \label{E:ANGDGEOANGOFGEOANGINTERMSOFGEOANGDEFORMSPHERE} \left| {\nabla \mkern-14mu / \,} _{Y} Y \right| & \lesssim \left| g( {\nabla \mkern-14mu / \,} _{Y} Y,Y) \right| \lesssim \left| \angdeformarg{Y}{Y}{Y} \right| \lesssim \left| Y(g_{ab} Y^a Y^b) \right|. \end{align} Since Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} implies that $g_{ab} Y^a Y^b = \mathrm{f}(\upgamma)$ with $\mathrm{f}$ smooth, the bootstrap assumptions yield that RHS~\eqref{E:ANGDGEOANGOFGEOANGINTERMSOFGEOANGDEFORMSPHERE} is $\lesssim |Y \upgamma| \lesssim \varepsilon^{1/2}$. The desired second inequality in \eqref{E:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} now follows from this estimate, \eqref{E:ANGDSQUAREFUNCTIONFIRSTBOUNDINTERMSOFGEOANG}, and \eqref{E:ANGDGEOANGOFGEOANGINTERMSOFGEOANGDEFORMSPHERE}. 
\end{proof} \subsection{Pointwise estimates for the derivatives of the \texorpdfstring{$x^i$}{Cartesian spatial coordinate functions} and for the Lie derivatives of the Riemannian metric induced on \texorpdfstring{$\ell_{t,u}$}{the tori}} \begin{lemma}[{\textbf{Pointwise estimates for} $x^i$}] \label{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} Assume that $1 \leq N \leq 18$. Let $x^i = x^i(t,u,\vartheta)$ denote the Cartesian coordinate function and let $\mathring{x}^i = \mathring{x}^i(u,\vartheta) := x^i(0,u,\vartheta)$. Then the following estimates hold for $i = 1,2$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{subequations} \begin{align} \label{E:XIPOINTWISE} \left| x^i - \mathring{x}^i \right| & \lesssim 1, \\ \left| {{d \mkern-9mu /} } x^i \right| & \lesssim 1, \label{E:ANGDIFFXI} \\ \left| {{d \mkern-9mu /} } \mathscr{P}^{[1,N]} x^i \right| & \lesssim \left| \mathscr{P}^{[1,N]} \upgamma \right|, \label{E:ANGDIFFXIPURETANGENTIALDIFFERENTIATED} \\ \left| {{d \mkern-9mu /} } \mathscr{Z}^{[1,N];1} x^i \right| & \lesssim \left| \mathscr{Z}_*^{[1,N];1} \upgamma \right| + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right|. \label{E:ANGDIFFXIONERADIALDIFFERENTIATED} \end{align} \end{subequations} In the case $i=2$ at fixed $u,\vartheta$, LHS~\eqref{E:XIPOINTWISE} is to be interpreted as the Euclidean distance traveled by the point $x^2$ in the flat universal covering space $\mathbb{R}$ of $\mathbb{T}$ along the corresponding integral curve of $L$ over the time interval $[0,t]$. \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} implies that for $V \in \lbrace L, X, Y \rbrace$, the component $V^i = V x^i$ verifies $V^i = \mathrm{f}(\upgamma)$ with $\mathrm{f}$ smooth. The estimates of the lemma therefore follow easily from the bootstrap assumptions, except for \eqref{E:XIPOINTWISE}. 
To obtain \eqref{E:XIPOINTWISE}, we first argue as above to deduce $|L x^i| = |L^i| = |\mathrm{f}(\upgamma)| \lesssim 1$. Since $ \displaystyle L = \frac{\partial}{\partial t} $, we can integrate with respect to time starting from $t = 0$ and use the previous estimate to conclude \eqref{E:XIPOINTWISE}. \end{proof} \begin{lemma}[{\textbf{Crude pointwise estimates for the Lie derivatives of $g \mkern-8.5mu / $ and $\gsphere^{-1}$}}] \label{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES} Assume that $N \leq 18$. Then the following estimates hold (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{subequations} \begin{align} \label{E:POINTWISEESTIMATESFORGSPHEREANDITSTANGENTIALDERIVATIVES} \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} g \mkern-8.5mu / \right|, \, \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} \gsphere^{-1} \right|, \, \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^N \upchi \right|, \, \left| \mathscr{P}^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right| & \lesssim \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \\ \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}_*}^{N+1;1} g \mkern-8.5mu / \right|, \, \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}_*}^{N+1;1} \gsphere^{-1} \right|, \, \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{N;1} \upchi \right|, \, \left| \mathscr{Z}^{N;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right| & \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} \upgamma \right| + \left| \mathscr{P}_*^{[1,N+1]} \underline{\upgamma} \right|, \label{E:ONERADIALNOTPURERADIALFORGSPHERE} \\ \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{N+1;1} g \mkern-8.5mu / \right|, \, \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{N+1;1} \gsphere^{-1} \right| & \lesssim \left| \mathscr{Z}^{[1,N+1];1} \upgamma \right| + \left| \mathscr{P}_*^{[1,N+1]} \underline{\upgamma} \right|.
\label{E:ONERADIALFORGSPHERE} \end{align} \end{subequations} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. By Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, we have $g \mkern-8.5mu / = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2)$. The desired estimates for $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} g \mkern-8.5mu / $ thus follow from Lemma~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and the bootstrap assumptions. The desired estimates for $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} \gsphere^{-1}$ then follow from repeated use of the schematic identity $ { \mathcal{L} \mkern-10mu / } _{P} \gsphere^{-1} = - (\gsphere^{-1})^{-2} { \mathcal{L} \mkern-10mu / } _{P} g \mkern-8.5mu / $ (which is standard, see \cite{jSgHjLwW2016}*{Lemma~2.9}) and the estimates for $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} g \mkern-8.5mu / $. The estimates for $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^N \upchi$ and $\mathscr{P}^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ follow from the estimates for $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} g \mkern-8.5mu / $ and $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{N+1} \gsphere^{-1}$ since $\upchi \sim { \mathcal{L} \mkern-10mu / } _{P} g \mkern-8.5mu / $ (see \eqref{E:CHIDEF}) and ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \sim \gsphere^{-1} \cdot { \mathcal{L} \mkern-10mu / } _{P} g \mkern-8.5mu / $. \end{proof} \subsection{Commutator estimates} \label{SS:COMMUTATORESTIMATES} In this subsection, we establish some commutator estimates. \begin{lemma}[{\textbf{Pure $\mathcal{P}_u$-tangential commutator estimates}}] \label{L:COMMUTATORESTIMATES} Assume that $1 \leq N \leq 18$. 
Let $\vec{I}$ be an order $|\vec{I}| = N + 1$ multi-index for the set $\mathscr{P}$ of $\mathcal{P}_u$-tangential commutation vectorfields (see Def.~\ref{D:REPEATEDDIFFERENTIATIONSHORTHAND}), and let $\vec{I}'$ be any permutation of $\vec{I}$. Let $f$ be a scalar function, and let $\xi$ be an $\ell_{t,u}$-tangent one-form or a type $\binom{0}{2}$ $\ell_{t,u}$-tangent tensorfield. Then the following commutator estimates hold, where products involving the operators $\mathscr{P}_*^{[1,\lfloor N/2 \rfloor]}$ or $ { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{[1,N-1]}$ are absent when $N=1$: \begin{align} \left| \mathscr{P}^{\vec{I}} f - \mathscr{P}^{\vec{I}'} f \right| & \lesssim \varepsilon^{1/2} \left| \mathscr{P}_*^{[1,N]} f \right| + \left| \mathscr{P}_*^{[1,\lfloor N/2 \rfloor]} f \right| \left| \mathscr{P}^{[1,N]} \upgamma \right|. \label{E:PURETANGENTIALFUNCTIONCOMMUTATORESTIMATE} \end{align} Moreover, if $1 \leq N \leq 17$ and $\vec{I}$ is as above, then the following commutator estimates hold: \begin{subequations} \begin{align} \left| [ {\nabla \mkern-14mu / \,} ^2, \mathscr{P}^N] f \right| & \lesssim \varepsilon^{1/2} \left| \mathscr{P}_*^{[1,N]} f \right| + \left| \mathscr{P}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \label{E:ANGDSQUAREDPURETANGENTIALFUNCTIONCOMMUTATOR} \\ \left| [ {\Delta \mkern-12mu / \, } , \mathscr{P}^N] f \right| & \lesssim \varepsilon^{1/2} \left| \mathscr{P}_*^{[1,N+1]} f \right| + \left| \mathscr{P}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \label{E:ANGLAPPURETANGENTIALFUNCTIONCOMMUTATOR} \end{align} \end{subequations} \begin{subequations} \begin{align} \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\vec{I}} \xi - { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\vec{I}'} \xi \right| & \lesssim \varepsilon^{1/2} \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{[1,N]} \xi \right| + \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq \lfloor 
N/2 \rfloor} \xi \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \label{E:PURETANGENTIALTENSORFIELDCOMMUTATORESTIMATE} \\ \left| [ {\nabla \mkern-14mu / \,} , { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^N] \xi \right| & \lesssim \varepsilon^{1/2} \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{[1,N-1]} \xi \right| + \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq \lfloor N/2 \rfloor} \xi \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \label{E:ANGDANGLIETANGENTIALTENSORFIELDCOMMUTATORESTIMATE} \\ \left| [\mbox{\upshape{div} $\mkern-17mu /$\,}, { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^N] \xi \right| & \lesssim \varepsilon^{1/2} \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{[1,N]} \xi \right| + \left| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq \lfloor N/2 \rfloor} \xi \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|. \label{E:ANGDIVANGLIETANGENTIALTENSORFIELDCOMMUTATORESTIMATE} \end{align} \end{subequations} Finally, if $1 \leq N \leq 17$, then we have the following alternate version of \eqref{E:ANGDSQUAREDPURETANGENTIALFUNCTIONCOMMUTATOR}: \begin{align} \label{E:ALTERNATEANGDSQUAREDPURETANGENTIALFUNCTIONCOMMUTATOR} \left| [ {\nabla \mkern-14mu / \,} ^2, \mathscr{P}^N] f \right| & \lesssim \left| \mathscr{P}^{[1,\lceil N/2 \rceil +1]} \upgamma \right| \left| \mathscr{P}_*^{[1,N]} f \right| + \left| \mathscr{P}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|. \end{align} \end{lemma} \begin{proof}[Discussion of proof] The estimates of Lemma~\ref{L:COMMUTATORESTIMATES} were essentially proved in \cite{jSgHjLwW2016}*{Lemma~8.7}, based in part on bootstrap assumptions that are analogs of the bootstrap assumptions of the present article and estimates that are analogs of the estimates of Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}. 
We note that the right-hand sides of the estimates of Lemma~\ref{L:COMMUTATORESTIMATES} are slightly different than the right-hand sides of the estimates of \cite{jSgHjLwW2016}*{Lemma~8.7}. The difference is that on the right-hand sides of the estimates of Lemma~\ref{L:COMMUTATORESTIMATES}, all products contain a factor involving at least one differentiation with respect to a vectorfield belonging to the $\mathcal{P}_u$-tangential subset $\mathscr{P}$. In particular, the estimates hold true without the presence of pure order-zero products such as $ |f||\upgamma| $ on the right-hand sides. This structure was not stated in the estimates of \cite{jSgHjLwW2016}*{Lemma~8.7}, although the availability of this structure follows from the proof of \cite{jSgHjLwW2016}*{Lemma~8.7}. \end{proof} \begin{lemma}[{\textbf{Mixed $\mathcal{P}_u$-transversal-tangent commutator estimates}}] \label{L:TRANSVERALTANGENTIALCOMMUTATOR} Assume that $1 \leq N \leq 18$. Let $\mathscr{Z}^{\vec{I}}$ be a $\mathscr{Z}$-multi-indexed operator containing \textbf{exactly one} $\breve{X}$ factor, and assume that $|\vec{I}| = N+1$. Let $\vec{I}'$ be any permutation of $\vec{I}$. Let $f$ be a scalar function. Then the following commutator estimates hold (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \left| \mathscr{Z}^{\vec{I}} f - \mathscr{Z}^{\vec{I}'} f \right| & \lesssim \left| \mathscr{P}_*^{[1,N]} f \right| + \underbrace{ \varepsilon^{1/2} \left| Y \mathscr{Z}^{\leq N-1;1} f \right|}_{\mbox{\upshape Absent if $N=1$}} \label{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} \\ & \ \ + \left| \mathscr{P}_*^{[1,\lfloor N/2 \rfloor]} f \right| \left| \myarray [\mathscr{P}_*^{[1,N]} \underline{\upgamma}] {\mathscr{Z}_*^{[1,N];1} \upgamma} \right| + \underbrace{ \left| Y \mathscr{Z}^{[1,\lfloor N/2 \rfloor - 1];1} f \right| \left| \mathscr{P}^{[1,N]} \upgamma \right|}_{\mbox{\upshape Absent if $N \leq 3$}}. 
\notag \end{align} Moreover, if $1 \leq N \leq 17$, then the following estimates hold: \begin{subequations} \begin{align} \left| [ {\nabla \mkern-14mu / \,} ^2, \mathscr{Z}^{N;1}] f \right| & \lesssim \left| \mathscr{Z}_*^{[1,N];1} f \right| \label{E:ANGDSQUAREDONERADIALTANGENTIALFUNCTIONCOMMUTATOR} \\ & \ \ + \left| \mathscr{P}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \myarray [\mathscr{P}_*^{[1,N+1]} \underline{\upgamma}] {\mathscr{Z}_*^{[1,N+1];1} \upgamma} \right| + \left| \mathscr{Z}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|, \notag \\ \left| [ {\Delta \mkern-12mu / \, } , \mathscr{Z}^{N;1}] f \right| & \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} f \right| \label{E:ANGLAPONERADIALTANGENTIALFUNCTIONCOMMUTATOR} \\ & \ \ + \left| \mathscr{P}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \myarray [\mathscr{P}_*^{[1,N+1]} \underline{\upgamma}] {\mathscr{Z}_*^{[1,N+1];1} \upgamma} \right| + \left| \mathscr{Z}_*^{[1,\lceil N/2 \rceil]} f \right| \left| \mathscr{P}^{[1,N+1]} \upgamma \right|. \notag \end{align} \end{subequations} \end{lemma} \begin{proof} The estimates were essentially proved as \cite{jSgHjLwW2016}*{Lemma~8.8}, based in part on bootstrap assumptions that are analogs of the bootstrap assumptions of the present article and estimates that are analogs of the estimates of Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}. We note that the right-hand sides of the estimates of Lemma~\ref{L:TRANSVERALTANGENTIALCOMMUTATOR} are slightly different than the right-hand sides of the estimates of \cite{jSgHjLwW2016}*{Lemma~8.8}. The difference is that on the right-hand sides of the estimates of Lemma~\ref{L:TRANSVERALTANGENTIALCOMMUTATOR}, no pure order-zero terms such as $|f|$ or $|\upgamma|$ appear. 
This structure was not stated in the estimates of \cite{jSgHjLwW2016}*{Lemma~8.8}, although the availability of this structure follows from the proof of \cite{jSgHjLwW2016}*{Lemma~8.8}. \end{proof} \subsection{Transport inequalities and strict improvements of the auxiliary bootstrap assumptions} \label{SS:IMPROVEMENTOFAUX} In this subsection, we use the previous estimates to derive transport inequalities for the eikonal function quantities and strict improvements of the auxiliary bootstrap assumptions stated in Subsect.\ \ref{SS:AUXILIARYBOOTSTRAP}. The transport inequalities form the starting point for our derivation of $L^2$ estimates for the below-top-order derivatives of the eikonal function quantities as well as their top-order derivatives involving at least one $L$ differentiation (see Lemma~\ref{L:EASYL2BOUNDSFOREIKONALFUNCTIONQUANTITIES}). The main challenge in proving the proposition is to propagate the smallness of the $\mathring{\upalpha}$-sized and the $\mathring{\upepsilon}$-sized quantities, even though some terms in the evolution equations involve $\mathring{\updelta}$-sized quantities, which are allowed to be large. To this end, we must find and exploit effective partial decoupling between various quantities, which is present because of the special structure of the evolution equations relative to the geometric coordinates, because of our assumptions on the structure of the semilinear inhomogeneous terms in the wave equations (especially \eqref{E:SOMENONINEARITIESARELINEAR}), and because of the good properties of the commutation vectorfield sets $\mathscr{Z}$ and $\mathscr{P}$. \begin{proposition}[\textbf{Transport inequalities and strict improvements of the auxiliary bootstrap assumptions}] \label{P:IMPROVEMENTOFAUX} The following estimates hold (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation). 
\medskip \noindent \underline{\textbf{Transport inequalities for the eikonal function quantities}.} \medskip \noindent \textbf{$\bullet$Transport inequalities for} $\upmu$. The following pointwise estimate holds: \begin{subequations} \begin{align} \left| L \upmu \right| & \lesssim \left| \mathscr{Z} \Psi \right|. \label{E:LUNITUPMUPOINTWISE} \end{align} Moreover, for $1 \leq N \leq 18$, the following estimates hold: \begin{align} \label{E:PURETANGENTIALLUNITUPMUCOMMUTEDESTIMATE} \left| L \mathscr{P}^N \upmu \right|, \, \left| \mathscr{P}^N L \upmu \right| & \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| + \left| \mathscr{P}^{[1,N]} \upgamma \right| + \varepsilon \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right|. \end{align} \end{subequations} \noindent \textbf{$\bullet$Transport inequalities for} $L_{(Small)}^i$ and ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$. For $N \leq 18$, the following estimates hold: \begin{subequations} \begin{align} \left| \myarray [L \mathscr{P}^N L_{(Small)}^i] {L \mathscr{P}^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi} \right|, \, \left| \myarray [\mathscr{P}^N L L_{(Small)}^i] {\mathscr{P}^{N-1} L {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi} \right| & \lesssim \left| \mathscr{P}^{[1,N+1]} \Psi \right| + \varepsilon \left| \mathscr{P}^{[1,N]} \upgamma \right|, \label{E:LUNITTANGENTDIFFERENTIATEDLUNITSMALLIMPROVEDPOINTWISE} \\ \left| \myarray [L \mathscr{Z}^{N;1} L_{(Small)}^i] {L \mathscr{Z}^{N-1;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi} \right|, \, \left| \myarray [\mathscr{Z}^{N;1} L L_{(Small)}^i] {\mathscr{Z}^{N-1;1} L {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi} \right| & \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| + \left| \myarray[\varepsilon \mathscr{P}_*^{[1,N]} \underline{\upgamma}] {\mathscr{Z}_*^{[1,N];1} \upgamma} \right|. 
\label{E:LUNITONERADIALTANGENTDIFFERENTIATEDLUNITSMALLIMPROVEDPOINTWISE} \end{align} \end{subequations} \medskip \noindent \underline{$L^{\infty}$ \textbf{estimates for} $\Psi$, $\vec{W}$, \textbf{and the eikonal function quantities}.} \medskip \noindent \textbf{$\bullet$$L^{\infty}$ estimates involving at most one transversal derivative of $\Psi$}. The following estimates hold: \begin{subequations} \begin{align} \left\| \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \mathring{\upalpha} + C \varepsilon, \label{E:PSIITSELFBOOTSTRAPIMPROVED} \\ \left\| \mathscr{Z}_*^{[1,10];1} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:PSIMIXEDTRANSVERSALTANGENTBOOTSTRAPIMPROVED} \\ \left\| \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon. \label{E:PSITRANSVERSALLINFINITYBOUNDBOOTSTRAPIMPROVED} \end{align} \end{subequations} \noindent \textbf{$\bullet$$L^{\infty}$ estimates involving at most one transversal derivative of $\vec{W}$}. The following estimates hold: \begin{align} \left\| \mathscr{Z}^{\leq 10;1} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon. \label{E:SLOWWAVETRANSVERSALTANGENT} \end{align} \noindent \textbf{$\bullet$$L^{\infty}$ estimates for $\upmu$}. 
The following estimates hold: \begin{subequations} \begin{align} \label{E:LUNITUPMULINFINITY} \left\| L \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & = \frac{1}{2} \left\| G_{L L} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + \mathcal{O}(\varepsilon), \\ \left\| L \mathscr{P}^{[1,9]} \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}_*^{[1,9]} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:LUNITAPPLIEDTOTANGENTIALUPMUANDTANSETSTARLINFTY} \end{align} \end{subequations} \begin{subequations} \begin{align} \label{E:UPMULINFTY} \left\| \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq 1 + \mathring{\updelta}_*^{-1} \left\| G_{L L} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon. \end{align} \end{subequations} \noindent \textbf{$\bullet$$L^{\infty}$ estimates for $L_{(Small)}^i$ and $\upchi$}. The following estimates hold: \begin{subequations} \begin{align} \left\| L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon, \label{E:LUNITISMALLITSELFLSMALLINFTYESTIMATE} \\ \left\| L \mathscr{P}^{\leq 10} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}^{[1,10]} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:PURETANGENTIALLUNITAPPLIEDTOLISMALLANDLISMALLINFTYESTIMATE} \\ \left\| L \mathscr{Z}^{\leq 9;1} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{Z}_*^{[1,9];1} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:LUNITAPPLIEDTOLISMALLANDLISMALLINFTYESTIMATE} \\ \left\| \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon, \label{E:LISMALLLONERADIALINFINITYESTIMATE} \end{align} \end{subequations} \begin{subequations} \begin{align} \label{E:PURETANGENTIALCHICOMMUTEDLINFINITY} 
\left\| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq 9} \upchi \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| { \mathcal{L} \mkern-10mu / } _{\mathscr{P}}^{\leq 9} \upchi^{\#} \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{P}^{\leq 9} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \\ \left\| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{\leq 8;1} \upchi \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| { \mathcal{L} \mkern-10mu / } _{\mathscr{Z}}^{\leq 8;1} \upchi^{\#} \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \mathscr{Z}^{\leq 8;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon. \label{E:ONERADIALCHICOMMUTEDLINFINITY} \end{align} \end{subequations} \end{proposition} \begin{proof}[Proof outline] See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. Throughout, we refer to the data-size assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} and the bounds of Lemma~\ref{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0} as the ``conditions on the data.'' To derive \eqref{E:SLOWWAVETRANSVERSALTANGENT} in the case $\mathscr{Z}^{\leq 10;1} = \mathscr{P}^{\leq 10}$, we simply note that the desired bound is one of the bootstrap assumptions from \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}. To prove \eqref{E:SLOWWAVETRANSVERSALTANGENT} in the remaining case in which $\mathscr{Z}^{\leq 10;1}$ contains a factor of $\breve{X}$, we first apply $\mathscr{P}^{\leq 9}$ to the identity \eqref{E:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED} and use the bootstrap assumptions to deduce that $ \left\| \mathscr{P}^{\leq 9} \breve{X} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $. 
We then use the commutator estimate \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \vec{W}$, the estimate just proved for $\mathscr{P}^{\leq 9} \breve{X} \vec{W}$, and the bootstrap assumptions, which allow us to arbitrarily commute the vectorfields in the expression $\mathscr{P}^{\leq 9} \breve{X} \vec{W}$ up to errors bounded in $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\lesssim \varepsilon$. In total, we have derived the desired bound $ \left\| \mathscr{Z}^{\leq 10;1} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $. The remaining estimates in Prop.~\ref{P:IMPROVEMENTOFAUX} can be established using arguments nearly identical to the ones used in proving \cite{jSgHjLwW2016}*{Proposition~8.10}, as we now outline. Specifically, one uses the transport equations of Lemma~\ref{L:UPMUANDLUNITIFIRSTTRANSPORT}, the estimates of Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS}-\ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, the commutator estimates of Subsect.\ \ref{SS:COMMUTATORESTIMATES}, and the conditions on the data to derive the desired bounds for $\upmu$ and $L_{(Small)}^i$; these bounds are not explicitly tied to $\vec{W}$ and hence the proofs from \cite{jSgHjLwW2016}*{Proposition~8.10} go through nearly verbatim. There are two minor differences that we now highlight. \textbf{i)} Note that the estimates \eqref{E:LUNITUPMUPOINTWISE}, \eqref{E:PURETANGENTIALLUNITUPMUCOMMUTEDESTIMATE}, \eqref{E:LUNITTANGENTDIFFERENTIATEDLUNITSMALLIMPROVEDPOINTWISE}, and \eqref{E:LUNITONERADIALTANGENTDIFFERENTIATEDLUNITSMALLIMPROVEDPOINTWISE} do not feature any order $0$ terms such as $|\upgamma|$ on the RHS. 
This differs from the analogous estimates stated in \cite{jSgHjLwW2016}*{Proposition~8.10}, but follows from the proof given there and from the commutator estimates of Lemmas~\ref{L:COMMUTATORESTIMATES} and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR} (see also the remarks made in the discussion of the proofs of Lemmas~\ref{L:COMMUTATORESTIMATES} and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR}). \textbf{ii)} The estimates \eqref{E:PSIITSELFBOOTSTRAPIMPROVED}, \eqref{E:UPMULINFTY}, \eqref{E:LUNITISMALLITSELFLSMALLINFTYESTIMATE} feature the parameter $\mathring{\upalpha}$ on the RHS, in contrast to the analogous estimates stated in \cite{jSgHjLwW2016}*{Proposition~8.10}. The difference stems from the fact that some of the order $0$ quantities in this paper are controlled by the data-size parameter $\mathring{\upalpha}$, which is not featured in \cite{jSgHjLwW2016}; see also Remark~\ref{R:NEWPARAMETER}. The estimates for $\upchi$ then follow from the estimates for $\upmu$ and $L_{(Small)}^i$ described above and Lemmas~\ref{L:IDFORCHI} and \ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}.
To derive the desired estimate \eqref{E:PSITRANSVERSALLINFINITYBOUNDBOOTSTRAPIMPROVED} for $\Psi$ and the estimate \eqref{E:PSIMIXEDTRANSVERSALTANGENTBOOTSTRAPIMPROVED} for $\Psi$ when $\mathscr{Z}_*^{[1,10];1}$ contains exactly one factor of $\breve{X}$ (the desired bounds in the case $\mathscr{Z}_*^{[1,10];1} = \mathscr{P}^{[1,10]}$ are restatements of one of the bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}), one can use equation \eqref{E:LONOUTSIDEGEOMETRICWAVEOPERATORFRAMEDECOMPOSED}, equation \eqref{E:UPMUFIRSTTRANSPORT}, Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, Lemma~\ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES}, and the assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear terms to rewrite the wave equation \eqref{E:FASTWAVE} for $\Psi$ in the following schematic ``transport equation'' form: \begin{align} \label{E:WAVEEQUATIONTRANSPORTINTERPRETATION} L \breve{X} \Psi & = \mathrm{f}(\underline{\upgamma}) {\Delta \mkern-12mu / \, } \Psi + \mathrm{f}(\underline{\upgamma},\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2,P \Psi, \breve{X} \Psi) P P \Psi + \mathrm{f}(\underline{\upgamma},\vec{W},\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2,P \Psi, \breve{X} \Psi) P \upgamma \\ & \ \ + \mathrm{f}(\underline{\upgamma}, \vec{W}, P \Psi, \breve{X} \Psi) \vec{W}.
\notag \end{align} Then by applying $\mathscr{P}^{\leq 9}$ to \eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} and using the commutator estimates of Subsect.\ \ref{SS:COMMUTATORESTIMATES} and the bootstrap assumptions, one can show that $\left|L \mathscr{P}^{\leq 9} \breve{X} \Psi \right| \lesssim \varepsilon$, from which the bounds $\| \breve{X} \Psi \|_{L^{\infty}(\Sigma_t^u)} \leq \| \breve{X} \Psi \|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon$ and $\| \mathscr{P}^{[1,9]} \breve{X} \Psi \|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon$ easily follow by integrating in time (recall that $\displaystyle L = \frac{\partial}{\partial t} $) and using the conditions on the data. Then by further applications of the commutator estimates of Subsect.\ \ref{SS:COMMUTATORESTIMATES}, we obtain $\| \mathscr{Z}_*^{[1,10];1} \Psi \|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon$. More precisely, all terms that arise from differentiating RHS~\eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} with $\mathscr{P}^{\leq 9}$ were handled in the proof of \cite{jSgHjLwW2016}*{Proposition~8.10} except for the ones involving $\vec{W}$. Note in particular that the commutator estimates needed to commute $\mathscr{P}^{\leq 9}$ through the operator $L$ on LHS~\eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} and through the operator $ {\Delta \mkern-12mu / \, } $ on RHS~\eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} do not involve $\vec{W}$ in any way. That is, the only influence of $\vec{W}$ on the estimates under consideration is through the terms $\mathscr{P}^{\leq 9} \vec{W}$ that arise from RHS \eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION}. Due to the bootstrap assumption $\| \mathscr{P}^{\leq 10} \vec{W} \|_{L^{\infty}(\Sigma_t^u)} \leq \varepsilon$ stated in \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}, the products containing a factor of $\mathscr{P}^{\leq 9} \vec{W}$ make only a negligible $\mathcal{O}(\varepsilon)$ contribution to the estimates. 
For this reason, the analysis for $\Psi$ in the present context is a negligible $\mathcal{O}(\varepsilon)$ perturbation of the analogous analysis carried out in the proof of \cite{jSgHjLwW2016}*{Proposition~8.10}. \end{proof} The following corollary is an immediate consequence of the fact that we have improved the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:AUXILIARYBOOTSTRAP} by showing that they hold with $\varepsilon^{1/2}$ replaced by $C \varepsilon$ and with $\mathring{\upalpha}^{1/2}$ replaced by $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}$. \begin{corollary}[$\varepsilon^{1/2} \rightarrow C \varepsilon$ \textbf{and} $\mathring{\upalpha}^{1/2} \rightarrow C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}$] \label{C:SQRTEPSILONTOCEPSILON} All prior inequalities whose right-hand sides feature an explicit factor of $\varepsilon^{1/2}$ remain true with $\varepsilon^{1/2}$ replaced by $C \varepsilon$. Moreover, all prior inequalities whose right-hand sides feature an explicit factor of $\mathring{\upalpha}^{1/2}$ remain true with $\mathring{\upalpha}^{1/2}$ replaced by $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}$. This is true in particular for the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:AUXILIARYBOOTSTRAP}. \end{corollary} \begin{remark}[\textbf{The auxiliary bootstrap assumptions are now redundant}] \label{R:AUXAREREDUNDANT} Since we have derived strict improvements of the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:AUXILIARYBOOTSTRAP}, when proving estimates later in the paper, we no longer need to state them as assumptions. 
\end{remark} \section{\texorpdfstring{$L^{\infty}$}{Essential sup-norm} estimates involving higher-order transversal derivatives} \label{S:LINFINITYESTIMATESFORHIGHERTRANSVERSAL} Our energy estimates rely on the delicate estimate $ \displaystyle \left\| \frac{[\breve{X} \upmu]_+}{\upmu} \right\|_{L^{\infty}(\Sigma_t^u)} \leq \frac{C}{\sqrt{T_{(Boot)} - t}} $ (see \eqref{E:UNIFORMBOUNDFORMRADMUOVERMU}), whose proof relies on the bound $ \left\| \breve{X} \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim 1 $. In this section, we derive this bound and related ones that are needed to prove it. In particular, it turns out that to obtain the desired estimates for $\upmu$, we must show that $ \left\| \breve{X} \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim 1 $. \subsection{Auxiliary \texorpdfstring{$L^{\infty}$}{essential sup-norm} bootstrap assumptions} \label{SS:BOOTSTRAPFORHIGHERTRANSVERSAL} To facilitate the analysis, we introduce the following auxiliary bootstrap assumptions. In Prop.~\ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}, we derive strict improvements of the assumptions based on the estimates of Sect.\ \ref{S:PRELIMINARYPOINTWISE} and our assumptions on the data. 
\medskip \noindent \underline{\textbf{Auxiliary bootstrap assumptions for small quantities}.} We assume that the following inequalities hold on $\mathcal{M}_{T_{(Boot)},U_0}$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \left\| \mathscr{Z}_*^{[1,4];2} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:HIGHERTRANSVERSALPSIMIXEDFUNDAMENTALLINFINITYBOUNDBOOTSTRAP} \tag{$\mathbf{BA'}1\Psi$} \\ \left\| \mathscr{Z}^{\leq 3;2} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \label{E:HIGHERTRANSVERSALSLOWWAVEMIXEDFUNDAMENTALLINFINITYBOUNDBOOTSTRAP} \tag{$\mathbf{BA'} \vec{W}$} \end{align} \begin{subequations} \begin{align} \label{E:UPMUONERADIALNOTPURERADIALBOOTSTRAP} \tag{$\mathbf{BA'}1\upmu$} \left\| \breve{X} Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} L L \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} Y Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} L Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}, \end{align} \end{subequations} and \begin{align} \label{E:PERMUTEDUPMUONERADIALNOTPURERADIALBOOTSTRAP} \tag{$\mathbf{BA''}1\upmu$} \eqref{E:UPMUONERADIALNOTPURERADIALBOOTSTRAP} \mbox{ also holds for all permutations of the vectorfield operators on LHS } \eqref{E:UPMUONERADIALNOTPURERADIALBOOTSTRAP}, \end{align} \begin{align} \label{E:UPMUTWORADIALNOTPURERADIALBOOTSTRAP} \tag{$\mathbf{BA'}1L_{(Small)}$} \left\| \mathscr{Z}_*^{[1,3];2} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \varepsilon^{1/2}. 
\end{align} \noindent \underline{\textbf{Auxiliary bootstrap assumptions for quantities that are allowed to be large}.} We assume that the following inequalities hold on $\mathcal{M}_{T_{(Boot)},U_0}$: \begin{align} \left\| \breve{X}^M \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X}^M \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}, && (2 \leq M \leq 3), \label{E:HIGHERPSITRANSVERSALFUNDAMENTALC0BOUNDBOOTSTRAP} \tag{$\mathbf{BA'}2\Psi$} \end{align} \begin{align} \left\| L \breve{X}^M \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \frac{1}{2} \left\| \breve{X}^M \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}, && (1 \leq M \leq 2), \label{E:HIGHERLUNITUPMUBOOT} \tag{$\mathbf{BA'}2\upmu$} \\ \left\| \breve{X}^M \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X}^M \upmu \right\|_{L^{\infty}(\Sigma_0^u)} + \mathring{\updelta}_*^{-1} \left\| \breve{X}^M \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}, && (1 \leq M \leq 2), \label{E:HIGHERUPMUTRANSVERSALBOOT} \tag{$\mathbf{BA'}3\upmu$} \\ \left\| \breve{X} \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^u)} + \varepsilon^{1/2}. && \label{E:HIGHERLUNITITRANSVERSALBOOT} \tag{$\mathbf{BA'}2L_{(Small)}$} \end{align} \subsection{Commutator estimates involving two transversal derivatives} \label{SS:TWORADDERIVATIVESCOMMUTATORESTIMATES} We now provide some basic commutator estimates involving two factors of the $\mathcal{P}_u$-transversal vectorfield $\breve{X}$. 
\begin{lemma}\cite{jSgHjLwW2016}*{Lemma~9.1; \textbf{Mixed $\mathcal{P}_u$-transversal-tangent commutator estimates involving two $\breve{X}$ derivatives}} \label{L:HIGHERTRANSVERALTANGENTIALCOMMUTATOR} Let $\mathscr{Z}^{\vec{I}}$ be a $\mathscr{Z}$-multi-indexed operator containing exactly two $\breve{X}$ factors, and assume that $3 \leq |\vec{I}| := N+1 \leq 4$. Let $\vec{I}'$ be any permutation of $\vec{I}$. Under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:PSIBOOTSTRAP} and Subsect.\ \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following commutator estimates hold for functions $f$ on $\mathcal{M}_{T_{(Boot)},U_0}$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \left| \mathscr{Z}^{\vec{I}} f - \mathscr{Z}^{\vec{I}'} f \right| & \lesssim \left| Y \mathscr{Z}^{\leq N-1;1} f \right| + \underbrace{ \left| Y \mathscr{Z}^{\leq N-1;2} f \right|}_{\mbox{\upshape Absent if $N=2$}}. \label{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} \end{align} Moreover, we have \begin{align} \left| [ {\Delta \mkern-12mu / \, } , \breve{X} \breve{X}] f \right| & \lesssim \left| Y \mathscr{Z}^{\leq 2;1} f \right|. \label{E:ANGLAPTWORADIALTANGENTIALFUNCTIONCOMMUTATOR} \end{align} \end{lemma} \begin{proof}[Discussion of the proof] Thanks in part to the $L^{\infty}$ estimates of Prop.\ \ref{P:IMPROVEMENTOFAUX} and the bootstrap assumption of Subsect.\ \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL}, the lemma follows from the same arguments given in \cite{jSgHjLwW2016}*{Lemma~9.1}. In particular, we stress that these estimates do not depend on the slow wave variables $\vec{W}$. 
We note that we have ignored a smallness factor from \cite{jSgHjLwW2016}*{Lemma~9.1} that could have been placed in front of the second product on RHS~\eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE}; the smallness factor is not important for our estimates. \end{proof} \subsection{The main estimates involving higher-order transversal derivatives} \label{SS:HIGHERORDERTRANSVERALMAINESTIMATES} We now prove the main result of Sect.\ \ref{S:LINFINITYESTIMATESFORHIGHERTRANSVERSAL}. \begin{proposition}[\textbf{$L^{\infty}$ estimates involving higher-order transversal derivatives}] \label{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP} Under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:PSIBOOTSTRAP} and Subsect.\ \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following statements hold true on $\mathcal{M}_{T_{(Boot)},U_0}$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation). \medskip \noindent \underline{\textbf{$L^{\infty}$ estimates involving two or three transversal derivatives of $\Psi$}.} The following estimates hold: \begin{subequations} \begin{align} \left\| \mathscr{Z}_*^{[1,4];2} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD} \\ \left\| \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon, \label{E:IMPROVEDTRANSVERALESTIMATESFORTWORAD} \\ \left\| L \breve{X} \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:IMPROVEDTRANSVERALESTIMATESFORLUNITTHREERAD} \\ \left\| \breve{X} \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon. 
\label{E:IMPROVEDTRANSVERALESTIMATESFORTHREERAD} \end{align} \end{subequations} \noindent \underline{\textbf{$L^{\infty}$ estimates involving one or two transversal derivatives of $\upmu$}.} The following estimates hold: \begin{subequations} \begin{align} \left\| L \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \frac{1}{2} \left\| \breve{X} \left( G_{L L} \breve{X} \Psi \right) \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon, \label{E:LUNITRADUPMULINFTY} \\ \left\| \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_0^u)} + \mathring{\updelta}_*^{-1} \left\| \breve{X} \left( G_{L L} \breve{X} \Psi \right) \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon, \label{E:RADUPMULINFTY} \end{align} \begin{align} \left\| L \breve{X} Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| L \breve{X} L L \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| L \breve{X} Y Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| L \breve{X} L Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:LUNITRADTANGENTIALUPMULINFTY} \\ \left\| \breve{X} Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} L L \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} Y Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)}, \, \left\| \breve{X} L Y \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:RADTANGENTIALUPMULINFTY} \end{align} \begin{align} \label{E:PERMUTEDRADTANGENTIALUPMULINFTY} \eqref{E:LUNITRADTANGENTIALUPMULINFTY}-\eqref{E:RADTANGENTIALUPMULINFTY} \mbox{ also hold for all permutations of the vectorfield operators on the LHS }, \end{align} \begin{align} \left\| L \breve{X} \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \frac{1}{2} \left\| \breve{X} \breve{X} \left( G_{L L} \breve{X} \Psi \right) \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon \label{E:LUNITRADRADUPMULINFTY}, \\ \left\| \breve{X} \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| 
\breve{X} \breve{X} \upmu \right\|_{L^{\infty}(\Sigma_0^u)} + \mathring{\updelta}_*^{-1} \left\| \breve{X} \breve{X} \left( G_{L L} \breve{X} \Psi \right) \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon. \label{E:RADRADUPMULINFTY} \end{align} \end{subequations} \noindent \underline{\textbf{$L^{\infty}$ estimates involving one or two transversal derivatives of $L_{(Small)}^i$}.} The following estimates hold: \begin{subequations} \begin{align} \left\| \mathscr{Z}_*^{[1,3];2} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon, \label{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY} \\ \left\| \breve{X} \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \left\| \breve{X} \breve{X} L_{(Small)}^i \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon. \label{E:TWORADAPPLIEDTOLUNITILINFINITY} \end{align} \end{subequations} \noindent \underline{\textbf{$L^{\infty}$ estimates involving two transversal derivatives of $\vec{W}$}}. The following estimates hold: \begin{align} \left\| \mathscr{Z}^{\leq 3;2} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} & \leq C \varepsilon. \label{E:UPTOTWOTRANSVERSALDERIVATIVESOFSLOWINLINFINITY} \end{align} \noindent \underline{\textbf{Sharp pointwise estimates involving the critical factor $G_{L L}$}.} Moreover, if $0 \leq M \leq 2$ and $0 \leq s \leq t < T_{(Boot)}$, then we have the following estimates: \begin{align} \left| \breve{X}^M G_{L L}(t,u,\vartheta) - \breve{X}^M G_{L L}(s,u,\vartheta) \right| & \leq C \varepsilon (t - s), \label{E:RADDERIVATIVESOFGLLDIFFERENCEBOUND} \\ \left| \breve{X}^M \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace (t,u,\vartheta) - \breve{X}^M \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace (s,u,\vartheta) \right| & \leq C \varepsilon (t - s). 
\label{E:RADDERIVATIVESOFGLLRADPSIDIFFERENCEBOUND} \end{align} Furthermore, with $L_{(Flat)} := \partial_t + \partial_1$, we have \begin{align} \label{E:LUNITUPMUDOESNOTDEVIATEMUCHFROMTHEDATA} L \upmu(t,u,\vartheta) & = \frac{1}{2} \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace G_{L_{(Flat)} L_{(Flat)}}(\Psi = 0) \breve{X} \Psi(t,u,\vartheta) + \mathcal{O}(\varepsilon), \end{align} where $G_{L_{(Flat)} L_{(Flat)}}(\Psi = 0)$ is a non-zero constant (see \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT}). \end{proposition} \begin{remark}[\textbf{Strict improvement of the auxiliary bootstrap assumptions}] \label{R:HIGHERTRANSAUXBOOTSTRAPIMPROVED} Note in particular that the estimates of Prop.\ \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP} yield strict improvements of the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL} whenever $\varepsilon$ is sufficiently small. Hence, when proving estimates later in the paper, we no longer need to state them as assumptions. \end{remark} \begin{proof}[Proof of Prop.\ \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}] See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. Throughout this proof, we refer to the data-size assumptions of Subsect.\ \ref{SS:DATAASSUMPTIONS} and the bounds of Lemma~\ref{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0} as the ``conditions on the data.'' Moreover, we refer to the auxiliary bootstrap assumptions of Subsect.\ \ref{SS:BOOTSTRAPFORHIGHERTRANSVERSAL} simply as ``the bootstrap assumptions.'' \medskip \noindent \textbf{Proof of \eqref{E:UPTOTWOTRANSVERSALDERIVATIVESOFSLOWINLINFINITY}:} We first note that by \eqref{E:SLOWWAVETRANSVERSALTANGENT}, it suffices to show that $ \left\| \mathscr{Z}^{\leq 3;2} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $ whenever $\mathscr{Z}^{\leq 3;2}$ contains precisely two factors of $\breve{X}$. 
To proceed, we apply $\mathscr{P}^{\leq 1} \breve{X}$ to the identity \eqref{E:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED}. Using the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} and the bootstrap assumptions, we deduce that $ \left\| \mathscr{P}^{\leq 1} \breve{X} \breve{X} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $. Then using the commutator estimate \eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \vec{W}$ and the estimate \eqref{E:SLOWWAVETRANSVERSALTANGENT}, we can arbitrarily permute the vectorfield factors in the expression $\mathscr{P}^{\leq 1} \breve{X} \breve{X} \vec{W}$ up to error terms that are bounded in $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\lesssim \varepsilon$, which yields the desired bound \eqref{E:UPTOTWOTRANSVERSALDERIVATIVESOFSLOWINLINFINITY}. \medskip \noindent \textbf{Proof of \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD}-\eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORAD}:} By \eqref{E:PSIMIXEDTRANSVERSALTANGENTBOOTSTRAPIMPROVED}, it suffices to prove \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD} when the operator $\mathscr{Z}_*^{[1,4];2}$ contains precisely two factors of $\breve{X}$. 
To proceed, we commute the wave equation \eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} with $\mathscr{P}^{\leq 2} \breve{X}$ and use Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, the commutator estimate \eqref{E:ANGLAPONERADIALTANGENTIALFUNCTIONCOMMUTATOR} with $f = \Psi$ (to commute $\mathscr{P}^{\leq 2} \breve{X}$ through $ {\Delta \mkern-12mu / \, } $), the commutator estimate \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \breve{X} \Psi$ (to commute $\mathscr{P}^{\leq 2} \breve{X}$ through the operator $L$ on LHS~\eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION}), the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, Cor.\ \ref{C:SQRTEPSILONTOCEPSILON}, and the bootstrap assumptions to deduce \begin{align} \label{E:WAVEEQNONCETRANSVERSALLYCOMMUTEDTRANSPORTPOINTOFVIEW} \left| L \mathscr{P}^{\leq 2} \breve{X} \breve{X} \Psi \right| & \lesssim \varepsilon. \end{align} Since $ \displaystyle L = \frac{\partial}{\partial t} $, from \eqref{E:WAVEEQNONCETRANSVERSALLYCOMMUTEDTRANSPORTPOINTOFVIEW}, the fundamental theorem of calculus, and the conditions on the data, we deduce \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORAD} as well as the bound $|\mathscr{P}^{[1,2]} \breve{X} \breve{X} \Psi| \lesssim \varepsilon$. Next, using the commutator estimate \eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE}, the bootstrap assumptions, and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we can reorder the vectorfield factors in the terms $P \breve{X} \breve{X} \Psi$ up to error terms that are bounded in $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\lesssim \varepsilon$ to deduce that $ \left\| \mathscr{Z}_*^{3;2} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} \leq C \varepsilon $. 
Finally, using this bound, we can similarly reorder the vectorfield factors in the terms $\mathscr{P}^2 \breve{X} \breve{X} \Psi$ up to error terms that are bounded in $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\lesssim \varepsilon$, which in total yields \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD}. \medskip \noindent \textbf{Proof of \eqref{E:LUNITUPMUDOESNOTDEVIATEMUCHFROMTHEDATA}:} First, we note that \eqref{E:PSIMIXEDTRANSVERSALTANGENTBOOTSTRAPIMPROVED} implies that $\| L \breve{X} \Psi \|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon$. Since $ \displaystyle L = \frac{\partial}{\partial t} $, from this bound and the fundamental theorem of calculus, we deduce that $ \breve{X} \Psi(t,u,\vartheta) = \breve{X} \Psi(0,u,\vartheta) + \mathcal{O}(\varepsilon) $. Similarly, from \eqref{E:LUNITAPPLIEDTOTANGENTIALUPMUANDTANSETSTARLINFTY}, we deduce that $L \upmu(t,u,\vartheta) = L \upmu(0,u,\vartheta) + \mathcal{O}(\varepsilon) $. Next, we use \eqref{E:UPMUFIRSTTRANSPORT}, the fact that $G_{L L}, G_{L X} = \mathrm{f}(\upgamma)$ (see Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}), and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} to deduce that $L \upmu(0,u,\vartheta) = \frac{1}{2} [G_{L L} \breve{X} \Psi](0,u,\vartheta) + \mathcal{O}(\varepsilon) $. Since $L^0 = L_{(Flat)}^0 = 1$, $L^i = L_{(Flat)}^i + L_{(Small)}^i$, and $G_{\alpha \beta} = G_{\alpha \beta}(\Psi = 0) + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\Psi)$, we can use the conditions on the data to deduce, with the help of \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT}, the estimate $G_{L L}(0,u,\vartheta) = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace G_{L_{(Flat)} L_{(Flat)}}(\Psi = 0) $. Combining this estimate with the previous ones and using \eqref{E:PSITRANSVERSALLINFINITYBOUNDBOOTSTRAPIMPROVED}, we conclude \eqref{E:LUNITUPMUDOESNOTDEVIATEMUCHFROMTHEDATA}. 
\medskip \noindent \textbf{Proof of \eqref{E:RADDERIVATIVESOFGLLDIFFERENCEBOUND}-\eqref{E:RADDERIVATIVESOFGLLRADPSIDIFFERENCEBOUND} in the cases $0 \leq M \leq 1$:} It suffices to prove that for $0 \leq M \leq 1$, we have \begin{align} \label{E:LDERIVATIVEOFRADIALDERIVATIVESOFCRITICALFACTOR} \left| L \breve{X}^M G_{L L} \right| & \lesssim \varepsilon, && \left| L \breve{X}^M \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right| \lesssim \varepsilon. \end{align} Once we have shown \eqref{E:LDERIVATIVEOFRADIALDERIVATIVESOFCRITICALFACTOR}, the desired difference estimates follow from the fact that $ \displaystyle L = \frac{\partial}{\partial t} $ and the fundamental theorem of calculus, upon integrating the bounds \eqref{E:LDERIVATIVEOFRADIALDERIVATIVESOFCRITICALFACTOR} in time from $s$ to $t$. To proceed, we first use Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} to deduce that $G_{L L} = \mathrm{f}(\upgamma)$ and $G_{L L} \breve{X} \Psi = \mathrm{f}(\upgamma) \breve{X} \Psi$. Hence, to obtain \eqref{E:LDERIVATIVEOFRADIALDERIVATIVESOFCRITICALFACTOR} when $M=0$, we differentiate these two identities with $L$ and use the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} and the bootstrap assumptions. The proof is similar in the case $M=1$, but we must also use the estimate $\left\|L \breve{X} \breve{X} \Psi \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon$, which is a consequence of the previously established estimate \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD}. \medskip \noindent \textbf{Proof of \eqref{E:LUNITRADUPMULINFTY}-\eqref{E:PERMUTEDRADTANGENTIALUPMULINFTY}:} Let $1 \leq K \leq 3$ be an integer and let $\mathscr{Z}^{K;1}$ be an operator containing exactly one factor of $\breve{X}$.
We commute equation \eqref{E:UPMUFIRSTTRANSPORT} with $\mathscr{Z}^{K;1}$ and use the aforementioned relations $G_{L L}, G_{L X} = \mathrm{f}(\upgamma)$, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, and the bootstrap assumptions to deduce \begin{align} \label{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDFIRSTBOUND} \left| L \mathscr{Z}^{K;1} \upmu \right| & \leq \frac{1}{2} \left| \mathscr{Z}^{K;1} \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right| + \left| \mathscr{Z}_*^{[1,K+1];1} \Psi \right| + \left| [L, \mathscr{Z}^{K;1}] \upmu \right|. \end{align} We now show that the last two terms on RHS~\eqref{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDFIRSTBOUND} are $\lesssim \varepsilon$. We already proved $ \left| \mathscr{Z}_*^{[1,K+1];1} \Psi \right| \lesssim \varepsilon $ in Prop.~\ref{P:IMPROVEMENTOFAUX}. To bound $[L, \mathscr{Z}^{K;1}] \upmu$, we use the commutator estimate \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \upmu$, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, and Cor.\ \ref{C:SQRTEPSILONTOCEPSILON} to deduce that $ \left| [L, \mathscr{Z}^{K;1}] \upmu \right| \lesssim \left| \mathscr{P}_*^{[1, K]} \upmu \right| + \varepsilon \left| Y \mathscr{Z}^{\leq K-1;1} \upmu \right| $. The $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} imply that $ \left| \mathscr{P}_*^{[1,K]} \upmu \right| \lesssim \varepsilon $, while the bootstrap assumptions imply that $ \varepsilon \left| Y \mathscr{Z}^{\leq K-1;1} \upmu \right| \lesssim \varepsilon $ as well. We have thus shown that \begin{align} \label{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDSECONDBOUND} \left\| L \mathscr{Z}^{K;1} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} & \leq \frac{1}{2} \left\| \mathscr{Z}^{K;1} \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right\|_{L^{\infty}(\Sigma_t^u)} + C \varepsilon. \end{align} We split the remainder of the proof into two cases, starting with the case $\mathscr{Z}^{K;1} = \breve{X}$. 
Using the bound \eqref{E:RADDERIVATIVESOFGLLRADPSIDIFFERENCEBOUND} with $s=0$ and $M=1$ (established above), we can replace the norm $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ on RHS~\eqref{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDSECONDBOUND} with the norm $\| \cdot \|_{L^{\infty}(\Sigma_0^u)}$ plus an error term that is bounded in the norm $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\leq C \varepsilon$, which yields \eqref{E:LUNITRADUPMULINFTY}. Using \eqref{E:LUNITRADUPMULINFTY}, recalling that $ \displaystyle L = \frac{\partial}{\partial t} $, and using the fundamental theorem of calculus as well as the assumption $T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$, we conclude \eqref{E:RADUPMULINFTY}. In the remaining case, $\mathscr{Z}^{K;1}$ is not the operator $\breve{X}$. That is, $2 \leq K \leq 3$ and $\mathscr{Z}^{K;1}$ must contain a $\mathcal{P}_u$-tangent factor, which is equivalent to $\mathscr{Z}^{K;1} = \mathscr{Z}_*^{K;1}$. Recalling that $G_{L L} \breve{X} \Psi = \mathrm{f}(\upgamma) \breve{X} \Psi$ and using the estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the bootstrap assumptions, and \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD}, we find that $ \left\| \mathscr{Z}_*^{K;1} \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $. Thus, in this case, we have shown that $ \mbox{RHS~\eqref{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDSECONDBOUND}} \lesssim \varepsilon $. From the estimate $ \left\| L \mathscr{Z}_*^{K;1} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \varepsilon $, the fact that $ \displaystyle L = \frac{\partial}{\partial t} $, and the fundamental theorem of calculus, we conclude that $ \left\| \mathscr{Z}_*^{K;1} \upmu \right\|_{L^{\infty}(\Sigma_t^u)} \leq \left\| \mathscr{Z}_*^{K;1} \upmu \right\|_{L^{\infty}(\Sigma_0^u)} + C \varepsilon $. 
The bounds \eqref{E:LUNITRADTANGENTIALUPMULINFTY}-\eqref{E:RADTANGENTIALUPMULINFTY} now follow from the previous estimates and the conditions on the data. It remains for us to prove the estimate \eqref{E:PERMUTEDRADTANGENTIALUPMULINFTY} concerning the permutations of the vectorfields in \eqref{E:LUNITRADTANGENTIALUPMULINFTY}-\eqref{E:RADTANGENTIALUPMULINFTY}. To obtain the desired bound, we use the commutator estimate \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \upmu$, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the estimates \eqref{E:LUNITRADTANGENTIALUPMULINFTY}-\eqref{E:RADTANGENTIALUPMULINFTY}, and the bootstrap assumptions. \medskip \noindent \textbf{Proof of \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY} and \eqref{E:TWORADAPPLIEDTOLUNITILINFINITY}:} We may assume that the operator $\mathscr{Z}_*^{[1,3];2}$ in \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY} contains two factors of $\breve{X}$ since otherwise the desired estimate is implied by \eqref{E:LUNITAPPLIEDTOLISMALLANDLISMALLINFTYESTIMATE}. To proceed, we first use Lemma \ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} to express \eqref{E:RADLUNITI} in the schematic form $ \breve{X} L_{(Small)}^i = \mathrm{f}(\upgamma,\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) \breve{X} \Psi + \mathrm{f}(\underline{\upgamma},\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) P \Psi + \mathrm{f}(\gsphere^{-1}, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2) {{d \mkern-9mu /} } \upmu $. We now apply $P \breve{X}$ to this identity, where $P \in \mathscr{P}$. 
Using Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the already proven estimates \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD} and \eqref{E:LUNITRADTANGENTIALUPMULINFTY}-\eqref{E:PERMUTEDRADTANGENTIALUPMULINFTY}, and the bootstrap assumptions, we deduce that \begin{align} \label{E:TANGENTIALRADRADLUNITIFIRSTBOUND} \left| P \breve{X} \breve{X} L_{(Small)}^i \right| & \lesssim \left| \mathscr{Z}_*^{[1,3];2} \Psi \right| + \left| \mathscr{Z}_*^{[1,3];1} \upgamma \right| + \left| Y \mathscr{Z}^{\leq 2;1} \upmu \right| + \left| \mathscr{P}_*^{[1,2]} \upmu \right| \lesssim \varepsilon. \end{align} Also using the commutator estimate \eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = L_{(Small)}^i$ and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we can arbitrarily reorder the vectorfield factors in the expression $P \breve{X} \breve{X} L_{(Small)}^i$ up to error terms bounded in the norm $\| \cdot \|_{L^{\infty}(\Sigma_t^u)}$ by $\lesssim \varepsilon$, which yields \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY}. Moreover, a special case of \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY} is the bound $\left|L \breve{X} \breve{X} L_{(Small)}^i \right| \lesssim \varepsilon$. From this estimate, the fact that $ \displaystyle L = \frac{\partial}{\partial t} $, and the fundamental theorem of calculus, we conclude \eqref{E:TWORADAPPLIEDTOLUNITILINFINITY}. \medskip \noindent \textbf{Proof of \eqref{E:IMPROVEDTRANSVERALESTIMATESFORLUNITTHREERAD} and \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTHREERAD}:} As a preliminary step, we establish the bounds $| { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} \gsphere^{-1}| \lesssim 1$ and $| { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} {{d \mkern-9mu /} } x^i| \lesssim 1$. 
To handle the terms $ { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} {{d \mkern-9mu /} } x^i = {{d \mkern-9mu /} } \breve{X} \breve{X} x^i$, we first note that Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} yields $\breve{X} x^i = \breve{X}^i = \mathrm{f}(\underline{\upgamma})$. Thus, from the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} and the bootstrap assumptions, we obtain $| {{d \mkern-9mu /} } \breve{X} \breve{X} x^i| \lesssim |\mathscr{Z}^{\leq 2;1} \underline{\upgamma}| \lesssim 1$ as desired. To handle the terms $ { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} \gsphere^{-1}$, we rely on the basic identity $ { \mathcal{L} \mkern-10mu / } _{\breve{X}} \gsphere^{-1} = - ( { \mathcal{L} \mkern-10mu / } _{\breve{X}} g \mkern-8.5mu / )^{\# \#}$, which was proved in \cite{jSgHjLwW2016}*{Lemma~2.9}. From this identity, Lemma~\ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we deduce that $| { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} \gsphere^{-1}| \lesssim | { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} g \mkern-8.5mu / | + 1$. Moreover, Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} yields that $g \mkern-8.5mu / = \mathrm{f}(\upgamma, {{d \mkern-9mu /} } x^1, {{d \mkern-9mu /} } x^2)$. Thus, from Lemma~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS}, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the bootstrap assumptions, and the bound $| {{d \mkern-9mu /} } \breve{X} \breve{X} x^i| \lesssim 1$ proved above, we conclude the desired bound $| { \mathcal{L} \mkern-10mu / } _{\breve{X} \breve{X}} \gsphere^{-1}| \lesssim 1$. 
We now commute equation \eqref{E:WAVEEQUATIONTRANSPORTINTERPRETATION} with $\breve{X} \breve{X}$ and use Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the bootstrap assumptions, and the bounds $\left| { \mathcal{L} \mkern-10mu / } _{\breve{X}} { \mathcal{L} \mkern-10mu / } _{\breve{X}} \gsphere^{-1} \right| \lesssim 1 $ and $ \left| { \mathcal{L} \mkern-10mu / } _{\breve{X}} { \mathcal{L} \mkern-10mu / } _{\breve{X}} {{d \mkern-9mu /} } x \right| \lesssim 1 $ proved above to deduce that \begin{align} \label{E:WAVEEQNTWICETRANSVERSALLYCOMMUTEDTRANSPORTPOINTOFVIEW} \left| L \breve{X} \breve{X} \breve{X} \Psi \right| & \lesssim \left| \mathscr{Z}_*^{[1,4];2} \Psi \right| + \left| \mathscr{Z}_*^{[1,3];2} \upgamma \right| + \left| \breve{X}^{\leq 2} \vec{W} \right| \\ & \ \ + \left| [ {\Delta \mkern-12mu / \, } , \breve{X} \breve{X}] \Psi \right| + \left| L \breve{X} \breve{X} \breve{X} \Psi - \breve{X} \breve{X} L \breve{X} \Psi \right|. \notag \end{align} Next, we note that the already proven estimates \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD}, \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY}, and \eqref{E:UPTOTWOTRANSVERSALDERIVATIVESOFSLOWINLINFINITY} imply that $ \left| \mathscr{Z}_*^{[1,4];2} \Psi \right| \lesssim \varepsilon $, $ \left| \mathscr{Z}_*^{[1,3];2} \upgamma \right| \lesssim \varepsilon $, and $ \left| \breve{X}^{\leq 2} \vec{W} \right| \lesssim \varepsilon $. Next, we use \eqref{E:ANGLAPTWORADIALTANGENTIALFUNCTIONCOMMUTATOR} with $f = \Psi$ to bound the commutator term $\left| [ {\Delta \mkern-12mu / \, } , \breve{X} \breve{X}] \Psi \right|$ by $\lesssim$ the first term on RHS~\eqref{E:WAVEEQNTWICETRANSVERSALLYCOMMUTEDTRANSPORTPOINTOFVIEW} (and hence it is $\lesssim \varepsilon$ too). 
Next, we use \eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f=\breve{X} \Psi$ and $N=2$ and the bound $ \left| \mathscr{Z}_*^{[1,4];2} \Psi \right| \lesssim \varepsilon $ mentioned above to deduce that $ \left| L \breve{X} \breve{X} \breve{X} \Psi - \breve{X} \breve{X} L \breve{X} \Psi \right| \lesssim \left| \mathscr{Z}_*^{[1,3];2} \Psi \right| \lesssim \varepsilon $. Combining these estimates, we deduce that $ \left| L \breve{X} \breve{X} \breve{X} \Psi \right| \lesssim \varepsilon $, which implies \eqref{E:IMPROVEDTRANSVERALESTIMATESFORLUNITTHREERAD}. From \eqref{E:IMPROVEDTRANSVERALESTIMATESFORLUNITTHREERAD}, the fact that $ \displaystyle L = \frac{\partial}{\partial t} $, and the fundamental theorem of calculus, we conclude the desired estimate \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTHREERAD}. \medskip \noindent \textbf{Proof of \eqref{E:RADDERIVATIVESOFGLLDIFFERENCEBOUND}-\eqref{E:RADDERIVATIVESOFGLLRADPSIDIFFERENCEBOUND} in the case $M=2$:} The proof is very similar to the proof given above in the cases $M=0,1$, so we only highlight the main new ingredients needed in the case $M=2$: we must also use the estimates $ \left| L \breve{X} \breve{X} \breve{X} \Psi \right| \lesssim \varepsilon $ and $\left| L \breve{X} \breve{X} L_{(Small)}^i \right| \lesssim \varepsilon $ established in \eqref{E:IMPROVEDTRANSVERALESTIMATESFORLUNITTHREERAD} and \eqref{E:TANGENTIALANDTWORADAPPLIEDTOLUNITILINFINITY} in order to deduce \eqref{E:LDERIVATIVEOFRADIALDERIVATIVESOFCRITICALFACTOR} in the case $M=2$. 
\medskip \noindent \textbf{Proof of \eqref{E:LUNITRADRADUPMULINFTY}-\eqref{E:RADRADUPMULINFTY}:} We commute equation \eqref{E:UPMUFIRSTTRANSPORT} with $\breve{X} \breve{X}$ and argue as in the proof of \eqref{E:UPMUEVOLUTIONEQUATIONRADANDTANGENTIALCOMMUTEDFIRSTBOUND} to obtain \begin{align} \label{E:UPMUEVOLUTIONEQUATIONRADDOUBLECOMMUTEDFIRSTBOUND} \left| L \breve{X} \breve{X} \upmu \right| & \leq \frac{1}{2} \left| \breve{X} \breve{X} \left\lbrace G_{L L} \breve{X} \Psi \right\rbrace \right| + \left| \mathscr{Z}_*^{[1,3];2} \Psi \right| + \left| L \breve{X} \breve{X} \upmu - \breve{X} \breve{X} L \upmu \right|. \end{align} Using the commutator estimate \eqref{E:TWORADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = \upmu$, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, and the already proven bounds \eqref{E:RADTANGENTIALUPMULINFTY}-\eqref{E:PERMUTEDRADTANGENTIALUPMULINFTY}, we deduce that $\left| L \breve{X} \breve{X} \upmu - \breve{X} \breve{X} L \upmu \right| \lesssim \left| Y \mathscr{Z}^{\leq 1} \upmu \right| \lesssim \varepsilon $. Next, we use \eqref{E:IMPROVEDTRANSVERALESTIMATESFORTWORADBUTNOTPURERAD} to deduce that $ \left| \mathscr{Z}_*^{[1,3];2} \Psi \right| \lesssim \varepsilon $. Thus, we have shown that the last two terms on RHS~\eqref{E:UPMUEVOLUTIONEQUATIONRADDOUBLECOMMUTEDFIRSTBOUND} are $\lesssim \varepsilon$. The remainder of the proof of \eqref{E:LUNITRADRADUPMULINFTY}-\eqref{E:RADRADUPMULINFTY} now proceeds as in the proof of \eqref{E:LUNITRADUPMULINFTY}-\eqref{E:RADUPMULINFTY}, thanks to the availability of the already proven estimates \eqref{E:RADDERIVATIVESOFGLLDIFFERENCEBOUND}-\eqref{E:RADDERIVATIVESOFGLLRADPSIDIFFERENCEBOUND} in the case $M=2$. 
\end{proof} \section{Sharp estimates for \texorpdfstring{$\upmu$}{the inverse foliation density}} \label{S:SHARPESTIMATESFORINVERSEFOLIATIONDENSITY} In this section, we derive sharp estimates for $\upmu$, its derivatives, and various time integrals, many of which involve the singular factor $ \displaystyle \frac{1}{\upmu} $. These estimates play a fundamental role in our energy estimates because our energies contain $\upmu$ weights and because in our energy identities, we will encounter error integrals that involve the derivatives of $\upmu$ and/or factors of $ \displaystyle \frac{1}{\upmu} $. The main results of this section are Props.~\ref{P:SHARPMU} and \ref{P:MUINVERSEINTEGRALESTIMATES}. \subsection{Sharp \texorpdfstring{$L^{\infty}$}{essential sup-norm} estimates and pointwise estimates for \texorpdfstring{$\upmu$}{the inverse foliation density}} \label{SS:MUSHARPSUPNORM} We define the following quantities in order to facilitate our analysis of $\upmu$. \begin{definition}[\textbf{Auxiliary quantities used to analyze $\upmu$}] \label{D:AUXQUANTITIES} We define the following quantities, where $0 \leq s \leq t$: \begin{subequations} \begin{align} M(s,u,\vartheta;t) & := \int_{s'=s}^{s'=t} \left\lbrace L \upmu(t,u,\vartheta) - L \upmu(s',u,\vartheta) \right\rbrace \, ds', \label{E:BIGMDEF} \\ \mathring{\upmu}(u,\vartheta) & := \upmu(s=0,u,\vartheta), \\ \widetilde{M}(s,u,\vartheta;t) & := \frac{M(s,u,\vartheta;t)}{\mathring{\upmu}(u,\vartheta) - M(0,u,\vartheta;t)}, \label{E:WIDETILDEBIGMDEF} \\ \upmu_{(Approx)}(s,u,\vartheta;t) & := 1 + \frac{L \upmu(t,u,\vartheta)}{ \mathring{\upmu}(u,\vartheta) - M(0,u,\vartheta;t)}s + \widetilde{M}(s,u,\vartheta;t). \label{E:MUAPPROXDEF} \end{align} \end{subequations} \end{definition} As we outlined in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}, our high-order energies are allowed to blow up as the shock forms.
Specifically, the best estimates that we are able to derive allow for the possibility that the high-order energies blow up like negative powers of the quantity $\upmu_{\star}$, which we now define; see Prop.~\ref{P:MAINAPRIORIENERGY} for the detailed statement. \begin{definition}[\textbf{Definition of} $\upmu_{\star}$] \label{D:MUSTARDEF} \begin{align} \label{E:MUSTARDEF} \upmu_{\star}(t,u) & := \min \lbrace 1, \min_{\Sigma_t^u} \upmu \rbrace. \end{align} \end{definition} The following simple estimates play a role in our ensuing analysis. \begin{lemma}[\textbf{First estimates for the auxiliary quantities}] \label{L:FIRSTESTIMATESFORAUXILIARYUPMUQUANTITIES} The following estimates hold for $(t,u,\vartheta) \in [0,T_{(Boot)}) \times [0,U_0] \times \mathbb{T}$ and $0 \leq s \leq t$ (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$): \begin{align} \mathring{\upmu}(u,\vartheta) & = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}), \label{E:MUINITIALDATAESTIMATE} \\ \mathring{\upmu}(u,\vartheta) & = 1 + M(0,u,\vartheta;t) + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon). \label{E:MUAMPLITUDENEARONE} \end{align} In addition, the following pointwise estimates hold: \begin{align} \left| L \upmu(t,u,\vartheta) - L \upmu(s,u,\vartheta) \right| & \lesssim \varepsilon(t-s), \label{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE} \\ |M(s,u,\vartheta;t)|, |\widetilde{M}(s,u,\vartheta;t)| & \lesssim \varepsilon (t - s)^2, \label{E:BIGMEST} \\ \upmu(s,u,\vartheta) & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \upmu_{(Approx)}(s,u,\vartheta;t). 
\label{E:MUAPPROXMISLIKEMU} \end{align} \end{lemma} \begin{proof} \eqref{E:MUINITIALDATAESTIMATE} is a restatement of \eqref{E:UPITSELFLINFINITYSIGMA0CONSEQUENCES}. The estimate \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE} follows from the mean value theorem and the estimate $\left| L L \upmu \right| \lesssim \varepsilon$, which is a special case of \eqref{E:LUNITAPPLIEDTOTANGENTIALUPMUANDTANSETSTARLINFTY}. The estimate \eqref{E:MUAMPLITUDENEARONE} and the estimate \eqref{E:BIGMEST} for $M$ then follow from definition \eqref{E:BIGMDEF} and the estimates \eqref{E:MUINITIALDATAESTIMATE} and \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}. The estimate \eqref{E:BIGMEST} for $\widetilde{M}$ follows from definition \eqref{E:WIDETILDEBIGMDEF}, the estimate \eqref{E:BIGMEST} for $M$, and \eqref{E:MUAMPLITUDENEARONE}. To prove \eqref{E:MUAPPROXMISLIKEMU}, we first note the following identity, which is a straightforward consequence of Def.\ \ref{D:AUXQUANTITIES}: \begin{align} \label{E:MUSPLIT} \upmu(s,u,\vartheta) = \left\lbrace \mathring{\upmu}(u,\vartheta) - M(0,u,\vartheta;t) \right\rbrace \upmu_{(Approx)}(s,u,\vartheta;t). \end{align} From \eqref{E:MUSPLIT} and \eqref{E:MUAMPLITUDENEARONE}, we conclude \eqref{E:MUAPPROXMISLIKEMU}. \end{proof} To derive some of the most important estimates, we will distinguish between regions in which $\upmu$ is appreciably shrinking and regions in which it is not. We define the relevant regions in the next definition. 
\begin{definition}[\textbf{Regions of distinct $\upmu$ behavior}] \label{D:REGIONSOFDISTINCTUPMUBEHAVIOR} For each $t \in [0,T_{(Boot)})$, $s \in [0,t]$, and $u \in [0,U_0]$, we partition \begin{subequations} \begin{align} [0,u] \times \mathbb{T} & = \Vplus{t}{u} \cup \Vminus{t}{u}, \label{E:OUINTERVALCROSSS2SPLIT} \\ \Sigma_s^u & = \Sigmaplus{s}{t}{u} \cup \Sigmaminus{s}{t}{u}, \label{E:SIGMASSPLIT} \end{align} \end{subequations} where \begin{subequations} \begin{align} \Vplus{t}{u} & := \left\lbrace (u',\vartheta) \in [0,u] \times \mathbb{T} \ | \ \frac{L \upmu(t,u',\vartheta)}{\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)} \geq 0 \right\rbrace, \label{E:ANGLESANDUWITHNONDECAYINUPMUGBEHAVIOR} \\ \Vminus{t}{u} & := \left\lbrace (u',\vartheta) \in [0,u] \times \mathbb{T} \ | \ \frac{L \upmu(t,u',\vartheta)}{\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)} < 0 \right\rbrace, \label{E:ANGLESANDUWITHDECAYINUPMUGBEHAVIOR} \\ \Sigmaplus{s}{t}{u} & := \left\lbrace (s,u',\vartheta) \in \Sigma_s^u \ | \ (u',\vartheta) \in \Vplus{t}{u} \right\rbrace, \label{E:SIGMAPLUS} \\ \Sigmaminus{s}{t}{u} & := \left\lbrace (s,u',\vartheta) \in \Sigma_s^u \ | \ (u',\vartheta) \in \Vminus{t}{u} \right\rbrace. \label{E:SIGMAMINUS} \end{align} \end{subequations} \end{definition} \begin{remark} Note that by \eqref{E:MUAMPLITUDENEARONE}, the denominators in \eqref{E:ANGLESANDUWITHNONDECAYINUPMUGBEHAVIOR}-\eqref{E:ANGLESANDUWITHDECAYINUPMUGBEHAVIOR} are positive. \end{remark} The following proposition provides our main sharp estimates for $\upmu$ and its derivatives. The estimates play a fundamental role in controlling the error integrals in our energy estimates. \begin{proposition}[\textbf{Sharp pointwise estimates for $\upmu$, $L \upmu$, and $\breve{X} \upmu$}] \label{P:SHARPMU} The following estimates hold for $(t,u,\vartheta) \in [0,T_{(Boot)}) \times [0,U_0] \times \mathbb{T}$ and $0 \leq s \leq t$. 
\medskip \noindent \underline{\textbf{Upper bound for $\displaystyle \frac{[L \upmu]_+}{\upmu}$}}. \begin{align} \label{E:POSITIVEPARTOFLMUOVERMUISBOUNDED} \left\| \frac{[L \upmu]_+}{\upmu} \right\|_{L^{\infty}(\Sigma_s^u)} & \leq C. \end{align} \medskip \noindent \underline{\textbf{Small $\upmu$ implies $L \upmu$ is quantitatively negative}}. \begin{align} \label{E:SMALLMUIMPLIESLMUISNEGATIVE} \upmu(s,u,\vartheta) \leq \frac{1}{4} \implies L \upmu(s,u,\vartheta) \leq - \frac{1}{4} \mathring{\updelta}_*, \end{align} where $\mathring{\updelta}_* > 0$ is defined in \eqref{E:CRITICALBLOWUPTIMEFACTOR}. \medskip \noindent \underline{\textbf{Upper bound for $\displaystyle \frac{[\breve{X} \upmu]_+}{\upmu}$}}. \begin{align} \label{E:UNIFORMBOUNDFORMRADMUOVERMU} \left\| \frac{[\breve{X} \upmu]_+}{\upmu} \right\|_{L^{\infty}(\Sigma_s^u)} & \leq \frac{C}{\sqrt{T_{(Boot)} - s}}. \end{align} \medskip \noindent \underline{\textbf{Sharp spatially uniform estimates}}. Consider a time interval $s \in [0,t]$, and let $\upkappa$ denote the ($t,u$-dependent) constant defined by \begin{align} \label{E:CRUCIALLATETIMEDERIVATIVEDEF} \upkappa & := \sup_{(u',\vartheta) \in [0,u] \times \mathbb{T}} \frac{[L \upmu]_-(t,u',\vartheta)}{\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)}, \end{align} and note that $\upkappa \geq 0$ in view of the estimate \eqref{E:MUAMPLITUDENEARONE}.
Then the following estimates hold (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$): \begin{subequations} \begin{align} \upmu_{\star}(s,u) & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \left\lbrace 1 - \upkappa s \right\rbrace, \label{E:MUSTARBOUNDS} \\ \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)} & = \begin{cases} \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon^{1/2}) \right\rbrace \upkappa, & \mbox{if } \upkappa \geq \varepsilon^{1/2}, \\ \mathcal{O}(\varepsilon^{1/2}), & \mbox{if } \upkappa \leq \varepsilon^{1/2}. \end{cases} \label{E:LUNITUPMUMINUSBOUND} \end{align} \end{subequations} Furthermore, we have \begin{subequations} \begin{align} \label{E:UNOTNECESSARILYEQUALTOONECRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER} \upkappa & \leq \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \mathring{\updelta}_*. \end{align} Moreover, when $u = 1$, we have \begin{align} \label{E:CRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER} \upkappa & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \mathring{\updelta}_*, \end{align} \end{subequations} and \begin{align} \upmu_{\star}(s,1) & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \left\lbrace 1 - \left[ 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right] \mathring{\updelta}_* s \right\rbrace. 
\label{E:MUSTARBOUNDSUISONE} \end{align} \medskip \noindent \underline{\textbf{Sharp estimates when $(u',\vartheta) \in \Vplus{t}{u}$}}. We recall that the set $\Vplus{t}{u}$ is defined in \eqref{E:ANGLESANDUWITHNONDECAYINUPMUGBEHAVIOR}. If $0 \leq s_1 \leq s_2 \leq t$, then the following estimate holds: \begin{align} \label{E:LOCALIZEDMUCANTGROWTOOFAST} \sup_{(u',\vartheta) \in \Vplus{t}{u}} \frac{\upmu(s_2,u',\vartheta)}{\upmu(s_1,u',\vartheta)} & \leq C. \end{align} In addition, if $s \in [0,t]$ and $\Sigmaplus{s}{t}{u}$ is as defined in \eqref{E:SIGMAPLUS}, then \begin{align} \label{E:KEYMUNOTDECAYBOUND} \inf_{\Sigmaplus{s}{t}{u}} \upmu & \geq 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon. \end{align} Moreover, if $s \in [0,t]$, then \begin{align} \left\| \frac{[L \upmu]_-}{\upmu} \right\|_{L^{\infty}(\Sigmaplus{s}{t}{u})} & \leq C \varepsilon. \label{E:KEYMUNOTDECAYINGMINUSPARTLMUOVERMUBOUND} \end{align} \medskip \noindent \underline{\textbf{Sharp estimates when $(u',\vartheta) \in \Vminus{t}{u}$}}. We recall that $\Vminus{t}{u}$ is the set defined in \eqref{E:ANGLESANDUWITHDECAYINUPMUGBEHAVIOR}. Let $\upkappa > 0$ be as in \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF} and consider a time interval $s \in [0,t]$. Then there exists a constant $C > 0$ such that \begin{align} \label{E:LOCALIZEDMUMUSTSHRINK} \mathop{\sup_{0 \leq s_1 \leq s_2 \leq t}}_{(u',\vartheta) \in \Vminus{t}{u}} \frac{\upmu(s_2,u',\vartheta)}{\upmu(s_1,u',\vartheta)} & \leq 1 + C \varepsilon. \end{align} Furthermore, if $s \in [0,t]$ and $\Sigmaminus{s}{t}{u}$ is as defined in \eqref{E:SIGMAMINUS}, then \begin{align} \label{E:LMUPLUSNEGLIGIBLEINSIGMAMINUS} \left\| [L \upmu]_+ \right\|_{L^{\infty}(\Sigmaminus{s}{t}{u})} & \leq C \varepsilon. 
\end{align} Finally, there exist constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} > 0$ and $C > 0$ such that if $0 \leq s \leq t$, then \begin{align} \label{E:HYPERSURFACELARGETIMEHARDCASEOMEGAMINUSBOUND} \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigmaminus{s}{t}{u})} & \leq \begin{cases} \left\lbrace 1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon^{1/2} \right\rbrace \upkappa, & \mbox{if } \upkappa \geq \varepsilon^{1/2}, \\ C \varepsilon^{1/2}, & \mbox{if } \upkappa \leq \varepsilon^{1/2}. \end{cases} \end{align} \noindent \underline{\textbf{Approximate time-monotonicity of $\upmu_{\star}^{-1}(s,u)$}}. There exist constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} > 0$ and $C > 0$ such that if $0 \leq s_1 \leq s_2 \leq t$, then \begin{align} \label{E:MUSTARINVERSEMUSTGROWUPTOACONSTANT} \upmu_{\star}^{-1}(s_1,u) & \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon) \upmu_{\star}^{-1}(s_2,u). \end{align} \end{proposition} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. \medskip \noindent \textbf{Proof of \eqref{E:POSITIVEPARTOFLMUOVERMUISBOUNDED}}: We may assume that $L \upmu(s,u,\vartheta) > 0$ since otherwise \eqref{E:POSITIVEPARTOFLMUOVERMUISBOUNDED} is trivial. Then by \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}, for $0 \leq s' \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$, we have that $L \upmu(s',u,\vartheta) \geq L \upmu(s,u,\vartheta) - C \varepsilon(s-s') \geq - C \varepsilon $. Integrating this estimate with respect to $s'$ starting from $s'=0$ and using \eqref{E:MUINITIALDATAESTIMATE}, we find that $\upmu(s,u,\vartheta) \geq 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon$ and thus $1/\upmu(s,u,\vartheta) \leq 1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon$. 
Also using the bound $\left| L \upmu(s,u,\vartheta) \right| \leq C $ proved in \eqref{E:LUNITUPMULINFINITY}, we conclude the desired estimate. \medskip \noindent \textbf{Proof of \eqref{E:SMALLMUIMPLIESLMUISNEGATIVE}}: By \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}, for $0 \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$, we have that $ L \upmu(s,u,\vartheta) = L \upmu(0,u,\vartheta) + \mathcal{O}(\varepsilon) $. Integrating this estimate with respect to $s$ starting from $s=0$ and using \eqref{E:MUINITIALDATAESTIMATE}, we find that $\upmu(s,u,\vartheta) = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) + s L \upmu(0,u,\vartheta) $. Again using \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE} to deduce that $L \upmu(0,u,\vartheta) = L \upmu(s,u,\vartheta) + \mathcal{O}(\varepsilon)$, we find that $\upmu(s,u,\vartheta) = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) + s L \upmu(s,u,\vartheta) $. It follows that whenever $\upmu(s,u,\vartheta) < 1/4$, we have \[ L \upmu(s,u,\vartheta) < - \frac{1}{s} \left\lbrace 3/4 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace < - \frac{1}{2} \left\lbrace 3/4 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \mathring{\updelta}_* < - \frac{1}{4} \mathring{\updelta}_* \] as desired. \medskip \noindent \textbf{Proof of \eqref{E:UNOTNECESSARILYEQUALTOONECRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER} and \eqref{E:CRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER}}: We prove only \eqref{E:UNOTNECESSARILYEQUALTOONECRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER} since \eqref{E:CRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER} follows from nearly identical arguments. 
From \eqref{E:UPMUFIRSTTRANSPORT}, Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, \eqref{E:MUAMPLITUDENEARONE}, \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}, and the $L^{\infty}$ estimates of Prop.\ \ref{P:IMPROVEMENTOFAUX}, we have \begin{align} \label{E:WEIGHTEDLMUCOMPAREDTOCRUCIALPOINTWISE} \frac{L \upmu(t,u,\vartheta)}{\mathring{\upmu}(u,\vartheta) - M(0,u,\vartheta;t)} & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace L \upmu(0,u,\vartheta) + \mathcal{O}(\varepsilon) \\ & = \frac{1}{2} \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace [G_{L L} \breve{X} \Psi](0,u,\vartheta) + \mathcal{O}(\varepsilon). \notag \end{align} From \eqref{E:WEIGHTEDLMUCOMPAREDTOCRUCIALPOINTWISE} and definitions \eqref{E:CRITICALBLOWUPTIMEFACTOR} and \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}, we conclude that $\upkappa \leq \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace \mathring{\updelta}_* + \mathcal{O}(\varepsilon) = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \mathring{\updelta}_* $ as desired. \medskip \noindent \textbf{Proof of \eqref{E:MUSTARBOUNDS}, \eqref{E:MUSTARBOUNDSUISONE}, \textbf{and} \eqref{E:MUSTARINVERSEMUSTGROWUPTOACONSTANT}}: We first prove \eqref{E:MUSTARBOUNDS}. We start by establishing the following preliminary estimate for the crucial quantity $\upkappa = \upkappa(t,u)$ (see \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}): \begin{align} \label{E:LATETIMELMUTIMESTISLESSTHANONE} t \upkappa < 1. \end{align} We may assume that $\upkappa > 0$ since otherwise \eqref{E:LATETIMELMUTIMESTISLESSTHANONE} is trivial. 
To proceed, we use \eqref{E:MUAPPROXDEF}, \eqref{E:MUAMPLITUDENEARONE}, \eqref{E:BIGMEST}, and \eqref{E:MUSPLIT} to deduce that the following estimate holds for $(s,u',\vartheta) \in [0,t] \times [0,u] \times \mathbb{T}$: \begin{align} \label{E:MUFIRSTLOWERBOUND} \upmu(s,u',\vartheta) & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \left\lbrace 1 + \frac{L \upmu(t,u',\vartheta)} {\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)} s + \mathcal{O}(\varepsilon) (t-s)^2 \right\rbrace. \end{align} Setting $s=t$ in equation \eqref{E:MUFIRSTLOWERBOUND}, taking the min of both sides over $(u',\vartheta) \in [0,u] \times \mathbb{T}$, and appealing to definitions \eqref{E:MUSTARDEF} and \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}, we deduce that $\upmu_{\star}(t,u) = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace (1-\upkappa t) $. Since $\upmu_{\star}(t,u) > 0$ by \eqref{E:BOOTSTRAPMUPOSITIVITY}, we conclude \eqref{E:LATETIMELMUTIMESTISLESSTHANONE}. Having established the preliminary estimate, we now take the min of both sides of \eqref{E:MUFIRSTLOWERBOUND} over $(u',\vartheta) \in [0,u] \times \mathbb{T}$, appeal to definition \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}, and use the estimate \eqref{E:UPITSELFLINFINITYP0CONSEQUENCES} to obtain: \begin{align} \label{E:HARDERCASEMUFIRSTLOWERBOUND} \min_{(u',\vartheta) \in [0,u] \times \mathbb{T}} \upmu(s,u',\vartheta) & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \right\rbrace \left\lbrace 1 - \upkappa s + \mathcal{O}(\varepsilon) (t-s)^2 \right\rbrace. 
\end{align} We will show that the terms in the second pair of braces on RHS~\eqref{E:HARDERCASEMUFIRSTLOWERBOUND} verify \begin{align} \label{E:MUSECONDLOWERBOUND} 1 - \upkappa s + \mathcal{O}(\varepsilon) (t-s)^2 & = \left\lbrace 1 + \mathrm{f}(s,u;t) \right\rbrace \left\lbrace 1 - \upkappa s \right\rbrace, \end{align} where \begin{align} \label{E:AMPLITUDEDEVIATIONFUNCTIONMUSECONDLOWERBOUND} \mathrm{f}(s,u;t) & = \mathcal{O}(\varepsilon). \end{align} The desired estimate \eqref{E:MUSTARBOUNDS} then follows easily from \eqref{E:HARDERCASEMUFIRSTLOWERBOUND}-\eqref{E:AMPLITUDEDEVIATIONFUNCTIONMUSECONDLOWERBOUND} and definition \eqref{E:MUSTARDEF}. To prove \eqref{E:AMPLITUDEDEVIATIONFUNCTIONMUSECONDLOWERBOUND}, we first use \eqref{E:MUSECONDLOWERBOUND} to solve for $\mathrm{f}(s,u;t)$: \begin{align} \label{E:AMPLITUDEDEVIATIONFUNCTIONEXPRESSION} \mathrm{f}(s,u;t) = \frac{\mathcal{O}(\varepsilon) (t-s)^2 } { 1 - \upkappa s } = \frac{\mathcal{O}(\varepsilon) (t-s)^2 } { 1 - \upkappa t + \upkappa (t-s) }. \end{align} We start by considering the case $\upkappa \leq (1/4) \mathring{\updelta}_*$. Since $0 \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$, the denominator in the middle expression in \eqref{E:AMPLITUDEDEVIATIONFUNCTIONEXPRESSION} is $\geq 1/2$, and the desired estimate \eqref{E:AMPLITUDEDEVIATIONFUNCTIONMUSECONDLOWERBOUND} follows easily. In the remaining case, we have $\upkappa > (1/4) \mathring{\updelta}_*$. Using \eqref{E:LATETIMELMUTIMESTISLESSTHANONE}, we deduce that RHS~\eqref{E:AMPLITUDEDEVIATIONFUNCTIONEXPRESSION} $ \displaystyle \leq \frac{1}{\upkappa} \mathcal{O}(\varepsilon) (t-s) \leq C \varepsilon \mathring{\updelta}_*^{-2} \lesssim \varepsilon $ as desired. Inequality \eqref{E:MUSTARINVERSEMUSTGROWUPTOACONSTANT} then follows as a simple consequence of \eqref{E:MUSTARBOUNDS}.
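In more detail, applying \eqref{E:MUSTARBOUNDS} at the times $s_1$ and $s_2$ and noting that $\upkappa \geq 0$ and that $1 - \upkappa s_2 > 0$ (by \eqref{E:LATETIMELMUTIMESTISLESSTHANONE}), we deduce that for $0 \leq s_1 \leq s_2 \leq t$, we have
\begin{align*}
	\frac{\upmu_{\star}^{-1}(s_1,u)}{\upmu_{\star}^{-1}(s_2,u)}
	& =
	\left\lbrace
		1
		+ \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})
		+ \mathcal{O}(\varepsilon)
	\right\rbrace
	\frac{1 - \upkappa s_2}{1 - \upkappa s_1}
	\leq 1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon,
\end{align*}
where the last inequality follows from the fact that $s \mapsto 1 - \upkappa s$ is non-increasing, so that $1 - \upkappa s_2 \leq 1 - \upkappa s_1$.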
Finally, we observe that the estimate \eqref{E:MUSTARBOUNDSUISONE} follows from \eqref{E:MUSTARBOUNDS} and \eqref{E:CRUCIALLATETIMEDERIVATIVECOMPAREDTODATAPARAMETER}. \medskip \noindent \textbf{Proof of \eqref{E:LUNITUPMUMINUSBOUND} and \eqref{E:HYPERSURFACELARGETIMEHARDCASEOMEGAMINUSBOUND}}: To prove \eqref{E:LUNITUPMUMINUSBOUND}, we first use \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE} to deduce that for $0 \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$ and $(u',\vartheta) \in [0,u] \times \mathbb{T}$, we have $L \upmu(s,u',\vartheta) = L \upmu(t,u',\vartheta) + \mathcal{O}(\varepsilon)$. Appealing to definition \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF} and using the estimates \eqref{E:LUNITUPMULINFINITY} and \eqref{E:MUAMPLITUDENEARONE}, we find that $ \displaystyle \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)} = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) \right\rbrace \upkappa + \mathcal{O}(\varepsilon) $. If $ \displaystyle \varepsilon^{1/2} \leq \upkappa $, we see that if $\varepsilon$ is sufficiently small, then we have the desired bound $ \displaystyle \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) \right\rbrace \upkappa + \mathcal{O}(\varepsilon) = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) + \mathcal{O}(\varepsilon^{1/2}) \right\rbrace \upkappa $. On the other hand, if $ \displaystyle \upkappa \leq \varepsilon^{1/2} $, then similar reasoning yields that $ \displaystyle \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)} = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} (\mathring{\upalpha}) \right\rbrace \upkappa + \mathcal{O}(\varepsilon) = \mathcal{O}(\varepsilon^{1/2}) $ as desired. We have thus proved \eqref{E:LUNITUPMUMINUSBOUND}. 
The estimate \eqref{E:HYPERSURFACELARGETIMEHARDCASEOMEGAMINUSBOUND} can be proved via a similar argument and we omit the details. \medskip \noindent \textbf{Proof of \eqref{E:UNIFORMBOUNDFORMRADMUOVERMU}}: We fix times $s$ and $t$ with $0 \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$ and a point $p \in \Sigma_s^u$ with geometric coordinates $(s,\widetilde{u},\widetilde{\vartheta})$. Let $\iota : [0,u] \rightarrow \Sigma_s^u$ be the integral curve of $\breve{X}$ that passes through $p$ and that is parametrized by the values $u'$ of the eikonal function. We set \[ \displaystyle F(u') := \upmu \circ \iota(u'), \qquad \dot{F}(u') := \frac{d}{d u'} F(u') = (\breve{X} \upmu)\circ \iota(u'). \] We must bound $ \displaystyle \frac{[\breve{X} \upmu]_+}{\upmu}|_p = \frac{[\dot{F}(\widetilde{u})]_+}{F(\widetilde{u})} $. We may assume that $\dot{F}(\widetilde{u}) > 0$ since otherwise the desired estimate is trivial. We now set \[ H := \sup_{\mathcal{M}_{T_{(Boot)},U_0}} \breve{X} \breve{X} \upmu. \] If $ \displaystyle F(\widetilde{u}) > \frac{1}{2} $, then the desired estimate is a simple consequence of \eqref{E:RADUPMULINFTY}. We may therefore also assume that $ \displaystyle F(\widetilde{u}) \leq \frac{1}{2} $. Then in view of the estimate $ \left\| \upmu - 1 \right\|_{L^{\infty}\left(\mathcal{P}_0^{T_{(Boot)}}\right)} \lesssim \varepsilon $ along $\mathcal{P}_0^{T_{(Boot)}}$ (see \eqref{E:UPITSELFLINFINITYP0CONSEQUENCES} and \eqref{E:DATAEPSILONVSBOOTSTRAPEPSILON}), we deduce that there exists a $u'' \in [0,\widetilde{u}]$ such that $\dot{F}(u'') < 0$. Considering also the assumption $\dot{F}(\widetilde{u}) > 0$, we see that $H > 0$. Moreover, by \eqref{E:RADRADUPMULINFTY}, we have $H \leq C$. Furthermore, by continuity, there exists a smallest $u_{\ast} \in [0,\widetilde{u}]$ such that $\dot{F}(u') \geq 0$ for $u' \in [u_{\ast},\widetilde{u}]$. 
We also set \begin{align} \label{E:MUMIN} \upmu_{(Min)}(s,u') := \min_{(u'',\vartheta) \in [0,u'] \times \mathbb{T}} \upmu(s,u'',\vartheta). \end{align} The two main steps in the proof are showing that \begin{align} \label{E:RADMUOVERMUALGEBRAICBOUND} \frac{[\breve{X} \upmu(s,\widetilde{u},\widetilde{\vartheta})]_+} {\upmu(s,\widetilde{u},\widetilde{\vartheta})} & \leq H^{1/2}\frac{1}{\sqrt{\upmu_{(Min)}(s,\widetilde{u})}} \end{align} and showing that for $0 \leq s \leq t < T_{(Boot)}$, we have \begin{align} \label{E:UPMUMINLOWERBOUND} \upmu_{(Min)}(s,u) & \geq \max \left\lbrace \left\lbrace 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon \right\rbrace \upkappa (t-s), \left\lbrace 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon \right\rbrace (1 - \upkappa s) \right\rbrace, \end{align} where $\upkappa = \upkappa(t,u)$ is defined in \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}. Once we have obtained \eqref{E:RADMUOVERMUALGEBRAICBOUND}-\eqref{E:UPMUMINLOWERBOUND} (we establish these estimates below), we split the remainder of the proof (which is relatively easy) into the two cases $ \displaystyle \upkappa \leq \frac{1}{4} \mathring{\updelta}_* $ and $ \displaystyle \upkappa > \frac{1}{4} \mathring{\updelta}_* $. In the first case $ \displaystyle \upkappa \leq \frac{1}{4} \mathring{\updelta}_* $, we have $ \displaystyle 1 - \upkappa s \geq 1 - \frac{1}{4} \mathring{\updelta}_* T_{(Boot)} \geq \frac{1}{2} $, and the desired bound $ \displaystyle \frac{[\breve{X} \upmu(s,\widetilde{u},\widetilde{\vartheta})]_+} {\upmu(s,\widetilde{u},\widetilde{\vartheta})} \leq C \leq \frac{C}{T_{(Boot)}^{1/2}} \leq \frac{C}{\sqrt{T_{(Boot)}-s}} \leq \mbox{RHS~\eqref{E:UNIFORMBOUNDFORMRADMUOVERMU}} $ follows easily from \eqref{E:RADMUOVERMUALGEBRAICBOUND} and the second term in the $\max$ on RHS~\eqref{E:UPMUMINLOWERBOUND}.
In the remaining case $ \displaystyle \upkappa > \frac{1}{4} \mathring{\updelta}_* $, we have $ \displaystyle \frac{1}{\upkappa} \leq C $, and using \eqref{E:RADMUOVERMUALGEBRAICBOUND} and the first term in the $\max$ on RHS~\eqref{E:UPMUMINLOWERBOUND}, we deduce that $ \displaystyle \frac{[\breve{X} \upmu(s,\widetilde{u},\widetilde{\vartheta})]_+} {\upmu(s,\widetilde{u},\widetilde{\vartheta})} \leq \frac{C}{\sqrt{t-s}} $. Since this estimate holds for all $t < T_{(Boot)}$ with a uniform constant $C$, we conclude \eqref{E:UNIFORMBOUNDFORMRADMUOVERMU} in this case. We now prove \eqref{E:RADMUOVERMUALGEBRAICBOUND}. To this end, we will show that \begin{align} \label{E:FIRSTRADMUOVERMUALGEBRAICBOUND} \frac{[\breve{X} \upmu(s,\widetilde{u},\widetilde{\vartheta})]_+} {\upmu(s,\widetilde{u},\widetilde{\vartheta})} & \leq 2 H^{1/2} \frac{\sqrt{\upmu(s,\widetilde{u},\widetilde{\vartheta}) - \upmu_{(Min)}(s,\widetilde{u})}} {\upmu(s,\widetilde{u},\widetilde{\vartheta})}. \end{align} Then viewing RHS~\eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND} as a function of the real variable $\upmu(s,\widetilde{u},\widetilde{\vartheta})$ (with all other parameters fixed) on the domain $[\upmu_{(Min)}(s,\widetilde{u}),\infty)$, we carry out a simple calculus exercise to find that RHS~\eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND} $ \displaystyle \leq H^{1/2}\frac{1}{\sqrt{\upmu_{(Min)}(s,\widetilde{u})}} $, which yields \eqref{E:RADMUOVERMUALGEBRAICBOUND}. We now prove \eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND}. Let $u_{\ast}$ be as defined just above \eqref{E:MUMIN}. For any $u' \in [u_{\ast},\widetilde{u}]$, we use the mean value theorem to obtain \begin{align} \label{E:MVTESTIMATES} \dot{F}(\widetilde{u}) - \dot{F}(u') \leq H (\widetilde{u} - u'), \qquad F(\widetilde{u}) - F(u') \geq \min_{u'' \in [u',\widetilde{u}]} \dot{F}(u'') (\widetilde{u} - u').
\end{align} Setting $ \displaystyle u_1 := \widetilde{u} - \frac{1}{2} \frac{\dot{F}(\widetilde{u})}{H} $, we find from the first estimate in \eqref{E:MVTESTIMATES} that for $u' \in [u_1,\widetilde{u}]$, we have $ \displaystyle \dot{F}(u') \geq \frac{1}{2} \dot{F}(\widetilde{u}) $. Using also the second estimate in \eqref{E:MVTESTIMATES}, we find that $ \displaystyle F(\widetilde{u}) - F(u_1) \geq \frac{1}{2} \dot{F}(\widetilde{u}) (\widetilde{u}-u_1) = \frac{1}{4} \frac{\dot{F}^2(\widetilde{u})}{H} $. Noting that the definition \eqref{E:MUMIN} of $\upmu_{(Min)}$ implies that $F(u_1) \geq \upmu_{(Min)}(s,\widetilde{u}) $, we deduce from the previous estimate that \begin{align} \label{E:FDIFFERENCELOWERBOUND} \upmu (s,\widetilde{u},\widetilde{\vartheta}) - \upmu_{(Min)}(s,\widetilde{u}) & \geq \frac{1}{4} \frac{[\breve{X} \upmu(s,\widetilde{u},\widetilde{\vartheta})]_+^2}{H}. \end{align} Taking the square root of \eqref{E:FDIFFERENCELOWERBOUND}, rearranging, and dividing by $\upmu(s,\widetilde{u},\widetilde{\vartheta})$, we conclude the desired estimate \eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND}. It remains for us to prove \eqref{E:UPMUMINLOWERBOUND}. Reasoning as in the proof of \eqref{E:MUFIRSTLOWERBOUND}-\eqref{E:AMPLITUDEDEVIATIONFUNCTIONMUSECONDLOWERBOUND} and using \eqref{E:LATETIMELMUTIMESTISLESSTHANONE}, we find that for $0 \leq s \leq t < T_{(Boot)}$ and $u' \in [0,u]$, we have $\upmu_{(Min)}(s,u') \geq \left\lbrace 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon \right\rbrace (1 - \upkappa s) \geq \left\lbrace 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon \right\rbrace \upkappa (t-s) $. From these two inequalities, we conclude \eqref{E:UPMUMINLOWERBOUND}. 
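For completeness, we now spell out the simple calculus exercise invoked in the passage from \eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND} to \eqref{E:RADMUOVERMUALGEBRAICBOUND}. Setting $m := \upmu_{(Min)}(s,\widetilde{u})$ and $ \displaystyle G(x) := \frac{\sqrt{x - m}}{x}$ for $x \in [m,\infty)$, we compute that
\begin{align*}
	G'(x)
	& = \frac{2m - x}{2 x^2 \sqrt{x - m}},
\end{align*}
from which it follows that $G$ attains its maximum value at $x = 2m$, where $ \displaystyle G(2m) = \frac{1}{2 \sqrt{m}}$. Hence RHS~\eqref{E:FIRSTRADMUOVERMUALGEBRAICBOUND} $ \displaystyle = 2 H^{1/2} G(\upmu(s,\widetilde{u},\widetilde{\vartheta})) \leq H^{1/2} \frac{1}{\sqrt{m}}$, which is precisely RHS~\eqref{E:RADMUOVERMUALGEBRAICBOUND}.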
\medskip \noindent {\textbf{Proof of} \eqref{E:LOCALIZEDMUMUSTSHRINK}}: A straightforward modification of the proof of \eqref{E:MUSTARBOUNDS}, based on equations \eqref{E:MUAPPROXDEF} and \eqref{E:MUSPLIT}, yields that for $0 \leq s_1 \leq s_2 \leq t < T_{(Boot)}$ and $(u',\vartheta) \in \Vminus{t}{u}$, we have $ \displaystyle \frac{\upmu(s_2,u',\vartheta)}{\upmu(s_1,u',\vartheta)} = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \left\lbrace \frac{1 + \left(\frac{L \upmu(t,u',\vartheta)} {\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)} \right) s_2}{ 1 + \left(\frac{L \upmu(t,u',\vartheta)} {\mathring{\upmu}(u',\vartheta) - M(0,u',\vartheta;t)} \right) s_1} \right\rbrace $. The estimate \eqref{E:LOCALIZEDMUMUSTSHRINK} then follows as a simple consequence. \medskip \noindent {\textbf{Proof of} \eqref{E:LOCALIZEDMUCANTGROWTOOFAST}, \eqref{E:KEYMUNOTDECAYBOUND}, \textbf{and} \eqref{E:KEYMUNOTDECAYINGMINUSPARTLMUOVERMUBOUND}}: By \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}, if $(u',\vartheta) \in \Vplus{t}{u}$ and $0 \leq s' \leq s \leq t < T_{(Boot)}$, then $[L \upmu]_-(s',u',\vartheta) \leq C \varepsilon$ and $L \upmu(s',u',\vartheta) \geq - C \varepsilon$. Integrating the latter estimate with respect to $s'$ from $0$ to $s$ and using \eqref{E:MUINITIALDATAESTIMATE}, we find that $\upmu(s,u',\vartheta) \geq 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon $. Moreover, from \eqref{E:UPMULINFTY}, we have the crude bound $\upmu(s,u',\vartheta) \leq C$. The desired bounds \eqref{E:LOCALIZEDMUCANTGROWTOOFAST}, \eqref{E:KEYMUNOTDECAYBOUND}, and \eqref{E:KEYMUNOTDECAYINGMINUSPARTLMUOVERMUBOUND} now readily follow from these estimates.
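In particular, for $(u',\vartheta) \in \Vplus{t}{u}$ and $0 \leq s_1 \leq s_2 \leq t$, the upper and lower bounds for $\upmu$ derived just above imply that
\begin{align*}
	\frac{\upmu(s_2,u',\vartheta)}{\upmu(s_1,u',\vartheta)}
	& \leq \frac{C}{1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon}
	\leq C,
\end{align*}
where in the last step we have assumed that $\mathring{\upalpha}$ and $\varepsilon$ are small enough to guarantee that $ \displaystyle 1 - C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} - C \varepsilon \geq \frac{1}{2}$; this yields \eqref{E:LOCALIZEDMUCANTGROWTOOFAST}.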
\medskip \noindent {\textbf{Proof of} \eqref{E:LMUPLUSNEGLIGIBLEINSIGMAMINUS}}: By \eqref{E:LUNITUPMUATTIMETMINUSLUNITUPMUATTIMESPOINTWISEESTIMATE}, if $(u',\vartheta) \in \Vminus{t}{u}$ and $0 \leq s \leq t < T_{(Boot)}$, then $[L \upmu]_+(s,u',\vartheta) = [L \upmu]_+(t,u',\vartheta) + \mathcal{O}(\varepsilon) = \mathcal{O}(\varepsilon)$. The desired bound \eqref{E:LMUPLUSNEGLIGIBLEINSIGMAMINUS} thus follows. \end{proof} \subsection{Sharp time-integral estimates involving \texorpdfstring{$\upmu$}{the inverse foliation density}} \label{SS:SHARPTIMEINTEGRALESTIMATES} In deriving a priori energy estimates, we use a Gronwall argument that features time integrals involving difficult factors of $\upmu_{\star}^{-B}$ for various constants $B > 0$. In the next proposition, we bound these time integrals. \begin{proposition}[\textbf{Fundamental estimates for time integrals involving $\upmu_{\star}^{-1}$}] \label{P:MUINVERSEINTEGRALESTIMATES} Let \begin{align*} 1 < B \leq 100 \end{align*} be a real number.\footnote{In practice, to close our energy estimates, we need only to consider values of $B$ that are significantly less than $100$. At this point in the paper, we prefer to allow $B$ to be as large as $100$ so that we have a comfortable margin of error later in the paper.} The following estimates hold for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$. \medskip \noindent \underline{\textbf{Estimates relevant for borderline top-order spacetime integrals}}. There exist constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} > 0$ (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}$) and $C > 0$ such that \begin{align} \label{E:KEYMUTOAPOWERINTEGRALBOUND} \int_{s=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}^{B}(s,u)} \, ds & \leq \frac{1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon^{1/2}}{B-1} \upmu_{\star}^{1-B}(t,u). 
\end{align} \noindent \underline{\textbf{Estimates relevant for borderline top-order hypersurface integrals}}. Let $\Sigmaminus{t}{t}{u}$ be the subset of $\Sigma_t$ defined in \eqref{E:SIGMAMINUS}. There exist constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} > 0$ and $C > 0$ such that \begin{align} \label{E:KEYHYPERSURFACEMUTOAPOWERINTEGRALBOUND} \left\| L \upmu \right\|_{L^{\infty}(\Sigmaminus{t}{t}{u})} \int_{s=0}^t \frac{1} {\upmu_{\star}^{B}(s,u)} \, ds & \leq \frac{1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon^{1/2}}{B-1} \upmu_{\star}^{1-B}(t,u). \end{align} \medskip \noindent \underline{\textbf{Estimates relevant for less dangerous top-order spacetime integrals}}. There exists a constant $C > 0$ such that \begin{align} \label{E:LOSSKEYMUINTEGRALBOUND} \int_{s=0}^t \frac{1} {\upmu_{\star}^{B}(s,u)} \, ds & \leq C \left\lbrace 1 + \frac{1}{B-1} \right\rbrace \upmu_{\star}^{1-B}(t,u). \end{align} \medskip \noindent \underline{\textbf{Estimates for integrals that lead to only $\ln \upmu_{\star}^{-1}$ degeneracy}}. There exist constants $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} > 0$ and $C > 0$ such that \begin{align} \label{E:KEYMUINVERSEINTEGRALBOUND} \int_{s=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}(s,u)} \, ds & \leq (1 + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon^{1/2}) \ln \upmu_{\star}^{-1}(t,u) + C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha} + C \varepsilon^{1/2}, \\ \int_{s=0}^t \frac{1}{\upmu_{\star}(s,u)} \, ds & \leq C \left\lbrace \ln \upmu_{\star}^{-1}(t,u) + 1 \right\rbrace. \label{E:LOGLOSSMUINVERSEINTEGRALBOUND} \end{align} \medskip \noindent \underline{\textbf{Estimates for integrals that break the $\upmu_{\star}^{-1}$ degeneracy}}. 
There exists a constant $C > 0$ such that \begin{align} \label{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND} \int_{s=0}^t \frac{1} {\upmu_{\star}^{9/10}(s,u)} \, ds & \leq C. \end{align} \end{proposition} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. \noindent {\textbf{Proof of} \eqref{E:KEYMUTOAPOWERINTEGRALBOUND}, \eqref{E:KEYHYPERSURFACEMUTOAPOWERINTEGRALBOUND}, and \eqref{E:KEYMUINVERSEINTEGRALBOUND}}: To prove \eqref{E:KEYMUTOAPOWERINTEGRALBOUND}, we first consider the case $\upkappa \geq \varepsilon^{1/2}$ in \eqref{E:LUNITUPMUMINUSBOUND}. Using \eqref{E:MUSTARBOUNDS} and \eqref{E:LUNITUPMUMINUSBOUND}, we deduce that \begin{align} \label{E:PROOFKEYMUTOAPOWERINTEGRALBOUND} \int_{s=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}^{B}(s,u)} \, ds & = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon^{1/2}) \right\rbrace \int_{s=0}^t \frac{\upkappa}{(1 - \upkappa s)^{B}} \, ds \\ & \leq \frac{1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon^{1/2})}{B - 1} \frac{1}{(1 - \upkappa t)^{B-1}} = \frac{1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon^{1/2})}{B - 1} \upmu_{\star}^{1-B}(t,u) \notag \end{align} as desired. We now consider the remaining case $\upkappa \leq \varepsilon^{1/2}$ in \eqref{E:LUNITUPMUMINUSBOUND}. 
Using \eqref{E:MUSTARBOUNDS}, \eqref{E:LUNITUPMUMINUSBOUND}, and the fact that $0 \leq s \leq t < T_{(Boot)} \leq 2 \mathring{\updelta}_*^{-1}$, we see that for $\varepsilon$ sufficiently small relative to $\mathring{\updelta}_*$, we have \begin{align} \label{E:SECONDCASEPROOFKEYMUTOAPOWERINTEGRALBOUND} \int_{s=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}^{B}(s,u)} \, ds & \leq C \varepsilon^{1/2} \int_{s=0}^t \frac{1}{(1 - \upkappa s)^{B}} \, ds \\ & \leq C \varepsilon^{1/2} \int_{s=0}^t 1 \, ds \leq C \varepsilon^{1/2} \leq C \varepsilon^{1/2} \frac{1}{(1 - \upkappa t)^{B-1}} \leq \frac{1}{B - 1} \upmu_{\star}^{1-B}(t,u) \notag \end{align} as desired. We have thus proved \eqref{E:KEYMUTOAPOWERINTEGRALBOUND}. Inequality \eqref{E:KEYMUINVERSEINTEGRALBOUND} can be proved using similar arguments and we omit the details. Inequality \eqref{E:KEYHYPERSURFACEMUTOAPOWERINTEGRALBOUND} can be proved using similar arguments with the help of the estimate \eqref{E:HYPERSURFACELARGETIMEHARDCASEOMEGAMINUSBOUND} and we omit the details. \medskip \noindent {\textbf{Proof of} \eqref{E:LOSSKEYMUINTEGRALBOUND}, \eqref{E:LOGLOSSMUINVERSEINTEGRALBOUND}, and \eqref{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND}}: To prove \eqref{E:LOSSKEYMUINTEGRALBOUND}, we first use \eqref{E:MUSTARBOUNDS} to deduce \begin{align} \label{E:PROOFLOSSKEYMUINTEGRALBOUND} \int_{s=0}^t \frac{1} {\upmu_{\star}^{B}(s,u)} \, ds & \leq C \int_{s=0}^t \frac{1}{(1 - \upkappa s)^{B}} \, ds, \end{align} where $\upkappa = \upkappa(t,u)$ is defined in \eqref{E:CRUCIALLATETIMEDERIVATIVEDEF}. We first assume that $ \displaystyle \upkappa \leq \frac{1}{4} \mathring{\updelta}_* $. Then since $0 \leq t < T_{(Boot)} < 2 \mathring{\updelta}_*^{-1}$, we see from \eqref{E:MUSTARBOUNDS} that $ \displaystyle \upmu_{\star}(s,u) \geq \frac{1}{4}$ for $0 \leq s \leq t $ and that RHS~\eqref{E:PROOFLOSSKEYMUINTEGRALBOUND} $\leq C \leq C \upmu_{\star}^{1-B}(t,u)$ as desired. 
In the remaining case, we have $ \displaystyle \upkappa > \frac{1}{4} \mathring{\updelta}_* $, and we can use \eqref{E:MUSTARBOUNDS}, the estimate $ \displaystyle \frac{1}{\upkappa} \leq C$, and \eqref{E:LATETIMELMUTIMESTISLESSTHANONE} to bound RHS~\eqref{E:PROOFLOSSKEYMUINTEGRALBOUND} by $ \displaystyle \leq \frac{C}{\upkappa} \frac{1}{(B - 1)} \frac{1}{(1 - \upkappa t)^{B-1}} \leq \frac{C}{B - 1} \upmu_{\star}^{1-B}(t,u) $ as desired. Inequalities \eqref{E:LOGLOSSMUINVERSEINTEGRALBOUND} and \eqref{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND} can be proved in a similar fashion. We omit the details, aside from remarking that the last step of the proof of \eqref{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND} relies on the trivial estimate $(1 - \upkappa t)^{1/10} \leq 1$. \end{proof} \section{Pointwise estimates for the error terms} \label{S:POINTWISEESTIMATES} In this section, we use some estimates that we established in prior sections to derive pointwise estimates for the error terms that we encounter in our energy estimates. Remark~\ref{R:ESTIMATESPROVEDINOTHERPAPER} especially applies in this section. \subsection{Definition of ``harmless'' error terms} \label{SS:HARMLESS} Most error terms that we encounter are harmless in the sense that they remain negligible all the way up to the shock. 
We now precisely define what we mean by ``harmless.'' \begin{definition}[\textbf{Harmless terms}] \label{D:HARMLESSTERMS} A $Harmless^{[1,N]}$ term is any term such that under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:PSIBOOTSTRAP} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following bound holds on $\mathcal{M}_{T_{(Boot)},U_0}$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{align} \label{E:HARMESSTERMPOINTWISEESTIMATE} \left| Harmless^{[1,N]} \right| & \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| + \left| \mathscr{Z}_*^{[1,N];1} \upgamma \right| + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right|. \end{align} A $Harmless_{(Slow)}^{\leq N}$ term is any term such that under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:PSIBOOTSTRAP} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following bound holds on $\mathcal{M}_{T_{(Boot)},U_0}$: \begin{align} \label{E:SLOWHARMESSTERMPOINTWISEESTIMATE} \left| Harmless_{(Slow)}^{\leq N} \right| & \lesssim \left| \mathscr{P}^{\leq N} \vec{W} \right|. \end{align} \end{definition} \begin{remark}[\textbf{A difference compared to \cite{jSgHjLwW2016}}] \label{R:DIFFERENTHARMLESSTERMDEF} Our definition of $Harmless^{[1,N]}$ terms is similar to the definition of the $Harmless^{\leq N}$ terms featured in \cite{jSgHjLwW2016}, the difference being that here we do not allow for the presence of order $0$ terms on RHS~\eqref{E:HARMESSTERMPOINTWISEESTIMATE}. 
The reason for our slightly different definition is that in this paper, some of the order $0$ quantities are controlled by the smallness parameter $\mathring{\upalpha}$ rather than by $\mathring{\upepsilon}$, and we find it convenient to highlight that (based in part on our assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms) such order $0$ quantities do not appear in our energy estimates. Our definition of $Harmless_{(Slow)}^{\leq N}$ terms accounts for the harmless error terms corresponding to the slow wave variable $\vec{W}$. Note that terms that are order $0$ in $\vec{W}$ \emph{are} allowed on RHS~\eqref{E:SLOWHARMESSTERMPOINTWISEESTIMATE}. \end{remark} \subsection{Identification of the key difficult error terms in the commuted equations} \label{SS:KEYERRORTERMS} As we mentioned, most error terms that arise upon commuting the wave equations are negligible. In the next proposition, we identify those error terms that are not. \begin{proposition}[\textbf{Identification of the key difficult error term factors}] \label{P:IDOFKEYDIFFICULTENREGYERRORTERMS} Recall that $\uprho$ is the scalar function from Lemma~\ref{L:GEOANGDECOMPOSITION}. For $1 \leq N \leq 18$, we have the following estimates: \begin{subequations} \begin{align} \upmu \square_g (Y^{N-1} L \Psi) & = (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}, \label{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS} \\ \upmu \square_g (Y^N \Psi) & = (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi + \uprho (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}. 
\label{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS} \end{align} \end{subequations} Furthermore, if $2 \leq N \leq 18$ and $\mathscr{P}^N$ is any $N^{th}$ order $\mathcal{P}_u$-tangential operator except for $Y^{N-1} L$ or $Y^N$, then \begin{align} \label{E:HARMLESSORDERNCOMMUTATORS} \upmu \square_g (\mathscr{P}^N \Psi) & = Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}. \end{align} In addition, for $1 \leq N \leq 18$, we have the following estimates ($i,j=1,2$): \begin{subequations} \begin{align} \upmu \partial_t \mathscr{P}^N w_0 & = \upmu (h^{-1})^{ab} \partial_a \mathscr{P}^N w_b + 2 \upmu (h^{-1})^{0a} \partial_a \mathscr{P}^N w_0 + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}, \label{E:SLOWTIMECOMMUTED} \\ \upmu \partial_t \mathscr{P}^N w_i & = \upmu \partial_i \mathscr{P}^N w_0 + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}, \label{E:SLOWSPACECOMMUTED} \\ \upmu \partial_t \mathscr{P}^N w & = \upmu \mathscr{P}^N w_0 + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}, \label{E:SLOWCOMMUTED} \\ \upmu \partial_i \mathscr{P}^N w_j & = \upmu \partial_j \mathscr{P}^N w_i + Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}. \label{E:SYMMETRYOFMIXEDPARTIALSCOMMUTED} \end{align} \end{subequations} Finally, we have the following estimates: \begin{subequations} \begin{align} \upmu \partial_t w_0 & = \upmu (h^{-1})^{ab} \partial_a w_b + 2 \upmu (h^{-1})^{0a} \partial_a w_0 + Harmless^{[1,1]} + Harmless_{(Slow)}^{\leq 0}, \label{E:SLOWTIMENOTCOMMUTED} \\ \upmu \partial_t w_i & = \upmu \partial_i w_0, \label{E:SLOWSPACENOTCOMMUTED} \\ \upmu \partial_t w & = \upmu w_0, \label{E:SLOWNOTCOMMUTED} \\ \upmu \partial_i w_j & = \upmu \partial_j w_i. \label{E:SYMMETRYOFMIXEDPARTIALSNOTCOMMUTED} \end{align} \end{subequations} \end{proposition} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. We first establish \eqref{E:SLOWTIMECOMMUTED}. 
From Lemmas~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} and \ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} and the assumptions on the semilinear inhomogeneous terms stated in \eqref{E:SOMENONINEARITIESARELINEAR}, we see that the products of $\upmu$ and the semilinear inhomogeneous terms on the second line of RHS~\eqref{E:SLOW0EVOLUTION} are of the schematic form $ \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi,P \Psi) P \Psi + \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi,P \Psi) \vec{W} $. Thus, from the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we find that the $\mathscr{P}^N$ derivatives of these terms are bounded in magnitude by $ \lesssim \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right| + \left| \mathscr{P}^{\leq N} \vec{W} \right| = Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N} $. To complete the proof of \eqref{E:SLOWTIMECOMMUTED}, it remains for us to bound the commutator terms $ [\upmu \partial_t, \mathscr{P}^N] w_0 $, $ [(h^{-1})^{ab} \upmu \partial_a, \mathscr{P}^N] w_b $, and $ [(h^{-1})^{0a} \upmu \partial_a, \mathscr{P}^N] w_0 $. We show how to bound the last one; the first two can be bounded similarly. From Lemmas~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} and \ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} and the fact that $(h^{-1})^{\alpha \beta} = (h^{-1})^{\alpha \beta}(\Psi,\vec{W})$, we see that obtaining the desired bounds is equivalent to showing that $ [\mathrm{f}(\underline{\upgamma},\vec{W}) P, \mathscr{P}^N] w_0 + [\mathrm{f}(\upgamma,\vec{W}) \breve{X}, \mathscr{P}^N] w_0 = Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N} $. 
To obtain these estimates, we use the commutator estimates \eqref{E:PURETANGENTIALFUNCTIONCOMMUTATORESTIMATE} and \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} with $f = w_0$, the algebraic identity provided by Lemma~\ref{L:RADOFSLOWWAVEALGEBRAICALLYEXPRESSED} (which allows us to replace $\breve{X} \vec{W}$ with $P \vec{W}$ up to error terms), and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}. To derive \eqref{E:SLOWSPACECOMMUTED}-\eqref{E:SYMMETRYOFMIXEDPARTIALSCOMMUTED}, we use the same reasoning that we used in the previous paragraph; the analysis is even simpler since, in view of the absence of semilinear inhomogeneous terms on RHSs~\eqref{E:SLOWIEVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS}, we encounter only commutator error terms. The estimates \eqref{E:SLOWTIMENOTCOMMUTED}-\eqref{E:SYMMETRYOFMIXEDPARTIALSNOTCOMMUTED} can be established via arguments similar to but simpler than the ones we used to derive \eqref{E:SLOWTIMECOMMUTED}-\eqref{E:SYMMETRYOFMIXEDPARTIALSCOMMUTED}, and we therefore omit the details. We now prove the estimates \eqref{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS}-\eqref{E:HARMLESSORDERNCOMMUTATORS}. These estimates were essentially proved in \cite{jSgHjLwW2016}*{Proposition 11.2}, based in part on estimates that are analogs of the estimates of Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES} and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}. However, the derivatives of $\upmu \times$ the semilinear inhomogeneous terms on RHS~\eqref{E:FASTWAVE} were not treated in \cite{jSgHjLwW2016} (because these terms were not present in that paper). 
To handle these ``new'' terms, we first use Lemma~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS}, Lemma~\ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES}, and the assumptions \eqref{E:SOMENONINEARITIESARELINEAR} to deduce that $\upmu \times$ the semilinear inhomogeneous terms on RHS~\eqref{E:FASTWAVE} are of the schematic form $ \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi,P \Psi) P \Psi + \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi,P \Psi) \vec{W} $. Thus, for the same reasons given in the first paragraph of the proof, the $\mathscr{P}^N$ derivatives of these terms are $ Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N} $ as desired. We also clarify that the right-hand sides of the estimates \eqref{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS}-\eqref{E:HARMLESSORDERNCOMMUTATORS} do not feature the order $0$ terms $|\Psi|$ or $|\upgamma|$. This is different compared to the analogous estimates stated in \cite{jSgHjLwW2016}*{Proposition 11.2}, but follows from the proof given there and from the commutator estimates of Lemmas~\ref{L:COMMUTATORESTIMATES} and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR} (see also the remarks made in the discussion of the proofs of Lemmas~\ref{L:COMMUTATORESTIMATES} and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR} and Prop.\ \ref{P:IMPROVEMENTOFAUX}). \end{proof} \subsection{Pointwise estimates for the most difficult product} Out of all products that we encounter in the energy estimates, the most difficult one to control is the first product on RHS~\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS}, namely $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$. In the next proposition, we derive pointwise estimates for this difficult product. We also derive pointwise estimates for the error term factor $ \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi $, which is much easier to control due to the factor of $\upmu$. 
Compared to previous works, the estimates of the proposition involve new terms stemming from the influence of the slow wave variable $\vec{W}$ on ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$, that is, on the null mean curvature of the characteristics corresponding to the fast wave $\Psi$. \begin{proposition}[\textbf{The key pointwise estimate for} $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$] \label{P:KEYPOINTWISEESTIMATE} For $1 \leq N \leq 18$, let \begin{align} \label{E:TOPORDERMODIFIEDTRCHI} \upchifullmodarg{Y^N} := \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi + Y^N \left\lbrace - G_{L L} \breve{X} \Psi - \frac{1}{2} \upmu {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} {{G \mkern-12mu /} \, } L \Psi - \frac{1}{2} \upmu G_{L L} L \Psi + \upmu \angGnospacemixedarg{L}{\#} \cdot {{d \mkern-9mu /} } \Psi \right\rbrace. \end{align} We have the following pointwise estimate: \begin{align} \label{E:KEYPOINTWISEESTIMATE} \left| (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right| (t,u,\vartheta) & \leq \boxed{2} \frac {\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_t^u)}} {\upmu_{\star}(t,u)} \left| \breve{X} Y^N \Psi \right| (t,u,\vartheta) \\ & \ \ + \boxed{4} \frac { \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_t^u)}} {\upmu_{\star}(t,u)} \int_{t'=0}^t \frac { \left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_{t'}^u)}} {\upmu_{\star}(t',u)} \left| \breve{X} Y^N \Psi \right|(t',u,\vartheta) \, dt' \notag \\ & \ \ + \mbox{\upshape Error}, \notag \end{align} where \begin{align} \label{E:ERRORTERMKEYPOINTWISEESTIMATE} \left| \mbox{\upshape Error} \right| (t,u,\vartheta) & \lesssim \frac{1}{\upmu_{\star}(t,u)} \left|\upchifullmodarg{Y^N} \right|(0,u,\vartheta) + \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| (t,u,\vartheta) \\ & \ \ + \frac{1}{\upmu_{\star}(t,u)} \left| \mathscr{Z}_*^{[1,N];1} \Psi \right| (t,u,\vartheta) \notag \\ & \ \ + \frac{1}{\upmu_{\star}(t,u)} \left| \mathscr{P}^{[1,N]} 
\upgamma \right| (t,u,\vartheta) + \frac{1}{\upmu_{\star}(t,u)} \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right| (t,u,\vartheta) \notag \\ & \ \ + \varepsilon \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \left| \breve{X} \mathscr{P}^N \Psi \right| (t',u,\vartheta) \, dt' \notag \\ & \ \ + \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| (t',u,\vartheta) \, dt' \notag \\ & \ \ + \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \left\lbrace \left| \mathscr{Z}_*^{[1,N];1} \Psi \right| + \left| \mathscr{P}^{[1,N]} \upgamma \right| + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right| \right\rbrace (t',u,\vartheta) \, dt' \notag \\ & \ \ + \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right| (t',u,\vartheta) \, dt'. \notag \end{align} Furthermore, we have the following less precise pointwise estimate: \begin{align} \label{E:LESSPRECISEKEYPOINTWISEESTIMATE} & \left| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right| (t,u,\vartheta) \\ & \lesssim \left|\upchifullmodarg{Y^N} \right|(0,u,\vartheta) + \upmu \left| \mathscr{P}^{N+1} \Psi \right|(t,u,\vartheta) + \left| \breve{X} \mathscr{P}^N \Psi \right|(t,u,\vartheta) \notag \\ & \ \ + \left| \mathscr{Z}_*^{[1,N];1} \Psi \right|(t,u,\vartheta) + \left| \mathscr{P}^{[1,N]} \upgamma \right|(t,u,\vartheta) + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right|(t,u,\vartheta) \notag \\ & \ \ + \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \left| \breve{X} \mathscr{P}^N \Psi \right| (t',u,\vartheta) \, dt' + \int_{t'=0}^t \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| (t',u,\vartheta) \, dt' \notag \\ & \ \ + \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \left\lbrace \left| \mathscr{Z}_*^{[1,N];1} \Psi \right| + \left| \mathscr{P}^{[1,N]} \upgamma \right| + \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right| \right\rbrace (t',u,\vartheta) \, dt' \notag \\ & \ \ + 
\int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right| (t',u,\vartheta) \, dt'. \notag \end{align} \end{proposition} \begin{proof}[Proof outline] The estimate \eqref{E:KEYPOINTWISEESTIMATE} was essentially proved in \cite{jSgHjLwW2016}*{Proposition 11.10}, based in part on estimates that are analogs of the estimates of Lemmas~\ref{L:POINTWISEFORRECTANGULARCOMPONENTSOFVECTORFIELDS} and \ref{L:POINTWISEESTIMATESFORGSPHEREANDITSDERIVATIVES}, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, the estimate \eqref{E:RADDERIVATIVESOFGLLDIFFERENCEBOUND}, and the estimates of Prop.~\ref{P:SHARPMU}. We note that on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE}, we have corrected a typo that appeared in \cite{jSgHjLwW2016}. Specifically, the factor $\left| \breve{X} \mathscr{P}^N \Psi \right| $ in the term $ \displaystyle \varepsilon \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \left| \breve{X} \mathscr{P}^N \Psi \right| (t',u,\vartheta) \, dt' $ on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE} was mistakenly listed as $ \left| \mathscr{Z}_*^{\leq N+1;1} \Psi \right| $ in \cite{jSgHjLwW2016}*{Equation (11.33)}. We also clarify that RHS~\eqref{E:KEYPOINTWISEESTIMATE} does not feature the order $0$ terms $|\Psi|$ or $|\upgamma|$. This is different compared to the analogous estimates stated in \cite{jSgHjLwW2016}*{Proposition 11.10}, but follows from the proof given there (for reasons similar to the ones that we described in the discussion of the proofs of Lemmas~\ref{L:COMMUTATORESTIMATES} and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR}, Prop.\ \ref{P:IMPROVEMENTOFAUX}, and Prop.\ \ref{P:IDOFKEYDIFFICULTENREGYERRORTERMS}). In addition, we note that in \cite{jSgHjLwW2016}*{Proposition 11.10}, the coefficient in front of the analog of the second product on RHS~\eqref{E:KEYPOINTWISEESTIMATE} was stated as $\boxed{4}(1 + C \varepsilon)$ rather than $\boxed{4}$. 
Here, we have relegated the $C \varepsilon$ contribution to the error term $ \displaystyle \varepsilon \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \cdots $ on the fourth line of RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE}; this is a minor (essentially cosmetic) change that is justified by the arguments given in the proof of \cite{jSgHjLwW2016}*{Proposition 11.10}. The only new term appearing on RHS~\eqref{E:KEYPOINTWISEESTIMATE} compared to \cite{jSgHjLwW2016}*{Proposition 11.10} is the last one on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE} (which involves the time integral of $\left|\mathscr{P}^{\leq N} \vec{W} \right|$), whose origin we now explain. To do this, we must explain some features of the proof of \eqref{E:KEYPOINTWISEESTIMATE}, which relies on the ``modified'' version of ${\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ described in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}, namely the quantity $\upchifullmodarg{Y^N}$ defined in \eqref{E:TOPORDERMODIFIEDTRCHI}. As we explained in the discussion below equation \eqref{E:TRCHISCHEMATICEVOLUTION}, we are forced to work with $\upchifullmodarg{Y^N}$ in order to avoid losing a derivative at the top order. 
Specifically, to prove \eqref{E:KEYPOINTWISEESTIMATE}, one first derives a transport equation for $\upchifullmodarg{Y^N}$ (see the proof of \cite{jSgHjLwW2016}*{Lemma 11.9} for more details) of the form $ L (\iota \upchifullmodarg{Y^N}) = \cdots $, where $\iota$ is an appropriately defined integrating factor that verifies $\iota(s,u,\vartheta) = \displaystyle \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \frac{\upmu^2(0,u,\vartheta)}{\upmu^2(s,u,\vartheta)}, $ and $\cdots$ contains, among other terms, $ \displaystyle \frac{1}{2} \iota Y^N \left( \upmu G_{L L} \times \mbox{{\upshape RHS}~\eqref{E:FASTWAVE}} \right) $; the terms generated by $ \displaystyle \frac{1}{2} \iota Y^N \left( \upmu G_{L L} \times \mbox{{\upshape RHS}~\eqref{E:FASTWAVE}} \right) $ are the new ones compared to the terms found in \cite{jSgHjLwW2016}*{Proposition 11.10}. Using Lemmas~\ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} and \ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} and our assumptions \eqref{E:SOMENONINEARITIESARELINEAR} on the semilinear inhomogeneous terms, we see that $ \displaystyle \frac{1}{2} \iota Y^N \left( \upmu G_{L L} \times \mbox{{\upshape RHS}~\eqref{E:FASTWAVE}} \right) = \iota Y^N \left\lbrace \mathrm{f}(\upgamma,\vec{W},\breve{X} \Psi,P \Psi)P \Psi + \mathrm{f}(\underline{\upgamma},\vec{W},\breve{X} \Psi,P \Psi) \vec{W} \right\rbrace $. Therefore, with the help of the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we can pointwise bound these new terms in magnitude by $ \displaystyle \lesssim \iota \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| + \iota \left| \mathscr{P}^{\leq N} \vec{W} \right| + \iota \left| \mathscr{P}_*^{[1,N]} \underline{\upgamma} \right| $. 
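To orient the reader, we sketch the elementary integration step that underlies the remainder of the argument; here $\mathfrak{F}$ is schematic notation that we introduce only for this discussion, denoting the collection of terms abbreviated by $\cdots$ above. Since, relative to the geometric coordinates $(t,u,\vartheta)$, the operator $L$ acts as differentiation in $t$ at fixed $(u,\vartheta)$, integrating the transport equation $L (\iota \upchifullmodarg{Y^N}) = \mathfrak{F}$ in time yields
\begin{align*}
	(\iota \upchifullmodarg{Y^N})(t,u,\vartheta)
	& = (\iota \upchifullmodarg{Y^N})(0,u,\vartheta)
		+ \int_{t'=0}^t \mathfrak{F}(t',u,\vartheta) \, dt'.
\end{align*}
In particular, the pointwise bounds for the new terms derived just above contribute time-integral error terms, after one accounts for the size of the integrating factor $\iota$.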
Revisiting the proofs of \cite{jSgHjLwW2016}*{Lemma 11.9} and \cite{jSgHjLwW2016}*{Lemma 11.10}, which are based on integrating the evolution equation $ L (\iota \upchifullmodarg{Y^N}) = \cdots $ in time, we obtain, using the above pointwise bounds for the new terms in $\cdots$, the following estimate: \[ \left| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right|(t,u,\vartheta) \leq C \left\lbrace \sup_{0 \leq t' \leq t} \frac{\upmu(t,u,\vartheta)}{\upmu(t',u,\vartheta)} \right\rbrace^2 \times \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right|(t',u,\vartheta) \, dt' + \cdots, \] where the factor $ \displaystyle \left\lbrace \sup_{0 \leq t' \leq t} \frac{\upmu(t,u,\vartheta)}{\upmu(t',u,\vartheta)} \right\rbrace^2 $ is generated by the integrating factor $\iota$ and $\cdots$ now denotes terms of the same type that appeared in the proof of \cite{jSgHjLwW2016}*{Lemma 11.9}. From Def.~\ref{D:REGIONSOFDISTINCTUPMUBEHAVIOR} and the estimates \eqref{E:LOCALIZEDMUCANTGROWTOOFAST} and \eqref{E:LOCALIZEDMUMUSTSHRINK}, we find that \begin{align} \label{E:CRUDEMUOVERMUBOUND} \sup_{0 \leq t' \leq t} \frac{\upmu(t,u,\vartheta)}{\upmu(t',u,\vartheta)} & \leq C. 
\end{align} It therefore follows that \begin{align} \label{E:KEYDIFFICULTPRODUCTPROOFESTIMATE} \left| (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right|(t,u,\vartheta) & \leq C \frac{1}{\upmu(t,u,\vartheta)} \left| \breve{X} \Psi \right| (t,u,\vartheta) \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right|(t',u,\vartheta) \, dt' + \cdots \\ & \leq C \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right|(t',u,\vartheta) \, dt' + \cdots, \notag \end{align} where $\cdots$ again denotes terms that appear in the proof of \cite{jSgHjLwW2016}*{Lemma 11.9} (and thus on RHS~\eqref{E:KEYPOINTWISEESTIMATE} as well) and to obtain the last inequality in \eqref{E:KEYDIFFICULTPRODUCTPROOFESTIMATE}, we used the simple bound $\| \breve{X} \Psi \|_{L^{\infty}(\Sigma_t^u)} \lesssim 1$ (that is, \eqref{E:PSITRANSVERSALLINFINITYBOUNDBOOTSTRAPIMPROVED}). This explains the origin of the last term on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE} and completes our proof outline of \eqref{E:KEYPOINTWISEESTIMATE}. Similarly, the estimate \eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE} was essentially obtained in \cite{jSgHjLwW2016}*{Proposition 11.10} using ideas similar to but simpler than the ones used in the proof of \eqref{E:KEYPOINTWISEESTIMATE}. The new terms mentioned in the previous paragraph also make a contribution to RHS~\eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE} for essentially the same reason that they appeared on RHS~\eqref{E:KEYDIFFICULTPRODUCTPROOFESTIMATE}. Specifically, they lead to the last term on RHS~\eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE}. In closing, we note that RHS~\eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE} is less singular with respect to factors of $ \displaystyle \frac{1}{\upmu} $ compared to RHS~\eqref{E:KEYPOINTWISEESTIMATE}. The reason is that LHS~\eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE} has an extra factor of $\upmu$ in it compared to LHS~\eqref{E:KEYPOINTWISEESTIMATE}. 
\end{proof} \subsection{Pointwise estimates for the remaining terms in the energy estimates} \label{SS:POINTWISEESTIMATESREMININGTERMSENERGYESTIMATES} We now derive pointwise estimates for the energy estimates error integrands $\basicenergyerrorarg{T}{i}[f]$ from RHS~\eqref{E:E0DIVID} and the error integrand $ \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\vec{V}] $ from \eqref{E:SLOWENERGYID}. \begin{lemma}[\textbf{Pointwise bounds for the remaining error terms in the energy estimates}] \label{L:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} Consider the error terms $\basicenergyerrorarg{T}{1}[f]$, $\cdots$, $\basicenergyerrorarg{T}{5}[f]$ defined in \eqref{E:MULTERRORINTEG1}-\eqref{E:MULTERRORINTEG5}. Let $\varsigma > 0$ be a real number. Then the following pointwise estimate holds without any absolute value taken on the left, where the implicit constants are independent of $\varsigma$: \begin{align} \label{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} \sum_{i=1}^5 \basicenergyerrorarg{T}{i}[f] & \lesssim (1 + \varsigma^{-1})(L f)^2 + (1 + \varsigma^{-1}) (\breve{X} f)^2 + \upmu | {{d \mkern-9mu /} } f|^2 + \varsigma \mathring{\updelta}_* | {{d \mkern-9mu /} } f|^2 \\ & \ \ + \frac{1}{\sqrt{T_{(Boot)} - t}} \upmu | {{d \mkern-9mu /} } f|^2. \notag \end{align} In addition, we have the following pointwise estimate for the error integrand $ \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\vec{V}] $ on RHS~\eqref{E:SLOWENERGYID}, where $\mathfrak{W}[\vec{V}]$ is defined by \eqref{E:SLOWWAVEBASICENERGYINTEGRAND}: \begin{align} \label{E:SIMPLEPOINTWISEBOUNDSLOWWAVEBASICENERGYINTEGRAND} \left| \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\vec{V}] \right| & \lesssim |\vec{V}|^2. 
\end{align} \end{lemma} \begin{remark} \label{R:NEEDTHEESTIMATEWITHTANSETNPSIINPLACEOFPSI} In deriving energy estimates, we will rely on the estimate \eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} with $\mathscr{P}^N \Psi$ in the role of $f$ and the estimate \eqref{E:SIMPLEPOINTWISEBOUNDSLOWWAVEBASICENERGYINTEGRAND} with $\mathscr{P}^N \vec{W}$ in the role of $\vec{V}$. \end{remark} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. We first prove \eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND}. Only the term $\basicenergyerrorarg{T}{3}[f]$ is difficult to treat. Specifically, using \eqref{E:TENSORSDEPENDINGONGOODVARIABLESGOODPSIDERIVATIVES}, \eqref{E:TENSORSDEPENDINGONGOODVARIABLESBADDERIVATIVES}, \eqref{E:ANGDIFFXI}, and the $L^{\infty}$ estimates of Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}, it is straightforward to verify that the terms in braces on RHSs \eqref{E:MULTERRORINTEG1}, \eqref{E:MULTERRORINTEG2}, \eqref{E:MULTERRORINTEG4}, and \eqref{E:MULTERRORINTEG5} are bounded in magnitude by $\lesssim 1$. It follows that for $i=1,2,4,5$, $\left| \basicenergyerrorarg{T}{i}[f] \right| $ is $\lesssim$ the terms on the first line of RHS~\eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND}. The quantities $\varsigma$ and $\mathring{\updelta}_*$ appear on RHS~\eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} because we use Young's inequality to bound $\basicenergyerrorarg{T}{4}[f] \lesssim |L f|| {{d \mkern-9mu /} } f| \leq \varsigma^{-1} \mathring{\updelta}_*^{-1}(L f)^2 + \varsigma \mathring{\updelta}_* | {{d \mkern-9mu /} } f|^2 \leq C \varsigma^{-1} (L f)^2 + \varsigma \mathring{\updelta}_* | {{d \mkern-9mu /} } f|^2 $. Similar remarks apply to $\basicenergyerrorarg{T}{5}[f]$. 
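For the reader's convenience, we recall the weighted form of Young's inequality that we are invoking in this argument: for real numbers $a$, $b$ and any constant $\lambda > 0$, we have
\begin{align*}
	|a| |b|
	& \leq \frac{1}{2 \lambda} a^2
		+ \frac{\lambda}{2} b^2.
\end{align*}
We apply this inequality with $a := L f$, $b := | {{d \mkern-9mu /} } f|$, and $\lambda := 2 \varsigma \mathring{\updelta}_*$, and then use the bound $\mathring{\updelta}_*^{-1} \leq C$ to obtain the chain of inequalities stated above.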
To bound $\basicenergyerrorarg{T}{3}[f]$, we also use \eqref{E:POSITIVEPARTOFLMUOVERMUISBOUNDED} and \eqref{E:UNIFORMBOUNDFORMRADMUOVERMU}, which allow us to bound the first two terms in braces on RHS~\eqref{E:MULTERRORINTEG3}. Note that since no absolute value is taken on LHS~\eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND}, we can replace the factor $(\breve{X} \upmu)/\upmu$ from RHS~\eqref{E:MULTERRORINTEG3} with the factor $[\breve{X} \upmu]_+/\upmu$, which is bounded by \eqref{E:UNIFORMBOUNDFORMRADMUOVERMU}. This completes our proof of \eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND}. To prove \eqref{E:SIMPLEPOINTWISEBOUNDSLOWWAVEBASICENERGYINTEGRAND}, we first use Lemmas \ref{L:SCHEMATICDEPENDENCEOFMANYTENSORFIELDS} and \ref{L:CARTESIANVECTORFIELDSINTERMSOFGEOMETRICONES} to deduce that $ \displaystyle \upmu \partial_{\kappa} \left( (h^{-1})^{\alpha \beta}(\Psi,\vec{W}) \right) = \mathrm{f}(\underline{\upgamma}) P \Psi + \mathrm{f}(\upgamma) \breve{X} \Psi + \mathrm{f}(\underline{\upgamma}) P \vec{W} + \mathrm{f}(\upgamma) \breve{X} \vec{W} $. From this schematic identity and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, we obtain the bounds $ \displaystyle \left| (h^{-1})^{\alpha \beta}(\Psi,\vec{W}) \right| \lesssim 1 $, $ \displaystyle \left| \upmu \partial_{\kappa} \left( (h^{-1})^{\alpha \beta}(\Psi,\vec{W}) \right) \right| \lesssim 1 $, $ \displaystyle \left| 1 + \upgamma \mathrm{f}(\upgamma) \right| \lesssim 1 $, and $\upmu \lesssim 1$, from which the desired estimate \eqref{E:SIMPLEPOINTWISEBOUNDSLOWWAVEBASICENERGYINTEGRAND} easily follows. \end{proof} \section{Energy estimates and improvements of the fundamental \texorpdfstring{$L^{\infty}$}{essential sup-norm} bootstrap assumptions} \label{S:ENERGYESTIMATES} In this section, we derive the main estimates of this article: a priori $L^2$ estimates for the solution up to top order. 
As a simple corollary, we will also derive strict improvements of the fundamental $L^{\infty}$ bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}. Remark~\ref{R:ESTIMATESPROVEDINOTHERPAPER} especially applies in this section. \subsection{Definitions of the fundamental \texorpdfstring{$L^2$}{square integral}-controlling quantities} \label{SS:SQUAREINTEGRALCONTNROLLINGQUANT} In this subsection, we define the quantities that we use to control the solution in $L^2$ up to top order. \begin{definition}[\textbf{The main coercive quantities used for controlling the solution and its derivatives in} $L^2$] \label{D:MAINCOERCIVEQUANT} In terms of the energies and null fluxes of Defs.~\ref{D:ENERGYFLUX} and \ref{D:SLOWWAVEENERGYFLUX}, we define \begin{subequations} \begin{align} \totTanmax{N}(t,u) & := \max_{|\vec{I}| = N} \sup_{(t',u') \in [0,t] \times [0,u]} \left\lbrace \mathbb{E}_{(Fast)}[\mathscr{P}^{\vec{I}} \Psi](t',u') + \mathbb{F}_{(Fast)}[\mathscr{P}^{\vec{I}} \Psi](t',u') \right\rbrace, \label{E:Q0TANNDEF} \\ \totTanmax{[1,N]}(t,u) & := \max_{1 \leq M \leq N} \totTanmax{M}(t,u), \label{E:MAXEDQ0TANLEQNDEF} \\ \slowtotTanmax{N}(t,u) & := \max_{|\vec{I}| = N} \sup_{(t',u') \in [0,t] \times [0,u]} \left\lbrace \mathbb{E}_{(Slow)}[\mathscr{P}^{\vec{I}} \vec{W}](t',u') + \mathbb{F}_{(Slow)}[\mathscr{P}^{\vec{I}} \vec{W}](t',u') \right\rbrace, \label{E:SLOWQ0TANNDEF} \\ \slowtotTanmax{\leq N}(t,u) & := \max_{M \leq N} \slowtotTanmax{M}(t,u). \label{E:MAXEDSLOWQ0TANLEQNDEF} \end{align} \end{subequations} \end{definition} We use the following coercive spacetime integrals to control non-$\upmu$-weighted error integrals involving geometric torus derivatives. These integrals are generated by the term $ \displaystyle - \frac{1}{2} \int_{\mathcal{M}_{t,u}} [L \upmu]_- | {{d \mkern-9mu /} } f|^2 $ on the RHS of the fast wave energy identity \eqref{E:E0DIVID}. 
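We take a moment to explain why these spacetime integrals are coercive. Since $[L \upmu]_- \geq 0$ pointwise, the term under discussion is non-positive as it stands on the RHS of the energy identity; hence, upon bringing it over to the LHS in our energy estimates, it contributes the manifestly non-negative spacetime integral
\begin{align*}
	\frac{1}{2}
	\int_{\mathcal{M}_{t,u}}
		[L \upmu]_-
		| {{d \mkern-9mu /} } f|^2
	\, d \varpi
	& \geq 0,
\end{align*}
which is therefore available for controlling the non-$\upmu$-weighted error integrals mentioned above.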
\begin{definition}[\textbf{Key coercive spacetime integrals}] \label{D:COERCIVEINTEGRAL} We associate the following integrals to $\Psi$, where $[L \upmu]_- = |L \upmu|$ when $L \upmu < 0$ and $[L \upmu]_- = 0$ when $L \upmu \geq 0$: \begin{subequations} \begin{align} \label{E:COERCIVESPACETIMEDEF} \mathbb{K}[\Psi](t,u) & := \frac{1}{2} \int_{\mathcal{M}_{t,u}} [L \upmu]_- | {{d \mkern-9mu /} } \Psi|^2 \, d \varpi, \\ \coerciveTanspacetimemax{N}(t,u) & := \max_{|\vec{I}| = N} \mathbb{K}[\mathscr{P}^{\vec{I}} \Psi](t,u), \\ \coerciveTanspacetimemax{[1,N]}(t,u) & := \max_{1 \leq M \leq N} \coerciveTanspacetimemax{M}(t,u). \label{E:MAXEDCOERCIVESPACETIMEDEF} \end{align} \end{subequations} \end{definition} \begin{remark}[\textbf{The energies vanish for simple plane wave solutions}] \label{R:ENERGIESVANISHFORSIMPLEPLANEWAVE} Note that for simple outgoing plane wave solutions, if $N \geq 1$, then $\totTanmax{[1,N]}(t,u) \equiv 0$, $\slowtotTanmax{\leq N}(t,u) \equiv 0$, and $\coerciveTanspacetimemax{[1,N]}(t,u) \equiv 0$. Hence, in some sense, $\totTanmax{[1,N]}$ and $\slowtotTanmax{\leq N}$ measure the extent to which the solution deviates from a simple outgoing plane wave. These facts are tied to Lemma~\ref{L:INITIALSIZEOFL2CONTROLLING} below and to the fact that $\mathring{\upepsilon} = 0$ for simple outgoing plane wave solutions. \end{remark} \subsection{The coerciveness of the fundamental \texorpdfstring{$L^2$}{square integral}-controlling quantities} \label{SS:COERCIVENESSOFL2CONTROLLING} \subsubsection{Preliminary lemmas} \label{SSS:COERCIVENESSPRELIMINARYLEMMAS} In this subsection, we quantify the coerciveness of the $L^2$-controlling quantities that we defined in Subsect.\ \ref{SS:SQUAREINTEGRALCONTNROLLINGQUANT}. We start with a lemma that provides an identity for the time derivative of integrals over the tori $\ell_{t,u}$. 
\begin{lemma}[\cite{jSgHjLwW2016}*{Lemma 3.6}; \textbf{Identity for the time derivative of} $\ell_{t,u}$ \textbf{integrals}] \label{L:LDERIVATIVEOFLINEINTEGRAL} The following identity holds for scalar-valued functions $f$: \begin{align} \label{E:LDERIVATIVEOFLINEINTEGRAL} \frac{\partial}{\partial t} \int_{\ell_{t,u}} f \, d \uplambda_{{g \mkern-8.5mu /}} & = \int_{\ell_{t,u}} \left\lbrace L f + {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi f \right\rbrace \, d \uplambda_{{g \mkern-8.5mu /}}. \end{align} \end{lemma} In the next lemma, we derive estimates for the metric component $\upsilon$ defined in \eqref{E:METRICANGULARCOMPONENT}. \begin{lemma}[\textbf{Pointwise estimates for} $\upsilon$] \label{L:POINTWISEESTIMATEFORGTANCOMP} Let $\upsilon > 0$ be the scalar function defined in \eqref{E:METRICANGULARCOMPONENT}. The following estimates hold (see Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$): \begin{align} \label{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP} \upsilon(t,u,\vartheta) & = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \upsilon(0,u,\vartheta) = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon). \end{align} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. Using \eqref{E:LDERIVATIVEOFVOLUMEFORMFACTOR} and \eqref{E:PURETANGENTIALCHICOMMUTEDLINFINITY}, we deduce $L \ln \upsilon = \mathcal{O}(\varepsilon)$. Integrating in time, we deduce $\ln \upsilon(t,u,\vartheta) = \ln \upsilon(0,u,\vartheta) + \mathcal{O}(\varepsilon)$, which yields the first equality in \eqref{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP}. 
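For completeness, we spell out the elementary final step in the proof of the first equality: exponentiating the relation $\ln \upsilon(t,u,\vartheta) = \ln \upsilon(0,u,\vartheta) + \mathcal{O}(\varepsilon)$ and using that $e^{\mathcal{O}(\varepsilon)} = 1 + \mathcal{O}(\varepsilon)$, we find that
\begin{align*}
	\upsilon(t,u,\vartheta)
	& = e^{\mathcal{O}(\varepsilon)} \upsilon(0,u,\vartheta)
	= \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \upsilon(0,u,\vartheta).
\end{align*}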
The second equality in \eqref{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP} then follows from the first one and the estimate $\upsilon(0,u,\vartheta) = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$, which we now derive. To this end, we first note that by construction, we have $\Theta|_{t=0} = \partial_2$. Hence, from \eqref{E:LITTLEGDECOMPOSED}-\eqref{E:METRICPERTURBATIONFUNCTION} and \eqref{E:PSIITSELFLINFTYSMALLDATAASSUMPTIONSALONGSIGMA0}, we conclude that $ \upsilon^2|_{t=0} = g(\Theta,\Theta)|_{t=0} = g_{22}|_{t=0} = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) $ as desired. \end{proof} In the next lemma, we compare various integrals that are computed with respect to forms evaluated at different times. \begin{lemma}[\textbf{Comparison results for integrals}] \label{L:LINEVOLUMEFORMCOMPARISON} Let $p=p(\vartheta)$ be a non-negative function of $\vartheta$. Then the following estimates hold for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$: \begin{align} \label{E:LINEVOLUMEFORMCOMPARISON} \int_{\vartheta \in \mathbb{T}} p(\vartheta) d \argspherevol{(0,u,\vartheta)} & = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \int_{\ell_{t,u}} p(\vartheta) d \argspherevol{(t,u,\vartheta)}. \end{align} Furthermore, let $p=p(u',\vartheta)$ be a non-negative function of $(u',\vartheta) \in [0,u] \times \mathbb{T}$ that \textbf{does not depend on $t$}. Then for $s, t \in [0,T_{(Boot)})$ and $u \in [0,U_0]$, we have: \begin{align} \label{E:SIGMATVOLUMEFORMCOMPARISON} \int_{\Sigma_s^u} p \, d \underline{\varpi} & = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \int_{\Sigma_t^u} p \, d \underline{\varpi}. 
\end{align} \end{lemma} \begin{proof} From \eqref{E:RESCALEDVOLUMEFORMS} and the first equality in \eqref{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP}, we deduce that $d \argspherevol{(t,u,\vartheta)} = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace d \argspherevol{(0,u,\vartheta)}$, which yields \eqref{E:LINEVOLUMEFORMCOMPARISON}. \eqref{E:SIGMATVOLUMEFORMCOMPARISON} then follows from \eqref{E:LINEVOLUMEFORMCOMPARISON} and the fact that $d \underline{\varpi}(t,u',\vartheta) = d \argspherevol{(t,u',\vartheta)} du'$ along $\Sigma_t^u$. \end{proof} We now provide a simple variant of Minkowski's inequality for integrals. \begin{lemma}[\textbf{Estimate for the norm} $\| \cdot \|_{L^2(\Sigma_t^u)}$ \textbf{of time-integrated functions}] \label{L:L2NORMSOFTIMEINTEGRATEDFUNCTIONS} Let $f$ be a scalar function and set $F(t,u,\vartheta) := \int_{t'=0}^t f(t',u,\vartheta) \, dt'$. The following estimate holds: \begin{align} \label{E:L2NORMSOFTIMEINTEGRATEDFUNCTIONS} \| F \|_{L^2(\Sigma_t^u)} & \leq (1 + C \varepsilon) \int_{t'=0}^t \| f \|_{L^2(\Sigma_{t'}^u)} \, dt'. \end{align} \end{lemma} \begin{proof} Recall that $\| F \|_{L^2(\Sigma_t^u)} : = \left\lbrace \int_{u'=0}^u \int_{\ell_{t,u'}} F^2(t,u',\vartheta) \, d \argspherevol{(t,u',\vartheta)} \, du' \right\rbrace^{1/2} $. Using \eqref{E:LINEVOLUMEFORMCOMPARISON}, we deduce that for $0 \leq t' \leq t$, we have $d \argspherevol{(t',u',\vartheta)} = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace d \argspherevol{(0,u',\vartheta)}$. \eqref{E:L2NORMSOFTIMEINTEGRATEDFUNCTIONS} follows from this estimate and from applying Minkowski's inequality for integrals (with respect to the measure $d \argspherevol{(0,u',\vartheta)} \, du'$) to the equation defining $F$. \end{proof} \subsubsection{The coerciveness of the fundamental $L^2$-controlling quantities} \label{SSS:COERCIVENESSOFL2CONTROLLING} We now provide the main lemma of Subsect.\ \ref{SS:COERCIVENESSOFL2CONTROLLING}. 
\begin{lemma}[\textbf{The coerciveness of the fundamental} $L^2$-\textbf{controlling quantities}] \label{L:COERCIVENESSOFCONTROLLING} Let $1 \leq M \leq N \leq 18$, and let $\mathscr{P}^M$ be an $M^{th}$-order $\mathcal{P}_u$-tangential vectorfield operator. We have the following bounds for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$: \begin{align} \label{E:COERCIVENESSOFCONTROLLING} \totTanmax{[1,N]}(t,u) \geq \max \Big\lbrace & \frac{1}{2} \left\| \sqrt{\upmu} L \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)}^2, \, \left\| \breve{X} \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)}^2, \, \frac{1}{2} \left\| \sqrt{\upmu} {{d \mkern-9mu /} } \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)}^2, \\ & \left\| L \mathscr{P}^M \Psi \right\|_{L^2(\mathcal{P}_u^t)}^2, \, \left\| \sqrt{\upmu} {{d \mkern-9mu /} } \mathscr{P}^M \Psi \right\|_{L^2(\mathcal{P}_u^t)}^2 \Big\rbrace. \notag \end{align} Moreover, if $1 \leq M \leq N \leq 18$, then the following bounds hold: \begin{subequations} \begin{align} \label{E:PSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE} \left\| \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})} & \leq C \mathring{\upepsilon} + C \totTanmax{[1,N]}^{1/2}(t,u), \\ \left\| \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon} + C \int_{t'=0}^{t} \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \, dt'. 
\label{E:ANOTHERPSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE} \end{align} \end{subequations} In addition, with $\mathbf{1}_{\lbrace \upmu \leq 1/4 \rbrace}$ denoting the characteristic function of the spacetime subset $ \displaystyle \lbrace (t,u,\vartheta) \in [0,\infty) \times [0,U_0] \times \mathbb{T} \ | \ \upmu(t,u,\vartheta) \leq 1/4 \rbrace $, for $1 \leq M \leq N \leq 18$, we have the following bound: \begin{align} \label{E:KEYSPACETIMECOERCIVITY} \coerciveTanspacetimemax{[1,N]}(t,u) & \geq \frac{1}{8} \mathring{\updelta}_* \int_{\mathcal{M}_{t,u}} \mathbf{1}_{\lbrace \upmu \leq 1/4 \rbrace} \left| {{d \mkern-9mu /} } \mathscr{P}^M \Psi \right|^2 \, d \varpi. \end{align} In addition, for $M \leq N \leq 18$, we have the following bounds: \begin{align} \label{E:SLOWCOERCIVENESSOFCONTROLLING} \slowtotTanmax{\leq N}(t,u) \geq \frac{1}{C} \left\| \sqrt{\upmu} \mathscr{P}^M \vec{W} \right\|_{L^2(\Sigma_t^u)}^2 + \frac{1}{C} \left\| \mathscr{P}^M \vec{W} \right\|_{L^2(\mathcal{P}_u^t)}^2. \end{align} Finally, for $0 \leq M \leq N-1 \leq 17$, we have the following bounds: \begin{subequations} \begin{align} \label{E:ELLTUSLOWCOERCIVENESSOFCONTROLLING} \left\| \mathscr{P}^M \vec{W} \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^M \vec{W} \right\|_{L^2(\ell_{t,u})} & \leq C \mathring{\upepsilon} + C \slowtotTanmax{\leq N}^{1/2}(t,u), \\ \left\| \mathscr{P}^M \vec{W} \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon} + C \int_{t'=0}^{t} \frac{1}{\upmu_{\star}^{1/2}(t',u)} \slowtotTanmax{\leq N}^{1/2}(t',u) \, dt'. \label{E:ANOTHERLOWCOERCIVENESSOFCONTROLLING} \end{align} \end{subequations} \end{lemma} \begin{proof} \eqref{E:COERCIVENESSOFCONTROLLING} follows as a straightforward consequence of definition \eqref{E:ENERGYORDERZEROCOERCIVENESS}, Young's inequality, and definition \eqref{E:MAXEDQ0TANLEQNDEF}. \eqref{E:KEYSPACETIMECOERCIVITY} follows from the estimate \eqref{E:SMALLMUIMPLIESLMUISNEGATIVE} and definition \eqref{E:MAXEDCOERCIVESPACETIMEDEF}. 
The estimate \eqref{E:SLOWCOERCIVENESSOFCONTROLLING} is a straightforward consequence of Lemma~\ref{L:COERCIVENESSOFSLOWWAVEENERGIESANDFLUXES} and definition \eqref{E:MAXEDSLOWQ0TANLEQNDEF}. To prove \eqref{E:PSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE}, we first note that it suffices to obtain the desired estimate for $\left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})} $. The reason is that we can integrate the corresponding estimate for $\left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})}^2 $ with respect to $u$ to obtain the desired bound for $ \left\| \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)}^2 $. To obtain the desired bound \eqref{E:PSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE} for $\left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})} $, we first use \eqref{E:LDERIVATIVEOFLINEINTEGRAL} with $f= \left| \mathscr{P}^M \Psi \right|^2$, Young's inequality, and the estimate \eqref{E:PURETANGENTIALCHICOMMUTEDLINFINITY} to obtain \begin{align} \label{E:FIRSTSTEPOORDER0FASTCOERCIVENESSOFCONTROLLING} \frac{\partial}{\partial t} \int_{\ell_{t,u}} \left| \mathscr{P}^M \Psi \right|^2 \, d \argspherevol{(t,u,\vartheta)} & = \int_{\ell_{t,u}} \left\lbrace L \left( \left| \mathscr{P}^M \Psi \right|^2 \right) + {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \left| \mathscr{P}^M \Psi \right|^2 \right\rbrace \, d \argspherevol{(t,u,\vartheta)} \\ & \leq \int_{\ell_{t,u}} \left| L \mathscr{P}^M \Psi \right|^2 \, d \argspherevol{(t,u,\vartheta)} + C \int_{\ell_{t,u}} \left| \mathscr{P}^M \Psi \right|^2 \, d \argspherevol{(t,u,\vartheta)}. 
\notag \end{align} Integrating \eqref{E:FIRSTSTEPOORDER0FASTCOERCIVENESSOFCONTROLLING} with respect to time starting from time $0$, we obtain \begin{align} \label{E:SECONDSTEPOORDER0FASTCOERCIVENESSOFCONTROLLING} \left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})}^2 & \leq \left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{0,u})}^2 + \left\| L \mathscr{P}^M \Psi \right\|_{L^2(\mathcal{P}_u^t)}^2 + C \int_{t'=0}^t \left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t',u})}^2 \, dt'. \end{align} From the small-data assumption \eqref{E:PSIL2SMALLDATAASSUMPTIONSALONGL0U}, we see that the first term on RHS~\eqref{E:SECONDSTEPOORDER0FASTCOERCIVENESSOFCONTROLLING} is $\lesssim \mathring{\upepsilon}^2$, while from \eqref{E:COERCIVENESSOFCONTROLLING}, we see that the second term $\left\| L \mathscr{P}^M \Psi \right\|_{L^2(\mathcal{P}_u^t)}^2$ is $\lesssim \totTanmax{M}(t,u) \lesssim \totTanmax{[1,N]}(t,u)$. Using these bounds and applying Gronwall's inequality to \eqref{E:SECONDSTEPOORDER0FASTCOERCIVENESSOFCONTROLLING}, we conclude the desired estimate $ \left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})}^2 \lesssim \mathring{\upepsilon}^2 + \totTanmax{[1,N]}(t,u) $. To prove \eqref{E:ANOTHERPSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE}, we first use the fundamental theorem of calculus to express $ \mathscr{P}^M \Psi(t,u,\vartheta) = \mathscr{P}^M \Psi(0,u,\vartheta) + \int_{t'=0}^t L \mathscr{P}^M \Psi(t',u,\vartheta) \, dt' $. The desired estimate now follows from this identity, \eqref{E:SIGMATVOLUMEFORMCOMPARISON} with $s=0$ (which implies that $ \| \mathscr{P}^M \Psi(0,\cdot) \|_{L^2(\Sigma_t^u)} = \left\lbrace 1 + \mathcal{O}(\varepsilon) \right\rbrace \| \mathscr{P}^M \Psi \|_{L^2(\Sigma_0^u)} $), Lemma~\ref{L:L2NORMSOFTIMEINTEGRATEDFUNCTIONS}, the small-data assumption \eqref{E:PSIL2SMALLDATAASSUMPTIONSALONGSIGMA0}, and \eqref{E:COERCIVENESSOFCONTROLLING}. 
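In more detail, the argument just given can be summarized by the following schematic chain of estimates (we stress that this is only a rendering of the steps already cited, not a new estimate):
\begin{align*}
\left\| \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_t^u)} & \leq \left\| \mathscr{P}^M \Psi(0,\cdot) \right\|_{L^2(\Sigma_t^u)} + \left\| \int_{t'=0}^t L \mathscr{P}^M \Psi(t',\cdot) \, dt' \right\|_{L^2(\Sigma_t^u)} \\
& \leq (1 + C \varepsilon) \left\| \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_0^u)} + (1 + C \varepsilon) \int_{t'=0}^t \left\| L \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_{t'}^u)} \, dt' \\
& \leq C \mathring{\upepsilon} + C \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \left\| \sqrt{\upmu} L \mathscr{P}^M \Psi \right\|_{L^2(\Sigma_{t'}^u)} \, dt' \leq C \mathring{\upepsilon} + C \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \, dt',
\end{align*}
where in the next-to-last step we used the smallness of the data along $\Sigma_0^u$ and the fact that $\upmu \geq \upmu_{\star}$ on $\Sigma_{t'}^u$, and in the last step we used the coerciveness estimate \eqref{E:COERCIVENESSOFCONTROLLING}.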
To prove \eqref{E:ANOTHERLOWCOERCIVENESSOFCONTROLLING}, we use a similar argument based on the small-data assumption \eqref{E:SLOWL2SMALLDATAASSUMPTIONSALONGSIGMA0} and \eqref{E:SLOWCOERCIVENESSOFCONTROLLING}. Finally, we note that \eqref{E:ELLTUSLOWCOERCIVENESSOFCONTROLLING} follows from arguments similar to the ones that we used to prove the estimate \eqref{E:PSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE} for $\left\| \mathscr{P}^M \Psi \right\|_{L^2(\ell_{t,u})} $ together with the small-data assumption \eqref{E:SLOWL2SMALLDATAASSUMPTIONSALONGL0U}. \end{proof} \subsubsection{The initial smallness of the fundamental $L^2$-controlling quantities} \label{SSS:INITIALSMALLNESSOFL2CONTROLLING} The next lemma shows that the fundamental $L^2$-controlling quantities of Def.~\ref{D:MAINCOERCIVEQUANT} are initially small. \begin{lemma}[\textbf{The fundamental controlling quantities are initially small}] \label{L:INITIALSIZEOFL2CONTROLLING} Assume that $1 \leq N \leq 18$. Under the data-size assumptions of Subsect.\ \ref{SS:SIZEOFTBOOT}, the following estimates hold for $(t,u) \in [0,2 \mathring{\updelta}_*^{-1}] \times [0,U_0]$ (see also Remark~\ref{R:ENERGIESVANISHFORSIMPLEPLANEWAVE}): \begin{subequations} \begin{align} \label{E:INITIALSIZEOFL2CONTROLLING} \totTanmax{[1,N]}(0,u), \, \totTanmax{[1,N]}(t,0) \lesssim \mathring{\upepsilon}^2, \\ \slowtotTanmax{\leq N}(0,u), \, \slowtotTanmax{\leq N}(t,0) \lesssim \mathring{\upepsilon}^2. \label{E:SLOWWAVEINITIALSIZEOFL2CONTROLLING} \end{align} \end{subequations} \end{lemma} \begin{proof} We first note that by \eqref{E:UPITSELFLINFINITYSIGMA0CONSEQUENCES} and \eqref{E:UPITSELFLINFINITYP0CONSEQUENCES}, we have $\upmu \approx 1$ along $\Sigma_0^1$ and along $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$. 
Using these estimates, \eqref{E:ENERGYORDERZEROCOERCIVENESS}-\eqref{E:NULLFLUXENERGYORDERZEROCOERCIVENESS}, \eqref{E:SLOWSIGMATENERGYCOERCIVENESS}-\eqref{E:SLOWNULLFLUXCOERCIVENESS}, and Def.~\ref{D:MAINCOERCIVEQUANT}, we see that \begin{align} \totmax{[1,18]}(0,1) & \lesssim \left\| \mathscr{Z}_*^{[1,19];1} \Psi \right\|_{L^2(\Sigma_0^1)}^2, & \slowtotTanmax{\leq 18}(0,1) & \lesssim \left\| \mathscr{P}^{\leq 18} \vec{W} \right\|_{L^2(\Sigma_0^1)}^2, \label{E:CONTROLLINGQUANTITIESBOUNDEDALONGSIMGMA0} \\ \totmax{[1,18]}(2 \mathring{\updelta}_*^{-1},0) & \lesssim \left\| \mathscr{P}^{[1,19]} \Psi \right\|_{L^2(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})}^2, & \slowtotTanmax{\leq 18}(2 \mathring{\updelta}_*^{-1},0) & \lesssim \left\| \mathscr{P}^{\leq 18} \vec{W} \right\|_{L^2(\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}})}^2. \label{E:CONTROLLINGQUANTITIESBOUNDEDALONGP0} \end{align} The estimates \eqref{E:INITIALSIZEOFL2CONTROLLING}-\eqref{E:SLOWWAVEINITIALSIZEOFL2CONTROLLING} now follow from \eqref{E:CONTROLLINGQUANTITIESBOUNDEDALONGSIMGMA0}-\eqref{E:CONTROLLINGQUANTITIESBOUNDEDALONGP0} and the data assumptions \eqref{E:PSIL2SMALLDATAASSUMPTIONSALONGSIGMA0}, \eqref{E:SLOWL2SMALLDATAASSUMPTIONSALONGSIGMA0}, \eqref{E:PSIL2SMALLDATAASSUMPTIONSALONGP0}, and \eqref{E:SLOWL2SMALLDATAASSUMPTIONSALONGP0}. \end{proof} \subsection{The main a priori energy estimates} \label{SS:MAINAPRIORIENERGYESTIMATES} In this subsection, we state our main a priori energy estimates. The main step in their proof is deriving $L^2$ estimates for the error terms in the commuted equations; we carry out this technical analysis in later subsections. \subsubsection{The system of integral inequalities verified by the energies} \label{SSS:ENERGYINTEGRALINEQUALITIES} We start with a proposition in which we provide the system of integral inequalities verified by the energies. Its proof is located in Subsect.\ \ref{SS:PROOFOFPROPTANGENTIALENERGYINTEGRALINEQUALITIES}. 
\begin{proposition}[\textbf{Integral inequalities for the fundamental $L^2$-controlling quantities}] \label{P:TANGENTIALENERGYINTEGRALINEQUALITIES} Assume that $1 \leq N \leq 18$ and let $\varsigma > 0$ be a real number. \medskip \noindent \underline{\textbf{Integral inequalities relevant for top-order energy estimates.}} Let $\Sigmaminus{t}{t}{u}$ be the subset of $\Sigma_t$ defined in \eqref{E:SIGMAMINUS}. There exists a constant $C > 0$, independent of $\varsigma$, such that the following estimates hold for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$, where the fourth-from-last product on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} (which depends on $\totTanmax{[1,N-1]}$) is absent in the case $N=1$, and we recall that we defined the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\cdot)$ in Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS}: \begin{subequations} \begin{align} \label{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} & \max\left\lbrace \totTanmax{[1,N]}(t,u), \coerciveTanspacetimemax{[1,N]}(t,u), \slowtotTanmax{\leq N}(t,u) \right\rbrace \\ & \leq C (1 + \varsigma^{-1}) \mathring{\upepsilon}^2 \upmu_{\star}^{-3/2}(t,u) \notag \\ & \ \ + \boxed{\left\lbrace 6 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace} \int_{t'=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_{t'}^u)}} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}(t',u) \, dt' \notag \\ & \ \ + \boxed{8} \int_{t'=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_{t'}^u)}} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \, dt' \notag \\ & \ \ + \boxed{\left\lbrace 2 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace} \frac{1}{\upmu_{\star}^{1/2}(t,u)} \totTanmax{[1,N]}^{1/2}(t,u) \left\| L \upmu \right\|_{L^{\infty}(\Sigmaminus{t}{t}{u})} 
\int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \, dt' \notag \\ & \ \ + C \varepsilon \int_{t'=0}^t \frac{1} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{1} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \, dt' \notag \\ & \ \ + C \varepsilon \int_{t'=0}^t \frac{1} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}(t',u) \, dt' \notag \\ & \ \ + C \varepsilon \frac{1}{\upmu_{\star}^{1/2}(t,u)} \totTanmax{[1,N]}^{1/2}(t,u) \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \, dt' \notag \\ & \ \ + C \totTanmax{[1,N]}^{1/2}(t,u) \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \frac{1}{\sqrt{T_{(Boot)} - t'}} \totTanmax{[1,N]}(t',u) \, dt' \notag \\ & \ \ + C (1 + \varsigma^{-1}) \int_{t'=0}^t \frac{1} {\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N]}(t',u) \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \frac{1} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s = 0}^{t'} \frac{1} {\upmu_{\star}^{1/2}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \frac{1} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s = 0}^{t'} \frac{1}{\upmu_{\star}(s,u)} \int_{s' = 0}^s \frac{1} {\upmu_{\star}^{1/2}(s',u)} \totTanmax{[1,N]}^{1/2}(s',u) \, ds' \, ds \, dt' \notag \\ & \ \ + C (1 + \varsigma^{-1}) \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' \notag \\ & \ \ + C \varepsilon \totTanmax{[1,N]}(t,u) + C \varsigma \totTanmax{[1,N]}(t,u) + C \varsigma \coerciveTanspacetimemax{[1,N]}(t,u) \notag \\ & \ \ + C \int_{t'=0}^t \frac{1} {\upmu_{\star}^{5/2}(t',u)} \totTanmax{[1,N-1]}(t',u) \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \frac{1}{\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{1}{\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \slowtotTanmax{\leq N}(t',u) \, dt' \notag \\ & \ \ + C (1 + 
\varsigma^{-1}) \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du'. \notag \end{align} \medskip \noindent \underline{\textbf{Integral inequalities relevant for below-top-order energy estimates.}} Moreover, if $2 \leq N \leq 18$, then we have the following estimates: \begin{align} \label{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} & \max\left\lbrace \totTanmax{[1,N-1]}(t,u), \coerciveTanspacetimemax{[1,N-1]}(t,u), \slowtotTanmax{\leq N-1}(t,u) \right\rbrace \\ & \leq C \mathring{\upepsilon}^2 \notag \\ & \ \ + C \int_{t'=0}^t \frac{1}{\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N-1]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{1}{\upmu_{\star}^{1/2}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \, dt' \notag \\ & \ \ + C \int_{t'=0}^t \frac{1}{\sqrt{T_{(Boot)} - t'}} \totTanmax{[1,N-1]}(t',u) \, dt' \notag \\ & \ \ + C (1 + \varsigma^{-1}) \int_{t'=0}^t \frac{1} {\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N-1]}(t',u) \, dt' \notag \\ & \ \ + C \mathring{\upepsilon} \int_{t'=0}^t \frac{1} {\upmu_{\star}^{1/2}(t',u)} \totTanmax{[1,N-1]}^{1/2}(t',u) \, dt' \notag \\ & \ \ + C (1 + \varsigma^{-1}) \int_{u'=0}^u \totTanmax{[1,N-1]}(t,u') \, du' \notag \\ & \ \ + C \varsigma \coerciveTanspacetimemax{[1,N-1]}(t,u) \notag \\ & \ \ + C \int_{t'=0}^t \slowtotTanmax{\leq N-1}(t',u) \, dt' \notag \\ & \ \ + C (1 + \varsigma^{-1}) \int_{u'=0}^u \slowtotTanmax{\leq N-1}(t,u') \, du'. \notag \end{align} \end{subequations} \end{proposition} \begin{remark}[\textbf{The significance of the ``boxed-constant-involving'' integrals}] \label{R:BOXEDCONSTANTENERGYINTEGRALS} The boxed-constant-involving products on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, such as $ \boxed{\left\lbrace 6 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace} \int_{t'=0}^t \cdots $ are particularly important in that the boxed constants control the maximum possible blowup-rates of our high-order energies. 
Moreover, the maximum possible energy blowup-rates affect the number of derivatives that we need to close the estimates. These are the reasons that we carefully track the size of the boxed constants. See the proof of Prop.\ \ref{P:MAINAPRIORIENERGY} for further discussion. \end{remark} \subsubsection{The main a priori energy estimates} \label{SSS:MAINAPRIRORIENERGY} We now provide the main a priori energy estimates. \begin{proposition}[\textbf{The main a priori energy estimates}] \label{P:MAINAPRIORIENERGY} There exists a constant $C > 0$ such that under the data-size and bootstrap assumptions of Subsects.\ \ref{SS:DATAASSUMPTIONS}-\ref{SS:PSIBOOTSTRAP} and the smallness assumptions of Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}, the following estimates hold for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$: \begin{subequations} \begin{align} \totTanmax{[1,13+M]}^{1/2}(t,u) + \coerciveTanspacetimemax{[1,13+M]}^{1/2}(t,u) + \slowtotTanmax{\leq 13+M}^{1/2}(t,u) & \leq C \mathring{\upepsilon} \upmu_{\star}^{-(M+.9)}(t,u), && (0 \leq M \leq 5), \label{E:MULOSSMAINAPRIORIENERGYESTIMATES} \\ \totTanmax{[1,12]}^{1/2}(t,u) + \coerciveTanspacetimemax{[1,12]}^{1/2}(t,u) + \slowtotTanmax{\leq 12}^{1/2}(t,u) & \leq C \mathring{\upepsilon}. && \label{E:NOMULOSSMAINAPRIORIENERGYESTIMATES} \end{align} \end{subequations} \end{proposition} \begin{proof}[Discussion of proof] Based on the inequalities of Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES} and the sharp estimates of Props.\ \ref{P:SHARPMU} and \ref{P:MUINVERSEINTEGRALESTIMATES}, the proof of \cite{jSgHjLwW2016}*{Proposition 14.1} applies with only very minor changes that in particular account for the terms depending on $\slowtotTanmax{\leq N}$, $N=1,2,\cdots,18$. 
In fact, if one views the quantities $ \displaystyle \max\left\lbrace \totTanmax{[1,N]}(t,u), \coerciveTanspacetimemax{[1,N]}(t,u), \slowtotTanmax{\leq N}(t,u) \right\rbrace $ and $ \displaystyle \max\left\lbrace \totTanmax{[1,N-1]}(t,u), \coerciveTanspacetimemax{[1,N-1]}(t,u), \slowtotTanmax{\leq N-1}(t,u) \right\rbrace $ to be the unknowns in the system of inequalities \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}-\eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, then the proof of \cite{jSgHjLwW2016}*{Proposition 14.1} goes through almost verbatim. For this reason, we omit the details, noting only that the sharp estimates of Prop.~\ref{P:MUINVERSEINTEGRALESTIMATES} are essential for handling the ``boxed-constant-involving'' products on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} and that the smallness of some factors of type $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$ is important for controlling the size of various error term coefficients (such as the coefficients $\boxed{6 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})}$ and $\boxed{2 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})}$ on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} and the factors on the right-hand sides of the estimates of Prop.~\ref{P:MUINVERSEINTEGRALESTIMATES} of the form $C_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}} \mathring{\upalpha}$). We also note that the proof of \cite{jSgHjLwW2016}*{Proposition 14.1} shows that the size of the boxed constants is directly tied to the energy blowup-rates on RHS~\eqref{E:MULOSSMAINAPRIORIENERGYESTIMATES}. In particular, the boxed constants control the ``maximum top-order energy blowup-rate'' of $\upmu_{\star}^{-5.9}(t,u)$, which is featured on RHS~\eqref{E:MULOSSMAINAPRIORIENERGYESTIMATES} in the top-order case $M=5$. 
\end{proof} \subsubsection{Strict improvement of the fundamental bootstrap assumptions} \label{SSS:IMPROVEMENTOFFUNDAMENTALBOOT} Using the energy estimates provided by Prop.~\ref{P:MAINAPRIORIENERGY}, we can derive strict improvements of the fundamental $L^{\infty}$ bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}. The main ingredient in this vein is the following simple Sobolev embedding result. \begin{lemma}[{\textbf{Sobolev embedding along} $\ell_{t,u}$}] \label{L:SOBOLEV} The following estimate holds for scalar-valued functions $f$ defined on $\ell_{t,u}$ for $(t,u) \in [0,T_{(Boot)}) \times [0,U_0]$: \begin{align} \label{E:SOBOLEV} \left\| f \right\|_{L^{\infty}(\ell_{t,u})} & \leq C \left\| Y^{\leq 1} f \right\|_{L^2(\ell_{t,u})}. \end{align} \end{lemma} \begin{proof} Standard Sobolev embedding on $\mathbb{T}$ yields $ \left\| f \right\|_{L^{\infty}(\mathbb{T})} \leq C \left\| \Theta^{\leq 1} f \right\|_{L^2(\mathbb{T})} $, where the integration measure defining $\| \cdot \|_{L^2(\mathbb{T})}$ is $d \vartheta$. Next, we note the estimate $|Y| = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon)$, which follows from the proof of Lemma~\ref{L:TENSORSIZECONTROLLEDBYYCONTRACTIONS} and Cor.\ \ref{C:SQRTEPSILONTOCEPSILON}. Similarly, from Def.\ \ref{D:METRICANGULARCOMPONENT} and the estimate \eqref{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP}, we deduce the estimate $|\Theta| = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon)$. It follows that $|\Theta^{\leq 1} f| \leq C |Y^{\leq 1} f|$ and hence $ \left\| f \right\|_{L^{\infty}(\mathbb{T})} \leq C \left\| \Theta^{\leq 1} f \right\|_{L^2(\mathbb{T})} \leq C \left\| Y^{\leq 1} f \right\|_{L^2(\mathbb{T})} $. Also using the estimate \eqref{E:MOREPRECISEPOINTWISEESTIMATEFORGTANCOMP}, and referring to definition \eqref{E:LINEINTEGRALDEF}, we conclude \eqref{E:SOBOLEV}. 
\end{proof} We now derive strict improvements of the bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP}. \begin{corollary}[\textbf{Strict improvement of the fundamental $L^{\infty}$ bootstrap assumptions}] \label{C:IMPROVEDFUNDAMENTALLINFTYBOOTSTRAPASSUMPTIONS} The fundamental bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP} stated in Subsect.\ \ref{SS:PSIBOOTSTRAP} hold with RHS~\eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP} replaced by $C \mathring{\upepsilon}$. In particular, if $C \mathring{\upepsilon} < \varepsilon$, then we have obtained a strict improvement of the bootstrap assumptions \eqref{E:PSIFUNDAMENTALC0BOUNDBOOTSTRAP} on $\mathcal{M}_{T_{(Boot)},U_0}$. \end{corollary} \begin{proof} From \eqref{E:PSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE}, \eqref{E:ELLTUSLOWCOERCIVENESSOFCONTROLLING}, the a priori energy estimates stated in \eqref{E:NOMULOSSMAINAPRIORIENERGYESTIMATES}, and the Sobolev embedding result \eqref{E:SOBOLEV}, we deduce that $ \left\| \mathscr{P}^{[1,11]} \Psi \right\|_{L^{\infty}(\ell_{t,u})} \lesssim \mathring{\upepsilon} + \totTanmax{[1,12]}^{1/2}(t,u) \lesssim \mathring{\upepsilon} $ and $ \left\| \mathscr{P}^{\leq 10} \vec{W} \right\|_{L^{\infty}(\ell_{t,u})} \lesssim \mathring{\upepsilon} + \slowtotTanmax{\leq 12}^{1/2}(t,u) \lesssim \mathring{\upepsilon} $, from which the desired estimates easily follow. \end{proof} \begin{remark}[\textbf{The main step in the article}] In view of Cor.\ \ref{C:IMPROVEDFUNDAMENTALLINFTYBOOTSTRAPASSUMPTIONS}, we have justified the fundamental $L^{\infty}$ bootstrap assumptions until the time of first shock formation. This is the main step in the article. \end{remark} \subsection{Estimates for the energy error integrals} \label{SS:BOUNDSFORENERGYESTIMATEERRORINTEGRALS} It remains for us to prove Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES}. The proof is located in Subsect.\ \ref{SS:PROOFOFPROPTANGENTIALENERGYINTEGRALINEQUALITIES}. 
To prove the proposition, we must bound all of the error integrals appearing in the energy-null flux identities (up to top order) of Props.~\ref{P:DIVTHMWITHCANCELLATIONS} and \ref{P:SLOWWAVEDIVTHM} in terms of the fundamental $L^2$-controlling quantities. The error integrals are generated by the inhomogeneous terms on the RHS of the equations of Prop.~\ref{P:IDOFKEYDIFFICULTENREGYERRORTERMS} as well as the integrands from Lemma~\ref{L:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} (see also Remark~\ref{R:NEEDTHEESTIMATEWITHTANSETNPSIINPLACEOFPSI}). \subsubsection{Estimates for the most difficult top-order energy estimate error term} \label{SSS:ESTIMATESFORTHEMOSTDIFFICULT} As an important first step, in the next lemma, we bound the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ of the most difficult product that we encounter, namely the product $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ on the RHS of the wave equation \eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS} verified by $Y^N \Psi$. The proof of the lemma is based on the pointwise estimates of Prop.~\ref{P:KEYPOINTWISEESTIMATE}, which in turn was based on the modified quantities described in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}; we recall that the modified quantity \eqref{E:TOPORDERMODIFIEDTRCHI} was needed in the proof of Prop.~\ref{P:KEYPOINTWISEESTIMATE} in order to avoid the loss of a derivative at the top order. \begin{lemma}[$L^2$ \textbf{bound for the most difficult product}] \label{L:DIFFICULTTERML2BOUND} Assume that $1 \leq N \leq 18$. 
There exists a constant $C > 0$ such that the following $L^2$ estimate holds for the difficult product $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ from Prop.~\ref{P:KEYPOINTWISEESTIMATE}: \begin{align} \label{E:DIFFICULTTERML2BOUND} \left\| (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \leq \boxed{2} \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_t^u)}} {\upmu_{\star}(t,u)} \totTanmax{[1,N]}^{1/2}(t,u) \\ & \ \ + \boxed{4} \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_t^u)}} {\upmu_{\star}(t,u)} \int_{s=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_s^u)}} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \notag \\ & \ \ + C \varepsilon \frac{1} {\upmu_{\star}(t,u)} \int_{s=0}^t \frac{1} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \notag \\ & \ \ + C \frac{1}{\upmu_{\star}(t,u)} \int_{s'=0}^t \frac{1}{\upmu_{\star}(s',u)} \int_{s=0}^{s'} \frac{1}{\upmu_{\star}^{1/2}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \, ds' \notag \\ & \ \ + C \frac{1}{\upmu_{\star}(t,u)} \int_{s=0}^t \frac{1}{\upmu_{\star}^{1/2}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \notag \\ & \ \ + C \frac{1}{\upmu_{\star}^{1/2}(t,u)} \totTanmax{[1,N]}^{1/2}(t,u) + \underbrace{C \frac{1}{\upmu_{\star}^{3/2}(t,u)} \totTanmax{[1,N-1]}^{1/2}(t,u)}_{\mbox{\upshape Absent if $N=1$}} \notag \\ & \ \ + C \frac{1} {\upmu_{\star}(t,u)} \int_{s=0}^t \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds \notag \\ & \ \ + C \frac{1}{\upmu_{\star}^{3/2}(t,u)} \mathring{\upepsilon}. 
\notag \end{align} Furthermore, we have the following less precise estimate: \begin{align} \label{E:LESSPRECISEDIFFICULTTERML2BOUND} \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \lesssim \totTanmax{[1,N]}^{1/2}(t,u) + \int_{s=0}^t \frac{1} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \\ & \ \ + \int_{s=0}^t \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds + \mathring{\upepsilon} \left\lbrace \ln \upmu_{\star}^{-1}(t,u) + 1 \right\rbrace. \notag \end{align} \end{lemma} \begin{proof}[Proof outline] To prove \eqref{E:DIFFICULTTERML2BOUND}, we start by taking the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ of both sides of inequality \eqref{E:KEYPOINTWISEESTIMATE}. The norms $\| \cdot \|_{L^2(\Sigma_t^u)}$ of all terms on RHS~\eqref{E:KEYPOINTWISEESTIMATE} were bounded in the proof of \cite{jSgHjLwW2016}*{Lemma~14.8} up to the following five remarks: \textbf{i)} As we noted in our proof outline of \eqref{E:KEYPOINTWISEESTIMATE}, the right-hand side of \eqref{E:KEYPOINTWISEESTIMATE} does not feature the order $0$ terms $|\Psi|$ or $|\upgamma|$, which is different from the analogous estimate stated in \cite{jSgHjLwW2016}. It is for this reason that RHS~\eqref{E:DIFFICULTTERML2BOUND} does not feature any term of size $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$. \textbf{ii)} The typo correction mentioned in the proof of Prop.~\ref{P:KEYPOINTWISEESTIMATE} is important for the proof of \cite{jSgHjLwW2016}*{Lemma~14.8}. \textbf{iii)} The last term $ \displaystyle \frac{1}{\upmu_{\star}(t,u)} \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right| (t',u,\vartheta) \, dt' $ on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE} was not present in \cite{jSgHjLwW2016}. 
To handle this last term, we use \eqref{E:L2NORMSOFTIMEINTEGRATEDFUNCTIONS} and \eqref{E:SLOWCOERCIVENESSOFCONTROLLING} to bound its norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ by \begin{align} \label{E:SLOWWAVETERMINTOPORDERACOUSTICALGEOMETRYESTIMATE} & \leq C \frac{1}{\upmu_{\star}(t,u)} \int_{s=0}^t \left\| \mathscr{P}^{\leq N} \vec{W} \right\|_{L^2(\Sigma_s^u)} \, ds \leq C \frac{1} {\upmu_{\star}(t,u)} \int_{s=0}^t \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds. \end{align} Clearly RHS~\eqref{E:SLOWWAVETERMINTOPORDERACOUSTICALGEOMETRYESTIMATE} is bounded by the next-to-last product on RHS~\eqref{E:DIFFICULTTERML2BOUND} as desired. \textbf{iv)} In \cite{jSgHjLwW2016}*{Lemma~14.8}, the coefficient of the analog of the second product on RHS~\eqref{E:DIFFICULTTERML2BOUND} was stated as $\boxed{4.05}$ rather than $\boxed{4}$. We note that due to the remarks made in the proof outline of Prop.\ \ref{P:KEYPOINTWISEESTIMATE}, in the present paper, we have relegated the ``additional $.05$ contribution'' to the error term $C \varepsilon \cdots$ on RHS~\eqref{E:DIFFICULTTERML2BOUND}; this is a minor (essentially cosmetic) change that we described in our proof outline of Prop.\ \ref{P:KEYPOINTWISEESTIMATE}. \textbf{v)} In the case $N=1$, we use the estimate \eqref{E:ANOTHERPSIHIGHERORDERL2ESTIMATELOSSOFONEDERIVATIVE} with $M=1$ to bound the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ of the product $ \frac{1}{\upmu_{\star}(t,u)} \left| \mathscr{Z}_*^{[1,N];1} \Psi \right| (t,u,\vartheta) $ on RHS~\eqref{E:ERRORTERMKEYPOINTWISEESTIMATE} by $ \leq $ the sum of the fifth product on RHS~\eqref{E:DIFFICULTTERML2BOUND} and the last product on RHS~\eqref{E:DIFFICULTTERML2BOUND}. 
This detail was not mentioned in \cite{jSgHjLwW2016}*{Lemma~14.8}; it is relevant because the term $ C \frac{1}{\upmu_{\star}^{3/2}(t,u)} \totTanmax{[1,N-1]}^{1/2}(t,u) $ on RHS~\eqref{E:DIFFICULTTERML2BOUND}, which can be used to help control the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ of the term $ \frac{1}{\upmu_{\star}(t,u)} \left| \mathscr{Z}_*^{[1,N];1} \Psi \right| (t,u,\vartheta) $ when $N > 1$, is absent from RHS~\eqref{E:DIFFICULTTERML2BOUND} in the case $N=1$. This completes our proof outline of \eqref{E:DIFFICULTTERML2BOUND}. The estimate \eqref{E:LESSPRECISEDIFFICULTTERML2BOUND} can similarly be proved by taking the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ of both sides of inequality \eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE}. All terms were handled in the proof of \cite{jSgHjLwW2016}*{Lemma~14.8} (remark \textbf{i)} from the previous paragraph also applies here) except for the last term $ \displaystyle \int_{t'=0}^t \left| \mathscr{P}^{\leq N} \vec{W} \right| (t',u,\vartheta) \, dt' $ on RHS~\eqref{E:LESSPRECISEKEYPOINTWISEESTIMATE}, which, by the arguments given in the previous paragraph, can be bounded in the norm $\| \cdot \|_{L^2(\Sigma_t^u)}$ by $ \displaystyle \leq C \int_{s=0}^t \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds $ as desired. \end{proof} \subsubsection{\texorpdfstring{$L^2$}{Square integral} bounds for less degenerate top-order error integrals} \label{SSS:LESSDEGENERATEENERGYESTIMATEINTEGRALS} In the next lemma, we bound some up-to-top-order error integrals that appear in our energy estimates. As in the proof of Lemma~\ref{L:DIFFICULTTERML2BOUND}, the proof relies on modified quantities, which are needed to avoid the loss of a derivative. However, the estimates of the lemma are much less degenerate than those of Lemma~\ref{L:DIFFICULTTERML2BOUND} because of the availability of a helpful factor of $\upmu$ in the integrands. 
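Before stating the lemma, we note the heuristic reason for the gain (a schematic computation provided only for orientation, with $F$ denoting a generic error-term factor): by Cauchy--Schwarz and the coerciveness estimate \eqref{E:COERCIVENESSOFCONTROLLING}, products featuring the factor of $\upmu$ obey the non-degenerate bound
\begin{align*}
\int_{\Sigma_t^u} \upmu \left| {{d \mkern-9mu /} } \mathscr{P}^N \Psi \right| \left| F \right| \, d \underline{\varpi} & \leq \left\| \sqrt{\upmu} {{d \mkern-9mu /} } \mathscr{P}^N \Psi \right\|_{L^2(\Sigma_t^u)} \left\| \sqrt{\upmu} F \right\|_{L^2(\Sigma_t^u)} \leq \sqrt{2} \totTanmax{[1,N]}^{1/2}(t,u) \left\| \sqrt{\upmu} F \right\|_{L^2(\Sigma_t^u)},
\end{align*}
which does not involve a degenerate factor of $\upmu_{\star}^{-1}(t,u)$; in contrast, an analogous product without the helpful factor of $\upmu$ would cost a factor of $\upmu_{\star}^{-1/2}(t,u)$.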
\begin{lemma}[\textbf{Bounds for less degenerate top-order error integrals}] \label{L:LESSDEGENERATEENERGYESTIMATEINTEGRALS} Assume that $1 \leq N \leq 18$, let $\mathscr{P}^N$ be any $N^{th}$-order $\mathcal{P}_u$-tangential operator, and let $\uprho$ be the scalar function from Lemma~\ref{L:GEOANGDECOMPOSITION}. We have the following integral estimates: \begin{subequations} \begin{align} \label{E:FIRSTLESSDEGENERATEENERGYESTIMATEINTEGRALS} & \left| \int_{\mathcal{M}_{t,u}} (\breve{X} \mathscr{P}^N \Psi) \myarray[\uprho] {1} (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi \right| \\ & \lesssim \int_{t'=0}^t \left\lbrace \ln \upmu_{\star}^{-1}(t',u) + 1 \right\rbrace^2 \totTanmax{[1,N]}(t',u) \, dt' + \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' \notag \\ & \ \ + \int_{t'=0}^t \slowtotTanmax{\leq N}(t',u) \, dt' + \mathring{\upepsilon}^2, \notag \\ & \left| \int_{\mathcal{M}_{t,u}} (1 + 2 \upmu) (L \mathscr{P}^N \Psi) \myarray[\uprho] {1} (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi \right| \label{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS} \\ & \lesssim \int_{t'=0}^t \left\lbrace \ln \upmu_{\star}^{-1}(t',u) + 1 \right\rbrace^2 \totTanmax{[1,N]}(t',u) \, dt' + \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' \notag \\ & \ \ + \int_{t'=0}^t \slowtotTanmax{\leq N}(t',u) \, dt' + \mathring{\upepsilon}^2. \notag \end{align} \end{subequations} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis.
To prove \eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}, we use the fact that $\uprho = \mathrm{f}(\upgamma) \upgamma$ (see \eqref{E:LINEARLYSMALLSCALARSDEPENDINGONGOODVARIABLES}), the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX}, Cauchy--Schwarz, and \eqref{E:COERCIVENESSOFCONTROLLING} to deduce \begin{align} \label{E:FIRSTBOUNDFORSECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS} \mbox{LHS~\eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}} & \lesssim \int_{\mathcal{M}_{t,u}} \left| L \mathscr{P}^N \Psi \right|^2 \, d \varpi + \int_{\mathcal{M}_{t,u}} \left| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right|^2 \, d \varpi \\ & \lesssim \int_{u'=0}^u \left\| L \mathscr{P}^N \Psi \right\|_{L^2(\mathcal{P}_{u'}^t)}^2 \, du' + \int_{t' = 0}^t \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt' \notag \\ & \lesssim \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' + \int_{t' = 0}^t \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt'. \notag \end{align} To complete the proof of \eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}, we must handle the final integral on RHS~\eqref{E:FIRSTBOUNDFORSECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}. 
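We clarify that in passing to the second line of \eqref{E:FIRSTBOUNDFORSECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}, we used (schematically, and as we often do below) the standard decompositions of spacetime integrals into integrals of norms over the null hypersurfaces $\mathcal{P}_{u'}^t$ and the hypersurfaces $\Sigma_{t'}^u$, which for scalar functions $f$ take the form \[ \int_{\mathcal{M}_{t,u}} |f|^2 \, d \varpi \lesssim \int_{u'=0}^u \left\| f \right\|_{L^2(\mathcal{P}_{u'}^t)}^2 \, du', \qquad \int_{\mathcal{M}_{t,u}} |f|^2 \, d \varpi \lesssim \int_{t' = 0}^t \left\| f \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt'. \]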
To bound the integral by $\leq$ RHS~\eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}, we bound the integrand $ \displaystyle \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 $ by using inequality \eqref{E:LESSPRECISEDIFFICULTTERML2BOUND} (with $t'$ in place of $t$), simple estimates of the form $ab \lesssim a^2 + b^2$, and the following bounds for the two time integrals on RHS~\eqref{E:LESSPRECISEDIFFICULTTERML2BOUND}: \[ \int_{s=0}^{t'} \frac{1} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \lesssim \left\lbrace \ln \upmu_{\star}^{-1}(t',u) + 1 \right\rbrace \totTanmax{[1,N]}^{1/2}(t',u), \] \[ \int_{s=0}^{t'} \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds \lesssim \slowtotTanmax{\leq N}^{1/2}(t',u). \] The above two bounds follow from \eqref{E:LOGLOSSMUINVERSEINTEGRALBOUND} and the fact that $\totTanmax{[1,N]}$ and $\slowtotTanmax{\leq N}$ are increasing in their arguments. These steps yield that all terms generated by $ \displaystyle \int_{t' = 0}^t \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt' $ are $\lesssim \mbox{RHS~\eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}}$, except for the following integral generated by the last term on RHS~\eqref{E:LESSPRECISEDIFFICULTTERML2BOUND}: \[ \mathring{\upepsilon}^2 \int_{t'=0}^t \left\lbrace \ln \upmu_{\star}^{-1}(t',u) + 1 \right\rbrace^2 \, dt'. \] Using \eqref{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND}, we deduce that the above term is $\lesssim \mathring{\upepsilon}^2$ as desired. We have thus proved \eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}. 
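For the reader's convenience, we sketch the proof of the first of the two time-integral bounds displayed above; the second follows from the same reasoning. Since $\totTanmax{[1,N]}$ is increasing in its arguments, we may pull the factor $\totTanmax{[1,N]}^{1/2}$ out of the time integral and then apply \eqref{E:LOGLOSSMUINVERSEINTEGRALBOUND}: \[ \int_{s=0}^{t'} \frac{1} {\upmu_{\star}(s,u)} \totTanmax{[1,N]}^{1/2}(s,u) \, ds \leq \totTanmax{[1,N]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{1} {\upmu_{\star}(s,u)} \, ds \lesssim \left\lbrace \ln \upmu_{\star}^{-1}(t',u) + 1 \right\rbrace \totTanmax{[1,N]}^{1/2}(t',u). \]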
The proof of \eqref{E:FIRSTLESSDEGENERATEENERGYESTIMATEINTEGRALS} starts with the following analog of \eqref{E:FIRSTBOUNDFORSECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}, which can be proved in the same way: \begin{align} \mbox{LHS~\eqref{E:FIRSTLESSDEGENERATEENERGYESTIMATEINTEGRALS}} & \lesssim \int_{t' = 0}^t \left\| \breve{X} \mathscr{P}^N \Psi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt' + \int_{t' = 0}^t \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt' \\ & \lesssim \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' + \int_{t' = 0}^t \left\| \upmu Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)}^2 \, dt'. \notag \end{align} The remaining details are similar to the ones that we gave above in our proof of \eqref{E:SECONDLESSDEGENERATEENERGYESTIMATEINTEGRALS}; we therefore omit them. \end{proof} \subsubsection{Estimates involving simple error terms} \label{SSS:ENERGYESTMATESHARMLESS} In this subsubsection, we derive $L^2$ bounds for some simple error terms that we encounter in the energy estimates. We start with a lemma in which we control the $L^2$ norms of the ``easy derivatives'' of the eikonal function quantities. By easy derivatives, we mean ones that we can control without using the modified quantities described in Subsubsect.\ \ref{SSS:ENERGYESTIMATES}. 
\begin{lemma}[$L^2$ \textbf{bounds for the eikonal function quantities that do not require modified quantities}] \label{L:EASYL2BOUNDSFOREIKONALFUNCTIONQUANTITIES} The following estimates hold for $1 \leq N \leq 18$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{subequations} \begin{align} \left\| L \mathscr{P}_*^{[1,N]} \upmu \right\|_{L^2(\Sigma_t^u)}, \, \left\| L \mathscr{P}^{\leq N} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| L \mathscr{P}^{\leq N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \lesssim \mathring{\upepsilon} + \frac{\totTanmax{[1,N]}^{1/2}(t,u)}{\upmu_{\star}^{1/2}(t,u)}, \label{E:LUNITTANGENGITALEIKONALINTERMSOFCONTROLLING} \\ \left\| L \mathscr{Z}^{\leq N;1} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| L \mathscr{Z}^{\leq N-1;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \lesssim \mathring{\upepsilon} + \frac{\totTanmax{[1,N]}^{1/2}(t,u)}{\upmu_{\star}^{1/2}(t,u)}, \label{E:LUNITONERADIALEIKONALINTERMSOFCONTROLLING} \\ \left\| \mathscr{P}_*^{[1,N]} \upmu \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{[1,N]} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{\leq N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \lesssim \mathring{\upepsilon} + \int_{s=0}^t \frac{\totTanmax{[1,N]}^{1/2}(s,u)}{\upmu_{\star}^{1/2}(s,u)} \, ds, \label{E:TANGENGITALEIKONALINTERMSOFCONTROLLING} \\ \left\| \mathscr{Z}_*^{[1,N];1} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{Z}^{\leq N-1;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \lesssim \mathring{\upepsilon} + \int_{s=0}^t \frac{\totTanmax{[1,N]}^{1/2}(s,u)}{\upmu_{\star}^{1/2}(s,u)} \, ds. 
\label{E:ONERADIALEIKONALINTERMSOFCONTROLLING} \end{align} \end{subequations} \end{lemma} \begin{proof} Thanks in part to the estimates of Lemmas~\ref{L:BEHAVIOROFEIKONALFUNCTIONQUANTITIESALONGSIGMA0}, \ref{L:COMMUTATORESTIMATES}, and \ref{L:TRANSVERALTANGENTIALCOMMUTATOR}, Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:MUINVERSEINTEGRALESTIMATES}, and Lemma~\ref{L:L2NORMSOFTIMEINTEGRATEDFUNCTIONS}, the proof of \cite{jSgHjLwW2016}*{Lemma~14.3} goes through nearly verbatim. In particular, these estimates do not involve the slow wave variable $\vec{W}$. One small difference compared to \cite{jSgHjLwW2016}*{Lemma~14.3} is that we do not state estimates for $\left\| L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)} $ in Lemma~\ref{L:EASYL2BOUNDSFOREIKONALFUNCTIONQUANTITIES} since in the present paper, these quantities are not controlled by the data-size parameter $\mathring{\upepsilon}$, but rather by $\mathring{\upalpha}$ (much like in the $L^{\infty}$ estimate \eqref{E:LUNITIITSEFLSMALLDATALINFINITYCONSEQUENCES}). \end{proof} We now estimate the error integrals generated by the terms $Harmless^{[1,N]}$ and $Harmless_{(Slow)}^{\leq N}$ on the RHSs of the equations of Prop.~\ref{P:IDOFKEYDIFFICULTENREGYERRORTERMS}. \begin{lemma}[$L^2$ \textbf{bounds for error integrals involving} $Harmless^{[1,N]}$ \textbf{and} $Harmless_{(Slow)}^{\leq N}$ \textbf{terms}] \label{L:STANDARDPSISPACETIMEINTEGRALS} Assume that $1 \leq N \leq 18$ and $\varsigma > 0$. Recall that we defined terms of the form $Harmless^{[1,N]}$ and $Harmless_{(Slow)}^{\leq N}$ in Def.~\ref{D:HARMLESSTERMS}. 
We have the following estimates, where the implicit constants are independent of $\varsigma$: \begin{align} \label{E:STANDARDPSISPACETIMEINTEGRALS} \int_{\mathcal{M}_{t,u}} & \left| \threemyarray[(1 + \upmu) L \mathscr{P}^N \Psi] {\breve{X} \mathscr{P}^N \Psi} {\mathscr{P}^{\leq N} \vec{W}} \right| \left| \myarray[Harmless^{[1,N]}] {Harmless_{(Slow)}^{\leq N}} \right| \, d \varpi \\ & \lesssim (1 + \varsigma^{-1}) \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' + (1 + \varsigma^{-1}) \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' \notag \\ & \ \ + \varsigma \coerciveTanspacetimemax{[1,N]}(t,u) + (1 + \varsigma^{-1}) \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du' + \mathring{\upepsilon}^2. \notag \end{align} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. The bounds for error integrals that do not involve $\vec{W}$ can be derived by using arguments nearly identical to the ones given in the proof of \cite{jSgHjLwW2016}*{Lemma~14.7}; we therefore omit those details. It remains for us to prove the desired bounds involving $\vec{W}$, which means that we must estimate the spacetime integrals of various quadratic terms. In this proof, we derive the desired estimates only for some representative quadratic terms. The remaining terms can be bounded similarly, and we omit those details. Specifically, we show how to bound the integral of the product $\left| \mathscr{P}^{\leq N} \vec{W} \right| \left| \mathscr{Z}_*^{[1,N+1];1} \Psi \right| $. We note that our proof of the desired bound relies on all of the main ideas needed to prove all of the estimates stated in the lemma.
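Before proceeding, we note the role of the parameter $\varsigma$ in the estimates: it enters through the weighted Young inequality \[ ab \leq \frac{1}{2} \varsigma^{-1} a^2 + \frac{1}{2} \varsigma b^2, \qquad a, b \geq 0, \ \varsigma > 0, \] which we use whenever we need to place a small factor of $\varsigma$ in front of the coercive spacetime integral $\coerciveTanspacetimemax{[1,N]}(t,u)$ on RHS~\eqref{E:STANDARDPSISPACETIMEINTEGRALS}; since the $\varsigma$-dependence is tracked explicitly in this fashion, the implicit constants can be chosen to be independent of $\varsigma$.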
To proceed, we repeatedly use the commutator estimate \eqref{E:ONERADIALTANGENTIALFUNCTIONCOMMUTATORESTIMATE} and the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} to commute the (at most one) factor of $\breve{X}$ in the operator $\mathscr{Z}_*^{[1,N+1];1}$ so that $\breve{X}$ acts last, which yields \[ |\mathscr{Z}_*^{[1,N+1];1} \Psi| \lesssim |\breve{X} \mathscr{P}^{[1,N]} \Psi| + |\mathscr{P}^{[1,N+1]} \Psi| + |\mathscr{Z}_*^{[1,N];1} \upgamma| + |\mathscr{P}_*^{[1,N]} \underline{\upgamma}|. \] Thus, we must bound the integral of the four corresponding products generated by the RHS of the previous inequality. To bound the integral of the first product, we use Young's inequality and Lemma~\ref{L:COERCIVENESSOFCONTROLLING} to obtain \begin{align} & \int_{\mathcal{M}_{t,u}} \left| \mathscr{P}^{\leq N} \vec{W} \right| \left| \breve{X} \mathscr{P}^{[1,N]} \Psi \right| \, d \varpi \label{E:FINALHARMLESSEXAMPLEINTEGRAL} \\ & \lesssim \int_{u'=0}^u \int_{\mathcal{P}_{u'}^t} \left| \mathscr{P}^{\leq N} \vec{W} \right|^2 \, d \overline{\varpi} \, du' + \int_{t'=0}^t \int_{\Sigma_{t'}^u} \left| \breve{X} \mathscr{P}^{[1,N]} \Psi \right|^2 \, d \underline{\varpi} \, dt' \notag \\ & \lesssim \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du' + \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt', \notag \end{align} which is $\lesssim$ RHS~\eqref{E:STANDARDPSISPACETIMEINTEGRALS} as desired. To handle the second product $|\mathscr{P}^{\leq N} \vec{W}| |\mathscr{P}^{[1,N+1]} \Psi|$, we first consider terms of the form $|\mathscr{P}^{\leq N} \vec{W}| |Y \mathscr{P}^{\leq N} \Psi| $. 
To bound the corresponding integrals, we use Young's inequality and Lemma~\ref{L:COERCIVENESSOFCONTROLLING} and, for terms with more than one derivative on $\Psi$, we consider separately the regions in which $\upmu < 1/4$ and $\upmu > 1/4$, which in total leads to the following bound: \begin{align} & \int_{\mathcal{M}_{t,u}} \left| \mathscr{P}^{\leq N} \vec{W} \right| \left| Y \mathscr{P}^{\leq N} \Psi \right| \, d \varpi \label{E:ANOTHERFINALHARMLESSEXAMPLEINTEGRAL} \\ & \lesssim (1 + \varsigma^{-1}) \int_{u'=0}^u \int_{\mathcal{P}_{u'}^t} \left| \mathscr{P}^{\leq N} \vec{W} \right|^2 \, d \overline{\varpi} \, du' + \varsigma \int_{\mathcal{M}_{t,u}} \mathbf{1}_{\lbrace \upmu \leq 1/4 \rbrace} \left| {{d \mkern-9mu /} } \mathscr{P}^{[1,N]} \Psi \right|^2 \, d \varpi \notag \\ & \ \ + \int_{t'=0}^t \int_{\Sigma_{t'}^u} \upmu \left| {{d \mkern-9mu /} } \mathscr{P}^{[1,N]} \Psi \right|^2 \, d \underline{\varpi} \, dt' + \int_{t'=0}^t \int_{\Sigma_{t'}^u} \left| Y \Psi \right|^2 \, d \underline{\varpi} \, dt' \notag \\ & \lesssim (1 + \varsigma^{-1}) \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du' + \varsigma \coerciveTanspacetimemax{[1,N]}(t,u) + \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' + \mathring{\upepsilon}^2, \notag \end{align} which is $\lesssim$ RHS~\eqref{E:STANDARDPSISPACETIMEINTEGRALS} as desired. To finish the proof of the desired bounds for the second product $|\mathscr{P}^{\leq N} \vec{W}| |\mathscr{P}^{[1,N+1]} \Psi|$, it remains for us to bound the spacetime integral of the product $|\mathscr{P}^{\leq N} \vec{W}| |L \mathscr{P}^{\leq N} \Psi| $. 
We can obtain the desired bound by using arguments similar to the ones we used in proving \eqref{E:ANOTHERFINALHARMLESSEXAMPLEINTEGRAL}, except that we also rely on the bounds $ \displaystyle \int_{\mathcal{M}_{t,u}} |L \mathscr{P}^{[1,N]} \Psi|^2 \, d \varpi \lesssim \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' $ and $ \displaystyle \int_{\mathcal{M}_{t,u}} |L \Psi|^2 \, d \varpi \lesssim \mathring{\upepsilon}^2 + \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' $, which are simple consequences of Lemma~\ref{L:COERCIVENESSOFCONTROLLING}. To bound the integral of the third product $ |\mathscr{P}^{\leq N} \vec{W}| |\mathscr{Z}_*^{[1,N];1} \upgamma| $ and the fourth product $ |\mathscr{P}^{\leq N} \vec{W}| |\mathscr{P}_*^{[1,N]} \underline{\upgamma}| $, we use arguments similar to the ones we used above as well as the estimates \eqref{E:TANGENGITALEIKONALINTERMSOFCONTROLLING} and \eqref{E:ONERADIALEIKONALINTERMSOFCONTROLLING}, which, when combined with the estimate \eqref{E:LESSSINGULARTERMSMPOINTNINEINTEGRALBOUND} and the fact that $\totTanmax{[1,N]}$ is increasing in its arguments, yield the following spacetime integral bounds: \begin{align} \label{E:LASTREPHARMLESSERRORINTEGRAL} & \int_{\mathcal{M}_{t,u}} \left| \mathscr{Z}_*^{[1,N];1} L_{(Small)}^i \right|^2 \, d \varpi + \int_{\mathcal{M}_{t,u}} \left| \mathscr{P}_*^{[1,N]} \upmu \right|^2 \, d \varpi \\ & \lesssim \int_{t'=0}^t \left\lbrace \int_{s=0}^{t'} \frac{\totTanmax{[1,N]}^{1/2}(s,u)}{\upmu_{\star}^{1/2}(s,u)} \, ds \right\rbrace^2 \, dt' + \mathring{\upepsilon}^2 \notag \\ & \lesssim \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' + \mathring{\upepsilon}^2. \notag \end{align} Finally, we observe that $\mbox{\upshape RHS~\eqref{E:LASTREPHARMLESSERRORINTEGRAL}} \lesssim \mbox{\upshape RHS~\eqref{E:STANDARDPSISPACETIMEINTEGRALS}} $ as desired. This completes our proof of the representative estimates.
\end{proof} \subsubsection{Estimates for the error integrals \underline{not} depending on the semilinear inhomogeneous terms} \label{SSS:ENERGYESTIMATESFORERRORTERMSNOTDEPENDINGONINHOMOGENEOUS} In the next lemma, we derive bounds for the error integrals whose integrands we pointwise bounded in Lemma~\ref{L:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND}. \begin{lemma}[\textbf{Estimates for the error integrals not depending on the semilinear inhomogeneous terms}] \label{L:ENERGYESTIMATESFORSLOWWAVEERRORTERMSNOINHOMOGENEOUS} Assume that $1 \leq N \leq 18$. Let $\varsigma > 0$ be a real number. We have the following estimate for the last term on RHS~\eqref{E:E0DIVID} (with $\mathscr{P}^N \Psi$ in the role of $f$ and without any absolute value taken on the left), where the implicit constants are independent of $\varsigma$: \begin{align} \label{E:MULTIPLIERVECTORFIELDERRORINTEGRALS} \sum_{i=1}^5 \int_{\mathcal{M}_{t,u}} \basicenergyerrorarg{T}{i}[\mathscr{P}^N \Psi] \, d \varpi & \lesssim \int_{t'=0}^t \frac{1}{\sqrt{T_{(Boot)} - t'}} \totTanmax{[1,N]}(t',u) \, dt' \\ & \ \ + (1 + \varsigma^{-1}) \int_{t'=0}^t \totTanmax{[1,N]}(t',u) \, dt' \notag \\ & \ \ + (1 + \varsigma^{-1}) \int_{u'=0}^u \totTanmax{[1,N]}(t,u') \, du' + \varsigma \coerciveTanspacetimemax{[1,N]}(t,u). \notag \end{align} Moreover, for $N \leq 18$, we have the following estimate for the term on the next-to-last line of RHS~\eqref{E:SLOWENERGYID} (with $\mathscr{P}^N \vec{W}$ in the role of $\vec{V}$): \begin{align} \label{E:ENERGYESTIMATESFORSLOWWAVEERRORTERMSNOINHOMOGENEOUS} \int_{\mathcal{M}_{t,u}} \left| \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\mathscr{P}^N \vec{W}] \right| \, d \varpi & \leq C \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du'. \end{align} \end{lemma} \begin{proof} See Subsect.\ \ref{SS:OFTENUSEDESTIMATES} for some comments on the analysis. 
To prove \eqref{E:MULTIPLIERVECTORFIELDERRORINTEGRALS}, we integrate \eqref{E:MULTIPLIERVECTORFIEDERRORTERMPOINTWISEBOUND} (with $\mathscr{P}^N \Psi$ in the role of $f$) over $\mathcal{M}_{t,u}$ and use Lemma~\ref{L:COERCIVENESSOFCONTROLLING}. To prove \eqref{E:ENERGYESTIMATESFORSLOWWAVEERRORTERMSNOINHOMOGENEOUS}, we use the pointwise bound \eqref{E:SIMPLEPOINTWISEBOUNDSLOWWAVEBASICENERGYINTEGRAND} (with $\mathscr{P}^N \vec{W}$ in the role of $\vec{V}$) and the coerciveness estimate \eqref{E:SLOWCOERCIVENESSOFCONTROLLING} to deduce that \begin{align} \mbox{{\upshape LHS}~\eqref{E:ENERGYESTIMATESFORSLOWWAVEERRORTERMSNOINHOMOGENEOUS}} \lesssim \int_{u'=0}^u \| \mathscr{P}^N \vec{W} \|_{\mathcal{P}_{u'}^t}^2 \, du' \lesssim \int_{u'=0}^u \slowtotTanmax{\leq N}(t,u') \, du' \end{align} as desired. \end{proof} \subsection{Proof of Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES}} \label{SS:PROOFOFPROPTANGENTIALENERGYINTEGRALINEQUALITIES} Armed with the estimates of the previous subsections, we are now ready to prove Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES}. \noindent \textbf{Proof of \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}}: Assume that $1 \leq N \leq 18$ and let $\mathscr{P}^N$ be an $N^{th}$-order $\mathcal{P}_u$-tangential vectorfield operator. 
Using \eqref{E:E0DIVID} with $\mathscr{P}^N \Psi$ in the role of $f$ and appealing to definition \eqref{E:COERCIVESPACETIMEDEF}, we deduce the following identity: \begin{align} \label{E:E0DIVIDMAINESTIMATES} \mathbb{E}_{(Fast)}[\mathscr{P}^N \Psi](t,u) & + \mathbb{F}_{(Fast)}[\mathscr{P}^N \Psi](t,u) + \mathbb{K}[\mathscr{P}^N \Psi](t,u) \\ & = \mathbb{E}_{(Fast)}[\mathscr{P}^N \Psi](0,u) + \mathbb{E}_{(Fast)}[\mathscr{P}^N \Psi](t,0) \notag \\ & \ \ - \int_{\mathcal{M}_{t,u}} \left\lbrace (1 + 2 \upmu) (L \mathscr{P}^N \Psi) + 2 \breve{X} \mathscr{P}^N \Psi \right\rbrace \upmu \square_g(\mathscr{P}^N \Psi) \, d \varpi \notag \\ & \ \ + \sum_{i=1}^5 \int_{\mathcal{M}_{t,u}} \basicenergyerrorarg{T}{i}[\mathscr{P}^N \Psi] \, d \varpi. \notag \end{align} Similarly, if $N \leq 18$, then by \eqref{E:SLOWENERGYID} with $\mathscr{P}^N \vec{W}$ in the role of $\vec{V}$, we have \begin{align} \label{E:SLOWENERGYIDMAINESTIMATES} & \mathbb{E}_{(Slow)}[\mathscr{P}^N \vec{W}](t,u) + \mathbb{F}_{(Slow)}[\mathscr{P}^N \vec{W}](t,u) \\ & = \mathbb{E}_{(Slow)}[\mathscr{P}^N \vec{W}](0,u) + \mathbb{F}_{(Slow)}[\mathscr{P}^N \vec{W}](t,0) \notag \\ & \ \ + \int_{\mathcal{M}_{t,u}} \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \mathfrak{W}[\mathscr{P}^N \vec{W}] \, d \varpi \notag \\ & \ \ + \int_{\mathcal{M}_{t,u}} \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \left\lbrace 4 (h^{-1})^{\alpha 0} (h^{-1})^{\beta 0} (\mathscr{P}^N w_{\alpha}) F_{\beta} + 2 (h^{-1})^{\alpha \beta} (\mathscr{P}^N w_{\alpha}) F_{\beta} \right\rbrace \, d \varpi \notag \\ & \ \ + \int_{\mathcal{M}_{t,u}} \left\lbrace 1 + \upgamma \mathrm{f}(\upgamma) \right\rbrace \left\lbrace 2 (h^{-1})^{\alpha a} (h^{-1})^{b 0} (\mathscr{P}^N w_{\alpha}) F_{ab} + 2 (\mathscr{P}^N w) F \right\rbrace \, d \varpi, \notag \end{align} where $F_0$, $F_a$, $F$, and $F_{ab}$ respectively denote the terms $Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}$ on RHSs 
\eqref{E:SLOWTIMECOMMUTED}-\eqref{E:SYMMETRYOFMIXEDPARTIALSCOMMUTED} and/or \eqref{E:SLOWTIMENOTCOMMUTED}. We will show that \textbf{i)} when $1 \leq N \leq 18$, RHS~\eqref{E:E0DIVIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}; \textbf{ii)} when $1 \leq N \leq 18$, RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}; and \textbf{iii)} when $N=0$, RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} but with $N=1$ in \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} (that is, the order $0$ energies for $\vec{W}$ are controlled by RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} with $N=1$). Then, in view of Defs.~\ref{D:MAINCOERCIVEQUANT} and \ref{D:COERCIVEINTEGRAL}, we take the appropriate maxima and suprema over these estimates, thereby arriving at the desired estimate \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. We start by showing that when $1 \leq N \leq 18$, we have that RHS~\eqref{E:E0DIVIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. The vast majority of the terms on RHS~\eqref{E:E0DIVIDMAINESTIMATES} were suitably bounded by $\leq \mbox{\upshape RHS}~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}$ in \cite{jSgHjLwW2016}*{Section 14.8}. In particular, the terms on the first line of RHS~\eqref{E:E0DIVIDMAINESTIMATES} were bounded by $\lesssim \mathring{\upepsilon}^2$ in Lemma~\ref{L:INITIALSIZEOFL2CONTROLLING}, while the last integral on RHS~\eqref{E:E0DIVIDMAINESTIMATES} was bounded via the estimate \eqref{E:MULTIPLIERVECTORFIELDERRORINTEGRALS}; for these terms, these are precisely the same estimates proved in \cite{jSgHjLwW2016}. We now explain the origin of the terms that are new compared to \cite{jSgHjLwW2016} and how to bound them.
We stress that \emph{all of the new terms are non-borderline} in the sense that they do not contribute to the ``boxed-constant-involving'' terms on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, which drive the blowup-rate of the top-order energies. The new terms are all found within the factor $\upmu \square_g(\mathscr{P}^N \Psi)$ in the first error integral $ - \int_{\mathcal{M}_{t,u}} \left\lbrace (1 + 2 \upmu) (L \mathscr{P}^N \Psi) + 2 \breve{X} \mathscr{P}^N \Psi \right\rbrace \upmu \square_g(\mathscr{P}^N \Psi) \, d \varpi $ on RHS~\eqref{E:E0DIVIDMAINESTIMATES}. To bound this error integral, we first use (depending on the structure of $\mathscr{P}^N$) one of equations \eqref{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS}-\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS} or \eqref{E:HARMLESSORDERNCOMMUTATORS} to algebraically substitute for $\upmu \square_g(\mathscr{P}^N \Psi)$. The error integrals generated by the $Harmless^{[1,N]}$ terms were suitably bounded in \cite{jSgHjLwW2016}*{Section 14.8} (alternatively, see Lemma~\ref{L:STANDARDPSISPACETIMEINTEGRALS}). To bound the error integrals generated by the terms $Harmless_{(Slow)}^{\leq N}$, we use Lemma~\ref{L:STANDARDPSISPACETIMEINTEGRALS}. It remains for us to bound the error integrals generated by the three products on RHSs \eqref{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS}-\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS} that are not of the form $Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}$. We proceed on a case by case basis, depending on the structure of the commutator vectorfield string $\mathscr{P}^N$. 
In the case $\mathscr{P}^N = Y^N$, we must bound the difficult error integral \begin{align} \label{E:RADDIFFICULTERRORINTEGRAL} - 2 \int_{\mathcal{M}_{t,u}} (\breve{X} Y^N \Psi) (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \, d \varpi, \end{align} which is generated by the term $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ on RHS~\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS}, by $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. To this end, we first use Cauchy--Schwarz and \eqref{E:COERCIVENESSOFCONTROLLING} to bound the magnitude of \eqref{E:RADDIFFICULTERRORINTEGRAL} by \begin{align} \label{E:FIRSTSTEPDIFFICULTINTEGRALBOUND} \leq 2 \int_{t' = 0}^t \totTanmax{[1,N]}^{1/2}(t',u) \left\| (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_{t'}^u)} \, dt'. \end{align} We now substitute the estimate \eqref{E:DIFFICULTTERML2BOUND} (with $t$ replaced by $t'$) for the second integrand factor on RHS~\eqref{E:FIRSTSTEPDIFFICULTINTEGRALBOUND}. In \cite{jSgHjLwW2016}*{Section 14.8}, all of the corresponding terms that arise from this substitution were bounded by $\leq \mbox{\upshape RHS}~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}$ except for the term generated by the one product on RHS~\eqref{E:DIFFICULTTERML2BOUND} involving $\slowtotTanmax{\leq N}$ (that is, the next-to-last product on RHS~\eqref{E:DIFFICULTTERML2BOUND}). Upon substituting this remaining product into \eqref{E:FIRSTSTEPDIFFICULTINTEGRALBOUND}, we generate the following error term, which is new compared to the terms appearing in \cite{jSgHjLwW2016}*{Section 14.8}: \begin{align} \label{E:MAINDIFFICULTENERGYESTIMETERMSLOWWAVECONTRIBUTION} C \int_{t' = 0}^t \frac{1}{\upmu_{\star}(t',u)} \totTanmax{[1,N]}^{1/2}(t',u) \int_{s=0}^{t'} \frac{1} {\upmu_{\star}^{1/2}(s,u)} \slowtotTanmax{\leq N}^{1/2}(s,u) \, ds \, dt'. 
\end{align} We now observe that the term \eqref{E:MAINDIFFICULTENERGYESTIMETERMSLOWWAVECONTRIBUTION} is $\leq$ the third-from-last product on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} as desired. We now clarify that in \cite{jSgHjLwW2016}, the coefficient of the analog of the third product on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} was stated as $\boxed{8.1}$ rather than $\boxed{8}$. This is because the coefficient of the second product on RHS~\eqref{E:DIFFICULTTERML2BOUND} is $\boxed{4}$, which is different than \cite{jSgHjLwW2016}*{Lemma~14.8}, where the analogous coefficient was stated as $\boxed{4.05}$ (for reasons that we described in our proof outline of Lemma~\ref{L:DIFFICULTTERML2BOUND}); this is a minor remark that has no substantial bearing on our analysis. Next, we note that the error integral \[ - \int_{\mathcal{M}_{t,u}} (1 + 2 \upmu) (L Y^N \Psi) (\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \, d \varpi, \] which is also generated by the term $(\breve{X} \Psi) Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$ on RHS~\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS}, can be bounded by $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} by using arguments nearly identical to the ones in \cite{jSgHjLwW2016}*{Section 14.8}. In particular, the arguments are independent of $\vec{W}$ and therefore do not involve the slow wave controlling quantities $\slowtotTanmax{M}$. We do not repeat the proof here since it is exactly the same and since it relies on a lengthy integration by parts argument in which one trades the factor of $L$ from $L Y^N \Psi$ for a factor of $Y$ from $Y^N {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi$. 
We note that the integration by parts generates some of the ``boxed-constant-involving'' terms on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, including the ``boundary term'' $ \displaystyle \boxed{\left\lbrace 2 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace} \frac{1}{\upmu_{\star}^{1/2}(t,u)} \totTanmax{[1,N]}^{1/2}(t,u) \left\| L \upmu \right\|_{L^{\infty}(\Sigmaminus{t}{t}{u})} \cdots $ on the fourth line of RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} and a contribution to the term $ \displaystyle \boxed{\left\lbrace 6 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) \right\rbrace} \int_{t'=0}^t \frac{\left\| [L \upmu]_- \right\|_{L^{\infty}(\Sigma_{t'}^u)}} {\upmu_{\star}(t',u)} \totTanmax{[1,N]}(t',u) \, dt' $ on the second line of RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. We clarify that the factors of $ \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) $ appearing in the above two terms arise because in invoking the arguments given in \cite{jSgHjLwW2016}*{Section 14.8}, one must at one point derive pointwise control of $| {\Delta \mkern-12mu / \, } Y^{N-1} \Psi|$ in terms of $| {{d \mkern-9mu /} } Y^N \Psi| + \cdots$, where $\cdots$ denotes simpler error terms; by \eqref{E:ANGDERIVATIVESINTERMSOFTANGENTIALCOMMUTATOR} and Cor.\ \ref{C:SQRTEPSILONTOCEPSILON}, we in fact have the bound $| {\Delta \mkern-12mu / \, } Y^{N-1} \Psi| \leq \lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\varepsilon) \rbrace | {{d \mkern-9mu /} } Y^N \Psi| + \cdots $, which is the source of the factor $ \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$ in the above estimates. 
We also clarify that the factor of $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$ does not appear in the arguments of \cite{jSgHjLwW2016} since the parameter $\mathring{\upalpha}$ was not featured in that work (see also Remark~\ref{R:NEWPARAMETER}). To bound the remaining two error integrals generated by RHS~\eqref{E:GEOANGANGISTHEFIRSTCOMMUTATORIMPORTANTTERMS}, namely \[ - 2 \int_{\mathcal{M}_{t,u}} (\breve{X} Y^N \Psi) \uprho (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi \] and \[ - \int_{\mathcal{M}_{t,u}} (1 + 2 \upmu) (L Y^N \Psi) \uprho (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi, \] by $\leq \mbox{\upshape RHS}~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}$, we simply use Lemma~\ref{L:LESSDEGENERATEENERGYESTIMATEINTEGRALS}. To complete the proof that RHS~\eqref{E:E0DIVIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, it remains only for us to bound the difficult error integrals that arise in the case $\mathscr{P}^N = Y^{N-1} L$, which are generated by the terms on RHS~\eqref{E:LISTHEFIRSTCOMMUTATORIMPORTANTTERMS} that are not of the form $Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N}$. Specifically, we encounter the error integrals \[ - 2 \int_{\mathcal{M}_{t,u}} (\breve{X} Y^{N-1} L \Psi) (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi, \] and \[ - \int_{\mathcal{M}_{t,u}} (1 + 2 \upmu) (L Y^{N-1} L \Psi) (\angdiffuparg{\#} \Psi) \cdot (\upmu {{d \mkern-9mu /} } Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi) \, d \varpi, \] which we bounded in Lemma~\ref{L:LESSDEGENERATEENERGYESTIMATEINTEGRALS}. We have thus shown that RHS~\eqref{E:E0DIVIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. 
To complete the proof of \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, it remains only for us to show that \textbf{ii)} when $1 \leq N \leq 18$, RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} and \textbf{iii)} when $N=0$, RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES} $\leq$ RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} but with $N=1$ on RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} (that is, the order $0$ energies for $\vec{W}$ are controlled by RHS~\eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} with $N=1$). We start with case \textbf{ii)}. To proceed, we first use Lemma~\ref{L:INITIALSIZEOFL2CONTROLLING} to deduce that $ \mathbb{E}_{(Slow)}[\mathscr{P}^N \vec{W}](0,u) + \mathbb{F}_{(Slow)}[\mathscr{P}^N \vec{W}](t,0) \lesssim \mathring{\upepsilon}^2 $ as desired. Next, we note that we suitably bounded the first integral on RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES} in Lemma~\ref{L:ENERGYESTIMATESFORSLOWWAVEERRORTERMSNOINHOMOGENEOUS}. To bound the integrals on the last two lines of RHS~\eqref{E:SLOWENERGYIDMAINESTIMATES}, we recall that, as we mentioned above, the terms $F_0$, $F_a$, $F$, and $F_{ab}$ in the integrands are of the form $ Harmless^{[1,N]} + Harmless_{(Slow)}^{\leq N} $. Moreover, the $L^{\infty}$ estimates of Prop.~\ref{P:IMPROVEMENTOFAUX} imply that $\left| 1 + \upgamma \mathrm{f}(\upgamma) \right| \lesssim 1$ and, since $(h^{-1})^{\alpha \beta} = \mathrm{f}(\upgamma,\vec{W})$, that $\left| (h^{-1})^{\alpha \beta} \right| \lesssim 1$. Thus, the desired bounds follow from Lemma~\ref{L:STANDARDPSISPACETIMEINTEGRALS}. Finally, we note that the argument is identical in case \textbf{iii)}, which completes the proof of \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. 
\medskip \noindent \textbf{Proof of \eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}} The proof of \eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} is similar to that of \eqref{E:TOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} but involves one key change. To proceed, we let $\mathscr{P}^{N-1}$ be an $(N-1)^{st}$-order $\mathcal{P}_u$-tangential vectorfield operator, where $2 \leq N \leq 18$. We then argue as above, starting with the identity \eqref{E:E0DIVIDMAINESTIMATES} with $\mathscr{P}^{N-1} \Psi$ in place of $\mathscr{P}^N \Psi$ and the identity \eqref{E:SLOWENERGYIDMAINESTIMATES} with $\mathscr{P}^{N-1} \vec{W}$ in place of $\mathscr{P}^N \vec{W}$. We bound almost all error integrals in exactly the same way as before, the key change being that we bound the two most difficult error integrals in a different way. Specifically, the two difficult integrals are \[ - 2 \int_{\mathcal{M}_{t,u}} (\breve{X} Y^{N-1} \Psi) (\breve{X} \Psi) Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \, d \varpi, \] which is an analog of \eqref{E:RADDIFFICULTERRORINTEGRAL}, and \[ - \int_{\mathcal{M}_{t,u}} (1 + 2 \upmu) (L Y^{N-1} \Psi) (\breve{X} \Psi) Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \, d \varpi, \] whose top-order analog we did not discuss in detail above since it was treated in \cite{jSgHjLwW2016}*{Section 14.8} using arguments that are independent of the slow wave variable $\vec{W}$. Both of the above error integrals were shown in \cite{jSgHjLwW2016}*{Section 14.8} to be bounded by $\leq$ RHS~\eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES} using a simple argument that does not involve the slow wave controlling quantities $\slowtotTanmax{M}$; the key point is to use the derivative-losing estimate \eqref{E:TANGENGITALEIKONALINTERMSOFCONTROLLING} to control $\| Y^{N-1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \|_{L^2(\Sigma_t^u)}$. 
We remark that the arguments in \cite{jSgHjLwW2016}*{Section 14.8} for bounding these two error integrals lead to the presence of the $\totTanmax{[1,N]}^{1/2}(s,u)$-involving double time integral on the second line of RHS~\eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}. That is, these estimates lose one derivative (which is permissible below top order) and are therefore coupled to the next-highest-order energy estimates; the gain is that the resulting integrals are less singular with respect to $\upmu_{\star}^{-1}$. Finally, we note that the proofs of the energy estimates for $\mathscr{P}^{N-1} \vec{W}$ in the below-top-order cases $1 \leq N \leq 18$ are exactly the same as in the top-order case treated above. We have therefore proved \eqref{E:BELOWTOPORDERTANGENTIALENERGYINTEGRALINEQUALITIES}, which finishes the proof of Prop.~\ref{P:TANGENTIALENERGYINTEGRALINEQUALITIES}. $\hfill \qed$ \section{The main stable shock formation theorem} \label{S:MAINTHEOREM} \setcounter{equation}{0} We now prove our main result. \begin{theorem}[\textbf{Stable shock formation}] \label{T:MAINTHEOREM} Consider a solution $\Psi,\vec{W}$ to the system \eqref{E:FASTWAVE} + \eqref{E:SLOW0EVOLUTION}-\eqref{E:SYMMETRYOFMIXEDPARTIALS} with nonlinearities verifying the assumptions stated in Subsubsects.\ \ref{SSS:STATEMENTOFEQUATIONS}-\ref{SSS:WAVESPEEDASSUMPTIONS}. Let $\mathring{\upalpha} > 0$, $\mathring{\upepsilon} \geq 0$, $\mathring{\updelta} > 0$, and $\mathring{\updelta}_* > 0$ be the data-size parameters introduced in Sect.\ \ref{S:NORMSANDBOOTSTRAP}. For each $U_0 \in [0,1]$, let $T_{(Lifespan);U_0}$ be the classical lifespan of $\Psi,\vec{W}$ with respect to the \underline{Cartesian} coordinates $\lbrace x^{\alpha} \rbrace_{\alpha = 0,1,2}$ in the region that is completely determined by the non-trivial data lying in $\Sigma_0^{U_0}$ and the small data given along $\mathcal{P}_0^{2 \mathring{\updelta}_*^{-1}}$ (see Figure~\ref{F:REGION} on pg.~\pageref{F:REGION}). 
If $\mathring{\upalpha}$ is sufficiently small relative to $1$, and if $\mathring{\upepsilon}$ is sufficiently small relative to 1, small relative to $\mathring{\updelta}^{-1}$, and small relative to $\mathring{\updelta}_*$ (in the sense explained in Subsect.\ \ref{SS:SMALLNESSASSUMPTIONS}), then the following conclusions hold, where all constants can be chosen to be independent of $U_0$. \medskip \noindent \underline{\textbf{Dichotomy of possibilities.}} One of the following mutually disjoint possibilities must occur, where $\upmu_{\star}(t,u) = \min \lbrace 1, \min_{\Sigma_t^u} \upmu \rbrace$. \begin{enumerate} \renewcommand{\labelenumi}{\textbf{\Roman{enumi})}} \item $T_{(Lifespan);U_0} > 2 \mathring{\updelta}_*^{-1}$. In particular, the solution exists classically on the spacetime region $\mbox{\upshape cl} \mathcal{M}_{2 \mathring{\updelta}_*^{-1},U_0}$, where $\mbox{\upshape cl}$ denotes closure. Furthermore, $\inf \lbrace \upmu_{\star}(s,U_0) \ | \ s \in [0,2 \mathring{\updelta}_*^{-1}] \rbrace > 0$. \item $T_{(Lifespan);U_0} \leq 2 \mathring{\updelta}_*^{-1}$, and \begin{align} \label{E:MAINTHEOREMLIFESPANCRITERION} T_{(Lifespan);U_0} = \sup \left\lbrace t \in [0, 2 \mathring{\updelta}_*^{-1}) \ | \ \inf \lbrace \upmu_{\star}(s,U_0) \ | \ s \in [0,t) \rbrace > 0 \right\rbrace. \end{align} \end{enumerate} In addition, case $\textbf{II)}$ occurs when $U_0 = 1$. In this case, we have\footnote{See Subsect.\ \ref{SS:NOTATIONANDINDEXCONVENTIONS} regarding our use of the notation $\mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha})$.} \begin{align} \label{E:CLASSICALLIFESPANASYMPTOTICESTIMATE} T_{(Lifespan);1} = \left\lbrace 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon}) \right\rbrace \mathring{\updelta}_*^{-1}. 
\end{align} \medskip \noindent \underline{\textbf{What happens in Case I).}} In case $\textbf{I)}$, all of the bootstrap assumptions from Subsects.\ \ref{SS:SIZEOFTBOOT}-\ref{SS:PSIBOOTSTRAP}, the estimates of Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}, and the energy estimates of Prop.~\ref{P:MAINAPRIORIENERGY} hold on $\mbox{\upshape cl} \mathcal{M}_{2 \mathring{\updelta}_*^{-1},U_0}$ with the factors of $\varepsilon$ on the RHSs replaced by $C \mathring{\upepsilon}$. Moreover, for $0 \leq M \leq 5$, the following estimates hold for $(t,u) \in [0,2 \mathring{\updelta}_*^{-1}] \times [0,U_0]$ (see Subsect.\ \ref{SS:STRINGSOFCOMMUTATIONVECTORFIELDS} regarding the vectorfield operator notation): \begin{subequations} \begin{align} \left\| \mathscr{P}_*^{[1,12]} \upmu \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{[1,12]} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{\leq 11} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon}, \label{E:LOWORDERTANGENTIALEIKONALL2MAINTHEOREM} \\ \left\| \mathscr{P}_*^{13 + M} \upmu \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{13 + M} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| \mathscr{P}^{12 + M} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon} \upmu_{\star}^{-(M+.4)}(t,u), \label{E:MIDORDERTANGENTIALEIKONALL2MAINTHEOREM} \\ \left\| L \mathscr{P}^{18} \upmu \right\|_{L^2(\Sigma_t^u)}, \, \left\| L \mathscr{Z}^{18;1} L_{(Small)}^i \right\|_{L^2(\Sigma_t^u)}, \, \left\| L \mathscr{Z}^{17;1} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon} \upmu_{\star}^{-6.4}(t,u), \label{E:HIGHORDERLUNITTANGENTIALEIKONALL2MAINTHEOREM} \\ \left\| \upmu Y^{18} {\mbox{\upshape{tr}}_{\mkern-2mu \gsphere}} \upchi \right\|_{L^2(\Sigma_t^u)} & \leq C \mathring{\upepsilon} \upmu_{\star}^{-5.9}(t,u). 
\label{E:HIGHORDERANGULARTRCHIL2MAINTHEOREM} \end{align} \end{subequations} \medskip \noindent \underline{\textbf{What happens in Case II).}} In case $\textbf{II)}$, all of the bootstrap assumptions from Subsects.\ \ref{SS:SIZEOFTBOOT}-\ref{SS:PSIBOOTSTRAP}, the estimates of Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}, and the energy estimates of Prop.~\ref{P:MAINAPRIORIENERGY} hold on $\mathcal{M}_{T_{(Lifespan);U_0},U_0}$ with the factors of $\varepsilon$ on the RHS replaced by $C \mathring{\upepsilon}$. Moreover, for $0 \leq M \leq 5$, the estimates \eqref{E:LOWORDERTANGENTIALEIKONALL2MAINTHEOREM}-\eqref{E:HIGHORDERANGULARTRCHIL2MAINTHEOREM} hold for $(t,u) \in [0,T_{(Lifespan);U_0}) \times [0,U_0]$. In addition, the scalar functions $\mathscr{Z}^{\leq 9;1} \Psi$, $\mathscr{Z}^{\leq 3;2} \Psi$, $\breve{X} \breve{X} \breve{X} \Psi$, $\mathscr{Z}^{\leq 9;1} \vec{W}$, $\breve{X} \breve{X} \vec{W}$, $\mathscr{Z}^{\leq 9;1} L^i$, $\breve{X} \breve{X} L^i$, $\mathscr{P}^{\leq 9} \upmu$, $\mathscr{Z}^{\leq 3;1} \upmu$, and $\breve{X} \breve{X} \upmu$ extend to\footnote{In \cite{jSgHjLwW2016}*{Theorem~15.1}, it was stated that $\mathscr{Z}^{\leq 2;1} \upmu$ extend. In fact, the arguments given there imply that $\mathscr{Z}^{\leq 3;1} \upmu$ extend, as we have stated here. Moreover, we note that here we have stated that $\mathscr{Z}^{\leq 3;2} \Psi$ extend. This stands in contrast to \cite{jSgHjLwW2016}*{Theorem~15.1}, in which incorrect reasoning led to the conclusion that $\mathscr{Z}^{\leq 4;2} \Psi$ extend. The faulty reasoning started with the not-fully-justified estimate $\| L \mathscr{Z}^{\leq 4;2} \Psi \|_{L^{\infty}(\Sigma_t^u)} \leq C \varepsilon$, which was stated in \cite{jSgHjLwW2016}*{Proposition~9.2}. 
The proof given there, however, works only for a subset of operators of type $L \mathscr{Z}^{\leq 4;2}$ in which a factor of $\breve{X}$ acts first, for the same reasons that led to our proof of \eqref{E:WAVEEQNONCETRANSVERSALLYCOMMUTEDTRANSPORTPOINTOFVIEW}. These are minor points that have no substantial bearing on the main results.} $\Sigma_{T_{(Lifespan);U_0}}^{U_0}$ as functions of the geometric coordinates $(t,u,\vartheta)$ belonging to the space $C\left([0,T_{(Lifespan);U_0}],L^{\infty}([0,U_0] \times \mathbb{T}) \right)$. Furthermore, the Cartesian component functions $g_{\alpha \beta}(\Psi)$ verify the estimate $g_{\alpha \beta} = m_{\alpha \beta} + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon})$ (where $m_{\alpha \beta} = \mbox{\upshape diag}(-1,1,1)$ is the standard Minkowski metric) and have the same extension properties as $\Psi$ (in particular, the same $\mathscr{Z}$-derivatives of $g_{\alpha \beta}$ extend as elements of $C\left([0,T_{(Lifespan);U_0}],L^{\infty}([0,U_0] \times \mathbb{T}) \right)$). Moreover, let $\Sigma_{T_{(Lifespan);U_0}}^{U_0;(Blowup)}$ be the subset of $\Sigma_{T_{(Lifespan);U_0}}^{U_0}$ defined by \begin{align} \label{E:BLOWUPPOINTS} \Sigma_{T_{(Lifespan);U_0}}^{U_0;(Blowup)} := \left\lbrace (T_{(Lifespan);U_0},u,\vartheta) \ | \ \upmu(T_{(Lifespan);U_0},u,\vartheta) = 0 \right\rbrace. 
\end{align} Then for each point $(T_{(Lifespan);U_0},u,\vartheta) \in \Sigma_{T_{(Lifespan);U_0}}^{U_0;(Blowup)}$, there exists a past neighborhood\footnote{By a past neighborhood of $(T_{(Lifespan);U_0},u,\vartheta)$, we mean a set that is the intersection of the closed half-space $\lbrace (t,u',\vartheta') \in \mathbb{R} \times \mathbb{R} \times \mathbb{T} \ | \ t \leq T_{(Lifespan);U_0} \rbrace$ with a set containing $(T_{(Lifespan);U_0},u,\vartheta)$ that is open with respect to the standard topology corresponding to the geometric coordinates.} containing it such that the following lower bound holds in the neighborhood: \begin{align} \label{E:BLOWUPPOINTINFINITE} \left| X \Psi (t,u,\vartheta) \right| \geq \frac{\mathring{\updelta}_*}{4 |G_{L_{(Flat)} L_{(Flat)}}(\Psi = 0)|} \frac{1}{\upmu(t,u,\vartheta)}. \end{align} In \eqref{E:BLOWUPPOINTINFINITE}, $ \displaystyle \frac{\mathring{\updelta}_*}{4 \left|G_{L_{(Flat)} L_{(Flat)}}(\Psi = 0) \right|} $ is a \textbf{positive} data-dependent constant (see \eqref{E:NONVANISHINGNONLINEARCOEFFICIENT}), and the $\ell_{t,u}$-transversal vectorfield $X$ has near-Euclidean-unit length: $\delta_{ab} X^a X^b = 1 + \mathcal{O}_{{\mkern-1mu \scaleobj{.75}{\blacklozenge}}}(\mathring{\upalpha}) + \mathcal{O}(\mathring{\upepsilon})$. In particular, $X \Psi$ blows up like $1/\upmu$ at all points in $\Sigma_{T_{(Lifespan);U_0}}^{U_0;(Blowup)}$. Conversely, at all points in $(T_{(Lifespan);U_0},u,\vartheta) \in \Sigma_{T_{(Lifespan);U_0}}^{U_0} \backslash \Sigma_{T_{(Lifespan);U_0}}^{U_0;(Blowup)}$, we have \begin{align} \label{E:NONBLOWUPPOINTBOUND} \left| X \Psi (T_{(Lifespan);U_0},u,\vartheta) \right| < \infty. \end{align} \end{theorem} \begin{proof}[Discussion of proof] The proof of \cite{jSgHjLwW2016}*{Theorem~15.1} applies nearly verbatim, except for a few of the statements regarding $\vec{W}$. 
The main ingredients in the proof are the estimates of Props.~\ref{P:IMPROVEMENTOFAUX} and \ref{P:IMPROVEMENTOFHIGHERTRANSVERSALBOOTSTRAP}, the energy estimates of Prop.~\ref{P:MAINAPRIORIENERGY}, Cor.\ \ref{C:IMPROVEDFUNDAMENTALLINFTYBOOTSTRAPASSUMPTIONS}, and \eqref{E:MUSTARBOUNDSUISONE}. We clarify that \eqref{E:MUSTARBOUNDSUISONE} easily yields that $\upmu_{\star}(t,1)$ vanishes for the first time when $t$ is equal to RHS~\eqref{E:CLASSICALLIFESPANASYMPTOTICESTIMATE}, that is, that when $U_0 = 1$, a shock forms at a time $T_{(Lifespan);1}$ verifying the estimate \eqref{E:CLASSICALLIFESPANASYMPTOTICESTIMATE}. Strictly speaking, in the proof of \cite{jSgHjLwW2016}*{Theorem~15.1}, a few additional estimates, beyond the main ingredients we just mentioned, were needed to complete the proof. For example, one needs estimates for all of the components of the change of variables map $\Upsilon$, including the scalar-valued functions $\Xi^i$ on RHS~\eqref{E:CHOV}. However, in \cite{jSgHjLwW2016}, the needed estimates followed as straightforward consequences of the main ingredients mentioned above, and we do not even bother to state them here since their proofs are exactly the same as in \cite{jSgHjLwW2016}; in particular the proofs of the omitted estimates do not involve $\vec{W}$. For the sake of thoroughness, we now prove some facts concerning $\vec{W}$ that are not part of the statement or proof of \cite{jSgHjLwW2016}*{Theorem~15.1}. 
Specifically, the facts that $\mathscr{Z}^{\leq 9;1} \vec{W}$ and $\breve{X} \breve{X} \vec{W}$ extend to $\Sigma_{T_{(Lifespan);U_0}}^{U_0}$ as elements of the function space $C\left([0,T_{(Lifespan);U_0}],L^{\infty}([0,U_0] \times \mathbb{T}) \right)$ are simple consequences of the fundamental theorem of calculus, the completeness of the space $L^{\infty}([0,U_0] \times \mathbb{T})$, and the uniform bounds $ \displaystyle \left\| L \mathscr{Z}^{\leq 9;1} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \mathring{\upepsilon} $ and $ \left\| L \breve{X} \breve{X} \vec{W} \right\|_{L^{\infty}(\Sigma_t^u)} \lesssim \mathring{\upepsilon} $, which are special cases of the estimates \eqref{E:SLOWWAVETRANSVERSALTANGENT} and \eqref{E:UPTOTWOTRANSVERSALDERIVATIVESOFSLOWINLINFINITY} and Cor.\ \ref{C:IMPROVEDFUNDAMENTALLINFTYBOOTSTRAPASSUMPTIONS} (recall that $ \displaystyle L = \frac{\partial}{\partial t} $). \end{proof} \section*{Acknowledgments} The author would like to thank the anonymous referee for their useful comments, which helped him to clarify the exposition. \bibliographystyle{amsalpha}
\section{Introduction} The Shatashvili-Vafa $G_{2}$ algebra \cite{Shatashvili-Vafa95} is a superconformal vertex algebra with six generators $\{L,G,\Phi,K,X,M\}$. It is an extension of the $N=1$ superconformal algebra of central charge $c=21/2$ (formed by the super-partners $\{L,G\}$) by two fields $\Phi$ and $K$, primary of conformal weight $\frac{3}{2}$ and $2$ respectively, and their superpartners $X$ and $M$ (of conformal weight $2$ and $\frac{5}{2}$ respectively). Their OPEs can be found in Appendix \ref{App:AppendixA} in the language of lambda brackets of \cite{dandrea}. This superconformal algebra appeared in \cite{Shatashvili-Vafa95} as the chiral algebra associated to the sigma model with target a manifold with $G_{2}$ holonomy; its classical counterpart had been studied by Howe and Papadopoulos in \cite{Howe-Papadopoulos93}. In fact this algebra is a member of a two-parameter family $SW(\frac{3}{2}, \frac{3}{2}, 2)$ previously studied in \cite{Blumenhagen92}, where the author found the family of all superconformal algebras which are extensions of the super-Virasoro algebra, i.e., the $N=1$ superconformal algebra, by two primary supercurrents of conformal weights $\frac{3}{2}$ and $2$ respectively. It is a family of non-linear $W$-algebras parametrized by $(c,\varepsilon)$, where $c$ is the central charge and $\varepsilon$ the coupling constant. Its generators and relations are recalled in Appendix \ref{App:AppendixB}. The Shatashvili-Vafa $G_{2}$ algebra is a quotient of $SW(\frac{3}{2},\frac{3}{2}, 2)$ with $c=\frac{21}{2}$ and $\varepsilon=0$; in other words, it is the only member of this family which has central charge $c=\frac{21}{2}$ and contains the tri-critical Ising model as a subalgebra. It is precisely the fact that the Shatashvili-Vafa $G_{2}$ algebra appears as a $W$-algebra that motivated the authors to try to obtain this algebra as a quantum Hamiltonian reduction of some Lie superalgebra using the method developed in \cite{KacRoanWakimoto03}.
That $D(2,1;\alpha)$ is the right Lie superalgebra candidate to be used in the Hamiltonian reduction is known from scattered results in the physics literature. It was shown in \cite{Mallwitz95} that $SW(\frac{3}{2},\frac{3}{2}, 2)$ is the symmetry algebra of the quantized Toda theory corresponding to $D(2,1;\alpha)$ (a classical version of this result was worked out in \cite{Nohara-Mohri91} in the case $\alpha=1$ ($D(2,1;\alpha)=osp(4|2)$)), and it is also suggested by the well-established connection between the theory of nonlinear integrable equations and W-algebras, see for example \cite{FeiginFrenkel95}. A coset realization of the $SW(\frac{3}{2},\frac{3}{2},2)$ superconformal algebra, and therefore of the Shatashvili-Vafa algebra, can be found in \cite{Noyvert02}. It was shown in \cite{FeiginSemikhatov01} that the Hamiltonian reduction of $D(2,1;\alpha)$ coincides with this coset model (the authors, however, restrict their attention to the even part of the superalgebra). Some representations of the Shatashvili-Vafa $G_{2}$ superconformal algebra can be found in \cite{Noyvert02}, but the character formulae remain unknown. It was observed in \cite{BoerNaqviShomer} that in order to systematically study the representation theory and the character formula for this algebra one should construct the Shatashvili-Vafa algebra using the quantum Drinfeld-Sokolov reduction developed in \cite{KacWakimoto04,FrenkelKacWakimoto92}. This step is accomplished in this article. In section \ref{preliminaries} we review how to perform the quantum Hamiltonian reduction of a Lie superalgebra as introduced in \cite{KacRoanWakimoto03}. We recap some of the main theorems, as well as the conditions under which this Hamiltonian reduction process induces a free field realization.
In section \ref{quantum reduction of the algebra} we prove that the $SW(\frac{3}{2},\frac{3}{2},2)$ superconformal algebra is the quantum Hamiltonian reduction of the Lie superalgebra $D(2,1;\alpha)$, and obtain a free field realization of the $SW(\frac{3}{2},\frac{3}{2},2)$ algebra on a space of three free Bosons and three free Fermions. For the particular cases $\alpha\in\{1,-\frac{1}{2},-2\}$ we obtain the Shatashvili-Vafa $G_{2}$ algebra as a quantum Hamiltonian reduction of the Lie superalgebra $osp(4|2)$, together with the corresponding free field realizations. We summarize our main result as follows (see Theorem \ref{thm:main-thm} and its remark). \noindent \begin{thm} Let $\mathfrak{h}$ be the Cartan subalgebra of $D(2,1;\alpha)$. It is a three-dimensional vector space with a non-degenerate bilinear form $(,)$ given by the Cartan matrix. Consider $\Pi \mathfrak{h}^*$ the odd vector space ($\Pi$ denotes parity change) with its natural bilinear form $-(,)$. Let $V_{k}(\mathfrak{h}_{\mathrm{super}})$ be the super affine vertex algebra generated by three Bosons from $\mathfrak{h}$ and three Fermions from $\Pi \mathfrak{h}^{*}$, with lambda brackets \[ [h_\lambda h'] = k \lambda (h,h'), \qquad [\phi_\lambda \phi']= -(\phi, \phi'), \qquad h,h' \in \mathfrak{h}, \: \phi, \phi' \in \Pi \mathfrak{h}^*. \] \begin{enumerate} \item $SW(\frac{3}{2},\frac{3}{2},2)$ is a sub-vertex algebra of $V_{k}(\mathfrak{h}_{\mathrm{super}})$. The particular case of central charge $c=21/2$ and vanishing coupling constant corresponds to $k=-2/3$. In this case the fields $G$ and $\Phi$ are given by \eqref{expressionforPhi} and all other fields can be obtained from these two. \item The Shatashvili-Vafa $G_2$ superconformal algebra is a quotient of this algebra by an ideal generated in conformal weight $7/2$ \eqref{ideal}.
\item For each of the three odd simple roots $\alpha_i$ of $D(2,1;\alpha)$ there exists a module $M_i$ of $V_{k}(\mathfrak{h}_{\mathrm{super}})$ generated by a vector $|\alpha_i\rangle$ such that $h_n |\alpha_i\rangle = 0$ for $n > 0$, $h_0 |\alpha_i\rangle = (h, \alpha_i) |\alpha_i\rangle$ and $\phi_n |\alpha_i\rangle = 0$ for $n > 0$ (for all $h \in \mathfrak{h}$ and $\phi \in \Pi \mathfrak{h}^*$). Let $\Gamma_i(z)$ be the unique intertwiner of type $\binom{V_{k}(\mathfrak{h}_{\mathrm{super}})}{V_{k}(\mathfrak{h}_{\mathrm{super}}) \; M_i}$ and $Q_i \in \mathrm{Hom} (V_{k}(\mathfrak{h}_{\mathrm{super}}), M_i)$ its zero mode. Then for generic values of $(c,\varepsilon)$ we have $SW(\frac{3}{2},\frac{3}{2},2) = \cap_i \ker Q_i \subset V_{k}(\mathfrak{h}_{\mathrm{super}})$. \end{enumerate} \end{thm} In Section \ref{quantum reduction of the algebra} the reader can find a stronger version of this Theorem, as the generators for $SW(\frac{3}{2},\frac{3}{2},2)$ are found there for arbitrary values of the parameters $(c,\varepsilon)$. \section{Quantum reduction of Lie superalgebras}\label{preliminaries} In this section we recall the construction of the W-algebras $W_{k}(\mathfrak{g},x,f)$ introduced in \cite{KacRoanWakimoto03}. We follow the presentation in \cite{KacWakimoto04}. To construct the vertex algebra $W_{k}(\mathfrak{g},x,f)$ we need a quadruple $(\mathfrak{g},x,f,k)$ where $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus \mathfrak{g}_{\bar{1}}$ is a simple finite-dimensional Lie superalgebra with a non-degenerate even invariant supersymmetric bilinear form $(.|.)$, and $x,f\in \mathfrak{g}_{\bar{0}}$ are such that $ad\;x$ is diagonalizable on $\mathfrak{g}$ with half-integer eigenvalues, $[x,f]=-f$, the eigenvalues of $ad\;x$ on the centralizer $\mathfrak{g}^{f}$ of $f$ in $\mathfrak{g}$ are non-positive, and $k\in\mathbb{C}$.
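Before recalling the general construction, here is a toy illustration of these requirements (for orientation only; it is not used in the sequel): take $\mathfrak{g}=\mathfrak{sl}_{2}$ with standard basis $\{e,h,\bar{f}\}$, $[h,e]=2e$, $[h,\bar{f}]=-2\bar{f}$, $[e,\bar{f}]=h$, and set $x=\frac{h}{2}$, $f=\frac{1}{2}\bar{f}$.

```latex
% Toy example of the datum (g,x,f) for g = sl(2):
\[
[x,e]=e, \qquad [x,f]=-f, \qquad [e,f]=x,
\]
% so ad x is diagonalizable with eigenvalues 1, 0, -1 and
\[
\mathfrak{g}_{1}=\mathbb{C}e, \qquad
\mathfrak{g}_{0}=\mathbb{C}x, \qquad
\mathfrak{g}_{-1}=\mathbb{C}f, \qquad
\mathfrak{g}^{f}=\mathbb{C}f,
\]
% and ad x acts on the centralizer g^f with the single
% eigenvalue -1 <= 0, as required.
```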
We recall that a bilinear form $(.|.)$ on $\mathfrak{g}$ is called even if $(\mathfrak{g}_{\bar{0}}|\mathfrak{g}_{\bar{1}})=0$, supersymmetric if $(.|.)$ is symmetric (resp.\ skew-symmetric) on $\mathfrak{g}_{\bar{0}}$ (resp.\ $\mathfrak{g}_{\bar{1}}$), and invariant if $([a,b]|c)=(a|[b,c])$ for all $a,b,c\in \mathfrak{g}$. A pair $(x,f)$ satisfying the above properties can be obtained when $x,f$ are part of an $\mathfrak{sl}_{2}$-triple, i.e., $[x,e]=e$, $[x,f]=-f$ and $[e,f]=x$. As this will be the case in the quantum reduction performed in section \ref{quantum reduction of the algebra}, we assume for the rest of this section that we are working with such a pair. Let $\mathfrak{g}=\oplus_{j\in\frac{1}{2}\mathbb{Z}}\mathfrak{g}_{j}$ be the eigenspace decomposition with respect to $ad\;x$. Denote \begin{equation} \mathfrak{g}_{+}=\bigoplus_{j>0}\mathfrak{g}_{j},\;\;\;\; \mathfrak{g}_{-}=\bigoplus_{j<0}\mathfrak{g}_{j}, \;\;\;\; \mathfrak{g}_{\le}=\mathfrak{g}_{0}\bigoplus \mathfrak{g}_{-}.\nonumber \end{equation} Let $V_{k}(\mathfrak{g})$ denote the affine vertex algebra of level $k$ associated to $\mathfrak{g}$. Denote by $F(A)$ the vertex algebra of free superfermions associated to a vector superspace $A$ with an even skew-supersymmetric non-degenerate bilinear form $\left<.|.\right>$, i.e., the $\lambda$-bracket is given by $[\varphi_{\lambda}\psi]=\left<\varphi|\psi\right>$, $\varphi, \psi\in A$. \sloppy On the vector superspace $\mathfrak{g}_{1/2}$ the element $f$ defines an even skew-supersymmetric non-degenerate bilinear form $\left<.|.\right>_{ne}$ by the formula: \fussy \begin{equation} \left<a|b\right>_{ne}=(f|[a,b]).\nonumber \end{equation} The associated vertex algebra $F(\mathfrak{g}_{1/2})$ is called the vertex algebra of neutral free superfermions.
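For orientation, we record the standard dictionary between this $\lambda$-bracket and mode relations (in the Neveu-Schwarz sector, where the modes of a weight-$\frac{1}{2}$ field are indexed by half-integers):

```latex
\[
\varphi(z)=\sum_{r\in \frac{1}{2}+\mathbb{Z}} \varphi_{r}\, z^{-r-\frac{1}{2}},
\qquad
[\varphi_{\lambda}\psi]=\left<\varphi|\psi\right>
\;\Longleftrightarrow\;
\{\varphi_{r},\psi_{s}\}=\left<\varphi|\psi\right>\delta_{r+s,0},
\]
```

so $F(A)$ is the usual free fermion (Clifford) vertex algebra attached to the pair $(A,\left<.|.\right>)$.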
Similarly, on the vector superspace $\Pi\mathfrak{g}_{+}\oplus\Pi\mathfrak{g}_{+}^{*}$ (where $\Pi$ denotes parity reversal), define an even skew-supersymmetric non-degenerate bilinear form $\left<.|.\right>_{ch}$ by: \begin{equation} \left<\Pi\mathfrak{g}_{+}|\Pi\mathfrak{g}_{+}\right>_{ch}=0=\left<\Pi\mathfrak{g}_{+}^*|\Pi\mathfrak{g}_{+}^*\right>_{ch},\nonumber \end{equation} \begin{equation} \left<a|b^*\right>_{ch}=-(-1)^{p(a)p(b^*)}\left<b^*|a\right>_{ch}=b^*(a), \;\;\; a\in\Pi\mathfrak{g}_{+}, b^*\in \Pi\mathfrak{g}_{+}^*, \nonumber \end{equation} where $p(a)$ denotes the parity of the element $a$. The associated vertex algebra $F(\Pi\mathfrak{g}_{+}\oplus\Pi\mathfrak{g}_{+}^{*})$ is called the vertex algebra of charged free superfermions. This vertex algebra carries an extra $\mathbb{Z}$-grading by charge, obtained by assigning charge $\varphi=1$ and charge $\varphi^*=-1$, $\varphi\in\Pi\mathfrak{g}_{+}$, $\varphi^*\in\Pi\mathfrak{g}_{+}^{*}$. Consider the vertex algebra \begin{equation} C(\mathfrak{g},x,f,k)=V_{k}(\mathfrak{g})\otimes F(\Pi\mathfrak{g}_{+}\oplus\Pi\mathfrak{g}_{+}^{*})\otimes F(\mathfrak{g}_{1/2}). \nonumber \end{equation} The charge decomposition of $F(\Pi\mathfrak{g}_{+}\oplus\Pi\mathfrak{g}_{+}^{*})$ induces a charge decomposition on $C(\mathfrak{g},x,f,k)$ by declaring charge $V_{k}(\mathfrak{g})=0$ and charge $F(\mathfrak{g}_{1/2})=0$. This makes $C(\mathfrak{g},x,f,k)$ a $\mathbb{Z}$-graded vertex algebra. We introduce a differential $d_{(0)}$ that makes $(C(\mathfrak{g},x,f,k),d_{(0)})$ a $\mathbb{Z}$-graded complex as follows. Let $\{u_{\alpha}\}_{\alpha\in S_{j}}$ be a basis of each $\mathfrak{g}_{j}$, and let $S:=\coprod_{j\in\frac{1}{2}\mathbb{Z}}S_{j}$, $S_{+}=\coprod_{j>0}S_{j}$. Put $m_{\alpha}:=j$ if $\alpha\in S_{j}$. The structure constants $c_{\alpha\beta}^{\gamma}$ are defined by $[u_{\alpha},u_{\beta}]=\sum_{\gamma}c_{\alpha\beta}^{\gamma}u_{\gamma}$ for $\alpha,\beta,\gamma\in S$.
Denote by $\{\varphi_{\alpha}\}_{\alpha\in S_{+}}$ the corresponding basis of $\Pi \mathfrak{g}_{+}$ and by $\{\varphi^{\alpha}\}_{\alpha\in S_{+}}$ the basis of $\Pi \mathfrak{g}_{+}^*$ such that $\left<\varphi_{\alpha}|\varphi^{\beta}\right>_{ch}=\delta^{\beta}_{\alpha}$. Similarly denote by $\{\Phi_{\alpha}\}_{\alpha\in S_{1/2}}$ the corresponding basis of $\mathfrak{g}_{1/2}$, and by $\{\Phi^{\alpha}\}_{\alpha\in S_{1/2}}$ the dual basis with respect to $\left<.|.\right>_{ne}$, i.e., $\left<\Phi_{\alpha}|\Phi^{\beta}\right>_{ne}=\delta^{\beta}_{\alpha}$. It is useful to define $\Phi_{u}$ for any $u=\sum_{\alpha\in S}c_{\alpha}u_{\alpha}\in \mathfrak{g}$ by $\Phi_{u}:=\sum_{\alpha\in S_{1/2}}c_{\alpha}\Phi_{\alpha}$. Define the odd field \begin{eqnarray} d&=&\sum_{\alpha\in S_{+}}(-1)^{p(u_{\alpha})}:u_{\alpha}\varphi^{\alpha}:-\frac{1}{2}\sum_{\alpha,\beta,\gamma\in S_{+}}(-1)^{p(u_{\alpha})p(u_{\gamma})}c^{\gamma}_{\alpha \beta}:\varphi_{\gamma}\varphi^{\alpha}\varphi^{\beta}: \nonumber\\ &&+\sum_{\alpha\in S_{+}}(f|u_{\alpha})\varphi^{\alpha}+\sum_{\alpha\in S_{1/2}}:\varphi^{\alpha}\Phi_{\alpha}:.\nonumber \end{eqnarray} Its Fourier mode $d_{(0)}$ is an odd derivation of all products of the vertex algebra $C(\mathfrak{g},x,f,k)$, such that $d_{(0)}^2=0$ and such that $d_{(0)}$ decreases the charge by $1$. Thus $(C(\mathfrak{g},x,f,k),d_{(0)})$ becomes a $\mathbb{Z}$-graded homology complex. Define \emph{the affine W-algebra} $W_{k}(\mathfrak{g},x,f)$ as follows: as a vector superspace it is the homology of this complex, $W_{k}(\mathfrak{g},x,f):=H(C(\mathfrak{g},x,f,k),d_{(0)})$, equipped with the vertex algebra structure induced from $C(\mathfrak{g},x,f,k)$. The vertex algebra $W_{k}(\mathfrak{g},x,f)$ is also called the \emph{quantum reduction} associated to the quadruple $(\mathfrak{g},x,f,k)$.
Define the Virasoro field of $C(\mathfrak{g},x,f,k)$ by \begin{equation} L=L^{\mathfrak{g}}+\partial x+L^{ch}+L^{ne},\nonumber \end{equation} where \begin{equation} L^{\mathfrak{g}}=\tfrac{1}{2(k+h^{\vee})}\sum_{\alpha\in S}(-1)^{p(u_{\alpha})}:u_{\alpha}u^{\alpha}:,\nonumber \end{equation} is given by the Sugawara construction, where $\{u^{\alpha}\}_{\alpha\in S}$ is the dual basis to $\{u_{\alpha}\}_{\alpha\in S}$, i.e., $(u_{\alpha}|u^{\beta})=\delta^{\beta}_{\alpha}$. Here we are assuming that $k\neq -h^{\vee}$, where $h^{\vee}$ denotes the dual Coxeter number of $\mathfrak{g}$. \begin{equation} L^{ch}=-\sum_{\alpha\in S_{+}}m_{\alpha}:\varphi^{\alpha}\partial\varphi_{\alpha}:+\sum_{\alpha\in S_{+}}(1-m_{\alpha}):(\partial\varphi^{\alpha})\varphi_{\alpha}:,\nonumber \end{equation} \begin{equation} L^{ne}=\tfrac{1}{2}\sum_{\alpha\in S_{1/2}}:(\partial\Phi^{\alpha})\Phi_{\alpha}:.\nonumber \end{equation} The central charge of $L$ is given by \begin{eqnarray} \label{centralchargeformula} c(\mathfrak{g},x,f,k)&=&\frac{k\,sdim\, \mathfrak{g}}{k+h^{\vee}}-12k(x|x)\\ &&-\sum_{\alpha\in S_{+}}(-1)^{p(u_{\alpha})}\left(12 m_{\alpha}^{2}-12 m_{\alpha}+2\right)-\frac{1}{2} sdim\, \mathfrak{g}_{1/2}.\nonumber \end{eqnarray} With respect to $L$ the fields $a\,(a\in \mathfrak{g}_{j})$, $\varphi_{\alpha}, \varphi^{\alpha}$ $(\alpha\in S_{+})$ and $\Phi_{\alpha}\,(\alpha\in S_{1/2})$ are primary, except for $a\,(a\in \mathfrak{g}_{0})$ such that $(a|x)\neq 0$, and the conformal weights are as follows: $\Delta(a)=1-j\; (a\in \mathfrak{g}_{j})$, $\Delta(\varphi_{\alpha})=1-m_{\alpha}$, $\Delta(\varphi^{\alpha})=m_{\alpha}$ and $\Delta(\Phi_{\alpha})=\frac{1}{2}$. It is proved in \cite{KacRoanWakimoto03} that $d_{(0)}L=0$; hence the homology class of $L$ (which does not vanish) defines the Virasoro field of $W_{k}(\mathfrak{g},x,f)$, which is again denoted by $L$.
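As a sanity check of the central charge formula \eqref{centralchargeformula} (a classical example, not needed later), consider $\mathfrak{g}=\mathfrak{sl}_{2}$ with the trace form $(a|b)=\mathrm{tr}(ab)$ and the principal $x=\frac{h}{2}$, so that $sdim\,\mathfrak{g}=3$, $h^{\vee}=2$, $(x|x)=\frac{1}{2}$, $S_{+}$ consists of a single even element with $m_{\alpha}=1$, and $\mathfrak{g}_{1/2}=0$:

```latex
\[
c(\mathfrak{sl}_{2},\tfrac{h}{2},f,k)
= \frac{3k}{k+2} - 6k - 2
= \frac{-6k^{2}-11k-4}{k+2}
= 1-\frac{6(k+1)^{2}}{k+2},
\]
```

which is the well-known central charge of the Virasoro vertex algebra obtained by quantum Drinfeld-Sokolov reduction from $\widehat{\mathfrak{sl}}_{2}$.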
To construct other fields of $W_{k}(\mathfrak{g},x,f)$ define for each $v\in\mathfrak{g}_{j}$ \begin{equation} J^{(v)}=v+\sum_{\alpha,\beta\in S_{+}}(-1)^{p(u_{\alpha})}c^{\alpha}_{\beta}(v):\varphi_{\alpha}\varphi^{\beta}:,\nonumber \end{equation} where the numbers $c^{\alpha}_{\beta}(v)$ are given by $[v,u_{\beta}]=\sum_{\alpha\in S}c^{\alpha}_{\beta}(v)u_{\alpha}$. The field $J^{(v)}\in C(\mathfrak{g},x,f,k)$ has the same charge, the same parity and the same conformal weight as the field $v$. The $\lambda$-bracket between these fields is as follows: \begin{equation}\label{currentbracket} [{J^{(v)}}_{\lambda}J^{(v')}]=J^{([v,v'])}+\lambda\left(k(v|v')+\tfrac{1}{2}\left(\kappa_{\mathfrak{g}}(v,v')-\kappa_{\mathfrak{g}_{0}}(v,v')\right)\right), \end{equation} if $v\in\mathfrak{g}_{i}$, $v'\in \mathfrak{g}_{j}$ and $ij\ge 0$, where $\kappa_{\mathfrak{g}}$ (resp.\ $\kappa_{\mathfrak{g}_{0}}$) denotes the Killing form on $\mathfrak{g}$ (resp.\ $\mathfrak{g}_{0}$). Denote by $C^{-}$ the vertex subalgebra of the vertex algebra $C(\mathfrak{g},x,f,k)$ generated by the fields $J^{(u)}$ for all $u \in \mathfrak{g}_{\le}$, the fields $\varphi^{\alpha}$ for all $\alpha\in S_{+}$ and the fields $\Phi_{\alpha}$ for all $\alpha \in S_{1/2}$. One of the main theorems on the structure of the vertex algebra $W_{k}(\mathfrak{g},x,f)$ is the following: \begin{theorem}\cite[Theorem 4.1]{KacWakimoto04}\label{maintheorem} Let $\mathfrak{g}$ be a simple finite-dimensional Lie superalgebra with an invariant bilinear form $(.|.)$ and let $x, f$ be a pair of even elements of $\mathfrak{g}$ such that $ad\;x$ is diagonalizable with eigenvalues in $\frac{1}{2}\mathbb{Z}$ and $[x,f]=-f$. Suppose that all eigenvalues of $ad\;x$ on $\mathfrak{g}^{f}$ (the centralizer of $f$) are non-positive: $\mathfrak{g}^{f}=\oplus_{j\le 0}\mathfrak{g}^{f}_{j}$.
Then \begin{itemize} \item[a)] For each $a\in \mathfrak{g}^{f}_{-j}\;(j\ge 0)$ there exists a $d_{(0)}$-closed field $J^{\{a\}}$ in $C^{-}$ of conformal weight $1 + j$ (with respect to $L$) such that $J^{\{a\}}-J^{(a)}$ is a linear combination of normal ordered products of the fields $J^{(b)}$, where $b\in\mathfrak{g}_{-s}$, $0\le s< j$, the fields $\Phi_{\alpha}$, where $\alpha\in S_{1/2}$, and the derivatives of these fields. \item[b)] The homology classes of the fields $J^{\{a_{i}\}}$, where $a_1, a_2,\dots$ is a basis of $\mathfrak{g}^{f}$ compatible with its $\frac{1}{2}\mathbb{Z}$-gradation, strongly generate the vertex algebra $W_{k}(\mathfrak{g}, x, f)$. \item[c)]$H_{0}\left(C(\mathfrak{g},x,f,k),d_{(0)}\right)=W_{k}(\mathfrak{g},x,f)$ and $H_{j}\left(C(\mathfrak{g},x,f,k),d_{(0)}\right)=0$ if $j\neq 0$. \end{itemize} \end{theorem} \begin{remark} The complex $\left(C(\mathfrak{g},x,f,k),d_{(0)}\right)$ is formal, that is, the vertex algebra $W_{k}(\mathfrak{g},x,f)$ can be identified with the subalgebra of $C(\mathfrak{g},x,f,k)$ consisting of the $d_{(0)}$-closed charge $0$ elements of $C^{-}$; furthermore the fields $J^{\{a\}}$ can be computed recursively. For example, in the case $a\in \mathfrak{g}^{f}_{-1/2}$ the solution is unique and is given by: \end{remark} \begin{theorem}\cite[Theorem 2.1 (d)]{KacWakimoto04}\label{reconstruction of the fields} For $v \in \mathfrak{g}_{-1/2}$ let \begin{eqnarray} G^{\{v\}}&=& J^{(v)}+\sum_{\beta\in S_{1/2}}:J^{([v,u_{\beta}])}\Phi^{\beta}:+\tfrac{(-1)^{p(v)+1}}{3}\sum_{\alpha,\beta\in S_{1/2}}:\Phi^{\alpha}\Phi^{\beta}\Phi_{[u_{\beta}[u_{\alpha},v]]}:\nonumber\\ &&-\sum_{\beta\in S_{1/2}}\left(k(v|u_{\beta})+str_{\mathfrak{g}_{+}}(ad\,v)(ad\,u_{\beta})\right)\partial\Phi^{\beta}. \nonumber \end{eqnarray} Then, provided that $v \in \mathfrak{g}^{f}_{-1/2}$, we have $d_{(0)}(G^{\{v\}})=0$, hence the homology class of $G^{\{v\}}$ defines a field of the vertex algebra $W_{k}(\mathfrak{g},x,f)$ of conformal weight $\frac{3}{2}$. This field is primary.
\end{theorem} \begin{remark}\label{W algebra as a subalegbra} In the case $\mathfrak{g}^{f}\subset \mathfrak{g}_{\le}$, Theorem \ref{maintheorem} and the identity (\ref{currentbracket}) provide a construction of the vertex algebra $W_{k}(\mathfrak{g},x,f)$ as a subalgebra of $V_{\nu_{k}}(\mathfrak{g}_{\le})\otimes F(\mathfrak{g}_{1/2})$, where $\nu_{k}$ is the 2-cocycle on $\mathfrak{g}_{\le}[t,t^{-1}]$ given by \begin{equation}\label{newcocycle} \nu_{k}(at^{m},bt^{n})=m\delta_{m,-n}\left(k(a|b)+\tfrac{1}{2}\left(\kappa_{\mathfrak{g}}(a,b)-\kappa_{\mathfrak{g}_{0}}(a,b)\right)\right) \end{equation} for $a,b\in \mathfrak{g_{\le}}$ and $m,n\in \mathbb{Z}$. \end{remark} \begin{remark}\label{free field realization of W} Furthermore, if this 2-cocycle is trivial outside $\mathfrak{g}_{0}[t,t^{-1}]$, the canonical homomorphism $\mathfrak{g}_{\le}\rightarrow\mathfrak{g}_{0}$ induces a homomorphism from $V_{\nu_{k}}(\mathfrak{g}_{\le})\otimes F(\mathfrak{g}_{1/2})$ to $V_{\nu_{k}}(\mathfrak{g}_{0})\otimes F(\mathfrak{g}_{1/2})$, yielding in this way a free field realization of $W_{k}(\mathfrak{g},x,f)$ inside $V_{\nu_{k}}(\mathfrak{g}_{0})\otimes F(\mathfrak{g}_{1/2})$. \end{remark} \section{Quantum Hamiltonian Reduction of $D(2,1;\alpha)$}\label{quantum reduction of the algebra} In this section we prove that the family $SW(\frac{3}{2},\frac{3}{2},2)$ of W-algebras, which has generators $\{G, H, L, \tilde{M}, W, U\}$ of conformal weights $\left(\frac{3}{2}, \frac{3}{2}, 2, 2, 2,\frac{5}{2}\right)$ and relations as given in Appendix \ref{App:AppendixB}, can be obtained as the quantum Hamiltonian reduction of $D(2,1;\alpha)$. As a corollary we obtain a free field realization of this family. As a particular case we obtain a free field realization of the Shatashvili-Vafa $G_{2}$ algebra on a space of three free Bosons and three free Fermions.
The Lie superalgebra $D(2,1;\alpha)$, where $\alpha\in\mathbb{C}\backslash \{-1,0\}$, is a one-parameter family of exceptional Lie superalgebras of rank $3$ and dimension $17$, which contains $D(2,1)=osp(4|2)$ as a special case (when $\alpha\in\{1,-\frac{1}{2},-2\}$), see \cite{Kac77}. We present $\mathfrak{g}=D(2,1;\alpha)$ as the contragredient Lie superalgebra associated to the Cartan matrix $A=(a_{ij})_{i,j}$ and $\tau=\{1,2,3\}$ \begin{eqnarray}\label{Cartanmatrix} {(a_{ij})}_{i,j=1}^{3}= \left( \begin{array}{ccc} 0 & 1 & \alpha \\ 1 & 0 & -1-\alpha \\ \alpha & -1-\alpha & 0 \end{array} \right). \end{eqnarray} It has generators $\{h_{1},h_{2},h_{3},e_{1},e_{2},e_{3},f_{1},f_{2},f_{3}\}$, with $h_i$ even and $e_i,f_i$ odd for all $i$, and relations \begin{equation} [e_i,f_j]=\delta_{ij}h_{i},\;\; [h_i,e_j]=a_{ij}e_j,\;\; [h_i,f_j]=-a_{ij}f_j.\nonumber \end{equation} Introduce the elements: \begin{equation} [e_1,e_2]=:e_{12},\;\; [e_1,e_3]=:e_{13},\;\; [e_2,e_3]=:e_{23},\;\; [e_1,e_{23}]=:e_{123},\nonumber \end{equation} \begin{equation} [f_1,f_2]=:f_{12},\;\; [f_1,f_3]=:f_{13},\;\; [f_2,f_3]=:f_{23},\;\; [f_1,f_{23}]=:f_{123}.\nonumber \end{equation} Recall that $\mathfrak{g}$ has vanishing Killing form and consequently its dual Coxeter number is $h^{\vee}=0$.
Fix the following non-degenerate even supersymmetric invariant bilinear form $(.|.)$ \begin{equation} (h_i|h_j)=a_{ij},\;\; (e_i|f_j)=\delta_{ij},\;\; (e_{12}|f_{12})=(f_{12}|e_{12})=-1,\nonumber \end{equation} \begin{equation} (e_{13}|f_{13})=(f_{13}|e_{13})=-\alpha,\;\; (e_{23}|f_{23})=(f_{23}|e_{23})=1+\alpha,\nonumber \end{equation} \begin{equation} (e_{123}|f_{123})=-(f_{123}|e_{123})=(1+\alpha)^2.\nonumber \end{equation} To perform the quantum Hamiltonian reduction we take the pair $(x,f)$: \begin{eqnarray} x:=\tfrac{(\alpha+1)}{2\alpha}h_1+\tfrac{\alpha}{2(\alpha+1)}h_2+\tfrac{1}{2\alpha(\alpha+1)}h_3,&f:=f_{12}+f_{13}+f_{23}.\nonumber \end{eqnarray} This pair together with $e=(-\tfrac{1}{2})e_{12}+(-\tfrac{1}{2\alpha^{2}})e_{13}+(-\tfrac{1}{2(\alpha+1)^{2}})e_{23}$ forms an $sl_{2}$ triple. We have the following eigenspace decomposition of the algebra with respect to $ad\;x$: \[ \begin{array}{ccccccc} \mathfrak{g}_{-3/2} & \mathfrak{g}_{-1} & \mathfrak{g}_{-1/2} & \mathfrak{g}_{0} & \mathfrak{g}_{1/2} & \mathfrak{g}_{1} & \mathfrak{g}_{3/2} \\ f_{123} & f_{12} & f_{1} & h_{1} & e_{1} & e_{12} & e_{123} \\ & f_{13} & f_{2} & h_{2} & e_{2} & e_{13} & \\ & f_{23} & f_{3} & h_{3} & e_{3} & e_{23} & \\ \end{array} \] Furthermore $\mathfrak{g}^{f}=\mathfrak{g}_{-1/2}^{f}\oplus \mathfrak{g}_{-1}^{f}\oplus \mathfrak{g}_{-3/2}^{f}$ with dim $\mathfrak{g}_{-1/2}^{f}=2$, dim $\mathfrak{g}_{-1}^{f}=3$ and dim $\mathfrak{g}_{-3/2}^{f}=1$. This shows that the algebra $W_{k}(\mathfrak{g},x,f)$ has six generators with the expected conformal weights $\left(\frac{3}{2},\frac{3}{2},2,2,2,\frac{5}{2}\right)$, since by Theorem \ref{maintheorem} an element of $\mathfrak{g}^{f}_{-j}$ contributes a generator of conformal weight $1+j$. The set of vectors $\{e_{1},e_{2},e_{3}\}$ is a basis of $\mathfrak{g}_{1/2}$; denote by $\Phi_{1}:=e_{1}$, $\Phi_{2}:=e_{2}$ and $\Phi_{3}:=e_{3}$ the corresponding free neutral Fermions.
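As a consistency check, the grading data above lets one evaluate the constant part of the central charge formula (\ref{centralchargeformula}) by hand. Since $h^{\vee}=0$ and $sdim\, \mathfrak{g}=9-8=1$, the first term equals $1$; the positive part of the grading consists of three odd vectors with $m_{\alpha}=\frac{1}{2}$, three even vectors with $m_{\alpha}=1$ and one odd vector with $m_{\alpha}=\frac{3}{2}$, while $sdim\, \mathfrak{g}_{1/2}=-3$. Hence \begin{eqnarray} -\sum_{\alpha\in S_{+}}(-1)^{p(u_{\alpha})}\left(12 m_{\alpha}^{2}-12 m_{\alpha}+2\right)&=&-\left[3(-1)(-1)+3(2)+(-1)(11)\right]\nonumber\\ &=&-\left[3+6-11\right]=2,\nonumber \end{eqnarray} and $-\frac{1}{2}sdim\, \mathfrak{g}_{1/2}=\frac{3}{2}$, so that $c(\mathfrak{g},x,f,k)=1+2+\frac{3}{2}-12k(x|x)=\frac{9}{2}-12k(x|x)$.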
The non-zero values of the (symmetric) bilinear form $\left<.|.\right>_{ne}$ on $\mathfrak{g}_{1/2}$ are given by: $$\left<\Phi_{1}|\Phi_{2}\right>_{ne}=-1,\;\;\left<\Phi_{1}|\Phi_{3}\right>_{ne}=-\alpha,\;\; \left<\Phi_{2}|\Phi_{3}\right>_{ne}=1+\alpha,$$ \noindent (note that this is exactly minus the Cartan matrix of $D(2,1;\alpha)$). Then the free neutral fermions satisfy the following non-zero $\lambda$-brackets: $$[{\Phi_{1}}_{\lambda}\Phi_{2}]=-1,\;\;[{\Phi_{1}}_{\lambda}\Phi_{3}]=-\alpha,\;\;[{\Phi_{2}}_{\lambda}\Phi_{3}]=1+\alpha,$$ \noindent and the dual free neutral fermions with respect to $\left<.|.\right>_{ne}$ are: \begin{eqnarray} \Phi^{1}&=&(-\tfrac{1+\alpha}{2\alpha})\Phi_{1}+(-\tfrac{1}{2})\Phi_{2}+(-\tfrac{1}{2\alpha})\Phi_{3}, \nonumber\\ \Phi^{2}&=&(-\tfrac{1}{2})\Phi_{1}+(-\tfrac{\alpha}{2+2\alpha})\Phi_{2}+(\tfrac{1}{2+2\alpha})\Phi_{3}, \nonumber\\ \Phi^{3}&=&(-\tfrac{1}{2\alpha})\Phi_{1}+(\tfrac{1}{2+2\alpha})\Phi_{2}+(-\tfrac{1}{2\alpha+2\alpha^{2}})\Phi_{3}.\nonumber \end{eqnarray} \noindent We fix the basis $\{h_{1},h_{2},h_{3},f_{1},f_{2},f_{3},f_{12},f_{13},f_{23},f_{123}\}$ of $\mathfrak{g}_{\le}$ compatible with the $\frac{1}{2}\mathbb{Z}$ and $\mathbb{Z}_{2}$ gradations of $\mathfrak{g}$. For each $v$ in this basis we consider the building blocks $J^{(v)}$; then (\ref{currentbracket}) reduces to \begin{equation} [{J^{(v)}}_{\lambda}J^{(v')}]=J^{([v,v'])}+\lambda k(v|v'),\nonumber \end{equation} \noindent because the Killing form $\kappa_{\mathfrak{g}}$ of $\mathfrak{g}$ is zero and $\mathfrak{g}_{0}$ equals the Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$; that is, the generators $J^{(v)}$ obey the same commutation relations as the generators of $V_{k}(D(2,1;\alpha))$. Using Remark \ref{W algebra as a subalegbra} we obtain that $W_{k}(\mathfrak{g},x,f)$ is a subalgebra of $V_{k}(\mathfrak{g}_{\le})\otimes F(\mathfrak{g}_{1/2})$. For this reason and to simplify the notation we denote $J^{(v)}$ simply by $v$.
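As a quick check that the vectors $\Phi^{i}$ above are indeed dual to the $\Phi_{i}$, note for instance that \begin{eqnarray} \left<\Phi_{1}|\Phi^{1}\right>_{ne}&=&-\tfrac{1}{2}\left<\Phi_{1}|\Phi_{2}\right>_{ne}-\tfrac{1}{2\alpha}\left<\Phi_{1}|\Phi_{3}\right>_{ne}=\tfrac{1}{2}+\tfrac{1}{2}=1,\nonumber\\ \left<\Phi_{2}|\Phi^{1}\right>_{ne}&=&-\tfrac{1+\alpha}{2\alpha}\left<\Phi_{2}|\Phi_{1}\right>_{ne}-\tfrac{1}{2\alpha}\left<\Phi_{2}|\Phi_{3}\right>_{ne}=\tfrac{1+\alpha}{2\alpha}-\tfrac{1+\alpha}{2\alpha}=0,\nonumber \end{eqnarray} the remaining pairings being analogous.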
Furthermore, as the cocycle (\ref{newcocycle}) is the original cocycle of $V_{k}(D(2,1;\alpha))$ and this cocycle is trivial in $\mathfrak{g}_{\le}$ outside $\mathfrak{g}_{0}=\mathfrak{h}$, Remark \ref{free field realization of W} gives a free field realization of $W_{k}(\mathfrak{g},x,f)$ inside $V_{k}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})$. Let $J^{\{f_i\}}$ denote the $d_{(0)}$-closed fields associated to $\{f_{i}\}_{i=1}^{3}$ provided by Theorem \ref{maintheorem}. Using Theorem \ref{reconstruction of the fields} we can compute $J^{\{f_i\}}$ explicitly: \begin{eqnarray} J^{\{f_1\}}&=&f_{1}+\left(\tfrac{\alpha^{2}-1}{3}\right):\Phi^{1}\Phi^{2}\Phi^{3}:+:\Phi^{1}h_{1}:+k \partial\Phi^{1},\nonumber \\ J^{\{f_2\}}&=&f_{2}+\left(\tfrac{\alpha(\alpha+2) }{3}\right):\Phi^{1}\Phi^{2}\Phi^{3}:+:\Phi^{2}h_{2}:+k \partial\Phi^{2},\nonumber \\ J^{\{f_3\}}&=&f_{3}+\left(\tfrac{2\alpha+1}{3}\right):\Phi^{1}\Phi^{2}\Phi^{3}:+:\Phi^{3}h_{3}:+k \partial\Phi^{3}. \nonumber \end{eqnarray} We could also compute the other fields $J^{\{f_{12}\}},J^{\{f_{13}\}},J^{\{f_{23}\}},J^{\{f_{123}\}}$ given by Theorem \ref{maintheorem}, which jointly with $\{J^{\{f_i\}}\}_{i=1}^{3}$ strongly generate $W_{k}(\mathfrak{g},x,f)$; however, in the $SW(\frac{3}{2},\frac{3}{2},2)$ superconformal algebra all the fields can be recovered (using $\lambda$-brackets) from the generators of conformal weight $\frac{3}{2}$, i.e., $G$ and $H$ (see Appendix \ref{App:AppendixB}). Thus we only need to construct $G$ and $H$ from $\{J^{\{f_i\}}\}_{i=1}^{3}$.
In order to do so, observe that \begin{equation}\label{condition to be conformal weight 2} a_{1}f_{1}+a_{2}f_{2}+a_{3}f_{3}\in \mathfrak{g}^{f}_{-1/2} \Leftrightarrow a_{1}+a_{2}(-\tfrac{\alpha}{\alpha+1})+a_{3}(-\tfrac{1}{\alpha+1})=0, \end{equation} \noindent and that the central charge of the Virasoro field of $W_{k}(\mathfrak{g},x,f)$ given by formula (\ref{centralchargeformula}) is $c(\alpha,k)=\tfrac{9}{2}-12k(x|x)=\tfrac{9}{2}-\tfrac{6 k \left(1+\alpha +\alpha ^2\right)}{\alpha(1+\alpha )}$. We want to define a field $G$ such that $\{G,L:=\tfrac{1}{2}{G}_{(0)}G\}$ generate an $N=1$ superconformal algebra with the above central charge; this is accomplished by taking $a_{1}=a_{2}=a_{3}=\tfrac{i}{\sqrt{k}}$, i.e., \begin{equation} G:=\tfrac{i}{\sqrt{k}}\left(J^{\{f_1\}}+J^{\{f_2\}}+J^{\{f_3\}}\right).\nonumber \end{equation} We are looking for a vector $H$ of conformal weight $\frac{3}{2}$ such that \begin{equation}\label{definition of H hat} {G}_{(j)}H=0, \;\;j>0. \end{equation} \noindent The most general vector of conformal weight $\frac{3}{2}$ given by (\ref{condition to be conformal weight 2}) is \begin{equation} \left(\tfrac{\alpha}{\alpha+1}a_{2}+\tfrac{1}{\alpha+1}a_{3}\right)J^{\{f_1\}}+a_{2}J^{\{f_2\}}+a_{3} J^{\{f_3\}},\nonumber \end{equation} \noindent and (\ref{definition of H hat}) imposes the condition $a_{2}\alpha(-1+2 k)(1+2 \alpha )+ a_{3}(2 k-\alpha )(2+\alpha )=0,$ which has as solution \begin{eqnarray} {a_{1}}'&:=&\alpha(-1+\alpha )(1+2k+\alpha ),\nonumber\\ {a_{2}}'&:=&(-1)(2k-\alpha )(2+\alpha )(1+\alpha ),\nonumber\\ {a_{3}}'&:=&\alpha(-1+2k)(1+2\alpha)(1+\alpha).\nonumber \end{eqnarray} It follows from ${H}_{(2)}H=\frac{2c}{3}$ (cf.
(\ref{bracketHwithH})) that we need to rescale this solution to define $H=\sum_{i=1}^{3}a_{i}J^{\{f_i\}}$ with \begin{equation} a_{i}:=\left(-\tfrac{3}{2} (-1+2 k)\alpha ^2 (1+\alpha )^2 \left(2 k+4 k^2-\alpha (1+\alpha )\right)\right)^{-\tfrac{1}{2}}{a_{i}}'.\nonumber \end{equation} We can obtain all other generators from $G$ and $H$; to perform these computations we use Thielemans's software \cite{Thielemans91}. Listed below are the explicit expressions of all the generators of $W_{k}(\mathfrak{g},x,f)$ as a subalgebra of $V_{k}(\mathfrak{g}_{\le})\otimes F(\mathfrak{g}_{1/2})$: \begin{eqnarray} G&=&\tfrac{i}{\sqrt{k}}f_{1}+\tfrac{i}{\sqrt{k}}f_{2}+\tfrac{i}{\sqrt{k}}f_{3}+\tfrac{i}{\sqrt{k}}:\Phi^{1}h_{1}:+\tfrac{i}{\sqrt{k}}:\Phi^{2}h_{2}:+\tfrac{i}{\sqrt{k}}:\Phi^{3}h_{3}:\nonumber\\ &&+i\sqrt{k} \partial\Phi^{1}+i\sqrt{k} \partial\Phi^{2}+i\sqrt{k} \partial\Phi^{3},\nonumber \end{eqnarray} \begin{eqnarray} L&=&-\tfrac{1}{k}f_{12}-\tfrac{1}{k}f_{13}-\tfrac{1}{k}f_{23}+\tfrac{(1+\alpha )}{4 k \alpha }:h_{1}h_{1}:+\tfrac{1}{2k}:h_{1}h_{2}:+\tfrac{1}{2k\alpha}:h_{1}h_{3}:\nonumber\\ &&+\tfrac{\alpha}{4 k+4 k \alpha }:h_{2}h_{2}:-\tfrac{1}{2 k+2 k \alpha }:h_{2}h_{3}:+\tfrac{1}{4 k \alpha +4 k \alpha ^2}:h_{3}h_{3}:+\tfrac{1}{k}:\Phi^{1}f_{2}:\nonumber\\ &&+\tfrac{\alpha}{k}:\Phi^{1}f_{3}:+\tfrac{1}{2}:\Phi^{1}\partial\Phi^{2}:+\tfrac{1}{2} \alpha:\Phi^{1}\partial\Phi^{3}:+\tfrac{1}{k}:\Phi^{2}f_{1}:-\tfrac{(1+\alpha )}{k}:\Phi^{2}f_{3}:\nonumber\\ &&+\tfrac{1}{2} (-1-\alpha ):\Phi^{2}\partial\Phi^{3}:+\tfrac{\alpha}{k}:\Phi^{3}f_{1}:-\tfrac{(1+\alpha )}{k} :\Phi^{3}f_{2}:-\tfrac{1}{2}:\partial\Phi^{1}\Phi^{2}:\nonumber\\ &&-\tfrac{1}{2} \alpha :\partial\Phi^{1}\Phi^{3}:+\tfrac{1}{2} (1+\alpha ):\partial\Phi^{2}\Phi^{3}:+\tfrac{(1+\alpha)}{2\alpha }\partial h_{1}+\tfrac{\alpha}{2+2 \alpha }\partial h_{2}\nonumber\\ &&+\tfrac{1}{2\alpha+2\alpha^2}\partial h_{3},\nonumber \end{eqnarray} \small \begin{eqnarray} H&=&\tfrac{1}{\sqrt{-\tfrac{3}{2} (-1+2 k)\alpha ^2
(1+\alpha )^2 \left(2 k+4 k^2-\alpha (1+\alpha )\right)}}\left(\left(-1+\alpha\right) \alpha \left(1+2k+\alpha\right) \left(f_{1}+ :\Phi^{1}h_{1}:\right.\right.\nonumber\\ &&\left.\left.+k \partial\Phi^{1}\right)-(2k-\alpha ) \left(2+3 \alpha +\alpha ^2\right) (f_{2}+ :\Phi^{2}h_{2}:+k\partial\Phi^{2})\right.\nonumber\\ &&\left.+(-1+2 k) \alpha \left(1+3 \alpha +2 \alpha ^2\right) (f_{3}+ :\Phi^{3}h_{3}:+k\partial\Phi ^{3})\right.\nonumber\\ &&\left.+\alpha (1+\alpha ) \left(-3 \alpha (1+\alpha )+4 k \left(1+\alpha +\alpha ^2\right)\right) :\Phi^{1}\Phi^{2}\Phi^{3}:\right),\nonumber \end{eqnarray} \normalsize \small \begin{eqnarray} \tilde{M}&=&\tfrac{1}{\sqrt{-\tfrac{3}{2} (-1+2 k)\alpha ^2 (1+\alpha )^2 \left(2 k+4 k^2-\alpha (1+\alpha )\right)}}\left(\tfrac{i (1+2 \alpha ) \left(-4 k+\alpha +\alpha ^2\right) }{\sqrt{k}}\left(f_{12}-\tfrac{1}{2}:h_{1}h_{2}:\right.\right.\nonumber\\ &&\left.\left.- :\Phi^{1}f_{2}:- :\Phi^{2}f_{1}:\right)+\tfrac{i (2+\alpha ) (-1+(-1+4 k) \alpha ) }{\sqrt{k}}\left(\alpha f_{13}-\tfrac{1}{2}:h_{1}h_{3}:-\alpha ^2:\Phi^{1}f_{3}:\right.\right.\nonumber\\ &&\left.\left.- \alpha^2:\Phi^{3}f_{1}:\right)+\tfrac{i \left(-1+\alpha ^2\right) (-\alpha +4 k (1+\alpha )) }{\sqrt{k}}\left(f_{23}+\tfrac{1}{2 (1+\alpha )}:h_{2}h_{3}:\right.\right.\nonumber\\ &&\left.\left.+ (1+\alpha ):\Phi^{3}f_{2}:+ (1+\alpha ):\Phi^{2}f_{3}:\right)\right.\nonumber\\ &&\left.-i \sqrt{k} (-1+\alpha ) (1+\alpha ) (1+2 k+\alpha )\left( \partial h_{1}+\tfrac{1}{2k}:h_{1}h_{1}:\right)\right.\nonumber\\ &&\left.+i \sqrt{k} (2 k-\alpha ) \alpha (2+\alpha )\left( \partial h_{2}+\tfrac{1}{2k}:h_{2}h_{2}:\right)\right.\nonumber\\ &&\left.-i \sqrt{k} (-1+2 k) (1+2 \alpha )\left( \partial h_{3}+\tfrac{1}{2k}:h_{3}h_{3}:\right)\right.\nonumber\\ &&\left.+\tfrac{i\left(-3 \alpha (1+\alpha )+4 k \left(1+\alpha +\alpha ^2\right)\right) }{2 \sqrt{k}}\left(-(1+\alpha ):\Phi^{1}\Phi^{2}h_{1}:+\alpha:\Phi^{1}\Phi^{2}h_{2}:-:\Phi^{1}\Phi^{2}h_{3}:\right.\right.\nonumber\\
&&\left.\left.+\alpha(1+\alpha ):\Phi^{1}\Phi^{3}h_{1}:+\alpha^2:\Phi^{1}\Phi^{3}h_{2}:-\alpha:\Phi^{1}\Phi^{3}h_{3}: -(1+\alpha )^2:\Phi^{2}\Phi^{3}h_{1}:\right.\right.\nonumber\\ &&\left.\left.-\alpha(1+\alpha ):\Phi^{2}\Phi^{3}h_{2}:-(1+\alpha ):\Phi^{2}\Phi^{3}h_{3}:\right)\right.\nonumber\\ &&\left.-i \sqrt{k} (-1+\alpha ) \alpha (1+2 k+\alpha ):\Phi^{1}\partial\Phi^{2}:\right.\nonumber\\ &&\left.-i \sqrt{k} (-1+\alpha ) \alpha ^2 (1+2 k+\alpha ):\Phi^{1}\partial\Phi^{3}:\right.\nonumber\\ &&\left.-i \sqrt{k} (2 k-\alpha ) (1+\alpha )^2 (2+\alpha ) :\Phi^{2}\partial\Phi^{3}:\right.\nonumber\\ &&\left.-i \sqrt{k} (2 k-\alpha ) \left(2+3 \alpha +\alpha ^2\right) :\partial\Phi^{1}\Phi^{2}:\right.\nonumber\\ &&\left.+i \sqrt{k} (-1+2 k) \alpha ^2 \left(1+3 \alpha +2 \alpha ^2\right) :\partial\Phi^{1}\Phi^{3}:\right.\nonumber\\ &&\left.-i \sqrt{k} (-1+2 k) \alpha (1+\alpha )^2 (1+2 \alpha ) :\partial\Phi^{2}\Phi^{3}:\right),\nonumber \end{eqnarray} \normalsize \small \begin{eqnarray} W&=&\tfrac{\mu }{\left(-3 \alpha (1+\alpha )+4 k \left(1+\alpha +\alpha ^2\right)\right)}\left(\tfrac{\left(2 k+\alpha +\alpha ^2\right)}{k}\left(- f_{12}+\tfrac{1}{2 } :h_{1}h_{2}:+ :\Phi^{1}f_{2}:\right.\right.\nonumber\\ &&\left.\left.+ :\Phi^{2}f_{1}:\right)+\tfrac{(1+\alpha +2 k \alpha ) }{k}\left(-\alpha f_{13}+\tfrac{1}{2 } :h_{1}h_{3}:+\alpha ^2 :\Phi^{1}f_{3}:+\alpha ^2:\Phi^{3}f_{1}:\right)\right.\nonumber\\ &&\left.+\tfrac{(1+\alpha ) (\alpha +2 k (1+\alpha )) }{k}\left(-f_{23}-(1+\alpha ):\Phi^{2}f_{3}:-(1+\alpha ):\Phi^{3}f_{2}:\right)\right.\nonumber\\ &&\left.+\tfrac{(1+\alpha ) (1+2 k+\alpha )}{4 k}:h_{1}h_{1}:+\tfrac{\alpha (-2 k+\alpha )}{4k}:h_{2}h_{2}:+\tfrac{-2k+\alpha(-2k-1)}{2k}:h_{2}h_{3}:\right.\nonumber\\ &&\left.+\tfrac{-2k+1}{4k}:h_{3}h_{3}:+\left(-1+\alpha ^2\right):\Phi^{1}\Phi^{2}h_{1}:-\alpha (2+\alpha ):\Phi^{1}\Phi^{2}h_{2}:\right.\nonumber\\ &&\left.+(-1-2 \alpha ) :\Phi^{1}\Phi^{2}h_{3}:+\left(\alpha -\alpha ^3\right) :\Phi^{1}\Phi^{3}h_{1}:-\alpha ^2
(2+\alpha ):\Phi^{1}\Phi^{3}h_{2}:\right.\nonumber\\ &&\left.-\alpha (1+2 \alpha ):\Phi^{1}\Phi^{3}h_{3}:-\alpha (1+2 k+\alpha ) :\Phi^{1}\partial\Phi^{2}:\right.\nonumber\\ &&\left.-\alpha ^2 (1+2 k+\alpha ):\Phi^{1}\partial\Phi^{3}:+(-1+\alpha ) (1+\alpha )^2 :\Phi^{2}\Phi^{3}h_{1}: \right.\nonumber\\ &&\left.+\alpha \left(2+3 \alpha +\alpha ^2\right) :\Phi^{2}\Phi^{3}h_{2}:+\left(-1-3 \alpha -2 \alpha ^2\right):\Phi^{2}\Phi^{3}h_{3}:\right.\nonumber\\ &&\left.-(2 k-\alpha ) (1+\alpha )^2 :\Phi^{2}\partial\Phi^{3}:-(2 k-\alpha ) (1+\alpha ):\partial\Phi^{1}\Phi^{2}:\right.\nonumber\\ &&\left.-(-1+2 k) \alpha ^2 (1+\alpha ) :\partial\Phi^{1}\Phi^{3}:+(-1+2 k) \alpha (1+\alpha )^2 :\partial\Phi^{2}\Phi^{3}:\right.\nonumber\\ &&\left.+\tfrac{1}{2} (1+\alpha ) (1+2 k+\alpha ) \partial h_{1}+\tfrac{1}{2} \alpha (-2 k+\alpha ) \partial h_{2}+\tfrac{1-2k}{2}\partial h_{3}\right),\nonumber \end{eqnarray} \normalsize \small \begin{align*} U&=\tfrac{\mu }{\left(-3 \alpha (1+\alpha )+4 k \left(1+\alpha +\alpha ^2\right)\right)}\left(-\tfrac{6i\alpha}{\sqrt{k}}f_{123}+\tfrac{3i(1+\alpha)}{\sqrt{k}}:h_{1}f_{2}:+\tfrac{3i\alpha(1+\alpha ) }{\sqrt{k}}:h_{1}f_{3}:\right.\\ &\quad\left.-\tfrac{3 i \alpha}{\sqrt{k}}:h_{2}f_{1}:+\tfrac{3 i \alpha (1+\alpha ) }{\sqrt{k}}:h_{2}f_{3}:-\tfrac{3 i \alpha}{\sqrt{k}}:h_{3}f_{1}:\right.\\ &\quad\left.+\frac{3 i (1+\alpha ) }{\sqrt{k}}:h_{3}f_{2}:+\tfrac{6 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{1}f_{23}:-\tfrac{3 i \alpha }{\sqrt{k}}:\Phi^{1}h_{1}h_{2}:\right.\\ &\quad\left.-\tfrac{3 i \alpha }{\sqrt{k}}:\Phi^{1}h_{1}h_{3}:-\tfrac{3 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{1}\Phi^{2}f_{1}:+\tfrac{3 i \alpha (1+\alpha ) }{\sqrt{k}}:\Phi^{1}\Phi^{2}f_{2}:\right.\\ &\quad\left.+\tfrac{3 i \alpha \left(1+3 \alpha +2 \alpha ^2\right) }{\sqrt{k}}:\Phi^{1}\Phi^{2}f_{3}:+i \sqrt{k} \alpha \left(1+3 \alpha +2 \alpha ^2\right) :\Phi^{1}\Phi^{2}\partial\Phi^{3}:\right.\\ &\quad\left.-\tfrac{3 i \alpha ^2 (1+\alpha 
)}{\sqrt{k}}:\Phi^{1}\Phi^{3}f_{1}:+\tfrac{3 i \alpha \left(2+3 \alpha +\alpha ^2\right) }{\sqrt{k}}:\Phi^{1}\Phi^{3}f_{2}:+\tfrac{3 i \alpha ^2 (1+\alpha )}{\sqrt{k}}:\Phi^{1}\Phi^{3}f_{3}:\right.\\ &\quad\left.-3 i \sqrt{k} \alpha (1+\alpha ):\Phi^{1}\partial\Phi^{2}\Phi^{2}:-i \sqrt{k} \alpha \left(2+3 \alpha +\alpha ^2\right):\Phi^{1}\partial\Phi^{2}\Phi^{3}:\right.\\ &\quad\left.-3 i \sqrt{k} \alpha ^2 (1+\alpha ):\Phi^{1}\partial\Phi^{3}\Phi^{3}:-\tfrac{i \alpha (1+2 k+\alpha )}{\sqrt{k}}:\Phi^{1}\partial h_{1}:\right.\\ &\quad\left.+\tfrac{6 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{2}f_{13}:+\tfrac{3 i (1+\alpha ) }{\sqrt{k}}:\Phi^{2}h_{1}h_{2}:+\tfrac{3 i (1+\alpha )}{\sqrt{k}}:\Phi^{2}h_{2}h_{3}:\right.\\ &\quad\left.-\tfrac{3 i \alpha \left(-1+\alpha ^2\right) }{\sqrt{k}}:\Phi^{2}\Phi^{3}f_{1}:+\tfrac{3 i \alpha (1+\alpha )^2 }{\sqrt{k}}:\Phi^{2}\Phi^{3}f_{2}:-\tfrac{3 i \alpha (1+\alpha )^2 }{\sqrt{k}}:\Phi^{2}\Phi^{3}f_{3}:\right.\\ &\quad\left.+3 i \sqrt{k} \alpha (1+\alpha )^2 :\Phi^{2}\partial\Phi^{3}\Phi^{3}:+\tfrac{i (2 k-\alpha ) (1+\alpha ) }{\sqrt{k}}:\Phi^{2}\partial h_{2}:+\tfrac{6 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{3}f_{12}:\right.\\ &\quad\left.+\tfrac{3 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{3}h_{1}h_{3}:+\tfrac{3 i \alpha (1+\alpha )}{\sqrt{k}}:\Phi^{3}h_{2}h_{3}:+\tfrac{i (-1+2 k) \alpha (1+\alpha )}{\sqrt{k}}:\Phi^ {3}\partial h_{3}:\right.\\ &\quad\left.-\tfrac{2 i (-1+k-\alpha ) \alpha }{\sqrt{k}}:\partial\Phi^{1}h_{1}:-3 i \sqrt{k} \alpha :\partial\Phi^{1}h_{2}:-3 i \sqrt{k} \alpha:\partial\Phi^{1}h_{3}:\right.\\ &\quad\left.-3 i \sqrt{k} \alpha (1+\alpha ):\partial\Phi^{1}\Phi^{1}\Phi^{2}:-3 i \sqrt{k} \alpha ^2 (1+\alpha ):\partial\Phi^{1}\Phi^{1}\Phi^{3}:\right.\\ &\quad\left.-i \sqrt{k} \alpha \left(-1+\alpha ^2\right):\partial\Phi^{1}\Phi^{2}\Phi^{3}:+3 i \sqrt{k} (1+\alpha ):\partial\Phi^{2}h_{1}:\right.\\ &\quad\left.+\tfrac{2 i (1+\alpha ) (k+\alpha )}{\sqrt{k}}:\partial\Phi^{2}h_{2}:+3 i \sqrt{k} (1+\alpha 
):\partial\Phi^{2}h_{3}:\right.\\ \displaybreak &\quad\left.+3 i \sqrt{k} \alpha (1+\alpha )^2:\partial\Phi^{2}\Phi^{2}\Phi^{3}:+3 i \sqrt{k} \alpha (1+\alpha ):\partial\Phi^{3}h_{1}:\right.\\ &\quad\left.+3 i \sqrt{k} \alpha (1+\alpha ):\partial\Phi^{3}h_{2}:+\tfrac{2 i (1+k) \alpha (1+\alpha )}{\sqrt{k}}:\partial\Phi^{3}h_{3}:\right.\\ &\quad\left.-\tfrac{i \alpha (1+2 k+\alpha )}{\sqrt{k}}\partial f_{1}+\tfrac{i (2 k-\alpha ) (1+\alpha )}{\sqrt{k}}\partial f_{2}+\tfrac{i (-1+2 k) \alpha (1+\alpha ) }{\sqrt{k}}\partial f_{3}\right.\\ &\quad\left.-\tfrac{1}{2} i \sqrt{k} (-1+4 k-\alpha ) \alpha \partial^2\Phi^{1}+\frac{1}{2} i \sqrt{k} (1+\alpha ) (4 k+\alpha ) \partial^2\Phi^{2}\right.\\ &\quad\left.+\frac{1}{2} i \sqrt{k} (1+4 k) \alpha (1+\alpha ) \partial^2\Phi^{3}\right), \end{align*} \normalsize where $\mu=\sqrt{\tfrac{9c(4+\varepsilon^{2})}{2(27-2c)}}$ and $\varepsilon(\alpha,k)=-\tfrac{4 i \sqrt{\frac{2}{3}} k^{3/2} (1+2 \alpha ) \left(-2+\alpha +\alpha ^2\right)}{3\sqrt{-(-1+2 k) \alpha ^2 (1+\alpha )^2 \left(2 k+4 k^2-\alpha (1+\alpha )\right)}}$. One can check straightforwardly with the aid of \cite{Thielemans91} that the $\lambda$-brackets of the algebra $W_{k}(\mathfrak{g},x,f)$ coincide with the $\lambda$-brackets of the family of superconformal algebras $SW(\frac{3}{2},\frac{3}{2},2)$ with parameters $\left(c(\alpha,k),\varepsilon(\alpha,k)\right)$. The Shatashvili-Vafa $G_{2}$ superconformal algebra is a quotient of this algebra for $(c,\varepsilon)=(21/2,0)$ modulo an ideal generated in conformal weight $\frac{7}{2}$ (cf.\ Remark \ref{remarkabouttheideal}); in particular, the explicit commutation relations obtained in \cite{Shatashvili-Vafa95} are an artifact of the free field realization used by the authors \cite{OFarrill97}. Solving $(c,\varepsilon)=(21/2,0)$ for $\alpha$ and $k$, there are three solutions: $\{\alpha=1,k=-2/3\}$, $\{\alpha=-2,k=-2/3\}$ and $\{\alpha=-1/2,k=1/3\}$.
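These three solutions can also be read off by hand: the numerator of $\varepsilon(\alpha,k)$ contains the factor $(1+2\alpha)(-2+\alpha+\alpha^{2})=(1+2\alpha)(\alpha+2)(\alpha-1)$, which vanishes precisely when $\alpha\in\{1,-2,-\frac{1}{2}\}$, and substituting each of these values into $c(\alpha,k)=\frac{9}{2}-\frac{6k(1+\alpha+\alpha^{2})}{\alpha(1+\alpha)}=\frac{21}{2}$ determines $k$: \begin{eqnarray} \alpha=1:&&\tfrac{9}{2}-9k=\tfrac{21}{2}\;\Rightarrow\; k=-\tfrac{2}{3},\nonumber\\ \alpha=-2:&&\tfrac{9}{2}-9k=\tfrac{21}{2}\;\Rightarrow\; k=-\tfrac{2}{3},\nonumber\\ \alpha=-\tfrac{1}{2}:&&\tfrac{9}{2}+18k=\tfrac{21}{2}\;\Rightarrow\; k=\tfrac{1}{3}.\nonumber \end{eqnarray}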
Precisely for these values of $\alpha$ the superalgebra $D(2,1;\alpha)$ is nothing but $osp(4|2)$; hence the Shatashvili-Vafa $G_{2}$ superconformal algebra is a quotient of the quantum Hamiltonian reduction of $osp(4|2)$. \begin{remark} The existence of the ideal (\ref{ideal}) can be guessed from the fact that the affine vertex algebra $V_{k}(osp(4|2))$ at level $k\in\{-\frac{2}{3},\frac{1}{3}\}$ is not simple, i.e., contains a non-trivial ideal \cite{GorelikKac07}. \end{remark} \sloppy Listed below are the explicit expressions of all the generators of the Shatashvili-Vafa $G_{2}$ superconformal algebra in the case $\{\alpha=1,k=-2/3\}$. Note that we are using the change of basis (\ref{changeofbasis}). \fussy \begin{eqnarray} G&=&\sqrt{\tfrac{3}{2}}f_{1}+\sqrt{\tfrac{3}{2}}f_{2}+\sqrt{\tfrac{3}{2}} f_{3}+\sqrt{\tfrac{3}{2}}:\Phi^{1}h_{1}:+\sqrt{\tfrac{3}{2}}:\Phi^{2}h_{2}:\nonumber\\ &&+\sqrt{\tfrac{3}{2}}:\Phi^{3}h_{3}:-\sqrt{\tfrac{2}{3}} \partial\Phi^{1}-\sqrt{\tfrac{2}{3}} \partial\Phi^{2}-\sqrt{\tfrac{2}{3}} \partial\Phi^{3},\nonumber \end{eqnarray} \begin{eqnarray} L&=&\tfrac{3}{2}f_{12}+\tfrac{3}{2}f_{13}+\tfrac{3}{2}f_{23}-\tfrac{3}{4}:h_{1}h_{1}:-\tfrac{3}{4} :h_{1}h_{2}:-\tfrac{3}{4}:h_{1}h_{3}:-\tfrac{3}{16}:h_{2}h_{2}:\nonumber\\ &&+\tfrac{3}{8}:h_{2}h_{3}:-\tfrac{3}{16} :h_{3}h_{3}:-\tfrac{3}{2} :\Phi^{1}f_{2}:-\tfrac{3}{2}:\Phi^{1}f_{3}:+\tfrac{1}{2}:\Phi^{1}\partial\Phi^{2}:\nonumber\\ &&+\tfrac{1}{2} :\Phi^{1}\partial\Phi^{3}:-\tfrac{3}{2}:\Phi^{2}f_{1}:+3:\Phi^{2}f_{3}:-:\Phi^{2}\partial\Phi^{3}:-\tfrac{3}{2} :\Phi^{3}f_{1}:\nonumber\\ &&+3:\Phi^{3}f_{2}:-\tfrac{1}{2}:\partial\Phi^{1}\Phi^{2}:-\tfrac{1}{2} :\partial\Phi^{1}\Phi^{3}:+:\partial\Phi^{2}\Phi^{3}:+\partial h_{1}\nonumber\\ &&+\tfrac{1}{4} \partial h_{2}+\tfrac{1}{4} \partial h_{3},\nonumber \end{eqnarray} \begin{eqnarray} \Phi&=&3 f_{2}-3 f_{3}-6 :\Phi^{1}\Phi^{2}\Phi^{3}:+3 :\Phi^{2}h_{2}:-3:\Phi^{3}h_{3}:-2 \partial\Phi^{2}+2 \partial\Phi^{3},\nonumber \end{eqnarray}
\begin{eqnarray} K&=&3 \sqrt{\tfrac{3}{2}} f_{12}-3 \sqrt{\tfrac{3}{2}} f_{13}-\tfrac{3}{2} \sqrt{\tfrac{3}{2}} :h_{1}h_{2}:+\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:h_{1}h_{3}:-\tfrac{3}{4} \sqrt{\tfrac{3}{2}} :h_{2}h_{2}:\nonumber\\ &&+\tfrac{3}{4} \sqrt{\tfrac{3}{2}} :h_{3}h_{3}:-3 \sqrt{\tfrac{3}{2}} :\Phi^{1}f_{2}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{1}f_{3}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{2}h_{1}:\nonumber\\ &&-\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{2}h_{2}:+\tfrac{3}{2} \sqrt{\tfrac{3}{2}} :\Phi^{1}\Phi^{2}h_{3}:-3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}h_{1}:\nonumber\\ &&-\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}h_{2}:+\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}h_{3}:-3 \sqrt{\tfrac{3}{2}}:\Phi^{2}f_{1}:+3 \sqrt{6}:\Phi^{2}\Phi^{3}h_{1}:\nonumber\\ &&+3 \sqrt{\tfrac{3}{2}} :\Phi^{2}\Phi^{3}h_{2}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{2}\Phi^{3}h_{3}:-2 \sqrt{6}:\Phi^{2}\partial\Phi^{3}:+3 \sqrt{\tfrac{3}{2}} :\Phi^{3}f_{1}:\nonumber\\ &&-\sqrt{6} :\partial\Phi^{1}\Phi^{2}:+\sqrt{6}:\partial\Phi^{1}\Phi^{3}:-2 \sqrt{6} :\partial\Phi^{2}\Phi^{3}:+\sqrt{\tfrac{3}{2}}\partial h_{2}-\sqrt{\tfrac{3}{2}} \partial h_{3},\nonumber \end{eqnarray} \begin{eqnarray} X&=&-3 f_{23}-\tfrac{3}{8}:h_{2}h_{2}:-\tfrac{3}{4}:h_{2}h_{3}:-\tfrac{3}{8} :h_{3}h_{3}:-\tfrac{3}{2} :\Phi^{1}\Phi^{2}h_{2}:\nonumber\\ &&-\tfrac{3}{2}:\Phi^{1}\Phi^{2}h_{3}:-\tfrac{3}{2}:\Phi^{1}\Phi^{3}h_{2}:-\tfrac{3}{2}:\Phi^{1}\Phi^{3}h_{3}:-\tfrac{1}{2}:\Phi^{1}\partial\Phi^{2}:\nonumber\\ &&-\tfrac{1}{2}:\Phi^{1}\partial\Phi^{3}:-6 :\Phi^{2}f_{3}:+3:\Phi^{2}\Phi^{3}h_{2}:-3:\Phi^{2}\Phi^{3}h_{3}:+5 :\Phi^{2}\partial\Phi^{3}:\nonumber\\ &&-6 :\Phi^{3}f_{2}:+\tfrac{5}{2}:\partial\Phi^{1}\Phi^{2}:+\tfrac{5}{2} :\partial\Phi^{1}\Phi^{3}:-5:\partial\Phi^{2}\Phi^{3}:+\tfrac{1}{2}\partial h_{2}+\tfrac{1}{2}\partial h_{3},\nonumber \end{eqnarray} \small \begin{eqnarray} M&=&-3 \sqrt{\tfrac{3}{2}} f_{123}+3 \sqrt{\tfrac{3}{2}}:h_{1}f_{2}:+3 \sqrt{\tfrac{3}{2}} :h_{1}f_{3}:-\tfrac{3}{2}
\sqrt{\tfrac{3}{2}}:h_{2}f_{1}:+3 \sqrt{\tfrac{3}{2}} :h_{2}f_{3}:\nonumber\\ &&-\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:h_{3}f_{1}:+3 \sqrt{\tfrac{3}{2}} :h_{3}f_{2}:+3 \sqrt{6} :\Phi^{1}f_{23}:-\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}h_{1}h_{2}:\nonumber\\ &&-\tfrac{3}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}h_{1}h_{3}:-3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{2}f_{1}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{2}f_{2}:+9 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{2}f_{3}:\nonumber\\ &&-\sqrt{6}:\Phi^{1}\Phi^{2}\partial\Phi^{3}:-3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}f_{1}:+9 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}f_{2}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{1}\Phi^{3}f_{3}:\nonumber\\ &&+\sqrt{6}:\Phi^{1}\partial\Phi^{2}\Phi^{2}:+\sqrt{6} :\Phi^{1}\partial\Phi^{2}\Phi^{3}:+\sqrt{6} :\Phi^{1}\partial\Phi^{3}\Phi^{3}:-\tfrac{1}{2} \sqrt{\tfrac{3}{2}}:\Phi^{1}\partial h_{1}:\nonumber\\ &&+3 \sqrt{6}:\Phi^{2}f_{13}:+3 \sqrt{\tfrac{3}{2}} :\Phi^{2}h_{1}h_{2}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{2}h_{2}h_{3}:+3 \sqrt{6}:\Phi^{2}\Phi^{3}f_{2}:\nonumber\\ &&-3 \sqrt{6}:\Phi^{2}\Phi^{3}f_{3}:-2 \sqrt{6}:\Phi^{2}\partial\Phi^{3}\Phi^{3}:-\tfrac{5}{2} \sqrt{\tfrac{3}{2}}:\Phi^{2}\partial h_{2}:+3 \sqrt{6}:\Phi^{3}f_{12}:\nonumber\\ &&+3 \sqrt{\tfrac{3}{2}}:\Phi^{3}h_{1}h_{3}:+3 \sqrt{\tfrac{3}{2}}:\Phi^{3}h_{2}h_{3}:-\tfrac{5}{2} \sqrt{\tfrac{3}{2}}:\Phi^{3}\partial h_{3}:+\tfrac{5}{2} \sqrt{\tfrac{3}{2}}:\partial\Phi^{1}h_{1}:\nonumber\\ &&+\sqrt{\tfrac{3}{2}} :\partial\Phi^{1}h_{2}:+\sqrt{\tfrac{3}{2}}:\partial\Phi^{1}h_{3}:+\sqrt{6} :\partial\Phi^{1}\Phi^{1}\Phi^{2}:+ \sqrt{6} :\partial\Phi^{1}\Phi^{1}\Phi^{3}:\nonumber\\ &&-\sqrt{6}:\partial\Phi^{2}h_{1}:+\tfrac{1}{2} \sqrt{\tfrac{3}{2}}:\partial\Phi^{2}h_{2}:-\sqrt{6}:\partial\Phi^{2}h_{3}:-2 \sqrt{6} :\partial\Phi^{2}\Phi^{2}\Phi^{3}:\nonumber\\ &&-\sqrt{6}:\partial\Phi^{3}h_{1}:-\sqrt{6} :\partial\Phi^{3}h_{2}:+\tfrac{1}{2} \sqrt{\tfrac{3}{2}}:\partial\Phi^{3}h_{3}:-\tfrac{1}{2} \sqrt{\tfrac{3}{2}} \partial f_{1}\nonumber\\ &&-\tfrac{5}{2} \sqrt{\tfrac{3}{2}} \partial 
f_{2}-\tfrac{5}{2} \sqrt{\tfrac{3}{2}} \partial f_{3}-\sqrt{\tfrac{2}{3}} \partial^{2}\Phi^{1}+\sqrt{\tfrac{2}{3}}\partial^{2}\Phi^{2}+\sqrt{\tfrac{2}{3}} \partial^{2}\Phi^{3}.\nonumber \end{eqnarray} \normalsize The free field realization of $W_{k}(\mathfrak{g},x,f)$ inside $V_{k}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})$ is induced by the canonical homomorphism $\mathfrak{g}_{\le}\rightarrow\mathfrak{g}_{0}$, so we obtain the free field realization simply by removing the terms that contain a current $v\in\mathfrak{g}_{\le}\backslash \mathfrak{g}_{0}$, i.e., the terms containing the $f$'s. For example, in the case $\{\alpha=1,k=-2/3\}$ the generators $G$ and $\Phi$ read: \begin{equation} G=\sqrt{\tfrac{3}{2}}:\Phi^{1}h_{1}:+\sqrt{\tfrac{3}{2}}:\Phi^{2}h_{2}: +\sqrt{\tfrac{3}{2}}:\Phi^{3}h_{3}:-\sqrt{\tfrac{2}{3}} \partial\Phi^{1}-\sqrt{\tfrac{2}{3}} \partial\Phi^{2}-\sqrt{\tfrac{2}{3}} \partial\Phi^{3},\nonumber \end{equation} \begin{equation}\label{expressionforPhi} \Phi=-6 :\Phi^{1}\Phi^{2}\Phi^{3}:+3 :\Phi^{2}h_{2}:-3:\Phi^{3}h_{3}:-2 \partial\Phi^{2}+2 \partial\Phi^{3}. \end{equation} Therefore we have proved: \sloppy \begin{theorem} \label{thm:main-thm} Let $V_{-2/3}(\mathfrak{h})$ be the affine vertex algebra of level $-2/3$ associated to $\mathfrak{h}$ with bilinear form $A$, and $F(\mathfrak{g}_{1/2})$ the vertex algebra of neutral free fermions as defined above. The vectors $G$ and $\Phi$ given by the expressions above generate the $SW(\frac{3}{2},\frac{3}{2},2)$ vertex algebra with $c=21/2$ and $\varepsilon=0$ inside $V_{-2/3}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})$. This vertex algebra is not simple, and dividing by the ideal (\ref{ideal}) we obtain the Shatashvili-Vafa $G_{2}$ superconformal algebra.
\end{theorem} \fussy \begin{remark} Note that $V_{-2/3}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})$ is isomorphic (by a linear transformation on the generators) to the vertex algebra of three free Bosons and three free Fermions with inner product minus the inverse of the Cartan matrix (\ref{Cartanmatrix}) of $D(2,1;1)\simeq osp(4|2)$. \end{remark} \begin{remark} This free field realization was found by Mallwitz \cite{Mallwitz95} using the most general ansatz on three free superfields of conformal weight $\frac{1}{2}$. By obtaining this realization from the quantum Hamiltonian formalism, we can find explicitly the screening operators associated with the reduction, as follows. \end{remark} First we rescale the currents $h\in V_{k}(\mathfrak{h})$ and consider instead $\bar{h}:=\frac{h}{\sqrt{k}}$, so that $V_{k}(\mathfrak{h})$ is identified as a vertex algebra with the Heisenberg algebra $V_{1}(\mathfrak{h})$ associated to $\mathfrak{h}$. Let $V_{Q}$ denote the lattice vertex algebra \cite{Kac96} associated to the root lattice $Q$ of $D(2,1;\alpha)$ (corresponding to the Cartan matrix fixed at the beginning of the section), i.e., we have three odd simple roots $\{\alpha_{1},\alpha_{2},\alpha_{3}\}$. Then for every lattice element $\alpha$ we have a $V_{1}(\mathfrak{h})$-module $M_{\alpha}$ and a vertex operator $\Gamma_{\alpha}$ which is an intertwiner of type $\binom{M_{0}}{M_{0} \; M_{\alpha}}$; hence its zero mode maps $V_{1}(\mathfrak{h})=M_{0}\rightarrow M_{\alpha}$.
Let $M_{-\alpha_{i}/\sqrt{k}}$ be the $V_{1}(\mathfrak{h})$-module with highest weight $-\alpha_{i}/\sqrt{k}$ and $\Gamma_{-\alpha_{i}/\sqrt{k}}$ the intertwiner constructed just as in the lattice case, so that \begin{equation} \left[{\bar{h_{i}}}_{\lambda}\Gamma_{-\alpha_{j}/\sqrt{k}}\right]=-\frac{(\alpha_{i},\alpha_{j})}{\sqrt{k}}\Gamma_{-\alpha_{j}/\sqrt{k}},\;\;\;\; \partial\left(\Gamma_{-\alpha_{j}/\sqrt{k}}\right)=-\bar{h}_{j}\Gamma_{-\alpha_{j}/\sqrt{k}}.\nonumber \end{equation} Define the operators \begin{equation} Q_{i}=:\Phi_{i}\Gamma_{-\alpha_{i}/\sqrt{k}}:\,\colon V_{1}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})\rightarrow M_{-\alpha_{i}/\sqrt{k}}\otimes F(\mathfrak{g}_{1/2}), \; i=1,2,3.\nonumber \end{equation} A straightforward computation using \cite{Thielemans91} shows that \begin{equation} W_{k}(\mathfrak{g},x,f)\simeq\bigcap_{i=1}^{3}Ker \;{Q_{i}}_{(0)}\subset V_{1}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2}),\nonumber \end{equation} \noindent i.e., this intersection of kernels equals the free field realization of $W_{k}(\mathfrak{g},x,f)$ inside $V_{k}(\mathfrak{h})\otimes F(\mathfrak{g}_{1/2})$ that we have produced above. \begin{remark} In fact a similar result can be obtained for the quantum Hamiltonian reduction of any simple Lie superalgebra when the nilpotent $f$ is \emph{super-principal}, that is, there exists an odd nilpotent $F \in \mathfrak{g}_{-1/2}$ with $[F,F]=f$ ($f \in \mathfrak{g}_{-1}$ being a principal nilpotent) and these two vectors together with $x$ form part of a copy of $osp(1|2) \subset \mathfrak{g}$. Not all Lie superalgebras admit a super-principal embedding; in particular, it is necessary that the superalgebra admit a root system in which all simple roots are odd. In this case, one takes $F = \sum_i e_{-\alpha_i}$, the sum of the negative simple root vectors.
The list of simple Lie superalgebras admitting an $osp(1|2)$ super-principal embedding consists of \begin{gather*} sl(n \pm 1|n), \qquad osp(2n\pm 1|2n), \qquad osp(2n|2n), \\ osp(2n +2 | 2n), \qquad D(2,1;\alpha). \end{gather*} In these cases we see that $\mathfrak{g}_{1/2}$ is naturally isomorphic to $\Pi \mathfrak{h}^*$, and we can form the Boson-Fermion system and the screening charges as above. The intersection of their kernels coincides with the quantum Hamiltonian reduction for generic levels. \end{remark}
\section{Conclusion and perspective} \label{conc_sec} Based on a functional LLN and a functional CLT for the rank invariant, the present work is the first step towards putting multi-parameter persistence on a sound statistical basis. However, in order to cover the variety of contexts where multi-parameter persistence is encountered in applications, more research is needed. First, on the modeling side, the assumption that the marks be independent of the location applies only to rather specific application contexts. A more flexible framework would consider the setting of geostatistical markings, where the marks are determined through a possibly correlated random field in the background \cite{geost}. On a complementary side, it is also of interest to work with marks that are determined entirely by the underlying point configuration, such as kernel-density estimators. The latter would present a refinement of the idea of the multicover bifiltration for delivering a persistence-based tool that is robust with respect to outliers. Second, in multi-parameter persistence, the rank invariant is only a rather cumbersome summary statistic for the rich structure inherent in multi-parameter persistence modules. Recently, signed barcodes were proposed as a possibility to extend many of the strong points of the classical barcode representation in single-parameter persistence to the multi-parameter setting \cite{signed}. Hence, it would be worthwhile to investigate to what extent the asymptotic results of the present work extend to this characteristic. Moreover, it would be exciting to develop statistics for the occurrence of specific indecomposable summands under different null hypotheses. The importance of this research direction for applications in materials science has also been stressed by Y.~Hiraoka in a recent series of talks and will be the topic of a manuscript \cite{shimizu}.
Finally, the simulation in Section \ref{sim_sec} was designed as a proof of concept, where we illustrate that the asymptotic normality is already visible in bounded sampling windows, and where we provide first indications that linear combinations of test statistics constructed from filtrations involving different parameters can outperform more classical single-parameter invariants. However, we have not touched upon the question of how to find a combination which is most powerful for discriminating the alternatives from the null model. More generally, more research is needed on how to design test statistics that combine the information residing in the different layers of multi-parameter persistence so as to deliver the best performance for a given testing problem. \section{Proof of Theorem \ref{clt_proc_thm}} \label{fclt_sec} The basis for the asymptotic normality of the persistent Betti numbers is a functional CLT. On a very general level, proving such a functional CLT involves two steps: 1) normality of the multivariate marginals, and 2) tightness. In Theorem \ref{clt_rank_thm} and the subsequent remark we established multivariate normality, so that it remains to verify tightness. To that end, we build on the blueprint used in \cite{krebs} and control moments of block increments. For multi-parameter persistence, three new challenges appear. First, \cite{krebs} was set in a quasi-1D domain, which simplified the cumulant expansion drastically. Second, more substantially, in multi-parameter persistence, we lose the interpretation of the increment $\beta_n(E)$ as the number of features with birth- and death times in certain intervals because the algebraic structure of multi-parameter persistence modules can be highly involved \cite{carlsson2009theory}. Third, the regularity conditions on the mark distribution need to be taken into account in the moment bound.
Although the general strategy applies to the combined model described in Example \ref{comb_exc}, writing out the proofs directly at this level of generality becomes cumbersome since 0-simplices in the $k$-cover filtration are represented by $k$-tuples of data points. Hence, we will generally first explain in detail the idea for $k = 1$, and then indicate how to adapt the arguments for the multicover bifiltration. In order to simplify notation, we henceforth fix the feature dimension $q \le d$ and do not highlight it further in the notation. A \emph{block} $E =\prod_{m \le 3}E_{\ms b,m} \times \prod_{m \le 3}E_{\ms d,m}$ is a product of intervals $E_{\ms b,m}, E_{\ms d,m} \subseteq [0, T]$. Writing $E_{\ms b,m} = [b_{-, m}, b_{+, m}]$ and $E_{\ms d,m} = [d_{-, m}, d_{+, m}]$, we then define the increment of the rank invariant in the block $E$ as the alternating sum \begin{align} \label{alt_eq} \beta_n(E) := \sum_{i, i', i'', j,j',j'' \in \{-, +\}} {i i' i'' jj'j''}\beta_n^{(b_{i, 1}, b_{i', 2}, b_{i'', 3}), (d_{j, 1}, d_{j', 2}, d_{j'', 3})}, \end{align} where $ii'i''jj'j''$ denotes the sign obtained as the product of the corresponding signs. For instance, in the single-parameter case, i.e., if $E_{\ms b,m} = E_{\ms d,m} = [0, T]$ for $m \in \{2, 3\}$, then $\beta_n(E)$ describes the number of features that are born within the interval $[b_{-, 1}, b_{+, 1}]$ and die in the interval $[d_{-, 1}, d_{+, 1}]$. Note that there is an issue with this definition of $\beta_n(E)$ if, for instance, $b_{+, 1} > d_{-, 1}$, since we would then need to compute an expression of the form $\beta_n^{(b_{+, 1}, b_{-, 2}, b_{-, 3}), (d_{-,1}, d_{-,2}, d_{-,3})}$ which is ill-defined.
However, we can actually exploit this situation to our advantage and set \begin{align*} \beta_n^{(b_{+, 1}, b_{-, 2}, b_{-, 3}), (d_{-,1}, d_{-,2}, d_{-, 3})} &:= \beta_n^{(b_{+, 1}, b_{-, 2}, b_{-, 3}), (b_{+,1}, d_{-,2}, d_{-, 3})} \\ &\phantom{:=}+ \beta_n^{(d_{-, 1}, b_{-, 2}, b_{-, 3}), (d_{-,1}, d_{-,2},d_{-,3})} \\ &\phantom{:=}- \beta_n^{(d_{-, 1}, b_{-, 2}, b_{-, 3}), (b_{+,1}, d_{-,2}, d_{-, 3})}. \end{align*} In particular, if $b_{-, 1} = d_{-, 1}$ and $b_{+, 1} = d_{+, 1}$, then $\beta_n(E) = 0$. Hence, when deriving bounds on $\beta_n(E)$, we may restrict to the cases where $b_{+, m} \le d_{-, m}$. Note that this extension of $\beta_n$ differs from the one considered in Section \ref{lln_sec}. However, since the claim in Theorem \ref{clt_proc_thm} only concerns the original and not the extended Betti number, this incompatibility does not cause any issues. The Chentsov condition in the form of \cite[Display (3)]{bickel} shows that to verify tightness, it suffices to produce $C, \varepsilon >0$ such that for the centered increment $\bar\beta_n(E) := \beta_n(E) - \mathbb E[\beta_n(E)]$ the moment condition \begin{align} \label{lav_eq} n^{-2d} \mathbb E[\bar\beta_n(E)^4] \le C |E|^{1 + \varepsilon} \end{align} holds for all $n \ge 1$ and all blocks $E \subseteq \mc S$. Note that if $E = E' \cup E''$ is the union of two adjacent blocks, then $$\bar\beta_n(E)^4 = \big(\bar\beta_n(E') + \bar\beta_n(E'')\big)^4 \le 2^4 \max\{\bar\beta_n(E')^4, \bar\beta_n(E'')^4\}\le 2^4\bar\beta_n(E')^4 + 2^4\bar\beta_n(E'')^4.$$ For the discrete index corresponding to the multicover filtration, this observation implies that it suffices to treat the case where the intervals pertaining to that index reduce to singletons. That is, $E_{\ms b,3} = \{b_3\}$ and $E_{\ms d,3} = \{d_3\}$. As in the proofs of \cite[Theorems 1, 2]{krebs}, we first show that it suffices to verify \eqref{lav_eq} for blocks belonging to a grid.
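In the single-parameter specialization described above, the alternating sum \eqref{alt_eq} reduces to an inclusion--exclusion over the four corners of a block. The following toy computation illustrates this counting interpretation; the persistence diagram, the function names, and the particular sign convention (chosen so that the increment is a nonnegative count) are purely illustrative and not part of the model:

```python
# Toy persistence diagram: a list of (birth, death) pairs.
diagram = [(0.1, 0.9), (0.2, 0.5), (0.3, 0.7), (0.6, 0.8)]

def rank(b, d):
    """Persistent Betti number beta^{b,d}: features born by b, still alive at d."""
    return sum(1 for (birth, death) in diagram if birth <= b and death > d)

def increment(b_lo, b_hi, d_lo, d_hi):
    """Inclusion-exclusion over the four corners of the block:
    counts features with birth in (b_lo, b_hi] and death in (d_lo, d_hi]."""
    return (rank(b_hi, d_lo) - rank(b_lo, d_lo)
            - rank(b_hi, d_hi) + rank(b_lo, d_hi))

print(increment(0.15, 0.35, 0.45, 0.75))  # -> 2: the features (0.2, 0.5) and (0.3, 0.7)
```

In the genuinely multi-parameter setting this counting interpretation is exactly what is lost, which is why the moment bounds below have to argue directly with the underlying geometric configurations.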
In a second step, we derive crucial variance- and cumulant bounds for block increments in such a grid. Here, the \emph{mixed cumulant} of random variables $Z_1, \dots, Z_4$ with finite fourth moment is given by \begin{align} \label{E:C4M1.1} c^4(Z_1, \dots, Z_4) = \sum_{\{N_1, \dots, N_r\}\prec\{1, \dots, 4\}}(-1)^{r - 1}(r - 1)! \mathbb E\Big[\prod_{i \in N_1}Z_i\Big] \cdots \mathbb E\Big[\prod_{i \in N_r}Z_i \Big], \end{align} where the sum is taken over all the partitions $N_1 \cup \cdots \cup N_r$ of $\{1, \dots, 4\}$; see \cite[Identity (3.9)]{raic}. Moreover, we put $c^4(Z) := c^4(Z, Z, Z, Z)$. In Sections \ref{grid_sec} and \ref{md_sec}, we prove the following key auxiliary steps. Henceforth, for a block $E = \prod_{m \le 3}E_{\ms b,m} \times \prod_{m \le 3}E_{\ms d,m}$ we set $E_m := E_{\ms b,m} \times E_{\ms d,m}$. Then, we say that $E$ is \emph{$n$-good} if $|E_1| \ge n^{-d - \varepsilon_{\ms B}}$ and $|E_2| \ge n^{-8d/5 - \varepsilon_{\ms B}}$, where we set $\varepsilon_{\ms B} := 1/(100d)$. \begin{proposition}[Reduction to grid] \label{grid_prop} If the Chentsov condition \eqref{lav_eq} holds for all $n \ge 1$ and all $n$-good $E \subseteq \SSS$, then the processes $\{\b_n^{\bb, \dd}\}_{\boldsymbol b, \boldsymbol d}$ are tight in the Skorokhod topology. \end{proposition} Next, we state the cumulant bounds. \begin{proposition}[Variance and cumulant bound] \label{var_prop} It holds that $$\sup_{\substack{n \ge 1\\ E\subseteq \SSS \emph{ $n$-good}}}\frac{\ms{Var}\big(\beta_n(E)\big) + c^4\big(\beta_n(E)\big)}{n^d |E_1|^{3/4 - \varepsilon_{\ms B}}|E_2|^{5/8 - \varepsilon_{\ms B}}}< \infty.$$ \end{proposition} We now explain how to complete the proof of Theorem \ref{clt_proc_thm}. \begin{proof}[Proof of Theorem \ref{clt_proc_thm}] First, we insert the bounds from Proposition \ref{var_prop} into the centered fourth moment expansion in terms of the variance and the fourth-order cumulant.
Thus, for some $C_{\ms{CV}} > 0$, \begin{align*} \mathbb E[\bar\beta_n(E)^4] &= 3\ms{Var}(\beta_n(E))^2 + c^4(\beta_n(E)) \\ &\le 3C_{\ms{CV}}^2 n^{2d} |E_1|^{3/2 - 2\varepsilon_{\ms B}}|E_2|^{5/4 - 2\varepsilon_{\ms B}} + C_{\ms{CV}} n^d |E_1|^{3/4 - \varepsilon_{\ms B}}|E_2|^{5/8 - \varepsilon_{\ms B}}\\ &\le 3C_{\ms{CV}}^2 n^{2d}|E|^{5/4 - 2\varepsilon_{\ms B}} + C_{\ms{CV}}|E|^{1 + \varepsilon_{\ms B}} n^d|E_1|^{-1/4 -2 \varepsilon_{\ms B}}|E_2|^{-3/8 -2 \varepsilon_{\ms B}}\\ &\le 3C_{\ms{CV}}^2 n^{2d}|E|^{5/4 - 2\varepsilon_{\ms B}} + C_{\ms{CV}}|E|^{1 + \varepsilon_{\ms B}} n^dn^{d/4 + 4d\varepsilon_{\ms B}}n^{3d/5 + 4d\varepsilon_{\ms B}}. \end{align*} Since $n^dn^{d/4 + 4d\varepsilon_{\ms B}}n^{3d/5 + 4d\varepsilon_{\ms B}} \le n^{2d}$, we conclude the proof. \end{proof} \input{grid} \input{md} \subsection{Proof of Proposition \ref{grid_prop}} \label{grid_sec} By the analog of \cite[Theorem 16.8]{billingsley} in the multivariate setting, proving tightness requires control of the modulus of continuity $$\omega'_\eta(\bar\beta_n) := \inf_\Lambda\max_{G\in\Lambda} \sup_{(\boldsymbol b, \boldsymbol d), (\boldsymbol b', \boldsymbol d') \in G} |\bar\beta_n^{\boldsymbol b, \boldsymbol d} - \bar\beta_n^{\boldsymbol b', \boldsymbol d'}|,$$ where the infimum extends over all $\eta$-grids $\Lambda$ in $\SSS$. To that end, we proceed in a similar vein as \cite[Proposition 5]{krebs}. More precisely, by invoking a result from \cite{davydov}, we show that it suffices to prove the claim for blocks from grids of the form $G_n = (n^{-\alpha_1}\mathbb Z \times n^{-\alpha_2}\mathbb Z)^2 $, $n \ge 1$ with $\alpha_1 := d/2 + \varepsilon_{\ms B}/2$ and $\alpha_2 := 4d/5 + \varepsilon_{\ms B}/2$.
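As a quick sanity check of the mixed-cumulant formula \eqref{E:C4M1.1} and of the expansion $\mathbb E[\bar\beta_n(E)^4] = 3\,\ms{Var}(\beta_n(E))^2 + c^4(\beta_n(E))$ used in the proof of Theorem \ref{clt_proc_thm} above, one can enumerate the set partitions of $\{1, \dots, 4\}$ directly. The following sketch does this for a Bernoulli variable, whose fourth cumulant $-1/8$ is known in closed form; the example and all function names are purely illustrative:

```python
from math import factorial, prod

def set_partitions(elems):
    """Enumerate all set partitions of the list `elems`."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for k in range(len(part)):            # put `first` into an existing block ...
            yield part[:k] + [part[k] + [first]] + part[k + 1:]
        yield [[first]] + part                # ... or open a new block

def c4(moment):
    """Fourth cumulant via the partition formula; moment(k) = E[Z^k]."""
    return sum((-1) ** (len(p) - 1) * factorial(len(p) - 1)
               * prod(moment(len(block)) for block in p)
               for p in set_partitions([1, 2, 3, 4]))

# Z ~ Bernoulli(1/2): E[Z^k] = 1/2 for every k >= 1.
m = lambda k: 0.5
kappa4 = c4(m)                 # known value: -1/8
var = m(2) - m(1) ** 2         # 1/4
central4 = 1 / 16              # E[(Z - 1/2)^4], since Z - 1/2 = +/- 1/2
assert abs(central4 - (3 * var ** 2 + kappa4)) < 1e-12
```

The same routine returns $0$ for the moments of a standard Gaussian, as it must, since all cumulants of order at least three vanish there.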
\begin{proof}[Proof of Proposition \ref{grid_prop}] To prove the claim, we will show that restricting $\bar\beta_n^{\boldsymbol b, \boldsymbol d}$ to values $(\boldsymbol b, \boldsymbol d)$ taken from a sufficiently fine grid does not decrease the modified modulus of continuity substantially. More precisely, let $\varepsilon, \delta >0$. Then, there exists $n_0 = n_0(\varepsilon, \delta)$ such that almost surely \begin{align} \label{dav_eq} \sup_{n \ge n_0}n^{-d/2}(\omega'_\delta(\bar\b_n^{\bb, \dd})- \omega'_\delta(\bar\b_n^{\bb, \dd}|_{G_n})) \le \varepsilon. \end{align} In order to prove \eqref{dav_eq}, we note that the process $\b_n^{\bb, \dd}$ is increasing in $\boldsymbol b$ and $(-\boldsymbol d)$. Hence, by \cite[Corollary 2]{davydov}, it suffices to control $$ \mathbb E\big[ \beta_n^{(b_1', b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})} - \beta^{(b_1, b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})}_n \big], \quad \mathbb E\big[\beta_n^{(b_1, b_2', k_{\ms b}), (d_1, d_2, k_{\ms d})} - \beta^{(b_1, b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})}_n \big] $$ for $|b_1' - b_1|\le n^{-\alpha_1}$, $|b_2' - b_2| \le n^{-\alpha_2}$ and $$\mathbb E\big[\beta_n^{(b_1, b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})} - \beta^{(b_1, b_2, k_{\ms b}), (d_1, d_2', k_{\ms d})}_n \big], \quad \mathbb E\big[\beta_n^{(b_1, b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})} - \beta^{(b_1, b_2, k_{\ms b}), (d_1', d_2, k_{\ms d})}_n \big] $$ for $ | d_1 - d_1'|\le n^{-\alpha_1}$, $ | d_2 - d_2'| \le n^{-\alpha_2}$. By Corollary \ref{cont_filt_cor} it suffices to bound the number of simplices with filtration time contained in the corresponding interval of length $n^{-\alpha_i}$. We explain in detail how to proceed for $b_1' - b_1$ and $b_2' - b_2$, noting that the arguments for $d_1 - d_1'$ and $d_2 - d_2'$ are similar. Moreover, to make the arguments more accessible, we first lay out in detail how to proceed for the single-cover case, i.e., where $k_{\ms b} = 1$. 
{$\boldsymbol{b_1' - b_1}.$} By applying the Mecke formula \cite[Theorem 4.4]{poisBook}, the expected number of $q$-simplices formed by Poisson points in $[0, n]^d$ is at most \begin{align*} &\mathbb E\big[\#\big\{X_{i_0}, \dots, X_{i_q} \in [0, n]^d \text{ pw.~distinct}\colon r(X_{i_0}, \dots, X_{i_q}) \in [b_1, b_1']\big\}\big] \\ &\quad \le \lambda^{q + 1}\int_{[0, n]^d} \int_{B_T(x_0)^q}\mathbbmss{1}\{r(x_0, x_1, \dots, x_q) \in [b_1, b_1']\} {\rm d} x_1\cdots {\rm d} x_q{\rm d} x_0\\ &\quad \le \lambda^{q + 1}n^d \int_{B_T^q(o)}\mathbbmss{1}\big\{r(o, y_1, \dots, y_q) \in [b_1, b_1']\big\} {\rm d} y_1\cdots {\rm d} y_q. \end{align*} Hence, by Lemma \ref{divol_lem}, the expected number of $q$-simplices with \v Cech filtration time in $[b_1, b_1']$ is at most $$c\lambda^{q + 1}n^d|B_T(o)|^q (b_1' - b_1) \le c\lambda^{q + 1}|B_T(o)|^q n^{d - \alpha_1},$$ which is of order $o(n^{d/2})$. {$\boldsymbol{b_2' - b_2}.$} Again, we apply the Mecke formula to bound the expected number of $q$-simplices with at least one vertex with mark in $[b_2, b_2']$ by \begin{align*} &\mathbb E\big[\#\big\{X_{i_0}\in [0, n]^d, X_{i_1}, \dots, X_{i_q} \in B_T(X_{i_0}) \text{ pw.~distinct}\colon \{M_{i_0}, M_{i_1}, \dots, M_{i_q}\} \cap [b_2, b_2'] \ne \varnothing\big\}\big] \\ &\quad \le \lambda^{q + 1}(q + 1)\int_{[0, n]^d} \int_{B_T(x_0)^q}\mathbb P(M \in [b_2, b_2']) {\rm d} x_1\cdots {\rm d} x_q{\rm d} x_0. \end{align*} Since the distribution function of the typical mark $M$ is H\"older-continuous, we deduce that the expected number of $q$-simplices with mark filtration time in $[b_2, b_2']$ is at most $$c\lambda^{q + 1}(q + 1) n^d|B_T(o)|^q (b_2' - b_2)^{5/8} \le c\lambda^{q + 1}(q + 1) |B_T(o)|^q n^{d - 5\alpha_2/8},$$ which is of order $o(n^{d/2})$. We now explain how to proceed when $k_{\ms b} = k > 1$. \\ {$\boldsymbol{b_1' - b_1}.$} We need to bound the number of $q$-simplices in the multicover filtration with filtration time in $[b_1, b_1']$.
Any such $q$-simplex has the form $$\sigma =(X_{i_{0, 1}}, \dots, X_{i_{0, k}}, \dots, X_{i_{q, 1}}, \dots, X_{i_{q, k}}),$$ with $r(\sigma) \in [b_1, b_1']$, where the appearing points need not be distinct. Hence, an application of the Mecke formula gives the bound $$\sum_{q \le \ell \le d} \mathbb E\big[\#\big\{X_{i_0}, \dots, X_{i_\ell} \in [0, n]^d \text{ pw.~distinct}\colon r(X_{i_0}, \dots, X_{i_\ell}) \in [b_1, b_1']\big\}\big].$$ From here, we proceed as in the case $k_{\ms b} = 1$. {$\boldsymbol{b_2' - b_2}.$} Here, we need to bound the expected number of $q$-simplices with at least one vertex with mark in $[b_2, b_2']$. But again, any such $q$-simplex has the form $$\sigma =(X_{i_{0, 1}}, \dots, X_{i_{0, k}}, \dots, X_{i_{q, 1}}, \dots, X_{i_{q, k}})$$ so that one of the associated marks $(M_{i_{0, 1}}, \dots, M_{i_{0, k}}, \dots, M_{i_{q, 1}}, \dots, M_{i_{q, k}})$ is contained in $[b_2, b_2']$. From here, we again proceed as in the case $k_{\ms b} = 1$. \end{proof} \section{Introduction} The goal of topological data analysis (TDA) is to infer the ``shape'' of data by means of topological invariants. One of the most notable such tools, \emph{persistent homology}, outputs a collection of intervals, the \emph{barcode}, that tracks the evolution of the topological (homological) features along a filtration of a simplicial complex. Short intervals are treated as ``noise'' and long bars as true topological signals, with the precise notion of what constitutes a long interval being application-dependent. For instance, the circular feature of the data set in Figure \ref{fig:circle} is readily deduced from the associated barcode shown in Figure \ref{fig:barcode-circle}. Note that it is also customary to visualize the collection of intervals as a \emph{persistence diagram} in which the interval $[a,b)$ in the barcode is plotted as the point $(a,b)\in \mathbb R^2$; see Figure \ref{pd_mc_fig} for an example.
The persistence diagram has now become a powerful tool to analyze complex phenomena in fields as diverse as astronomy, biology, materials science, medicine and neuroscience \cite{wasserman}. This rapid dissemination in the natural sciences has led to a vigorous research stream aiming to put TDA on a rigorous statistical foundation \cite{bendich,divol,jmva}. What makes persistent homology particularly appealing to the data analyst is the fact that it is stable: data sets that are close to each other (e.g.~in the Hausdorff distance) also have similar barcodes. At the same time, a drawback of persistent homology is its sensitivity to outliers. To see this, consider the same circle as before but with a few additional points scattered around as in Figure \ref{fig:circle-all-points}. The associated barcode in Figure \ref{fig:barcode-all} no longer suggests an underlying circular structure. A potential way to rectify this problem would be to consider only points above a certain density threshold, but this would in turn be very sensitive to the choice of threshold. See for instance Figure \ref{fig:circle-too-few} and its associated barcode in Figure \ref{fig:barcode-few}. Ideally, one would thus have a tool which allows one to deduce topological signatures across both scale and density. That is precisely what \emph{multi-parameter persistent homology} is about.
\begin{figure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{circle.png} \caption{} \label{fig:circle} \end{subfigure}% \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{circle-all-points.png} \caption{} \label{fig:circle-all-points} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{circle-too-few.png} \caption{} \label{fig:circle-too-few} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{barcode-circle.png} \caption{} \label{fig:barcode-circle} \end{subfigure}% \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{barcode-all-points.png} \caption{} \label{fig:barcode-all} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.9\linewidth]{barcode-too-few.png} \caption{} \label{fig:barcode-few} \end{subfigure} \caption{\textbf{(a)} A data set with a circular shape. \textbf{(b)} The data from (a) with added noise and colorized by a local density estimate. \textbf{(c)} The data points in (b) with a sufficiently high local density estimate. \textbf{(d)} The barcode of the data in (a). \textbf{(e)} The barcode of the data in (b). \textbf{(f)} The barcode of the data in (c). } \end{figure} While constructing multi-parameter persistent homology is straightforward, the transition from a single to multiple parameters comes at the price of a significant complexity increase of the underlying algebraic objects. This has prompted the introduction of novel invariants \cite{harrington2019stratifying,miller2020homological}, but so far much of the work has been centered around the idea of studying the family of barcodes obtained by restricting the indexing set to a set of straight lines \cite{cerri2013betti,rivet}.
The information contained in such restrictions is equivalent to the data of the \emph{rank invariant}: the collection of ranks between every pair of comparable points. Since the rank invariant is one of the few efficiently computable invariants \cite{botnan_et_al:LIPIcs:2020:12180,carlsson2009theory}, it offers a natural starting point for the development of a sound statistical foundation for multi-parameter persistent homology. More precisely, for a prototypical model in multi-parameter persistence, we prove a functional law of large numbers (LLN) and a functional central limit theorem (CLT) in large domains, thereby showing the strong consistency and asymptotic normality of a flexible class of test statistics. Our general framework comprises two examples that are of particular interest. Our first example of a bifiltered simplicial complex -- the \emph{marked \v Cech bifiltration} -- concerns locations that are scattered at random in a sampling window according to a Poisson point process and endowed with some independent marking. For instance, we may think of the points as measurement locations with the mark as the quantity that is being measured. In the axis of the point locations, we can consider the standard \v Cech filtration. Additionally, we can use sub-level sets of the mark space to define a second filtration, which effectively thins out the measurement locations whose mark does not exceed a certain value. Our second example, the \emph{multicover bifiltration} from \cite{osang}, connects to the topic of robustness to outliers mentioned above. This filtration also builds on locations scattered in a sampling window. The 1-cover corresponds to the ordinary \v Cech complex, which describes the topology of the union of growing balls centered at the locations. In general, the $k$-cover extracts more refined information, capturing the topology of the set covered by at least $k \ge 2$ of the balls.
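The robustness of the multicover bifiltration can already be seen from an elementary reformulation: a location $x$ is covered by at least $k$ balls of radius $r$ precisely when the $k$-th nearest data point lies within distance $r$ of $x$. The following one-dimensional sketch (sample, probe location, and all numbers are purely illustrative) shows that a single outlier distorts the 1-cover drastically but the 2-cover only marginally:

```python
def k_cover_radius(x, points, k):
    """Smallest radius r at which x lies in at least k of the balls B(p, r):
    the k-th smallest distance from x to the sample points."""
    return sorted(abs(x - p) for p in points)[k - 1]

sample = [0.0, 0.1, 0.2, 0.3]        # a tight cluster of data points
probe = 4.9                          # a location far away from the cluster

# Adding a single outlier next to the probe changes the 1-cover drastically ...
r1 = k_cover_radius(probe, sample, 1)              # approx. 4.6
r1_out = k_cover_radius(probe, sample + [5.0], 1)  # approx. 0.1
# ... but barely affects the 2-cover:
r2 = k_cover_radius(probe, sample, 2)              # approx. 4.7
r2_out = k_cover_radius(probe, sample + [5.0], 2)  # approx. 4.6
```

In higher feature dimensions the same mechanism prevents spurious topological features created by single stray points, which is exactly the behavior exploited by the multicover bifiltration.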
Hence, the corresponding characteristics will not be influenced by the occurrence of isolated outliers in atypical locations. The present work is the first step towards a rigorous statistical foundation of multi-parameter persistence. Moreover, on the methodological level, we introduce novel proof ideas in order to deal with the challenges of the multi-parameter setting in comparison to the established results in the literature \cite{krebs}. The most fundamental difference is that in the multi-parameter setting, there is no general analog of the persistence diagram. Thus, it is no longer possible to interpret the increments of the rank invariant in terms of topological features with birth- and death times, which requires a more detailed analysis of the relevant geometric configurations. Moreover, we need precise control over the H\"older continuity of the marks in order to establish the moment bounds in the tightness part of the CLT proof. The rest of the manuscript is organized as follows. First, Section \ref{mod_sec} contains a precise definition of the multi-parameter persistence model described above. Then, Section \ref{res_sec} presents the main results of this work, namely a functional strong LLN and a functional CLT for the rank invariant, thus yielding asymptotic normality for a flexible class of test statistics. In a simulation study in Section \ref{sim_sec}, we illustrate how to leverage the functional CLT in order to develop specific goodness-of-fit tests for different point patterns. Finally, in Section \ref{conc_sec}, we summarize the findings and provide an outlook on further research. The detailed proofs of the main results are then given in Section \ref{prel_sec} and the subsequent sections. \section{Proof of Theorem \ref{lln_proc_thm}} \label{lln_sec} To prove the functional convergence asserted in Theorem \ref{lln_proc_thm}, we build on a trick from \cite[Proposition 4.2]{thomas} and leverage the monotonicity properties of persistent Betti numbers.
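The essence of the trick is a P\'olya-type discretization argument: for monotone functions converging pointwise to a limit with a known modulus of continuity, the supremum error is dominated by the error on a finite grid plus the modulus. A toy numerical sketch, in which the functions $F_n$ and $F$ are merely illustrative stand-ins for the rescaled persistent Betti numbers and their limit:

```python
import math

def F(t):
    """Continuous, monotone limit (1-Lipschitz on [0, 1])."""
    return t

def F_n(t, n):
    """Monotone approximations converging pointwise to F."""
    return math.floor(n * t) / n

n, delta = 50, 0.05
grid = [i * delta for i in range(21)]   # 0, 0.05, ..., 1.0

# Monotonicity of F_n plus continuity of F reduce the sup-norm error
# to the error on the grid plus the modulus of continuity of F:
grid_err = max(abs(F_n(g, n) - F(g)) for g in grid)
bound = grid_err + delta                # omega(delta) = delta for a 1-Lipschitz limit

# Brute-force supremum over a fine mesh confirms the bound.
sup_err = max(abs(F_n(t, n) - F(t)) for t in (i / 10_000 for i in range(10_001)))
assert sup_err <= bound
```

The proofs below run the same argument with $\beta_n^{\boldsymbol b, \boldsymbol d}$, which is monotone in $\boldsymbol b$ and $\boldsymbol d$ separately, in place of the one-dimensional toy functions.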
First, we explain how to derive Theorem \ref{lln_proc_thm} from a central continuity statement of the limiting persistent Betti number. On the geometric side, the key ingredient is the following continuity property of \v Cech filtration times from \cite[Lemma 6.10]{divol}. \begin{lemma}[Lipschitz continuity of \v Cech filtration times] \label{divol_lem} Let $ q \le d$ and $\rho > 0$. Let $X_0,\dots, X_q $ be iid uniform in a box $Q \subseteq \mathbb R^d$. Then, the function $F(r) := \mathbb P\big( r(X_0,\dots,X_q) \le r \big)$ is Lipschitz continuous on $[0, T]$. \end{lemma} Expressing the uniform distribution via the Lebesgue measure, we can also rewrite $F(r)$ in integral form $$F(r) = |Q|^{-d(q + 1)}\int_{Q^{q + 1}} \mathbbmss{1}\{r(x_0, \dots, x_q) \le r\} {\rm d} (x_0, \dots, x_q).$$ In order to carry out monotonicity arguments, it will be convenient to extend the definition of $\b_{q, n}^{\bb, \dd}$ from indices $\boldsymbol b$, $\boldsymbol d$ satisfying $\boldsymbol b \le \boldsymbol d$ to general pairs by setting $\b_{q, n}^{\bb, \dd} := \beta_{q, n}^{\boldsymbol b, \max\{\boldsymbol b, \boldsymbol d\}}$, the maximum being taken pointwise. \begin{lemma}[Continuity of $\bar\beta_q^{\boldsymbol b, \boldsymbol d}$] \label{cont_lim_lem} Assume that the distribution of the typical mark does not contain atoms. Then, $\bar\beta_q^{\boldsymbol b, \boldsymbol d}$ is continuous in the indices $(\boldsymbol b, \boldsymbol d) \in \mc S^2$. \end{lemma} Fixing a value $q \le d$, we suppress this parameter in the notation and simply write $\bar\beta^{\boldsymbol b, \boldsymbol d}$ and $\beta_n^{\boldsymbol b, \boldsymbol d}$. \begin{proof}[Proof of Theorem \ref{lln_proc_thm}] Since the cover parameter $k \in \{1, \dots, T\}$ only takes a finite number of discrete values, we may fix $k_{\ms b}, k_{\ms d} \in \{1, \dots, T\}$ corresponding to the birth time and the death time for that parameter, respectively. 
Then, to simplify notation, for $\boldsymbol b, \boldsymbol d \in [0, T]^2$, we write $\beta_n^{\boldsymbol b, \boldsymbol d}$ instead of $\beta_n^{(\boldsymbol b, k_{\ms b}), (\boldsymbol d, k_{\ms d})}$, and similarly $\bar\b^{\boldsymbol b, \boldsymbol d}$ instead of $\bar\b^{(\boldsymbol b, k_{\ms b}), (\boldsymbol d, k_{\ms d})}$. We split up the absolute value $|\beta_n^{\boldsymbol b, \boldsymbol d} - \bar\b^{\boldsymbol b, \boldsymbol d}|$ into the positive and the negative part, which are considered separately.\\ \noindent {$\boldsymbol{\sup_{\boldsymbol b, \boldsymbol d \in [0, T]^2} (\bar\b^{\boldsymbol b, \boldsymbol d } - \beta_n^{\boldsymbol b, \boldsymbol d} )\to 0}$.} Let $\varepsilon > 0$. By Lemma \ref{cont_lim_lem}, $\bar\b^{\boldsymbol b, \boldsymbol d}$ is continuous on the compact domain $[0, T]^4$ so that there exists $\delta > 0$ such that $|\bar\b^{\boldsymbol b, \boldsymbol d} - \bar\b^{\boldsymbol b', \boldsymbol d'}| < \varepsilon$ whenever $|(\boldsymbol b, \boldsymbol d) - (\boldsymbol b', \boldsymbol d')|_\infty < \delta$. Note that $\beta_n^{\boldsymbol b, \boldsymbol d}$ is increasing in $\boldsymbol b$ and decreasing in $\boldsymbol d$. 
Thus, writing $\mc S_{\delta, T} := [0, T]^2 \cap (\delta\mathbb Z)^2$, \begin{align*} \sup_{\boldsymbol b, \boldsymbol d \in \mc S^2} (\bar\b^{\boldsymbol b, \boldsymbol d}- \beta_n^{\boldsymbol b, \boldsymbol d} ) &= \max_{\boldsymbol b', \boldsymbol d' \in\mc S_{\delta, T} }\sup_{(\boldsymbol b, \boldsymbol d) \in (\boldsymbol b', \boldsymbol d') + [0, \delta]^2\times [-\delta, 0]^2 }(\bar\b^{\boldsymbol b, \boldsymbol d} - \beta_n^{\boldsymbol b, \boldsymbol d})\\ &\le \max_{\boldsymbol b', \boldsymbol d' \in\mc S_{\delta, T} }\sup_{(\boldsymbol b, \boldsymbol d)\in (\boldsymbol b', \boldsymbol d') + [0, \delta]^2\times [-\delta, 0]^2 }\Big((\bar\b^{\boldsymbol b, \boldsymbol d} - \bar\b^{\boldsymbol b', \boldsymbol d'}) + (\bar\b^{\boldsymbol b' , \boldsymbol d' } - \beta_n^{\boldsymbol b', \boldsymbol d'})\Big)\\ &\le \varepsilon + \max_{\boldsymbol b', \boldsymbol d' \in\mc S_{\delta, T} }(\bar\b^{\boldsymbol b' , \boldsymbol d' } - \beta_n^{\boldsymbol b', \boldsymbol d'}). \end{align*} Now, applying Theorem \ref{lln_rank_thm} shows that the maximum in the last line tends to 0 as $n \to \infty$, thereby concluding the proof.\\ \noindent {$\boldsymbol{\sup_{\boldsymbol b \le \boldsymbol d \in [0, T]^2} (\beta_n^{\boldsymbol b, \boldsymbol d} - \bar\b^{\boldsymbol b, \boldsymbol d })\to 0}$.} The argumentation is very similar to the setting considered above. More precisely, choosing $\boldsymbol b', \boldsymbol d' \in \delta \mathbb Z^2$ such that $(\boldsymbol b, \boldsymbol d)$ is contained in $(\boldsymbol b', \boldsymbol d') + [-\delta, 0]^2\times [0, \delta]^2$ gives that \begin{align*} \beta_n^{\boldsymbol b, \boldsymbol d} - \bar\b^{\boldsymbol b, \boldsymbol d} &\le (\beta_n^{\boldsymbol b', \boldsymbol d'} - \bar\b^{\boldsymbol b', \boldsymbol d'}) + (\bar\b^{\boldsymbol b' , \boldsymbol d' } - \bar\b^{\boldsymbol b , \boldsymbol d }) \le (\beta_n^{\boldsymbol b', \boldsymbol d'} - \bar\b^{\boldsymbol b', \boldsymbol d'}) + \varepsilon.
\end{align*} We can now take $n$ large enough such that $(\beta_n^{\boldsymbol b', \boldsymbol d'} - \bar\b^{\boldsymbol b', \boldsymbol d'})$ is smaller than $\varepsilon$ uniformly over all $\delta$-discretized indices. \end{proof} The idea for the proof of Lemma \ref{cont_lim_lem} is to reduce to the continuity of $n^{-d}\mathbb E[\beta_n^{\boldsymbol b, \boldsymbol d}]$ in $(\boldsymbol b, \boldsymbol d)$ uniformly over all $n \ge 1$, which is a consequence of stationarity. \begin{proof}[Proof of Lemma \ref{cont_lim_lem}] We may fix the discrete values $k_{\ms b}, k_{\ms d} \in \{1, \dots, T\}$ of the cover parameter at the birth index and at the death index, respectively. By Theorem \ref{lln_rank_thm}, it suffices to show that $n^{-d}\mathbb E[\b_{q, n}^{\bb, \dd}]$ is continuous at each $(\boldsymbol b, \boldsymbol d) \in [0, T]^4$ uniformly in $n \ge 1$. To that end, fix $\varepsilon > 0$ and $(\boldsymbol b, \boldsymbol d) \in [0, T]^4$. Now, in order to bound $\mathbb E[|\b_n^{\bb, \dd} - \b_n^{\bb', \dd'}|]$ for an arbitrary $(\boldsymbol b', \boldsymbol d') \in [0, T]^4$, we proceed stepwise and investigate the changes when modifying one of the indices at a time. More precisely, applying Corollary \ref{cont_filt_cor} twice, $$n^{-d}\mathbb E[|\beta_{q, n}^{\boldsymbol b', \boldsymbol d} - \b_n^{\bb', \dd'}|] \le n^{-d}\big( \mathbb E\big[|K_{(\boldsymbol d', k_{\ms d}), n}^{q + 1} \Delta K_{(d_1', d_2, k_{\ms d}), n}^{q + 1}|\big]+ \mathbb E\big[|K_{(\boldsymbol d, k_{\ms d}), n}^{q + 1} \Delta K_{(d_1', d_2, k_{\ms d}), n}^{q + 1}|\big]\big),$$ where $\Delta$ denotes the symmetric difference. The main part of the proof is to show that there exists $\delta > 0$ such that the right-hand side is smaller than $\varepsilon$ whenever $|d_1 - d_1'| \vee |d_2 - d_2'| \le \delta$.
This choice of $\delta$ is uniform with respect to all $n \ge 1$ and all $d_1, d_1', d_2, d_2'$ satisfying $|d_1 - d_1'| \vee |d_2 - d_2'| \le \delta$. Arguing in the same manner for the indices in $\boldsymbol b'$ then concludes the proof. We start by explaining in detail how to proceed for $k_{\ms d} = 1$. Afterwards, we elucidate how to argue for general $k_{\ms d} \in \{1, \dots, T\}$. Moreover, we assume that $d_1 \le d_1'$, $d_2 \le d_2'$; the arguments are identical for the other possibilities. Let $\mathbb P^0$ be the Palm probability for the marked point process $X = \{(X_i, M_i)\}_{i \ge 1}$, \cite[Definition 9.3]{poisBook}. That is, for any nonnegative measurable $f$, $$\mathbb E^0[f(X)] = \f1\lambda\mathbb E\Big[\sum_{X_i \in [0, 1]^d} f(X - X_i)\Big].$$ If a $(q + 1)$-simplex $\sigma = (X_{i_0}, \dots, X_{i_{q + 1}})$ is contained in $K_{(\boldsymbol d, k_{\ms d}), n}^{q + 1} \Delta K_{(\boldsymbol d', k_{\ms d}), n}^{q + 1}$, then $r(\sigma) \in [d_1, d_1']$ or $M_{i_j} \in [d_2, d_2']$ for some $j \le q + 1$. Writing $N_{1, n}, N_{2, n}$ for the number of ordered $(q + 1)$-simplices satisfying the first, respectively the second condition, it therefore suffices to show that there exists $\delta > 0$ such that $n^{-d} \mathbb E[N_{i, n}] < \varepsilon$ whenever $d_i' - d_i < \delta$. To achieve this goal, we note that \begin{align*} \mathbb E[N_{1, n}] &\le \mathbb E\Big[\sum_{X_0 \in [0, n]^d}\#\{X_1, \dots, X_{q + 1} \in B_T(X_0) \text{ pw.~distinct}\colon r(\sigma(X_0, \dots, X_{q + 1})) \in [d_1, d_1'] \}\Big]\\ &\le \lambda n^d \int_{\mathbb R^{d(q + 1)}}\mathbbmss{1}\big\{r(\sigma(o, y_1, \dots, y_{q + 1})) \in [d_1, d_1']\big\} \alpha_{q + 1}^!({\rm d} y_1,{\rm d} y_2, \dots, {\rm d} y_{q + 1}).
\end{align*} Since $\alpha_{q + 1}^!$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb R^{d(q + 1)}$, we deduce from Lemma \ref{divol_lem} that there exists some $\delta > 0$ such that $n^{-d}\mathbb E[N_{1,n}] < \varepsilon$ provided that $d_1' - d_1 < \delta$. Similarly, any point $X_i \in X$ is contained in at most $X(B_T(X_i))^{q + 1}$ many $(q + 1)$-simplices of side length at most $T$. Hence, by the Cauchy-Schwarz inequality, \begin{align*} \mathbb E[N_{2, n}] &\le \mathbb E\Big[\sum_{X_0 \in [0, n]^d}X(B_T(X_0))^{q + 1}\mathbbmss{1}\{M_0 \in [d_2, d_2']\}\Big]\\ &= \lambda n^d \mathbb E^0\big[X(B_T(o))^{q + 1} \mathbbmss{1}\{M_0 \in [d_2, d_2']\}\big]\\ &\le \lambda n^d \sqrt{\mathbb E^0\big[X(B_T(o))^{2q + 2}\big]} \sqrt{\mathbb P^0(M_0 \in [d_2, d_2'])} \end{align*} so that again there exists some $\delta > 0$ such that $n^{-d}\mathbb E[N_{2,n}] < \varepsilon$ provided that $d_2' - d_2 < \delta$. For general $k_{\ms d} \in \{1, \dots, T\}$, we proceed essentially as above except that a simplex $\sigma$ is now described by $k_{\ms d}(q + 2)$ not necessarily distinct points, i.e., $$\sigma =(X_{i_0, 1}, \dots,X_{i_0, k_{\ms d}}, \dots, X_{i_{q + 1}, 1}, \dots,X_{i_{q + 1}, k_{\ms d}}).$$ Still, the same computation as for the marked \v C-bifiltration shows that $n^{-d}\mathbb E[N_{i, n}]$ becomes small uniformly in $n$ provided that $d_i'$ is sufficiently close to $d_i$. \end{proof} \subsection{Proof of Proposition \ref{var_prop}} \label{md_sec} In broad strokes, we follow the proof strategy of \cite[Theorems 1 \& 2]{krebs} and expand $\bar\beta_n(E)$ as a sum of martingale differences. More precisely, we denote by $\{z_1, z_2, \dots, z_{n^d}\}$ the enumeration of $\{0, 1, \dots, n - 1\}^d$ according to the lexicographic order $\le_{\mathsf{lex}}$.
Let $$\mc G_i := \sigma\big( \cup_{z \le_{\mathsf{lex}} z_i}(X \cap (z + [0, 1]^d))\big)$$ denote the $\sigma$-algebra generated by the Poisson points in cubes $z + [0,1]^d$ attached to lattice points $z \in \mathbb Z^d$ preceding $z_i$ in the lexicographic order. Then, $\bar\beta_n(E)$ admits the martingale-difference decomposition \begin{align} \label{mdd_eq} \bar\beta_n(E) = \sum_{i \le n^d}D_{i, n}(E), \end{align} where $D_{i, n}(E) := \mathbb E[\beta_n(E)|\, \mc G_i ] - \mathbb E[\beta_n(E)|\, \mc G_{i - 1}]$. In particular, $\ms{Var}(\beta_n(E)) = \sum_{i \le n^d} \ms{Var}(D_{i, n}(E))$, so that to bound the variance, it remains to control the second moments of $D_{i, n}(E)$ uniformly in $i \le n^d$ and $n \ge 1$ by a quantity of order $O(|E|^{1/2 + \varepsilon})$. The proof of Lemma \ref{diffBoundLem} is deferred to the end of this subsection. \begin{lemma}[Moment bound] \label{diffBoundLem} Let $k \ge 1$. Then, $$\sup_{\substack{n \ge 1, i \le n^d\\ E\subseteq \SSS }}\frac{\mathbb E[|D_{i, n}(E)|^k]}{ |E_1|^{3/4 - \varepsilon_{\ms B}/2}|E_2|^{5/8 - \varepsilon_{\ms B}/2}}< \infty.$$ \end{lemma} While the moment bounds from Lemma \ref{diffBoundLem} are enough to control the variance, dealing with the cumulants also requires the control of correlations. Since the proof of Lemma \ref{covBoundLem} can be copied from \cite{krebs}, we omit it at this point. \begin{lemma}[Covariance bound] \label{covBoundLem} For every $p_1, p_2 \ge 1$ there exist $C_{p_1, p_2}, C_{p_1, p_2}' > 0$ with the following property. Let $n \ge 1$ and $A_1, A_2 \subseteq \{1, \dots, n^d\}$ with $|A_1| = p_1$, $|A_2| = p_2$, and set $X_{1, n}(E) = \prod_{i \in A_1}D_{i, n}(E)$ and $X_{2, n}(E) = \prod_{j \in A_2 }D_{j, n}(E)$.
Then, $$\ms{Cov}\big(X_{1, n}(E), X_{2, n}(E)\big) \le C_{p_1, p_2} \exp\big(-\ms{dist}(\{z_i\}_{i \in A_1},\{z_j\}_{j \in A_2})^{C_{p_1, p_2}'}\big)\sqrt{\mathbb E[X_{1, n}(E)^4]\mathbb E[X_{2, n}(E)^4]}.$$ \end{lemma} We now elucidate how to complete the proof of Proposition \ref{var_prop} by extending the arguments from the cylindrical setup of \cite{krebs} to the general case considered here. After that, we prove Lemma \ref{diffBoundLem}. \begin{proof}[Proof of Proposition \ref{var_prop}, cumulant bound] We henceforth write $D_i$ for $D_{i, n}(E)$ and $\beta_n$ for $\beta_n(E)$. Combining the martingale decomposition in \eqref{mdd_eq} with the multilinearity of cumulants yields that \begin{align}\label{Cum_LargeBlocks} c^4(\bar\beta_n(E)) = \sum_{ i ,j ,k ,\ell }a_{i, j, k, \ell} \ c^4\big(D_i, D_j, D_k, D_\ell\big), \end{align} where the $a_{i, j, k, \ell} \ge 1$ are suitable combinatorial coefficients, depending only on which of the indices $i, j, k, \ell$ are equal. In order to bound the right-hand side of \eqref{Cum_LargeBlocks}, we set $\delta := \varepsilon_{\ms B}/1000$ and distinguish three cases. Every other case can be reduced to one of the three after permuting $(i, j, k, \ell)$. \begin{enumerate} \item\label{cum_a} $\ms{diam}(\{z_i, z_j, z_k, z_\ell\}) < |E|^{-\delta}$, \item\label{cum_b} $\ms{dist}(z_i, \{z_j, z_k, z_\ell\}) \ge |E|^{-\delta}$, \item\label{cum_c} $\ms{dist}(\{z_i, z_j\}, \{z_k, z_\ell\}) \ge |E|^{-\delta}$ and $|z_i - z_j| \vee |z_k - z_\ell|\le |E|^{-\delta}$. \end{enumerate} \noindent{\bf \eqref{cum_a}.} We need to bound the partial sum $$\sum_{i,j,k,\ell \colon \eqref{cum_a}}a_{i, j, k, \ell} c^4(D_i, D_j, D_k, D_\ell).$$ Here, the sum is taken over all indices $i, j, k, \ell$ for which condition \eqref{cum_a} holds. To achieve this goal, we leverage the H\"older inequality and the representation in \eqref{E:C4M1.1}.
Hence, \begin{align*} \big|c^4(D_i, D_j, D_k, D_\ell)\big| &\le \hspace{-.5cm}\sum_{\{N_1, \dots, N_r\}\prec\{i, \dots, \ell\}}\hspace{-.5cm}a_{\{N_1, \dots, N_r\}}' \prod_{m \in N_1}\hspace{-.1cm} \mathbb E[ |D_m|^{|N_1|} ]^{\f1{|N_1|}} \cdots \prod_{m \in N_r}\hspace{-.1cm} \mathbb E[ |D_m|^{|N_r|} ]^{\f1{|N_r|}}\\ &\le c \sup_{n \ge 1}\max_{\substack{k \le 4 \\ m \le n}}\mathbb E[|D_m|^k] \end{align*} with the coefficients $a_{\{N_1, \dots, N_r\}}'$ only depending on the structure of the partition. Now, by Lemma \ref{diffBoundLem} there exists $c' > 0$ such that $\mathbb E[|D_m|^k] \le c'|E_1|^{3/4 - \varepsilon_{\ms B}/2}|E_2|^{5/8 - \varepsilon_{\ms B}/2}$. Hence, $$\sum_{i,j,k,\ell \colon \eqref{cum_a}}a_{i, j, k, \ell} c^4(D_i, D_j, D_k, D_\ell)\le 8cc'n|E|^{5/8 -\varepsilon_{\ms B}/2}|E|^{-3\delta},$$ and the right-hand side is in $O(n|E|^{5/8 - \varepsilon_{\ms B}}).$ \noindent{\bf \eqref{cum_b} \& \eqref{cum_c}.} To avoid redundancy, we provide only the details for \eqref{cum_b}. That is, we control the sum $$\sum_{i,j,k,\ell \colon \eqref{cum_b}}a_{i, j, k, \ell} c^4(D_i, D_j, D_k, D_\ell).$$ To that end, we proceed along the lines of \cite[Proposition 9]{krebs}. More precisely, invoking the semi-cluster decomposition from \cite{barysh, raic} allows us to express the individual cumulants in the form \begin{align*} c^4(D_i, D_j, D_k, D_\ell) =\hspace{-.5cm}\sum_{\{N_1, \dots, N_r\}\prec\{j, k, \ell\}} \hspace{-.5cm}a_{\{N_1, \dots, N_r\}}'\ms{Cov}\Big(D_i, \prod_{s \in N_1}D_s\Big)\mathbb E\Big[\prod_{s \in N_2}D_s\Big]\cdots \mathbb E\Big[\prod_{s \in N_r}D_s\Big] \end{align*} for some coefficients $a_{\{N_1, \dots, N_r\}}'$ only depending on the structure of the partition. Hence, combining the moment bounds from Lemma \ref{diffBoundLem} with the covariance bounds from Lemma \ref{covBoundLem} concludes the proof. \end{proof} As in the previous proof, we henceforth write $D_i$ for $D_{i, n}(E)$ and $\beta_n$ for $\beta_n(E)$.
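Before turning to the proof, the structure of the martingale-difference decomposition \eqref{mdd_eq} can be illustrated on a toy functional. The following sketch is our own illustration and is not part of the argument: it replaces $\beta_n$ by the hypothetical functional $f = \sum_i N_i^2$ of independent Poisson cell counts, for which the conditional expectations $\mathbb E[f \mid \mc G_i]$ are available in closed form, and verifies numerically that the differences $D_i$ telescope to the centered functional $f - \mathbb E[f]$.

```python
import math
import random

def poisson(lam):
    """Sample Poisson(lam) via Knuth's product method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def martingale_differences(counts, lam):
    """D_i = E[f | G_i] - E[f | G_{i-1}] for the toy functional
    f = sum of squared cell counts, where G_i reveals the first i cells."""
    n = len(counts)
    m2 = lam + lam ** 2  # E[N^2] for N ~ Poisson(lam)
    # cond[i] = E[f | G_i]: revealed cells enter exactly, the rest in expectation
    cond = [sum(c ** 2 for c in counts[:i]) + (n - i) * m2 for i in range(n + 1)]
    return [cond[i] - cond[i - 1] for i in range(1, n + 1)]

random.seed(0)
lam, n = 2.0, 50
counts = [poisson(lam) for _ in range(n)]
D = martingale_differences(counts, lam)
f = sum(c ** 2 for c in counts)
# the differences telescope to the centered functional, as in (mdd_eq)
assert abs(sum(D) - (f - n * (lam + lam ** 2))) < 1e-9
```

Because the cells are independent here, $D_i = N_i^2 - \mathbb E[N_i^2]$ exactly; the point of the decomposition in the proof is that the same orthogonality $\ms{Var}(f) = \sum_i \ms{Var}(D_i)$ persists for the strongly dependent functional $\beta_n$.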
\begin{proof}[Proof of Lemma \ref{diffBoundLem}] First, we rely on another representation of $D_{i, n}$. Namely, for $i \le n^d$, we set $$X^{(i, n)} :=\big(X^{(n)}\setminus (z_i +[0, 1]^d)\big) \cup \big(\widetilde{X}^{(n)}\cap (z_i + [0,1]^d)\big),$$ where $\widetilde{X}^{(n)}$ is an independent copy of $X^{(n)}$. Then, $\mathbb E[\beta_n|\, \mc G_{i - 1}] = \mathbb E[\b_{i, n} |\, \mc G_i]$, where $\b_{i, n} := \beta_n(X^{(i, n)})$. Since $D_{i, n} = \mathbb E[\De_{i, n} |\, \mc G_i ]$, where $\De_{i, n} := \beta_n - \b_{i, n}$, it suffices to bound the moments of $|\De_{i, n}|$. Next, we may restrict to simplices that are in a connected component intersecting $z_i + [0, 1]^d$. Indeed, let $X^{(n, *)}$ denote all points of $X^{(n)}$ whose connected component in $\bigcup_{X_i \in X^{(n)}} B_T(X_i)$ intersects $z_i + [0, 1]^d$, and define $X^{(i, n, *)}$ similarly. Then, $$\beta_n = \beta_n(X^{(n, *)}) + \b_{i, n}(X^{(n)} \setminus X^{(n, *)}) \,\text{ and }\,\b_{i, n} = \b_{i, n}(X^{(i, n, *)}) + \b_{i, n}(X^{(i, n)} \setminus X^{(i, n, *)}).$$ Since $X^{(i, n)} \setminus X^{(i, n, *)} = X^{(n)} \setminus X^{(n, *)},$ we conclude that the difference can be computed with respect to $X^{(n, *)}$ and $X^{(i, n, *)}$. Setting $K := 500$, $p:= Kk(q + 1)$ and using that $\beta_n(X^{(n, *)})$ and $\b_{i, n}(X^{(i, n, *)})$ are bounded above by the number of $q$-simplices in $X^{(n, *)}$ and $X^{(i, n, *)}$, respectively, the H\"older inequality gives that \begin{align*} \mathbb E[|\De_{i, n}|^k] &\le \mathbb E\Big[ \mathbbmss{1}\{\De_{i, n} \ne 0\}\Big((\#X^{(n, *)})^{q + 1}+ (\#X^{(i, n, *)})^{q + 1}\Big)^k\Big] \\ &\le 2^k\mathbb P(\b_{i, n}(X^{(n, *)}) + \b_{i, n}(X^{(i, n, *)}) \ne 0)^{1 - 1/K}\mathbb E\big[(\#X^{(n, *)})^p+ (\#X^{(i, n, *)})^p\big]^{1/K}\\ &\le 2^{k + 1}\mathbb P\big(\b_{i, n}(X^{(n, *)}) \ne 0\big)^{1 - 1/K}\mathbb E\big[(\#X^{(n, *)})^p\big]^{1/K}.
\end{align*} Since we assumed $X$ to be in the sub-critical regime of continuum percolation, the expected value in the last line is bounded above by a finite constant not depending on $i$ or $n$. It remains to bound the probability $\mathbb P\big(\b_{i, n}(X^{(n, *)}) \ne 0\big)$. Now, we define $$N_{i, n} := \min\big\{\ell \ge 1\colon X^{(n, *)} \subseteq Q_\ell(z_i)\big\}$$ as the size of the smallest box $Q_\ell(z_i) := z_i + [-\ell/2, \ell/2]^d$ centered at $z_i$ containing $X^{(n, *)}$. Then, \begin{align*} \mathbb P\big(\b_{i, n}(X^{(n, *)}) \ne 0\big)&= \sum_{\ell \ge 1}\mathbb P(\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell) \\ &\le \sum_{\ell \ge 1}\mathbb P(N_{i, n} = \ell)^{1/K}\mathbb P(\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell)^{1- 1/K}. \end{align*} Since $X$ is in the sub-critical regime of continuum percolation, we deduce that the tail probabilities $\sup_{n \ge 1}\sup_{i \le n^d}\mathbb P(N_{i, n} = \ell)$ decay exponentially fast in $\ell$. Hence, it suffices to show that for some $c > 0$, \begin{align} \label{dnzl_eq} \mathbb P(\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell) \le c \ell^{d(2q + 5)} |E_1|^{3/4}|E_2|^{5/8}. \end{align} The key to \eqref{dnzl_eq} is to show that \begin{align} \label{dnzl_r_eq} \{\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell\} \subseteq F_{1, \ell} \cap F_{2, \ell}, \end{align} where \begin{align*} F_{1, \ell} := \big\{&\big(r(\sigma), r(\sigma')\big) \in E_1 \text{ for some $q$- and $(q + 1)$-simplices $\sigma$, $\sigma'$ contained in $Q_\ell(z_i)$}\big\} \end{align*} describes the event that there is at least one $q$-simplex with filtration time in $[b_{1, -}, b_{1, +}]$ and at least one $(q + 1)$-simplex with filtration time in $[d_{1, -}, d_{1, +}]$, and where $$F_{2, \ell} := \big\{(M_k , M_j) \in E_2 \text{ for some $X_k, X_j \in Q_\ell(z_i)$} \big\}$$ denotes the event of finding marks in $E_2$. Now, we prove \eqref{dnzl_r_eq}. 
First, if there is no $q$-simplex with filtration time in $[b_{1, -}, b_{1, +}]$, then in the alternating sum \eqref{alt_eq}, the contributions from $b_{1, -}$ and from $b_{1, +}$ cancel. Similarly, if there is no $(q + 1)$-simplex with filtration time in $[d_{1, -}, d_{1, +}]$, then the contributions from $d_{1, -}$ and $d_{1, +}$ cancel. Thus, $\{\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell\} \subseteq F_{1, \ell}$, and by similar arguments, $\{\b_{i, n}(X^{(n, *)}) \ne 0, N_{i, n} = \ell\} \subseteq F_{2, \ell}$. After having established \eqref{dnzl_r_eq}, it remains to show that $\mathbb P(F_{1, \ell} \cap F_{2, \ell}) \le c \ell^{d(2q + 5)}|E_1|^{3/4}|E_2|^{5/8}.$ As in the previous proofs, we first deal with the single-cover setting, i.e., where $E_3 = \{(1, 1)\}$. Under the event $F_{1, \ell} \cap F_{2, \ell}$ there exist $2q + 5$ Poisson points $(X_{i_0}, M_{i_0})$, \dots, $(X_{i_q}, M_{i_q})$, $(X_{i_0'}, M_{i_0'})$, \dots,$(X_{i_{q + 1}'}, M_{i_{q + 1}'})$ and $(X_k, M_k)$, $(X_j, M_j)$ all contained in $Q_\ell(z_i)$. One slight nuisance is that several of these points may coincide, which needs to be taken into account before applying the Mecke formula. We present the detailed argument in one prototypical case, noting that the other ones can be handled similarly. More precisely, we assume that $i_0 = i_0' = k$, $i_1 = i_1' = j$ and $i_m = i_m'$ for $2 \le m \le q$, whereas apart from these, all other indices are pairwise distinct.
Letting $F_\ell^*$ denote the event that the original event $F_{1, \ell} \cap F_{2, \ell}$ occurs due to this form of configuration, we deduce that \begin{align*} \mathbb P(F_\ell^*) &\le \mathbb E\Big[\#\Big\{{X_{i_0}, \dots, X_{i_q}, X_{i_{q + 1}' }\in Q_\ell(z_i) \text{ pw.~distinct}}\colon \\ &\phantom{\le \mathbb E\Big[\#\Big\{}\big(r(X_{i_0}, \dots, X_{i_q}), r(X_{i_0}, \dots, X_{i_q}, X_{i_{q + 1}'})\big) \in E_1,\, (M_{i_0}, M_{i_1})\in E_2\Big\}\Big]\\ &= \lambda^{q + 2} \int_{Q_\ell(z_i)^{q + 2}}\mathbbmss{1}\big\{\big(r(x_{0}, \dots, x_q), r(x_0, \dots, x_{{q + 1}})\big) \in E_1\big\} \mathbb P\big((M_0, M_1)\in E_2\big){\rm d} x_0 \cdots {\rm d} x_{q + 1}\\ &\le c_{\mathsf{Lip}}^2\lambda^{q + 2}|E_2|^{5/8} \int_{Q_\ell(z_i)^{q + 2}}\mathbbmss{1}\Big\{\big(r(x_{0}, \dots, x_q), r(x_{0}, \dots, x_{{q + 1}})\big) \in E_1\Big\}{\rm d} x_0 \cdots {\rm d} x_{q + 1}. \end{align*} After scaling by $\ell^{(q + 2)d}$ and letting $X_0', \dots, X_{q + 1}'$ be iid uniform in $Q_1(z_i)$, the final integral becomes the probability that $\big(r(X_0', \dots, X_q'), r(X_0', \dots, X_q', X_{q + 1}')\big) \in E_1$. Thus, by \cite[Proposition 6]{krebs}, we arrive at the asserted $\mathbb P(F_\ell^*) \le c\lambda^{q + 2}\ell^{(q + 2)d}|E_1|^{3/4}|E_2|^{5/8}$ for some constant $c > 0$. Finally, we discuss how to extend the above proof to the general setting where $E_3 = \{(k, k')\}$ for some general $k \ge k' \ge 1$. Then, the $q$-simplex $\sigma$ and the $(q + 1)$-simplex $\sigma'$ have the form $$\sigma =(X_{i_{0, 1}}, \dots, X_{i_{0, k}}, \dots, X_{i_{q, 1}}, \dots, X_{i_{q, k}}),$$ and $$\sigma' =(X_{i_{0, 1}'}, \dots, X_{i_{0, k'}'}, \dots, X_{i_{q + 1, 1}'}, \dots, X_{i_{q + 1, k'}'}).$$ Hence, we can bound $\mathbb P(F_{1, \ell} \cap F_{2, \ell})$ by the same arguments as in the single-cover setting. The only difference is that now there are more combinatorial possibilities concerning which of the points are pairwise distinct.
Still, all of these cases can be handled by combining the Mecke formula with \cite[Proposition 6]{krebs}. \end{proof} \section{Model} \label{mod_sec} We shall assume that the reader is familiar with simplicial complexes and simplicial homology. We recommend \cite{wasserman} for a highly accessible account of these topics, which is written for an audience with a background in statistics. Henceforth, $\mathbb K$ always denotes a \emph{multifiltered simplicial complex}. That is, \begin{enumerate} \item $\mathbb K = \{K_{\boldsymbol a}\}_{\boldsymbol a}$ describes a family of simplicial complexes indexed by $\boldsymbol a = (a_1, \dots, a_u) \in \mc S := \mathcal S_1 \times \cdots \times \mathcal S_u$ for some totally ordered sets $\mathcal S_1, \dots, \mathcal S_u$; \item for every $\boldsymbol a = (a_1, \dots, a_u), \boldsymbol b = (b_1, \dots, b_u) \in \mc S$ with $a_i \le b_i$ for every $i \le u$ there is a simplicial map $K_{\boldsymbol a} \to K_{\boldsymbol b}$. Here, a simplicial map is a map from the 0-simplices of $K_{\boldsymbol a}$ to the 0-simplices of $K_{\boldsymbol b}$ such that every $q$-simplex in $K_{\boldsymbol a}$ is mapped to a $q$-simplex in $K_{\boldsymbol b}$. \end{enumerate} In this work, we consider bifiltrations built on geometric point patterns. More precisely, throughout the entire manuscript, we fix some deterministic $T > 0$ and let $ X = \{(X_i, M_i)\}_{i \ge 1}$ denote a $[0, T]$-marked point process, which is stationary in the sense that the distribution of $\{(X_i + x, M_i)\}_{i \ge 1}$ is the same for every $x \in \mathbb R^d$. In applications, the mark admits a variety of interpretations, such as the volume of a particle, the amount of current flowing through a given measurement location, or the total precipitation at a given location. We assume that the intensity $\lambda := \mathbb E\big[\#\{i\colon X_i \in [0, 1]^d\}\big]$ is positive and finite.
For instance, $\{X_i\}_{i \ge 1}$ can be a homogeneous Poisson point process endowed with iid marks \cite{poisBook}. The conceptual framework developed in the present article is guided by two fundamental examples of bifiltrations: the \emph{marked \v Cech-bifiltration} and the \emph{multicover bifiltration} \cite{osang, marked}. In order to render the presentation of the overarching framework more accessible, we discuss these examples first before moving to the general setting. Loosely speaking, the \emph{marked \v Cech bifiltration (\v C-bifiltration)} illustrated in Figure \ref{bifilt_fig} combines the \v C-filtration with the sub-level filtration on the marks. \begin{example}[Marked \v Cech bifiltration] \label{mark_exc} Set \begin{align} \label{bifi_eq} \mathbb K_{(r_1, r_2), n} := \ms{ Cech}_{r_1}\big(\{X_i \in [0, n]^d \colon M_i \le r_2\}\big). \end{align} Here, the \emph{filtration time} in the \v C-filtration of a $q$-simplex $\{x_0, \dots, x_q\} \subseteq \mathbb R^d$ is given by $$r(\{x_0, \dots, x_q\}) := \min\big\{t > 0\colon B_t(x_0) \cap \cdots \cap B_t(x_q) \ne \varnothing\big\},$$ where $B_t(y) := \{x \in \mathbb R^d\colon |x - y| \le t\}$ denotes the Euclidean ball with radius $t > 0$ centered at $y \in \mathbb R^d$. The simplicial maps are given by the natural inclusions of simplicial complexes. \begin{figure}[!h] \centering \input{bifilt} \caption{Marked \v C-bifiltration with uniform marks on $[0, 1]$. Illustration shows the bifiltration at levels $(0.8, 0.7)$ (left), $(0.8, 0.2)$ (center), and $(1.2, 0.2)$ (right). For clarity, only Delaunay triangles are drawn.} \label{bifilt_fig} \end{figure} Note that it would also be possible to consider multi-variate marks, i.e., $M_i \in [0, T]^p$ for some $p \ge 1$. To keep the presentation accessible, we focus on the case $p = 1$, noting that most of the arguments in this work extend to $p \ge 1$ after straightforward modifications.
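For intuition, the filtration time $r$ of a simplex is the radius of the smallest ball enclosing its vertices, which is available in closed form for simplices with at most three vertices in the plane. The following sketch is our own illustration (not tied to any persistence library): for a triangle, the smallest enclosing ball is either the diametral ball of the longest edge or the circumscribed circle.

```python
import math

def cech_time(pts):
    """Cech filtration time of a simplex with at most 3 vertices in R^2:
    the radius of the smallest ball enclosing all vertices."""
    if len(pts) == 1:
        return 0.0
    if len(pts) == 2:
        return math.dist(pts[0], pts[1]) / 2
    # three points: check whether some diametral ball covers the third point
    a, b, c = pts
    for p, q, s in ((a, b, c), (a, c, b), (b, c, a)):
        center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        r = math.dist(p, q) / 2
        if math.dist(center, s) <= r + 1e-12:
            return r
    # otherwise: circumradius via R = |ab| |bc| |ca| / (4 * area)
    ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    area = abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2
    return ab * bc * ca / (4 * area)
```

For instance, an edge of length $1$ enters the filtration at $r = 1/2$, and an equilateral triangle of side length $1$ at its circumradius $1/\sqrt 3$.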
\end{example} \begin{example}[Multicover bifiltration] \label{cov_exc} Although highly popular in applications, the \v C-filtration has the drawback of being sensitive to outliers \cite{merigot}. A small number of data points placed in sparse regions in space may sensitively influence the topological properties of the corresponding \v C-complex. One attractive option to mitigate these effects is to work with \emph{multicover filtrations}. Instead of studying the topology of the union of balls centered at data points, one works with the set of points that are covered by a certain number $k \ge 1$ of such balls. Figure \ref{vis_fig} illustrates the multiple coverage for a set of random points in a 2D sampling window. \begin{figure}[!h] \centering \input{mult_vis} \caption{Areas of multiple coverage shown by gray shades. Small radius (left) and large radius (right).} \label{vis_fig} \end{figure} Some work is needed to embed the idea of multicover bifiltrations into the general framework of simplicial complexes. For $k \ge 1$, $r > 0$, the 0-simplices of the complex $\ms{Mult}_{(r, k)}(X)$ are given by all unordered $k$-tuples of points in $\{X_i\}_{i \ge 1}$. Then, $\varphi_0, \dots, \varphi_p \subseteq \{X_i\}_{i \ge 1}$ with $\#\varphi_i = k$ form a $p$-simplex at level $r > 0$ if and only if $\varphi_0 \cup \cdots \cup \varphi_p$ forms a simplex in the standard \v Cech filtration. In symbols, \begin{align} \label{mc_eq} \varphi_0 \cup \cdots \cup \varphi_p \in \ms{ Cech}_r\big(\{X_i \in [0, n]^d\}\big). \end{align} In order to turn $\ms{Mult}_{(r, k)}(X)$ into a bifiltration, we reverse the natural ordering on the $k$-component. Hence, in order to define a simplicial map from $\ms{Mult}_{(r_1, k_1)}(X)$ to $\ms{Mult}_{(r_2, k_2)}(X)$ for $r_1 \le r_2$ and $k_1 \ge k_2$, we need to provide a map sending a subset $\varphi_1 \subseteq \{X_i\}_{i \ge 1}$ of size $k_1$ to a subset $\varphi_2 \subseteq \{X_i\}_{i \ge 1}$ of size $k_2$.
This is achieved by defining $\varphi_2$ to consist of the $k_2$ smallest elements of $\varphi_1$ according to the lexicographic order. This choice is of course arbitrary, but we stress that a different selection rule does not alter the persistence properties of the resulting complex. \end{example} Finally, it is possible to unify Examples \ref{mark_exc} and \ref{cov_exc} under a single umbrella. Loosely speaking, we compute the multicover bifiltration only among those points whose mark is smaller than a certain filtration level. \begin{example}[Combined example] \label{comb_exc} Set \begin{align} \label{comb_eq} \K_{(r_1, r_2, k), n}(\{(X_i, M_i)\}_{i \ge 1}) := \ms{Mult}_{(r_1, k)}\big(\{X_i \in [0, n]^d \colon M_i \le r_2\}\big). \end{align} Henceforth, we fix some $T', K' \ge 1$ and work on the compact index set $\mc S := [0, T'] \times [0, T] \times \{1, \dots, K'\}$. In order to ease the presentation, we assume from now on that $T = T' = K'$. In particular, setting $k = 1$ recovers the {marked \v Cech-bifiltration}, and setting $r_2 = T$ recovers the {multicover bifiltration}. \end{example} For single-parameter filtrations, the persistence diagram is a collection of pairs $\{(B_j, D_j)\}_j$ representing the filtration values at which features such as connected components, loops, or higher-dimensional cavities appear and disappear. For instance, by fixing one of the axes and varying the other, Figure \ref{pd_mark_fig} provides two persistence diagrams that can be extracted from the marked \v C-bifiltration. \begin{figure}[!h] \centering \includegraphics[width=.4\textwidth]{pd_height} \includegraphics[width=.4\textwidth]{pd_cech} \caption{Persistence diagrams constructed from the point clouds in Figure \ref{bifilt_fig} with respect to the mark-axis (left) and \v Cech-axis (right).} \label{pd_mark_fig} \end{figure} Similarly, the multicover bifiltration yields persistence diagrams for different fixed values of the covering depth $k \ge 1$.
For instance, Figure \ref{pd_mc_fig} illustrates these diagrams when $k = 1$ and $k = 2$. \begin{figure}[!h]\centering \includegraphics[width=.4\textwidth]{pd_mult1} \includegraphics[width=.4\textwidth]{pd_mult2} \caption{Persistence diagrams constructed from the point clouds in Figure \ref{vis_fig} for the 1-cover (left) and the 2-cover (right).} \label{pd_mc_fig} \end{figure} Unfortunately, in the general setting of multiparameter persistence, no concise visual representation comparable to the persistence diagram is known. Nevertheless, a more algebraic encoding of this information can be achieved through the concept of \emph{persistent Betti numbers}. More precisely, given a bifiltered simplicial complex $\mathbb K$, the \emph{persistent Betti numbers $\{\beta_q^{\boldsymbol b, \boldsymbol d}\}_{\boldsymbol b \le \boldsymbol d}$} are defined as the rank invariants of the associated homology groups $\{H_q(K_{\boldsymbol b})\}_{\boldsymbol b \in \mc S}$, i.e., $$\beta_q^{\boldsymbol b, \boldsymbol d} := \ms{dim}\big( \ms{Im}\big(H_q(K_{\boldsymbol b}) \to H_q(K_{\boldsymbol d})\big)\big), $$ where we always work with $\mathbb Z/2$-coefficients. In a single-parameter setting, the persistent Betti number $\beta_q^{b, d}$ has a clear connection with the persistence diagram: it counts the number of points that are contained in the upper-left domain $[0, b] \times [d, \infty)$. In other words, this is the number of $q$-features that are born before time $b$ and live past time $d$. To simplify notation, we henceforth set $\b_{q, n}^{\bb, \dd} := \b_q^{\bb, \dd}(\mathbb K_{\cdot, n}).$ \section{Demonstration of asymptotic results} \label{prel_sec} Henceforth, we will often need to bound the change in the persistent Betti numbers $\b_n^{\bb, \dd}$ when modifying $(\boldsymbol b, \boldsymbol d)$.
In the following arguments, we rely on the representation \begin{align*} \beta_q^{\boldsymbol b, \boldsymbol d} &:= \ms{dim}\big( \ms{Im}\big(H_q(K_{\boldsymbol b}) \to H_q(K_{\boldsymbol d})\big)\big) \\ &= \ms{dim}(Z_q(K_{\boldsymbol b})) - \ms{dim}(Z_q(K_{\boldsymbol b}) \cap B_q(K_{\boldsymbol d}))\\ &= \ms{dim}(Z_q(K_{\boldsymbol b}) + B_q(K_{\boldsymbol d}))- \ms{dim}(B_q(K_{\boldsymbol d})) , \end{align*} where $Z_q := \ms{ker}(\partial_q)$ and $B_q := \ms{Im}(\partial_{q + 1})$ denote the kernel and boundary spaces. Here, we tacitly identify $Z_q(K_{\boldsymbol b})$ with its image in $Z_q(K_{\boldsymbol d})$. We will need an extension of \cite[Lemma 2.11]{shirai}, which can be seen as a continuity result when changing the underlying filtration and filtration times. We also write $K_{\boldsymbol b}^q$ for the set of $q$-simplices in the simplicial complex $K_{\boldsymbol b}$. \begin{lemma}[Continuity of cycle and boundary spaces] \label{cont_filt_lem} Let $n \ge 1$ and $q \le d$. Let $\mathbb K = \{K_{\boldsymbol a}\}_{\boldsymbol a}$ and $\mathbb L = \{L_{\boldsymbol a}\}_{\boldsymbol a}$ be filtrations with $K_{\boldsymbol a} \subseteq L_{\boldsymbol a}$ for all indices $\boldsymbol a$. Moreover, let $\boldsymbol b, \boldsymbol b', \boldsymbol d, \boldsymbol d'$ be indices such that \begin{enumerate} \item $\boldsymbol b\le \boldsymbol b'$ and $\boldsymbol d\le \boldsymbol d'$; \item $K_{\boldsymbol b} \subseteq K_{\boldsymbol b'}$ and $K_{\boldsymbol d} \subseteq K_{\boldsymbol d'}$.
\end{enumerate} Then, \begin{enumerate} \item $\ms{dim}(Z_q(L_{\boldsymbol b'})) - \ms{dim}(Z_q(K_{\boldsymbol b})) \le |L_{\boldsymbol b'}^q \setminus K_{\boldsymbol b}^q|$; \item $\ms{dim}(B_q(L_{\boldsymbol b'})) - \ms{dim}(B_q(K_{\boldsymbol b})) \le |L_{\boldsymbol b'}^{q + 1} \setminus K_{\boldsymbol b}^{q + 1}|$; \item $\ms{dim}(Z_q(L_{\boldsymbol b'}) \cap B_q(L_{\boldsymbol d'})) - \ms{dim}(Z_q(K_{\boldsymbol b})\cap B_q(K_{\boldsymbol d})) \le |L_{\boldsymbol b'}^q \setminus K_{\boldsymbol b}^q| + |L_{\boldsymbol d'}^{q + 1} \setminus K_{\boldsymbol d}^{q + 1}|. $ \end{enumerate} \end{lemma} \begin{proof} To ease notation, we set $Z' := Z_q(L_{\boldsymbol b'})$, $Z:=Z_q(K_{\boldsymbol b})$, $B' := B_q(L_{\boldsymbol d'})$ and $B := B_q(K_{\boldsymbol d})$. For part 1., if $\psi$ and $\psi'$ are $q$-cycles in $Z'$ sharing the same $q$-simplices in $L_{\boldsymbol b'}^q \setminus K_{\boldsymbol b}^q$, then $\psi + \psi' \in Z$. Therefore, $\ms{dim}(Z' ) - \ms{dim}(Z)\le |L_{\boldsymbol b'}^q \setminus K_{\boldsymbol b}^q|.$ For part 2., let $\partial(\psi), \partial(\psi') \in B'$ be $q$-boundaries coming from $(q + 1)$-chains $\psi, \psi'$ on $L_{\boldsymbol d'}$ such that $\psi$ and $\psi'$ share the same simplices in $L_{\boldsymbol d'}^{q + 1} \setminus K_{\boldsymbol d}^{q + 1}$. Then, $\psi + \psi'$ is a $(q + 1)$-chain in $K_{\boldsymbol d}^{q + 1}$. Therefore, $\ms{dim}(B') - \ms{dim}(B) \le |L_{\boldsymbol d'}^{q + 1} \setminus K_{\boldsymbol d}^{q + 1}|.$ The final claim now follows from the embeddings $(Z' \cap B') / (Z' \cap B) \subseteq B' /B$ and $(Z' \cap B) / (Z \cap B) \subseteq Z' /Z$. \end{proof} For us, the main consequences of Lemma \ref{cont_filt_lem} are the continuity of the $\beta$ and $\gamma$ characteristics. \begin{corollary}[Continuity of $\beta$ and $\gamma$] \label{cont_filt_cor} Let $n \ge 1$ and $q \le d$.
Let $\mathbb K = \{K_{\boldsymbol a}\}_{\boldsymbol a}$ and $\mathbb L = \{L_{\boldsymbol a}\}_{\boldsymbol a}$ be filtrations with $K_{\boldsymbol a} \subseteq L_{\boldsymbol a}$ for all indices $\boldsymbol a$. Moreover, let $\boldsymbol b = (b_1, b_2, k_{\ms b}), \boldsymbol b'=(b_1', b_2', k_{\ms b}), \boldsymbol d = (d_1, d_2, k_{\ms d}), \boldsymbol d' = (d_1', d_2', k_{\ms d})$ be indices such that $\boldsymbol b \le \boldsymbol d$ and $\boldsymbol b'\le \boldsymbol d'$. Then, $\big|\b_q^{\bb, \dd}(\mathbb K) - \b_q^{\bb', \dd'}(\mathbb L)\big| \le 2|L_{\boldsymbol b'}^q \setminus K_{\boldsymbol b}^q| + |L_{\boldsymbol d'}^{q + 1} \setminus K_{\boldsymbol d}^{q + 1}|.$ \end{corollary} \section{Main results} \label{res_sec} The main conceptual contribution of this work consists in establishing the strong consistency and asymptotic normality for persistent Betti numbers in the setting of multi-parameter persistence. This will be achieved first in the \emph{scalar setting}, and then, under stronger conditions, in the \emph{functional setting}. While scalar results pertain to the asymptotic behavior of $\b_{q, n}^{\bb, \dd}$ for a \emph{fixed index pair} $\boldsymbol b, \boldsymbol d \in \mc S$, functional limit results allow for a description of $\b_{q, n}^{\bb, \dd}$ when seen as a stochastic process with varying $\boldsymbol b$ and $\boldsymbol d$. Before stating the main results, we stress the importance of functional limit results. This is because already in the single-parameter setting, many of the most popular characteristics extracted from the persistence diagram cannot be expressed in terms of a single fixed persistent Betti number. This applies in particular to the total persistence. In contrast, it was shown in \cite[Corollary 3.4]{svane} how a functional CLT in the single-parameter setting yields the asymptotic normality of the total persistence.
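For concreteness, the total persistence of a single-parameter diagram $\{(B_j, D_j)\}_j$ is simply $\sum_j (D_j - B_j)^p$ for some power $p \ge 1$. The following minimal sketch is our own illustration (the convention of excluding features with infinite lifetime is our simplification, not taken from the cited works):

```python
def total_persistence(diagram, p=1):
    """Total p-persistence of a diagram of (birth, death) pairs;
    features with infinite death time are excluded in this toy convention."""
    return sum((d - b) ** p for b, d in diagram if d != float("inf"))

# two finite features with lifetimes 0.8 and 0.3, plus one essential feature
diagram = [(0.1, 0.9), (0.2, 0.5), (0.3, float("inf"))]
assert abs(total_persistence(diagram) - 1.1) < 1e-9
```

Since this statistic aggregates the whole diagram rather than a single rank, its asymptotic normality requires control of the persistent Betti numbers uniformly in the index, which is precisely what a functional CLT provides.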
Similarly, one immediate use case of the multi-parameter functional CLT is the asymptotic normality of linear combinations of total persistences taken with respect to different single-parameter filtrations that are derived from the bifiltration. We will revisit such examples in the discussion of the simulation experiments in Section \ref{sim_sec}. Moreover, the functional setting is also the most relevant one in terms of the methodological contributions. While scalar strong consistency and asymptotic normality are essentially corollaries of known results in the single-parameter setting, substantial novel work is needed for the functional results. \subsection{Scalar consistency and asymptotic normality} \label{scal_sub} As elucidated above, we first comment on the limiting behavior of $\b_{q, n}^{\bb, \dd}$ as $n \to \infty$ for fixed $\boldsymbol b, \boldsymbol d \in \mc S$. More precisely, in Theorems \ref{lln_rank_thm} and \ref{clt_rank_thm} below, we extend the \emph{strong consistency} and \emph{asymptotic normality} derived for single-parameter persistent Betti numbers in \cite[Theorems 1.11 and 1.12]{shirai} to the multi-parameter setting. To ease notation, we henceforth write $X(B) := \#\{i \ge 1 \colon X_i \in B\}$ for the number of points of $X$ contained in a set $B \subseteq \mathbb R^d$. \begin{theorem}[Strong consistency for the scalar rank invariant] \label{lln_rank_thm} Let $q \le d$, $\boldsymbol b \le \boldsymbol d$ and assume that $\{(X_i, M_i)\}_{i \ge 1}$ is ergodic and that $\mathbb E[X([0, 1]^d)^p] < \infty$ for all $p \ge 1$. Then, there exists a deterministic $\bar\b_q^{\bb, \dd}$ such that $$\lim_{n \to \infty} \f1{n^d}\mathbb E[\b_{q, n}^{\bb, \dd}] = \bar\b_q^{\bb, \dd},$$ and, almost surely, $$\lim_{n \to \infty}\f1{n^d}\b_{q, n}^{\bb, \dd} = \bar\b_q^{\bb, \dd}.$$ \end{theorem} One of the strengths of Theorem \ref{lln_rank_thm} is that it holds for general ergodic point processes satisfying the moment condition. 
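The $n^{-d}$ scaling in Theorem \ref{lln_rank_thm} can be illustrated without any persistence software on a toy case: for $q = 0$ and a fixed radius, the Betti number of the \v Cech filtration is the number of connected components of a geometric graph, which a union-find pass computes directly. The sketch below (plain Python, stdlib only; the intensity, radius, and window sizes are illustrative choices, not taken from the paper) samples a homogeneous Poisson process on $[0, n]^2$ and reports the volume-normalized component count for two window sizes.

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for small lam (we use lam = 2 per cell).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_points(n, lam, rng):
    # Homogeneous Poisson process on [0, n]^2, sampled cell by cell.
    pts = []
    for i in range(n):
        for j in range(n):
            for _ in range(sample_poisson(lam, rng)):
                pts.append((i + rng.random(), j + rng.random()))
    return pts

def beta0(points, r):
    # Number of connected components of the geometric graph with radius r,
    # computed by union-find with path halving.
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a in range(len(points)):
        xa, ya = points[a]
        for b in range(a + 1, len(points)):
            xb, yb = points[b]
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= r * r:
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[ra] = rb
    return len({find(a) for a in range(len(points))})

rng = random.Random(1)
densities = [beta0(poisson_points(n, 2.0, rng), 0.3) / n ** 2 for n in (8, 16)]
print(densities)  # volume-normalized component counts for growing windows
```

The two normalized counts are close to each other, consistent with the almost-sure convergence of $n^{-d}\b_{0, n}$ to a deterministic limit.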
On the other hand, in the specific setting of binomial point processes where the points are sampled iid from a given density, one of the main results of \cite{lesnick} is a different LLN with respect to a highly refined homotopy interleaving distance. Next, we proceed with asymptotic normality. \begin{theorem}[Asymptotic normality for the scalar rank invariant] \label{clt_rank_thm} Let $q\le d$, $\boldsymbol b \le \boldsymbol d$ and assume that $\{(X_i, M_i)\}_{i \ge 1}$ is an independently marked homogeneous Poisson point process. Then, as $n \to \infty$, $$\frac{\b_{q, n}^{\bb, \dd} - \mathbb E\big[\b_{q, n}^{\bb, \dd}\big]}{n^{d/2}} \Rightarrow Z,$$ where $Z$ is a centered normal random variable. \end{theorem} We note that an incremental extension of the proof would establish asymptotic normality of linear combinations of the form $\sum_{j \le j_0}a_j \beta_{q, n}^{\boldsymbol b_j, \boldsymbol d_j}$. Hence, the Cram\'er-Wold device \cite[Theorem 7.7]{pm} also gives the following corollary on the normality of the multivariate marginals. \begin{theorem}[Asymptotic multivariate normality for the scalar rank invariant] \label{clt_rank_cor} Let $j_0 \ge1$, $q\le d$, $\boldsymbol b_j \le \boldsymbol d_j$ for $j \le j_0$ and assume that $\{(X_i, M_i)\}_{i \ge 1}$ is an independently marked homogeneous Poisson point process. Then, as $n \to \infty$, $$\frac{\{\beta_{q, n}^{\boldsymbol b_j, \boldsymbol d_j}\}_{j \le j_0} - \mathbb E\big[\{\beta_{q, n}^{\boldsymbol b_j, \boldsymbol d_j}\}_{j \le j_0}\big]}{n^{d/2}} \Rightarrow Z,$$ where $Z$ is a $j_0$-dimensional centered normal random vector. \end{theorem} To prove Theorems \ref{lln_rank_thm} and \ref{clt_rank_thm}, we note that for fixed $\boldsymbol b \le \boldsymbol d$, we can always interpret $\b_{q, n}^{\bb, \dd}$ within the standard framework of single-parameter persistence. 
Indeed, $\b_{q, n}^{\bb, \dd}$ can be seen as the standard persistent Betti number indexed over a set consisting of two elements, namely $\{\boldsymbol b, \boldsymbol d\}$. Hence, the proof of the scalar limit theorems will reduce to discussing to what extent the conditions stated in \cite{shirai} are valid for the \v C- and the multicover-bifiltrations. \subsection{Functional consistency and asymptotic normality} \label{func_sub} While Theorems \ref{lln_rank_thm} and \ref{clt_rank_thm} describe the large-volume limit of $\b_{q, n}^{\bb, \dd}$ for fixed $\boldsymbol b, \boldsymbol d \in \mc S$, we now view $\b_{q, n}^{\bb, \dd}$ as an $\mc S^2$-indexed stochastic process in the variables $\boldsymbol b$ and $\boldsymbol d$. Already in the setting of single-parameter persistence this viewpoint is essential for statistical applications since the total persistence and many other commonly used characteristics extracted from the persistence diagram are not linear combinations of the persistent Betti numbers. However, they can be expressed as a continuous functional of all persistent Betti numbers. Hence, also in the multi-parameter setting, it is essential to move beyond scalar consistency and asymptotic normality. Furthermore, in comparison to the single-parameter case, new difficulties appear since there is no longer a clear representation of the persistent Betti numbers through the persistence diagram. Thus, we can no longer speak of features that are born in certain time intervals and die at later points in time. First, to extend Theorem \ref{lln_rank_thm} to the process level, we impose additional continuity constraints on the point process and on the mark distribution. In particular, independently marked Poisson point processes will satisfy all these conditions. 
First, for $q \ge 1$ the \emph{reduced $q$th factorial moment measure} $\alpha_q^!$ of a stationary point process is determined by the disintegration formula $$\mathbb E\Big[\hspace{-.3cm}\sum_{\substack{ i_1,\dots, i_q \ge 1 \\\text{pw. distinct}}}\hspace{-.5cm}f(X_{i_1}, \dots, X_{i_q})\Big] = \lambda \int_{\mathbb R^d}\int_{\mathbb R^{d(q - 1)}} \hspace{-.8cm}f(x_0, x_0 + y_1, \dots, x_0 + y_{q - 1})\alpha_q^!({\rm d} y_1, \dots, {\rm d} y_{q - 1}) {\rm d} x_0$$ for any measurable $f\colon \mathbb R^{dq} \to [0,\ff)$, see \cite[Definition 8.6]{poisBook}. Additionally, we recall that the distribution $\mathbb P^0$ of a \emph{typical mark} is determined by the property that $$\mathbb E^0[f(M)] = \f1\lambda \mathbb E\Big[\sum_{X_i \in Q_1} f(M_i)\Big]$$ for any measurable $f\colon [0,\ff) \to [0,\ff)$, see \cite[Definition 9.3]{poisBook}. \begin{theorem}[Strong consistency for the functional rank invariant] \label{lln_proc_thm} Let $T\ge 0$, $q \le d$ and assume that $\{(X_i, M_i)\}_{i \ge 1}$ is ergodic and that $\mathbb E[X([0, 1]^d)^p] < \infty$ for all $p \ge 1$. Moreover, assume that $\alpha_{q + 2}^!$ is absolutely continuous, and that the distribution of the typical mark does not contain atoms. Then, almost surely, $\{\b_{q, n}^{\bb, \dd}\}_{(\boldsymbol b, \boldsymbol d)}$ converges to $\{\bar\b_q^{\bb, \dd}\}_{(\boldsymbol b, \boldsymbol d)}$ in the sup-norm of processes on $\mc S^2$. \end{theorem} Finally, we improve the CLT to a functional statement. This result is most naturally formulated in the Skorokhod space of multi-parameter c\`adl\`ag functions \cite{bickel}. To that end, we require additional regularity conditions on the mark distribution. Moreover, we assume that the point process $\{(X_i, M_i)\}_{i \ge 1}$ is in the subcritical regime of continuum percolation \cite{cPerc}. 
That is, with probability 1, there does not exist an infinite sequence $X_{i_1}, X_{i_2}, \dots$ of pairwise distinct points such that $|X_{i_j} - X_{i_{j + 1}}| \le T$ for all $j \ge 1$. \begin{theorem}[Asymptotic normality for the functional rank invariant] \label{clt_proc_thm} Let $q\le d$ and $\{(X_i, M_i)\}_{i \ge 1}$ be an independently marked homogeneous Poisson point process. We assume that the distribution function of the typical mark is H\"older continuous with parameter $5/8$. Moreover, we assume that the point process $\{(X_i, M_i)\}_{i \ge 1}$ is in the subcritical regime of continuum percolation. Then, as $n \to \infty$, as a process $$\frac{\b_{q, n}^{\bb, \dd} - \mathbb E\big[\b_{q, n}^{\bb, \dd}\big]}{n^{d/2}}\Rightarrow \mathcal Z,$$ where $\mathcal Z$ is a centered Gaussian process. \end{theorem} \section{Proof of Theorems \ref{lln_rank_thm} and \ref{clt_rank_thm}} \label{scal_sec} As announced in Section \ref{res_sec}, the main insight is that Theorems \ref{lln_rank_thm} and \ref{clt_rank_thm} may be interpreted in the setting of standard persistent homology by considering the binary filtration $\{\boldsymbol b, \boldsymbol d\} = \{(b_1, b_2, k_{\ms b}), (d_1, d_2, k_{\ms d})\}$. We first reproduce the conditions (K1)--(K3) stated in \cite{shirai}. See also \cite{marked} for an extension to the marked setting. 
The assumption is that there exists a non-negative measurable function $\kappa$ defined on finite sets in $\mathbb R^d$ and taking values in $[0, \infty]$ such that: \begin{itemize} \item[(K1)] $0 \le \kappa(\sigma) \le \kappa(\tau)$, if $\sigma$ is a subset of $\tau$; \item[(K2)] $\kappa$ is translation invariant, i.e., $\kappa(\sigma + x) = \kappa(\sigma)$ for any $x\in \mathbb R^d$; \item[(K3)] there is an increasing function $\rho \colon [0, \infty) \to [0, \infty)$ such that \[|x - y| \le \rho(\kappa(\{x,y\})) \quad \text{for all } x, y \in \mathbb R^d. \] \end{itemize} The $q$-simplices of the filtration at level $r$ are then given by $(q + 1)$-subsets $\varphi \subseteq \mathbb R^d$ with $\kappa(\varphi) \le r$. In order to illustrate the idea in a simplified setting, we first consider the special case where we work with the 1-cover, i.e., where $k = 1$. Here, we first set $\kappa(\varphi) := 3 + \ms{diam}(\varphi)$ as soon as $\varphi$ contains two elements that are distance at least $d_2$ apart. Otherwise, we distinguish according to the number of elements of $\varphi$. To compute $\b_{q, n}^{\bb, \dd}$, only the $q$-simplices and $(q + 1)$-simplices are relevant. Recall that $r_{\mathsf C}(\varphi)$ denotes the filtration time in the \v Cech filtration. If $\varphi \subseteq \mathbb R^d \times [0, T]$ consists of $q + 1$ elements, then we set $\kappa(\varphi) := 0$ if $r_{\mathsf C}(\varphi) \le b_1$ and all marks of the elements in $\varphi$ are at most $b_2$. Otherwise, we set $\kappa(\varphi) := 1$. Similarly, if $\varphi \subseteq \mathbb R^d \times [0, T]$ consists of $q + 2$ elements, then we set $\kappa(\varphi) := 2$ if $r_{\mathsf C}(\varphi) \le d_1$ and all marks of the elements in $\varphi$ are at most $d_2$, and $\kappa(\varphi) := 3$ otherwise. Finally, we set $\rho(s) := d_2 + s$. 
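For $k = 1$ and $q = 1$, the construction above is easy to make concrete. The sketch below (plain Python) implements this $\kappa$ on marked points. As a simplification, it uses half the diameter as a stand-in for the \v Cech filtration time $r_{\mathsf C}$ — exact for pairs, only a proxy for larger simplices — and all numerical parameters are illustrative, so this is a sketch of the rule structure rather than of the exact filtration.

```python
# Sketch of the kappa-function for the marked bifiltration with k = 1, q = 1.
# A marked point is a tuple (x, y, mark); b = (b1, b2) and d = (d1, d2) are
# the birth/death indices. Half the diameter stands in for the Cech time r_C
# (exact for pairs, a proxy otherwise) -- an illustrative simplification.
import itertools, math

def diam(simplex):
    return max((math.dist(p[:2], s[:2])
                for p, s in itertools.combinations(simplex, 2)), default=0.0)

def kappa(simplex, b, d, q=1):
    b1, b2 = b
    d1, d2 = d
    # First rule: far-apart pairs get a large value, which is what makes
    # property (K3) work with rho(s) = d2 + s.
    for p, s in itertools.combinations(simplex, 2):
        if math.dist(p[:2], s[:2]) >= d2:
            return 3.0 + diam(simplex)
    r_c = diam(simplex) / 2.0               # stand-in for r_C(simplex)
    def marks_ok(t):
        return all(p[2] <= t for p in simplex)
    if len(simplex) == q + 1:               # q-simplices
        return 0.0 if (r_c <= b1 and marks_ok(b2)) else 1.0
    if len(simplex) == q + 2:               # (q+1)-simplices
        return 2.0 if (r_c <= d1 and marks_ok(d2)) else 3.0
    raise ValueError("only (q+1)- and (q+2)-element sets are needed")

b, d = (0.2, 0.5), (0.3, 0.8)
edge     = [(0.0, 0.0, 0.3), (0.3, 0.0, 0.4)]
triangle = edge + [(0.0, 0.3, 0.2)]
print(kappa(edge, b, d), kappa(triangle, b, d))  # 0.0 2.0
```

The edge enters at value $0$ and the triangle containing it at value $2$, matching the monotonicity required by (K1) and the birth/death ordering of the binary filtration.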
Hence, the rank invariant $\b_q^{\bb, \dd}$ corresponding to the birth time $\boldsymbol b$ and death time $\boldsymbol d$ in the multifiltration corresponds to the rank invariant for birth time $0$ and death time $2$ in the $\kappa$-filtration. For the general setting involving also the multicover filtration, the situation is a bit more delicate since the 0-simplices are no longer points in $\mathbb R^d$ but rather $k$-tuples of points. Hence, property (K3) no longer makes sense even syntactically. However, the reason for introducing (K3) is to confine a simplex of filtration value $t > 0$ to a Euclidean ball with radius $\rho(t)$, and this remains true in the multicover case: the diameter of any simplex at level $t$ is at most $2t$. \section{Simulation study} \label{sim_sec} In this section, we illustrate the asymptotic normality derived in Theorem \ref{clt_proc_thm} by means of simulated point patterns. This serves two purposes. First, we illustrate that the asymptotic normality is already clearly visible on bounded sampling windows. Second, the experiments shed light on the power of goodness-of-fit tests that can be derived from the asymptotic normality. To that end, we build on the simulation set-up from \cite{krebs}. We will discuss the marked \v C-bifiltration and the multicover bifiltration separately in Sections \ref{cech_sub} and \ref{multi_sub}, respectively. \subsection{Marked \v C-bifiltration} \label{cech_sub} As a null model, we take a homogeneous Poisson point process with intensity $\lambda = 0.2$ in a $10 \times 10 \times 10$ sampling window. By the marking theorem \cite[Theorem 5.6]{poisBook}, this can also be seen as a 2D homogeneous Poisson point process with intensity $\lambda = 2$ in a $10 \times 10$ sampling window that is endowed with iid $\mathsf{Unif}([0, 10])$-marks. 
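The null model just described is straightforward to simulate. The following sketch (plain Python, stdlib only) samples the 2D formulation: a homogeneous Poisson process with intensity $\lambda = 2$ on the $10 \times 10$ window, generated cell by cell, with iid $\mathsf{Unif}([0, 10])$ marks attached.

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def marked_null_model(window=10, lam=2.0, mark_high=10.0, seed=0):
    # Poisson(lam) points per unit cell, uniform inside the cell, each point
    # carrying an independent Unif([0, mark_high]) mark (marking theorem).
    rng = random.Random(seed)
    pts = []
    for i in range(window):
        for j in range(window):
            for _ in range(sample_poisson(lam, rng)):
                pts.append((i + rng.random(), j + rng.random(),
                            rng.uniform(0.0, mark_high)))
    return pts

sample = marked_null_model()
print(len(sample))  # fluctuates around lam * window^2 = 200
```

Sampling per unit cell is valid because the restriction of a homogeneous Poisson process to disjoint cells gives independent Poisson processes.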
In order to reflect effects coming from multiparameter persistence, we start from the general bifiltration $\mathbb K_{(r_1, r_2), n}$ from \eqref{bifi_eq} and extract three specific unifiltrations. First, fixing $r_1$ and varying $r_2$ corresponds to the sublevel-filtration with respect to the marks. Second, fixing $r_2$ and varying $r_1$ recovers the \v C-filtration. In addition to considering only these two specific axes, a popular general approach is to work with linear combinations in the sense that, given $a, b \ge 0$, a simplex $\sigma$ is contained in the filtration at level $r$ if and only if $\sigma \in \ms{ Cech}_{r_1}\big(\{X_i \in [0, n]^d \colon M_i \le r_2\}\big)$ for some $r_1, r_2 \ge 0$ with $a r_1 + br_2 \le r$. To present specific examples, we look at the combinations $b = 1$ and $a\in \{5, 10, 20\}$. After having extracted several unifiltrations from the bifiltration in \eqref{bifi_eq}, we can now compute the associated persistence diagrams. This provides us with an ample choice of summary statistics that have been proposed in the literature. One of the most basic ones is the \emph{total persistence} $\sum_i (D_i - B_i)$, i.e., the sum of all lifetimes, where $\{(B_i, D_i)\}_i$ denote the birth and death times in the persistence diagram. By our main result, this statistic is asymptotically normal in large sampling windows provided that the radii are in the sub-critical regime. Moreover, taking the total persistence of the mark filtration and of the \v C-filtration yields a bivariate random vector that is also asymptotically normal under the null hypothesis. \subsubsection{Asymptotic normality} \label{norm_mark_sec} We now illustrate that the normal approximation becomes accurate in moderately-sized windows and extends beyond the sub-critical regime. To that end, we generate $n = 1,000$ realizations of the null model and recenter/rescale the total persistence by the empirical mean/standard deviation. 
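The total persistence and the recentering/rescaling step are elementary to implement. In the sketch below (plain Python; the diagram is a made-up toy, not simulation output), a persistence diagram is a list of (birth, death) pairs and the replicated statistics are standardized by their empirical mean and sample standard deviation.

```python
import math

def total_persistence(diagram):
    # Sum of lifetimes D_i - B_i over a persistence diagram [(B_i, D_i), ...].
    return sum(death - birth for birth, death in diagram)

def standardize(values):
    # Recenter/rescale by the empirical mean and sample standard deviation.
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

toy_diagram = [(0.0, 1.0), (0.5, 2.0), (1.0, 1.5)]
print(total_persistence(toy_diagram))   # 3.0
print(standardize([1.0, 2.0, 3.0]))     # [-1.0, 0.0, 1.0]
```

Under the functional CLT, the standardized total persistence computed across many realizations is approximately standard normal.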
To visualize the bivariate persistence vector, we first use the Cholesky decomposition of the estimated covariance matrix to decorrelate the coordinates and then compute the squared norm of the resulting vector. Under the null hypothesis, this quantity asymptotically follows a $\chi^2$-distribution. The resulting histograms in Figure \ref{high_mark_fig} already provide strong evidence for the approximate normality in moderately-sized windows. In general, this impression is reinforced by the Q-Q plots. However, a closer inspection reveals that the mark-filtration may have a small tendency towards more pronounced right and less pronounced left tails. \begin{figure}[!h] \begin{center} \includegraphics[width=.94\textwidth]{hist_mark} \includegraphics[width=.94\textwidth]{qq_mark} \caption{Histograms and Q-Q plot (top/bottom row) for the total persistence in the mark-/\v C-/combined filtration (left/center left/center right, respectively). In the bivariate case, after decorrelation of the coordinates, the squared norm of the random vector is compared to a $\chi^2$ distribution.} \label{high_mark_fig} \end{center} \end{figure} % % \subsubsection{Goodness-of-fit tests} \label{good_mark_sec} The most striking benefit of asymptotic normality is that it yields goodness-of-fit tests in large sampling windows. In this section, we illustrate this application by means of three specific point-process alternatives. They are prototypical examples for point patterns exhibiting attraction, repulsion, and a more complex interaction pattern, respectively. For the attractive point pattern we work with a {Mat\'ern cluster process} based on a 3D parent Poisson process with intensity $0.2$, where each parent generates a $\mathsf{Poi}(1)$-distributed number of offspring that are scattered uniformly in a ball with radius 1. In particular, it has the same intensity as the null model. 
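A Mat\'ern cluster process with these parameters can be sampled in two stages: parents from a homogeneous Poisson process, then a Poisson number of offspring scattered uniformly in a ball around each parent. The sketch below (plain Python) does this in the $10^3$ window; for simplicity it ignores boundary effects, i.e., offspring falling outside the window are kept.

```python
import math, random

def sample_poisson(lam, rng):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def uniform_ball(radius, rng):
    # Rejection sampling of a uniform point in the 3D ball of given radius.
    while True:
        v = [rng.uniform(-radius, radius) for _ in range(3)]
        if sum(c * c for c in v) <= radius * radius:
            return v

def matern_cluster(window=10, parent_lam=0.2, mean_offspring=1.0,
                   radius=1.0, seed=0):
    rng = random.Random(seed)
    parents, offspring = [], []
    for i in range(window):
        for j in range(window):
            for k in range(window):
                for _ in range(sample_poisson(parent_lam, rng)):
                    parents.append((i + rng.random(), j + rng.random(),
                                    k + rng.random()))
    for p in parents:
        for _ in range(sample_poisson(mean_offspring, rng)):
            shift = uniform_ball(radius, rng)
            offspring.append(tuple(c + s for c, s in zip(p, shift)))
    return parents, offspring

parents, offspring = matern_cluster()
print(len(parents), len(offspring))
```

Since each parent produces on average one offspring, the offspring intensity equals the parent intensity $0.2$, matching the null model as stated above.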
As a repulsive pattern, we choose a {Strauss process} with intensity parameter $\beta = 0.25$, interaction parameter $\gamma = 0.5$ and interaction radius $R = 1$. Finally, we also consider a {cell process} that exhibits a complex interaction structure but whose first- and second-order characteristics are similar to those of the null model. More precisely, we work with a modified Baddeley-Silverman process, where we partition the sampling window into $6^3$ congruent boxes and then place uniformly and independently in each of the boxes either 0, 1 or 2 points with probabilities 0.45, 0.1 and 0.45, respectively. Figure \ref{pattern_fig} shows realizations of the null model and the alternatives. It highlights that the differences between the models are subtle and difficult to recognize with the naked eye. \begin{figure}[!h] \centering \includegraphics[width=1.09\textwidth]{patterns} \caption{Realizations of a Poisson, Mat\'ern cluster, Strauss, and cell process (from left to right).} \label{pattern_fig} \end{figure} To perform the goodness-of-fit tests, we draw 1,000 samples from the Poisson null model in order to determine the mean and the standard deviation under the null model. Under the approximate normality, this yields the 5\% confidence region. Furthermore, for the bivariate setting, we proceed as in Section \ref{norm_mark_sec}. That is, under the null model we additionally compute a linear transformation that decorrelates the coordinates. Then, we compare the distribution of the squared vector norm with a $\chi^2$-distribution. Based on 1,000 samples from each of the alternatives, this makes it possible to compute the rejection rates, which are summarized in Table \ref{pow_mark_tab}. First, the type 1 error is close to the nominal level. 
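The decorrelation step used for the bivariate statistic amounts to a forward substitution with the Cholesky factor of the estimated covariance matrix. A minimal 2D version (plain Python; the covariance matrix and test vector are illustrative numbers) is:

```python
import math

def cholesky_2x2(cov):
    # Lower-triangular L with L L^T = cov, for a symmetric
    # positive-definite 2x2 matrix.
    (a, b), (_, c) = cov
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - l21 * l21)
    return ((l11, 0.0), (l21, l22))

def chi2_statistic(x, mean, cov):
    # Squared norm of the decorrelated, centered vector y = L^{-1}(x - mean),
    # obtained by forward substitution; ~ chi^2 with 2 df under the null.
    L = cholesky_2x2(cov)
    c0, c1 = x[0] - mean[0], x[1] - mean[1]
    y0 = c0 / L[0][0]
    y1 = (c1 - L[1][0] * y0) / L[1][1]
    return y0 * y0 + y1 * y1

# Check on constructed data: if x - mean = L z, the statistic recovers |z|^2.
cov = ((4.0, 2.0), (2.0, 3.0))
x = (2.0, 1.0 + 2.0 * math.sqrt(2.0))   # equals L @ (1, 2) with mean = 0
print(chi2_statistic(x, (0.0, 0.0), cov))  # 5.0 = 1^2 + 2^2
```

In the simulation study, `mean` and `cov` would be the empirical mean and covariance of the bivariate total persistence under the null model.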
Considering the rejection rates for the alternatives, we observe that the power of the test based on the combined filtration corresponding to $(a, b) = (20, 1)$ is superior to the pure mark filtration, the pure \v C-filtration, and also the corresponding bivariate vector. This illustrates one of the benefits of multiparameter persistence. We also observe that in general the test power is relatively low. In part, this reflects that the alternatives are based on rather subtle deviations from the null hypothesis. However, a test based on Ripley's $K$-function illustrates that for the Mat\'ern and Strauss case, classical tools from spatial statistics are very powerful. Still, when moving to the cell process, we see that such tests can become meaningless for point patterns involving complex interactions, whereas TDA-based tests still retain some power. \begin{table}[!htpb] \begin{center} \caption{Rejection rates for goodness-of-fit tests in the marked \v C-bifiltration.} \label{pow_mark_tab} \begin{tabular}{lllll} & $\mathsf{Poi}$ & $\mathsf{Mat}$ & $\mathsf{Str}$ & $\mathsf{Cell}$ \\ \hline Total persistence -- mark filtration & 4.4\% & 6.4\% & 15.2\% & 9.3\% \\ Total persistence -- \v C filtration & 5.2\% & 12.6\% & 13.7\% & 15\%\\ Total persistence -- combined (5, 1) & 5.4\% & 13.4\% & 15.7\% & 14.3\%\\ Total persistence -- combined (10, 1) & 5.8\% & 15.6\% & 17.1\% & 14.9\%\\ Total persistence -- combined (20, 1) & 5.7\% & 16.5\% & 17.4\% & 15.2\%\\ Bivariate total persistence & 4.6\% & 8.1\% & 14.3\% & 12.4\%\\ Ripley's $K$-function & 5.1\% & 92.9\% & 92.8\% & 3.4\% \end{tabular} \end{center} \end{table} \subsection{Multicover bifiltration} \label{multi_sub} In contrast to Section \ref{cech_sub}, we set up the simulation study in two dimensions. The null model is a Poisson point process with intensity $\lambda = 2$ in a $10 \times 10$ sampling window. 
As in Section \ref{cech_sub}, we extract three specific filtrations from the multicover filtration, namely the ones corresponding to the $k$-cover for $k \in \{1,2,3\}$. In particular, the case of the 1-cover reproduces the standard \v C-filtration on the underlying point pattern. As a test statistic, we also rely on the total persistence. However, for the goodness-of-fit tests discussed in Section \ref{good_mc_sec} we found it to be beneficial to restrict to features born before a threshold. In order to provide a glimpse of the potential of characteristics combining the information inherent in different covers, we also consider a bivariate quantity where we form the weighted linear combination of the total persistences from the 1- and the 2-cover with weights $1/3$ and $2/3$, respectively. Again, by our main result, this statistic is asymptotically normal in large sampling windows provided that the radii are in the sub-critical regime. We now illustrate that the normal approximation becomes accurate in moderately-sized windows and extends beyond the sub-critical regime. \subsubsection{Asymptotic normality} \label{norm_mc_sec} As in Section \ref{cech_sub}, we illustrate the asymptotic normality by means of histograms and Q-Q plots based on $n = 1,000$ realizations of the null model. Figure \ref{high_mc_fig} clearly indicates the asymptotic normality in the setting of the $k$-cover for any $k \in \{1, 2, 3\}$. \begin{figure}[!h] \begin{center} \includegraphics[width=.94\textwidth]{hist_mc} \includegraphics[width=.94\textwidth]{qq_mc} \caption{Histograms (top row) and Q-Q plot (bottom row) for the total persistence in the \v C-filtration for the 1-cover (left) and 2-cover (right).} \label{high_mc_fig} \end{center} \end{figure} % % \subsubsection{Goodness-of-fit tests} \label{good_mc_sec} When evaluating the test power for possible alternatives, we work with examples that are similar to the ones considered in Section \ref{good_mark_sec}, albeit now in two dimensions. 
More precisely, as the first alternative we take a 2D {Mat\'ern cluster process} with parent-process intensity $0.2$, where each parent generates a $\mathsf{Poi}(1)$-distributed number of offspring uniformly scattered in a disk with radius 0.5. We also work again with a Strauss process, where we now choose $\beta = 2.8$ as intensity parameter, $\gamma = 0.6$ as interaction parameter, and $R = 0.5$ as interaction radius. Finally, the cell process relies on a subdivision into 196 congruent boxes each containing either 0, 1 or 2 points with probabilities 0.45, 0.1 and 0.45, respectively. Figure \ref{pattern_mc_fig} shows the patterns. \begin{figure}[!h] \hspace{-2cm}\includegraphics[width=1.29\textwidth]{patterns_mc} \caption{Realizations of a Poisson, Mat\'ern cluster, Strauss, and cell process (from left to right).} \label{pattern_mc_fig} \end{figure} We base the evaluation of the goodness-of-fit tests on 1,000 samples from the null model and from the alternatives, where we again rely on the asymptotic normality in order to construct the acceptance regions. First, the type 1 errors presented in Table \ref{pow_mc_tab} are close to the nominal 5\% level. When evaluating the test power at the alternatives, we see that the rejection rates are higher for the 2-cover filtration in comparison with the 1-cover filtration. Depending on the point pattern, the power for the test based on the 3-cover is sometimes higher and sometimes lower than that of the 2-cover. This again gives a glimpse of the potential of extracting different filtrations from the multicover bifiltration. We also see that in general, the rejection rates are higher than the ones reported in Table \ref{pow_mark_tab}, although the latter is of course based on different point patterns, and in 3D. 
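The Baddeley-Silverman-type cell process is simple to sample: partition the window into congruent boxes and fill each independently. The sketch below (plain Python) assumes a $14 \times 14$ grid over the $10 \times 10$ window, since $196 = 14^2$; with $0 \cdot 0.45 + 1 \cdot 0.1 + 2 \cdot 0.45 = 1$ point per box on average, the intensity is $1/(10/14)^2 \approx 2$, close to the Poisson null model, as the first-order similarity claimed above requires.

```python
import random

def cell_process(window=10.0, grid=14, probs=(0.45, 0.1, 0.45), seed=0):
    # Baddeley-Silverman-type cell process: the window is split into grid^2
    # congruent boxes; each box independently receives 0, 1, or 2 points,
    # placed uniformly inside the box.
    rng = random.Random(seed)
    side = window / grid
    pts = []
    for i in range(grid):
        for j in range(grid):
            count = rng.choices((0, 1, 2), weights=probs)[0]
            for _ in range(count):
                pts.append((i * side + rng.random() * side,
                            j * side + rng.random() * side))
    return pts

pts = cell_process()
print(len(pts))  # fluctuates around 196, i.e., roughly 2 points per unit area
```

The strong dependence between the two points inside a box is what the second-order summaries largely miss, which is why this alternative is hard for Ripley's $K$-function.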
Finally, a comparison with Ripley's $K$-function reinforces the impressions from Section \ref{good_mark_sec}: although classical characteristics from spatial statistics often exhibit superior performance for simple point patterns, they may fail to detect deviations from complete spatial randomness for point patterns with complex interactions. Here, TDA-based tests can offer valuable additional insights. \begin{table}[!htpb] \begin{center} \caption{Rejection rates for goodness-of-fit tests in the multicover bifiltration.} \label{pow_mc_tab} \begin{tabular}{lllcc} & $\mathsf{Poi}$ & $\mathsf{Mat}$ & $\mathsf{Str}$ & $\mathsf{Cell}$ \\ \hline total persistence -- 1-cover & 5.6\% & 38.3\% & 24.4\% & 13.3\% \\ total persistence -- 2-cover & 5.6\% & 63\% & 59.9\% & 17.2\%\\ total persistence -- 3-cover & 4.7\% & 61.5\% & 63.4\% & 8.7\%\\ total persistence -- Bivariate(1,2) & 5.3\% & 65.4\% & 65.5\% & 20.9\%\\ Ripley's $K$-function & 4.9\% & 93.4\% & 83.1\% & 4.8\% \end{tabular} \end{center} \end{table}
\section{Introduction} \label{sec-intro} A classical approach to special functions, developed in the $19^{th}$ century, was to classify them from the point of view of solutions to second-order differential equations with analytic coefficients \begin{equation} \label{ode-1} a(z) \frac{d^{2}u}{dz^{2}} + b(z) \frac{du}{dz} + c(z) u = 0. \end{equation} \noindent Gauss understood that the singularities of this equation are central to this classification. Moreover, the point $\infty$, as a possible singularity of \eqref{ode-1}, plays an important role. The notion of regular singularities and the well-known Frobenius method to produce solutions arose from this point of view. The reader will find information about these ideas in \cite{gray-2000a} and \cite{whittaker-2020a}. It is a classical result that any differential equation with $3$ regular singular points (on $\mathbb{C} \cup \{ \infty \}$) can be transformed to the hypergeometric differential equation \begin{equation} \label{hyper-ode1} z(1-z) \frac{d^{2}w}{dz^{2}} + [ c - (a+b+1)z] \frac{dw}{dz} - a b w = 0, \end{equation} \noindent with singularities at $0, \, 1, \, \infty$. The solution of \eqref{hyper-ode1}, normalized by $w(0) = 1$, is the hypergeometric function, defined by the series expansion \begin{equation} \pFq21{a \,\,\, b}{c}{z} = \sum_{n=0}^{\infty} \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} z^{n}, \end{equation} \noindent where $(q)_{n}$ is the Pochhammer symbol defined by \begin{equation} (q)_{n} = \begin{cases} 1 & \textnormal{ if } n = 0 \\ q(q+1) \cdots (q+n-1) & \textnormal{ if } n > 0. 
\end{cases} \end{equation} \noindent Among the many representations of the hypergeometric function we single out the so-called Mellin-Barnes integral \begin{equation} \pFq21{a \,\,\, b}{c}{z} = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \times \frac{1}{2 \pi i } \int_{\gamma} \frac{\Gamma(a+s) \Gamma(b+s) \Gamma(-s)}{\Gamma(c+s)} (-z)^{s} \, ds, \end{equation} \noindent where the contour of integration $\gamma$ joins $- i \infty$ to $+ i \infty$ and separates the poles of $\Gamma(-s)$ (namely $0, \, 1, \, 2, \ldots$) from those of $\Gamma(a+s) \Gamma(b+s)$ (located at $-a, \, -a-1, \ldots, -b, -b-1, \ldots$). In the general case, the poles of the gamma factors are located on horizontal semi-axes. The contour $\gamma$ has to be chosen to separate those moving to the right from those moving to the left. Examples may be found in \cite{smirnov-2004a,smirnov-2006a}. Many important functions are obtained from the hypergeometric function by coalescing singularities. For instance, the so-called Whittaker function $W_{\kappa,\mu}(z)$, defined by the equation \begin{equation} \frac{d^{2}w}{dz^{2}}+ \left( - \frac{1}{4} + \frac{\kappa}{z} + \frac{1/4 - \mu^{2}}{z^{2}} \right) w = 0 \end{equation} also has the integral representation \begin{equation} W_{\kappa,\mu}(z) = \frac{e^{-z/2} }{2 \pi i } \int_{\gamma} z^{-s} \frac{\Gamma \left( \tfrac{1}{2} + \mu + s \right) \Gamma \left( \tfrac{1}{2} - \mu + s \right) \Gamma \left(- \kappa - s \right) } {\Gamma \left( \tfrac{1}{2} + \mu - \kappa \right) \Gamma \left( \tfrac{1}{2} - \mu - \kappa \right) } \, ds \end{equation} \noindent with a similar condition on the contour of integration. The main type of integrals considered in this work comes from Mellin transforms. This concept is recalled next. \begin{definition} Given a function $f(x)$, defined on the positive real axis $\mathbb{R}^{+}$, its Mellin transform is defined by \begin{equation} \varphi(s) = (\mathcal{M}f)(s) = \int_{0}^{\infty} x^{s-1} f(x) \, dx. 
\end{equation} \noindent This relation may be inverted by a line integral \begin{equation} \label{mellin-2} f(x) = (\mathcal{M}^{-1} \varphi)(x) = \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \varphi(s) \, ds, \end{equation} \noindent by an appropriate choice of the contour $\gamma$ of the type described above. \end{definition} \smallskip The goal of the current work is to present a procedure to evaluate inverse Mellin transforms based on the method of brackets. This is a method to evaluate definite integrals and it is described in Section \ref{sec-rules}. The class of functions considered here is of the form \begin{equation} \label{gen-fun1} \varphi(s) = \frac{\prod\limits_{j=1}^{M} \Gamma \left( a_{j}+A_{j}s\right) }{\prod\limits_{j=1}^{N}\Gamma \left( b_{j}+B_{j}s\right) }\frac{\prod\limits_{j=1}^{P}\Gamma \left( c_{j}-C_{j}s\right) }{\prod\limits_{j=1}^{Q}\Gamma \left( d_{j}-D_{j}s\right) }. \end{equation} \noindent For simplicity, the parameters $A_{j}, \, B_{j}, \, C_{j}, \, D_{j}$ are taken real and positive. Integrals of the form \eqref{mellin-2} with an integrand of the type \eqref{gen-fun1} will be referred to as \texttt{Mellin-Barnes integrals}. \begin{remark} In high energy physics, Mellin-Barnes integrals appear frequently at intermediate steps of calculations \cite{allendes-2010a,allendes-2012a,gonzalez-2017c,gonzalez-2013a,gonzalez-2013b,kniehl-2013a, gonzalez-2018a,smirnov-2004a}. These are contour integrals in which the integrands are quotients of Euler Gamma functions, as in \eqref{gen-fun1}. Contour integrals of this type provide a typical representation of generalized hypergeometric functions. The Mellin-Barnes representation of denominators in integrands corresponding to Feynman diagrams in the momentum space representation is a standard procedure in quantum field theory. 
This approach helps to calculate very difficult Feynman diagrams \cite{smirnov-2004a} and this is the reason why Mellin-Barnes integrals usually appear at the penultimate step of a calculation. \end{remark} \begin{remark} In the previous work \cite{Gonzalez:2015msa} it was mentioned that it would be interesting to combine the Mellin-Barnes integrals and the method of brackets, due to the similarity of these methods. This combination is presented here. In addition, several known relations for generalized hypergeometric functions have been reestablished by combining the method of brackets and Mellin-Barnes transformations. Our investigation has been motivated by the work of Prausa \cite{prausa-2017a}, which proposes an interesting symbolic procedure. Our work compares it with the method of brackets. \end{remark} \section{The method of brackets} \label{sec-rules} This is a method that evaluates definite integrals over the half line $[0, \, \infty)$. The application of the method consists of a small number of rules, deduced in heuristic form, some of which are placed on solid ground \cite{amdeberhan-2012b}. For $a \in \mathbb{R}$, the symbol \begin{equation} \langle a \rangle = \int_{0}^{\infty} x^{a-1} \, dx \end{equation} is the {\em bracket} associated to the (divergent) integral on the right. The symbol \begin{equation} \phi_{n} = \frac{(-1)^{n}}{\Gamma(n+1)} \end{equation} \noindent is called the {\em indicator} associated to the index $n$. The notation $\phi_{i_{1}i_{2}\cdots i_{r}}$, or simply $\phi_{12 \cdots r}$, denotes the product $\phi_{i_{1}} \phi_{i_{2}} \cdots \phi_{i_{r}}$. \medskip \noindent {\bf {\em Rules for the production of bracket series}} \smallskip The first part of the method is to associate to the integral \begin{equation} I(f) = \int_{0}^{\infty} f(x) \, dx \end{equation} a bracket series according to \noindent ${\mathbf{Rule \, \, P_{1}}}$. Assume $f$ has the expansion \begin{equation} f(x) = \sum_{n=0}^{\infty} \phi_{n} a_{n} x^{\alpha n + \beta -1 }. 
\end{equation} \noindent Then $I(f)$ is assigned the {\em bracket series} \begin{equation} I(f) = \sum_{n \geq 0} \phi_{n} a_{n} \langle \alpha n + \beta \rangle. \end{equation} \smallskip \noindent ${\mathbf{Rule \, \, P_{2}}}$. For $\alpha \in \mathbb{R}$, the multinomial power $(a_{1} + a_{2} + \cdots + a_{r})^{\alpha}$ is assigned the $r$-dimensional bracket series \begin{equation} \sum_{n_{1} \geq 0} \sum_{n_{2} \geq 0} \cdots \sum_{n_{r} \geq 0} \phi_{n_{1}\, n_{2} \, \cdots n_{r}} a_{1}^{n_{1}} \cdots a_{r}^{n_{r}} \frac{\langle -\alpha + n_{1} + \cdots + n_{r} \rangle}{\Gamma(-\alpha)}. \end{equation} \noindent ${\mathbf{Rule \, \, P_{3}}}$. Each representation of an integral by a bracket series has an associated {\em complexity index of the representation}, defined via \begin{equation} \text{complexity index } = \text{number of sums } - \text{ number of brackets}. \end{equation} \noindent It is important to observe that the complexity index is attached to a specific representation of the integral and not just to the integral itself. The experience obtained by the authors using this method suggests that, among all representations of an integral as a bracket series, the one with {\em minimal complexity index} should be chosen. The level of difficulty in the analysis of the resulting bracket series increases with the complexity index. \medskip \noindent {\bf {\em Rules for the evaluation of a bracket series}} \smallskip \noindent ${\mathbf{Rule \, \, E_{1}}}$. Let $a, \, b \in \mathbb{R}$. The one-dimensional bracket series is assigned the value \begin{equation} \label{mult-sum} \sum_{n \geq 0} \phi_{n} f(n) \langle an + b \rangle = \frac{1}{|a|} f(n^{*}) \Gamma(-n^{*}), \end{equation} \noindent where $n^{*}$ is obtained from the vanishing of the bracket; that is, $n^{*}$ solves $an+b = 0$. This is precisely Ramanujan's Master Theorem. 
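Rule $E_{1}$ is easy to test numerically. The sketch below (in Python; the test integrand $e^{-x^{3}}$ and all grid parameters are our own choices for illustration, not part of the method) expands $\int_{0}^{\infty} x^{s-1} e^{-x^{3}} \, dx$ as the bracket series $\sum_{n} \phi_{n} \langle 3n + s \rangle$, which Rule $E_{1}$ evaluates (with $f \equiv 1$, $a = 3$, $b = s$, $n^{*} = -s/3$) to $\tfrac{1}{3} \Gamma(s/3)$, and compares this value with direct numerical quadrature.

```python
import math

def integral_numeric(s, upper=10.0, n=200_000):
    # Midpoint rule for the integral of x^(s-1) * exp(-x^3) over [0, upper];
    # the tail beyond upper = 10 is negligible (exp(-1000)).
    h = upper / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x ** (s - 1.0) * math.exp(-x ** 3)
    return total * h

def bracket_value(s):
    # Rule E1 applied to sum_n phi_n <3n + s>: here f(n) = 1,
    # n* solves 3n + s = 0, and the value is (1/|3|) * Gamma(-n*).
    n_star = -s / 3.0
    return (1.0 / 3.0) * math.gamma(-n_star)

# e.g. s = 2: both values should be close to Gamma(2/3)/3 = 0.45137...
```

The prefactor $1/|a| = 1/3$ in Rule $E_{1}$ is visible here: dropping it makes the two evaluations disagree by a factor of $3$.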
\smallskip The next rule provides a value for multi-dimensional bracket series of index $0$, that is, the number of sums is equal to the number of brackets. \smallskip \noindent ${\mathbf{Rule \, \, E_{2}}}$. Let $a_{ij} \in \mathbb{R}$. Assuming the matrix $A = (a_{ij})$ is non-singular, then the assignment is \begin{equation} \sum_{n_{1} \geq 0} \cdots \sum_{n_{r} \geq 0} \phi_{n_{1} \cdots n_{r}} f(n_{1},\cdots,n_{r}) \langle a_{11}n_{1} + \cdots + a_{1r}n_{r} + c_{1} \rangle \cdots \langle a_{r1}n_{1} + \cdots + a_{rr}n_{r} + c_{r} \rangle \nonumber \end{equation} \begin{equation} = \frac{1}{| \text{det}(A) |} f(n_{1}^{*}, \cdots n_{r}^{*}) \Gamma(-n_{1}^{*}) \cdots \Gamma(-n_{r}^{*}) \nonumber \end{equation} \noindent where $\{ n_{i}^{*} \}$ is the (unique) solution of the linear system obtained from the vanishing of the brackets. There is no assignment if $A$ is singular. \smallskip \noindent ${\mathbf{Rule \, \, E_{3}}}$. The value of a multi-dimensional bracket series of positive complexity index is obtained by computing all the contributions of maximal rank by Rule $E_{2}$. These contributions to the integral appear as series in the free indices. Series converging in a common region are added and divergent or null series are discarded. There is no assignment to a bracket series of negative complexity index. If all the resulting series are discarded, then the method is not applicable. \begin{remark} There is a small collection of formal operational rules for brackets. These will be used in the calculations presented below. \begin{rules} For any $\alpha \in \mathbb{R}$ the bracket satisfies $\langle - \alpha \rangle = \langle \alpha \rangle$. \end{rules} \begin{proof} This follows from the change of variables $x \mapsto 1/x$ in $\displaystyle \langle - \alpha \rangle = \int_{0}^{\infty} x^{-\alpha - 1} \, dx$. \end{proof} A similar change of variables gives the next scaling rule: \begin{rules} \label{rule2.2} For any $\alpha, \, \beta, \, \gamma \in \mathbb{R}$ with $\alpha \neq 0$ the bracket satisfies \begin{equation} \langle \alpha \gamma + \beta \rangle = \frac{1}{| \alpha |} \left\langle \gamma + \frac{\beta}{\alpha} \right\rangle. \end{equation} \noindent This can be deduced from Rule $E_{1}$. \end{rules} \begin{rules} For any $\alpha, \, \beta \in \mathbb{R}$ with $\alpha \neq 0$, any $n \in \mathbb{N}$ appearing as the index of a sum, and any allowable function $F$, the identity \begin{equation} F(n) \langle \alpha n + \beta \rangle = \frac{1}{| \alpha |} F \left( - \frac{\beta}{\alpha} \right) \left\langle n + \frac{\beta}{\alpha} \right\rangle \end{equation} \noindent holds, in the sense that any appearance of the left-hand side in a bracket series may be replaced by the right-hand side. \end{rules} \begin{proof} This follows directly from Rule $E_{1}$ for the evaluation of bracket series. \end{proof} \end{remark} \section{Some operational rules for integration} \label{sec-operational} This section describes the relation between line integrals, like those appearing in the inverse Mellin transform, and the method of brackets. These complement the rules given for discrete sums. The results presented here have appeared in \cite{prausa-2017a} as an extension of the method of brackets and were used to produce minimal Mellin-Barnes representations of integrals appearing in connection with Feynman diagrams. \subsection{The equivalence of brackets and Dirac's delta} Let $f$ be a function defined on $\mathbb{R}^{+}$ and consider its Mellin transform \begin{equation} F(s) = \int_{0}^{\infty} x^{s-1} f(x) \, dx, \label{mellin-form1} \end{equation} \noindent with inversion rule \begin{equation} f(x) = \frac{1}{2 \pi i } \int_{\gamma} x^{-s} F(s) \, ds, \label{mellin-form2} \end{equation} \noindent with the usual convention on the contour $\gamma$. 
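Before proceeding, it is useful to see the pair \eqref{mellin-form1}--\eqref{mellin-form2} in action numerically. The sketch below (in Python; the test function $f(x) = e^{-x}$, the abscissa $c = 3/2$ and all truncation parameters are our own illustrative choices, not part of the method) computes $F(c+it)$ by quadrature after the substitution $x = e^{u}$, which makes the oscillatory phase linear in $u$, and then recovers $f$ from a truncated version of the contour integral.

```python
import cmath
import math

C = 1.5  # contour abscissa; any c > 0 lies in the strip for f(x) = exp(-x)

def mellin(t, n=1200, lo=-30.0, hi=6.0):
    # F(C + it) = integral of exp((C + it)u - e^u) du, from x = e^u;
    # the integrand decays (double-)exponentially at both ends, so a
    # plain Riemann sum on a uniform grid is extremely accurate here.
    s = complex(C, t)
    du = (hi - lo) / n
    total = sum(cmath.exp(s * (lo + k * du) - math.exp(lo + k * du))
                for k in range(n))
    return total * du

def inverse(x, m=800, T=40.0):
    # f(x) ~ (1/2*pi) * integral over t in [-T, T] of x^{-(C+it)} F(C+it);
    # |Gamma(C+it)| decays like exp(-pi*|t|/2), so T = 40 is ample.
    dt = 2.0 * T / m
    total = sum(x ** (-complex(C, -T + j * dt)) * mellin(-T + j * dt)
                for j in range(m + 1))
    return (total * dt / (2.0 * math.pi)).real
```

For $f(x) = e^{-x}$ the transform is $\Gamma(s)$, so `mellin(0.0)` should return approximately $\Gamma(3/2) = \sqrt{\pi}/2$, and `inverse` should reproduce $e^{-x}$ on $(0,\infty)$.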
Then, substituting \eqref{mellin-form2} into \eqref{mellin-form1} produces \begin{eqnarray} F(s) & = & \int_{0}^{\infty} x^{s-1} \left[ \frac{1}{2 \pi i } \int_{\gamma} x^{-s'} F(s') \, ds' \right] \, dx \\ & = & \frac{1}{2 \pi i } \int_{\gamma} F(s') \left[ \int_{0}^{\infty} x^{s - s' - 1} \, dx \right] \, ds' \nonumber \\ & = & \frac{1}{2 \pi i } \int_{\gamma} F(s') \langle s - s' \rangle \, ds'. \nonumber \end{eqnarray} This proves: \begin{rules} \label{thm-fun1} The rule for integration with respect to brackets is given by \begin{equation} \int_{\gamma} F(s) \langle s + \alpha \rangle \, ds = 2 \pi i F(-\alpha) \end{equation} \noindent where $\gamma$ is a contour of the usual type. The generalization \begin{equation} \int_{\gamma} F(s) \langle \beta s + \alpha \rangle \, ds = \frac{2 \pi i }{| \beta| } F \left( - \frac{\alpha}{\beta} \right), \quad \textnormal{with} \,\, \beta \in \mathbb{R}, \; \beta \neq 0, \end{equation} \noindent can be obtained from Rule \ref{rule2.2}. \end{rules} \begin{remark} From an operational point of view, the result in Theorem \ref{thm-fun1} may be written as \begin{equation} \langle s + \alpha \rangle = 2 \pi i \delta(s+ \alpha), \end{equation} \noindent where $\delta$ is Dirac's delta function. \end{remark} \subsection{Mellin transform} \label{MTB} This section contains a brief review of the Mellin transform. Recall that this is defined by \begin{equation} \label{MT} \mathcal{M}(f)(z) = \int_0^{\infty} x^{z-1}f(x)~dx, \end{equation} where the arguments are the function $f$ to be transformed and the variable $z$ appearing in the integral. The inverse Mellin transformation is \begin{equation} \label{MT Cauchy} f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c + i\infty}x^{-z}\mathcal{M}(f)(z) \, dz \quad \textnormal{ for } x \in (0,\infty). 
\end{equation} The point $c$ associated to the contour of integration must be in the vertical strip $c_1 < c < c_2$, with boundaries determined by the condition that \begin{eqnarray} \label{holom} \int_0^{1} x^{c_1-1}f(x)~dx & {\rm and } & \int_1^{\infty} x^{c_2-1}f(x)~dx \end{eqnarray} must be finite. This is satisfied if the function $f$ satisfies the growth conditions \begin{equation*} |f(x)| < 1/x^{c_1} \quad {\rm when} \; x \to +0, \textnormal{ and } \quad |f(x)|< 1/x^{c_2} \quad \textnormal{ when} \; x \to + \infty . \end{equation*} The conditions (\ref{holom}) imply that the Mellin transform $\mathcal{M}(f)(z)$ is holomorphic in the vertical strip $c_1 < {\rm Re}~z < c_2$. The asymptotic behavior of the integrand is then used to determine the direction in which a finite segment of the vertical line contour is closed in order to produce a contour to apply Cauchy's integral theorem. The singularities of the integrand are then used to analyze the behavior of the integrals as the finite segment becomes infinite. One of the simplest examples of the Mellin transformation is \begin{eqnarray*} \Gamma(z) = \int_0^{\infty} e^{-x}x^{z-1}~dx & {\rm and } & e^{-x} = \frac{1}{2\pi i}\int_{c-i\infty}^{c + i\infty}x^{-z} \Gamma(z) ~dz. \end{eqnarray*} The contour in the complex plane is the vertical line ${\rm Re}~z = c$, with $c$ in the strip $0 < c < A$ for an arbitrary positive real number $A$; to recover $e^{-x}$, the vertical line contour must be closed to the left. It is convenient to include here a proof of equations (\ref{MT}-\ref{MT Cauchy}). This is classical and may be found in any textbook on the theory of complex variables. It is reproduced here for pedagogical reasons. 
First, use the fact that $\mathcal{M}(f)(z)$ is holomorphic in the strip $c_1 < {\rm Re}~z < c_2$, then taking $\delta > 0$ to be arbitrarily small, \begin{eqnarray} & & \label{MT direct} \\ \mathcal{M}(f) (z) & = & \int_0^{\infty} x^{z-1}f(x)~dx \nonumber \\ & = & \frac{1}{2\pi i}\int_0^{\infty} x^{z-1} dx\int_{c-i\infty}^{c + i\infty}x^{-\omega} \mathcal{M}(f)(\omega) ~d\omega \nonumber \\ & = & \frac{1}{2\pi i}\int_0^1 x^{z-1} dx\int_{c-i\infty}^{c + i\infty}x^{-\omega} \mathcal{M}(f)(\omega) ~d\omega \nonumber \\ & & \quad \quad + \frac{1}{2\pi i}\int_1^\infty x^{z-1} dx\int_{c-i\infty}^{c + i\infty}x^{-\omega} \mathcal{M}(f) (\omega) ~d\omega \nonumber \\ & = & \frac{1}{2\pi i}\int_0^1 x^{z-1} dx\int_{c_1-\delta-i\infty}^{c_1-\delta + i\infty}x^{-\omega} \mathcal{M}(f)(\omega) ~d\omega \nonumber \\ & & \quad \quad + \frac{1}{2\pi i}\int_1^\infty x^{z-1} dx\int_{c_2+\delta-i\infty}^{c_2+\delta + i\infty}x^{-\omega} \mathcal{M}(f)(\omega) ~d\omega \nonumber \\ & & = \frac{1}{2\pi i}\int_{c_1-\delta-i\infty}^{c_1-\delta + i\infty}\frac{\mathcal{M}(f)(\omega)}{z-\omega} ~d\omega - \frac{1}{2\pi i}\int_{c_2+\delta-i\infty}^{c_2+\delta + i\infty}\frac{\mathcal{M}(f)(\omega)}{z-\omega} ~d\omega \nonumber \\ & & = \frac{1}{2\pi i}\int_{c_1-\delta+i\infty}^{c_1-\delta - i\infty}\frac{\mathcal{M}(f)(\omega)}{\omega-z} ~d\omega + \frac{1}{2\pi i}\int_{c_2+\delta-i\infty}^{c_2+\delta + i\infty}\frac{\mathcal{M}(f)(\omega)}{\omega-z} ~d\omega \nonumber \\ & & = \frac{1}{2\pi i}\oint_{CR}\frac{\mathcal{M}(f)(\omega)}{\omega-z} ~d\omega, \nonumber \end{eqnarray} \noindent where $CR$ is a rectangular contour constructed from the two vertical lines from (\ref{MT direct}) supplemented by two horizontal lines at the imaginary complex infinities of the strip $c_1 < {\rm Re}~z < c_2.$ The contour $CR$ is closed in the counterclockwise orientation. Since $z$ lies inside $CR$, the last contour integral equals $\mathcal{M}(f)(z)$ by Cauchy's integral formula, which completes the argument. The proof of the inverse transformation is even simpler and may be used to define the Dirac $\delta$-function. 
Observe that \begin{eqnarray} \label{MT inverse} f(x) & = & \frac{1}{2\pi i}\int_{c-i\infty}^{c + i\infty}x^{-z} \mathcal{M}(f)(z) ~dz \\ & = & \frac{1}{2\pi i}\int_{c-i\infty}^{c + i\infty}x^{-z}~dz\int_0^{\infty} y^{z-1}f(y)~dy \nonumber \\ & = & \int_0^{\infty}\delta(\ln{(y/x)}) y^{-1}f(y)~dy = f(x), \nonumber \end{eqnarray} which is valid in view of the relation \begin{eqnarray} \label{delta} \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty}e^{(x-y)z} dz & = & \frac{1}{2\pi } \int_{-\infty}^{\infty}e^{(x-y)(c +i\tau)} d\tau \\ & = & \frac{e^{(x-y)c}}{2\pi }\int_{-\infty}^{\infty}e^{i(x-y)\tau} d\tau \nonumber \\ & = & e^{(x-y)c}\delta(x-y) = \delta(x-y). \nonumber \end{eqnarray} This proof of the inverse Mellin transformation is due to D. Hilbert and may be found in any good textbook on complex analysis. In this paper we show that the method of brackets, when applied to Mellin integrals, is in some sense equivalent to this classical proof of the inverse Mellin transformation. {\it More precisely, by using the same device as in this classical proof, namely splitting the integration over the variable $x \in [0,\infty)$ into the two parts $x \in [0,1]$ and $x \in [1,\infty)$, and thereby creating a closed contour in the plane of the complex variable, all the rules of the method of brackets may be mapped to the Cauchy integral formula.} In high energy particle physics, in order to solve integro-differential equations representing the evolution of important physical quantities, the transformation to Mellin moments is frequently used; see, for example, \cite{DGLAP}. The inverse transformation of the Mellin moments has exactly the same form as the inverse Mellin transformation. A question then arises: the inverse Mellin transformation returns some function of a real variable, but how do we know whether this function was recovered from its Mellin moments or from its Mellin transform in the complex plane? 
The Mellin moments may be distinguished from the Mellin transforms by studying the asymptotic behaviour of the given function at complex infinity. \begin{remark} The question of complexity, as introduced in \textbf{Rule} $\mathbf{P_{3}}$, is now extended. In the process of evaluating an integral by the method of brackets, define $\sigma$ to be the number of sums plus the number of contour integrals appearing and $\delta$ to be the number of brackets plus the number of integrals on the half-line $[0, \, \infty)$ that appear. The (generalized) index of complexity is $\iota = \sigma - \delta$. This index should be seen as a measure of difficulty in the evaluation of the integral by the method of brackets. In the case $\iota = 0$, the answer is given by a single term. For $\iota > 0$, the gamma factors appearing in the numerators of the line integrals must be expanded in bracket series. This guarantees that the method provides all series representations of the solution. As usual, series converging in a common region must be added. The heuristic observation is that the bracket/line integral representation of a problem should be chosen so as to minimize the index $\iota$. \end{remark} \subsection{Multiple integrals} The method discussed here can be extended to evaluate multiple integrals with a bracket representation \begin{multline} J = \left( \frac{1}{2 \pi i} \right)^{N} \int_{\gamma_{1}} ds_{1} \cdots \int_{\gamma_{N}} ds_{N} \\ F(s_{1}, \cdots, s_{N}) \langle a_{11}s_{1} + \cdots a_{1N}s_{N} + c_{1} \rangle \cdots \langle a_{N1}s_{1} + \cdots a_{NN}s_{N} + c_{N} \rangle, \end{multline} \noindent by using the one-dimensional rule in iterated form. 
The expression for $J$ has the form \begin{equation} J = \frac{1}{| \det(A) |} F(s_{1}^{*}, \cdots, s_{N}^{*}) \end{equation} \noindent where $A = (a_{ij})$, with $a_{ij} \in \mathbb{R}$ and $\{ s_{i}^{*} \} \, \, (i=1, \ldots, N)$ is the solution of the system $A \vec{s} = - \vec{c}$ produced by the vanishing of the arguments in the brackets appearing in this process. \section{Transforming Mellin-Barnes integrals to bracket series} \label{sec-MB-to-brackets} This section evaluates Mellin-Barnes integrals by transforming them into a bracket series. The rules of Section \ref{sec-rules} are then used to produce an analytic expression for the integral. \begin{lemma} \label{gamma-brackets1} The gamma function has the bracket series representation \begin{equation} \Gamma(\alpha) = \sum_{n=0}^{\infty} \phi_{n} \langle \alpha + n \rangle. \label{gamma-brack1} \end{equation} \end{lemma} \begin{proof} This follows simply from expanding $e^{-t}$ in power series in the integral representation of the gamma function to obtain \begin{equation} \Gamma(\alpha) = \int_{0}^{\infty} t^{\alpha-1} e^{-t} \, dt = \int_{0}^{\infty} t^{\alpha - 1} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} t^{n} \, dt = \sum_{n=0}^{\infty} \phi_{n} \langle \alpha + n \rangle. \end{equation} \end{proof} In this section we present a systematic procedure to evaluate Mellin-Barnes integrals of the form \begin{equation} \label{mellin-int0} I(x) = \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \frac{ \prod\limits_{j=1}^{M} \Gamma(a_{j} + A_{j}s) \prod\limits_{j=1}^{P} \Gamma(c_{j} - C_{j}s) } { \prod\limits_{j=1}^{N} \Gamma(b_{j} + B_{j}s) \prod\limits_{j=1}^{Q} \Gamma(d_{j} - D_{j} s) } \, ds \end{equation} \noindent where $\gamma$ is the usual vertical line contour. A similar argument has appeared in \cite{prausa-2017a}. The idea is to use the method of brackets to produce the bracket series associated to \eqref{mellin-int0}. 
The parameters satisfy the rules $a_{j}, \, b_{j}, \, c_{j}, \, d_{j} \in \mathbb{C}$ and $A_{j}, \, B_{j}, \, C_{j}, \, D_{j} \in \mathbb{R}^{+}$ for the index $j$ in the corresponding range; for instance, the $j$ associated to $a_{j}, \, A_{j}$ vary from $1$ to $M$. The procedure to obtain the bracket series is systematic: the gamma factors in the numerator are replaced using formula \eqref{gamma-brack1} (the gamma factors in the denominator do not contribute): \begin{eqnarray} \prod_{j=1}^{M} \Gamma(a_{j}+A_{j}s) & = & \prod_{j=1}^{M} \left[ \sum_{k_{j}} \phi_{k_{j}} \langle a_{j} + A_{j} s + k_{j} \rangle \right] \label{gamma-brack2} \\ & = & \sum_{k_{1}} \cdots \sum_{k_{M}} \phi_{k_{1} \cdots k_{M}} \prod_{j=1}^{M} \langle a_{j} + A_{j} s + k_{j} \rangle \nonumber \end{eqnarray} \noindent and similarly \begin{eqnarray} \prod_{j=1}^{P} \Gamma(c_{j}-C_{j}s) & = & \prod_{j=1}^{P} \left[ \sum_{\ell_{j}} \phi_{\ell_{j}} \langle c_{j} - C_{j} s + \ell_{j} \rangle \right] \label{gamma-brack3} \\ & = & \sum_{\ell_{1}} \cdots \sum_{\ell_{P}} \phi_{\ell_{1} \cdots \ell_{P}} \prod_{j=1}^{P} \langle c_{j} - C_{j} s + \ell_{j} \rangle. \nonumber \end{eqnarray} The rules of the method of brackets described in Section \ref{sec-rules} now yield a bracket series associated with the integral \eqref{mellin-int0}. To illustrate these ideas, introduce the function \begin{equation} G(s) = \frac{1}{\prod\limits_{j=1}^{N} \Gamma(b_{j} + B_{j}s)} \times \frac{1}{\prod\limits_{j=1}^{Q} \Gamma(d_{j} - D_{j}s)}. \end{equation} \noindent The previous rules transform the integral $I(x)$ in \eqref{mellin-int0} to \begin{eqnarray} I(x) & = & \frac{1}{2 \pi i } \sum_{k_{1}} \cdots \sum_{k_{M}} \sum_{\ell_{1}} \cdots \sum_{\ell_{P}} \phi_{k_{1} \cdots k_{M} \ell_{1} \cdots \ell_{P}} \\ && \times \int_{\gamma} x^{-s} G(s) \left[ \prod_{j=1}^{M} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \, \left[ \prod_{j=1}^{P} \langle c_{j} - C_{j}s + \ell_{j} \rangle \right] \, ds. 
\nonumber \end{eqnarray} This will be written in the more compact form \begin{equation} I(x) = \frac{1}{2 \pi i } \sum_{\{ k \} } \sum_{\{ \ell \}} \phi_{\{k\}, \, \{\ell \}} \int_{\gamma} x^{-s} G(s) \left[ \prod_{j=1}^{M} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \, \left[ \prod_{j=1}^{P} \langle c_{j} - C_{j}s + \ell_{j} \rangle \right] \, ds. \label{int-mellin1} \end{equation} Now select the bracket $\langle a_{M} + A_{M}s + k_{M} \rangle $ to evaluate the integral \eqref{int-mellin1}. Any other choice of bracket gives an equivalent value for $I(x)$. Start with \begin{multline} \label{int-mellin2} I(x) = \sum_{\{ k \} } \sum_{\{ \ell \}} \phi_{ \{ k \}, \{ \ell \}} \int_{\gamma} x^{-s} G(s) \\ \left[ \prod_{j=1}^{M-1} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \, \left[ \prod_{j=1}^{P} \langle c_{j} - C_{j}s + \ell_{j} \rangle \right] \, \frac{\langle a_{M} + A_{M} s + k_{M} \rangle}{2 \pi i } \, ds. \nonumber \end{multline} \noindent The rules of the method of brackets require solving the linear equation coming from the vanishing of the last bracket and then applying Rule \ref{thm-fun1}. This produces \begin{equation} s^{*} = - \frac{a_{M} + k_{M}}{A_{M}}. 
\end{equation} \noindent Therefore \begin{multline} \label{int-mellin3} I(x) = \frac{1}{|A_{M}|} \sum_{\{ k \} } \sum_{\{ \ell \}} \phi_{ \{ k \}, \{ \ell \}} x^{-s^{*}} G(s^{*}) \prod_{j=1}^{M-1} \langle a_{j} + A_{j}s^{*} + k_{j} \rangle \prod_{j=1}^{P} \langle c_{j} - C_{j}s^{*} + \ell_{j} \rangle. \end{multline} \noindent Thus, the value of the integral $I(x)$ obtained from the selection of the bracket $\langle a_{M} + A_{M}s + k_{M} \rangle $ is given by \begin{multline} I(x) = \frac{x^{a_{M}/A_{M}} }{|A_{M} |} \sum_{\{ k \}} \sum_{\{\ell \}} \phi_{\{ k \}, \{ \ell \}} x^{k_{M}/A_{M}} \\ \frac{ \prod\limits_{j=1}^{M-1} \left \langle a_{j} - \frac{A_{j} a_{M} }{A_{M} } - \frac{A_{j} k_{M} } {A_{M} } + k_{j} \right \rangle \prod\limits_{j=1}^{P} \left \langle c_{j} + \frac{C_{j} a_{M} }{A_{M} } + \frac{C_{j} k_{M} } {A_{M} } + \ell_{j} \right \rangle } { \prod\limits_{j=1}^{N} \left \langle b_{j} - \frac{B_{j} a_{M} }{A_{M} } - \frac{B_{j} k_{M} } {A_{M} } \right \rangle \prod\limits_{j=1}^{Q} \left \langle d_{j} + \frac{D_{j} a_{M} }{A_{M} } + \frac{D_{j} k_{M} } {A_{M} } \right \rangle }. \end{multline} Observe that one obtains a total of $M+P$ series representations for the integral $I(x)$. There are $P$ of them in the argument $x^{-1/C_{j}}$ and the remaining $M$ of them in the argument $x^{1/A_{j}}$. This procedure extends without difficulty to multiple integrals. \medskip These ideas are illustrated next. \begin{example} The hypergeometric function ${_{2}}F_{1}$, defined by the series \begin{equation} \label{hyper-1} \pFq21{a,b}{c}{x} = \sum_{n=0}^{\infty} \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} x^{n}, \end{equation} for $|x| < 1$, admits the Mellin-Barnes representation \begin{equation} \label{barnes-hyper1} \pFq21{a,b}{c}{x} = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \frac{1}{2 \pi i } \int_{\gamma} \frac{\Gamma(-s) \Gamma(s+a) \Gamma(s+b)}{\Gamma(s+c)} (-x)^{s} \, ds \end{equation} \noindent as a contour integral. 
This appears as entry $9.113$ in \cite{gradshteyn-2015a}. The integral in \eqref{barnes-hyper1} is now used to obtain the series representation \eqref{hyper-1}. As an added consequence, this method will also produce an analytic continuation of the series \eqref{hyper-1} to the domain $|x|>1$. The starting point is now the right-hand side of \eqref{barnes-hyper1} \begin{equation} G(a,b,c;x) = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \frac{1}{2 \pi i } \int_{\gamma} \frac{\Gamma(-s) \Gamma(s+a) \Gamma(s+b)}{\Gamma(s+c)} (-x)^{s} \, ds \end{equation} \noindent and using \eqref{gamma-brack1} in the three gamma factors yields \begin{equation} \label{form-G} G(a,b,c;x) = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \frac{1}{2 \pi i } \sum_{n_{1},n_{2},n_{3}} \phi_{123} \int_{\gamma} \frac{(-x)^{s} \langle -s+n_{1} \rangle \langle s+ a + n_{2} \rangle \langle s+b + n_{3} \rangle}{\Gamma(s+c)} ds. \end{equation} \noindent The gamma term in the denominator has no poles, so it is not expanded. In order to evaluate the expression \eqref{form-G}, select the bracket containing the index $n_{1}$ and use Theorem \ref{thm-fun1} to obtain the bracket series: \begin{equation} \label{value-G} G(a,b,c;x) = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \sum_{n_{1},n_{2},n_{3}} \phi_{123} \frac{(-x)^{n_{1}}}{\Gamma(n_{1}+c)} \langle a + n_{1} + n_{2} \rangle \langle b+n_{1}+n_{3} \rangle. \end{equation} \noindent The evaluation of this series is done according to the rules given in Section \ref{sec-rules}. \smallskip \noindent \texttt{Take $n_{1}$ as the free index}. Then the indices $n_{2}, \, n_{3}$ are determined by the system \begin{equation} a+n_{1}+n_{2} = 0 \quad \text{ and } \quad b+n_{1}+n_{3} = 0, \end{equation} \noindent which gives $n_{2} = - a - n_{1}$ and $n_{3} = -b-n_{1}$. 
Then \eqref{mult-sum} produces \begin{equation} G_{1}(a,b,c;x) = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \sum_{n_{1}=0}^{\infty} \phi_{1} \frac{(-x)^{n_{1}}}{\Gamma(n_{1}+c)} \Gamma(a+n_{1}) \Gamma(b+n_{1}), \end{equation} \noindent where the index on $G_{1}$ is used to indicate that this sum comes from the free index $n_{1}$. This reduces to \eqref{hyper-1}, showing that \begin{equation} G_{1}(a,b,c;x) = \sum_{n=0}^{\infty} \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} x^{n}. \end{equation} \noindent The series on the right converges for $|x|<1$. This recovers equation \eqref{hyper-1}. \smallskip \noindent \texttt{Take $n_{2}$ as the free index}. Then the vanishing of the brackets gives $n_{1} = -a - n_{2}$ and $n_{3} = -b+a+n_{2}$, so that \begin{equation} \label{series-3} G_{2}(a,b,c;x) = \frac{\Gamma(c)}{\Gamma(a) \Gamma(b)} \sum_{n_{2}=0}^{\infty} \phi_{2} \frac{(-x)^{-a-n_{2}}}{\Gamma(-a-n_{2}+c)} \Gamma(a+n_{2}) \Gamma(b-a-n_{2}). \end{equation} \noindent Using $\Gamma(u+m) = \Gamma(u) (u)_{m}$ converts \eqref{series-3} into \begin{equation} \label{series-4} G_{2}(a,b,c;x) = \frac{\Gamma(c) \Gamma(b-a)}{ \Gamma(b) \Gamma(c-a)} \sum_{n_{2}=0}^{\infty} \phi_{2} \frac{(-x)^{-a-n_{2}}}{ (c-a)_{-n_{2}}} (a)_{n_{2}} (b-a)_{-n_{2}}. \end{equation} \noindent The final step uses the transformation rule \begin{equation} (u)_{-n} = \frac{(-1)^{n}}{(1-u)_{n}} \end{equation} \noindent to eliminate the negative indices on the Pochhammer symbols and convert \eqref{series-4} into \begin{eqnarray} \label{series-5} \quad G_{2}(a,b,c;x) & = & \frac{\Gamma(c) \Gamma(b-a)}{ \Gamma(b) \Gamma(c-a)} \sum_{n_{2}=0}^{\infty} \phi_{2} \frac{(-x)^{-a-n_{2}}}{ (1-b+a)_{n_{2}}} (a)_{n_{2} } (1-c+a)_{n_{2}} \\ & = & (-x)^{-a} \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} \sum_{n_{2}=0}^{\infty} \frac{(a)_{n_{2}} (1-c+a)_{n_{2}}}{(1-b+a)_{n_{2}} \, n_{2}!} x^{-n_{2}}. 
\nonumber \end{eqnarray} \noindent The series on the right is identified as a hypergeometric series and it yields \begin{equation} G_{2}(a,b,c;x) = (-x)^{-a} \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} \pFq21{a \,\,\,\, \,\, 1-c+a}{1-b+a}{\frac{1}{x}} \end{equation} \noindent and this series converges for $|x|>1$. \smallskip \noindent \texttt{Finally take $n_{3}$ as the free index}. This case is similar to the previous one and it produces \begin{equation} G_{3}(a,b,c;x) = (-x)^{-b} \frac{\Gamma(c) \Gamma(a-b)}{\Gamma(a) \Gamma(c-b)} \pFq21{b \,\,\,\,\,\, 1-c+b}{1-a+b}{\frac{1}{x}} \end{equation} \noindent and this series also converges for $|x|>1$. The rules in Section \ref{sec-rules} state that if in the evaluation of an integral one obtains a collection of series, coming from choices of free indices, those converging in a common region must be added. Thus, the integral $G(a,b,c;x)$ in \eqref{value-G} has the representations \begin{equation} G(a,b,c;x) = \pFq21{a \,\, b}{c}{x} \quad \text{for} \,\, |x| < 1 \end{equation} \noindent and \begin{multline} G(a,b,c;x) = (-x)^{-a} \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} \pFq21{a \,\,\,\, \,\, 1-c+a}{1-b+a}{\frac{1}{x}} \label{analytic1} \\ + (-x)^{-b} \frac{\Gamma(c) \Gamma(a-b)}{\Gamma(a) \Gamma(c-b)} \pFq21{b \,\,\,\,\,\, 1-c+b}{1-a+b}{\frac{1}{x}}, \quad \text{for} \,\, |x|>1. \end{multline} Therefore we have obtained an analytic continuation of the hypergeometric function $_{2}F_{1}(a,b,c;x)$ from $|x|<1$ to the exterior of the unit circle. The identity \eqref{analytic1} appears as entry $9.132.2$ in \cite{gradshteyn-2015a}. \end{example} \section{Inverse Mellin transforms} \label{sec-MB-IMT} The method of brackets is now used to evaluate integrals of the form \eqref{mellin-2} \begin{equation} f(x) = \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \varphi(s) \, ds, \end{equation} \noindent where $\gamma$ is a vertical line on $\mathbb{C}$, adjusted to each problem. 
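Before turning to these inversions, we note that the continuation formula \eqref{analytic1} obtained above lends itself to a direct numerical check. The sketch below (in Python; the parameter values $a,b,c$, the evaluation point $x=-2$, and the use of the classical Pfaff transformation $\pFq21{a,\,b}{c}{x} = (1-x)^{-a} \, \pFq21{a,\,c-b}{c}{\tfrac{x}{x-1}}$ to evaluate the left-hand side are our own choices for illustration) compares the two sides of \eqref{analytic1} at a point outside the unit disk.

```python
import math

def hyp2f1(a, b, c, x, terms=200):
    # Direct power series for 2F1(a, b; c; x), valid for |x| < 1.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

def hyp2f1_big(a, b, c, x):
    # Pfaff transformation: maps x < 0 into the interval (0, 1),
    # where the direct series converges.
    return (1.0 - x) ** (-a) * hyp2f1(a, c - b, c, x / (x - 1.0))

a, b, c, x = 0.3, 0.7, 1.1, -2.0
g = math.gamma

lhs = hyp2f1_big(a, b, c, x)            # 2F1(a, b; c; x) at x = -2
rhs = ((-x) ** (-a) * g(c) * g(b - a) / (g(b) * g(c - a))
       * hyp2f1(a, 1 - c + a, 1 - b + a, 1.0 / x)
       + (-x) ** (-b) * g(c) * g(a - b) / (g(a) * g(c - b))
       * hyp2f1(b, 1 - c + b, 1 - a + b, 1.0 / x))
```

The two evaluations agree to high precision, consistent with entry $9.132.2$ of \cite{gradshteyn-2015a}.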
Given $\varphi(s)$, the function $f(x)$ has Mellin transform $\varphi$. \begin{example} \label{example-1} Consider the function $\varphi(s) = \Gamma(s-a).$ Its inverse Mellin transform is given by \begin{equation} \label{ex-1} f(x)= \frac{1}{2 \pi i} \int_{\gamma} x^{-s} \Gamma(s-a) \, ds. \end{equation} \noindent Now use \eqref{gamma-brack1} to write \begin{equation} \Gamma(s-a) = \sum_{n} \phi_{n} \langle s - a + n \rangle \end{equation} \noindent and \eqref{ex-1} yields \begin{equation} f(x) = \sum_{n} \phi_{n} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \langle s-a+n \rangle \, ds \end{equation} \noindent Theorem \ref{thm-fun1} now gives \begin{equation} f(x)= \sum_{n} \phi_{n} x^{n-a} = x^{-a}e^{-x}. \end{equation} \noindent This is written as \begin{equation} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \Gamma(s-a) \, ds = x^{-a} e^{-x}, \end{equation} \noindent or equivalently \begin{equation} \int_{0}^{\infty} x^{s-a-1}e^{-x} \, dx = \Gamma(s-a). \end{equation} \noindent Replacing $s-a$ by $s$, this is the integral definition of the gamma function. \end{example} \begin{example} \label{example-1a} The inverse Mellin transform of $\varphi(s) = \Gamma(a-s)$ is obtained as in Example \ref{example-1}. The result is \begin{equation} f(x) = x^{-a} e^{-1/x}, \end{equation} \noindent also written as \begin{equation} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \Gamma(a-s) \, ds = x^{-a} e^{-1/x}, \end{equation} \noindent or equivalently \begin{equation} \int_{0}^{\infty} x^{s-1}x^{-a}e^{-1/x} \, dx = \Gamma(a-s). \end{equation} \noindent The change of variables $u = x^{-1}$ gives the integral representation of the gamma function. \end{example} \begin{example} \label{example-2} The inversion of $\varphi(s) = \Gamma(s-a) \Gamma(s-b)$ amounts to the evaluation of the line integral \begin{equation} \label{ex-2} f(x)= \mathcal{M}^{-1} (\Gamma(s-a) \Gamma(s-b))(x) = \frac{1}{2 \pi i} \int_{\gamma} x^{-s} \Gamma(s-a) \Gamma(s-b) \, ds. 
\end{equation} \noindent Now use \eqref{gamma-brack1} to write \begin{equation} \Gamma(s-a) = \sum_{n_{1}} \phi_{n_{1}} \langle s - a + n_{1} \rangle \quad \text{and} \quad \Gamma(s-b) = \sum_{n_{2}} \phi_{n_{2}} \langle s - b + n_{2} \rangle \end{equation} \noindent and produce \begin{equation} \label{series-n1} f(x) = \frac{1}{2 \pi i} \int_{\gamma} x^{-s} \sum_{n_{1},n_{2}} \phi_{12} \langle s-a+n_{1} \rangle \langle s - b + n_{2} \rangle \, ds. \end{equation} \noindent Now select the bracket containing the index $n_{1}$ and write \eqref{series-n1} using Theorem \ref{thm-fun1} as \begin{eqnarray} f(x) & = & \sum_{n_{1},n_{2}} \phi_{12} \frac{1}{2 \pi i } \int_{\gamma} \left( x^{-s} \langle s - b + n_{2} \rangle \right) \langle s - a + n_{1} \rangle \, ds \\ & = & \sum_{n_{1}, n_{2} } \phi_{12} \, x^{-a+n_{1} } \langle a-n_{1} - b + n_{2} \rangle. \nonumber \end{eqnarray} \noindent This is a two-dimensional bracket series and its evaluation is achieved using the rules in Section \ref{sec-rules}: \\ \noindent \texttt{$n_{1}$ is a free index}. Then $n_{2} = n_{1}-a+b$ and this produces the value \begin{eqnarray} f_{1}(x) & = & \sum_{n_{1}=0}^{\infty} \phi_{1} x^{-a+n_{1}} \Gamma(-n_{1}+a-b) \\ & = & x^{-a} \Gamma(a-b) \sum_{n_{1}=0}^{\infty} \phi_{1} (a-b)_{-n_{1}} x^{n_{1}} \nonumber \\ & = & x^{-a} \Gamma(a-b) \sum_{n_{1}=0}^{\infty} \frac{x^{n_{1}}}{n_{1}! \, (1-a+b)_{n_{1}}} \nonumber \\ & = & x^{-a} \Gamma(a-b)\, \pFq01{-}{1-a+b}{x}. \nonumber \end{eqnarray} \noindent \texttt{$n_{2}$ is a free index}. A similar argument gives \begin{equation} f_{2}(x) = x^{-b} \Gamma(b-a) \, \pFq01{-}{1-b+a}{x}. \end{equation} Since both representations always converge, one obtains \begin{equation} \label{two-IBessel} f(x) = x^{-a} \Gamma(a-b)\, \pFq01{-}{1-a+b}{x} + x^{-b} \Gamma(b-a) \, \pFq01{-}{1-b+a}{x}. \end{equation} The function $_{0}F_{1}$ is now expressed in terms of the modified Bessel function $I_{\nu}(z)$. 
This is defined in \cite[10.25.2]{olver-2010a} by the power series \begin{equation} \label{mod-bes1} I_{\nu}(z) = \left( \frac{z}{2} \right)^{\nu} \sum_{k=0}^{\infty} \frac{1} {k! \, \Gamma(\nu+k+1)} \left( \frac{z^{2}}{4} \right)^{k}. \end{equation} \begin{lemma} \label{lemma-1} For $\alpha \in \mathbb{R}$, the identity \begin{equation} \pFq01{-}{\alpha}{x} = \Gamma(\alpha) x^{(1-\alpha)/2} I_{\alpha -1}(2 \sqrt{x}) \end{equation} \noindent holds. \end{lemma} \begin{proof} This follows directly from \eqref{mod-bes1}. \end{proof} Replacing the expression in Lemma \ref{lemma-1} in \eqref{two-IBessel} gives \begin{equation} f(x) = \frac{\pi}{\sin(\pi (a-b))} x^{-(a+b)/2} \left( I_{b-a}(2 \sqrt{x}) - I_{a-b}(2 \sqrt{x}) \right). \end{equation} \noindent The relation \cite[10.27.4]{olver-2010a} \begin{equation} K_{\nu}(z) = \frac{\pi}{2} \frac{I_{-\nu}(z) - I_{\nu}(z)}{\sin( \pi \nu)} \end{equation} \noindent (which is taken here as the definition of $K_{\nu}(z)$) now implies \begin{equation} f(x) = 2 x^{-\nu/2 - b} K_{\nu}(2 \sqrt{x}); \end{equation} \noindent with $\nu = a-b$. This is \begin{equation} \label{mellin-ex1} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \Gamma(s-a) \Gamma(s-b) \, ds = 2x^{-(a+b)/2} K_{a-b}(2 \sqrt{x}). \end{equation} \noindent After some elementary changes, this is written as \begin{equation} \label{mellin-bes1} K_{\nu}(x) = \frac{1}{4 \pi i } \left( \frac{x}{2} \right)^{\nu} \int_{\gamma} \Gamma(s) \Gamma(s- \nu) \left( \frac{x}{2} \right)^{-2s} \, ds, \end{equation} \noindent the form appearing in \cite[10.32.13]{olver-2010a}. The expression \eqref{mellin-bes1} is now written in the equivalent form \begin{equation} \label{bes-int-1} \int_{0}^{\infty}x^{s-1} K_{\nu}(x) \, dx = 2^{s-2} \Gamma \left( \frac{s+\nu}{2} \right) \Gamma \left( \frac{s-\nu}{2} \right). \end{equation} \end{example} \begin{example} \label{example-3} Consider the inversion of $\varphi(s) = \Gamma(s-a) \Gamma(b-s)$. 
Observe that there is a change in the order of the argument of the second gamma factor with respect to Example \ref{example-2}. To evaluate this example, expand the term $\Gamma(s-a)$ in a bracket series using \eqref{gamma-brack1} to obtain \begin{equation} f(x) = \sum_{n} \phi_{n} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \langle s - a + n \rangle \Gamma(b-s) \, ds. \end{equation} \noindent Theorem \ref{thm-fun1} yields \begin{equation} f(x) = x^{-a} \sum_{n=0}^{\infty} \phi_{n} \Gamma(b-a+n) x^{n}. \end{equation} \noindent To simplify this answer write $\Gamma(b-a+n)= \Gamma(b-a) (b-a)_{n}$, use \begin{equation} \pFq10{\alpha}{-}{x} = (1-x)^{-\alpha} \end{equation} \noindent and conclude that \begin{equation} f(x) = \frac{\Gamma(b-a)}{x^{a}(1+x)^{b-a}}. \end{equation} \noindent This is equivalent to the evaluation \begin{equation} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \Gamma(s-a) \Gamma(b-s) \, ds = \frac{\Gamma(b-a)}{x^{a}(1+x)^{b-a}}. \end{equation} \noindent Expanding the other gamma factors produces the same analytic expression for the integral. \end{example} \begin{example} \label{example-4} This example considers the simplest case of an integrand where a quotient of gamma factors appears. This is the inversion of \begin{equation} \varphi(s) = \frac{\Gamma(s-a)}{\Gamma(s-b)}. \end{equation} \noindent The usual formulation now gives \begin{equation} f(x) = \sum_{n} \phi_{n} \frac{1}{2 \pi i } \int_{\gamma} \left( \frac{x^{-s}}{\Gamma(s-b)} \right) \langle s-a+n \rangle \, ds. \end{equation} \noindent Theorem \ref{thm-fun1} now yields \begin{equation} f(x) = \sum_{n=0}^{\infty} \phi_{n} \frac{x^{n-a}}{\Gamma(a-b-n)}. 
\end{equation} \noindent This expression is simplified using \begin{equation} \Gamma(a-b-n) = \Gamma(a-b) (a-b)_{-n} = (-1)^{n} \frac{\Gamma(a-b)}{(1-a+b)_{n}} \end{equation} \noindent to obtain \begin{eqnarray} f(x) & = & \frac{x^{-a}}{\Gamma(a-b)} \sum_{n=0}^{\infty} \frac{(1-a+b)_{n}}{n!} x^{n} \\ & = & \frac{1}{\Gamma(a-b)} x^{-a} (1-x)^{-1+a-b}. \nonumber \end{eqnarray} \noindent This can be written as \begin{equation} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \frac{\Gamma(s-a)}{\Gamma(s-b)} \, ds = \frac{1}{\Gamma(a-b)} x^{-a}(1-x)^{-1+a-b}. \end{equation} \end{example} \begin{example} \label{example-5} The inverse Mellin transform $f(x)$ of \begin{equation} \varphi(s) = \frac{\Gamma(s-a)}{\Gamma(b-s)} \end{equation} \noindent is computed from the line integral \begin{equation} f(x) = \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \frac{\Gamma(s-a)}{\Gamma(b-s)} \, ds. \end{equation} \noindent The usual procedure now yields \begin{equation} f(x) = \sum_{n=0}^{\infty} (-1)^{n} \frac{x^{n-a}}{n! \, \Gamma(b-a+n)} = \frac{x^{-a}}{\Gamma(b-a)} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n! \, (b-a)_{n}} x^{n}. \end{equation} \noindent The series is identified as an ${_{0}F_{1}}$ and using the identity \begin{equation} J_{\nu}(z) = \frac{1}{\Gamma(\nu+1)} \left( \frac{z}{2} \right)^{\nu} \pFq01{-}{\nu+1}{- \frac{z^{2}}{4}} \end{equation} \noindent produces \begin{equation} f(x) = x^{(1-a-b)/2} J_{-1-a+b}( 2 \sqrt{x}) \end{equation} \noindent and gives the evaluation \begin{equation} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \frac{\Gamma(s-a)}{\Gamma(b-s)} \, ds = x^{(1-a-b)/2} J_{-1-a+b}( 2 \sqrt{x}). \end{equation} \end{example} \begin{example} \label{example-6} The inverse Mellin transform $f(x)$ of \begin{equation} \varphi(s) = \frac{\Gamma(a-s)}{\Gamma(s-b)} \end{equation} \noindent is computed as in Example \ref{example-5}. 
The result is \begin{equation} \label{bessel-mellin1} \frac{1}{2 \pi i } \int_{\gamma} x^{-s} \frac{\Gamma(a-s)}{\Gamma(s-b)} \, ds = x^{-(a+b+1)/2} J_{a-b-1} \left( \frac{2}{\sqrt{x}} \right) \end{equation} \noindent and thus \begin{equation} \int_{0}^{\infty} x^{s-1} x^{-(a+b+1)/2} J_{a-b-1} \left( \frac{2}{\sqrt{x}} \right) \, dx = \frac{\Gamma(a-s)}{\Gamma(s-b)}. \end{equation} \noindent The identity \eqref{bessel-mellin1}, with $a=0$ and $b=-\nu-1$, can be written as \begin{equation} \label{mellin-bes3} J_{\nu}(x) = \frac{1}{2 \pi i } \int_{\gamma} \frac{\Gamma(-s)}{\Gamma(s+ \nu +1)} \left( \frac{x}{2} \right)^{2s + \nu} \, ds. \end{equation} \end{example} \begin{example} The Mellin inversion of the function \begin{equation} \varphi(s) = \frac{\Gamma(s) \Gamma(1-s)}{\Gamma(\beta - \alpha s)} \end{equation} \noindent is given by \begin{equation} f(x) = \frac{1}{2 \pi i} \int_{\gamma} x^{-s} \frac{\Gamma(s) \Gamma(1-s)}{\Gamma(\beta - \alpha s)} \, ds. \end{equation} \noindent The standard procedure gives \begin{equation} f(x) = \sum_{n_{1},n_{2}} \phi_{12} \, \frac{1}{2 \pi i } \int_{\gamma} \left( \frac{\langle 1 - s + n_{2} \rangle x^{-s}}{\Gamma(\beta - \alpha s)} \right) \, \langle s + n_{1} \rangle \, ds. \end{equation} \noindent Theorem \ref{thm-fun1} then produces \begin{equation} f(x) = \sum_{n_{1},n_{2}} \phi_{12} \frac{x^{n_{1}}}{\Gamma(\beta+ \alpha n_{1})} \langle 1 + n_{1}+n_{2} \rangle. \end{equation} \noindent To evaluate this two-dimensional bracket series proceed as in Example \ref{example-2}. This gives \begin{equation} \label{ml-1} f(x) = \sum_{n=0}^{\infty} \frac{(-x)^{n} }{\Gamma(\beta + \alpha n)} \quad \text{when} \,\, |x| < 1 \end{equation} \noindent and \begin{equation} \label{ml-2} f(x) = \frac{1}{x} \sum_{n=0}^{\infty} \frac{1}{\Gamma(\beta - \alpha - \alpha n)} \frac{(-1)^{n}}{x^{n}} \quad \text{when} \,\, |x| > 1. 
\end{equation} \noindent The function appearing in \eqref{ml-1} is the \texttt{Mittag-Leffler function}, defined in \cite{olver-2010a} by \begin{equation} E_{\alpha,\beta}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + \beta)}. \label{ml-def} \end{equation} \noindent In this notation, \eqref{ml-1} and \eqref{ml-2} combine into the final expression \begin{equation} f(x) = \begin{cases} E_{\alpha, \beta}(-x) & \quad \text{if} \,\, |x| < 1 \\ x^{-1} E_{-\alpha, \beta-\alpha}(-1/x) & \quad \text{if} \,\, |x|>1. \end{cases} \end{equation} \end{example} \section{Direct computations of Mellin transforms} \label{sec-direct} This section describes how to use the method of brackets to produce the evaluation of the Mellin transform \begin{equation} \mathcal{M}(f(x))(s) = \int_{0}^{\infty} x^{s-1} f(x) \, dx. \end{equation} \begin{example} \label{example-int1} Example \ref{example-2} has produced the evaluation of \begin{equation} \label{bes-int-2} \int_{0}^{\infty}x^{\alpha-1} K_{\nu}(x) \, dx = 2^{\alpha-2} \Gamma \left( \frac{\alpha+\nu}{2} \right) \Gamma \left( \frac{\alpha-\nu}{2} \right), \end{equation} \noindent from the Mellin inversion of $\Gamma(s-a)\Gamma(s-b)$. \end{example} \begin{example} \label{example-int2} The next example evaluates \begin{equation} \label{int-mess1} I(\alpha,\mu,\nu) = \int_{0}^{\infty} x^{\alpha-1} K_{\mu}(x) K_{\nu}(x) \, dx \end{equation} \noindent by the methods developed here. 
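Before doing so, it is worth confirming \eqref{bes-int-2} numerically. The following sketch (the parameter values $\alpha = 2.5$, $\nu = 0.7$ are arbitrary, and \texttt{mpmath} is assumed to be available for arbitrary-precision quadrature) compares direct quadrature with the closed form:

```python
from mpmath import mp, mpf, quad, besselk, gamma, inf

mp.dps = 30  # working precision

alpha, nu = mpf("2.5"), mpf("0.7")  # arbitrary test values with alpha > |nu|

# left-hand side: the Mellin transform of K_nu computed by direct quadrature
lhs = quad(lambda x: x**(alpha - 1) * besselk(nu, x), [0, inf])

# right-hand side: the closed form 2^(alpha-2) Gamma((alpha+nu)/2) Gamma((alpha-nu)/2)
rhs = 2**(alpha - 2) * gamma((alpha + nu) / 2) * gamma((alpha - nu) / 2)

assert abs(lhs - rhs) < mpf("1e-15")
```

The same pattern, direct quadrature against the closed form, applies to the multi-factor evaluations below.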
Entry $10.32.19$ in \cite{olver-2010a} contains the representation \begin{multline} K_{\mu}(x)K_{\nu}(x) = \\ \frac{1}{8 \pi i } \int_{\gamma} \left( \frac{x}{2} \right)^{-2s} \frac{1}{\Gamma(2s)} \Gamma\left( s + \frac{\mu+\nu}{2} \right) \Gamma\left( s + \frac{\mu-\nu}{2} \right) \Gamma\left( s - \frac{\mu-\nu}{2} \right) \Gamma\left( s - \frac{\mu+\nu}{2} \right) \, ds. \end{multline} \noindent Replacing this in \eqref{int-mess1} and identifying the $x$-integral as a bracket yields \begin{multline} I(\alpha,\mu,\nu) = \frac{1}{8 \pi i } \int_{\gamma} \frac{2^{2s} }{\Gamma(2s)} \Gamma\left( s + \frac{\mu+\nu}{2} \right) \Gamma\left( s + \frac{\mu-\nu}{2} \right) \\ \Gamma\left( s - \frac{\mu-\nu}{2} \right) \Gamma\left( s - \frac{\mu+\nu}{2} \right) \langle \alpha - 2s \rangle \, ds. \end{multline} \noindent Since this problem contains one bracket and one contour integral, there is no need to expand the gamma terms in bracket series and the result is obtained directly from Theorem \ref{thm-fun1}. The result is \begin{multline} \int_{0}^{\infty} x^{\alpha-1} K_{\mu}(x) K_{\nu}(x) \, dx \\ = \frac{2^{\alpha-3}}{\Gamma(\alpha)} \Gamma \left( \frac{\alpha + \mu + \nu}{2} \right) \Gamma \left( \frac{\alpha + \mu - \nu}{2} \right) \Gamma \left( \frac{\alpha - \mu + \nu}{2} \right) \Gamma \left( \frac{\alpha - \mu - \nu}{2} \right). \end{multline} \end{example} \begin{example} \label{example-int3} The evaluation of \begin{equation} \int_{0}^{\infty} x^{2a-1} K_{\nu}^{2}(x) \, dx = \frac{\sqrt{\pi}\, \Gamma(a+\nu) \Gamma(a-\nu) \Gamma(a)}{4\Gamma \left( a + \tfrac{1}{2} \right)} \end{equation} \noindent is the special case of Example \ref{example-int2} with $\mu = \nu$. Note that the exponent parameter $\alpha$ has been set to $2a$ in order to write the answer in a more compact form. In particular, with $\nu=0$, this becomes \begin{equation} \int_{0}^{\infty} x^{2a-1} K_{0}^{2}(x) \, dx = \frac{\sqrt{\pi} \Gamma^{3}(a)}{4 \, \Gamma \left(a + \tfrac{1}{2} \right)}. 
\end{equation} \noindent The final special case mentioned here has $a = \tfrac{1}{2}$: \begin{equation} \int_{0}^{\infty} K_{0}^{2}(x) \, dx = \frac{\pi^{2}}{4}. \end{equation} \noindent These examples have been evaluated in \cite{gonzalez-2017a} by a different procedure. \end{example} \begin{example} Now consider the integral \begin{equation} \varphi_{3}(a) = \int_{0}^{\infty} K_{0}^{3}(ax) \, dx \end{equation} \noindent with an auxiliary parameter $a$ that naturally can be scaled out. The evaluation begins with a more general problem \begin{equation} I = I(a,b;\mu,\nu,\alpha) = \int_{0}^{\infty} K_{\mu}(ax) K_{\nu}(ax) K_{\alpha}(bx) \, dx \label{Kint-1} \end{equation} \noindent and then \begin{equation} \label{Kint-2} \varphi_{3}(a) = \lim\limits_{\stackrel{\alpha = \mu = \nu \rightarrow 0}{b \rightarrow a}} I(a,b;\mu,\nu,\alpha). \end{equation} \noindent The Mellin-Barnes representations of the factors in the integrand \begin{multline} \label{Kint-3} K_{\mu}(ax) K_{\nu}(ax) = \\ \frac{1}{8 \pi i } \int_{\gamma} \Gamma \left( t + \frac{\mu + \nu}{2} \right) \Gamma \left( t + \frac{\mu - \nu}{2} \right) \Gamma \left( t - \frac{\mu + \nu}{2} \right) \Gamma \left( t - \frac{\mu - \nu}{2} \right) \left( \frac{ax}{2} \right)^{-2t} \frac{dt}{\Gamma(2t)}, \end{multline} \noindent and \begin{equation} K_{\alpha}(bx) = \frac{1}{4 \pi i } \left( \frac{bx}{2} \right)^{\alpha} \int_{\gamma} \Gamma(s) \Gamma(s- \alpha) \left( \frac{bx}{2} \right)^{-2s} \, ds. \label{Kint-4} \end{equation} \noindent Replacing in \eqref{Kint-1} gives \begin{multline} \label{Kint-6} I = \frac{1}{8(2 \pi i )^{2}} \int_{\gamma_{1}} \int_{\gamma_{2}} \frac{ \Gamma \left( t + \frac{\mu + \nu}{2} \right) \Gamma \left( t + \frac{\mu - \nu}{2} \right) \Gamma \left( t - \frac{\mu + \nu}{2} \right) \Gamma \left( t - \frac{\mu - \nu}{2} \right) \Gamma(s) \Gamma(s- \alpha)} {a^{2t} b^{2s - \alpha} 2^{-2t - 2s + \alpha} \Gamma(2t)} \\ \times \langle -2s - 2t + \alpha +1 \rangle \, dt \, ds. 
\end{multline} \noindent Now replace the gamma factors in the numerator by their corresponding bracket series to obtain \begin{eqnarray} I & = & \frac{1}{8} \sum_{\{ n\}} \phi_{n_{1} \cdots n_{6}} \frac{1}{(2 \pi i)^{2}} \int_{\gamma_{1}} \int_{\gamma_{2}} \frac{1}{a^{2t} b^{2s - \alpha} 2^{-2t - 2s + \alpha} \Gamma(2t)} \label{Kint-7} \\ & \times & \langle t + \frac{\mu + \nu}{2} + n_{1} \rangle \langle t + \frac{\mu - \nu}{2} + n_{2} \rangle \langle t - \frac{\mu + \nu}{2} + n_{3 } \rangle \nonumber \\ & \times & \langle t - \frac{\mu - \nu}{2} + n_{4} \rangle \langle s + n_{5} \rangle \langle s - \alpha + n_{6} \rangle \langle -2s - 2t + \alpha + 1 \rangle \, dt \, ds. \nonumber \end{eqnarray} \noindent To evaluate this expression, use the brackets containing the indices $n_{1}$ and $n_{5}$ to eliminate the two contour integrals. The resulting sums now depend on the four indices $n_{2}, \, n_{3}, \, n_{4}$ and $n_{6}$, and the variables of integration $t$ and $s$ take the values \begin{equation} t^{*} = - \frac{\mu+\nu}{2} - n_{1} \quad \textnormal{and} \,\, s^{*} = - n_{5}. \end{equation} \noindent This yields \begin{multline} I = \frac{1}{8} \sum_{\{ n \}} \phi_{n_{2}n_{3}n_{4}n_{6}} \times \\ \frac{ \langle t^{*} + \frac{\mu-\nu}{2} + n_{2} \rangle \langle t^{*} - \frac{\mu+\nu}{2} + n_{3} \rangle \langle t^{*} - \frac{\mu - \nu}{2} + n_{4} \rangle \langle s^{*} - \alpha + n_{6} \rangle \langle -2s^{*} - 2t^{*} + \alpha + 1 \rangle } { a^{2t^{*}} b^{2s^{*} - \alpha} 2^{-2t^{*} - 2s^{*}+ \alpha} \Gamma(2t^{*}) }. 
\end{multline} \noindent Under the assumption $|4a^{2}| < |b^{2}|$ the integral in \eqref{Kint-1} is expressed as \begin{equation} I = I(a,b;\mu,\nu,\alpha) = T_{1}+T_{2}+T_{3}+T_{4} \label{expression-1} \end{equation} \noindent with \begin{eqnarray} \quad T_{1} & = & \frac{1}{8} \frac{a^{\mu+\nu}}{b^{1+ \mu + \nu}} \Gamma \left( \frac{1-\alpha + \mu + \nu}{2} \right) \Gamma \left( \frac{1 + \alpha + \mu + \nu}{2} \right) \Gamma(-\mu) \Gamma(-\nu) \label{Kint-11} \\ & & \quad \quad \times \pFq43{1 + \tfrac{\mu+\nu}{2} \,\, \tfrac{1+ \mu + \nu}{2} \,\, \frac{1+ \alpha + \mu + \nu}{2} \,\, \frac{1 - \alpha + \mu + \nu}{2} } {1+ \mu \,\, 1+ \nu \,\, 1+ \mu + \nu}{\frac{4a^{2}}{b^{2}}} \nonumber \\ \quad T_{2} & = & \frac{1}{8} \frac{a^{\mu-\nu}}{b^{1+ \mu - \nu}} \Gamma \left( \frac{1-\alpha + \mu - \nu}{2} \right) \Gamma \left( \frac{1 + \alpha + \mu - \nu}{2} \right) \Gamma(-\mu) \Gamma(\nu) \nonumber \\ & & \quad \quad \times \pFq43{1 + \tfrac{\mu-\nu}{2} \,\, \tfrac{1+ \mu - \nu}{2} \,\, \frac{1- \alpha + \mu - \nu}{2} \,\, \frac{1 + \alpha + \mu - \nu}{2} } {1+ \mu \,\, 1- \nu \,\, 1+ \mu - \nu}{\frac{4a^{2}}{b^{2}}} \nonumber \\ \quad T_{3} & = & \frac{1}{8} \frac{b^{\mu+\nu-1}}{a^{\mu + \nu}} \Gamma \left( \frac{1-\alpha - \mu - \nu}{2} \right) \Gamma \left( \frac{1 + \alpha - \mu - \nu}{2} \right) \Gamma(\mu) \Gamma(\nu) \nonumber \\ & & \quad \quad \times \pFq43{1 - \tfrac{\mu+\nu}{2} \,\, \tfrac{1- \mu - \nu}{2} \,\, \frac{1-\alpha - \mu - \nu}{2} \,\, \frac{1 + \alpha - \mu - \nu}{2} } {1- \mu \,\, 1- \nu \,\, 1- \mu - \nu}{\frac{4a^{2}}{b^{2}}} \nonumber \\ \quad T_{4} & = & \frac{1}{8} \frac{b^{\mu-\nu-1}}{a^{\mu - \nu}} \Gamma \left( \frac{1-\alpha - \mu + \nu}{2} \right) \Gamma \left( \frac{1 + \alpha - \mu + \nu}{2} \right) \Gamma(\mu) \Gamma(-\nu) \nonumber \\ & & \quad \quad \times \pFq43{1 - \tfrac{\mu-\nu}{2} \,\, \tfrac{1- \mu + \nu}{2} \,\, \frac{1-\alpha - \mu + \nu}{2} \,\, \frac{1 + \alpha - \mu + \nu}{2} } {1- \mu \,\, 1+ \nu \,\, 1- \mu + 
\nu}{\frac{4a^{2}}{b^{2}}}. \nonumber \end{eqnarray} \noindent Now passing to the limit as $\alpha, \, \nu, \, \mu \rightarrow 0$ yields \begin{eqnarray} \int_{0}^{\infty} K_{0}^{2}(ax) K_{0}(bx) \, dx & = & \lim\limits_{\mu \rightarrow 0} \frac{1}{8b} \left[ \left( \frac{a^{2}}{b^{2}} \right)^{\mu} \Gamma \left( \frac{1 + 2 \mu}{2} \right)^{2} \Gamma(-\mu)^{2} \pFq32{ \tfrac{1+2 \mu}{2} \,\, \tfrac{1+ 2 \mu}{2} \,\, \tfrac{1+ 2 \mu}{2} } {1+ \mu \,\, 1+ 2 \mu}{\frac{4a^{2}}{b^{2}}} \nonumber \right. \\ & & \quad \quad + 2 \pi \Gamma(-\mu) \Gamma(\mu) \pFq32{ \tfrac{1}{2} \,\,\, \tfrac{1}{2} \,\,\, \tfrac{1}{2} }{1+\mu \,\, 1 - \mu} { \frac{4a^{2}}{b^{2}}} \nonumber \\ & & \left. \quad \quad + \left( \frac{a^{2}}{b^{2}} \right)^{-\mu} \Gamma^{2} \left( \tfrac{1- 2 \mu}{2} \right) \Gamma^{2}(\mu) \pFq32{ \tfrac{1- 2 \mu}{2} \,\, \tfrac{1- 2 \mu}{2} \,\, \tfrac{1 - 2 \mu}{2} }{1- \mu \,\,\, 1 - 2 \mu} { \frac{4a^{2}}{b^{2}}} \right] \nonumber \end{eqnarray} In a similar form, in the case $|4a^{2}| >|b^{2}|$ one obtains \begin{equation} I = I(a,b;\mu,\nu,\alpha) = T_{5}+T_{6} \label{expression-2} \end{equation} \noindent with \begin{eqnarray} \quad T_{5} & = & \frac{1}{8} \frac{b^{\alpha}}{a^{\alpha +1}} \nonumber \\ & & \frac{\Gamma(- \alpha)}{\Gamma(\alpha+1)} \Gamma \left( \frac{1+\alpha - \mu + \nu}{2} \right) \Gamma \left( \frac{1 + \alpha - \mu - \nu}{2} \right) \Gamma \left( \frac{1 + \alpha + \mu - \nu}{2} \right) \Gamma \left( \frac{1 + \alpha + \mu + \nu}{2} \right) \nonumber \\ & & \quad \quad \times \pFq43{\tfrac{1+\alpha+ \mu+\nu}{2} \,\, \tfrac{1+\alpha - \mu + \nu}{2} \,\, \frac{1+ \alpha + \mu - \nu}{2} \,\, \frac{1 + \alpha - \mu - \nu}{2} } {1+ \alpha \,\, 1+ \tfrac{\alpha}{2} \,\, \tfrac{1+ \alpha}{2} }{\frac{b^{2}}{4a^{2}}} \nonumber \\ \quad T_{6} & = & \frac{1}{8} \frac{a^{\alpha-1}}{b^{\alpha }} \nonumber \\ & & \frac{\Gamma(\alpha)}{\Gamma(1-\alpha)} \Gamma \left( \frac{1-\alpha - \mu + \nu}{2} \right) \Gamma \left( \frac{1 - \alpha - \mu - 
\nu}{2} \right) \Gamma \left( \frac{1 - \alpha + \mu - \nu}{2} \right) \Gamma \left( \frac{1 - \alpha + \mu + \nu}{2} \right) \nonumber \\ & & \quad \quad \times \pFq43{\tfrac{1-\alpha+ \mu+\nu}{2} \,\, \tfrac{1-\alpha - \mu + \nu}{2} \,\, \frac{1- \alpha + \mu - \nu}{2} \,\, \frac{1 - \alpha - \mu - \nu}{2} } {1- \alpha \,\, 1- \tfrac{\alpha}{2} \,\, \tfrac{1- \alpha}{2} }{\frac{b^{2}}{4a^{2}}} \nonumber \end{eqnarray} \noindent and then letting $\mu = \nu = 0, \, b=a$ and $\alpha \rightarrow 0$ yields, after scaling out the parameter $a$, \begin{multline} \label{Kint-20} \int_{0}^{\infty} K_{0}^{3}(x) \, dx = \lim\limits_{\alpha \rightarrow 0} \frac{1}{8} \left[ \frac{\Gamma(- \alpha) \Gamma^{4} \left( \frac{1+ \alpha}{2} \right) }{\Gamma(1 + \alpha)} \pFq32{ \tfrac{1+ \alpha}{2} \,\, \tfrac{1+ \alpha}{2} \,\, \tfrac{1+\alpha}{2} }{ 1+ \alpha \,\, 1+ \tfrac{\alpha}{2} }{\frac{1}{4} } \right. \\ \left. + \frac{\Gamma(\alpha) \Gamma^{4} \left( \tfrac{1 - \alpha}{2} \right)}{\Gamma(1 - \alpha)} \pFq32{ \tfrac{1- \alpha}{2} \,\, \tfrac{1- \alpha}{2} \,\, \tfrac{1-\alpha}{2} }{ 1 - \alpha \,\, 1- \tfrac{\alpha}{2} }{\frac{1}{4} } \right]. \end{multline} The authors have been unable to produce a simpler analytic expression for this limiting value. \end{example} \begin{example} An argument similar to the one presented in the previous example yields \begin{multline} \label{Kint-41} \int_{0}^{\infty} K_{0}^{4}(x) \, dx = \frac{\sqrt{\pi}}{16} \lim\limits_{\alpha \rightarrow 0} \left[ \frac{\Gamma^{2}(-\alpha) \Gamma^{2} \left( \alpha + \tfrac{1}{2} \right) \Gamma \left( 2 \alpha + \tfrac{1}{2} \right)}{\Gamma(2 \alpha + 1)} \pFq43{ \tfrac{1}{2} \,\, \alpha + \tfrac{1}{2} \,\, \alpha + \tfrac{1}{2} \,\, 2 \alpha + \tfrac{1}{2}}{1+ \alpha \,\, 1+ \alpha \,\, 1+ 2 \alpha }{1} \right. \\ \left. 
+ 2 \sqrt{\pi} \Gamma(-\alpha) \Gamma(\alpha) \Gamma( \tfrac{1}{2} + \alpha ) \Gamma( \tfrac{1}{2} - \alpha) \pFq43{ \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} + \alpha \,\,\,\, \tfrac{1}{2} - \alpha }{1 \,\,\,\, 1+ \alpha \,\,\,\, 1- \alpha }{1} \right. \\ \left. + \frac{\Gamma^{2}(\alpha) \Gamma^{2}( \tfrac{1}{2} - \alpha) \Gamma( \tfrac{1}{2} - 2 \alpha)}{\Gamma(1 -2 \alpha)} \pFq43{ \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} - \alpha \,\,\,\, \tfrac{1}{2} - \alpha \,\,\,\, \tfrac{1}{2} - 2 \alpha }{1- \alpha \,\,\,\, 1 - \alpha \,\,\,\, 1- 2 \alpha }{1} \right] \end{multline} \noindent Details are omitted. \end{example} \section{Mellin transforms of products} \label{sec-special} This section presents a method to evaluate the Mellin transform \begin{equation} I(\alpha,b) = \int_{0}^{\infty} x^{\alpha-1} f(x) g(bx) \,dx, \end{equation} \noindent knowing a series for $f$ of the form \begin{equation} f(x) = \sum_{n} \phi_{n} F(n)x^{\beta n} \end{equation} \noindent and the inverse Mellin transform \begin{equation} g(x) = \frac{1}{2 \pi i } \int_{\gamma}x^{-s} \varphi(s) \, ds. \end{equation} \noindent For $\kappa > 0$, the change of variables $s = \kappa s'$ gives \begin{equation} g(x) = \frac{1}{2 \pi i } \int_{- i \infty}^{i \infty}x^{-\kappa s} \widetilde{\varphi}(s) \, ds, \label{rep-gamma} \end{equation} \noindent where $\widetilde{\varphi}(s) = \kappa \varphi(\kappa s)$. The formula \eqref{rep-gamma} is now written as \begin{equation} g(x) = \frac{1}{2 \pi i } \int_{\gamma}x^{-\kappa s} \varphi(s) \, ds, \label{rep-gamma1} \end{equation} \noindent that is, the tilde notation is dropped and the parameter $\kappa$ is kept. 
Then \begin{eqnarray} I & = & \int_{0}^{\infty} x^{\alpha -1} f(x) g(bx) \, dx \\ & = & \frac{1}{2 \pi i } \int_{\gamma} \varphi(s) b^{- \kappa s} \left[ \sum_{n} \phi_{n} F(n) \left[ \int_{0}^{\infty} x^{\alpha + \beta n - \kappa s -1} \, dx \right] \right] \, ds \nonumber \\ & = & \frac{1}{2 \pi i } \int_{\gamma} \varphi(s) b^{- \kappa s} \left[ \sum_{n} \phi_{n} F(n) \langle \alpha + \beta n - \kappa s \rangle \right] \, ds \nonumber \\ & = & \frac{1}{2 \pi i } \int_{\gamma} \frac{\varphi(s) b^{- \kappa s} }{| \beta |} \left[ \sum_{n} \phi_{n} F(n) \left \langle \frac{\alpha}{\beta} - \frac{\kappa s}{\beta} + n \right \rangle \right] \, ds \nonumber \\ & = & \frac{1}{ | \beta | } \frac{1}{2 \pi i } \int_{\gamma} \varphi(s) b^{-\kappa s} \Gamma \left( \frac{\alpha - \kappa s}{\beta} \right) F \left( \frac{\kappa s - \alpha}{\beta} \right) \, ds. \nonumber \end{eqnarray} This is stated as a theorem. \begin{theorem} \label{thm-identity1} Assume the function $f(x)$ has an expansion given by \begin{equation} \label{exp-f} f(x) = \sum_{n} \phi_{n} F(n) x^{\beta n} \end{equation} \noindent and the function $g(x)$ is given by a rescaled version of the inverse Mellin transform \begin{equation} \label{exp-g} g(x) = \frac{1}{2 \pi i } \int_{\gamma} \varphi(s) x^{-\kappa s} \, ds. \end{equation} \noindent Then \begin{equation*} \int_{0}^{\infty} x^{\alpha - 1} f(x) g(bx) \, dx = \frac{1}{ | \beta | } \frac{1}{2 \pi i } \int_{\gamma} \varphi(s) \Gamma \left( \frac{\alpha - \kappa s}{\beta} \right) F \left( \frac{\kappa s - \alpha}{\beta} \right) b^{- \kappa s} \, ds. \end{equation*} \end{theorem} \begin{example} Entry $6.532.4$ in \cite{gradshteyn-2015a} states that \begin{equation} \label{65324} \int_{0}^{\infty} \frac{x \, J_{0}(Ax) \, dx}{x^{2}+k^{2}} = K_{0}(Ak). \end{equation} \noindent Theorem \ref{thm-identity1} is now used to establish this evaluation. 
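As a preliminary numerical check of \eqref{65324} (a sketch with the arbitrary choice $A = k = 1$; \texttt{mpmath} is assumed, and its \texttt{quadosc} routine is used because the integrand oscillates like $J_{0}(Ax)$ and decays slowly):

```python
from mpmath import mp, mpf, quadosc, besselj, besselk, pi, inf

mp.dps = 20
A, k = mpf(1), mpf(1)  # arbitrary positive test values

# integrand of entry 6.532.4; oscillates like J0(Ax) and decays as x**(-3/2)
f = lambda x: x * besselj(0, A * x) / (x**2 + k**2)

lhs = quadosc(f, [0, inf], period=2 * pi / A)
rhs = besselk(0, A * k)

assert abs(lhs - rhs) < mpf("1e-10")
```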
Start with the expansion \begin{equation} f(x) = \frac{1}{x^{2}+k^{2}} = \sum_{n} \phi_{n} \left[ \Gamma(n+1) k^{-2n-2} \right] x^{2n}. \end{equation} \noindent This gives \eqref{exp-f} with $F(n) = \Gamma(n+1)k^{-2n-2}$ and $\beta = 2$. Now use \cite[10.9.23]{olver-2010a} \begin{equation} J_{\nu}(z) = \frac{1}{2 \pi i } \int_{\gamma} \frac{\Gamma(t) }{\Gamma(\nu-t+1)} \left( \frac{z}{2} \right)^{\nu - 2t} \, dt, \end{equation} \noindent and replace $z$ by $Ax$ to produce the desired representation of the Bessel function: \begin{equation} J_{0}(Ax) = \frac{1}{2 \pi i } \int_{\gamma} \frac{2^{2s} \, \Gamma(s) }{(Ax)^{2s} \, \Gamma(1-s)} \, ds. \end{equation} \noindent In the notation of \eqref{exp-g}, the parameters are $\kappa=2, \, \beta = 2, \, \alpha = 2$, $b=A$, and $\displaystyle \varphi(s) = \frac{2^{2s} \, \Gamma(s)}{\Gamma(1-s)}$. Theorem \ref{thm-identity1} now gives \begin{equation} \label{bessel-last1} \int_{0}^{\infty} \frac{x J_{0}(Ax) }{x^{2}+k^{2}} \, dx = \frac{1}{4 \pi i } \int_{ \gamma} \Gamma^{2}(s) \left( \frac{Ak}{2} \right)^{-2s} \, ds. \end{equation} Now recall the formula \cite[10.32.13]{olver-2010a} \begin{equation} \label{mellin-bes2} K_{\nu}(z) = \frac{1}{4 \pi i } \left( \frac{z}{2} \right)^{\nu} \int_{\gamma} \Gamma(t) \Gamma(t- \nu) \left( \frac{z}{2} \right)^{-2t} \, dt. \end{equation} \noindent In particular, for $\nu = 0$ this becomes \begin{equation} \label{bessel-last2} K_{0}(z) = \frac{1}{4 \pi i } \int_{\gamma} \Gamma^{2}(t) \left( \frac{z}{2} \right)^{-2t} \, dt. \end{equation} \noindent Formula \eqref{65324} now follows from \eqref{bessel-last1} and \eqref{bessel-last2}. \end{example} \begin{example} The next evaluation is \begin{equation} \label{65212} I = \int_{0}^{\infty} x K_{\nu}(ax) J_{\nu}(bx) \, dx = \frac{b^{\nu}}{a^{\nu}(a^{2}+b^{2})}. \end{equation} \noindent This is entry $6.521.2$ in \cite{gradshteyn-2015a}. 
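A quick numerical confirmation of \eqref{65212} before the derivation (a sketch with the arbitrary values $a=2$, $b=1$, $\nu = \tfrac{1}{2}$; since $K_{\nu}$ decays exponentially, plain quadrature with \texttt{mpmath} suffices):

```python
from mpmath import mp, mpf, quad, besselk, besselj, inf

mp.dps = 25
a, b, nu = mpf(2), mpf(1), mpf("0.5")  # arbitrary values with a > 0

# direct quadrature of the left-hand side of 6.521.2
lhs = quad(lambda x: x * besselk(nu, a * x) * besselj(nu, b * x), [0, inf])
rhs = b**nu / (a**nu * (a**2 + b**2))

assert abs(lhs - rhs) < mpf("1e-12")
```

For $\nu = \tfrac{1}{2}$ the check also follows by hand, since $K_{1/2}$ and $J_{1/2}$ reduce to elementary functions and the integral becomes $\int_{0}^{\infty} e^{-ax} \sin(bx) \, dx / \sqrt{ab}$.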
Formulas \eqref{mellin-bes1} and \eqref{mellin-bes3} give, respectively, the representations \begin{equation} K_{\nu}(z) = \frac{1}{4 \pi i } \int_{\gamma} \Gamma(s) \Gamma(s-\nu) \, \left( \frac{z}{2} \right)^{-2s+\nu} \, ds \end{equation} \noindent and \begin{equation} J_{\nu}(z) = \frac{1}{2 \pi i} \int_{\gamma} \frac{\Gamma(-s)}{\Gamma(s + \nu +1)} \left( \frac{z}{2} \right)^{2s+\nu} \, ds. \end{equation} \noindent Replacing in \eqref{65212} and recognizing the $x$-integral as a bracket yields \begin{multline} I = \frac{1}{2 (2 \pi i)^{2}} \left( \frac{ab}{4} \right)^{\nu} \int_{\gamma_{1}} \int_{\gamma_{2}} \frac{\Gamma(-t) \Gamma(s) \Gamma(s- \nu) }{\Gamma(t + \nu + 1) } \left( \frac{a}{2} \right)^{- 2s} \left( \frac{b}{2} \right)^{2t} \\ \times \frac{1}{2} \langle 1 + \nu + t - s \rangle \, ds \, dt. \end{multline} \noindent The bracket is now used to eliminate the $s$-integral; the result is \begin{equation} I = \frac{1}{a^{2}} \left( \frac{b}{a} \right)^{\nu} \frac{1}{2 \pi i } \int_{\gamma} \Gamma(-t) \Gamma(1+t) \left( \frac{b^{2}}{a^{2}} \right)^{t} \, dt. \end{equation} The gamma factors are expanded as bracket series using \eqref{gamma-brack1} and the line integral is then eliminated with one of the brackets to obtain \begin{eqnarray} I & = & \frac{b^{ \nu}}{a^{ \nu +2}} \frac{1}{2 \pi i } \int_{\gamma} \sum_{n_{1}, n_{2}} \left( \frac{b^{2}}{a^{2}} \right)^{t} \phi_{n_{1}n_{2}} \langle n_{1} - t \rangle \langle n_{2} + 1 + t \rangle \, dt \\ & = & \frac{b^{\nu}}{a^{\nu+2}} \sum_{n_{1},n_{2}} \phi_{n_{1}n_{2}} \left( \frac{b^{2}}{a^{2}} \right)^{n_{1}} \langle n_{2}+n_{1}+1 \rangle. \nonumber \end{eqnarray} \noindent The method of brackets now gives two different expressions, obtained by using $n_{1}$ or $n_{2}$ as the free index of summation. These options give series converging in the disjoint regions $|b|< |a|$ and $|b|> |a|$. The rules of the method of brackets show that each sum gives the value of the integral in the corresponding region. 
In this case, both choices give the same expression \begin{equation} I = \frac{b^{\nu}}{a^{\nu}(a^{2}+b^{2})}, \end{equation} \noindent for the value of the integral. This confirms \eqref{65212}. \end{example} \section{An integral involving exponentials and modified Bessel functions} \label{sec-exp-bessel} The method developed in this work is now applied to the computation of some definite integrals. The idea is relatively simple: given a function $f(x)$ with a Mellin transform $\varphi(s)$ containing gamma factors, the integral \begin{equation} F = \int_{0}^{\infty} e^{-x} f(x) \, dx \end{equation} \noindent can be evaluated by writing the exponential as \begin{equation} e^{-x} = \sum_{n} \phi_{n}x^{n} \label{exp-ser1} \end{equation} \noindent and then using the method of brackets. The examples below illustrate this process. Integrals involving the Bessel function $K_{\nu}(x)$ have been presented in \cite{gonzalez-2017a}. The computation there is based on the concept of \textit{totally null/divergent representations}. The first type includes \begin{equation} K_{0}(x) = \frac{1}{x} \sum_{n=0}^{\infty} \frac{\Gamma^{2}(n + \tfrac{1}{2})}{n! \, \Gamma(-n)} \left( - \frac{4}{x^{2}} \right)^{n} \end{equation} \noindent in which every term vanishes. There is a similar expression for a series in which every term diverges. In spite of the lack of rigor, these series have proved useful in the evaluation of definite integrals. See \cite{gonzalez-2017a} for details. A second technique used there is an integral representation of $K_{\nu}(x)$, such as \begin{equation} K_{\nu}(x) = \frac{2^{\nu} \Gamma \left( \nu + \tfrac{1}{2} \right)}{\Gamma(\tfrac{1}{2})} x^{\nu} \int_{0}^{\infty} \frac{\cos t \, dt}{(x^{2}+t^{2})^{\nu+1/2}}, \end{equation} \noindent to which the method of brackets is then applied. The method presented next is an improvement over those employed in \cite{gonzalez-2017a}. 
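The integral representation displayed above is itself straightforward to confirm numerically; a sketch with the arbitrary values $\nu = 0.3$ and $x = 1.2$ (\texttt{mpmath} assumed; \texttt{quadosc} handles the oscillatory tail of the cosine integral):

```python
from mpmath import mp, mpf, quadosc, besselk, gamma, pi, cos, inf

mp.dps = 20
nu, x = mpf("0.3"), mpf("1.2")  # arbitrary test values

# the cosine integral on the right-hand side, oscillating with period 2*pi
integral = quadosc(lambda t: cos(t) / (x**2 + t**2)**(nu + mpf("0.5")),
                   [0, inf], period=2 * pi)

rhs = 2**nu * gamma(nu + mpf("0.5")) / gamma(mpf("0.5")) * x**nu * integral
lhs = besselk(nu, x)

assert abs(lhs - rhs) < mpf("1e-10")
```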
\begin{example} Consider the integral \begin{equation} F(b,\nu) = \int_{0}^{\infty} e^{-x} K_{\nu}(bx) \, dx. \end{equation} \noindent Naturally the parameter $b$ can be scaled out, but it is instructive to leave it as is. Write the exponential function as in \eqref{exp-ser1} and the Bessel function from \eqref{mellin-bes1} as \begin{equation} \label{mellin-bes2a} K_{\nu}(bx) = \frac{1}{4 \pi i } \left( \frac{bx}{2} \right)^{\nu} \int_{-i \infty}^{i \infty} \Gamma(s) \Gamma(s- \nu) \left( \frac{bx}{2} \right)^{-2s} \, ds. \end{equation} \noindent Then \begin{eqnarray} F(b,\nu) & = & \frac{1}{2} \sum_{n_{1}} \phi_{n_{1}} \frac{1}{2 \pi i } \int_{\gamma} \left( \frac{b}{2} \right)^{\nu - 2s} \Gamma(s) \Gamma(s- \nu) \left( \int_{0}^{\infty} x^{n_{1}-2s + \nu} \, dx \right) \, ds \nonumber \\ & = & \frac{1}{2} \sum_{n_{1}} \phi_{n_{1}} \frac{1}{2 \pi i } \int_{\gamma} \left( \frac{b}{2} \right)^{\nu - 2s} \Gamma(s) \Gamma(s- \nu) \langle n_{1} - 2 s + \nu +1 \rangle \, ds. \nonumber \end{eqnarray} \noindent Now replace the gamma factors by their bracket expansions as in \eqref{gamma-brack1} to produce \begin{equation} F(b,\nu) = \frac{1}{2} \sum_{n_{1},n_{2},n_{3}} \phi_{123} \frac{1}{2 \pi i } \int_{\gamma} \left( \frac{b}{2} \right)^{\nu - 2s} \langle s + n_{2} \rangle \, \langle s - \nu + n_{3} \rangle \, \langle n_{1} - 2s + \nu + 1 \rangle \, ds. \end{equation} \noindent Using the bracket $\langle s + n_{2} \rangle$ to apply Theorem \ref{thm-fun1} gives a bracket series for the integral $F(b,\nu)$: \begin{equation} \label{value-f1} F(b,\nu) = \frac{1}{2} \sum_{\vec{n}} \phi_{123} \left( \frac{b}{2} \right)^{\nu+2n_{2}} \langle -n_{2} -\nu + n_{3} \rangle \langle n_{1} + 2 n_{2} + \nu + 1 \rangle, \end{equation} \noindent where $\vec{n} = \{n_{1},n_{2},n_{3}\}$. 
The method of brackets is now used to produce three sums for \eqref{value-f1}: \begin{eqnarray} T_{1} & = & \frac{1}{4} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \Gamma \left( \frac{1 - \nu +n}{2} \right) \Gamma \left( \frac{1+ \nu + n}{2} \right) \left( \frac{b}{2} \right)^{-n-1} \label{tsum-1} \\ T_{2} & = & \frac{1}{2} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \Gamma(-n-\nu) \Gamma(1+2n+\nu) \left( \frac{b}{2} \right)^{\nu + 2n} \label{tsum-2} \\ T_{3} & = & \frac{1}{2} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \Gamma(\nu-n) \Gamma(1+2n - \nu) \left( \frac{b}{2} \right)^{-\nu+2n} \label{tsum-3} \end{eqnarray} \noindent Observe that $T_{1}$ diverges since it is of type ${_{2}}F_{0}$ (this sum plays a role in the asymptotic study of the integral when $b \rightarrow \infty$, not described here) and the series $T_{2}, \, T_{3}$ converge when $|b|<1$. The method of brackets now states that $F(b,\nu) = T_{2}+T_{3}$, when $|b| <1$. Under this assumption and since $T_{3}(b,\nu) = T_{2}(b, - \nu)$, it suffices to obtain an expression for $T_{2}(b, \nu)$. This is the next result. \begin{proposition} The function $T_{2}(b, \nu)$ is given by \begin{equation} T_{2}(b,\nu) = - \frac{\pi}{2 \sin(\pi \nu)} \frac{b^{\nu} (1 + \sqrt{1 - b^{2}})^{- \nu}}{\sqrt{1-b^{2}}}. \end{equation} \end{proposition} \begin{proof} The proof is divided in a sequence of steps. \noindent \texttt{Step 1}. The function $T_{2}(b,\nu)$ is given by \begin{equation} T_{2}(b, \nu) = - \frac{\pi b^{\nu}}{2^{\nu+1} \sin(\pi \nu)} \sum_{n=0}^{\infty} \frac{\left( \frac{1+\nu}{2} \right)_{n} \, \left( 1 + \frac{\nu}{2} \right)_{n}}{(1+\nu)_{n}} \, \frac{b^{2n}}{n!}. \end{equation} \noindent \textit{Proof}. 
In the definition of $T_{2}(b,\nu)$ use $(w)_{2n} = 2^{2n} \left( \frac{w}{2} \right)_{n} \, \left( \frac{w+1}{2} \right)_{n},$ \begin{equation*} \Gamma(-n-\nu) = \frac{(-1)^{n} \Gamma(-\nu)}{(1+\nu)_{n}}, \,\, \Gamma(1+\nu + 2n) = \Gamma(1+\nu) 2^{2n} \left( \frac{1+\nu}{2} \right)_{n} \left(1 + \frac{\nu}{2} \right)_{n} \end{equation*} \noindent and $\Gamma(-\nu) \Gamma(1+\nu) = - \pi/\sin(\pi \nu)$, to obtain the result. This identity shows that the statement of the Proposition is equivalent to proving \begin{equation} \frac{1}{2^{\nu}} \sum_{n=0}^{\infty} \frac{\left( \frac{1+\nu}{2} \right)_{n} \, \left( 1 + \frac{\nu}{2} \right)_{n}}{(1+\nu)_{n}} \frac{b^{2n}}{n!} = \frac{(1+ \sqrt{1-b^{2}} \,\, )^{-\nu}}{\sqrt{1-b^{2}}}. \end{equation} \smallskip \noindent \texttt{Step 2}. Formula (2.5.16) in H.~Wilf \cite{wilf-1990a} \begin{equation} \left( \frac{1 - \sqrt{1-4x} }{2x} \right)^{k} = \sum_{n=0}^{\infty} \frac{k \, (2n+k-1)!}{n! \, (n+k)!} x^{n} \end{equation} \noindent and the identity $(1+ \sqrt{1-b^{2}})(1 - \sqrt{1- b^{2}}) = b^{2}$ shows, after some elementary simplifications, that the statement of the Proposition is equivalent to \begin{equation} \sum_{n=0}^{\infty} \frac{\left( \frac{1+\nu}{2} \right)_{n} \, \left( 1 + \frac{\nu}{2} \right)_{n}}{(1+\nu)_{n}} \frac{c^{n}}{n!} = \frac{1}{\sqrt{1-c}} \sum_{n=0}^{\infty} \frac{\left( \frac{\nu}{2} \right)_{n} \, \left( 1 + \frac{\nu}{2} \right)_{n}}{(1+\nu)_{n}} \frac{c^{n}}{n!}, \end{equation} \noindent where $c = b^{2}$. \smallskip \noindent \texttt{Step 3}. The identity at the end of Step 2 is equivalent to \begin{equation} \pFq21{\frac{1+\nu}{2} \,\, 1 + \frac{\nu}{2}}{1+ \nu}{u} = \pFq10{\tfrac{1}{2}}{-}{u} \times \pFq21{\frac{1+\nu}{2} \,\, \frac{\nu}{2}}{1+ \nu}{u}. \end{equation} \noindent \textit{Proof}. Simply use the binomial theorem \begin{equation} (1 - t)^{-\alpha} = \sum_{n=0}^{\infty} \frac{(\alpha)_{n}}{n!} t^{n}. \end{equation} \noindent with $\alpha = 1/2$. 
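(The identity in Step 3 is an instance of Euler's transformation ${}_{2}F_{1}(a,b;c;u) = (1-u)^{c-a-b}\, {}_{2}F_{1}(c-a,c-b;c;u)$, here with $a = \tfrac{1+\nu}{2}$, $b = 1 + \tfrac{\nu}{2}$, $c = 1+\nu$, so that $c-a-b = -\tfrac{1}{2}$. A quick numerical confirmation, as a sketch with arbitrary values and \texttt{mpmath} assumed:)

```python
from mpmath import mp, mpf, hyp2f1, sqrt

mp.dps = 25
nu, u = mpf("0.7"), mpf("0.3")  # arbitrary test values with |u| < 1

lhs = hyp2f1((1 + nu) / 2, 1 + nu / 2, 1 + nu, u)
rhs = hyp2f1((1 + nu) / 2, nu / 2, 1 + nu, u) / sqrt(1 - u)

assert abs(lhs - rhs) < mpf("1e-20")
```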
\smallskip \noindent \texttt{Step 4}. The conclusion of the Proposition is equivalent to the identity \begin{equation} \label{new-iden1} \sum_{j=0}^{n} \frac{\binom{n}{j} }{(1+ \nu)_{n-j}} \left( \frac{1}{2} \right)_{j} \left( \frac{1+ \nu}{2} \right)_{n-j} \left( \frac{\nu}{2} \right)_{n-j} = \frac{\left( \frac{1+\nu}{2} \right)_{n} \, \left( 1 + \frac{\nu}{2} \right)_{n}}{(1+\nu)_{n}}, \end{equation} \noindent for every $n \in \mathbb{N}$. \end{proof} \smallskip \begin{proof} The umbral method \cite{gessel-2003a,roman-1978a} shows that the identity is equivalent to the one formed by replacing Pochhammer symbols $(a)_{k}$ by $a^{k}$. In this case, \eqref{new-iden1} becomes \begin{equation} \sum_{j=0}^{n} \binom{n}{j} 2^{j} \nu^{n-j} = (\nu+2)^{n}, \end{equation} \noindent and this follows from the binomial theorem. \end{proof} \begin{proof} An alternative proof of \eqref{new-iden1} is presented next. Expressing the Pochhammer symbols in terms of binomial coefficients, it is routine to check that the desired identity is equivalent to \begin{equation} \sum_{j=0}^{n} \binom{2j}{j} \binom{\nu+2(n-j)}{n-j} \frac{\nu}{\nu+2(n-j)} = \binom{\nu+2n}{n}. \label{new-iden2} \end{equation} \noindent This identity is interpreted as the coefficient of $x^{n}$ in the product of the two series \begin{equation} A(x) = \sum_{j=0}^{\infty} \binom{2j}{j}x^{j} \quad \text{and} \quad B(x) = \sum_{k=0}^{\infty} \nu \binom{\nu+2k}{k} \frac{x^{k}}{\nu+2k}. \end{equation} \noindent The first sum is given by the binomial theorem as \begin{equation} A(x) = \frac{1}{\sqrt{1-4x}}. \end{equation} \noindent To obtain an analytic expression for $B(x)$ start with entry $2.5.15$ in \cite{wilf-1990a} \begin{equation} \frac{1}{\sqrt{1-4x}} \left( \frac{1 - \sqrt{1-4x}}{2x} \right)^{\nu} = \sum_{k=0}^{\infty} \binom{\nu+2k}{k} x^{k} \end{equation} \noindent where the term in brackets is the generating function of the Catalan numbers.
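Before computing $B(x)$ in closed form, the target identity \eqref{new-iden2} can be confirmed independently in exact rational arithmetic, since for each fixed $n$ both sides are rational functions of $\nu$. The short Python check below is ours and not part of the proof; the helper \texttt{gbinom} evaluates generalized binomial coefficients.

```python
from fractions import Fraction
from math import comb, factorial

def gbinom(a, k):
    """Generalized binomial coefficient binom(a, k) = a(a-1)...(a-k+1)/k!."""
    p = Fraction(1)
    for i in range(k):
        p *= a - i
    return p / factorial(k)

def lhs(nu, n):
    """Left-hand side of the identity (new-iden2)."""
    return sum(comb(2 * j, j) * gbinom(nu + 2 * (n - j), n - j)
               * nu / (nu + 2 * (n - j)) for j in range(n + 1))

# exact equality for several rational values of nu and small n
for nu in (Fraction(1, 3), Fraction(5, 2), Fraction(7)):
    for n in range(8):
        assert lhs(nu, n) == gbinom(nu + 2 * n, n)
```

Exact agreement for enough values of $n$ and $\nu$ is of course no proof, but it rules out transcription slips in the identity.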
Some elementary manipulations give \begin{equation} B(x) = \nu x^{-\nu/2} \int_{0}^{\sqrt{x}} \frac{t^{\nu-1}}{\sqrt{1-4t^{2}}} \left( \frac{1 - \sqrt{1-4t^{2}}}{2t^{2}} \right)^{\nu} \, dt. \end{equation} \noindent Then \eqref{new-iden2} is equivalent to the identity \begin{equation} \int_{0}^{\sqrt{x}} \frac{t^{\nu-1}}{\sqrt{1-4t^{2}}} \left( \frac{1 - \sqrt{1-4t^{2}}}{2t^{2}} \right)^{\nu} \, dt = \frac{1}{\nu} x^{\nu/2} \left( \frac{1-\sqrt{1- 4x}}{2x} \right)^{\nu}. \end{equation} \noindent This can now be verified by observing that both sides vanish at $x=0$ and a direct computation shows that the derivatives match. \end{proof} The integral is stated next. It appears as entry $6.611.3$ in \cite{gradshteyn-2015a}. \begin{corollary} For $\nu, \, b \in \mathbb{R}$ \begin{equation} \int_{0}^{\infty} e^{-x} K_{\nu}(bx) \, dx = \\ \frac{\pi}{2 b^{\nu} \sin(\pi \nu)} \left[ \frac{(1+ \sqrt{1-b^{2}})^{\nu} - (1 - \sqrt{1 - b^{2}})^{\nu}}{\sqrt{1-b^{2}}} \right]. \end{equation} \noindent In particular, as $b \rightarrow 1$, \begin{equation} \int_{0}^{\infty} e^{-x} K_{\nu}(x) \, dx = \\ \frac{\pi \nu}{ \sin(\pi \nu)}, \end{equation} and as $\nu \rightarrow 0$, \begin{equation} \int_{0}^{\infty} e^{-x}K_{0}(bx) \, dx = \frac{\log(1+\sqrt{1-b^{2}}) - \log(1-\sqrt{1-b^2})}{2 \sqrt{1-b^2}}. \end{equation} \noindent Finally, letting $\nu \rightarrow 0$ and $b \rightarrow 1$ gives \begin{equation} \int_{0}^{\infty} e^{-x} K_{0}(x) \, dx = 1. \end{equation} \end{corollary} \end{example} \section{Flexibility of the method of brackets} \label{sec-flexibility} This final section illustrates the flexibility of the method of brackets. To achieve this goal, we present four different evaluations of the integral \begin{equation} \label{flex-0} I(a,b) = \int_{0}^{\infty} \frac{e^{-a^{2}x^{2}} \, dx}{x^{2} + b^{2}}. \end{equation} \noindent The parameters $a,\, b$ are assumed to be real and positive. 
This formula appears as entry $3.466.1$ in \cite{gradshteyn-2015a} with value \begin{equation} \label{real-value} I(a,b) = \frac{\pi}{2b} e^{a^{2}b^{2}} ( 1 - \textnormal{erf}(ab) ) \end{equation} \noindent and the reader will find in \cite{albano-2011a} an elementary proof of it. This section presents four different ways to use the method of brackets to evaluate this integral. \smallskip \noindent \texttt{Method 1}. Start with the bracket series representations \begin{eqnarray} \exp(- a^{2} x^{2}) & = & \sum_{n_{1}} \phi_{n_{1}} a^{2n_{1}} x^{2n_{1}} \label{flex-4} \\ \frac{1}{x^{2}+b^{2}} & = & \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{2}n_{3}} b^{2n_{2}} x^{2n_{3}} \langle 1 + n_{2} + n_{3} \rangle \nonumber \end{eqnarray} \noindent and produce the bracket series \begin{equation} I(a,b) = \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{2n_{2}} \langle 1+ n_{2} + n_{3} \rangle \, \langle 2n_{1} + 2n_{3} + 1 \rangle. \end{equation} \smallskip \noindent \texttt{Method 2}. The second form begins with the Mellin-Barnes representations \begin{eqnarray} \exp(-a^{2} x^{2} ) & = & \frac{1}{4 \pi i } \int_{\gamma} x^{-t} a^{-t} \Gamma \left( \frac{t}{2} \right) \, dt \label{flex-8} \\ \frac{1}{x^{2}+b^{2}} & = & \frac{1}{4 \pi b^{2} i} \int_{\gamma} x^{-s} b^{s} \Gamma \left( \frac{s}{2} \right) \Gamma \left( 1 - \frac{s}{2} \right) \, ds \nonumber \end{eqnarray} \noindent Replacing these in \eqref{flex-0}, a bracket appears and one obtains the representation \begin{equation} I(a,b) = \frac{1}{4b^{2} (2 \pi i )^{2}} \int_{\gamma_{1}} \int_{\gamma_{2}} a^{-t} b^{s} \Gamma \left( \tfrac{t}{2} \right) \Gamma \left( \tfrac{s}{2} \right) \Gamma \left( 1 - \tfrac{s}{2} \right) \langle 1 - t - s \rangle \, ds \, dt.
\label{flex-11} \end{equation} \noindent The bracket is used to evaluate the $s$-integral to produce $s^{*} = 1-t$ and the expression \begin{equation} I(a,b) = \frac{1}{4b^{2} (2 \pi i )} \int_{\gamma} a^{-t} b^{1-t} \Gamma \left( \tfrac{t}{2} \right) \Gamma \left( \tfrac{1-t}{2} \right) \Gamma \left( \tfrac{1+t}{2} \right) \, dt. \end{equation} \noindent The next step is to express the gamma factors in the numerator as bracket series to obtain \begin{equation*} I(a,b) = \frac{1}{4b^{2} (2 \pi i )} \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1} n_{2} n_{3}} \int_{\gamma} a^{-t} b^{1-t} \langle \tfrac{t}{2} + n_{1} \rangle \langle \tfrac{1}{2} - \tfrac{t}{2} + n_{2} \rangle \langle \tfrac{1}{2} + \tfrac{t}{2} + n_{3} \rangle \, dt \end{equation*} \noindent and for instance selecting the bracket in $n_{1}$ to eliminate the integral and using \begin{equation} \left \langle \tfrac{t}{2} + n_{1} \right \rangle = 2 \langle t + 2 n_{1} \rangle \end{equation} \noindent gives \begin{equation} I(a,b) = \frac{1}{2b} \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{2n_{1}} \langle \tfrac{1}{2} + n_{1} + n_{2} \rangle \langle \tfrac{1}{2} - n_{1} + n_{3} \rangle. \end{equation} \smallskip \noindent \texttt{Method 3}. The next way to evaluate the integral \eqref{flex-0} is a mixture of the previous two. 
Using \begin{eqnarray} \frac{1}{x^{2} + b^{2}} & = & \sum_{n_{1}} \sum_{n_{2}} \phi_{n_{1}n_{2}} b^{2n_{1}} x^{2n_{2}} \langle 1 + n_{1} + n_{2} \rangle \label{flex-17} \\ \exp(-a^{2}x^{2}) & = & \frac{1}{4 \pi i } \int_{\gamma} x^{-t} a^{-t} \Gamma \left( \tfrac{t}{2} \right) \, dt \nonumber \end{eqnarray} \noindent and the usual procedure gives \begin{equation*} I(a,b) = \frac{1}{4 \pi i } \sum_{n_{1}} \sum_{n_{2}} \phi_{n_{1}n_{2}} b^{2n_{1}} \langle 1+ n_{1} + n_{2} \rangle \int_{\gamma} a^{-t} \Gamma \left( \tfrac{t}{2} \right) \langle 2n_{2} - t +1 \rangle \, dt \end{equation*} \noindent Now use the bracket in $n_{2}$ to eliminate the integral and produce \begin{equation} I(a,b) = \frac{1}{2} \sum_{n_{1}} \sum_{n_{2}} \phi_{n_{1}n_{2}} a^{-2 n_{2}-1} b^{2n_{1}} \langle 1+ n_{1} + n_{2} \rangle \Gamma \left( n_{2} + \tfrac{1}{2} \right) \end{equation} \noindent and replacing the gamma factor by its bracket series yields \begin{equation} I(a,b) = \frac{1}{2} \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1}n_{2}n_{3}} b^{2n_{1}} a^{-2n_{2}-1} \langle 1+ n_{1} + n_{2} \rangle \langle\tfrac{1}{2} + n_{2} + n_{3} \rangle. \end{equation} \smallskip \noindent \texttt{Method 4}. The final form uses the representations \begin{eqnarray} \frac{1}{x^{2} + b^{2}} & = & \frac{1}{4 \pi b^{2}i} \int_{\gamma} x^{-s} b^{s} \Gamma \left( \frac{s}{2} \right) \Gamma \left( 1 - \frac{s}{2} \right) \, ds \label{felx-22} \\ \exp(- a^{2}x^{2} ) & = & \sum_{n_{1}} \phi_{n_{1}} a^{2n_{1}} x^{2n_{1}} \nonumber \end{eqnarray} \noindent to write \begin{equation} I(a,b) = \frac{1}{4 b^{2} \pi i } \sum_{n_{1}} \phi_{n_{1}} a^{2n_{1}} \int_{\gamma} b^{s} \Gamma \left( \tfrac{s}{2} \right) \Gamma \left( 1 - \tfrac{s}{2} \right) \langle 2n_{1} - s + 1 \rangle \, ds, \end{equation} \noindent where the last bracket comes from the original integral.
Now write the gamma factors as bracket series to produce \begin{equation*} I(a,b) = \frac{1}{2b^{2}(2 \pi i )} \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1} n_{2} n_{3}} a^{2n_{1}} \left[ \int_{\gamma} b^{s} \langle \tfrac{s}{2} + n_{2} \rangle \langle 1 - \tfrac{s}{2} + n_{3} \rangle \langle 2n_{1} - s +1 \rangle \, ds \right], \end{equation*} \noindent and using the $n_{2}$-bracket to eliminate the integral yields \begin{equation} I(a,b) = \frac{1}{b^{2}} \sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \phi_{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{-2n_{2}} \langle 1+ n_{2} + n_{3} \rangle \, \langle 2n_{1} + 2n_{2} + 1 \rangle. \label{flex-26} \end{equation} These four bracket series lead to the terms \begin{eqnarray} T_{1} & = & \frac{1}{2b} \sum_{k=0}^{\infty} \frac{1}{k!} \Gamma( \tfrac{1}{2} - k ) \Gamma( \tfrac{1}{2} +k ) (-a^{2} b^{2})^{k} \\ & = & \frac{\pi}{2b} \exp(a^{2}b^{2}) \nonumber \\ T_{2} & = & \frac{a}{2} \sum_{k=0}^{\infty} \frac{1}{k!} \Gamma(1+k) \Gamma( - \tfrac{1}{2} - k) (-a^{2} b^{2} )^{k} \\ & = & - \frac{\pi}{2b} \exp(a^{2}b^{2}) \textnormal{erf}(ab) \nonumber \\ T_{3} & = & \frac{1}{2a b^{2}} \sum_{k=0}^{\infty} \frac{1}{k!} \Gamma(1+k) \Gamma( \tfrac{1}{2} + k ) \left( - \frac{1}{a^{2}b^{2}} \right)^{k} \\ & = & \frac{\sqrt{\pi}}{2ab^{2}} \pFq20{1 \,\,\,\, \tfrac{1}{2}}{-}{- \frac{1}{a^{2}b^{2}}}. \nonumber \end{eqnarray} \noindent The values $T_{1}$ and $T_{2}$ and Rule $E_{3}$ in Section \ref{sec-rules} are combined to give the value \begin{equation} I(a,b) = \frac{\pi}{2b} \exp(a^{2}b^{2}) \left[ 1 - \textnormal{erf}(ab) \right], \end{equation} \noindent confirming \eqref{real-value}. The value of $T_{3}$ is useful in the asymptotic study of $I(a,b)$ as $a^{2}b^{2} \rightarrow \infty$. \medskip The final example illustrates a different combination of ideas.
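As an aside, the evaluation \eqref{real-value} is easy to confirm numerically using only the Python standard library. The script below is our own sanity check, not part of the bracket machinery; it compares a midpoint-rule quadrature of \eqref{flex-0} against the closed form.

```python
import math

def I_numeric(a, b, upper=12.0, steps=200_000):
    """Midpoint rule on [0, upper]; the integrand decays like exp(-a^2 x^2),
    so the truncation error beyond the cutoff is negligible here."""
    h = upper / steps
    return h * sum(math.exp(-a * a * x * x) / (x * x + b * b)
                   for x in ((k + 0.5) * h for k in range(steps)))

def I_closed(a, b):
    """Entry 3.466.1: (pi / 2b) exp(a^2 b^2) (1 - erf(ab))."""
    return math.pi / (2 * b) * math.exp(a * a * b * b) * (1 - math.erf(a * b))

for a, b in [(1.0, 1.0), (0.5, 2.0), (2.0, 0.7)]:
    assert abs(I_numeric(a, b) - I_closed(a, b)) < 1e-6
```

The cutoff at $x=12$ is safe because the Gaussian factor is below machine precision there for these parameter choices.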
\begin{example} Consider the two-dimensional integral \begin{equation} J(a,b) = \int_{0}^{\infty} \int_{0}^{\infty} x^{a-1} y^{b-1} \textnormal{Ei}(-x^{2}y) K_{0} \left( \frac{x}{y} \right) \, dx \, dy \end{equation} \noindent where $\textnormal{Ei}$ is the exponential integral defined by \begin{equation} \textnormal{Ei}(-x) = - \int_{1}^{\infty} \frac{\exp(- x t )}{t} \, dt \quad \textnormal{for } x>0 \end{equation} \noindent with divergent representation \begin{equation} \textnormal{Ei}(-x^{2}y) = \sum_{\ell \geq 0} \phi_{\ell} \frac{x^{2 \ell} y^{\ell}}{\ell} \end{equation} \noindent (see \cite{gonzalez-2017a} for the concept of divergent expansion in the context of the method of brackets). Using this series and the Mellin-Barnes representation of $K_{0}$ \begin{equation} K_{0} \left( \frac{x}{y} \right) = \frac{1}{4 \pi i } \int_{\gamma} \Gamma^{2}(t) \left( \frac{x}{2y} \right)^{-2t} \, dt \end{equation} \noindent yields \begin{equation} J(a,b) = \frac{1}{4 \pi i } \sum_{\ell \geq 0} \phi_{\ell} \frac{1}{\ell} \int_{\gamma} \Gamma^{2}(t) 2^{2t} \langle a + 2 \ell - 2t \rangle \langle b + \ell + 2t \rangle \, dt, \end{equation} \noindent where the two brackets come from the integrals on the half-line. 
To evaluate this integral use the bracket $\langle a + 2 \ell - 2 t \rangle $ to produce \begin{eqnarray} \label{1025} J(a,b) & = & \frac{1}{8 \pi i } \sum_{\ell \geq 0} \frac{\phi_{\ell}}{\ell} \left[ \int_{\gamma} \Gamma^{2}(t) 2^{2t} \langle \tfrac{a}{2} + \ell - t \rangle \langle b + \ell + 2t \rangle \, dt \right] \\ & = & \frac{1}{8 \pi i } \sum_{\ell \geq 0} \frac{\phi_{\ell}}{\ell} \left[ 2 \pi i \Gamma^{2}(t) 2^{2t} \langle b+ \ell + 2 t \rangle \right]_{t = \tfrac{a}{2} + \ell} \nonumber \\ & = & \frac{1}{4} \sum_{\ell \geq 0} \frac{\phi_{\ell}}{\ell} \Gamma^{2} \left( \tfrac{a}{2}+ \ell \right) 2^{a + 2 \ell} \langle a + b + 3 \ell \rangle, \nonumber \end{eqnarray} \noindent and eliminating the series with the bracket yields the value \begin{equation} J(a,b) = - \frac{4^{-1 - b/3 + a/6}}{a+b} \Gamma^{2} \left( \frac{a - 2 b }{6} \right) \Gamma \left( \frac{a+b}{3} \right). \end{equation} \end{example} \section{Feynman diagrams} \label{sec-feynman} A variety of interesting integrals appear in the evaluation of Feynman diagrams. See \cite{smirnov-2004a,smirnov-2006a} for details. 
The example corresponds to a one-loop propagator diagram in the scalar theory with internal propagators of equal masses \cite{boos-1991a}: {{ \begin{figure}[!ht] \begin{center} \centerline{\psfig{file=Diagram1.eps,width=15em,angle=0}} \caption{The loop diagram} \label{figure1} \end{center} \end{figure} }} \newline \noindent The integral attached to this diagram is \begin{equation} J(\alpha, \beta, m,m) = \int d^{D}Q \frac{1}{ \left[ Q^{2}-m^{2} \right]^{\alpha} \left[ (p+Q)^{2} - m^{2} \right]^{\beta}} \end{equation} \noindent with Mellin-Barnes representation given by \begin{multline} J(\alpha,\beta,m,m) = \pi^{D/2} i^{1-D} \frac{ (-m^{2})^{D/2-\alpha-\beta}}{\Gamma(\alpha) \Gamma(\beta)} \\ \times \frac{1}{2 \pi i } \int_{\gamma} (p^{2})^{u} (-m^{2})^{u} \frac{\Gamma(-u) \Gamma(\alpha + u) \Gamma(\beta + u) \Gamma(\alpha + \beta - D/2 +u)} {\Gamma(\alpha + \beta + 2u)} \, du. \end{multline} \noindent Replacing the Gamma factors in the numerator by their corresponding bracket series gives \begin{multline} J(\alpha, \beta, m,m) \leftarrow \pi^{D/2} i^{1-D} \frac{(-m^{2})^{D/2-\alpha - \beta}}{\Gamma(\alpha) \Gamma(\beta)} \sum_{\{ n \}} \phi_{n_{1}, \ldots, n_{4}} \\ \frac{1}{2 \pi i} \int_{\gamma} (p^{2})^{u} (- m^{2})^{u} \frac{ \langle -u +n_{1} \rangle \langle \alpha + u + n_{2} \rangle \langle \beta + u + n_{3} \rangle \langle \alpha + \beta - D/2+ u + n_{4} \rangle } {\Gamma(\alpha + \beta + 2u) } \, du. \end{multline} \noindent To evaluate the integral, the vanishing of the bracket $\langle - u + n_{1} \rangle$ gives $u^{*} = n_{1}$ and $J$ is expressed as a bracket series \begin{multline} J(\alpha, \beta, m,m) = \pi^{D/2} i^{1-D} \frac{(-m^{2})^{D/2- \alpha - \beta}}{\Gamma(\alpha) \Gamma(\beta)} \\ \times \sum_{\{ n \}} \phi_{n_{1}, \ldots, n_{4}} (p^{2})^{u^{*}} (-m^{2})^{u^{*}} \frac{ \langle \alpha + u^{*} + n_{2} \rangle \langle \beta + u^{*} + n_{3} \rangle \langle \alpha + \beta - D/2 + u^{*} + n_{4} \rangle } { \Gamma(\alpha + \beta + 2 u^{*}) }.
\end{multline} The terms obtained from the bracket series above are given as hypergeometric values \begin{eqnarray*} T_{1} & = & \pi^{D/2} i^{1-D} (-m^{2})^{D/2 - \alpha - \beta} \frac{\Gamma(\alpha + \beta - D/2)}{\Gamma(\alpha + \beta)} \pFq32{\alpha \,\,\,\,\,\, \beta \,\,\,\,\,\, \alpha + \beta - \tfrac{D}{2}}{\tfrac{\alpha+ \beta}{2} \,\,\,\,\,\, \tfrac{\alpha + \beta +1}{2} }{ \frac{p^{2}}{4m^{2}}} \\ T_{2} & = & \pi^{D/2} i^{1-D} (-m^{2})^{D/2 - \beta} (p^{2})^{-\alpha} \, \frac{\Gamma( \beta - D/2)}{\Gamma( \beta)} \pFq32{\alpha \,\,\,\,\,\, 1+ \frac{\alpha - \beta}{2} \,\,\,\,\,\, \tfrac{1+ \alpha - \beta}{2}}{ 1 + \alpha - \beta \,\,\,\,\,\, 1 - \beta + \tfrac{D}{2} }{ \frac{4m^{2}}{p^{2}}} \\ T_{3} & = & \pi^{D/2} i^{1-D} (-m^{2})^{D/2 - \alpha} (p^{2})^{-\beta} \, \frac{\Gamma( \alpha - D/2)}{\Gamma( \alpha)} \pFq32{\beta \,\,\,\,\,\, 1+ \frac{\beta - \alpha}{2} \,\,\,\,\,\, \tfrac{1+ \beta - \alpha}{2}}{ 1 + \beta - \alpha \,\,\,\,\,\, 1 - \alpha + \tfrac{D}{2} }{ \frac{4m^{2}}{p^{2}}} \\ T_{4} & = & \pi^{D/2} i^{1-D} (p^{2})^{D/2 - \alpha -\beta} \, \frac{\Gamma(D/2 - \alpha ) \Gamma( D/2 - \beta) \Gamma( \alpha + \beta - D/2)} {\Gamma(\alpha) \Gamma(\beta) \Gamma(D- \alpha - \beta)} \\ & & \quad \quad \quad \quad \quad \quad \times \pFq32{ \alpha + \beta - \frac{D}{2} \,\,\,\,\,\, 1 + \tfrac{\alpha + \beta - D }{2} \,\,\,\,\, \tfrac{1 + \alpha + \beta -D}{2} } { 1 +\beta - \tfrac{D}{2} \,\,\,\,\, 1 + \alpha - \tfrac{D}{2} }{ \frac{4m^{2}}{p^{2}}}. \end{eqnarray*} The conclusion is that the evaluation of the Feynman diagram shown in Figure \ref{figure1} is given by \begin{equation} J(\alpha, \beta, m, m ) = \begin{cases} T_{1} \quad & \textnormal{for} \,\, \left| \frac{p^{2}}{4m^{2}} \right| < 1 \\ T_{2} + T_{3} + T_{4} \quad & \textnormal{for} \,\, \left| \frac{4m^{2}}{p^{2}} \right| < 1. \end{cases} \end{equation}
\section{Conclusions} \label{sec-conclusion} The work presented here gives a new procedure to evaluate Mellin-Barnes integral representations. The main idea is to replace the Gamma factors appearing in such representations by their corresponding bracket series. The method of brackets is then used to evaluate these integrals. This method has been shown to be simple and efficient, given its algorithmic nature. A collection of representative examples has been provided. \section{Acknowledgments} I.K. was supported in part by Fondecyt (Chile) Grants Nos. 1040368, 1050512 and 1121030, by DIUBB (Chile) Grant Nos. 102609, GI 153209/C and GI 152606/VC. I.~G. wishes to thank the Department of Mathematics at Tulane University for its hospitality while part of this work was conducted. \smallskip The Feynman diagram has been drawn using JaxoDraw \cite{binosi-2009a}.
\section{Introduction} In Kirby's problem 4.5 \cite{kirbylist}, Casson asks which rational homology $3$-spheres bound rational homology $4$-balls. While rational homology $3$-spheres abound in nature, including the $r$-surgery $S^3_r(K)$ on a knot $K$ for any $r \in \mathbb{Q} - \{ 0 \}$, very few of them actually bound rational homology balls. In fact, Aceto and Golla showed in \cite[Theorem 1.1]{acetogolla} that for every knot $K$ and every $q \in \mathbb{Z}_+$, there exist at most finitely many $p \in \mathbb{Z}_+$ such that $S^3_{p/q}(K)$ bounds a rational homology ball. It is hard to answer Casson's question in full generality, but recently a great deal of progress has been made on specific classes of rational homology $3$-spheres. For example, in 2007 we learnt the answer for lens spaces \cite{liscasingle07, liscamultiple07}, in 2020 for positive integral surgeries on positive torus knots \cite{acetogolla, GALL}, and in between we learnt the answer for several other classes of Seifert fibred spaces with three exceptional fibres \cite{lecuonacomplementary, lecuonamontesinos}. We do not yet know the answer for general Seifert fibred spaces with three exceptional fibres. In \cite{lokteva2021surgeries}, the author started studying surgeries on algebraic (iterated torus) knots, which are not Seifert fibred but decompose into Seifert fibred spaces when cut along a maximal system of incompressible tori \cite{gordon}. An important tool to study which $3$-manifolds bound rational homology balls is the following corollary of Donaldson's diagonalisation theorem \cite[Theorem 1]{donaldsonthm}: \begin{thm*}[Corollary of Donaldson's Theorem] Let $Y$ be a rational homology 3-sphere and $Y=\partial X$ for $X$ a negative definite smooth connected oriented 4-manifold. If $Y=\partial W$ for a smooth rational homology 4-ball $W$, then there exists a lattice embedding \[ (H_2(X)/\Torsion, Q_{X}) \hookrightarrow (\mathbb{Z}^{\rk H_2(X)}, -\Id).
\] \end{thm*} \noindent Here $Q_X$ is the intersection form on $H_2(X)/\Torsion$. Determining which $3$-manifolds in a family $\mathfrak{F}$ bound rational homology $4$-balls using lattice embeddings often goes like this: \begin{enumerate}[label=(\arabic*)] \item Find a negative-definite filling $X(Y)$ for every $Y \in \mathfrak{F}$. \item Guess the family $\mathfrak{F}' \subset \mathfrak{F}$ of manifolds whose filling's intersection lattice (that is second homology with the intersection form) embeds into the standard lattice of the same rank. \item Show that $(H_2(X(Y)),Q_{X(Y)})$ does not embed into $(\mathbb{Z}^{b_2(X(Y))}, -\Id)$ for any $Y \in \mathfrak{F}-\mathfrak{F}'$. \item Hopefully prove that $Y$ bounds a rational homology ball for any $Y\in \mathfrak{F}'$. \end{enumerate} This process is sensitive at every step. There exist $3$-manifolds without any definite fillings \cite{gollalarson}. However, lens spaces, surgeries on torus knots and large surgeries on algebraic knots do have definite fillings. In fact, they all bound definite plumbings of disc bundles on spheres. Step (4) is definitely not guaranteed to work either. For example, $S^3_{-m^2}(K)$ bounds the knot trace $D^4_{-m^2}(K)$ ($D^4$ with a $-m^2$-framed $2$-handle glued along $K$) which has intersection lattice $(\mathbb{Z}, (-m^2))$ which embeds into $(\mathbb{Z}, -\Id)$, but according to \cite[Theorem 1.2]{acetogolla}, $S^3_{-m^2}(K)$ bounds a rational homology ball for at most two positive integer values of $m$. However, in \cite{liscasingle07, liscamultiple07,acetogolla,GALL} the authors managed to find an $X(Y)$ for each $Y$ in such a way that the lattice embedding obstruction turned out perfect. These $X(Y)$s have been plumbings of disc bundles on spheres with a tree-shaped plumbing graph, moreover satisfying the property that the quantity \[I=\sum_{v \in V} (-w(v)-3), \] where $V$ is the set of vertices of the graph and $w(v)$ is the weight of $v$, would be negative. 
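To make the lattice-embedding condition concrete: for a small linear plumbing graph, the existence of an embedding into $(\mathbb{Z}^{b_2}, -\Id)$ can be decided by brute force. The Python sketch below is ours and purely illustrative (the function name and entry bound are our choices); it searches for vectors $v_i$ with $v_i \cdot v_i = w_i$, $v_i \cdot v_{i+1} = 1$, and all other pairings zero.

```python
from itertools import product

def embeds_linear(weights, max_entry=2):
    """Decide whether the intersection lattice of the linear plumbing graph
    with the given negative weights embeds into (Z^n, -Id), n = #vertices.
    Entries of absolute value at most max_entry suffice for weights down to
    -(max_entry ** 2); increase the bound for heavier vertices."""
    n = len(weights)
    dot = lambda u, v: -sum(x * y for x, y in zip(u, v))  # e_i . e_j = -delta_ij
    cands = [[v for v in product(range(-max_entry, max_entry + 1), repeat=n)
              if dot(v, v) == w] for w in weights]

    def extend(chosen):
        k = len(chosen)
        if k == n:
            return True
        for v in cands[k]:
            # consecutive vertices must pair to 1, non-adjacent ones to 0
            if all(dot(chosen[i], v) == (1 if i == k - 1 else 0) for i in range(k)):
                if extend(chosen + [v]):
                    return True
        return False

    return extend([])

assert embeds_linear([-2, -2, -2])           # boundary L(4, 3): embeddable
assert embeds_linear([-2, -2, -3, -4])       # boundary L(25, 18): embeddable
assert not embeds_linear([-2, -2, -2, -2])   # determinant 5 is not a square
```

For tree-shaped graphs the same search applies after replacing the chain adjacency condition by the pairings dictated by the edges of the tree; of course this exhaustive approach is only practical for very small graphs.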
Steps (2) and (3) can sometimes be done at the same time, but often, like in \cite{GALL} where $\mathfrak{F}$ is the set of positive integral surgeries on positive torus knots, they cannot. It is then important to eliminate embeddable cases early in order to proceed with step (3). Theorem 1.1 in \cite{GALL}, the classification of positive integral surgeries on positive torus knots bounding rational homology balls, lists 5 families that are Seifert fibred spaces with 3 exceptional fibres. They bound a negative-definite star-shaped plumbing with three legs. Families (1)-(3) have two complementary legs, that is two legs whose weight sequences are Riemenschneider dual (defined, for the reader's convenience, in Section \ref{sec:complegs} of this paper). All such 3-manifolds that bound a rational homology ball have been classified by Lecuona in \cite{lecuonacomplementary}. Family (5) contains two exceptional graphs which were known to bound rational homology balls both because they arise as boundaries of tubular neighbourhoods of rational cuspidal curves in \cite{bobluhene07} and because they are surgeries on torus knots $T(p,q)$ where $q \equiv \pm 1 \pmod{p}$, which were studied in \cite{acetogolla}. However, Family (4) took the authors of \cite{GALL} a while to find, in the meantime thwarting their attempts at step (3). Eventually they found Family (4) using a computer. This allowed them to finish off their lattice embedding analysis, but Family (4) still looked surprising and strange and begged the question of ``How could we have predicted its existence?" This work came out of widening the perspective and asking which boundaries of $4$-manifolds described by plumbing trees with negative definite intersection forms and low $I$ bound rational homology balls. In particular, we asked ourselves which plumbing trees generate an embeddable intersection lattice. 
We looked at what the graphs of $3$-manifolds we know to bound rational homology balls look like and tried to see if there are any common patterns. In \cite[Remark 3.2]{lecuonamontesinos}, Lecuona describes how to get all lens spaces that bound rational homology balls from the linear graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ using some modifications. (She restates Lisca's result in \cite{liscasingle07} in the language of plumbing graphs rather than fractions $p/q$ for $L(p,q)$.) In this paper we define a couple of moves called GOCL and IGOCL moves on embedded plumbing graphs that preserve embeddability and generalise the moves described by Lecuona. From this point of view, Lecuona's list simply turns into a list of $IGOCL$ and $GOCL$ moves that keep the graph linear. We may then ask ourselves if these moves preserve the property of the described $3$-manifolds bounding rational homology balls. There is unfortunately no obvious rational homology cobordism between two $3$-manifolds differing by a GOCL or an IGOCL move. We can however prove that repeated applications of these moves to the embeddable linear graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ give $3$-manifolds bounding rational homology balls. We get the following theorem: \begin{thm} \label{thm:general} All $3$-manifolds described by the plumbing graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} bound rational homology balls. \end{thm} \noindent In fact, we prove this theorem by showing that the above plumbed $3$-manifolds bound a double cover of $D^4$ branched over a $\chi$-slice link \cite[Definition 1]{donaldowens}. By \cite[Proposition 5.1]{donaldowens}, this must be a rational homology $4$-ball. At the same time, we show that these links are $\chi$-ribbon. 
These families, together with the one generated from $(-2,-2,-2)$ which only contains linear graphs already found by Lisca, include all lens spaces bounding rational homology balls. They also contain more complicated graphs, with linear complexity (defined in \cite{aceto20}) up to 2. Many papers, e.g. \cite{aceto20, acetogolla, GALL, lecuonamontesinos, simone2020classification}, using lattice embeddings to obstruct plumbed $3$-manifolds from bounding a rational homology ball have used arguments of the form ``If my graph $\Gamma$ is embeddable, then this other linear graph obtained from $\Gamma$ is embeddable, and we know what those look like.", which gets harder to do the further $\Gamma$ is away from being linear. Thus we only really have lattice embedding obstructions so far for families of graphs of complexity $1$. The families of Theorem \ref{thm:general} include many graphs of Seifert fibred spaces. They include Family (4) in \cite{GALL} and predict its existence because Family (4) is just the intersection between the set of graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} and the negative-definite plumbing graphs of positive integral surgeries on positive torus knots. As mentioned above, there is no obvious rational homology cobordism between the $3$-manifolds described by two plumbing graphs differing by a GOCL or an IGOCL move. This is interesting in comparison with the case in the works by Aceto \cite{aceto20} and Lecuona \cite{lecuonacomplementary}. 
Lecuona shows that given a plumbing graph $\Gamma$, you can modify it to a graph $\Gamma'$ by subtracting $1$ from the weight of a vertex $v$ and attaching two complementary legs $(-a_1,\dots,-a_m)$ and $(-b_1,\dots,-b_n)$ (see Section \ref{sec:complegs} or \cite{lecuonacomplementary} for definitions) to $v$, and the $3$-manifolds $Y_\Gamma$ and $Y_{\Gamma'}$ described by the graphs will be rational homology cobordant; in particular, one bounds a rational homology $4$-ball if and only if the other does. Thus, if she wants to know if a $Y_{\Gamma'}$, for $\Gamma'$ a graph with two complementary legs coming out of the same vertex, bounds a rational homology ball, she can reduce it to the same question for a simpler graph. However, because we do not know if the $GOCL$ and $IGOCL$ moves give rise to rational homology cobordisms, we cannot play this trick for complementary legs growing out of different vertices. An interesting generalisation of \cite[Theorem 1.1]{GALL} would be to classify all positive rational surgeries on positive torus knots that bound rational homology balls. Theorem \ref{thm:general} allows us to construct more examples of such surgeries than it would be practical to write down. Instead, we may ask ourselves the following question: \begin{question} For which $1<p<q$ with $\GCD(p,q)=1$ is there an $r\in \mathbb{Q}_+$ such that $S^3_r(T(p,q))$ bounds a rational homology ball? \end{question} The entirety of Section \ref{sec:torusknots} is devoted to showing the following theorem: \begin{thm} \label{thm:torusknotlist} For the following pairs $(p,q)$ with $1<p<q$ and $\GCD(p,q)=1$, there is at least one $r \in \mathbb{Q}_+$ such that $S^3_r(T(p,q))$ bounds a rational ball. Here $k,l \geq 0$. \begin{enumerate} \item $(k+2,(l+1)(k+2)+1)$ \item $(k+2,(l+2)(k+2)-1)$ \item $(2k+3,(l+1)(2k+3)+2)$ \item $(2k+3,(l+2)(2k+3)-2)$ \item $(k^2+7k+11, k^3+12k^2+45k+51)$ \item $(P_{l+1},P_{l+2})$ for $(P_i)$ a sequence defined by $P_0=1$, $P_1=4$, $P_2=19$ and $P_{i+2}=5P_{i+1}-P_i$.
\item $(Q_{l+1},Q_{l+2})$ for $(Q_i)$ a sequence defined by $Q_0=1$, $Q_1=2$, $Q_2=9$ and $Q_{i+2}=5Q_{i+1}-Q_i$. \item $(R_{l+1},R_{l+2})$ for $(R_i)$ a sequence defined by $R_0=1$, $R_1=3$, $R_2=17$ and $R_{i+2}=6R_{i+1}-R_i$. \item $(S^{(k)}_{l+1},S^{(k)}_{l+2})$ for $(S^{(k)}_i)$ a sequence defined by $S^{(k)}_0=1$, $S^{(k)}_1=2$, $S^{(k)}_2=2k+7$ and $S^{(k)}_{i+2}=(k+4)S^{(k)}_{i+1}-S^{(k)}_i$. \item $(T^{(k)}_{l+1},T^{(k)}_{l+2})$ for $(T^{(k)}_i)$ a sequence defined by $T^{(k)}_0=1$, $T^{(k)}_1=k+2$, $T^{(k)}_2=k^2+6k+7$ and $T^{(k)}_{i+2}=(k+4)T^{(k)}_{i+1}-T^{(k)}_i$. \item $(U_{l+1},U_{l+2})$ for $(U_i)$ a sequence defined by $U_0=1$, $U_1=3$, $U_2=14$ and $U_{i+2}=5U_{i+1}-U_i$. \item $(A, (n+1)Q+P)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $A$ a multiplicative inverse to either $Q$ or $nQ+P$ modulo $(n+1)Q+P$ such that $0<A<(n+1)Q+P$. \item $((n+1)Q+P, (l+1)((n+1)Q+P)+A)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $A$ a multiplicative inverse to either $Q$ or $nQ+P$ modulo $(n+1)Q+P$ such that $0<A<(n+1)Q+P$. \item $(B, P)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $B$ a multiplicative inverse to either $P\lceil \frac{Q}{P}\rceil-Q$ or $Q-P\lfloor \frac{Q}{P} \rfloor$ modulo $P$ such that $0<B<P$. \item $(P, (l+1)P+B)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $B$ a multiplicative inverse to either $P\lceil \frac{Q}{P}\rceil-Q$ or $Q-P\lfloor \frac{Q}{P} \rfloor$ modulo $P$ such that $0<B<P$. 
\item $(P,Q)$ such that there is a number $n$ such that $(P,Q,n) \in \mathcal{R} \sqcup \mathcal{L}$ for the sets $\mathcal{R}$ and $\mathcal{L}$ defined in \cite[Theorem 1.1]{GALL}. (Note that here $r=n \in \{PQ, PQ-1, PQ+1 \}$, so we are looking at an integral surgery.) \end{enumerate} \end{thm} \noindent The interested reader can use the methods of Section \ref{sec:torusknots} to obtain a surgery coefficient $r$ too (or several). In this theorem, case 16 is shown to bound rational homology balls in \cite{GALL} and reflects the degenerate cases of surgeries on torus knots that are lens spaces or connected sums of lens spaces, cases 12-15 are shown to bound rational homology balls in \cite{lecuonacomplementary} because their graphs have a pair of complementary legs, while all the cases 1-11 are shown to bound rational homology balls in this paper. The authors of \cite{GALL} classified all positive integral surgeries on positive torus knots that bound rational homology balls. The classification included 18 families, whereof families 6-18 are included in our family 16, family 4 in our family 8, and the others in families 1 and 2. At the moment of writing we do not know of any other positive torus knots having positive surgeries bounding rational homology balls. The pair $(8,19)$ is in some metric the smallest example not to appear on the list of Theorem \ref{thm:torusknotlist}. Thus we may concretely ask: \begin{question} Is there an $r \in \mathbb{Q}_+$ such that $S^3_r(T(8,19))$ bounds a rational homology ball? \end{question} We may also note that some positive torus knots have many surgeries that bound rational homology balls. For example, Theorem \ref{thm:general} allows us to construct numerous finite and infinite families of surgery coefficients $r \in \mathbb{Q}_+$ such that $S^3_r(T(2,3))$ bounds a rational homology ball. 
All we need to do is to choose weights in the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} so that we get a starshaped graph with three legs, of which one is $(-2)$ and another is either $(-2,-2)$ or $(-3)$. For example $S^3_{\frac{(11k+20)^2}{22k^2+79k+71}}(T(2,3))$ bounds a plumbing of the shape shown in Figure \ref{fig:liscagraph2234extendedgeneral} with $(b_1,\dots, b_{m_1})=(4)$ and $(a_1,\dots,a_{n_1})=(3)$, and thus bounds a rational homology ball for any $k \geq 0$. There are also surgeries on $T(2,3)$ that bound rational homology balls but do not arise from our construction. For example, $S^3_{\frac{64}{7}}(T(2,3))=-S^3_{64}(T(3,22))$, which bounds a rational homology ball because it is the boundary of the tubular neighbourhood of a rational curve in $\mathbb{C}P^2$ \cite{bobluhene07}, but whose lattice embedding contains a basis vector with coefficient $2$, which we do not get by applying GOCL or IGOCL moves to $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$. The lattice-embedded plumbing graph of $S^3_{\frac{64}{7}}(T(2,3))$ does, however, fit into Family $\mathcal{C}$ of \cite{stipszabwahl} of symplectically embeddable plumbings. Unfortunately, Family $\mathcal{C}$ of \cite{stipszabwahl} contains both surgeries on $T(2,3)$ that bound rational homology balls and ones that do not. For example, $S^3_{\frac{169}{25}} (T(2,3))$ of \cite[Section 2.4, Figure 12]{stipszabwahl} does not bound a rational homology ball despite bounding a plumbing with an embeddable intersection form. A later paper \cite{bhupalstipsicz} classified which surgeries on $T(2,3)$ appearing in Family $\mathcal{C}$, viewed as surface singularity links, bound a rationally acyclic Milnor fibre. Interestingly, all but two of the embedded graphs in that family are generated by applying IGOCL moves to the graph of $S^3_{\frac{64}{7}}(T(2,3))$.
However, we do not know if any other members of Family $\mathcal{C}$ bound a rational homology ball which is not a Milnor fibre. Hence, the following is a rich open question worth studying: \begin{question} For which $r \in \mathbb{Q}_+$ does $S^3_r(T(2,3))$ bound a rational homology ball? \end{question} \subsection{Outline} We begin in Section \ref{sec:complegs} by recalling some results on complementary legs and the basics of the lattice embedding setup. In Section \ref{sec:analysis} we define the GOCL and IGOCL moves and show Theorem \ref{thm:general}. In Section \ref{sec:torusknots} we prove Theorem \ref{thm:torusknotlist} for the families 1-11, while the other families follow directly from \cite{lecuonacomplementary} and \cite{GALL}. \section{Background on Complementary Legs} \label{sec:complegs} In this section we review the definition and basic properties of complementary (sometimes called Riemenschneider dual) legs. \begin{dfn} We define the negative continued fraction $[a_1,\dots, a_n]^-$ as \[ [a_1,\dots, a_n]^-=a_1-\cfrac{1}{a_2-\cfrac{1}{\ddots - \cfrac{1}{a_n}}}. \] \end{dfn} Negative continued fractions often show up in low-dimensional topology because of the slam-dunk Kirby move \cite[Figure 5.30]{gompfstipsicz}, which allows us to replace a rational surgery on a knot with an integral surgery on a link. \begin{dfn} A two-component weighted linear graph $(-\alpha_1,...,-\alpha_n), (-\beta_1,...,-\beta_k)$ (with $\alpha_i, \beta_j$ integers greater than or equal to 2) is called a pair of complementary legs if \[ \frac{1}{[\alpha_1,...,\alpha_n]^-}+\frac{1}{[\beta_1,...,\beta_k]^-}=1. \] We call the sequence $(\beta_1,...,\beta_k)$ the Riemenschneider dual or complement of the sequence $(\alpha_1,...,\alpha_n)$, and we call the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$ complementary.
\end{dfn} \begin{dfn} A Riemenschneider diagram is a finite set of points $S$ in $\mathbb{Z}_+ \times \mathbb{Z}_-$ such that $(1,-1) \in S$ and for every point $(a,b) \in S$ but one, exactly one of $(a+1,b)$ or $(a,b-1)$ is in $S$. If $(n,k) \in S$ is the point with the largest $n-k$, we say that the Riemenschneider diagram represents the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$, where $\alpha_i$ is one more than the number of points with $x=i$ and $\beta_j$ is one more than the number of points with $y=-j$. \end{dfn} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{riemenschneider_diagram_example_2.png} \caption{Example of a Riemenschneider diagram representing the complementary fractions $[5,3,2,2]^-$ and $[2,2,2,3,4]^-$.} \label{riemenschneider_diagram} \end{figure} \begin{example} See Figure \ref{riemenschneider_diagram} for an example of a Riemenschneider diagram. \end{example} \begin{thm}[Riemenschneider \cite{riemenschneider}] The two fractions represented by a Riemenschneider diagram are complementary. \end{thm} \begin{rmk} Note that given any continued fraction $[\alpha_1,...,\alpha_n]^-$ with all $\alpha_i \geq 2$, we may construct a Riemenschneider diagram representing $[\alpha_1,...,\alpha_n]^-$ and its Riemenschneider dual. \end{rmk} In this paper, we are going to be interested in lattice embeddings, especially of complementary legs. Let $X$ be a plumbing of disc bundles on spheres. It can be described using a plumbing graph, which is a weighted simple graph. The weight indicates the Euler number of the disc bundle and an edge indicates a plumbing. (See \cite[Example 4.6.2]{gompfstipsicz} for a longer explanation.) 
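As a quick sanity check on the definitions above, negative continued fractions and the complementarity condition are easy to experiment with using exact rational arithmetic. The following sketch (the function names \texttt{ncf} and \texttt{ncf\_expand} are ours) evaluates $[a_1,\dots,a_n]^-$, recovers expansions, and cross-checks the complementary pair $[5,3,2,2]^-$ and $[2,2,2,3,4]^-$ of Figure \ref{riemenschneider_diagram}:

```python
from fractions import Fraction
from math import ceil

def ncf(seq):
    """Evaluate the negative continued fraction [a_1, ..., a_n]^-."""
    r = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        r = a - 1 / r
    return r

def ncf_expand(r):
    """Expand a rational r > 1 as a negative continued fraction
    with all entries >= 2 (the inverse of ncf)."""
    out = []
    while True:
        a = ceil(r)
        out.append(a)
        if r == a:
            return out
        r = 1 / (a - r)

x, y = ncf([5, 3, 2, 2]), ncf([2, 2, 2, 3, 4])
assert (x, y) == (Fraction(32, 7), Fraction(32, 25))
assert 1 / x + 1 / y == 1                    # the two fractions are complementary
assert ncf_expand(x / (x - 1)) == [2, 2, 2, 3, 4]   # complement of p/q is p/(p-q)
```

In particular, since the complement of $p/q$ is $p/(p-q)$, the last assertion recovers the Riemenschneider dual of $[5,3,2,2]^-$ directly from its value $32/7$.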
The second homology of $X$ is the free abelian group $\mathbb{Z}\langle V_1,\dots,V_k \rangle $ on the vertices and the intersection form is \[ \langle V_i, V_j \rangle_{Q_{X}} = \begin{cases} \text{weight of } V_i & \text{ if } i=j \\ 1 & \text{ if } V_i \text{ is adjacent to } V_j \\ 0 & \text{ otherwise.} \end{cases} \] A \textit{lattice embedding} $f: (H_2(X)/\Torsion, Q_{X}) \hookrightarrow (\mathbb{Z}^{N}, -\Id)$ is a linear map $f$ such that $\langle V_i, V_j \rangle_{Q_{X}} = \langle f(V_i), f(V_j) \rangle_{-\Id}$. We will simply denote $\langle \cdot, \cdot \rangle := \langle \cdot, \cdot \rangle_{-\Id}$. If nothing else is specified, then $N=\rk H_2(X)$, that is, the number of vertices in the graph. Common abuses of notation include ``embedding of the graph'', meaning an embedding of the lattice $(H_2(X)/\Torsion, Q_{X})$, where $X$ is described by the plumbing graph. The following theorem is well-known, but we explicitly write out the embedding construction for the reader's convenience. \begin{thm} \label{thm:complegembed} Every pair of complementary legs has a lattice embedding. \end{thm} \begin{proof} The embedding can be constructed algorithmically from a Riemenschneider diagram. Denote the vertices of the two complementary legs by $(U_1,\dots, U_{m_1})$ and $(V_1,\dots,V_{m_2})$. These vertices generate the second homology of the plumbed $4$-manifold described by the graph. We need to send every vertex to an element of $\mathbb{Z}\langle e_1, \dots, e_{m_1+m_2}\rangle$. Start by mapping both $U_1$ and $V_1$ to $e_1$. Order the points in the Riemenschneider diagram so that $P_1=(1,-1)$, and if $P_i=(a,b)$, then point $P_{i+1}$ is either $(a+1,b)$ or $(a,b-1)$. Now, we recursively build an embedding as follows.
For each non-final $i$, if the current partial embedding is $(u_1,...,u_n), (v_1,...,v_k)$ (meaning that $(U_1,\dots, U_n)$ gets mapped to $(u_1,...,u_n)$ and $(V_1,\dots, V_k)$ gets mapped to $(v_1, \dots, v_k)$) and $P_i=(a,b)$ is such that $P_{i+1}=(a+1,b)$, then the new partial embedding will be $(u_1,...,u_n+e_{i+1}), (v_1,...,v_k-e_{i+1},e_{i+1})$. If $P_{i+1}=(a,b-1)$, then the new partial embedding will be $(u_1,...,u_n-e_{i+1},e_{i+1}), (v_1,...,v_k+e_{i+1})$ instead. If $P_i$ is final and the current partial embedding is $(u_1,...,u_n), (v_1,...,v_k)$, the new embedding will be $(u_1,...,u_n+e_{i+1}), (v_1,...,v_k-e_{i+1})$, or the other way around, whatever is preferred. It is easy to see that an embedding $(u_1,...,u_{m_1}), (v_1,...,v_{m_2})$ constructed this way will have the properties: \begin{itemize} \item Each $u_i$ for $i=1,...,n-1$ and $v_j$ for $j=1,...,k$ will be a sum of consecutive basis vectors, all but the last one with coefficient $1$, and the last one with coefficient $-1$. Meanwhile $u_n$ will be a sum of consecutive basis vectors all with coefficient $1$. \item If the Riemenschneider diagram represents the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$, then $\langle u_i, u_i \rangle=-\alpha_i$ and $\langle v_j, v_j \rangle=-\beta_j$. \item Since $u_i$ and $u_{i+1}$ have exactly one basis vector in common, one with a positive coefficient and one with a negative one, $\langle u_i, u_{i+1} \rangle =1$, and similarly $\langle v_i, v_{i+1} \rangle =1$. \item The other pairs $(u_i, u_j)$ (with $|i-j|>1$) don't share basis vectors and are thus orthogonal. Similarly, the pairs $(v_i, v_j)$ with $|i-j|>1$ don't share basis vectors and are thus orthogonal. \item It is easy to show by induction on the construction that $\langle u_i, v_j \rangle=0$ for all $i,j$. \end{itemize} These properties show that we are in fact looking at a lattice embedding of the complementary legs. 
\end{proof} \begin{rmk} In fact, if $e_1$ is fixed to hit the first vertex of each complementary leg, the rest of the embedding is unique up to renaming of elements and sign of the coefficient \cite[Lemma 5.2]{bakerbucklecuona}. \end{rmk} The following facts are useful when dealing with lattice embeddings. We will often use these properties without citing them. The first fact follows from reversing the Riemenschneider diagram, the second from embedding the sequences $(a_m,\dots,a_1)$ and $(b_n,\dots, b_1)$ as in Theorem \ref{thm:complegembed} and mapping the $-1$-weighted vertex to $-e_1$, and the rest from looking at a Riemenschneider diagram. \begin{prop} \label{prop:latticeembeddingproperties} Let $(a_1,\dots,a_m)$ and $(b_1,\dots, b_n)$ be complementary sequences. Then the following hold: \begin{enumerate} \item The sequences $(a_m,\dots,a_1)$ and $(b_n,\dots, b_1)$ are complementary. \item The linear graph $(-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)$ embeds in $(\mathbb{Z}^{m+n}, -\Id)$. \item Either $a_m$ or $b_n$ must equal $2$, so assume without loss of generality that $b_n=2$. Blowing down the $-1$ in the linear graph $(-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)$ gives us the linear graph $(-a_1,\dots,-(a_m-1),-1,-b_{n-1},\dots,-b_1)$. This graph is once again a pair of complementary legs linked by a $-1$, described by the Riemenschneider diagram obtained by removing the last point. \item Repeatedly blowing down the $-1$ in linear graphs of the form \[ (-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)\] eventually takes us to $(-2,-1,-2)$. \item Similarly, blowing up next to the $-1$ gives $(-a_1,\dots,-(a_m+1),-1,-2,-b_n,\dots,-b_1)$ or $(-a_1,\dots,-a_m,-2,-1,-(b_n+1),\dots,-b_1)$, which are both pairs of complementary legs connected by a $-1$, described by Riemenschneider diagrams that are extensions of the initial one by one dot.
\end{enumerate} \end{prop} \section{Growing Complementary Legs on Lisca's Graphs} \label{sec:analysis} The idea for this work comes from studying the lattice embeddings of linear graphs and other trees that are known to bound rational homology 4-balls. Consider, for example, Lisca's classification of connected linear graphs that bound rational homology 4-balls \cite{liscasingle07}, described in the form most convenient for us by Lecuona in \cite[Remark 3.2]{lecuonamontesinos}. Every family of embeddable graphs can be obtained from the basic graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ by repeated application of two types of moves, one of which is the following: choose a basis vector $e$ hitting exactly two vertices $v$ and $w$, where $w$ is final, subtract $1$ from the weight of $v$ and attach a new vertex $u$ of weight $-2$ to $w$. We will show that we can do away with the assumption that $w$ is final and still get 3-manifolds bounding rational homology 4-balls by repeating this operation. \begin{figure} \centering \tikzfig{liscagraph2234} \caption{Lisca's $(-2,-2,-3,-4)$ graph with embedding.} \label{fig:liscagraph2234} \end{figure} \begin{figure} \centering \tikzfig{liscagraph2234extended1} \caption{An extension of Lisca's $(-2,-2,-3,-4)$ graph with embedding.} \label{fig:liscagraph2234extended1} \end{figure} \begin{example} Consider Figure \ref{fig:liscagraph2234}, showing an embedding of Lisca's $(-2,-2,-3,-4)$ graph into the standard lattice $(\mathbb{Z} \langle e_1, e_2, e_3, e_4 \rangle , -\Id)$. Note that $e_4$ and $e_3$ hit two vertices each. Choose $e_4$. We can now perform the operation described above by choosing $v$ to be the vertex of weight $-4$ and $w$ the vertex of weight $-3$. The result is shown in Figure \ref{fig:liscagraph2234extended1} together with its embedding, which is a kind of ``expansion'' of the embedding in Figure \ref{fig:liscagraph2234}.
Our new embedding has two basis vectors hitting exactly two vertices each, namely $e_3$ and $f_1$, whereas $e_4$ now hits three vertices. We may now perform the same operation again on any of these basis vectors, thereby obtaining any graph of the form described in Figure \ref{fig:liscagraph2234extendedgeneral}, with $k=0$. We will show that these graphs not only have lattice embeddings, but also bound rational homology 4-balls. \end{example} We will now introduce two moves on embedded plumbing graphs. Let $\Gamma=(V,E,W)$ be a weighted negative-definite graph with lattice embedding $F: (V,Q_{X_{\Gamma}}) \to (\mathbb{Z}^{|V|},-\Id)$. Assume that there is a basis vector $e$ of $\mathbb{Z}^{|V|}$ hitting exactly two vertices $A$ and $B$ in $\Gamma$, whose images are $v$ and $w$, in any order we prefer. Then a \textbf{GOCL (growth of complementary legs) operation} constructs an embedded graph $(\Gamma'=(V',E',W'), F')$ by $V'=V\cup \{C\}$, $E'=E\cup \{AC\}$ and, for a new basis vector $f$, $u:=F'(C)=-\langle e, v \rangle e - f$, $w':=F'(B)=w-\langle e, v \rangle \langle e, w \rangle f$ and $F'(D)=F(D)$ for all $D \in V-\{B\}$. Note that $\langle e, v \rangle \langle e, w \rangle = \langle f, u \rangle \langle f, w' \rangle$. Thus, the GOCL operation replaces $e$ with $f$ in the set of basis vectors hitting the graph exactly twice; moreover, the sign difference between the two occurrences of the basis vector is preserved. This operation can therefore be applied repeatedly. If we start with the graph consisting of two vertices of weight $-2$ and no edges, and the embedding $e_1-e_2$ and $e_1+e_2$, then repeated application of GOCL will simply give us two complementary legs. The other operation, which we will call \textbf{IGOCL (inner growth of complementary legs)}, could be described as growing complementary legs from the inside.
Suppose a basis vector $e$ hits exactly three vertices $A$, $B$ and $C$ in $\Gamma$, with their images under the lattice embedding $F$ being $u$, $v$ and $w$ respectively. Assume also that $B$ and $C$ are adjacent and that $\langle v, e \rangle \langle w, e \rangle = -1$, that is, $e$ hits $v$ and $w$ with opposite signs. Then $\Gamma'=(V', E', W')$ is described by $V'=V\cup \{ D \}$, $E'=(E-\{BC\})\cup \{BD, DC\}$ and, for a new basis vector $f$, $F'(D)=-\langle v, e \rangle e + \langle v, e \rangle f$, $F'(C)=w-\langle w, e \rangle e + \langle w, e \rangle f$, $F'(A)=u+\langle u, e \rangle f$ and $F'(X)=F(X)$ for all $X \in V - \{A,C\}$. After this operation is performed, we can perform it again on either $e$ or $f$, but the result is essentially the same. What it does is grow a chain of $-2$'s between two vertices, compensating by subtracting from the weight of a different vertex. If we apply the IGOCL operation to a vector hitting a pair of complementary legs three times, we still get a pair of complementary legs, which explains the name. Now that we have defined the GOCL and IGOCL moves, the remainder of this section will be dedicated to their applications to Lisca's basic graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$. We will show, for each Lisca graph in turn, that the results obtained from repeatedly applying the aforementioned operations always bound rational homology balls. We introduce the following notation. Given a weighted graph $\Gamma$, we let $X_{\Gamma}$ be the 4-dimensional plumbing associated to it and $Y_\Gamma$ be its boundary or, in other words, the associated 3-dimensional plumbing. If $\Gamma$ is a tree, then the attaching link is strongly invertible \cite{montesinoscoverings}, that is, it can always be drawn equivariantly with respect to the $180^{\circ}$ rotation around the $x$-axis, in such a way that every knot intersects the $x$-axis in exactly two points. (For example, see Figure \ref{fig:2234involution}.)
Let $\pi: X_\Gamma \to X_\Gamma$ be the involution given by extending this $180^\circ$ rotation around the $x$-axis and let $p: X_\Gamma \to X_\Gamma/ \pi$ be the quotient map when we identify $x \sim \pi(x)$. By \cite[Theorem 3]{montesinoscoverings}, $X_\Gamma/ \pi=B^4$ and $p$ is a double covering, branched over a surface $S_\Gamma \subset B^4$. The surface $S_\Gamma$ can be drawn by attaching bands to a disc according to the bottom half of the rotation-equivariant drawing, adding as many half-twists as the weight of the corresponding unknot \cite{montesinoscoverings}. (See Figure \ref{fig:2234arborescent}.) By $K_\Gamma$ we denote the link $K_\Gamma=S_\Gamma\cap S^3$. \begin{figure} \centering \includegraphics[width=.7\linewidth]{2234involution} \caption{Proof that the attaching link of the graph in Figure \ref{fig:liscagraph2234extended1} is strongly invertible.} \label{fig:2234involution} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{2234arborescentmorecorrect} \caption{$S_\Gamma$ for $\Gamma$ the graph in Figure \ref{fig:liscagraph2234extended1} is a disc with five bands attached.} \label{fig:2234arborescent} \end{figure} We will show that if $\Gamma$ is obtainable from the basic Lisca graphs by GOCL and IGOCL moves, then $Y_\Gamma$ bounds a rational homology 3-ball. We will do this by finding a $\Gamma'$ such that $Y_{\Gamma'}=Y_\Gamma$ and $K_{\Gamma'}$ is $\chi$-slice, that is, bounds a surface of Euler characteristic $1$ inside $B^4$ \cite[Definition 1]{donaldowens}. The $\chi$-sliceness will be proven by adding two (or one) 2-handles to $X_{\Gamma'}$ in a $\mathbb{Z}/2\mathbb{Z}$-equivariant fashion (changing the branching locus from $S_{\Gamma'}$ to $S$ by adding two (or one) bands), in such a way that we obtain $(S^1\times S^2)\# (S^1\times S^2)$ (or $(S^1\times S^2)$) and that $\partial S$ is the 3-component (or 2-component) unlink. Addition of such bands is a concordance of boundary links.
It will follow that $K_{\Gamma'}$ bounds an embedded surface $F$ in $B^4$ obtained by adding two (or one) bands to $K_{\Gamma'}$ and capping off with three (or two) discs, meaning that $K_{\Gamma'}$ is the boundary of a surface obtained by attaching two bands to three discs (or one band to two discs). This surface is homotopy equivalent to three points with two edges (or two points with one edge), which has Euler characteristic $1$. We use \cite[Proposition 5.1]{donaldowens} to conclude that the double cover of $B^4$ branched over $F$ is a rational homology ball with boundary $Y_{\Gamma'}=Y_\Gamma$. \subsection{$(-2,-2,-2)$} This graph has embedding $(e_1-e_2,e_2-e_3,-e_1-e_2)$. The only basis vector hitting twice is $e_1$, both of whose occurrences are in final vertices. Thus applying the GOCL operation keeps the graph linear, and all such graphs have been shown by Lisca to bound rational homology balls. In fact, these graphs describe the lens spaces $L(p^2, pq \pm 1)$, for $p>q>0$ with some orientation. \subsection{$(-2,-2,-3,-4)$} \label{subsec:orientationreversalhere} \begin{figure} \centering \tikzfig{liscagraph2234extendedgeneral} \caption{General extension of Lisca's $(-2,-2,-3,-4)$ graph. Here $1/[a_1,\dots,a_{n_1}]^{-}+1/[\alpha_1,\dots,\alpha_{n_2}]^{-}=1$; in other words $[a_1,\dots,a_{n_1}]^{-}$ and $[\alpha_1,\dots,\alpha_{n_2}]^{-}$ are complementary. The fractions $[b_1,\dots,b_{m_1}]^{-}$ and $[\beta_1,\dots,\beta_{m_2}]^{-}$ are complementary as well. The length of the chain of $-2$'s is $k$.} \label{fig:liscagraph2234extendedgeneral} \end{figure} First, we look at the embedding of the $(-2,-2,-3,-4)$ graph in Figure \ref{fig:liscagraph2234}. There are two vectors occurring twice, namely $e_3$ and $e_4$, and one vector occurring three times, namely $e_1$. The vector $e_1$ satisfies the requirement for the IGOCL move, so we can extend this to a $(-(2+k), -2, -3, (-2)^{[k]}, -4)$ graph, a case mentioned in \cite[Remark 3.2]{lecuonamontesinos}.
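The coefficient bookkeeping of the GOCL move is mechanical enough to automate. The sketch below uses our own encoding (vertex images as dictionaries from basis-vector indices to coefficients, with $\langle e, v\rangle$ read off as the coefficient of $e$ in $v$); starting from the two-vertex configuration $e_1-e_2$, $e_1+e_2$ considered earlier, two GOCL moves produce the complementary legs $(-3,-2)$ and $(-2,-3)$:

```python
def pairing(u, v):
    """Intersection pairing in (Z^N, -Id)."""
    return -sum(u.get(i, 0) * v.get(i, 0) for i in set(u) | set(v))

def gocl(vectors, e, a, b, f):
    """One GOCL move: e must hit exactly the vertices a and b.
    A new (-2)-vertex adjacent to a is appended, the weight of b
    drops by one, and the fresh basis vector f replaces e among
    the basis vectors hitting the configuration exactly twice."""
    cv, cw = vectors[a][e], vectors[b][e]
    vectors[b][f] = -cv * cw            # w' = w - <e,v><e,w> f
    vectors.append({e: -cv, f: -1})     # u  = -<e,v> e - f

vs = [{1: 1, 2: -1}, {1: 1, 2: 1}]      # e1 - e2 and e1 + e2
gocl(vs, 2, 0, 1, 3)                    # grow along e2
gocl(vs, 1, 1, 0, 4)                    # grow along e1
gram = [[pairing(u, v) for v in vs] for u in vs]
assert gram == [[-3, 0, 1, 0],          # two legs: vertices 0-2 and 1-3
                [0, -3, 0, 1],
                [1, 0, -2, 0],
                [0, 1, 0, -2]]
```

Since $[3,2]^-=5/2$ and $[2,3]^-=5/3$, with $2/5+3/5=1$, the two resulting legs are indeed complementary.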
Further, GOCL moves applied to $e_3$ and $e_4$ give us the graphs described by Figure \ref{fig:liscagraph2234extendedgeneral}. Note that applying IGOCL or GOCL again will not move us out of the set of graphs described by Figure \ref{fig:liscagraph2234extendedgeneral}. \begin{figure} \centering \tikzfig{2234extendedplus-1s} \caption{The black part of the graph represents the same 3-manifold as Figure \ref{fig:liscagraph2234extendedgeneral}. We attach two $-1$-framed $2$-handles to the $4$-manifold. We will show that the new boundary $3$-manifold is $(S^1\times S^2)\#(S^1\times S^2)$.} \label{fig:2234extendedplus-1s} \end{figure} \begin{figure} \centering \tikzfig{2234easiertoembedplus-1s} \caption{A graph representing the same $3$-manifold as Figure \ref{fig:2234extendedplus-1s}.} \label{fig:2234easiertoembedplus-1s} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{2234equivarianceright} \caption{Proof that the attaching link of the graph in Figure \ref{fig:2234easiertoembedplus-1s} is strongly invertible.} \label{fig:2234equivariance} \end{figure} We want to show that if $\Gamma$ is a graph described by Figure~\ref{fig:liscagraph2234extendedgeneral}, then $Y_\Gamma$ bounds a rational homology 4-ball. We add two $-1$-framed $2$-handles to $X_\Gamma$ and obtain Figure~\ref{fig:2234extendedplus-1s}. We have not been able to do this equivariantly when $k\geq 2$, but by employing the first part of Proposition~\ref{prop:changingorientation} below, we can find a graph $\Gamma'$ such that $Y_\Gamma=Y_{\Gamma'}$ and such that the Kirby diagram of $\Gamma'$ with the extra $-1$-framed knots can be embedded equivariantly. This $\Gamma'$ will be the non-purple part of Figure~\ref{fig:2234easiertoembedplus-1s}. Also note that Figure~\ref{fig:2234extendedplus-1s} and Figure~\ref{fig:2234easiertoembedplus-1s} show the same 3-manifold.
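The blow-down bookkeeping used repeatedly below is also easy to simulate for purely linear chains. The following sketch (recording weights by their absolute values, so the $-1$-vertex appears as a $1$) reduces the chain of Proposition \ref{prop:latticeembeddingproperties} built from the complementary pair $(5,3,2,2)$ and $(2,2,2,3,4)$ all the way down to $(-2,-1,-2)$:

```python
def blow_down(chain):
    """Blow down the unique 1 in a linear chain of (absolute values of)
    weights: remove it and decrease both neighbours by one. Assumes the
    1 is an interior entry, as it is throughout this run."""
    i = chain.index(1)
    return chain[:i - 1] + [chain[i - 1] - 1, chain[i + 1] - 1] + chain[i + 2:]

# (-5,-3,-2,-2,-1,-4,-3,-2,-2,-2), i.e. (-a_1,...,-a_m,-1,-b_n,...,-b_1)
chain = [5, 3, 2, 2, 1, 4, 3, 2, 2, 2]
while len(chain) > 3:
    chain = blow_down(chain)
assert chain == [2, 1, 2]
```

At every step one neighbour of the $1$ is a $2$ that becomes the new $1$, exactly as in property 3 of the proposition.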
The equivariant embedding of the graph in Figure \ref{fig:2234easiertoembedplus-1s} is drawn in Figure \ref{fig:2234equivariance}. The proposition below follows from Neumann's plumbing calculus \cite[Page 305]{neumann89} together with the algorithm of 1) performing a $1$-blow-up at the right of the rightmost chain element greater than $1$, 2) blowing down any $-1$-weighted vertices, and 3) repeating. Following the Riemenschneider diagram, we see that this algorithm gradually replaces a sequence with its Riemenschneider dual. Blowing up by $1$ increases the number of positive vertices by $1$ and blowing down a $-1$ decreases the number of negative vertices by $1$. \begin{prop} Let $\Gamma$ be a plumbing graph (not necessarily a tree) containing a chain $(-\alpha_1,\dots,-\alpha_k)$ (not necessarily splitting the rest of the graph into two components), as in Figure \ref{fig:changingorientation1}. Let $\Gamma'$ be the graph $\Gamma$ with the chain replaced by the chain $(\beta_1,\dots, \beta_j)$, for $[\alpha_1,\dots, \alpha_k]^-$ and $[\beta_1,\dots, \beta_j]^-$ complementary fractions, and the weight of the vertices adjacent to the chain increased by 1. Then $Y_\Gamma=Y_{\Gamma'}$. Moreover, $b^2_+(X_{\Gamma'})=b^2_+(X_\Gamma)+j$ and $b^2_-(X_{\Gamma'})=b^2_-(X_\Gamma)-k$. \label{prop:changingorientation} \end{prop} \begin{figure} \begin{minipage}{\textwidth} \begin{subfigure}{\linewidth} \centering \tikzfig{changingorientation1} \caption{Changing a negative chain...} \label{fig:changingorientation1} \end{subfigure} \begin{subfigure}{\linewidth} \centering \tikzfig{changingorientation2} \caption{...
to a positive one.} \label{fig:changingorientation2} \end{subfigure} \end{minipage}% \caption{The graphs above bound the same 3-manifold if $[\alpha_1,\dots, \alpha_k]^-$ and $[\beta_1,\dots, \beta_j]^-$ are complementary fractions.} \label{fig:changingorientation} \end{figure} \begin{figure} \begin{subfigure}{\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns1} \caption{} \label{fig:liscagraph2234twoblowdowns1} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns2} \caption{} \label{fig:liscagraph2234twoblowdowns2} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns3} \caption{} \label{fig:liscagraph2234twoblowdowns3} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns4} \caption{} \label{fig:liscagraph2234twoblowdowns4} \end{subfigure} \begin{subfigure}{0.25\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns5} \caption{} \label{fig:liscagraph2234twoblowdowns5} \end{subfigure} \begin{subfigure}{0.25\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns6} \caption{} \label{fig:liscagraph2234twoblowdowns6} \end{subfigure} \caption{Plumbing/Kirby calculus to show that adding two $-1$-framed $2$-handles to the graphs in Figure \ref{fig:liscagraph2234extendedgeneral} gives a $4$-manifold with boundary $(S^1\times S^2)\# (S^1\times S^2)$.} \label{fig:2234kirbycalculus} \end{figure} \begin{figure} \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus} \caption{} \label{fig:sub1} \end{subfigure}\\[1ex] \end{minipage}% \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus2} \caption{} \label{fig:sub2} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus3} \caption{} \label{fig:sub3} \end{subfigure} \end{minipage}% \caption{Kirby 
calculus to remove a negative 3-cycle in a plumbing graph.} \label{fig:neg3cycleremoval} \end{figure} \begin{figure} \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus-chain} \caption{} \label{fig:sub4} \end{subfigure}\\[1ex] \end{minipage}% \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus2-chain} \caption{} \label{fig:sub5} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus3-chain} \caption{} \label{fig:sub6} \end{subfigure} \end{minipage}% \caption{Kirby calculus to remove a negative cycle in a plumbing graph.} \label{fig:negcycleremoval} \end{figure} \begin{figure} \centering \includegraphics[width=.4\linewidth]{untwisting} \caption{A schematic picture of the graph in Figure \ref{fig:liscagraph2234twoblowdowns3}. From this picture, it is clear that blowing down the green $-1$ unlinks the red and the blue curves, yielding the link in Figure \ref{fig:liscagraph2234twoblowdowns4}.} \label{fig:untwisting} \end{figure} It remains to show that Figures \ref{fig:2234easiertoembedplus-1s} and \ref{fig:2234extendedplus-1s} represent the $3$-manifold $(S^1\times S^2)\#(S^1\times S^2)$. We start from Figure \ref{fig:2234extendedplus-1s} and blow down every $-1$ vertex that we can. The sequence of blow-downs that we do can be seen in Figure \ref{fig:2234kirbycalculus}. Each $-1$ in Figure \ref{fig:2234extendedplus-1s} is connected to a pair of complementary legs and blowing down such a $-1$ is equivalent to substituting the complementary legs by the complementary legs with the Riemenschneider diagram obtained by removing the last point. 
Thus, repeatedly blowing down the left $-1$ first shortens the complementary legs $(-b_1,\dots,-b_{m_1})$ and $(-\beta_1,\dots,-\beta_{m_2})$ to just $(-2)$ and $(-2)$; blowing down once more, we obtain Figure \ref{fig:liscagraph2234twoblowdowns1}. Similarly, we blow down the right $-1$ until $(a_1,\dots,a_{n_1})=(2)$ and $(\alpha_1,\dots,\alpha_{n_2})=(2)$, arriving at Figure \ref{fig:liscagraph2234twoblowdowns2}. Now, it is not obvious how to blow down the left $-1$. Figures \ref{fig:sub1} and \ref{fig:sub4} schematically show the links represented by the graph in Figure \ref{fig:liscagraph2234twoblowdowns2}, depending on whether $k=0$ or not. (If $k>1$, then consider Figure \ref{fig:sub4}, but imagine the green knot as a longer chain.) The little squares represent a continuation of the link, and it does not matter whether the blue and black squares are in fact connected, as long as they are contained strictly to the right of the red square. Blowing down the $-1$ leads us to Figures \ref{fig:sub2} and \ref{fig:sub5}, and isotoping the red curve leads to Figures \ref{fig:sub3} and \ref{fig:sub6}. This Kirby calculus shows that if we blow down the left $-1$ in Figure \ref{fig:liscagraph2234twoblowdowns2} and if $k=0$, then we obtain Figure \ref{fig:liscagraph2234twoblowdowns5}. If $k\geq 1$, then we get Figure \ref{fig:liscagraph2234twoblowdowns3}. Looking at the schematic image in Figure \ref{fig:untwisting} of the link represented by the graph in Figure \ref{fig:liscagraph2234twoblowdowns3}, we see that blowing down its rightmost $-1$ yields Figure \ref{fig:liscagraph2234twoblowdowns4}, which can be blown down to Figure \ref{fig:liscagraph2234twoblowdowns5}. Blowing down the top $-1$ now gives us Figure \ref{fig:liscagraph2234twoblowdowns6}.
Finally, applying the Kirby calculus of Figure \ref{fig:neg3cycleremoval} while ignoring the red curve, we can blow down the left $-1$ and be left with two disconnected $0$-weighted vertices, which is the plumbing graph of $(S^1\times S^2)\#(S^1\times S^2)$. \subsection{$(-3,-2,-2,-3)$} \begin{figure} \centering \tikzfig{3223general} \caption{The form of all graphs obtainable from $(-3,-2,-2,-3)$ using GOCL moves. Here the sequences $(a_1,\dots,a_{l_1})$ and $(\alpha_1,\dots,\alpha_{l_2})$ are complementary, as well as the sequences $(b_1,\dots, b_{m_1})$ and $(\beta_1,\dots,\beta_{m_2})$, and the sequences $(z_1,\dots, z_{n_1})$ and $(\zeta_1,\dots,\zeta_{n_2})$.} \label{fig:3223general} \end{figure} \begin{figure} \centering \tikzfig{3223generalplustwo2handles} \caption{Two 2-handles attached with $-1$-framing.} \label{fig:3223generalplustwo2handles} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{3223equivariance} \caption{An equivariant embedding of the link in Figure \ref{fig:3223generalplustwo2handles}.} \label{fig:3223equivariance} \end{figure} \begin{figure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown1} \caption{ } \label{fig:3223blowndown1} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown2} \caption{ } \label{fig:3223blowndown2} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown3} \caption{ } \label{fig:3223blowndown3} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown4} \caption{ } \label{fig:3223blowndown4} \end{subfigure} \caption{Plumbing/Kirby calculus showing that Figure \ref{fig:3223generalplustwo2handles} bounds an $(S^1\times S^2)\# (S^1\times S^2)$.} \label{fig:3223kirby} \end{figure} Note that $(-3,-2,-2,-3)$ has embedding $(e_2+e_3+e_4,e_1-e_2,e_2-e_3,-e_1-e_2+e_4)$ which has three basis vectors occurring twice and one basis vector occurring four times. 
Thus we can only perform GOCL moves here, which gives us the graph family in Figure \ref{fig:3223general}. We will add two $2$-handles as in Figure \ref{fig:3223generalplustwo2handles}. In fact, the Kirby diagram arising from this graph can be drawn in $\mathbb{R}^3$ in a $\mathbb{Z}/2\mathbb{Z}$-equivariant fashion, as in Figure \ref{fig:3223equivariance}. It remains to show that if $\Gamma$ is as in Figure \ref{fig:3223generalplustwo2handles}, then $Y_\Gamma$ is $(S^1\times S^2)\#(S^1\times S^2)$. We use the bottom $-1$ to blow down the complementary legs $(-a_1,\dots,-a_{l_1})$ and $(-\alpha_1,\dots,-\alpha_{l_2})$, and an extra blow-down leaves us with Figure \ref{fig:3223blowndown1}. This figure is modelled by Figure \ref{fig:sub1}, with a chain from the red square to the blue one. This chain does not prevent us from disconnecting the black unknot from the blue one in the isotopic Figures \ref{fig:sub2} and \ref{fig:sub3}. Hence, blowing down the middle $-1$ gives us the graph in Figure \ref{fig:3223blowndown2}. We proceed by blowing down the top $-1$ and its adjacent complementary legs $(-z_1,\dots,-z_{n_1})$ and $(-\zeta_1,\dots,-\zeta_{n_2})$, to obtain Figure \ref{fig:3223blowndown3}. Applying the Kirby calculus of Figure \ref{fig:neg3cycleremoval} with an empty red unknot, we blow down the right $-1$ and obtain Figure \ref{fig:3223blowndown4}. Note that the bottom component is a blow-down of $(-\beta_{m_2},\dots,-\beta_1,-1,-b_1,\dots,-b_{m_1})$, which famously blows down to a $0$. Thus $Y_\Gamma=S^3_0(O)\# S^3_0(O)=(S^1\times S^2)\#(S^1\times S^2)$. \subsection{$(-3,-2,-3,-3,-3)$} \begin{figure} \centering \tikzfig{32333general} \caption{These are the graphs obtainable by performing IGOCL and GOCL moves on the linear graph $(-3,-2,-3,-3,-3)$.
Here the length of the chain of $-2$'s is $k\geq 0$, $(a_1,\dots,a_{m_1})$ and $(\alpha_1,\dots,\alpha_{m_2})$ are complementary sequences, and $(b_1,\dots,b_{n_1})$ and $(\beta_1,\dots,\beta_{n_2})$ are complementary sequences.} \label{fig:32333general} \end{figure} \begin{figure} \centering \tikzfig{32333generalplus2handle} \caption{The graph in Figure \ref{fig:32333general} with an extra $2$-handle attached. We will show that this graph represents the $3$-manifold $S^1\times S^2$.} \label{fig:32333generalplus2handle} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{32333equivariance} \caption{An equivariant embedding of the link in Figure \ref{fig:32333generalplus2handle}.} \label{fig:32333equivariance} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown1} \caption{The graph obtained from Figure \ref{fig:32333generalplus2handle} and Proposition \ref{prop:latticeembeddingproperties} after repeated blow-downs of the $-1$-labelled vertices.} \label{fig:32333blowndown1} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown2} \caption{Figure \ref{fig:32333blowndown1} after another blow-down.} \label{fig:32333blowndown2} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown3} \caption{The Kirby calculus of Figure \ref{fig:neg3cycleremoval} shows that this graph is obtained after another blow-down of Figure \ref{fig:32333blowndown2}.} \label{fig:32333blowndown3} \end{figure} The linear graph $(-3,-2,-3,-3,-3)$ has the lattice embedding $(e_2+e_3-e_5,e_1-e_2,e_2-e_3-e_4,-e_1-e_2-e_5,e_5+e_3-e_4)$. Here $e_1$ and $e_4$ occur exactly twice, serving as starting points for GOCL moves. The basis vectors $e_3$ and $e_5$ occur three times, but only $e_5$ serves as the starting point for IGOCL moves. All possible graphs obtainable by applying IGOCL and GOCL moves to the linear graph $(-3,-2,-3,-3,-3)$ are shown in the non-purple part of Figure \ref{fig:32333generalplus2handle}.
This case is simpler than the previous ones in that it is enough to add a single $-1$-weighted vertex for us to be able to blow down the entire graph to a single $0$-weight vertex. If we can add this $2$-handle to the Kirby diagram generated by the graph in a $\mathbb{Z} / 2\mathbb{Z}$-equivariant fashion, it will mean that adding a band to the arborescent link $L$ generated by the non-purple part of Figure \ref{fig:32333generalplus2handle} gives us two unknots; hence $L$ bounds a surface in $B^4$ consisting of two discs and a band, so $L$ is $\chi$-slice. We can indeed add the $2$-handle in an equivariant fashion, as in Figure \ref{fig:32333equivariance}. It remains to show that the graph in Figure \ref{fig:32333generalplus2handle} indeed blows down to a zero. First, we blow down the complementary legs to obtain Figure \ref{fig:32333blowndown1}. An extra blow-down puts us in Figure \ref{fig:32333blowndown2}. This situation is modelled by Figure \ref{fig:sub1}, so the Kirby calculus of Figure \ref{fig:neg3cycleremoval} gives us Figure \ref{fig:32333blowndown3} after blowing down the purple $-1$. Now we note that Figure \ref{fig:32333blowndown3} consists of a $-1$ connecting two complementary legs. This famously blows down to a single $0$-weighted vertex. \section{Proof of Theorem \ref{thm:torusknotlist}} \label{sec:torusknots} In this section, we prove Theorem \ref{thm:torusknotlist} by studying the intersection between the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} and plumbing graphs of positive rational surgeries on positive torus knots. First we describe the plumbing graphs of the surgeries on torus knots, and then we go through the intersections with the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} one by one.
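The bookkeeping in this section rests on negative continued fractions $[a_1,\dots,a_k]^-=a_1-1/(a_2-1/(\cdots-1/a_k))$, so a small evaluator is useful for double-checking the expansions that appear below. A sketch (the function name is our own choice):

```python
from fractions import Fraction

def ncf(coeffs):
    """Evaluate the negative continued fraction [a_1, ..., a_k]^-."""
    val = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        val = a - 1 / val
    return val

# A chain of m twos evaluates to (m + 1)/m.
assert ncf([2, 2, 2]) == Fraction(4, 3)

# An expansion used later in this section: [0, 3, 2, 2]^- = -3/7.
assert ncf([0, 3, 2, 2]) == Fraction(-3, 7)

# Complementary (Riemenschneider dual) fractions q/p and q/(q - p)
# have reciprocals summing to 1, e.g. 9/5 = [2,5]^- and 9/4 = [3,2,2,2]^-.
assert 1 / ncf([2, 5]) + 1 / ncf([3, 2, 2, 2]) == 1
```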
\subsection{Plumbing Graphs of Rational Surgeries on Torus Knots} \begin{figure} \centering \includegraphics[width=.3\linewidth]{beforealggeo} \caption{A Kirby diagram with boundary $S^3_n(T(p,\alpha))$ where $n=[N_1+p\alpha, N_2, \dots , N_k]^-$ and $K=T(p,\alpha)$.} \label{fig:beforealggeo} \end{figure} In order to find the intersection between the plumbing graphs of rational surgeries on torus knots and the graphs obtained from Lisca's graphs by repeated GOCL and IGOCL moves, we need to know what the plumbing graphs of rational surgeries on torus knots look like. Let $n>0$ be a rational number. We want to find a plumbing graph for $S^3_n(T(p, \alpha))$. We can write $n=[N_1+p\alpha, N_2, \dots , N_k]^-$ for $N_2,\dots, N_k \geq 2$. The $3$-manifold $S^3_n(T(p,\alpha))$ bounds the 4-manifold in Figure \ref{fig:beforealggeo}, which is positive-definite if $n>0$. The argument of \cite[Section 3]{lokteva2021surgeries} that the blow-ups decrease the surgery coefficient by a constant still applies and shows that $S^3_n(T(p,\alpha))$ bounds the $4$-manifold described by the graph in Figure \ref{fig:rationalsurgeryontorusknot}. The positive index of this graph is $k$ by the same logic as in \cite[Section 3]{lokteva2021surgeries}. To obtain a definite graph, we will need the generalisation of the algorithm in \cite[Figure 2]{lokteva2021surgeries} described in Proposition \ref{prop:changingorientation} in Section \ref{subsec:orientationreversalhere}. If $N>1$ and thus $N_1\geq 2$, we can use Proposition \ref{prop:changingorientation} to substitute the chain $(N_1,\dots,N_k)$ with its negative Riemenschneider complement $(-M_1,\dots,-M_j)$ and obtain the negative-definite graph in Figure \ref{fig:rationalsurgeryontorusknotNpos}. If $0<N<1$, then the sequence $(N_1,\dots, N_k)$ starts with a $1$, possibly followed by some $2$'s, that we can blow down before turning the rest of the chain negative.
This will once again give us a negative-definite graph, namely the one in Figure \ref{fig:rationalsurgeryontorusknotNposlessthan1}. \begin{figure} \centering \tikzfig{rationalsurgeryontorusknot} \caption{A plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and $[N_1,\dots,N_k]^-=N=n-p\alpha$. In particular, the fractions $[c_2,\dots,c_s]^-$ and $[d_1,\dots,d_t]^-$ are complementary. Also, we can write $(c_2,c_3,\dots,c_s)=((2)^{[d_1-2]},a_1+1, a_2, \dots, a_r)$ so that $[a_1,\dots,a_r]^-$ and $[d_2,\dots, d_t]^-$ are complementary.} \label{fig:rationalsurgeryontorusknot} \end{figure} \begin{figure} \centering \tikzfig{rationalsurgeryontorusknotNpos} \caption{A negative-definite plumbing graph of $S^3_n(T(p,\alpha))$, where $N=n-p\alpha>1$ and $\alpha>p$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and if $N=n-p\alpha=a/b$ with $a,b \in \mathbb{Z}_{>0}$, then $[M_1,\dots,M_j]^-=\frac{a}{a-b}$.} \label{fig:rationalsurgeryontorusknotNpos} \end{figure} \begin{figure} \centering \tikzfig{rationalsurgeryontorusknotNposlessthan1} \caption{A negative-definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $0<N<1$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and the fraction $[P_1,\dots,P_j]^-$ is complementary to $\frac{1}{1-N}=[N_2,\dots,N_k]^-$. In fact, that means that $N=\frac{1}{[P_1,\dots,P_j]^-}$.} \label{fig:rationalsurgeryontorusknotNposlessthan1} \end{figure} If $N<0$, that is $N_1\leq 0$, then turning the positively-weighted vertices $(N_2, \dots , N_k)$ negative will not be enough to decrease the positive index to $0$. Instead, we will use Proposition \ref{prop:changingorientation} to turn the two other legs of our graph positive, and we obtain the graph in Figure \ref{fig:QsurgeryontorusknotN_1isnonpos}, which has negative index 1.
If $N_1=0$, we will perform a 0-absorption (see \cite[Proposition 1.1]{neumann89}) and obtain the positive definite graph in Figure \ref{fig:QsurgeryontorusknotN_1is0}. If $N_1=-1$, we simply blow it down. If $N_1\leq -2$, we use Proposition \ref{prop:changingorientation} to turn it into a chain of $2$'s and obtain the graph in Figure \ref{fig:QsurgeryontorusknotN_1isatmostminus2}. \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1isnonpos} \caption{A plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $N<0$. Here the negative index is 1, $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$.} \label{fig:QsurgeryontorusknotN_1isnonpos} \end{figure} \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1is0} \caption{A positive definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $-1<N<0$. Here $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$.} \label{fig:QsurgeryontorusknotN_1is0} \end{figure} \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1isatmostminus2} \caption{A positive definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $N<-1$. Here $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$, and the tail starts with a chain of $-2$'s of length $-N_1-1$.} \label{fig:QsurgeryontorusknotN_1isatmostminus2} \end{figure} In the graphs of Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, the vertex of degree $3$ is called the node. Removing the node splits the graph into $3$ connected components, of which the top left one is called the \textit{torso}, the bottom left one is called the \textit{leg} and the right one is called the \textit{tail}. This vocabulary is chosen to accord with the vocabulary of \cite{lokteva2021surgeries} on iterated torus knots.
We also often talk about the torso, leg and tail collectively as legs. This comes from viewing the graphs as general star-shaped graphs rather than graphs of surgeries on torus knots specifically. (The author recommends looking at the flag of Sicily or of the Isle of Man for a more precise metaphor.) This vocabulary is generally used by Lecuona, for instance in \cite{lecuonacomplementary} and \cite{lecuonamontesinos}. We say that two legs of a star-shaped graph are negatively quasi-complementary if adding one vertex at the end of one leg could make them complementary, and positively quasi-complementary if removing a final vertex from one of the legs could. We say that two legs are quasi-complementary if they are either positively or negatively quasi-complementary. Note that the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} are exactly the star-shaped graphs with three legs whereof two are quasi-complementary. In the following subsections, we are thus going to look for star-shaped graphs with a pair of quasi-complementary legs among the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general}. The following very easy-to-check proposition will come in useful: \begin{prop} \label{prop:aqcleg} Suppose $Q/P=[a_1,\dots,a_n]^-$ and $(-a_1,-a_2,\dots,-a_n)$ is either the leg or torso of the plumbing graph of $S_r^3(T(p,\alpha))$, a positive rational surgery on a positive torus knot. (Here $-a_n$ is the weight of the vertex adjacent to the node.) Then $\alpha/p$ is one of the following: \begin{itemize} \item $\frac{Q}{P}$, \item $\frac{Q}{Q-P}$, \item $\frac{(l+1)Q+P}{Q}$ for some $l\geq 0$ or \item $\frac{(l+2)Q-P}{Q}$ for some $l\geq 0$. \end{itemize} \end{prop} Note that if $\GCD(P,Q)=1$, then all of these fractions are reduced.
However, if $P=Q-1$, then $\alpha/p=\frac{Q}{Q-P}=Q$ is a degenerate case that we ignore. \subsection{$(-2,-2,-3,-4)$} Here, we look for the intersections between the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} and the graphs in Figure \ref{fig:liscagraph2234extendedgeneral}. This is the same as finding all the subfamilies of the graphs in Figure \ref{fig:liscagraph2234extendedgeneral} that are star-shaped with three legs, whereof two are quasi-complementary. For the purpose of stating Theorem \ref{thm:torusknotlist}, it is enough to determine the weights of the quasi-complementary legs, since the weights of the node and the tail only determine the surgery coefficient. To turn Figure \ref{fig:liscagraph2234extendedgeneral} into a star-shaped graph, we will need to keep some of the grown complementary legs at length 1. If we let the vertex of weight $-1-a_1$ be trivalent, then $m_1=1$ and thus $(\beta_1,\dots, \beta_{m_2})$ consists only of $2$'s. If $m_2>1$ then $n_2=1$ and $(a_1,\dots,a_{n_1})=(2,\dots, 2)$. In order to have trivalency of the $-(1+a_1)$ vertex, $\alpha_1\geq 3$ is required. It is easy to check that in this case the only legs that can be quasi-complementary are the $(-b_1,-(2+k))$ one and the $(\underbrace{-2,\dots,-2}_{\alpha_1-2})$ one. They can either be negatively quasi-complementary, made complementary by adding $-3$ at the end of the second leg, in which case $\alpha_1=b_1$ and $k=0$ have to hold, or they can be positively quasi-complementary, made complementary by removing $-(2+k)$ from the first one, in which case $\alpha_1-2=b_1-1$. The first case shows that \[ S^3_{\frac{(2b_1^2-2b_1+1)^2}{2b_1^2-b_1+1}}(T(b_1-1,2b_1-1)) \] bounds a rational homology ball for any $b_1\geq 3$.
The second case shows that \[ S^3_{\frac{((k+2)b_1^2-1)^2}{(k+2)b_1^2+b_1-1}}(T(b_1,b_1(k+2)-1)) \] bounds a rational homology ball for all integers $b_1 \geq 2$ and $k \geq 0$. Both of these are subfamilies of families 1 and 2 in Theorem \ref{thm:torusknotlist}, which we will show can in fact be fully realised. We get more interesting families when we let $m_2=1$, because then $(\alpha_1,\dots, \alpha_{n_2})$ can be anything as long as it has something other than a $2$ somewhere, so that $n_1>1$. We will get graphs of the form in Figure \ref{fig:bandbetagone}. To make the top and right legs quasi-complementary is easy: we need to choose whether they are to be positively or negatively quasi-complementary and which leg needs an extra vertex or a vertex removed to be complementary, and then we just need to choose $(a_2,\dots,a_{n_1})$ to make it happen. We use Proposition \ref{prop:aqcleg} for $Q/P=[2+k,2]^-=\frac{2k+3}{2}$. This corresponds to the entire families 3 and 4 as well as subfamilies of families 1 and 2 in Theorem \ref{thm:torusknotlist}. The top and bottom legs cannot be made quasi-complementary. \begin{figure} \centering \tikzfig{bandbetagone} \caption{Choosing $m_1=m_2=1$ in Figure \ref{fig:liscagraph2234extendedgeneral} gives this star-shaped graph.} \label{fig:bandbetagone} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc} \caption{The Riemenschneider diagram used to determine when the right and bottom legs of Figure \ref{fig:bandbetagone} can be made quasi-complementary.} \label{fig:makegrownlegsqc} \end{figure} The most interesting case to consider is whether the right and the bottom legs can be made quasi-complementary. In Figure \ref{fig:makegrownlegsqc}, the black dots show a Riemenschneider diagram of the complementary sequences $(a_1,\dots,a_{n_1})$ and $(\alpha_1,\dots,\alpha_{n_2})$. Adding the blue dots gives us a Riemenschneider diagram for the sequence $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ (with complement $(k+2, 2, a_1, \dots, a_{n_1})$).
Considering only the part to the right of the red line gives us a Riemenschneider diagram for $(a_2,\dots, a_{n_1})$ (with a complement $(A_1,\dots,A_{n_3})$). In order for $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ and $(a_2,\dots, a_{n_1})$ to be quasi-complementary, either the picture to the right of the red line and the total picture without the last line, or the total picture and the picture to the right of the red line with an extra column, must be the same. The sequences $(k+2, 2, a_1, \dots, a_{n_1})$ and $(a_2,\dots, a_{n_1})$ have length difference $2$, ruling out the second option. The only ways in which $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ and $(A_1,\dots,A_{n_3})$ can have length difference $1$ are if one of the following holds: \begin{enumerate} \item $k=0$ and $a_1=3$, or \item $k=1$ and $a_1=2$. \end{enumerate} If $k=0$ and $a_1=3$, then the first row of the total picture has length $3$. Thus, in the second total row, to the right of the red line, we need three dots, making a total of $4$ dots. This is a valid solution, namely $(\alpha_1,\dots,\alpha_{n_2})=(2,5)$, $(\alpha_1+2, \alpha_2,\dots,\alpha_{n_2})=(4,5)$, $(A_1,\dots,A_{n_3})=(4)$ and $(a_1,\dots,a_{n_1})=(3,2,2,2)$. If we choose to continue and add $\alpha_3$, that means adding a new row completely to the right of the red line, which must be as long as the second total row, namely 4 dots. That again gives a valid solution $(\alpha_1,\dots,\alpha_{n_2})=(2,5,5)$, $(\alpha_1+2, \alpha_2,\dots,\alpha_{n_2})=(4,5,5)$, $(A_1,\dots,A_{n_3})=(4,5)$ and $(a_1,\dots,a_{n_1})=(3,2,2,3,2,2,2)$. We can continue this process and obtain the solution $(\alpha_1,\dots,\alpha_{n_2})=(2,(5)^{[l]})$ and $(a_1,\dots,a_{n_1})=((3,2,2)^{[l]},2)$ for all $l\geq 1$. Our legs are positively quasi-complementary, so $\alpha/p=[d_1,\dots,d_t]^-=[(5)^{[l]},4]^-$.
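These complementarity claims can be checked mechanically: two sequences are complementary exactly when their negative continued fraction values are $q/p$ and $q/(q-p)$ for some coprime $q>p$, i.e. when the reciprocals of the two values sum to $1$. A sketch (helper names ours) verifying the solutions just found:

```python
from fractions import Fraction

def ncf(coeffs):
    """Evaluate the negative continued fraction [a_1, ..., a_k]^-."""
    val = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        val = a - 1 / val
    return val

def complementary(seq_a, seq_b):
    """Riemenschneider duality: values q/p and q/(q-p), so p/q + (q-p)/q = 1."""
    return 1 / ncf(seq_a) + 1 / ncf(seq_b) == 1

# The explicit solutions found above.
assert complementary([2, 5], [3, 2, 2, 2])
assert complementary([2, 5, 5], [3, 2, 2, 3, 2, 2, 2])

# The general pattern continuing them: (2, 5^[l]) paired with ((3,2,2)^[l], 2),
# which reproduces both explicit pairs for l = 1 and l = 2.
for l in range(1, 8):
    assert complementary([2] + [5] * l, [3, 2, 2] * l + [2])
```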
Since $5-\frac{b}{a}=\frac{5a-b}{a}$, we have that \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} 5 & -1\\ 1 & 0 \end{pmatrix}^{l} \begin{pmatrix} 4\\ 1 \end{pmatrix} \] for $l\geq 1$. This corresponds to family 6 in Theorem \ref{thm:torusknotlist}. We can compute $N=[0,3,2,2]^-=-\frac{3}{7}$. In other words, if $p_1=1$, $p_2=4$ and $p_{j+2}=5p_{j+1}-p_j$ for all $j\geq 1$ \cite[A004253]{OEIS}, we can say that \[ S^3_{p_jp_{j+1}-\frac{3}{7}}(T(p_j,p_{j+1})) \] bounds a rational homology ball for all $j\geq 1$. In this form it may not be obvious that the numerator of the surgery coefficient is a square, but in fact, $p_jp_{j+1}-\frac{3}{7}=\frac{V^2_{j+1}}{7}$, where $V_j$ is the sequence defined by $V_1=2$, $V_2=5$ and $V_{j+2}=5V_{j+1}-V_j$ for all $j\geq 1$ \cite[A003501]{OEIS}. It is a shifted so-called Lucas sequence. The equality can be proven by first proving by induction that $p_{j+2}p_j-p_{j+1}^2=3$ for all $j\geq 1$, then noting that $V_{j+1}=p_{j+1}+p_j$ for all $j\geq 1$, and finally combining these equalities. If $k=1$ and $a_1=2$, the argument goes the same way. The only way for the right and bottom legs to be quasi-complementary is if the Riemenschneider diagram to the right of the red line and the total diagram missing the bottom line coincide. By the same argument as above, it happens if and only if $(\alpha_1,\dots,\alpha_{n_2})=(3,(5)^{[l]})$ and $(a_1,\dots,a_{n_1})=(2,(3,2,2)^{[l]},2)$ for some $l\geq 0$. In this case $\alpha/p=[(5)^{[l]},5,2]^-$ and $N=[0,2,2,3]^-=-\frac{5}{7}$. This shows that if $Q_1=2$, $Q_2=9$ and $Q_{j+2}=5Q_{j+1}-Q_j$ for all $j \geq 1$, then \[ S^3_{Q_jQ_{j+1}-\frac{5}{7}}(T(Q_j,Q_{j+1})) \] bounds a rational homology ball for all $j\geq 1$. This corresponds to family 7 in Theorem \ref{thm:torusknotlist}. Just as before, we can show that \[ Q_jQ_{j+1}-\frac{5}{7} = \frac{(Q_j+Q_{j+1})^2}{7}.
\] \begin{figure} \centering \tikzfig{aandalphagone} \caption{Choosing $n_1=n_2=1$ in Figure \ref{fig:liscagraph2234extendedgeneral} gives this star-shaped graph.} \label{fig:aandalphagone} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc2} \caption{The Riemenschneider diagram used to determine when the left and bottom legs of Figure \ref{fig:aandalphagone} can be made quasi-complementary.} \label{fig:makegrownlegsqc2} \end{figure} Returning to Figure \ref{fig:liscagraph2234extendedgeneral}, we can let the vertex of weight $-b_1$ be the only node. That forces $n_1=1$, so $(\alpha_1,\dots, \alpha_{n_2})=(2,\dots,2)$. Putting $a_1=2$ would give us complete freedom in choosing $(b_2,\dots, b_{m_1})$, so Proposition \ref{prop:aqcleg} applied to $Q/P=2+k$ gives that there are surgery coefficients $n$ such that $S^3_n(T(k+1,k+2))$, $S^3_n(T(k+2,(l+2)(k+2)-1))$ and $S^3_n(T(k+2,(l+1)(k+2)+1))$ bound rational homology $4$-balls. These families correspond to the entire families 1 and 2 in Theorem \ref{thm:torusknotlist}. (Note, however, that a couple of subfamilies of these will also be realised if we choose $a_1>2$ because $(b_2,\dots,b_{m_1})=(2,\dots,2)$. These subfamilies have an especially ample supply of choices of surgery coefficients.) If $n_2>1$, then $m_2=1$ and $(b_1,\dots,b_{m_1})=(2,\dots,2)$. We will have three legs, namely $(-(2+k))$, $((-2)^{[\beta_1-2]})$ and $(-(1+a_1),(-2)^{[k]}, -(2+\beta_1), (-2)^{[a_1-2]})$. The first two can be quasi-complementary in two ways, but the generated pairs $(p,\alpha)$ are already known. The first and the third cannot be quasi-complementary. The last two can also not be quasi-complementary if $n_2>1$. It is once again more interesting if $n_2=1$ and $m_2>1$ is allowed. We get the graph in Figure \ref{fig:aandalphagone}. The top and the bottom legs cannot be made quasi-complementary. The left and the bottom legs are the interesting case.
Analogously to how we used Figure \ref{fig:makegrownlegsqc}, we can use Figure \ref{fig:makegrownlegsqc2} to show that $k=0$, $(\beta_1,\dots, \beta_{m_2})=(4,(6)^{[l]})$ and $(b_1,\dots,b_{m_1})=(2,2,(3,2,2,2)^{[l]},2)$. This is in fact family (4) in \cite[Theorem 1.1]{GALL} and family 8 in Theorem \ref{thm:torusknotlist}. Going back to Figure \ref{fig:liscagraph2234extendedgeneral}, we could also make the vertex of weight $-\alpha_1-\beta_1$ the only trivalent vertex, but that would require $m_1=n_1=1$ and thus all $\alpha_i$ and all $\beta_j$ are $2$'s. The top leg would not be able to be quasi-complementary to a sequence of $2$s, and the only way for the left and right legs to be quasi-complementary is if they are the legs $(-2)$ and $(-2,-2)$ in either order. This just gives us two new families of possible surgery coefficients on $T(2,3)$. We finish off this subsection by reminding the reader that we have now proved Theorem \ref{thm:torusknotlist} for families 1, 2, 3, 4, 6, 7 and 8. \subsection{$(-3,-2,-2,-3)$} In this subsection we consider the intersections between the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} (rational surgeries on torus knots) and the graphs in Figure \ref{fig:3223general} (graphs obtainable from $(-3,-2,-2,-3)$ through GOCL moves). \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-1} \caption{It is impossible to choose $k$ so that the length or height difference between the full Riemenschneider diagram and the diagram to the right of the red line is one.} \label{fig:makegrownlegsqc3223-1} \end{figure} Figure \ref{fig:3223general} is symmetric about the $y$-axis, so it is enough to try two of the vertices for trivalency, say the one with weight $1-\beta_1-\zeta_1$ and the one with weight $-a_1$.
If we want the vertex with weight $1-\beta_1-\zeta_1$ to be the trivalent vertex in one of the Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, then $l_1=m_1=1$. Hence $\beta_i=\alpha_j=2$ for all $i$ and $j$. Also, $l_2=1$ or $n_1=1$. No matter what choice we make between these, if $(-\beta_2, \dots, -\beta_{m_2})$ is one of the complementary legs, we will end up within the families 1, 2, 3 and 4 in Theorem \ref{thm:torusknotlist}, which we already have surgery coefficients for. If $n_1=1$, all $\alpha$s, $\beta$s and $\zeta$s become $-2$, giving us a star-shaped graph with two legs containing nothing but $-2$s, not allowing us out of the families 1, 2, 3 and 4. We consider the case $l_2=1$ instead. We have $a_1=\alpha_1=2$. Let $b_1=k+2$ (so that the leg $(-\beta_2,\dots,-\beta_{m_2})=(-2,\dots,-2)$ has length $k$). We investigate if $(-\zeta_2, \dots, -\zeta_{n_2})$ and $(-2,-(k+2), -z_1-1,-z_2,\dots,-z_{n_1})$ can be quasi-complementary. We draw the diagram in Figure \ref{fig:makegrownlegsqc3223-1} as before. However, it is impossible to create a difference of one between the length of one leg and the complement of the other leg.
\begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-2} \caption{Riemenschneider diagram of the quasi-complementary legs $(-a_2,\dots,-a_{l_1})$ and $(-b_1, 1-\alpha_1-z_1, -\alpha_2, \dots, -\alpha_{l_2})$ in Figure \ref{fig:3223general} when $m_1=n_1=m_2=1$.} \label{fig:makegrownlegsqc3223-2} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-3} \caption{Riemenschneider diagram of the quasi-complementary legs $(-a_2,\dots,-a_{l_1})$ and $(-b_1, 1-\alpha_1-z_1, -\alpha_2, \dots, -\alpha_{l_2})$ in Figure \ref{fig:3223general} when $m_1=n_1=n_2=1$.} \label{fig:makegrownlegsqc3223-3} \end{figure} Now, consider the vertex labelled $-a_1$ being trivalent. This means that $m_1=1$. Also, either $n_1=1$ or $l_2=1$. First, assume that $n_1=1$. This means that $\beta_1=\cdots=\beta_{m_2}=\zeta_1=\cdots=\zeta_{n_2}=2$. Either $n_2$ or $m_2$ must be $1$. The left leg becomes $(-2,\dots, -2, -3)$. If it is included in a pair of quasi-complementary legs, which we can always ensure since we can choose $(a_2,\dots, a_{l_1})$ freely, we will once again end up within families 1, 2, 3 and 4. In the more interesting case (where the leftmost leg is not one of the quasi-complementary ones) $(a_2,\dots,a_{l_1})$ must be quasi-complementary either to $(2,1+k+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$ (depicted in Figure \ref{fig:makegrownlegsqc3223-2}) or to $(2+k,1+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$ (depicted in Figure \ref{fig:makegrownlegsqc3223-3}). \begin{figure} \centering \includegraphics[width=\linewidth]{coolgrowthof3223} \caption{A strange two-parameter family of rational surgeries on positive torus knots that bound rational homology 4-balls.} \label{fig:coolgrowthof3223} \end{figure} In the first case we get that $(\alpha_1,\dots,\alpha_{l_2})=(3,(k+4)^{[s]})$ and $(a_1,\dots,a_{l_1})=(2,(3,(2)^{[k+1]})^{[s]},2)$, giving us the graph in Figure \ref{fig:coolgrowthof3223}.
This graph is of the shape of Figure \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, so it describes $S^3_n(T(p,\alpha))$ for $\alpha / p= [(k+4)^{[s+1]},2]^-$ and $N=n-p\alpha=[-1,(2)^{[k+1]}]^-=-\frac{2k+3}{k+2}$. This corresponds to family 9 in Theorem \ref{thm:torusknotlist}. This is the most interesting family to date. While we have seen two-parameter families before, this one is more complicated. A different formulation of the result is that $S^3_{p\alpha-\frac{2k+3}{k+2}}(T(p,\alpha))$ bounds a rational homology ball for all $p$ and $\alpha$ described by \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} k+4 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} 2\\ 1 \end{pmatrix} \] for some $s,k \geq 0$. If we fix $s$, then $\alpha$ becomes a degree $s+1$ polynomial in $k$. \begin{figure} \centering \includegraphics[width=\linewidth]{coolgrowthof3223-2} \caption{A strange two-parameter family of rational surgeries on positive torus knots that bound rational homology 4-balls.} \label{fig:coolgrowthof3223-2} \end{figure} In the second case, that is if $(a_2,\dots,a_{l_1})$ is quasi-complementary to $(2+k,1+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$, then $(\alpha_1,\dots,\alpha_{l_2})=(k+3,(k+4)^{[s]})$ for some $s\geq 0$. Then the graph becomes as in Figure \ref{fig:coolgrowthof3223-2}. Now $\alpha /p=[(k+4)^{[s+1]}, k+2]^-$, meaning that $S^3_{p\alpha-\frac{2k+3}{k+2}}(T(p,\alpha))$ bounds a rational homology ball for all $p$ and $\alpha$ described by \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} k+4 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} k+2\\ 1 \end{pmatrix} \] for some $s,k \geq 0$. This corresponds to family 10 in Theorem \ref{thm:torusknotlist}. If $l_2=1$ instead of $n_1=1$, then $a_1=\cdots=a_{l_1}=2$. We already know that we can choose surgery coefficients when one of the complementary legs consists of only $-2$s, so we do not need to check that case to formulate Theorem \ref{thm:torusknotlist}.
In fact we do not need to check further, as any star-shaped graph with three legs whereof two are quasi-complementary, the third consisting only of $-2$s and the node having weight $-2$, describes a positive integral surgery on a positive torus knot; these have been classified in \cite{GALL}. We remind the reader that we have now shown Theorem \ref{thm:torusknotlist} for families 9 and 10. \subsection{$(-3,-2,-3,-3,-3)$} In Figure \ref{fig:32333general} there are three possibilities for a trivalent vertex. If we choose the vertex of weight $-a_1$, then $m_2=1$ and thus two of the legs are $(-3-k)$ and $(-2,\dots,-2)$. We already know that if one of these is in a quasi-complementary pair, then $(p, \alpha)$ lies in families 1-4 in Theorem \ref{thm:torusknotlist}, so we get nothing new. Choosing the vertex of weight $-(1+b_1)$ to be trivalent, and noting that we land in families 1-4 if the left leg $(-3-k,-2)$ is one of the quasi-complementary ones, does however lead us to find that \[ S^3_{p\alpha-\frac{5}{7}}(T(p,\alpha)) \] bounds a rational homology ball for every \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} 5 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} 3\\ 1 \end{pmatrix} \] where $s\geq 0$. This corresponds to family 11 in Theorem \ref{thm:torusknotlist}. Finally, choosing the vertex of weight $-(1+\alpha_1)$ to be trivalent gives us $m_1=n_1=1$. If the lower leg $(-2,\dots,-2)$ is included in the pair of quasi-complementary legs, we fall into families 1-4 again. We need to investigate when $(-(3+k),-a_1,-(1+b_1))$ can be quasi-complementary to $((-2)^{[b_1-2]},-3,(-2)^{[k]})$. The Riemenschneider dual of the latter leg is $(-b_1,-(k+2))$, so we need $b_1=a_1$ and $k+2=b_1+1$. Note that we also need $a_1\geq 3$ in order to get a three-legged graph. Let $a=a_1-3$. We get $(-(3+k),-a_1,-(1+b_1))=(-(a+5),-(a+3),-(a+4))$. Our graph is now as in Figure \ref{fig:QsurgeryontorusknotN_1is0}.
Thus $\alpha/p=[a+5,a+3,a+4]^-=\frac{a^3+12a^2+45a+51}{a^2+7a+11}$. This corresponds to family 5 in Theorem \ref{thm:torusknotlist}. In this subsection, we have shown Theorem \ref{thm:torusknotlist} for families 5 and 11. We have thus now shown it for families 1-11. The remaining families follow from \cite{lecuonamontesinos} and \cite{GALL}. \printbibliography \end{document} \section{Introduction} In Kirby's problem 4.5 \cite{kirbylist}, Casson asks which rational homology $3$-spheres bound rational homology $4$-balls. While rational homology $3$-spheres abound in nature, including the $r$-surgery $S^3_r(K)$ on a knot $K$ for any $r \in \mathbb{Q} - \{ 0 \}$, very few of them actually bound rational homology balls. In fact, Aceto and Golla showed in \cite[Theorem 1.1]{acetogolla} that for every knot $K$ and every $q \in \mathbb{Z}_+$, there exist at most finitely many $p \in \mathbb{Z}_+$ such that $S^3_{p/q}(K)$ bounds a rational homology ball. It is hard to answer Casson's question in full generality, but recently a great deal of progress has been made on specific classes of rational homology $3$-spheres. For example, in 2007 we learnt the answer for lens spaces \cite{liscasingle07, liscamultiple07}, in 2020 for positive integral surgeries on positive torus knots \cite{acetogolla, GALL}, and in between we learnt the answer for several other classes of Seifert fibred spaces with three exceptional fibres \cite{lecuonacomplementary, lecuonamontesinos}. We do not yet know the answer for general Seifert fibred spaces with three exceptional fibres. In \cite{lokteva2021surgeries}, the author started studying surgeries on algebraic (iterated torus) knots, which are not Seifert fibred but decompose into Seifert fibred spaces when cut along a maximal system of incompressible tori \cite{gordon}.
An important tool to study which $3$-manifolds bound rational homology balls is the following corollary of Donaldson's diagonalisation theorem \cite[Theorem 1]{donaldsonthm}: \begin{thm*}[Corollary of Donaldson's Theorem] Let $Y$ be a rational homology 3-sphere and $Y=\partial X$ for $X$ a negative definite smooth connected oriented 4-manifold. If $Y=\partial W$ for a smooth rational homology 4-ball $W$, then there exists a lattice embedding \[ (H_2(X)/\Torsion, Q_{X}) \hookrightarrow (\mathbb{Z}^{\rk H_2(X)}, -\Id). \] \end{thm*} \noindent Here $Q_X$ is the intersection form on $H_2(X)/\Torsion$. Determining which $3$-manifolds in a family $\mathfrak{F}$ bound rational homology $4$-balls using lattice embeddings often goes like this: \begin{enumerate}[label=(\arabic*)] \item Find a negative-definite filling $X(Y)$ for every $Y \in \mathfrak{F}$. \item Guess the family $\mathfrak{F}' \subset \mathfrak{F}$ of manifolds whose filling's intersection lattice (that is second homology with the intersection form) embeds into the standard lattice of the same rank. \item Show that $(H_2(X(Y)),Q_{X(Y)})$ does not embed into $(\mathbb{Z}^{b_2(X(Y))}, -\Id)$ for any $Y \in \mathfrak{F}-\mathfrak{F}'$. \item Hopefully prove that $Y$ bounds a rational homology ball for any $Y\in \mathfrak{F}'$. \end{enumerate} This process is sensitive at every step. There exist $3$-manifolds without any definite fillings \cite{gollalarson}. However, lens spaces, surgeries on torus knots and large surgeries on algebraic knots do have definite fillings. In fact, they all bound definite plumbings of disc bundles on spheres. Step (4) is definitely not guaranteed to work either. 
For example, $S^3_{-m^2}(K)$ bounds the knot trace $D^4_{-m^2}(K)$ ($D^4$ with a $-m^2$-framed $2$-handle glued along $K$), whose intersection lattice $(\mathbb{Z}, (-m^2))$ embeds into $(\mathbb{Z}, -\Id)$, but according to \cite[Theorem 1.2]{acetogolla}, $S^3_{-m^2}(K)$ bounds a rational homology ball for at most two positive integer values of $m$. However, in \cite{liscasingle07, liscamultiple07,acetogolla,GALL} the authors managed to find an $X(Y)$ for each $Y$ in such a way that the lattice embedding obstruction turned out to be perfect. These $X(Y)$s have been plumbings of disc bundles on spheres with a tree-shaped plumbing graph, moreover satisfying the property that the quantity \[I=\sum_{v \in V} (-w(v)-3), \] where $V$ is the set of vertices of the graph and $w(v)$ is the weight of $v$, is negative. Steps (2) and (3) can sometimes be done at the same time, but often, like in \cite{GALL} where $\mathfrak{F}$ is the set of positive integral surgeries on positive torus knots, they cannot. It is then important to eliminate embeddable cases early in order to proceed with step (3). Theorem 1.1 in \cite{GALL}, the classification of positive integral surgeries on positive torus knots bounding rational homology balls, lists 5 families that are Seifert fibred spaces with 3 exceptional fibres. They bound a negative-definite star-shaped plumbing with three legs. Families (1)-(3) have two complementary legs, that is, two legs whose weight sequences are Riemenschneider dual (defined, for the reader's convenience, in Section \ref{sec:complegs} of this paper). All such 3-manifolds that bound a rational homology ball have been classified by Lecuona in \cite{lecuonacomplementary}.
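To see what step (3) involves in practice, the embedding condition for a small linear graph can be tested by brute force. The sketch below (ours, purely illustrative; the function name and the coefficient bound are ad hoc choices, and a bound of $2$ happens to suffice for these examples) searches for images of the vertices in $(\mathbb{Z}^n, -\Id)$:

```python
from itertools import product

def embeds_linear(weights, coeff_bound=2):
    """Search for a lattice embedding of the linear graph with the given
    (negative) weights into (Z^n, -Id), n = number of vertices.
    NB: a false answer is only conclusive up to the coefficient bound."""
    n = len(weights)

    def candidates(w):
        # integer vectors x with <x,x> = -sum(x_i^2) equal to the weight w
        return [v for v in product(range(-coeff_bound, coeff_bound + 1), repeat=n)
                if -sum(c * c for c in v) == w]

    def pairing(x, y):  # <x,y> under the form -Id
        return -sum(a * b for a, b in zip(x, y))

    def extend(chosen):
        i = len(chosen)
        if i == n:
            return True
        for v in candidates(weights[i]):
            # consecutive vertices of the chain pair to 1, the rest to 0
            if all(pairing(v, u) == (1 if i - j == 1 else 0)
                   for j, u in enumerate(chosen)):
                if extend(chosen + [v]):
                    return True
        return False

    return extend([])
```

For instance, it confirms that Lisca's graph $(-2,-2,-3,-4)$ admits a lattice embedding, while the linear graph $(-3,-3)$ does not (there is no vector of square $-3$ in $\mathbb{Z}^2$).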
Family (5) contains two exceptional graphs which were known to bound rational homology balls both because they arise as boundaries of tubular neighbourhoods of rational cuspidal curves in \cite{bobluhene07} and because they are surgeries on torus knots $T(p,q)$ where $q \equiv \pm 1 \pmod{p}$, which were studied in \cite{acetogolla}. However, Family (4) took the authors of \cite{GALL} a while to find, in the meantime thwarting their attempts at step (3). Eventually they found Family (4) using a computer. This allowed them to finish off their lattice embedding analysis, but Family (4) still looked surprising and strange and begged the question ``How could we have predicted its existence?'' This work came out of widening the perspective and asking which boundaries of $4$-manifolds described by plumbing trees with negative definite intersection forms and low $I$ bound rational homology balls. In particular, we asked ourselves which plumbing trees generate an embeddable intersection lattice. We looked at what the graphs of $3$-manifolds we know to bound rational homology balls look like and tried to see if there are any common patterns. In \cite[Remark 3.2]{lecuonamontesinos}, Lecuona describes how to get all lens spaces that bound rational homology balls from the linear graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ using some modifications. (She restates Lisca's result in \cite{liscasingle07} in the language of plumbing graphs rather than fractions $p/q$ for $L(p,q)$.) In this paper we define two moves on embedded plumbing graphs, called GOCL and IGOCL moves, that preserve embeddability and generalise the moves described by Lecuona. From this point of view, Lecuona's list simply turns into a list of IGOCL and GOCL moves that keep the graph linear. We may then ask ourselves whether these moves preserve the property of the described $3$-manifolds bounding rational homology balls.
There is unfortunately no obvious rational homology cobordism between two $3$-manifolds differing by a GOCL or an IGOCL move. We can however prove that repeated applications of these moves to the embeddable linear graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ give $3$-manifolds bounding rational homology balls. We get the following theorem: \begin{thm} \label{thm:general} All $3$-manifolds described by the plumbing graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} bound rational homology balls. \end{thm} \noindent In fact, we prove this theorem by showing that the above plumbed $3$-manifolds bound a double cover of $D^4$ branched over a $\chi$-slice link \cite[Definition 1]{donaldowens}. By \cite[Proposition 5.1]{donaldowens}, this must be a rational homology $4$-ball. At the same time, we show that these links are $\chi$-ribbon. These families, together with the one generated from $(-2,-2,-2)$, which only contains linear graphs already found by Lisca, include all lens spaces bounding rational homology balls. They also contain more complicated graphs, with linear complexity (defined in \cite{aceto20}) up to 2. Many papers, e.g. \cite{aceto20, acetogolla, GALL, lecuonamontesinos, simone2020classification}, using lattice embeddings to obstruct plumbed $3$-manifolds from bounding a rational homology ball have used arguments of the form ``If my graph $\Gamma$ is embeddable, then this other linear graph obtained from $\Gamma$ is embeddable, and we know what those look like'', which gets harder to do the further $\Gamma$ is from being linear. Thus we only really have lattice embedding obstructions so far for families of graphs of complexity $1$. The families of Theorem \ref{thm:general} include many graphs of Seifert fibred spaces.
They include Family (4) in \cite{GALL} and predict its existence, because Family (4) is just the intersection between the set of graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} and the negative-definite plumbing graphs of positive integral surgeries on positive torus knots. As mentioned above, there is no obvious rational homology cobordism between the $3$-manifolds described by two plumbing graphs differing by a GOCL or an IGOCL move. This is interesting in comparison with the situation in the works of Aceto \cite{aceto20} and Lecuona \cite{lecuonacomplementary}. Lecuona shows that given a plumbing graph $\Gamma$, you can modify it to a graph $\Gamma'$ by subtracting $1$ from the weight of a vertex $v$ and attaching two complementary legs $(-a_1,\dots,-a_m)$ and $(-b_1,\dots,-b_n)$ (see Section \ref{sec:complegs} or \cite{lecuonacomplementary} for definitions) to $v$, and the $3$-manifolds $Y_\Gamma$ and $Y_{\Gamma'}$ described by the graphs will be rational homology cobordant; in particular, one bounds a rational homology $4$-ball if and only if the other does. Thus, if she wants to know whether $Y_{\Gamma'}$, for $\Gamma'$ a graph with two complementary legs coming out of the same vertex, bounds a rational homology ball, she can reduce it to the same question for a simpler graph. However, because we do not know whether the GOCL and IGOCL moves induce rational homology cobordisms, we cannot play this trick for complementary legs growing out of different vertices. An interesting generalisation of \cite[Theorem 1.1]{GALL} would be to classify all positive rational surgeries on positive torus knots that bound rational homology balls. Theorem \ref{thm:general} allows us to construct more examples of such surgeries than it is practical to write down.
Instead, we may ask ourselves the following question: \begin{question} For which $1<p<q$ with $\GCD(p,q)=1$ is there an $r\in \mathbb{Q}_+$ such that $S^3_r(T(p,q))$ bounds a rational homology ball? \end{question} The entirety of Section \ref{sec:torusknots} is devoted to proving the following theorem: \begin{thm} \label{thm:torusknotlist} For the following pairs $(p,q)$ with $1<p<q$ and $\GCD(p,q)=1$, there is at least one $r \in \mathbb{Q}_+$ such that $S^3_r(T(p,q))$ bounds a rational homology ball. Here $k,l \geq 0$. \begin{enumerate} \item $(k+2,(l+1)(k+2)+1)$ \item $(k+2,(l+2)(k+2)-1)$ \item $(2k+3,(l+1)(2k+3)+2)$ \item $(2k+3,(l+2)(2k+3)-2)$ \item $(k^2+7k+11, k^3+12k^2+45k+51)$ \item $(P_{l+1},P_{l+2})$ for $(P_i)$ a sequence defined by $P_0=1$, $P_1=4$, $P_2=19$ and $P_{i+2}=5P_{i+1}-P_i$. \item $(Q_{l+1},Q_{l+2})$ for $(Q_i)$ a sequence defined by $Q_0=1$, $Q_1=2$, $Q_2=9$ and $Q_{i+2}=5Q_{i+1}-Q_i$. \item $(R_{l+1},R_{l+2})$ for $(R_i)$ a sequence defined by $R_0=1$, $R_1=3$, $R_2=17$ and $R_{i+2}=6R_{i+1}-R_i$. \item $(S^{(k)}_{l+1},S^{(k)}_{l+2})$ for $(S^{(k)}_i)$ a sequence defined by $S^{(k)}_0=1$, $S^{(k)}_1=2$, $S^{(k)}_2=2k+7$ and $S^{(k)}_{i+2}=(k+4)S^{(k)}_{i+1}-S^{(k)}_i$. \item $(T^{(k)}_{l+1},T^{(k)}_{l+2})$ for $(T^{(k)}_i)$ a sequence defined by $T^{(k)}_0=1$, $T^{(k)}_1=k+2$, $T^{(k)}_2=k^2+6k+7$ and $T^{(k)}_{i+2}=(k+4)T^{(k)}_{i+1}-T^{(k)}_i$. \item $(U_{l+1},U_{l+2})$ for $(U_i)$ a sequence defined by $U_0=1$, $U_1=3$, $U_2=14$ and $U_{i+2}=5U_{i+1}-U_i$. \item $(A, (n+1)Q+P)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $A$ a multiplicative inverse to either $Q$ or $nQ+P$ modulo $(n+1)Q+P$ such that $0<A<(n+1)Q+P$.
\item $((n+1)Q+P, (l+1)((n+1)Q+P)+A)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $A$ a multiplicative inverse to either $Q$ or $nQ+P$ modulo $(n+1)Q+P$ such that $0<A<(n+1)Q+P$. \item $(B, P)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $B$ a multiplicative inverse to either $P\lceil \frac{Q}{P}\rceil-Q$ or $Q-P\lfloor \frac{Q}{P} \rfloor$ modulo $P$ such that $0<B<P$. \item $(P, (l+1)P+B)$ for $P$ and $Q$ such that $L(Q,P)$ bounds a rational homology ball (or equivalently $\frac{Q}{P}$ lying in Lisca's set $\mathcal{R}$ \cite{liscasingle07}), and $B$ a multiplicative inverse to either $P\lceil \frac{Q}{P}\rceil-Q$ or $Q-P\lfloor \frac{Q}{P} \rfloor$ modulo $P$ such that $0<B<P$. \item $(P,Q)$ such that there is a number $n$ such that $(P,Q,n) \in \mathcal{R} \sqcup \mathcal{L}$ for the sets $\mathcal{R}$ and $\mathcal{L}$ defined in \cite[Theorem 1.1]{GALL}. (Note that here $r=n \in \{PQ, PQ-1, PQ+1 \}$, so we are looking at an integral surgery.) \end{enumerate} \end{thm} \noindent The interested reader can use the methods of Section \ref{sec:torusknots} to obtain a surgery coefficient $r$ too (or several). In this theorem, case 16 is shown to bound rational homology balls in \cite{GALL} and reflects the degenerate cases of surgeries on torus knots that are lens spaces or connected sums of lens spaces, cases 12-15 are shown to bound rational homology balls in \cite{lecuonacomplementary} because their graphs have a pair of complementary legs, while all the cases 1-11 are shown to bound rational homology balls in this paper. The authors of \cite{GALL} classified all positive integral surgeries on positive torus knots that bound rational homology balls. 
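As a quick sanity check (ours; not needed for the proof), the recursively defined families 6-11 above really produce valid torus knot parameters: for a recurrence $x_{i+2}=cx_{i+1}-x_i$ we have $\gcd(x_{i+2},x_{i+1})=\gcd(x_{i+1},x_i)$, so coprimality of consecutive terms propagates from the first pair. A short verification:

```python
from math import gcd

def family(x1, x2, c, length=8):
    """First terms of the recurrence x_{i+2} = c*x_{i+1} - x_i."""
    seq = [x1, x2]
    while len(seq) < length:
        seq.append(c * seq[-1] - seq[-2])
    return seq

# (first term, second term, multiplier c): families 6, 7, 8, 11 and the
# k = 0, 1 instances of the k-dependent families 9 and 10
data = [(4, 19, 5), (2, 9, 5), (3, 17, 6), (3, 14, 5),
        (2, 7, 4), (2, 9, 5), (2, 7, 4), (3, 14, 5)]
for x1, x2, c in data:
    seq = family(x1, x2, c)
    # consecutive pairs give valid torus knot parameters: 1 < p < q, gcd 1
    assert all(1 < p < q and gcd(p, q) == 1 for p, q in zip(seq, seq[1:]))
```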
The classification included 18 families, of which families 6-18 are included in our family 16, family 4 in our family 8, and the others in families 1 and 2. At the moment of writing we do not know of any other positive torus knots having positive surgeries bounding rational homology balls. The pair $(8,19)$ is in some sense the smallest example not to appear on the list of Theorem \ref{thm:torusknotlist}. Thus we may concretely ask: \begin{question} Is there an $r \in \mathbb{Q}_+$ such that $S^3_r(T(8,19))$ bounds a rational homology ball? \end{question} We may also note that some positive torus knots have many surgeries that bound rational homology balls. For example, Theorem \ref{thm:general} allows us to construct numerous finite and infinite families of surgery coefficients $r \in \mathbb{Q}_+$ such that $S^3_r(T(2,3))$ bounds a rational homology ball. All we need to do is choose weights in the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} so that we get a star-shaped graph with three legs, of which one is $(-2)$ and another is either $(-2,-2)$ or $(-3)$. For example, $S^3_{\frac{(11k+20)^2}{22k^2+79k+71}}(T(2,3))$ bounds a plumbing of the shape in Figure \ref{fig:liscagraph2234extendedgeneral} with $(b_1,\dots, b_{m_1})=(4)$ and $(a_1,\dots,a_{n_1})=(3)$, and thus bounds a rational homology ball for any $k \geq 0$. There are also surgeries on $T(2,3)$ that bound rational balls, but do not come from our construction. For example, $S^3_{\frac{64}{7}}(T(2,3))=-S^3_{64}(T(3,22))$, which bounds a rational homology ball because it is the boundary of the tubular neighbourhood of a rational curve in $\mathbb{C}P^2$ \cite{bobluhene07}, but whose lattice embedding contains a basis vector with coefficient $2$, which we do not get by applying GOCL or IGOCL moves to $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$.
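Incidentally, the surgery coefficients in the family above are already in lowest terms: a short Euclidean computation gives $\gcd(11k+20,\,22k^2+79k+71)=1$ for all $k$, so the numerator really is the square $(11k+20)^2$. A numerical spot check (ours, purely illustrative):

```python
from math import gcd

# the surgery coefficient (11k+20)^2 / (22k^2+79k+71) is in lowest terms
for k in range(1000):
    assert gcd((11 * k + 20) ** 2, 22 * k * k + 79 * k + 71) == 1
```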
The lattice embedded plumbing graph of $S^3_{\frac{64}{7}}(T(2,3))$ does, however, fit into Family $\mathcal{C}$ of \cite{stipszabwahl} of symplectically embeddable plumbings. Unfortunately, Family $\mathcal{C}$ of \cite{stipszabwahl} contains both surgeries on $T(2,3)$ that bound rational homology balls and ones that do not. For example, $S^3_{\frac{169}{25}} (T(2,3))$ of \cite[Section 2.4, Figure 12]{stipszabwahl} does not bound a rational ball despite bounding a plumbing with an embeddable intersection form. A later paper \cite{bhupalstipsicz} classified which surgeries on $T(2,3)$ appearing in Family $\mathcal{C}$, viewed as surface singularity links, bound a rationally acyclic Milnor fibre. Interestingly, all but two of the embedded graphs in that family are generated by applying IGOCL moves to the graph of $S^3_{\frac{64}{7}}(T(2,3))$. However, we do not know if any other members of Family $\mathcal{C}$ bound a rational homology ball which is not a Milnor fibre. Hence, the following is a rich open question worth studying: \begin{question} For which $r \in \mathbb{Q}_+$ does $S^3_r(T(2,3))$ bound a rational homology ball? \end{question} \subsection{Outline} We begin the paper in Section \ref{sec:complegs} by recalling some results on complementary legs and the basics of the lattice embedding setup. In Section \ref{sec:analysis} we define the GOCL and IGOCL moves and show Theorem \ref{thm:general}. In Section \ref{sec:torusknots} we prove Theorem \ref{thm:torusknotlist} for families 1-11, while the other families follow directly from \cite{lecuonacomplementary} and \cite{GALL}. \section{Background on Complementary Legs} \label{sec:complegs} In this section we review the definition and basic properties of complementary (sometimes called Riemenschneider dual) legs.
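Both notions recalled in this section are easy to experiment with by computer. The following sketch (ours, illustrative only, with ad hoc function names) evaluates a negative continued fraction $[a_1,\dots,a_n]^-$ with exact rational arithmetic and computes the complementary sequence, using that if $[\alpha_1,\dots,\alpha_n]^-=p/q$ then the complementary fraction is $p/(p-q)$:

```python
from fractions import Fraction

def ncf(seq):
    """Evaluate the negative continued fraction [a_1, ..., a_n]^-."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a - 1 / val
    return val

def ncf_expand(frac):
    """Negative continued fraction expansion of a rational number > 1;
    all entries are >= 2 when the input is in lowest terms."""
    p, q = frac.numerator, frac.denominator
    seq = []
    while q:
        a = -(-p // q)                  # ceiling of p/q
        seq.append(a)
        p, q = q, a * q - p
    return seq

def complement(seq):
    """Riemenschneider dual: if [seq]^- = p/q, the dual expands p/(p-q)."""
    v = ncf(seq)
    return ncf_expand(Fraction(v.numerator, v.numerator - v.denominator))
```

For instance, `ncf([5,3,2,2])` returns $32/7$ and `complement([5,3,2,2])` returns `[2, 2, 2, 3, 4]`, and indeed $7/32+25/32=1$.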
\begin{dfn} We define the negative continued fraction $[a_1,\dots, a_n]^-$ as \[ [a_1,\dots, a_n]^-=a_1-\cfrac{1}{a_2-\cfrac{1}{\ddots - \cfrac{1}{a_n.}}} \] \end{dfn} Negative continued fractions often show up in low-dimensional topology because of the slam-dunk Kirby move \cite[Figure 5.30]{gompfstipsicz}, which allows us to substitute a rational surgery on a knot by an integral surgery on a link. \begin{dfn} A two-component weighted linear graph $(-\alpha_1,...,-\alpha_n), (-\beta_1,...,-\beta_k)$ (with $\alpha_i, \beta_j$ integers greater than or equal to 2) is called a pair of complementary legs if \[ \frac{1}{[\alpha_1,...,\alpha_n]^-}+\frac{1}{[\beta_1,...,\beta_k]^-}=1. \] We call the sequence $(\beta_1,...,\beta_k)$ the Riemenschneider dual or complement of the sequence $(\alpha_1,...,\alpha_n)$, and we call the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$ complementary. \end{dfn} \begin{dfn} A Riemenschneider diagram is a finite set of points $S$ in $\mathbb{Z}_+ \times \mathbb{Z}_-$ such that $(1,-1) \in S$ and for every point $(a,b) \in S$ but one, exactly one of $(a+1,b)$ or $(a,b-1)$ is in $S$. If $(n,k) \in S$ is the point with the largest $n-k$, we say that the Riemenschneider diagram represents the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$, where $\alpha_i$ is one more than the number of points with $x=i$ and $\beta_j$ is one more than the number of points with $y=-j$. \end{dfn} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{riemenschneider_diagram_example_2.png} \caption{Example of a Riemenschneider diagram representing the complementary fractions $[5,3,2,2]^-$ and $[2,2,2,3,4]^-$.} \label{riemenschneider_diagram} \end{figure} \begin{example} See Figure \ref{riemenschneider_diagram} for an example of a Riemenschneider diagram. \end{example} \begin{thm}[Riemenschneider \cite{riemenschneider}] The two fractions represented by a Riemenschneider diagram are complementary. 
\end{thm} \begin{rmk} Note that given any continued fraction $[\alpha_1,...,\alpha_n]^-$ with all $\alpha_i \geq 2$, we may construct a Riemenschneider diagram representing $[\alpha_1,...,\alpha_n]^-$ and its Riemenschneider dual. \end{rmk} In this paper, we are going to be interested in lattice embeddings, especially of complementary legs. Let $X$ be a plumbing of disc bundles on spheres. It can be described using a plumbing graph, which is a weighted simple graph. The weight indicates the Euler number of the disc bundle and an edge indicates a plumbing. (See \cite[Example 4.6.2]{gompfstipsicz} for a longer explanation.) The second homology of $X$ is the free abelian group $\mathbb{Z}\langle V_1,\dots,V_k \rangle $ on the vertices and the intersection form is \[ \langle V_i, V_j \rangle_{Q_{X}} = \begin{cases} \text{weight of } V_i & \text{ if } i=j \\ 1 & \text{ if } V_i \text{ is adjacent to } V_j \\ 0 & \text{ otherwise.} \end{cases} \] A \textit{lattice embedding} $f: (H_2(X)/\Torsion, Q_{X}) \hookrightarrow (\mathbb{Z}^{N}, -\Id)$ is a linear map $f$ such that $\langle V_i, V_j \rangle_{Q_{X}} = \langle f(V_i), f(V_j) \rangle_{-\Id}$. We will simply denote $\langle \cdot, \cdot \rangle := \langle \cdot, \cdot \rangle_{-\Id}$. If nothing else is specified, then $N=\rk H_2(X)$, that is, the number of vertices in the graph. Common abuses of notation include ``embedding of the graph'', meaning an embedding of the lattice $(H_2(X)/\Torsion, Q_{X})$, where $X$ is described by the plumbing graph. The following theorem is well-known, but we explicitly write out the embedding construction for the reader's convenience. \begin{thm} \label{thm:complegembed} Every pair of complementary legs has a lattice embedding. \end{thm} \begin{proof} The embedding can be constructed algorithmically from a Riemenschneider diagram. Denote the vertices of the two complementary legs by $(U_1,\dots, U_{m_1})$ and $(V_1,\dots,V_{m_2})$.
These vertices generate the second homology of the plumbed $4$-manifold described by the graph. We need to send every vertex to an element of $\mathbb{Z}\langle e_1, \dots, e_{m_1+m_2}\rangle$. Start by mapping both $U_1$ and $V_1$ to $e_1$. Order the points in the Riemenschneider diagram so that $P_1=(1,-1)$, and if $P_i=(a,b)$, then point $P_{i+1}$ is either $(a+1,b)$ or $(a,b-1)$. Now, we recursively build an embedding as follows. For each non-final $i$, if the current partial embedding is $(u_1,...,u_n), (v_1,...,v_k)$ (meaning that $(U_1,\dots, U_n)$ gets mapped to $(u_1,...,u_n)$ and $(V_1,\dots, V_k)$ gets mapped to $(v_1, \dots, v_k)$) and $P_i=(a,b)$ is such that $P_{i+1}=(a+1,b)$, then the new partial embedding will be $(u_1,...,u_n+e_{i+1}), (v_1,...,v_k-e_{i+1},e_{i+1})$. If $P_{i+1}=(a,b-1)$, then the new partial embedding will be $(u_1,...,u_n-e_{i+1},e_{i+1}), (v_1,...,v_k+e_{i+1})$ instead. If $P_i$ is final and the current partial embedding is $(u_1,...,u_n), (v_1,...,v_k)$, the new embedding will be $(u_1,...,u_n+e_{i+1}), (v_1,...,v_k-e_{i+1})$, or the other way around, whatever is preferred. It is easy to see that an embedding $(u_1,...,u_{m_1}), (v_1,...,v_{m_2})$ constructed this way will have the following properties, where $n=m_1$ and $k=m_2$ (possibly after exchanging the roles of the two legs): \begin{itemize} \item Each $u_i$ for $i=1,...,n-1$ and $v_j$ for $j=1,...,k$ will be a sum of consecutive basis vectors, all but the last one with coefficient $1$, and the last one with coefficient $-1$. Meanwhile $u_n$ will be a sum of consecutive basis vectors all with coefficient $1$. \item If the Riemenschneider diagram represents the fractions $[\alpha_1,...,\alpha_n]^-$ and $[\beta_1,...,\beta_k]^-$, then $\langle u_i, u_i \rangle=-\alpha_i$ and $\langle v_j, v_j \rangle=-\beta_j$. \item Since $u_i$ and $u_{i+1}$ have exactly one basis vector in common, one with a positive coefficient and one with a negative one, $\langle u_i, u_{i+1} \rangle =1$, and similarly $\langle v_i, v_{i+1} \rangle =1$.
\item The other pairs $(u_i, u_j)$ (with $|i-j|>1$) don't share basis vectors and are thus orthogonal. Similarly, the pairs $(v_i, v_j)$ with $|i-j|>1$ don't share basis vectors and are thus orthogonal. \item It is easy to show by induction on the construction that $\langle u_i, v_j \rangle=0$ for all $i,j$. \end{itemize} These properties show that we are in fact looking at a lattice embedding of the complementary legs. \end{proof} \begin{rmk} In fact, if $e_1$ is fixed to hit the first vertex of each complementary leg, the rest of the embedding is unique up to renaming of elements and sign of the coefficient \cite[Lemma 5.2]{bakerbucklecuona}. \end{rmk} The following facts are useful when dealing with lattice embeddings. We will often use these properties without citing them. The first fact follows from reversing the Riemenschneider diagram, the second from embedding the sequences $(a_m,\dots,a_1)$ and $(b_n,\dots, b_1)$ as in Theorem \ref{thm:complegembed} and mapping the $-1$-weighted vertex to $-e_1$, and the rest from looking at a Riemenschneider diagram. \begin{prop} \label{prop:latticeembeddingproperties} Let $(a_1,\dots,a_m)$ and $(b_1,\dots, b_n)$ be complementary sequences. Then the following hold: \begin{enumerate} \item The sequences $(a_m,\dots,a_1)$ and $(b_n,\dots, b_1)$ are complementary. \item The linear graph $(-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)$ embeds in $(\mathbb{Z}^{m+n}, -\Id)$. \item Either $a_m$ or $b_n$ must equal $2$, so assume without loss of generality that $b_n=2$. Blowing down the $-1$ in the linear graph $(-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)$ gives us the linear graph $(-a_1,\dots,-(a_m-1),-1,-b_{n-1},\dots,-b_1)$. This graph is once again a pair of complementary legs linked by a $-1$, described by the Riemenschneider diagram obtained by removing the last point. \item Repeatedly blowing down the $-1$ in linear graphs of the form \[ (-a_1,\dots,-a_m,-1,-b_n,\dots,-b_1)\] eventually takes us to $(-2,-1,-2)$.
\item Similarly, blowing up next to the $-1$ gives $(-a_1,\dots,-(a_m+1),-1,-2,-b_n,\dots,-b_1)$ or $(-a_1,\dots,-a_m,-2,-1,-(b_n+1),\dots,-b_1)$, which are both pairs of complementary legs connected by a $-1$, described by Riemenschneider diagrams that are extensions of the initial one by one dot. \end{enumerate} \end{prop} \section{Growing Complementary Legs on Lisca's Graphs} \label{sec:analysis} The idea for this work comes from studying the lattice embeddings of linear graphs and other trees that are known to bound rational homology 4-balls. Consider for example Lisca's classification of connected linear graphs that bound rational homology 4-balls \cite{liscasingle07}, in the most convenient form for us described by Lecuona in \cite[Remark 3.2]{lecuonamontesinos}. Every family of embeddable graphs can be obtained from the basic graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$ by repeated application of two types of moves, one of which is the following: choose a basis vector $e$ hitting exactly two vertices $v$ and $w$, where $w$ is final, subtract $1$ from the weight of $v$ and attach a new vertex $u$ of weight $-2$ to $w$. We will show that we can do away with the assumption that $w$ is final and still get 3-manifolds bounding rational homology 4-balls by repeating this operation. \begin{figure} \centering \tikzfig{liscagraph2234} \caption{Lisca's $(-2,-2,-3,-4)$ graph with embedding.} \label{fig:liscagraph2234} \end{figure} \begin{figure} \centering \tikzfig{liscagraph2234extended1} \caption{An extension of Lisca's $(-2,-2,-3,-4)$ graph with embedding.} \label{fig:liscagraph2234extended1} \end{figure} \begin{example} Consider Figure \ref{fig:liscagraph2234}, showing an embedding of Lisca's $(-2,-2,-3,-4)$ graph into the standard lattice $(\mathbb{Z} \langle e_1, e_2, e_3, e_4 \rangle , -\Id)$. Note that $e_4$ and $e_3$ hit two vertices each. Choose $e_4$.
We can now perform the operation described above by choosing $v$ to be the vertex of weight $-4$ and $w$ the vertex of weight $-3$. The result is shown in Figure \ref{fig:liscagraph2234extended1} together with its embedding, which is a kind of ``expansion'' of the embedding in Figure \ref{fig:liscagraph2234}. Our new embedding has two basis vectors hitting exactly two vertices each, namely $e_3$ and $f_1$, whereas $e_4$ now hits three vertices. We may now perform the same operation again on any of these basis vectors, thereby obtaining any graph of the form described in Figure \ref{fig:liscagraph2234extendedgeneral}, with $k=0$. We will show that these graphs not only have lattice embeddings, but also bound rational homology 4-balls. \end{example} We will now introduce two moves on embedded plumbing graphs. Let $\Gamma=(V,E,W)$ be a weighted negative-definite graph with lattice embedding $F: (V,Q_{X_{\Gamma}}) \to (\mathbb{Z}^{|V|},-\Id)$. Assume that there is a basis vector $e$ of $\mathbb{Z}^{|V|}$ hitting exactly two vertices $A$ and $B$ in $\Gamma$, whose images are $v$ and $w$, in any order we prefer. Then a \textbf{GOCL (growth of complementary legs) operation} consists in constructing an embedded graph $(\Gamma'=(V',E',W'), F')$ by $V'=V\cup \{C\}$, $E'=E\cup \{AC\}$ and, for $f$ a new basis vector extending the target lattice to $\mathbb{Z}^{|V'|}$, setting $u:=F'(C)=-\langle e, v \rangle e - f$, $w':=F'(B)=w-\langle e, v \rangle \langle e, w \rangle f$ and $F'(D)=F(D)$ for all $D \in V-\{B\}$. Note that $\langle e, v \rangle \langle e, w \rangle = \langle f, u \rangle \langle f, w' \rangle$. Thus, the GOCL operation substitutes $e$ by $f$ in the set of basis vectors hitting the graph exactly twice and moreover, the sign difference between the two occurrences of the basis vector is preserved. This operation can therefore be applied repeatedly. If we start with the graph consisting of two vertices of weight $-2$ and no edges, and the embedding $e_1-e_2$ and $e_1+e_2$, then repeated application of GOCL will simply give us two complementary legs.
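The last remark can be tested directly: running the recursive construction from the proof of Theorem \ref{thm:complegembed} along the path of a Riemenschneider diagram reproduces a complementary-leg embedding. The sketch below (ours, with ad hoc data structures; vectors are stored as dictionaries from basis indices to coefficients) encodes the path as a string of moves, `D` for down and `R` for right:

```python
def comp_leg_embedding(moves):
    """Run the recursive construction from the proof of the embedding
    theorem for complementary legs.  `moves` is the path of the
    Riemenschneider diagram ('D' = down, 'R' = right)."""
    def bump(vec, b, c):
        vec[b] = vec.get(b, 0) + c

    u, v = [{1: 1}], [{1: 1}]           # both first vertices map to e_1
    for i, m in enumerate(moves, start=1):
        b = i + 1                       # the new basis vector e_{i+1}
        if m == 'R':
            bump(u[-1], b, 1); bump(v[-1], b, -1); v.append({b: 1})
        else:
            bump(u[-1], b, -1); u.append({b: 1}); bump(v[-1], b, 1)
    b = len(moves) + 2                  # final step of the construction
    bump(u[-1], b, 1); bump(v[-1], b, -1)
    return u, v

def pairing(x, y):                      # <x, y> under the form -Id
    return -sum(c * y.get(b, 0) for b, c in x.items())
```

For the diagram in Figure \ref{riemenschneider_diagram} the path is `DDDRDRR`; one leg acquires self-pairings $(-2,-2,-2,-3,-4)$ and the other $(-5,-3,-2,-2)$, consecutive vertices in a leg pair to $1$, and all pairings between the two legs vanish, exactly as in the proof.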
The other operation, which we will call \textbf{IGOCL (inner growth of complementary legs)}, could be described as growing complementary legs from the inside. Suppose a basis vector $e$ hits exactly three vertices $A$, $B$ and $C$ in $\Gamma$, with their images under the lattice embedding $F$ being $u$, $v$ and $w$ respectively. Assume also that $B$ and $C$ are adjacent and that $\langle v, e \rangle \langle w, e \rangle = -1$, that is, $e$ hits $v$ and $w$ with opposite signs. Then $\Gamma'=(V', E', W')$ is described by $V'=V\cup \{ D \}$, $E'=(E-\{BC\})\cup \{BD, DC\}$ and, for $f$ a new basis vector extending the target lattice, $F'(D)=-\langle v, e \rangle e + \langle v, e \rangle f$, $F'(C)=w-\langle w, e \rangle e + \langle w, e \rangle f$, $F'(A)=u+\langle u, e \rangle f$ and $F'(X)=F(X)$ for all $X \in V - \{A,C\}$. After this operation is performed, we can perform it again on either $e$ or $f$, but the result is essentially the same. What it does is grow a chain of $-2$'s between two vertices and compensate by subtracting from the weight of a different vertex. If we apply the IGOCL operation on a vector hitting a pair of complementary legs three times, we still get a pair of complementary legs, which explains the name. Now that we have defined the GOCL and IGOCL moves, the remainder of this section will be dedicated to their applications to Lisca's basic graphs $(-2,-2,-2)$, $(-2,-2,-3,-4)$, $(-3,-2,-2,-3)$ and $(-3,-2,-3,-3,-3)$. We will show for each Lisca graph one by one that the results obtained from repeatedly applying the aforementioned operations always bound rational homology balls. We introduce the following notation. Given a weighted graph $\Gamma$, we let $X_{\Gamma}$ be the 4-dimensional plumbing associated to it and $Y_\Gamma$ be its boundary or, in other words, the associated 3-dimensional plumbing.
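A further quick check (ours, illustrative; it uses the standard recursion for tridiagonal determinants) is that the linear graphs arising in this section are negative definite, and that the order of $H_1$ of the boundary, the absolute value of the Gram determinant, is a perfect square, a well-known necessary condition for a rational homology sphere to bound a rational homology ball. For the family $(-(2+k),-2,-3,(-2)^{[k]},-4)$ appearing below, for small $k$ one finds $|D_n|=(3k+5)^2$:

```python
def chain_minors(weights):
    """Leading principal minors D_1, ..., D_n of the Gram matrix of a
    linear plumbing graph, via the tridiagonal recursion
    D_j = a_j * D_{j-1} - D_{j-2} (the off-diagonal entries are 1)."""
    D = [1, weights[0]]                 # D_0 = 1, D_1 = a_1
    for a in weights[1:]:
        D.append(a * D[-1] - D[-2])
    return D[1:]

def negative_definite(weights):
    # negative definite iff the minors alternate in sign: (-1)^j D_j > 0
    return all((-1) ** j * d > 0
               for j, d in enumerate(chain_minors(weights), 1))

# the linear graphs (-(2+k), -2, -3, (-2)^[k], -4) considered below
for k in range(6):
    w = [-(2 + k), -2, -3] + [-2] * k + [-4]
    assert negative_definite(w)
    h1 = abs(chain_minors(w)[-1])       # |H_1| of the boundary 3-manifold
    assert h1 == (3 * k + 5) ** 2       # a perfect square, as it must be
```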
If $\Gamma$ is a tree, then the attaching link is strongly invertible \cite{montesinoscoverings}, that is, it can always be drawn equivariantly with respect to the $180^{\circ}$ rotation around the $x$-axis, in such a way that every knot intersects the $x$-axis in exactly two points. (For example, see Figure \ref{fig:2234involution}.) Let $\pi: X_\Gamma \to X_\Gamma$ be the involution given by extending this $180^\circ$ rotation around the $x$-axis and let $p: X_\Gamma \to X_\Gamma/ \pi$ be the quotient map when we identify $x \sim \pi(x)$. By \cite[Theorem 3]{montesinoscoverings}, $X_\Gamma/ \pi=B^4$ and $p$ is a double covering, branched over a surface $S_\Gamma \subset B^4$. The surface $S_\Gamma$ can be drawn by attaching bands to a disc according to the bottom half of the rotation-equivariant drawing, adding as many half-twists as the weight of the corresponding unknot \cite{montesinoscoverings}. (See Figure \ref{fig:2234arborescent}.) By $K_\Gamma$ we denote the link $K_\Gamma=S_\Gamma\cap S^3$. \begin{figure} \centering \includegraphics[width=.7\linewidth]{2234involution} \caption{Proof that the attaching link of the graph in Figure \ref{fig:liscagraph2234extended1} is strongly invertible.} \label{fig:2234involution} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{2234arborescentmorecorrect} \caption{$S_\Gamma$ for $\Gamma$ the graph in Figure \ref{fig:liscagraph2234extended1} is a disc with five bands attached.} \label{fig:2234arborescent} \end{figure} We will show that if $\Gamma$ is obtainable from the basic Lisca graphs by GOCL and IGOCL moves, then $Y_\Gamma$ bounds a rational homology $4$-ball. We will do this by finding a $\Gamma'$ such that $Y_{\Gamma'}=Y_\Gamma$ and $K_{\Gamma'}$ is $\chi$-slice, that is, bounds a surface of Euler characteristic $1$ inside $B^4$ \cite[Definition 1]{donaldowens}.
The $\chi$-sliceness will be proven by adding two (or one) 2-handles to $X_{\Gamma'}$ in a $\mathbb{Z}/2\mathbb{Z}$-equivariant fashion (changing the branching locus from $S_{\Gamma'}$ to $S$ by adding two (or one) bands), in such a way that we obtain $(S^1\times S^2)\# (S^1\times S^2)$ (or $(S^1\times S^2)$) and that $\partial S$ is the 3-component (or 2-component) unlink. Addition of such bands is a concordance of boundary links. It will follow that $K_{\Gamma'}$ bounds an embedded surface $F$ in $B^4$ obtained from adding two (or one) bands to $K_{\Gamma'}$ and capping off with three (or two) discs, meaning that $K_{\Gamma'}$ is the boundary of a surface obtained from attaching two bands to three discs (or one band to two discs). This surface is homotopy equivalent to three points with two edges (or two points with one edge), which has Euler characteristic $1$. We use \cite[Proposition 5.1]{donaldowens} to conclude that the double cover of $B^4$ branched over $F$ is a rational homology ball with boundary $Y_{\Gamma'}=Y_\Gamma$. \subsection{$(-2,-2,-2)$} This graph has embedding $(e_1-e_2,e_2-e_3,-e_1-e_2)$. The only basis vector hitting the graph twice is $e_1$, both of whose occurrences are in final vertices. Thus applying the GOCL operation keeps the graph linear, and all such graphs have been shown by Lisca to bound rational homology balls. In fact, these graphs describe the lens spaces $L(p^2, pq \pm 1)$, for $p>q>0$ with some orientation. \subsection{$(-2,-2,-3,-4)$} \label{subsec:orientationreversalhere} \begin{figure} \centering \tikzfig{liscagraph2234extendedgeneral} \caption{General extension of Lisca's $(-2,-2,-3,-4)$ graph. Here $1/[a_1,\dots,a_{n_1}]^{-}+1/[\alpha_1,\dots,\alpha_{n_2}]^{-}=2$; in other words $[a_1,\dots,a_{n_1}]^{-}$ and $[\alpha_1,\dots,\alpha_{n_2}]^{-}$ are complementary. The fractions $[b_1,\dots,b_{m_1}]^{-}$ and $[\beta_1,\dots,\beta_{m_2}]^{-}$ are complementary as well.
The length of the chain of $-2$'s is $k$.} \label{fig:liscagraph2234extendedgeneral} \end{figure} First, we look at the embedding of the $(-2,-2,-3,-4)$ graph in Figure \ref{fig:liscagraph2234}. There are two vectors occurring twice, namely $e_3$ and $e_4$, and one vector occurring three times, namely $e_1$. The vector $e_1$ fulfils the requirement for the IGOCL move to happen, so we can extend this to a $(-(2+k), -2, -3, (-2)^{[k]}, -4)$ graph, a case mentioned in \cite[Remark 3.2]{lecuonamontesinos}. Further, GOCL moves applied to $e_3$ and $e_4$ give us the graphs described by Figure \ref{fig:liscagraph2234extendedgeneral}. Note that applying IGOCL or GOCL again will not move us out of the set of graphs described by Figure \ref{fig:liscagraph2234extendedgeneral}. \begin{figure} \centering \tikzfig{2234extendedplus-1s} \caption{The black part of the graph represents the same 3-manifold as Figure \ref{fig:liscagraph2234extendedgeneral}. We attach two $-1$-framed $2$-handles to the $4$-manifold. We will show that the new boundary $3$-manifold is $(S^1\times S^2)\#(S^1\times S^2)$.} \label{fig:2234extendedplus-1s} \end{figure} \begin{figure} \centering \tikzfig{2234easiertoembedplus-1s} \caption{A graph representing the same $3$-manifold as Figure \ref{fig:2234extendedplus-1s}.} \label{fig:2234easiertoembedplus-1s} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{2234equivarianceright} \caption{Proof that the attaching link of the graph in Figure \ref{fig:2234easiertoembedplus-1s} is strongly invertible.} \label{fig:2234equivariance} \end{figure} We want to show that if $\Gamma$ is a graph described by Figure~\ref{fig:liscagraph2234extendedgeneral}, then $Y_\Gamma$ bounds a rational homology 4-ball. We add two $-1$-framed $2$-handles to $X_\Gamma$ and obtain Figure~\ref{fig:2234extendedplus-1s}.
I have not been able to do this equivariantly when $k\geq 2$, but employing the first part of Proposition~\ref{prop:changingorientation} below, we can find a graph $\Gamma'$ such that $Y_\Gamma=Y_{\Gamma'}$ and such that the Kirby diagram of $\Gamma'$ with the extra $-1$-framed knots can be embedded equivariantly. This $\Gamma'$ will be the non-purple part of Figure~\ref{fig:2234easiertoembedplus-1s}. Also note that Figure~\ref{fig:2234extendedplus-1s} and Figure~\ref{fig:2234easiertoembedplus-1s} show the same 3-manifold. The equivariant embedding of the graph in Figure \ref{fig:2234easiertoembedplus-1s} is drawn in Figure \ref{fig:2234equivariance}. The proposition below follows from Neumann's plumbing calculus \cite[Page 305]{neumann89} together with the algorithm of 1) performing a $1$-blow-up to the right of the rightmost chain element greater than $1$, 2) blowing down any $-1$-weighted vertices, and 3) repeating. Following the Riemenschneider diagram, we see that this algorithm gradually substitutes a sequence by its Riemenschneider dual. Blowing up by $1$ increases the number of positive vertices by $1$ and blowing down a $-1$ decreases the number of negative vertices by $1$. \begin{prop} Let $\Gamma$ be a plumbing graph (not necessarily a tree) containing a chain $(-\alpha_1,\dots,-\alpha_k)$ (not necessarily splitting the rest of the graph into two components), as in Figure \ref{fig:changingorientation1}. Let $\Gamma'$ be the graph $\Gamma$ with the chain substituted by the chain $(\beta_1,\dots, \beta_j)$, for $[\alpha_1,\dots, \alpha_k]^-$ and $[\beta_1,\dots, \beta_j]^-$ complementary fractions, and the weight of the vertices adjacent to the chain increased by 1. Then $Y_\Gamma=Y_{\Gamma'}$. Moreover, $b^2_+(X_{\Gamma'})=b^2_+(X_\Gamma)+j$ and $b^2_-(X_{\Gamma'})=b^2_-(X_\Gamma)-k$.
\label{prop:changingorientation} \end{prop} \begin{figure} \begin{minipage}{\textwidth} \begin{subfigure}{\linewidth} \centering \tikzfig{changingorientation1} \caption{Changing a negative chain...} \label{fig:changingorientation1} \end{subfigure} \begin{subfigure}{\linewidth} \centering \tikzfig{changingorientation2} \caption{... to a positive one.} \label{fig:changingorientation2} \end{subfigure} \end{minipage}% \caption{The graphs above bound the same 3-manifold if $[\alpha_1,\dots, \alpha_k]^-$ and $[\beta_1,\dots, \beta_j]^-$ are complementary fractions.} \label{fig:changingorientation} \end{figure} \begin{figure} \begin{subfigure}{\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns1} \caption{} \label{fig:liscagraph2234twoblowdowns1} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns2} \caption{} \label{fig:liscagraph2234twoblowdowns2} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns3} \caption{} \label{fig:liscagraph2234twoblowdowns3} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns4} \caption{} \label{fig:liscagraph2234twoblowdowns4} \end{subfigure} \begin{subfigure}{0.25\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns5} \caption{} \label{fig:liscagraph2234twoblowdowns5} \end{subfigure} \begin{subfigure}{0.25\linewidth} \centering \tikzfig{liscagraph2234twoblowdowns6} \caption{} \label{fig:liscagraph2234twoblowdowns6} \end{subfigure} \caption{Plumbing/Kirby calculus to show that adding two $-1$-framed $2$-handles to the graphs in Figure \ref{fig:liscagraph2234extendedgeneral} gives a $4$-manifold with boundary $(S^1\times S^2)\# (S^1\times S^2)$.} \label{fig:2234kirbycalculus} \end{figure} \begin{figure} \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus} \caption{} \label{fig:sub1} \end{subfigure}\\[1ex] \end{minipage}% 
\begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus2} \caption{} \label{fig:sub2} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus3} \caption{} \label{fig:sub3} \end{subfigure} \end{minipage}% \caption{Kirby calculus to remove a negative 3-cycle in a plumbing graph.} \label{fig:neg3cycleremoval} \end{figure} \begin{figure} \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus-chain} \caption{} \label{fig:sub4} \end{subfigure}\\[1ex] \end{minipage}% \begin{minipage}{.5\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus2-chain} \caption{} \label{fig:sub5} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=.7\linewidth]{kirbycalculus3-chain} \caption{} \label{fig:sub6} \end{subfigure} \end{minipage}% \caption{Kirby calculus to remove a negative cycle in a plumbing graph.} \label{fig:negcycleremoval} \end{figure} \begin{figure} \centering \includegraphics[width=.4\linewidth]{untwisting} \caption{A schematic picture of the graph in Figure \ref{fig:liscagraph2234twoblowdowns3}. From this picture, it is clear that blowing down the green $-1$ unlinks the red and the blue curves, yielding the link in Figure \ref{fig:liscagraph2234twoblowdowns4}.} \label{fig:untwisting} \end{figure} It remains to show that Figures \ref{fig:2234easiertoembedplus-1s} and \ref{fig:2234extendedplus-1s} represent the $3$-manifold $(S^1\times S^2)\#(S^1\times S^2)$. We start from Figure \ref{fig:2234extendedplus-1s} and blow down every $-1$ vertex that we can. The sequence of blow-downs that we do can be seen in Figure \ref{fig:2234kirbycalculus}. 
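The arithmetic of these blow-downs on a linear chain of unknots can be mimicked by simple list surgery on the weight sequences; the following is a rough sketch that only handles a $-1$ sitting inside or at the end of a linear chain (so it deliberately ignores the cycles and trivalent vertices appearing in the figures):

```python
def blow_down_chain(chain):
    """Repeatedly blow down -1-framed unknots in a linear chain:
    (..., x, -1, y, ...) -> (..., x + 1, y + 1, ...), and at an end
    of the chain (-1, y, ...) -> (y + 1, ...).  Stops when no -1
    remains (or only a single vertex is left)."""
    chain = list(chain)
    while -1 in chain and len(chain) > 1:
        i = chain.index(-1)
        if i == 0:
            chain = [chain[1] + 1] + chain[2:]
        elif i == len(chain) - 1:
            chain = chain[:-2] + [chain[-2] + 1]
        else:
            chain = (chain[:i - 1]
                     + [chain[i - 1] + 1, chain[i + 1] + 1]
                     + chain[i + 2:])
    return chain
```

For instance, a $-1$ connecting two complementary legs, written as one chain, blows down to a single $0$-weighted vertex, a fact used repeatedly in this section.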
Each $-1$ in Figure \ref{fig:2234extendedplus-1s} is connected to a pair of complementary legs, and blowing down such a $-1$ is equivalent to substituting them by the pair of complementary legs whose Riemenschneider diagram is obtained by removing the last point. Thus blowing down the left $-1$ repeatedly leads to first shortening the complementary legs $(-b_1,\dots,-b_{m_1})$ and $(-\beta_1,\dots,-\beta_{m_2})$ to just be $(-2)$ and $(-2)$, and blowing down again we obtain Figure \ref{fig:liscagraph2234twoblowdowns1}. Similarly, we blow down the right $-1$ until we obtain $(a_1,\dots,a_{n_1})=(2)$ and $(\alpha_1,\dots,\alpha_{n_2})=(2)$, to obtain Figure \ref{fig:liscagraph2234twoblowdowns2}. Now, it is not obvious how to blow down the left $-1$. Figures \ref{fig:sub1} and \ref{fig:sub4} schematically show the links represented by the graph in Figure \ref{fig:liscagraph2234twoblowdowns2}, depending on whether $k=0$ or not. (If $k>1$, then consider Figure \ref{fig:sub4}, but imagine the green knot as a longer chain.) The little squares represent a continuation of the link, and it does not matter if the blue and black squares are in fact connected as long as they are contained strictly to the right of the red square. Blowing down the $-1$ leads us to Figures \ref{fig:sub2} and \ref{fig:sub5} and isotoping the red curve leads to Figures \ref{fig:sub3} and \ref{fig:sub6}. This Kirby calculus shows that if we blow down the left $-1$ in Figure \ref{fig:liscagraph2234twoblowdowns2} and if $k=0$, then we obtain Figure \ref{fig:liscagraph2234twoblowdowns5}. If $k\geq 1$, then we get Figure \ref{fig:liscagraph2234twoblowdowns3}. Looking at the schematic image Figure \ref{fig:untwisting} of the link represented by the graph in Figure \ref{fig:liscagraph2234twoblowdowns3}, we see that blowing down its rightmost $-1$ yields Figure \ref{fig:liscagraph2234twoblowdowns4}, which can be blown down to Figure \ref{fig:liscagraph2234twoblowdowns5}.
Blowing down the top $-1$ now gives us Figure \ref{fig:liscagraph2234twoblowdowns6}. Finally, applying the Kirby calculus of Figure \ref{fig:neg3cycleremoval} while ignoring the red curve, we can blow down the left $-1$ and be left with two disconnected $0$-weighted vertices, which is the plumbing graph of $(S^1\times S^2)\#(S^1\times S^2)$. \subsection{$(-3,-2,-2,-3)$} \begin{figure} \centering \tikzfig{3223general} \caption{The form of all graphs obtainable from $(-3,-2,-2,-3)$ using GOCL moves. Here the sequences $(a_1,\dots,a_{l_1})$ and $(\alpha_1,\dots,\alpha_{l_2})$ are complementary, as well as the sequences $(b_1,\dots, b_{m_1})$ and $(\beta_1,\dots,\beta_{m_2})$, and the sequences $(z_1,\dots, z_{n_1})$ and $(\zeta_1,\dots,\zeta_{n_2})$.} \label{fig:3223general} \end{figure} \begin{figure} \centering \tikzfig{3223generalplustwo2handles} \caption{Two 2-handles attached with $-1$-framing.} \label{fig:3223generalplustwo2handles} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{3223equivariance} \caption{An equivariant embedding of the link in Figure \ref{fig:3223generalplustwo2handles}.} \label{fig:3223equivariance} \end{figure} \begin{figure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown1} \caption{ } \label{fig:3223blowndown1} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown2} \caption{ } \label{fig:3223blowndown2} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown3} \caption{ } \label{fig:3223blowndown3} \end{subfigure} \begin{subfigure}{0.5\linewidth} \centering \tikzfig{3223blowndown4} \caption{ } \label{fig:3223blowndown4} \end{subfigure} \caption{Plumbing/Kirby calculus showing that the $4$-manifold in Figure \ref{fig:3223generalplustwo2handles} has boundary $(S^1\times S^2)\# (S^1\times S^2)$.} \label{fig:3223kirby} \end{figure} Note that $(-3,-2,-2,-3)$ has embedding $(e_2+e_3+e_4,e_1-e_2,e_2-e_3,-e_1-e_2+e_4)$, which has three basis vectors occurring twice and
one basis vector occurring four times. Thus we can only perform GOCL moves here, which gives us the graph family in Figure \ref{fig:3223general}. We will add two $2$-handles as in Figure \ref{fig:3223generalplustwo2handles}. In fact, the Kirby diagram arising from this graph can be drawn in $\mathbb{R}^3$ in a $\mathbb{Z}/2\mathbb{Z}$-equivariant fashion, as in Figure \ref{fig:3223equivariance}. It remains to show that if $\Gamma$ is as in Figure \ref{fig:3223generalplustwo2handles}, then $Y_\Gamma$ is $(S^1\times S^2)\#(S^1\times S^2)$. We use the bottom $-1$ to blow down the complementary legs $(-a_1,\dots,-a_{l_1})$ and $(-\alpha_1,\dots,-\alpha_{l_2})$, and an extra blow-down leaves us with Figure \ref{fig:3223blowndown1}. This graph is modeled by Figure \ref{fig:sub1}, with a chain from the red square to the blue one. This chain does not prevent us from disconnecting the black unknot from the blue one in the isotopic Figures \ref{fig:sub2} and \ref{fig:sub3}. Hence, blowing down the middle $-1$ gives us the graph in Figure \ref{fig:3223blowndown2}. We proceed by blowing down the top $-1$ and its adjacent complementary legs $(-z_1,\dots,-z_{n_1})$ and $(-\zeta_1,\dots,-\zeta_{n_2})$, to obtain Figure \ref{fig:3223blowndown3}. Applying the Kirby calculus of Figure \ref{fig:neg3cycleremoval} with an empty red unknot, we blow down the right $-1$ and obtain Figure \ref{fig:3223blowndown4}. Note that the bottom component is a blow-down of $(-\beta_{m_2},\dots,-\beta_1,-1,-b_1,\dots,-b_{m_1})$, which famously blows down to a $0$. Thus $Y_\Gamma=S^3_0(O)\# S^3_0(O)=(S^1\times S^2)\#(S^1\times S^2)$. \subsection{$(-3,-2,-3,-3,-3)$} \begin{figure} \centering \tikzfig{32333general} \caption{These are the graphs obtainable by performing IGOCL and GOCL moves on the linear graph $(-3,-2,-3,-3,-3)$.
Here the length of the chain of $-2$'s is $k\geq 0$, $(a_1,\dots,a_{m_1})$ and $(\alpha_1,\dots,\alpha_{m_2})$ are complementary sequences, and $(b_1,\dots,b_{n_1})$ and $(\beta_1,\dots,\beta_{n_2})$ are complementary sequences.} \label{fig:32333general} \end{figure} \begin{figure} \centering \tikzfig{32333generalplus2handle} \caption{The graph in Figure \ref{fig:32333general} with an extra $2$-handle attached. We will show that this graph represents the $3$-manifold $(S^1\times S^2)$.} \label{fig:32333generalplus2handle} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{32333equivariance} \caption{An equivariant embedding of the link in Figure \ref{fig:32333generalplus2handle}.} \label{fig:32333equivariance} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown1} \caption{The graph obtained from Figure \ref{fig:32333generalplus2handle} and Proposition \ref{prop:latticeembeddingproperties} after repeated blow-downs of the $-1$-labelled vertices.} \label{fig:32333blowndown1} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown2} \caption{Figure \ref{fig:32333blowndown1} after another blow-down.} \label{fig:32333blowndown2} \end{figure} \begin{figure} \centering \tikzfig{32333blowndown3} \caption{The Kirby calculus of Figure \ref{fig:neg3cycleremoval} shows that this graph is obtained after another blow-down of Figure \ref{fig:32333blowndown2}.} \label{fig:32333blowndown3} \end{figure} The linear graph $(-3,-2,-3,-3,-3)$ has the lattice embedding $(e_2+e_3-e_5,e_1-e_2,e_2-e_3-e_4,-e_1-e_2-e_5,e_5+e_3-e_4)$. Here $e_1$ and $e_4$ occur exactly twice, serving as starting points for GOCL moves. The basis vectors $e_3$ and $e_5$ occur three times, but only $e_5$ serves as the starting point for IGOCL moves. All possible graphs obtainable by applying IGOCL and GOCL moves to the linear graph $(-3,-2,-3,-3,-3)$ are shown in the non-purple part of Figure \ref{fig:32333generalplus2handle}.
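This lattice embedding can be verified mechanically: with the standard Euclidean pairing, each vector must square to the negated weight, consecutive vectors must pair to $\pm 1$, non-consecutive ones to $0$, and the occurrence counts of the basis vectors quoted above can be read off at the same time. A sketch of this check, in coordinates with respect to $e_1,\dots,e_5$ (the sign conventions are ours):

```python
# The embedding (e2+e3-e5, e1-e2, e2-e3-e4, -e1-e2-e5, e5+e3-e4)
# of the linear graph (-3,-2,-3,-3,-3), written in coordinates.
V = [
    (0, 1, 1, 0, -1),    # e2 + e3 - e5
    (1, -1, 0, 0, 0),    # e1 - e2
    (0, 1, -1, -1, 0),   # e2 - e3 - e4
    (-1, -1, 0, 0, -1),  # -e1 - e2 - e5
    (0, 0, 1, -1, 1),    # e5 + e3 - e4
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Euclidean squares reproduce the weights up to sign, and we count
# how often each basis vector e1,...,e5 is used.
squares = [dot(v, v) for v in V]
occurrences = [sum(1 for v in V if v[i] != 0) for i in range(5)]
```

Here `occurrences` comes out as $(2,4,3,2,3)$: $e_1$ and $e_4$ occur exactly twice and $e_3$, $e_5$ three times, as claimed.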
This case is simpler than the previous ones in that it is enough to add a single $-1$-weighted vertex for us to be able to blow down the entire graph to a single $0$-weight vertex. If we can add this $2$-handle to the Kirby diagram generated by the graph in a $\mathbb{Z} / 2\mathbb{Z}$-equivariant fashion, it will mean that adding a band to the arborescent link $L$ generated by the non-purple part of Figure \ref{fig:32333generalplus2handle} gives us two unknots, which implies that $L$ bounds a surface in $B^4$ consisting of two discs and a band; hence $L$ is $\chi$-slice. We can indeed add the $2$-handle in an equivariant fashion, as in Figure \ref{fig:32333equivariance}. It remains to show that the graph in Figure \ref{fig:32333generalplus2handle} indeed blows down to a zero. First, we blow down the complementary legs to obtain Figure \ref{fig:32333blowndown1}. An extra blow-down puts us in Figure \ref{fig:32333blowndown2}. This situation is modelled by Figure \ref{fig:sub1}, so the Kirby calculus of Figure \ref{fig:neg3cycleremoval} gives us Figure \ref{fig:32333blowndown3} after blowing down the purple $-1$. Now we note that Figure \ref{fig:32333blowndown3} consists of a $-1$ connecting two complementary legs. This famously blows down to a single $0$-weighted vertex. \section{Proof of Theorem \ref{thm:torusknotlist}} \label{sec:torusknots} In this section, we prove Theorem \ref{thm:torusknotlist} by studying the intersection between the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} and plumbing graphs of positive rational surgeries on positive torus knots. First, we describe the plumbing graphs of the surgeries on torus knots, and then we go through the intersections with the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general} one by one.
\subsection{Plumbing Graphs of Rational Surgeries on Torus Knots} \begin{figure} \centering \includegraphics[width=.3\linewidth]{beforealggeo} \caption{A Kirby diagram with boundary $S^3_n(T(p,\alpha))$ where $n=[N_1+p\alpha, N_2, \dots , N_k]^-$ and $K=T(p,\alpha)$.} \label{fig:beforealggeo} \end{figure} In order to find the intersection between the plumbing graphs of rational surgeries on torus knots and the graphs obtained from Lisca's graphs by repeated GOCL and IGOCL moves, we need to know what the plumbing graphs of rational surgeries on torus knots look like. Let $n>0$ be a rational number. We want to find a plumbing graph for $S^3_n(T(p, \alpha))$. We can write $n=[N_1+p\alpha, N_2, \dots , N_k]^-$ for $N_2,\dots, N_k \geq 2$. The $3$-manifold $S^3_n(T(p,\alpha))$ bounds the 4-manifold in Figure \ref{fig:beforealggeo}, which is positive-definite if $n>0$. The argument of \cite[Section 3]{lokteva2021surgeries} that the blow-ups decrease the surgery coefficient by a constant still holds to show that $S^3_n(T(p,\alpha))$ bounds the 4-manifold described by the graph in Figure \ref{fig:rationalsurgeryontorusknot}. The positive index of this graph is $k$ by the same logic as in \cite[Section 3]{lokteva2021surgeries}. To obtain a definite graph, we will need the generalisation of the algorithm in \cite[Figure 2]{lokteva2021surgeries} described in Proposition \ref{prop:changingorientation} in Section \ref{subsec:orientationreversalhere}. If $N=n-p\alpha>1$ and thus $N_1\geq 2$, we can use Proposition \ref{prop:changingorientation} to substitute the chain $(N_1,\dots,N_k)$ with its negative Riemenschneider complement $(-M_1,\dots,-M_j)$ and obtain the negative-definite graph in Figure \ref{fig:rationalsurgeryontorusknotNpos}. If $0<N<1$, then the sequence $(N_1,\dots, N_k)$ starts with a $1$, possibly followed by some $2$'s that we can blow down before turning the rest of the chain negative.
This will once again give us a negative-definite graph, namely the one in Figure \ref{fig:rationalsurgeryontorusknotNposlessthan1}. \begin{figure} \centering \tikzfig{rationalsurgeryontorusknot} \caption{A plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and $[N_1,\dots,N_k]^-=N=n-p\alpha$. In particular, the fractions $[c_2,\dots,c_s]^-$ and $[d_1,\dots,d_t]^-$ are complementary. Also, we can write $(c_2,c_3,\dots,c_s)=((2)^{[d_1-2]},a_1+1, a_2, \dots, a_r)$ so that $[a_1,\dots,a_r]^-$ and $[d_2,\dots, d_t]^-$ are complementary.} \label{fig:rationalsurgeryontorusknot} \end{figure} \begin{figure} \centering \tikzfig{rationalsurgeryontorusknotNpos} \caption{A negative-definite plumbing graph of $S^3_n(T(p,\alpha))$, where $N=n-p\alpha>1$ and $\alpha>p$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and if $N=n-p\alpha=a/b$ with $a,b \in \mathbb{Z}_{>0}$, then $[M_1,\dots,M_j]^-=\frac{a}{a-b}$.} \label{fig:rationalsurgeryontorusknotNpos} \end{figure} \begin{figure} \centering \tikzfig{rationalsurgeryontorusknotNposlessthan1} \caption{A negative-definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $0<N<1$. Here $[1,c_2,...,c_s]^-=p/\alpha$, $[d_1,\dots,d_t]^-=\alpha/p$ and the fraction $[P_1,\dots,P_j]^-$ is complementary to $\frac{1}{1-N}=[N_2,\dots,N_k]^-$. In fact, that means that $N=\frac{1}{[P_1,\dots,P_j]^-}$.} \label{fig:rationalsurgeryontorusknotNposlessthan1} \end{figure} If $N<0$, that is, $N_1\leq 0$, then merely turning the positively-weighted vertices $(N_2, \dots , N_k)$ negative will not be enough to decrease the positive index to $0$. Instead, we will use Proposition \ref{prop:changingorientation} to turn the two other legs of our graph positive, and we obtain the graph in Figure \ref{fig:QsurgeryontorusknotN_1isnonpos}, which has negative index 1.
If $N_1=0$, we will perform a 0-absorption (see \cite[Proposition 1.1]{neumann89}) and obtain the positive definite graph in Figure \ref{fig:QsurgeryontorusknotN_1is0}. If $N_1=-1$, we simply blow it down. If $N_1\leq -2$, we use Proposition \ref{prop:changingorientation} to turn it into a chain of $2$'s and obtain the graph in Figure \ref{fig:QsurgeryontorusknotN_1isatmostminus2}. \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1isnonpos} \caption{A plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $N<0$. Here the negative index is 1, $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$.} \label{fig:QsurgeryontorusknotN_1isnonpos} \end{figure} \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1is0} \caption{A positive definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $-1<N<0$. Here $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$.} \label{fig:QsurgeryontorusknotN_1is0} \end{figure} \begin{figure} \centering \tikzfig{QsurgeryontorusknotN_1isatmostminus2} \caption{A positive definite plumbing graph of $S^3_n(T(p,\alpha))$, where $\alpha>p$ and $N<-1$. Here $[d_1,\dots,d_t]^-=\alpha/p$ and $[e_1,\dots,e_r]^-$ is complementary to $[d_2,\dots,d_t]^-$, and the tail starts with a chain of $2$'s of length $-N_1-1$.} \label{fig:QsurgeryontorusknotN_1isatmostminus2} \end{figure} In the graphs of Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, the vertex of degree $3$ is called the node. Removing the node splits the graph into $3$ connected components, of which the top left one is called the \textit{torso}, the bottom left one is called the \textit{leg} and the right one is called the \textit{tail}. This vocabulary is chosen to accord with the vocabulary of \cite{lokteva2021surgeries} on iterated torus knots.
We also often talk about the torso, leg and tail collectively as legs. This comes from viewing the graphs as general star-shaped graphs rather than graphs of surgeries on torus knots specifically. (The author recommends looking at the flag of Sicily or of the Isle of Man for a more precise metaphor.) This vocabulary is generally used by Lecuona, for instance in \cite{lecuonacomplementary} and \cite{lecuonamontesinos}. We say that two legs of a star-shaped graph are negatively quasi-complementary if adding one vertex at the end of one of the legs could make them complementary, and positively quasi-complementary if removing a final vertex from one of the legs could. We say that two legs are quasi-complementary if they are either positively or negatively quasi-complementary. Note that the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} are exactly the star-shaped graphs with three legs whereof two are quasi-complementary. In the following subsections, we are thus going to look for star-shaped graphs with a pair of quasi-complementary legs among the graphs in Figures \ref{fig:liscagraph2234extendedgeneral}, \ref{fig:3223general} and \ref{fig:32333general}. The following easy-to-check proposition will come in handy: \begin{prop} \label{prop:aqcleg} Suppose $Q/P=[a_1,\dots,a_n]^-$ and $(-a_1,-a_2,\dots,-a_n)$ is either the leg or torso of the plumbing graph of $S_r^3(T(p,\alpha))$, a positive rational surgery on a positive torus knot. (Here $-a_n$ is the weight of the vertex adjacent to the node.) Then $\alpha/p$ is one of the following: \begin{itemize} \item $\frac{Q}{P}$, \item $\frac{Q}{Q-P}$, \item $\frac{(l+1)Q+P}{Q}$ for some $l\geq 0$ or \item $\frac{(l+2)Q-P}{Q}$ for some $l\geq 0$. \end{itemize} \end{prop} Note that if $\GCD(P,Q)=1$, then all of these fractions are reduced.
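The negative continued fraction expansions and the complementarity condition entering Proposition \ref{prop:aqcleg} are easy to experiment with numerically. A minimal sketch (function names are ours), using the standard convention that complementary fractions $p/q$ and $p/(p-q)$ have reciprocals summing to $1$:

```python
from fractions import Fraction

def neg_cf(p, q):
    """Negative continued fraction expansion [a1, ..., an]^- of p/q,
    i.e. p/q = a1 - 1/(a2 - 1/(... - 1/an)), via ceiling divisions."""
    seq = []
    while q > 0:
        a = -(-p // q)  # ceiling of p/q
        seq.append(a)
        p, q = q, a * q - p
    return seq

def eval_neg_cf(seq):
    """Evaluate [a1, ..., an]^- as an exact Fraction."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a - 1 / val
    return val

def complementary(x, y):
    """Complementary fractions: 1/x + 1/y = 1, e.g. p/q and p/(p-q)."""
    return 1 / Fraction(x) + 1 / Fraction(y) == 1
```

For example, $9/7=[2,2,2,3]^-$ has complement $9/2=[5,2]^-$.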
However, if $P=Q-1$, then $\alpha/p=\frac{Q}{Q-P}=Q$ is a degenerate case that we ignore. \subsection{$(-2,-2,-3,-4)$} Here, we look for the intersections between the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} and the graphs in Figure \ref{fig:liscagraph2234extendedgeneral}. This is the same as finding all the subfamilies of the graphs in Figure \ref{fig:liscagraph2234extendedgeneral} that are star-shaped with three legs, whereof two are quasi-complementary. For the purpose of stating Theorem \ref{thm:torusknotlist}, it is enough to determine the weights of the quasi-complementary legs, since the weights of the node and the tail only determine the surgery coefficient. To turn Figure \ref{fig:liscagraph2234extendedgeneral} into a star-shaped graph, we will need to keep some of the grown complementary legs at length 1. If we let the vertex of weight $-1-a_1$ be trivalent, then $m_1=1$ and thus $(\beta_1,\dots, \beta_{m_2})$ consists only of $2$'s. If $m_2>1$, then $n_2=1$ and $(a_1,\dots,a_{n_1})=(2,\dots, 2)$. In order to have trivalency of the $-(1+a_1)$ vertex, $\alpha_1\geq 3$ is required. It is easy to check that in this case the only legs that can be quasi-complementary are the $(-b_1,-(2+k))$ one and the $(\underbrace{-2,\dots,-2}_{\alpha_1-2})$ one. They can either be negatively quasi-complementary, made complementary by adding $-3$ at the end of the second leg, in which case $\alpha_1=b_1$ and $k=0$ have to hold, or they can be positively quasi-complementary, made complementary by removing $-(2+k)$ from the first one, in which case $\alpha_1-2=b_1-1$. The first case shows that \[ S^3_{\frac{(2b_1^2-2b_1+1)^2}{2b_1^2-b_1+1}}(T(b_1-1,2b_1-1)) \] bounds a rational homology ball for any $b_1\geq 3$.
The second case shows that \[ S^3_{\frac{((k+2)b_1^2-1)^2}{(k+2)b_1^2+b_1-1}}(T(b_1,b_1(k+2)-1)) \] bounds a rational homology ball for all integers $b_1 \geq 2$ and $k \geq 0$. Both of these are subfamilies of families 1 and 2 in Theorem \ref{thm:torusknotlist}, which we will show can in fact be fully realised. We get more interesting families when we let $m_2=1$, because then $(\alpha_1,\dots, \alpha_{n_2})$ can be anything as long as it has something other than a $2$ somewhere, so that $n_1>1$. We will get graphs of the form in Figure \ref{fig:bandbetagone}. To make the top and right legs quasi-complementary is easy: we need to choose whether they are to be positively or negatively quasi-complementary and which leg needs an extra vertex or a vertex removed to be complementary, and then we just need to choose a sequence $(a_2,\dots,a_{n_1})$ that makes it happen. We use Proposition \ref{prop:aqcleg} for $Q/P=[2+k,2]^-=\frac{2k+3}{2}$. This corresponds to the entire families 3 and 4 as well as subfamilies of families 1 and 2 in Theorem \ref{thm:torusknotlist}. The top and bottom legs cannot be made quasi-complementary. \begin{figure} \centering \tikzfig{bandbetagone} \caption{Choosing $m_1=m_2=1$ in Figure \ref{fig:liscagraph2234extendedgeneral} gives this star-shaped graph.} \label{fig:bandbetagone} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc} \caption{} \label{fig:makegrownlegsqc} \end{figure} The most interesting case to consider is whether the right and the bottom legs can be made quasi-complementary. In Figure \ref{fig:makegrownlegsqc}, the black dots show a Riemenschneider diagram of the complementary sequences $(a_1,\dots,a_{n_1})$ and $(\alpha_1,\dots,\alpha_{n_2})$. Adding the blue dots gives us a Riemenschneider diagram for the sequence $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ (with complement $(k+2, 2, a_1, \dots, a_{n_1})$).
Considering only the part to the right of the red line gives us a Riemenschneider diagram for $(a_2,\dots, a_{n_1})$ (with complement $(A_1,\dots,A_{n_3})$). In order for $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ and $(a_2,\dots, a_{n_1})$ to be quasi-complementary, either the picture to the right of the red line and the total picture without the last line, or the total picture and the picture to the right of the red line with an extra column, must be the same. The sequences $(k+2, 2, a_1, \dots, a_{n_1})$ and $(a_2,\dots, a_{n_1})$ have length difference $2$, ruling out the second option. The only ways in which $(\underbrace{2,\dots,2}_k,\alpha_1+2,\alpha_2, \dots , \alpha_{n_2})$ and $(A_1,\dots,A_{n_3})$ can have length difference $1$ are the following: \begin{enumerate} \item $k=0$ and $a_1=3$, or \item $k=1$ and $a_1=2$. \end{enumerate} If $k=0$ and $a_1=3$, then the first row of the total picture has length $3$. Thus, in the second total row, to the right of the red line, we need three dots, making a total of $4$ dots. This is a valid solution, namely $(\alpha_1,\dots,\alpha_{n_2})=(2,5)$, $(\alpha_1+2, \alpha_2,\dots,\alpha_{n_2})=(4,5)$, $(A_1,\dots,A_{n_3})=(4)$ and $(a_1,\dots,a_{n_1})=(3,2,2,2)$. If we choose to continue and add $\alpha_3$, that means adding a new row completely to the right of the red line, which must be as long as the second total row, namely 4 dots. That again gives a valid solution $(\alpha_1,\dots,\alpha_{n_2})=(2,5,5)$, $(\alpha_1+2, \alpha_2,\dots,\alpha_{n_2})=(4,5,5)$, $(A_1,\dots,A_{n_3})=(4,5)$ and $(a_1,\dots,a_{n_1})=(3,2,2,3,2,2,2)$. We can continue this process and obtain the solution $(\alpha_1,\dots,\alpha_{n_2})=(2,(5)^{[l]})$ and $(a_1,\dots,a_{n_1})=((3,2,2)^{[l]},2)$ for all $l\geq 1$. Our legs are positively quasi-complementary, so $\alpha/p=[d_1,\dots,d_t]^-=[(5)^{[l]},4]^-$.
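The explicit solutions read off from the Riemenschneider diagram can be double-checked by verifying that each pair of sequences really evaluates to complementary fractions; a small sketch (the helper is ours, not the paper's notation):

```python
from fractions import Fraction

def eval_neg_cf(seq):
    """Evaluate [a1, ..., an]^- = a1 - 1/(a2 - 1/(...))."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a - 1 / val
    return val

# The l = 1 and l = 2 solutions above, as (alpha-sequence, a-sequence);
# complementary pairs have reciprocals summing to 1.
pairs = [
    ([2, 5], [3, 2, 2, 2]),
    ([2, 5, 5], [3, 2, 2, 3, 2, 2, 2]),
]
checks = [1 / eval_neg_cf(s) + 1 / eval_neg_cf(t) == 1 for s, t in pairs]
```

For $l=1$ the two sides are $[2,5]^-=9/5$ and $[3,2,2,2]^-=9/4$, whose reciprocals indeed sum to $1$, and the smallest value of $\alpha/p$ is $[5,4]^-=19/4$.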
Since $5-\frac{b}{a}=\frac{5a-b}{a}$, we have that \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} 5 & -1\\ 1 & 0 \end{pmatrix}^{l} \begin{pmatrix} 4\\ 1 \end{pmatrix} \] for $l\geq 1$. This corresponds to family 6 in Theorem \ref{thm:torusknotlist}. We can compute $N=[0,3,2,2]^-=-\frac{3}{7}$. In other words, if $p_1=1$, $p_2=4$ and $p_{j+2}=5p_{j+1}-p_j$ for all $j\geq 1$ \cite[A004253]{OEIS}, we can say that \[ S^3_{p_jp_{j+1}-\frac{3}{7}}(T(p_j,p_{j+1})) \] bounds a rational homology ball for all $j\geq 1$. In this form it may not be obvious that the numerator of the surgery coefficient is a square, but in fact, $p_jp_{j+1}-\frac{3}{7}=\frac{V^2_{j+1}}{7}$, where $V_j$ is the sequence defined by $V_1=2$, $V_2=5$ and $V_{j+2}=5V_{j+1}-V_j$ for all $j\geq 1$ \cite[A003501]{OEIS}. It is a shifted so-called Lucas sequence. The equality can be proven by first proving by induction that $p_{j+2}p_j-p_{j+1}^2=3$ for all $j\geq 1$, then noting that $V_{j+1}=p_{j+1}+p_j$ for all $j\geq 1$, and finally combining these equalities. If $k=1$ and $a_1=2$, the argument goes the same way. The only way for the right and bottom legs to be quasi-complementary is if the Riemenschneider diagram to the right of the red line and the total diagram missing the bottom line coincide. By the same argument as above, it happens if and only if $(\alpha_1,\dots,\alpha_{n_2})=(3,(5)^{[l]})$ and $(a_1,\dots,a_{n_1})=(2,(3,2,2)^{[l]},2)$ for all $l\geq 0$. In this case, $\alpha/p=[(5)^{[l]},5,2]^-$ and $N=[0,2,2,3]^-=-\frac{5}{7}$. This shows that if $Q_1=2$, $Q_2=9$ and $Q_{j+2}=5Q_{j+1}-Q_j$ for all $j \geq 1$, then \[ S^3_{Q_jQ_{j+1}-\frac{5}{7}}(T(Q_j,Q_{j+1})) \] bounds a rational homology ball for all $j\geq 1$. This corresponds to family 7 in Theorem \ref{thm:torusknotlist}. Just as before, we can show that \[ Q_jQ_{j+1}-\frac{5}{7} = \frac{(Q_j+Q_{j+1})^2}{7}.
\] \begin{figure} \centering \tikzfig{aandalphagone} \caption{Choosing $n_1=n_2=1$ in Figure \ref{fig:liscagraph2234extendedgeneral} gives this star-shaped graph.} \label{fig:aandalphagone} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc2} \caption{Riemenschneider diagram for the left and bottom legs of the graph in Figure \ref{fig:aandalphagone}.} \label{fig:makegrownlegsqc2} \end{figure} Returning to Figure \ref{fig:liscagraph2234extendedgeneral}, we can let the vertex of weight $-b_1$ be the only node. That forces $n_1=1$, so $(\alpha_1,\dots, \alpha_{n_2})=(2,\dots,2)$. Putting $a_1=2$ would give us complete freedom in choosing $(b_2,\dots, b_{m_1})$, so Proposition \ref{prop:aqcleg} applied to $Q/P=2+k$ gives that there are surgery coefficients $n$ such that $S^3_n(T(k+1,k+2))$, $S^3_n(T(k+2,(l+2)(k+2)-1))$ and $S^3_n(T(k+2,(l+1)(k+2)+1))$ bound rational homology $4$-balls. These families correspond to the entire families 1 and 2 in Theorem \ref{thm:torusknotlist}. (Note, however, that a couple of subfamilies of these will also be realised if we choose $a_1>2$ because $(b_2,\dots,b_{m_1})=(2,\dots,2)$. These subfamilies have an especially ample supply of choices of surgery coefficients.) If $n_2>1$, then $m_2=1$ and $(b_1,\dots,b_{m_1})=(2,\dots,2)$. We will have three legs, namely $(-(2+k))$, $((-2)^{[\beta_1-2]})$ and $(-(1+a_1),(-2)^{[k]}, -(2+\beta_1), (-2)^{[a_1-2]})$. The first two can be quasi-complementary in two ways, but the generated pairs $(p,\alpha)$ are already known. The first and the third cannot be quasi-complementary. The last two can also not be quasi-complementary if $n_2>1$. It is once again more interesting if $n_2=1$ and $m_2>1$ is allowed. We get the graph in Figure \ref{fig:aandalphagone}. The top and the bottom legs cannot be made quasi-complementary. The left and the bottom legs are the interesting case.
Analogously to how we used Figure \ref{fig:makegrownlegsqc}, we can use Figure \ref{fig:makegrownlegsqc2} to show that $k=0$ and $(\beta_1,\dots, \beta_{m_2})=(4,(6)^{[l]})$ and $(b_1,\dots,b_{m_1})=(2,2,(3,2,2,2)^{[l]},2)$. This is in fact family (4) in \cite[Theorem 1.1]{GALL} and family 8 in Theorem \ref{thm:torusknotlist}. Going back to Figure \ref{fig:liscagraph2234extendedgeneral}, we could also make the vertex of weight $-\alpha_1-\beta_1$ the only trivalent vertex, but that would require $m_1=n_1=1$ and thus all $\alpha_i$ and all $a_j$ are $2$'s. The top leg would not be able to be quasi-complementary to a sequence of $2$s, and the only way for the left and right legs to be quasi-complementary is if they are the legs $(-2)$ and $(-2,-2)$ in either order. This just gives us two new families of possible surgery coefficients on $T(2,3)$. We finish off this subsection by reminding the reader that we have now proved Theorem \ref{thm:torusknotlist} for families 1, 2, 3, 4, 6, 7 and 8. \subsection{$(-3,-2,-2,-3)$} In this subsection we consider the intersections between the graphs in Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2} (rational surgeries on torus knots) and the graphs in Figure \ref{fig:3223general} (graphs obtainable from $(-3,-2,-2,-3)$ through GOCL moves). \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-1} \caption{It is impossible to choose $k$ so that the length or height difference between the full Riemenschneider diagram and the diagram to the right of the red line is one.} \label{fig:makegrownlegsqc3223-1} \end{figure} Figure \ref{fig:3223general} is symmetric about the $y$-axis, so it is enough to try two of the vertices for trivalency, say the one with weight $1-\beta_1-\zeta_1$ and the one with weight $-a_1$.
If we want the vertex with weight $1-\beta_1-\zeta_1$ to be the trivalent vertex in one of the Figures \ref{fig:rationalsurgeryontorusknotNpos}, \ref{fig:rationalsurgeryontorusknotNposlessthan1}, \ref{fig:QsurgeryontorusknotN_1is0} and \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, then $l_1=m_1=1$. Hence $\beta_i=\alpha_j=2$ for all $i$ and $j$. Also, $l_2=1$ or $n_1=1$. No matter what choice we make between these, if $(-\beta_2, \dots, -\beta_{m_2})$ is one of the complementary legs, we will end up within the families 1, 2, 3 and 4 in Theorem \ref{thm:torusknotlist}, which we already have surgery coefficients for. If $n_1=1$, all $\alpha$s, $\beta$s and $\zeta$s become $-2$, giving us a star-shaped graph with two legs containing nothing but $-2$s, not allowing us out of the families 1, 2, 3 and 4. We consider the case $l_2=1$ instead. We have $a_1=\alpha_1=2$. Let $b_1=k+2$ (so that the leg $(-\beta_2,\dots,-\beta_{m_2})=(-2,\dots,-2)$ has length $k$). We investigate if $(-\zeta_2, \dots, -\zeta_{n_2})$ and $(-2,-(k+2), -z_1-1,-z_2,\dots,-z_{n_1})$ can be quasi-complementary. We draw the diagram in Figure \ref{fig:makegrownlegsqc3223-1} as before. However, it is impossible to create a difference of one between the length of one leg and the complement of the other leg.
\begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-2} \caption{Riemenschneider diagram of the quasi-complementary legs $(-a_2,\dots,-a_{l_1})$ and $(-b_1, 1-\alpha_1-z_1, -\alpha_2, \dots, -\alpha_{l_2})$ in Figure \ref{fig:3223general} when $m_1=n_1=m_2=1$.} \label{fig:makegrownlegsqc3223-2} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{makegrownlegsqc3223-3} \caption{Riemenschneider diagram of the quasi-complementary legs $(-a_2,\dots,-a_{l_1})$ and $(-b_1, 1-\alpha_1-z_1, -\alpha_2, \dots, -\alpha_{l_2})$ in Figure \ref{fig:3223general} when $m_1=n_1=n_2=1$.} \label{fig:makegrownlegsqc3223-3} \end{figure} Now, consider the vertex labelled $-a_1$ being trivalent. This means that $m_1=1$. Also, either $n_1=1$ or $l_2=1$. First, assume that $n_1=1$. This means that $\beta_1=\cdots=\beta_{m_2}=\zeta_1=\cdots=\zeta_{n_2}=2$. Either $n_2$ or $m_2$ must be $1$. The left leg becomes $(-2,\dots, -2, -3)$. If it is included in a pair of quasi-complementary legs, which we can always ensure since we can choose $(\alpha_2,\dots, \alpha_{l_1})$ freely, we will once again end up within families 1, 2, 3 and 4. In the more interesting case (where the leftmost leg is not one of the quasi-complementary ones), $(a_2,\dots,a_{l_1})$ must be quasi-complementary either to $(2,1+k+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$ (depicted in Figure \ref{fig:makegrownlegsqc3223-2}) or to $(2+k,1+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$ (depicted in Figure \ref{fig:makegrownlegsqc3223-3}). \begin{figure} \centering \includegraphics[width=\linewidth]{coolgrowthof3223} \caption{A strange two-parameter family of rational surgeries on positive torus knots that bound rational homology 4-balls.} \label{fig:coolgrowthof3223} \end{figure} In the first case we get that $(\alpha_1,\dots,\alpha_{l_2})=(3,(k+4)^{[s]})$ and $(a_1,\dots,a_{l_1})=(2,(3,(2)^{[k+1]})^{[s]},2)$, giving us the graph in Figure \ref{fig:coolgrowthof3223}.
This graph is of the shape of Figure \ref{fig:QsurgeryontorusknotN_1isatmostminus2}, so it describes $S^3_n(T(p,\alpha))$ for $\alpha / p= [(k+4)^{[s+1]},2]^-$ and $N=n-p\alpha=[-1,(2)^{[k+1]}]^-=-\frac{2k+3}{k+2}$. This corresponds to family 9 in Theorem \ref{thm:torusknotlist}. This is the most interesting family to date. While we have seen two-parameter families before, this one is more complicated. A different formulation of the result is that $S^3_{p\alpha-\frac{2k+3}{k+2}}(T(p,\alpha))$ bounds a rational homology ball for all $p$ and $\alpha$ described by \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} k+4 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} 2\\ 1 \end{pmatrix} \] for some $s,k \geq 0$. If we fix $s$, then $\alpha$ becomes a degree $s+1$ polynomial in $k$. \begin{figure} \centering \includegraphics[width=\linewidth]{coolgrowthof3223-2} \caption{A strange two-parameter family of rational surgeries on positive torus knots that bound rational homology 4-balls.} \label{fig:coolgrowthof3223-2} \end{figure} In the second case, that is, if $(a_2,\dots,a_{l_1})$ is quasi-complementary to $(2+k,1+\alpha_1,\dots,\alpha_{l_2})$ for some $k\geq 0$, then $(\alpha_1,\dots,\alpha_{l_2})=(k+3,(k+4)^{[s]})$ for some $s\geq 0$. Then the graph becomes as in Figure \ref{fig:coolgrowthof3223-2}. Now $\alpha /p=[(k+4)^{[s+1]}, k+2]^-$, meaning that $S^3_{p\alpha-\frac{2k+3}{k+2}}(T(p,\alpha))$ bounds a rational homology ball for all $p$ and $\alpha$ described by \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} k+4 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} k+2\\ 1 \end{pmatrix} \] for some $s,k \geq 0$. This corresponds to family 10 in Theorem \ref{thm:torusknotlist}. If $l_2=1$ instead of $n_1=1$, then $a_1=\cdots=a_{l_1}=2$. We already know that we can choose surgery coefficients when one of the complementary legs consists of only $-2$s, so we do not need to check that case to formulate Theorem \ref{thm:torusknotlist}.
In fact we do not need to check further: any star-shaped graph with three legs, of which two are quasi-complementary, the third consisting only of $-2$s and the node having weight $-2$, is a positive integral surgery on a positive torus knot, and these have been classified in \cite{GALL}. We remind the reader that we have now shown Theorem \ref{thm:torusknotlist} for families 9 and 10. \subsection{$(-3,-2,-3,-3,-3)$} In Figure \ref{fig:32333general} there are three possibilities for a trivalent vertex. If we choose the vertex of weight $-a_1$, then $m_2=1$ and thus two of the legs are $(-3-k)$ and $(-2,\dots,-2)$. We already know that if one of these is in a quasi-complementary pair, then $(p, \alpha)$ lies in families 1-4 in Theorem \ref{thm:torusknotlist}, so we get nothing new. Choosing the vertex of weight $-(1+b_1)$ to be trivalent, and noting that we land in families 1-4 if the left leg $(-3-k,-2)$ is one of the quasi-complementary ones, does, however, lead us to find that \[ S^3_{p\alpha-\frac{5}{7}}(T(p,\alpha)) \] bounds a rational homology ball for every \[ \begin{pmatrix} \alpha\\ p \end{pmatrix} = \begin{pmatrix} 5 & -1\\ 1 & 0 \end{pmatrix}^{s+1} \begin{pmatrix} 3\\ 1 \end{pmatrix} \] where $s\geq 0$. This corresponds to family 11 in Theorem \ref{thm:torusknotlist}. Finally, choosing the vertex of weight $-(1+\alpha_1)$ to be trivalent gives us $m_1=n_1=1$. If the lower leg $(-2,\dots,-2)$ is included in the pair of quasi-complementary legs, we fall into families 1-4 again. We need to investigate when $(-(3+k),-a_1,-(1+b_1))$ can be quasi-complementary to $((-2)^{[b_1-2]},-3,(-2)^{[k]})$. The Riemenschneider dual of the latter leg is $(-b_1,-(k+2))$, so we need $b_1=a_1$ and $k+2=b_1+1$. Note that we also need $a_1\geq 3$ in order to get a three-legged graph. Let $a=a_1-3$. We get $(-(3+k),-a_1,-(1+b_1))=(-(a+5),-(a+3),-(a+4))$. Our graph is now as in Figure \ref{fig:QsurgeryontorusknotN_1is0}.
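The dual computation above can be checked mechanically. The sketch below assumes the standard Riemenschneider duality rule that if a leg's weights expand to $p/q$ as a negative continued fraction, the dual leg expands $p/(p-q)$; the function names are our own illustrative choices.

```python
from fractions import Fraction

def hj_eval(ds):
    # [d_1,...,d_t]^- = d_1 - 1/(d_2 - 1/(... - 1/d_t))
    x = Fraction(ds[-1])
    for d in reversed(ds[:-1]):
        x = d - 1 / x
    return x

def hj_expand(f):
    # greedy ceiling expansion: inverse of hj_eval for f > 1
    p, q = f.numerator, f.denominator
    out = []
    while q:
        c = -(-p // q)            # ceil(p/q)
        out.append(c)
        p, q = q, c * q - p
    return out

def riem_dual(seq):
    # if seq evaluates to p/q, the dual evaluates to p/(p - q)
    f = hj_eval(seq)
    return hj_expand(f / (f - 1))

# the dual of (2^[b-2], 3, 2^[k]) is (b, k+2), as used in the text
for b in range(2, 7):
    for k in range(0, 5):
        assert riem_dual([2] * (b - 2) + [3] + [2] * k) == [b, k + 2]
```

For example, `riem_dual([3])` returns `[2, 2]` (the $b=2$, $k=0$ case), matching $3/1 \leftrightarrow 3/2=[2,2]^-$.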
Thus $\alpha/p=[a+5,a+3,a+4]^-=\frac{a^3+12a^2+45a+51}{a^2+7a+11}$. This corresponds to family 5 in Theorem \ref{thm:torusknotlist}. In this subsection, we have shown Theorem \ref{thm:torusknotlist} for families 5 and 11. We have thus now shown it for families 1-11. The remaining families follow from \cite{lecuonamontesinos} and \cite{GALL}. \printbibliography \end{document}
\section{Introduction} In recent years, wireless sensor networks (WSNs) have attracted increasing research attention because of their wide application in engineering systems including smart grids, biomedical health monitoring, target tracking and surveillance \citep{article:Sayed2013,article:Yick2008}. Distributed observation and data analysis are ubiquitous in WSNs, where sensors are interconnected to acquire and process the local information from neighbors in order to accomplish a common task. Due to various uncertainties in practical systems, the distributed identification problem over WSNs has become an important topic, in which all the sensors collaboratively estimate an unknown parameter vector of interest by using local noisy measurements. Unlike the centralized method with a fusion center, the distributed scheme has the advantages of flexibility, robustness to node or link failures, as well as a reduced communication load and computational burden. Consequently, theoretical analyses of distributed estimation or filtering algorithms based on several typical distributed strategies, such as the incremental, the diffusion and the consensus strategies, have been provided \citep{Abdolee2016, Jian2017, Stefano2020,FULLY}. In practical scenarios, there exist a large number of sparse systems \citep{Bazerque2010,Vinga2021} in which many elements of the parameter vector do not contribute, or contribute only marginally, to the system (i.e., these elements are zero or near-zero). How to infer the zero elements and identify the nonzero elements in the unknown parameter vector is an important issue in the investigation of sparse systems. Considerable progress has been made on the identification of zero and nonzero elements in an unknown sparse parameter vector \citep{Peng2006, Chiuso2014, Eksioglu2013}, which allows us to obtain a more reliable prediction model.
One direction for the estimation of sparse signals is based on the compressed sensing (CS) theory \citep{can2,com7}, and some estimation algorithms using CS have been proposed (cf., \cite{com12,com13}) in which \emph{a priori} knowledge about the sparsity of the unknown parameter and the regression vectors is required. Another direction is sparse optimization based on the regularization framework, where the objective function is formulated as a combination of the prediction error with a penalty term. The well-known LASSO (least absolute shrinkage and selection operator) is one of the classical algorithms for obtaining sparse signals \citep{Tibshirani1996}, and its variants, such as the adaptive LASSO \citep{Zou2006}, have also been studied. For stochastic dynamic systems with a single sensor, adaptive sparse estimation or filtering algorithms have been studied by combining the recursive least squares (LS) and least mean squares (LMS) algorithms with a regularization term \citep{zhao, Chen_Gu2009}. With the development of sensor networks, some distributed adaptive sparse estimation algorithms have been proposed, and the corresponding stability and convergence analyses have also been investigated under certain signal conditions. For example, \cite{Lorenzo2013} provided the convergence and mean-square performance analysis for the distributed LMS algorithm regularized by convex penalties, where the assumption of independent regressors is required. \cite{Huang2015} presented theoretical analysis on the mean and mean-square performance of the distributed sparse total LS algorithm under the condition that the input signals are independent and identically distributed (i.i.d.). \cite{Shiri2018} analyzed the mean stability of the distributed quasi-sparse affine projection algorithm with independent regression vectors. \cite{WeiHuang2020} analyzed the mean stability of the sparse diffusion LMS algorithm for two regularization terms with independent regression vectors.
However, for typical models such as the ARMAX (autoregressive moving-average with exogenous input) model and the Hammerstein system, the regressors are often generated by the past input and output signals, so it is hard for them to satisfy the aforementioned independency assumptions. In order to relax the independency assumption on the regressors, some attempts have been made for the distributed adaptive estimation or filtering algorithms. For the unknown time-invariant parameter vector, \cite{Gan2019} proposed a distributed stochastic gradient algorithm, and established the strong consistency of the proposed algorithm under a cooperative excitation condition. \cite{wc2} studied the convergence of the diffusion LS algorithm. For the time-varying parameter vector, \cite{XIE2018} provided a cooperative information condition to guarantee the stability of the consensus-based LMS adaptive filters. Moreover, \cite{Gan2021} introduced the collective random observability condition and provided the stability analysis of the distributed Kalman filter algorithm. Nevertheless, these asymptotic results are established as the number of observations obtained by the sensors tends to infinity, which may not be suitable for the sparse identification problem with limited observation data. Inspired by \cite{zhao}, where a sparse identification algorithm for the single sensor case is put forward to infer the set of zero elements with finite observations, we develop a distributed adaptive sparse LS algorithm over sensor networks such that all sensors can cooperatively identify the unknown parameter vector and infer the zero elements with a finite number of observations. The main contributions can be summarized as follows: \begin{itemize} \item We first introduce a local information criterion for each sensor which is formulated as a linear combination of local estimation errors with an $L_1$-regularization term.
By minimizing this criterion, a distributed adaptive sparse identification algorithm is proposed. The upper bounds of the estimation error and the accumulative regret of the adaptive predictor are established, which reduce to the results of the classical distributed LS algorithm \citep{wc2} when the weighting coefficients are equal to zero. \item Then, we introduce a cooperative non-persistent excitation condition on the regressors, under which the distributed sparse LS algorithm can cooperatively identify the set of zero elements with finite observations by properly choosing the weighting coefficients. We remark that the key difference between the proposed algorithm and those in the distributed sparse optimization framework (e.g., \cite{Lorenzo2013}) lies in that the weighting coefficients are generated from the local observation sequences. The cooperative excitation condition is much weaker than the widely used persistent excitation conditions (cf., \cite{Chen2014, Zhang2021,sheng2015}) and the regularity condition \citep{Zou2006}. \item Different from most existing results on distributed sparse algorithms, our theoretical results are obtained without relying on independency assumptions on the regression signals, which makes it possible to apply them to stochastic feedback systems. We also reveal that the whole sensor network can cooperatively accomplish the estimation task, even if no individual sensor can do so alone due to lack of necessary information \citep{zhao}. \end{itemize} The remainder of this paper is organized as follows. In Section \ref{problem_formulation}, we give the problem formulation of this paper; Section \ref{main_results} presents the main results of the paper including the parameter convergence of the algorithm, the regret analysis, and the set convergence of the algorithm; the proofs of the main results are given in Section \ref{proofs_of}. A simulation example is provided in Section \ref{simulaiton}.
Finally, we conclude the paper with some remarks in Section \ref{concluding}. \section{Problem formulation}\label{problem_formulation} \subsection{Basic notations} In this paper, for an $m$-dimensional vector $\bm x$, its $L_p$-norm is defined as $\|\bm x\|_{p}=(\sum^m_{j=1}|\bm x(j)|^p)^{1/p}$ ($1\leq p<\infty$), where $\bm x(j)$ denotes the $j$-th element of $\bm x$. For $p = 1$, $\|\bm x\|_{1}$ is the sum of the absolute values of all the elements in $\bm x$; for $p = 2$, $\|\bm x\|_{2}$ is the Euclidean norm, and we simply write $\|\cdot\|_{2}$ as $\|\cdot\|$. For an $m\times m$-dimensional real matrix $\bm A$, we use $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ to denote the largest and smallest eigenvalues of the matrix. $\|\bm A\|$ denotes the Euclidean norm, i.e., $\|\bm A\|=(\lambda_{\max}(\bm A\bm A^T))^{\frac{1}{2}}$, where the notation $T$ denotes the transpose operator; $\|\bm A\|_F$ denotes the Frobenius norm, i.e., $\|\bm A\|_F=(tr(\bm A^T\bm A))^{\frac{1}{2}}$, where the notation $tr(\cdot)$ denotes the trace of the corresponding matrix. We use $col(\cdot,\cdots,\cdot)$ to denote a vector stacked by the specified vectors, and $diag(\cdot,\cdots,\cdot)$ to denote a block matrix formed in a diagonal manner from the corresponding vectors or matrices. For a symmetric matrix $\bm A$, if all eigenvalues of $\bm A$ are positive (or nonnegative), then it is a positive definite (positive semidefinite) matrix, and we denote this by $\bm A>0~(\geq 0)$. If all elements of a matrix $\bm A=\{a_{ij}\}\in\mathbb{R}^{n\times n}$ are nonnegative, then it is a nonnegative matrix; if furthermore $\sum^n_{j=1}a_{ij}=1$ holds for all $i\in\{1,\cdots,n\}$, then it is called a stochastic matrix. For any two positive scalar sequences $\{a_k\}$ and $\{b_k\}$, by $ a_k = O(b_k)$ we mean that there exists a constant $C > 0$ independent of $k$ such that $a_k \leq C b_k$ holds for all $k \geq 0$, and by $a_k=o(b_k)$ we mean that $\lim_{k\rightarrow \infty}a_k/b_k=0$.
For a convex function $f(x)$, we use $\partial f:x\rightarrow\partial f(x)$ to denote the subdifferential of $f$, which is a convex set. For example, \begin{eqnarray*} \partial |x|=\left\{ \begin{array}{ll} 1, & \hbox{if}\ x>0; \\ -1, & \hbox{if}\ x<0; \\ {[-1,1]}, & \hbox{if} \ x=0. \end{array} \right. \end{eqnarray*} A necessary and sufficient condition for a given point $x$ to belong to the minimum set of $f$ is $0\in\partial f(x)$ (see \cite{Rockafellar1972}). We also need to introduce the sign function $\mathrm{sgn}(x)$, defined as $\mathrm{sgn}(x)=1$ if $x\geq0$ and $\mathrm{sgn}(x)=-1$ if $x<0$. \subsection{Graph theory} We consider a sensor network with $n$ sensors. The communication between sensors is usually modeled as an undirected weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},$ $\mathcal{A})$, where $\mathcal{V}=\{1,2,3,\cdots, n\}$ is the set of sensors (or nodes), $\mathcal{E}\subseteq\mathcal{V} \times\mathcal{V}$ is the edge set, and $\mathcal{A}=\{a_{ij}\}\in\mathbb{R}^{n\times n}$ is the weighted adjacency matrix. The elements of the adjacency matrix $\mathcal{A}$ satisfy $a_{ij}>0$ if $(i, j)\in \mathcal{E}$ and $a_{ij}=0$ otherwise. Here we assume that the matrix $\mathcal{A}$ is symmetric and stochastic. For the sensor $i$, the set of its neighbors is denoted as $N_i=\{j\in \mathcal{V}|(i,j)\in\mathcal{E} \}$, and the sensor $i$ itself belongs to $N_i$. The sensor $i$ can exchange information with its neighboring sensors. A path of length $\ell$ is a sequence of nodes $\{i_1,..., i_{\ell}, i_{\ell+1}\}$ such that $(i_h, i_{h+1})\in \mathcal{E}$ with $1\leq h\leq \ell$. The graph $\mathcal{G}$ is called connected if there is a path between any two sensors. The diameter $D_\mathcal{G}$ of the graph $\mathcal{G}$ is defined as the maximum shortest path length between any two sensors. \subsection{Observation model} In this paper, we consider the parameter identification problem in a network consisting of $n$ sensors labeled $1,\cdots,n$.
Assume that the data $\{y_{t,i}, \bm\varphi_{t,i}, t=1,2,\cdots\}$ collected by the sensor $i$ obeys the following discrete-time stochastic regression model, \begin{eqnarray} y_{t+1,i}=\bm \varphi_{t,i}^T\bm \theta+w_{t+1,i}, ~t=0, 1, 2, \cdots, \label{model} \end{eqnarray} where $y_{t,i}$ is the scalar observation or output of the sensor $i$ at time $t$, $\bm\varphi_{t,i}$ is the $m$-dimensional stochastic regression vector which may be a function of current and past inputs and outputs, $\bm\theta\in \mathbb{R}^m$ is an unknown $m$-dimensional parameter to be estimated, and $\{w_{t,i}\}$ is the noise sequence. The above model (\ref{model}) includes many parameterized systems, such as the ARX system and the Hammerstein system. We further denote the parameter vector $\bm\theta$ and the index set of its zero elements by \begin{equation} \begin{split} \bm\theta&\triangleq(\bm\theta(1),\cdots,\bm\theta(m))^T,\\ H^*&\triangleq\{l\in \{ 1,\cdots,m\}|\bm\theta(l)=0\}.\label{sparse5} \end{split} \end{equation} Our problem is to design a distributed adaptive estimation algorithm such that all sensors cooperatively infer the set $H^*$ in a finite number of steps and identify the unknown parameter $\bm\theta$ by using the stochastic regression vectors and the observation signals from their neighbors, i.e., $\{\bm \varphi_{k,j}, y_{k+1,j}\}^t_{k=1}~(j\in N_{i})$.
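As an illustration, the observation model (\ref{model}) is straightforward to simulate; the network size, parameter dimension, sparse $\bm\theta$ and noise level below are arbitrary choices of ours, not values taken from the paper's simulation section.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 5, 8, 200                    # sensors, parameter dimension, horizon
theta = np.zeros(m)
theta[:3] = [1.0, -0.5, 2.0]           # sparse unknown parameter (d = 3 nonzeros)
# index set of zero elements (0-based here, whereas the paper counts from 1)
H_star = {l for l in range(m) if theta[l] == 0.0}

phi = rng.standard_normal((T, n, m))   # regressors phi_{t,i}
w = 0.1 * rng.standard_normal((T, n))  # noise w_{t+1,i}
# y_{t+1,i} = phi_{t,i}^T theta + w_{t+1,i}, stacked over all t and i
y = np.einsum('tim,m->ti', phi, theta) + w
```

Each sensor $i$ then only has access to the slices `phi[:, j]`, `y[:, j]` for its neighbors $j\in N_i$.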
\section{The main results}\label{main_results} \subsection{Parameter convergence} Before designing the algorithm to cooperatively estimate the unknown parameter vector and infer the set $H^*$, we first introduce the following classical distributed least squares algorithm to estimate the unknown parameter $\bm \theta$ in (\ref{sparse5}), i.e., \begin{eqnarray} \bm\theta_{t+1,i}=\bm P_{t+1,i}\left(\sum^n_{j=1} \sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}y_{k+1,j}\right),\label{theta} \end{eqnarray} where $ \bm P_{t+1,i}=\left(\sum^n_{j=1}\sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}\bm\varphi^T_{k,j}\right)^{-1}$ and $a^{(t+1-k)}_{ij}$ is the $i$-th row, $j$-th column entry of the matrix $\mathcal{A}^{t+1-k}$. It is clear that the matrix $\bm P_{t+1,i}$ can be equivalently written in the following recursive form, \begin{gather} \bm P^{-1}_{t+1,i}=\sum_{j\in N_i}a_{ij}({{\bm P}}^{-1}_{t,j}+\bm\varphi_{t,j}\bm\varphi^T_{t,j}).\label{P_inverse} \end{gather} Thus, the algorithm (\ref{theta}) also has the following recursive expression, \begin{gather} \bm \theta_{t+1,i}=\bm P_{t+1,i}\sum_{j\in N_i}a_{ij}(\bm P^{-1}_{t,j}\bm \theta_{t,j}+\bm \varphi_{t,j}y_{t+1,j}). \label{theta1} \end{gather} Note that in the above derivation, we assume that the matrix $\sum^n_{j=1}\sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}\bm\varphi^T_{k,j}$ is invertible, which is usually not the case for small $t$. To solve this problem, we take the initial matrix $\bm P_{0,i}$ to be positive definite. By (\ref{P_inverse}), we have \begin{eqnarray} \bm P^{-1}_{t+1,i}=\sum^n_{j=1}\sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}\bm\varphi^T_{k,j} +\sum^n_{j=1}a^{(t+1)}_{ij}\bm P^{-1}_{0,j}.\label{sparse16} \end{eqnarray} This modification does not affect the analysis of the asymptotic properties of the estimates of the distributed least squares algorithm.
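For concreteness, the recursions (\ref{P_inverse}) and (\ref{theta1}) can be sketched as follows, propagating $\bm P^{-1}_{t,i}$ and $\bm b_{t,i}\triangleq\bm P^{-1}_{t,i}\bm\theta_{t,i}$ instead of $\bm\theta_{t,i}$ itself so that only one matrix solve is needed at the end. This is a minimal illustration with our own variable names, assuming $\mathcal{A}$ is supplied as a symmetric stochastic matrix and $\bm P_{0,i}=\bm I$.

```python
import numpy as np

def distributed_ls(phi, y, A):
    """phi: (T, n, m) regressors, y: (T, n) observations, A: (n, n) symmetric
    stochastic weight matrix.  Returns the estimates theta_{T,i}, i = 1,...,n."""
    T, n, m = phi.shape
    Pinv = np.stack([np.eye(m)] * n)          # P_{0,i}^{-1} with P_{0,i} = I > 0
    b = np.zeros((n, m))                      # b_{t,i} = P_{t,i}^{-1} theta_{t,i}
    for t in range(T):
        outer = np.einsum('jk,jl->jkl', phi[t], phi[t])   # phi_{t,j} phi_{t,j}^T
        Pinv = np.einsum('ij,jkl->ikl', A, Pinv + outer)  # fuse over neighbors
        b = A @ (b + y[t][:, None] * phi[t])              # fuse innovations
    # theta_{T,i} = P_{T,i} b_{T,i}
    return np.stack([np.linalg.solve(Pinv[i], b[i]) for i in range(n)])
```

With exciting regressors, each sensor's estimate approaches the true parameter as $T$ grows, in line with the consistency result of \cite{wc2} recalled below.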
In fact, the algorithm (\ref{theta1}) can be obtained by minimizing the following linear combination of the estimation errors $\sigma_{t+1,i}(\bm\beta)$ between the observation signals and the predictions of the local neighbors, \begin{eqnarray} \sigma_{t+1,i}(\bm\beta)&=&\sum_{j\in N_i}a_{ij}\bigg(\sigma_{t,j}(\bm\beta) +[y_{t+1,j}-{\bm\beta}^T\bm\varphi_{t,j}]^2\bigg),\label{least} \end{eqnarray} with $\sigma_{0,i}(\bm\beta)=0$. That is, $\bm\theta_{t+1,i}\triangleq \arg\min_{\bm\beta}\sigma_{t+1,i}(\bm\beta)$. Set \begin{eqnarray*} \bm e_{t+1}(\bm\beta)&=&col\{(y_{t+1,1}-{\bm\beta}^T\bm\varphi_{t,1})^2, ...,(y_{t+1,n}-{\bm\beta}^T\bm\varphi_{t,n})^2\},\\ \bm{\sigma}_{t}(\bm\beta)&=&col\{\sigma_{t,1}(\bm\beta),...,\sigma_{t,n}(\bm\beta)\}. \end{eqnarray*} Hence by (\ref{least}), we have \begin{eqnarray*} \bm{\sigma}_{t+1}(\bm\beta)&=&\mathcal{A}\bm{\sigma}_{t}(\bm\beta)+\mathcal{A}\bm e_{t+1}(\bm\beta)\\ &=&\mathcal{A}^2\bm{\sigma}_{t-1}(\bm\beta)+\mathcal{A}^2\bm e_{t}(\bm\beta)+\mathcal{A}\bm e_{t+1}(\bm\beta)\\ &=&\sum^t_{k=0}\mathcal{A}^{t+1-k}\bm e_{k+1}(\bm\beta), \end{eqnarray*} which implies that \begin{eqnarray} \sigma_{t+1,i}(\bm\beta)=\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} [y_{k+1,j}-{\bm\beta}^T\bm\varphi_{k,j}]^2. \label{least2} \end{eqnarray} It is shown by \cite{wc2} that the distributed least squares algorithm (\ref{theta1}) can generate a consistent estimate of the unknown parameter as the number of data tends to infinity. However, for sparse unknown parameter vectors (i.e., when there are many zero elements in $\bm\theta$), it is hard to infer the zero elements in a finite number of steps due to the limitation of observations in practice.
In order to solve this issue, we introduce the following local information criterion with $L_1$-regularization to identify the unknown sparse parameters and infer the set $H^*$, \begin{eqnarray} J_{t+1,i}(\bm\beta)=\sigma_{t+1,i}(\bm\beta)+\alpha_{t+1,i}\|\bm\beta\|_1,\label{penalty} \end{eqnarray} where $\|\cdot\|_1$ is the $L_1$-norm, $\alpha_{t+1,i}$ is a weighting coefficient chosen to satisfy $\alpha_{t+1,i}=o(\lambda_{\min}(\bm P^{-1}_{t+1,i}))$, and $\sigma_{t+1,i}(\bm\beta)$ is recursively defined by (\ref{least}). For the sensor $i$, we can obtain the following distributed sparse LS algorithm to estimate the unknown parameter $\bm \theta$ by minimizing $ J_{t+1,i}(\bm\beta)$, i.e., \begin{eqnarray} \bm\beta_{t+1,i}= \arg\min_{\bm\beta} J_{t+1,i}(\bm\beta).\label{beta} \end{eqnarray} \begin{remark} For the sensor $i$, the coefficients $\alpha_{t+1,i}$ in (\ref{penalty}) can be dynamically adjusted by using the local observation sequence $\{\bm\varphi_{k,j}, y_{k+1,j}, j\in N_{i}\}^t_{k=1}$, which makes (\ref{penalty}) an adaptive LASSO criterion (cf., \cite{Zou2006}). We show that by properly choosing the coefficient $\alpha_{t+1,i}$, we can identify the set of the zero elements in the unknown sparse parameter vector $\bm\theta$ with a finite number of observations (see Theorem \ref{theorem2}). \end{remark} In the following, we will first investigate the upper bound of the estimation error generated by (\ref{beta}), which provides the basis for the set convergence of zero elements. For this purpose, we need to introduce the following assumptions on the network topology and the observation noise. \begin{assumption}\label{a1} The communication graph $\mathcal{G}$ is connected. \end{assumption} \begin{remark} For the weighted adjacency matrix $\mathcal{A}$ of the graph $\mathcal{G}$, we denote $\mathcal{A}^l\triangleq(a_{ij}^{(l)})$ with $l\geq 1$.
By the theory of products of stochastic matrices, we see that under Assumption \ref{a1}, $\mathcal{A}^l$ is a positive matrix for $l\geq D_{\mathcal{G}}$, i.e., for any $i$ and $j$, $a_{ij}^{(l)}>0$. \end{remark} \begin{assumption}\label{a2} For any $i\in\{1,\cdots,n\}$, the noise sequence $\{w_{k,i},\mathscr{F}_k\}$ is a martingale difference sequence, and there exists a constant $\delta > 2$ such that~ $$ \sup_{k\geq 0}E[|w_{k+1,i}|^{\delta}|\mathscr{F}_k]<\infty, {~~\rm a.s.}, $$ where $\mathscr{F}_t=\sigma\{\bm \varphi_{k,i}, w_{k,i}, k\leq t, i=1,\cdots,n\}$ is a sequence of nondecreasing $\sigma$-algebras and $E[\cdot|\cdot]$ denotes the conditional expectation operator. \end{assumption} We can verify that i.i.d. zero-mean bounded or Gaussian noises $\{w_{k,i}\}$ that are independent of the regressors satisfy Assumption \ref{a2}. Assume that there are $d$ nonzero elements in the unknown parameter vector $\bm\theta$. Without loss of generality, we assume $\bm\theta=(\bm\theta(1),\cdots,\bm\theta(d),\bm\theta(d+1),\cdots,\bm\theta(m))^T$ with $\bm\theta(l)\neq 0, l=1,\cdots,d,$ and $\bm\theta(j)=0, j=d+1,\cdots,m.$ For the estimate $\bm\beta_{t+1,i}$ obtained by the distributed sparse LS algorithm (\ref{beta}), we denote the estimation error as \begin{gather} \widetilde{\bm\beta}_{t+1,i}=\bm\beta_{t+1,i}-\bm\theta. \label{sparse15} \end{gather} Then we have the following result concerning the upper bound of the estimation error $\widetilde{\bm\beta}_{t,i}$. \begin{theorem}\label{theorem1} Let $\bm P^{-1}_{t+1,i}$ be generated by (\ref{P_inverse}) with an arbitrary initial matrix $\bm P_{0,i}>0$.
Then under Assumptions \ref{a1} and \ref{a2}, we have for all $i\in\{1,\cdots,n\}$ \begin{eqnarray*} \|\widetilde{\bm\beta}_{t+1,i}\|=O\left(\frac{\alpha_{t+1,i}} {\lambda_{\min}(\bm P^{-1}_{t+1,i})}+\sqrt{\frac{\log r_t}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}}\right), {\rm a.s.} \end{eqnarray*} where $r_t=\max\limits_{1\leq i\leq n}\lambda_{\max}\{\bm P^{-1}_{0,i}\}+\sum^n_{i=1}\sum^t_{k=0}\|\bm\varphi_{k,i}\|^2$. \end{theorem} The proof of Theorem \ref{theorem1} is provided in Subsection \ref{proof:theorem1}. \begin{remark}\label{remark_two} By (\ref{sparse16}), we have for $t\geq D_{\mathcal{G}}$, \begin{gather} \lambda_{\min}(\bm P^{-1}_{t+1,i})\geq a_{\min}\lambda^{n,t}_{\min},\label{sparse28} \end{gather} where $a_{\min}\triangleq \min_{i,j\in\mathcal{V}}a^{(D_{\mathcal{G}})}_{ij}>0$ and \begin{eqnarray*} \lambda^{n,t}_{\min}= \lambda_{\min}\left\{\sum^n_{j=1}\bm P^{-1}_{0,j}+\sum^n_{j=1}\sum^{t-D_{\mathcal{G}}+1}_{k=0}\bm \varphi_{k,j}{\bm \varphi}^T_{k,j}\right\}.\end{eqnarray*} From Theorem \ref{theorem1}, if the coefficient $\alpha_{t+1,i}$ is chosen to satisfy $\alpha_{t+1,i}=o(\lambda_{\min}(\bm P^{-1}_{t+1,i}))$ and the regression vectors satisfy the weakest possible cooperative excitation condition $\log r_t=o(\lambda^{n,t}_{\min})$ (cf., \cite{wc2}), then the almost sure convergence of the distributed sparse LS algorithm can be obtained, i.e., $\bm\beta_{t+1,i}\xrightarrow[{t\rightarrow\infty}] {} \bm\theta$. \end{remark} \subsection{Analysis of the regret} Regret is one of the key metrics for evaluating the performance of online learning algorithms \citep{ieee5, ieee6}.
For each sensor $i\in\{1,\cdots,n\}$, we construct an adaptive predictor $\hat{y}_{t+1,i}$ by using the estimate $\bm \beta_{t,i}$ defined in (\ref{beta}) at the time instant $t$, $$\hat{y}_{t+1,i}=\bm \varphi^T_{t,i}\bm \beta_{t,i}.$$ The prediction error can be described by the following loss function $\rho_{t+1,i}(\bm \beta_{t,i})$, i.e., \begin{eqnarray*} \rho_{t+1,i}(\bm \beta_{t,i})&=& E\left[(y_{t+1,i}-\hat{y}_{t+1,i})^2|\mathscr{F}_t\right]\\ &=& E\left[(y_{t+1,i}-\bm \varphi^T_{t,i}\bm \beta_{t,i})^2|\mathscr{F}_t\right]. \end{eqnarray*} Then the cumulative regret over the whole network is defined as \begin{eqnarray*} R_{t}=\sum^n_{i=1}\sum^t_{k=0}\rho_{k+1,i}(\bm \beta_{k,i})-\min_{\bm \zeta\in \mathbb{R}^m}\sum^n_{i=1}\sum^t_{k=0}\rho_{k+1,i}(\bm \zeta). \end{eqnarray*} The regret defined above reflects the difference between the cumulative loss $\rho_{k+1,i} (\bm \beta_{k,i})$ incurred when the unknown parameter is estimated by (\ref{beta}) and the optimal static value of the cumulative loss function $\rho_{k+1,i} (\cdot)$. Due to the existence of noise, it is generally desired that the average regret $R_{t}/nt$ is small or even goes to zero as $t\rightarrow\infty$. In the following, we analyze the asymptotic property of the regret $R_{t}$ over the sensor network.
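To make the regret concrete before the analysis, the following minimal sketch considers a hypothetical single-sensor special case with i.i.d. Gaussian regressors, standard recursive LS in place of (\ref{beta}), and purely illustrative numerical values; it accumulates the excess prediction loss of the adaptive predictor, whose average decays as $t$ grows.

```python
import numpy as np

# Minimal single-sensor illustration of the regret defined above:
# rho_{k+1}(beta_k) - min_zeta rho_{k+1}(zeta) reduces to (phi_k^T (theta - beta_k))^2,
# since the minimum over zeta is attained at zeta = theta.
rng = np.random.default_rng(0)
m, T = 3, 2000
sigma = np.sqrt(0.1)                     # noise standard deviation (illustrative)
theta = np.array([0.8, 1.6, 0.0])        # true parameter (illustrative)

beta = np.zeros(m)                       # current estimate beta_k
P = np.eye(m)                            # P_k = (P_0^{-1} + sum phi phi^T)^{-1}
regret, avg_regret = 0.0, []
for t in range(T):
    phi = rng.standard_normal(m)
    y = phi @ theta + sigma * rng.standard_normal()
    regret += (phi @ (theta - beta)) ** 2          # instantaneous excess loss
    avg_regret.append(regret / (t + 1))
    # standard recursive LS update (no l1 term, for this illustration only)
    g = P @ phi / (1.0 + phi @ P @ phi)
    beta = beta + g * (y - phi @ beta)
    P = P - np.outer(g, phi @ P)

print(avg_regret[49], avg_regret[-1])    # average regret shrinks as t grows
```

With excited regressors the cumulative regret grows only logarithmically, so the printed average at $t=2000$ is far below the average over the first $50$ steps.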
By Assumption \ref{a2} and the fact $\bm \varphi^T_{k,i}\widetilde{\bm \beta}_{k,i}\in\mathscr{F}_k$, we have \begin{eqnarray} R_t&=&\sum^n_{i=1}\sum^t_{k=0}E((y_{k+1,i}-\hat{y}_{k+1,i})^2|\mathscr{F}_k)\nonumber\\ &&-\min_{\bm \zeta\in\mathbb{R}^m}\sum^n_{i=1}\sum^t_{k=0}E((y_{k+1,i}-\bm \varphi^T_{k,i}\bm \zeta)^2|\mathscr{F}_k)\nonumber\\ &=&\sum^n_{i=1}\sum^t_{k=0}E((w_{k+1,i}-\bm \varphi^T_{k,i}\widetilde{\bm \beta}_{k,i})^2|\mathscr{F}_k)\nonumber\\ &&-\min_{\bm \zeta\in\mathbb{R}^m}\sum^n_{i=1}\sum^t_{k=0}E((\bm \varphi^T_{k,i}({\bm \theta}-\bm \zeta)+w_{k+1,i})^2|\mathscr{F}_k)\nonumber\\ &=&\sum^n_{i=1}\sum^t_{k=0}(\bm \varphi^T_{k,i}\widetilde{\bm \beta}_{k,i})^2.\label{ee1} \end{eqnarray} \begin{theorem}\label{theorem4} Under Assumption \ref{a2}, if ~$\bm\Phi^T_{t}\bm P_t\bm\Phi_{t}=O(1)$, and $\alpha_{t,i}=O\Big(\sqrt{\lambda_{\min}(\bm P^{-1}_{t,i})}\Big)$, we have \begin{eqnarray*} R_t=O(\log r_t), ~~{\rm a.s.} \end{eqnarray*} where ~ $\bm \Phi_t\triangleq diag\{\bm\varphi_{t,1},\cdots,\bm\varphi_{t,n}\}$, $\bm P_t\triangleq diag\{\bm P_{t,1},\cdots,\bm P_{t,n}\},$ and $r_t$ is defined in Theorem \ref{theorem1}. \end{theorem} The proof of Theorem \ref{theorem4} is given in Subsection \ref{proof:theorem2}. \begin{remark} We know that for bounded regressors $\bm\varphi_{t,i}$, $r_t$ will be of the order $O(t)$. Consequently, by Theorem \ref{theorem4}, the upper bound of the regret $R_t$ over the sensor network is sublinear with respect to $nt$, i.e., $R_{t}/nt=O({\log t}/{t})\rightarrow 0$ as $t\rightarrow \infty$. The analysis of the regret does not require any excitation condition on the regression signals. Theorem \ref{theorem1} and Theorem \ref{theorem4} reduce to the results of the classical distributed LS algorithm in \cite{wc2} when $\alpha_{t+1,i}$ is equal to zero. \end{remark} \subsection{Set convergence} In the last two subsections, we have obtained the asymptotic results concerning the parameter convergence and the regret analysis.
Inspired by \cite{zhao}, we propose the following distributed sparse adaptive algorithm (Algorithm \ref{algorithm2}) to identify the set of zero elements with a finite number of observations by choosing $\alpha_{t,i}$ adaptively. \begin{algorithm}[htb] {\caption{ }\label{algorithm2} \textbf{Step 1:} Based on $\{\bm \varphi_{k,j}, y_{k+1,j}\}^t_{k=1}~(j\in N_{i})$, begin with an initial vector $\bm\theta_{0,i}$ and an initial matrix $\bm P_{0,i}>0$, compute the matrix $\bm P^{-1}_{t+1,i}$ defined by (\ref{P_inverse}) and the local estimate $\bm\theta_{t+1,i}$ of $\bm\theta$ by (\ref{theta1}), and further define \begin{eqnarray} &&\hat{\bm \theta}_{t+1,i}(l)\nonumber\\ &\triangleq& \bm\theta_{t+1,i}(l)+{\rm{sgn}}(\bm\theta_{t+1,i}(l) )\sqrt{\frac{\log(\lambda_{\max}(\bm P^{-1}_{t+1,i}))}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}},\label{sparse6} \end{eqnarray} \textbf{Step 2:} Choose a positive sequence $\{\alpha_{k,i}\}^{t+1}_{k=1}$ satisfying \begin{eqnarray} &&\alpha_{k,i}=o(\lambda_{\min}(\bm P^{-1}_{k,i})),\nonumber\\ &&\lambda_{\max}(\bm P^{-1}_{k,i})\sqrt{\frac{\log(\lambda_{\max}(\bm P^{-1}_{k,i}))}{\lambda_{\min}(\bm P^{-1}_{k,i})}}=o (\alpha_{k,i}).\label{sparse27} \end{eqnarray} \textbf{Step 3:} Optimize the convex objective local function, \begin{eqnarray} \bar J_{t+1,i}(\bm\xi)=\sigma_{t+1,i}(\bm\xi)+\alpha_{t+1,i}\sum^m_{l=1}\frac{1}{|\hat{\bm \theta}_{t+1,i}(l)|}|\bm \xi(l)| \label{sparse4} \end{eqnarray}} with $\sigma_{t+1,i}(\bm\xi)$ defined in (\ref{least}), and obtain \begin{eqnarray} \bm\xi_{t+1,i}&=&(\bm\xi_{t+1,i}(1),\cdots,\bm\xi_{t+1,i}(m))^T\nonumber\\ &\triangleq&\arg\min_{\bm\xi}\bar J_{t+1,i}(\bm\xi),\label{kkk}\\ H_{t+1,i}&\triangleq&\{l=1,\cdots,m|\bm\xi_{t+1,i}(l)=0\}.\label{sparse34} \end{eqnarray} \end{algorithm} In the convex objective function (\ref{sparse4}), different components in $\bm\xi$ are assigned different weights, which is an adaptive LASSO estimator since the weights ${\alpha_{t+1,i}}/{\hat{\bm \theta}_{t+1,i}(l)}$ are 
generated from the local observation sequence $\{\bm\varphi_{k,j}, y_{k+1,j}, j\in N_{i}\}^t_{k=1}$. The term $\hat{\bm \theta}_{t+1,i}(l)$ appearing in the denominator satisfies $|\hat{\bm \theta}_{t+1,i}(l)|\geq \sqrt{\frac{\log(\lambda_{\max}(\bm P^{-1}_{t+1,i}))}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}}>0$, which makes (\ref{sparse4}) well defined. Moreover, if $\hat{\bm \theta}_{t+1,i}(l)\rightarrow 0$ for some $l\in\{1,\cdots,m\}$ and hence $1/|\hat{\bm \theta}_{t+1,i}(l)|\rightarrow \infty$, then the corresponding minimizer $\bm\xi_{t+1,i}(l)$ should be exactly zero. This provides an intuitive explanation for the sparse solution of Algorithm \ref{algorithm2} with a finite number of observations. The set $H_{t+1,i}$ generated from the convex optimization problem (\ref{kkk}) serves as the estimate for the set $H^*$ defined in (\ref{sparse5}). Typical algorithms, such as basis pursuit and interior-point methods, exist in the literature to solve the convex optimization problem (\ref{kkk}) (see, e.g., \cite{kim,Gill}). We introduce the following cooperative non-persistent excitation condition to study the convergence of the sets of zero elements in the unknown sparse parameter vector with a finite number of observations, which is different from the asymptotic analysis given in the last two subsections. \begin{assumption} (Cooperative Non-Persistent Excitation Condition) \label{a4} The following condition is satisfied, \begin{eqnarray} \frac{r_t}{\lambda^{n,t}_{\min}}\sqrt{\frac{\log(r_t)}{\lambda^{n,t}_{\min}}}\xrightarrow[{t\rightarrow\infty}] {} 0, {~~~\rm a.s.}\label{cooperative} \end{eqnarray} where $r_t$ and $\lambda^{n,t}_{\min}$ are respectively defined in Theorem \ref{theorem1} and Remark \ref{remark_two}. \end{assumption} \begin{remark} For the single sensor case with $n=1$ and $ D_{\mathcal{G}}=1$, the condition (\ref {cooperative}) reduces to the excitation condition given by \cite{zhao}.
Assumption \ref{a4} reveals the cooperative effect of multiple sensors in the sense that the condition (\ref{cooperative}) can make it possible for Algorithm \ref{algorithm2} to estimate the unknown parameter $\bm\theta$ and the sets of zero elements through the cooperation of multiple sensors even if no individual sensor can do so due to lack of adequate excitation, which is also shown in the simulation example given in Section \ref{simulaiton}. \end{remark} For the set $H_{t,i}$ obtained by (\ref{sparse34}), we get the following finite-time convergence result, which shows that the set of zero elements in $\bm\theta$ can be correctly identified with a finite number of observations. \begin{theorem}\label{theorem2} (Set convergence) ~Under Assumptions \ref{a1}-\ref{a4}, if $\log{r_t}=O(\log{r_{t-D_{\mathcal{G}}+1}})$, then there exists a positive integer $T_0$ (which may depend on the sample $\omega$) such that for all $i\in\{1,\cdots,n\}$ \begin{eqnarray*} \bm\xi_{t+1,i}(d+1)=\cdots=\bm\xi_{t+1,i}(m)=0, ~~t\geq T_0. \end{eqnarray*} That is, $H_{t+1,i}=H^*$ for $t\geq T_0$, where $H^*$ and $H_{t+1,i}$ are defined in (\ref{sparse5}) and (\ref{sparse34}). \end{theorem} The detailed proof of Theorem \ref{theorem2} is given in Subsection \ref{proof:Theorem3}. \begin{remark} From Theorem \ref{theorem2} (together with Theorems \ref{theorem1} and \ref{theorem4}), we see that the parameter convergence, regret analysis, and set convergence results in this paper are derived without using any independence assumption on the regression vectors, which makes it possible to apply our algorithm to practical feedback systems. \end{remark} \section{Proofs of the main results}\label{proofs_of} In order to prove the main theorems of the paper, we first give two preliminary lemmas.
Denote the estimation error of the classical distributed LS algorithm (\ref{theta1}) as $\widetilde{\bm \theta}_{t+1,i}\triangleq\bm\theta_{t+1,i}-\bm\theta$, and $\widetilde{\bm \Theta}_t=col\{\widetilde{\bm \theta}_{t,1},...,\widetilde{\bm \theta}_{t,n}\}$. \begin{lemma}{ \rm\citep{wc2}}\label{lemma1} Under Assumptions \ref{a1} and \ref{a2}, we have the following results for the classical distributed LS algorithm (\ref{theta1}), \begin{eqnarray*} &1)& \sum^n_{i=1}\|\widetilde{\bm\theta}_{t,i}\|^2=O\left(\frac{\log r_t}{\lambda^{n,t}_{\min}}\right),\\ &2)& \sum^t_{k=0}\lambda_{\max}(\bm d_k\bm \Phi^T_k\bm P_k\bm \Phi_k)=O(\log r_t),\\ &3)& \sum^t_{k=0}\widetilde{\bm \Theta}^T_k\bm \Phi_{k}\bm d_k\bm \Phi^T_{k}\widetilde{\bm \Theta}_k=O(\log r_t), \end{eqnarray*} where $\bm P_k$ and $\bm \Phi_{k}$ are defined in Theorem \ref{theorem4}, $r_t\triangleq\max\limits_{1\leq i\leq n}\lambda_{\max}\{\bm P^{-1}_{0,i}\}+\sum^n_{i=1}\sum^t_{k=0}\|\bm\varphi_{k,i}\|^2$ and $\bm d_t\triangleq diag\Big\{\frac{1}{1+\bm\varphi^T_{t,1}\bm P_{t,1}\bm\varphi_{t,1}},...,\frac{1}{1+\bm\varphi^T_{t,n}\bm P_{t,n}\bm\varphi_{t,n}}\Big\}.$ \end{lemma} The following lemma provides an upper bound for the cumulative summation of the noises. \begin{lemma}{ \rm\citep{Gan2022}}\label{lemma3} Under Assumptions \ref{a1} and \ref{a2}, for any $i\in\{1,...,n\}$, we have \begin{eqnarray*} &&\Bigg\|\bm P^{\frac{1}{2}}_{t,i}\left(\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}w_{k+1,j}\right)\Bigg\|= O(\sqrt{\log(r_{t})}). 
\end{eqnarray*} \end{lemma} \subsection{Proof of Theorem \ref{theorem1}}\label{proof:theorem1} \begin{proof} By noting that $\bm\beta_{t+1,i}$ is the minimizer of $ J_{t+1,i}(\bm\beta)$, it follows that \begin{eqnarray} 0&\geq& J_{t+1,i}(\bm\beta_{t+1,i})-J_{t+1,i}(\bm\theta)\nonumber\\ &=& J_{t+1,i}(\widetilde{\bm\beta}_{t+1,i}+\bm\theta)-J_{t+1,i}(\bm\theta).\label{sparse9} \end{eqnarray} Since $\bm\theta(j)=0$, $j=d+1,\cdots,m$, by (\ref{model}), (\ref{least2}) and (\ref{penalty}), we have \begin{eqnarray} &&J_{t+1,i}(\widetilde{\bm\beta}_{t+1,i}+\bm\theta)\nonumber\\ &=&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} [w_{k+1,j}-\widetilde{\bm\beta}^T_{t+1,i}\bm\varphi_{k,j}]^2\nonumber\\ &&+\alpha_{t+1,i}\sum^d_{l=1}|{\widetilde{\bm\beta}_{t+1,i}(l)+\bm\theta(l)}|+\alpha_{t+1,i}\sum^{m}_{l=d+1}|{\widetilde{\bm\beta}_{t+1,i}(l)}|\nonumber\\ &=&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} w^2_{k+1,j}\nonumber\\ &&-2\widetilde{\bm\beta}^T_{t+1,i}\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}w_{k+1,j}\nonumber\\ &&+\widetilde{\bm\beta}^T_{t+1,i}\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j} \bm\varphi^T_{k,j}\widetilde{\bm\beta}_{t+1,i} \nonumber\\ &&+\alpha_{t+1,i}\sum^d_{l=1}|{\widetilde{\bm\beta}_{t+1,i}(l)+\bm\theta(l)}|+\alpha_{t+1,i}\sum^{m}_{l=d+1}|{\widetilde{\bm\beta}_{t+1,i}(l)}|.\nonumber\\ \label{sparse7} \end{eqnarray} Similarly, we have \begin{eqnarray} &&J_{t+1,i}(\bm\theta)\nonumber\\ &=&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} [y_{k+1,j}-{\bm\theta}^T\bm\varphi_{k,j}]^2+\alpha_{t+1,i}\sum^d_{l=1}|\bm\theta(l)|\nonumber\\ &=&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} w^2_{k+1,j}+\alpha_{t+1,i}\sum^d_{l=1}|\bm\theta(l)|.\label{sparse8} \end{eqnarray} Hence by (\ref{sparse7}) and (\ref{sparse8}), we have \begin{eqnarray} &&J_{t+1,i}(\widetilde{\bm\beta}_{t+1,i}+\bm\theta)-J_{t+1,i}(\bm\theta)\nonumber\\ &\geq&\widetilde{\bm\beta}^T_{t+1,i}\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j} \bm\varphi^T_{k,j}\widetilde{\bm\beta}_{t+1,i}\nonumber\\
&&-2\widetilde{\bm\beta}^T_{t+1,i}\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}w_{k+1,j}\nonumber\\ &&+\alpha_{t+1,i}\sum^d_{l=1}(|{\widetilde{\bm\beta}_{t+1,i}(l)+\bm\theta(l)}|-|\bm\theta(l)|) \nonumber\\ &\triangleq& M^{(1)}_{t+1,i}-2M^{(2)}_{t+1,i}+M^{(3)}_{t+1,i}.\label{sparse10} \end{eqnarray} In the following, we estimate $M^{(1)}_{t+1,i}$, $M^{(2)}_{t+1,i}$ and $M^{(3)}_{t+1,i}$ separately. Denote $\bm V_{t+1,i}=\bm P^{-\frac{1}{2}}_{t+1,i}\widetilde{\bm\beta}_{t+1,i}$. By Lemma \ref{lemma3}, we have \begin{eqnarray*} && |M^{(2)}_{t+1,i}|\nonumber\\ &=&\Big|\widetilde{\bm\beta}^T_{t+1,i}\bm P^{-\frac{1}{2}}_{t+1,i}\bm P^{\frac{1}{2}}_{t+1,i}\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}w_{k+1,j}\Big|\nonumber\\ &=&O\big(\sqrt{\log(r_{t})}\big)\|\bm V_{t+1,i}\|. \end{eqnarray*} Hence, there exists a positive constant $c_1$ such that for large $t$, \begin{eqnarray} &&M^{(1)}_{t+1,i}-2M^{(2)}_{t+1,i}\nonumber\\ &\geq& \frac{1}{2}\|\bm V_{t+1,i}\|^2-c_1\sqrt{\log(r_{t})}\|\bm V_{t+1,i}\|. \label{sparse11} \end{eqnarray} By the triangle and Cauchy--Schwarz inequalities, we have \begin{eqnarray} |M^{(3)}_{t+1,i}|\leq \alpha_{t+1,i}\sum^d_{l=1}|\widetilde{\bm\beta}_{t+1,i}(l)| \leq \alpha_{t+1,i}\sqrt{d}\|\widetilde{\bm\beta}_{t+1,i}\|. \label{sparse12} \end{eqnarray} Hence by (\ref{sparse9}) and (\ref{sparse10})-(\ref{sparse12}), we have for large $t$ \begin{eqnarray*} 0\geq \frac{\|\bm V_{t+1,i}\|^2}{2}-c_1\sqrt{\log(r_{t})}\|\bm V_{t+1,i}\|-\sqrt{d}\alpha_{t+1,i}\|\widetilde{\bm\beta}_{t+1,i}\|, \end{eqnarray*} which implies that \begin{eqnarray} \|\bm V_{t+1,i}\|\leq \sqrt{c^2_1\log r_t+2\sqrt{d}\alpha_{t+1,i}\|\widetilde{\bm\beta}_{t+1,i}\|} +c_1\sqrt{\log r_t}.\nonumber\\ \label{sparse13} \end{eqnarray} Note that by the definition of $\bm V_{t+1,i}$, we have \begin{gather*} \|\bm V_{t+1,i}\|^2\geq \lambda_{\min}(\bm P^{-1}_{t+1,i})\|\widetilde{\bm\beta}_{t+1,i}\|^2.
\end{gather*} Combining this with (\ref{sparse13}), we have \begin{eqnarray*} &&\left(\|\widetilde{\bm\beta}_{t+1,i}\|-\frac{2\sqrt{d}\alpha_{t+1,i}} {\lambda_{\min}(\bm P^{-1}_{t+1,i})}\right)^2\nonumber\\ &\leq &\left(\frac{2\sqrt{d}\alpha_{t+1,i}} {\lambda_{\min}(\bm P^{-1}_{t+1,i})}\right)^2+\frac{(2c_1^2+2c_1)\log r_t}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}. \end{eqnarray*} Thus, we have \begin{eqnarray} \|\widetilde{\bm\beta}_{t+1,i}\|=O\left(\frac{\alpha_{t+1,i}} {\lambda_{\min}(\bm P^{-1}_{t+1,i})}+\sqrt{\frac{\log r_t}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}}\right), \end{eqnarray} which completes the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{theorem4}}\label{proof:theorem2} \begin{proof} By (\ref{least2}), we obtain the subdifferential of (\ref{penalty}), \begin{eqnarray*} \partial J_{t+1,i}(\bm\beta)&=&-2\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(y_{k+1,j}-\bm\varphi^T_{k,j}{\bm\beta})\\ &&+\alpha_{t+1,i}\partial \|\bm \beta\|_1, \end{eqnarray*} where $\partial \|\bm \beta\|_1$ is the subdifferential of $\|\bm \beta\|_1$. 
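As an illustrative aside (not part of the formal argument), the zero-subgradient condition above is, componentwise, the scalar $\ell_1$ stationarity condition, whose solution is the soft-thresholding rule used in the case analysis below; a minimal numpy sketch (function names hypothetical) checks the rule against a grid search.

```python
import numpy as np

def lasso_scalar_min(D, psi, alpha):
    # Minimizer b of (psi/2) b^2 + D b + (alpha/2) |b|, i.e. the unique b with
    # 0 in D + psi*b + (alpha/2) * d|b| -- the scalar form of the condition above.
    if D > alpha / 2:
        return (-D + alpha / 2) / psi
    if D < -alpha / 2:
        return (-D - alpha / 2) / psi
    return 0.0

def obj(b, D, psi, alpha):
    return 0.5 * psi * b ** 2 + D * b + 0.5 * alpha * np.abs(b)

rng = np.random.default_rng(1)
for _ in range(100):
    D = rng.normal()
    psi = np.abs(rng.normal()) + 0.5     # psi > 0, as for a diagonal entry of Psi
    alpha = np.abs(rng.normal())
    b = lasso_scalar_min(D, psi, alpha)
    grid = np.linspace(-10.0, 10.0, 200001)
    # the closed-form minimizer never exceeds the grid minimum
    assert obj(b, D, psi, alpha) <= obj(grid, D, psi, alpha).min() + 1e-9
print("soft-thresholding rule verified")
```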
Since $\bm\beta_{t+1,i}$ is the minimizer of $J_{t+1,i}(\bm\beta)$, we have $\bm 0\in \partial J_{t+1,i}(\bm\beta_{t+1,i})$ with $\bm 0\triangleq (\underbrace{0,0,\cdots,0}_{m})^T$, i.e., \begin{eqnarray} \bm 0&\in &-2\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(y_{k+1,j}-\bm\varphi^T_{k,j}{\bm\beta}_{t+1,i})\nonumber\\ &&+\alpha_{t+1,i}\partial \|\bm \beta_{t+1,i}\|_1.\label{J} \end{eqnarray} Let us write (\ref{J}) in a component form, i.e., for all $l\in\{1,\cdots,m\}$, \begin{eqnarray} 0&\in& -\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(l)y_{k+1,j}\nonumber\\ &&+\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(l)\Big(\sum_{s\neq l}\bm\varphi_{k,j}(s){\bm\beta_{t+1,i}}(s)\Big)\nonumber\\ &&+\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi^2_{k,j}(l){\bm\beta_{t+1,i}}(l)+\frac{\alpha_{t+1,i}}{2}\partial |\bm \beta_{t+1,i}(l)|\nonumber\\ &\triangleq &\bm D_{t+1,i}(l)+\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi^2_{k,j}(l){\bm\beta_{t+1,i}}(l)\nonumber\\ &&+\frac{\alpha_{t+1,i}}{2}\partial |\bm \beta_{t+1,i}(l)|.\label{classify} \end{eqnarray} Note that \begin{alignat}{2} \partial |\bm \beta_{t+1,i}(l)|=\left\{ \begin{aligned} &1, ~~~~~~~~~~~~~~~~~~{\rm if} ~~~\bm\beta_{t+1,i}(l)>0\\ &-1, ~~~~~~~~~~~~~~{\rm if} ~~~\bm \beta_{t+1,i}(l)<0\\ &\in [-1,1],~~~~~~{\rm if} ~~~\bm \beta_{t+1,i}(l)=0\nonumber \end{aligned}\ . \right. \end{alignat} Set $\sum^n_{j=1}\sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}\bm\varphi^T_{k,j} \triangleq\bm \Psi_{t+1,i}$. 
Combining the above equation with (\ref{classify}) yields for large $t$ \begin{alignat}{2} & \bm \beta_{t+1,i}(l)\nonumber\\ =&\left\{ \begin{aligned} &\frac{-\bm D_{t+1,i}(l)+\frac{\alpha_{t+1,i}}{2} }{\bm \Psi_{t+1,i}(l,l)},~{\rm if} ~\bm D_{t+1,i}(l)>\frac{\alpha_{t+1,i}}{2}\\ &\frac{-\bm D_{t+1,i}(l)-\frac{\alpha_{t+1,i}}{2}}{\bm \Psi_{t+1,i}(l,l)},~{\rm if} ~\bm D_{t+1,i}(l)<-\frac{\alpha_{t+1,i}}{2}\\ &~~~ 0, \hskip 2.6cm {\rm if}~|\bm D_{t+1,i}(l)|\leq\frac{\alpha_{t+1,i}}{2}\nonumber \end{aligned}\ \ , \right. \end{alignat} with $\bm \Psi_{t+1,i}(l,l)$ being the $l$-th diagonal element of the matrix $\bm \Psi_{t+1,i}$. This implies that \begin{eqnarray} \bm \Psi_{t+1,i}(l,l)\bm \beta_{t+1,i}(l)= -\bm D_{t+1,i}(l)+\bm \gamma_{t+1,i}(l),\label{sparse1} \end{eqnarray} where \begin{alignat}{2} \bm \gamma_{t+1,i}(l)=\left\{ \begin{aligned} &\frac{\alpha_{t+1,i}}{2}, \hskip 1.4cm{\rm if}~\bm D_{t+1,i}(l)>\frac{\alpha_{t+1,i}}{2}\\ &-\frac{\alpha_{t+1,i}}{2}, \hskip 1.0cm{\rm if} ~\bm D_{t+1,i}(l)<-\frac{\alpha_{t+1,i}}{2}\\ &\bm D_{t+1,i}(l),\hskip 1.0cm{\rm if}~|\bm D_{t+1,i}(l)|\leq\frac{\alpha_{t+1,i}}{2}\nonumber \end{aligned}. \right. \end{alignat} Then by (\ref{sparse1}) and the definition of $\bm D_{t+1,i}(l)$, we have for all $l\in\{1,\cdots,m\}$ \begin{eqnarray*} &&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(l)\bm\varphi^T_{k,j}{\bm\beta_{t+1,i}}\\ &=&\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}(l)y_{k+1,j}+\bm \gamma_{t+1,i}(l). 
\end{eqnarray*} We rewrite the above equation in matrix form, and obtain the following equation by (\ref{theta}) for large $t$ \begin{eqnarray} \bm\beta_{t+1,i}&=&\bm P_{t+1,i}\left(\sum^n_{j=1}\sum^{t}_{k=0}a^{(t+1-k)}_{ij} \bm\varphi_{k,j}y_{k+1,j}+\bm \gamma_{t+1,i}\right)\nonumber\\ &=&\bm\theta_{t+1,i}+\bm P_{t+1,i}\bm \gamma_{t+1,i},\label{sparse2} \end{eqnarray} where $\bm \gamma_{t+1,i}=(\bm \gamma_{t+1,i}(1),\cdots,\bm \gamma_{t+1,i}(m))^T$ and $\bm\theta_{t+1,i}$ is defined in (\ref{theta}). Note that for all $i\in\{1,\cdots,n\}$ and $l\in\{1,\cdots,m\}$, $|\bm \gamma_{t+1,i}(l)|\leq\frac{\alpha_{t+1,i}}{2}$, hence by Lemma \ref{lemma1}, we obtain \begin{eqnarray} &&\sum^t_{k=0}\bm\gamma^T_k\bm P_k\bm \Phi_{k}\bm d_k\bm\Phi^T_{k}\bm P_k\bm\gamma_k\nonumber\\ &\leq&\sum^t_{k=0}\lambda_{\max}(\bm d_k\bm \Phi^T_{k}\bm P_k\bm \Phi_{k})\bm\gamma^T_k\bm P_k\bm\gamma_k\nonumber\\ &\leq&\sum^t_{k=0}\left[\lambda_{\max}(\bm d_k\bm \Phi^T_{k}\bm P_k\bm \Phi_{k})\left(\sum^n_{i=1}\lambda_{\max}(\bm P_{k,i})\|\bm\gamma_{k,i}\|^2\right)\right]\nonumber\\ &=&O\left(\sum^t_{k=0}\left[\lambda_{\max}(\bm d_k\bm \Phi^T_{k}\bm P_k\bm \Phi_{k}) \left(\sum^n_{i=1}\frac{\alpha^2_{k,i}}{\lambda_{\min}(\bm P^{-1}_{k,i})}\right)\right]\right)\nonumber\\ &=&O(\log r_t),\label{gamma} \end{eqnarray} where $\bm \gamma_{t+1}=col\{\bm \gamma_{t+1,1},\cdots,\bm \gamma_{t+1,n}\}$. By the definition of $\bm d_t$ in Lemma \ref{lemma1}, we have $\bm I_n=\bm d_k+\bm d_k\bm \Phi^T_{k}\bm P_k\bm \Phi_{k}$. By (\ref{sparse2}), we have $\widetilde{\bm \beta}_{t+1}=\widetilde{\bm\Theta}_{t+1}+\bm P_{t+1}\bm \gamma_{t+1}$, where $\widetilde{\bm \beta}_t=col\{\widetilde{\bm \beta}_{t,1},\cdots,\widetilde{\bm \beta}_{t,n}\}$.
Hence by (\ref{gamma}), Lemma \ref{lemma1} and the condition $\bm\Phi^T_{t}\bm P_t\bm\Phi_{t}=O(1)$, we have \begin{eqnarray*} &&R_t=\sum^t_{k=0}\widetilde{\bm \beta}^T_k\bm \Phi_{k}\bm \Phi^T_{k}\widetilde{\bm \beta}_k\nonumber\\ &=&\sum^t_{k=0}\widetilde{\bm \beta}^T_k\bm \Phi_{k}\bm d_k\bm \Phi^T_{k}\widetilde{\bm \beta}_k +\sum^t_{k=0}\widetilde{\bm \beta}^T_k\bm \Phi_{k}(\bm d_k\bm \Phi^T_{k}\bm P_k\bm \Phi_{k})\bm \Phi^T_{k}\widetilde{\bm \beta}_k\nonumber\\ &=&O\Big(\sum^t_{k=0}\widetilde{\bm \beta}^T_k\bm \Phi_{k}\bm d_k\bm\Phi^T_{k}\widetilde{\bm \beta}_k\Big)\\ &=&O\Big(\sum^t_{k=0}\widetilde{\bm \Theta}^T_k\bm \Phi_{k}\bm d_k\bm \Phi^T_{k}\widetilde{\bm \Theta}_k+\sum^t_{k=0}\bm\gamma^T_k\bm P_k\bm \Phi_{k}\bm d_k\bm\Phi^T_{k}\bm P_k\bm\gamma_k\Big)\\ &=&O(\log r_t). \end{eqnarray*} This completes the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{theorem2} } \label{proof:Theorem3} \begin{proof} Denote the estimation error between $\bm\xi_{t+1,i}$ obtained by Algorithm \ref{algorithm2} and $\bm\theta$ as \begin{gather} \widetilde{\bm\xi}_{t+1,i}=\bm\xi_{t+1,i}-\bm\theta.\label{sparse36} \end{gather} By Assumption \ref{a4} and Lemma \ref{lemma1}, we see that the limits of $\bm \theta_{t+1,i}(l)$ and $\hat{\bm \theta}_{t+1,i}(l)$, $l=1,\cdots,d$ are nonzero. Similar to the proof of Theorem \ref{theorem1}, we also have the following result, \begin{eqnarray} \|\widetilde{\bm\xi}_{t+1,i}\|=O\left(\frac{\alpha_{t+1,i}} {\lambda_{\min}(\bm P^{-1}_{t+1,i})}+\sqrt{\frac{\log r_t}{\lambda_{\min}(\bm P^{-1}_{t+1,i})}}\right).\label{sparse35} \end{eqnarray} By the definition of $\widetilde{\bm\xi}_{t+1,i}$ in (\ref{sparse36}), it suffices to prove that there exists a positive integer $T_0$ such that for all $i\in\{1,\cdots,n\}$ \begin{eqnarray*} \widetilde{\bm\xi}_{t+1,i}(d+1)=\cdots=\widetilde{\bm\xi}_{t+1,i}(m)=0, ~~~~t\geq T_0. 
\end{eqnarray*} Otherwise, there exist some $s_l\in\{d+1,\cdots,m\}$, some sensor $i_0$, and a subsequence $\{t_p\}_{p\geq 1}$ such that $\widetilde{\bm\xi}_{t_p+1,i_0}(s_l)\neq 0$ for all $p\geq 1$. Thus for $p\geq 1$, we have $\|\widetilde{\bm\xi}_{t_p+1,i_0}\|>0$. Denote \begin{align} \widetilde{\bm\xi}_{t_p+1,i_0}&=\left( \begin{matrix} \widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\\ \widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\\ \end{matrix} \right)~ {\rm and}~ \bar{\bm\xi}_{t_p+1,i_0}=\left( \begin{matrix} \widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\\ \bm 0\\ \end{matrix} \right),\label{sparse17} \end{align} where $\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0} \in \mathbb{R}^d$ and $\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0} \in \mathbb{R}^{m-d}$. By noting that $\bm\xi_{t_p+1,i_0}$ is the minimizer of $ \bar J_{t_p+1,i_0}(\bm\xi)$ defined by (\ref{sparse4}), it follows that \begin{eqnarray} 0&\geq &\bar J_{t_p+1,i_0}(\bm\xi_{t_p+1,i_0})- \bar J_{t_p+1,i_0}(\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})\nonumber\\ &=&\bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})- \bar J_{t_p+1,i_0}(\bm\theta+\bar{\bm\xi}_{t_p+1,i_0}).
\label{sparse29} \end{eqnarray} Denote \begin{align} &\bm \Psi_{t+1,i}=\sum^n_{j=1}\sum^t_{k=0}a^{(t+1-k)}_{ij}\bm\varphi_{k,j}\bm\varphi^T_{k,j} \triangleq\left( \begin{matrix} \bm \Psi^{(11)}_{t+1,i}&\bm \Psi^{(12)}_{t+1,i}\\ \bm \Psi^{(21)}_{t+1,i}&\bm \Psi^{(22)}_{t+1,i}\\ \end{matrix} \right),\nonumber\\ &~~{\rm and }~~ \bm \varphi_{k,j}\triangleq\left( \begin{matrix} \bm \varphi^{(1)}_{k,j}\\ \bm \varphi^{(2)}_{k,j}\\ \end{matrix} \right).\label{sparse20} \end{align} Similar to (\ref{sparse7}), we have for $\widetilde{\bm\xi}_{t_p+1,i_0}$ \begin{eqnarray} &&\bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})-\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j}w^2_{k+1,j}\nonumber\\ &=&-2\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j} \bm\varphi^{(1)}_{k,j}w_{k+1,j}\nonumber\\ &&-2\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j} \bm\varphi^{(2)}_{k,j}w_{k+1,j}\nonumber\\ &&+\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\bm \Psi^{(11)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}+\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\bm \Psi^{(21)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\nonumber\\ && +\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\bm \Psi^{(12)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0} +\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\bm \Psi^{(22)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\nonumber\\ &&+\alpha_{t_p+1,i_0}\sum^d_{l=1}\frac{1}{|\hat{\bm \theta}_{t_p+1,i_0}(l)|}|{\widetilde{\bm\xi}_{t_p+1,i_0}(l)+\bm\theta(l)}|\nonumber\\ &&+\alpha_{t_p+1,i_0}\sum^{m}_{l=d+1}\frac{1}{|\hat{\bm \theta}_{t_p+1,i_0}(l)|}|{\widetilde{\bm\xi}_{t_p+1,i_0}(l)}|.\label{sparse18} \end{eqnarray} For $\bar{\bm\xi}_{t_p+1,i_0}$ defined in (\ref{sparse17}), we have \begin{eqnarray} &&\bar J_{t_p+1,i_0}(\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})-\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j}w^2_{k+1,j}\nonumber\\ &=&-2\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j}
\bm\varphi^{(1)}_{k,j}w_{k+1,j}\nonumber\\ &&+\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\bm \Psi^{(11)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\nonumber\\ &&+\alpha_{t_p+1,i_0}\sum^d_{l=1}\frac{1}{|\hat{\bm \theta}_{t_p+1,i_0}(l)|}|{\bar{\bm\xi}_{t_p+1,i_0}(l)+\bm\theta(l)}|.\label{sparse19} \end{eqnarray} By (\ref{sparse18}) and (\ref{sparse19}), we have \begin{eqnarray} &&\bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})-\bar J_{t_p+1,i_0}(\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})\nonumber\\ &=&-2\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j} \bm\varphi^{(2)}_{k,j}w_{k+1,j}\nonumber\\ &&+\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\bm \Psi^{(22)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}+\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\bm \Psi^{(12)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\nonumber\\ &&+\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}\bm \Psi^{(21)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\nonumber\\ &&+\alpha_{t_p+1,i_0}\sum^{m}_{l=d+1}\frac{1}{|\hat{\bm \theta}_{t_p+1,i_0}(l)|}|{\widetilde{\bm\xi}_{t_p+1,i_0}(l)}|\nonumber\\ &\triangleq& -2I^{(1)}_{t_p+1,i_0}+I^{(2)}_{t_p+1,i_0}+I^{(3)}_{t_p+1,i_0}+I^{(4)}_{t_p+1,i_0}+I^{(5)}_{t_p+1,i_0}.\nonumber\\ \label{sparse26} \end{eqnarray} In the following, we estimate $I^{(1)}_{t_p+1,i_0}$, $I^{(2)}_{t_p+1,i_0}$, $I^{(3)}_{t_p+1,i_0}$, $I^{(4)}_{t_p+1,i_0}$, $I^{(5)}_{t_p+1,i_0}$ separately. By (\ref{sparse16}) and (\ref{sparse20}), we have \begin{align*} \bm P^{-1}_{t+1,i}&=\bm \Psi_{t+1,i} +\sum^n_{j=1}a^{(t+1)}_{ij}\bm P^{-1}_{0,j}\triangleq\left( \begin{matrix} \bm Q^{(11)}_{t+1,i}&\bm Q^{(12)}_{t+1,i}\\ \bm Q^{(21)}_{t+1,i}&\bm Q^{(22)}_{t+1,i}\\ \end{matrix} \right). 
\end{align*} By (\ref{sparse20}) and Lemma \ref{lemma3}, we have \begin{eqnarray*} |I^{(1)}_{t_p+1,i_0}|&=&\Bigg|\widetilde{\bm\xi}^{(2)T}_{t_p+1,i_0}(\bm Q^{(22)}_{t_p+1,i_0})^{\frac{1}{2}} (\bm Q^{(22)}_{t_p+1,i_0})^{-\frac{1}{2}}\\ &&\sum^n_{j=1}\sum^{t_p}_{k=0}a^{(t_p+1-k)}_{i_0j} \bm\varphi^{(2)}_{k,j}w_{k+1,j}\Bigg|\\ &=&\|(\bm Q^{(22)}_{t_p+1,i_0})\|^{\frac{1}{2}}\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|O\left(\sqrt{\log r^{(2)}_{t_p}}\right), \end{eqnarray*} where $r^{(2)}_{t}\triangleq \max\limits_{1\leq i\leq n}\lambda_{\max}\{\bm Q^{(22)}_{0,i}\}+\sum^n_{i=1}\sum^t_{k=0}\|\bm\varphi^{(2)}_{k,i}\|^2$. Note that $\lambda_{\max}(\bm Q^{(22)}_{t_p+1,i_0})\leq \lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})$ and $\lambda_{\min}(\bm Q^{(22)}_{t_p+1,i_0})\geq \lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})$. Hence, we have $r^{(2)}_{t_p}\leq r_{t_p}$. We obtain that for large $p$ and some positive constant $c_2$ \begin{eqnarray} &&-2I^{(1)}_{t_p+1,i_0}+I^{(2)}_{t_p+1,i_0}\nonumber\\ &\geq&\lambda_{\min}(\bm \Psi^{(22)}_{t_p+1,i_0})\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|^2\nonumber\\ &&-c_2\|(\bm Q^{(22)}_{t_p+1,i_0})\|^{\frac{1}{2}}\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\sqrt{\log r^{(2)}_{t_p}}\nonumber\\ &\geq&\frac{1}{2}\lambda_{\min}(\bm Q^{(22)}_{t_p+1,i_0})\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|^2\nonumber\\ &&-c_2\|(\bm Q^{(22)}_{t_p+1,i_0})\|^{\frac{1}{2}}\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\sqrt{\log r^{(2)}_{t_p}}\nonumber\\ &\geq&\frac{1}{2}\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|^2\nonumber\\ &&-c_2\sqrt{\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})}\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\sqrt{\log r_{t_p}}.\label{sparse22} \end{eqnarray} By (\ref{sparse35}) and Lemma \ref{lemma1}, and based on the equivalence of norms in a finite dimensional space, we have \begin{eqnarray} |I^{(3)}_{t_p+1,i_0}|&=&|\widetilde{\bm\xi}^{(1)T}_{t_p+1,i_0}\bm \Psi^{(12)}_{t_p+1,i_0}\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}|\nonumber\\ &\leq 
&\|\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\|\|\bm \Psi^{(12)}_{t_p+1,i_0}\|\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\nonumber\\ &\leq& c_3\|\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\|\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\| \|\bm \Psi^{(12)}_{t_p+1,i_0}\|_F\nonumber\\ &\leq& c_3\|\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\|\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\| \|\bm \Psi_{t_p+1,i_0}\|_F\nonumber\\ &\leq& c_4\|\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\|\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\| \|\bm \Psi_{t_p+1,i_0}\|\nonumber\\ &\leq& c_4\|\widetilde{\bm\xi}^{(1)}_{t_p+1,i_0}\|\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\| \lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})\nonumber\\ &=&O\Bigg(\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})\Bigg[\frac{\alpha_{t_p+1,i_0}} {\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\nonumber\\ &&+\sqrt{\frac{\log(r_{t_p})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\Bigg]\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\Bigg), \label{sparse23} \end{eqnarray} where $c_3$ and $c_4$ are two positive constants. Similarly, we have \begin{eqnarray} |I^{(4)}_{t_p+1,i_0}|&\leq&O\Bigg(\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})\Bigg[\frac{\alpha_{t_p+1,i_0}} {\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\nonumber\\ &&+\sqrt{\frac{\log(r_{t_p})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\Bigg]\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\Bigg). 
\label{sparse24} \end{eqnarray} Then by the definition of $\hat{\bm \theta}_{t_p+1,i_0}(l)$ in (\ref{sparse6}), and the condition $\log{r_t}=O(\log{r_{t-D_{\mathcal{G}}+1}})$, we have for $l=d+1,\cdots,m$, \begin{eqnarray*} L_{t_p+1,i_0} \leq |\hat{\bm \theta}_{t_p+1,i_0}(l)|\leq c_5 L_{t_p+1,i_0}, \end{eqnarray*} where $c_5>0$ is a positive constant, and $$L_{t_p+1,i_0}=\sqrt{\frac{\log(\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0}))}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}.$$ Hence we have \begin{eqnarray} I^{(5)}_{t_p+1,i_0}&\geq& \alpha_{t_p+1,i_0}\frac{1}{c_5L_{t_p+1,i_0}}\sum^{m}_{l=d+1}|{\widetilde{\bm\xi}_{t_p+1,i_0}(l)}|\nonumber\\ &\geq& \alpha_{t_p+1,i_0}\frac{1}{c_5L_{t_p+1,i_0}}\|{\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}}\|. \label{sparse25} \end{eqnarray} Thus, by (\ref{sparse26})-(\ref{sparse25}), for some $c_{6}>0$, we obtain \begin{eqnarray} &&\bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})-\bar J_{t_p+1,i_0} (\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})\nonumber\\ &\geq&\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\cdot\nonumber\\ && \left(\frac{\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|}{2}-c_2\sqrt{\frac{\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\sqrt{\frac{\log r_{t_p}}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\right.\nonumber\\ &&-\frac{c_{6}\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\Bigg[\frac{\alpha_{t_p+1,i_0}} {\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}+\nonumber\\ &&\sqrt{\frac{\log(r_{t_p})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\Bigg]\left.+\frac{\alpha_{t_p+1,i_0}}{c_5\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})L_{t_p+1,i_0}}\right).\label{sparse30} \end{eqnarray} By (\ref{sparse28}), (\ref{sparse27}) and Assumption \ref{a4}, we have \begin{eqnarray} &&\frac{\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\sqrt{\frac{\log r_{t_p}}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}}\nonumber\\ &\leq& \frac{\lambda_{\max}(\bm 
P^{-1}_{t_p+1,i_0})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\sqrt{\frac{\log r_{t_p}}{\lambda^{n,t_p}_{\min}}}\nonumber\\ &=& o\left(\frac{\alpha_{t_p+1,i_0}}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})L_{t_p+1,i_0}}\right).\label{sparse31} \end{eqnarray} By (\ref{sparse28}) and Assumption \ref{a4}, we have \begin{eqnarray} &&\frac{\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})\alpha_{t_p+1,i_0}}{\lambda^2_{\min}(\bm P^{-1}_{t_p+1,i_0})}\Bigg/\frac{\alpha_{t_p+1,i_0}}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})L_{t_p+1,i_0}}\nonumber\\ &=&L_{t_p+1,i_0}\frac{\lambda_{\max}(\bm P^{-1}_{t_p+1,i_0})}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})}\nonumber\\ &=&O\left(\frac{r_{t_p}}{\lambda^{n,{t_p}}_{\min}}\sqrt{\frac{\log(r_{t_p})}{\lambda^{n,{t_p}}_{\min}}}\right) =o(1).\label{sparse32} \end{eqnarray} From (\ref{sparse30})-(\ref{sparse32}), we have \begin{eqnarray} && \bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})-\bar J_{t_p+1,i_0} (\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})\nonumber\\ &\geq&\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|\cdot\nonumber\\ &&\left(\frac{\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|}{2}+\frac{[\frac{1}{c_5}+o(1)]\alpha_{t_p+1,i_0}}{\lambda_{\min}(\bm P^{-1}_{t_p+1,i_0})L_{t_p+1,i_0}}\right).\label{sparse33} \end{eqnarray} Note that $\widetilde{\bm\xi}_{t_p+1,i_0}(s_l)\neq 0$ for some $s_l\in\{d+1,\cdots,m\}$. Hence $\|\widetilde{\bm\xi}^{(2)}_{t_p+1,i_0}\|>0$. Then by (\ref{sparse33}), we have $\bar J_{t_p+1,i_0}(\bm\theta+\widetilde{\bm\xi}_{t_p+1,i_0})-\bar J_{t_p+1,i_0} (\bm\theta+\bar{\bm\xi}_{t_p+1,i_0})>0$, which contradicts (\ref{sparse29}). This implies that $\|\widetilde{\bm\xi}^{(2)}_{t+1,i}\|=0$ for all large $t$ and all $i\in\{1,\cdots,n\}$. This completes the proof of the theorem. \end{proof} \section{A simulation example}\label{simulaiton} In this section, we provide an example to illustrate the performance of the distributed sparse identification algorithm (i.e., Algorithm \ref{algorithm2}) proposed in this paper.
\begin{example} Consider a network composed of $n=6$ sensors whose dynamics obey the model (\ref{model}) with the dimension $m=5$. The noise sequence $\{w_{t,i}, t\geq1, i=1,\cdots,n\}$ in (\ref{model}) is independent and identically distributed with $w_{t,i} \sim \mathcal{N}(0, 0.1)$ (Gaussian distribution with zero mean and variance $0.1$). Let the regression vectors $\bm \varphi_{t,i}\in\mathbb{R}^{m}$ $ ~(i = 1,\cdots,n, ~t\geq1)$ be generated by the following state space model, \begin{alignat}{2} \begin{aligned} &\bm x_{t,i}&=& \bm A_i \bm x_{t-1,i}+\bm B_i \varepsilon_{t,i},\\ &\bm\varphi_{t,i}&=& \bm C_i \bm x_{t,i}, \end{aligned}, \label{cdls2} \end{alignat} where $\bm x_{t,i}\in\mathbb{R}^{m}$ is the state of the above system with $\bm x_{0,i}=[\underbrace{1,\cdots,1}_{m}]^T$, and the matrices $\bm A_i$, $\bm B_i$ and $\bm C_i$ ($i=1,2,\cdots, n$) are chosen in the following way such that the regression vector $\bm \varphi_{t,i}$ lacks adequate excitation for any individual sensor: \begin{eqnarray*} \bm A_i&=&diag\{\underbrace{1.1,\cdots,1.1}_{m}\}, \\ \bm B_i&=&\bm {e_j}\in\mathbb{R}^{m},\\ \bm C_i&=& col\{0,\cdots,0,\underset{j^{th}}{\bm {e}_j},0,\cdots,0\}\in\mathbb{R}^{m\times m}, \end{eqnarray*} where $j=\mod(i,m)$ and $\bm e_j ~(j=1,\cdots,m)$ is the $j$th column of the identity matrix $\bm I_m$ ~$(m=5)$. Let the noise sequence $\{\varepsilon_{t,i}, t \geq 1, i =1,\cdots, n\}$ in (\ref{cdls2}) be independent and identically distributed with $\varepsilon_{t,i} \sim \mathcal{N}(0, 0.2)$. All sensors will estimate an unknown parameter $$\bm\theta=[\bm\theta(1),\bm\theta(2),\bm\theta(3),\bm\theta(4),\bm\theta(5)]^T=[0.8,1.6,0,0,0]^T.$$ The initial estimate is taken as $\bm\xi_{0,i}=[{1,1,1,1,1}]^T$ for $i=1,2,\cdots, 6$.
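As an aside, the excitation structure of the regressor model (\ref{cdls2}) can be illustrated numerically. The following Python sketch is our illustration, not part of the paper; in particular, the case $j=\mod(i,m)=0$ is left implicit in the example, and mapping it to $j=m$ here is our assumption.

```python
import numpy as np

def generate_regressors(n=6, m=5, T=50, noise_std=0.2 ** 0.5, seed=0):
    # State-space regressor model: for sensor i (1-based),
    #   x_{t,i} = A_i x_{t-1,i} + B_i eps_{t,i},  phi_{t,i} = C_i x_{t,i},
    # with A_i = 1.1 * I_m, B_i = e_j, and C_i selecting the j-th coordinate,
    # j = mod(i, m).  Assumption: j = 0 is mapped to j = m.
    rng = np.random.default_rng(seed)
    x = np.ones((n, m))                    # x_{0,i} = (1, ..., 1)^T
    phi = np.zeros((T + 1, n, m))
    for t in range(1, T + 1):
        eps = noise_std * rng.standard_normal(n)
        for i in range(n):
            j = (i + 1) % m or m           # 1-based j for sensor i+1
            e = np.zeros(m)
            e[j - 1] = 1.0
            x[i] = 1.1 * x[i] + e * eps[i]     # A_i x + B_i eps
            phi[t, i] = e * x[i, j - 1]        # C_i x: single nonzero entry
    return phi
```

Each sensor's regressors span a one-dimensional subspace of $\mathbb{R}^m$, so no sensor alone can identify $\bm\theta$, while the six subspaces jointly span $\mathbb{R}^5$; this is exactly the cooperative excitation situation the example is designed to create.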
We use the Metropolis rule \citep{new05} to construct the weights of the network, i.e., \begin{alignat}{2} \label{weight} a_{li}= \left\{ \begin{aligned} &1-\sum_{j\neq i}a_{ij}~~~~~~~~~~~{\rm if}~~l=i\\ &1/(\max\{n_i,n_l\})~~{\rm if}~~l\in N_i\setminus\{i\} \end{aligned}, \right. \end{alignat} where $n_i$ is the degree of node $i$. \end{example} It can be verified that for each sensor $i$ $(i=1,\cdots,6)$, the regression signals $\bm \varphi_{t,i}$ (generated by (\ref{cdls2})) lack adequate excitation to estimate the unknown parameter, but the sensors can cooperate to satisfy Assumption \ref{a4}. We repeat the simulation $s = 100$ times with the same initial states. 1) We estimate the unknown parameter $\bm\theta$ by using the non-cooperative sparse identification algorithm (i.e., with the adjacency matrix taken as the identity matrix) and the distributed sparse identification algorithm (Algorithm \ref{algorithm2}) proposed in this paper, respectively. We adopt the Matlab CVX tools (http://cvxr.com/cvx/) to solve the convex optimization problem (\ref{sparse4}), and take the weight coefficient as $\alpha_{t,i}=( \lambda_{\min}(\bm P^{-1}_{t+1,i}))^{0.75}$. The average estimation error generated by these two algorithms is shown in Fig. \ref{compare}. We see that the estimation error generated by the distributed sparse identification algorithm converges to zero as $t$ increases, while the estimation error of the non-cooperative sparse identification algorithm does not. The estimate sequences $\{\bm\xi_{t,i}(1),\bm\xi_{t,i}(2),\bm\xi_{t,i}(3),\bm\xi_{t,i}(4),\bm\xi_{t,i}(5)\}^{200}_{t=0} ~(i=1,\cdots,6)$ generated by Algorithm \ref{algorithm2} are given in Fig. \ref{estimate}. We see from these figures that the estimates converge to the true value $\bm\theta$. Therefore, the estimation task can be fulfilled through exchanging information between sensors even though no individual sensor can accomplish it alone.
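For illustration, the Metropolis rule (\ref{weight}) is straightforward to implement. The sketch below is ours, not from the paper: it builds the weight matrix from a 0/1 adjacency matrix and exhibits the properties that make it a valid averaging matrix (nonnegative, symmetric, rows summing to one).

```python
import numpy as np

def metropolis_weights(adj):
    # adj: symmetric 0/1 adjacency matrix without self-loops.
    # Metropolis rule: a_{li} = 1 / max(n_i, n_l) for neighbors l != i,
    # and a_{ii} = 1 - sum_{j != i} a_{ij}, with n_i the degree of node i.
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for l in range(n):
            if l != i and adj[i, l] > 0:
                A[i, l] = 1.0 / max(deg[i], deg[l])
        A[i, i] = 1.0 - A[i].sum()
    return A
```

By construction the resulting matrix is symmetric and doubly stochastic, which is what allows the local estimates to be averaged consistently across the network.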
\begin{figure}[htb] \centering \includegraphics[width=3in]{fig/compare.pdf} \caption{The estimation errors of the distributed sparse identification algorithm and the non-cooperative sparse identification algorithm}\label{compare} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=7in]{fig/estimate.pdf} \caption{The estimate sequences $\{\bm\xi_{t,i}\}^{200}_{t=0} $ of all sensors}\label{estimate} \end{figure*} 2) We estimate the unknown parameter $\bm\theta$ by using the classical distributed LS algorithm studied by \cite{wc2} and Algorithm \ref{algorithm2} proposed in this paper under the same network topology. Table \ref{biao1} and Table \ref{biao2} show the estimates for $\bm\theta(3)$, $\bm\theta(4)$, $\bm\theta(5)$ by these two algorithms at different time instants $t$. From Table \ref{biao1} and Table \ref{biao2}, we can see that, compared with the distributed LS algorithm in \cite{wc2}, Algorithm \ref{algorithm2} generates sparser and more accurate estimates and thus gives valuable information for inferring the zero and nonzero elements of the unknown parameter.
\begin{table*}[htb] \caption{Estimates by the distributed LS algorithm in \cite{wc2} and Algorithm \ref{algorithm2} for $t=50$}\label{biao1} \begin{center} \begin{tabular}{lllllll} \hline ~ &\footnotesize sensor 1 &\footnotesize sensor 2 &\footnotesize sensor 3 &\footnotesize sensor 4 &\footnotesize sensor 5 &\footnotesize sensor 6 \\ \hline \footnotesize Estimate for $\bm\theta(3)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS & \tiny$2.5892\times10^{-4}$ &\tiny$ 1.4805\times10^{-4}$& \tiny$3.1352\times10^{-4}$&\tiny $2.7231\times10^{-4}$& \tiny$2.9085\times10^{-4}$&\tiny $2.9085\times10^{-4}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$-2.8518\times10^{-6}$ &\tiny$ -6.3009\times10^{-12}$&\tiny $-8.3539\times10^{-18}$& \tiny$1.4030\times10^{-6}$& \tiny$-4.6969\times10^{-7}$& \tiny$-2.7547\times10^{-18}$ \\ \hline \footnotesize Estimate for $\bm\theta(4)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS& \tiny$2.7949\times10^{-4}$ &\tiny$ 2.7949\times10^{-4}$& \tiny$2.7949\times10^{-4}$&\tiny $0.0011$& \tiny$2.7949\times10^{-4}$&\tiny $2.7949\times10^{-4}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$7.2376\times10^{-18}$ &\tiny$1.6087\times10^{-8}$&\tiny $-6.2511\times10^{-5}$& \tiny$1.1212\times10^{-6}$& \tiny$-3.6619\times10^{-10}$& \tiny$-7.8179\times10^{-7}$ \\ \hline \footnotesize Estimate for $\bm\theta(5)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS & \tiny$2.1450\times10^{-4}$ &\tiny$ 8.1487\times10^{-5}$& \tiny$1.7771\times10^{-4}$&\tiny $1.7014\times 10^{-4}$& \tiny$4.9508\times10^{-5}$&\tiny $1.7350\times10^{-4}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$-2.8248\times10^{-6}$ &\tiny$1.3601\times10^{-10}$&\tiny $-3.8278\times10^{-5}$& \tiny$7.7698\times10^{-18}$& \tiny$-1.3398\times10^{-8}$& \tiny$-2.6207\times10^{-6}$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[!ht] \caption{Estimates by the distributed LS algorithm in \cite{wc2} and Algorithm \ref{algorithm2} for
$t=100$}\label{biao2} \begin{center} \begin{tabular}{lllllll} \hline ~ &\footnotesize sensor 1 &\footnotesize sensor 2 &\footnotesize sensor 3 &\footnotesize sensor 4 &\footnotesize sensor 5 &\footnotesize sensor 6 \\ \hline \footnotesize Estimate for $\bm\theta(3)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS & \tiny$3.9929\times10^{-6}$ &\tiny$ 4.0720\times10^{-6}$& \tiny$4.5125\times10^{-6}$&\tiny $3.9929\times10^{-6}$& \tiny$3.9929\times10^{-6}$&\tiny $4.5015\times10^{-6}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$ -4.1586\times10^{-13}$ &\tiny$ 2.6792\times10^{-12}$&\tiny $1.7980\times10^{-13}$& \tiny$ -1.6160\times10^{-12}$& \tiny$8.8066\times10^{-14}$& \tiny$6.9114\times10^{-13}$ \\ \hline \footnotesize Estimate for $\bm\theta(4)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS & \tiny$6.4080\times10^{-6}$ &\tiny$ 3.6820\times10^{-6}$& \tiny$3.2300\times10^{-6}$&\tiny $2.5931\times10^{-6}$& \tiny$6.4080\times10^{-6}$&\tiny $3.2300\times10^{-6}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$ -5.9666\times10^{-12}$ &\tiny$ -4.0833\times10^{-19}$&\tiny $-8.79535\times10^{-12}$& \tiny$-1.2865\times10^{-11}$& \tiny$ -7.3473\times10^{-12}$& \tiny$-6.2931\times10^{-12}$ \\ \hline \footnotesize Estimate for $\bm\theta(5)$ &~ & ~& ~& ~& ~& ~ \\ \footnotesize By distributed LS& \tiny$4.5652\times10^{-6}$ &\tiny$ 4.8154\times10^{-6}$& \tiny$4.6507\times10^{-6}$&\tiny $5.9311\times 10^{-6}$& \tiny$5.5863\times10^{-6}$&\tiny $4.6507\times10^{-6}$\\ \footnotesize By Algorithm \ref{algorithm2} & \tiny$1.4196\times10^{-12}$ &\tiny$1.9062\times10^{-12}$&\tiny $-1.8454\times10^{-12}$& \tiny$3.0918\times10^{-12}$& \tiny$-2.1729\times10^{-15}$& \tiny$-1.9412\times10^{-12}$ \\ \hline \end{tabular} \end{center} \end{table*} \section{Concluding remarks}\label{concluding} In this paper, we first introduced a local information criterion which is formulated as a linear combination of the local estimation error and an $L_1$-regularization term.
By minimizing this criterion, we proposed a distributed sparse identification algorithm to estimate an unknown parameter vector of a stochastic system. Upper bounds on the estimation error and on the averaged accumulated regret of adaptive prediction are obtained without excitation conditions. Furthermore, we showed that under the cooperative non-persistent excitation conditions, the set of zero elements in the unknown parameter vector can be correctly identified with a finite number of observations by properly choosing the weighting coefficient. We remark that our theoretical results are established without using such stringent conditions as independence of the regression vectors, which makes it possible to combine distributed adaptive estimation with distributed control. For future research, it will be interesting to consider the combination of the distributed sparse identification algorithm with distributed control, and to design a recursive distributed sparse adaptive algorithm. \bibliographystyle{apalike}
\section{Introduction} In pure non-Abelian gauge theories there is a linear confinement potential between a static source anti-source pair in the fundamental representation of the gauge group at large distances. The static potential can be efficiently computed by means of Wilson loops. C. Michael \cite{adjpot:su2michael} studied the potential between static {\em adjoint} sources in the pure $\SUtwo$ gauge theory: the gauge fields screen the sources, forming color-neutral objects called gluelumps. At large distances the two-gluelump state dominates the ground state of the system. A similar situation is expected when these gauge theories are coupled to matter fields in the fundamental representation. The dynamical matter fields form a bound state with the static fundamental source, a color-neutral static ``meson''. One expects that at large distances the potential is better interpreted as the potential between two static mesons and flattens off, turning asymptotically into a Yukawa form. So far, this expectation could not be verified by Monte Carlo simulations. In particular, in recent attempts in QCD with two flavors of dynamical quarks this {\em string breaking} effect was not visible \cite{pot:CPPACS,pot:UKQCD2}. The distance $r_{\rm b}$ around which the potential should start flattening off could be estimated in the quenched approximation \cite{reviews:beauty}: \bes r_{\rm b}\approx2.7\,\rnod \,\,\, \mbox{(in QCD)}\,, \ees where $\rnod$ is the reference scale defined in \cite{pot:r0}. This is in agreement with an estimate from full QCD simulations \cite{pot:UKQCD2}. The same behavior of the potential is, of course, expected in the non-Abelian Higgs model in the confinement phase. While the potential in the large distance range could not be calculated in early simulations with gauge group $\SUtwo$, they yielded some qualitative evidence for screening of the potential \cite{Higgs:Aachen1,Higgs:Aachen2}.
In our work, we compute the potential in the Higgs model and observe string breaking. A recent study in the three-dimensional $\SUtwo$ Higgs model \cite{pot:higgs_3d} has reached very similar conclusions to what we find in four dimensions. \section{Calculation of the potential} For our investigation of the static potential in the $\SUtwo$ Higgs model we choose in the conventional bare parameter space \cite{Higgs:Montvay1} the point $\beta\,=\,2.2$, $\kappa\,=\,0.274$ and $\lambda\,=\,0.5$. It lies in the confinement phase of the model, fairly close to the phase transition, where the properties are similar to those of QCD. The lattice resolution is of roughly the same size as the one used in the QCD studies: we obtain $r_0 / a\,=\,2.78 \pm 0.04$.\\ The general strategy for determining the static potential was first applied in \cite{adjpot:su2michael}. The correlation functions that we used are schematically illustrated in \fig{f_corr}. We measure a symmetric matrix correlation function $C_{ij}(r,t)$, where the indices $i$ and $j$ refer to the space-like parts of the correlation functions. These consist of a Wilson line or two Higgs fields. For fixed spatial separation $r$, the potential $V(r)$ is extracted by solving the generalized eigenvalue problem \cite{phaseshifts:LW}: \bes\label{genev} C(t)v_{\alpha}(t,t_0) & = & \lambda_{\alpha}(t,t_0)C(t_0)v_{\alpha}(t,t_0) \, , \ees with $\lambda_{\alpha} > \lambda_{\alpha+1}$. The ground state energy $V(r)\equiv V_0(r)$ and the excited-state energies $V_1(r),\;...$ are then given by \bes\label{potentials} a V_{\alpha}(r) & = & \ln(\lambda_{\alpha}(t-a,t_0) /\lambda_{\alpha}(t,t_0)) \nonumber \\ & & +\rmO\left(\rme^{-(V_N(r) - V_{\alpha}(r)) t}\right) \, . \ees Here, $N$ is the rank of the matrix $C$. In order to suppress the correction term in eq. (\ref{potentials}) we use smeared gauge and Higgs fields at different smearing levels. In this way the rank of $C$ is increased to $N=7$. For more details see \cite{pot:alpha}.
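To make the variational procedure of eqs. (\ref{genev}) and (\ref{potentials}) concrete, the following Python sketch (our illustration, using synthetic correlator data rather than gauge configurations, and lattice units $a=1$) solves the generalized eigenvalue problem by Cholesky-whitening with $C(t_0)$ and forms the effective energies $aV_\alpha(t)=\ln(\lambda_\alpha(t-a,t_0)/\lambda_\alpha(t,t_0))$.

```python
import numpy as np

def gevp_energies(C, t0=1):
    # C: array of shape (T, N, N), C[t] a symmetric correlator matrix.
    # With C(t0) = L L^T, the generalized problem C(t) v = lam C(t0) v
    # becomes an ordinary symmetric eigenproblem for L^{-1} C(t) L^{-T}.
    L = np.linalg.cholesky(C[t0])
    Linv = np.linalg.inv(L)
    lam = np.array([np.sort(np.linalg.eigvalsh(Linv @ C[t] @ Linv.T))[::-1]
                    for t in range(C.shape[0])])
    # effective energies a V_alpha(t) = ln(lam(t-1) / lam(t))
    return np.log(lam[:-1] / lam[1:])

# Synthetic two-state correlator: C(t) = W diag(e^{-E t}) W^T
E = np.array([0.4, 0.9])
W = np.random.default_rng(1).standard_normal((2, 2))
C = np.array([W @ np.diag(np.exp(-E * t)) @ W.T for t in range(8)])
aV = gevp_energies(C)   # rows with t >= t0 reproduce (0.4, 0.9)
```

In this exact two-state model the effective energies are $t$-independent for $t\geq t_0$; with Monte Carlo data one instead looks for plateaux in $t$, and the correction term in eq. (\ref{potentials}) governs how fast they set in.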
\begin{figure}[tb] \hspace{0cm} \vspace{-1.0cm} \centerline{\epsfig{file=plots/corrfuncts.eps,width=7.5cm}} \vspace{-0.0cm} \caption{{\small The correlation functions used to determine the static potential. The lines represent the Wilson lines, the filled circles the Higgs field.} \label{f_corr}} \end{figure} \section{String breaking} Our numerical results were obtained on a $20^4$ lattice with periodic boundary conditions. We computed all correlation functions up to $r_{\rm max}=t_{\rm max}=8a \sim 3 \, \rnod$ on 4240 field configurations. Statistical errors were reduced by replacing -- wherever possible -- the time-like links by the 1-link integral. \begin{figure}[tb] \hspace{0cm} \vspace{-1.0cm} \centerline{\psfig{file=plots/meson.ps,width=7.5cm}} \vspace{-0.0cm} \caption{{\small Comparison of the extraction of the mass $\mu$ of a static meson using the smearing operators defined in eqs. (\ref{smearophiggs1}) and (\ref{smearophiggs2}).} \label{f_meson}} \end{figure} \begin{figure}[tb] \hspace{0cm} \vspace{-1.0cm} \centerline{\psfig{file=plots/invariant.ps,width=7.5cm}} \vspace{-0.0cm} \caption{{\small The static potential in units of $\rnod$. The asymptotic value $2\mu$ has been subtracted to obtain a quantity free of self energy contributions which diverge like $\frac{1}{a}$.} \label{f_invariant}} \end{figure} We extracted the mass $\mu$ of a static meson using the variational method of \cite{phaseshifts:LW} from a correlation function with one straight time-like Wilson line and smeared Higgs fields at the ends. In \fig{f_meson} we compare two different smearing procedures for the Higgs field. The smearing operator ${\cal S}_1$ (triangles) is defined as: \bes\label{smearophiggs1} {\cal S}_1\Phi(x) & = & \Phi(x)+\sum_{|x-y|=a \atop x_0=y_0}U(x,y)\Phi(y) \,, \ees where $\Phi(x)$ is the complex Higgs field and $U(x,y)$ is the gauge field link connecting $y$ with $x$. 
The smearing operator ${\cal S}_2$ (circles) is defined as: \bes\label{smearophiggs2} {\cal S}_2\Phi(x) & = & {\cal P}\{{\cal P}\Phi(x) + {\cal P}\sum_{|x-y|=\sqrt{2}a \atop x_0=y_0} \overline{U}(x,y)\Phi(y) \nonumber \\ & & + {\cal P}\sum_{|x-z|=\sqrt{3}a \atop x_0=z_0} \overline{U}(x,z)\Phi(z)\} \, , \ees where ${\cal P}\Phi = \Phi/\sqrt{\Phi^{\dagger}\Phi}$ and $\overline{U}(x,y)$ represents the average over the shortest link connections between $y$ and $x$. The application of the smearing operator is iterated, obtaining a sequence $\Phi^{(m)}(x)\,=\,{\cal S}^m\Phi(x)$ with which the correlation function is evaluated. Using ${\cal S}_2$ the contributions from the excited states are suppressed much more strongly: at $t=7a$ we read off $a\mu\,=\,0.7001\pm 0.0014$, which agrees fully with the value at $t=6a$. We computed the potential in units of $\rnod$. Considering in particular the combination $V(r)-2\mu$, one has a quantity free of divergent self-energy contributions. It is shown in \fig{f_invariant}. The expected string breaking is clearly observed for distances $r>\rb\approx 1.8\,\rnod$. Around $r\approx\rb$, the potential changes rapidly from an almost linear rise to an almost constant behavior. We want to point out that if one considers only the sub-block of the matrix correlation function corresponding to the (smeared) Wilson loops, the potential estimates have large correction terms at long distances. One might then extract a potential which is too high. This {\em might} explain why string breaking has not been seen in QCD yet. This observation is confirmed by the overlap of the variationally determined ground state, characterized by $v_0$ in eq. (\ref{genev}), with the true ground state of the Hamiltonian. The overlap can be computed from the projected correlation function \bes\label{projcorr} \Omega(t) \; = \; v_0^{\rm T}C(t)v_0 \; = \; \sum_n \omega_n \rme^{-V_n(r) t} \, , \ees with normalization $\Omega(0)=1$.
Here, $n$ labels the states in the sector of the Hilbert space with 2 static sources. ``The overlap'' is an abbreviation commonly used to denote $\omega_0$. Considering the full matrix $C$, the overlap exceeds about 50\% for all distances; restricting the matrix to the Wilson sub-block, we find upper limits for the overlaps at $r>\rb$ of 5\% \cite{pot:alpha}. \section{Conclusions} We have introduced a method to compute the static potential at all relevant distances in gauge theories with scalar matter fields. We demonstrated that it can be applied successfully in the SU(2) Higgs model with parameters chosen to resemble the situation in QCD. It is then interesting to follow a line of constant physics towards smaller lattice spacings in order to check for cutoff effects and to be able to resolve the interesting transition region in the potential. From the matrix correlation function one can also determine excited-state energies. A precise determination of the excited potential at all distances needs more statistics. One expects that the transition region of the potential can be described phenomenologically by a level crossing (as a function of $r$) of the ``two meson state'' and the ``string state'' \cite{drummond:levelcross}. We are planning to investigate this in more detail. So far, we can only say that for $r\approx\rb$ the two levels $V_1(r)$ and $V(r)$ are close. Of course, it is of considerable interest to apply this method to QCD with dynamical fermions. The only difficulty expected is one of statistical accuracy. The proper correlation functions can be constructed along the lines of refs. \cite{adjpot:su2michael,pot:higgs_3d,pot:alpha}.
\section*{Abstract} This paper is devoted to the stability analysis of spatially interconnected systems (SISs) via the sum-of-squares (SOS) decomposition of positive trigonometric polynomials. For each spatial direction of SISs, three types of interconnected structures are considered. Inspired by the idea of rational parameterization and the robust stabilizability function, necessary and sufficient conditions are derived for establishing the stability of SISs with two different combined topologies, respectively. For these results, the primary issue concerns the global or local positivity of trigonometric polynomials. SOS decomposition and generalized trace parameterization of positive trigonometric polynomials are utilized so that the addressed problems can be quantified by two semidefinite programs (SDPs). The proposed methods are applicable to all possible interconnected structures due to the assumption of spatial reversibility. Numerical examples are given to illustrate the efficiency of the derived theoretical results. \section{Introduction} Spatially interconnected systems (SISs), also referred to as spatially distributed systems, are generally regarded as large-scale interconnected systems consisting of multiple spatially distributed similar subunits that only interact with their neighbors. Many practical applications, such as the heat equation \cite{Bamieh2002}, vehicle platoons \cite{Knorn2013}, and flexible structures \cite{Liu2016}, can be captured by such interconnected systems. There are also many other excellent state-space representations in the literature that can be used to describe the dynamics of these systems, such as the Roesser model \cite{Roesser1975,Ahn2015}, the Fornasini-Marchesini model \cite{Fornasini1976,Zhang2017}, and the multidimensional (MD) model \cite{Xu2012,Wang2017}. The SIS model was originally proposed in \cite{DAndrea2003}, where the physical relevance is fully taken into account.
As a result, not only causal but also noncausal spatial dynamics appear in SISs, leading to a distinct framework for system description. Considerable efforts have been made under this framework \cite{AlTaie2016,Zhou2008,Kim2013,Heijmans2017,Xu2018}. One of the primary problems in SISs is stability analysis, for which a generally accepted method consists of recasting SISs as an infinite-dimensional system \cite{Curtain1995,Bamieh2002}, and employing the Lyapunov theory of distributed parameter systems to obtain linear operator inequalities (LOIs) analysis conditions \cite{DAndrea2003,Dullerud2004,Recht2004,Fridman2009}. However, it must be mentioned that the existing LOI results are generally sufficient, but not necessary, for stability analysis (and accordingly, for controller synthesis). In \cite{Zhou2008}, the idea of parameter-dependent linear matrix inequalities is employed to derive necessary and sufficient analysis conditions for SISs, whereas it remains challenging to find an efficient method for determining the degree of the related matrix polynomials. Recently, the sum-of-squares (SOS) decomposition of positive polynomials has been intensively investigated, and widely utilized by the authors of \cite{Chesi2010,Chesi2014,Chesi2016,Chesi2019} to obtain nonconservative analysis conditions for 2D mixed continuous-discrete-time systems. These encouraging results motivate us to develop their counterparts in SISs, which is not a trivial task. As mentioned above, the physical relevance of SISs results in an essential difference for stability analysis. Mathematically, ordinary polynomials are replaced by polynomials in complex parameters on the unit circle. Moreover, the arbitrary number of spatial directions converts univariate polynomials to the multivariate case, imposing some limitations on the SOS decomposition.
In this work, the well-established theory of trigonometric polynomials is utilized to address these problems via SOS decomposition of trigonometric counterparts \cite{Dumitrescu2006,Dumitrescu2007,Dumitrescu2017}. The contribution of this note is threefold. Firstly, a necessary and sufficient stability condition is derived for SISs of full infinite interconnections, including a stability test of a constant matrix and a global positivity test of a trigonometric polynomial over the unit $L$-circle. Secondly, the stability of SISs with mixed infinite and periodic interconnections is established via the Routh-Hurwitz criterion, which involves an emptiness check of a set and local positivity tests of a series of trigonometric polynomials over given domains. Finally, with the aid of the generalized trace parameterization of trigonometric polynomials, two semidefinite programs (SDPs) are proposed to quantify the derived theoretical results. The rest of this paper is organized as follows. Preliminaries and the problem formulation are introduced in Section \ref{sec2}. Section \ref{sec3} derives the main results of this note, and numerical examples are presented in Section \ref{sec4} to illustrate the proposed methodologies. Conclusions are included in Section \ref{sec5}. \emph{Notation.} The notation used throughout is reasonably standard. $\mathbb{Z}$ stands for the set of integers, and the superscript `$L$' denotes the Cartesian product of $L$ identical sets. The notation $\mathbb{T}$ is used to indicate the unit circle, i.e., $\mathbb{T}=\left\{z\in \mathbb{C}:~|z|=1\right\}$. The sets of non-negative real numbers and of complex numbers are denoted by $\mathbb{R}^+$ and $\mathbb{C}$, respectively. $\mathbb{R}^\bullet$ denotes a real-valued vector whose size is not relevant to the discussion, where $\mathbb{R}$ is the set of real numbers. The sets of $n\times m$ complex matrices and real matrices are represented by $\mathbb{C}^{n\times m}$ and $\mathbb{R}^{n\times m}$, respectively.
$\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and ceiling operators, respectively. Let $M\in \mathbb{C}^{n\times m}$ be a given complex matrix; $M^T$ and $M^*$ represent its transpose and complex conjugate transpose, respectively. For matrices $M$ and $N$, $M\otimes N$ indicates their Kronecker product. The trace of matrix $M$ is denoted by $tr(M)$. For scalars $a$, $b$, $c$, $d$, the notation $\left \llbracket \cdot \right \rrbracket$ denotes \begin{equation} \left \llbracket \begin{array}{cc} a & b\\ c & d \end{array} \right \rrbracket = \frac{bc - ad}{c}. \end{equation} Finally, the vector $[x_1^*~~x_2^*]^*$ will be abbreviated as $(x_1;x_2)$ for simplicity. \section{Preliminaries} \label{sec2} \subsection{System model and Problem formulation} In this paper, the signals considered are vector-valued functions indexed by $L+1$ independent variables, i.e., $x=x(t,k_1,...,k_L)$, in which $t\in \mathbb{R}^{+}$ denotes the temporal variable, and $k_i \in \mathbb{D}_i$ denotes the $i$-th spatial variable, where $\mathbb{D}_i$ stands for any one of the following three sets: $\mathbb{Z}$ for infinite spatial extent in the $i$-th dimension, $\mathbb{Z}_{N_i}$ for periodicity of period $N_i$, and $\left\{1,...,N_i \right\}$ for finite extent interconnection. According to the structure of signals, the following definitions are first recalled. The $L$-tuple $(k_1,...,k_L)$ is abbreviated as $\mathbf{k}$ for notational simplicity. \begin{definition} \cite{DAndrea2003} The space $\ell_2$ is the set of functions $x$ mapping $\mathbb{D}_1\times \cdots \times \mathbb{D}_L$ to $\mathbb{R}^{\bullet}$ for which the following inequality is satisfied:\begin{equation} \sum_{k_1\in \mathbb{D}_1}\cdots \sum_{k_L\in \mathbb{D}_L}x^*(\mathbf{k})x(\mathbf{k})<\infty.
\end{equation} The inner product on $\ell_2$ is defined as \begin{equation} {\left\langle {x,y} \right\rangle _{{\ell _2}}} := \sum\limits_{{k_1} \in \mathbb{D}_1} \cdots \sum\limits_{{k_L} \in \mathbb{D}_L} {{x^*}(\mathbf{k})y(\mathbf{k})}, \end{equation} with corresponding norm $\|x\|_{\ell_2}:=\sqrt{\langle x,x \rangle_{\ell_2}}$. \end{definition} For a fixed temporal variable $t$ and spatial variables $\mathbf{k}$, $x(t,\mathbf{k})$ is a real-valued vector, and $x(t)$ denotes a signal in $\ell_2$. The following continuous-time SIS is considered in this paper: \begin{equation}\label{Sigma1} \begin{aligned} (\Sigma):~\begin{bmatrix} \frac{\partial x(t,\mathbf{k})}{\partial t}\\ w(t,\mathbf{k}) \end{bmatrix}&=\begin{bmatrix} A_{TT} & A_{TS}\\A_{ST} & A_{SS} \end{bmatrix}\begin{bmatrix} x(t,\mathbf{k}) \\ v(t,\mathbf{k}) \end{bmatrix},\\ x(0,\mathbf{k}) & = x_{0}(\mathbf{k}) , \end{aligned} \end{equation} where $x(t,\mathbf{k})\in \mathbb{R}^{n_0}$ denotes the state vector, and $w(t,\mathbf{k})$, $v(t,\mathbf{k})\in \mathbb{R}^{n}$ are interconnection variables between subsystems of the form \begin{equation*} \begin{aligned} v(t,\mathbf{k}) &= \left(v_{1}(t,\mathbf{k});v_{-1}(t,\mathbf{k});\cdots;v_{L}(t,\mathbf{k});v_{-L}(t,\mathbf{k})\right),\\ w(t,\mathbf{k}) &= \left(w_{1}(t,\mathbf{k});w_{-1}(t,\mathbf{k});\cdots;w_{L}(t,\mathbf{k});w_{-L}(t,\mathbf{k})\right), \end{aligned} \end{equation*} in which $v_{i}(t,\mathbf{k})$, $w_{i}(t,\mathbf{k}) \in \mathbb{R}^{n_i}$, $v_{-i}(t,\mathbf{k})$, $w_{-i}(t,\mathbf{k})\in \mathbb{R}^{n_{-i}}$ ($i=1,...,L$), and $\sum_{i=1}^{L}(n_i+n_{-i})=n$. As indicated by the different definitions of $\mathbb{D}_i$, three types of spatial interconnections are considered for the $i$-th spatial direction.
\begin{enumerate}[(1)] \item Infinite Interconnection ($\mathbb{D}_i=\mathbb{Z}$): \begin{equation} \begin{cases} v_{i}(t,\mathbf{k}|_{k_i=l+1})=w_i(t,\mathbf{k}|_{k_i=l}),~\forall l \in \mathbb{Z},\\ v_{-i}(t,\mathbf{k}|_{k_i=l-1}) = w_{-i}(t,\mathbf{k}|_{k_i=l}),~\forall l \in \mathbb{Z}. \end{cases} \end{equation} \item Periodic Interconnection ($\mathbb{D}_i=\mathbb{Z}_{N_i}$): \begin{equation} \begin{cases} v_{i}(t,\mathbf{k}|_{k_i=l+1}) = w_i(t,\mathbf{k}|_{k_i=l}),~1\leq l \leq N_i - 1,\\ v_{-i}(t, \mathbf{k}|_{k_i=l-1})=w_{-i}(t, \mathbf{k}|_{k_i=l}),~2 \leq l \leq N_i,\\ v_{i}(t,\mathbf{k}|_{k_i=1}) = w_i(t,\mathbf{k}|_{k_i=N_i}),\\ v_{-i}(t, \mathbf{k}|_{k_i=N_i})=w_{-i}(t, \mathbf{k}|_{k_i=1}). \end{cases} \end{equation} \item Spatially $M$-reversible finite extent system ($\mathbb{D}_i = \left\{ 1,...,N_i\right\}$): \begin{equation} \begin{cases} v_{i}(t,\mathbf{k}|_{k_i=l+1})=w_i(t,\mathbf{k}|_{k_i=l}),~1\leq l \leq N_i -1,\\ v_{-i}(t, \mathbf{k}|_{k_i=l-1})=w_{-i}(t, \mathbf{k}|_{k_i=l}),~2\leq l \leq N_i,\\ v_i(t,\mathbf{k}|_{k_i=1}) = M_iw_{-i}(t,\mathbf{k}|_{k_i=1}), \\ v_{-i}(t,\mathbf{k}|_{k_i=N_i}) = M_i^{-1}w_i(t,\mathbf{k}|_{k_i=N_i}). \\ \end{cases} \end{equation} \end{enumerate} Here, $$\mathbf{k}|_{k_i=l}=(k_1,...,k_{i-1},l,k_{i+1},...,k_L),$$ $M_i$ is a nonsingular matrix encoding the boundary conditions for finite spatial extent in dimension $i$ (the boundary conditions matrix), and the finite extent system is restricted to be spatially $M$-reversible (see \cite{Langbort2005} for details). For a schematic illustration, a basic building block for $L=1$ and its corresponding three types of interconnections are depicted in Fig.~\ref{fig:1DInfinite}-Fig.~\ref{fig:1DFinite}.
\begin{figure}[!t] \centering \includegraphics[width=0.4in]{1DBasicBlock-eps-converted-to.pdf} \hfil \includegraphics[width=1.5in]{1DInfinite-eps-converted-to.pdf} \caption{Basic building block for $L=1$ and its infinite interconnection.} \label{fig:1DInfinite} \end{figure} \begin{figure}[!t] \centering \includegraphics[height=1.8cm]{1DPeriodic-eps-converted-to.pdf} \caption{Periodic interconnection for $L=1$.} \label{fig:1DPeriodic} \end{figure} \begin{figure}[!t] \centering \includegraphics[height=1.8cm]{1DFiniteExtent-eps-converted-to.pdf} \caption{Finite extent system for $L=1$ with boundary condition matrix $M$.} \label{fig:1DFinite} \end{figure} As explored in \cite{DAndrea2003,Langbort2005}, all interconnected systems of interest can be captured by the following abstract differential equation on the Hilbert space $\ell_2$: \begin{equation}\label{Sigma2} \begin{aligned} \begin{bmatrix} \frac{\partial x(t)}{\partial t}\\ \Delta v(t) \end{bmatrix}&=\begin{bmatrix} A_{TT} & A_{TS}\\A_{ST} & A_{SS} \end{bmatrix}\begin{bmatrix} x(t) \\ v(t) \end{bmatrix},\\ x(0)&=x_0\in \ell_2, \end{aligned} \end{equation} where \begin{equation} \Delta = \begin{bmatrix} \Delta_1 & & \\ & \ddots & \\ & & \Delta_{L} \end{bmatrix}, \end{equation} and $\Delta_{i}$ denotes the structured operator in the $i$-th spatial dimension: \begin{equation} \Delta_{i}=\begin{cases} diag\left(\mathbf{S}_iI_{n_i},\mathbf{S}^{-1}_iI_{n_{-i}} \right),~\text{for infinite}\\ \text{or periodic spatial extent,}\\ \mathcal{C}_i,~\text{for finite spatial extent,} \end{cases} \end{equation} where $\mathbf{S}_i$ and $\mathcal{C}_i$ are defined in the same manner as the corresponding operators in \cite{Langbort2005}. The system (\ref{Sigma2}) is said to be well-posed if the bounded linear operator $\Delta - A_{SS}$ is invertible on $\ell_2$. Well-posedness can be interpreted as the existence and uniqueness of the solution. 
Throughout the rest of this paper, system (\ref{Sigma2}) is assumed to be well-posed, since stability analysis is only meaningful in the presence of well-posedness. \begin{assumption} $\Delta-A_{SS}$ is invertible on $\ell_2$. \end{assumption} A well-posed system (\ref{Sigma2}) has a unique solution for any $x_0\in \ell_2$: \begin{equation} x(t) = \exp(\mathbf{A}t)x_0, \end{equation} where $\mathbf{A} := A_{TT} + A_{TS}(\Delta-A_{SS})^{-1}A_{ST}$ is a bounded operator on $\ell_2$ and generates the strongly continuous semigroup $\exp(\mathbf{A}t)$ given by \begin{equation} \exp(\mathbf{A}t):=\sum_{n=0}^{\infty}\frac{(\mathbf{A}t)^n}{n!}. \end{equation} Under the assumption of well-posedness, the exponential stability of $\Sigma$ is equivalent to that of system (\ref{Sigma2}), i.e., there exist $\alpha>0$, $\beta>0$, such that the continuous semigroup $\exp(\mathbf{A}t)$ satisfies \begin{equation}\label{estable} \|\exp(\mathbf{A}t)\|_{\ell_2}\leq \alpha \exp(-\beta t),~\forall t\in \mathbb{R}^{+}. \end{equation} The problem addressed here is to study the exponential stability of $\Sigma$ by resorting to sum-of-squares (SOS) techniques for trigonometric polynomials. \subsection{Trigonometric polynomials} To derive the main results of this paper, some concepts relevant to trigonometric polynomials are presented in the sequel. Let $\mathbf{z}=(z_1,...,z_L)$ be the $L$-dimensional complex variable defined on the unit $L$-circle $\mathbb{T}^L=\left\{ \mathbf{z}\in \mathbb{C}^L :~|z_i|=1,i=1,...,L \right\}$, and let $\mathbf{z}^{\mathbf{d}}=z^{d_1}_1 \cdots z^{d_L}_L $ be an $L$-variate monomial of degree $\mathbf{d}=\left[ d_1,\cdots, d_L \right] \in \mathbb{Z}^L$. 
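To build finite-dimensional intuition for the decay estimate (\ref{estable}), the following sketch approximates $\exp(\mathbf{A}t)$ by its truncated power series for a $2\times 2$ Hurwitz matrix and observes the decay of its norm. The helper names and the particular matrix (which is $A(\mathbf{1}_L)$ from Example 1 below; any Hurwitz matrix would do) are our own illustrative choices.

```python
import math

# Illustrative finite-dimensional sketch: truncated power series for
# exp(A t) with a 2x2 Hurwitz matrix A, observing the exponential decay
# ||exp(A t)|| <= alpha * exp(-beta t) postulated in the stability definition.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=60):
    """Truncated series exp(A t) = sum_n (A t)^n / n! for a 2x2 matrix."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the n = 0 term
    P = [[1.0, 0.0], [0.0, 1.0]]   # P holds (A t)^n / n! after n steps
    for n in range(1, terms):
        P = mat_mul(P, At)
        P = [[P[i][j] / n for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

def frob(X):
    return math.sqrt(sum(X[i][j] ** 2 for i in range(2) for j in range(2)))

A = [[-0.5, 0.5], [-0.25, -1.0]]   # Hurwitz: eigenvalues -0.75 +/- 0.25i
norms = [frob(expm(A, t)) for t in (0.0, 2.0, 4.0, 8.0)]
# norms decays roughly like exp(-0.75 t)
```

The infinite-dimensional statement is of course about the operator semigroup on $\ell_2$; the sketch only illustrates the shape of the bound.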
In this paper, we deal with real-valued (Hermitian) trigonometric polynomials \begin{equation}\label{RP} F(\mathbf{z}) = \sum^{\mathbf{n}}_{\mathbf{d}=-\mathbf{n}}f(\mathbf{d})\mathbf{z}^{\mathbf{d}},~f(-\mathbf{d}) = f^*(\mathbf{d}), \end{equation} where $\mathbf{n}=(\mathbf{n}_1,...,\mathbf{n}_L)\in \mathbb{Z}^L$ gathers the maximum degrees of the variables $z_i$ and is regarded as the degree of $F(\mathbf{z})$. The set of real-valued trigonometric polynomials is denoted by $\mathbb{RP}[\mathbb{T}^L]$. \begin{definition}\cite{Dumitrescu2007} A trigonometric polynomial $F\in \mathbb{RP}[\mathbb{T}^L]$ is said to be sum-of-squares (SOS) if it can be written as \begin{equation} F(\mathbf{z}) = \sum^r_{l=1}V_{l}(\mathbf{z})V^*_l(\mathbf{z}^{-1}), \end{equation} where the $V_l(\mathbf{z})$ are positive orthant polynomials, i.e., they contain only monomials $\mathbf{z}^{\mathbf{d}}$ with $\mathbf{d}\geq 0$, and $V^*_l(\mathbf{z})$ denotes the polynomial with complex-conjugated coefficients. \end{definition} It is obvious that any SOS polynomial is nonnegative on the unit $L$-circle, and an important theoretical result is that any polynomial (\ref{RP}) positive on the unit $L$-circle is SOS; see \cite{Dumitrescu2007} for details. Moreover, consider the set \begin{equation} \mathcal{D} = \left\{ \mathbf{z}\in \mathbb{T}^L: D_i(\mathbf{z})\geq 0, i=1:\mathcal{V} \right\}, \end{equation} defined by the positivity of some given trigonometric polynomials $D_i(\mathbf{z})$, $i=1:\mathcal{V}$. The following lemma characterizes the trigonometric polynomials that are positive on $\mathcal{D}$. \begin{lma} \cite{Dumitrescu2006} \label{lma1} If a polynomial $F\in \mathbb{RP}[\mathbb{T}^L]$ is positive on $\mathcal{D}$, then there exist SOS polynomials $H_i(\mathbf{z})$, $i=0:\mathcal{V}$, such that \begin{equation} F(\mathbf{z}) = H_0(\mathbf{z}) + \sum_{i=1}^{\mathcal{V}} D_i(\mathbf{z}) H_i(\mathbf{z}). 
\end{equation} \end{lma} \section{Main results} \label{sec3} \subsection{Full Infinite Interconnections} In this section, the spatial interconnections are restricted to be of infinite extent in each spatial direction. It follows from the results of \cite{Bamieh2002} that the stability of system (\ref{Sigma1}) can be equivalently checked by looking at its Fourier-transformed form, i.e., the system $\Sigma$ with infinite spatial interconnections is exponentially stable if and only if $A(\mathbf{z})$ is Hurwitz for all $\mathbf{z}\in \mathbb{T}^L$ (i.e., all its eigenvalues have negative real parts), where \begin{align}\label{hatA} A(\mathbf{z})&=A_{TT}+A_{TS}(\Delta(\mathbf{z})-A_{SS})^{-1}A_{ST},\\ \Delta(\mathbf{z}) &= diag\left\{\left.\begin{bmatrix} z_iI_{n_i} & \\ & z^{-1}_{i}I_{n_{-i}} \end{bmatrix}\right|_{i=1}^{L}\right\}. \end{align} Note that $\Delta(\mathbf{z})-A_{SS}$ is invertible for all $\mathbf{z}\in \mathbb{T}^L$ due to the assumption of well-posedness (see \cite{Langbort2005} for details). Inspired by the idea of rational parameterization \cite{Chesi2019}, let us define \begin{equation} h(\mathbf{z}) = det(\Delta(\mathbf{z}) - A_{SS}) \end{equation} and let $H(\mathbf{z})$ be the matrix polynomial \begin{equation} H(\mathbf{z}) = A_{TT}h(\mathbf{z}) + A_{TS}adj\left( \Delta(\mathbf{z}) - A_{SS} \right)A_{ST}; \end{equation} then $A(\mathbf{z})$ can be expressed as \begin{equation}\label{AHh} A(\mathbf{z}) = \frac{H(\mathbf{z})}{h(\mathbf{z})}. \end{equation} The following theorem provides a necessary and sufficient condition for establishing the stability of interest. 
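Before stating the theorem, the identity (\ref{AHh}) can be sanity-checked numerically in the simplest setting. The sketch below uses hypothetical scalar data of our own ($n_0=1$, $L=1$, a single spatial channel, so $\Delta(z)=z$ and the adjugate of a $1\times 1$ matrix is $1$), chosen so that $A(z)$ is Hurwitz on the whole unit circle.

```python
import cmath

# Minimal sketch of the rational parameterization A(z) = H(z)/h(z) in the
# scalar case. The data below is hypothetical (ours): A_TT = -1, A_TS = 1,
# A_ST = 0.1, A_SS = 0.5, and Delta(z) = z.

A_TT, A_TS, A_ST, A_SS = -1.0, 1.0, 0.1, 0.5

def A(z):           # A(z) = A_TT + A_TS (Delta(z) - A_SS)^{-1} A_ST
    return A_TT + A_TS * A_ST / (z - A_SS)

def h(z):           # h(z) = det(Delta(z) - A_SS); a 1x1 determinant
    return z - A_SS

def H(z):           # H(z) = A_TT h(z) + A_TS adj(Delta(z) - A_SS) A_ST
    return A_TT * h(z) + A_TS * 1.0 * A_ST   # adj of a 1x1 matrix is 1

for k in range(12):
    z = cmath.exp(2j * cmath.pi * k / 12)
    assert abs(A(z) - H(z) / h(z)) < 1e-12   # the identity A = H / h
    assert A(z).real < 0                     # "Hurwitz" in the 1x1 case
```

Since $|0.1/(z-0.5)|\le 0.2$ on the circle, the real part of $A(z)$ stays below $-0.8$, so the Hurwitz check passes for every sampled $z$.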
\begin{theorem}\label{thm1} The complex matrix $A(\mathbf{z})$ is Hurwitz over $\mathbb{T}^L$ if and only if \begin{enumerate}[(i)] \item \label{c1} $A(\mathbf{z})|_{\mathbf{z}=\mathbf{1}_L}$ is Hurwitz; \item \label{c2} $F(\mathbf{z}):=det(-W(\mathbf{z}))$ is positive on the unit $L$-circle $\mathbb{T}^L$, where \begin{align} W(\mathbf{z})& = K(\mathbf{z}) \otimes I_{n_0} + I_{n_0} \otimes \overline{K(\mathbf{z})},\\ K(\mathbf{z})&= H(\mathbf{z})\overline{h(\mathbf{z})}. \end{align} \end{enumerate} \end{theorem} \emph{\textbf{Proof.} ``$\Rightarrow$" Suppose that $A(\mathbf{z})$ is Hurwitz over $\mathbb{T}^L$; then it is obvious that $A(\mathbf{z})|_{\mathbf{z}=\mathbf{1}_L}$ is Hurwitz. Let \begin{equation} \hat{W}(\mathbf{z}) = A(\mathbf{z})\otimes I_{n_0}+I_{n_0}\otimes \overline{A(\mathbf{z})}. \end{equation} Then it follows from the properties of the Kronecker product that \begin{equation} spec(\hat{W}(\mathbf{z})) = \left\{ \lambda_k(\mathbf{z}) + \overline{\lambda_l(\mathbf{z})}, k,l=1,...,n_0 \right\}, \end{equation} where $\lambda_i(\mathbf{z})$, $i=1,...,n_0$, denotes the $i$-th eigenvalue of $A(\mathbf{z})$; thus $\hat{W}(\mathbf{z})$ is Hurwitz for all $\mathbf{z}\in \mathbb{T}^L$. It is observed that the eigenvalues of $\hat{W}(\mathbf{z})$ are symmetric with respect to the real axis and have negative real parts, which implies that \begin{equation} \hat{F}(\mathbf{z}) = det(-\hat{W}(\mathbf{z})) \end{equation} is real and positive over $\mathbb{T}^L$. Thus, \begin{equation}\label{FhatF} F(\mathbf{z})=det\left(-W(\mathbf{z})\right) = |h(\mathbf{z})|^{2n^2_0} \hat{F}(\mathbf{z}) \end{equation} is positive.} \emph{``$\Leftarrow$" Suppose that $A(\mathbf{z})|_{\mathbf{z}=\mathbf{1}_L}$ is Hurwitz, and $F(\mathbf{z})$ is positive on the unit $L$-circle. It follows from (\ref{FhatF}) that $\hat{F}(\mathbf{z})$ is positive. Assume that there exists $\mathbf{z}_1\in \mathbb{T}^L$ such that $A(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_1}$ is not Hurwitz. 
Since $A(\mathbf{z})|_{\mathbf{z}=\mathbf{1}_L}$ is Hurwitz, the continuity of the eigenvalues of $A(\mathbf{z})$ implies that there exists $\mathbf{z}_2\in \mathbb{T}^L$ such that $A(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}$ has some eigenvalue with zero real part, which in turn implies that $\hat{W}(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}$ is singular: if $\lambda$ is an eigenvalue of $A(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}$ with zero real part, then $\lambda + \overline{\lambda}=0$ is an eigenvalue of $\hat{W}(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}$. Thus, $$\hat{F}(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}=det\left(-\hat{W}(\mathbf{z})|_{\mathbf{z}=\mathbf{z}_2}\right) =0,$$ which contradicts the positivity of $\hat{F}(\mathbf{z})$. The proof is completed. } $\hfill \Box$ Theorem \ref{thm1} shows that the stability of system $\Sigma$ with infinite interconnections can be equivalently checked via two sub-conditions. The former is readily tested, while the latter requires checking the positivity of a polynomial $F(\mathbf{z})$ over $\mathbb{T}^L$. Since $F(\mathbf{z})$ is a real-valued trigonometric polynomial on the compact set $\mathbb{T}^L$, it is positive over $\mathbb{T}^L$ if and only if there exists $\epsilon>0$ such that \begin{equation}\label{PositiveRelaxed} F(\mathbf{z}) - \epsilon >0, ~\forall \mathbf{z} \in \mathbb{T}^L. \end{equation} From the previous discussion, it follows that $F(\mathbf{z})>0$ if and only if there exists $\epsilon>0$ such that \begin{equation} F(\mathbf{z}) - \epsilon \text{ is SOS. } \end{equation} \subsection{Mixed Interconnections}\label{Sec3_2} In this section, we present the counterpart of the previous result for mixed infinite-periodic interconnected systems, together with its extension to spatially reversible finite extent systems. 
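The spectral step in the proof of Theorem \ref{thm1}, namely $spec(\hat{W})=\{\lambda_k+\overline{\lambda_l}\}$ and the resulting positivity of $det(-\hat{W})$, can be checked numerically. The sketch below (the hand-rolled `kron`/`det` helpers and the fixed $2\times 2$ Hurwitz matrix, which is $A(\mathbf{1}_L)$ from Example 1, are our own choices) verifies $det(-\hat{W})=\prod_{k,l}-(\lambda_k+\overline{\lambda_l})>0$.

```python
import cmath

# Numerical check of the Kronecker-sum step: for a 2x2 Hurwitz matrix A,
# W_hat = A (x) I + I (x) conj(A) has spectrum {l_k + conj(l_l)}, hence
# det(-W_hat) > 0. A is a fixed Hurwitz example; kron/det are tiny helpers.

def kron(X, Y):
    m = len(Y)
    n = len(X) * m
    return [[X[i // m][j // m] * Y[i % m][j % m] for j in range(n)]
            for i in range(n)]

def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n, d = len(M), 1.0 + 0j
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][j] - f * M[c][j] for j in range(n)]
    return d

A = [[-0.5 + 0j, 0.5 + 0j], [-0.25 + 0j, -1.0 + 0j]]
I2 = [[1 + 0j, 0 + 0j], [0 + 0j, 1 + 0j]]
Ac = [[x.conjugate() for x in row] for row in A]
KA, KB = kron(A, I2), kron(I2, Ac)
W_hat = [[KA[i][j] + KB[i][j] for j in range(4)] for i in range(4)]
neg_det = det([[-x for x in row] for row in W_hat])

# Eigenvalues of A via the quadratic formula, and the predicted product.
tr = A[0][0] + A[1][1]
dA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
s = cmath.sqrt(tr * tr - 4 * dA)
lams = [(tr + s) / 2, (tr - s) / 2]
prod = 1.0 + 0j
for lk in lams:
    for ll in lams:
        prod *= -(lk + ll.conjugate())
```

Here the eigenvalues are $-0.75\pm 0.25i$, and both computations of $det(-\hat{W})$ return the same positive real number, as the proof predicts.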
Consider the system $\Sigma$ in (\ref{Sigma1}) with \begin{equation} \mathbb{D}_i=\begin{cases} \mathbb{Z},~i=1,...,l,\\ \mathbb{Z}_{N_i}+1,i=l+1,...,L, \end{cases} \end{equation} which indicates that the first $l$ spatial coordinates are infinitely interconnected, and the last $L-l$ are periodic. If we define \begin{equation} \mathbb{S}_{i} = \left\{z\in \mathbb{T}:~z^{N_i}=1 \right\},~i=l+1,...,L, \end{equation} the system $\Sigma$ with this type of interconnections is exponentially stable if and only if \begin{equation}\label{PHurwitz} A(\mathbf{z})~\text{is Hurwitz,} ~\forall \mathbf{z}\in \mathbb{G}:=\mathbb{T}^{l} \times \mathbb{S}_{l+1} \times \cdots \times \mathbb{S}_{L}. \end{equation} Obviously, if the two sub-conditions derived in Theorem \ref{thm1} are satisfied, then (\ref{PHurwitz}) holds since $\mathbb{G}\subset \mathbb{T}^L$; however, to the best of our knowledge, the converse may not be true. The idea of robust stabilizability functions in \cite{Chesi2014} encourages us to find another trigonometric polynomial that is positive on $\mathbb{G}$ if and only if (\ref{PHurwitz}) holds. Let us define \begin{equation} K(\mathbf{z}) = H(\mathbf{z})\overline{h(\mathbf{z})}=|h(\mathbf{z})|^2 A(\mathbf{z}) \end{equation} and its characteristic polynomial \begin{equation} m(\lambda,\mathbf{z}) = det(\lambda I_{n_0} - K(\mathbf{z})). \end{equation} Note that $m(\lambda,\mathbf{z})$ can be expressed as \begin{equation} m(\lambda,\mathbf{z}) = \sum^{n_0}_{i=0} m_i(\mathbf{z})\lambda^i, \end{equation} where the $m_i(\mathbf{z})$ are trigonometric polynomials. 
Furthermore, we define \begin{equation} m_{conj}(\lambda,\mathbf{z}) = \sum_{i=0}^{n_0} \overline{m_i(\mathbf{z})} \lambda^i \end{equation} and \begin{equation} \label{varPhi} \varphi(\lambda,\mathbf{z}) = m(\lambda,\mathbf{z})m_{conj}(\lambda,\mathbf{z}); \end{equation} then $\varphi(\lambda,\mathbf{z})$ can be expressed as \begin{equation} \varphi(\lambda,\mathbf{z}) = \sum_{i=0}^{2n_0}\varphi_i(\mathbf{z})\lambda^i, \end{equation} in which the $\varphi_i(\mathbf{z})$ are real-valued trigonometric polynomials. The roots of $m_{conj}(\lambda,\mathbf{z})$ are the complex conjugates of those of $m(\lambda,\mathbf{z})$ and hence share the same real parts; thus the stability of interest can be established by investigating the roots of $\varphi(\lambda,\mathbf{z})$ in (\ref{varPhi}). The Routh--Hurwitz table of $\varphi(\lambda,\mathbf{z})$ can be written as \begin{equation} \begin{cases} e_{0,j}(\mathbf{z})= \varphi_{2n_0-2j}(\mathbf{z}),~\forall j=0,...,n_0,\\ e_{1,j}(\mathbf{z})= \varphi_{2n_0-2j-1}(\mathbf{z}),~\forall j=0,...,n_0-1, \end{cases} \end{equation} and \begin{equation} \begin{aligned} e_{i,j}(\mathbf{z})=\left \llbracket \begin{array}{cc} e_{i-2,0}(\mathbf{z}) & e_{i-2,j+1}(\mathbf{z})\\ e_{i-1,0}(\mathbf{z}) & e_{i-1,j+1}(\mathbf{z}) \end{array}\right \rrbracket,~\forall i&= 2,...,2n_0,\\ j&=0,1,... \end{aligned} \end{equation} Construct $\bar{e}_{i,0}(\mathbf{z})$ and $\hat{e}_{i,0}(\mathbf{z})$ from the obtained $e_{i,j}(\mathbf{z})$ by \begin{equation}\label{routh} \begin{aligned} \bar{e}_{0,0}(\mathbf{z}) &= e_{0,0}(\mathbf{z}),~\hat{e}_{0,0}(\mathbf{z})=1,\\ \frac{\bar{e}_{i,0}(\mathbf{z})}{\hat{e}_{i,0}(\mathbf{z})}&= e_{i,0}(\mathbf{z}),~\hat{e}_{i,0}(\mathbf{z})=\prod_{k=i-1,i-3,...}\bar{e}_{k,0}(\mathbf{z}),~\forall i=1,...,2n_0. 
\end{aligned} \end{equation} Then, define the set \begin{equation} \mathcal{N}=\left\{ i=0,...,2n_0: \bar{e}_{i,0}(\mathbf{z}) \text{ is a non-positive constant} \right\} \end{equation} and let $F_k(\mathbf{z})$, $k=1,...,n_f$, be the non-constant polynomials among $\bar{e}_{i,0}(\mathbf{z}),~i=0,...,2n_0$. \begin{theorem}\label{thm2} The complex matrix $A(\mathbf{z})$ is Hurwitz on $\mathbb{G}$ if and only if the following two sub-conditions are satisfied: \begin{enumerate}[(i)] \item $\mathcal{N}=\emptyset$; \item $F_k(\mathbf{z})$ ($k=1,...,n_f$) is positive on $\mathbb{G}$. \end{enumerate} \end{theorem} \emph{\textbf{Proof.} ``$\Rightarrow$" Suppose that $A(\mathbf{z})$ is Hurwitz for each $\mathbf{z}\in \mathbb{G}$; then it follows from the Routh--Hurwitz criterion that $\mathcal{N}=\emptyset$ and \begin{equation} F_k(\mathbf{z})>0,~\forall k=1,...,n_f,~\forall \mathbf{z}\in \mathbb{G}. \end{equation}} \emph{ ``$\Leftarrow$" Suppose that $\mathcal{N}=\emptyset$ and $F_k(\mathbf{z})>0$ $(k=1,...,n_f)$ for all $\mathbf{z}\in \mathbb{G}$. This implies that \begin{equation} \bar{e}_{i,0}(\mathbf{z})>0,~\forall i=0,...,2n_0,~\forall \mathbf{z}\in \mathbb{G}. \end{equation} Hence, \begin{equation} K(\mathbf{z}) \text{ is Hurwitz } \forall \mathbf{z}\in \mathbb{G}. \end{equation} Note that $h(\mathbf{z})\neq 0$ due to the assumption of well-posedness; thus \begin{equation} A(\mathbf{z}) = \frac{K(\mathbf{z})}{|h(\mathbf{z})|^2} \text{ is Hurwitz } \forall \mathbf{z}\in \mathbb{G}. \end{equation} The proof is completed.} $\hfill \Box$ If we define \begin{equation} F(\mathbf{z}) = \min_{k=1,...,n_f} F_k(\mathbf{z}),~\forall \mathbf{z}\in \mathbb{G}, \end{equation} the second sub-condition in Theorem \ref{thm2} amounts to the positivity of the polynomial $F(\mathbf{z})$ over $\mathbb{G}$. However, it is generally hard to obtain an exact expression for such an $F(\mathbf{z})$. Thus, we prefer to check the positivity of each $F_k(\mathbf{z})$ over $\mathbb{G}$. 
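The Routh--Hurwitz machinery above operates on polynomial entries; with scalar coefficients in place of the $e_{i,j}(\mathbf{z})$, it reduces to the classical numeric table. The following sketch (a standard textbook recursion, assuming no zero pivots arise; it is not the paper's fraction-free $\bar{e}$ variant) computes the first column of the table, whose positivity certifies the Hurwitz property.

```python
# Illustrative numeric Routh-Hurwitz table, with scalar coefficients
# standing in for the polynomial entries e_{i,j}(z). For the Hurwitz
# polynomial (l+1)(l+2)(l+3) = l^3 + 6 l^2 + 11 l + 6, every first-column
# entry should be positive.

def routh_first_column(coeffs):
    """coeffs = [c_n, ..., c_0] of c_n l^n + ... + c_0; returns the first
    column of the Routh table (assumes no zero pivots are encountered)."""
    row0 = coeffs[0::2]
    row1 = coeffs[1::2]
    width = len(row0)
    row1 = row1 + [0.0] * (width - len(row1))
    first = [row0[0], row1[0]]
    for _ in range(len(coeffs) - 2):
        nxt = []
        for j in range(width - 1):
            a, b = row0[0], row0[j + 1]
            c, d = row1[0], row1[j + 1]
            nxt.append((c * b - a * d) / c)   # 2x2 bracket divided by pivot
        nxt.append(0.0)
        row0, row1 = row1, nxt
        first.append(row1[0])
    return first

col = routh_first_column([1.0, 6.0, 11.0, 6.0])
# col == [1.0, 6.0, 10.0, 6.0]: all positive, so the polynomial is Hurwitz
```

A sign change in the first column signals roots in the right half-plane; the paper's $\llbracket\cdot\rrbracket$ construction avoids the division by carrying the pivot products in $\hat{e}_{i,0}$ instead.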
It is observed that \begin{equation} \mathbb{G} = \left\{ \mathbf{z}\in \mathbb{T}^L : D_i(\mathbf{z})\geq 0,~i=1,...,L-l \right\}, \end{equation} in which \begin{equation} D_i(\mathbf{z}) = z^{N_{i+l}}_{i+l} - 1 = \sum_{\mathbf{d}} D_{i}(\mathbf{d}) \mathbf{z}^{\mathbf{d}}; \end{equation} thus it follows from Lemma \ref{lma1} that the system $\Sigma$ considered in this section is exponentially stable if and only if $\mathcal{N}=\emptyset$ and, for each $k=1,...,n_f$, there exist $\epsilon>0$ and SOS polynomials $H_{k,i}(\mathbf{z})$, $i=0:L-l$, such that \begin{equation}\label{sos2} F_k(\mathbf{z}) - \epsilon = H_{k,0}(\mathbf{z}) + \sum_{i=1}^{L-l}D_i(\mathbf{z})H_{k,i}(\mathbf{z}). \end{equation} In what follows, we turn our attention to spatially reversible finite extent systems (see \cite{Langbort2005} for details). The $i$-th spatial index $k_i$ is restricted to the following set \begin{equation} \mathbb{D}_i = \begin{cases} \mathbb{Z}_{N_i}+1,~i=1,...,l,\\ \left\{ 1,...,N_i \right\},~i=l+1,...,L, \end{cases} \end{equation} i.e., the first $l$ spatial coordinates are periodically interconnected, and the last $L-l$ are of spatially reversible finite extent. Benefiting from the results derived in \cite{Langbort2005}, the stability of such a system can be checked by looking at the corresponding periodic system indexed over the set \begin{equation} \mathbb{G} := \mathbb{Z}_{N_1}\times \cdots \times \mathbb{Z}_{N_l} \times \mathbb{Z}_{2N_{l+1}}\times \cdots \times \mathbb{Z}_{2N_{L}}. \end{equation} Thus, the stability of interest can be established by following the same pattern shown above. The only difference is that the result pertaining to the corresponding periodic system only yields a sufficient condition in the finite extent case. 
\subsection{Generalized Trace Parameterization} In this section, the generalized trace parameterization of trigonometric polynomials is recalled \cite{Dumitrescu2007}, so that the theoretical results derived in the previous subsections can be quantified by two semidefinite programs (SDPs). For any polynomial $F(\mathbf{z})$ defined in (\ref{RP}), let \begin{equation} p(z_i) = \begin{bmatrix} 1 & z_i & \cdots & z^{\hat{\mathbf{n}}_i}_i \end{bmatrix}^T,~i=1:L, \end{equation} be the vector that gathers the canonical basis for polynomials of degree $\hat{\mathbf{n}}_i$ in the $i$-th variable $z_i$, where $\hat{\mathbf{n}}=(\hat{\mathbf{n}}_1,...,\hat{\mathbf{n}}_L)\geq \mathbf{n}$, and let the vector \begin{equation} p(\mathbf{z}) = p(z_L) \otimes \cdots \otimes p(z_1), \end{equation} of length $\hat{N} = \prod_{i=1}^{L}(\hat{\mathbf{n}}_i+1)$ be the canonical basis for $L$-variate polynomials of degree $\hat{\mathbf{n}}$, where $\otimes$ represents the Kronecker product. Then there exists a Gram matrix parameterization of $F(\mathbf{z})$, i.e., \begin{equation}\label{Gram} F(\mathbf{z}) = p^T(\mathbf{z}^{-1})\cdot G \cdot p(\mathbf{z}), \end{equation} where $G\in \mathbb{C}^{\hat{N}\times \hat{N}}$ is a Hermitian matrix, called a Gram matrix associated with $F(\mathbf{z})$; the set of Gram matrices associated with $F(\mathbf{z})$ is denoted by $\mathcal{G}(F)$. The general relation between the coefficients of the polynomial $F(\mathbf{z})$ and its Gram matrix is given by the following lemma. 
\begin{lma}\cite{Dumitrescu2007} For a trigonometric polynomial $F$ defined in (\ref{RP}) and $G\in \mathcal{G}(F)$, the relation \begin{equation}\label{TrGram} tr[T(\mathbf{d})\cdot G] = \begin{cases} f(\mathbf{d}),~\mathbf{d} \in [-\mathbf{n},\mathbf{n}]\\ 0,~\mathbf{d} \in [-\hat{\mathbf{n}},\hat{\mathbf{n}}]\backslash [-\mathbf{n},\mathbf{n}] \end{cases} \end{equation} holds, where \begin{equation}\label{Toeplitz} T(\mathbf{d}) = T_L(d_L)\otimes \cdots \otimes T_1(d_1), \end{equation} and $T_i(d_i)\in \mathbb{R}^{(\hat{\mathbf{n}}_i+1)\times (\hat{\mathbf{n}}_i+1)}$ are elementary Toeplitz matrices with ones only on the $d_i$-th diagonal. \end{lma} The relation (\ref{TrGram}) is called the generalized trace parameterization. Note that the size of the Gram matrix $G$ is determined by the $L$-tuple $\hat{\mathbf{n}}$, which was left unspecified above. This leads to an important result on trigonometric polynomials: any polynomial $F$ in (\ref{RP}) is SOS if and only if there exists a positive semidefinite Gram matrix $G$ associated with $F(\mathbf{z})$, i.e., there exist an $L$-tuple $\hat{\mathbf{n}}$ and a matrix $G\in \mathbb{C}^{\hat{N}\times \hat{N}}$, $G\geq 0$, such that (\ref{TrGram}) holds. With the generalized trace parameterization, the derived theoretical results can be recast as semidefinite programs. 
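A univariate sketch of this parameterization (the Gram matrix below is toy data of ours; we also fix one concrete convention for the elementary Toeplitz matrix $T(d)$, placing ones on the entries whose row index minus column index equals $d$, which matches the expansion used here): the Gram form $p^T(z^{-1})\,G\,p(z)$ and the coefficient form $\sum_d f(d)z^d$ with $f(d)=tr[T(d)\cdot G]$ agree on the unit circle.

```python
import cmath

# Univariate illustration (L = 1, degree 1): basis p(z) = [1, z]^T and a
# Hermitian 2x2 Gram matrix G (toy data). The trace parameterization
# f(d) = tr[T(d) G] recovers the coefficients of p^T(1/z) G p(z).

G = [[2.0 + 0j, 0.5 - 0.25j],
     [0.5 + 0.25j, 1.0 + 0j]]          # Hermitian by construction

def T(d, size=2):
    # Ones on the entries with (row - column) = d; one fixed convention.
    return [[1.0 if i - j == d else 0.0 for j in range(size)]
            for i in range(size)]

def tr_prod(X, Y):                     # tr(X Y) for 2x2 matrices
    return sum(X[i][j] * Y[j][i] for i in range(2) for j in range(2))

f = {d: tr_prod(T(d), G) for d in (-1, 0, 1)}   # f(0) = 3, f(+-1) conjugate

def F_gram(z):                         # direct Gram form p^T(1/z) G p(z)
    p_inv, p = [1.0, 1.0 / z], [1.0, z]
    return sum(p_inv[i] * G[i][j] * p[j] for i in range(2) for j in range(2))

def F_coef(z):                         # coefficient form sum_d f(d) z^d
    return sum(f[d] * z ** d for d in (-1, 0, 1))

for k in range(8):
    z = cmath.exp(2j * cmath.pi * k / 8)
    assert abs(F_gram(z) - F_coef(z)) < 1e-12
```

Since $G$ is Hermitian, $f(-d)=f^*(d)$ and the resulting $F$ is real-valued on the circle, as in (\ref{RP}).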
Specifically, for the globally positive trigonometric polynomial $F(\mathbf{z})$ in condition (ii) of Theorem \ref{thm1}, let us define \begin{equation}\label{opt1} \begin{aligned} \epsilon^*(\bm{e}) &= \sup_{\epsilon,~G}\epsilon\\ s.t.~&\begin{cases} g(\mathbf{d}) = tr[ T(\mathbf{d}) G ],\\ \mathbf{n}_G = \mathbf{n}_F+\bm{e},\\ G\in \mathbb{C}^{N_G\times N_G}\geq 0,\\ \end{cases} \end{aligned} \end{equation} where $\mathbf{n}_F$ denotes the degree of $F(\mathbf{z})$, $\bm{e}$ has nonnegative elements, and \begin{equation} g(\mathbf{d}) = \begin{cases} f(\mathbf{d}),~\mathbf{d} \in [-\mathbf{n}_F,\mathbf{n}_F]\\ 0,~\mathbf{d} \in [-\mathbf{n}_G,\mathbf{n}_G] \backslash [-\mathbf{n}_F,\mathbf{n}_F] \end{cases} \end{equation} in which $f(\mathbf{d})$ denotes the coefficients of $F(\mathbf{z}) - \epsilon$ in the symmetric representation (\ref{RP}). Then, condition (ii) holds if and only if $\epsilon^*(\bm{e})> 0$ for some nonnegative $\bm{e}$. For the positivity of $F_k(\mathbf{z})$ in condition (ii) of Theorem \ref{thm2} on the domain $\mathbb{G}$, let us define \begin{equation}\label{opt2} \begin{aligned} \epsilon^*(\bm{e}) &= \sup_{\epsilon,~G_{k,i}}\epsilon\\ s.t.&~\forall k=1,...,n_f\\ &\begin{cases} g_k(\mathbf{d}) -\sum^{L-l}_{i=1} \varphi_{k,i}(\mathbf{d})= tr\left[ T_{k,0}(\mathbf{d})\cdot G_{k,0} \right],\\ \mathbf{n}_{G_{k,0}} = 2(\mathbf{n}_k+\bm{e}),\\ \mathbf{n}_{k} = \lceil \frac{1}{2} \max \left\{ \mathbf{n}_{F_k}, \mathbf{n}_{D_1},...,\mathbf{n}_{D_{L-l}} \right\}\rceil, \\ \mathbf{n}_{G_{k,i}} = \mathbf{n}_{G_{k,0}} - \mathbf{n}_{D_i},~i=1:L-l \\ G_{k,i}\in \mathbb{C}^{N_{G_{k,i}}\times N_{G_{k,i}}}\geq 0,~i=0:L-l\\ \end{cases} \end{aligned} \end{equation} where $\mathbf{n}_{F_k}$ denotes the degree of $F_k(\mathbf{z})$, $\bm{e}$ has nonnegative elements, and \begin{align} N_{G_{k,i}} &= \prod_{v=1}^L({\mathbf{n}}^v_{G_{k,i}}+1),~\varphi_{k,i}(\mathbf{d}) = \sum_{\mathbf{d_1}+\mathbf{d_2}=\mathbf{d}}f^{i}_d(\mathbf{d}_1)f^{k,i}_h(\mathbf{d}_2),\\ 
\sum_{\mathbf{d}=-\mathbf{n}_{D_i}}^{\mathbf{n}_{D_i}}f^i_d(\mathbf{d})\mathbf{z}^{\mathbf{d}}&:=D_i(\mathbf{z}),~f^{k,i}_h(\mathbf{d}):=tr\left[ T_{k,i}(\mathbf{d})\cdot G_{k,i} \right],\\ g_k(\mathbf{d})& = \begin{cases} f_k(\mathbf{d}),~\mathbf{d} \in [-\mathbf{n}_{F_k},\mathbf{n}_{F_k}]\\ 0,~\mathbf{d} \in [-\mathbf{n}_{G_{k,0}},\mathbf{n}_{G_{k,0}}] \backslash [-\mathbf{n}_{F_k},\mathbf{n}_{F_k}] \end{cases},\\ \sum_{\mathbf{d}=-\mathbf{n}_{F_k}}^{\mathbf{n}_{F_k}}f_k(\mathbf{d})\mathbf{z}^{\mathbf{d}}&:=F_k(\mathbf{z}) - \epsilon, \end{align} in which $T_{k,i}(\mathbf{d})$ ($i=0:L-l$) are defined by (\ref{Toeplitz}) associated with $\mathbf{n}_{G_{k,i}}$. We conclude that condition (ii) holds if $\epsilon^*(\bm{e})> 0$ in SDP \eqref{opt2} for some nonnegative $\bm{e}$. This test gives no false positives, and the examples below show that, when condition (ii) holds, a positive $\epsilon$ can in general be readily found. \section{Examples}\label{sec4} In this section, several examples are given to demonstrate the effectiveness of the derived theoretical results. \textbf{Example 1: } Let us consider the problem of determining whether an infinitely interconnected system $\Sigma$ with $L=2$ is exponentially stable, where \begin{equation} \begin{cases} \begin{aligned} A_{TT} &= \begin{bmatrix} -0.5 & 0\\ 0 & -1 \end{bmatrix},~A_{TS} = \begin{bmatrix} 1 & 0 & 0 & 2\\ 0 & 0 & 0.5 & 0 \end{bmatrix},\\ A_{ST} &=\begin{bmatrix} 0 & 0.5 \\ 1 & 0 \\ -0.5 & 0\\ 0 & 0 \end{bmatrix},~A_{SS}=\begin{bmatrix} 0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0 \end{bmatrix}. \end{aligned} \end{cases} \end{equation} First, it is readily verified that the constant matrix \begin{equation} \begin{aligned} A(\mathbf{z})|_{z_1=1,z_2=1}&=A_{TT} + A_{TS}(I - A_{SS})^{-1}A_{ST}\\ &=\begin{bmatrix} -0.5 & 0.5\\ -0.25 & -1.0 \end{bmatrix} \end{aligned} \end{equation} has all eigenvalues with negative real part, i.e., the first sub-condition in Theorem \ref{thm1} holds. 
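This first sub-condition can be reproduced directly (our own helper computation): with $A_{SS}=0$, the constant matrix equals $A_{TT}+A_{TS}A_{ST}$, and a $2\times 2$ matrix is Hurwitz if and only if its trace is negative and its determinant is positive.

```python
# Reproducing the first sub-condition of Example 1: with A_SS = 0,
# A(1,1) = A_TT + A_TS * A_ST, and a 2x2 matrix is Hurwitz iff
# trace < 0 and determinant > 0.

A_TT = [[-0.5, 0.0], [0.0, -1.0]]
A_TS = [[1.0, 0.0, 0.0, 2.0], [0.0, 0.0, 0.5, 0.0]]
A_ST = [[0.0, 0.5], [1.0, 0.0], [-0.5, 0.0], [0.0, 0.0]]

prod = [[sum(A_TS[i][k] * A_ST[k][j] for k in range(4)) for j in range(2)]
        for i in range(2)]
A11 = [[A_TT[i][j] + prod[i][j] for j in range(2)] for i in range(2)]
# A11 == [[-0.5, 0.5], [-0.25, -1.0]], matching the matrix printed above

tr = A11[0][0] + A11[1][1]                            # -1.5 < 0
dt = A11[0][0] * A11[1][1] - A11[0][1] * A11[1][0]    # 0.625 > 0
```

The trace/determinant test for $2\times 2$ matrices is the scalar instance of the Routh--Hurwitz criterion used in Section \ref{Sec3_2}.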
Thus, we turn our attention to the second sub-condition. The polynomial $h(\mathbf{z})$ is given by \begin{equation} h(\mathbf{z}) = det\left( \Delta(\mathbf{z}) - A_{SS} \right) = 1, \end{equation} and the matrix polynomial $H(\mathbf{z})$ by \begin{equation} \begin{aligned} H(\mathbf{z})& = A_{TT}h(\mathbf{z}) + A_{TS}adj(\Delta(\mathbf{z}) - A_{SS})A_{ST} \\ & = \begin{bmatrix} -0.5 & 0.5z^{-1}_1 \\ -0.25 z^{-1}_2 & -1 \end{bmatrix}. \end{aligned} \end{equation} It is straightforward to compute \begin{equation} \begin{aligned} F(\mathbf{z}) &= 0.015625z^{-2}_1z^{-2}_2 + 0.5625 z^{-1}_1z^{-1}_2 + 4.46875\\ &+0.5625z_1z_2 + 0.015625z^{2}_1z^2_2. \end{aligned} \end{equation} Then, using the YALMIP toolbox to solve the optimization problem (\ref{opt1}) for $\bm{e} = (0,0)$, it is found that \begin{equation*} \epsilon = 3.3750>0. \end{equation*} Thus, it can be concluded from Theorem \ref{thm1} that the system $\Sigma$ is exponentially stable. The multidimensional (MD) toolbox is used to simulate the motion of this system under the following initial condition: \begin{equation}\label{initC} \begin{aligned} \forall i=1,2,~x_i(0,k_1,k_2) = \begin{cases} 1,~&\text{if } k_1=5,~k_2=5; \\ &\quad k_1=6,~k_2=5; \\ &\quad k_1=6,~k_2=6,\\ 0,~&\text{otherwise.} \end{cases} \end{aligned} \end{equation} The state response $x_1(t,k_1,k_2)$ at different times ($t=0$, $3$, $5$, $20$\,s) is shown in Fig.~\ref{fig:ex1x1}, which illustrates the stability of system $\Sigma$ ($x_2(t,k_1,k_2)$ is not included due to space limitations). \begin{figure} \centering \includegraphics[height=6cm]{Ex1_x1-eps-converted-to.pdf} \caption{State response $x_{1}(t,k_1,k_2)$ at different times} \label{fig:ex1x1} \end{figure} \textbf{Example 2: } In this example, we consider a system $\Sigma$ with $L=2$ interconnected in a mixed way: infinite interconnection in the first spatial dimension, and periodic interconnection in the second spatial dimension ($N_2 = 3$). 
The system matrices are chosen as \begin{equation} \begin{cases} \begin{aligned} A_{TT} &= \begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix},~A_{TS} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & -0.5 & 0 \end{bmatrix},\\ A_{ST} &=\begin{bmatrix} 0 & 0.5 \\1 & 0 \\ 0.5 & 0\\ 0 & 0 \end{bmatrix},~A_{SS}=\begin{bmatrix} 0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0 \end{bmatrix}. \end{aligned} \end{cases} \end{equation} Applying the method proposed in Section \ref{Sec3_2}, we obtain \begin{equation} \begin{cases} \begin{aligned} \bar{e}_{0,0}(\mathbf{z}) &= 1,\\ \bar{e}_{1,0}(\mathbf{z}) &= 4,\\ \bar{e}_{2,0}(\mathbf{z}) &= 0.25z^{-1}_1z^{-1}_2+20+0.25z_1z_2,\\ \bar{e}_{3,0}(\mathbf{z})&=r_1(\mathbf{z})+ 63.875+\overline{r_1(\mathbf{z})},\\ \bar{e}_{4,0}(\mathbf{z})&=r_2(\mathbf{z})+263.4922+\overline{r_2(\mathbf{z})}, \end{aligned} \end{cases} \end{equation} where \begin{equation*} \begin{aligned} r_1(\mathbf{z}) &= 0.0625z^{-2}_1z^{-2}_2+4z^{-1}_1z^{-1}_2,\\ r_2(\mathbf{z}) &= 0.03125z^{-3}_1z^{-3}_2+2.2539z^{-2}_1z^{-2}_2+48.2188z^{-1}_1z^{-1}_2, \end{aligned} \end{equation*} and some values are rounded due to space limitations. It is found that $\mathcal{N}=\emptyset$; thus the first sub-condition in Theorem \ref{thm2} holds. For the second sub-condition, let us define \begin{equation} D_1(\mathbf{z}) = z^3_2 - 1. \end{equation} Solving SDP (\ref{opt2}) for $\bm{e}=(0,0)$, the following value of $\epsilon$ is obtained \begin{equation} \epsilon = 19.5> 0, \end{equation} which implies that the system $\Sigma$ considered in this example is exponentially stable. 
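As a cross-check of the reported bound (our own computation): the printed $\bar{e}_{2,0}(\mathbf{z})$ depends only on $w=z_1z_2$ and equals $20+0.5\cos(\arg w)$ on the torus, with minimum $19.5$; since $z_1$ ranges over the whole circle, this minimum is also attained on $\mathbb{G}=\mathbb{T}\times\mathbb{S}_3$, matching the value $\epsilon=19.5$ obtained from SDP (\ref{opt2}).

```python
import math, cmath

# Cross-check of Example 2: e_bar_{2,0} = 0.25/(z1 z2) + 20 + 0.25 z1 z2
# = 20 + 0.5 cos(arg(z1 z2)). Over G = T x S_3 the product z1 z2 still
# sweeps the whole unit circle (z1 is free), so the minimum over G should
# equal 19.5, the bound reported for SDP (opt2).

cube_roots = [cmath.exp(2j * math.pi * m / 3) for m in range(3)]   # S_3
vals = []
for z2 in cube_roots:
    for k in range(360):
        z1 = cmath.exp(2j * math.pi * k / 360)
        w = z1 * z2
        e2 = 0.25 / w + 20 + 0.25 * w    # real up to rounding on |w| = 1
        vals.append(e2.real)
e2_min = min(vals)
# e2_min is approximately 19.5 > 0, consistent with the SDP bound
```

The larger entries $\bar{e}_{3,0}$ and $\bar{e}_{4,0}$ can be sampled in exactly the same way; they stay well above zero, so $\bar{e}_{2,0}$ is the binding one here.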
Similarly, under the following initial condition \begin{equation}\label{initC3} \begin{aligned} \forall i=1,2,~x_i(0,k_1,k_2) = \begin{cases} 1,~&\text{if } k_1=5,~k_2=1; \\ &\quad k_1=6,~k_2=1; \\ &\quad k_1=6,~k_2=2,\\ 0,~&\text{otherwise,} \end{cases} \end{aligned} \end{equation} the state response $x_{1}(t,k_1,k_2)$ at $t=0$, $3$, $5$, $20$\,s is presented in Fig.~\ref{fig:ex3x1}. \begin{figure} \begin{center} \includegraphics[height=6cm]{Ex2_x1-eps-converted-to.pdf} \caption{State response $x_{1}(t,k_1,k_2)$ at different times} \label{fig:ex3x1} \end{center} \end{figure} \section{Conclusion}\label{sec5} In this paper, positive trigonometric polynomial techniques have been exploited to establish the stability of SISs. Three types of topological interconnections were considered for each spatial direction: infinite interconnections, periodic interconnections, and spatially reversible finite extent. The stability of SISs with infinite interconnections in all spatial dimensions was first explored; inspired by the idea of rational parameterization, the addressed problem was converted into a stability test on a constant matrix together with the global positivity of a trigonometric polynomial. Then, SISs with mixed infinite and periodic interconnections were considered to encompass more general topological structures. The idea of robust stabilizability functions was used to find trigonometric polynomials whose local positivity corresponds to the stability of interest. Both the global and local positivity of the obtained polynomials can be checked via SOS decomposition. Benefiting from existing results on spatially reversible interconnected systems, the derived methods are applicable to all topological structures constructed from the three types of interconnections considered. Finally, the generalized trace parameterization of polynomials was explored, based on which two SDPs were proposed to quantify the derived theoretical results. 
Although some relaxations were employed due to computational complexity, the numerical examples suggest that their effect is negligible.
\section{Introduction} First order ODEs can always be linearized \cite{fm1} by point transformations \cite{l1}. Lie \cite{l2} showed that all linearizable second order ODEs must be cubically semi-linear, i.e. of the form \begin{eqnarray} y''+ a_1(x,y)y'^3 - a_2(x,y)y'^2 + a_3(x,y)y' - a_4(x,y)=0~,\label{secondlform} \end{eqnarray} where the coefficients $a_1, a_2, a_3, a_4$ satisfy an over-determined integrable system of four constraints involving two auxiliary functions, which Tresse wrote in a more usable form \cite{t} \begin{eqnarray} 3(a_1a_3)_x - 3a_4a_{1y} - 6a_1a_{4y} - 2a_2a_{2x} + a_2a_{3y} - 3a_{1xx} +2a_{2xy} - a_{3yy}=0~,\nonumber\\ 3(a_4a_2)_y - 3a_1a_{4x} - 6a_4a_{1x} - 2a_3a_{3y} + a_3a_{2x} +3a_{4yy} - 2a_{3xy} + a_{2xx}=0~.\label{secondlc} \end{eqnarray} We call such equations \emph{Lie linearizable}.\\ \\ Chern \cite{c1, c2} and Grebot \cite{g1, g2} extended the linearization programme to the third order, using contact and point transformations respectively, to obtain linearizability criteria for equations reducible to the forms $u'''(t)=0$ and $u'''(t)+u(t)=0.$ It was shown \cite{mL} that there are three classes of third order ODEs that are linearizable by point transformations, viz. those that reduce to the above two forms or to $u'''(t)+\alpha(t)u(t)=0.$ Neut and Petitot \cite{np} dealt with general third order ODEs. Ibragimov and Meleshko (IM) \cite{im} used the original Lie procedure \cite{l2} of point transformation to determine the linearizability criteria for third order ODEs. 
They showed that any third order ODE $y'''=f(x,y,y',y'')$ obtained from a linear equation $u''' + \alpha(t)u=0$ by means of point transformations $t=\varphi(x,y)$, $u=\psi(x,y)$, must belong to one of the following two types of equations.\\ \textbf{Type I}: If $\varphi_y=0$, the linearizable equations are of the form \begin{eqnarray} y''' +(a_1y' + a_0)y'' + b_3y'^3 + b_2y'^2 + b_1y' + b_0=0~.\label{thirdtype1} \end{eqnarray} \textbf{Type II}: If $\varphi_y\neq0$, set $r(x,y)=\varphi_x/\varphi_y$; the linearizable equations are of the form \begin{eqnarray} y'''+{1\over y'+r}[-3(y'')^2 + (c_2y'^2+c_1y'+c_0)y''\nonumber\\ +d_5y'^5+d_4y'^4+d_3y'^3+d_2y'^2+d_1y'+d_0]=0~,\label{thirdtype2} \end{eqnarray} where all the coefficients $a_i,b_i,c_i,d_i$, functions of $x$ and $y$, satisfy certain constraint requirements. Afterwards, Ibragimov, Meleshko and Suksern \cite{ims1, ims2} used point and contact transformations to determine the criteria for the linearizability of fourth order scalar ODEs. Meleshko \cite{M} provided a simple algorithm to reduce third order ODEs of the form $y'''=f(y,y',y'')$ to second order ODEs. If the reduced equations satisfy the Lie linearizability criteria, they can then be solved by linearization. Meleshko showed that a third order ODE is reducible to a linearizable second order ODE if it is of the form \begin{eqnarray} y'''+A(y,y')y''^3+B(y,y')y''^2+C(y,y')y''+D(y,y')=0~, \end{eqnarray} where the coefficients $A,B,C,D$ satisfy certain constraints. \\\\ In the present paper we extend Meleshko's procedure to fourth order ODEs in the cases where the equations do not depend explicitly on the independent or the dependent variable (or both), reducing them to third (respectively second) order equations. Once the order is reduced we can apply the IM (or Lie) linearization test. 
If the reduced third (or second) order ODE satisfies the IM (or Lie) linearization test, then, after finding a linearizing transformation, the general solution of the original equation is obtained by quadrature. So this method is effective in the sense that it reduces many ODEs that cannot be linearized to lower order linearizable forms. This is one of the motivations for studying this method. Another motivation for studying the linearization problem is the hope that it may allow a complete classification of ODEs according to the number of arbitrary initial conditions that can be satisfied \cite{mq2}.\\ \\ \section{Equations reducible to linearizable forms} Meleshko treated only the special case of third order ODEs independent of $x$. We include independence of $y$ for completeness before proceeding to the fourth order.\\\\ \textbf{Third order ODEs independent of $y$}\\\\ Taking $y'$ as the new dependent variable $u(x)$, we convert the ODE \begin{eqnarray} y'''=f(x,y',y'')~, \end{eqnarray} to the second order ODE \begin{eqnarray} u''=f(x,u,u')~,\label{thirdindy-general} \end{eqnarray} which is linearizable by Lie's criteria if it is cubically semi-linear with the coefficients satisfying conditions (\ref{secondlc}).\\ \\ Hence the third order ODE is reducible to a linearizable second order form if and only if \begin{eqnarray} f(x,y',y'')=-c(x,y')y''^3+g(x,y')y''^2-h(x,y')y''+d(x,y')~,\label{thirdindy-lform} \end{eqnarray} with the coefficients satisfying \begin{eqnarray} 3(ch)_x-3dc_{y'}-6cd_{y'}-2gg_x+gh_{y'}-3c_{xx}+2g_{xy'}-h_{y'y'}=0~,\nonumber\\ 3(dg)_{y'}-3cd_x-6dc_x-2hh_{y'}+hg_x+3d_{y'y'}-2h_{xy'}+g_{xx}=0~.\label{thirdindy-lc} \end{eqnarray} \\ \textbf{Fourth order ODEs independent of $y$}\\\\ Since the variable $y$ is missing, by taking $y'$ as the new dependent variable $u(x)$, the ODE \begin{eqnarray} y^{(4)}=f(x,y',y'',y''')~,\label{fourthindy-general} \end{eqnarray} is reduced to the third order ODE \begin{eqnarray} 
u'''=f(x,u,u',u'')~.\label{reduced10} \end{eqnarray} Equation (\ref{reduced10}) is linearizable, of Type I in Ibragimov and Meleshko's classification, if and only if \begin{eqnarray} f(x,y',y'',y''')=-(a_1y''+a_0)y'''-b_3y''^3-b_2y''^2-b_1y''-b_0~,\label{fourthindy-lform} \end{eqnarray} with the coefficients $a_i=a_i(x,y')~,~(i=0,1)$ and $b_j=b_j(x,y')~,~(j=0,1,2,3)$\\ \\ satisfying the conditions \begin{eqnarray} a_{0y'}-a_{1x}=0~,\nonumber\\ (3b_1-{a_0}^2-3a_{0x})_{y'}=0~,\nonumber\\ 3a_{1x}+a_0a_1-3b_2=0~,\nonumber\\ 3a_{1y'}+{a_1}^2-9b_3=0~,\nonumber\\ (9b_1-6a_{0x}-2{a_0}^2)a_{1x}+9(b_{1x}-a_1b_0)_{y'}+3b_{1y'}a_0-27b_{0y'y'}=0~.\label{fourthindy-lc} \end{eqnarray}\\ The necessary and sufficient conditions for (\ref{reduced10}) to be linearizable of Type II are \begin{eqnarray} f(x,y',y'',y''')={-1\over y''+r}[-3(y''')^2 + (c_2y''^2+c_1y''+c_0)y'''\nonumber\\ +d_5y''^5+d_4y''^4+d_3y''^3+d_2y''^2+d_1y''+d_0]~,\label{fourthindy-lform-type2} \end{eqnarray} where the coefficients $c_i=c_i(x,y')~,~(i=0,1,2)$, $d_j=d_j(x,y')~,~(j=0,1,2,3,4,5)$ and $r=r(x,y')$ have to satisfy constraint equations which can be produced simply by replacing $y$ by $y'$ in the Type II constraint equations of \cite{im}.\\\\ \textbf{Fourth order ODEs independent of $x$}\\\\ The transformation $y'=u(y)$ will transform the autonomous fourth order ODE \begin{eqnarray} y^{(4)}=f(y,y',y'',y''')~,\label{fourthindx-general} \end{eqnarray} into the equation \begin{eqnarray} u^3u'''+4u^2u'u''+uu'^3-f(y,u,uu',u^2u''+uu'^2)=0~,\label{reduced1.1} \end{eqnarray} which is a third order ODE in $(y,u)$. 
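For reference, under the substitution $y'=u(y)$ one has $d/dx=u\,d/dy$, so the derivatives transform as \begin{eqnarray} y''=uu'~,\quad y'''=u^2u''+uu'^2~,\quad y^{(4)}=u^3u'''+4u^2u'u''+uu'^3~,\nonumber \end{eqnarray} which is precisely how (\ref{reduced1.1}) arises from (\ref{fourthindx-general}).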
It is linearizable by Ibragimov Meleshko's criteria if it is of the form (\ref{thirdtype1}) i.e, \begin{eqnarray} f(y,u,u'u,u''u^2+uu'^2)=-u^3[(a_1u'+a_0)u''+b_3u'^3+b_2u'^2+b_1u'+b_0]\nonumber\\ +4u^2u'u''+uu'^3~,\label{reduced2} \end{eqnarray} where $a_i=a_i(y,u)~,~(i=0,1)$ and $b_j=b_j(y,u)~,~(j=0,1,2,3)~.$ With this (\ref{reduced1.1}) takes the form \begin{eqnarray} u'''+(a_1u'+a_0)u''+b_3u'^3+b_2u'^2+b_1u'+b_0=0~.\label{reduced3} \end{eqnarray} Transforming (\ref{reduced3}) into a fourth order ODE with $x$ as independent variable and $y$ as dependent variable: \begin{eqnarray} y^{(4)}+(A_1y''+A_0)y'''+B_3y''^3+B_2y''^2+B_1y''+B_0=0~,\label{fourthindx-lform-type1} \end{eqnarray} where \begin{eqnarray} A_i=A_i(y,y')~,~ (i=0,1)~;~\quad B_j=B_j(y,y')~, (j=0,1,2,3) \end{eqnarray} subject to the identification of coefficients \begin{eqnarray} a_1=A_1+{4\over y'}~,~\quad a_0={A_0\over y'}~,~\quad b_3=B_3+{A_1\over y'}+{1\over y'^2}~, \nonumber\\ b_2={B_2\over y'}+{A_0\over y'^2}~,~\quad b_1={B_1\over y'^2}~,~\quad b_0={B_0\over y'^3}~, \end{eqnarray} with the constraints \begin{eqnarray} y'^2A_{1y}-y'A_{0y'}+A_0=0~,\nonumber\\ y'^2(-3A_{0yy'})+y'(3B_{1y'}+3A_{0y}-2A_0A_{0y'})+(-6B_1+2A_0^2)=0~,\nonumber\\ y'^2(3A_{1y})+y'(A_0A_1-3B_2)+A_0=0~,\nonumber\\ y'^2(3A_{1y'}-9B_3+A_1^2)-y'A_1-5=0~,\nonumber\\ y'^4(-6A_{0y}A_{1y})+y'^3(9B_1A_{1y}-2A_0^2A_{1y}+9B_{1yy'})+y'^2(-18B_{1y}-9A_1B_{0y'}\nonumber\\ -9B_0A_{1y'} +3A_0B_{1y'}-27B_{0y'y'})+y'(27A_1B_0-6A_0B_1+126B_{0y'})\nonumber\\ -180B_0=0~.\label{fourthindx-lc-type2} \end{eqnarray} \\ Also in order to make (\ref{reduced1.1}) linearizable of type II of Ibragimov and Meleshko's criteria we have to take \begin{eqnarray} f({y,u,uu',u^2u''+uu'^2})=-{u^3\over u'+r}[-3(u'')^2+(c_2u'^2+c_1u'+c_0)u''\nonumber\\ +d_5u'^5+d_4u'^4+d_3u'^3+d_2u'^2+d_1u'+d_0]+4u^2u'u''+uu'^3~,\label{reduced4} \end{eqnarray} where $c_i=c_i(y,u)~,~(i=0,1,2)$~,~ $d_j=d_j(y,u)~,~(j=0,1,2,3,4,5)$ and $r=r(y,u)~.$\\ \\ Considering the form 
(\ref{reduced4}) and converting (\ref{reduced1.1}) into fourth order with $x$ as independent and $y$ as dependent variable, we have \begin{eqnarray} y^{(4)} +{1\over y''+r_0}[-3(y''')^2 + (C_2y''^2+C_1y''+C_0)y'''\nonumber\\ +D_5y''^5+D_4y''^4+D_3y''^3+D_2y''^2+D_1y''+D_0]=0~,\label{fourthindx-lform-type2} \end{eqnarray} where \begin{eqnarray} C_i=C_i(y,y')~,~ (i=0,1,2)~;\quad D_j=D_j(y,y')~,~ (j=0,1,2,3,4,5)~;~\quad r_0=r_0(y,y')~,\nonumber \end{eqnarray} subject to the identification of coefficients \begin{eqnarray} c_2=C_2-{2\over y'}~,~\quad c_1=C_1+{4r_0\over y'}~,~\quad c_0={C_0\over y'^2}~,~\quad d_5={D_5\over y'^5}~, \nonumber\\ d_4=D_4+{C_2\over y'}-{2\over y'^2}~,~\quad d_3={D_3\over y'}+{C_1\over y'}+{4r_0\over y'^2}-{3r_0\over y'^3}~,\nonumber\\ d_2={D_2\over y'^2}+{C_0\over y'^3}~,~\quad d_1={D_1\over y'^3}~,~\quad d_0={D_0\over y'^4}~,~\quad r={r_0\over y'}~, \end{eqnarray} with the constraints (\ref{C1})--(\ref{C9}) (presented in the appendix).\\ \\ \textbf{Fourth order ODEs independent of $x$ and $y$}\\\\ By considering $y'$ as independent and $y''$ as dependent variable, we convert the equation \begin{eqnarray} y^{(4)}=f(y',y'',y''')~,\label{fourthindxy-general} \end{eqnarray} into a second order ODE: \begin{eqnarray} u^2u''+uu'^2=f(y',u,uu')~.\label{reduced5} \end{eqnarray} For (\ref{reduced5}) to be Lie-linearizable we must have \begin{eqnarray} f(y',u,uu')=-u^2[A(y',u)u'^3+B(y',u)u'^2+C(y',u)u'+D(y',u)]+uu'^2~.\label{reduced6} \end{eqnarray} Hence (\ref{fourthindxy-general}) takes the form \begin{eqnarray} y^{(4)}+a(y',y'')y'''^3+b(y',y'')y'''^2+c(y',y'')y'''+d(y',y'')=0~,\label{fourthindxy-lform} \end{eqnarray} where $a,b,c$ and $d$ must satisfy the constraints: \begin{eqnarray} (3a_{y'y'})y''^4+(2bb_{y'}-3ca_{y'}-3ac_{y'}-2b_{y'y''})y''^3+(2b_{y'}-bc_{y''}\nonumber\\ +3a_{y''}d+6ad_{y''}-c_{y''y''})y''^2+(bc-9ad-3c_{y''})y''-c=0~,\nonumber\\ (by'y')y''^4+(b_{y'}c+3d_{y''}b-3d_{y'}a-6a_{y'}d-2c_{y'y''})y''^3+(c_{y'}+3d_{y''}\nonumber\\ 
-6bd+3b_{y''}d-2cc_{y''}+3d_{y''y''})y''^2+(2c^2-6d-12d_{y''})y''+15d=0~.\label{fourthindxy-lc} \end{eqnarray} Thus we have the following theorems.\\ \\ {\bf Theorem 1.} \emph{Equation} (\ref{fourthindx-lform-type1}) \emph{is reducible to the third order linearizable form if and only if it obeys} (\ref{fourthindx-lc-type2}). \\\\ {\bf Theorem 2.} \textit{Equation} (\ref{fourthindx-lform-type2}) \emph{is reducible to the third order linearizable form if and only if it obeys} (\ref{C1})--(\ref{C9}) (presented in the appendix).\\ \\ \textbf{Theorem 3}. \emph{Equation} (\ref{fourthindxy-lform}) \emph{is reducible to the second order linearizable form if and only if it obeys} (\ref{fourthindxy-lc}).\\\\ \textbf{Remark}: If we have a fourth order ODE of the form \begin{eqnarray} y^{(4)}=-f(x,y)y'^5+10{y''y'''\over y'}-15{y''^3\over y'^2}~, \end{eqnarray} with $f(x,y)$ linear in $x$, then we can convert it to the linear ODE $x^{(4)}=f(x,y)$ by simply taking $x$ as the dependent and $y$ as the independent variable. \section{Illustrative Examples} {\bf Example 1.} The nonlinear fourth order ODE \begin{eqnarray} y'y^{(4)}-y''y'''-3y'^2y'''+2y'^3y''+3y'^5=0~,\label{example1} \end{eqnarray} cannot be linearized by point or contact transformations. It has the form (\ref{fourthindx-lform-type1}) with the coefficients $A_1=-1/y',$ $A_0=-3y',$ $B_3=B_2=0,$ $B_1=2y'^2,$ $B_0=3y'^5.$ One can verify that these coefficients satisfy the conditions (\ref{fourthindx-lc-type2}). The transformation $y'=u(y)$ reduces this ODE to the third order linearizable ODE \begin{eqnarray} u'''+{3\over u}u'u''-3u''-{3\over u}u'^2+2u'+3u=0~.\label{reduced7} \end{eqnarray} By using the transformation equations in \cite{im}, we arrive at the transformation $t=e^y,$ $s=u^2$ which maps (\ref{reduced7}) to the linear third order ODE $s'''+{6\over t^3}s=0$, whose solution is given by $s=c_1t^{-1}+t^2\{c_2\cos(\sqrt{2}\ln t)+c_3\sin (\sqrt{2} \ln t)\}$, where $c_i$ are arbitrary constants. 
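Indeed, the linearization can be checked directly: multiplying (\ref{reduced7}) by $2u$ and setting $v=u^2$, so that $2uu'''+6u'u''=v'''$, $2uu''+2u'^2=v''$ and $2uu'=v'$, gives the constant-coefficient equation \begin{eqnarray} v'''-3v''+2v'+6v=0~,\nonumber \end{eqnarray} whose characteristic roots $m=-1$ and $m=2\pm i\sqrt{2}$ yield exactly the solution quoted above once $t=e^y$, $s=v$ are substituted.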
By using the above transformation we get the solution of (\ref{reduced7}) given by $u=\pm \sqrt{c_1e^{-y}+e^{2y}\{c_2\cos (\sqrt{2}y)+c_3\sin (\sqrt{2}y)\}}$. Hence the general solution of (\ref{example1}) is obtained by taking the quadrature \begin{eqnarray} \int {dy\over \sqrt{c_1e^{-y}+e^{2y}\{c_2\cos (\sqrt{2}y)+c_3\sin (\sqrt{2}y)\}}}=\pm x+c_4~, \end{eqnarray} where $c_i$ are arbitrary constants.\\ \\ {\bf Example 2.} The nonlinear ODE \begin{eqnarray} y^2y'^2y^{(4)}-10y^2y'y''y'''-3yy'^3y'''+15y^2y''^3+9yy'^2y''^2+3y'^4y''=0~,\label{example2} \end{eqnarray} is of the form (\ref{fourthindx-lform-type1}) with the coefficients $A_1={-10\over y'}$, $A_0={-3y'\over y}$, $B_3={15\over {y'^2}},$ $B_2={9\over y},$ $B_1={3y'^2\over {y^2}},$ $B_0=0$, which satisfy the conditions (\ref{fourthindx-lc-type2}). So it is reduced to the third order linearizable ODE \begin{eqnarray} y^2u^2u'''-3yu^2u''-6y^2uu'u''+3u^2u'+6yuu'^2+6y^2u'^3=0~,\label{reduced8} \end{eqnarray} with $y$ as the independent and $u$ as the dependent variable. The transformation $t=y^2,$ $s={1\over u},$ reduces (\ref{reduced8}) to the linear third order ODE $s'''=0,$ whose solution is $s=c_1t^2+c_2t+c_3.$ Now one only needs to solve the equation $y'=1/(c_1y^4+c_2y^2+c_3)$, where $c_i$ are arbitrary constants. Hence, the general solution of (\ref{example2}) is given by \begin{eqnarray} x=c_1y^5+c_2y^3+c_3y+c_4~.\nonumber \end{eqnarray} \\ {\bf Example 3.} The ODE \begin{eqnarray} y'y''y^{(4)}-3y'y'''^2+6y'^3y''^2y'''-4y''^2y'''-y'y''^5=0~,\label{example3} \end{eqnarray} has two symmetries. It is of the form (\ref{fourthindx-lform-type2}) with the coefficients $r_0=0,$ $C_2=6y'^2-{4\over y'},$ $C_1=C_0=0,$ $D_5=-1,$ $D_4=D_3=D_2=D_1=D_0=0,$ which obey the conditions (\ref{C1})--(\ref{C9}). 
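As an independent check of Example 2 (a short symbolic computation we add here, not part of the original derivation), the closed-form solution can be verified with sympy by treating $x$ as a function of $y$ and using the chain rule $d/dx=(1/x_y)\,d/dy$ for derivatives of the inverse function:

```python
import sympy as sp

y, c1, c2, c3 = sp.symbols('y c1 c2 c3')

# Claimed general solution of Example 2, written as x = x(y);
# the additive constant c4 drops out of all derivatives.
X = c1*y**5 + c2*y**3 + c3*y
y1 = 1/sp.diff(X, y)                    # y'   = dy/dx = 1/x_y
d_dx = lambda F: y1*sp.diff(F, y)       # chain rule: d/dx = (1/x_y) d/dy
y2 = d_dx(y1)                           # y''
y3 = d_dx(y2)                           # y'''
y4 = d_dx(y3)                           # y''''

# Left-hand side of equation (example2)
res = (y**2*y1**2*y4 - 10*y**2*y1*y2*y3 - 3*y*y1**3*y3
       + 15*y**2*y2**3 + 9*y*y1**2*y2**2 + 3*y1**4*y2)
assert sp.simplify(res) == 0            # residual vanishes identically
```

The residual vanishes for arbitrary constants $c_1,c_2,c_3$, confirming the quadrature.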
So it is reducible to the linearizable third order ODE \begin{eqnarray} u'''+{1\over u'}[-3u''^2-yu'^5]=0~.\label{reduced9} \end{eqnarray} The transformation $t=u$, $s=y$, converts the nonlinear ODE (\ref{reduced9}) to the linear ODE $s'''+s=0$ with solution \begin{eqnarray} s=c_1e^{-t}+e^{t\over 2}\left(c_2\cos{{\sqrt{3}\over 2}t}+c_3\sin{{\sqrt{3}\over 2}t}\right)~. \end{eqnarray} Finally, to find the solution of (\ref{example3}), we only need to solve \begin{eqnarray} y=c_1e^{-y'}+e^{y'\over 2}\left(c_2\cos{{\sqrt{3}\over 2}y'}+c_3\sin{{\sqrt{3}\over 2}y'}\right)~. \end{eqnarray} \\ \textbf{Example 4.} The nonlinear ODE \begin{eqnarray} y''y^{(4)}+y'''^3-y'''^2-y''^2y'''=0~,\label{example4} \end{eqnarray} is of the form (\ref{fourthindxy-lform}) with the coefficients $a={1\over y''},$ $b=-{1\over y''},$ $c=-{y''},$ $d=0$, which satisfy conditions (\ref{fourthindxy-lc}). So it is reduced to the linearizable second order ODE $u''+u'^3-u'=0$. By using the transformation $t=u$, $s=e^{y'}$ (with $y'$ the independent variable of the reduced equation), we can reduce it to the linear ODE $s''-s=0$, whose solution is given by $s=c_1e^t+c_2e^{-t}~,$ where $c_i$ are arbitrary constants. The solution of (\ref{example4}) is then obtained by solving the second order ODE \begin{eqnarray} e^{y'}=c_1e^{-y''}+c_2e^{y''}~, \end{eqnarray} where $c_i$ are arbitrary constants. \section{Concluding Remarks} Nonlinear ODEs are difficult to solve but, if they can be converted to linear ones by invertible transformations, they can be solved. Hence linearization plays a significant role in the theory of ODEs. In this paper we have presented criteria for fourth order autonomous ODEs to be reducible to linearizable third and second order ODEs. There are certain fourth order ODEs, not depending explicitly on the independent variable, which cannot be linearized by point or contact transformations but can be reduced to linearizable third order ODEs by Meleshko's method. The solution of the original equation is then obtained by a quadrature. 
Various fourth order ODEs with few symmetries can be reduced to linearizable form by this procedure. The class of ODEs treated by this method is not included in the Ibragimov and Meleshko classes or in the conditionally linearizable classes \cite{mq3, mq4} of ODEs (though there can be an overlap, it is not contained in either). The reason is that such an ODE is not itself linearizable but is reducible to a linearizable form. In Lie's programme there is no definite statement available for the cases when the ODEs are not linearizable. With these recent developments this gap may be filled. Using the concept of Meleshko linearization, a new class of scalar ODEs may be defined on the basis of the initial conditions to be satisfied by the ODEs.\\ \begin{center} \textbf{Appendix} \end{center} \begin{eqnarray} (r_0C_1-6r_{0y})y'^2+(6r_0r_{0y'}+4r_0^2-r_0^2C_2-C_0)y'-4r_0^2=0~,\label{C1}\\ \nonumber\\ (C_{2y}-C_{1y'})y'^3+(r_0C_{2y'}+C_2r_{0y'}-4r_{0y'}-6r_{0y'y'})y'^2\nonumber\\ +(10r_{0y'}+4r_0-C_2r_0)y'-8r_0=0~,\label{C2}\\ \nonumber\\ (-6r_0^2C_{1y}-54{(r_{0y})}^2+18r_0r_{0yy}+18r_0r_{0y}C_1-2r_0^2C_1^2)y'^8\nonumber\\ +(3r_0^3C_{1y'}+48r_0^2r_{0y}-3r_0^3C_{2y}-36r_0^2r_{0yy'} -6r_0^2r_{0y}C_2-18r_0^2r_{0y'}C_1\nonumber\\ +2r_0^3C_1C_2-16r_0^3C_1)y'^7+(-60r_0^3r_{0y'}+9r_0^4C_{2y'}-42r_0^2r_{0y}\nonumber\\ -36r_0^2{(r_{0y'})}^2+9r_0^3r_{0y'}C_2+14r_0^3C_1-32r_0^4+8r_0^4C_2+4r_0^4C_2^2\nonumber\\ +18r_0^4D_4)y'^6+(44r_0^4+72r_0^2r_{0y'} -18r_0^3r_{0y'} -7r_0^4C_2)y'^5\nonumber\\ +(-20r_0^4)y'^4-72r_0^5D_5=0~,\label{C3}\\ \nonumber\\ (-12r_0C_{1y}+18r_{0yy'}+18r_{0y}C_1-4r_0C_1^2)y'^8+(9r_0^2C_{1y'}-48r_0r_{0y}\nonumber\\ -27r_0^2C_{2y}-36r_0r_{0yy'}-18r_{0y}+72r_0r_{0y}+24r_0r_{0y}C_2-18r_0r_{0y'}C_1\nonumber\\ -18r_0r_{0y'}-32r_0^2C_1-2r_0^2C_1C_2)y'^7+(-18D_1-36r_0^2r_{0y'}+33r_0^3C_{2y'}\nonumber\\ +6r_0r_{0y}+18r_0^2C_1-21r_0^2r_{0y'}C_2+18r_0{(r_{0y'})}^2-64r_0^3+4r_0^2C_1-8r_0^3C_2\nonumber\\ +20r_0^3C_2^2+72r_0^3D_4)y'^6+(52r_0^3+6r_0^2r_{0y'}+13r_0^3C_2)y'^5\nonumber\\ 
+(-22r_0^3)y'^4-270r_0^4D_5=0~,\label{C4}\\ \nonumber\\ (-3C_{1y}-C_1^2)y'^8+(3r_0C_{1y'}-12r_{0y}-21r_0C_{2y}-8r_0C_1\nonumber\\ +15r_{oy}C_2-5r_0C_1C_2)y'^7+(-9d_2+12r_0r_{0y'}+21r_0^2C_{2y'}-30r_{0y}\nonumber\\ -15r_0r_{0y'}C_2+10r_0C_1-20r_0^2C_2+14r_0^2C_2^2+54r_0^2D_4\nonumber\\ -16r_0^2)y'^6+(-9C_0+28r_0^2+30r_0r_{0y'}+13r_0^2C_2)y'^5+(-40r_0^2)y'^4\nonumber\\ -180r_0^3D_5=0~,\label{C5}\\ \nonumber\\ (-3C_{2y}-C_1C_2)y'^7+(-3D_3+4C_1+3r_0C_{2y'}-4r_0C_2+2r_0C_2^2\nonumber\\ +12r_0D_4)y'^6+(-4r_0+4r_0C_2)y'^5+(-r_0)y'^4-30r_0^2D_5=0~,\label{C6}\\ \nonumber\\ (-54D_{4y}+18C_{1y'y'}+3C_2C_{1y'}-72C_{2yy'}-39C_2C_{2y})y'^8+(24C_{2y}\nonumber\\ +72r_{0y'y'}+12C_2r_{0y'}-6C_{1y'}+36r_0C_{2y'y'}-3r_0C_2C_{2y'}+72r_{0y'}C_{2y'}\nonumber\\ +33C_2^2r_{0y')}+108D_4r_{0y'}+54r_0d_{4y'}+36r_0C_2^2+18r_0C_{2y'y'})y'^7\nonumber\\ +(-168r_{0y'}-12r_0C_2-138r_0C_{2y'}-24C_2r_{0y'}-33r_0C_2^2-36r_0D_4)y'^6\nonumber\\ +(168r_0-228r_0C_2+60r_{0y'})y'^5+(-120r_0)y'^4+(270D_5r_{0y}\nonumber\\ +270r_0D_{5y})y'^2+(54r_0^2D_{5y'}-810r_0r_{0y'}D_5)y'+2160r_0^2D_5=0~,\label{C7} \end{eqnarray} and \begin{eqnarray} (-H_y)y'^2+(3Hr_{0y'}+r_0H_y')y'-3Hr_0=0~,\label{C8} \end{eqnarray} where \begin{eqnarray} H=(D_{4y'}+{1\over 3}C_{2y'y'}+{2\over 3}C_2C_{2y'}+{2\over 3}C_2D_4+{4\over 27}C_2^3)\nonumber\\ +{1\over y'}(-{4\over 3}C_{2y'}+{2\over 3}C_2^2-{4\over 3}D_4-{8\over 9}C_2^2)\nonumber\\ +{1\over y'^2}(-{5\over 9}C_2)+{1\over y'^3}({40\over 27})+{1\over y'^5}(-2D_{5y}-{2\over 3}C_1D_5)\nonumber\\ +{1\over y'^6}(-3r_0D_{5y'}-5D_5r_{0y'}-2r_0C_2D_5-{8\over 3}r_0D_5)\nonumber\\ +{1\over y'^7}(24r_0D_5)~.\label{C9} \end{eqnarray}
\section{Introduction} \label{intro} The Milky Way is one of the 72\% of massive\footnote{By massive galaxies, we arbitrarily consider those with stellar masses larger than $10^{10}M_{\odot}$, the Milky Way being five times more massive than this value.} galaxies that are disk dominated. How did large disks form in massive spirals? The question is still not fully answered. Disks are supported by their angular momentum, which may be acquired through early interactions in the framework of the tidal torque (TT) theory \cite{Peebles76,White}. In this theory, galactic disks are then assumed to evolve without subsequent major mergers, in a secular way, as the Milky Way did from its early and gradual disk formation. Is the Milky Way an archetype of spiral galaxies? The answer is of crucial importance for galaxy formation theory, because the TT theory faces at least two major problems. First, galaxy simulations demonstrate that such disks can be easily destroyed by collisions \cite{Toth92}, and such collisions might be too frequent to let disks survive. Second, the disks produced by simulations are too small or have too small an angular momentum when compared to the observed ones, the so-called "spin catastrophe". \section{How does the Milky Way compare to other local (SDSS) spirals?} \label{sec:1} It is now well established that the Milky Way experienced very few minor mergers and no major merger during the past 10-11 Gyr \cite{Wyse,Gilmore}. The old stellar content of the thick disk leaves open a possible merger origin at such an early epoch, which is still a matter of debate. The Milky Way is presently absorbing the Sagittarius dwarf, though this is a very tiny event given that the Sagittarius mass is less than 1\% of the Milky Way mass \cite{Helmi}. \begin{figure} \resizebox{0.98\columnwidth}{!}{ \includegraphics{hammer_Fig1.eps}} \caption{Reproduction of Figure 5 of \cite{Hammer07} with the new measurements for the circular velocity of the Milky Way. 
One-$\sigma$ uncertainties of both relations are shown as dashed lines. Long- and short-dashed lines show how we select Milky Way like galaxies, which are discrepant in both $L_{K}$ and disk scalelength.} \label{fig:1} \end{figure} How do the fundamental parameters of the Milky Way disk compare to those of other galaxies? The main observational difficulty comes from the fact that we lie inside the Milky Way, so such a comparison is not an easy task. Fortunately, very detailed models of the light distribution of the Galaxy were needed to best remove its signal when recovering the CMB emission. Hipparcos also provided very useful data for modelling the Galaxy in detail. Using these data, \cite{Hammer07} have shown that the Galactic disk scalelength, $R_{d}$= 2.3 $\pm$ 0.6 kpc, is quite small, especially when compared to that of M31 ($R_{d}$= 5.8 $\pm$0.6 kpc). The whole emission of both the Milky Way and M31 in the K-band has been well recovered by COBE \& Spitzer, providing $ M_{K}(AB)$=-22.15 and -22.84 for the Milky Way and M31, respectively. The difference between the two values indicates that the stellar mass of M31 is twice that of the Milky Way, after accounting for their respective stellar mass to K-band luminosity ratios. Even if the Milky Way is approximately twice as gas-rich as M31, the baryonic mass ratio is still close to 2, because the gas content is rather marginal in both galaxies. On the basis of a very detailed study of the local scaling relations (mass-velocity or Tully Fisher, radius-velocity) for local spirals, \cite{Hammer07} showed that M31 is quite a typical spiral, while the Milky Way is surprisingly exceptional, being offset by 1$\sigma$ in both relations. The new measurement by \cite{Reid09}, with data reanalysed by \cite{Bovy09}, provides a Milky Way velocity of 244 km/s instead of the 220 km/s adopted by \cite{Hammer07}. Fig. 
1 shows the position of the Milky Way in the K-band Tully Fisher and in the $R_{d}$-$V_{flat}$ relationships, together with M31 and SDSS galaxies from the complete sample of \cite{Pizagno07}. If correct, the new velocity for the Milky Way would be in excess of that of M31, and would place the Milky Way at $\sim$ 2$\sigma$ for both relations. We have searched for SDSS galaxies with comparable masses that would share the same location as the Milky Way in the ($M_{K}$, $R_{d}$, $V_{flat}$) volume. Only one galaxy (SDSS235607.82+003258.1) among the 79 SDSS galaxies with $Log(V_{flat})\ge$ 2.2 shows a position similar to that of the Milky Way. Thus only 1.2$\pm$1.2\%\footnote{The error has been calculated as in \cite{Hammer07}, assuming that the \cite{Pizagno07} sample is representative of SDSS galaxies; the distribution of galaxies around the mean of both relations shows a scatter similar to that found by other studies.} of SDSS galaxies share the location of the Milky Way in that volume, i.e., a fraction significantly smaller than the value (7\%) found by \cite{Hammer07}. Examination of Fig. 1 shows that this is mainly due to the offset of the Milky Way in the well-defined Tully Fisher relation (only two SDSS galaxies show a similar offset) and not to the scale-length estimate of the Milky Way, as previously argued by \cite{van der Kruit11}. In addition, \cite{Mouhcine06} showed that Milky Way stars in the inner halo have much bluer colours than corresponding stars in the haloes of other spirals, including M31, which implies a deficiency in their [Fe/H] abundances of almost 1 dex. Helmi (2011, private communication) argued that having an external view of the Milky Way would change this result because of the prominence of the Sagittarius Stream. On the other hand, the Sagittarius Stream represents a small fraction of the halo stellar mass \cite{Hammer07}, and its [Fe/H]= -1.2 abundance \cite{Sesar11} is still offset by -0.5 dex when compared to M31 and other spiral haloes. 
Thus, as first guessed by Allan Sandage and verified by \cite{Flynn06} and \cite{Hammer07}, it appears that the Milky Way is almost certainly an exceptional spiral. \section{How does the past history of the Milky Way compare to that of other spirals?} Let us first consider M31. Quoting Sidney van den Bergh\cite{vandenBergh05} in his introduction of the book "The Local Group as an Astrophysical Laboratory": ``Both the high metallicity of the M31 halo, and the $r^{1/4}$ luminosity profile of the Andromeda galaxy, suggest that this object might have formed from the early merger and subsequent violent relaxation, of two relatively massive metal-rich ancestral objects.'' In fact, the numerous streams in the M31 haunted halo could be the result of a single major merger \cite{Hammer10} rather than of a large number of minor mergers. This alternative scenario provides a robust explanation of the Giant Stream (GS) discovered by \cite{Ibata01}: the observed properties of GS stars are consistent with tidal tail stars that are captured by the gravitational potential of a galaxy after a major merger. In fact, GS stars have ages older than 5.5 Gyr \cite{brown07}, which is difficult to reconcile with a recent collision, such as expected for a minor merger \cite{Font08}. The stellar age constraint led \cite{Hammer10} to reproduce the M31 substructures (disk, bulge \& thick disk) as well as the GS after a 3:1 gas-rich merger for which the interaction and fusion may have occurred 8.75$\pm$0.35 and 5.5 $\pm$0.5 Gyr ago, respectively. M31 being quite a typical spiral and possibly a major merger relic, one may wonder what the general past history of most spirals is, and how it differs from that of the Milky Way. Progenitors of present-day giant spirals are similar to galaxies having emitted their light $\sim$ 6 Gyr ago, according to the Cosmological Principle. 
Since the Canada France Redshift Survey \cite{Hammer95,Hammer97}, observations of distant galaxies up to z$\sim$ 1 can provide data with depth and resolution comparable to what is currently obtained for local galaxies. Six billion years ago, the Hubble sequence was very different from the present-day one \cite{Delgado09}. While the E/S0 number density shows no evolution, more than half of the spiral progenitors show peculiar morphologies. Furthermore, \cite{Neichel08} demonstrated that in addition to their peculiar morphologies, these galaxies show anomalous velocity fields on large scales (7 kpc) in their extended ionised gas, i.e., fields not consistent with rotation. This indicates a common process perturbing the ionised gas and stars in half of the spiral progenitors. This cannot be caused by outflows, since in most cases there is no velocity shift between emission and absorption lines \cite{Hammer09}. Most anomalous galaxies reveal peculiar large-scale gas motions that cannot be caused by minor mergers or by secular evolution (e.g. bars), both mechanisms resulting in too small and/or too spatially localised kinematic perturbations \cite{Puech07,Puech11}. Internal fragmentation should have a limited impact at these redshifts ($z_{mean}$=0.65): less than 20\% of the IMAGES sample \cite{Yang08} show clumpy morphologies according to \cite{Puech10}, while the associated cold gas accretion tends to vanish in massive halos at z$<$1, with $<$1.5 $M_{\odot}$/yr at z$\sim$ 0.6 \cite{Keres09}. Besides this, \cite{Hammer09} succeeded in reproducing the morpho-kinematics of the anomalous galaxies using a grid of simple major merger models based on \cite{Barnes02}, providing convincing matches in about two-thirds of the cases. This suggests that a third of z=0.4-0.75 spiral galaxies are or have been potentially involved in a major merger. Why so many major mergers? 
In fact, the morpho-kinematic observational technique used in IMAGES is sensitive to all merger phases, from pairs to post-merger relaxation. \cite{Puech11} compared the merger rate associated with these different phases, and found a perfect match with predictions from state-of-the-art $\Lambda$CDM semi-empirical models \cite{Hopkins10} with no particular fine-tuning. Thus, both theory and observations predict an important impact of major mergers on the progenitors of present-day spiral galaxies. \section{Conclusion: what can we learn on and from the Milky Way formation?} The specific angular momentum of the Milky Way is half that of spirals with similar velocities. Could the quiescent past history of the Milky Way explain its lack of angular momentum? A significant part of the observed angular momentum of spiral galaxies may come from the orbital angular momentum generated by major mergers, which may solve the spin catastrophe \cite{Maller02}. However, the Milky Way still has too large an angular momentum when compared to expectations from the TT theory. \cite{Hammer07} conjectured that a significant part of its disk mass had been acquired through gas accretion, explaining its observed lack of angular momentum. This does not exclude a very ancient and gas-rich merger, a possibility supported by the distribution of orbital eccentricities of thick disk stars \cite{Dierickx09}. It can also be tested whether the bulge has a classical component \cite{Babusiaux10}, because only primordial collapse or mergers are known to produce such a component. The reverse is not necessarily true: some gas-rich mergers may also produce bulges with low Sersic indices \cite{Wang11}. There are now many simulations of disk formation via gas-rich mergers, including in the cosmological context \cite{Brook11}. They naturally produce an important thick disk component that is made of material re-accreted by the newly re-formed galaxy \cite{Brook04,Hammer10}. 
Constrained cosmological simulations of the Local Group are also progressing rapidly towards promising predictions \cite{Forero11}. Having escaped such an event for at least 10 billion years, the Milky Way could be an almost direct descendant of galaxies with masses of a few $10^{10}M_{\odot}$ at z$\sim$ 2-3.
\section{Introduction} \label{sec:intro} The rapid advancement of deep learning has attracted significant attention from researchers exploring how to use deep learning to solve scientific and engineering problems. Since the numerical solution of partial differential equations (PDEs) sits at the heart of many scientific areas, there has been a surge of studies on how to use neural networks to leverage data and physical knowledge to solve PDEs \citep{raissi2019physics,e2018ritz,long2018pde,zang2019weak,li2020fourier,li2020multipole,lu2021learning,gin2021deepgreen,zhang2021ifnn,Teng2022gfnet,CLARKDILEONI2023111793}. The neural network-based methods have several advantages over traditional numerical methods (e.g., finite element, finite difference and finite volume), such as avoiding the need for numerical integration, generating differentiable solutions, and exploiting advanced computing capabilities, e.g., GPUs. Nevertheless, a major drawback of these deep learning methods for solving PDEs is the high computational cost associated with neural network training/retraining using stochastic gradient descent (SGD). One of the popular strategies to alleviate this issue is transfer learning. Transfer learning for PDEs aims to develop a pre-trained neural network that can be effectively re-used to solve a PDE with multiple coefficients or in various domains, or to solve multiple types of PDEs. When transferring a pre-trained neural network from one scenario to another, the feature space, e.g., the hidden layers, is often frozen or only slightly perturbed, which can dramatically reduce the training overhead by orders of magnitude. However, existing transfer learning approaches for PDEs, e.g., \citep{lu2021learning, li2020fourier,Chakraborty2020TransferLB, Desai2021OneShotTL}, require information/knowledge of the target family of PDEs to pre-train a neural network model. 
The needed information could be the analytical definitions of the PDEs, including initial and boundary conditions, and/or measurement data of the PDE's solution. These requirements not only lead to time-consuming simulation data generation using other PDE solvers, but also limit the transferability of the pre-trained neural network (i.e., the pre-trained network is only transferable to the same or similar types of PDEs as those used for pre-training). To overcome the above challenges, in this paper we propose a transferable neural network (TransNet) to improve the transferability of neural networks for solving PDEs. The key idea is to construct a pre-trained neural feature space without using any PDE information, so that the pre-trained feature space can be transferred to a variety of PDEs with different domains and boundary conditions. We limit our attention to single-hidden-layer fully-connected neural networks, which have sufficient expressive power for the low-dimensional PDEs commonly used in science and engineering fields. Specifically, we treat each hidden neuron as a basis function and re-parameterize all the neurons to separate the parameters that determine a neuron's location from the ones that control the shape (i.e., the slope) of the activation function. We then develop a simple, yet very effective, approach to generate uniformly distributed neurons in the unit ball, and rigorously prove the uniformity of the neuron distribution. Next, the shape parameters of the neurons are tuned using auxiliary functions, i.e., realizations of a Gaussian process. The entire feature space construction (determining the hidden neurons' parameters) requires neither the PDE's formulation nor data of the PDE's solution. When applying the constructed feature space to a PDE problem, we only need to solve for the parameters of the output layer by minimizing the standard PDE residual loss.
This can be done either by solving a simple least squares problem for linear PDEs, or by combining a least squares solver with a nonlinear iterative solver, e.g., Picard iteration, for nonlinear PDEs. The major contributions of this work are summarized as \vspace{-0.25cm} \begin{itemize}[leftmargin=10pt]\itemsep-0.0cm \item We develop transferable neural feature spaces that are {independent} of any PDE, and can be applied to effectively solve various linear and nonlinear PDE problems. \item We theoretically and computationally prove the uniform distribution of the hidden neurons, viewed as a global non-orthogonal basis, for the proposed TransNet in the unit ball of any dimension. \item We demonstrate the superior accuracy and efficiency of the proposed TransNet for solving PDEs, e.g., the mean square errors of TransNet are several orders of magnitude smaller than those of the state-of-the-art methods. \end{itemize} \section{Related work} Studies on using neural networks for solving PDEs can be traced back to some early works, e.g., \citep{dissanayake1994neural,lagaris1998artificial}. Recent advances have mostly focused on the physics-informed neural network (PINN). The general idea of PINN is to represent the PDE's solution by a neural network, and then train the network by minimizing a certain measure of the PDE's residual at a set of samples in the computational domain. Several improvements on the training and sampling were proposed in \citep{lu2021deepxde,anitescu2019artificial,zhao2020solving,krishnapriyan2021characterizing}. Besides directly minimizing the PDE's residual, there are studies on how to combine traditional PDE solvers with neural networks.
For example, the deep Ritz method \citep{e2018ritz} uses the variational form of PDEs and combines stochastic gradient descent with numerical integration to train the network; the deep Galerkin method \citep{sirignano2018} combines the Galerkin method with machine learning; the PDE-Net \citep{long2018pde,long2019pde} uses a stack of neural networks to approximate the PDE solutions over multiple time steps. Another type of deep learning method for PDEs uses neural networks to learn a family of PDE operators, instead of a single equation. The Fourier neural operator (FNO) \citep{li2020fourier} parameterizes the integral kernel in Fourier space and is generalizable to different spatial/time resolutions. The DeepONet \citep{lu2021learning} extends the universal approximation theorem \citep{chen1995universal} to deep neural networks, and its variant \citep{wang2021learning} further reduces the amount of data needed for training. The physics-informed neural operator (PINO) \citep{li2021physics} combines operator learning with function approximation to achieve higher accuracy. MIONet \cite{JML2022MIONet} was proposed to learn multiple-input operators via tensor products based on low-rank approximation. Random feature models have also been used to solve PDEs \citep{Sun2018OnTA,https://doi.org/10.48550/arxiv.2212.05591} or to learn PDE operators \citep{doi:10.1137/20M133957X}. The theory of random feature models for function approximation was developed owing to their natural connection with kernel methods \citep{9495136,10.5555/3122009.3122030}.
The proposed TransNet can be viewed as an improved random feature model for PDEs from two perspectives: (1) the re-parameterization of the hidden neurons to separate the parameters that determine the locations of the neurons from the ones that control the activation function's slope, and (2) the use of auxiliary functions to tune the neural feature space, which makes a critical contribution to the improved accuracy of TransNet in solving PDEs. \section{Transferable neural networks for PDEs}\label{sec:method} \subsection{Problem setting and background}\label{sec:backgroud} We introduce the problem setup for using neural networks to solve partial differential equations. The PDE of interest can be presented in a general formulation, i.e., \begin{equation}\label{eq:pde} \left\{ \begin{aligned} & \mathcal{L}(u(\bm y)) = f(\bm y)\;\; \text{ for }\; \bm y \in \Omega,\\ & \mathcal{B}(u(\bm y)) = g(\bm y)\;\; \text{ for }\; \bm y \in \partial \Omega, \end{aligned} \right. \end{equation} where $\Omega \subset \mathbb{R}^{d}$, with boundary $\partial \Omega$, is the bounded spatial-temporal domain under consideration, $\bm y := (\bm x, t) = (x_1, \ldots, x_{d-1}, t)^{\top}$ is a column vector that includes both the spatial and temporal variables, $u$ denotes the unknown solution of the PDE, $\mathcal{L}(\cdot)$ is a differential operator, $\mathcal{B}(\cdot)$ is the operator defining the initial and/or boundary conditions, and $f(\bm y)$ and $g(\bm y)$ are the right-hand sides associated with the operators $\mathcal{L}(\cdot)$ and $\mathcal{B}(\cdot)$, respectively. For notational simplicity, we assume that the solution is a scalar function; the proposed method can be extended to vector-valued functions without any essential difficulty.
We limit our attention to single-hidden-layer fully-connected neural networks, denoted by \begin{equation}\label{eq:fc} u_{\rm NN}(\bm y) := \sum_{m = 1}^M \alpha_m\, \sigma( \bm w_m \bm y + b_m) + \alpha_0, \end{equation} where $M$ is the number of hidden neurons, the row vector $\bm w_m = (w_{m,1}, \ldots, w_{m,d})$ and the scalar $b_m$ are the weights and bias of the $m$-th hidden neuron, the row vector $\bm \alpha = (\alpha_0, \alpha_1, \ldots, \alpha_M)$ collects the weights and bias of the output layer, and $\sigma(\cdot)$ is the activation function. As demonstrated in Section \ref{sec:exp}, this type of neural network has sufficient expressive power for solving a variety of PDEs with satisfactory accuracy. A typical method \citep{2021NatRP...3..422K} for solving the PDE in Eq.~\eqref{eq:pde} is to directly parameterize the solution $u(\bm y)$ as a neural network $u_{\rm NN}(\bm y)$ in Eq.~\eqref{eq:fc} and optimize the neural network's parameters by minimizing the PDE residual loss, e.g., $L(\bm y) = \|\mathcal{L}(u(\bm y)) - \mathcal{L}(u_{\rm NN}(\bm y))\|_2 + \|\mathcal{B}(u(\bm y)) - \mathcal{B}(u_{\rm NN}(\bm y)) \|_2$, at a set of spatial-temporal locations. Despite the good performance of these approaches in solving PDE problems, their main drawback is the {\em limited transferability} caused by the high computational cost of gradient-based re-training and hyperparameter re-tuning. When there is any change to the operators $\mathcal{L}(\cdot), \mathcal{B}(\cdot)$, the right-hand-side functions $f(\bm y), g(\bm y)$, or the shape of the domain $\Omega$, the neural network $u_{\rm NN}(\bm y)$ often needs to be re-trained using gradient-based optimization (even though the current parameter values could provide a good initial guess for the re-training), or the hyperparameters associated with the network and the optimizer need to be re-tuned.
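For concreteness, the single-hidden-layer network in Eq.~\eqref{eq:fc} with a $\tanh$ activation can be sketched in a few lines of NumPy; the array shapes, parameter values, and names below are our own illustrative choices, not part of the method:

```python
import numpy as np

def u_nn(Y, W, b, alpha):
    """Single-hidden-layer network of Eq. (fc):
    u(y) = sum_m alpha_m * tanh(w_m y + b_m) + alpha_0.

    Y: (J, d) sample points; W: (M, d) hidden weights; b: (M,) hidden biases;
    alpha: (M + 1,) output-layer parameters, with alpha[0] the output bias.
    """
    hidden = np.tanh(Y @ W.T + b)          # (J, M) hidden-neuron features
    return alpha[0] + hidden @ alpha[1:]   # (J,) network output

# Tiny usage example with arbitrary parameter values.
rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(5, 2))   # 5 points in 2D
W = rng.standard_normal((10, 2))          # M = 10 hidden neurons
b = rng.standard_normal(10)
alpha = rng.standard_normal(11)
print(u_nn(Y, W, b, alpha).shape)         # (5,)
```

Note that $u_{\rm NN}$ is linear in the output-layer parameters $\bm \alpha$ once the hidden-layer parameters are fixed; this is the property that the least squares solves in the following sections exploit.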
In comparison, random feature models require a much lower re-training cost, which has been exploited in learning operators \citep{doi:10.1137/20M133957X} and dynamical systems \citep{NEURIPS2021_72fe6f9f, https://doi.org/10.48550/arxiv.2212.05591}. \subsection{The neural feature space} We can treat each hidden neuron $\sigma(\bm w_m \bm y + b_m)$ as a nonlinear feature map from the space of $\bm y \in \mathbb{R}^{d}$ to the output space $\mathbb{R}$. From the perspective of approximation theory, the set of hidden neurons $\{\sigma(\bm w_m \bm y + b_m)\}_{m=1}^M$ can be viewed as a globally supported basis in $\mathbb{R}^d$. The {\em neural feature space}, denoted by $\mathcal{P}_{\rm NN}$, is defined as the linear space spanned by this basis, i.e., \begin{equation}\label{eq:fs} \mathcal{P}_{\rm NN} = \mathrm{span}\Big\{1, \sigma(\bm w_1 \bm y + b_1), \ldots, \sigma(\bm w_M \bm y + b_M) \Big\}, \end{equation} where the constant basis function corresponds to the bias of the output layer. Then, the neural network in Eq.~\eqref{eq:fc} lives in this linear space, i.e., $ u_{\rm NN}(\bm y) \in \mathcal{P}_{\rm NN}.$ In other words, the neural network approximation can be viewed as a spectral method with a {\em non-orthogonal} basis, and the output-layer parameters $\bm \alpha$ in Eq.~\eqref{eq:fc} contain the coefficients of the expansion in the neural feature space $\mathcal{P}_{\rm NN}$. In the PINN methods, the neural feature space $\mathcal{P}_{\rm NN}$ and the coefficients $\bm \alpha$ are trained simultaneously using stochastic gradient descent methods, which often leads to a non-convex and ill-conditioned optimization problem. It has been shown that the non-convexity and ill-conditioning in neural network training are major reasons for the unsatisfactory accuracy of the trained neural network.
A natural idea to reduce the complexity of the training is to decouple the training of $\mathcal{P}_{\rm NN}$ from that of $\bm \alpha$. For example, in random feature models, $\mathcal{P}_{\rm NN}$ is defined by randomly generating the parameters $\{(\bm w_m, b_m)\}_{m=1}^M$ from a user-defined probability distribution; the coefficients $\bm \alpha$ can then be obtained by solving a linear system when the operators $\mathcal{L}$, $\mathcal{B}$ in Eq.~\eqref{eq:pde} are linear. However, the numerical experiments in Section \ref{sec:exp} show that the random feature model based on Eq.~\eqref{eq:fc} converges very slowly as the number of features increases. This drawback motivates us to develop a methodology to customize the neural feature space $\mathcal{P}_{\rm NN}$ to improve the accuracy, efficiency and transferability of $u_{\rm NN}$ in solving PDEs. \subsection{Constructing the transferable neural feature space}\label{sec:par} This section contains the key ingredients of the proposed TransNet. The goal is to construct a single neural feature space $\mathcal{P}_{\rm NN}$ that can be used to solve various PDEs in different domains. \subsubsection{Re-parameterization of $\mathcal{P}_{\rm NN}$}\label{sec:repar} The first step is to re-parameterize the hidden neuron $\sigma(\bm w_m \bm y + b_m)$, viewed as a basis function in $\Omega$, to separate the components that determine the {\em location} of the neuron from the components that control the {\em shape} of the neuron. The idea of handling the locations of the basis functions is inspired by studies on activation patterns of ReLU networks. When $\sigma$ is the ReLU function, there is a {\em partition hyperplane} defined by \begin{equation}\label{eq:plane} w_{m,1} y_1 + w_{m,2} y_2 + \cdots + w_{m,d} y_d + b_m = 0 \end{equation} that separates the activated and inactivated regions of this neuron.
The intersections of multiple partition hyperplanes associated with different neurons define the linear regions of a ReLU network. Studies have shown that the expressive power of a ReLU network is determined by the number of linear regions and their distribution. In principle, the more uniformly the linear regions are distributed in the domain $\Omega$, the more expressive power the ReLU network has. For other activation functions, e.g., $\tanh(\cdot)$, which is widely used in solving PDEs due to its smoothness, the partition hyperplane in Eq.~\eqref{eq:plane} can still be used to describe the geometric properties of the neuron. Specifically, let us rewrite Eq.~\eqref{eq:plane} in the following point-slope form: \begin{equation}\label{eq:plane2} \gamma_m\big(a_{m,1} (y_1 - r_m a_{m,1}) + \cdots + a_{m,d} (y_d - r_m a_{m,d})\big) = 0, \end{equation} where $\bm a_m = (a_{m,1}, \ldots, a_{m,d})$ is a unit vector, i.e., $\|\bm a_m \|_2 = 1$, and $r_m > 0$ and $\gamma_m \in \mathbb{R}$ are two scalar parameters for the $m$-th neuron. We can relate Eq.~\eqref{eq:plane2} to Eq.~\eqref{eq:plane} by \begin{equation}\label{eq:rep} \left\{\begin{split} w_{m,i} &= \gamma_m a_{m,i}, \;\;\;i=1,\cdots,d,\\ b_m &= - \gamma_m \sum_{i=1}^d a_{m,i}^2 r_m, \end{split}\right. \end{equation} which reveals the desired geometric properties of the partition hyperplane in Eq.~\eqref{eq:plane}. In terms of the location, the unit vector $\bm a_m$ is the normal direction of the partition hyperplane in $\mathbb{R}^{d}$, the vector $(r_m a_{m,1}, \ldots, r_m a_{m,d})$ indicates a point that the hyperplane passes through, and $r_m$ is the distance between the origin and the partition hyperplane. An illustration is shown in Figure \ref{fig1}{\color{blue}(a)}. In terms of the shape, the constant $\gamma_m$ determines the steepness of the slope of the activation function along the normal direction $\bm a_m$.
Thus, the re-parameterization in Eq.~\eqref{eq:plane2} successfully separates the parameters determining the location from the ones determining the shape. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig4.png} \vspace{-0.7cm} \caption{{\bf (a)} Illustration of how the re-parameterization in Eq.~\eqref{eq:plane2} characterizes the location of a neuron. The blue line is the plane where $\tanh(\cdot) = 0$, $\bm a_m$ (the arrow) is the normal direction of the plane, the red dot is the location $r_m \bm a_m$ that the plane passes through, and $r_m$ is the distance between the origin and the plane. {\bf (b)} Illustration of how to generate uniformly distributed neurons in the unit ball. The first step in {\bf (b)-left} is to generate the normal directions $\{\bm a_m\}_{m=1}^M$ uniformly distributed on the unit sphere; the second step in {\bf (b)-middle} is to generate $\{r_m\}_{m=1}^M$ uniformly from $[0,1]$, defining the locations the neurons' partition hyperplanes pass through; the blue lines in {\bf (b)-right} show the distribution of the partition hyperplanes. {\bf (c)} The density function $D_M(\bm y)$ with $\tau = 0.05$ in Eq.~\eqref{eq:den} for a set of neurons generated using our approach. Our approach provides uniformly distributed neurons in the ball $B_{1-\tau}(\bm 0)$, which is consistent with Theorem \ref{thm1}.} \label{fig1} \end{figure*} \subsubsection{Generating uniformly distributed neurons for $\mathcal{P}_{\rm NN}$}\label{sec:uniform} The second step of constructing $\mathcal{P}_{\rm NN}$ is to determine the parameters $\{(\bm a_m, r_m)\}_{m=1}^M$ in Eq.~\eqref{eq:plane2} such that all the neurons are uniformly distributed in $\Omega$. In this subsection we assume $\Omega$ is the {\em unit ball}, i.e., $B_{1}(\bm 0)=\{\bm y:\|\bm y\|_2 \le 1\} \subset \mathbb{R}^d$. To proceed, we need to define a density function that measures the neuron distribution.
For a given $\bm y \in \Omega$, the distance between $\bm y$ and the partition hyperplane in Eq.~\eqref{eq:plane2} is given by \begin{equation}\label{eq:dist} \mathrm{dist}(\bm y, m) = |\bm a_m (\bm y - r_m \bm a_m)|, \end{equation} for $m = 1, \ldots, M$. We use this distance to measure how close the point $\bm y$ is to the $m$-th neuron. The density function, denoted by $D_M(\bm y)$, is defined using the above distance, i.e., \begin{equation}\label{eq:den} D_M(\bm y) = \frac{1}{M}\sum_{m=1}^M \mathbf{1}_{\mathrm{dist}(\bm y, m) < \tau} (\bm y), \end{equation} where $\mathbf{1}_{\mathrm{dist}(\bm y, m) < \tau} (\bm y)$ is the indicator function of the event that the distance between $\bm y$ and the $m$-th neuron is smaller than a prescribed tolerance $\tau > 0$. Intuitively, $D_M(\bm y)$ measures the percentage of neurons whose partition hyperplanes in Eq.~\eqref{eq:plane} intersect the ball (of radius $\tau$) around $\bm y$. Next we propose the following approach, illustrated in Figure \ref{fig1}{\color{blue}(b)}, to generate the parameters $\{(\bm a_m, r_m)\}_{m=1}^M$. Specifically, we first generate the normal directions $\{\bm a_m\}_{m=1}^M$ uniformly distributed on the $(d-1)$-dimensional unit sphere. Note that when $d>2$, sampling uniformly in the angular space of the hyperspherical coordinate system does not lead to uniformly distributed samples on the unit sphere; this is known as the sphere point picking problem. To overcome this issue, we draw samples from the $d$-dimensional Gaussian distribution in the Cartesian coordinate system and normalize the samples to unit vectors to obtain $\{\bm a_m\}_{m=1}^M$. Then, we generate $\{r_m\}_{m=1}^M$ uniformly from $[0,1]$ by Monte Carlo sampling. The following theorem shows that our approach provides a set of uniformly distributed neurons in $\Omega$, where the density is measured by $D_M(\bm y)$ in Eq.~\eqref{eq:den}.
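Concretely, the two sampling steps and the empirical density of Eq.~\eqref{eq:den} can be sketched in NumPy as follows; the helper names, the random seed, and the evaluation point are our own illustrative choices:

```python
import numpy as np

def sample_neurons(M, d, gamma, rng):
    """Generate uniformly distributed neurons and map them to network
    weights (w_m, b_m) via the re-parameterization in Eq. (rep)."""
    # Step 1: normal directions a_m, uniform on the unit sphere in R^d
    # (normalized d-dimensional Gaussians avoid the sphere point picking problem).
    a = rng.standard_normal((M, d))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    # Step 2: distances r_m of the partition hyperplanes from the origin.
    r = rng.uniform(0.0, 1.0, size=M)
    # Eq. (rep) with ||a_m||_2 = 1 reduces to w_m = gamma a_m, b_m = -gamma r_m.
    return a, r, gamma * a, -gamma * r

def density(y, a, r, tau):
    """Empirical D_M(y) of Eq. (den): fraction of partition hyperplanes within
    distance tau of y; since ||a_m|| = 1, dist(y, m) = |a_m . y - r_m|."""
    return np.mean(np.abs(a @ y - r) < tau)

rng = np.random.default_rng(0)
a, r, W, b = sample_neurons(M=50_000, d=2, gamma=2.0, rng=rng)
print(density(np.array([0.3, -0.4]), a, r, tau=0.05))  # close to tau = 0.05
```

With $50{,}000$ neurons, the printed density matches $\tau$ to within Monte Carlo sampling error, illustrating Theorem~\ref{thm1}.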
\begin{thm}[Uniform neuron distribution]\label{thm1} Given the re-parameterization in Eq.~\eqref{eq:plane2}, if $\{\bm a_m\}_{m=1}^M$ are uniformly distributed random vectors on the unit sphere in $\mathbb{R}^d$, i.e., $\|\bm a_m\|_2 = 1$, and $\{r_m\}_{m=1}^M$ are uniformly distributed random variables in $[0,1]$, then, for a fixed $\tau \in (0,1)$, \[ \mathbb{E}[D_M(\bm y)] = \tau\; \text{ for any}\; \|\bm y\|_2 \le 1-\tau, \] where $D_M(\bm y)$ is the density function defined in Eq.~\eqref{eq:den}. \end{thm} The proof is given in Appendix \ref{sec:proof}; an illustration of the density function is given in Figure \ref{fig1}{\color{blue}(c)}. It is a little surprising that, although the points $\{r_m\bm a_m\}_{m=1}^M$, i.e., the red dots in Figure \ref{fig1}{\color{blue}(b)-middle}, are not uniformly distributed in the ball $B_{1-\tau}(\bm 0)$, the density function $D_M(\bm y)$ is constant in the ball $B_{1-\tau}(\bm 0)$. \begin{rem}[The dimensionality]\label{rem3} Even though Theorem \ref{thm1} holds for any dimension $d$, the number of neurons required to cover a high-dimensional unit ball could still be intractable. On the other hand, the majority of PDEs commonly used in science and engineering are defined in low-dimensional domains, e.g., a 3D spatial domain + a 1D time domain. In this scenario, the proposed method is effective and easy to implement, as demonstrated in Section \ref{sec:exp}. \end{rem} \subsubsection{Tuning the shape of the neurons in $\mathcal{P}_{\rm NN}$ using auxiliary functions }\label{sec:train} The third step is to tune the shape parameters $\{\gamma_m\}_{m=1}^M$ in Eq.~\eqref{eq:plane2} that control the slope of the activation function. The experimental tests in Section \ref{sec:basis} show that the slope parameters play a critical role in determining the accuracy of the neural network approximator $u_{\rm NN}$.
For simplicity, we assume the same shape parameter value for all neurons, i.e., $ \gamma = \gamma_m \text{ for } m = 1, \ldots, M. $ Because we intend to construct a feature space $\mathcal{P}_{\rm NN}$ that can be used in multiple scenarios, e.g., various PDEs with different domains and boundary conditions, we do not want to tune the shape parameter $\gamma$ using any information about a specific PDE. Our idea is to tune $\gamma$ using auxiliary functions whose spatial-temporal variation frequency is similar to, or higher than, that of the PDE solution. Specifically, we propose to use realizations of Gaussian processes to generate the auxiliary functions. The advantage of a Gaussian process is that one can control the variation frequency of its realizations by adjusting the correlation length. Additionally, the Gaussian process is independent of the coordinate system. Let us denote by ${G}(\bm y| \omega, \eta)$ the Gaussian process, where $\omega$ represents the abstract random variable and $\eta$ is the correlation length. Given a correlation length, we first generate a set of realizations of the Gaussian process, denoted by $\{G(\bm y | \omega_k, \eta)\}_{k = 1}^K$. For each realization, define the MSE loss as \begin{equation}\label{eq:mse} \begin{aligned} & \text{MSE}(u_{\rm NN}(\bm y), G(\bm y|\omega_k, \eta)) \\ = & \frac{1}{J}\sum_{j=1}^J \left[\sum_{m=1}^M \alpha_m \sigma(\bm w_m \bm y_j + b_m) + \alpha_0 - G(\bm y_j|\omega_k, \eta) \right]^2, \end{aligned} \end{equation} where the parameters $\{\bm w_m\}_{m=1}^M$ and $\{b_m\}_{m=1}^M$ are already determined using the strategy in Section \ref{sec:uniform} and Eq.~\eqref{eq:rep}, and $J$ denotes the number of sample points. Unlike standard neural network training, the optimal coefficients $\bm \alpha$ minimizing the MSE loss can be obtained efficiently by solving a linear least squares problem.
Hence, the shape parameter $\gamma$ can be tuned by solving the following one-dimensional optimization problem \begin{equation}\label{eq:ls2} \min_{\gamma} \left\{\sum_{k=1}^K \min_{\bm \alpha}\left[\text{MSE}({u}_{\rm NN}(\bm y), G(\bm y| \omega_k, \eta)) \right]\right\}, \end{equation} where, for each candidate $\gamma$, we solve $K$ least squares problems to compute the total loss. \begin{rem}[The choice of the correlation length] There are two strategies for choosing the correlation length $\eta$. One is to use prior knowledge about the PDE. For example, for the Navier-Stokes equations at a low Reynolds number, we know the solution will not have very high-frequency oscillations. The other is to use a conservatively small correlation length to ensure that the feature space has sufficient expressive power to solve the target PDE. \end{rem} \subsection{Applying TransNet to linear and nonlinear PDEs} Once the neural feature space $\mathcal{P}_{\rm NN}$ is constructed and tuned, we can readily use it to solve PDE problems. Even though $\mathcal{P}_{\rm NN}$ is defined on the unit ball, i.e., $B_{1}(\bm 0)$, we can always place the (bounded) domain $\Omega$ of the target PDE inside $B_{1}(\bm 0)$ by a simple translation and dilation. Thus, the feature space can be used to handle PDEs defined in various domains, as demonstrated in Section \ref{sec:exp}.
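Both the shape tuning in Eq.~\eqref{eq:ls2} and the solves described below reduce to the same computational kernel: assemble the feature matrix and solve a linear least squares problem for $\bm \alpha$. A minimal NumPy sketch of the $\gamma$-tuning loop follows; a grid search stands in for the one-dimensional optimizer, smooth random trigonometric functions stand in for the Gaussian-process realizations $G(\bm y|\omega_k,\eta)$, and all helper names are our own:

```python
import numpy as np

def features(Y, a, r, gamma):
    """Basis [1, tanh(gamma * (a_m . y - r_m))] evaluated at points Y (J x d),
    using the re-parameterization of Eq. (plane2)/(rep) with ||a_m|| = 1."""
    return np.hstack([np.ones((Y.shape[0], 1)),
                      np.tanh(gamma * (Y @ a.T - r))])

def fit(Y, target, a, r, gamma):
    """Minimize the MSE of Eq. (mse) over alpha: a linear least squares solve."""
    Phi = features(Y, a, r, gamma)
    alpha, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return np.mean((Phi @ alpha - target) ** 2)

def tune_gamma(Y, targets, a, r, gammas):
    """Grid-search version of the one-dimensional problem in Eq. (ls2)."""
    total = [sum(fit(Y, g, a, r, gamma) for g in targets) for gamma in gammas]
    return gammas[int(np.argmin(total))]

# Usage sketch with M = 200 neurons and K = 5 stand-in auxiliary functions.
rng = np.random.default_rng(1)
a = rng.standard_normal((200, 2)); a /= np.linalg.norm(a, axis=1, keepdims=True)
r = rng.uniform(0.0, 1.0, size=200)
Y = rng.uniform(-1.0, 1.0, size=(400, 2))
targets = [np.sin(Y @ k) for k in rng.standard_normal((5, 2))]
best = tune_gamma(Y, targets, a, r, gammas=[0.5, 1.0, 2.0, 4.0])
```

For a nonlinear PDE, the same least squares kernel is simply called inside each iteration of the nonlinear solver after linearizing the equation.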
{\bf Linear PDEs.}\; When $\mathcal{L}$ and $\mathcal{B}$ in Eq.~\eqref{eq:pde} are linear operators, the unknown parameters $\bm \alpha = (\alpha_0, \ldots, \alpha_M)$ in Eq.~\eqref{eq:fc} can be easily determined by solving the following least squares problem, i.e., \begin{equation}\label{eq:ls}\small \begin{aligned} & \min_{\bm \alpha}\Bigg\{ \frac{1}{J_1}\sum_{j=1}^{J_1}\left[\sum_{m=1}^M\alpha_m\, \mathcal{L}(\sigma( \bm w_m \bm y_j + b_m)) + \alpha_0 - f(\bm y_j)\right]^2 \\ &\;\;\quad + \frac{1}{J_2}\sum_{j=1}^{J_2}\left[\sum_{m=1}^M\alpha_m\, \mathcal{B}(\sigma( \bm w_m \bm y_j + b_m)) + \alpha_0 - g(\bm y_j)\right]^2\Bigg\} \end{aligned} \end{equation} where the parameters $\{\bm w_m\}_{m=1}^M$ and $\{b_m\}_{m=1}^M$ are first computed using the strategy in Section \ref{sec:uniform} and Eq.~\eqref{eq:rep}. {\bf Nonlinear PDEs.}\; When one or both of the operators $\mathcal{L}$ and $\mathcal{B}$ are nonlinear, there are two ways to handle the situation. The first is to wrap the least squares problem in a well-established nonlinear iterative solver, e.g., Picard iteration, to solve the PDE. Within each iteration, the PDE is linearized so that we can update the coefficients $\bm \alpha$ by solving the least squares problem as mentioned above. When there is sufficient knowledge to choose a proper nonlinear solver, we prefer this approach because the well-established theory on nonlinear solvers can ensure a good convergence rate; we in fact adopt this approach for the numerical experiments in this paper. The second feasible approach is to wrap a gradient descent optimizer around the total loss $L(\bm y) = \|\mathcal{L}(u(\bm y)) - \mathcal{L}(u_{\rm NN}(\bm y))\|_2^2 + \|\mathcal{B}(u(\bm y)) - \mathcal{B}(u_{\rm NN}(\bm y)) \|_2^2$. Because the neural feature space $\mathcal{P}_{\rm NN}$ is fixed, the optimization is simpler than training the entire neural network from scratch.
This approach is easier to implement and is suitable for scenarios in which standard nonlinear solvers do not provide a satisfactory solution. \vspace{-0.5cm} \begin{rem}[Not using PDE's solution data] In this work, we do not rely on any measurement data of the solution $u(\bm y)$ when using TransNet to solve PDEs, because the operators $\mathcal{L}$ and $\mathcal{B}$ in Eq.~\eqref{eq:pde} are sufficient to ensure the existence and uniqueness of the PDE's solution. On the other hand, if any extra data of $u(\bm y)$ are available, TransNet can easily incorporate them into the least squares problem in Eq.~\eqref{eq:ls} as a supervised learning loss. \end{rem} \subsection{Complexity and accuracy of TransNet}\label{sec:accuracy} The complexity of TransNet is greatly reduced compared to the scenario of using SGD to train the entire network. The construction of the neural feature space $\mathcal{P}_{\rm NN}$ only involves random number generation and a simple one-dimensional optimization in Eq.~\eqref{eq:ls2}. Moreover, these costs are incurred completely offline, and the constructed $\mathcal{P}_{\rm NN}$ is transferable to various PDE problems. The online operation for solving a linear PDE only requires solving one least squares problem, where the assembly of the least squares matrix can be done efficiently using the autograd functionality in Tensorflow or Pytorch. The numerical experiments in Section \ref{sec:exp} show that the accuracy and efficiency of TransNet are significantly improved compared with several baseline methods, because our method does not suffer from the slow convergence of SGD in neural network training. \section{Numerical experiments}\label{sec:exp} We now demonstrate the performance of TransNet by testing several classic steady-state or time-dependent PDEs in two- and three-dimensional spaces. In Section \ref{sec:basis}, we illustrate how to construct the transferable feature space $\mathcal{P}_{\rm NN}$.
To test and demonstrate the transferability of our model, we build and test two neural feature spaces, one for the 2D case and the other for the 3D case\footnote{Note that the dimension of the feature space is the sum of the space and time dimensions, since the feature space does not distinguish between them.}. The constructed feature spaces are then used in Section \ref{sec:pde2} to solve the model PDE problems. \subsection{Uniform neuron distribution} \label{sec:basis} This experiment uses the algorithm proposed in Section \ref{sec:par} to construct transferable neural feature spaces $\mathcal{P}_{\rm NN}$ in the 2D and 3D unit balls. We tune the shape parameter $\gamma = \gamma_m$ for $m = 1, \ldots, M$ in Eq.~\eqref{eq:plane2} with $K =50$ realizations of the Gaussian process. In addition, we also test the effect of the correlation length and the number of hidden neurons by setting different values for $\eta$ and $M$. For each setting of $\eta$ and $M$, the shape parameter $\gamma$ is tuned separately. Additional information about the experiment setup is given in Appendix \ref{app:gp}. \vspace{-0.2cm} \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{gamma.png} \vspace{-0.0cm} \caption{The loss landscapes of the optimization problem in Eq.~\eqref{eq:ls2} for tuning the shape parameter $\gamma$ of the feature space $\mathcal{P}_{\rm NN}$ in the two- and three-dimensional cases. The blue star is the optimal value of $\gamma$ found by our method. The optimal value of $\gamma$ varies with the number of hidden neurons, meaning that tuning $\gamma$ is necessary to achieve optimal accuracy of $u_{\rm NN}$ when changing the number of hidden neurons. }\label{fig:gamma} \end{figure} Figure \ref{fig:gamma} illustrates the landscapes of the loss function $\sum_{k=1}^K \min_{\bm \alpha}[\text{MSE}({u}_{\rm NN}(\bm y), G(\bm y| \omega_k, \eta)) ]$ of the optimization problem in Eq.~\eqref{eq:ls2} for the 2D and 3D neural feature spaces.
We report the results for two correlation lengths ($\eta = 0.5$ and $\eta = 1.0$) combined with three numbers of hidden neurons ($M = 100, 500, 1000$ for 2D and $M = 500, 1000, 5000$ for 3D). We observe that the loss function behaves roughly like a parabolic curve for a fixed number of hidden neurons, so the problem in Eq.~\eqref{eq:ls2} can be solved by a simple one-dimensional optimization solver. More importantly, we observe that the optimal value of $\gamma$ varies with the number of hidden neurons. This provides an important insight: tuning $\gamma$ is necessary to achieve optimal accuracy of $u_{\rm NN}$ when changing the number of hidden neurons. \begin{figure}[h!] \centering {\includegraphics[width=0.48\textwidth]{GP.png}} \vspace{-0.2cm} \caption{Top row: three realizations of the auxiliary Gaussian process with correlation length $\eta = 0.5$. Bottom row: the distribution of the MSE of TransNet's approximation with 1000 hidden neurons. Thanks to the feature space with uniform density in the 2D unit ball (illustrated in Figure {\color{blue}\ref{fig1}(c)}), we obtain a TransNet approximation with very small MSE fluctuation.}\label{fig:gp-err} \vspace{-0.4cm} \end{figure} \begin{figure*}[h!] \centering \includegraphics[width=0.98\textwidth]{PDE_err.png} \caption{The MSE decay with increasing numbers of hidden neurons for $(C_1)$ to $(C_9)$, where all the methods use the same network architecture. Our TransNet significantly outperforms the baseline methods in two respects: (i) {\em Transferability}: for a fixed number of hidden neurons, TransNet only needs one 2D feature space and one 3D feature space; (ii) {\em Accuracy}: TransNet achieves several orders of magnitude smaller MSE than PINN and the random feature models.
TransNet does not suffer from the slow convergence of SGD-based neural network training, and can exploit more of the expressive power of a given neural network $u_{\rm NN}$ to obtain more accurate PDE solutions.} \label{fig:err} \vspace{-0.0cm} \end{figure*} Figure \ref{fig:gp-err} illustrates the error distribution when using TransNet to approximate three realizations of the Gaussian process with correlation length $\eta = 0.5$ in the 2D unit ball. Even though the purpose of TransNet is not to approximate the Gaussian process, it is interesting to check whether the uniform density $D_M(\bm y)$ (proved in Theorem \ref{thm1}) leads to a uniform error distribution. We use 1000 hidden neurons, and the shape parameter $\gamma$ is set to 2. The bottom row of Figure \ref{fig:gp-err} shows that the MSE distributes uniformly in the unit ball, which demonstrates the effectiveness of the feature space generation method proposed in Section \ref{sec:par}. \subsection{PDE examples}\label{sec:pde2} We then use the constructed 2D and 3D neural feature spaces from Section \ref{sec:basis} to solve two steady-state PDEs (i.e., the Poisson equation and the steady-state Navier-Stokes equation) and two time-dependent PDEs (i.e., the Fokker-Planck equation and the wave equation). The definitions of the PDEs under consideration are given in Appendix \ref{app:pde}.
We perform the following testing cases: \vspace{-0.2cm} \begin{itemize}[leftmargin=30pt]\itemsep-0.1cm \item[($C_1$)] Poisson equation (2D space) in a box domain; \item[($C_2$)] Poisson equation (2D space) in a circular domain; \item[($C_3$)] Poisson equation (2D space) in an L-shaped domain; \item[($C_4$)] Poisson equation (2D space) in an annulus domain; \item[($C_5$)] Poisson equation (3D space) in a box domain; \item[($C_6$)] Steady-state Navier-Stokes equation (2D space); \item[($C_7$)] Fokker-Planck equation (1D space + 1D time); \item[($C_8$)] 2D Fokker-Planck equation (2D space + 1D time); \item[($C_9$)] 1D wave equation (1D space + 1D time); \end{itemize} \vspace{-0.3cm} to demonstrate the transferability of TransNet in solving various PDEs in different domains. Recall that for time-dependent PDEs, the temporal variable is simply treated as an extra dimension, so that we will use the 2D feature space to solve problems $(C_7)$ and $(C_9)$ and the 3D feature space to solve problem $(C_8)$. We compare our method with two baseline methods, i.e., the random feature model and the PINN. All the methods use the same network architecture, i.e., Eq.~\eqref{eq:fc} with the $\tanh$ activation. Additional information about the setup of the experiments is given in Appendix \ref{app:setup}. \begin{table*}[h!]
\footnotesize \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \toprule & $(C_1)$ & $(C_2)$ & $(C_3)$ & $(C_4)$ & $(C_5)$ & $(C_6)$ & $(C_7)$ & $(C_8)$ & $(C_9)$ \\ \midrule Random feature model & 0.25s & 0.22s & 0.22s & 0.19s & 0.96s & 12.85s & 0.92s & 1.21s & 0.47s \\ PINN:Adam & 29.69s & 25.34s & 24.57s & 22.24s & 110.59s & 69.73s & 61.45s & 97.12s & 49.25s\\ PINN:Adam+BFGS & 125.78s & 121.46s& 120.93s & 119.24s & 264.62s & 191.53s & 172.86s & 178.99s & 152.71s\\ TransNet & 0.27s & 0.20s & 0.20s & 0.17s & 1.03s & 11.14s & 0.97s & 1.27s & 0.51s \\ \bottomrule \end{tabular} \vspace{-0.2cm} \caption{The computing times of TransNet and the baselines in solving the nine PDE test cases with 1000 hidden neurons. TransNet and the random feature model are significantly faster than PINN because they do not require SGD.} \label{tab1} \vspace{-0.2cm} \end{table*} Figure \ref{fig:err} shows the MSE decay as the number of hidden neurons increases, where the number of hidden neurons is chosen as $M =$ 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 for the 2D feature space, and $M =$ 1000, 2000, 3000, 4000, 5000 for the 3D feature space. We observe that our TransNet achieves superior performance on all nine test cases, which demonstrates the outstanding transferability of TransNet. PINN with BFGS acceleration provides a good accuracy gain compared with PINN with Adam, which indicates that the landscape of the PDE loss exhibits severe ill-conditioning as the SGD method approaches the minimizer\footnote{BFGS can alleviate ill-conditioning by exploiting second-order information, e.g., the approximate Hessian.}. In comparison, TransNet does not require SGD in solving the PDEs, so it does not suffer from the slow convergence of SGD used in PINN.
Figure \ref{fig:PINN_den} shows the density function $D_M(\bm y)$ in Eq.~\eqref{eq:den} of the feature spaces obtained by training PINN and the random feature models in solving the Poisson equation in the 2D space, i.e., cases $(C_1)$--$(C_4)$, where the constant $\tau$ in Eq.~\eqref{eq:den} is set to 0.2. Compared with TransNet's uniform density shown in Figure \ref{fig1}{\color{blue}(c)}, the feature spaces obtained by the baseline methods have highly non-uniform densities in the domain of computation. The random feature models tend to have higher density, i.e., more hidden neurons, near the center of the domain. The first row in Figure \ref{fig:PINN_den} can be viewed as the initial densities of the feature space for PINN; the second and the third rows are the final densities. We can see that the training of PINN does not necessarily lead to a more uniform density function $D_M(\bm y)$, which is one of the reasons why PINN cannot exploit the full expressive power of the neural network $u_{\rm NN}$. \vspace{-0.5cm} \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{PINN_den.png} \vspace{-0.2cm} \caption{The density function $D_M(\bm y)$ with $\tau =0.2$ in Eq.~\eqref{eq:den} of the neural feature spaces obtained by training PINN and the random feature models in solving the Poisson equation in the 2D space, i.e., problems $(C_1)$--$(C_4)$. Compared to the uniform density of TransNet in Figure \ref{fig1}{\color{blue}(c)}, neither PINN nor the random feature model provides a feature space with uniform density, which is one explanation for their under-performance shown in Figure \ref{fig:err}.} \label{fig:PINN_den} \vspace{-0.1cm} \end{figure} \vspace{0.5cm} \section{Conclusion}\label{sec:con} We propose a transferable neural network model to advance the state of the art of using neural networks to solve PDEs.
The key ingredient is to construct a neural feature space independent of any PDE, which makes it easy to transfer the neural feature space to various PDEs in different domains. Moreover, because the feature space is fixed when using TransNet to solve a PDE, we only need to solve linear least squares problems, which avoids the drawbacks of SGD-based training algorithms, e.g., ill-conditioning. Numerical experiments show that the proposed TransNet can exploit more of the expressive power of a given neural network than the compared baselines. This work is a first step in this research direction, and there are multiple related topics that we will study in future work, including (1) {\em theoretical analysis of the convergence rate of TransNet in solving PDEs.} We observe in Figure \ref{fig:err} that the MSE of TransNet decays as the number of hidden neurons increases. A natural question is whether TransNet can achieve the optimal convergence rate of the single-hidden-layer fully-connected neural network. (2) {\em Extension to multi-layer neural networks.} Even though the single-hidden-layer model has sufficient expressive power for the PDEs tested in this work, there are more complicated PDEs, e.g., turbulence models, that could require multi-layer models with much higher expressive power. (3) {\em The properties of the least squares problem.} In this work, we use the standard least squares solver of PyTorch in the numerical experiments. However, the properties of this specific least squares problem warrant further investigation. For example, since the set of neurons $\{\sigma(\bm w_m \bm y + b_m)\}_{m=1}^M$ forms a non-orthogonal basis, it is possible to have linearly dependent neurons, which would reduce the column rank of the least squares matrix or even lead to an under-determined system.
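The rank-deficiency scenario can be sketched as follows; the duplicated neuron is an artificial construction, and the ridge fallback shown is only one possible regularization, not the solver used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, M = 200, 50
A = np.tanh(rng.normal(size=(n_pts, M)))   # a generic non-orthogonal feature matrix
A[:, 1] = A[:, 0]                          # duplicate one neuron: linearly dependent columns

assert np.linalg.matrix_rank(A) < M        # the least squares matrix has lost column rank

# Ridge regression: solve (A^T A + lam*I) c = A^T f, which is well posed for any lam > 0.
f = rng.normal(size=n_pts)
lam = 1e-6
c = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ f)
```

In such a rank-deficient case the plain normal equations become singular, while the ridge system remains uniquely solvable and splits the weight evenly between the duplicated neurons.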
This will require the use of some regularization techniques, e.g., ridge regression, to stabilize the least squares system. Additionally, compressed sensing, i.e., $\ell_1$ regularization, could be added to remove redundant neurons from the feature space as needed and obtain a sparse neural network. \section*{Acknowledgement} This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics Program, under the contract number ERKJ387. This work was accomplished at Oak Ridge National Laboratory (ORNL). ORNL is operated by UT-Battelle, LLC., for the U.S. Department of Energy under Contract DE-AC05-00OR22725.
\section{Introduction} The main advantage of zeroth-order (derivative-free) optimization methods \cite{rosenbrock1960automatic,fabian1967stochastic,brent1973algorithms,spall2003introduction,conn2009introduction,larson2019derivative-free} is that computing a function value is, in general, simpler than computing its gradient vector. On the one hand, zeroth-order methods usually have worse convergence rates and may be inferior to gradient methods endowed with the Fast Automatic Differentiation (FAD) technique, for which it is known \cite{kim1984efficient,baydin2018automatic} that if there is a sequence of computational operations to evaluate the value of a function, then with at most four times as many arithmetic operations it is possible to evaluate the gradient of this function. On the other hand, there are still a number of situations when the objective is given as a black box, there is no access to function derivatives, and the FAD technique is not applicable. One important recent example is Reinforcement Learning, where the goal is to find an optimal control strategy by observing, in a stochastic environment, some black-box reward function values; see \cite{sutton2018reinforcement} for a review and examples. The problem can be even more complicated when one deals with computer simulation of some physical processes, e.g. satellite movement, since such models often have some noise in their outputs. Similarly, in Reinforcement Learning only noisy observations of the reward function are available. Moreover, the noise can be biased \cite{DOI:10.1609/aaai.v34i04.6086} and standard batch averaging may not help. Thus, it is important to analyze zeroth-order methods in the setting of possibly biased noisy observations of the objective function. Another important application of zeroth-order methods with noisy observations is min-max or min-min problems, which are particular settings of bi-level optimization problems.
For example, in \cite{bolte2020holderian} the authors consider the problem \[ \min_x \{f(x)=\max_{y}L(x,y)\}, \] where $f$ has locally H\"older-continuous gradient and only inexact values of $f$ and its gradient are available via inexact solution to the inner maximization problem in $y$. This leads to a non-convex minimization problem with inexact oracle and the authors focus on first-order inexact oracle. Motivated, in particular, by such problems, we consider in this paper the case when only noisy observations of the objective value $f$ are available. Our bounds on the noise help to evaluate what accuracy of the solution to the inner problem is sufficient to solve the outer problem with some desired accuracy. \textbf{Related works.} In \cite{NesterovSpokoiny2015}, among other settings, the authors consider minimization of a non-convex function $f$ on $\mathbb{R}^n$ with exact values of $f$ and used the Gaussian smoothing technique with parameter $\mu$ to prove convergence to a stationary point of a smoothed function $f_{\mu}(x)$, which is a uniform approximation to $f(x)$. The main idea is that the smoothed function $f_{\mu}(x)$ has better properties, e.g. it is smooth even if $f$ is non-smooth. In the case when $f$ has Lipschitz-continuous gradient, the authors of \cite{NesterovSpokoiny2015} prove that their method achieves $\mathbb{E}\left[\|\nabla f(x_{N})\|^2_{\ast}\right]\leqslant \varepsilon_{\nabla f}$ after $N=O\left(\frac{n}{\varepsilon_{\nabla f}}\right)$ steps with 2 oracle calls in each step. 
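For intuition about the smoothing technique of \cite{NesterovSpokoiny2015} just described, the uniform-approximation property can be checked numerically. The sketch below Monte Carlo estimates $f_{\mu}(x)=\mathbb{E}_u\left[f(x+\mu u)\right]$ for the non-smooth Lipschitz function $f(x)=\|x\|_1/\sqrt{n}$ (so $L_0 = 1$ with respect to the Euclidean norm) and verifies the bound $|f_{\mu}(x)-f(x)|\leqslant \mu L_0 n^{1/2}$; the choice of $f$, the point $x$, and the sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, L0 = 5, 0.1, 1.0
f = lambda x: np.abs(x).sum() / np.sqrt(n)    # Lipschitz with L0 = 1 (Euclidean norm), non-smooth at 0

x = rng.normal(size=n)
U = rng.normal(size=(100_000, n))             # samples of u ~ N(0, I_n)
f_mu = np.mean(np.abs(x + mu * U).sum(axis=1)) / np.sqrt(n)   # Monte Carlo estimate of E_u[f(x + mu*u)]

# Uniform-approximation bound in the Lipschitz case: |f_mu(x) - f(x)| <= mu * L0 * sqrt(n).
gap = abs(f_mu - f(x))
```

The estimated gap stays well inside the bound, while $f_{\mu}$ itself is smooth even though $f$ is not.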
When $f$ is Lipschitz-continuous they estimate an appropriate value of the parameter $\mu$ such that the smoothed function $f_{\mu}(x)$ satisfies $|f_{\mu}(x)-f(x)|\leq \epsilon_f$ for all $x \in \mathbb{R}^n$, and prove that in order to obtain $\mathbb{E}\left[\|\nabla f_{\mu}(x_{N})\|^2_{\ast}\right]\leqslant \varepsilon_{\nabla f}$ it is sufficient to make $N=O\left(\frac{n^3}{\epsilon_f \varepsilon_{\nabla f}^2}\right)$ steps of their method with 2 oracle calls in each step. This technique was later used in \cite{ghadimi2013stochastic} (RSGF algorithm) and \cite{ghadimi2013minibatch} (RSPGF algorithm) to build an algorithm which finds a so-called $(\varepsilon_{\nabla f},\Lambda)$-solution, i.e. a point $x$ s.t. $\mathbb{P}\{\|\nabla f(x)\|^2_{\ast}\leqslant \varepsilon_{\nabla f}\}\geqslant 1-\Lambda$, in the case of a function with Lipschitz-continuous gradient and a stochastic oracle $F(x,\xi)$ s.t. $\mathbb{E}_{\xi}[F(x, \xi)] = f(x)$. They have shown that to find an $(\varepsilon_{\nabla f},\Lambda)$-solution it is sufficient to make $O\left( C_1\frac{n}{\varepsilon_{\nabla f}}+C_2\frac{n}{\varepsilon_{\nabla f}^2}\right)$ calls to the stochastic zeroth-order oracle (here the constants $C_1,C_2$ depend on $\Lambda$ and other parameters of the problem, such as the Lipschitz constant and the diameter of the feasible set). In the works \cite{berahas2020theoretical,berahas2019global} the authors compare several types of gradient approximations $g(x)$, including Gaussian smoothing and smoothing based on uniform sampling on the Euclidean sphere, in terms of the number of calls to the inexact zeroth-order oracle $\hat{f}(x)$ for $f$ which guarantees the approximation condition $\|g(x)-\nabla f(x)\|_{\ast}\leqslant \theta \|\nabla f(x)\|_{\ast}$, where $\theta\in[0,1)$. They show that random-directions-based methods lose in theory to the standard finite-differences approach, needing more oracle calls to ensure the above approximation condition.
However, in \cite{liu2018zerothorder} zeroth-order variants of stochastic variance reduction methods, called ZO-SVRG, are considered, and a variant which uses the random directions approach required fewer oracle calls in the experiments than the standard finite-differences variant (ZO-SVRG-Coord), despite having a worse theoretical convergence rate. In this paper we do not rely on the above approximation condition, which allows us to obtain better complexity bounds for the considered approach based on random directions and Gaussian smoothing. \textbf{Our contributions.} The works listed above mainly focus on the setting when the objective $f$ has Lipschitz-continuous gradient. The only paper that considers the non-smooth setting with $f$ being Lipschitz continuous is \cite{NesterovSpokoiny2015}, where the value of the objective $f$ is assumed to be known exactly. Our main contribution consists in obtaining complexity bounds for zeroth-order methods with inexact values of the objective in the setting of $f$ having H\"older-continuous gradient, i.e. for some $L_{\nu}>0, \nu \in [0,1]$, $\|\nabla f(x) -\nabla f(y)\|_*\leq L_{\nu}\|x-y\|^{\nu}$. This assumption is more general and includes as particular cases the previously considered settings of objectives $f$ with Lipschitz-continuous gradient and objectives $f$ which are differentiable and Lipschitz continuous. Our approach uses a finite-difference gradient approximation based on normally distributed random Gaussian vectors $u$, and we prove that a gradient descent scheme based on this approximation ensures \begin{align*} \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}\text{ after }N = O\left(\frac{n^{\frac{(7-3\nu)}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right) \end{align*} steps. Here $x_k$ are the iterates, $f_\mu(\cdot)$ is a smoothed version of the objective $f$, $\mathcal{U} = (u_0,\ldots
, u_{N-1})$ is the history of the realizations of the random Gaussian vector $u$. We also consider convergence to a stationary point of the initial (non-smoothed) objective function $f$ and prove that this scheme ensures \begin{align*} \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}\text{ after }N = O\left(\frac{n^{2 + \frac{(1-\nu)}{2\nu}}}{\varepsilon_{\nabla f}^{\frac{1}{\nu}}}\right) \end{align*} steps, when $\nu \in (0,1]$. For both cases we obtain bounds for the maximum level of noise in the zeroth-order oracle that does not affect the above iteration complexity bounds. The main difference of our work from \cite{NesterovSpokoiny2015} is that we consider the inexact oracle setting and intermediate smoothness $\nu \in [0,1]$ rather than the cases $\nu \in \{0,1\}$, which we also cover in a unified manner. We additionally provide a refined analysis for the case $\nu = 1$ to achieve the complexity bound $N=O\left(\frac{n}{\varepsilon_{\nabla f}}\right)$ both for $\min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}$ and $\min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}$, which is similar to the bound in \cite{NesterovSpokoiny2015} for this case. The rest of the paper is organized as follows. The first section contains necessary definitions and some technical lemmas which extend or improve the corresponding bounds derived in \cite{NesterovSpokoiny2015}. In the second section, we consider a simple gradient descent process with Gaussian-sampling-based finite-difference gradient approximation and obtain complexity bounds for this method in terms of the gradient norm of the smoothed and of the non-smoothed function.
We also analyze how the noise in the objective values influences the convergence and what level of inexactness can be tolerated without changing the convergence properties. \section{Gaussian smoothing, zeroth-order oracle} This section provides the problem statement, technical preliminaries, and properties of the function $f_\mu$ obtained from $f$ by Gaussian smoothing, as well as the gradient of $f_\mu$ and the estimates for the difference between $f$ and $f_{\mu}$ as well as between their gradients. \subsection{Definitions} We mostly follow the notation in \cite{NesterovSpokoiny2015} and \cite{Dvurechensky2017GradientMW}, where a similar problem was considered from the point of view of an inexact first-order oracle. We start with some definitions from \cite{NesterovSpokoiny2015}. For an $n$-dimensional space $E$, we denote by $E^{\ast}$ its dual space. The value of a linear function $s \in E^{\ast}$ at a point $x \in E$ is denoted by $\langle s, x\rangle$. We endow the spaces $E$ and $E^{\ast}$ with Euclidean norms \begin{align}\label{def:norm} \|x\|^2=\langle Bx, x\rangle,~\forall x\in E~~~\|s\|_{\ast}^2=\langle s, B^{-1}s\rangle,~\forall s\in E^{\ast}, \end{align} where $B:E\to E^{\ast}$ is a linear operator s.t. $B\succ 0$. In this paper we consider the problem \begin{align}\label{opt_problem} \min\limits_{x\in E} f(x) \end{align} under the two following assumptions. \begin{assumption}\label{asmpt:noisy_oracle} The function $f(x)$ is equipped with an \textit{inexact zeroth-order oracle} $\tilde{f}(x,\delta)$, i.e. there exists $\delta > 0$ such that one can calculate $\tilde{f}(x, \delta )\in \mathbb{R}$ satisfying, for all $x \in E$, \begin{align}\label{def:inexact_zeroth_order_oracle} & |f(x) - \tilde{f}(x, \delta)|\leqslant\delta. \end{align} \end{assumption} \begin{assumption}\label{asmpt:holder_grad_function} The function $f(x)$ is differentiable with H\"older-continuous gradient with some $\nu \in [0, 1]$ and $L_{\nu}\geqslant 0$, i.e.
\begin{align} \|\nabla f(y) - \nabla f(x)\|_{\ast} \leqslant L_{\nu}\|y-x\|^{\nu},~\forall x,y\in E. \end{align} \end{assumption} The latter inequality implies the useful bound \begin{align} f(y) \leqslant f(x) + \langle \nabla f(x), y-x\rangle + \frac{L_{\nu}}{1 +\nu}\|y-x\|^{1 + \nu},~\forall x,y\in E. \end{align} Next, we consider the Gaussian smoothed version of $f(x)$ defined in \cite{NesterovSpokoiny2015}. \begin{definition}\label{def:gaussian_approximation} Consider a function $f:E\to\mathbb{R}$. Its Gaussian approximation $f_{\mu}(x)$ is defined as \begin{align}\label{def:gaussian_approximation:f_exp} f_{\mu}(x) = \frac{1}{\kappa}\int\limits_{E}f(x+\mu u) e^{-\tfrac{1}{2}\|u\|^2}du, \end{align} where \begin{align} \kappa \overset{\text{def}}{=} \int\limits_{E}e^{-\tfrac{1}{2}\|u\|^2}du = \frac{(2\pi)^{n/2}}{[\det B]^{1/2}}. \end{align} \end{definition} It can be shown (see Section 2 of \cite{NesterovSpokoiny2015} for details) that \begin{align} \nabla f_\mu(x)& =\frac{1}{\kappa} \int\limits_{E}\frac{f(x+\mu u) - f(x)}{\mu}e^{-\tfrac{1}{2}\|u\|^2}Budu \label{def:gaussian_approximation:gradient_as_f_difference_exp}\\ &=\frac{1}{\kappa} \int\limits_{E}\frac{f(x+\mu u)}{\mu}e^{-\tfrac{1}{2}\|u\|^2}Budu \label{def:gaussian_approximation:gradient_as_f_exp}\\ \nabla f(x) & = \frac{1}{\kappa}\int\limits_{E}\langle \nabla f(x), u\rangle e^{-\tfrac{1}{2}\|u\|^2}Budu, \end{align} where the latter equality holds when $f(x)$ is differentiable at $x$. If $f$ is differentiable on $E$, then \begin{align} \nabla f_\mu(x)& =\frac{1}{\kappa} \int\limits_{E}\nabla f(x+\mu u)e^{-\tfrac{1}{2}\|u\|^2}du. \label{def:gaussian_approximation:gradient_as_gradient_exp} \end{align} The Gaussian approximation of the function $\tilde{f}(x,\delta)$ then takes the form \begin{align}\label{def:gaussian_approximation:inexact_f_exp} \tilde{f}_{\mu}(x,\delta) = \frac{1}{\kappa}\int\limits_{E}\tilde{f}(x+\mu u,\delta) e^{-\tfrac{1}{2}\|u\|^2}du.
\end{align} We also define the following vector which plays the role of the gradient of $\tilde{f}_{\mu}(x,\delta)$ \begin{align} \nabla\tilde{f}_\mu(x, \delta) & = \frac{1}{\kappa} \int\limits_{E}\frac{\tilde{f}(x+\mu u,\delta) - \tilde{f}(x,\delta)}{\mu}e^{-\tfrac{1}{2}\|u\|^2}Budu \label{def:gaussian_approximation:inexact_gradient_as_inexact_f_difference_exp}\\ & = \frac{1}{\kappa} \int\limits_{E}\frac{\tilde{f}(x+\mu u,\delta)}{\mu}e^{-\tfrac{1}{2}\|u\|^2}Budu. \label{def:gaussian_approximation:inexact_gradient_as_inexact_f_exp} \end{align} For the case of $\delta = 0$ we have $\nabla\tilde{f}_\mu(x, \delta) = \nabla f_\mu(x)$. It is also worth noting that, in general, it is not possible to obtain for $\nabla\tilde{f}_\mu(x, \delta)$ a representation similar to \eqref{def:gaussian_approximation:gradient_as_gradient_exp} since the function $\tilde{f}(x, \delta)$ is not necessarily differentiable. \subsection{Basic results} As shown in Lemma 3 in \cite{NesterovSpokoiny2015}, for $f(x)$ with Lipschitz-continuous gradient, it holds that \begin{align*} \|\nabla f_\mu(x) - \nabla f(x)\|_{\ast} \leqslant \frac{\mu L_1}{2}(n+3)^{\nicefrac{3}{2}}. \end{align*} This result was improved and extended (see A.1 in \cite{berahas2020theoretical}) to the noisy case giving \begin{align*} \|\nabla\tilde{f}_\mu(x, \delta) - \nabla f(x)\|_{\ast} \leqslant \frac{\delta}{\mu}n^{1/2} + \mu L_{1}n^{1/2}. \end{align*} Extending it to the H\"older case we can show the following result. \begin{lemma} \label{Lm:2:1} Under Assumptions \ref{asmpt:noisy_oracle} and \ref{asmpt:holder_grad_function} it holds that \begin{align*} \|\nabla\tilde{f}_\mu(x, \delta) - \nabla f_\mu(x)\|_{\ast} & \leqslant \frac{\delta}{\mu}n^{1/2} \\ \|\nabla f_\mu(x) - \nabla f(x)\|_{\ast} & \leqslant \mu^{\nu} L_{\nu}n^{\nu/2}, \end{align*} and, consequently, \begin{align} \label{C_grad_diff} \|\nabla\tilde{f}_\mu(x, \delta) - \nabla f(x)\|_{\ast} \leqslant \frac{\delta}{\mu}n^{1/2} + \mu^{\nu} L_{\nu}n^{\nu/2}. 
\end{align} \end{lemma} \begin{proof} \hyperlink{Lm:2:1:proof}{Can be found in Appendix.} \end{proof} It can be shown (assuming $f$ is Lipschitz-continuous with constant $L_0$), that $f_{\mu}$ has H\"older-continuous gradient with $\nu=1$ and $L=\frac{n^{1/2}}{\mu}L_0$ (Lemma 2 from \cite{NesterovSpokoiny2015}). Thus, we can obtain in this case that \begin{align}\label{ineq:holder_bias_zero_delta} |f_{\mu}(y)-f_{\mu}(x)-\langle \nabla f_{\mu}(x),y-x\rangle|\leqslant \frac{L}{2}\|x-y\|^2. \end{align} Under more general H\"older condition we can obtain the following inexact version of \eqref{ineq:holder_bias_zero_delta}. \begin{lemma} \label{Lm:2:2} Under Assumption \ref{asmpt:holder_grad_function} it holds that \begin{align*} |f_{\mu}(y)-f_{\mu}(x)-\langle \nabla f_{\mu}(x),y-x\rangle|\leqslant \frac{A_1}{2}\|y-x\|^2 + A_2, \end{align*} where either \begin{align*} A_1 = \frac{L_{\nu}}{\mu^{1-\nu}}n^{\tfrac{1+\nu}{2}} \text{, } A_2 = 0, \end{align*} or \begin{align*} A_1 = \left[\frac{1}{\hat{\delta}}\right]^{\frac{1-\nu}{1+\nu}}\frac{2L_{\nu}}{\mu^{1-\nu}} \text{, } A_2 = \hat{\delta}L_{\nu}\mu^{1+\nu} \text{ where } \hat{\delta}>0. \end{align*} \end{lemma} \begin{proof} \hyperlink{Lm:2:2:proof}{Can be found in Appendix.} \end{proof} One of the most important properties of the smoothed function $f_{\mu}(x)$ is that it provides a uniform approximation for $f$. For example, when $f$ is Lipschitz-continuous with constant $L_{0}$ it can be shown (see Theorem 1 from \cite{NesterovSpokoiny2015}) that \begin{align*} |f_{\mu}(x)-f(x)|\leqslant \mu L_{0}n^{1/2}. \end{align*} For the more general case of H\"older-continuous gradient we obtain the following more general result. \begin{lemma} \label{Lm:2:3} Under Assumption \ref{asmpt:holder_grad_function} it can be shown that \begin{align*} |f_{\mu}(x)-f(x)|\leqslant \frac{L_{\nu}}{1+\nu}\mu^{1+\nu}n^{\frac{1+\nu}{2}}. 
\end{align*} \end{lemma} \begin{proof} \hyperlink{Lm:2:3:proof}{Can be found in Appendix.} \end{proof} From Lemma \ref{Lm:2:1} we can obtain an upper bound that connects the gradient norm of $f$ with the gradient norm of its smoothed approximation $f_{\mu}$. This will be the key to translating the convergence rate in terms of the gradient of the smoothed function into a convergence rate in terms of the gradient of the original objective $f$. \begin{lemma} \label{Lm:2:4} Under Assumption \ref{asmpt:holder_grad_function} it holds that \begin{align*} \|\nabla f(x)\|_{\ast}^2 \leqslant 2\|\nabla f_{\mu}(x)\|_{\ast}^2 + 2\mu^{2\nu} L_{\nu}^2 n^{\nu}. \end{align*} \end{lemma} \begin{proof} \hyperlink{Lm:2:4:proof}{Can be found in Appendix.} \end{proof} In the next section we consider a gradient descent method with the gradient replaced by the random gradient estimate \begin{align}\label{random_noisy_grad_est} g_{\mu}(x,u,\delta)=\frac{\tilde{f}(x+\mu u,\delta) - \tilde{f}(x,\delta)}{\mu}Bu, \end{align} where $u$ is a Gaussian random vector with mean $0_n$ and identity covariance matrix $I_n$ ($u\sim\mathcal{N}(0,I_n)$). Thus $\mathbb{E}_u\left[ g_{\mu}(x,u,\delta)\right] = \nabla\tilde{f}_\mu(x, \delta)$. In what follows we also need one technical result about this estimate. \begin{lemma} \label{Lm:2:5} Under Assumptions \ref{asmpt:noisy_oracle} and \ref{asmpt:holder_grad_function}, for the gradient estimate (\ref{random_noisy_grad_est}) it holds that \begin{align*} &\mathbb{E}_{u}\left[\|g_{\mu}(x,u,\delta)\|^2_{\ast}\right] \leqslant 20(n+4)\|\nabla f_{\mu}(x)\|^2_{\ast} + \\ & + 5\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 A_1^2}{4}(n+6)^3 + \frac{A_2^2}{\mu^2}n\right), \end{align*} where $A_1,A_2$ are the constants from Lemma \ref{Lm:2:2}.
\end{lemma} \begin{proof} \hyperlink{Lm:2:5:proof}{Can be found in Appendix.} \end{proof} \section{Convergence rate analysis} We consider a gradient descent process \begin{align}\label{gd_process} x_{k+1} = x_k - h_k B^{-1}g_{\mu}(x_k,u_k,\delta), \end{align} where $u_k$ is a normal random vector and $g_{\mu}(x_k,u_k,\delta)$ is defined in (\ref{random_noisy_grad_est}). We will consider two types of convergence -- in the sense of $\|\nabla f(x_k)\|_{\ast}$ and $\|\nabla f_{\mu}(x_k)\|_{\ast}$. We start by proving the following result. \begin{lemma}\label{Lm:3:1} Consider the process (\ref{gd_process}). Under Assumptions \ref{asmpt:noisy_oracle} and \ref{asmpt:holder_grad_function} it can be shown that after $N-1$ iterations of this process \begin{align} \label{ineq:grad_exp_before_A_substitution} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \frac{320(n+4)A_1(f_{\mu}(x_0) - f^{\ast})}{ND} + \\ & + \frac{D}{4(n+4)}\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 (A_1')^2}{4}(n+6)^3 + \frac{(A_2')^2}{\mu^2}n\right) + \nonumber \\ & + \frac{320(n+4)A_1}{D}\left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right), \nonumber \end{align} where $\mathcal{U} = (u_0,\ldots,u_{N-1})$ is a random vector composed of the i.i.d. vectors $\{u_k\}_{k=0}^{N-1}$, $A_1,A_2$ and $A_1',A_2'$ are independent pairs of constants from Lemma \ref{Lm:2:2}, and $D\in(0,1]$.
\end{lemma} \begin{proof} From Lemma \ref{Lm:2:1}, Lemma \ref{Lm:2:2} and the fact that $ab\leqslant\frac{Ca^2}{2} + \frac{b^2}{2C}$ where $C>0$, $a=\|y-x\|$ and $b=\frac{\delta}{\mu}n^{1/2}$ we obtain \begin{align*} & |f_{\mu}(y)-f_{\mu}(x) - \langle \nabla \tilde{f}_{\mu}(x,\delta),y-x\rangle| \leqslant \\ & \leqslant |f_{\mu}(y)-f_{\mu}(x) - \langle \nabla f_{\mu}(x),y-x\rangle| + |\langle \nabla \tilde{f}_{\mu}(x,\delta)-\nabla f_{\mu}(x),y-x\rangle| \leqslant \\ & \leqslant \frac{A_1}{2}\|y-x\|^2 + A_2 + \frac{\delta}{\mu}n^{1/2}\|y-x\| \leqslant \\ & \leqslant \left(\frac{A_1}{2}+\frac{C}{2}\right)\|y-x\|^2 + A_2 + \frac{\delta^2}{2C\mu^2}n \overset{C=A_1}{=} A_1\|y-x\|^2 + \left( A_2 + \frac{\delta^2}{2A_1\mu^2}n\right). \end{align*} Consider a gradient descent process (\ref{gd_process}). Substituting it and (\ref{random_noisy_grad_est}) into the last inequality and taking the expectation in $u_k$ we obtain \begin{align*} \mathbb{E}_{u_k}\left[ f_{\mu}(x_{k+1})\right] \leqslant & ~ f_{\mu}(x_k) - h_k\| \nabla \tilde{f}_{\mu}(x_k,\delta)\|^2_{\ast} + h_k^2A_1\mathbb{E}_{u_k}\left[\|g_{\mu}(x_k,u_k,\delta)\|^2_{\ast}\right] + \\ & + \left( A_2 + \frac{\delta^2}{2A_1\mu^2}n\right). \end{align*} Now let's use the fact that (from $(a+b)^2\leqslant 2a^2 + 2b^2$) \begin{align*} \| \nabla f_{\mu}(x)\|^2_{\ast} & \leqslant 2\| \nabla \tilde{f}_{\mu}(x,\delta)\|^2_{\ast} +2 \|\nabla f_{\mu}(x) - \nabla \tilde{f}_{\mu}(x,\delta)\|^2_{\ast} \leqslant \\ &\leqslant 2\| \nabla \tilde{f}_{\mu}(x,\delta)\|^2_{\ast} + 2\cdot \frac{\delta^2}{\mu^2}n \end{align*} thus \begin{align*} \mathbb{E}_{u_k}\left[ f_{\mu}(x_{k+1})\right] \leqslant & ~ f_{\mu}(x_k) - \frac{h_k}{2}\| \nabla f_{\mu}(x_k)\|^2_{\ast} + h_k^2A_1\mathbb{E}_{u_k}\left[\|g_{\mu}(x_k,u_k,\delta)\|^2_{\ast}\right] + \\ & + \left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right). 
\end{align*} Substituting result of Lemma \ref{Lm:2:5} (we rename constants from this lemma with $A_1',A_2'$ because it is the second pair of constants, and it can be chosen independently from $A_1,A_2$) we obtain \begin{align*} & \mathbb{E}_{u_k}\left[ f_{\mu}(x_{k+1})\right] \leqslant f_{\mu}(x_k) - \left(\frac{h_k}{2} - 20(n+4)h_k^2A_1\right)\| \nabla f_{\mu}(x_k)\|^2_{\ast} + \\ & + h_k^2A_1 5\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 (A_1')^2}{4}(n+6)^3 + \frac{(A_2')^2}{\mu^2}n\right) + \\ & + \left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right). \end{align*} Let's choose $h = h_k = \frac{D}{80(n+4)A_1}$ where $D\in(0,1]$ then \begin{align*} & \mathbb{E}_{u_k}\left[ f_{\mu}(x_{k+1})\right] \leqslant f_{\mu}(x_k) - \frac{D}{320(n+4)A_1}\| \nabla f_{\mu}(x_k)\|^2_{\ast} + \\ & + \frac{5D^2}{A_1(80(n+4))^2}\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 (A_1')^2}{4}(n+6)^3 + \frac{(A_2')^2}{\mu^2}n\right) + \\ & + \left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right) \end{align*} and after summing and taking expectations in $\mathcal{U}$ it becomes \begin{align*} & \mathbb{E}_{\mathcal{U}}\left[ f_{\mu}(x_{N})\right] \leqslant f_{\mu}(x_0) - \frac{D}{320(n+4)A_1}\sum\limits_{k=0}^{N-1}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right] + \\ & + \frac{5ND^2}{A_1(80(n+4))^2}\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 (A_1')^2}{4}(n+6)^3 + \frac{(A_2')^2}{\mu^2}n\right) + \\ & + N\left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right). \end{align*} Rearranging terms and using the fact that $f^{\ast}\leqslant \mathbb{E}_{\mathcal{U}}\left[ f_{\mu}(x_{N})\right]$ we finally obtain (\ref{ineq:grad_exp_before_A_substitution}). \qed \end{proof} And now we will use it to obtain the rate of convergence and noise bounds for two cases. 
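For the Euclidean setup $B = I_n$, the process \eqref{gd_process} with the estimate \eqref{random_noisy_grad_est} can be sketched as follows. The quadratic test objective, the constant step size $h$, and the uniformly bounded oracle noise are illustrative choices and are not the constants prescribed by Lemma \ref{Lm:3:1}.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, delta = 10, 1e-3, 1e-8
h, N = 0.25 / (n + 4), 3000                   # constant step size (illustrative choice)

f = lambda x: 0.5 * np.dot(x, x)              # smooth test objective: nu = 1, L_1 = 1, f* = 0
f_tilde = lambda x: f(x) + delta * rng.uniform(-1.0, 1.0)  # inexact zeroth-order oracle, error <= delta

x = rng.normal(size=n)
for _ in range(N):
    u = rng.normal(size=n)                                 # u ~ N(0, I_n); B = I_n here
    g = (f_tilde(x + mu * u) - f_tilde(x)) / mu * u        # g_mu(x, u, delta), two oracle calls per step
    x = x - h * g                                          # x_{k+1} = x_k - h * B^{-1} g_mu(x_k, u_k, delta)
```

On this test problem the iterates approach a neighborhood of the stationary point whose size is controlled by $\mu$ and $\delta/\mu$, matching the noise-floor terms in the bounds above.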
\subsection{Convergence in the sense of \texorpdfstring{$\|\nabla f(x_k)\|_{\ast}$}{nablaf(xk)}} \begin{theorem}\label{thrm:convergence_in_the_sense_of_gradient_norm} Consider the process (\ref{gd_process}) and Assumptions \ref{asmpt:noisy_oracle} and \ref{asmpt:holder_grad_function}. Suppose we want to ensure \begin{align*} \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f} \end{align*} then it can be shown that with the right choice of the smoothing parameter $\mu$ this inequality holds after \begin{align}\label{thrm:convergence_in_the_sense_of_gradient_norm:N_upper_bound} N = O\left(\frac{n^{2 + \frac{1-\nu}{2\nu}}}{\varepsilon_{\nabla f}^{\frac{1}{\nu}}}\right) \end{align} steps of the process (\ref{gd_process}) under the assumption that \begin{align}\label{thrm:convergence_in_the_sense_of_gradient_norm:delta_upper_bound} \delta < \frac{\mu^{\frac{3+\nu}{2}}}{n^{\frac{3-\nu}{4}}} = O\left(\frac{\varepsilon_{\nabla f}^{\frac{3+\nu}{4\nu}}}{n^{\frac{3+7\nu}{4\nu}}}\right). \end{align} \end{theorem} \begin{proof} We will use Lemma \ref{Lm:2:4} to replace the gradient norm with the gradient norm of the smoothed function and then use Lemma \ref{Lm:3:1} \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x)\|_{\ast}^2\right] \overset{Lem.~\ref{Lm:2:4}}{\leqslant} \min\limits_{k\in\{0, N-1\}}\left( 2\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_{\mu}(x)\|_{\ast}^2\right]+ 2\mu^{2\nu}L_{\nu}^2 n^{\nu}\right) \overset{(\ref{ineq:grad_exp_before_A_substitution})}{\leqslant} \\ & \leqslant \frac{640(n+4)A_1(f_{\mu}(x_0) - f^{\ast})}{ND} + \\ & + \frac{D}{2(n+4)}\left( \frac{4\delta^2}{\mu^2}n + \frac{4L_{\nu}^2}{(1+\nu)^2}\mu^{2\nu}n^{2+\nu} + \frac{\mu^2 (A_1')^2}{4}(n+6)^3 + \frac{(A_2')^2}{\mu^2}n\right) + \\ & + \frac{640(n+4)A_1}{D}\left( A_2 + \frac{\delta^2}{2A_1\mu^2}n + \frac{\delta^2}{\mu^2}n\right) + 2\mu^{2\nu}L_{\nu}^2 n^{\nu}. 
\end{align*} As we can see, the best achievable power of $\mu$ is $2\nu$, so we choose the remaining parameters accordingly. Consider the case $A_1 = \frac{L_{\nu}}{\mu^{1-\nu}}n^{\frac{1+\nu}{2}},A_2 = 0$ and $A_1' = \left[\frac{1}{\hat{\delta}}\right]^{\frac{1-\nu}{1+\nu}}\frac{2L_\nu}{\mu^{1-\nu}},A_2' = \hat{\delta}L_\nu\mu^{1+\nu}$ with $\hat{\delta} = (n+6)^{\frac{1+\nu}{2}}$ (this choice equalizes the powers of $n$ in the second term): \begin{align} \label{ineq:grad_exp_after_A_substitution} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right] \leqslant \frac{640(n+4)L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{ND\mu^{1-\nu}}n^{\frac{1+\nu}{2}} + \\ & + \frac{D\mu^{2\nu}}{2(n+4)}\left( \frac{4\delta^2}{\mu^{2+2\nu}}n + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + L_{\nu}^2(n+6)^{2+\nu} + L_{\nu}^2n(n+6)^{1+\nu}\right) + \nonumber \\ & + \frac{640(n+4)L_{\nu}}{D\mu^{1-\nu}}n^{\frac{1+\nu}{2}}\left( 0 + \frac{\delta^2}{2L_{\nu}n^{\frac{1+\nu}{2}}\mu^{1+\nu}}n + \frac{\delta^2}{\mu^2}n\right) + 2\mu^{2\nu}L_{\nu}^2 n^{\nu}.\nonumber \end{align} Now only terms with $\mu^{2\nu}$ and terms with $\delta^2$ times some power of $\mu$ remain. To ease the assumptions on $\delta$ we take the maximum possible $D = 1$.
The bound on $\delta$ then has the form $\delta\leqslant \frac{\mu^{\alpha}}{n^\beta}$, where $\alpha = \frac{3+\nu}{2}$ (from the third term we have $\mu^{2\alpha - (1 - \nu) - 2}$ and we want it to be $\mu^{2\nu}$) and $\beta = \frac{3-\nu}{4}$ equalizes the powers of $n$ in the second ($n^{1+\nu}$) and the third ($n^{2+\frac{1+\nu}{2}-2\beta}$) terms (therefore $\delta < \frac{\mu^{\frac{3+\nu}{2}}}{n^{\frac{3-\nu}{4}}}$): \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right] \leqslant \frac{640(n+4)L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{N\mu^{1-\nu}}n^{\frac{1+\nu}{2}} + \\ & + \frac{\mu^{2\nu}}{2(n+4)}\left( 4\mu^{1-\nu}n^{\frac{\nu - 1}{2}} + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + L_{\nu}^2(n+6)^{2+\nu} + L_{\nu}^2n(n+6)^{1+\nu}\right) + \\ & + 320(n+4)\mu^{2\nu}\left( \mu^{1-\nu}n^{\frac{\nu - 1}{2}} + 2L_{\nu}n^{\nu}\right) + 2\mu^{2\nu}L_{\nu}^2 n^{\nu} \end{align*} (note that $\mu^{1-\nu}\leqslant 1$, because $\mu<1$ as the step of the gradient estimation, so below we replace $\mu^{1-\nu}$ by $1$). Consider $\mu\leqslant\mu_0 = \left( M \cdot n^{1+\nu} \right)^{-\frac{1}{2\nu}}\varepsilon_{\nabla f}^{\frac{1}{2\nu}}$, where \begin{align*} M \cdot n^{1+\nu} = & \frac{4n^{\frac{\nu - 1}{2}} + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + 2L_{\nu}^2(n+3)(n+6)^{1+\nu}}{4(n+4)} + \\ & + 160(n+4)\left( n^{\frac{\nu - 1}{2}} + 2L_{\nu}n^{\nu} \right) + \mu^{2\nu}L_{\nu}^2 n^{\nu} \end{align*} (thus $M = O(1+L_{\nu}+L_{\nu}^2)$), and substituting it we obtain \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right]\leqslant \frac{640(n+4)n^{(1-\nu)\cdot\frac{1+\nu}{2\nu}}L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{N\cdot M^{-\frac{1-\nu}{2\nu}}\varepsilon_{\nabla f}^{\frac{2-2\nu}{2\nu}}}n^{\frac{1+\nu}{2}} + \frac{\varepsilon_{\nabla f}}{2}.
\end{align*} This means that we need \begin{align*} N = O\left(\frac{n^{1 + (1-\nu)\cdot\frac{1+\nu}{2\nu} + \frac{1+\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{1}{\nu}}}\right) = O\left(\frac{n^{2 + \frac{1-\nu}{2\nu}}}{\varepsilon_{\nabla f}^{\frac{1}{\nu}}}\right) \end{align*} steps to ensure $\min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}$. It only remains to substitute $\mu$ into the upper bound for $\delta$ to obtain (\ref{thrm:convergence_in_the_sense_of_gradient_norm:delta_upper_bound}). \qed \end{proof} In the case $\nu = 1$, the article \cite{NesterovSpokoiny2015} (Section 7) shows that the upper bound for the expected number of steps is $N = O\left(\frac{n}{\varepsilon^2}\right)$, where $\varepsilon^2 = \varepsilon_{\nabla f}$, while we show $N = O\left(\frac{n^{2}}{\varepsilon_{\nabla f}}\right)$, which is $n$ times worse. This can be improved quite easily using the fact that in this case \begin{align*} \|\nabla f_{\mu}(y)-\nabla f_{\mu}(x)\|_{\ast} = & \left\| \frac{1}{\kappa} \int\limits_{E}\left( \nabla f(y+\mu u) - \nabla f(x+\mu u) \right) e^{-\tfrac{1}{2}\|u\|^2}du \right\|_{\ast} \leqslant \\ \leqslant & \frac{1}{\kappa} \int\limits_{E}L_{1}\|y-x\| e^{-\tfrac{1}{2}\|u\|^2}du = L_{1}\|y-x\|, \end{align*} and this inequality can be used to set $A_1 = \frac{L_1}{2}$ and $A_2 = 0$ in (\ref{ineq:grad_exp_before_A_substitution}), so that the power of $n$ in the first term decreases by one; repeating the subsequent steps we obtain $N = O\left(\frac{n}{\varepsilon_{\nabla f}}\right)$. This, however, cannot be easily extended to $\nu < 1$ because of the $\|x-y\|^{\nu}$ term (see the proof of Lemma \ref{Lm:2:2} for details).
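To make the role of the inexact oracle concrete, the following minimal Python sketch implements a two-point Gaussian smoothing gradient estimator of the Nesterov--Spokoiny type with additive noise bounded by $\delta$, which is one natural reading of the oracle behind process (\ref{gd_process}); the quadratic test function, the noise model and the sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of a noisy zeroth-order gradient estimator (an assumption about the
# oracle in process (gd_process)): g = (f(x + mu*u) - f(x)) / mu * u with
# u ~ N(0, I) and additive oracle noise bounded by delta.

rng = np.random.default_rng(0)

def noisy_f(x, delta, rng):
    """Inexact zeroth-order oracle: f(x) = ||x||^2 plus noise |xi| <= delta."""
    return np.dot(x, x) + delta * rng.uniform(-1.0, 1.0)

def gaussian_grad_estimate(x, mu, delta, rng, n_samples=20000):
    """Monte Carlo average of the two-point Gaussian smoothing estimator."""
    g = np.zeros(x.size)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (noisy_f(x + mu * u, delta, rng) - noisy_f(x, delta, rng)) / mu * u
    return g / n_samples

x = np.array([1.0, -1.0, 0.5])
g = gaussian_grad_estimate(x, mu=0.01, delta=0.0, rng=rng)
# For f(x) = ||x||^2 the smoothed gradient is exactly grad f_mu(x) = 2x,
# so the average should be close to (2, -2, 1).
```

With a nonzero $\delta$ the same sketch exhibits the $\delta^2/\mu^2$ variance inflation that drives the noise bounds above.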
\subsection{Convergence in the sense of \texorpdfstring{$\|\nabla f_{\mu}(x_k)\|_{\ast}$}{nablafmu(xk)}} The main drawback of the previous result is that it does not cover $\nu = 0$ (which is natural, since we cannot ensure convergence of the gradient norm when the gradient is only bounded), and the convergence becomes infinitely slow as $\nu\to 0$. We now consider convergence in the sense of the smoothed-function gradient norm while keeping the functional gap (Lemma \ref{Lm:2:3}) small. \begin{theorem}\label{thrm:convergence_in_the_sense_of_smoothed_gradient_norm} Consider the process (\ref{gd_process}) and Assumptions \ref{asmpt:noisy_oracle} and \ref{asmpt:holder_grad_function}. Suppose we want to ensure \begin{align*} |f_{\mu}(x)-f(x)|\leqslant \frac{L_{\nu}}{1+\nu}\mu^{1+\nu}n^{\frac{1+\nu}{2}} &\leqslant \varepsilon_{f}, \\ \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]&\leqslant \varepsilon_{\nabla f}, \end{align*} where $\varepsilon_{f} \sim \varepsilon_{\nabla f}^{\frac{1+\nu}{2\nu+\alpha}}$. Then, with the right choice of the smoothing parameter $\mu$, these inequalities hold after \begin{align}\label{thrm:convergence_in_the_sense_of_smoothed_gradient_norm:N_upper_bound} N = O\left(\frac{n^{\frac{7-3\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right) \end{align} steps of the process (\ref{gd_process}), under the assumption that \begin{align}\label{thrm:convergence_in_the_sense_of_smoothed_gradient_norm:delta_upper_bound} \delta < \frac{\mu^{\frac{5-\nu}{2}}}{n^{\frac{3-\nu}{4}}} = O\left(\frac{\varepsilon_{\nabla f}^{\frac{5-\nu}{2(1+\nu)}}}{n^{\frac{13-3\nu}{4}}}\right).
\end{align} \end{theorem} \begin{proof} Substituting the same $A_1,A_2$ and $A_1',A_2'$ as in the previous proof into (\ref{ineq:grad_exp_before_A_substitution}), we obtain almost the same bound as (\ref{ineq:grad_exp_after_A_substitution}), but without the fourth term and with smaller constants: \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right] \leqslant \frac{320(n+4)L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{ND\mu^{1-\nu}}n^{\frac{1+\nu}{2}} + \\ & + \frac{D\mu^{2\nu}}{4(n+4)}\left( \frac{4\delta^2}{\mu^{2+2\nu}}n + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + L_{\nu}^2(n+6)^{2+\nu} + L_{\nu}^2n(n+6)^{1+\nu}\right) + \nonumber \\ & + \frac{320(n+4)L_{\nu}}{D\mu^{1-\nu}}n^{\frac{1+\nu}{2}}\left( \frac{\delta^2}{2L_{\nu}n^{\frac{1+\nu}{2}}\mu^{1+\nu}}n + \frac{\delta^2}{\mu^2}n\right).\nonumber \end{align*} The difference now is that we are not restricted to $\varepsilon_{\nabla f} \sim \mu^{2\nu}$, because we can select $D$ to balance the powers of $\mu$ (there is no fourth term with its invariable $\mu^{2\nu}$). Let us first consider the case $\delta = 0$. Suppose that $D=\mu^{\alpha}$; then \begin{align*} \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f(x_k)\|_{\ast}^2\right] \leqslant O\left(\mu^{-(1-\nu+\alpha)}\right) + O(\mu^{2\nu+\alpha}) + 0, \end{align*} thus $\mu^{2\nu+\alpha}\sim \varepsilon_{\nabla f}$ (because, as in the previous proof, we want to bound the second and third terms by $\varepsilon_{\nabla f}/2$) and from Lemma \ref{Lm:2:3} we have \begin{align*} \varepsilon_{f}\geqslant \frac{L_{\nu}}{1+\nu}\mu^{1+\nu}n^{\frac{1+\nu}{2}} \sim \varepsilon_{\nabla f}^{\frac{1+\nu}{2\nu+\alpha}}. \end{align*} Now, in the previous subsection we had $\mu\sim\varepsilon_{\nabla f}^{\frac{1}{2}}$ (for the case $\nu=1$), so substituting it into Lemma \ref{Lm:2:3} we would obtain $\varepsilon_{\nabla f}\sim \varepsilon_{f}$.
So let us take this to be our case; then $\frac{1+\nu}{2\nu+\alpha} = 1$, which gives $\alpha = 1-\nu$ (this reasoning combines the results of this and the previous subsections in the case $\nu = 1$). Now let us set $D = \mu^{1-\nu} < 1$ and $\delta < \frac{\mu^{\frac{5-\nu}{2}}}{n^{\frac{3-\nu}{4}}}$ (the power of $n$ is chosen similarly to the previous proof); then \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \frac{320(n+4)L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{N\mu^{2-2\nu}}n^{\frac{1+\nu}{2}} + \\ & + \frac{\mu^{1+\nu}}{4(n+4)}\left( 4\mu^{3-3\nu}n^{\frac{\nu - 1}{2}} + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + 2L_{\nu}^2(n+3)(n+6)^{1+\nu}\right) + \\ & + 160(n+4)\mu^{1+\nu}\left( \mu^{1-\nu}n^{\frac{\nu - 1}{2}} + 2L_{\nu}n^{\nu}\right). \end{align*} Consider $\mu\leqslant\mu_0 = \left( M \cdot n^{1+\nu} \right)^{-\frac{1}{1+\nu}}\varepsilon_{\nabla f}^{\frac{1}{1+\nu}} = \frac{1}{n\cdot M^{\frac{1}{1+\nu}}}\varepsilon_{\nabla f}^{\frac{1}{1+\nu}}$, where \begin{align*} M\cdot n^{1+\nu} = & \frac{ 4n^{\frac{\nu - 1}{2}} + \frac{4L_{\nu}^2}{(1+\nu)^2}n^{2+\nu} + 2L_{\nu}^2(n+3)(n+6)^{1+\nu}}{8(n+4)} + \\ & + 80(n+4)\left( n^{\frac{\nu - 1}{2}} + 2L_{\nu}n^{\nu}\right) \end{align*} and substituting it we obtain \begin{align*} & \min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \frac{320(n+4)n^{2-2\nu}L_{\nu}(f_{\mu}(x_0) - f^{\ast})}{N\cdot M^{-\frac{2-2\nu}{1+\nu}}\varepsilon_{\nabla f}^{\frac{2-2\nu}{1+\nu}}}n^{\frac{1+\nu}{2}} + \frac{\varepsilon_{\nabla f}}{2}.
\end{align*} This means that we need \begin{align} N = O\left(\frac{n^{1 +(2-2\nu) + \frac{1+\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right) = O\left(\frac{n^{\frac{7-3\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right) \end{align} steps to ensure $\min\limits_{k\in\{0, N-1\}}\mathbb{E}_{\mathcal{U}}\left[\|\nabla f_\mu(x_k)\|_{\ast}^2\right]\leqslant \varepsilon_{\nabla f}$. Substituting $\mu = \mu_0$ into Lemma \ref{Lm:2:3} we obtain \begin{align*} |f_{\mu}(x)-f(x)|\leqslant \frac{L_{\nu}}{1+\nu}\mu^{1+\nu}n^{\frac{1+\nu}{2}} = \Theta\left( \frac{\varepsilon_{\nabla f}}{n^{\frac{1+\nu}{2}}}\right). \end{align*} Thus we ensure $|f_{\mu}(x)-f(x)|\leqslant \varepsilon_{f}$ with $\varepsilon_f = \Theta\left( \frac{\varepsilon_{\nabla f}}{n^{\frac{1+\nu}{2}}}\right)$. The bound (\ref{thrm:convergence_in_the_sense_of_smoothed_gradient_norm:delta_upper_bound}) can be obtained in the same way as in the previous theorem. \qed \end{proof} In the case $\nu = 0$, \cite{NesterovSpokoiny2015} shows that $N = O\left(\frac{n^3}{\varepsilon_{f}\varepsilon_{\nabla f}^2}\right) \overset{\varepsilon_f = \Theta\left( \frac{\varepsilon_{\nabla f}}{n^{1/2}}\right)}{=} O\left(\frac{n^{\frac{7}{2}}}{\varepsilon_{\nabla f}^3}\right)$, which coincides with our result. In the case $\nu = 1$ this result coincides with that of the previous theorem, and we can repeat the reasoning given there, improving the iteration complexity to be proportional to $n$ rather than $n^2$. We have not discussed the weakest possible bound on $\delta$ under which convergence can still be proved. It is easy to show that removing the powers of $n$ from these upper bounds on $\delta$ does not break convergence; however, it increases the powers of $n$ in the bounds on $N$.
For example, at the end of the proof of Theorem \ref{thrm:convergence_in_the_sense_of_smoothed_gradient_norm} we can choose $\mu_0 = \left( M \cdot n^{\frac{5+\nu}{2}} \right)^{-\frac{1}{1+\nu}}\varepsilon_{\nabla f}^{\frac{1}{1+\nu}}$ (this is the biggest power of $n$ there), and then, repeating the steps, we obtain \begin{align*} N = O\left(\frac{n^{1 +\frac{5+\nu}{2}\frac{1}{1+\nu} + \frac{1+\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right). \end{align*} Changing the powers of $\varepsilon_{\nabla f}$ in the noise bounds is harder, though, and can be a topic for further study. \section{Conclusion} \label{section:conclusion} In this paper we extend the results of \cite{NesterovSpokoiny2015} to non-convex minimization problems with H\"older-continuous gradients and a noisy zeroth-order oracle. Table \ref{convergence-table} below summarizes our results for two types of quality measures: the norm of the gradient of the smoothed version $f_{\mu}$ of the objective and the norm of the gradient of the original objective function $f$. We provide an upper bound on the necessary number of iterations $N$ and an upper bound on the oracle inexactness $\delta$ which can be tolerated while still achieving the desired accuracy in terms of the corresponding criterion. We also show that in the case $\nu = 1$ the upper bounds for $N$ can be improved by reducing the exponent of $n$ to 1 (second part of Table \ref{convergence-table}). Interestingly, in the case $\nu = 1$ the upper bound for the noise level $\delta$ is linear in $\varepsilon_{\nabla f}$, and the bounds on $N$ and $\delta$ for both $\|\nabla f(x_k)\|_{\ast}$ and $\|\nabla f_{\mu}(x_k)\|_{\ast}$ coincide.
In the future it would be interesting to explore in more detail the trade-off between the oracle noise level $\delta$ and the iteration number $N$ in terms of their dependence on $n$, which we briefly discussed after the proofs of Theorems \ref{thrm:convergence_in_the_sense_of_gradient_norm} and \ref{thrm:convergence_in_the_sense_of_smoothed_gradient_norm}. Another interesting question for future research is whether it is possible to obtain a bound for $N$ which depends continuously on $\nu$ and for $\nu=1$ gives the same bound as in \cite{NesterovSpokoiny2015}. \begin{table}[h!] \caption{Convergence properties for the different convergence types} \label{convergence-table} \begin{tabular}{ccccc} \toprule Convergence type & $N$ upper bound & $\delta$ upper bound & $|f_{\mu}(x)-f(x)|$ & Possible $\nu$ \\ \midrule $\mathbb{E}\|\nabla f(x_k)\|_{\ast}^2 \leqslant \varepsilon_{\nabla f}$ & $O\left(\frac{n^{2 + \frac{1-\nu}{2\nu}}}{\varepsilon_{\nabla f}^{\frac{1}{\nu}}}\right)$ & $O\left(\frac{\varepsilon_{\nabla f}^{\frac{3+\nu}{4\nu}}}{n^{\frac{3+7\nu}{4\nu}}}\right)$ & ~--- & $\nu\in(0,1]$ \\ $\mathbb{E}\|\nabla f_{\mu}(x_k)\|_{\ast}^2 \leqslant \varepsilon_{\nabla f}$ & $O\left(\frac{n^{\frac{7-3\nu}{2}}}{\varepsilon_{\nabla f}^{\frac{3-\nu}{1+\nu}}}\right)$ & $O\left(\frac{\varepsilon_{\nabla f}^{\frac{5-\nu}{2(1+\nu)}}}{n^{\frac{13-3\nu}{4}}}\right)$ & $\Theta\left( \frac{\varepsilon_{\nabla f}}{n^{\frac{1+\nu}{2}}}\right)$ & $\nu\in[0,1]$ \\ \midrule $\mathbb{E}\|\nabla f(x_k)\|_{\ast}^2 \leqslant \varepsilon_{\nabla f}$ & $O\left(\frac{n}{\varepsilon_{\nabla f}}\right)$ & $O\left(\frac{\varepsilon_{\nabla f}}{n^{\nicefrac{5}{2}}}\right)$ & ~--- & $\nu=1$ \\ $\mathbb{E}\|\nabla f_{\mu}(x_k)\|_{\ast}^2 \leqslant \varepsilon_{\nabla f}$ & $O\left(\frac{n}{\varepsilon_{\nabla f}}\right)$ & $O\left(\frac{\varepsilon_{\nabla f}}{n^{\nicefrac{5}{2}}}\right)$ & $\Theta\left( \frac{\varepsilon_{\nabla f}}{n}\right)$ & $\nu = 1$ \\ \bottomrule \end{tabular}
\end{table} \begin{acknowledgements} The authors are grateful to K. Scheinberg and A. Beznosikov for several discussions on derivative-free methods. \end{acknowledgements} \bibliographystyle{spmpsci}
\section{Introduction} The gravitational field of an astrophysical object can deflect the light path of a background source star, resulting in the formation of two images on either side of the lens \cite{Einstein36}. On the Galactic scale, the angular separation of the images is of the order of a milli-arcsecond, too small to be resolved by ground-based telescopes. Instead, the overall light from the images received by the observer is magnified in comparison to an un-lensed source. This phenomenon is called gravitational microlensing and has been proposed as an astrophysical tool to probe dark objects in the Galactic disk. In recent years it has also been used to discover extrasolar planets, to probe the stellar atmospheres of distant stars, etc. \cite{Leibes,ChangRefesdal,Paczynski86}. A detailed review on these topics can be found in Gaudi (2012). An important problem in photometric observations of microlensing events is the degeneracy between the lens parameters, such as the distance, mass and velocity. We can partially resolve this degeneracy by using higher-order effects, such as the parallax and finite-size effects, which can slightly change simple microlensing light curves. However, these effects cannot be measured in all microlensing events. There are other observables, such as (i) the centroid shift of the images and (ii) the polarization variation of the images during the lensing, that can break this degeneracy. Here, we assume that during photometric observations of microlensing events, polarimetric and astrometric observations can also be carried out. {\bf Astrometric observation}: In gravitational microlensing, the light centroid of the images deviates from the position of the source star, and for the case of a point-mass lens the centroid of the images traces an elliptical shape during the lensing \cite{Walker,Miyamoto,Hog,Jeong}.
By measuring the light centroid shift of the images with high-resolution ground-based or space-based telescopes, accompanied by measurements of the parallax effect, the lens mass can be identified \cite{Paczynski96,Miralda96}. This method is also applicable to studying the structure of the Milky Way, given a sufficient number of microlensing events \cite{Rahvar2005}. Also, the degeneracy in close and wide caustic-crossing binary microlensing events can be removed by astrometric observations \cite{Dominik,Gould,Chung}. \\ {\bf Polarimetric observation}: Another feature of gravitational microlensing is the time-dependent variation of the polarization during the lensing. The scattering of photons by electrons in the atmosphere of a star produces a local polarization at different positions over the stellar surface, and as a result of the circular symmetry of the star the total polarization is zero \cite{chandrasekhar60,sob}. During gravitational microlensing, the circular symmetry of the images is broken, and the total polarization of the source star is non-zero and changes with time \cite{schneider87,simmons95a,Bogdanov96}. Measuring the polarization during microlensing events helps us to evaluate the finite-source effect, the Einstein radius and the limb-darkening parameters of the source star \cite{yoshida06,Agol96,schneider87}. In this work we start by investigating a possible relation between the polarization and centroid shift vectors. We find an orthogonality relation between them in simple as well as binary lensing. However, near a cusp on the caustic lines of a binary lens, the polarization and centroid shift vectors are not normal to each other, except on the symmetry axis of the cusp. This effect enables us to recover the source trajectory relative to the caustic line. As a result of this orthogonality relation, polarimetric measurements can resolve the source trajectory degeneracy, i.e.
the $u_{0}$ degeneracy, in the same way as astrometric observations. Finally, we study the effects of source spots as a perturbation and show that they can break this orthogonality relation. Studying the time variation of the polarization can provide a unique tool to distinguish and study source anomalies, such as spots on the surface of source stars. The layout of the paper is as follows. In section (\ref{ortho}) we introduce the polarization and astrometry in gravitational lensing and demonstrate that there is an orthogonality relation between them in simple microlensing events. We extend this discussion to binary lensing in section (\ref{twolens}) and discuss how this correlation helps to resolve the degeneracy problem. In section (\ref{spot}), we investigate the effect of source spots on the polarimetric observation. Finally, we take the largest sample of the OGLE microlensing data and estimate the number of events with observable polarimetric signals. We conclude in section (\ref{result}). \section{Polarimetric and Astrometric shift in gravitational microlensing} \label{ortho} In this section we first review the polarization during microlensing events. Then we study the astrometric shift and its orthogonality with the polarization for an extended source star. \subsection{Polarization during gravitational microlensing} Chandrasekhar (1960) has shown that photons can be scattered by electrons (Thomson scattering) in the atmospheres of hot stars, which produces a local linear polarization. The amount of polarization increases from the centre to the limb, and at a given wavelength at each point it depends on the cosine of the angle between the line of sight and the normal vector to the stellar surface.
Other mechanisms, such as photon scattering on atomic and molecular species and on neutral hydrogen (Rayleigh scattering) or on dust grains, are also responsible for producing a local polarization over late-type main-sequence and cool giant stars \cite{Ingrosso}. Due to the circular symmetry of the stellar surface, the total light of a distant star is unpolarized. This circular symmetry can be broken by spots on the stellar surface, by magnetic fields or by the lensing effect, and as a result we expect to detect a non-zero polarization in these cases. For example, the emission lines from T-Tauri stars (pre-main-sequence stars) show a linear polarization due to light scattering by dust grains in the circumstellar disk around the central stars. This polarization changes with time due to variations in the configuration of the dust pattern \cite{Drissen1989,Akitaya2009}. Schneider \& Wagoner (1987) noticed that there is a net, time-dependent polarization for a lensed supernova. They estimated the polarization degree near point and critical-line singularities. The existence of a net polarization during microlensing events due to circular symmetry breaking was noticed by Simmons et al. (1995a,b) and Bogdanov et al. (1996). Polarization in binary microlensing events was numerically calculated by Agol (1996). He noticed that in a binary microlensing event the net polarization is larger than that of a single lens and reaches one per cent during caustic crossing. A semi-analytical formula for the polarization degree induced by a single microlens was derived by Yoshida (2006). Recently, Ingrosso et al. (2012) evaluated the expected polarization signals for a set of reported high-magnification single-lens and exo-planetary microlensing events towards the Galactic bulge. They showed that the polarization reaches $0.04$ per cent for late-type stars and rises to a few per cent for cool giants.
\begin{figure} \begin{center} \psfig{file=coordinate.eps,angle=270,width=11.cm,clip=0} \caption{ Demonstration of a projected source surface on the lens plane (red circle). In this figure, the black star and the black spot represent the lens position and a typical point on the source surface, respectively. The source star is projected on the sky plane. The directions $\boldsymbol{n}$ and $\boldsymbol{Z}$ refer to the propagation direction and the line of sight towards the observer. $u_{cm}$ connects the lens position to the source centre, $\phi$ is the azimuthal angle between the lens-source connection line and the line from the origin to each projected element over the source surface, and $\theta$ is the projection angle, i.e. $\rho=\sin \theta$. $\boldsymbol{l_l}$ and $\boldsymbol{l_r}$ are two unit vectors normal to the direction of $\boldsymbol{Z}$, as shown in the figure.}\label{coordinate} \end{center} \end{figure} In this part, we review how to calculate the net polarization of a source star in microlensing events. To describe the polarized light, we use $S_{I}$, $S_{Q}$, $S_{U}$ and $S_{V}$ as the Stokes parameters. These parameters represent the total intensity, the two components of linear polarization and the circular polarization over the source surface, respectively \cite{Tinbergen96}. For a stellar atmosphere, we set the circular polarization to zero, $S_{V}=0$.
The linear polarization degree $(P)$ and the angle of polarization $(\theta_{p})$ as functions of the total Stokes parameters are given by \cite{chandrasekhar60}: \begin{eqnarray} P&=&\frac{\sqrt{S_{Q}^{2}+S_{U}^{2}}}{S_{I}},\nonumber\\ \theta_{p}&=&\frac{1}{2}\tan^{-1}{\frac{S_{U}}{S_{Q}}}, \end{eqnarray} where the un-normalized Stokes parameters are defined as follows: \begin{eqnarray} S_{Q}&\equiv&\int_{\mathcal{S}}d^{2}\mathcal{S}<E_{X}E_{X}-E_{Y}E_{Y}>= \nonumber\\ &=&-\int d^{2}\mathcal{S}~I_{-}(\mu)\cos(2\phi),\nonumber\\ S_{U}&\equiv&-\int_{\mathcal{S}}d^{2}\mathcal{S}<E_{X}E_{Y}+E_{Y}E_{X}>=\nonumber\\ &=&\int d^{2}\mathcal{S}~I_{-}(\mu)\sin(2\phi),\nonumber\\ S_{I}&\equiv&\int_{\mathcal{S}}d^{2}\mathcal{S}<E_{X}E_{X}+E_{Y}E_{Y}>=\int d^{2}\mathcal{S}~I(\mu), \end{eqnarray} where $\mathcal{S}$ refers to the source area projected on the lens plane, $\phi$ is the azimuthal angle between the lens-source connection line and the line from the origin to each element over the source surface, $I(\mu)=I_{l}(\mu)+I_{r}(\mu)$, $I_{-}(\mu)=I_{r}(\mu)-I_{l}(\mu)$, and $<>$ refers to time averaging (see Figure \ref{coordinate}). In the following, we adopt the coordinate set used by Chandrasekhar. Let us define $\boldsymbol{n}$ as the normal to the source surface at each point, which is the propagation direction, and $\boldsymbol{Z}$ as the direction towards the observer. We define $\boldsymbol{r}$ and $\boldsymbol{l}$ so that $I_{l}(\mu)$ is the intensity emitted by the stellar atmosphere in the plane containing the line of sight and the normal to the source surface at that point, i.e. $(\boldsymbol{n})$, and $I_{r}(\mu)$ is the intensity emitted in the direction normal to that plane, where $\boldsymbol{r} \times \boldsymbol{l}= \boldsymbol{Z}$, $\mu=\cos(\theta)=\sqrt{1- \rho^{2}}$ and $\rho$ is the distance from the centre to each projected element over the source surface, normalized to the projected radius of the star on the lens plane.
Indeed, $\boldsymbol{l}$ represents the radial coordinate and $\boldsymbol{r}$ the tangential one, both perpendicular to the line of sight. We consider a fixed Cartesian coordinate system at the star centre, where the $\boldsymbol{X}$-axis points away from the lens position, the $\boldsymbol{Y}$-axis is normal to it, projected in the sky plane, and the $\boldsymbol{Z}$-axis is along the line of sight towards the observer. The projected source surface on the lens plane (red circle) and the specified axes are shown in Figure (\ref{coordinate}). \begin{figure} \begin{center} \psfig{file=final_pol.eps,angle=270,width=8.cm,clip=} \caption{The polarization degree versus $u_{cm}/\rho_{\star}$ for different values of the star size and impact parameter. The polarization rises to a maximum value at $u_{cm}/\rho_{\star}\approx 0.96$ (Schneider \& Wagoner 1987).} \label{figpol} \end{center} \end{figure} In simple microlensing events, the Stokes parameters obtained by integrating over the source surface are given by: \begin{eqnarray}\label{tsparam} \left( \begin{array}{c} S_{Q}\\ S_{U}\end{array}\right)&=&\rho^2_{\star}\int_{0}^1\rho~d\rho\int_{-\pi}^{\pi}d\phi I_{-}(\mu) A(u) \left( \begin{array}{c} -\cos 2\phi \\ \sin 2\phi \end{array} \right),\nonumber\\ S_{I}&=&~\rho^2_{\star}\int_{0}^{1}\rho d\rho\int_{-\pi}^{\pi}d\phi I(\mu)~ A(u), \end{eqnarray} where $\rho_\star$ is the projected radius of the star on the lens plane, normalized to the Einstein radius, $u=(u_{cm}^2+ \rho^2 \rho^{2}_{\star}+2 \rho \rho_{\star} u_{cm} \cos\phi)^{1/2}$ is the distance of each projected element over the source surface from the lens position, $u_{cm}$ is the impact parameter of the source centre, and the magnification factor for simple microlensing is $$A(u)=\frac{u^2+2}{u \sqrt{u^2+4}}.$$ Assuming electron scattering in the spherically symmetric atmosphere of an early-type star, Chandrasekhar (1960) evaluated $I(\mu)$ and $I_{-}(\mu)$ as follows \begin{eqnarray} I(\mu)&=&I_{0}(1-c_{1}(1-\mu)),\nonumber\\
I_{-}(\mu)&=&I_{0}c_{2}(1-\mu), \end{eqnarray} where $c_{1}=0.64$, $c_{2}=0.032$ and $I_{0}$ is the total intensity emitted towards the line of sight \cite{schneider87}. For a point-mass lens, the integrals over the azimuthal angle $\phi$ in the total Stokes parameters, i.e. equation (\ref{tsparam}), reduce to a combination of complete elliptic integrals \cite{yoshida06}. We calculate the Stokes parameters by numerical integration. Figure (\ref{figpol}) shows the dependence of the polarization degree on the source size $\rho_{\star}$ and on $u_{cm}/\rho_{\star}$. The polarization has its maximum value at $u_{cm}/\rho_{\star}\approx0.96$ \cite{schneider87}. If $u_{0}<\rho_{\star}$ there are two times at which $u_{cm}=0.96 \rho_{\star}$, so the time profile of the polarization has two peaks (transit case), whereas if $u_{0}\geq \rho_{\star}$ the polarization profile has a single peak at the closest approach (bypass case) \cite{simmonsb}. When the finite-size effect of the source star is small, the probability of detecting polarization is also small, since the chance that the lens approaches the source with an impact parameter comparable to the source size is small. Hence, detecting the polarization effect favours microlensing events with a large finite-size parameter (i.e. giant stars). According to equation (\ref{tsparam}), the total Stokes parameter $S_{U}$ for a limb-darkened, circular source in simple Paczy\'nski microlensing events is zero and $S_{Q}$ is negative. Hence, the net polarization is normal to the lens-source connection line. In Figure (\ref{figm}), we show the polarization map around a point-mass lens located at the centre of the plane with solid black lines. Note that the length of the lines is proportional to the polarization degree. The orientation (i.e. $\theta_{p}$) is given in terms of its angle with respect to the $x$-axis specified in Figure (\ref{coordinate}).
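The numerical integration of equation (\ref{tsparam}) can be sketched in a few lines of Python; the Chandrasekhar coefficients $c_1=0.64$ and $c_2=0.032$ are taken from the text, while the grid resolution and the tested values of $u_{cm}$ are illustrative assumptions.

```python
import numpy as np

# Numerical evaluation of the total Stokes parameters of equation (tsparam)
# for a point-mass lens, using a midpoint rule over the projected source disk.

def polarization(u_cm, rho_star, n_r=200, n_phi=400):
    """Polarization degree P = sqrt(S_Q^2 + S_U^2) / S_I (I_0 cancels out)."""
    rho = (np.arange(n_r) + 0.5) / n_r                         # midpoint rule in rho
    phi = -np.pi + (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    R, PHI = np.meshgrid(rho, phi, indexing="ij")
    mu = np.sqrt(1.0 - R**2)
    u = np.sqrt(u_cm**2 + (R * rho_star)**2 + 2 * R * rho_star * u_cm * np.cos(PHI))
    A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))                   # point-lens magnification
    I_tot = 1.0 - 0.64 * (1.0 - mu)                            # I(mu) / I_0
    I_minus = 0.032 * (1.0 - mu)                               # I_-(mu) / I_0
    w = R                                                      # area element rho drho dphi
    S_Q = np.sum(-I_minus * A * np.cos(2 * PHI) * w)
    S_U = np.sum(I_minus * A * np.sin(2 * PHI) * w)
    S_I = np.sum(I_tot * A * w)
    return np.hypot(S_Q, S_U) / S_I

rho_star = 0.5
P_peak = polarization(0.96 * rho_star, rho_star)
# The polarization should peak near u_cm / rho_star ~ 0.96, as in Figure (figpol).
```

Comparing `P_peak` with the values at smaller and larger $u_{cm}$ reproduces the rise-and-fall behaviour shown in Figure (\ref{figpol}).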
The arrows represent the centroid shift of the images, which will be discussed in the next subsection. \subsection{Astrometric centroid shift} \begin{figure} \begin{center} \psfig{file=onelens.eps,angle=270,width=9.cm,clip=0} \caption{The astrometric (red dashed vectors) and polarimetric (black solid lines) maps around a point-mass lens located at the centre of the plane. We set $\rho_{\star}=0.5$. Note that the sizes of the centroid shifts and polarizations are normalized by arbitrary factors. For $u\simeq\rho_{\star}$, the centroid shift vector tends to zero while the polarization signal is maximal.} \label{figm} \end{center} \end{figure} In gravitational microlensing events the light centroid of the source star is shifted with respect to its real position. For a point-mass lens the astrometric shift of the light centroid is given by: \begin{eqnarray} \boldsymbol{\delta\theta}_{c}=\frac{\mu_{1}\boldsymbol{\theta_{1}}+ \mu_{2}\boldsymbol{\theta_{2}}}{\mu_{1}+\mu_{2}}-\boldsymbol{u}\theta_{E} =\frac{\theta_{E}}{u^{2}+2}\boldsymbol{u}, \end{eqnarray} where $\theta_{E}$ is the angular size of the Einstein ring. The astrometric shift traces an ellipse, the so-called \emph{astrometric ellipse}, while the source star traverses a straight line with respect to the lens plane \cite{Walker,Jeong}. The axis ratio of this ellipse is a function of the impact parameter: for large impact parameters the ellipse tends to a circle, whose radius decreases with increasing impact parameter, while for small values it degenerates into a straight line \cite{Walker}. In Figure (\ref{figm}), we plot a vector map of the normalized astrometric centroid shift at each position of the lens plane, which is symmetric around the position of the point-mass lens. All the vectors in this case are radial and point outwards.
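The astrometric ellipse can be verified directly: the sketch below checks that the point-lens centroid shift along a straight source track $\boldsymbol{u}(t)=(t,u_0)$ lies on an ellipse, and that the maximum shift $\theta_E/(2\sqrt{2})$ occurs at $u=\sqrt{2}$. The explicit centre and semi-axes are obtained here by eliminating $t$ from $\boldsymbol{\delta\theta}_c=\boldsymbol{u}/(u^2+2)$ and are stated for illustration.

```python
import numpy as np

# Point-lens astrometric ellipse: along a straight source track u(t) = (t, u0),
# the normalized shift delta = u / (u^2 + 2) (in units of theta_E) lies on an
# ellipse centred at (0, u0 / (2 (u0^2 + 2))) with semi-axes
# a = 1 / (2 sqrt(u0^2 + 2)) and b = u0 / (2 (u0^2 + 2)).
u0 = 0.7
t = np.linspace(-30.0, 30.0, 4001)
u2 = t**2 + u0**2
dx = t / (u2 + 2.0)
dy = u0 / (u2 + 2.0)

a = 1.0 / (2.0 * np.sqrt(u0**2 + 2.0))
b = u0 / (2.0 * (u0**2 + 2.0))
residual = (dx / a)**2 + ((dy - b) / b)**2 - 1.0   # the centre lies at (0, b)

# The shift magnitude |delta| = u / (u^2 + 2) peaks at u = sqrt(2),
# where it equals 1 / (2 sqrt(2)) ~ 0.354 theta_E.
max_shift = np.max(np.sqrt(dx**2 + dy**2))
```

Since the centre height equals the vertical semi-axis, the ellipse passes through the origin, which is approached as the source recedes to $u\to\infty$.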
The astrometric centroid of an extended source is given by (Witt \& Mao 1998) \begin{eqnarray}\label{astro} \boldsymbol{\theta}^{fs}_{c}=\frac{1}{\pi \rho^{2}_{\star} \bar{I}\mu^{fs}}\sum_{i}\int_{\mathcal{S}}d^{2}\mathcal{S}~ I(\boldsymbol{u}) \boldsymbol{\theta}_{i}\mu_{i}, \end{eqnarray} where $\bar{I}$ is the mean surface brightness of the source, $\boldsymbol{\theta}_{i}$ is the position of the $i$th image with the magnification factor $\mu_{i}$ and $\mu^{fs}$ is the total magnification factor of an extended source: \begin{eqnarray} \mu^{fs}=\frac{1}{\pi \rho^{2}_{\star} \bar{I}}\sum_{i} \int_{\mathcal{S}}d^{2}\mathcal{S}~I(\boldsymbol{u})\mu_{i}(\boldsymbol{u}). \end{eqnarray} In a simple microlensing event, where we have a single lens and the surface brightness of the source star is uniform, $\sum_{i}\boldsymbol{\theta}_{i}\mu_{i}=\theta_{E}\frac{u^2+3}{ \sqrt{u^2+4}} \hat{u}$ and the normalized centroid shift for an extended source is given by: \begin{eqnarray}\label{astrop} \boldsymbol{\delta\theta}_{c}^{fs}(\theta_{E})=-\boldsymbol{u_{cm}}+\frac{1}{\pi \mu^{fs}}\int_0^{1} \rho d\rho \int_{-\pi}^{\pi} d\phi I(\rho) \frac{u^2+3}{\sqrt{u^2+4}} \hat{u}, \end{eqnarray} where $\hat{u}=(\rho \cos\phi+ u_{cm} ,\rho \sin\phi)/u$ (see Figure \ref{coordinate}). In Figure (\ref{cenfin}), we plot the trajectories of the normalized centroid shifts in the lens plane (upper panel) and the absolute value of the normalized centroid shifts versus $u$ (lower panel) for different values of $\rho_{\star}$ and a fixed value of $u_{0}=0.1$. \begin{figure} \begin{center} \psfig{file=finite_cenf.eps,angle=270,width=8.cm,clip=} \caption{The normalized centroid shift trajectories in the lens plane (upper panel) and the normalized centroid shift amount versus $u$ (lower panel) for different values of $\rho_{\star}$ and $u_{0}=0.1$.
The negative value of the centroid shift (in the lower panel) indicates a reversal of its direction with respect to the $x$-axis, which occurs for $u\leq \rho_{\star}$.} \label{cenfin} \end{center} \end{figure} In the case of a uniform or limb-darkened source, the centroid shift component normal to the lens--source connection line (i.e. the $Y$-axis in Figure \ref{coordinate}) is zero. On the other hand, the polarization orientation is normal to the centroid shift for a single lens (see Figure \ref{figm}). Anomalies over the source surface can break this orthogonal relation. We note that astrometric signals are detectable for nearby and massive lenses, which have large angular Einstein radii. Polarimetric signals, on the other hand, are sensitive to high-magnification microlensing events or extended sources, where we have the condition $u_{cm} \simeq \rho_{\star}$. \section{Binary microlenses} \label{twolens} In this section we investigate a possible relation between the polarization and the astrometric centroid shift in binary microlensing events. In these events, the polarization signal produced when the source star crosses the caustic curve is higher than in the single-lens case \cite{Agol96}. The caustic lines can be classified into two categories: (i) folds, where the caustic lines are smooth, and (ii) cusps, where folds cross at a point \cite{PettersW1996}. We will investigate the polarization and astrometric signals near fold and cusp singularities. Assuming that a source star crosses the fold of a caustic curve, two temporary images can appear on the lens plane with almost the same magnification factors \cite{Schneider92f}. On the other hand, there are global images which do not change during the caustic crossing. These global images move slowly and have small magnification factors with respect to the temporary images. Hence, we only consider the temporary images and obtain their light centroid and polarization vectors in our calculation.
We place the origin of the coordinate system at the fold, with axes parallel and normal to the tangent vector of the fold. We also connect the source centre to the origin, i.e. $\boldsymbol{u}_{cm}=(0,z\rho_{\star})$; therefore, the position of any point on the source surface in this coordinate system is given by $\boldsymbol{u}=\boldsymbol{u}_{cm}+\rho_{\star}\boldsymbol{\rho}$. Since the position vector of the local images due to the fold singularity, $\boldsymbol{\theta}_{\pm}$, is a linear function of $\boldsymbol{u}$, we have $\boldsymbol{\theta}_{\pm}(\boldsymbol{u})=\boldsymbol{\theta}_{\pm}(\boldsymbol{u}_{cm}) +\rho_{\star}\boldsymbol{\theta}_{\pm}(\boldsymbol{\rho})$. The magnification factor for these images is $$\mu_{\pm}=\frac{1}{2}\sqrt{\frac{u_{f}}{\rho_{\star}(\rho_{2}+z)}}$$ where $\rho_{2}\geq -z$. The light centroid of the images near the fold singularity is given by \cite{gaudi2001}: \begin{eqnarray} \boldsymbol{\theta}^{fs}_{f}&=&\boldsymbol{\theta}_{f,cm}+ \frac{ \sqrt{u_{f}\rho_{\star}}}{\pi\mu^{fs}_{f}ad}\int_{max(-z,-1)}^{1} d\rho_{2} \int_{-\sqrt{1-\rho^{2}_{2}}}^{\sqrt{1-\rho^{2}_{2}}}\nonumber\\ &&d\rho_{1}I(\rho) \frac{1}{\sqrt{\rho_{2}+z}} \left( \begin{array}{c} d\rho_{1}-b\rho_{2}\\ -b\rho_{1}\end{array}\right), \end{eqnarray} where $\rho=\sqrt{\rho^{2}_{1}+\rho^{2}_{2}}$ and $\boldsymbol{\theta}_{f,cm}=(\frac{-b\rho_{\star}}{ad}z,0)$, which is parallel to the $u_{1}$-axis. For a limb-darkened source star, the $y$ component of the second term vanishes due to the symmetric range of $\rho_{1}$. Hence, near the fold singularities the centroid shift vector is parallel to the caustic (i.e. the $u_{1}$-axis).
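For a uniform source, the total magnification of the two temporary images, $\mu_{+}+\mu_{-}=\sqrt{u_{f}/(\rho_{\star}(\rho_{2}+z))}$, can be averaged over the source disc numerically. The Python sketch below is illustrative (a uniform source is assumed); it checks the expected far-from-fold limit $\mu^{fs}\rightarrow\sqrt{u_{f}/(\rho_{\star}z)}$:

```python
import numpy as np


def fold_magnification(z, u_f=1.0, rho_star=0.01, n=800):
    """Disc-averaged magnification of the two temporary fold images.
    z is the distance of the source centre from the fold, in source radii."""
    g = (np.arange(n) + 0.5) / n * 2.0 - 1.0           # midpoint grid on [-1, 1]
    r1, r2 = np.meshgrid(g, g, indexing="ij")
    disc = r1**2 + r2**2 <= 1.0
    bright = disc & (r2 + z > 0.0)                     # only the side with images
    mu = np.sqrt(u_f / (rho_star * (r2[bright] + z)))  # mu_+ + mu_-
    cell = (2.0 / n) ** 2
    return mu.sum() * cell / (disc.sum() * cell)       # normalize by the disc area


mu_far = fold_magnification(50.0)        # source centre well inside the caustic
mu_asym = np.sqrt(1.0 / (0.01 * 50.0))   # sqrt(u_f/(rho_star*z)) limit
```

The same grid can be reused to evaluate the centroid integral above, since the integrand differs only by the $(d\rho_{1}-b\rho_{2},\,-b\rho_{1})$ weight.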
On the other hand, the Stokes parameters near the fold are given by: \begin{eqnarray} \left( \begin{array}{c} S_{Q}\\S_{U}\end{array}\right)&=&\sqrt{\frac{u_{f}}{\rho_{\star}}}\int_{max(-z,-1)}^1 d\rho_{2}\int_{-\sqrt{1-\rho^{2}_{2}}}^{\sqrt{1-\rho^{2}_{2}}}\nonumber\\&&d\rho_{1} I_{-}(\rho) \frac{1}{\rho^{2}\sqrt{\rho_{2}+z}}\left( \begin{array}{c} \rho^{2}_{1}-\rho^{2}_{2} \\ 2\rho_{1}\rho_{2} \end{array} \right). \end{eqnarray} Here, the second component (i.e. $S_{U}$) vanishes due to the symmetry of the integral over $\rho_{1}$. As a result, near the fold singularities the polarizations are normal to the tangent vector of the fold and to the centroid shift vectors. Owing to this orthogonality relation near the fold singularity, polarimetry and astrometry can provide the following information: (i) tracing the source trajectory with respect to the caustic lines; (ii) in the case that the orthogonality relation is broken, studying possible effects of anomalies such as spots over the source surface. The orthogonality relation can also be broken at cusp singularities, except for the case in which the source is located on the symmetry axis of the cusp. For other points around the cusp, this orthogonal relation does not exist. We calculate numerically the polarization and centroid shift vectors around the cusp shown in Figure (\ref{cuspfig}). \begin{figure} \begin{center} \psfig{file=cusp.eps,angle=270,width=8.cm,clip=} \caption{The polarization (black lines) and astrometric centroid shift (red vectors) near the cusp in a binary lens (blue line). We set $\rho_{\star}=0.5$. Note that the sizes of the centroid shifts and polarizations are divided by constant factors. Whenever the limb of the source star crosses the cusp, the polarization signal strongly changes.
} \label{cuspfig} \end{center} \end{figure} \section{The effect of source spots on polarization and centroid shift}\label{spot} In this section we study the effect of source spots on the polarization and centroid shift of microlensing events. Our aim is to extract astrophysical information about the spots and the atmosphere of the source star from these anomalies. A given spot on the source star, with a different temperature and magnetic field compared to the background star, produces an angular-dependent defect on the source surface. The result is a net polarization even in the absence of gravitational lensing. Gravitational lensing can amplify the polarization as well as change its orientation. The amount of magnified polarization depends on the relative position of the spot with respect to the source centre and the lens position, as well as on the temperature and intrinsic flux of the spot. Here we use three parameters to quantify a spot in our calculation: (i) the size of the spot; (ii) the temperature of the source star and its temperature difference with respect to the spot; and (iii) the location of the spot on the source surface. Let us characterize the lens plane by the $(u_{1},u_{2})$ axes and put the lens at the centre of the coordinate system. The projected positions of the source and the spot in this reference frame are $(u_{1,\star},u_{2,\star})$ and $(u_{1,s},u_{2,s})$. The radius of the spot is $r_{s}$ and its angular size in the coordinate system located at the centre of the source star is given by $\theta_{0}=\sin^{-1} (r_{s}/ R_{\star})$. For simplicity we choose a circular spot on the source surface. In order to locate a typical spot on the source star, we first place the spot at the stellar pole and then perform a coordinate transformation to move the spot to an arbitrary location on the source \cite{Mehrabi2013}.
The position of the spot located at the pole of the spherical coordinate system is given by: \begin{eqnarray} X_{s}&=&R_{\star} \sin\eta \cos\varphi\nonumber\\ Y_{s}&=&R_{\star} \sin\eta \sin\varphi\nonumber\\ Z_{s}&=&R_{\star} \cos\eta, \end{eqnarray} where $\eta$ and $\varphi$ vary in the ranges $[0,\theta_{0}]$ and $[0,2 \pi]$, respectively. The spot position projected on the lens plane and normalized to the Einstein radius is \begin{eqnarray} x_{s}&=&\rho_{\star} \sin\eta \cos\varphi\nonumber\\ y_{s}&=&\rho_{\star} \sin\eta \sin\varphi\nonumber\\ z_{s}&=&\rho_{\star} \cos\eta, \end{eqnarray} and finally the spot position on the lens plane, using two rotation angles, $\theta$ around the $y$-axis and $\phi$ around the $z$-axis, is given by \begin{eqnarray} u_{1,s}&=&x_{s} \cos\phi \cos\theta -y_{s} \sin\phi +z_{s} \cos \phi\sin\theta + u_{1,\star}\nonumber\\ u_{2,s}&=&x_{s} \sin\phi \cos\theta +y_{s} \cos\phi + z_{s} \sin\phi\sin\theta + u_{2,\star}. \end{eqnarray} The modified Stokes parameters $S'_{Q}$, $S'_{U}$ and $S'_{I}$ for the case of a single spot on the source are given by: \begin{eqnarray}\label{spott} \left(\begin{array}{c}S'_{Q}\\S'_{U}\end{array}\right)&=& \left(\begin{array}{c}S_{Q}\\S_{U}\end{array}\right)-f\int_{\mathcal{A}_s}d^{2}s~I_{-}(\rho)A(u_{s})\left(\begin{array}{c}-\cos2\phi \\\sin 2\phi\end{array}\right)\nonumber\\ &=&\left(\begin{array}{c}S_{Q}\\S_{U}\end{array}\right)-f \left(\begin{array}{c}S_{Q,s}\\S_{U,s}\end{array}\right),\nonumber\\ S'_{I}&=&S_{I}-f\int_{\mathcal{A}_s}d^{2}s~I(\rho)A(u_{s})=S_{I}-fS_{I,s}, \end{eqnarray} where $S_{Q}$, $S_{U}$ and $S_{I}$ are given by equation (\ref{tsparam}) and refer to the source star without any spot, and ${\mathcal{A}_s}$ represents the area covered by the spot.
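The two-rotation construction above is straightforward to implement. The Python/numpy sketch below (function and variable names are ours, for illustration) applies the transformation of the last equation and can be used to generate the grid of spot points entering the integrals over $\mathcal{A}_s$:

```python
import numpy as np


def spot_points(rho_star, theta0, theta, phi, centre, n_eta=20, n_phi=60):
    """Project a circular spot of angular radius theta0 (placed at the pole,
    then rotated by theta about y and phi about z) onto the lens plane."""
    eta = np.linspace(0.0, theta0, n_eta)
    varphi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    E, V = np.meshgrid(eta, varphi, indexing="ij")
    # spot at the pole, normalized to the Einstein radius
    x = rho_star * np.sin(E) * np.cos(V)
    y = rho_star * np.sin(E) * np.sin(V)
    z = rho_star * np.cos(E)
    # rotate by theta about the y-axis, then phi about the z-axis, then shift
    u1 = x * np.cos(phi) * np.cos(theta) - y * np.sin(phi) \
        + z * np.cos(phi) * np.sin(theta) + centre[0]
    u2 = x * np.sin(phi) * np.cos(theta) + y * np.cos(phi) \
        + z * np.sin(phi) * np.sin(theta) + centre[1]
    return u1, u2


# with theta = 0 the pole faces the observer and the spot stays centred
u1, u2 = spot_points(0.5, np.radians(18.0), 0.0, 0.0, (0.2, -0.1))
max_r = float(np.hypot(u1 - 0.2, u2 + 0.1).max())  # <= rho_star*sin(theta0)
```

For $\theta=0$ the projected spot is a disc of radius $\rho_{\star}\sin\theta_{0}$ centred on the source centre, as expected from the equations above.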
$u_{s}=\sqrt{u_{1,s}^2+u_{2,s}^2}$ is the distance of each point of the spot from the lens position and $f=\left[F_{\star}(\nu)-F_{\mathcal{A}_s}(\nu)\right]/F_{\star}(\nu)$ is the relative difference between the fluxes of the source and the spot at the frequency $\nu$. We assume black-body radiation for both the star and the spot. The polarization degree $P'$ can then be written as: \begin{eqnarray}\label{polt} P'=\frac{\left(P^2 S_{I}^{2}+ f^2 P_{s}^{2}S_{I,s}^{2}-2fC(P,P_{s})S_{I}S_{I,s}\right)^{1/2}}{S_{I}-fS_{I,s}}, \end{eqnarray} where we define $P_{s}\equiv({S_{Q,s}^2+S_{U,s}^2})^{1/2}/S_{I,s}$ and $C(P,P_{s})\equiv(S_{Q}S_{Q,s}+S_{U}S_{U,s})/S_{I}S_{I,s}$ as the cross term between the contributions from the star and the spot. The angle of polarization $\theta'_{p}$ in terms of the Stokes parameters of the source and spot is given by \begin{eqnarray}\label{tetap} \theta'_{p}&=&\frac{1}{2}\tan^{-1}[\frac{S_{U}}{S_{Q}}+f\frac{S_{U}S_{Q,s}-S_{Q}S_{U,s}}{S^{2}_{Q}}\nonumber\\ &+&f^2\frac{-S_{Q,s}S_{U,s}S_{Q}+S_{U}S^{2}_{Q,s}}{S^{3}_{Q}}+ ... ]. \end{eqnarray} We can rewrite this equation in terms of the unperturbed angle $\theta_p$ and first- and second-order terms in $f$ as follows: \begin{eqnarray}\label{tetap2} \theta'_{p} &=& \theta_{p}+f\frac{S_{U}S_{Q,s}-S_{Q}S_{U,s}}{2P^{2}S^{2}_{I}}\nonumber\\&+&f^{2}\frac{C(P,P_{s})S_{I,s}}{2P^{4}S^{3}_{I}}(S_{U}S_{Q,s}-S_{Q}S_{U,s}) + ... \end{eqnarray} \begin{figure} \begin{center} \psfig{file=polm1.eps,angle=270,width=8.cm,clip=} \psfig{file=polm2.eps,angle=270,width=8.cm,clip=}\caption{Two microlensing events with spots on their sources. In both subfigures (a) and (b), the light curves and polarimetric curves with different sets of spot parameters are shown in the left and right panels, respectively. The source (grey circle) and its spot (pink spot), the lens position (grey star) and the source centre trajectory projected on the lens plane (red dash-dotted line) are shown as insets in the left-hand panels.
The simple models without the spot effect are shown by red solid lines. The black horizontal dashed lines represent the polarization signals due to spotted sources without the lensing effect. The photometric and polarimetric residuals with respect to the simple models are plotted in the bottom panels. The parameters of the microlensing events shown in subfigures $(a)$ and $(b)$ are $\rho_{\star}=0.3$, $\theta=80^{\circ}$, $\phi=30^{\circ}$, $u_{0}=-0.205$, $f=0.9$ and $\rho_{\star}=0.21$, $\theta=110^{\circ}$, $\phi=10^{\circ}$, $u_{0}=-0.033$, $r_{s}=0.35\rho_{\star}$, respectively.} \label{fig6} \end{center} \end{figure} In Figure (\ref{fig6}) we show two microlensing events with a stellar spot on the source surface. In both cases, we plot the photometric and polarimetric light curves. The parameters of the microlensing events shown in subfigures $(a)$ and $(b)$ are $(a)$ $\rho_{\star}=0.3$, $\theta=80^{\circ}$, $\phi=30^{\circ}$, $u_{0}=-0.205$, $f=0.9$ and $(b)$ $\rho_{\star}=0.21$, $\theta=110^{\circ}$, $\phi=10^{\circ}$, $u_{0}=-0.033$, $r_{s}=0.35\rho_{\star}$, respectively. In the absence of stellar spots, the polarimetric curve for the case $u_{0}<\rho_{\star}$ has two symmetric peaks at $t(t_{E})=t_{0}\pm \sqrt{\rho_{\star}^{2}-u_{0}^2}$, where $t_{0}$ is the time of the closest approach. When a spot is present on the source surface, not only is the symmetry of the polarimetric curve broken, but the polarization also tends to a non-zero value when the lensing effect is negligible. \begin{figure} \begin{center} \psfig{file=spot.eps,angle=270,width=8.cm,clip=} \caption{The normalized polarization (black lines) and astrometric centroid shift (red vectors) for a source star with a stellar spot lensed by a point-mass lens with the parameters $\rho_{\star}=0.5$, $\theta_{0}=18^{\circ}$, $\theta=110^{\circ}$, $\phi=15^{\circ}$ and $f=0.75$.
Note that the sizes of the centroid shifts and polarizations are normalized by an arbitrary factor.} \label{spotf} \end{center} \end{figure} A significant signal of the stellar spot in microlensing events occurs when the lens approaches close enough to the spot. In that case the total flux due to the spot, $S_{I,s}$, increases and, as a result, the denominator of equation (\ref{polt}) decreases. On the other hand, the cross term $C(P,P_{s})$ is enhanced, as $P$ is approximately parallel to $P_{s}$ (note that the polarimetric vectors over the spot are similar to those over the rest of the source, apart from the different temperatures). Moreover, if the spot is located on the limb of the source star, one of the peaks in the polarimetric curve will be strongly disturbed. According to Figure (\ref{fig6}), larger and darker spots produce stronger polarimetric and photometric signals. We note that the presence of the spot has only small effects on the light curves. The astrometric centroid shift of a source with a spot is given by: \begin{eqnarray} \boldsymbol{\delta\theta}'_{c}=\frac{1}{S'_{I}}\{ S_{I} \boldsymbol{\theta}_{c} -f\int_{\mathcal{A}}d^{2}s~ I(\rho) \frac{u_{s}^2+3}{u_{s}\sqrt{u_{s}^2+4}}\nonumber\\ \left( \begin{array}{c} \rho \cos\phi+ u_{cm} \\ \rho\sin\phi\end{array} \right)\}-\boldsymbol{u}_{spot}, \end{eqnarray} where $\boldsymbol{\theta}_{c}$ is the light centroid vector for a source star without any spot (see equation \ref{astrop}) and $\boldsymbol{u}_{spot}$ is the light centroid of the source with a spot. The spot perturbation on the astrometric measurements is larger for smaller impact parameters. In Figure (\ref{spotf}), we plot the map of the normalized polarization and centroid shift vectors of a source with a spot, with the parameters $\rho_{\star}=0.5$, $r_{s}=0.3\rho_{\star}$, $\theta=110^{\circ}$, $\phi=15^{\circ}$ and $f=0.75$, lensed by a single lens.
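Equation (\ref{polt}) is an exact rearrangement of $P'=\sqrt{S'^{2}_{Q}+S'^{2}_{U}}/S'_{I}$ with $S'_{X}=S_{X}-fS_{X,s}$, which can be verified numerically. The Python snippet below uses arbitrary illustrative Stokes values, not those of any figure:

```python
import math

# arbitrary illustrative Stokes parameters for the star and the spot term
S_Q, S_U, S_I = 0.8, -0.3, 25.0
S_Qs, S_Us, S_Is = 0.2, 0.05, 3.0
f = 0.75

# direct combination of the modified Stokes parameters, S'_X = S_X - f*S_Xs
SQp, SUp, SIp = S_Q - f * S_Qs, S_U - f * S_Us, S_I - f * S_Is
P_direct = math.hypot(SQp, SUp) / SIp

# equation (polt), written with P, P_s and the cross term C(P, P_s)
P = math.hypot(S_Q, S_U) / S_I
P_s = math.hypot(S_Qs, S_Us) / S_Is
C = (S_Q * S_Qs + S_U * S_Us) / (S_I * S_Is)
P_polt = math.sqrt(P**2 * S_I**2 + f**2 * P_s**2 * S_Is**2
                   - 2.0 * f * C * S_I * S_Is) / (S_I - f * S_Is)
```

The two expressions agree to machine precision, since $S'^{2}_{Q}+S'^{2}_{U}=P^{2}S_{I}^{2}-2fC(P,P_{s})S_{I}S_{I,s}+f^{2}P_{s}^{2}S_{I,s}^{2}$ term by term.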
We note that the orientations of the polarization and centroid shift in the absence of the spot are orthogonal. The polarization vector changes strongly due to the spot, while the centroid shift is almost unchanged except for close approaches. As a result, anomalies over the source surface break the orthogonality relation between the astrometric centroid shift and polarization vectors. \subsection{Statistical investigation of the OGLE data for polarimetry observation} Here we investigate the statistics of high-magnification events in the OGLE data to identify the fraction of events with possible polarization signatures. For high-magnification events with $u_{0}<\rho_{\star}$, the time profile of the polarization has two peaks and the maximum polarization may reach one per cent. Large telescopes with sufficiently long exposure times can achieve the sensitivity needed to detect the polarization as well as its time variation. In a recent work, Ingrosso et al. (2012) explored the prospects of observing the polarization with the FOcal Reducer and low dispersion Spectrograph 2 (FORS2) on the Very Large Telescope (VLT). A detailed study of follow-up polarization observations of the OGLE microlensing data will be presented in a later work. \begin{figure} \begin{center} \psfig{file=L_T.eps,angle=270,width=8.cm,clip=} \caption{Temperature--luminosity diagram for the OGLE microlensing events with main-sequence (black points) and red clump sources (red points). The blue stars represent highly magnified sources with $u_{0}<\rho_{\star}$. The lines in this diagram show constant radii for the stars.} \label{colorM} \end{center} \end{figure} Here, we examine $3560$ microlensing events reported by the OGLE collaboration towards the Galactic bulge during the years 2001--2009 \cite{OGLE3}. These microlensing events have been detected by monitoring $150$ million objects in the Galactic bulge.
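The constant-radius lines in this diagram follow from the Stefan--Boltzmann law, $L=4\pi R^{2}\sigma T^{4}$. As a sketch (Python with standard physical constants), the radius for a given luminosity and temperature, checked against solar values:

```python
import math

SIGMA = 5.670374e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]


def stellar_radius(luminosity, temperature):
    """Radius from L = 4*pi*R**2*sigma*T**4 (SI units)."""
    return math.sqrt(luminosity / (4.0 * math.pi * SIGMA * temperature**4))


# sanity check with solar values: should recover roughly one solar radius
R_sun_est = stellar_radius(3.828e26, 5772.0)   # metres; ~6.96e8 m expected
```

In the diagram, a fixed $R$ corresponds to $\log L \propto 4\log T$, i.e. a straight line in the temperature--luminosity plane.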
Amongst the microlensing events listed by the OGLE collaboration, the positions of the source stars are identified in the Colour--Magnitude (CM) diagram for $2614$ events. The source stars in the CM diagram are grouped into (i) main-sequence stars (black dots) and (ii) red clump stars (red dots) in Figure (\ref{colorM}). In order to estimate the relevant parameter for the polarization (i.e. $u_0/\rho_\star$), we use the best-fitting value of $u_0$ from a single-lens fit to the observed light curve. We estimate the size of the source star from the radius--CM diagram, noting that the absolute magnitude versus colour of the stars is corrected for the extinction and reddening estimated from the Besan\c{c}on model of the Galaxy \cite{Robin2003}. To calculate the radius of a star, we use the Stefan--Boltzmann law (i.e. $L=4\pi R^2\sigma T^{4}$): for a given luminosity and temperature, we obtain the radius of the star. The straight lines in Figure (\ref{colorM}) represent lines of constant radius in the CM diagram. To calculate $\rho_{\star}$, we set the lens mass $M_L = 0.3 M_{\odot}$, the source distance $D_{s}=8.5$ kpc and the lens distance at $x=0.5$, where the probability of lensing is maximum. The distribution of $u_{0}/\rho_{\star}$ for the OGLE microlensing events is shown in Figure (\ref{urho}). Amongst the $2614$ microlensing events, the high-magnification events satisfying $u_{0}<\rho_{\star}$ contain (a) $81$ source stars on the main sequence and (b) $32$ source stars in the red clump. Using these statistics, we estimate that almost $4.3$ per cent of microlensing events satisfy the condition $u_0/\rho_\star<1$. For these events the polarization due to possible spots on the source surface is observable with a suitable instrument. \section{Conclusions} \label{result} In gravitational microlensing observations, the standard practice is follow-up photometry of ongoing events.
This observational strategy aims to produce precise light curves of microlensing events with high cadence and small photometric error bars. We can imagine performing two additional types of observations of microlensing events: astrometric and polarimetric. These observations aim to measure the time variations of the centroid shift of the images and of the polarization during the lensing. \begin{figure} \begin{center} \psfig{file=urho.eps,angle=270,width=8.cm,clip=} \caption{The distribution of $u_{0}/\rho_{\star}$ for the OGLE-III microlensing events. Almost $4.3$ per cent of these events are in the range $u_{0}<\rho_{\star}$. These events are potentially suitable targets for measuring the polarization due to spots on the source star.} \label{urho} \end{center} \end{figure} We assumed a perfectly circular source star for the microlensing. In gravitational microlensing, the symmetry of the source star image is broken by the production of two distorted images on either side of the lens. This breaking of symmetry causes phenomena such as the polarization and the light centroid shift of the source star images. We have shown that there is an orthogonality relation between the polarization and light centroid shift vectors in simple microlensing events. Observations of either polarimetry or astrometry can uniquely indicate the source trajectory on the lens plane. By determining the sign of the impact parameter and the source trajectory, all related degeneracies, i.e. the $u_{0}$ degeneracy, ecliptic degeneracy, parallax-jerk degeneracy and orbiting-binary ecliptic degeneracy, can be resolved \cite{Skowron2011,rah11}. We noted that while polarimetry can probe small impact parameters, astrometry is sensitive to large impact parameters, and applying these two observations can probe the whole range of impact parameters. In binary microlensing events, unlike in simple lensing, the orthogonality between the polarization and centroid shift is generally not valid.
During the caustic crossing of the source star, which produces the maximum polarization and centroid-shift signals, we studied the behaviour of these quantities near the fold and cusp singularities. We have shown that the orthogonality relation between the polarization and centroid shift holds for the fold singularity, while it is violated for the cusp singularity. The relative behaviour of these two vectors during microlensing events can be used as an indicator to identify the trajectory of the source with respect to the caustic lines in binary lensing. Finally, we have studied the effect of source spots on the polarization, astrometry and photometry of microlensing events for single lensing. One of the signatures of spots is the breaking of the orthogonality relation between the polarization and centroid shift. Studying the polarization and centroid-shift maps together with the photometry provides information on the physics of the spots on a source star. We have investigated the high-magnification events in the OGLE data \cite{OGLE3} and shown that for $4.3$ per cent of the events the polarization effect could be enhanced to about $1$ per cent. This means that for a large number of very high-magnification microlensing events, polarimetric follow-up observations can open a new window for studying stellar spots on various types of stars. \textbf{Acknowledgment} We are grateful to Philippe Jetzer and Reza Rahimitabar for reading and commenting on the manuscript.
\section{Introduction} Predicting the binding mode of small molecules in a protein pocket is one of the main challenges in the field of computational chemistry. Accurate predictions can substantially reduce the costs of drug development and speed up the process\cite{Pinzi2019}. Several software solutions exist that address this problem, including AutoDock Vina \cite{Trott2010}, Glide \cite{Friesner2004}, Gold \cite{Jones1997} or rDock \cite{Ruiz-Carmona2014}. Typical docking protocols use the protein cavity and the query ligand to generate poses that are later evaluated with a generalized scoring function. However, structural knowledge about the target system is usually available, such as protein homologs with similar co-crystallized ligands. Hence, given that the binding mode of similar molecules is usually conserved \cite{Drwal2017, Malhotra2017}, it is reasonable to exploit this information to increase the accuracy of the prediction. Considering the growing amount of structural data available in the Protein Data Bank (PDB)\cite{Berman2000}, and the popularity of fragment-based drug discovery \cite{Lamoree2017}, we expect this knowledge-rich scenario to become increasingly prevalent. \\ Docking algorithms which make use of such knowledge are usually referred to as similarity docking or scaffold docking \cite{Fradera2004}. Scaffold docking methods usually rely on maximum common substructure (MCS) approaches, such as fkcombu \cite{Kawabata2014}. MCS methods try to find the largest common substructure (subgraph) between two molecules. When found, the conformation of that substructure in the query ligand can be modelled by simply mimicking the conformation of that same substructure in the template, while the position of the remaining atoms is decided by a general scoring function. However, due to the characteristics of MCS methods, two almost identical molecules that only differ in minor modifications can return disappointingly short subgraphs.
A shorter MCS means that more atoms in the query ligand would have to be modelled without any reference by the docking software, which is not desirable. Additionally, such minimal mismatches can be of critical interest in medicinal chemistry, as they can constitute scaffold hops that can, potentially, improve the pharmacological properties of a compound or circumvent intellectual property \cite{Hu2017}. Therefore, there is a need to maximize the use of structural information. We present here SkeleDock, a new scaffold docking algorithm that can overcome local mismatches. \\ \section{Features} \begin{figure}[H] \includegraphics[scale=0.60]{figures/skeledock_diagram.pdf} \caption{Main steps of the SkeleDock algorithm. The dihedral autocompletion step is optional.} \label{skeledock_flow} \end{figure} \textbf{Algorithm.} The SkeleDock web application provides a user-friendly interface to perform scaffold docking, starting from files with the structure of the receptor (PDB), a template molecule (PDB) and a set of SMILES strings representing query ligands (CSV). After submission, these files follow SkeleDock's algorithm, whose main steps are summarized in Figure \ref{skeledock_flow}. The algorithm begins by building a graph for the query and the template molecules. These two graphs are then compared to identify a common subgraph, that is, a connected set of atoms whose elements (nodes) and bonds (edges) are equivalent in both the query and the template molecules. Hence, if this step is successful, a mapping linking several atoms in the query molecule to their template counterparts will be returned. In the following step, tethering, this mapping is used to change the position of the atoms of the query molecule. This is done by creating a force on each query atom that points towards the location of its template counterpart, effectively biasing the conformation of the query ligand towards that of the template.
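The tethering step can be pictured as a harmonic restraint pulling every mapped query atom towards its template counterpart. The toy Python sketch below only illustrates the idea; it is not SkeleDock's actual implementation, and all names are ours:

```python
import numpy as np


def tether_step(coords, mapping, templ_coords, k=0.5):
    """One relaxation step: pull each mapped query atom towards its
    template counterpart with a harmonic force of strength k (toy model)."""
    coords = coords.copy()
    for q_idx, t_idx in mapping.items():
        coords[q_idx] += k * (templ_coords[t_idx] - coords[q_idx])
    return coords


query = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
templ = np.array([[0.2, 0.1, 0.0], [1.2, 0.1, 0.0]])
mapping = {0: 0, 1: 1}   # query atom index -> template atom index
for _ in range(50):
    query = tether_step(query, mapping, templ)
```

After a few iterations the mapped atoms converge onto the template positions, while unmapped atoms (here atom 2) are left for the downstream pose refinement to place.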
Finally, in order to find an appropriate location for those atoms in the query molecule for which no template equivalent was found, the \textit{tethered template docking} protocol of rDock \cite{Ruiz-Carmona2014} is used. This protocol allows the user to constrain the degrees of freedom of the docking run (orientation, position and dihedral angles of the ligand), based on the initial conformation of the provided molecule and a set of atom indexes. These indexes correspond to the atoms that the user wants to be fixed, in our case, those atoms for which we have found a template counterpart. If a given dihedral is composed of atoms whose indexes belong to this set, its dihedral angle will not be sampled at all or only within a user-defined range. \\ \textbf{Autocompletion step.} As previously discussed, one limitation of methods based on MCS is their sensitivity to small changes: two molecules which are almost identical, except for some minor modifications, will return a smaller mapping, as the common subgraph shared by both is now smaller. Figure \ref{dihedral_autocompletion}a depicts such a scenario. To avoid this problem, we added an optional step called dihedral autocompletion. As shown in Figure \ref{dihedral_autocompletion}b, the mapping found in the graph comparison step has \textit{stopped} just before the atom whose element differs between the query and template molecules, depicted as X in Figure \ref{dihedral_autocompletion}a. However, this mismatching atom belongs to a dihedral (highlighted in a ball-stick representation) in which three consecutive atoms are already mapped to the template. We can then assume that the mismatching atom -- the fourth atom of this dihedral -- matches the fourth atom of the equivalent dihedral in the template. This is what we refer to as dihedral autocompletion. After each dihedral autocompletion cycle, a new non-mapped fourth atom appears, and this step is repeated recursively until no more atoms are available.
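The propagation idea can be sketched in a few lines: starting from a partial atom mapping, walk the bonds outward and pair each unmapped neighbour of a mapped atom with a free neighbour of its template counterpart. This toy Python version ignores the dihedral geometry and ring bookkeeping that the real algorithm handles, and all names are illustrative:

```python
def autocomplete(query_adj, templ_adj, mapping):
    """Greedily extend a query->template atom mapping along bonds.

    query_adj / templ_adj: adjacency lists {atom: [neighbours]}.
    mapping: dict of already matched query atom -> template atom.
    """
    mapping = dict(mapping)
    frontier = list(mapping)
    while frontier:
        q = frontier.pop()
        t = mapping[q]
        for qn in query_adj[q]:
            if qn in mapping:
                continue
            # any still-unused neighbour of the template counterpart can
            # play the role of the "fourth dihedral atom"
            free = [tn for tn in templ_adj[t] if tn not in mapping.values()]
            if free:
                mapping[qn] = free[0]
                frontier.append(qn)
    return mapping


# toy linear molecules: query 0-1-2-3, template a-b-c-d; only 0 -> a is known
query = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
templ = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
full = autocomplete(query, templ, {0: "a"})
```

In the real algorithm each candidate fourth atom is evaluated geometrically, and when several template neighbours are available all of them are explored, as described next.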
If the template dihedral offers several possibilities for the fourth atom, all of them are explored and evaluated. This functionality is key to overcoming local mismatches, which makes SkeleDock able to handle some minor scaffold hops. Some MCS methods can overcome simple mismatches such as the one shown in Figure \ref{dihedral_autocompletion}, as they could identify the two disconnected, common subgraphs. However, if these disconnected subgraphs are not highly similar, typical MCS methods could fail.\\ \begin{figure}[H] \includegraphics[scale=0.12]{figures/autocomplete_vert_ab.pdf} \caption{Dihedral autocompletion step. a) Chemical structure of the template and query molecules. The mismatching atom is depicted as X. b) Overlap between the query molecule (opaque licorice) and the template molecule (transparent texture) before (top image) and after (bottom image) the autocompletion step. The semi-completed dihedral (atoms depicted with a ball) propagates to the right side, improving the overlap with the template. Template molecule is PDB code: 1UVT, resname: I48. Rings have to be broken to allow this step, but they are restored before the tethering. These conformations are not the final docked poses.} \label{dihedral_autocompletion} \end{figure} \textbf{Application options.} Different options are available to change the behaviour of the application. The \textit{rDock refinement step} is enabled by default, but it might not be necessary if every query atom has a template equivalent. The \textit{scaffold-hopping tolerant mode} enables or disables the dihedral autocompletion step. Users can also modify the magnitude of the force applied to each atom during tethering. Higher values result in a better alignment but might introduce some artefacts, like a change of chirality.
The last option, \textit{probe radius}, defines the radius of the spheres used to define the size of the docking cavity for rDock.\cite{Ruiz-Carmona2014} After execution, the best pose of each ligand can be displayed together with the protein and the template ligand (Figure \ref{PM_GUI}). Results can be downloaded as a tar.gz file. \\ \end{multicols} \end{small} \begin{figure}[H] \includegraphics[scale=0.24]{figures/GUI_figure.pdf} \caption{SkeleDock's graphical user interface. Docked molecules (line representation) are shown overlapped with the template used (ball and stick representation).} \label{PM_GUI} \end{figure} \begin{small} \begin{multicols}{2} \textbf{Time performance}. We assessed the efficiency of SkeleDock by docking congeneric series for two different targets: Cathepsin S (459 ligands, an average of 46.6 heavy atoms) and BACE-1 (154 ligands, an average of 38.4 heavy atoms). We used rDock as a baseline, and each test was run using 4 and 30 cores. SkeleDock is two to three times slower than rDock, but we believe that the increase in accuracy compensates for this. Table \ref{time-performace} summarizes the results of the time-performance evaluation.\\ \begin{table}[H] \small \begin{tabular}{|c c | c | c |} \hline \multicolumn{2}{c}{ } & \multicolumn{1}{c}{BACE-1} & \multicolumn{1}{c}{CatS}\\ \multicolumn{2}{c}{ } & \multicolumn{1}{c}{(154)} & \multicolumn{1}{c}{(459)}\\ \hline \textbf{Method} & \textbf{\#cpu} & \textbf{Yield} & \textbf{Yield} \\ & & [Lig/min] & [Lig/min] \\ \hline \multirow{2}{*}{SkeleDock} & 4 & 15.2 & 13.9 \\ & 30 & 43.8 & 50.7 \\ \hline \multirow{2}{*}{rDock} & 4 & 50.5 & 53.4 \\ & 30 & 102.0 & 127.5 \\ \hline \end{tabular} \caption{Time performance of SkeleDock and rDock for the BACE-1 and Cathepsin S congeneric series.
The number of simulated ligands is listed in brackets.} \label{time-performace} \end{table} \section{Validation} \textbf{Fragment-based docking.} We evaluated SkeleDock's ability to recover the native pose of a ligand using a fragment as a template. Due to the lack of crystal structures of proteins with both ligands and their corresponding fragments, we decided to artificially generate fragments for complexes from the refined set of PDBbind (version 2018) \cite{Wang2005} and use them as templates for SkeleDock. The ligands were fragmented by breaking a selected rotatable bond. We prepared three sets of fragments of increasing difficulty by excluding 1, 3 or 5 rotatable bonds of the complete ligand from the fragment. Deleting more atoms from the template increases the difficulty of predicting the right pose, as there is no reference for them. \\ We compared SkeleDock's performance with two MCS-based methods and with an unconstrained docking protocol. The MCS-based methods are two different settings of RDKit's \cite{rdkit} findMCS function: the first, in which the elements of the atoms must match (strict MCS), and the second, in which element and bond-order mismatches are allowed (agnostic MCS). The function returns a mapping (just as the graph comparison step of SkeleDock does), which is then directly passed as an input to the tethering and pose refinement steps. Finally, for the unconstrained docking protocol, we used rDock with free rotation, translation and dihedral angle exploration (free rDock). The performance of the docking algorithms is evaluated by the number of correct predictions. By convention, poses are considered correct if their RMSD from their crystal pose is under 2.0{\AA} \cite{Cross2009}. We report two levels of success: Top 1, where only the top pose was selected, and Top 5, where the best among the top five poses was selected (Figure \ref{fragment-based}). A full report of the docking results can be found in Table S1. 
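This success criterion is easy to state operationally. The following Python sketch is illustrative only (not SkeleDock code): the `rmsd` helper is a hypothetical plain RMSD over a fixed atom correspondence, without the symmetry corrections a production evaluation would apply.

```python
import math

RMSD_CUTOFF = 2.0  # Angstrom; conventional success threshold

def rmsd(pose, reference):
    """Root-mean-square deviation between two equally ordered
    lists of (x, y, z) atom coordinates."""
    n = len(pose)
    sq = sum((p[k] - r[k]) ** 2
             for p, r in zip(pose, reference) for k in range(3))
    return math.sqrt(sq / n)

def top_n_success(ranked_rmsds, n):
    """True if any of the n best-ranked poses lies under the cutoff."""
    return any(r < RMSD_CUTOFF for r in ranked_rmsds[:n])
```

For instance, a ligand whose ranked pose RMSDs are `[3.4, 1.6, 2.8]` counts as a failure at the Top 1 level but a success at the Top 5 level.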
\\ In terms of success rate, SkeleDock outperforms the other approaches in all fragmentation scenarios (Figure \ref{fragment-based}). Strict MCS is comparable to SkeleDock, and both outperform agnostic MCS and free rDock. This result is expected because the less strict nature of the agnostic MCS setting might find mappings which are feasible in terms of equivalence in other features (such as ring-ring equivalence) but lead to wrong orientations of the ligand. These results suggest that, when the binding modes of the query ligand and its fragments are conserved, biasing the prediction using SkeleDock or MCS approaches can substantially increase the success rate of binding mode prediction.\\ \begin{figure}[H] \includegraphics[scale=0.55]{figures/Top15.pdf} \caption{Self-docking performance of SkeleDock, strict MCS, agnostic MCS and free rDock. Different shades correspond to the different success levels: Top 1 - opaque, Top 5 - transparent.} \label{fragment-based} \end{figure} \textbf{D3R Grand Challenge 4.} In order to evaluate SkeleDock prospectively, we engaged in the D3R Grand Challenge 4 (GC4) pose prediction subchallenge. The D3R Grand Challenge is an international contest where participants complete different computational tasks of pharmaceutical interest.\cite{Gaieb2019} In its fourth edition, the objective was to predict the binding modes of 20 ligands of the BACE-1 protein. As templates for SkeleDock, we used crystal structures of close homologs and their co-crystallized ligands from the PDB (Table S2). At the time the challenge took place, the final rDock pose refinement step was not yet implemented in the protocol. Instead, we ran a short molecular dynamics (MD) simulation to relax the poses (see SI: MD simulation for further details). To assess the performance of the final protocol, a retrospective analysis was run using SkeleDock's web application at PlayMolecule. 
\\ This subchallenge was particularly complicated for two reasons: (1) all ligands except one had a macrocycle, and (2) most of the ligands had a shortened MCS with their template, due to certain atoms differing in element or to the presence of rings. Conformational changes in macrocycles involve the concerted rotation of several dihedrals, making them difficult to model \cite{Allen2016}. The gold standard among docking practitioners is to first sample different conformations of a macrocycle and then dock each one independently. This was not needed in our case, as SkeleDock can simply use the macrocycle of the template to model the one in the query ligand. Regarding the shortened MCS, the autocompletion step of SkeleDock can handle these mismatches, leading to a larger mapping and overlap with the templates in both the macrocyclic and non-macrocyclic portions of the molecules, as can be seen in Figure \ref{MCS_comparison}. We compared the RMSD of the poses generated by SkeleDock and the two MCS methods described in \textit{fragment-based docking}. Both the global RMSD and the macrocycle RMSD are lower for SkeleDock poses, thanks to the larger mapping with the template (Table S3 and Table S4).\\ SkeleDock's submission (code: qqou3) finished among the top-performing participants, ranking 9th out of 74 according to median RMSD (1.02 \AA) and 15th according to mean RMSD (1.33 \AA). In the retrospective analysis, the SkeleDock web application performed slightly worse, with a mean RMSD of 1.47 {\AA}. Given that this test was run in a fully automated fashion with no human supervision, the gap between the two results is understandable.\\ \begin{figure}[H] \includegraphics[scale=0.11]{figures/comparison_v5_vertical.pdf} \caption{Overlap between predicted poses (gold) and template (violet) using 3 different methods: a) SkeleDock, b) Element Agnostic, c) Strict MCS. 
RMSD is the average RMSD value of the poses, and mRMSD is the mean value of the RMSD of the macrocycle atoms.} \label{MCS_comparison} \end{figure} \section{Conclusions} The SkeleDock algorithm offers four main features: 1) docking of molecules based on their analogues or fragments, 2) an autocompletion step that can handle local mismatches and hence model minor scaffold hops, 3) the ability to model macrocycles without having to pregenerate ring conformations, and 4) a user-friendly GUI that enables efficient scaffold docking and results exploration. The protocol can be accessed at https://playmolecule.org/SkeleDock/. \\ \begin{acknowledgement} The authors thank Acellera Ltd. for funding and the D3R organizers for their efforts. G.D.F. acknowledges support from MINECO (BIO2017-82628-P) and FEDER. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823712 (CompBioMed2) and from the Industrial Doctorates Plan of the Secretariat of Universities and Research of the Department of Economy and Knowledge of the Generalitat of Catalonia. \end{acknowledgement} \begin{suppinfo} Table S1: Detailed results of the retrospective validation analysis. \href{https://figshare.com/articles/SkeleDock_Table_S1/12248702}{(csv)} Table S2: PDB codes used as template for SkeleDock in the D3R GC4 pose prediction challenge. MD simulation: Description of the MD protocol used to refine the poses in D3R GC4 pose prediction challenge. Table S3: Mean RMSD obtained by the different methods used to model macrocycles. Table S4: Mean RMSD (computed for macrocycle atoms only) obtained by the different methods used to model macrocycles. \href{https://figshare.com/articles/SkeleDock_Supplementary_Information/12248693}{(pdf)} \end{suppinfo} \end{multicols} \end{small} \newpage
\section{Introduction} \label{s:Intro} \noindent At the end of his \(1975\) article on class numbers of pure quintic fields, Parry suggested verbatim \lq\lq In conclusion the author would like to say that he believes a numerical study of pure quintic fields would be most interesting\rq\rq\ \cite[p. 484]{Pa}. Of course, it would have been rather difficult to realize Parry's desire in \(1975\). But now, \(40\) years later, we are in the position to use the powerful computer algebra systems PARI/GP \cite{PARI} and MAGMA \cite{BCP,BCFS,MAGMA} for starting an attack against this hard problem. Prepared by \cite{Ma2a,Ma2b}, this will actually be done in the present paper. Even in \(1991\), when we generalized Barrucand and Cohn's theory \cite{BaCo2} of principal factorization types from pure cubic fields \(\mathbb{Q}(\sqrt[3]{D})\) to pure quintic fields \(L=\mathbb{Q}(\sqrt[5]{D})\) and their pure metacyclic normal closures \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{D})\) \cite{Ma0}, it was still impossible to verify our hypothesis about the distinction between absolute, intermediate and relative \textit{differential principal factors} (DPF) \cite[(6.3)]{Ma2a} and about the values of the \textit{unit norm index} \((U_K:N_{N/K}(U_N))\) \cite[(1.3)]{Ma2a} by actual computations. All these conjectures have been proven by our most recent numerical investigations. Our classification is based on the Hasse-Iwasawa theorem about the Herbrand quotient of the unit group \(U_N\) of the Galois closure \(N\) of \(L\) as a module over the relative group \(G=\mathrm{Gal}(N/K)\) with respect to the cyclotomic subfield \(K=\mathbb{Q}(\zeta_5)\). It only involves the unit norm index \((U_K:N_{N/K}(U_N))\) and our \(13\) types of differential principal factorizations \cite[Thm. 1.3]{Ma2a}, but not the index of subfield units \((U_N:U_0)\) \cite[\S\ 5]{Ma2a} in Parry's class number formula \cite[(5.1)]{Ma2a}. 
We begin with a collection of explicit multiplicity formulas in \S\ \ref{s:Formulas} which are required for understanding the subsequent extensive presentation of our computational results in twenty tables of crucial invariants in \S\ \ref{s:Tables}. This information admits the classification of all \(900\) pure quintic fields \(L=\mathbb{Q}(\root{5}\of{D})\) with normalized radicands \(2\le D<10^3\) into \(13\) DPF types and the refined classification into similarity classes with representative prototypes in \S\ \ref{s:Conclusions}. In these final conclusions, we collect theoretical consequences of our experimental results and draw the attention to remaining open questions. \section{Collection of Multiplicity Formulas} \label{s:Formulas} \noindent For the convenience of the reader, we provide a summary of formulas for calculating invariants of pure quintic fields \(L=\mathbb{Q}(\sqrt[5]{D})\) with normalized fifth power free radicands \(D>1\) and their associated pure metacyclic normal fields \(N=\mathbb{Q}(\zeta,\sqrt[5]{D})\) with a primitive fifth root of unity \(\zeta=\zeta_5\). Let \(f\) be the class field theoretic conductor of the relatively quintic Kummer extension \(N/K\) over the cyclotomic field \(K=\mathbb{Q}(\zeta)\). It is also called the conductor of the pure quintic field \(L\). The \textit{multiplicity} \(m=m(f)\) of the conductor \(f\) indicates the number of non-isomorphic pure metacyclic fields \(N\) sharing the common conductor \(f\), or also, according to \cite[Prop. 2.1]{Ma2a}, the number of normalized fifth power free radicands \(D>1\) whose fifth roots generate non-isomorphic pure quintic fields \(L\) sharing the common conductor \(f\). We adapt the general multiplicity formulas in \cite[Thm. 2, p. 104]{Ma1} to the quintic case \(p=5\). If \(L\) is a field of species \(1\mathrm{a}\) \cite[(2.6) and Exm. 2.2]{Ma2a}, i.e. \(f^4=5^6\cdot q_1^4\cdots q_t^4\), then \(m=4^t\) where \(t:=\#\lbrace q\in\mathbb{P}\mid q\ne 5,\ q\mid f\rbrace\). 
The explicit values of \(m\) in dependence on \(t\) are given in Table \ref{tbl:Multiplicity1a}. \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Multiplicity of fields of species \(1\mathrm{a}\)} \label{tbl:Multiplicity1a} \begin{center} \begin{tabular}{|c||rrrrrr|} \hline \(t\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline \(m\) & \(1\) & \(4\) & \(16\) & \(64\) & \(256\) & \(1024\) \\ \hline \end{tabular} \end{center} \end{table} \noindent If \(L\) is a field of species \(1\mathrm{b}\) \cite[(2.6) and Exm. 2.2]{Ma2a}, i.e. \(f^4=5^2\cdot q_1^4\cdots q_t^4\), then \(m=4^u\cdot X_v\) where \(u:=\#\lbrace q\in\mathbb{P}\mid q\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25),\ q\mid f\rbrace\), \(v:=t-u\) and \(X_j:=\frac{1}{5}\lbrack 4^j-(-1)^j\rbrack\), that is \((X_j)_{j\ge -1}=(\frac{1}{4},0,1,3,13,51,205,\ldots)\). The explicit values of \(m\) in dependence on \(u\) and \(v\) are given in Table \ref{tbl:Multiplicity1b}. \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Multiplicity of fields of species \(1\mathrm{b}\)} \label{tbl:Multiplicity1b} \begin{center} \begin{tabular}{|c||c|rrrrrr|} \hline \(u\) & \(v\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline \(0\) & \(\) & \(0\) & \(1\) & \(3\) & \(13\) & \(51\) & \(205\) \\ \(1\) & \(\) & \(0\) & \(4\) & \(12\) & \(52\) & \(204\) & \(820\) \\ \(2\) & \(\) & \(0\) & \(16\) & \(48\) & \(208\) & \(816\) & \(\) \\ \(3\) & \(\) & \(0\) & \(64\) & \(192\) & \(832\) & \(\) & \(\) \\ \(4\) & \(\) & \(0\) & \(256\) & \(768\) & \(\) & \(\) & \(\) \\ \hline \end{tabular} \end{center} \end{table} \noindent If \(L\) is a field of species \(2\) \cite[(2.6) and Exm. 2.2]{Ma2a}, i.e. \(f^4=5^0\cdot q_1^4\cdots q_t^4\), then \(m=4^u\cdot X_{v-1}\) where \(u:=\#\lbrace q\in\mathbb{P}\mid q\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25),\ q\mid f\rbrace\), \(v:=t-u\) and \(X_j:=\frac{1}{5}\lbrack 4^j-(-1)^j\rbrack\), that is \((X_j)_{j\ge -1}=(\frac{1}{4},0,1,3,13,51,205,\ldots)\). 
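The three multiplicity formulas lend themselves to direct computation. The following Python sketch is illustrative only (the paper's actual computations were performed with PARI/GP and MAGMA); it uses exact rational arithmetic so that \(X_{-1}=\frac{1}{4}\) is handled correctly, and assigns \(m=0\) to the degenerate case \(u=v=0\) of species \(2\), as in Table \ref{tbl:Multiplicity2}.

```python
from fractions import Fraction

def X(j):
    """X_j = (4^j - (-1)^j)/5, defined for j >= -1, so X_{-1} = 1/4."""
    return (Fraction(4) ** j - Fraction(-1) ** j) / 5

def multiplicity(species, u, v):
    """Multiplicity m of a conductor f with t = u + v prime divisors q != 5,
    of which u satisfy q = +-1, +-7 (mod 25)."""
    if species == "1a":                  # f^4 = 5^6 * q_1^4 ... q_t^4
        return 4 ** (u + v)              # m = 4^t, independent of the split
    if species == "1b":                  # f^4 = 5^2 * q_1^4 ... q_t^4
        return int(4 ** u * X(v))        # m = 4^u * X_v
    if species == "2":                   # f^4 = 5^0 * q_1^4 ... q_t^4
        m = Fraction(4 ** u) * X(v - 1)  # m = 4^u * X_{v-1}
        return int(m) if m.denominator == 1 else 0  # degenerate t = 0 case
    raise ValueError("unknown species: " + species)
```

For example, `multiplicity("1b", 1, 3)` evaluates to \(4^1\cdot X_3=52\), in agreement with Table \ref{tbl:Multiplicity1b}.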
The explicit values of \(m\) in dependence on \(u\) and \(v\) are given in Table \ref{tbl:Multiplicity2}. \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Multiplicity of fields of species \(2\)} \label{tbl:Multiplicity2} \begin{center} \begin{tabular}{|c||c|rrrrrr|} \hline \(u\) & \(v\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline \(0\) & \(\) & \(0\) & \(0\) & \(1\) & \(3\) & \(13\) & \(51\) \\ \(1\) & \(\) & \(1\) & \(0\) & \(4\) & \(12\) & \(52\) & \(204\) \\ \(2\) & \(\) & \(4\) & \(0\) & \(16\) & \(48\) & \(208\) & \(816\) \\ \(3\) & \(\) & \(16\) & \(0\) & \(64\) & \(192\) & \(832\) & \(\) \\ \(4\) & \(\) & \(64\) & \(0\) & \(256\) & \(768\) & \(\) & \(\) \\ \hline \end{tabular} \end{center} \end{table} \section{Classification by DPF types in \(20\) numerical tables} \label{s:Tables} \subsection{DPF types} \label{ss:DPFTypes} The following twenty Tables \ref{tbl:PureQuinticFields50} -- \ref{tbl:PureQuinticFields1000} establish a complete classification of all \(900\) pure metacyclic fields \(N=\mathbb{Q}(\zeta,\root{5}\of{D})\) with normalized radicands in the range \(2\le D\le 10^3\). With the aid of PARI/GP \cite{PARI} and MAGMA \cite{MAGMA} we have determined the \textit{differential principal factorization type}, T, of each field \(N\) by means of other invariants \(U,A,I,R\) \cite[Thm. 6.1]{Ma2a}. After several weeks of CPU time, the date of completion was September \(17\), \(2018\). The possible DPF types are listed in dependence on \(U,A,I,R\) in Table \ref{tbl:DPFTypes}, where the symbol \(\times\) in the column \(\eta\), resp. \(\zeta\), indicates the existence of a unit \(H\in U_N\), resp. \(Z\in U_N\), such that \(\eta=N_{N/K}(H)\), resp. \(\zeta=N_{N/K}(Z)\). The \(5\)-valuation of the \textit{unit norm index} \((U_K:N_{N/K}{U_N})\) is abbreviated by \(U\) \cite[(1.3), (6.3)]{Ma2a}. 
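Since the correspondence between the invariants and the thirteen DPF types in Table \ref{tbl:DPFTypes} is a finite lookup, it can be encoded directly. The following Python sketch is an illustration only (not the MAGMA scripts used for the actual computations); the symbols \(\eta\) and \(\zeta\) are represented by \(0/1\) flags for the existence of units \(H,Z\in U_N\) with \(N_{N/K}(H)=\eta\), resp. \(N_{N/K}(Z)=\zeta\).

```python
# Keys are (U, eta, zeta, A, I, R) as in the table of DPF types.
DPF_TYPE = {
    (2, 0, 0, 1, 0, 2): "alpha_1",
    (2, 0, 0, 1, 1, 1): "alpha_2",
    (2, 0, 0, 1, 2, 0): "alpha_3",
    (2, 0, 0, 2, 0, 1): "beta_1",
    (2, 0, 0, 2, 1, 0): "beta_2",
    (2, 0, 0, 3, 0, 0): "gamma",
    (1, 1, 0, 1, 0, 1): "delta_1",
    (1, 1, 0, 1, 1, 0): "delta_2",
    (1, 1, 0, 2, 0, 0): "epsilon",
    (1, 0, 1, 1, 0, 1): "zeta_1",
    (1, 0, 1, 1, 1, 0): "zeta_2",
    (1, 0, 1, 2, 0, 0): "eta",
    (0, 1, 1, 1, 0, 0): "theta",
}

def dpf_type(U, eta, zeta, A, I, R):
    """Return the DPF type T for an admissible invariant combination."""
    return DPF_TYPE[(U, eta, zeta, A, I, R)]
```

Note that every admissible row satisfies \(A+I+R=U+1\), reflecting the constraint imposed by the Hasse-Iwasawa theorem on the Herbrand quotient.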
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Differential principal factorization types, T, of pure metacyclic fields \(N\)} \label{tbl:DPFTypes} \begin{center} \begin{tabular}{|c|ccc|ccc|} \hline T & \(U\) & \(\eta\) & \(\zeta\) & \(A\) & \(I\) & \(R\) \\ \hline \(\alpha_1\) & \(2\) & \(-\) & \(-\) & \(1\) & \(0\) & \(2\) \\ \(\alpha_2\) & \(2\) & \(-\) & \(-\) & \(1\) & \(1\) & \(1\) \\ \(\alpha_3\) & \(2\) & \(-\) & \(-\) & \(1\) & \(2\) & \(0\) \\ \(\beta_1\) & \(2\) & \(-\) & \(-\) & \(2\) & \(0\) & \(1\) \\ \(\beta_2\) & \(2\) & \(-\) & \(-\) & \(2\) & \(1\) & \(0\) \\ \(\gamma\) & \(2\) & \(-\) & \(-\) & \(3\) & \(0\) & \(0\) \\ \hline \(\delta_1\) & \(1\) & \(\times\) & \(-\) & \(1\) & \(0\) & \(1\) \\ \(\delta_2\) & \(1\) & \(\times\) & \(-\) & \(1\) & \(1\) & \(0\) \\ \(\varepsilon\) & \(1\) & \(\times\) & \(-\) & \(2\) & \(0\) & \(0\) \\ \hline \(\zeta_1\) & \(1\) & \(-\) & \(\times\) & \(1\) & \(0\) & \(1\) \\ \(\zeta_2\) & \(1\) & \(-\) & \(\times\) & \(1\) & \(1\) & \(0\) \\ \(\eta\) & \(1\) & \(-\) & \(\times\) & \(2\) & \(0\) & \(0\) \\ \hline \(\vartheta\) & \(0\) & \(\times\) & \(\times\) & \(1\) & \(0\) & \(0\) \\ \hline \end{tabular} \end{center} \end{table} \subsection{Justification of the computational techniques} \label{ss:Techniques} The steps of the following classification algorithm are ordered by increasing requirements of CPU time. To avoid unnecessary time consumption, the algorithm stops at early stages already, as soon as the DPF type is determined unambiguously. The illustrating subfield lattice of \(N\) is drawn in Figure \ref{fig:GaloisCorrespondence} at the end of the paper. \begin{algorithm} \label{alg:Classification} (Classification into \(13\) DPF types.)\\ \textbf{Input:} a normalized fifth power free radicand \(D\ge 2\). 
\\ \textbf{Step 1:} By purely \textit{rational} methods, without any number field constructions, the prime factorization of the radicand \(D\) (including the counters \(t,u,v;n,s_2,s_4\), \S\ \ref{ss:Prototypes}) is determined. If \(D=q\in\mathbb{P}\), \(q\equiv\pm 2\,(\mathrm{mod}\,5)\), \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\), then \(N\) is a Polya field of type \(\varepsilon\); stop. If \(D=q\in\mathbb{P}\), \(q=5\) or \(q\equiv\pm 7\,(\mathrm{mod}\,25)\), then \(N\) is a Polya field of type \(\vartheta\); stop. \\ \textbf{Step 2:} The field \(L\) of degree \(5\) is constructed. The primes \(q_1,\ldots,q_T\) dividing the conductor \(f\) of \(N/K\) are determined, and their overlying prime ideals \(\mathfrak{q}_1,\ldots,\mathfrak{q}_T\) in \(L\) are computed. By means of at most \(5^T\) principal ideal tests of the elements of \(\mathcal{I}_{L/\mathbb{Q}}/\mathcal{I}_{\mathbb{Q}}=\bigoplus_{i=1}^T\,\mathbb{F}_5\,\mathfrak{q}_i\), the number \(5^A:=\#\lbrace (v_1,\ldots,v_T)\in\mathbb{F}_5^T\mid\prod_{i=1}^T\,\mathfrak{q}_i^{v_i}\in\mathcal{P}_{L}\rbrace\), that is the cardinality of \(\mathcal{P}_{L/\mathbb{Q}}/\mathcal{P}_{\mathbb{Q}}\), is determined. If \(A=T\), then \(N\) is a Polya field. If \(A=3\), then \(N\) is of type \(\gamma\); stop. If \(A=2\), \(s_2=s_4=0\), \(v\ge 1\), then \(N\) is of type \(\varepsilon\); stop. If \(A=1\), \(s_2=s_4=0\), then \(N\) is of type \(\vartheta\); stop. \\ \textbf{Step 3:} If \(s_2\ge 1\) or \(s_4\ge 1\), then the field \(M\) of degree \(10\) is constructed. For the \(2\)-split primes \(\ell_1,\ldots,\ell_{s_2+s_4}\equiv\pm 1\,(\mathrm{mod}\,5)\) among the primes \(q_1,\ldots,q_T\) dividing the conductor \(f\) of \(N/K\), the overlying prime ideals \(\mathcal{L}_1,\mathcal{L}_1^\tau,\ldots,\mathcal{L}_{s_2+s_4},\mathcal{L}_{s_2+s_4}^\tau\) in \(M\) are computed. 
By means of at most \(5^{s_2+s_4}\) principal ideal tests of the elements of \((\mathcal{I}_{M/K^+}/\mathcal{I}_{K^+})\bigcap\ker(N_{M/L}) =\bigoplus_{i=1}^{s_2+s_4}\,\mathbb{F}_5\,\mathcal{K}_{(\ell_i)}\), where \(\mathcal{K}_{(\ell_i)}=\mathcal{L}_i^{1+4\tau}\) for \(1\le i\le s_2+s_4\), the number \(5^I:=\#\lbrace (v_1,\ldots,v_{s_2+s_4})\in\mathbb{F}_5^{s_2+s_4}\mid\prod_{i=1}^{s_2+s_4}\,\mathcal{K}_{(\ell_i)}^{v_i}\in\mathcal{P}_{M}\rbrace\), that is the cardinality of \((\mathcal{P}_{M/K^+}/\mathcal{P}_{K^+})\bigcap\ker(N_{M/L})\), is determined. If \(I=2\), then \(N\) is of type \(\alpha_3\); stop. If \(I=1\), \(A=2\), then \(N\) is of type \(\beta_2\); stop. \\ \textbf{Step 4:} If \(s_4\ge 1\), then the field \(N\) of degree \(20\) is constructed. For all \(4\)-split primes \(\ell_{s_2+1},\ldots,\ell_{s_2+s_4}\) \(\equiv +1\,(\mathrm{mod}\,5)\) among the primes \(q_1,\ldots,q_T\) dividing the conductor \(f\) of \(N/K\), the overlying prime ideals \(\mathfrak{L}_{s_2+1},\mathfrak{L}_{s_2+1}^{\tau^2},\mathfrak{L}_{s_2+1}^{\tau},\mathfrak{L}_{s_2+1}^{\tau^3}, \ldots,\mathfrak{L}_{s_2+s_4},\mathfrak{L}_{s_2+s_4}^{\tau^2},\mathfrak{L}_{s_2+s_4}^{\tau},\mathfrak{L}_{s_2+s_4}^{\tau^3}\) in \(N\) are computed. 
By means of at most \(5^{2s_4}\) principal ideal tests of the elements of \((\mathcal{I}_{N/K}/\mathcal{I}_{K})\bigcap\ker(N_{N/M}) =\bigoplus_{i=s_2+1}^{s_2+s_4}\,\left(\mathbb{F}_5\,\mathfrak{K}_{1,(\ell_i)}\right.\) \(\left.\oplus\mathbb{F}_5\,\mathfrak{K}_{2,(\ell_i)}\right)\), where \(\mathfrak{K}_{1,(\ell_i)}=\mathfrak{L}_i^{1+4\tau^2+2\tau+3\tau^3}\) and \(\mathfrak{K}_{2,(\ell_i)}=\mathfrak{L}_i^{1+4\tau^2+3\tau+2\tau^3}\) for \(s_2+1\le i\le s_2+s_4\), the number \(5^R:=\#\lbrace (v_{1,s_2+1},v_{2,s_2+1},\ldots,v_{1,s_2+s_4},v_{2,s_2+s_4})\in\mathbb{F}_5^{2s_4}\mid \prod_{i=s_2+1}^{s_2+s_4}\,\left(\mathfrak{K}_{1,(\ell_i)}^{v_{1,i}}\mathfrak{K}_{2,(\ell_i)}^{v_{2,i}}\right)\in\mathcal{P}_{N}\rbrace\), that is the cardinality of \((\mathcal{P}_{N/K}/\mathcal{P}_{K})\bigcap\ker(N_{N/M})\), is determined. If \(R=2\), then \(N\) is of type \(\alpha_1\); stop. If \(R=1\), \(I=1\), then \(N\) is of type \(\alpha_2\); stop. If \(R=1\), \(A=2\), then \(N\) is of type \(\beta_1\); stop. \\ \textbf{Step 5:} If the type of the field \(N\) is not yet determined uniquely, then \(U=1\) and there remain the following possibilities. If \(v\ge 1\), then \(N\) is of type \(\delta_1\), if \(R=1\), of type \(\delta_2\), if \(I=1\), and of type \(\varepsilon\), if \(R=I=0\). If \(v=0\), then a fundamental system \((E_j)_{1\le j\le 9}\) of units is constructed for the unit group \(U_N\) of the field \(N\) of degree \(20\), and all relative norms of these units with respect to the cyclotomic subfield \(K\) are computed. If \(N_{N/K}(E_j)=\zeta_5^k\) for some \(1\le j\le 9\), \(1\le k\le 4\), then \(N\) is of type \(\zeta_1\), if \(R=1\), of type \(\zeta_2\), if \(I=1\), and of type \(\eta\), if \(R=I=0\). Otherwise the conclusions are the same as for \(v\ge 1\). \\ \textbf{Output:} the DPF type of the field \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{D})\) and the decision about its Polya property. 
\end{algorithm} \begin{proof} The claims of Step 1 concerning the types \(\varepsilon,\vartheta\) are proved in items (1) and (2) of \cite[Thm. 10.1]{Ma2a}. For Step 2, the formulas (4.1) and (4.2) in \cite[Thm. 4.1]{Ma2a} give an \(\mathbb{F}_5\)-basis of the space of absolute differential factors, and the formulas (4.3) and (4.4) in \cite[Cor. 4.1]{Ma2a} determine bounds for the \(\mathbb{F}_5\)-dimension \(A\) of the space of \textit{absolute} DPF in the field \(L\) of degree \(5\). The Polya property was characterized in \cite[Thm. 10.5]{Ma2a}, the claim concerning type \(\gamma\) follows from \cite[Thm. 6.1]{Ma2a}, and the claims about the types \(\varepsilon,\vartheta\) from \cite[Thm. 8.1 and Thm. 6.1]{Ma2a}. For Step 3, the formulas (4.5) and (4.6) in \cite[Thm. 4.3]{Ma2a} give an \(\mathbb{F}_5\)-basis of the space of intermediate differential factors, and the formulas (4.7) and (4.8) in \cite[Cor. 4.2]{Ma2a} determine bounds for the \(\mathbb{F}_5\)-dimension \(I\) of the space of \textit{intermediate} DPF in the field \(M\) of degree \(10\). The claims concerning the types \(\alpha_3,\beta_2\) are consequences of \cite[Thm. 6.1]{Ma2a}. For Step 4, the formulas (4.9) and (4.10) in \cite[Thm. 4.4]{Ma2a} give an \(\mathbb{F}_5\)-basis of the space of relative differential factors, and the formulas (4.11) and (4.12) in \cite[Cor. 4.3]{Ma2a} determine bounds for the \(\mathbb{F}_5\)-dimension \(R\) of the space of \textit{relative} DPF in the field \(N\) of degree \(20\). The claims concerning the types \(\alpha_1,\alpha_2,\beta_1\) are consequences of \cite[Thm. 6.1]{Ma2a}. Concerning Step 5, the signature of \(N\) is \((r_1,r_2)=(0,10)\), whence the torsion free Dirichlet unit rank of \(N\) is given by \(r=r_1+r_2-1=9\). The claims about all types are consequences of \cite[Thm. 6.1]{Ma2a}, including information on the constitution of the norm group \(N_{N/K}(U_N)\). 
\end{proof} \begin{remark} \label{rmk:Classification} Whereas the execution of Step 1 and 2 in our Algorithm \ref{alg:Classification}, implemented as a Magma program script \cite{MAGMA}, is a matter of a few seconds on a machine with clock frequency at least \(2\,\)GHz, the CPU time for Step 3 lies in the range of several minutes. The time requirement for Step 4 and 5 can reach hours or even days in spite of code optimizations for the calculation of units, in particular the use of the Magma procedures \texttt{IndependentUnits()} and \texttt{SetOrderUnitsAreFundamental()} prior to the call of \texttt{UnitGroup()}. \end{remark} \subsection{Open problems} \label{ss:OpenQuestions} We conjecture that considerable amounts of CPU time can be saved in our Algorithm \ref{alg:Classification} by computing the logarithmic \(5\)-class numbers \(V_F:=v_5(h_F)\) of the fields \(F\in\lbrace L,M,N\rbrace\), which admit the determination of the logarithmic indices \(E\), resp. \(E^+\), of subfield units in the Parry \cite{Pa}, resp. Kobayashi \cite{Ky1,Ky2}, class number relation, according to the formulas \begin{equation} \label{eqn:LogInd} E=5+V_N-4\cdot V_L, \qquad E^+=2+V_M-2\cdot V_L. \end{equation} However, first there would be required rigorous proofs of the heuristic connections between \(E,E^+\) and the DPF types in Table \ref{tbl:LogInd}, where \((E,E^+)=(1,0)\) implies type \(\alpha_2\), \((E,E^+)=(2,0)\) implies type \(\alpha_3\), \((E,E^+)=(4,2)\) implies type \(\delta_1\), but \((E,E^+)=(2,1)\) admits types \(\alpha_1,\delta_1\), \((E,E^+)=(3,1)\) admits types \(\alpha_2,\beta_1,\delta_2\), \((E,E^+)=(4,1)\) admits types \(\beta_2,\zeta_2\), \((E,E^+)=(5,2)\) admits types \(\beta_1,\varepsilon,\zeta_1,\vartheta\), and \((E,E^+)=(6,2)\) admits types \(\gamma,\eta\). \((E,E^+)=(0,0)\) seems to be impossible. 
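As a quick consistency check of \eqref{eqn:LogInd}, the logarithmic indices can be evaluated directly from the logarithmic class numbers. A small Python sketch (illustrative only; the numerical values in the examples are taken from Table \ref{tbl:PureQuinticFields50}):

```python
def log_indices(V_L, V_M, V_N):
    """Logarithmic indices of subfield units in the Parry and Kobayashi
    class number relations, from V_F = v_5(h_F) for F in {L, M, N}."""
    E = 5 + V_N - 4 * V_L        # Parry:     E   = 5 + V_N - 4*V_L
    E_plus = 2 + V_M - 2 * V_L   # Kobayashi: E^+ = 2 + V_M - 2*V_L
    return E, E_plus
```

For \(D=31\) (type \(\alpha_1\)) with \((V_L,V_M,V_N)=(2,3,5)\) this gives \((E,E^+)=(2,1)\), and for \(D=2\) (type \(\varepsilon\)) with vanishing \(5\)-class numbers it gives \((5,2)\), both consistent with the heuristic assignments above.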
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Logarithmic indices \(E,E^+\) of subfield units for DPF types, T} \label{tbl:LogInd} \begin{center} \begin{tabular}{|c|cc|c|cc|} \hline T & \(E\) & \(E^+\) & or & \(E\) & \(E^+\) \\ \hline \(\alpha_1\) & \(2\) & \(1\) & & \(\) & \(\) \\ \(\alpha_2\) & \(1\) & \(0\) & & \(3\) & \(1\) \\ \(\alpha_3\) & \(2\) & \(0\) & & \(\) & \(\) \\ \(\beta_1\) & \(3\) & \(1\) & & \(5\) & \(2\) \\ \(\beta_2\) & \(4\) & \(1\) & & \(\) & \(\) \\ \(\gamma\) & \(6\) & \(2\) & & \(\) & \(\) \\ \hline \(\delta_1\) & \(2\) & \(1\) & & \(4\) & \(2\) \\ \(\delta_2\) & \(3\) & \(1\) & & \(\) & \(\) \\ \(\varepsilon\) & \(5\) & \(2\) & & \(\) & \(\) \\ \hline \(\zeta_1\) & \(5\) & \(2\) & & \(\) & \(\) \\ \(\zeta_2\) & \(4\) & \(1\) & & \(\) & \(\) \\ \(\eta\) & \(6\) & \(2\) & & \(\) & \(\) \\ \hline \(\vartheta\) & \(5\) & \(2\) & & \(\) & \(\) \\ \hline \end{tabular} \end{center} \end{table} \subsection{Conventions and notation in the tables} \label{ss:Conventions} The \textit{normalized} radicand \(D=q_1^{e_1}\cdots q_s^{e_s}\) of a pure metacyclic field \(N\) of degree \(20\) is minimal among the powers \(D^n\), \(1\le n\le 4\), with corresponding exponents \(e_j\) reduced modulo \(5\). The normalization of the radicands \(D\) provides a warranty that all fields are pairwise non-isomorphic \cite[Prop. 2.1]{Ma2a}. Prime factors are given for composite radicands \(D\) only. Dedekind's \textit{species}, S, of radicands is refined by distinguishing \(5\mid D\) (species 1a) and \(\gcd(5,D) = 1\) (species 1b) among radicands \(D\not\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\) (species 1). By the species and factorization of \(D\), the shape of the \textit{conductor} \(f\) is determined. We give the fourth power \(f^4\) to avoid fractional exponents. Additionally, the \textit{multiplicity} \(m\) indicates the number of non-isomorphic fields sharing a common conductor \(f\) (\S \ref{s:Formulas}). 
The symbol \(V_F\) briefly denotes the \(5\)-valuation of the order \(h_F=\#\mathrm{Cl}(F)\) of the class group \(\mathrm{Cl}(F)\) of a number field \(F\). By \(E\) we denote the exponent of the power in the \textit{index of subfield units} \((U_N:U_0)=5^E\). An asterisk denotes the smallest radicand with given Dedekind kind, DPF type and \(5\)-class groups \(\mathrm{Cl}_5(F)\), \(F\in\lbrace L,M,N\rbrace\). The latter are usually elementary abelian, except for the cases indicated by an additional asterisk (see \S\ \ref{ss:NonElem}). Principal factors, P, are listed when their constitution is not a consequence of the other information. According to \cite[Thm. 7.2., item (1)]{Ma2a} it suffices to give the rational integer norm of \textit{absolute} principal factors. For \textit{intermediate} principal factors, we use the symbols \(\mathcal{K}:=\mathcal{L}^{1-\tau}=\alpha\mathcal{O}_M\) with \(\alpha\in M\) or \(\mathcal{L}=\lambda\mathcal{O}_M\) with a prime element \(\lambda\in M\) (which implies \(\mathcal{L}^\tau=\lambda^\tau\mathcal{O}_M\) and thus also \(\mathcal{K}=\lambda^{1-\tau}\mathcal{O}_M\)). Here, \((\mathcal{L}^{1+\tau})^5=\ell\mathcal{O}_M\) when a prime \(\ell\equiv\pm 1\,(\mathrm{mod}\,5)\) divides the radicand \(D\). For \textit{relative} principal factors, we use the symbols \(\mathfrak{K}_1:=\mathfrak{L}^{1+4\tau^2+2\tau+3\tau^3}=A_1\mathcal{O}_N\) and \(\mathfrak{K}_2:=\mathfrak{L}^{1+4\tau^2+3\tau+2\tau^3}=A_2\mathcal{O}_N\) with \(A_1,A_2\in N\). Here, \((\mathfrak{L}^{1+\tau+\tau^2+\tau^3})^5=\ell\mathcal{O}_N\) when a prime number \(\ell\equiv +1\,(\mathrm{mod}\,5)\) divides the radicand \(D\). (Kernel ideals in \cite[\S\ 7]{Ma2a}.) The quartet \((1,2,4,5)\) indicates conditions which either enforce a reduction of possible DPF types or enable certain DPF types. 
The lack of a prime divisor \(\ell\equiv\pm 1\,(\mathrm{mod}\,5)\) together with the existence of a prime divisor \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\) and \(q\ne 5\) of \(D\) is indicated by a symbol \(\times\) for the component \(1\). In these cases, only the two DPF types \(\gamma\) and \(\varepsilon\) can occur \cite[Thm. 8.1]{Ma2a}. A symbol \(\times\) for the component \(2\) emphasizes a prime divisor \(\ell\equiv -1\,(\mathrm{mod}\,5)\) of \(D\) and the possibility of intermediate principal factors in \(M\), like \(\mathcal{L}\) and \(\mathcal{K}\). A symbol \(\times\) for the component \(4\) emphasizes a prime divisor \(\ell\equiv +1\,(\mathrm{mod}\,5)\) of \(D\) and the possibility of relative principal factors in \(N\), like \(\mathfrak{K}_1\) and \(\mathfrak{K}_2\). The \(\times\) symbol is replaced by \(\otimes\) if the facility is used completely, and by \((\times)\) if the facility is only used partially. If \(D\) has only prime divisors \(q\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\) or \(q=5\), a symbol \(\times\) is placed in component \(5\). In these cases, \(\zeta\) can occur as a norm \(N_{N/K}(Z)\) of some unit in \(Z\in U_N\). If it actually does, the \(\times\) is replaced by \(\otimes\) \cite[\S\ 8]{Ma2a}. \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(40\) pure metacyclic fields with normalized radicands \(2\le D\le 52\)} \label{tbl:PureQuinticFields50} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 1 & *\(2\) & & 1b & \(5^2 2^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 2 & \(3\) & & 1b & \(5^2 3^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 3 & *\(5\) & & 1a & \(5^6\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 4 & *\(6\) & \(2\cdot 3\) & 1b & \(5^2 2^4 3^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 5 & *\(7\) & & 2 & \(7^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 6 & *\(10\) & \(2\cdot 5\) & 1a & \(5^6 2^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 7 & *\(11\) & & 1b & \(5^2 11^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 8 & \(12\) & \(2^2\cdot 3\) & 1b & \(5^2 2^4 3^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 9 & \(13\) & & 1b & \(5^2 13^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 10 & *\(14\) & \(2\cdot 7\) & 1b & \(5^2 2^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 11 & \(15\) & \(3\cdot 5\) & 1a & \(5^6 3^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 12 & \(17\) & & 1b & \(5^2 17^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 13 & *\(18\) & \(2\cdot 3^2\) & 2 & \(2^4 3^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 14 & *\(19\) & & 1b & \(5^2 19^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 15 & \(20\) & \(2^2\cdot 5\) & 1a & \(5^6 2^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 16 & 
\(21\) & \(3\cdot 7\) & 1b & \(5^2 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 17 & *\(22\) & \(2\cdot 11\) & 1b & \(5^2 2^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 18 & \(23\) & & 1b & \(5^2 23^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 19 & \(26\) & \(2\cdot 13\) & 2 & \(2^4 13^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 20 & \(28\) & \(2^2\cdot 7\) & 1b & \(5^2 2^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 21 & \(29\) & & 1b & \(5^2 29^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 22 & *\(30\) & \(2\cdot 3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 23 & *\(31\) & & 1b & \(5^2 31^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ 24 & *\(33\) & \(3\cdot 11\) & 1b & \(5^2 3^4 11^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 25 & \(34\) & \(2\cdot 17\) & 1b & \(5^2 2^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 26 & *\(35\) & \(5\cdot 7\) & 1a & \(5^6 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 27 & \(37\) & & 1b & \(5^2 37^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 28 & *\(38\) & \(2\cdot 19\) & 1b & \(5^2 2^4 19^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 29 & \(39\) & \(3\cdot 13\) & 1b & \(5^2 3^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 30 & \(40\) & \(2^3\cdot 5\) & 1a & \(5^6 2^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & 
\((\times,-,-,-)\) & \(\varepsilon\) & \\ 31 & \(41\) & & 1b & \(5^2 41^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 32 & *\(42\) & \(2\cdot 3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^2\) \\ 33 & \(43\) & & 2 & \(43^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 34 & \(44\) & \(2^2\cdot 11\) & 1b & \(5^2 2^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 35 & \(45\) & \(3^2\cdot 5\) & 1a & \(5^6 3^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 36 & \(46\) & \(2\cdot 23\) & 1b & \(5^2 2^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 37 & \(47\) & & 1b & \(5^2 47^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 38 & \(48\) & \(2^4\cdot 3\) & 1b & \(5^2 2^4 3^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 39 & \(51\) & \(3\cdot 17\) & 2 & \(3^4 17^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 40 & \(52\) & \(2^2\cdot 13\) & 1b & \(5^2 2^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(53\le D\le 104\)} \label{tbl:PureQuinticFields100} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 41 & \(53\) & & 1b & \(5^2 53^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 42 & *\(55\) & \(5\cdot 11\) & 1a & \(5^6 11^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 43 & \(56\) & \(2^3\cdot 7\) & 1b & \(5^2 2^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 44 & *\(57\) & \(3\cdot 19\) & 2 & \(3^4 19^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 45 & \(58\) & \(2\cdot 29\) & 1b & \(5^2 2^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(29\cdot 5^2,\mathcal{K}\) \\ 46 & \(59\) & & 1b & \(5^2 59^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 47 & \(60\) & \(2^2\cdot 3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 48 & \(61\) & & 1b & \(5^2 61^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 49 & \(62\) & \(2\cdot 31\) & 1b & \(5^2 2^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 50 & \(63\) & \(3^2\cdot 7\) & 1b & \(5^2 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 51 & \(65\) & \(5\cdot 13\) & 1a & \(5^6 13^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 52 & *\(66\) & \(2\cdot 3\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^3\) \\ 53 & \(67\) & & 1b & \(5^2 67^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 54 & \(68\) & \(2^2\cdot 17\) & 2 & \(2^4 17^4\) & 
\(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 55 & \(69\) & \(3\cdot 23\) & 1b & \(5^2 3^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 56 & *\(70\) & \(2\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 57 & \(71\) & & 1b & \(5^2 71^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 58 & \(73\) & & 1b & \(5^2 73^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 59 & \(74\) & \(2\cdot 37\) & 2 & \(2^4 37^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 60 & \(75\) & \(3\cdot 5^2\) & 1a & \(5^6 3^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 61 & \(76\) & \(2^2\cdot 19\) & 2 & \(2^4 19^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 62 & *\(77\) & \(7\cdot 11\) & 1b & \(5^2 7^4 11^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5^3,\mathcal{K}\) \\ 63 & *\(78\) & \(2\cdot 3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5^3\) \\ 64 & \(79\) & & 1b & \(5^2 79^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 65 & \(80\) & \(2^4\cdot 5\) & 1a & \(5^6 2^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 66 & *\(82\) & \(2\cdot 41\) & 2 & \(2^4 41^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 67 & \(83\) & & 1b & \(5^2 83^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 68 & \(84\) & \(2^2\cdot 3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & 
\((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5\) \\ 69 & \(85\) & \(5\cdot 17\) & 1a & \(5^6 17^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 70 & \(86\) & \(2\cdot 43\) & 1b & \(5^2 2^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 71 & \(87\) & \(3\cdot 29\) & 1b & \(5^2 3^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(29,\mathcal{L}\) \\ 72 & \(88\) & \(2^3\cdot 11\) & 1b & \(5^2 2^4 11^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 73 & \(89\) & & 1b & \(5^2 89^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 74 & \(90\) & \(2\cdot 3^2\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 75 & \(91\) & \(7\cdot 13\) & 1b & \(5^2 7^4 13^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 76 & \(92\) & \(2^2\cdot 23\) & 1b & \(5^2 2^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 77 & \(93\) & \(3\cdot 31\) & 2 & \(3^4 31^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 78 & \(94\) & \(2\cdot 47\) & 1b & \(5^2 2^4 47^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 79 & *\(95\) & \(5\cdot 19\) & 1a & \(5^6 19^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 80 & \(97\) & & 1b & \(5^2 97^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 81 & \(99\) & \(3^2\cdot 11\) & 2 & \(3^4 11^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 82 & *\(101\) & & 2 & \(101^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,\otimes)\) & 
\(\zeta_1\) & \(\mathfrak{K}_2\) \\ 83 & \(102\) & \(2\cdot 3\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3\cdot 5\) \\ 84 & \(103\) & & 1b & \(5^2 103^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 85 & \(104\) & \(2^3\cdot 13\) & 1b & \(5^2 2^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(105\le D\le 155\)} \label{tbl:PureQuinticFields150} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. & \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 86 & \(105\) & \(3\cdot 5\cdot 7\) & 1a & \(5^6 3^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 87 & \(106\) & \(2\cdot 53\) & 1b & \(5^2 2^4 53^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 88 & \(107\) & & 2 & \(107^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 89 & \(109\) & & 1b & \(5^2 109^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 90 & *\(110\) & \(2\cdot 5\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 91 & \(111\) & \(3\cdot 37\) & 1b & \(5^2 3^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 92 & \(112\) & \(2^4\cdot 7\) & 1b & \(5^2 2^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 93 & \(113\) & & 1b & \(5^2 113^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 94 & *\(114\) & \(2\cdot 3\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & 
\(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5^3,3\cdot 5^3\) \\ 95 & \(115\) & \(5\cdot 23\) & 1a & \(5^6 23^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 96 & \(116\) & \(2^2\cdot 29\) & 1b & \(5^2 2^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(29\cdot 5,\mathcal{K}\) \\ 97 & \(117\) & \(3^2\cdot 13\) & 1b & \(5^2 3^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 98 & \(118\) & \(2\cdot 59\) & 2 & \(2^4 59^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 99 & \(119\) & \(7\cdot 17\) & 1b & \(5^2 7^4 17^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 100 & \(120\) & \(2^3\cdot 3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 101 & \(122\) & \(2\cdot 61\) & 1b & \(5^2 2^4 61^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(61\cdot 5^3,\mathcal{K}\) \\ 102 & *\(123\) & \(3\cdot 41\) & 1b & \(5^2 3^4 41^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 103 & \(124\) & \(2^2\cdot 31\) & 2 & \(2^4 31^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 104 & *\(126\) & \(2\cdot 3^2\cdot 7\) & 2 & \(2^4 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 105 & \(127\) & & 1b & \(5^2 127^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 106 & \(129\) & \(3\cdot 43\) & 1b & \(5^2 3^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 107 & \(130\) & \(2\cdot 5\cdot 13\) & 1a & \(5^6 2^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 108 & *\(131\) & & 1b & \(5^2 
131^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 109 & *\(132\) & \(2^2\cdot 3\cdot 11\) & 2 & \(2^4 3^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 110 & *\(133\) & \(7\cdot 19\) & 1b & \(5^2 7^4 19^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7,\mathcal{L}\) \\ 111 & \(134\) & \(2\cdot 67\) & 1b & \(5^2 2^4 67^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 112 & \(136\) & \(2^3\cdot 17\) & 1b & \(5^2 2^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 113 & \(137\) & & 1b & \(5^2 137^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 114 & \(138\) & \(2\cdot 3\cdot 23\) & 1b & \(5^2 2^4 3^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 115 & *\(139\) & & 1b & \(5^2 139^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{L}\) \\ 116 & *\(140\) & \(2^2\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) \\ 117 & *\(141\) & \(3\cdot 47\) & 1b & \(5^2 3^4 47^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(47\cdot 5\) \\ 118 & \(142\) & \(2\cdot 71\) & 1b & \(5^2 2^4 71^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(71\cdot 5^2,\mathcal{K}\) \\ 119 & \(143\) & \(11\cdot 13\) & 2 & \(11^4 13^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 120 & \(145\) & \(5\cdot 29\) & 1a & \(5^6 29^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 121 & \(146\) & \(2\cdot 73\) & 1b & \(5^2 2^4 73^4\) & \(3\) & \(0\) & \(0\) & \(1\) & 
\(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 122 & \(147\) & \(3\cdot 7^2\) & 1b & \(5^2 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 123 & \(148\) & \(2^2\cdot 37\) & 1b & \(5^2 2^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 124 & *\(149\) & & 2 & \(149^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 125 & \(150\) & \(2\cdot 3\cdot 5^2\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 126 & *\(151\) & & 2 & \(151^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 127 & \(152\) & \(2^3\cdot 19\) & 1b & \(5^2 2^4 19^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19\cdot 5^2,\mathcal{K}\) \\ 128 & \(153\) & \(3^2\cdot 17\) & 1b & \(5^2 3^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 129 & *\(154\) & \(2\cdot 7\cdot 11\) & 1b & \(5^2 2^4 7^4 11^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 11^2,\mathcal{K}\) \\ 130 & *\(155\) & \(5\cdot 31\) & 1a & \(5^6 31^4\) & \(4\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(156\le D\le 207\)} \label{tbl:PureQuinticFields200} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 131 & \(156\) & \(2^2\cdot 3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,13\cdot 5^2\) \\ 132 & \(157\) & & 2 & \(157^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 133 & \(158\) & \(2\cdot 79\) & 1b & \(5^2 2^4 79^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 134 & \(159\) & \(3\cdot 53\) & 1b & \(5^2 3^4 53^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(5\) \\ 135 & \(161\) & \(7\cdot 23\) & 1b & \(5^2 7^4 23^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 136 & \(163\) & & 1b & \(5^2 163^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 137 & \(164\) & \(2^2\cdot 41\) & 1b & \(5^2 2^4 41^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(41\cdot 5,\mathcal{K}\) \\ 138 & \(165\) & \(3\cdot 5\cdot 11\) & 1a & \(5^6 3^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5\cdot 11^2,\mathcal{K}\) \\ 139 & \(166\) & \(2\cdot 83\) & 1b & \(5^2 2^4 83^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(5\) \\ 140 & \(167\) & & 1b & \(5^2 167^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 141 & \(168\) & \(2^3\cdot 3\cdot 7\) & 2 & \(2^4 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 142 & \(170\) & \(2\cdot 5\cdot 17\) & 1a & \(5^6 2^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 143 & *\(171\) & \(3^2\cdot 19\) & 1b & \(5^2 3^4 19^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(19\cdot 5^2\) \\ 144 & \(172\) & 
\(2^2\cdot 43\) & 1b & \(5^2 2^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 145 & \(173\) & & 1b & \(5^2 173^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 146 & *\(174\) & \(2\cdot 3\cdot 29\) & 2 & \(2^4 3^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3^2\cdot 29,\mathcal{K}\) \\ 147 & \(175\) & \(5^2\cdot 7\) & 1a & \(5^6 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 148 & \(176\) & \(2^4\cdot 11\) & 2 & \(2^4 11^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 149 & \(177\) & \(3\cdot 59\) & 1b & \(5^2 3^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3,\mathcal{L}\) \\ 150 & \(178\) & \(2\cdot 89\) & 1b & \(5^2 2^4 89^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(89\cdot 5^2\) \\ 151 & \(179\) & & 1b & \(5^2 179^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 152 & *\(180\) &\(2^2\cdot 3^2\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\) \\ 153 & \(181\) & & 1b & \(5^2 181^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 154 & *\(182\) & \(2\cdot 7\cdot 13\) & 2 & \(2^4 7^4 13^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) \\ 155 & \(183\) & \(3\cdot 61\) & 1b & \(5^2 3^4 61^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(61\cdot 5^3,\mathcal{K}\) \\ 156 & \(184\) & \(2^3\cdot 23\) & 1b & \(5^2 2^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 157 & \(185\) & \(5\cdot 37\) & 1a & \(5^6 37^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & 
\((\times,-,-,-)\) & \(\varepsilon\) & \\ 158 & *\(186\) & \(2\cdot 3\cdot 31\) & 1b & \(5^2 2^4 3^4 31^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_2\) \\ 159 & \(187\) & \(11\cdot 17\) & 1b & \(5^2 11^4 17^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5,\mathcal{K}\) \\ 160 & \(188\) & \(2^2\cdot 47\) & 1b & \(5^2 2^4 47^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 161 & *\(190\) & \(2\cdot 5\cdot 19\) & 1a & \(5^6 2^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 162 & *\(191\) & & 1b & \(5^2 191^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_1\) \\ 163 & \(193\) & & 2 & \(193^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 164 & \(194\) & \(2\cdot 97\) & 1b & \(5^2 2^4 97^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(97\cdot 5^2\) \\ 165 & \(195\) & \(3\cdot 5\cdot 13\) & 1a & \(5^6 3^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 166 & \(197\) & & 1b & \(5^2 197^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 167 & \(198\) & \(2\cdot 3^2\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3\cdot 5^3,11\cdot 5^3\) \\ 168 & \(199\) & & 2 & \(199^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 169 & \(201\) & \(3\cdot 67\) & 2 & \(3^4 67^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 170 & *\(202\) & \(2\cdot 101\) & 1b & \(5^2 2^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 171 & *\(203\) & \(7\cdot 29\) & 1b & \(5^2 7^4 
29^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(29\cdot 5\) \\ 172 & \(204\) & \(2^2\cdot 3\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5^4,17\cdot 5^3\) \\ 173 & \(205\) & \(5\cdot 41\) & 1a & \(5^6 41^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 174 & \(206\) & \(2\cdot 103\) & 1b & \(5^2 2^4 103^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) \\ 175 & \(207\) & \(3^2\cdot 23\) & 2 & \(3^4 23^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(208\le D\le 259\)} \label{tbl:PureQuinticFields250} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 176 & \(208\) & \(2^4\cdot 13\) & 1b & \(5^2 2^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 177 & *\(209\) & \(11\cdot 19\) & 1b & \(5^2 11^4 19^4\) & \(3\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,\times,-)\) & \(\beta_2\) & \(11\cdot 5^2,\mathcal{K}_{(19)}\) \\ 178 & *\(210\) & \(2\cdot 3\cdot 5\cdot 7\) & 1a & \(5^6 2^4 3^4 7^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,3^2\cdot 5\) \\ 179 & *\(211\) & & 1b & \(5^2 211^4\) & \(1\) & \(3\) & \(5\) & \(9\) & \(2\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) \\ 180 & \(212\) & \(2^2\cdot 53\) & 1b & \(5^2 2^4 53^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 181 & \(213\) & \(3\cdot 71\) & 1b & \(5^2 3^4 71^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5^3,\mathcal{K}\) \\ 182 & \(214\) & \(2\cdot 107\) & 1b & \(5^2 2^4 107^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 183 & \(215\) & \(5\cdot 43\) & 1a & \(5^6 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 184 & \(217\) & \(7\cdot 31\) & 1b & \(5^2 7^4 31^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 185 & *\(218\) & \(2\cdot 109\) & 2 & \(2^4 109^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\) \\ 186 & \(219\) & \(3\cdot 73\) & 1b & \(5^2 3^4 73^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 187 & \(220\) & \(2^2\cdot 5\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 188 & \(221\) & \(13\cdot 17\) & 1b & \(5^2 13^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & 
\((\times,-,-,-)\) & \(\gamma\) & \\ 189 & \(222\) & \(2\cdot 3\cdot 37\) & 1b & \(5^2 2^4 3^4 37^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(37,2\cdot 5^2\) \\ 190 & \(223\) & & 1b & \(5^2 223^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 191 & \(226\) & \(2\cdot 113\) & 2 & \(2^4 113^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 192 & \(227\) & & 1b & \(5^2 227^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 193 & \(228\) & \(2^2\cdot 3\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 5,19\cdot 5\) \\ 194 & \(229\) & & 1b & \(5^2 229^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 195 & \(230\) & \(2\cdot 5\cdot 23\) & 1a & \(5^6 2^4 23^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 196 & *\(231\) & \(3\cdot 7\cdot 11\) & 1b & \(5^2 3^4 7^4 11^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(11,7\cdot 5^2\) \\ 197 & \(232\) & \(2^3\cdot 29\) & 2 & \(2^4 29^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 198 & \(233\) & & 1b & \(5^2 233^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 199 & \(234\) & \(2\cdot 3^2\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5\) \\ 200 & \(235\) & \(5\cdot 47\) & 1a & \(5^6 47^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 201 & \(236\) & \(2^2\cdot 59\) & 1b & \(5^2 2^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ 202 & \(237\) & \(3\cdot 79\) & 1b & \(5^2 3^4 79^4\) & \(3\) & \(1\) & \(1\) & 
\(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 5,\mathcal{K}\) \\ 203 & \(238\) & \(2\cdot 7\cdot 17\) & 1b & \(5^2 2^4 7^4 17^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,2^4\cdot 7\) \\ 204 & \(239\) & & 1b & \(5^2 239^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 205 & \(240\) & \(2^4\cdot 3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\) \\ 206 & \(241\) & & 1b & \(5^2 241^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 207 & \(244\) & \(2^2\cdot 61\) & 1b & \(5^2 2^4 61^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 208 & \(245\) & \(5\cdot 7^2\) & 1a & \(5^6 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 209 & \(246\) & \(2\cdot 3\cdot 41\) & 1b & \(5^2 2^4 3^4 41^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 210 & *\(247\) & \(13\cdot 19\) & 1b & \(5^2 13^4 19^4\) & \(3\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \\ 211 & \(248\) & \(2^3\cdot 31\) & 1b & \(5^2 2^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 212 & \(249\) & \(3\cdot 83\) & 2 & \(3^4 83^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 213 & \(251\) & & 2 & \(251^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 214 & \(252\) & \(2^2\cdot 3^2\cdot 7\)& 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5\) \\ 215 & *\(253\) & \(11\cdot 23\) & 1b & \(5^2 11^4 23^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & 
\(\beta_1\) & \(11,\mathfrak{K}_1\) \\ 216 & \(254\) & \(2\cdot 127\) & 1b & \(5^2 2^4 127^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 217 & \(255\) & \(3\cdot 5\cdot 17\) & 1a & \(5^6 3^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 218 & \(257\) & & 2 & \(257^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 219 & \(258\) & \(2\cdot 3\cdot 43\) & 1b & \(5^2 2^4 3^4 43^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3\cdot 5\) \\ 220 & *\(259\) & \(7\cdot 37\) & 1b & \(5^2 7^4 37^4\) & \(4\) & \(1\) & \(2\) & \(5*\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(260\le D\le 307\)} \label{tbl:PureQuinticFields300} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 221 & \(260\) & \(2^2\cdot 5\cdot 13\) & 1a & \(5^6 2^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 222 & \(261\) & \(3^2\cdot 29\) & 1b & \(5^2 3^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(29,\mathcal{L}\) \\ 223 & \(262\) & \(2\cdot 131\) & 1b & \(5^2 2^4 131^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(131\cdot 5,\mathcal{K}\) \\ 224 & \(263\) & & 1b & \(5^2 263^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 225 & \(264\) & \(2^3\cdot 3\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,2\cdot 11\) \\ 226 & \(265\) & \(5\cdot 53\) & 1a & \(5^6 53^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 227 & *\(266\) & \(2\cdot 7\cdot 19\) & 1b & \(5^2 2^4 7^4 19^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(5,2^3\cdot 7\) \\ 228 & \(267\) & \(3\cdot 89\) & 1b & \(5^2 3^4 89^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(89\cdot 5^2,\mathcal{K}\) \\ 229 & \(268\) & \(2^2\cdot 67\) & 2 & \(2^4 67^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 230 & \(269\) & & 1b & \(5^2 269^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 231 & \(270\) & \(2\cdot 3^3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 232 & \(271\) & & 1b & \(5^2 271^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_1\) \\ 233 & \(272\) & \(2^4\cdot 17\) & 1b & \(5^2 2^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & 
\(\gamma\) & \\ 234 & *\(273\) & \(3\cdot 7\cdot 13\) & 1b & \(5^2 3^4 7^4 13^4\) & \(12\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\cdot 13^2\cdot 5^4\) \\ 235 & \(274\) & \(2\cdot 137\) & 2 & \(2^4 137^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 236 & *\(275\) & \(5^2\cdot 11\) & 1a & \(5^6 11^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 237 & *\(276\) & \(2^2\cdot 3\cdot 23\) & 2 & \(2^4 3^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 238 & \(277\) & & 1b & \(5^2 277^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 239 & \(278\) & \(2\cdot 139\) & 1b & \(5^2 2^4 139^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(139\cdot 5,\mathcal{K}\) \\ 240 & \(279\) & \(3^2\cdot 31\) & 1b & \(5^2 3^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 241 & \(280\) & \(2^3\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 242 & *\(281\) & & 1b & \(5^2 281^4\) & \(1\) & \(3\) & \(5*\) & \(9*\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ 243 & \(282\) & \(2\cdot 3\cdot 47\) & 2 & \(2^4 3^4 47^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 244 & \(283\) & & 1b & \(5^2 283^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 245 & \(284\) & \(2^2\cdot 71\) & 1b & \(5^2 2^4 71^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5^3,\mathcal{K}\) \\ 246 & *\(285\) & \(3\cdot 5\cdot 19\) & 1a & \(5^6 3^4 19^4\) & \(16\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \\ 247 & *\(286\) & \(2\cdot 11\cdot 13\) & 1b &\(5^2 2^4 11^4 
13^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11^2\cdot 5,\mathcal{K}\) \\ 248 & *\(287\) & \(7\cdot 41\) & 1b & \(5^2 7^4 41^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 249 & *\(290\) & \(2\cdot 5\cdot 29\) & 1a & \(5^6 2^4 29^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(29\) \\ 250 & \(291\) & \(3\cdot 97\) & 1b & \(5^2 3^4 97^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 251 & \(292\) & \(2^2\cdot 73\) & 1b & \(5^2 2^4 73^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 252 & \(293\) & & 2 & \(293^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 253 & \(294\) & \(2\cdot 3\cdot 7^2\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,7^2\cdot 5\) \\ 254 & \(295\) & \(5\cdot 59\) & 1a & \(5^6 59^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 255 & \(296\) & \(2^3\cdot 37\) & 1b & \(5^2 2^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 256 & \(297\) & \(3^3\cdot 11\) & 1b & \(5^2 3^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5,\mathcal{K}\) \\ 257 & *\(298\) & \(2\cdot 149\) & 1b & \(5^2 2^4 149^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) \\ 258 & \(299\) & \(13\cdot 23\) & 2 & \(13^4 23^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 259 & *\(301\) & \(7\cdot 43\) & 2 & \(7^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 260 & *\(302\) & \(2\cdot 151\) & 1b & \(5^2 2^4 151^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(2,\mathfrak{K}_1\) 
\\ 261 & \(303\) & \(3\cdot 101\) & 1b & \(5^2 3^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(101\cdot 5,\mathcal{K}\) \\ 262 & \(304\) & \(2^4\cdot 19\) & 1b & \(5^2 2^4 19^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19\cdot 5^4,\mathcal{K}\) \\ 263 & \(305\) & \(5\cdot 61\) & 1a & \(5^6 61^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 264 & \(306\) & \(2\cdot 3^2\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5,3\cdot 17\) \\ 265 & \(307\) & & 2 & \(307^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(308\le D\le 357\)} \label{tbl:PureQuinticFields350} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 266 & \(308\) & \(2^2\cdot 7\cdot 11\) & 1b & \(5^2 2^4 7^4 11^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(11\cdot 5,7\cdot 5^2\) \\ 267 & \(309\) & \(3\cdot 103\) & 1b & \(5^2 3^4 103^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(103\cdot 5^2\)\\ 268 & \(310\) & \(2\cdot 5\cdot 31\) & 1a & \(5^6 2^4 31^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 269 & \(311\) & & 1b & \(5^2 311^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 270 & \(312\) & \(2^3\cdot 3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,2^4\cdot 13\) \\ 271 & \(313\) & & 1b & \(5^2 313^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 272 & \(314\) & \(2\cdot 157\) & 1b & \(5^2 2^4 157^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 273 & \(315\) & \(3^2\cdot 5\cdot 7\) & 1a & \(5^6 3^4 7^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) \\ 274 & \(316\) & \(2^2\cdot 79\) & 1b & \(5^2 2^4 79^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 275 & \(317\) & & 1b & \(5^2 317^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 276 & \(318\) & \(2\cdot 3\cdot 53\) & 2 & \(2^4 3^4 53^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 277 & *\(319\) & \(11\cdot 29\) & 1b & \(5^2 11^4 29^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(29)}\) \\ 278 & \(321\) & \(3\cdot 107\) & 1b & \(5^2 3^4 107^4\) & \(4\) 
& \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 279 & \(322\) & \(2\cdot 7\cdot 23\) & 1b & \(5^2 2^4 7^4 23^4\) & \(12\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\cdot 23^2\cdot 5^3\) \\ 280 & \(323\) & \(17\cdot 19\) & 1b & \(5^2 17^4 19^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19\cdot 5^2,\mathcal{K}\) \\ 281 & \(325\) & \(5^2\cdot 13\) & 1a & \(5^6 13^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 282 & \(326\) & \(2\cdot 163\) & 2 & \(2^4 163^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 283 & \(327\) & \(3\cdot 109\) & 1b & \(5^2 3^4 109^4\) & \(3\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \\ 284 & \(328\) & \(2^3\cdot 41\) & 1b & \(5^2 2^4 41^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 41\cdot 5,\mathcal{K}\) \\ 285 & *\(329\) & \(7\cdot 47\) & 1b & \(5^2 7^4 47^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(47\) \\ 286 & *\(330\) & \(2\cdot 3\cdot 5\cdot 11\) & 1a & \(5^6 2^4 3^4 11^4\) & \(64\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3\cdot 11,5\cdot 11^3\) \\ 287 & \(331\) & & 1b & \(5^2 331^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 288 & \(332\) & \(2^2\cdot 83\) & 2 & \(2^4 83^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 289 & \(333\) & \(3^2\cdot 37\) & 1b & \(5^2 3^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 290 & \(334\) & \(2\cdot 167\) & 1b & \(5^2 2^4 167^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 291 & \(335\) & \(5\cdot 67\) & 1a & \(5^6 67^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & 
\\ 292 & \(336\) & \(2^4\cdot 3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\cdot 5^4\) \\ 293 & \(337\) & & 1b & \(5^2 337^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 294 & \(339\) & \(3\cdot 113\) & 1b & \(5^2 3^4 113^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 295 & \(340\) & \(2^2\cdot 5\cdot 17\) & 1a & \(5^6 2^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 296 & *\(341\) & \(11\cdot 31\) & 1b & \(5^2 11^4 31^4\) & \(3\) & \(3\) & \(5\) & \(9\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(31),2}^3,\) \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),2}\mathfrak{K}_{(31),1}\) \\ 297 & \(342\) & \(2\cdot 3^2\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(19,2\cdot 5^2\) \\ 298 & \(344\) & \(2^3\cdot 43\) & 1b & \(5^2 2^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 299 & \(345\) & \(3\cdot 5\cdot 23\) & 1a & \(5^6 3^4 23^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 300 & \(346\) & \(2\cdot 173\) & 1b & \(5^2 2^4 173^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(173\cdot 5\) \\ 301 & \(347\) & & 1b & \(5^2 347^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 302 & *\(348\) & \(2^2\cdot 3\cdot 29\) & 1b & \(5^2 2^4 3^4 29^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 3\cdot 5\) \\ 303 & \(349\) & & 2 & \(349^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 304 & \(350\) & \(2\cdot 5^2\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 305 
& \(351\) & \(3^3\cdot 13\) & 2 & \(3^4 13^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 306 & \(353\) & & 1b & \(5^2 353^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 307 & \(354\) & \(2\cdot 3\cdot 59\) & 1b & \(5^2 2^4 3^4 59^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(5,2\cdot 3^2\) \\ 308 & \(355\) & \(5\cdot 71\) & 1a & \(5^6 71^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 309 & \(356\) & \(2^2\cdot 89\) & 1b & \(5^2 2^4 89^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(89\cdot 5^4\) \\ 310 & \(357\) & \(3\cdot 7\cdot 17\) & 2 & \(3^4 7^4 17^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(358\le D\le 408\)} \label{tbl:PureQuinticFields400} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 311 & \(358\) & \(2\cdot 179\) & 1b & \(5^2 2^4 179^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2,\mathcal{L}\) \\ 312 & \(359\) & & 1b & \(5^2 359^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{L}\) \\ 313 & \(360\) & \(2^3\cdot 3^2\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 314 & \(362\) & \(2\cdot 181\) & 1b & \(5^2 2^4 181^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(181\cdot 5,\mathcal{K}\) \\ 315 & \(364\) & \(2^2\cdot 7\cdot 13\) & 1b & \(5^2 2^4 7^4 13^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,7\cdot 13\) \\ 316 & \(365\) & \(5\cdot 73\) & 1a & \(5^6 73^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 317 & \(366\) & \(2\cdot 3\cdot 61\) & 1b & \(5^2 2^4 3^4 61^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(3\cdot 61^2\cdot 5^4,\mathfrak{K}_1\) \\ 318 & \(367\) & & 1b & \(5^2 367^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 319 & \(368\) & \(2^4\cdot 23\) & 2 & \(2^4 23^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 320 & \(369\) & \(3^2\cdot 41\) & 1b & \(5^2 3^4 41^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 321 & \(370\) & \(2\cdot 5\cdot 37\) & 1a & \(5^6 2^4 37^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 322 & \(371\) & \(7\cdot 53\) & 1b & \(5^2 7^4 53^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 323 & \(372\) & \(2^2\cdot 3\cdot 31\) & 1b & \(5^2 2^4 3^4 31^4\) & \(13\) & \(1\) & \(2\) & \(5\) & 
\(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5,2\cdot 31\) \\ 324 & \(373\) & & 1b & \(5^2 373^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 325 & \(374\) & \(2\cdot 11\cdot 17\) & 2 & \(2^4 11^4 17^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 326 & \(376\) & \(2^3\cdot 47\) & 2 & \(2^4 47^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 327 & *\(377\) & \(13\cdot 29\) & 1b & \(5^2 13^4 29^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 328 & \(378\) & \(2\cdot 3^3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^2,3^3\cdot 7^2\) \\ 329 & *\(379\) & & 1b & \(5^2 379^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) \\ 330 & \(380\) & \(2^2\cdot 5\cdot 19\) & 1a & \(5^6 2^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19,\mathcal{L}\) \\ 331 & \(381\) & \(3\cdot 127\) & 1b & \(5^2 3^4 127^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 332 & \(382\) & \(2\cdot 191\) & 2 & \(2^4 191^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 333 & \(383\) & & 1b & \(5^2 383^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 334 & *\(385\) & \(5\cdot 7\cdot 11\) & 1a & \(5^6 7^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(7\cdot 11^2,\mathcal{K}\) \\ 335 & \(386\) & \(2\cdot 193\) & 1b & \(5^2 2^4 193^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 336 & \(387\) & \(3^2\cdot 43\) & 1b & \(5^2 3^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 337 & \(388\) & 
\(2^2\cdot 97\) & 1b & \(5^2 2^4 97^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(97\cdot 5^4\) \\ 338 & \(389\) & & 1b & \(5^2 389^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 339 & *\(390\) & \(2\cdot 3\cdot 5\cdot 13\) & 1a & \(5^6 2^4 3^4 13^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,5\) \\ 340 & \(391\) & \(17\cdot 23\) & 1b & \(5^2 17^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 341 & \(393\) & \(3\cdot 131\) & 2 & \(3^4 131^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 342 & \(394\) & \(2\cdot 197\) & 1b & \(5^2 2^4 197^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 343 & \(395\) & \(5\cdot 79\) & 1a & \(5^6 79^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 344 & \(396\) & \(2^2\cdot 3^2\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11\cdot 5^2,\mathfrak{K}_2\) \\ 345 & \(397\) & & 1b & \(5^2 397^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 346 & *\(398\) & \(2\cdot 199\) & 1b & \(5^2 2^4 199^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 347 & *\(399\) & \(3\cdot 7\cdot 19\) & 2 & \(3^4 7^4 19^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 19^2,\mathcal{K}\) \\ 348 & *\(401\) & & 2 & \(401^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,\times)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ 349 & \(402\) & \(2\cdot 3\cdot 67\) & 1b & \(5^2 2^4 3^4 67^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5^3,2\cdot 5^3\) \\ 350 & \(403\) & \(13\cdot 
31\) & 1b & \(5^2 13^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 351 & \(404\) & \(2^2\cdot 101\) & 1b & \(5^2 2^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 352 & \(405\) & \(3^4\cdot 5\) & 1a & \(5^6 3^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 353 & \(406\) & \(2\cdot 7\cdot 29\) & 1b & \(5^2 2^4 7^4 29^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(7,2\cdot 5^3\) \\ 354 & \(407\) & \(11\cdot 37\) & 2 & \(11^4 37^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 355 & \(408\) & \(2^3\cdot 3\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(17,3\cdot 5^3\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(409\le D\le 458\)} \label{tbl:PureQuinticFields450} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 356 & \(409\) & & 1b & \(5^2 409^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 357 & \(410\) & \(2\cdot 5\cdot 41\) & 1a & \(5^6 2^4 41^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 41^2,\mathcal{K}\) \\ 358 & \(411\) & \(3\cdot 137\) & 1b & \(5^2 3^4 137^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 359 & \(412\) & \(2^2\cdot 103\) & 1b & \(5^2 2^4 103^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) \\ 360 & \(413\) & \(7\cdot 59\) & 1b & \(5^2 7^4 59^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(59\cdot 5,\mathcal{K}\) \\ 361 & \(414\) & \(2\cdot 3^2\cdot 23\) & 1b &\(5^2 2^4 3^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3\) \\ 362 & \(415\) & \(5\cdot 83\) & 1a & \(5^6 83^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 363 & \(417\) & \(3\cdot 139\) & 1b & \(5^2 3^4 139^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(139\cdot 5,\mathcal{K}\) \\ 364 & *\(418\) & \(2\cdot 11\cdot 19\) & 2 & \(2^4 11^4 19^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,\otimes,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(19)},\mathfrak{K}_{(11),1}\) \\ 365 & \(419\) & & 1b & \(5^2 419^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{L}\) \\ 366 & \(420\) & \(2^2\cdot 3\cdot 5\cdot 7\) & 1a & \(5^6 2^4 3^4 7^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3^2\cdot 7\) \\ 367 & *\(421\) & & 1b & \(5^2 421^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) \\ 368 & *\(422\) & \(2\cdot 211\) & 
1b & \(5^2 2^4 211^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) \\ 369 & \(423\) & \(3^2\cdot 47\) & 1b & \(5^2 3^4 47^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\cdot 5^3\) \\ 370 & \(424\) & \(2^3\cdot 53\) & 2 & \(2^4 53^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 371 & \(425\) & \(5^2\cdot 17\) & 1a & \(5^6 17^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 372 & \(426\) & \(2\cdot 3\cdot 71\) & 2 & \(2^4 3^4 71^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2^2\cdot 71,\mathcal{K}\) \\ 373 & \(427\) & \(7\cdot 61\) & 1b & \(5^2 7^4 61^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(61\cdot 5^2,\mathcal{K}\) \\ 374 & \(428\) & \(2^2\cdot 107\) & 1b & \(5^2 2^4 107^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 375 & \(429\) & \(3\cdot 11\cdot 13\) & 1b &\(5^2 3^4 11^4 13^4\)& \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(13\cdot 5,\mathcal{K}\) \\ 376 & \(430\) & \(2\cdot 5\cdot 43\) & 1a & \(5^6 2^4 43^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(43\) \\ 377 & \(431\) & & 1b & \(5^2 431^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 378 & \(433\) & & 1b & \(5^2 433^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 379 & \(434\) & \(2\cdot 7\cdot 31\) & 1b &\(5^2 2^4 7^4 31^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5,2\cdot 31^2\) \\ 380 & \(435\) & \(3\cdot 5\cdot 29\) & 1a & \(5^6 3^4 29^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 5^3,\mathcal{K}\) \\ 381 & \(436\) & \(2^2\cdot 109\) & 1b & \(5^2 2^4 
109^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(109,\mathcal{L}\) \\ 382 & \(437\) & \(19\cdot 23\) & 1b & \(5^2 19^4 23^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19\cdot 5^2,\mathcal{K}\) \\ 383 & \(438\) & \(2\cdot 3\cdot 73\) & 1b &\(5^2 2^4 3^4 73^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,5\) \\ 384 & \(439\) & & 1b & \(5^2 439^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 385 & \(440\) & \(2^3\cdot 5\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 386 & \(442\) & \(2\cdot 13\cdot 17\) & 1b &\(5^2 2^4 13^4 17^4\)& \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^4,13\cdot 5^2\) \\ 387 & \(443\) & & 2 & \(443^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 388 & \(444\) & \(2^2\cdot 3\cdot 37\) & 1b &\(5^2 2^4 3^4 37^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,2^3\cdot 3\) \\ 389 & \(445\) & \(5\cdot 89\) & 1a & \(5^6 89^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 390 & \(446\) & \(2\cdot 223\) & 1b & \(5^2 2^4 223^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 391 & \(447\) & \(3\cdot 149\) & 1b & \(5^2 3^4 149^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(149\cdot 5,\mathcal{K}\) \\ 392 & \(449\) & & 2 & \(449^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 393 & *\(451\) & \(11\cdot 41\) & 2 & \(11^4 41^4\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(41)}^4,\) \\ & & & & & & & & & & & & 
\(\mathfrak{K}_{(11),1}\mathfrak{K}_{(41),2}^3\) \\ 394 & \(452\) & \(2^2\cdot 113\) & 1b & \(5^2 2^4 113^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 395 & \(453\) & \(3\cdot 151\) & 1b & \(5^2 3^4 151^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5^2,\mathcal{K}\) \\ 396 & \(454\) & \(2\cdot 227\) & 1b & \(5^2 2^4 227^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 397 & \(455\) & \(5\cdot 7\cdot 13\) & 1a & \(5^6 7^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 398 & \(456\) & \(2^3\cdot 3\cdot 19\) & 1b &\(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5,19\) \\ 399 & \(457\) & & 2 & \(457^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 400 & \(458\) & \(2\cdot 229\) & 1b & \(5^2 2^4 229^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(229\cdot 5,\mathcal{K}\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(459\le D\le 508\)} \label{tbl:PureQuinticFields500} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 401 & \(459\) & \(3^3\cdot 17\) & 1b & \(5^2 3^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 402 & \(460\) & \(2^2\cdot 5\cdot 23\) & 1a & \(5^6 2^4 23^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 403 & \(461\) & & 1b & \(5^2 461^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_1\) \\ 404 & *\(462\) & \(2\cdot 3\cdot 7\cdot 11\) & 1b & \(5^2 2^4 3^4 7^4 11^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5^2\cdot 7,3\cdot 5\cdot 11^2\) \\ 405 & \(463\) & & 1b & \(5^2 463^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 406 & \(464\) & \(2^4\cdot 29\) & 1b & \(5^2 2^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 407 & *\(465\) & \(3\cdot 5\cdot 31\) & 1a & \(5^6 3^4 31^4\) & \(16\) & \(2\) & \(3\) & \(7*\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 408 & \(466\) & \(2\cdot 233\) & 1b & \(5^2 2^4 233^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 409 & \(467\) & & 1b & \(5^2 467^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 410 & \(468\) & \(2^2\cdot 3^2\cdot 13\) & 2 & \(2^4 3^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 411 & \(469\) & \(7\cdot 67\) & 1b & \(5^2 7^4 67^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 412 & \(470\) & \(2\cdot 5\cdot 47\) & 1a & \(5^6 2^4 47^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 413 & \(471\) & \(3\cdot 157\) & 1b & \(5^2 3^4 157^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 414 & \(472\) & \(2^3\cdot 59\) 
& 1b & \(5^2 2^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 415 & *\(473\) & \(11\cdot 43\) & 1b & \(5^2 11^4 43^4\) & \(4\) & \(2\) & \(3\) & \(7*\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 416 & \(474\) & \(2\cdot 3\cdot 79\) & 2 & \(2^4 3^4 79^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3,\mathcal{K}\) \\ 417 & \(475\) & \(5^2\cdot 19\) & 1a & \(5^6 19^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 418 & \(476\) & \(2^2\cdot 7\cdot 17\) & 2 & \(2^4 7^4 17^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 419 & \(477\) & \(3^2\cdot 53\) & 1b & \(5^2 3^4 53^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\cdot 5\) \\ 420 & \(478\) & \(2\cdot 239\) & 1b & \(5^2 2^4 239^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(239,\mathcal{L}\) \\ 421 & \(479\) & & 1b & \(5^2 479^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 422 & \(481\) & \(13\cdot 37\) & 1b & \(5^2 13^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 423 & *\(482\) & \(2\cdot 241\) & 2 & \(2^4 241^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(241,\mathfrak{K}_2\) \\ 424 & \(483\) & \(3\cdot 7\cdot 23\) & 1b & \(5^2 3^4 7^4 23^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,3\cdot 23\) \\ 425 & \(485\) & \(5\cdot 97\) & 1a & \(5^6 97^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 426 & \(487\) & & 1b & \(5^2 487^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 427 & \(488\) & \(2^3\cdot 61\) & 1b & \(5^2 2^4 61^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & 
\((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ 428 & \(489\) & \(3\cdot 163\) & 1b & \(5^2 3^4 163^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\cdot 5^2\) \\ 429 & \(490\) & \(2\cdot 5\cdot 7^2\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 430 & \(491\) & & 1b & \(5^2 491^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 431 & \(492\) & \(2^2\cdot 3\cdot 41\) & 1b & \(5^2 2^4 3^4 41^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 432 & \(493\) & \(17\cdot 29\) & 2 & \(17^4 29^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 433 & \(494\) & \(2\cdot 13\cdot 19\) & 1b & \(5^2 2^4 13^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2,13\cdot 5^2\) \\ 434 & \(495\) & \(3^2\cdot 5\cdot 11\) & 1a & \(5^6 3^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 435 & \(496\) & \(2^4\cdot 31\) & 1b & \(5^2 2^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 436 & \(497\) & \(7\cdot 71\) & 1b & \(5^2 7^4 71^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(7\cdot 5^3,\mathcal{K}\) \\ 437 & \(498\) & \(2\cdot 3\cdot 83\) & 1b & \(5^2 2^4 3^4 83^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2^3\cdot 3,2\cdot 5^2\) \\ 438 & \(499\) & & 2 & \(499^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 439 & \(501\) & \(3\cdot 167\) & 2 & \(3^4 167^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 440 & *\(502\) & \(2\cdot 251\) & 1b & \(5^2 2^4 251^4\) & \(4\) & \(2*\) & \(4*\) 
& \(8*\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(251\cdot 5,\mathfrak{K}_2\) \\ 441 & \(503\) & & 1b & \(5^2 503^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 442 & \(504\) & \(2^3\cdot 3^2\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,2\cdot 5^2\) \\ 443 & *\(505\) & \(5\cdot 101\) & 1a & \(5^6 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) \\ 444 & \(506\) & \(2\cdot 11\cdot 23\) & 1b & \(5^2 2^4 11^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(23,2\cdot 5\) \\ 445 & \(508\) & \(2^2\cdot 127\) & 1b & \(5^2 2^4 127^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(509\le D\le 556\)} \label{tbl:PureQuinticFields550} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 446 & \(509\) & & 1b & \(5^2 509^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 447 & \(510\) & \(2\cdot 3\cdot 5\cdot 17\) & 1a & \(5^6 2^4 3^4 17^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 3,2\cdot 17\) \\ 448 & \(511\) & \(7\cdot 73\) & 1b & \(5^2 7^4 73^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 449 & \(513\) & \(3^3\cdot 19\) & 1b & \(5^2 3^4 19^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(3\cdot 5^2\) \\ 450 & \(514\) & \(2\cdot 257\) & 1b & \(5^2 2^4 257^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 451 & \(515\) & \(5\cdot 103\) & 1a & \(5^6 103^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 452 & \(516\) & \(2^2\cdot 3\cdot 43\) & 1b & \(5^2 2^4 3^4 43^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 3^2,2\cdot 5^4\) \\ 453 & *\(517\) & \(11\cdot 47\) & 1b & \(5^2 11^4 47^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11\cdot 5^2,\mathfrak{K}_2\) \\ 454 & \(518\) & \(2\cdot 7\cdot 37\) & 2 & \(2^4 7^4 37^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 455 & \(519\) & \(3\cdot 173\) & 1b & \(5^2 3^4 173^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 456 & \(520\) & \(2^3\cdot 5\cdot 13\) & 1a & \(5^6 2^4 13^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 457 & \(521\) & & 1b & \(5^2 521^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) \\ 458 & \(522\) & \(2\cdot 3^2\cdot 29\) & 1b & \(5^2 2^4 3^4 29^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & 
\((-,\times,-,-)\) & \(\gamma\) & \(2,29\cdot 5\) \\ 459 & \(523\) & & 1b & \(5^2 523^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 460 & \(524\) & \(2^2\cdot 131\) & 2 & \(2^4 131^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 461 & \(525\) & \(3\cdot 5^2\cdot 7\) & 1a & \(5^6 3^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 462 & \(526\) & \(2\cdot 263\) & 2 & \(2^4 263^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 463 & \(527\) & \(17\cdot 31\) & 1b & \(5^2 17^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 464 & \(528\) & \(2^4\cdot 3\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^3\) \\ 465 & \(530\) & \(2\cdot 5\cdot 53\) & 1a & \(5^6 2^4 53^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 466 & \(531\) & \(3^2\cdot 59\) & 1b & \(5^2 3^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3,\mathcal{L}\) \\ 467 & *\(532\) & \(2^2\cdot 7\cdot 19\) & 2 & \(2^4 7^4 19^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 7\) \\ 468 & \(533\) & \(13\cdot 41\) & 1b & \(5^2 13^4 41^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(13\cdot 5^2,\mathcal{K}\) \\ 469 & \(534\) & \(2\cdot 3\cdot 89\) & 1b & \(5^2 2^4 3^4 89^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3,89\cdot 5\) \\ 470 & \(535\) & \(5\cdot 107\) & 1a & \(5^6 107^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 471 & \(536\) & \(2^3\cdot 67\) & 1b & \(5^2 2^4 67^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 472 & 
\(537\) & \(3\cdot 179\) & 1b & \(5^2 3^4 179^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 5^2,\mathcal{K}\) \\ 473 & \(538\) & \(2\cdot 269\) & 1b & \(5^2 2^4 269^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2,\mathcal{L}\) \\ 474 & \(539\) & \(7^2\cdot 11\) & 1b & \(5^2 7^4 11^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5,\mathcal{K}\) \\ 475 & \(540\) & \(2^2\cdot 3^3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 476 & \(541\) & & 1b & \(5^2 541^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 477 & \(542\) & \(2\cdot 271\) & 1b & \(5^2 2^4 271^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 478 & \(543\) & \(3\cdot 181\) & 2 & \(3^4 181^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 479 & \(545\) & \(5\cdot 109\) & 1a & \(5^6 109^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 480 & *\(546\) & \(2\cdot 3\cdot 7\cdot 13\) & 1b & \(5^2 2^4 3^4 7^4 13^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5\cdot 13,5^2\cdot 7\cdot 13\) \\ 481 & \(547\) & & 1b & \(5^2 547^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 482 & \(548\) & \(2^2\cdot 137\) & 1b & \(5^2 2^4 137^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 483 & \(549\) & \(3^2\cdot 61\) & 2 & \(3^4 61^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 484 & *\(550\) & \(2\cdot 5^2\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & 
\(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) \\ 485 & *\(551\) & \(19\cdot 29\) & 2 & \(19^4 29^4\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,-,-)\) & \(\alpha_3\) & \(\mathcal{K}_{(19)},\mathcal{K}_{(29)}\) \\ 486 & \(552\) & \(2^3\cdot 3\cdot 23\) & 1b & \(5^2 2^4 3^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(23,3\cdot 5\) \\ 487 & \(553\) & \(7\cdot 79\) & 1b & \(5^2 7^4 79^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) \\ 488 & \(554\) & \(2\cdot 277\) & 1b & \(5^2 2^4 277^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 489 & \(555\) & \(3\cdot 5\cdot 37\) & 1a & \(5^6 3^4 37^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 490 & \(556\) & \(2^2\cdot 139\) & 1b & \(5^2 2^4 139^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(557\le D\le 604\)} \label{tbl:PureQuinticFields600} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 491 & \(557\) & & 2 & \(557^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 492 & \(558\) & \(2\cdot 3^2\cdot 31\) & 1b & \(5^2 2^4 3^4 31^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5,3\cdot 31\) \\ 493 & \(559\) & \(13\cdot 43\) & 1b & \(5^2 13^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 494 & \(560\) & \(2^4\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 495 & \(561\) & \(3\cdot 11\cdot 17\) & 1b & \(5^2 3^4 11^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(11\cdot 5^3,17\cdot 5^4\) \\ 496 & \(562\) & \(2\cdot 281\) & 1b & \(5^2 2^4 281^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2\cdot 5\) \\ 497 & \(563\) & & 1b & \(5^2 563^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 498 & \(564\) & \(2^2\cdot 3\cdot 47\) & 1b & \(5^2 2^4 3^4 47^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 47,2\cdot 5^2\) \\ 499 & \(565\) & \(5\cdot 113\) & 1a & \(5^6 113^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 500 & \(566\) & \(2\cdot 283\) & 1b & \(5^2 2^4 283^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 501 & \(567\) & \(3^4\cdot 7\) & 1b & \(5^2 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 502 & \(568\) & \(2^3\cdot 71\) & 2 & \(2^4 71^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 503 & \(569\) & & 1b & \(5^2 569^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 504 & 
*\(570\) & \(2\cdot 3\cdot 5\cdot 19\) & 1a & \(5^6 2^4 3^4 19^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2^4\cdot 3,2^4\cdot 5\) \\ 505 & \(571\) & & 1b & \(5^2 571^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 506 & \(572\) & \(2^2\cdot 11\cdot 13\) & 1b & \(5^2 2^4 11^4 13^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(2\cdot 13^2\cdot 5^3,\mathfrak{K}_2\) \\ 507 & \(573\) & \(3\cdot 191\) & 1b & \(5^2 3^4 191^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 508 & *\(574\) & \(2\cdot 7\cdot 41\) & 2 & \(2^4 7^4 41^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(41,\mathcal{L}\) \\ 509 & \(575\) & \(5^2\cdot 23\) & 1a & \(5^6 23^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 510 & \(577\) & & 1b & \(5^2 577^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 511 & \(579\) & \(3\cdot 193\) & 1b & \(5^2 3^4 193^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 512 & \(580\) & \(2^2\cdot 5\cdot 29\) & 1a & \(5^6 2^4 29^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2^4\cdot 5,\mathcal{K}\) \\ 513 & \(581\) & \(7\cdot 83\) & 1b & \(5^2 7^4 83^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 514 & \(582\) & \(2\cdot 3\cdot 97\) & 2 & \(2^4 3^4 97^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 515 & \(583\) & \(11\cdot 53\) & 1b & \(5^2 11^4 53^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(53\cdot 5,\mathcal{K}\) \\ 516 & \(584\) & \(2^3\cdot 73\) & 1b & \(5^2 2^4 73^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 517 & \(585\) & \(3^2\cdot 
5\cdot 13\) & 1a & \(5^6 3^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 518 & \(586\) & \(2\cdot 293\) & 1b & \(5^2 2^4 293^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 519 & \(587\) & & 1b & \(5^2 587^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 520 & \(589\) & \(19\cdot 31\) & 1b & \(5^2 19^4 31^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(19)},\mathcal{K}_{(31)}\) \\ 521 & *\(590\) & \(2\cdot 5\cdot 59\) & 1a & \(5^6 2^4 59^4\) & \(16\) & \(2\) & \(3\) & \(7*\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 522 & \(591\) & \(3\cdot 197\) & 1b & \(5^2 3^4 197^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 523 & \(592\) & \(2^4\cdot 37\) & 1b & \(5^2 2^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 524 & \(593\) & & 2 & \(593^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 525 & \(594\) & \(2\cdot 3^3\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(11,3^2\cdot 5\) \\ 526 & \(595\) & \(5\cdot 7\cdot 17\) & 1a & \(5^6 7^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 527 & \(596\) & \(2^2\cdot 149\) & 1b & \(5^2 2^4 149^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) \\ 528 & \(597\) & \(3\cdot 199\) & 1b & \(5^2 3^4 199^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3^4\cdot 5,\mathcal{K}\) \\ 529 & \(598\) & \(2\cdot 13\cdot 23\) & 1b & \(5^2 2^4 13^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2^4\cdot 5,13\cdot 5^3\) \\ 530 & \(599\) & & 2 & \(599^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & 
\((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 531 & \(600\) & \(2^3\cdot 3\cdot 5^2\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 532 & \(601\) & & 2 & \(601^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 533 & *\(602\) & \(2\cdot 7\cdot 43\) & 1b & \(5^2 2^4 7^4 43^4\) & \(16\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,7\) \\ 534 & \(603\) & \(3^2\cdot 67\) & 1b & \(5^2 3^4 67^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 535 & \(604\) & \(2^2\cdot 151\) & 1b & \(5^2 2^4 151^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(2,\mathfrak{K}_1\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(605\le D\le 653\)} \label{tbl:PureQuinticFields650} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 536 & \(605\) & \(5\cdot 11^2\) & 1a & \(5^6 11^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 537 & *\(606\) & \(2\cdot 3\cdot 101\) & 1b & \(5^2 2^4 3^4 101^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,101\) \\ 538 & \(607\) & & 2 & \(607^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 539 & *\(609\) & \(3\cdot 7\cdot 29\) & 1b & \(5^2 3^4 7^4 29^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7^2\cdot 5,\mathcal{K}\) \\ 540 & \(610\) & \(2\cdot 5\cdot 61\) & 1a & \(5^6 2^4 61^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ 541 & \(611\) & \(13\cdot 47\) & 1b & \(5^2 13^4 47^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 542 & \(612\) & \(2^2\cdot 3^2\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(17,3^2\cdot 5\) \\ 543 & \(613\) & & 1b & \(5^2 613^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 544 & \(614\) & \(2\cdot 307\) & 1b & \(5^2 2^4 307^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 545 & \(615\) & \(3\cdot 5\cdot 41\) & 1a & \(5^6 3^4 41^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3,\mathcal{K}\) \\ 546 & \(616\) & \(2^3\cdot 7\cdot 11\) & 1b & \(5^2 2^4 7^4 11^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(7\cdot 5^2,\mathcal{K}\) \\ 547 & \(617\) & & 1b & \(5^2 617^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 548 & \(618\) & \(2\cdot 3\cdot 103\) & 2 & \(2^4 3^4 103^4\) & 
\(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 549 & \(619\) & & 1b & \(5^2 619^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 550 & *\(620\) & \(2^2\cdot 5\cdot 31\) & 1a & \(5^6 2^4 31^4\) & \(16\) & \(2\) & \(4*\) & \(8*\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_1\) \\ 551 & \(621\) & \(3^3\cdot 23\) & 1b & \(5^2 3^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 552 & \(622\) & \(2\cdot 311\) & 1b & \(5^2 2^4 311^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 553 & \(623\) & \(7\cdot 89\) & 1b & \(5^2 7^4 89^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7\cdot 5,\mathcal{K}\) \\ 554 & \(624\) & \(2^4\cdot 3\cdot 13\) & 2 & \(2^4 3^4 13^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 555 & \(626\) & \(2\cdot 313\) & 2 & \(2^4 313^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 556 & *\(627\) & \(3\cdot 11\cdot 19\) & 1b & \(5^2 3^4 11^4 19^4\) & \(13\) & \(3\) & \(4\) & \(9\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(19)}\) \\ 557 & \(628\) & \(2^2\cdot 157\) & 1b & \(5^2 2^4 157^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 558 & \(629\) & \(17\cdot 37\) & 1b & \(5^2 17^4 37^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 559 & \(630\) &\(2\cdot 3^2\cdot 5\cdot 7\) & 1a & \(5^6 2^4 3^4 7^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,5\) \\ 560 & \(631\) & & 1b & \(5^2 631^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 561 & \(632\) & \(2^3\cdot 79\) & 2 & \(2^4 79^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & 
\((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 562 & \(633\) & \(3\cdot 211\) & 1b & \(5^2 3^4 211^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(3\cdot 5\) \\ 563 & \(634\) & \(2\cdot 317\) & 1b & \(5^2 2^4 317^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 564 & \(635\) & \(5\cdot 127\) & 1a & \(5^6 127^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 565 & \(636\) & \(2^2\cdot 3\cdot 53\) & 1b & \(5^2 2^4 3^4 53^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3^2\cdot 5,53\) \\ 566 & \(637\) & \(7^2\cdot 13\) & 1b & \(5^2 7^4 13^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 567 & *\(638\) & \(2\cdot 11\cdot 29\) & 1b & \(5^2 2^4 11^4 29^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,(\times),-)\) & \(\beta_2\) & \(2^4\cdot 11\cdot 5,\) \\ & & & & & & & & & & & & \(\mathcal{K}_{(11)}\cdot\mathcal{K}_{(29)}^4\) \\ 568 & \(639\) & \(3^2\cdot 71\) & 1b & \(5^2 3^4 71^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3^2\cdot 5,\mathcal{K}\) \\ 569 & \(641\) & & 1b & \(5^2 641^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_2\) \\ 570 & \(642\) & \(2\cdot 3\cdot 107\) & 1b & \(5^2 2^4 3^4 107^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(107,3\cdot 5\) \\ 571 & \(643\) & & 2 & \(643^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 572 & \(644\) & \(2^2\cdot 7\cdot 23\) & 1b & \(5^2 2^4 7^4 23^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,7\cdot 5\) \\ 573 & \(645\) & \(3\cdot 5\cdot 43\) & 1a & \(5^6 3^4 43^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(43\) \\ 574 & \(646\) & \(2\cdot 17\cdot 19\) & 1b & \(5^2 2^4 
17^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5,17\) \\ 575 & \(647\) & & 1b & \(5^2 647^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 576 & *\(649\) & \(11\cdot 59\) & 2 & \(11^4 59^4\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(59)}\) \\ 577 & \(650\) & \(2\cdot 5^2\cdot 13\) & 1a & \(5^6 2^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 578 & \(651\) & \(3\cdot 7\cdot 31\) & 2 & \(3^4 7^4 31^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3^2\cdot 7,\mathcal{K}\) \\ 579 & \(652\) & \(2^2\cdot 163\) & 1b & \(5^2 2^4 163^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(5\) \\ 580 & \(653\) & & 1b & \(5^2 653^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(44\) pure metacyclic fields with normalized radicands \(654\le D\le 701\)} \label{tbl:PureQuinticFields700} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 581 & \(654\) & \(2\cdot 3\cdot 109\) & 1b & \(5^2 2^4 3^4 109^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 5,109\) \\ 582 & \(655\) & \(5\cdot 131\) & 1a & \(5^6 131^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 583 & \(656\) & \(2^4\cdot 41\) & 1b & \(5^2 2^4 41^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 584 & \(657\) & \(3^2\cdot 73\) & 2 & \(3^4 73^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 585 & \(658\) & \(2\cdot 7\cdot 47\) & 1b & \(5^2 2^4 7^4 47^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,47\cdot 5\) \\ 586 & \(659\) & & 1b & \(5^2 659^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 587 & *\(660\) &\(2^2\cdot 3\cdot 5\cdot 11\)& 1a & \(5^6 2^4 3^4 11^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,5\cdot 11\) \\ 588 & \(661\) & & 1b & \(5^2 661^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 589 & \(662\) & \(2\cdot 331\) & 1b & \(5^2 2^4 331^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2^4\cdot 5,\mathcal{K}\) \\ 590 & \(663\) & \(3\cdot 13\cdot 17\) & 1b & \(5^2 3^4 13^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3^2\cdot 5,13^2\cdot 5\) \\ 591 & \(664\) & \(2^3\cdot 83\) & 1b & \(5^2 2^4 83^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2^2\cdot 5\) \\ 592 & *\(665\) & \(5\cdot 7\cdot 19\) & 1a & \(5^6 7^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & 
\(\beta_2\) & \(7,\mathcal{K}\) \\ 593 & \(666\) & \(2\cdot 3^2\cdot 37\) & 1b & \(5^2 2^4 3^4 37^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(37,2\cdot 5^2\) \\ 594 & \(667\) & \(23\cdot 29\) & 1b & \(5^2 23^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(23\cdot 5^2,\mathcal{K}\) \\ 595 & \(668\) & \(2^2\cdot 167\) & 2 & \(2^4 167^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 596 & \(669\) & \(3\cdot 223\) & 1b & \(5^2 3^4 223^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 597 & \(670\) & \(2\cdot 5\cdot 67\) & 1a & \(5^6 2^4 67^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 598 & *\(671\) & \(11\cdot 61\) & 1b & \(5^2 11^4 61^4\) & \(3\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(61)},\) \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),2}\mathfrak{K}_{(61),1}^2\) \\ 599 & \(673\) & & 1b & \(5^2 673^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 600 & \(674\) & \(2\cdot 337\) & 2 & \(2^4 337^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 601 & \(677\) & & 1b & \(5^2 677^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 602 & \(678\) & \(2\cdot 3\cdot 113\) & 1b & \(5^2 2^4 3^4 113^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2^2\cdot 5,3^4\cdot 5\) \\ 603 & \(679\) & \(7\cdot 97\) & 1b & \(5^2 7^4 97^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 604 & \(680\) & \(2^3\cdot 5\cdot 17\) & 1a & \(5^6 2^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 605 & \(681\) & \(3\cdot 227\) & 1b & \(5^2 3^4 227^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & 
\(\gamma\) & \\ 606 & *\(682\) & \(2\cdot 11\cdot 31\) & 2 & \(2^4 11^4 31^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(31)}^3,\) \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(31),2}^3\) \\ 607 & \(683\) & & 1b & \(5^2 683^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 608 & \(684\) & \(2^2\cdot 3^2\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(19\cdot 5,2\cdot 5^3\) \\ 609 & \(685\) & \(5\cdot 137\) & 1a & \(5^6 137^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 610 & \(687\) & \(3\cdot 229\) & 1b & \(5^2 3^4 229^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 5^2,\mathcal{K}\) \\ 611 & \(688\) & \(2^4\cdot 43\) & 1b & \(5^2 2^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 612 & \(689\) & \(13\cdot 53\) & 1b & \(5^2 13^4 53^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 613 & \(690\) & \(2\cdot 3\cdot 5\cdot 23\) & 1a & \(5^6 2^4 3^4 23^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,5\cdot 23^2\) \\ 614 & *\(691\) & & 1b & \(5^2 691^4\) & \(1\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 615 & \(692\) & \(2^2\cdot 173\) & 1b & \(5^2 2^4 173^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\cdot 5\) \\ 616 & *\(693\) & \(3^2\cdot 7\cdot 11\) & 2 & \(3^4 7^4 11^4\) & \(4\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(3\cdot 7,\mathfrak{K}_2\) \\ 617 & \(694\) & \(2\cdot 347\) & 1b & \(5^2 2^4 347^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 618 & *\(695\) & \(5\cdot 139\) & 1a & \(5^6 139^4\) & \(4\) & \(1\) & 
\(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(139\) \\ 619 & \(696\) & \(2^3\cdot 3\cdot 29\) & 1b & \(5^2 2^4 3^4 29^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(5,2\cdot 29\) \\ 620 & \(697\) & \(17\cdot 41\) & 1b & \(5^2 17^4 41^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(17\cdot 5,\mathcal{K}\) \\ 621 & \(698\) & \(2\cdot 349\) & 1b & \(5^2 2^4 349^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2^4\cdot 5,\mathcal{K}\) \\ 622 & \(699\) & \(3\cdot 233\) & 2 & \(3^4 233^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 623 & \(700\) & \(2^2\cdot 5^2\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 624 & \(701\) & & 2 & \(701^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,\times)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(47\) pure metacyclic fields with normalized radicands \(702\le D\le 753\)} \label{tbl:PureQuinticFields750} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 625 & *\(702\) & \(2\cdot 3^3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(13\cdot 5^4\) \\ 626 & \(703\) & \(19\cdot 37\) & 1b & \(5^2 19^4 37^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19\cdot 5^2,\mathcal{K}\) \\ 627 & \(705\) & \(3\cdot 5\cdot 47\) & 1a & \(5^6 3^4 47^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 628 & \(706\) & \(2\cdot 353\) & 1b & \(5^2 2^4 353^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 629 & *\(707\) & \(7\cdot 101\) & 2 & \(7^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) \\ 630 & \(708\) & \(2^2\cdot 3\cdot 59\) & 1b & \(5^2 2^4 3^4 59^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 5,59\cdot 5\) \\ 631 & \(709\) & & 1b & \(5^2 709^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 632 & *\(710\) & \(2\cdot 5\cdot 71\) & 1a & \(5^6 2^4 71^4\) & \(16\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_2\) \\ 633 & \(711\) & \(3^2\cdot 79\) & 1b & \(5^2 3^4 79^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 634 & \(712\) & \(2^3\cdot 89\) & 1b & \(5^2 2^4 89^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2^2\cdot 5\) \\ 635 & \(713\) & \(23\cdot 31\) & 1b & \(5^2 23^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 636 & \(714\) & \(2\cdot 3\cdot 7\cdot 17\) & 1b & \(5^2 2^4 3^4 7^4 17^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(17\cdot 5^3,2\cdot 
7^2\cdot 5^3\) \\ 637 & \(715\) & \(5\cdot 11\cdot 13\) & 1a & \(5^6 11^4 13^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) \\ 638 & \(716\) & \(2^2\cdot 179\) & 1b & \(5^2 2^4 179^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(179,\mathcal{L}\) \\ 639 & \(717\) & \(3\cdot 239\) & 1b & \(5^2 3^4 239^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3^2\cdot 5,\mathcal{K}\) \\ 640 & \(718\) & \(2\cdot 359\) & 2 & \(2^4 359^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 641 & \(719\) & & 1b & \(5^2 719^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 642 & \(720\) & \(2^4\cdot 3^2\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 643 & \(721\) & \(7\cdot 103\) & 1b & \(5^2 7^4 103^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 644 & \(723\) & \(3\cdot 241\) & 1b & \(5^2 3^4 241^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5,\mathcal{K}\) \\ 645 & \(724\) & \(2^2\cdot 181\) & 2 & \(2^4 181^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 646 & \(725\) & \(5^2\cdot 29\) & 1a & \(5^6 29^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 647 & \(726\) & \(2\cdot 3\cdot 11^2\) & 2 & \(2^4 3^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2^2\cdot 3,\mathcal{K}\) \\ 648 & \(727\) & & 1b & \(5^2 727^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 649 & \(728\) & \(2^3\cdot 7\cdot 13\) & 1b & \(5^2 2^4 7^4 13^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,7\) \\ 650 & 
\(730\) & \(2\cdot 5\cdot 73\) & 1a & \(5^6 2^4 73^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 651 & \(731\) & \(17\cdot 43\) & 1b & \(5^2 17^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 652 & \(732\) & \(2^2\cdot 3\cdot 61\) & 2 & \(2^4 3^4 61^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2^3\cdot 3^2,\mathcal{K}\) \\ 653 & \(733\) & & 1b & \(5^2 733^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 654 & \(734\) & \(2\cdot 367\) & 1b & \(5^2 2^4 367^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 655 & \(735\) & \(3\cdot 5\cdot 7^2\) & 1a & \(5^6 3^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 656 & \(737\) & \(11\cdot 67\) & 1b & \(5^2 11^4 67^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11,\mathfrak{K}_1\) \\ 657 & \(738\) & \(2\cdot 3^2\cdot 41\) & 1b & \(5^2 2^4 3^4 41^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 658 & \(739\) & & 1b & \(5^2 739^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 659 & \(740\) & \(2^2\cdot 5\cdot 37\) & 1a & \(5^6 2^4 37^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(37\) \\ 660 & \(741\) & \(3\cdot 13\cdot 19\) & 1b & \(5^2 3^4 13^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 5^2,19\cdot 5\) \\ 661 & \(742\) & \(2\cdot 7\cdot 53\) & 1b & \(5^2 2^4 7^4 53^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^2,7\cdot 5^2\) \\ 662 & \(743\) & & 2 & \(743^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 663 & \(744\) & \(2^3\cdot 3\cdot 31\) & 1b & \(5^2 2^4 3^4 31^4\) & \(13\) & \(1\) & 
\(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5,3\cdot 31^2\) \\ 664 & *\(745\) & \(5\cdot 149\) & 1a & \(5^6 149^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) \\ 665 & \(746\) & \(2\cdot 373\) & 1b & \(5^2 2^4 373^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2^4\cdot 5\) \\ 666 & \(747\) & \(3^2\cdot 83\) & 1b & \(5^2 3^4 83^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 667 & \(748\) & \(2^2\cdot 11\cdot 17\) & 1b & \(5^2 2^4 11^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,2\cdot 17\) \\ 668 & *\(749\) & \(7\cdot 107\) & 2 & \(7^4 107^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,-,\times)\) & \(\varepsilon\) & \\ 669 & *\(751\) & & 2 & \(751^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 670 & \(752\) & \(2^4\cdot 47\) & 1b & \(5^2 2^4 47^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 671 & \(753\) & \(3\cdot 251\) & 1b & \(5^2 3^4 251^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(44\) pure metacyclic fields with normalized radicands \(754\le D\le 799\)} \label{tbl:PureQuinticFields800} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 672 & \(754\) & \(2\cdot 13\cdot 29\) & 1b & \(5^2 2^4 13^4 29^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(13\cdot 5,29\cdot 5\) \\ 673 & \(755\) & \(5\cdot 151\) & 1a & \(5^6 151^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) \\ 674 & \(756\) & \(2^2\cdot 3^3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,2^3\cdot 5^2\) \\ 675 & \(757\) & & 2 & \(757^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 676 & \(758\) & \(2\cdot 379\) & 1b & \(5^2 2^4 379^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ 677 & \(759\) & \(3\cdot 11\cdot 23\) & 1b & \(5^2 3^4 11^4 23^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(3^2\cdot 23\cdot 5,\mathfrak{K}_2\) \\ 678 & \(760\) & \(2^3\cdot 5\cdot 19\) & 1a & \(5^6 2^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2^4\cdot 19,\mathcal{K}\) \\ 679 & \(761\) & & 1b & \(5^2 761^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ 680 & \(762\) & \(2\cdot 3\cdot 127\) & 1b & \(5^2 2^4 3^4 127^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(127\cdot 5,3\cdot 5^3\) \\ 681 & \(763\) & \(7\cdot 109\) & 1b & \(5^2 7^4 109^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7\cdot 5^2,\mathcal{K}\) \\ 682 & \(764\) & \(2^2\cdot 191\) & 1b & \(5^2 2^4 191^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 683 & \(765\) & \(3^2\cdot 5\cdot 17\) & 1a & \(5^6 3^4 17^4\) & \(16\) & \(0\) & \(0\) & 
\(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 684 & \(766\) & \(2\cdot 383\) & 1b & \(5^2 2^4 383^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 685 & \(767\) & \(13\cdot 59\) & 1b & \(5^2 13^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(13\cdot 5^2,\mathcal{K}\) \\ 686 & \(769\) & & 1b & \(5^2 769^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 687 & *\(770\) & \(2\cdot 5\cdot 7\cdot 11\) & 1a & \(5^6 2^4 7^4 11^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,2^2\cdot 11\) \\ 688 & \(771\) & \(3\cdot 257\) & 1b & \(5^2 3^4 257^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 689 & \(772\) & \(2^2\cdot 193\) & 1b & \(5^2 2^4 193^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 690 & \(773\) & & 1b & \(5^2 773^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 691 & \(774\) & \(2\cdot 3^2\cdot 43\) & 2 & \(2^4 3^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 692 & \(775\) & \(5^2\cdot 31\) & 1a & \(5^6 31^4\) & \(4\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) \\ 693 & \(776\) & \(2^3\cdot 97\) & 2 & \(2^4 97^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 694 & \(777\) & \(3\cdot 7\cdot 37\) & 1b & \(5^2 3^4 7^4 37^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,7\cdot 5\) \\ 695 & \(778\) & \(2\cdot 389\) & 1b & \(5^2 2^4 389^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 696 & *\(779\) & \(19\cdot 41\) & 1b & \(5^2 19^4 41^4\) & \(3\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,\otimes,\otimes,-)\) & \(\alpha_2\) & 
\(\mathcal{K}_{(19)}\mathcal{K}_{(41)},\) \\ & & & & & & & & & & & & \(\mathfrak{K}_{(41),1}\) \\ 697 & \(780\) &\(2^2\cdot 3\cdot 5\cdot 13\)& 1a & \(5^6 2^4 3^4 13^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,5\cdot 13\) \\ 698 & \(781\) & \(11\cdot 71\) & 1b & \(5^2 11^4 71^4\) & \(3\) & \(3\) & \(5\) & \(9\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(71),1}^4,\) \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),2}\mathfrak{K}_{(71),2}^2\) \\ 699 & \(782\) & \(2\cdot 17\cdot 23\) & 2 & \(2^4 17^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 700 & \(783\) & \(3^3\cdot 29\) & 1b & \(5^2 3^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(29,\mathcal{L}\) \\ 701 & *\(785\) & \(5\cdot 157\) & 1a & \(5^6 157^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,-,\times)\) & \(\varepsilon\) & \\ 702 & \(786\) & \(2\cdot 3\cdot 131\) & 1b & \(5^2 2^4 3^4 131^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 3\cdot 5^2,\mathcal{K}\) \\ 703 & \(787\) & & 1b & \(5^2 787^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 704 & \(788\) & \(2^2\cdot 197\) & 1b & \(5^2 2^4 197^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 705 & \(789\) & \(3\cdot 263\) & 1b & \(5^2 3^4 263^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 706 & \(790\) & \(2\cdot 5\cdot 79\) & 1a & \(5^6 2^4 79^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) \\ 707 & \(791\) & \(7\cdot 113\) & 1b & \(5^2 7^4 113^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 708 & \(792\) & \(2^3\cdot 3^2\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,11\cdot 
5\) \\ 709 & \(793\) & \(13\cdot 61\) & 2 & \(13^4 61^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(61,\mathfrak{K}_2\) \\ 710 & \(794\) & \(2\cdot 397\) & 1b & \(5^2 2^4 397^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 711 & \(795\) & \(3\cdot 5\cdot 53\) & 1a & \(5^6 3^4 53^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 712 & \(796\) & \(2^2\cdot 199\) & 1b & \(5^2 2^4 199^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 713 & \(797\) & & 1b & \(5^2 797^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 714 & *\(798\) & \(2\cdot 3\cdot 7\cdot 19\) & 1b & \(5^2 2^4 3^4 7^4 19^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 19,7\cdot 5^4\) \\ 715 & \(799\) & \(17\cdot 47\) & 2 & \(17^4 47^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(46\) pure metacyclic fields with normalized radicands \(801\le D\le 848\)} \label{tbl:PureQuinticFields850} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 716 & \(801\) & \(3^2\cdot 89\) & 2 & \(3^4 89^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 717 & \(802\) & \(2\cdot 401\) & 1b & \(5^2 2^4 401^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2^2\cdot 5,\mathcal{K}\) \\ 718 & \(803\) & \(11\cdot 73\) & 1b & \(5^2 11^4 73^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(73\cdot 5^2,\mathcal{K}\) \\ 719 & \(804\) & \(2^2\cdot 3\cdot 67\) & 1b & \(5^2 2^4 3^4 67^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5^3\) \\ 720 & \(805\) & \(5\cdot 7\cdot 23\) & 1a & \(5^6 7^4 23^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 721 & \(806\) & \(2\cdot 13\cdot 31\) & 1b & \(5^2 2^4 13^4 31^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(31\cdot 5^2,\mathfrak{K}_2\) \\ 722 & \(807\) & \(3\cdot 269\) & 2 & \(3^4 269^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 723 & *\(808\) & \(2^3\cdot 101\) & 1b & \(5^2 2^4 101^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) \\ 724 & \(809\) & & 1b & \(5^2 809^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 725 & \(810\) & \(2\cdot 3^4\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 726 & \(811\) & & 1b & \(5^2 811^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 727 & \(812\) & \(2^2\cdot 7\cdot 29\) & 1b & \(5^2 2^4 7^4 29^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(29,7^2\cdot 5\) \\ 728 & 
\(813\) & \(3\cdot 271\) & 1b & \(5^2 3^4 271^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 729 & \(814\) & \(2\cdot 11\cdot 37\) & 1b & \(5^2 2^4 11^4 37^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11\cdot 37^2\cdot 5,\mathfrak{K}_2\) \\ 730 & \(815\) & \(5\cdot 163\) & 1a & \(5^6 163^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 731 & \(816\) & \(2^4\cdot 3\cdot 17\) & 1b & \(5^2 2^4 3^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^2,3\cdot 5^3\) \\ 732 & \(817\) & \(19\cdot 43\) & 1b & \(5^2 19^4 43^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19^2\cdot 5,\mathcal{K}\) \\ 733 & \(818\) & \(2\cdot 409\) & 2 & \(2^4 409^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 734 & \(819\) & \(3^2\cdot 7\cdot 13\) & 1b & \(5^2 3^4 7^4 13^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(13,7\cdot 5^2\) \\ 735 & \(820\) & \(2^2\cdot 5\cdot 41\) & 1a & \(5^6 2^4 41^4\) & \(16\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) \\ 736 & \(821\) & & 1b & \(5^2 821^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 737 & \(822\) & \(2\cdot 3\cdot 137\) & 1b & \(5^2 2^4 3^4 137^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3^3\cdot 137\cdot 5\) \\ 738 & \(823\) & & 1b & \(5^2 823^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 739 & \(824\) & \(2^3\cdot 103\) & 2 & \(2^4 103^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 740 & *\(825\) & \(3\cdot 5^2\cdot 11\) & 1a & \(5^6 3^4 11^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & 
\((-,-,\otimes,-)\) & \(\beta_1\) & \(3^2\cdot 11,\mathfrak{K}_1\) \\ 741 & \(826\) & \(2\cdot 7\cdot 59\) & 2 & \(2^4 7^4 59^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(59,\mathcal{L}\) \\ 742 & \(827\) & & 1b & \(5^2 827^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 743 & \(828\) & \(2^2\cdot 3^2\cdot 23\) & 1b & \(5^2 2^4 3^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(23,2\cdot 5^2\) \\ 744 & \(829\) & & 1b & \(5^2 829^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(829,\mathcal{L}\)\\ 745 & \(830\) & \(2\cdot 5\cdot 83\) & 1a & \(5^6 2^4 83^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 746 & \(831\) & \(3\cdot 277\) & 1b & \(5^2 3^4 277^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3^2\cdot 5\) \\ 747 & \(833\) & \(7^2\cdot 17\) & 1b & \(5^2 7^4 17^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 748 & \(834\) & \(2\cdot 3\cdot 139\) & 1b & \(5^2 2^4 3^4 139^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^3\) \\ 749 & \(835\) & \(5\cdot 167\) & 1a & \(5^6 167^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 750 & \(836\) & \(2^2\cdot 11\cdot 19\) & 1b & \(5^2 2^4 11^4 19^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,(\times),-)\) & \(\beta_2\) & \(2\cdot 11\cdot 5,\mathcal{K}_{(11)}\cdot\mathcal{K}_{(19)}^3\) \\ 751 & \(837\) & \(3^3\cdot 31\) & 1b & \(5^2 3^4 31^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 752 & \(838\) & \(2\cdot 419\) & 1b & \(5^2 2^4 419^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) \\ 753 & \(839\) & & 1b & \(5^2 839^4\) & \(1\) & \(1\) & \(1\) & 
\(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 754 & \(840\) &\(2^3\cdot 3\cdot 5\cdot 7\) & 1a & \(5^6 2^4 3^4 7^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 7,3\cdot 7\) \\ 755 & \(842\) & \(2\cdot 421\) & 1b & \(5^2 2^4 421^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2\cdot 5^3\) \\ 756 & *\(843\) & \(3\cdot 281\) & 2 & \(3^4 281^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_1\) \\ 757 & \(844\) & \(2^2\cdot 211\) & 1b & \(5^2 2^4 211^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) \\ 758 & \(845\) & \(5\cdot 13^2\) & 1a & \(5^6 13^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 759 & \(846\) & \(2\cdot 3^2\cdot 47\) & 1b & \(5^2 2^4 3^4 47^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3\) \\ 760 & \(847\) & \(7\cdot 11^2\) & 1b & \(5^2 7^4 11^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(7\cdot 5^2,\mathcal{K}\) \\ 761 & \(848\) & \(2^4\cdot 53\) & 1b & \(5^2 2^4 53^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(47\) pure metacyclic fields with normalized radicands \(849\le D\le 901\)} \label{tbl:PureQuinticFields900} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 762 & \(849\) & \(3\cdot 283\) & 2 & \(3^4 283^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 763 & \(850\) & \(2\cdot 5^2\cdot 17\) & 1a & \(5^6 2^4 17^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 764 & \(851\) & \(23\cdot 37\) & 2 & \(23^4 37^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 765 & \(852\) & \(2^2\cdot 3\cdot 71\) & 1b & \(5^2 2^4 3^4 71^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5^3,3\cdot 5^3\) \\ 766 & \(853\) & & 1b & \(5^2 853^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 767 & \(854\) & \(2\cdot 7\cdot 61\) & 1b & \(5^2 2^4 7^4 61^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5^2,61\) \\ 768 & \(855\) & \(3^2\cdot 5\cdot 19\) & 1a & \(5^6 3^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3,\mathcal{K}\) \\ 769 & \(856\) & \(2^3\cdot 107\) & 1b & \(5^2 2^4 107^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 770 & \(857\) & & 2 & \(857^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 771 & *\(858\) &\(2\cdot 3\cdot 11\cdot 13\) & 1b &\(5^2 2^4 3^4 11^4 13^4\) & \(51\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,13\cdot 5\) \\ 772 & \(859\) & & 1b & \(5^2 859^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 773 & \(860\) & \(2^2\cdot 5\cdot 43\) & 1a & \(5^6 2^4 43^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 774 & *\(861\) & \(3\cdot 7\cdot 41\) & 1b & \(5^2 3^4 7^4 41^4\) & \(12\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) 
& \(\mathcal{K},\mathfrak{K}_1\) \\ 775 & \(862\) & \(2\cdot 431\) & 1b & \(5^2 2^4 431^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(431,\mathfrak{K}_1\) \\ 776 & \(863\) & & 1b & \(5^2 863^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 777 & \(865\) & \(5\cdot 173\) & 1a & \(5^6 173^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 778 & \(866\) & \(2\cdot 433\) & 1b & \(5^2 2^4 433^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 779 & \(868\) & \(2^2\cdot 7\cdot 31\) & 2 & \(2^4 7^4 31^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 7^2,\mathcal{K}\) \\ 780 & \(869\) & \(11\cdot 79\) & 1b & \(5^2 11^4 79^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(79)}\) \\ 781 & \(870\) & \(2\cdot 3\cdot 5\cdot 29\) & 1a & \(5^6 2^4 3^4 29^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(29,3^2\cdot 5\) \\ 782 & \(871\) & \(13\cdot 67\) & 1b & \(5^2 13^4 67^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 783 & \(872\) & \(2^3\cdot 109\) & 1b & \(5^2 2^4 109^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(109,\mathcal{L}\) \\ 784 & \(873\) & \(3^2\cdot 97\) & 1b & \(5^2 3^4 97^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 785 & \(874\) & \(2\cdot 19\cdot 23\) & 2 & \(2^4 19^4 23^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 19^2,\mathcal{K}\) \\ 786 & \(876\) & \(2^2\cdot 3\cdot 73\) & 2 & \(2^4 3^4 73^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 787 & \(877\) & & 1b & \(5^2 877^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 788 & \(878\) & 
\(2\cdot 439\) & 1b & \(5^2 2^4 439^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 789 & \(879\) & \(3\cdot 293\) & 1b & \(5^2 3^4 293^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\) \\ 790 & \(880\) & \(2^4\cdot 5\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_2\) \\ 791 & \(881\) & & 1b & \(5^2 881^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) \\ 792 & \(883\) & & 1b & \(5^2 883^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 793 & \(884\) & \(2^2\cdot 13\cdot 17\) & 1b & \(5^2 2^4 13^4 17^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(17,13\cdot 5^2\) \\ 794 & \(885\) & \(3\cdot 5\cdot 59\) & 1a & \(5^6 3^4 59^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3,\mathcal{K}\) \\ 795 & \(886\) & \(2\cdot 443\) & 1b & \(5^2 2^4 443^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 796 & \(887\) & & 1b & \(5^2 887^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 797 & \(888\) & \(2^3\cdot 3\cdot 37\) & 1b & \(5^2 2^4 3^4 37^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,3\) \\ 798 & \(889\) & \(7\cdot 127\) & 1b & \(5^2 7^4 127^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 799 & \(890\) & \(2\cdot 5\cdot 89\) & 1a & \(5^6 2^4 89^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2^2\cdot 5,\mathcal{K}\) \\ 800 & \(891\) & \(3^4\cdot 11\) & 1b & \(5^2 3^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5^3,\mathcal{K}\) \\ 801 & \(892\) & \(2^2\cdot 223\) & 1b & \(5^2 2^4 223^4\) & \(3\) & 
\(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 802 & *\(893\) & \(19\cdot 47\) & 2 & \(19^4 47^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19,\mathcal{L}\) \\ 803 & *\(894\) & \(2\cdot 3\cdot 149\) & 1b & \(5^2 2^4 3^4 149^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(5,2\cdot 3^2\) \\ 804 & \(895\) & \(5\cdot 179\) & 1a & \(5^6 179^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) \\ 805 & \(897\) & \(3\cdot 13\cdot 23\) & 1b & \(5^2 3^4 13^4 23^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(13\cdot 5^2,3^3\cdot 5^2\) \\ 806 & \(898\) & \(2\cdot 449\) & 1b & \(5^2 2^4 449^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5^2,\mathcal{K}\) \\ 807 & \(899\) & \(29\cdot 31\) & 2 & \(29^4 31^4\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(29)},\mathcal{K}_{(31)}\) \\ 808 & \(901\) & \(17\cdot 53\) & 2 & \(17^4 53^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(47\) pure metacyclic fields with normalized radicands \(902\le D\le 949\)} \label{tbl:PureQuinticFields950} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 809 & *\(902\) & \(2\cdot 11\cdot 41\) & 1b & \(5^2 2^4 11^4 41^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5,\mathcal{K}_{(11)}\cdot\mathcal{K}_{(41)}\) \\ 810 & \(903\) & \(3\cdot 7\cdot 43\) & 1b & \(5^2 3^4 7^4 43^4\) & \(16\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,7^2\cdot 43\) \\ 811 & \(904\) & \(2^3\cdot 113\) & 1b & \(5^2 2^4 113^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 812 & \(905\) & \(5\cdot 181\) & 1a & \(5^6 181^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 813 & \(906\) & \(2\cdot 3\cdot 151\) & 1b & \(5^2 2^4 3^4 151^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2,3\cdot 5^2\) \\ 814 & \(907\) & & 2 & \(907^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & \\ 815 & \(908\) & \(2^2\cdot 227\) & 1b & \(5^2 2^4 227^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 816 & \(909\) & \(3^2\cdot 101\) & 1b & \(5^2 3^4 101^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) \\ 817 & \(910\) & \(2\cdot 5\cdot 7\cdot 13\) & 1a & \(5^6 2^4 7^4 13^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,13\cdot 5^2\) \\ 818 & \(911\) & & 1b & \(5^2 911^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) \\ 819 & \(912\) & \(2^4\cdot 3\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 820 & \(913\) & \(11\cdot 83\) & 1b & \(5^2 11^4 83^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & 
\(11\cdot 5,\mathcal{K}\) \\ 821 & \(914\) & \(2\cdot 457\) & 1b & \(5^2 2^4 457^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(2\) \\ 822 & \(915\) & \(3\cdot 5\cdot 61\) & 1a & \(5^6 3^4 61^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5^2,\mathcal{K}\) \\ 823 & \(916\) & \(2^2\cdot 229\) & 1b & \(5^2 2^4 229^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2^4\cdot 5,\mathcal{K}\) \\ 824 & \(917\) & \(7\cdot 131\) & 1b & \(5^2 7^4 131^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(131\cdot 5,\mathcal{K}\) \\ 825 & \(918\) & \(2\cdot 3^3\cdot 17\) & 2 & \(2^4 3^4 17^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 826 & \(919\) & & 1b & \(5^2 919^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 827 & \(920\) & \(2^3\cdot 5\cdot 23\) & 1a & \(5^6 2^4 23^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 828 & \(921\) & \(3\cdot 307\) & 1b & \(5^2 3^4 307^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 829 & \(922\) & \(2\cdot 461\) & 1b & \(5^2 2^4 461^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2^2\cdot 5\) \\ 830 & \(923\) & \(13\cdot 71\) & 1b & \(5^2 13^4 71^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 831 & *\(924\) &\(2^2\cdot 3\cdot 7\cdot 11\)& 2 & \(2^4 3^4 7^4 11^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3\cdot 7,7\cdot 11\) \\ 832 & \(925\) & \(5^2\cdot 37\) & 1a & \(5^6 37^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 833 & \(926\) & \(2\cdot 463\) & 2 & \(2^4 463^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & 
\\ 834 & \(927\) & \(3^2\cdot 103\) & 1b & \(5^2 3^4 103^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3^3\cdot 5^2\)\\ 835 & \(929\) & & 1b & \(5^2 929^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) \\ 836 & \(930\) & \(2\cdot 3\cdot 5\cdot 31\) & 1a & \(5^6 2^4 3^4 31^4\) & \(64\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5,31\) \\ 837 & \(931\) & \(7^2\cdot 19\) & 1b & \(5^2 7^4 19^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7,\mathcal{L}\) \\ 838 & \(932\) & \(2^2\cdot 233\) & 2 & \(2^4 233^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 839 & \(933\) & \(3\cdot 311\) & 1b & \(5^2 3^4 311^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(3\cdot 5,\mathcal{K}\) \\ 840 & \(934\) & \(2\cdot 467\) & 1b & \(5^2 2^4 467^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 841 & \(935\) & \(5\cdot 11\cdot 17\) & 1a & \(5^6 11^4 17^4\) & \(16\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_2\) \\ 842 & \(936\) & \(2^3\cdot 3^2\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,5\) \\ 843 & \(937\) & & 1b & \(5^2 937^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 844 & \(938\) & \(2\cdot 7\cdot 67\) & 1b & \(5^2 2^4 7^4 67^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,67\cdot 5\) \\ 845 & \(939\) & \(3\cdot 313\) & 1b & \(5^2 3^4 313^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(5\) \\ 846 & \(940\) & \(2^2\cdot 5\cdot 47\) & 1a & \(5^6 2^4 47^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 847 & \(941\) & & 1b & \(5^2 941^4\) & \(1\) & 
\(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 848 & \(942\) & \(2\cdot 3\cdot 157\) & 1b & \(5^2 2^4 3^4 157^4\) & \(12\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(157\) \\ 849 & \(943\) & \(23\cdot 41\) & 2 & \(23^4 41^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 850 & \(944\) & \(2^4\cdot 59\) & 1b & \(5^2 2^4 59^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5^4,\mathcal{K}\) \\ 851 & \(945\) & \(3^3\cdot 5\cdot 7\) & 1a & \(5^6 3^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 852 & \(946\) & \(2\cdot 11\cdot 43\) & 1b & \(5^2 2^4 11^4 43^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,43\) \\ 853 & \(947\) & & 1b & \(5^2 947^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 854 & \(948\) & \(2^2\cdot 3\cdot 79\) & 1b & \(5^2 2^4 3^4 79^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5^2,3\cdot 5^2\) \\ 855 & \(949\) & \(13\cdot 73\) & 2 & \(13^4 73^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(45\) pure metacyclic fields with normalized radicands \(950\le D\le 999\)} \label{tbl:PureQuinticFields1000} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|} \hline No. 
& \(D\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P \\ \hline 856 & \(950\) & \(2\cdot 5^2\cdot 19\) & 1a & \(5^6 2^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) \\ 857 & \(951\) & \(3\cdot 317\) & 2 & \(3^4 317^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 858 & \(952\) & \(2^3\cdot 7\cdot 17\) & 1b & \(5^2 2^4 7^4 17^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^4,7\cdot 5^4\) \\ 859 & \(953\) & & 1b & \(5^2 953^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 860 & \(954\) & \(2\cdot 3^2\cdot 53\) & 1b & \(5^2 2^4 3^4 53^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5,2\cdot 5^2\) \\ 861 & *\(955\) & \(5\cdot 191\) & 1a & \(5^6 191^4\) & \(4\) & \(2*\) & \(3*\) & \(6*\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 862 & \(956\) & \(2^2\cdot 239\) & 1b & \(5^2 2^4 239^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(239,\mathcal{L}\) \\ 863 & *\(957\) & \(3\cdot 11\cdot 29\) & 2 & \(3^4 11^4 29^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(29)}\) \\ 864 & \(958\) & \(2\cdot 479\) & 1b & \(5^2 2^4 479^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2^3\cdot 5\) \\ 865 & \(959\) & \(7\cdot 137\) & 1b & \(5^2 7^4 137^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 866 & \(962\) & \(2\cdot 13\cdot 37\) & 1b & \(5^2 2^4 13^4 37^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,37\cdot 5\) \\ 867 & \(963\) & \(3^2\cdot 107\) & 1b & \(5^2 3^4 107^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 868 & 
\(964\) & \(2^2\cdot 241\) & 1b & \(5^2 2^4 241^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(241,\mathfrak{K}_2\) \\ 869 & \(965\) & \(5\cdot 193\) & 1a & \(5^6 193^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & \\ 870 & \(966\) & \(2\cdot 3\cdot 7\cdot 23\) & 1b & \(5^2 2^4 3^4 7^4 23^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5^3,2\cdot 23^2\) \\ 871 & \(967\) & & 1b & \(5^2 967^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 872 & \(969\) & \(3\cdot 17\cdot 19\) & 1b & \(5^2 3^4 17^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(19,3\cdot 5^2\) \\ 873 & \(970\) & \(2\cdot 5\cdot 97\) & 1a & \(5^6 2^4 97^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 874 & \(971\) & & 1b & \(5^2 971^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) \\ 875 & \(973\) & \(7\cdot 139\) & 1b & \(5^2 7^4 139^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7\cdot 5^2,\mathcal{K}\) \\ 876 & \(974\) & \(2\cdot 487\) & 2 & \(2^4 487^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 877 & \(975\) & \(3\cdot 5^2\cdot 13\) & 1a & \(5^6 3^4 13^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 878 & \(976\) & \(2^4\cdot 61\) & 2 & \(2^4 61^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 879 & \(977\) & & 1b & \(5^2 977^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 880 & \(978\) & \(2\cdot 3\cdot 163\) & 1b & \(5^2 2^4 3^4 163^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5^2,2\cdot 5^4\) \\ 881 & \(979\) & \(11\cdot 89\) & 1b & \(5^2 11^4 
89^4\) & \(3\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{K}_{(11)}\cdot\mathcal{K}_{(89)}^2\) \\ 882 & \(980\) & \(2^2\cdot 5\cdot 7^2\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) \\ 883 & \(981\) & \(3^2\cdot 109\) & 1b & \(5^2 3^4 109^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(109,\mathcal{L}\) \\ 884 & *\(982\) & \(2\cdot 491\) & 2 & \(2^4 491^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) \\ 885 & \(983\) & & 1b & \(5^2 983^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 886 & \(984\) & \(2^3\cdot 3\cdot 41\) & 1b & \(5^2 2^4 3^4 41^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3,2\cdot 5^2\) \\ 887 & \(985\) & \(5\cdot 197\) & 1a & \(5^6 197^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 888 & \(986\) & \(2\cdot 17\cdot 29\) & 1b & \(5^2 2^4 17^4 29^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 5\) \\ 889 & \(987\) & \(3\cdot 7\cdot 47\) & 1b & \(5^2 3^4 7^4 47^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7\cdot 5^2,3\cdot 5^4\) \\ 890 & \(988\) & \(2^2\cdot 13\cdot 19\) & 1b & \(5^2 2^4 13^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2,13\cdot 5^2\) \\ 891 & \(989\) & \(23\cdot 43\) & 1b & \(5^2 23^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \\ 892 & \(990\) &\(2\cdot 3^2\cdot 5\cdot 11\)& 1a & \(5^6 2^4 3^4 11^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^2\) \\ 893 & \(991\) & & 1b & \(5^2 991^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_1\) \\ 
894 & \(993\) & \(3\cdot 331\) & 2 & \(3^4 331^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) \\ 895 & \(994\) & \(2\cdot 7\cdot 71\) & 1b & \(5^2 2^4 7^4 71^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5^3,\mathcal{K}\) \\ 896 & \(995\) & \(5\cdot 199\) & 1a & \(5^6 199^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) \\ 897 & \(996\) & \(2^2\cdot 3\cdot 83\) & 1b & \(5^2 2^4 3^4 83^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5^2,3\cdot 5^4\) \\ 898 & \(997\) & & 1b & \(5^2 997^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ 899 & \(998\) & \(2\cdot 499\) & 1b & \(5^2 2^4 499^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(499,\mathcal{L}\) \\ 900 & \(999\) & \(3^3\cdot 37\) & 2 & \(3^4 37^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \\ \hline \end{tabular} \end{center} \end{table} \newpage \section{Statistical evaluation, refinements, and conclusions} \label{s:Conclusions} \subsection{Statistics of DPF types} \label{ss:Statistics} \noindent The complete statistical evaluation of the preceding twenty Tables \ref{tbl:PureQuinticFields50} --- \ref{tbl:PureQuinticFields1000} is given in Table \ref{tbl:Statistics}. The first ten columns show the absolute frequencies of pure metacyclic fields \(N=\mathbb{Q}(\zeta,\sqrt[5]{D})\) with various DPF types for the ranges \(2\le D<n\cdot 100\) with \(1\le n\le 10\). The eleventh column lists the relative percentages of the five most frequent DPF types for the complete range \(2\le D<10^3\) of normalized radicands. 
Among our \(13\) differential principal factorization types, type \(\gamma\) with \(3\)-dimensional absolute principal factorization, \(A=3\), clearly dominates, accounting for more than one third \((36\,\%)\) of all occurrences in the complete range \(2\le D<10^3\), followed by type \(\varepsilon\) with \(2\)-dimensional absolute principal factorization, \(A=2\), which covers nearly one quarter \((23\,\%)\) of all cases. The third place (nearly \(18\,\%\)) is occupied by type \(\beta_2\) with mixed absolute and intermediate principal factorization, \(A=2\), \(I=1\). It is striking that type \(\alpha_1\) with \(2\)-dimensional relative principal factorization, \(R=2\), and type \(\alpha_3\) with \(2\)-dimensional intermediate principal factorization, \(I=2\), are populated rather sparsely, in favour of a remarkable contribution by type \(\alpha_2\) with mixed intermediate and relative principal factorization, \(I=R=1\) (fourth place, with \(8\,\%\)). The appearance of the four types \(\zeta_1\), \(\zeta_2\), \(\eta\), \(\vartheta\) with norm representation \(N_{N/K}(Z)=\zeta\), \(Z\in U_N\), of the primitive fifth root of unity \(\zeta=\zeta_5\) is marginal (\cite[Thm. 8.2]{Ma2a}), in spite of the parametrized contribution by all prime conductors \(f=q\equiv\pm 7\,(\mathrm{mod}\,25)\) to type \(\vartheta\), as we shall prove in Theorem \ref{thm:InfSimCls} (1) in \S\ \ref{ss:Theorems}.
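The relative percentages in the last column of Table \ref{tbl:Statistics} can be reproduced from the cumulative counts for \(2\le D<10^3\). The following minimal Python sketch (the dictionary labels are ad-hoc names for the five dominant DPF types, not notation from the paper) performs this check:

```python
# Cumulative absolute frequencies for 2 <= D < 10^3, read off the last
# numeric column of the statistics table; the total in this range is 900.
counts = {"gamma": 324, "epsilon": 208, "beta_2": 161,
          "alpha_2": 75, "delta_2": 53}
total = 900

for dpf_type, c in counts.items():
    # one decimal place, matching the table's percentage column
    print(f"{dpf_type:>8}: {100 * c / total:.1f} %")
```

This reproduces the tabulated values \(36.0\), \(23.1\), \(17.9\), \(8.3\) and \(5.9\) per cent.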
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Absolute frequencies of differential principal factorization types} \label{tbl:Statistics} \begin{center} \begin{tabular}{|c|rrrrrrrrrr|r|} \hline Type &\(100\) &\(200\) &\(300\) &\(400\) &\(500\) &\(600\) &\(700\) &\(800\) &\(900\) &\(1000\)& \(\%\) \\ \hline \(\alpha_1\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(5\) & \(5\) & \(9\) & \(9\) & \(9\) & \\ \(\alpha_2\) & \(10\) & \(17\) & \(23\) & \(30\) & \(35\) & \(42\) & \(52\) & \(57\) & \(63\) & \(75\) & \(8.3\) \\ \(\alpha_3\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(3\) & \(5\) & \(5\) & \(7\) & \(8\) & \\ \(\beta_1\) & \(0\) & \(2\) & \(4\) & \(7\) & \(8\) & \(11\) & \(15\) & \(18\) & \(22\) & \(23\) & \\ \(\beta_2\) & \(7\) & \(24\) & \(40\) & \(54\) & \(80\) & \(94\) &\(108\) &\(126\) &\(146\) &\(161\) & \(17.9\) \\ \(\gamma\) & \(25\) & \(55\) & \(88\) &\(117\) &\(148\) &\(187\) &\(222\) &\(259\) &\(290\) &\(324\) & \(36.0\) \\ \hline \(\delta_1\) & \(0\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(4\) & \(4\) & \(6\) & \(7\) & \\ \(\delta_2\) & \(8\) & \(14\) & \(19\) & \(23\) & \(31\) & \(35\) & \(38\) & \(44\) & \(51\) & \(53\) & \(5.9\) \\ \(\varepsilon\) & \(26\) & \(45\) & \(67\) & \(95\) &\(110\) &\(128\) &\(150\) &\(165\) &\(184\) &\(208\) & \(23.1\) \\ \hline \(\zeta_1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \\ \(\zeta_2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(4\) & \(4\) & \(5\) & \\ \(\eta\) & \(1\) & \(2\) & \(4\) & \(5\) & \(5\) & \(6\) & \(6\) & \(6\) & \(6\) & \(7\) & \\ \hline \(\vartheta\) & \(3\) & \(6\) & \(8\) & \(9\) & \(11\) & \(13\) & \(15\) & \(17\) & \(18\) & \(19\) & \\ \hline Total & \(81\) &\(168\) &\(258\) &\(347\) &\(438\) &\(530\) &\(622\) &\(715\) &\(807\) &\(900\) &\(100.0\) \\ \hline \end{tabular} \end{center} \end{table} \subsection{Similarity classes and prototypes} \label{ss:Prototypes} \noindent In \cite{Ma2b}, we came to the conviction 
that for deeper insight into the arithmetical structure of the fields under investigation, the prime factorization of the class field theoretic conductor \(f\) of the abelian extension \(N/K\) over the cyclotomic field \(K=\mathbb{Q}(\zeta)\) and the primary invariants of all involved \(5\)-class groups must be taken into consideration. These ideas have led to the concept of \textit{similarity classes} and representative \textit{prototypes}, which refines the differential principal factorization (DPF) types. Let \(t\) be the number of primes \(q_1,\ldots,q_t\in\mathbb{P}\) distinct from \(5\) which divide the conductor \(f\). Among these prime numbers, we separately count \(u:=\#\lbrace 1\le i\le t\mid q_i\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\rbrace\) \textit{free} primes, \(v:=t-u\) \textit{restrictive} primes, \(s_2:=\#\lbrace 1\le i\le t\mid q_i\equiv -1\,(\mathrm{mod}\,5)\rbrace\) \(2\)-split primes, and \(s_4:=\#\lbrace 1\le i\le t\mid q_i\equiv +1\,(\mathrm{mod}\,5)\rbrace\) \(4\)-split primes. The multiplicity \(m=m(f)\) is given in terms of \(t,u,v\), according to \S\ \ref{s:Formulas}, and the dimensions of various spaces of primitive ambiguous ideals over the finite field \(\mathbb{F}_5\) are given in terms of \(t,s_2,s_4\), according to \cite[\S\ 4]{Ma2a}. By \(\eta=\frac{1}{2}(1+\sqrt{5})\) we denote the fundamental unit of \(K^+=\mathbb{Q}(\sqrt{5})\). The dimensions of the spaces of absolute, intermediate and relative DPF over \(\mathbb{F}_5\) are denoted by \(A\), \(I\) and \(R\), identical with the additive (logarithmic) version in \cite[Thm. 6.1]{Ma2a}. Further, let \(M=\mathbb{Q}(\sqrt{5},\sqrt[5]{D})\) be the maximal real intermediate field of \(N/L\), and denote by \(U_0\) the subgroup of the unit group \(U_N\) of \(N=\mathbb{Q}(\zeta,\sqrt[5]{D})\) generated by the units of all conjugate fields of \(L=\mathbb{Q}(\sqrt[5]{D})\) and of \(K=\mathbb{Q}(\zeta)\), where \(\zeta=\zeta_5\) is a primitive fifth root of unity.
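Under the congruence conditions just stated, the counters \((t,u,v,s_2,s_4,n)\) of a radicand can be computed mechanically. The following Python sketch illustrates this classification; the helper names \verb|prime_factors| and \verb|counters| are ours, it factors the radicand \(D\) itself (ignoring the wild behaviour of the prime \(5\) in the conductor), and it is only intended for the small normalized radicands considered here:

```python
def prime_factors(n):
    """Distinct prime divisors of n by trial division (fine for small n)."""
    ps = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def counters(D):
    """Return (t, u, v, s2, s4, n) for the primes q != 5 dividing D."""
    qs = [q for q in prime_factors(D) if q != 5]
    t = len(qs)
    u = sum(1 for q in qs if q % 25 in (1, 24, 7, 18))  # free: q = +-1, +-7 (mod 25)
    s2 = sum(1 for q in qs if q % 5 == 4)               # 2-split: q = -1 (mod 5)
    s4 = sum(1 for q in qs if q % 5 == 1)               # 4-split: q = +1 (mod 5)
    return (t, u, t - u, s2, s4, t - s2 - s4)

print(counters(77))   # prototype [77] = 7*11 -> (2, 1, 1, 0, 1, 1)
print(counters(202))  # 202 = 2*101 gives the same sextuple
```

For \(D=77\) the free prime is \(7\equiv 7\,(\mathrm{mod}\,25)\) and the \(4\)-split prime is \(11\), whereas for \(D=202\) the roles are played by \(101\) and \(101\), respectively, in accordance with the discussion in the Warning below Definition \ref{dfn:QuinticSimilarity}.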
For a number field \(F\), let \(V_F:=v_5(\#\mathrm{Cl}(F))\) be the \(5\)-valuation of the class number of \(F\). \begin{definition} \label{dfn:QuinticSimilarity} A set of normalized fifth power free radicands \(D>1\) is called a \textit{similarity class} if the associated pure quintic fields \(L=\mathbb{Q}(\sqrt[5]{D})\) share the following common multiplets of invariants: \begin{itemize} \item the refined Dedekind species \((e_0;t,u,v,m;n,s_2,s_4)\), where \begin{equation} \label{eqn:Dedekind5} f^4=5^{e_0}\cdot q_1^4\cdots q_t^4 \quad \text{ with } \quad e_0\in\lbrace 0,2,6\rbrace,\ t\ge 0,\ n=t-s_2-s_4, \end{equation} \item the differential principal factorization type \((U,\eta,\zeta;A,I,R)\), where \begin{equation} \label{eqn:AIR} (U_K:N_{N/K}(U_N))=5^U \quad \text{ and } \quad U+1=A+I+R, \end{equation} \item the structure of the \(5\)-class groups \((V_L,V_M,V_N;E)\), where \begin{equation} \label{eqn:5ClsGrp} (U_N:U_0)=5^E \quad \text{ and } \quad V_N=4\cdot V_L+E-5. \end{equation} \end{itemize} \end{definition} \begin{warning} \label{wrn:QuinticSimilarity} To reduce the number of invariants, we abstain from defining additional counters \(s_2^\prime:=\#\lbrace 1\le i\le t\mid q_i\equiv -1\,(\mathrm{mod}\,25)\rbrace\) and \(s_4^\prime:=\#\lbrace 1\le i\le t\mid q_i\equiv +1\,(\mathrm{mod}\,25)\rbrace\) for \textit{free} splitting prime divisors of the conductor \(f\). However, we point out that occasionally a similarity class in the sense of Definition \ref{dfn:QuinticSimilarity} will be split in two separate classes, having the same invariants, but distinct contributions to the counters \(u\) and \(s_4\), resp. \(s_2\). 
For instance, the similarity classes \(\mathbf{\lbrack 77\rbrack}\) and \(\mathbf{\lbrack 202\rbrack}\) with \(77=7\cdot 11\) and \(202=2\cdot 101\) share identical multiplets of invariants \((e_0;t,u,v,m;n,s_2,s_4)=(2;2,1,1,4;1,0,1)\) (species \(1\mathrm{b}\)), \((U,\eta,\zeta;A,I,R)=(2,-,-;2,1,0)\) (type \(\beta_2\)), and \((V_L,V_M,V_N;E)=(1,1,3;4)\). But \(u=1\) and \(n=1\) are due to \(7\), \(v=1\) and \(s_4=1\) are due to \(11\), in the former case, whereas \(v=1\) and \(n=1\) are due to \(2\), \(u=1\) and \(s_4=1\) are due to \(101\), in the latter case. Therefore, the contributions by primes congruent to \(\pm 1\,(\mathrm{mod}\,25)\) will be indicated by writing \(u=1'\) and \(s_4=1'\), resp. \(s_2=1'\). We also emphasize that in the rare cases of non-elementary \(5\)-class groups, the actual structures (abelian type invariants) of the \(5\)-class groups will be taken into account, and not only the \(5\)-valuations \(V_L,V_M,V_N\). \end{warning} \begin{definition} \label{dfn:Prototype} The minimal element \(\mathbf{M}\) of a similarity class (with respect to the natural order of positive integers \(\mathbb{N}\)) is called the representative \textit{prototype} of the class; the class itself is denoted by enclosing its prototype in square brackets \(\mathbf{\lbrack M\rbrack}\). \end{definition} The remaining elements of a similarity class, which are larger than the prototype, only reproduce the arithmetical invariants of the prototype and do not provide any additional information, except possibly about \textit{other primary components} of the class groups, that is, the structure of \(\ell\)-class groups \(\mathrm{Cl}_{\ell}(F)\) of the fields \(F\in\lbrace L,M,N\rbrace\) for \(\ell\in\mathbb{P}\setminus\lbrace 5\rbrace\).
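As a consistency check, the relation \(V_N=4\cdot V_L+E-5\) from Definition \ref{dfn:QuinticSimilarity} can be verified against individual prototype rows. A small Python sketch, with a few quadruples \((V_L,V_M,V_N;E)\) copied from the prototype tables below:

```python
# Spot-check of V_N = 4*V_L + E - 5 against prototype rows; keys are
# prototype radicands M, values are (V_L, V_M, V_N, E) from the tables.
rows = {
    2:   (0, 0, 0, 5),   # type epsilon
    31:  (2, 3, 5, 2),   # type alpha_1
    77:  (1, 1, 3, 4),   # type beta_2
    211: (3, 5, 9, 2),   # type delta_1
}
for M, (VL, VM, VN, E) in rows.items():
    assert VN == 4 * VL + E - 5, f"relation fails for M = {M}"
print("V_N = 4*V_L + E - 5 holds for all sampled prototypes")
```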
Whereas there are only \(13\) DPF types of pure quintic fields, the \textit{number of similarity classes} is obviously infinite, since firstly the number \(t\) of primes dividing the conductor is unbounded and secondly the number of \textit{states}, defined by the triplet \((V_L,V_M,V_N)\) of \(5\)-valuations of class numbers, is also unlimited. Given a fixed refined Dedekind species \((e_0;t,u,v,m;n,s_2,s_4)\), the set of all associated normalized fifth power free radicands \(D\) usually splits into several similarity classes defined by distinct DPF types (\textit{type splitting}). Occasionally it even splits further into different structures of \(5\)-class groups, called \textit{states}, with increasing complexity of abelian type invariants (\textit{state splitting}). The \(134\) prototypes \(2\le\mathbf{M}<10^3\) of pure quintic fields are listed in the Tables \ref{tbl:Prototypes1}, \ref{tbl:Prototypes2}, \ref{tbl:Prototypes3} and \ref{tbl:Prototypes4}. By \(\mathbf{\lvert M\rvert}:=\#\lbrace D\in\mathbf{\lbrack M\rbrack}\mid D<B\rbrace\) we denote the number of elements of the similarity class \(\mathbf{\lbrack M\rbrack}\) defined by the prototype \(\mathbf{M}\), truncated at the upper bound \(B:=10^3\) of our systematic investigations. \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(46\) prototypes \(2\le\mathbf{M}<200\) of pure metacyclic fields} \label{tbl:Prototypes1} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|r|} \hline No. 
& \(\mathbf{M}\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & \(2\) & & 1b & \(5^2 2^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & & \(71\) \\ 2 & \(5\) & & 1a & \(5^6\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & & \(1\) \\ 3 & \(6\) & \(2\cdot 3\) & 1b & \(5^2 2^4 3^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(77\) \\ 4 & \(7\) & & 2 & \(7^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((-,-,-,\otimes)\) & \(\vartheta\) & & \(18\) \\ 5 & \(10\) & \(2\cdot 5\) & 1a & \(5^6 2^4\) & \(4\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & & \(31\) \\ 6 & \(11\) & & 1b & \(5^2 11^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) & \(14\) \\ 7 & \(14\) & \(2\cdot 7\) & 1b & \(5^2 2^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(44\) \\ 8 & \(18\) & \(2\cdot 3^2\) & 2 & \(2^4 3^4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & & \(37\) \\ 9 & \(19\) & & 1b & \(5^2 19^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{L}\) & \(27\) \\ 10 & \(22\) & \(2\cdot 11\) & 1b & \(5^2 2^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) & \(35\) \\ 11 & \(30\) & \(2\cdot 3\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(37\) \\ 12 & \(31\) & & 1b & \(5^2 31^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) & \(2\) \\ 13 & \(33\) & \(3\cdot 11\) & 1b & \(5^2 3^4 11^4\) & \(3\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) & \(8\) 
\\ 14 & \(35\) & \(5\cdot 7\) & 1a & \(5^6 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & & \(6\) \\ 15 & \(38\) & \(2\cdot 19\) & 1b & \(5^2 2^4 19^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) & \(44\) \\ 16 & \(42\) & \(2\cdot 3\cdot 7\) & 1b & \(5^2 2^4 3^4 7^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^2\) & \(22\) \\ 17 & \(55\) & \(5\cdot 11\) & 1a & \(5^6 11^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) & \(6\) \\ 18 & \(57\) & \(3\cdot 19\) & 2 & \(3^4 19^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) & \(10\) \\ 19 & \(66\) & \(2\cdot 3\cdot 11\) & 1b & \(5^2 2^4 3^4 11^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,3\cdot 5^3\) & \(17\) \\ 20 & \(70\) & \(2\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(14\) \\ 21 & \(77\) & \(7\cdot 11\) & 1b & \(5^2 7^4 11^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5^3,\mathcal{K}\) & \(7\) \\ 22 & \(78\) & \(2\cdot 3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3,2\cdot 5^3\) & \(37\) \\ 23 & \(82\) & \(2\cdot 41\) & 2 & \(2^4 41^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) & \(15\) \\ 24 & \(95\) & \(5\cdot 19\) & 1a & \(5^6 19^4\) & \(4\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) & \(9\) \\ 25 & \(101\) & & 2 & \(101^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,\otimes)\) & \(\zeta_1\) & \(\mathfrak{K}_2\) & \(1\) \\ 26 & \(110\) & \(2\cdot 5\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(1\) & 
\(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) & \(11\) \\ 27 & \(114\) & \(2\cdot 3\cdot 19\) & 1b & \(5^2 2^4 3^4 19^4\) & \(13\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2\cdot 5^3,3\cdot 5^3\) & \(20\) \\ 28 & \(123\) & \(3\cdot 41\) & 1b & \(5^2 3^4 41^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) & \(2\) \\ 29 & \(126\) & \(2\cdot 3^2\cdot 7\) & 2 & \(2^4 3^4 7^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(6\) \\ 30 & \(131\) & & 1b & \(5^2 131^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_2\) & \(6\) \\ 31 & \(132\) & \(2^2\cdot 3\cdot 11\) & 2 & \(2^4 3^4 11^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) & \(5\) \\ 32 & \(133\) & \(7\cdot 19\) & 1b & \(5^2 7^4 19^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7,\mathcal{L}\) & \(7\) \\ 33 & \(139\) & & 1b & \(5^2 139^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{L}\) & \(4\) \\ 34 & \(140\) & \(2^2\cdot 5\cdot 7\) & 1a & \(5^6 2^4 7^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) & \(5\) \\ 35 & \(141\) & \(3\cdot 47\) & 1b & \(5^2 3^4 47^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(47\cdot 5\) & \(19\) \\ 36 & \(149\) & & 2 & \(149^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,\otimes,-,\times)\) & \(\delta_2\) & \(\mathcal{L}\) & \(6\) \\ 37 & \(151\) & & 2 & \(151^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathfrak{K}_1\) & \(3\) \\ 38 & \(154\) & \(2\cdot 7\cdot 11\) & 1b & \(5^2 2^4 7^4 11^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(2\cdot 11^2,\mathcal{K}\) 
& \(3\) \\ 39 & \(155\) & \(5\cdot 31\) & 1a & \(5^6 31^4\) & \(4\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) & \(2\) \\ 40 & \(171\) & \(3^2\cdot 19\) & 1b & \(5^2 3^4 19^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(19\cdot 5^2\) & \(6\) \\ 41 & \(174\) & \(2\cdot 3\cdot 29\) & 2 & \(2^4 3^4 29^4\) & \(3\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3^2\cdot 29,\mathcal{K}\) & \(3\) \\ 42 & \(180\) &\(2^2\cdot 3^2\cdot 5\) & 1a & \(5^6 2^4 3^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(3\) & \(5\) \\ 43 & \(182\) & \(2\cdot 7\cdot 13\) & 2 & \(2^4 7^4 13^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\) & \(1\) \\ 44 & \(186\) & \(2\cdot 3\cdot 31\) & 1b & \(5^2 2^4 3^4 31^4\) & \(13\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_2\) & \(7\) \\ 45 & \(190\) & \(2\cdot 5\cdot 19\) & 1a & \(5^6 2^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) & \(9\) \\ 46 & \(191\) & & 1b & \(5^2 191^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_1\) & \(3\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(44\) prototypes \(200<\mathbf{M}<510\) of pure metacyclic fields} \label{tbl:Prototypes2} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|r|} \hline No. 
& \(\mathbf{M}\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P & \(\mathbf{\lvert M\rvert}\) \\ \hline 47 & \(202\) & \(2\cdot 101\) & 1b & \(5^2 2^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(2\cdot 5,\mathcal{K}\) & \(6\) \\ 48 & \(203\) & \(7\cdot 29\) & 1b & \(5^2 7^4 29^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(29\cdot 5\) & \(2\) \\ 49 & \(209\) & \(11\cdot 19\) & 1b & \(5^2 11^4 19^4\) & \(3\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,\times,-)\) & \(\beta_2\) & \(11\cdot 5^2,\mathcal{K}_{(19)}\) & \(2\) \\ 50 & \(210\) & \(2\cdot 3\cdot 5\cdot 7\) & 1a & \(5^6 2^4 3^4 7^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(7,3^2\cdot 5\) & \(5\) \\ 51 & \(211\) & & 1b & \(5^2 211^4\) & \(1\) & \(3\) & \(5\) & \(9\) & \(2\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) & \(1\) \\ 52 & \(218\) & \(2\cdot 109\) & 2 & \(2^4 109^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\) & \(1\) \\ 53 & \(231\) & \(3\cdot 7\cdot 11\) & 1b & \(5^2 3^4 7^4 11^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(11,7\cdot 5^2\) & \(5\) \\ 54 & \(247\) & \(13\cdot 19\) & 1b & \(5^2 13^4 19^4\) & \(3\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & & \(2\) \\ 55 & \(253\) & \(11\cdot 23\) & 1b & \(5^2 11^4 23^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11,\mathfrak{K}_1\) & \(4\) \\ 56 & \(259\) & \(7\cdot 37\) & 1b & \(5^2 7^4 37^4\) & \(4\) & \(1\) & \(2\) & \(5*\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(1\) \\ 57 & \(266\) & \(2\cdot 7\cdot 19\) & 1b & \(5^2 2^4 7^4 19^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(5,2^3\cdot 7\) & \(3\) \\ 58 & \(273\) & \(3\cdot 7\cdot 13\) & 1b & \(5^2 3^4 7^4 13^4\) & \(12\) & \(2\) 
& \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(7\cdot 13^2\cdot 5^4\) & \(4\) \\ 59 & \(275\) & \(5^2\cdot 11\) & 1a & \(5^6 11^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_2\) & \(2\) \\ 60 & \(276\) & \(2^2\cdot 3\cdot 23\) & 2 & \(2^4 3^4 23^4\) & \(3\) & \(0\) & \(0\) & \(1\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & & \(10\) \\ 61 & \(281\) & & 1b & \(5^2 281^4\) & \(1\) & \(3\) & \(5*\) & \(9*\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) & \(1\) \\ 62 & \(285\) & \(3\cdot 5\cdot 19\) & 1a & \(5^6 3^4 19^4\) & \(16\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & & \(1\) \\ 63 & \(286\) & \(2\cdot 11\cdot 13\) & 1b & \(5^2 2^4 11^4 13^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11^2\cdot 5,\mathcal{K}\) & \(3\) \\ 64 & \(287\) & \(7\cdot 41\) & 1b & \(5^2 7^4 41^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) & \(1\) \\ 65 & \(290\) & \(2\cdot 5\cdot 29\) & 1a & \(5^6 2^4 29^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(29\) & \(2\) \\ 66 & \(298\) & \(2\cdot 149\) & 1b & \(5^2 2^4 149^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) & \(2\) \\ 67 & \(301\) & \(7\cdot 43\) & 2 & \(7^4 43^4\) & \(4\) & \(0\) & \(0\) & \(1\) & \(6\) & \((-,-,-,\otimes)\) & \(\eta\) & & \(1\) \\ 68 & \(302\) & \(2\cdot 151\) & 1b & \(5^2 2^4 151^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(2,\mathfrak{K}_1\) & \(2\) \\ 69 & \(319\) & \(11\cdot 29\) & 1b & \(5^2 11^4 29^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(29)}\) & \(3\) \\ 70 & \(329\) & \(7\cdot 47\) & 1b & \(5^2 7^4 47^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & 
\((\times,-,-,-)\) & \(\varepsilon\) & \(47\) & \(7\) \\ 71 & \(330\) & \(2\cdot 3\cdot 5\cdot 11\) & 1a & \(5^6 2^4 3^4 11^4\) & \(64\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3\cdot 11,5\cdot 11^3\) & \(2\) \\ 72 & \(341\) & \(11\cdot 31\) & 1b & \(5^2 11^4 31^4\) & \(3\) & \(3\) & \(5\) & \(9\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(31),2}^3,\) & \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),2}\mathfrak{K}_{(31),1}\) & \(2\) \\ 73 & \(348\) & \(2^2\cdot 3\cdot 29\) & 1b & \(5^2 2^4 3^4 29^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 3\cdot 5\) & \(2\) \\ 74 & \(377\) & \(13\cdot 29\) & 1b & \(5^2 13^4 29^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,\otimes,-,-)\) & \(\delta_2\) & \(\mathcal{K}\) & \(1\) \\ 75 & \(379\) & & 1b & \(5^2 379^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(5\) & \(1\) \\ 76 & \(385\) & \(5\cdot 7\cdot 11\) & 1a & \(5^6 7^4 11^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7\cdot 11^2,\mathcal{K}\) & \(1\) \\ 77 & \(390\) & \(2\cdot 3\cdot 5\cdot 13\) & 1a & \(5^6 2^4 3^4 13^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,5\) & \(4\) \\ 78 & \(398\) & \(2\cdot 199\) & 1b & \(5^2 2^4 199^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) & \(7\) \\ 79 & \(399\) & \(3\cdot 7\cdot 19\) & 2 & \(3^4 7^4 19^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(3\cdot 19^2,\mathcal{K}\) & \(2\) \\ 80 & \(401\) & & 2 & \(401^4\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \((-,-,\otimes,-)\) & \(\alpha_1\) & \(\mathfrak{K}_1,\mathfrak{K}_2\) & \(2\) \\ 81 & \(418\) & \(2\cdot 11\cdot 19\) & 2 & \(2^4 11^4 19^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,\otimes,\otimes,-)\) & \(\alpha_2\) & 
\(\mathcal{K}_{(19)},\mathfrak{K}_{(11),1}\) & \(1\) \\ 82 & \(421\) & & 1b & \(5^2 421^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_2\) & \(5\) \\ 83 & \(422\) & \(2\cdot 211\) & 1b & \(5^2 2^4 211^4\) & \(3\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\times,-)\) & \(\varepsilon\) & \(2\cdot 5^2\) & \(6\) \\ 84 & \(451\) & \(11\cdot 41\) & 2 & \(11^4 41^4\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(41)}^4,\) & \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(41),2}^3\) & \(1\) \\ 85 & \(462\) & \(2\cdot 3\cdot 7\cdot 11\) & 1b & \(5^2 2^4 3^4 7^4 11^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(5^2\cdot 7,3\cdot 5\cdot 11^2\) & \(1\) \\ 86 & \(465\) & \(3\cdot 5\cdot 31\) & 1a & \(5^6 3^4 31^4\) & \(16\) & \(2\) & \(3\) & \(7*\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(5,\mathcal{K}\) & \(1\) \\ 87 & \(473\) & \(11\cdot 43\) & 1b & \(5^2 11^4 43^4\) & \(4\) & \(2\) & \(3\) & \(7*\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11,\mathcal{L}\) & \(1\) \\ 88 & \(482\) & \(2\cdot 241\) & 2 & \(2^4 241^4\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(241,\mathfrak{K}_2\) & \(2\) \\ 89 & \(502\) & \(2\cdot 251\) & 1b & \(5^2 2^4 251^4\) & \(4\) & *\(2\) & *\(4\) & *\(8\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(251\cdot 5,\mathfrak{K}_2\) & \(1\) \\ 90 & \(505\) & \(5\cdot 101\) & 1a & \(5^6 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) & \(2\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(39\) prototypes \(510<\mathbf{M}<900\) of pure metacyclic fields} \label{tbl:Prototypes3} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|r|} \hline No. 
& \(\mathbf{M}\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P & \(\mathbf{\lvert M\rvert}\) \\ \hline 91 & \(517\) & \(11\cdot 47\) & 1b & \(5^2 11^4 47^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(11\cdot 5^2,\mathfrak{K}_2\) & \(1\) \\ 92 & \(532\) & \(2^2\cdot 7\cdot 19\) & 2 & \(2^4 7^4 19^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(2\cdot 7\) & \(1\) \\ 93 & \(546\) & \(2\cdot 3\cdot 7\cdot 13\) & 1b & \(5^2 2^4 3^4 7^4 13^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(3\cdot 5\cdot 13,\) & \\ & & & & & & & & & & & & \(5^2\cdot 7\cdot 13\) & \(3\) \\ 94 & \(550\) & \(2\cdot 5^2\cdot 11\) & 1a & \(5^6 2^4 11^4\) & \(16\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) & \(1\) \\ 95 & \(551\) & \(19\cdot 29\) & 2 & \(19^4 29^4\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,-,-)\) & \(\alpha_3\) & \(\mathcal{K}_{(19)},\mathcal{K}_{(29)}\) & \(1\) \\ 96 & \(570\) & \(2\cdot 3\cdot 5\cdot 19\) & 1a & \(5^6 2^4 3^4 19^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(2^4\cdot 3,2^4\cdot 5\) & \(2\) \\ 97 & \(574\) & \(2\cdot 7\cdot 41\) & 2 & \(2^4 7^4 41^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(41,\mathcal{L}\) & \(3\) \\ 98 & \(590\) & \(2\cdot 5\cdot 59\) & 1a & \(5^6 2^4 59^4\) & \(16\) & \(2\) & \(3\) & *\(7\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(5,\mathcal{K}\) & \(1\) \\ 99 & \(602\) & \(2\cdot 7\cdot 43\) & 1b & \(5^2 2^4 7^4 43^4\) & \(16\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(2,7\) & \(2\) \\ 100 & \(606\) & \(2\cdot 3\cdot 101\) & 1b & \(5^2 2^4 3^4 101^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,101\) & \(2\) \\ 101 & \(609\) & \(3\cdot 7\cdot 29\) & 1b & \(5^2 
3^4 7^4 29^4\) & \(12\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7^2\cdot 5,\mathcal{K}\) & \(1\) \\ 102 & \(620\) & \(2^2\cdot 5\cdot 31\) & 1a & \(5^6 2^4 31^4\) & \(16\) & \(2\) & \(4*\) & \(8*\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(5,\mathfrak{K}_1\) & \(1\) \\ 103 & \(627\) & \(3\cdot 11\cdot 19\) & 1b & \(5^2 3^4 11^4 19^4\) & \(13\) & \(3\) & \(4\) & \(9\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(19)}\) & \(1\) \\ 104 & \(638\) & \(2\cdot 11\cdot 29\) & 1b & \(5^2 2^4 11^4 29^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,\otimes,(\times),-)\) & \(\beta_2\) & \(2^4\cdot 11\cdot 5,\) & \\ & & & & & & & & & & & & \(\mathcal{K}_{(11)}\cdot\mathcal{K}_{(29)}^4\) & \(2\) \\ 105 & \(649\) & \(11\cdot 59\) & 2 & \(11^4 59^4\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(59)}\) & \(2\) \\ 106 & \(660\) &\(2^2\cdot 3\cdot 5\cdot 11\)& 1a & \(5^6 2^4 3^4 11^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,5\cdot 11\) & \(2\) \\ 107 & \(665\) & \(5\cdot 7\cdot 19\) & 1a & \(5^6 7^4 19^4\) & \(16\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(7,\mathcal{K}\) & \(1\) \\ 108 & \(671\) & \(11\cdot 61\) & 1b & \(5^2 11^4 61^4\) & \(3\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(61)},\) & \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),2}\mathfrak{K}_{(61),1}^2\) & \(1\) \\ 109 & \(682\) & \(2\cdot 11\cdot 31\) & 2 & \(2^4 11^4 31^4\) & \(3\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(11)}\mathcal{K}_{(31)}^3,\) & \\ & & & & & & & & & & & & \(\mathfrak{K}_{(11),1}\mathfrak{K}_{(31),2}^3\) & \(1\) \\ 110 & \(691\) & & 1b & \(5^2 691^4\) & \(1\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) &
\(\mathcal{L},\mathfrak{K}_2\) & \(1\) \\ 111 & \(693\) & \(3^2\cdot 7\cdot 11\) & 2 & \(3^4 7^4 11^4\) & \(4\) & \(2\) & \(3\) & \(6\) & \(3\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(3\cdot 7,\mathfrak{K}_2\) & \(1\) \\ 112 & \(695\) & \(5\cdot 139\) & 1a & \(5^6 139^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,\times,-,-)\) & \(\varepsilon\) & \(139\) & \(1\) \\ 113 & \(702\) & \(2\cdot 3^3\cdot 13\) & 1b & \(5^2 2^4 3^4 13^4\) & \(13\) & \(2\) & \(4\) & \(8\) & \(5\) & \((\times,-,-,-)\) & \(\varepsilon\) & \(13\cdot 5^4\) & \(2\) \\ 114 & \(707\) & \(7\cdot 101\) & 2 & \(7^4 101^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) & \(1\) \\ 115 & \(710\) & \(2\cdot 5\cdot 71\) & 1a & \(5^6 2^4 71^4\) & \(16\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_2\) & \(4\) \\ 116 & \(745\) & \(5\cdot 149\) & 1a & \(5^6 149^4\) & \(4\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,\otimes)\) & \(\zeta_2\) & \(\mathcal{K}\) & \(2\) \\ 117 & \(749\) & \(7\cdot 107\) & 2 & \(7^4 107^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,-,\times)\) & \(\varepsilon\) & & \(1\) \\ 118 & \(751\) & & 2 & \(751^4\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,\times)\) & \(\alpha_2\) & \(\mathcal{L},\mathfrak{K}_1\) & \(1\) \\ 119 & \(770\) & \(2\cdot 5\cdot 7\cdot 11\) & 1a & \(5^6 2^4 7^4 11^4\) & \(64\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,2^2\cdot 11\) & \(1\) \\ 120 & \(779\) & \(19\cdot 41\) & 1b & \(5^2 19^4 41^4\) & \(3\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}_{(19)}\mathcal{K}_{(41)},\) & \\ & & & & & & & & & & & & \(\mathfrak{K}_{(41),1}\) & \(1\) \\ 121 & \(785\) & \(5\cdot 157\) & 1a & \(5^6 157^4\) & \(4\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,-,\times)\) & \(\varepsilon\) & & \(1\) \\ 122 & \(798\) & \(2\cdot 3\cdot 7\cdot 19\) & 1b & \(5^2 2^4 3^4 7^4 
19^4\) & \(52\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,\times,-,-)\) & \(\gamma\) & \(3\cdot 19,7\cdot 5^4\) & \(1\) \\ 123 & \(808\) & \(2^3\cdot 101\) & 1b & \(5^2 2^4 101^4\) & \(4\) & \(2\) & \(2\) & \(4\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K}, \mathfrak{K}_1\) & \(2\) \\ 124 & \(825\) & \(3\cdot 5^2\cdot 11\) & 1a & \(5^6 3^4 11^4\) & \(16\) & \(1\) & \(2\) & \(4\) & \(5\) & \((-,-,\otimes,-)\) & \(\beta_1\) & \(3^2\cdot 11,\mathfrak{K}_1\) & \(1\) \\ 125 & \(843\) & \(3\cdot 281\) & 2 & \(3^4 281^4\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \((-,-,\otimes,-)\) & \(\delta_1\) & \(\mathfrak{K}_1\) & \(1\) \\ 126 & \(858\) &\(2\cdot 3\cdot 11\cdot 13\) & 1b &\(5^2 2^4 3^4 11^4 13^4\) & \(51\) & \(2\) & \(4\) & \(9\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(2\cdot 5,13\cdot 5\) & \(1\) \\ 127 & \(861\) & \(3\cdot 7\cdot 41\) & 1b & \(5^2 3^4 7^4 41^4\) & \(12\) & \(3\) & \(4\) & \(8\) & \(1\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) & \(1\) \\ 128 & \(893\) & \(19\cdot 47\) & 2 & \(19^4 47^4\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \((-,\otimes,-,-)\) & \(\beta_2\) & \(19,\mathfrak{L}\) & \(1\) \\ 129 & \(894\) & \(2\cdot 3\cdot 149\) & 1b & \(5^2 2^4 3^4 149^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((\times,-,-,-)\) & \(\gamma\) & \(5,2\cdot 3^2\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} \newpage \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{\(5\) prototypes \(900<\mathbf{M}<1000\) of pure metacyclic fields} \label{tbl:Prototypes4} \begin{center} \begin{tabular}{|r|rc|ccr|cccc|ccc|r|} \hline No. 
& \(\mathbf{M}\) & Factors & S & \(f^4\) & \(m\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \((1,2,4,5)\) & T & P & \(\mathbf{\lvert M\rvert}\) \\ \hline 130 & \(902\) & \(2\cdot 11\cdot 41\) & 1b & \(5^2 2^4 11^4 41^4\) & \(13\) & \(2\) & \(3\) & \(7\) & \(4\) & \((-,-,(\times),-)\) & \(\beta_2\) & \(11\cdot 5,\) & \\ & & & & & & & & & & & & \(\mathcal{K}_{(11)}\cdot\mathcal{K}_{(41)}\) & \(1\) \\ 131 & \(924\) &\(2^2\cdot 3\cdot 7\cdot 11\)& 1a & \(2^4 3^4 7^4 11^4\) & \(12\) & \(1\) & \(2\) & \(5\) & \(6\) & \((-,-,\times,-)\) & \(\gamma\) & \(3\cdot 7,7\cdot 11\) & \(1\) \\ 132 & \(955\) & \(5\cdot 191\) & 1a & \(5^6 191^4\) & \(4\) & \(2*\) & \(3*\) & \(6*\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) & \(1\) \\ 133 & \(957\) & \(3\cdot 11\cdot 29\) & 2 & \(3^4 11^4 29^4\) & \(3\) & \(2\) & \(2\) & \(5\) & \(2\) & \((-,\otimes,(\times),-)\) & \(\alpha_3\) & \(\mathcal{K}_{(11)},\mathcal{K}_{(29)}\) & \(1\) \\ 134 & \(982\) & \(2\cdot 491\) & 2 & \(2^4 491^4\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \((-,-,\otimes,-)\) & \(\alpha_2\) & \(\mathcal{K},\mathfrak{K}_1\) & \(2\) \\ \hline \end{tabular} \end{center} \end{table} \subsection{General theorems on DPF types and Polya fields} \label{ss:Theorems} \noindent There is only a single finite similarity class \(\lbrack 5\rbrack=\lbrace 5\rbrace\), characterized by the exceptional number \(t=0\) of primes \(q\ne 5\) dividing the conductor \(f\) (here \(f^4=5^6\)). The invariants of this unique metacyclic Polya field \(N\) are given by \begin{equation} \label{eqn:Prototype5Theta} \begin{aligned} & \mathbf{\lbrack 5\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;0,0,0,1;0,0,0), \\ & \text{type \(\vartheta\), }(U,\eta,\zeta;A,I,R)=(0,\times,\times;1,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,0;5). \end{aligned} \end{equation} We conjecture that all the other similarity classes are infinite. 
Precisely four of them can actually be given by parametrized infinite sequences in a deterministic way, aside from the intrinsically probabilistic occurrence of primes in residue classes and of composite integers with an assigned shape of prime decomposition. This was proved in \cite[Thm. 10.1]{Ma2a} and \cite[Thm. 2.1]{Ma2b}. \begin{theorem} \label{thm:InfSimCls} Each of the following infinite sequences of conductors \(f=f_{N/K}\) unambiguously determines the DPF type of the pure metacyclic fields \(N\) in the associated multiplet with \(m=m(f)\) members. \begin{enumerate} \item \(f=q\) with \(q\in\mathbb{P}\), \(q\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a singulet, \(m=1\), with DPF type \(\vartheta\), \item \(f^4=5^2\cdot q^4\) with \(q\in\mathbb{P}\), \(q\equiv\pm 2\,(\mathrm{mod}\,5)\), \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a singulet, \(m=1\), with DPF type \(\varepsilon\), \item \(f^4=5^6\cdot q^4\) with \(q\in\mathbb{P}\), \(q\equiv\pm 2\,(\mathrm{mod}\,5)\), \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a quartet, \(m=4\), with homogeneous DPF type \((\varepsilon,\varepsilon,\varepsilon,\varepsilon)\), \item \(f=q_1\cdot q_2\) with \(q_i\in\mathbb{P}\), \(q_i\equiv\pm 2\,(\mathrm{mod}\,5)\), \(q_i\not\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a singulet, \(m=1\), with DPF type \(\varepsilon\). \end{enumerate} \end{theorem} In fact, the shape of the conductors in Theorem \ref{thm:InfSimCls} not only determines the refined Dedekind species and the DPF type, but also the structure of the \(5\)-class groups of the fields \(L\), \(M\) and \(N\).
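To illustrate the congruence conditions of Theorem \ref{thm:InfSimCls}, we record the smallest admissible parameters (a small numerical check supplied here for illustration, not drawn from the tables):
\begin{equation*}
\begin{aligned}
\text{sequence (1):}\quad & q\in\lbrace 7,\,43,\,107,\,157,\,193,\,\ldots\rbrace, \\
\text{sequences (2) and (3):}\quad & q\in\lbrace 2,\,3,\,13,\,17,\,23,\,\ldots\rbrace,
\end{aligned}
\end{equation*}
and the smallest conductor in sequence (4) is \(f=2\cdot 3=6\), realized, for instance, by the radicand \(D=18\equiv -7\,(\mathrm{mod}\,25)\) of the similarity class \(\mathbf{\lbrack 18\rbrack}\) in Corollary \ref{cor:InfSimCls}.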
\begin{corollary} \label{cor:InfSimCls} The invariants of the similarity classes defined by the four infinite sequences of conductors in Theorem \ref{thm:InfSimCls} are given as follows, in the same order: \begin{equation} \label{eqn:Prototype7Theta} \begin{aligned} & \mathbf{\lbrack 7\rbrack}, \text{ species \(2\), } (e_0;t,u,v,m;n,s_2,s_4)=(0;1,1,0,1;1,0,0), \\ & \text{type \(\vartheta\), }(U,\eta,\zeta;A,I,R)=(0,\times,\times;1,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,0;5); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype2Epsilon} \begin{aligned} & \mathbf{\lbrack 2\rbrack}, \text{ species \(1\mathrm{b}\), } (e_0;t,u,v,m;n,s_2,s_4)=(2;1,0,1,1;1,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,0;5); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype10Epsilon} \begin{aligned} & \mathbf{\lbrack 10\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;1,0,1,4;1,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,0;5); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype18Epsilon} \begin{aligned} & \mathbf{\lbrack 18\rbrack}, \text{ species \(2\), } (e_0;t,u,v,m;n,s_2,s_4)=(0;2,0,2,1;2,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,0;5). \end{aligned} \end{equation} The pure metacyclic fields \(N\) associated with these four similarity classes are Polya fields. \end{corollary} \begin{remark} \label{rmk:InfSimCls} The statements concerning \(5\)-class groups in Corollary \ref{cor:InfSimCls} were proved by Parry in \cite[Thm. IV, p. 481]{Pa}, where Formula (10) gives the shape of radicands associated with the conductors in our Theorem \ref{thm:InfSimCls}. 
\end{remark} \begin{proof} (of Theorem \ref{thm:InfSimCls} and Corollary \ref{cor:InfSimCls}) It only remains to show the claims for the composite radicands associated with conductors \(f^4=5^6\cdot q^4\) and \(f=q_1\cdot q_2\). See \cite[Thm. 10.6]{Ma2a}. \end{proof} For similarity classes distinct from the four infinite classes in Theorem \ref{thm:InfSimCls} we cannot provide deterministic criteria for the DPF type and for the homogeneity of multiplets with \(m>1\). In general, the members of a multiplet belong to distinct similarity classes, thus giving rise to \textit{heterogeneous} DPF types. We explain these phenomena with the simplest cases where only two DPF types are involved (type splitting). \begin{theorem} \label{thm:EpsilonEta} Each of the following infinite sequences of conductors \(f=f_{N/K}\) admits precisely two DPF types of the pure metacyclic fields \(N\) in the associated quartet with \(m=4\) members. \begin{enumerate} \item \(f^4=5^6\cdot q^4\) with \(q\in\mathbb{P}\), \(q\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a quartet with possibly heterogeneous DPF type \((\varepsilon^x,\eta^y)\), \(x+y=4\), \item \(f=q_1\cdot q_2\) with \(q_i\in\mathbb{P}\), \(q_i\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a quartet with possibly heterogeneous DPF type \((\varepsilon^x,\eta^y)\), \(x+y=4\). \end{enumerate} \end{theorem} \begin{example} \label{exm:EpsilonEta} It is not easy to find complete quartets, since their members are spread rather widely. The smallest quartet \((35,175,245,4375)=(5\cdot 7,5^2\cdot 7,5\cdot 7^2,5^4\cdot 7)\) belonging to the first infinite sequence contains the member \(D=4375\), which lies outside the range of our systematic computations. We have determined its DPF type separately and thus discovered a homogeneous quartet of type \((\eta,\eta,\eta,\eta)\). However, we cannot generally exclude the occurrence of heterogeneous quartets.
\end{example} \begin{corollary} \label{cor:EpsilonEta} The invariants of the similarity classes defined by the two infinite sequences of conductors in Theorem \ref{thm:EpsilonEta} are given as follows, in the same order. The statements concerning \(5\)-class groups are only conjectural. Each sequence splits into two similarity classes. \\ The classes for \(f^4=5^6\cdot q^4\) are: \begin{equation} \label{eqn:Prototype35Eta} \begin{aligned} & \mathbf{\lbrack 35\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;1,1,0,4;1,0,0), \\ & \text{type \(\eta\), }(U,\eta,\zeta;A,I,R)=(1,-,\times;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,1;6); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype785Epsilon} \begin{aligned} & \mathbf{\lbrack 785\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;1,1,0,4;1,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(1,2,4;5). \end{aligned} \end{equation} The classes for \(f=q_1\cdot q_2\) are: \begin{equation} \label{eqn:Prototype301Eta} \begin{aligned} & \mathbf{\lbrack 301\rbrack}, \text{ species \(2\), } (e_0;t,u,v,m;n,s_2,s_4)=(0;2,2,0,4;2,0,0), \\ & \text{type \(\eta\), }(U,\eta,\zeta;A,I,R)=(1,-,\times;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,1;6); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype749Epsilon} \begin{aligned} & \mathbf{\lbrack 749\rbrack}, \text{ species \(2\), } (e_0;t,u,v,m;n,s_2,s_4)=(0;2,2,0,4;2,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(1,2,4;5). \end{aligned} \end{equation} All pure metacyclic fields \(N\) associated with these four similarity classes are Polya fields. 
\end{corollary} \begin{proof} (of Theorem \ref{thm:EpsilonEta} and Corollary \ref{cor:EpsilonEta}) \end{proof} \begin{remark} \label{rmk:EpsilonEta} The statements on \(5\)-class groups in Corollary \ref{cor:EpsilonEta} have been verified for all examples with \(2\le D<1000\) by our computations. In particular, the occurrence of the radicands \(D=749=7\cdot 107\) and \(D=785=5\cdot 157\), both with \(V_L=1\), disproves the general claim \(5\nmid h(L)\) for the two situations mentioned in \cite[Lem. 3.3 (ii) and (iv), p. 204]{Ii} and \cite[Thm. 5 (ii) and (iv), p. 5]{Ky2}, partially also indicated in \cite[Thm. IV (11), p. 481]{Pa}. \end{remark} \begin{theorem} \label{thm:EpsilonGamma} Each of the following infinite sequences of conductors \(f=f_{N/K}\) admits precisely two DPF types of the pure metacyclic fields \(N\) in the associated hexadecuplet with \(m=16\) members. \begin{enumerate} \item \(f^4=5^6\cdot q_1^4q_2^4\) with \(q_i\in\mathbb{P}\), \(q_i\equiv\pm 2\,(\mathrm{mod}\,5)\), both \(q_i\not\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a hexadecuplet with possibly heterogeneous DPF type \((\varepsilon^x,\gamma^y)\), \(x+y=16\), \item \(f^4=5^6\cdot q_1^4q_2^4\) with \(q_i\in\mathbb{P}\), \(q_i\equiv\pm 2\,(\mathrm{mod}\,5)\), only one \(q_i\equiv\pm 7\,(\mathrm{mod}\,25)\) gives rise to a hexadecuplet with possibly heterogeneous DPF type \((\varepsilon^x,\gamma^y)\), \(x+y=16\). \end{enumerate} \end{theorem} \begin{example} \label{exm:EpsilonGamma} It is difficult to find complete hexadecuplets, since their members are spread rather widely.
The smallest hexadecuplet \[(30,60,90,120,150,180,240,270,360,540,600,720,810,1350,1620,3750)=\] \[=(2\cdot 3\cdot 5,\ 2^2\cdot 3\cdot 5,\ 2\cdot 3^2\cdot 5,\ 2^3\cdot 3\cdot 5,\ 2\cdot 3\cdot 5^2,\ 2^2\cdot 3^2\cdot 5,\ 2^4\cdot 3\cdot 5,\ 2\cdot 3^3\cdot 5,\] \[2^3\cdot 3^2\cdot 5,\ 2^2\cdot 3^3\cdot 5,\ 2^3\cdot 3\cdot 5^2,\ 2^4\cdot 3^2\cdot 5,\ 2\cdot 3^4\cdot 5,\ 2\cdot 3^3\cdot 5^2,\ 2^2\cdot 3^4\cdot 5,\ 2\cdot 3\cdot 5^4)\] belonging to the first infinite sequence contains the members \(D=1350,1620,3750\) outside of the range of our systematic computations. We have determined their DPF type separately and thus discovered a heterogeneous hexadecuplet (in the same order) of type \[(\varepsilon^3,\gamma^{13})=(\gamma,\gamma,\gamma,\gamma,\gamma,\varepsilon,\varepsilon,\gamma,\gamma,\gamma,\gamma,\gamma,\gamma,\varepsilon,\gamma,\gamma).\] \end{example} \begin{corollary} \label{cor:EpsilonGamma} The invariants of the similarity classes defined by the two infinite sequences of conductors in Theorem \ref{thm:EpsilonGamma} are given as follows, in the same order. The statements concerning \(5\)-class groups are only conjectural. Each sequence splits into two similarity classes. \\ The classes for \(f^4=5^6\cdot q_1^4q_2^4\), both \(q_i\not\equiv\pm 7\,(\mathrm{mod}\,25)\) are: \begin{equation} \label{eqn:Prototype30Gamma} \begin{aligned} & \mathbf{\lbrack 30\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;2,0,2,16;2,0,0), \\ & \text{type \(\gamma\), }(U,\eta,\zeta;A,I,R)=(2,-,-;3,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,1;6); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype180Epsilon} \begin{aligned} & \mathbf{\lbrack 180\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;2,0,2,16;2,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(1,2,4;5). 
\end{aligned} \end{equation} The classes for \(f^4=5^6\cdot q_1^4q_2^4\), only one \(q_i\equiv\pm 7\,(\mathrm{mod}\,25)\) are: \begin{equation} \label{eqn:Prototype70Gamma} \begin{aligned} & \mathbf{\lbrack 70\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;2,1,1,16;2,0,0), \\ & \text{type \(\gamma\), }(U,\eta,\zeta;A,I,R)=(2,-,-;3,0,0), \text{ and } (V_L,V_M,V_N;E)=(0,0,1;6); \end{aligned} \end{equation} \begin{equation} \label{eqn:Prototype140Epsilon} \begin{aligned} & \mathbf{\lbrack 140\rbrack}, \text{ species \(1\mathrm{a}\), } (e_0;t,u,v,m;n,s_2,s_4)=(6;2,1,1,16;2,0,0), \\ & \text{type \(\varepsilon\), }(U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0), \text{ and } (V_L,V_M,V_N;E)=(1,2,4;5). \end{aligned} \end{equation} Only the pure metacyclic fields \(N\) of type \(\gamma\) associated with \eqref{eqn:Prototype30Gamma} and \eqref{eqn:Prototype70Gamma} are Polya fields. \end{corollary} \begin{proof} (of Theorem \ref{thm:EpsilonGamma} and Corollary \ref{cor:EpsilonGamma}) See \cite[Thm. 10.7]{Ma2a}. \end{proof} \begin{theorem} \label{thm:Polya1or24mod25} A pure metacyclic field \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{\ell})\) with prime radicand \(\ell\equiv\pm 1\,(\mathrm{mod}\,25)\) has a prime conductor \(f=\ell\), and possesses the Polya property, regardless of its DPF type and the complexity of its \(5\)-class group structure. \end{theorem} \begin{proof} This is an immediate consequence of \cite[Thm. 10.5 and Thm. 6.1]{Ma2a}, taking into account that we have the value \(t=1\) for the number of primes dividing the conductor in the present situation, and thus the estimate in \cite[Cor. 4.1]{Ma2a} yields \(1\le A\le\min(3,t)=\min(3,1)=1\). For the Polya property we must have \(A=t=1\), according to \cite[Thm. 10.5]{Ma2a}, which admits the DPF types \(\alpha_1\), \(\alpha_2\), \(\alpha_3\), \(\delta_1\), \(\delta_2\), \(\zeta_1\), \(\zeta_2\) or \(\vartheta\) \cite[Thm. 1.3 and Tbl. 1]{Ma2a}. However, DPF type \(\alpha_3\) is excluded by \cite[Cor. 
4.2]{Ma2a}, since the requirement \(s_2+s_4\ge 2\) cannot be fulfilled in our situation where either \(s_2=0\) and \(s_4=1\) for \(\ell\equiv +1\,(\mathrm{mod}\,25)\) or \(s_2=1\) and \(s_4=0\) for \(\ell\equiv -1\,(\mathrm{mod}\,25)\). \end{proof} \begin{theorem} \label{thm:Polya1or4mod5} A pure metacyclic field \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{\ell})\) with prime radicand \(\ell\equiv\pm 1\,(\mathrm{mod}\,5)\) but \(\ell\not\equiv\pm 1\,(\mathrm{mod}\,25)\) has a composite conductor \(f^4=5^2\cdot\ell^4\), and the following conditions are equivalent: \begin{enumerate} \item \(N\) possesses the Polya property. \item \((\exists\,\alpha\in L=\mathbb{Q}(\sqrt[5]{\ell}))\,N_{L/\mathbb{Q}}(\alpha)=5\). \item The prime ideal \(\mathfrak{p}\in\mathbb{P}_L\) with \(5\mathcal{O}_L=\mathfrak{p}^5\) is principal. \item \(N\) is of DPF type either \(\beta_1\) or \(\beta_2\) or \(\varepsilon\). \end{enumerate} \end{theorem} \begin{proof} This is a consequence of \cite[Thm. 10.5 and Thm. 6.1]{Ma2a}, taking into account that the prime \(5\) is not included in the current definition of the counter \(t\) (with value \(t=1\) in the present situation), and thus the estimate in \cite[Cor. 4.1]{Ma2a} must be replaced by \(1\le A\le\min(3,t+1)=\min(3,2)=2\). For the Polya property we must have \(A=t+1=2\) \cite[Thm. 10.5]{Ma2a}, which determines the DPF types \(\beta_1\), \(\beta_2\), \(\varepsilon\) or \(\eta\) \cite[Thm. 1.3 and Tbl. 1]{Ma2a}. However, DPF type \(\eta\) is excluded by the prime \(\ell\not\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\) dividing the conductor (\cite[Thm. 8.1]{Ma2a}). \end{proof} Inspired by the last two theorems, it is worthwhile to summarize, for each kind of prime radicand, what is known about the possible differential principal factorizations. \begin{theorem} \label{thm:PrimeRadicands} Let \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{D})\) be a pure metacyclic field with prime radicand \(D\in\mathbb{P}\).
\begin{enumerate} \item If \(D=q\) with \(q\equiv\pm 7\,(\mathrm{mod}\,25)\) or \(q=5\), then \(N\) is of type \(\vartheta\). \item If \(D=\ell\) with \(\ell\equiv -1\,(\mathrm{mod}\,25)\), then \(N\) is of one of the types \(\delta_2,\zeta_2,\vartheta\). \item If \(D=\ell\) with \(\ell\equiv +1\,(\mathrm{mod}\,25)\), then \(N\) is of one of the types \(\alpha_1,\alpha_2,\delta_1,\delta_2,\zeta_1,\zeta_2,\vartheta\). \item If \(D=q\) with \(q\equiv\pm 2\,(\mathrm{mod}\,5)\) but \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\), then \(N\) is of type \(\varepsilon\). \item If \(D=\ell\) with \(\ell\equiv -1\,(\mathrm{mod}\,5)\) but \(\ell\not\equiv -1\,(\mathrm{mod}\,25)\), then \(N\) is of one of the types \(\beta_2,\delta_2,\varepsilon\). \item If \(D=\ell\) with \(\ell\equiv +1\,(\mathrm{mod}\,5)\) but \(\ell\not\equiv +1\,(\mathrm{mod}\,25)\), then \(N\) is of one of the types \(\alpha_1,\alpha_2,\beta_1,\beta_2,\delta_1,\delta_2,\varepsilon\). \end{enumerate} A pure metacyclic field with prime radicand can never be of any of the types \(\alpha_3,\gamma,\eta\). \end{theorem} \begin{proof} By making use of the bounds \cite[\S\ 4]{Ma2a} for \(\mathbb{F}_5\)-dimensions of spaces of differential principal factors (DPF), \begin{equation} \label{eqn:Dimensions} \begin{aligned} 1 \le A \le & \min(3,t), \\ 0 \le I \le & \min(2,2(s_2+s_4)), \\ 0 \le R \le & \min(2,4s_4), \end{aligned} \end{equation} we can determine the possible DPF types of pure metacyclic fields \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{D})\) with prime radicands \(D\in\mathbb{P}\). We start with a few general observations. Firstly, if \(D\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\), resp. \(D=5\), is prime, then \(N\) is of Dedekind species \(2\), resp. \(1\mathrm{a}\), with prime power conductor \(f=D\), resp. \(f^4=5^6\), and \(t=1\), whence \(A=1\) and the types \(\beta_1,\beta_2,\gamma,\varepsilon,\eta\) with \(A\ge 2\) are forbidden. 
However, if \(D\not\equiv\pm 1,\pm 7\,(\mathrm{mod}\,25)\) and \(D\ne 5\) is prime, then the congruence requirement eliminates the types \(\zeta_1,\zeta_2,\eta,\vartheta\), the field \(N\) is of Dedekind species \(1\mathrm{b}\) with composite conductor \(f^4=5^2\cdot D^4\), and \(t=2\), whence \(1\le A\le 2\) and type \(\gamma\) with \(A=3\) is excluded. So, the types \(\gamma\) and \(\eta\) are generally forbidden for prime radicands. Secondly, for a prime radicand \(D\equiv\pm 1\,(\mathrm{mod}\,5)\) which splits in \(M\), the space of radicals \(\Delta=\langle\sqrt[5]{D}\rangle\) is a \(1\)-dimensional subspace of absolute DPF contained in the \(2\)-dimensional space \(\Delta\oplus\Delta^\prime\) of differential factors generated by the two prime ideals of \(M\) over \(D\). Consequently, in this special situation there arises an additional constraint \(I\le 1\) for the dimension of the space of intermediate DPF, which must be contained in the \(1\)-dimensional complement \(\Delta^\prime\). This generally excludes type \(\alpha_3\) with \(I=2\) for prime radicands. \begin{enumerate} \item If \(D=q\) with \(q\equiv\pm 7\,(\mathrm{mod}\,25)\), then \(t=1\), \(s_2=s_4=0\), and thus \(A=1\), \(I=R=0\). These conditions eliminate the types \(\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\gamma,\delta_1,\delta_2,\varepsilon,\zeta_1,\zeta_2,\eta\) with either \(A\ge 2\) or \(I\ge 1\) or \(R\ge 1\), and only type \(\vartheta\) remains admissible. \item If \(D=\ell\) with \(\ell\equiv -1\,(\mathrm{mod}\,25)\), then \(t=s_2=1\), \(s_4=0\), and thus \(A=1\), \(0\le I\le 1\), \(R=0\), whence the types \(\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\gamma,\delta_1,\varepsilon,\zeta_1,\eta\) with either \(A\ge 2\) or \(I=2\) or \(R\ge 1\) are excluded, and only the types \(\delta_2,\zeta_2,\vartheta\) remain admissible.
\item If \(D=\ell\) with \(\ell\equiv +1\,(\mathrm{mod}\,25)\), then \(t=s_4=1\), \(s_2=0\), and thus \(A=1\), \(0\le I\le 1\), \(0\le R\le 2\), whence the types \(\alpha_3,\beta_1,\beta_2,\gamma,\varepsilon,\eta\) with either \(A\ge 2\) or \(I=2\) are excluded, and only the types \(\alpha_1,\alpha_2,\delta_1,\delta_2,\zeta_1,\zeta_2,\vartheta\) remain admissible. \item If \(D=q\) with \(q\equiv\pm 2\,(\mathrm{mod}\,5)\) but \(q\not\equiv\pm 7\,(\mathrm{mod}\,25)\), then \(t=2\), \(s_2=s_4=0\), and thus \(1\le A\le 2\), \(I=R=0\). These conditions eliminate the types \(\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\gamma,\delta_1,\delta_2,\zeta_1,\zeta_2\) with either \(A=3\) or \(I\ge 1\) or \(R\ge 1\), and only the types \(\varepsilon,\eta,\vartheta\) remain admissible. However, the congruence requirement modulo \(25\) excludes the types \(\eta,\vartheta\), and only type \(\varepsilon\) is possible. \item If \(D=\ell\) with \(\ell\equiv -1\,(\mathrm{mod}\,5)\) but \(\ell\not\equiv -1\,(\mathrm{mod}\,25)\), then \(t=2\), \(s_2=1\), \(s_4=0\), and thus \(1\le A\le 2\), \(0\le I\le 1\), \(R=0\), whence the types \(\alpha_1,\alpha_2,\alpha_3,\beta_1,\gamma,\delta_1,\zeta_1\) with either \(A=3\) or \(I=2\) or \(R\ge 1\) are forbidden. The types \(\zeta_2,\eta,\vartheta\) are excluded by congruence conditions, and only the types \(\beta_2,\delta_2,\varepsilon\) remain admissible. \item If \(D=\ell\) with \(\ell\equiv +1\,(\mathrm{mod}\,5)\) but \(\ell\not\equiv +1\,(\mathrm{mod}\,25)\), then \(t=2\), \(s_2=0\), \(s_4=1\), and thus \(1\le A\le 2\), \(0\le I\le 1\), \(0\le R\le 2\), whence the types \(\alpha_3,\gamma\) with either \(A=3\) or \(I=2\) are forbidden. The types \(\zeta_1,\zeta_2,\eta,\vartheta\) are excluded by congruence conditions, and only the types \(\alpha_1,\alpha_2,\beta_1,\beta_2,\delta_1,\delta_2,\varepsilon\) remain admissible.
\qedhere \end{enumerate} \end{proof} \begin{example} \label{exm:PrimeRadicands} Concerning numerical realizations of Theorem \ref{thm:PrimeRadicands}, we refer to Corollary \ref{cor:InfSimCls} for the parametrized infinite sequences \(\mathbf{\lbrack 7\rbrack}\) and \(\mathbf{\lbrack 2\rbrack}\), which realize items (1) and (4). (See also Tables \ref{tbl:Theta} and \ref{tbl:Epsilon} for the types \(\vartheta\) and \(\varepsilon\).) In all the other cases, there occurs \textit{type splitting}: The similarity class \(\mathbf{\lbrack 149\rbrack}\) partially realizes item (2). (See Table \ref{tbl:Delta2} for the type \(\delta_2\).) Outside the range of our systematic investigations, we found that the similarity class \(\mathbf{\lbrack 1049\rbrack}\) realizes type \(\zeta_2\). Realizations of the type \(\vartheta\) are unknown up to now. The similarity classes \(\mathbf{\lbrack 401\rbrack}\), \(\mathbf{\lbrack 151\rbrack}\) and \(\mathbf{\lbrack 101\rbrack}\) partially realize item (3). (See Tables \ref{tbl:Alpha1}, \ref{tbl:Alpha2} and \ref{tbl:Zeta1} for the types \(\alpha_1\), \(\alpha_2\) and \(\zeta_1\).) Outside the range of our systematic investigations, we found that the similarity class \(\mathbf{\lbrack 1151\rbrack}\), resp. \(\mathbf{\lbrack 3251\rbrack}\), realizes type \(\delta_1\), resp. \(\delta_2\). Realizations of the types \(\zeta_2\) and \(\vartheta\) are unknown up to now. The similarity classes \(\mathbf{\lbrack 139\rbrack}\), \(\mathbf{\lbrack 19\rbrack}\) and \(\mathbf{\lbrack 379\rbrack}\) completely realize item (5). (See Tables \ref{tbl:Beta2}, \ref{tbl:Delta2} and \ref{tbl:Epsilon} for the types \(\beta_2\), \(\delta_2\) and \(\varepsilon\).) The similarity classes \(\mathbf{\lbrack 31\rbrack}\), \(\mathbf{\lbrack 11\rbrack}\), \(\mathbf{\lbrack 191\rbrack}\) and \(\mathbf{\lbrack 211\rbrack}\) partially realize item (6).
(See Tables \ref{tbl:Alpha1}, \ref{tbl:Alpha2}, \ref{tbl:Beta1} and \ref{tbl:Delta1} for the types \(\alpha_1\), \(\alpha_2\), \(\beta_1\) and \(\delta_1\).) Realizations of the types \(\beta_2\), \(\delta_2\) and \(\varepsilon\) are unknown up to now. \end{example} \subsection{Non-elementary \(5\)-class groups} \label{ss:NonElem} \noindent Although most of the \(5\)-class groups of pure metacyclic fields \(N\), maximal real subfields \(M\) and pure quintic subfields \(L\) are elementary abelian, there occur sparse examples with non-elementary structure. For instance, we have only \(8\) occurrences within the range \(2\le D<10^3\) of our computations: \\ (1) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^3\), \((V_L,V_M,V_N;E)=(1,2,5*;6)\) for \(D=259=7\cdot 37\) (type \(\gamma\)), \\ (2) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^7\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5^3\), \((V_L,V_M,V_N;E)=(3,5*,9*;2)\) for \(D=281\) prime \\ (type \(\alpha_1\)), \\ (3) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^5\), \((V_L,V_M,V_N;E)=(2,3,7*;4)\) for \(D=465=3\cdot 5\cdot 31\) (type \(\beta_2\)), \\ (4) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^5\), \((V_L,V_M,V_N;E)=(2,3,7*;4)\) for \(D=473=11\cdot 43\) (type \(\beta_2\)), \\ (5) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^6\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5^2\), \(\mathrm{Cl}_5(L)\simeq C_{25}\), \((V_L,V_M,V_N;E)=(2*,4*,8*;5)\) \\ for \(D=502=2\cdot 251\) (type \(\beta_1\)), \\ (6) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^5\), \((V_L,V_M,V_N;E)=(2,3,7*;4)\) for \(D=590=2\cdot 5\cdot 59\) (type \(\beta_2\)), \\ (7) \(\mathrm{Cl}_5(N)\simeq C_{25}^2\times C_5^4\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5^2\), \((V_L,V_M,V_N;E)=(2,4*,8*;5)\) for \(D=620=2^2\cdot 5\cdot 31\) \\ (type \(\beta_1\)), \\ (8) \(\mathrm{Cl}_5(N)\simeq C_{25}\times C_5^4\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5\), \(\mathrm{Cl}_5(L)\simeq C_{25}\), \((V_L,V_M,V_N;E)=(2*,3*,6*;3)\) \\ for \(D=955=5\cdot 191\) (type \(\alpha_2\)). 
However, outside the range of systematic computations, we additionally found: \\ (a) \(\mathrm{Cl}_5(N)\simeq C_{25}^3\times C_5\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5\), \(\mathrm{Cl}_5(L)\simeq C_{25}\), \((V_L,V_M,V_N;E)=(2*,3*,7*;4)\) \\ for \(D=1049\) prime (type \(\zeta_2\)), \\ (b) \(\mathrm{Cl}_5(N)\simeq C_{25}^2\times C_5^6\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5^3\), \((V_L,V_M,V_N;E)=(3,5*,10*;3)\) for \(D=3001\) prime \\ (type \(\alpha_2\)), \\ (c) \(\mathrm{Cl}_5(N)\simeq C_{25}^5\times C_5^4\), \(\mathrm{Cl}_5(M)\simeq C_{25}^2\times C_5^3\), \(\mathrm{Cl}_5(L)\simeq C_{25}\times C_5^2\), \((V_L,V_M,V_N;E)=(4*,7*,14*;3)\) \\ for \(D=3251\) prime (type \(\delta_2\)), \\ (d) \(\mathrm{Cl}_5(N)\simeq C_{25}^2\times C_5^2\), \(\mathrm{Cl}_5(M)\simeq C_{25}\times C_5\), \(\mathrm{Cl}_5(L)\simeq C_{25}\), \((V_L,V_M,V_N;E)=(2*,3*,6*;3)\) \\ for \(D=5849\) prime (type \(\delta_2\)). We point out that in all of the last four examples, the normal field \(N\) is a Polya field, since the radicands \(D\) are primes \(\ell\equiv\pm 1\,(\mathrm{mod}\,25)\), the conductors are primes \(f=\ell\), and thus all primitive ambiguous ideals are principal, generated by the radical \(\delta=\sqrt[5]{D}\) and its powers. Consequently, there seems to be no upper bound for the complexity of \(5\)-class groups \(\mathrm{Cl}_5(N)\) of pure metacyclic Polya fields \(N\) in Theorem \ref{thm:Polya1or24mod25}. 
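The congruence pattern behind the last four Polya-field examples is easy to double-check mechanically. The following minimal sketch (plain Python, using only the trial-division primality test it defines; the list of radicands is copied from items (a)--(d) above) verifies that \(1049\), \(3001\), \(3251\) and \(5849\) are indeed primes \(\ell\equiv\pm 1\,(\mathrm{mod}\,25)\):

```python
# Check that the radicands of the non-elementary Polya-field examples
# (a)-(d) above are primes l = +1 or -1 (mod 25), as claimed.

def is_prime(n: int) -> bool:
    """Trial division; entirely sufficient for four-digit radicands."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

polya_radicands = [1049, 3001, 3251, 5849]

for ell in polya_radicands:
    assert is_prime(ell)
    assert ell % 25 in (1, 24)  # residue 24 means l = -1 (mod 25)
    print(ell, ell % 25)
```

Running the sketch prints residues \(24, 1, 1, 24\), i.e.\ \(1049\equiv 5849\equiv -1\) and \(3001\equiv 3251\equiv +1\) modulo \(25\), in agreement with the hypothesis of Theorem \ref{thm:Polya1or24mod25}.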
\subsection{Refinement of DPF types by similarity classes} \label{ss:Refinement} \noindent Based on the definition of similarity classes and prototypes in \S\ \ref{ss:Prototypes}, on the explicit listing of all prototypes in the range between \(2\) and \(10^3\) in the Tables \ref{tbl:Prototypes1} -- \ref{tbl:Prototypes4}, and on the theoretical foundations in \S\ \ref{ss:Theorems}, we are now in a position to establish the intended refinement of our \(13\) differential principal factorization types into similarity classes in the Tables \ref{tbl:Alpha1} -- \ref{tbl:Theta}, as far as the range of our computations for normalized radicands \(2\le D<10^3\) is concerned. The cardinalities \(\mathbf{\lvert M\rvert}\) refine the statistical evaluation in Table \ref{tbl:Statistics}. DPF types are characterized by the multiplet \((U,\eta,\zeta;A,I,R)\), refined Dedekind species S by the multiplet \((e_0;t,u,v,m;n,s_2,s_4)\), and \(5\)-class groups by the multiplet \((V_L,V_M,V_N;E)\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\alpha_1\), \((U,\eta,\zeta;A,I,R)=(2,-,-;1,0,2)\): \(5\) similarity classes} \label{tbl:Alpha1} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. 
& S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \(\mathbf{31}\) & \(2\) \\ 2 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(0\) & \(1\) & \(2\) & \(3\) & \(5\) & \(2\) & \(\mathbf{155}\) & \(2\) \\ 3 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(3\) & \(5*\) & \(9*\) & \(2\) & \(\mathbf{281}\) & \(1\) \\ 4 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(0\) & \(0\) & \(2\) & \(3\) & \(5\) & \(9\) & \(2\) & \(\mathbf{341}\) & \(2\) \\ 5 & 2 & \(0\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(0\) & \(0\) & \(1'\)& \(2\) & \(3\) & \(5\) & \(2\) & \(\mathbf{401}\) & \(2\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\alpha_1\) splits into \(3\) similarity classes in the ground state \((V_L,V_M,V_N)=(2,3,5)\) and \(2\) similarity classes in the first excited state \((V_L,V_M,V_N)=(3,5,9)\). Summing up the partial frequencies \(6+3\) of these states in Table \ref{tbl:Alpha1} yields the modest absolute frequency \(9\) of type \(\alpha_1\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\alpha_1\) is restricted to the single value \(E=2\). Type \(\alpha_1\) is the unique type with \(2\)-dimensional relative principal factorization, \(R=2\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\alpha_2\), \((U,\eta,\zeta;A,I,R)=(2,-,-;1,1,1)\): \(22\) similarity classes} \label{tbl:Alpha2} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. 
& S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{11}\) & \(14\) \\ 2 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{33}\) & \(8\) \\ 3 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{55}\) & \(6\) \\ 4 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{82}\) & \(15\) \\ 5 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{123}\) & \(2\) \\ 6 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{131}\) & \(6\) \\ 7 & 2 & \(0\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(0\) & \(0\) & \(1'\)& \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{151}\) & \(3\) \\ 8 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(0\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{275}\) & \(2\) \\ 9 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(1\) & \(0\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{287}\) & \(1\) \\ 10 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{418}\) & \(1\) \\ 11 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(0\) & \(0\) & \(2\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{451}\) & \(1\) \\ 12 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{550}\) & \(1\) \\ 13 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(0\) & \(0\) & \(2\) & \(3\) & \(4\) & \(8\) & \(1\) & \(\mathbf{671}\) & \(1\) \\ 14 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(1\) & \(0\) & \(2\) & \(2\) & \(3\) & \(6\) & 
\(3\) & \(\mathbf{682}\) & \(1\) \\ 15 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(3\) & \(4\) & \(8\) & \(1\) & \(\mathbf{691}\) & \(1\) \\ 16 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{710}\) & \(4\) \\ 17 & 2 & \(0\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(0\) & \(0\) & \(1'\)& \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{751}\) & \(1\) \\ 18 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(8\) & \(1\) & \(\mathbf{779}\) & \(1\) \\ 19 & 1b & \(2\) & \(2\) & \(1'\)& \(1\) & \(4\) & \(1\) & \(0\) & \(1'\)& \(2\) & \(2\) & \(4\) & \(1\) & \(\mathbf{808}\) & \(2\) \\ 20 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(2\) & \(0\) & \(1\) & \(3\) & \(4\) & \(8\) & \(1\) & \(\mathbf{861}\) & \(1\) \\ 21 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(0\) & \(1\) & \(2*\) & \(3*\) & \(6*\) & \(3\) & \(\mathbf{955}\) & \(1\) \\ 22 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{982}\) & \(2\) \\ \hline \end{tabular} \end{center} \end{table} The logarithmic subfield unit index of DPF type \(\alpha_2\) can take two values, either \(E=3\) or \(E=1\). Type \(\alpha_2\) with \(E=3\) splits into \(5\) similarity classes in the ground state \((V_L,V_M,V_N)=(1,1,2)\) and \(6\) similarity classes in the first excited state \((V_L,V_M,V_N)=(2,3,6)\). Type \(\alpha_2\) with \(E=1\) splits into \(7\) similarity classes in the ground state \((V_L,V_M,V_N)=(2,2,4)\) and \(4\) similarity classes in the first excited state \((V_L,V_M,V_N)=(3,4,8)\). Summing up the partial frequencies \(40+7\), resp. \(24+4\), of these states in Table \ref{tbl:Alpha2} yields the considerable absolute frequency \(75\) of type \(\alpha_2\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. 
Type \(\alpha_2\) is the unique type with mixed intermediate and relative principal factorization, \(I=R=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\alpha_3\), \((U,\eta,\zeta;A,I,R)=(2,-,-;1,2,0)\): \(5\) similarity classes} \label{tbl:Alpha3} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(0\) & \(1\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \(\mathbf{319}\) & \(3\) \\ 2 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(0\) & \(2\) & \(0\) & \(2\) & \(2\) & \(5\) & \(2\) & \(\mathbf{551}\) & \(1\) \\ 3 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(9\) & \(2\) & \(\mathbf{627}\) & \(1\) \\ 4 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \(\mathbf{649}\) & \(2\) \\ 5 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(1\) & \(1\) & \(1\) & \(2\) & \(2\) & \(5\) & \(2\) & \(\mathbf{957}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\alpha_3\) splits into \(4\) similarity classes in the ground state \((V_L,V_M,V_N)=(2,2,5)\) and \(1\) similarity class in the first excited state \((V_L,V_M,V_N)=(3,4,9)\). Summing up the partial frequencies \(7+1\) of these states in Table \ref{tbl:Alpha3} yields the modest absolute frequency \(8\) of type \(\alpha_3\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\alpha_3\) is restricted to the unique value \(E=2\). Type \(\alpha_3\) is the unique type with \(2\)-dimensional intermediate principal factorization, \(I=2\). 
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\beta_1\), \((U,\eta,\zeta;A,I,R)=(2,-,-;2,0,1)\): \(10\) similarity classes} \label{tbl:Beta1} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(2\) & \(0\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{186}\) & \(7\) \\ 2 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{191}\) & \(3\) \\ 3 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{253}\) & \(4\) \\ 4 & 1b & \(2\) & \(2\) & \(1'\)& \(1\) & \(4\) & \(1\) & \(0\) & \(1'\)& \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{302}\) & \(2\) \\ 5 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{482}\) & \(2\) \\ 6 & 1b & \(2\) & \(2\) & \(1'\)& \(1\) & \(4\) & \(1\) & \(0\) & \(1'\)& \(2*\) & \(4*\) & \(8*\) & \(5\) & \(\mathbf{502}\) & \(1\) \\ 7 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{517}\) & \(1\) \\ 8 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4*\) & \(8*\) & \(5\) & \(\mathbf{620}\) & \(1\) \\ 9 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(2\) & \(0\) & \(1\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{693}\) & \(1\) \\ 10 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{825}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} The logarithmic subfield unit index of DPF type \(\beta_1\) can take two values, either \(E=3\) or \(E=5\). 
Type \(\beta_1\) with \(E=3\) consists of \(3\) similarity classes in the ground state \((V_L,V_M,V_N)=(2,3,6)\). Type \(\beta_1\) with \(E=5\) splits into \(5\) similarity classes in the ground state \((V_L,V_M,V_N)=(1,2,4)\) and \(2\) similarity classes in the first excited state \((V_L,V_M,V_N)=(2,4,8)\). Summing up the partial frequencies \(9\), resp. \(12+2\), of these states in Table \ref{tbl:Beta1} yields the modest absolute frequency \(23\) of type \(\beta_1\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. Type \(\beta_1\) is the unique type with mixed absolute and relative principal factorization, \(A=2\) and \(R=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\beta_2\), \((U,\eta,\zeta;A,I,R)=(2,-,-;2,1,0)\): \(25\) similarity classes} \label{tbl:Beta2} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{22}\) & \(35\) \\ 2 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{38}\) & \(44\) \\ 3 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{77}\) & \(7\) \\ 4 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{110}\) & \(11\) \\ 5 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(2\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{132}\) & \(5\) \\ 6 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{133}\) & \(7\) \\ 7 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{139}\) & \(4\) 
\\ 8 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(2\) & \(0\) & \(1\) & \(2\) & \(3\) & \(7\) & \(4\) & \(\mathbf{154}\) & \(3\) \\ 9 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{174}\) & \(3\) \\ 10 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{190}\) & \(9\) \\ 11 & 1b & \(2\) & \(2\) & \(1'\)& \(1\) & \(4\) & \(1\) & \(0\) & \(1'\)& \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{202}\) & \(6\) \\ 12 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(7\) & \(4\) & \(\mathbf{209}\) & \(2\) \\ 13 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(2\) & \(0\) & \(1\) & \(2\) & \(3\) & \(7\) & \(4\) & \(\mathbf{286}\) & \(3\) \\ 14 & 1a & \(6\) & \(2\) & \(1\) & \(1\) & \(16\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{385}\) & \(1\) \\ 15 & 1b & \(2\) & \(2\) & \(1'\)& \(1\) & \(4\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{398}\) & \(7\) \\ 16 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{399}\) & \(2\) \\ 17 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(0\) & \(1\) & \(2\) & \(3\) & \(7*\) & \(4\) & \(\mathbf{465}\) & \(1\) \\ 18 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(1\) & \(0\) & \(1\) & \(2\) & \(3\) & \(7*\) & \(4\) & \(\mathbf{473}\) & \(1\) \\ 19 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(2\) & \(0\) & \(1\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{574}\) & \(3\) \\ 20 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(1\) & \(0\) & \(2\) & \(3\) & \(7*\) & \(4\) & \(\mathbf{590}\) & \(1\) \\ 21 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(2\) & \(1\) & \(0\) & \(2\) & \(3\) & \(7\) & \(4\) & \(\mathbf{609}\) & \(1\) \\ 22 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(1\) & \(1\) & \(1\) & \(2\) & \(3\) & \(7\) & \(4\) & 
\(\mathbf{638}\) & \(2\) \\ 23 & 1a & \(6\) & \(2\) & \(1\) & \(1\) & \(16\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{665}\) & \(1\) \\ 24 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{893}\) & \(1\) \\ 25 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(1\) & \(0\) & \(2\) & \(2\) & \(3\) & \(7\) & \(4\) & \(\mathbf{902}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\beta_2\) splits into \(16\) similarity classes in the ground state \((V_L,V_M,V_N)=(1,1,3)\) and \(9\) similarity classes in the first excited state \((V_L,V_M,V_N)=(2,3,7)\). Summing up the partial frequencies \(146+15\) of these states in Table \ref{tbl:Beta2} yields the high absolute frequency \(161\) of type \(\beta_2\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\beta_2\) is restricted to the unique value \(E=4\). Type \(\beta_2\) is the unique type with mixed absolute and intermediate principal factorization, \(A=2\) and \(I=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\gamma\), \((U,\eta,\zeta;A,I,R)=(2,-,-;3,0,0)\): \(29\) similarity classes} \label{tbl:Gamma} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. 
& S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{6}\) & \(77\) \\ 2 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{14}\) & \(44\) \\ 3 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{30}\) & \(37\) \\ 4 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(3\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{42}\) & \(22\) \\ 5 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(2\) & \(0\) & \(1\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{66}\) & \(17\) \\ 6 & 1a & \(6\) & \(2\) & \(1\) & \(1\) & \(16\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{70}\) & \(14\) \\ 7 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(3\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{78}\) & \(37\) \\ 8 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(2\) & \(1\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{114}\) & \(20\) \\ 9 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(3\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{126}\) & \(6\) \\ 10 & 1a & \(6\) & \(3\) & \(1\) & \(2\) & \(64\) & \(3\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{210}\) & \(5\) \\ 11 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(2\) & \(0\) & \(1\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{231}\) & \(5\) \\ 12 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{247}\) & \(2\) \\ 13 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{259}\) & \(1\) \\ 14 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(2\) & \(1\) & \(0\) & \(1\) & \(2\) & 
\(5\) & \(6\) & \(\mathbf{266}\) & \(3\) \\ 15 & 2 & \(0\) & \(3\) & \(0\) & \(3\) & \(3\) & \(3\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{276}\) & \(10\) \\ 16 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{285}\) & \(1\) \\ 17 & 1a & \(6\) & \(3\) & \(0\) & \(3\) & \(64\) & \(2\) & \(0\) & \(1\) & \(2\) & \(4\) & \(9\) & \(6\) & \(\mathbf{330}\) & \(2\) \\ 18 & 1a & \(6\) & \(3\) & \(0\) & \(3\) & \(64\) & \(3\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{390}\) & \(4\) \\ 19 & 1b & \(2\) & \(4\) & \(1\) & \(3\) & \(52\) & \(3\) & \(0\) & \(1\) & \(2\) & \(4\) & \(9\) & \(6\) & \(\mathbf{462}\) & \(1\) \\ 20 & 1b & \(2\) & \(4\) & \(1\) & \(3\) & \(52\) & \(4\) & \(0\) & \(0\) & \(2\) & \(4\) & \(9\) & \(6\) & \(\mathbf{546}\) & \(3\) \\ 21 & 1a & \(6\) & \(3\) & \(0\) & \(3\) & \(64\) & \(2\) & \(1\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{570}\) & \(2\) \\ 22 & 1b & \(2\) & \(3\) & \(2\) & \(1\) & \(16\) & \(3\) & \(0\) & \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{602}\) & \(2\) \\ 23 & 1b & \(2\) & \(3\) & \(1'\)& \(2\) & \(12\) & \(2\) & \(0\) & \(1'\)& \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{606}\) & \(2\) \\ 24 & 1a & \(6\) & \(3\) & \(0\) & \(3\) & \(64\) & \(2\) & \(0\) & \(1\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{660}\) & \(2\) \\ 25 & 1a & \(6\) & \(3\) & \(1\) & \(2\) & \(64\) & \(2\) & \(0\) & \(1\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{770}\) & \(1\) \\ 26 & 1b & \(2\) & \(4\) & \(1\) & \(3\) & \(52\) & \(3\) & \(1\) & \(0\) & \(2\) & \(4\) & \(9\) & \(6\) & \(\mathbf{798}\) & \(1\) \\ 27 & 1b & \(2\) & \(4\) & \(0\) & \(4\) & \(51\) & \(3\) & \(0\) & \(1\) & \(2\) & \(4\) & \(9\) & \(6\) & \(\mathbf{858}\) & \(1\) \\ 28 & 1b & \(2\) & \(3\) & \(1'\)& \(2\) & \(12\) & \(2\) & \(1'\)& \(0\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{894}\) & \(1\) \\ 29 & 2 & \(0\) & \(4\) & \(1\) & \(3\) & \(12\) & \(3\) & \(0\) & 
\(1\) & \(1\) & \(2\) & \(5\) & \(6\) & \(\mathbf{924}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\gamma\) splits into \(6\) similarity classes in the ground state \((V_L,V_M,V_N)=(0,0,1)\), \(18\) similarity classes in the first excited state \((V_L,V_M,V_N)=(1,2,5)\), and \(5\) similarity classes in the second excited state \((V_L,V_M,V_N)=(2,4,9)\). Summing up the partial frequencies \(188+128+8\) of these states in Table \ref{tbl:Gamma} yields the maximal absolute frequency \(324\) of type \(\gamma\) among all \(13\) types in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\gamma\) is restricted to the unique value \(E=6\). Type \(\gamma\) is the unique type with \(3\)-dimensional absolute principal factorization, \(A=3\). However, the \(1\)-dimensional subspace \(\Delta\) is formed by radicals, and only the complementary \(2\)-dimensional subspace is non-trivial. \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\delta_1\), \((U,\eta,\zeta;A,I,R)=(1,\times,-;1,0,1)\): \(3\) similarity classes} \label{tbl:Delta1} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(3\) & \(5\) & \(9\) & \(2\) & \(\mathbf{211}\) & \(1\) \\ 2 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \(\mathbf{421}\) & \(5\) \\ 3 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(4\) & \(\mathbf{843}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} The logarithmic subfield unit index of DPF type \(\delta_1\) can take two values, either \(E=4\) or \(E=2\). 
Type \(\delta_1\) with \(E=4\) splits into \(2\) similarity classes in the ground state \((V_L,V_M,V_N)=(1,2,3)\). Type \(\delta_1\) with \(E=2\) consists of \(1\) similarity class in the ground state \((V_L,V_M,V_N)=(3,5,9)\). Summing up the partial frequencies \(6+1\) of these states in Table \ref{tbl:Delta1} yields the modest absolute frequency \(7\) of type \(\delta_1\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. Type \(\delta_1\) is a type with \(1\)-dimensional relative principal factorization, \(R=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\delta_2\), \((U,\eta,\zeta;A,I,R)=(1,\times,-;1,1,0)\): \(5\) similarity classes} \label{tbl:Delta2} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{19}\) & \(27\) \\ 2 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{57}\) & \(10\) \\ 3 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{95}\) & \(9\) \\ 4 & 2 & \(0\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(0\) & \(1'\)& \(0\) & \(1\) & \(1\) & \(2\) & \(3\) & \(\mathbf{149}\) & \(6\) \\ 5 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(1\) & \(0\) & \(2\) & \(3\) & \(6\) & \(3\) & \(\mathbf{377}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\delta_2\) splits into \(4\) similarity classes in the ground state \((V_L,V_M,V_N)=(1,1,2)\) and \(1\) similarity class in the first excited state \((V_L,V_M,V_N)=(2,3,6)\). 
Summing up the partial frequencies \(52+1\) of these states in Table \ref{tbl:Delta2} yields the considerable absolute frequency \(53\) of type \(\delta_2\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\delta_2\) is restricted to the unique value \(E=3\). Type \(\delta_2\) is a type with \(1\)-dimensional intermediate principal factorization, \(I=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\varepsilon\), \((U,\eta,\zeta;A,I,R)=(1,\times,-;2,0,0)\): \(22\) similarity classes} \label{tbl:Epsilon} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5\) & \(\mathbf{2}\) & \(71\) \\ 2 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5\) & \(\mathbf{10}\) & \(31\) \\ 3 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5\) & \(\mathbf{18}\) & \(37\) \\ 4 & 1a & \(6\) & \(2\) & \(1\) & \(1\) & \(16\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{140}\) & \(5\) \\ 5 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{141}\) & \(19\) \\ 6 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{171}\) & \(6\) \\ 7 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{180}\) & \(5\) \\ 8 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{182}\) & \(1\) \\ 9 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) 
& \(4\) & \(5\) & \(\mathbf{203}\) & \(2\) \\ 10 & 2 & \(0\) & \(2\) & \(0\) & \(2\) & \(1\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{218}\) & \(1\) \\ 11 & 1b & \(2\) & \(3\) & \(1\) & \(2\) & \(12\) & \(3\) & \(0\) & \(0\) & \(2\) & \(4\) & \(8\) & \(5\) & \(\mathbf{273}\) & \(4\) \\ 12 & 1a & \(6\) & \(2\) & \(0\) & \(2\) & \(16\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{290}\) & \(2\) \\ 13 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{298}\) & \(2\) \\ 14 & 1b & \(2\) & \(2\) & \(1\) & \(1\) & \(4\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{329}\) & \(7\) \\ 15 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(2\) & \(1\) & \(0\) & \(2\) & \(4\) & \(8\) & \(5\) & \(\mathbf{348}\) & \(2\) \\ 16 & 1b & \(2\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{379}\) & \(1\) \\ 17 & 1b & \(2\) & \(2\) & \(0\) & \(2\) & \(3\) & \(1\) & \(0\) & \(1\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{422}\) & \(6\) \\ 18 & 2 & \(0\) & \(3\) & \(1\) & \(2\) & \(4\) & \(2\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{532}\) & \(1\) \\ 19 & 1a & \(6\) & \(1\) & \(0\) & \(1\) & \(4\) & \(0\) & \(1\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{695}\) & \(1\) \\ 20 & 1b & \(2\) & \(3\) & \(0\) & \(3\) & \(13\) & \(3\) & \(0\) & \(0\) & \(2\) & \(4\) & \(8\) & \(5\) & \(\mathbf{702}\) & \(2\) \\ 21 & 2 & \(0\) & \(2\) & \(2\) & \(0\) & \(4\) & \(2\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{749}\) & \(1\) \\ 22 & 1a & \(6\) & \(1\) & \(1\) & \(0\) & \(4\) & \(1\) & \(0\) & \(0\) & \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{785}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\varepsilon\) splits into \(3\) similarity classes in the ground state \((V_L,V_M,V_N)=(0,0,0)\), \(16\) similarity classes in the first excited state 
\((V_L,V_M,V_N)=(1,2,4)\), and \(3\) similarity classes in the second excited state \((V_L,V_M,V_N)=(2,4,8)\). Summing up the partial frequencies \(139+61+8\) of these states in Table \ref{tbl:Epsilon} yields the high absolute frequency \(208\) of type \(\varepsilon\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. The logarithmic subfield unit index of type \(\varepsilon\) is restricted to the unique value \(E=5\). Type \(\varepsilon\) is a type with \(2\)-dimensional absolute principal factorization, \(A=2\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{No splitting of type \(\zeta_1\), \((U,\eta,\zeta;A,I,R)=(1,-,\times;1,0,1)\): \(1\) similarity class} \label{tbl:Zeta1} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 2 & \(0\) & \(1\) & \(1'\)& \(0\) & \(1\) & \(0\) & \(0\) & \(1'\)& \(1\) & \(2\) & \(4\) & \(5\) & \(\mathbf{101}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} The logarithmic subfield unit index of DPF type \(\zeta_1\) is restricted to the unique value \(E=5\). Type \(\zeta_1\) consists of \(1\) similarity class in the ground state \((V_L,V_M,V_N)=(1,2,4)\). The frequency \(1\) of this state in Table \ref{tbl:Zeta1} coincides with the negligible absolute frequency \(1\) of type \(\zeta_1\) in the range \(2\le D<10^3\), as given in Table \ref{tbl:Statistics}. Type \(\zeta_1\) is a type with \(1\)-dimensional relative principal factorization, \(R=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\zeta_2\), \((U,\eta,\zeta;A,I,R)=(1,-,\times;1,1,0)\): \(3\) similarity classes} \label{tbl:Zeta2} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. 
& S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1a & \(6\) & \(1\) & \(1'\)& \(0\) & \(4\) & \(0\) & \(0\) & \(1'\)& \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{505}\) & \(2\) \\ 2 & 2 & \(0\) & \(2\) & \(2'\)& \(0\) & \(4\) & \(1\) & \(0\) & \(1'\)& \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{707}\) & \(1\) \\ 3 & 1a & \(6\) & \(1\) & \(1\) & \(0\) & \(4\) & \(0\) & \(1\) & \(0\) & \(1\) & \(1\) & \(3\) & \(4\) & \(\mathbf{745}\) & \(2\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\zeta_2\) consists of \(3\) similarity classes. The modest absolute frequency \(5\) of type \(\zeta_2\) in the range \(2\le D<10^3\), given in Table \ref{tbl:Statistics}, is the sum \(2+1+2\) of partial frequencies in Table \ref{tbl:Zeta2}. Type \(\zeta_2\) only occurs with logarithmic subfield unit index \(E=4\). It is a type with \(1\)-dimensional intermediate principal factorization, \(I=1\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\eta\), \((U,\eta,\zeta;A,I,R)=(1,-,\times;2,0,0)\): \(2\) similarity classes} \label{tbl:Eta} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1a & \(6\) & \(1\) & \(1\) & \(0\) & \(4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{35}\) & \(6\) \\ 2 & 2 & \(0\) & \(2\) & \(2\) & \(0\) & \(4\) & \(2\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(6\) & \(\mathbf{301}\) & \(1\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\eta\) splits in \(2\) similarity classes, \(\lbrack\mathbf{35}\rbrack\) and \(\lbrack\mathbf{301}\rbrack\). 
The modest absolute frequency \(7\) of type \(\eta\) in the range \(2\le D<10^3\), given in Table \ref{tbl:Statistics}, is the sum \(6+1\) of partial frequencies in Table \ref{tbl:Eta}. Type \(\eta\) only occurs with logarithmic subfield unit index \(E=6\). It is a type with \(2\)-dimensional absolute principal factorization, \(A=2\). However, it should be pointed out that outside of the range of our systematic investigations we found an excited state \((V_L,V_M,V_N)=(1,2,5)\) for the similarity class \(\mathbf{\lbrack 1505\rbrack}\), where \(1505=5\cdot 7\cdot 43\) has three prime divisors, additionally to the ground state \((V_L,V_M,V_N)=(0,0,1)\). \renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Splitting of type \(\vartheta\), \((U,\eta,\zeta;A,I,R)=(0,\times,\times;1,0,0)\): \(2\) similarity classes} \label{tbl:Theta} \begin{center} \begin{tabular}{|r|cc|cccr|ccc|cccc|rr|} \hline No. & S & \(e_0\) & \(t\) & \(u\) & \(v\) & \(m\) & \(n\) & \(s_2\) & \(s_4\) & \(V_L\) & \(V_M\) & \(V_N\) & \(E\) & \(\mathbf{M}\) & \(\mathbf{\lvert M\rvert}\) \\ \hline 1 & 1a & \(6\) & \(0\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5\) & \(\mathbf{5}\) & \(1\) \\ 2 & 2 & \(0\) & \(1\) & \(1\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5\) & \(\mathbf{7}\) & \(18\) \\ \hline \end{tabular} \end{center} \end{table} DPF type \(\vartheta\) splits into the unique finite similarity class \(\lbrack\mathbf{5}\rbrack\) with only a single element and the infinite parametrized sequence \(\lbrack\mathbf{7}\rbrack\) consisting of all prime radicands \(D=q\) congruent to \(\pm 7\,(\mathrm{mod}\,25)\). The small absolute frequency \(19\) of type \(\vartheta\) in the range \(2\le D<10^3\), given in Table \ref{tbl:Statistics}, is the sum \(\mathbf{\lvert 5\rvert}+\mathbf{\lvert 7\rvert}=1+18\) in Table \ref{tbl:Theta}. 
Since no theoretical argument rules out the occurrence of type \(\vartheta\) for composite radicands \(D\) with prime factors \(5\) and \(q\equiv\pm 7\,(\mathrm{mod}\,25)\), we conjecture that such cases will appear in larger ranges with \(D>10^3\). Type \(\vartheta\) only occurs with logarithmic subfield unit index \(E=5\), and is the unique type where every unit of \(K\) occurs as norm of a unit of \(N\), that is \(U=0\). \subsection{Increasing dominance of DPF type \(\gamma\) for \(T\to\infty\)} \label{ss:ManyPrmDiv} \noindent In this final section, we want to show that the careful bookkeeping of similarity classes with representative prototypes in the Tables \ref{tbl:Alpha1} -- \ref{tbl:Theta} is useful for quantifying many other phenomena. As an illustration, we select the phenomenon of \textit{absolute} principal factorizations. The statistical distribution of DPF types in Table \ref{tbl:Statistics} shows that \textit{type} \(\gamma\), with \(324\) occurrences, that is \(36\,\%\), among all \(900\) fields \(N=\mathbb{Q}(\zeta_5,\sqrt[5]{D})\) with normalized radicands in the range \(2\le D<10^3\), is undoubtedly the \textit{high champion} of all DPF types. This means that there is a clear trend towards the maximal possible extent of \(3\)-dimensional spaces of \textit{absolute} principal factorizations, \(A=3\), despite the restriction that the estimate \(1\le A\le\min(3,T)\) in the formulas (4.3) and (4.4) of \cite[Cor. 4.1]{Ma2a} prohibits type \(\gamma\) for conductors \(f\) with \(T\le 2\) prime divisors. For the following investigation, we have to recall that the number \(T\) of all prime factors of \(f^4=5^{e_0}\cdot q_1^4\ldots q_t^4\) is given by \(T=t+1\) for fields of Dedekind's species \(1\), where \(e_0\in\lbrace 2,6\rbrace\), and by \(T=t\) for fields of Dedekind's species \(2\), where \(e_0=0\).
Conductors \(f\) with \(T=4\) prime factors occur in six tables: \\ \(1\) case of type \(\alpha_2\) in a single similarity class of Table \ref{tbl:Alpha2}, \\ \(1\) case of type \(\alpha_3\) in a single similarity class of Table \ref{tbl:Alpha3}, \\ \(7\) cases of type \(\beta_1\) in a single similarity class of Table \ref{tbl:Beta1}, \\ \(10\) cases of type \(\beta_2\) in \(5\) similarity classes of Table \ref{tbl:Beta2}, \\ \(126\) cases of type \(\gamma\) in \(16\) similarity classes of Table \ref{tbl:Gamma}, \\ \(8\) cases of type \(\varepsilon\) in \(3\) similarity classes of Table \ref{tbl:Epsilon}, \\ that is, a total of \(153\) cases, with respect to the complete range \(2\le D<10^3\) of our computations. Consequently, we have an increase of type \(\gamma\) from \(36.0\,\%\), with respect to the entire database, to \(\frac{126}{153}=82.4\,\%\), with respect to \(T=4\). This effect is even more pronounced for conductors \(f\) with \(T=5\) prime factors, which exclusively occur in Table \ref{tbl:Gamma}. There are \(4\) similarity classes with \(T=5\), namely \(\mathbf{\lbrack 462\rbrack}\), \(\mathbf{\lbrack 546\rbrack}\), \(\mathbf{\lbrack 798\rbrack}\), \(\mathbf{\lbrack 858\rbrack}\), with a total of \(6\) elements, all \((100\,\%)\) with associated fields of type \(\gamma\). \section{Acknowledgements} \label{s:Thanks} \noindent We gratefully acknowledge that our research was supported by the Austrian Science Fund (FWF): projects J 0497-PHY and P 26008-N25. This work is dedicated to the memory of Charles J. Parry (\(\dagger\) 25 December 2010) who suggested a numerical investigation of pure quintic number fields.
\begin{figure}[ht] \caption{Lattices of subfields of \(N\) and of subgroups of \(G=\mathrm{Gal}(N/\mathbb{Q})\)} \label{fig:GaloisCorrespondence} {\tiny \setlength{\unitlength}{1.0cm} \begin{picture}(15,7)(-11,-8.5) \put(-10,-2){\makebox(0,0)[cb]{Degree}} \put(-10,-4){\vector(0,1){1.5}} \put(-10,-9){\line(0,1){5}} \multiput(-10.1,-9)(0,1){6}{\line(1,0){0.2}} \put(-10.2,-4){\makebox(0,0)[rc]{\(20\)}} \put(-9.8,-4){\makebox(0,0)[lc]{metacyclic}} \put(-10.2,-5){\makebox(0,0)[rc]{\(10\)}} \put(-9.8,-5){\makebox(0,0)[lc]{conjugates of maximal real}} \put(-10.2,-6){\makebox(0,0)[rc]{\(5\)}} \put(-9.8,-6){\makebox(0,0)[lc]{conjugates of pure quintic}} \put(-10.2,-7){\makebox(0,0)[rc]{\(4\)}} \put(-9.8,-7){\makebox(0,0)[lc]{cyclotomic}} \put(-10.2,-8){\makebox(0,0)[rc]{\(2\)}} \put(-9.8,-8){\makebox(0,0)[lc]{real quadratic}} \put(-10.2,-9){\makebox(0,0)[rc]{\(1\)}} \put(-9.8,-9){\makebox(0,0)[lc]{rational}} {\normalsize \put(-1,-2){\makebox(0,0)[cc]{Galois Correspondence}} \put(-1,-2.5){\makebox(0,0)[cc]{\(\longleftrightarrow\)}} } \put(-6,-9){\circle*{0.2}} \put(-6,-9.2){\makebox(0,0)[ct]{\(\mathbb{Q}\)}} \put(-6,-9){\line(2,1){2}} \put(-4,-8){\circle*{0.2}} \put(-4,-8.2){\makebox(0,0)[ct]{\(K^+\)}} \put(-4,-8){\line(2,1){2}} \put(-2,-7){\circle*{0.2}} \put(-2,-7.2){\makebox(0,0)[ct]{\(K\)}} \put(0,-9){\circle*{0.2}} \put(0,-9.2){\makebox(0,0)[ct]{\(G=\langle\sigma,\tau\rangle\)}} \put(0,-9){\line(2,1){2}} \put(2,-8){\circle*{0.2}} \put(2,-8.2){\makebox(0,0)[ct]{\(\langle\sigma,\tau^2\rangle\)}} \put(2,-8){\line(2,1){2}} \put(4,-7){\circle*{0.2}} \put(4,-7.2){\makebox(0,0)[ct]{\(\langle\sigma\rangle\)}} \put(-6,-9){\line(0,1){3}} \put(-4,-8){\line(0,1){3}} \put(-2,-7){\line(0,1){3}} \put(0,-9){\line(0,1){3}} \put(2,-8){\line(0,1){3}} \put(4,-7){\line(0,1){3}} \put(-6,-6){\circle{0.2}} \put(-6,-5.8){\makebox(0,0)[cb]{\(L\)}} \put(-5.8,-6.2){\makebox(0,0)[lc]{\(L^{\sigma},\ldots,L^{\sigma^4}\)}} \put(-6,-6){\line(2,1){2}} \put(-4,-5){\circle{0.2}} 
\put(-4,-4.8){\makebox(0,0)[cb]{\(M\)}} \put(-3.8,-5.2){\makebox(0,0)[lc]{\(M^{\sigma},\ldots,M^{\sigma^4}\)}} \put(-4,-5){\line(2,1){2}} \put(-2,-4){\circle*{0.2}} \put(-2,-3.8){\makebox(0,0)[cb]{\(N\)}} \put(0,-6){\circle{0.2}} \put(0,-5.8){\makebox(0,0)[cb]{\(\langle\tau\rangle\)}} \put(0.2,-6.2){\makebox(0,0)[lc]{\(\langle\sigma^{-i}\tau\sigma^i\rangle\)}} \put(0.2,-6.5){\makebox(0,0)[lc]{\(1\le i\le 4\)}} \put(0,-6){\line(2,1){2}} \put(2,-5){\circle{0.2}} \put(2,-4.8){\makebox(0,0)[cb]{\(\langle\tau^2\rangle\)}} \put(2.2,-5.2){\makebox(0,0)[lc]{\(\langle\sigma^{-i}\tau^2\sigma^i\rangle\)}} \put(2.2,-5.5){\makebox(0,0)[lc]{\(1\le i\le 4\)}} \put(2,-5){\line(2,1){2}} \put(4,-4){\circle*{0.2}} \put(4,-3.8){\makebox(0,0)[cb]{\(1\)}} \end{picture} } \end{figure}
\section{Introduction and main result}\label{S:intro} In many practical applications it is of interest to find or approximate the eigenvalues and eigenfunctions of elliptic problems. Finite element approximations for these problems have been widely used and analyzed under a general framework. Optimal a priori error estimates for the eigenvalues and eigenfunctions have been obtained (see~\cite{Babuska_eigenvalue, Babuska, Raviart-Thomas, Strang-Fix} and the references therein). Adaptive finite element methods are an effective tool for making efficient use of computational resources; for certain problems, they are even indispensable for obtaining a numerical solution at all. A quite popular, natural adaptive version of classical finite element methods consists of the loop \begin{equation*} \textsc{Solve $\to$ Estimate $\to$ Mark $\to$ Refine}, \end{equation*} that is: solve for the finite element solution on the current grid, compute the a~posteriori error estimator, mark with its help elements to be subdivided, and refine the current grid into a new, finer one. The ultimate goal of adaptive methods is to equidistribute the error and the computational effort, obtaining a sequence of meshes with optimal complexity. Historically, the first step to prove optimality has always been to understand convergence of adaptive methods. A general result of convergence for linear problems has been obtained in~\cite{MSV_convergence}, which states very general conditions on the linear problems and the adaptive methods that guarantee convergence. Optimality for adaptive methods using D\"orfler's~\cite{Dorfler} marking strategy has been proved in~\cite{CKNS-quasi-opt, Stevenson} for linear problems.
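To fix ideas, the interaction of the four modules can be sketched in a few lines of code. The following toy driver is entirely our own illustration and is not code from any of the cited works: the \textsc{Solve} step is replaced by nodal interpolation, the estimator is a midpoint interpolation error, the marking uses a maximum strategy, and refinement bisects the marked elements of a one-dimensional mesh, which then grades itself towards the singularity of $\sqrt{x}$ at the origin.

```python
import numpy as np

def adapt(x, f, theta=0.5, tol=1e-2, max_iter=30):
    """One-dimensional caricature of SOLVE -> ESTIMATE -> MARK -> REFINE."""
    for _ in range(max_iter):
        u = f(x)                                         # SOLVE: nodal interpolant of f
        mid = 0.5 * (x[:-1] + x[1:])
        eta = np.abs(f(mid) - 0.5 * (u[:-1] + u[1:]))    # ESTIMATE: per-element indicator
        if np.sqrt(np.sum(eta ** 2)) < tol:              # stop when the total estimator is small
            break
        marked = eta >= theta * eta.max()                # MARK: maximum strategy
        x = np.sort(np.concatenate([x, mid[marked]]))    # REFINE: bisect marked elements
    return x

mesh = adapt(np.linspace(0.0, 1.0, 5), np.sqrt)
```

Because the indicator of the element touching the origin dominates, the marking concentrates the refinement there and produces a strongly graded mesh, which is the behaviour that the equidistribution heuristic predicts.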
The goal of this article is to analyze the convergence of adaptive finite element methods for the eigenvalue problem consisting of finding $\lambda \in \RR$ and $u\not\equiv 0$ such that \[ -\nabla\cdot(\AAA \nabla u) = \lambda \BB u \quad \text{in $\Omega$}, \qquad u=0\quad\text{on $\partial\Omega$}, \] under general assumptions on $\AAA$, $\BB$ and $\Omega$ that we state precisely in Section~\ref{S:setting}. As we mentioned before, adaptive methods are based on a posteriori error estimators, which are computable quantities depending on the discrete solution and the data that indicate the distribution of the error. A posteriori error estimators for eigenvalue problems have been constructed by using different approaches in~\cite{Verfurth-book, Verfurth-nonlinear, Duran, Larson}; they were developed for $\AAA \equiv I$ and $\BB \equiv 1$, but the same proofs can be carried over to the general case considered here; see~\cite{Giani} and Section~\ref{S:apost}. An important aspect to be mentioned here is that the upper bound holds only for sufficiently fine meshes. However, our proof will not rely on this bound, allowing us to prove convergence from any initial mesh. The first (and, up to now, only) result about convergence of adaptive finite elements for eigenvalue problems has been presented in~\cite{Giani}. The following is the main result of this article. \begin{theorem}[Main Result] Let $\lambda_k$ and $u_k$ be the discrete eigenvalues and eigenfunctions obtained with the adaptive algorithm stated in Section~\ref{S:adloop} below. Then there exists an eigenvalue $\lambda$ of the continuous problem such that \[ \lim_{k\rightarrow\infty} \lambda_k=\lambda \qquad \textrm{and}\qquad \lim_{k\rightarrow\infty} \dist_{H^1_0(\Omega)}(u_k,M(\lambda)) = 0, \] where $M(\lambda)$ denotes the set of all eigenfunctions of the continuous problem corresponding to the eigenvalue $\lambda$.
\end{theorem} \begin{remark} Before proceeding with the details of the statement and the proof of this result, we make some remarks: \begin{itemize} \item An important difference with previous works is that we do not require the initial mesh $\Tau_0$ to be fine enough. Any initial mesh that captures the discontinuities of $\AAA$ will guarantee convergence. \item The result holds for any of the popular marking strategies, not only D\"orfler's~\cite{Dorfler}. The only assumption is that non-marked elements have error estimators smaller than marked ones, see condition~(\ref{E:marking}) in Section~\ref{S:adloop} below. \item The marking is done according to the residual type a posteriori error estimators presented in Section~\ref{S:apost}. Even though there are some \emph{oscillation terms} in the lower bound, we do not require any marking due to these terms. We only need to mark according to the error estimators, which is what is usually done in practice. \item The result holds with a minimal refinement of marked elements, one bisection suffices. We do not require the enforcement of the so-called \emph{interior node property}. \item We are assuming that each of the discrete eigenvalues $\lambda_k$ is the $j$-th eigenvalue of the corresponding discrete problem. The result, as stated above, only guarantees that $\lambda_k$ converges to one eigenvalue $\lambda$ of the continuous problem. We can be sure that we approximate the $j$-th eigenvalue of the continuous problem under any of the following assumptions: \begin{itemize} \item No eigenfunction is equal to a polynomial of degree $\le \ell$ on an open region of $\Omega$, where $\ell$ denotes the polynomial degree of the finite element functions being used. This is a Non-Degeneracy Assumption, and it holds for a large class of problems; see Assumption~\ref{A:non-deg} and following discussion. \item The meshsize of the initial triangulation is small enough. 
This assumption goes against the spirit of adaptivity and a posteriori analysis, since we cannot quantify what \emph{small enough} means. But we state it for completeness, because in some (nonlinear) problems there may be no way to overcome this. \end{itemize} \item The proof follows similar ideas to those of~\cite{MSV_convergence}, with some modifications due to the different nature of the problem. It consists in proving the following steps: \begin{itemize} \item The full sequence of discrete eigenvalues converges to a number $\lambda_\infty$ and a subsequence of the discrete eigenfunctions converges to some function $u_\infty$. \item The global a posteriori error estimator converges to zero (for the subsequence). \item The pair $(\lambda_\infty, u_\infty)$ is an eigenpair of the continuous problem. Due to a lack of a sharp upper bound (it only holds for sufficiently fine meshes) it is necessary to introduce a new argument to prove this (see Theorem~\ref{T:eigenfunction}). This new argument is perhaps the main difference with respect to~\cite{MSV_convergence}, and we believe that the idea can be useful for many other nonlinear problems. \end{itemize} \end{itemize} \end{remark} The rest of the article is organized as follows. In Section~\ref{S:problem} we state precisely the problem that we study, describe the approximants, mention some already known results about a priori and a posteriori estimation, and state the adaptive loop. In Section~\ref{S:uinfty} we prove that the sequence $\{(\lambda_k,u_k)\}_{k\in\NN_0}$ of solutions to the discrete problems contains a subsequence that converges to a limiting pair $(\lambda_\infty,u_\infty)$. In Section~\ref{S:convest} we prove that the global a posteriori error estimator tends to zero; which is instrumental to conclude in Section~\ref{S:eigenfunction} that $(\lambda_\infty,u_\infty)$ is an eigenpair of the continuous problem. 
Finally, in Section~\ref{S:main-result} we state and prove the main result and discuss its implications. \section{Problem statement and adaptive algorithm}\label{S:problem} This section is subdivided into four parts. In Section~\ref{S:setting} we state precisely the continuous problem that we study and mention some of its properties. In Section~\ref{S:discrete problem} we state the discrete problems that we consider as approximants to the continuous one, mention some of their properties and state the a priori error estimates. In Section~\ref{S:apost} we define the a posteriori error estimators that we use, state the upper bound and prove the discrete local lower bound that we will use in our convergence proof. Finally, in Section~\ref{S:adloop} we state the adaptive algorithm together with the assumptions on each of its blocks. \subsection{Setting}\label{S:setting} Let $\Omega\subset \RR^d$ be a bounded open set with a Lipschitz boundary. In particular, we suppose that $\Omega$ is a polygonal domain if $d=2$ and a polyhedral domain if $d=3$. Let $a,b: H^1_0(\Omega) \times H^1_0(\Omega) \to \RR$ be the bilinear forms defined by \begin{equation*} a(u,v):=\int_\Omega \AAA \nabla u\cdot\nabla v, \end{equation*} and \begin{equation*} b(u,v):=\int_\Omega \BB u v, \end{equation*} where $\AAA$ is a piecewise $W^{1,\infty}(\Omega)$ symmetric-matrix-valued function which is uniformly positive definite, i.e., there exist constants $a_1,a_2>0$ such that \begin{equation*} a_1|\xi|^2\leq \AAA(x)\xi\cdot \xi\leq a_2|\xi|^2,\qquad \forall~\xi\in\RR^d,\qquad \forall~x\in\Omega, \end{equation*} and $\BB$ is a scalar function such that $$b_1\leq \BB(x)\leq b_2,\qquad \forall~x\in\Omega,$$ for some constants $b_1,b_2>0$. We also define the norms induced by these bilinear forms as \begin{equation*} \normaa{v}:= a(v,v)^{1/2}, \quad v\in H_0^1(\Omega), \qquad\text{and}\qquad \normab{v}:= b(v,v)^{1/2}, \quad v\in L^2(\Omega).
\end{equation*} By the assumptions on $\AAA$ and $\BB$, $\normaa{\cdot}\simeq\|\cdot\|_{H^1_0(\Omega)}$ and $\normab{\cdot}\simeq\|\cdot\|_{\Omega}$, i.e., there exist positive constants $c_1$, $c_2$, $c_3$, $c_4$ such that $$c_1 \|v\|_{H^1_0(\Omega)}\leq \normaa{v}\leq c_2 \|v\|_{H^1_0(\Omega)},\qquad\forall~v\in H^1_0(\Omega),$$ and $$c_3 \|v\|_{\Omega}\leq \normab{v}\leq c_4 \|v\|_{\Omega},\qquad\forall~v\in L^2(\Omega).$$ Hereafter, if $A\subset\Omega$, then $\|\cdot\|_A$ denotes the $L^2(A)$-norm. We consider the following \paragraph{Continuous eigenvalue problem.} Find $\lambda\in\RR$ and $u\in H^1_0(\Omega)$ satisfying \begin{equation} \label{E:cont-problem} \left\{ \begin{array}{l} a(u,v)=\lambda \, b(u,v),\qquad \forall~v\in H^1_0(\Omega),\\ \normab{u}=1. \end{array} \right. \end{equation} It is well known~\cite{Babuska_eigenvalue} that under our assumptions on $\AAA$ and $\BB$, problem~(\ref{E:cont-problem}) has a countable sequence of eigenvalues $$0<\lambda_1\leq \lambda_2\leq \lambda_3\leq \ldots \nearrow \infty$$ and corresponding eigenfunctions $$u_1,u_2,u_3,\ldots$$ which can be assumed to satisfy $$b(u_i,u_j)=\delta_{ij} :=\begin{cases} 1 &i=j,\\ 0 &i\neq j,\end{cases}$$ where in the sequence $\{\lambda_j\}_{j\in\NN}$, the $\lambda_j$ are repeated according to geometric multiplicity. Also, the eigenvalues can be characterized as extrema of the Rayleigh quotient $\D\RRR(u)=\frac{a(u,u)}{b(u,u)},$ by the following relationships. \begin{itemize} \item \textbf{Minimum principle:} \begin{align*} \lambda_1&=\min\limits_{u\in H^1_0(\Omega)}\RRR(u)=\RRR(u_1), \\ \lambda_j&=\min\limits_{\substack{u\in H^1_0(\Omega)\\a(u,u_i)=0\\i=1,\ldots,j-1}}\RRR(u)=\RRR(u_j),\quad j=2,3,\dots. \end{align*} \item \textbf{Minimum-maximum principle:} \begin{equation*} \lambda_j=\min\limits_{\substack{V_j\subset H^1_0(\Omega)\\ \dim V_j=j}}\max\limits_{u\in V_j}\RRR(u)=\max\limits_{u\in \Span\{u_1,\ldots,u_j\}}\RRR(u),\quad j=1,2,\ldots.
\end{equation*} \end{itemize} For each fixed eigenvalue $\lambda$ of (\ref{E:cont-problem}) we define \begin{equation*} M(\lambda):=\{u\in H^1_0(\Omega):~u~\textrm{satisfies (\ref{E:cont-problem})}\}, \end{equation*} and notice that if $\lambda$ is simple, then $M(\lambda)$ contains two functions, whereas if $\lambda$ is not simple, it consists of a sphere in the subspace generated by the eigenfunctions. \subsection{Discrete problem}\label{S:discrete problem} In order to define the discrete approximations we will consider conforming triangulations $\Tau$ of the domain $\Omega$, that is, partitions of $\Omega$ into $d$-simplices such that if two elements intersect, they do so at a full vertex/edge/face of both elements. For any triangulation $\Tau$, $\SSS$ will denote the set of interior sides, where by side we mean an edge if $d=2$ and a face if $d=3$. Moreover, $\kappa_\Tau$ will denote the regularity of $\Tau$, defined as $$\kappa_\Tau:= \max_{T\in\Tau} \frac{\diam(T)}{\rho_T},$$ where $\diam(T)$ is the length of the longest edge of $T$, and $\rho_T$ is the radius of the largest ball contained in it. It is also useful to define the meshsize $h_\Tau := \D\max_{T\in\Tau} h_T$, where $h_T:=|T|^{1/d}$. Let $\ell\in\NN$ be fixed, and let $\VV_\Tau$ be the finite element space consisting of continuous functions vanishing on $\partial \Omega$ which are polynomials of degree $\le \ell$ in each element of $\Tau$, i.e., \begin{equation*} \VV_\Tau:=\{v\in H^1_0(\Omega):\quad v|_T\in \PP_\ell(T),\quad \forall~T\in\Tau\}. \end{equation*} Obviously, $\VV_\Tau\subset H^1_0(\Omega)$ and if $\Tau_*$ is a refinement of $\Tau$, then $\VV_\Tau\subset\VV_{\Tau_*}$.
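The extremal characterizations above are easy to test numerically. In the following sketch, a toy example of ours with a small symmetric matrix pencil standing in for the forms $a$ and $b$ (the matrices are an arbitrary choice, not taken from the paper), the Rayleigh quotient of every sampled vector lies above the smallest eigenvalue, and equality is attained at the first eigenvector, in accordance with the minimum principle.

```python
import numpy as np

# small SPD matrices standing in for the stiffness form a and the mass form b
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
B = np.eye(3)

def rayleigh(u):
    """Discrete Rayleigh quotient R(u) = a(u, u) / b(u, u)."""
    return (u @ A @ u) / (u @ B @ u)

lam, vecs = np.linalg.eigh(A)   # eigenpairs of A u = lam B u (here B = I)
rng = np.random.default_rng(1)
samples = [rayleigh(rng.standard_normal(3)) for _ in range(1000)]
```

The same check with a non-trivial mass matrix $B$ requires a generalized eigensolver, but the principle is identical.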
We consider the approximation of the continuous eigenvalue problem~(\ref{E:cont-problem}) with the following \paragraph{Discrete eigenvalue problem.} Find $\lambda_\Tau\in \RR$ and $u_\Tau\in\VV_\Tau$ such that \begin{equation}\label{E:disc-problem} \left\{ \begin{array}{l} a(u_\Tau,v)=\lambda_\Tau \, b(u_\Tau,v),\qquad \forall~v\in \VV_\Tau,\\ \normab{u_\Tau}=1. \end{array} \right. \end{equation} For this discrete problem, similar results to those of the continuous problem hold~\cite{Babuska_eigenvalue}. More precisely, problem~(\ref{E:disc-problem}) has a finite sequence of eigenvalues $$0<\lambda_{1,\Tau}\leq \lambda_{2,\Tau}\leq \lambda_{3,\Tau}\leq \ldots \leq \lambda_{N_\Tau,\Tau},$$ where $N_\Tau:= \dim \VV_\Tau$, and corresponding eigenfunctions $$u_{1,\Tau},u_{2,\Tau},u_{3,\Tau},\ldots,u_{N_\Tau,\Tau},$$ which can be assumed to satisfy $$b(u_{i,\Tau},u_{j,\Tau})=\delta_{ij}.$$ Moreover, the following extremal characterizations also hold: \begin{itemize} \item \textbf{Minimum principle:} \begin{align*} \lambda_{1,\Tau}&=\min\limits_{u\in \VV_\Tau}\RRR(u)=\RRR(u_{1,\Tau}),\\ \lambda_{j,\Tau}&=\min\limits_{\substack{u\in \VV_\Tau\\a(u,u_{i,\Tau})=0\\i=1,\ldots,j-1}}\RRR(u)=\RRR(u_{j,\Tau}),\quad j=2,3,\ldots,N_\Tau. \end{align*} \item \textbf{Minimum-maximum principle:} \begin{equation*} \lambda_{j,\Tau}=\min\limits_{\substack{V_{j,\Tau}\subset \VV_\Tau\\ \dim V_{j,\Tau}=j}}\max\limits_{u\in V_{j,\Tau}}\RRR(u)=\max\limits_{u\in \Span\{u_{1,\Tau},\ldots,u_{j,\Tau}\}}\RRR(u),\quad j=1,2,\ldots,N_\Tau. 
\end{equation*} \end{itemize} It follows from the minimum-maximum principles that $$\lambda_j\leq \lambda_{j,\Tau},\qquad j=1,2,\ldots,N_\Tau,$$ and it also follows that if $\Tau_*$ is any refinement of $\Tau$ then $$\lambda_{j,\Tau_*}\leq \lambda_{j,\Tau},\qquad j=1,2,\ldots,N_\Tau.$$ For a given eigenvalue $\lambda$ we define a notion of minimal error of approximation of its eigenfunctions by \begin{equation*} \epsilon_\Tau(\lambda):=\sup_{u\in M(\lambda)} \inf_{\chi\in\VV_\Tau} \normaa{u-\chi}. \end{equation*} For $j=1,2,\ldots,N_\Tau$, there holds that $$\lambda_{j,\Tau}-\lambda_j \lesssim \epsilon_\Tau^2(\lambda_j),$$ where, from now on, whenever we write $A \lesssim B$ we mean that $A \le C B$ with a constant $C$ that may depend on $\AAA$, $\BB$, the domain $\Omega$ and the regularity $\kappa_\Tau$ of $\Tau$, but not on other properties of $\Tau$ such as element size or uniformity. If $\{\Tau_k\}_{k\in\NN_0}$ is any sequence of triangulations such that $\D\sup_{k\in\NN_0} \kappa_{\Tau_k} < \infty$, and $h_{\Tau_k} \to 0$ as $k\to\infty$, then $$\epsilon_{\Tau_k}(\lambda_j)\longrightarrow 0,\qquad\textrm{as}\quad k \longrightarrow \infty,$$ and therefore, \begin{equation}\label{E:lambdak converges} \lambda_{j,\Tau_k}\longrightarrow\lambda_j,\qquad\textrm{as}\quad k\longrightarrow \infty . \end{equation} This holds for any $j\in\NN$, and it is a consequence of standard interpolation estimates and the fact that $M(\lambda_j)$ is bounded and contained in a finite-dimensional subspace of $H^1_0(\Omega)$.
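Both monotonicity properties, $\lambda_j\le\lambda_{j,\Tau}$ and $\lambda_{j,\Tau_*}\le\lambda_{j,\Tau}$ for a refinement $\Tau_*$ of $\Tau$, can be observed in the simplest model situation. The sketch below is a toy example of ours, not taken from the paper: piecewise linear elements for $-u''=\lambda u$ on $(0,\pi)$, whose exact eigenvalues are $j^2$, with the tridiagonal stiffness and consistent mass matrices assembled by hand and the generalized problem reduced to a symmetric standard one through a Cholesky factorization of the mass matrix.

```python
import numpy as np

def p1_eigenvalues(n):
    """Discrete eigenvalues of -u'' = lam*u on (0, pi), u(0) = u(pi) = 0,
    computed with P1 finite elements on a uniform mesh with n elements."""
    h = np.pi / n
    m = n - 1                                                         # interior nodes
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h      # stiffness matrix
    B = (4.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) * h / 6  # consistent mass
    # reduce A u = lam B u to a symmetric standard problem via B = L L^T
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    return np.sort(np.linalg.eigvalsh(Linv @ A @ Linv.T))

coarse = p1_eigenvalues(8)
fine = p1_eigenvalues(16)   # uniform bisection, so the coarse space is nested in the fine one
exact = np.arange(1, len(coarse) + 1) ** 2.0
```

Every discrete eigenvalue lies above the corresponding exact one, and refining the mesh can only decrease it, exactly as the minimum-maximum principle predicts for nested conforming spaces.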
In order to define the estimators we assume that the triangulation $\Tau$ matches the discontinuities of $\AAA$. More precisely, we assume that the discontinuities of $\AAA$ are aligned with the sides of $\Tau$. Observe that in particular, $\AAA|_T$ is Lipschitz continuous for all $T\in\Tau$. \begin{definition}[Element residual and jump residual] For $\mu\in\RR$ and $v\in \VV_\Tau$ we define the element residual $R(\mu,v)$ by \begin{equation}\label{E:element-residual} R(\mu,v)|_{T}:= -\nabla \cdot (\AAA\nabla v)-\mu \BB v, \end{equation} for all $T\in\Tau$, and the jump residual $J(v)$ by \begin{equation}\label{E:jump-residual} J(v)|_{S}:= (\AAA\nabla v)|_{T_1}\cdot \overrightarrow{n_1}+(\AAA\nabla v)|_{T_2}\cdot \overrightarrow{n_2}, \end{equation} for every interior side $S\in \SSS$, where $T_1$ and $T_2$ are the elements in $\Tau$ which share $S$ and $\overrightarrow{n_i}$ is the outward normal unit vector of $T_i$ on $S$, for $i=1,2$. We define $J(v)|_{\partial \Omega}:= 0$. \end{definition} \begin{definition}[Local and global error estimator] For $\mu\in\RR$ and $v\in \VV_\Tau$ we define the local error estimator $\est{\mu,v}{T}$ by \begin{equation*} \est{\mu,v}{T}^2:= h_T^2\normT{R(\mu,v)}^2 + h_T\normbT{J(v)}^2, \end{equation*} for all $T\in\Tau$, and the global error estimator $\gest{\mu,v}$ is given by \begin{equation*} \gest{\mu,v}^2:= \sum_{T\in\Tau} \est{\mu,v}{T}^2. \end{equation*} \end{definition} Even though we will not need it for the convergence proof, we include the statement of the upper bound of the error in terms of the a posteriori error estimation, for the sake of completeness.
\begin{theorem}[Upper bound]\label{T:upperbound} Let $j\in\NN$, and let $u_{\Tau}$ be an eigenfunction corresponding to the $j$-th eigenvalue $\lambda_{\Tau}$ of the discrete problem~\eqref{E:disc-problem}. Then, if $h_\Tau$ is small enough, there exists an eigenfunction $u$ corresponding to the $j$-th eigenvalue $\lambda$ of the continuous problem~\eqref{E:cont-problem} such that $$\normaa{u-u_\Tau} \lesssim \gest{\lambda_\Tau,u_\Tau}.$$ \end{theorem} The proof of this theorem can be obtained following the steps given in~\cite{Duran}, by extending Lemmas 3.1 and 3.2 presented there for the model problem with $\AAA \equiv I$ and $\BB \equiv 1$ to the general case that we consider here, using the following regularity result, and the a priori bound stated in Theorem~\ref{T:apriori} below. \begin{comment} \begin{proof} Let $Ie\in\VV_\Tau$ be the Cl\'ement interpolant of $e$,~\cite{Clement}. Then, for all $T\in\Tau$ we have that $$\normT{e-Ie}\leq Ch_T\|e\|_{H^1(\omega_\Tau(T))},\quad\textrm{and}\quad\normbT{e-Ie}\leq Ch_T^{1/2}\|e\|_{H^1(\omega_\Tau(T))},$$ where $C$ only depends on the regularity of $\Tau$. Now, \begin{align*} \normaa{e}^2&= a(e,e-Ie)+a(e,Ie)= a(e,e-Ie)+a(u,Ie)-a(u_\Tau,Ie)\\ &= a(e,e-Ie)+b(\lambda u-\lambda_\Tau u_\Tau,Ie)= a(e,e-Ie)-b(\lambda u-\lambda_\Tau u_\Tau,e-Ie)+b(\lambda u-\lambda_\Tau u_\Tau,e).
\end{align*} Using Lemmas~\ref{L:aux1_upperbound} and \ref{L:aux2_upperbound}, we can write \begin{align*} \normaa{e}^2&= \sum_{T\in\Tau} \int_T -R(\lambda_\Tau,u_\Tau)(e-Ie)-\sum_{S\in\SSS}\int_{S} J(u_\Tau)(e-Ie)+\frac{\lambda+\lambda_\Tau}{2}b(e,e)\\ &\leq \sum_{T\in\Tau} \normT{R(\lambda_\Tau,u_\Tau)}\normT{e-Ie}+\sum_{S\in\SSS} \normS{J(u_\Tau)}\normS{e-Ie}+\frac{\lambda+\lambda_\Tau}{2}b(e,e)\\ &\leq C\left(\sum_{T\in\Tau} h_T\normT{R(\lambda_\Tau,u_\Tau)}\|e\|_{H^1(\omega_\Tau(T))}+\sum_{T\in\Tau} \normbT{J(u_\Tau)}h_T^{1/2}\|e\|_{H^1(\omega_\Tau(T))}\right)+\frac{\lambda+\lambda_\Tau}{2}b(e,e)\\ &\leq C\gest{\lambda_\Tau,u_\Tau}\|e\|_{H^1(\Omega)} + \frac{\lambda+\lambda_\Tau}{2}b(e,e)\\ &\leq C\gest{\lambda_\Tau,u_\Tau}\normaa{e}+\frac{\lambda+\lambda_\Tau}{2}\normab{e}^2. \end{align*} Finally, using Theorem~\ref{T:apriori} the claims of this theorem follow. \end{proof} \end{comment} \begin{comment} \begin{lemma}\label{L:aux1_upperbound} Let $(\lambda, u)$ be a solution of~(\ref{E:cont-problem}) and $(\lambda_\Tau, u_\Tau)$ a solution of~(\ref{E:disc-problem}). Then for all $v\in H^1_0(\Omega)$ there holds \begin{equation*} a(u-u_{\Tau},v)-b(\lambda u - \lambda_\Tau u_\Tau, v)=-\sum_{T\in\Tau} \int_T R(\lambda_\Tau,u_\Tau)v-\sum_{S\in\SSS}\int_{S} J(u_\Tau)v. 
\end{equation*} \end{lemma} \begin{proof} For $v\in H^1_0(\Omega)$ we have \begin{align*} a(e,v)- b(\lambda u - \lambda_\Tau u_\Tau, v) &=\underbrace{a(u,v)- \lambda b(u,v)}_{=0}-a(u_\Tau,v)+\lambda_\Tau b(u_\Tau,v)\\ &=\sum_{T\in\Tau} \left( -\int_T \AAA \nabla u_\Tau \cdot \nabla v +\int_T \BB\lambda_\Tau u_\Tau v\right)\\ &=\sum_{T\in\Tau} \left( \int_T \nabla\cdot(\AAA \nabla u_\Tau) v -\int_T \nabla\cdot(v\AAA\nabla u_\Tau)+\int_T \BB\lambda_\Tau u_\Tau v\right)\\ &=\sum_{T\in\Tau} \int_T \big(\nabla\cdot(\AAA \nabla u_\Tau)+\BB\lambda_\Tau u_\Tau\big)v-\sum_{T\in\Tau}\int_{\partial T} v\AAA\nabla u_\Tau\cdot \overrightarrow{n}\\ &=\sum_{T\in\Tau} \int_T -R(\lambda_\Tau,u_\Tau)v-\sum_{T\in\Tau}\int_{\partial T} v\AAA\nabla u_\Tau\cdot \overrightarrow{n}\\ &=\sum_{T\in\Tau} \int_T -R(\lambda_\Tau,u_\Tau)v-\sum_{S\in\SSS}\int_{S} vJ(u_\Tau). \end{align*} \end{proof} \end{comment} \begin{comment} \begin{lemma}\label{L:aux2_upperbound} Let $(\lambda, u)$ be a solution of~(\ref{E:cont-problem}) and $(\lambda_\Tau, u_\Tau)$ a solution of~(\ref{E:disc-problem}). Then \begin{equation*} b(\lambda u -\lambda_\Tau u_\Tau,u-u_{\Tau})=\frac{\lambda+\lambda_\Tau}{2}b(u-u_{\Tau},u-u_{\Tau}). \end{equation*} \end{lemma} \begin{proof} \begin{align*} b(\lambda u -\lambda_\Tau u_\Tau,e)&=\lambda b(u,e)-\lambda_\Tau b(u_\Tau,e)\\ &=\lambda b(u,u)-\lambda b(u,u_\Tau)-\lambda_\Tau b(u_\Tau,u)+\lambda_\Tau b(u_\Tau,u_\Tau)\\ &=\lambda-(\lambda+\lambda_\Tau) b(u,u_\Tau)+\lambda_\Tau\\ &=(\lambda+\lambda_\Tau)\big(1- b(u,u_\Tau)\big)=\frac{\lambda+\lambda_\Tau}{2}\big(2- 2b(u,u_\Tau)\big)\\ &=\frac{\lambda+\lambda_\Tau}{2}\big(b(u,u)+b(u_\Tau,u_\Tau)- 2b(u,u_\Tau)\big)\\ &=\frac{\lambda+\lambda_\Tau}{2}b(u-u_\Tau,u-u_\Tau)=\frac{\lambda+\lambda_\Tau}{2}b(e,e). 
\end{align*} \end{proof} \end{comment} \begin{lemma}[Regularity of the eigenfunctions]\label{L:regularity} There exists $r\in (0,1]$ depending only on $\Omega $ and $\AAA$ such that \begin{equation* u\in H^{1+r}(\Omega ), \end{equation*}% for any eigenfunction $u$ of the problem~(\ref{E:cont-problem}). \end{lemma} \begin{proof} This can be proved by observing that if $u$ is an eigenfunction, then it is also a solution to a linear elliptic equation of second order with right-hand side in $L^2(\Omega)$. We know that $r=1$ when $\AAA$ is constant or smooth and $\Omega$ is convex. The case in which $\Omega$ is non-convex has been studied in~\cite{BDLN} and the case of $\AAA$ having a discontinuity across an interior interface in~\cite{Babuska70}. For the general case, which we are considering here, see~\cite[Theorem 3]{Jochmann}. \end{proof} The following result is an a priori estimate relating the errors in the strong and weak norms associated to the problem, and it is the last slab in the chain necessary to prove Theorem~\ref{T:upperbound}. The case $\AAA \equiv I$ and $\BB = 1$ can be easily obtained from the results in~\cite{Strang-Fix}, and in~\cite{Raviart-Thomas}. The general case was presented in~\cite{Giani,Giani-thesis}. \begin{theorem}\label{T:apriori} Let the same assumptions of Theorem~\ref{T:upperbound} hold. Then, if $h_\Tau$ is small enough, there exists an eigenfunction $u$ corresponding to the $j$-th eigenvalue $\lambda$ of the continuous problem~\eqref{E:cont-problem} such that \begin{equation*} \normab{u-u_\Tau}\lesssim h_\Tau^r\normaa{u-u_\Tau}. \end{equation*} \end{theorem} The next result, which we will need for our proof of convergence is the discrete local lower bound, whose proof follows that of the continuous lower bound in~\cite{Duran}, but in order to make this article more self-contained we will include it here. For $S\in \SSS$ we define $\omega_\Tau (S)$ as the union of the two elements in $\Tau$ sharing $S$. 
For $T\in\Tau$, ${\mathcal N}_\Tau(T):=\{T'\in\Tau : T'\cap T \neq \emptyset\}$ denotes the set of neighbors of $T$ in $\Tau$, and $\omega_\Tau(T) := \bigcup_{T'\in{\mathcal N}_\Tau(T)}T'$. We also define $n_d:= 3$ if $d=2$ and $n_d:= 6$ if $d=3$. This guarantees that after $n_d$ bisections of an element, new nodes appear on each side and in the interior. Here we consider the newest-vertex bisection in two dimensions and the procedure of Kossaczk\'y in three dimensions~\cite{Alberta}. \begin{theorem}[Discrete local lower bound]\label{T:general_lower_bound} Let $T\in\Tau$ and let $\Tau_*$ be the triangulation of $\Omega$ which is obtained from $\Tau$ by bisecting $n_d$ times each element of ${\mathcal N}_\Tau(T)$. Let $(\lambda_\Tau,u_\Tau)$ be a solution to the discrete problem~(\ref{E:disc-problem}). Let $\WW$ be a subspace of $H^1_0(\Omega)$ such that $\VV_{\Tau_*}\subset \WW$. If $\mu\in\RR$ and $w\in\WW$ satisfy \begin{equation*} \left\{ \begin{array}{l} a(w,v)=\mu \, b(w,v),\qquad \forall~v\in \WW,\\ \normab{w}=1, \end{array} \right. \end{equation*} then $$\est{\lambda_\Tau,u_\Tau}{T}\lesssim \normNT{\nabla (w-u_\Tau)} +h_T \normNT{\mu w-\lambda_\Tau u_\Tau} +h_T \normNT{R-\overline{R}}+h_T^{1/2} \normbT{J-\overline{J}},$$ where, for every $T'\in {\mathcal N}_\Tau(T)$, $\overline{R}|_{T'}$ is the $L^2(T')$-projection of $R:= R(\lambda_\Tau,u_\Tau)$ onto $\PP_{\ell -1}$, and for every side $S\subset\partial T$, $\overline{J}|_{S}$ is the $L^2(S)$-projection of $J:= J(u_\Tau)$ onto $\PP_{\ell -1}$. \end{theorem} \begin{proof} \step{1} We first analyze the element residual. We obviously have \begin{equation}\label{E:lowbound_aux1} \normT{R}\leq \normT{\overline{R}}+\normT{R-\overline{R}}. \end{equation} Let $x_T^{int}$ denote the vertex of $\Tau_*$ which is interior to $T$. Let $\varphi_T$ be the continuous piecewise linear function over $\Tau_*$ such that $\varphi_T(x_T^{int})=1$ and $\varphi_T$ vanishes at all the other vertices of $\Tau_*$.
Then \begin{equation}\label{E:interior-residual-split} \normT{\overline{R}}^2 \lesssim \int_T \overline{R}^2 \varphi_T = \int_T \overline{R}(\overline{R} \varphi_T) = \int_T R(\overline{R} \varphi_T)+\int_T (\overline{R}-R)\overline{R} \varphi_T. \end{equation} If we define $v:=\overline{R} \varphi_T\in\VV_{\Tau_*}\subset \WW$, taking into account that $v$ vanishes over $\partial T$, for the first integral in~(\ref{E:interior-residual-split}) we have \begin{align*} \int_T Rv &= \int_T (-\nabla \cdot (\AAA\nabla u_\Tau)-\lambda_\Tau \BB u_\Tau)v\\ &= \int_T \AAA\nabla u_\Tau \cdot\nabla v -\int_T \lambda_\Tau \BB u_\Tau v\\ &= \int_T \AAA\nabla u_\Tau \cdot\nabla v -\int_T \lambda_\Tau \BB u_\Tau v-\int_T \AAA\nabla w \cdot \nabla v+\int_T \mu \BB w v\\ &= \int_T \AAA\nabla (u_\Tau-w) \cdot\nabla v +\int_T \BB(\mu w-\lambda_\Tau u_\Tau)v\\ &\lesssim \normT{\nabla (u_\Tau-w)}\normT{\nabla v}+\normT{\mu w-\lambda_\Tau u_\Tau}\normT{v} . \end{align*} For the second integral in~(\ref{E:interior-residual-split}) we have \begin{align*} \int_T (\overline{R}-R) \overline{R}\varphi_T \leq \normT{\overline{R}\varphi_T}\normT{\overline{R}-R} \leq \normT{\overline{R}}\normT{\overline{R}-R}. \end{align*} Therefore, taking into account that $\normT{\nabla v}\lesssim \frac{1}{h_T}\normT{v}$ and $\normT{v}\leq \normT{\overline{R}}$ we can write \begin{equation*} \normT{\overline{R}}^2 \lesssim \normT{\nabla (u_\Tau-w)}\frac{1}{h_T}\normT{\overline{R}}+\normT{\mu w-\lambda_\Tau u_\Tau}\normT{\overline{R}} +\normT{\overline{R}}\normT{\overline{R}-R}, \end{equation*} and then \begin{equation}\label{E:lowbound_aux2} h_T\normT{\overline{R}}\lesssim \normT{\nabla (u_\Tau-w)}+h_T\normT{\mu w-\lambda_\Tau u_\Tau} +h_T\normT{\overline{R}-R}. \end{equation} Now, from~(\ref{E:lowbound_aux1}) and~(\ref{E:lowbound_aux2}) it follows that \begin{equation}\label{E:lowbound-int} h_T\normT{R}\lesssim \normT{\nabla (u_\Tau-w)}+h_T\normT{\mu w-\lambda_\Tau u_\Tau}+h_T\normT{R-\overline{R}} . 
\end{equation} The same bound holds replacing $T$ by $T'$, for all $T'\in {\mathcal N}_\Tau(T)$. \step{2} Secondly, we estimate the jump residual. Let $S$ be a side of $T$ and let $T_1$ and $T_2$ denote the elements sharing $S$. Obviously, one of them is $T$ itself. As before, we first bound the projection $\overline{J}$ of $J$, since \begin{equation}\label{E:lowbound_aux3} \normS{J}\leq \normS{\overline{J}}+\normS{J-\overline{J}}. \end{equation} Let $x_S^{int}$ denote the vertex of $\Tau_*$ which is interior to $S$. Let $\varphi_S$ be the continuous piecewise linear function over $\Tau_*$ such that $\varphi_S(x_S^{int})=1$ and $\varphi_S$ vanishes at all the other vertices of $\Tau_*$. Then \begin{equation}\label{E:split-jump} \normS{\overline{J}}^2 \lesssim \int_S (\overline{J})^2 \varphi_S = \int_S \overline{J}(\overline{J} \varphi_S) = \int_S J(\overline{J} \varphi_S)+\int_S (\overline{J}-J)\overline{J} \varphi_S. \end{equation} Now, we extend $\overline{J}$ to $\omega_\Tau(S)$ as a constant along the direction of one side of each $T_i$, for $i=1,2$, and still denote this extension by $\overline{J}$. Observe that $\overline{J}$ is continuous on $\omega_\Tau(S)$ and $\overline{J}|_{T_i}\in \PP_{\ell-1}(T_i)$, for $i=1,2$.
Since $v:=\overline{J} \varphi_S\in\VV_{\Tau_*}\subset \WW$ and taking into account that $v=0$ on $\partial (\omega_\Tau (S))$, for the first integral in~(\ref{E:split-jump}) we have \begin{align*} \int_S Jv &= \sum_{i=1,2} \int_{\partial T_i}v\AAA\nabla u_\Tau\cdot \overrightarrow{n_i}=\sum_{i=1,2} \int_{T_i}\nabla\cdot(v\AAA\nabla u_\Tau)\\ &= \sum_{i=1,2} \left(\int_{T_i} \AAA\nabla u_\Tau\cdot \nabla v + \int_{T_i} v\nabla\cdot(\AAA\nabla u_\Tau)\right)\\ &= \sum_{i=1,2} \left(\int_{T_i} \AAA\nabla u_\Tau\cdot \nabla v + \int_{T_i} v\nabla\cdot(\AAA\nabla u_\Tau)\right)+\int_{T_1\cup T_2} \mu \BB w v-\int_{T_1\cup T_2} \AAA\nabla w\cdot \nabla v\\ &= \sum_{i=1,2} \left(\int_{T_i} \AAA\nabla (u_\Tau-w)\cdot \nabla v + \int_{T_i} (\nabla\cdot(\AAA\nabla u_\Tau)+\mu \BB w)v\right)\\ &= \int_{\omega_\Tau (S)} \AAA\nabla (u_\Tau-w)\cdot \nabla v +\sum_{i=1,2} \left(\int_{T_i} -Rv+\int_{T_i} \BB (\mu w-\lambda_\Tau u_\Tau)v\right)\\ &\lesssim \normNS{\nabla (u_\Tau-w)}\normNS{\nabla v}+ \normNS{R}\normNS{v}+\normNS{\mu w-\lambda_\Tau u_\Tau}\normNS{v}. \end{align*} For the second integral in~(\ref{E:split-jump}) we have \begin{align*} \int_S (\overline{J}-J) \overline{J} \varphi_S \leq \normS{\overline{J}\varphi_S}\normS{\overline{J}-J} \leq \normS{\overline{J}}\normS{\overline{J}-J}. 
\end{align*} Hence, taking into account that $\normNS{\nabla v} \lesssim \frac{1}{h_T}\normNS{v}$,\quad $\normNS{v}\leq \normNS{\overline{J}}$ and $\normNS{\overline{J}}\lesssim h_T^{1/2}\normS{\overline{J}}$ we can write \begin{align*} \normS{\overline{J}}^2 &\lesssim \normNS{\nabla (u_\Tau-w)}h_T^{-1/2}\normS{\overline{J}}+ \normNS{R}h_T^{1/2}\normS{\overline{J}} \\ &\quad+\normNS{\mu w-\lambda_\Tau u_\Tau}h_T^{1/2}\normS{\overline{J}} + \normS{\overline{J}}\normS{\overline{J}-J}, \end{align*} and then \begin{equation}\label{E:lowbound_aux4} h_T^{1/2}\normS{\overline{J}} \lesssim \normNS{\nabla (u_\Tau-w)}+ h_T\normNS{R}+h_T\normNS{\mu w-\lambda_\Tau u_\Tau} + h_T^{1/2}\normS{\overline{J}-J}. \end{equation} Now, from~(\ref{E:lowbound_aux3}) and~(\ref{E:lowbound_aux4}) it follows \begin{equation*} h_T^{1/2}\normS{J}\lesssim \normNS{\nabla (u_\Tau-w)}+ h_T\normNS{R}+h_T\normNS{\mu w-\lambda_\Tau u_\Tau} + h_T^{1/2}\normS{J-\overline{J}}. \end{equation*} Adding the last equation over all $S\subset \partial T$, we obtain \begin{equation*} h_T^{1/2}\normbT{J} \lesssim \normNT{\nabla (u_\Tau-w)}+ h_T\normNT{R}+h_T\normNT{\mu w-\lambda_\Tau u_\Tau} + h_T^{1/2}\normbT{J-\overline{J}}. \end{equation*} The claim of this theorem follows by adding this last inequality and~(\ref{E:lowbound-int}). \end{proof} The next result is some kind of stability bound for the oscillation terms, which will be useful to obtain the bound of Corollary~\ref{C:lowerbound} which is what will be effectively used in our convergence proof. \begin{lemma}\label{L:oscillation} Under the assumptions of Theorem~\ref{T:general_lower_bound} there holds $$h_T \normNT{R-\overline{R}}+h_T^{1/2} \normbT{J-\overline{J}}\lesssim h_T (2+\lambda_\Tau)\|u_\Tau\|_{H^1(\omega_\Tau(T))}.$$ \end{lemma} \begin{proof} \step{1} We first consider the term corresponding to the element residual. 
\begin{align} \normT{R-\overline{R}} &= \normT{-\nabla \cdot (\AAA\nabla u_\Tau)-\lambda_\Tau \BB u_\Tau +\overline{\nabla \cdot (\AAA\nabla u_\Tau)}+\lambda_\Tau \overline{\BB u_\Tau}} \notag\\ &\leq \normT{-\nabla \cdot (\AAA\nabla u_\Tau)+\overline{\nabla \cdot (\AAA\nabla u_\Tau)}}+\normT{\lambda_\Tau \big(\BB u_\Tau - \overline{\BB u_\Tau}\big)},\label{E:Proj_aux0} \end{align} where, as before, the bar denotes the $L^2(T)$-projection onto $\PP_{\ell-1}(T)$. Let $A^T=(A^T_{ij})$ denote the mean value of $\AAA =(\AAA_{ij})$ over the element $T$, and note that $\nabla \cdot (A^T\nabla u_\Tau)=\overline{\nabla \cdot (A^T\nabla u_\Tau)}$. Thus, for the first term in the right hand side of~(\ref{E:Proj_aux0}) we have \begin{align*} \normT{-\nabla \cdot (\AAA\nabla u_\Tau)+\overline{\nabla \cdot (\AAA\nabla u_\Tau)}}&= \normT{\nabla \cdot \big((A^T-\AAA)\nabla u_\Tau\big)-\overline{\nabla \cdot \big((A^T-\AAA)\nabla u_\Tau\big)}}\\ &\leq \normT{\big(\nabla \cdot (A^T-\AAA)\big)\cdot\nabla u_\Tau-\overline{\big(\nabla\cdot(A^T-\AAA)\big)\cdot \nabla u_\Tau}}\\ & \quad+\normT{(A^T-\AAA):D^2 u_\Tau-\overline{(A^T-\AAA):D^2 u_\Tau}}\\ &\leq \normT{\big(\nabla \cdot(A^T-\AAA)\big)\cdot\nabla u_\Tau}+\normT{(A^T-\AAA):D^2 u_\Tau}\\ &\lesssim \|\AAA\|_{W^1_\infty(T)}\normT{\nabla u_\Tau}+\|A^T-\AAA\|_{L^\infty(T)}\normT{D^2 u_\Tau}. \end{align*} Since $ \|A^T-\AAA\|_{L^\infty(T)}\lesssim h_T \|\AAA\|_{W^1_\infty(T)}$, an inverse inequality leads to \begin{equation*} \normT{-\nabla \cdot (\AAA\nabla u_\Tau)+\overline{\nabla \cdot (\AAA\nabla u_\Tau)}} \leq \|\AAA\|_{W^1_\infty(T)}\normT{\nabla u_\Tau} + h_T \|\AAA\|_{W^1_\infty(T)}\normT{D^2 u_\Tau} \lesssim \normT{\nabla u_\Tau}. 
\end{equation*} \begin{comment} For all $1\leq i,j \leq d$ and $x,y\in T$ we have \begin{align*} |\AAA_{ij}(y)-\AAA_{ij}(x)|&=\left| \int_0^1 \nabla \AAA_{ij}(x+t(y-x))\cdot (y-x)~dt\right|\\ &\leq \left(\int_0^1 |\nabla \AAA_{ij}(x+t(y-x))|~dt\right)~|y-x|\\ &\lesssim \|\AAA_{ij}\|_{W^1_\infty(T)} h_T. \end{align*} Let us choose $y_{ij}$ such that $\AAA_{ij}(y_{ij})=A^T_{ij}$, then $$|A^T_{ij}-\AAA_{ij}(x)|\lesssim \|\AAA_{ij}\|_{W^1_\infty(T)} h_T.$$ Taking supremum for $x\in T$ we obtain $$\|A^T_{ij}-\AAA_{ij}\|_{L^\infty(T)}\lesssim \|\AAA_{ij}\|_{W^1_\infty(T)} h_T.$$ Therefore, \begin{equation}\label{E:Proj_aux1} \|A^T-\AAA\|_{L^\infty(T)}\lesssim h_T \|\AAA\|_{W^1_\infty(T)}. \end{equation} Thus, using~(\ref{E:Proj_aux1}) we get \begin{align*} \normT{-\nabla \cdot (\AAA\nabla u_\Tau)+\overline{\nabla \cdot (\AAA\nabla u_\Tau)}}&\leq \|\AAA\|_{W^1_\infty(T)}\normT{\nabla u_\Tau}+Ch_T \|\AAA\|_{W^1_\infty(T)}\normT{D^2 u_\Tau}\\ &\lesssim \|\AAA\|_{W^1_\infty(T)}\normT{\nabla u_\Tau}, \end{align*} where for the last inequality we have used an inverse inequality. \end{comment} For the second term in the right hand side of~(\ref{E:Proj_aux0}) we have $$ \normT{\lambda_\Tau \big(\BB u_\Tau - \overline{\BB u_\Tau}\big)}\leq \normT{\lambda_\Tau \BB u_\Tau }\lesssim \lambda_\Tau\normT{u_\Tau}, $$ and therefore, $$\normT{R-\overline{R}}\lesssim (1+\lambda_\Tau)\|u_\Tau\|_{H^1(T)}.$$ The same estimate holds for all elements in ${\mathcal N}_\Tau(T)$, and consequently, \begin{equation}\label{E:interior-osc} h_T\normNT{R-\overline{R}}\lesssim h_T(1+\lambda_\Tau)\|u_\Tau\|_{H^1(\omega_\Tau(T))}. \end{equation} \step{2} Next, we analyze the jump residual. Let $S$ be a side of $T$ and let $T_1$ and $T_2$ denote the elements sharing $S$.
Again, if the bar denotes the $L^2(S)$-projection onto $\PP_{\ell-1}(S)$, it follows that \begin{align*} \normS{J-\overline{J}} &=\normS{\sum_{i=1,2}(\AAA\nabla u_\Tau)|_{T_i}\cdot \overrightarrow{n_i}-\overline{\sum_{i=1,2}(\AAA\nabla u_\Tau)|_{T_i}\cdot \overrightarrow{n_i}}}. \end{align*} Using that $(A^{T_i}\nabla u_\Tau)|_{T_i}\cdot \overrightarrow{n_i}=\overline{(A^{T_i}\nabla u_\Tau)|_{T_i}\cdot \overrightarrow{n_i}}$ we have \begin{align*} \normS{J-\overline{J}} &=\normS{\sum_{i=1,2}\big((\AAA-A^{T_i})\nabla u_\Tau\big)|_{T_i}\cdot \overrightarrow{n_i}-\overline{\sum_{i=1,2}\big((\AAA-A^{T_i})\nabla u_\Tau\big)|_{T_i}\cdot \overrightarrow{n_i}}}\\ &\leq \normS{\sum_{i=1,2}\big((\AAA-A^{T_i})\nabla u_\Tau\big)|_{T_i}\cdot \overrightarrow{n_i}}\\ &\leq \sum_{i=1,2}\normS{\big((\AAA-A^{T_i})\nabla u_\Tau\big)|_{T_i}\cdot \overrightarrow{n_i}}\\ &\leq \sum_{i=1,2}\|\AAA|_{T_i}-A^{T_i}\|_{L^\infty(S)}\normS{\nabla u_\Tau|_{T_i}}\\ &\lesssim \sum_{i=1,2}h_T\|\AAA\|_{W^1_\infty(T_i)}h_T^{-1/2}\|u_\Tau\|_{H^1(T_i)}\\ &\lesssim h_T^{1/2}\|u_\Tau\|_{H^1(\omega_\Tau(S))}. \end{align*} Therefore, \begin{equation}\label{E:jump-osc} h_T^{1/2}\normbT{J-\overline{J}}\lesssim h_T\|u_\Tau\|_{H^1(\omega_\Tau(T))}. \end{equation} Adding~(\ref{E:interior-osc}) and~(\ref{E:jump-osc}) we obtain the claim of this lemma. \end{proof} As an immediate consequence of Theorem~\ref{T:general_lower_bound} and Lemma~\ref{L:oscillation} the following result holds. \begin{corollary}[Lower bound]\label{C:lowerbound} Under the assumptions of Theorem~\ref{T:general_lower_bound} there holds $$\est{\lambda_\Tau,u_\Tau}{T}\lesssim \normNT{\nabla (w-u_\Tau)} +h_T \normNT{\mu w}+h_T(1+\lambda_\Tau)\|u_\Tau\|_{H^1(\omega_\Tau(T))}.$$ \end{corollary} \subsection{Adaptive loop}\label{S:adloop} Our goal is to use an adaptive method to approximate the $j$-th eigenvalue and one of its eigenfunctions, for some fixed $j \in \NN$. 
From now on, we thus keep $j\in\NN$ fixed, and let $\lambda$ denote the $j$-th eigenvalue of (\ref{E:cont-problem}) and $u$ an eigenfunction in $M(\lambda)$. The algorithm for approximating $\lambda$ and $M(\lambda)$ is an iteration of the following main steps: \begin{enumerate} \item [(1)] $(\lambda_k,u_k):= \textsf{SOLVE}(\VV_k)$. \item [(2)] $\{\eta_k(T)\}_{T\in\Tau_k}:= \textsf{ESTIMATE}(\lambda_k,u_k,\Tau_k)$. \item [(3)] $\MM_k:= \textsf{MARK}(\{\eta_k(T)\}_{T\in\Tau_k},\Tau_k)$. \item [(4)] $\Tau_{k+1}:= \textsf{REFINE}(\Tau_k,\MM_k)$, increment $k$. \end{enumerate} This is the same loop considered in~\cite{MSV_convergence}; the difference lies in the building blocks, which we now describe in detail. If $\Tau_k$ is a conforming triangulation of $\Omega$, the module \textsf{SOLVE} takes the space $\VV_k := \VV_{\Tau_k}$ as input argument and outputs the $j$-th eigenvalue of the discrete problem~(\ref{E:disc-problem}) with $\Tau=\Tau_k$, i.e., $\lambda_k:= \lambda_{j,\Tau_k}$, and a corresponding eigenfunction $u_k\in\VV_k$. Therefore, $\lambda_k$ and $u_k$ satisfy \begin{equation}\label{E:kdisc-problem} \left\{ \begin{array}{l} a(u_k,v_k)=\lambda_k \, b(u_k,v_k),\qquad \forall~v_k\in \VV_k,\\ \normab{u_k}=1. \end{array} \right. \end{equation} Given $\Tau_k$ and the corresponding outputs $\lambda_k$ and $u_k$ of \textsf{SOLVE}, the module \textsf{ESTIMATE} computes and outputs the a posteriori error estimators $\{\eta_k(T)\}_{T\in\Tau_k}$, where $$\eta_k(T):= \eta_{\Tau_k}(\lambda_k,u_k;T).$$ Based upon the a posteriori error indicators $\{\eta_k(T)\}_{T\in\Tau_k}$, the module \textsf{MARK} collects elements of $\Tau_k$ in $\MM_k$. In order to simplify the presentation, the only requirement that we make on the module \textsf{MARK} is that the set of marked elements $\MM_k$ contains at least one element of $\Tau_k$ attaining the largest value of the estimator.
That is, there exists one element $T_k^{\max} \in \MM_k$ such that \[ \eta_k(T_k^{\max}) = \max_{T \in \Tau_k} \eta_k(T). \] Whenever a marking strategy satisfies this assumption, we call it \emph{reasonable}, since this is what practitioners do in order to maximize the error reduction with minimum effort. The most commonly used marking strategies, e.g., the \textit{maximum strategy} and the \textit{equidistribution strategy}, fulfill this condition, which is sufficient to guarantee that \begin{equation}\label{E:marking} T \in \Tau_k \setminus \MM_k \qquad\Longrightarrow \qquad \eta_k(T) \lesssim \eta_k(\MM_k) := \bigg( \sum_{T \in \MM_k} \eta_k(T)^2 \bigg)^{1/2}. \end{equation} Condition~(\ref{E:marking}) is slightly weaker than the assumption itself, and it is what we will actually use in our proof. The original strategy of D\"orfler also guarantees~(\ref{E:marking}). The refinement procedure \textsf{REFINE} takes the triangulation $\Tau_k$ and the subset $\MM_k\subset \Tau_k$ as input arguments. We require that all elements of $\MM_k$ are refined (at least once), and that a new conforming triangulation $\Tau_{k+1}$ of $\Omega$, which is a refinement of $\Tau_k$, is returned as output. In this way, starting with an initial conforming triangulation $\Tau_0$ of $\Omega$ and iterating the steps (1), (2), (3) and (4) of this algorithm, we obtain a sequence of successive conforming refinements of $\Tau_0$, denoted $\Tau_1,\Tau_2,\ldots$, and the corresponding outputs $(\lambda_k,u_k)$, $\{\eta_k(T)\}_{T\in\Tau_k}$, $\MM_k$ of the modules \textsf{SOLVE}, \textsf{ESTIMATE} and \textsf{MARK}, respectively. For simplicity, we consider for the module \textsf{REFINE} the concrete choice of the \textit{newest vertex} bisection procedure in two dimensions and the bisection procedure of Kossaczk\'y in three dimensions~\cite{Alberta}.
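The interplay of the four modules can be summarized in the following schematic sketch (ours, for illustration only; \texttt{solve}, \texttt{estimate} and \texttt{refine} are hypothetical placeholders standing for a discrete eigensolver, the estimator of Section~\ref{S:apost} and conforming bisection). The marking routine implements a maximum-type strategy, which is \emph{reasonable} in the sense above, since the marked set always contains an element attaining the largest indicator.

```python
# Schematic AFEM loop SOLVE -> ESTIMATE -> MARK -> REFINE.
# solve, estimate and refine are hypothetical placeholders; the
# marking routine is a maximum-type strategy: it marks every element
# whose indicator is within a factor theta of the largest one, so the
# marked set always contains an element attaining the maximum.

def mark_maximum(eta, theta=0.5):
    """Return the set of elements T with eta[T] >= theta * max eta."""
    eta_max = max(eta.values())
    return {T for T, e in eta.items() if e >= theta * eta_max}

def afem(mesh, solve, estimate, refine, tol=1e-6, max_iter=50):
    """Iterate the four modules until the global estimator is below tol."""
    history = []
    for _ in range(max_iter):
        lam, u = solve(mesh)                       # (1) discrete eigenpair
        eta = estimate(lam, u, mesh)               # (2) {T: eta_k(T)}
        total = sum(e ** 2 for e in eta.values()) ** 0.5
        history.append((lam, total))
        if total <= tol:
            break
        marked = mark_maximum(eta)                 # (3) reasonable marking
        mesh = refine(mesh, marked)                # (4) conforming refinement
    return lam, u, history
```

With this choice of marking, every unmarked element satisfies $\eta_k(T) < \theta\,\eta_k(T_k^{\max}) \le \theta\,\eta_k(\MM_k)$, so condition~(\ref{E:marking}) holds with a constant depending only on \texttt{theta}.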
Both these procedures refine the marked elements and some additional ones in order to keep conformity, and they also guarantee that $$\kappa:= \sup_{k\in\NN_0} \kappa_{\Tau_k}<\infty,$$ i.e., $\{\Tau_k\}_{k\in\NN_0}$ is a shape-regular sequence of triangulations of $\Omega$. It is worth mentioning that we do not assume \textsf{REFINE} to enforce the so-called \emph{interior node property}, and convergence is guaranteed nevertheless; this is an important difference with respect to~\cite{Giani}. Regarding the module \textsf{MARK}, we stress that the marking is done only according to the error estimators; no marking due to oscillation is necessary. This is another important difference with respect to~\cite{Giani}, where the set of marked elements has to be enlarged so that D\"orfler's criterion is satisfied not only by the data oscillation terms, but also by the oscillation of the current solution $u_k$. \section{Convergence to a limiting pair}\label{S:uinfty} In this section we will prove that the sequence of discrete eigenpairs $\{(\lambda_k, u_k)\}_{k\in\NN_0}$ obtained by \textsf{SOLVE} throughout the adaptive loop of Section~\ref{S:adloop} has the following property: $\lambda_k$ converges to some $\lambda_\infty \in \RR$ and there exists a subsequence $\{u_{k_m}\}_{m\in\NN_0}$ of $\{u_k\}_{k\in\NN_0}$ converging in $H^1(\Omega)$ to a function $u_\infty$. Let us define the limiting space as $\VV_\infty:=\overline{\cup \VV_k}^{H^1_0(\Omega)}$, and note that $\VV_\infty$ is a closed subspace of $H^1_0(\Omega)$ and therefore it is itself a Hilbert space with the inner product inherited from $H^1_0(\Omega)$. Since $\Tau_{k+1}$ is always a refinement of $\Tau_k$, by the minimum-maximum principle $\{\lambda_k\}_{k\in\NN_0}$ is a decreasing sequence bounded below by $\lambda$. Therefore, there exists $\lambda_\infty>0$ such that \begin{equation*} \lambda_k \searrow \lambda_\infty.
\end{equation*} From~(\ref{E:kdisc-problem}) it follows that \begin{equation}\label{E:normaa of u_k} \normaa{u_k}^2=a(u_k,u_k)=\lambda_k b(u_k,u_k)=\lambda_k \normab{u_k}^2=\lambda_k \rightarrow \lambda_\infty, \end{equation} and therefore that $\{u_k\}_{k\in\NN_0}$ is a bounded sequence in $\VV_\infty$. Then, there exists a subsequence $\{u_{k_m}\}_{m\in\NN_0}$ weakly convergent in $\VV_\infty$ to a function $u_\infty\in\VV_\infty$, so \begin{equation}\label{E:u_k conv weakly in H1} u_{k_m} \rightharpoonup u_\infty \quad \textrm{in}\quad H^1_0(\Omega). \end{equation} Using Rellich's theorem we can extract a subsequence of the last one, which we still denote $\{u_{k_m}\}_{m\in\NN_0}$, such that \begin{equation}\label{E:u_k conv in L2} u_{k_m} \longrightarrow u_\infty \quad \textrm{in}\quad L^2(\Omega). \end{equation} If $k_0\in\NN_0$ and $k_m\geq k_0$, for all $v_{k_0}\in\VV_{k_0}$ we have that $a(u_{k_m},v_{k_0})=\lambda_{k_m}b(u_{k_m},v_{k_0})$, and letting $m$ tend to infinity we obtain that $a(u_\infty,v_{k_0})=\lambda_\infty b(u_\infty,v_{k_0})$. Since $k_0\in\NN_0$ and $v_{k_0}\in\VV_{k_0}$ are arbitrary, and $\bigcup_k \VV_k$ is dense in $\VV_\infty$, we have that \begin{equation}\label{E:infty equation} a(u_\infty,v)=\lambda_\infty b(u_\infty,v), \qquad \forall~v\in \VV_\infty. \end{equation} On the other hand, since $\normab{u_{k_m}}=1$, considering~(\ref{E:u_k conv in L2}) we conclude that $\normab{u_\infty}=1.$ Now, taking into account~(\ref{E:infty equation}) we have that $$\normaa{u_\infty}^2=\lambda_\infty \normab{u_\infty}^2=\lambda_\infty.$$ From (\ref{E:normaa of u_k}) it follows that $\normaa{u_{k_m}}^2=\lambda_{k_m} \longrightarrow \lambda_\infty,$ and therefore $\normaa{u_{k_m}} \rightarrow \normaa{u_\infty}$. Since, in a Hilbert space, weak convergence together with convergence of the norms implies strong convergence, this and~(\ref{E:u_k conv weakly in H1}) yield \begin{equation*} u_{k_m} \longrightarrow u_\infty \quad \textrm{in} \quad H^1_0(\Omega).
\end{equation*} Summarizing, we have proved the following \begin{theorem}\label{T:limiting function} There exist $\lambda_\infty\in\RR$ and $u_\infty\in\VV_\infty$ such that \begin{equation*} \left\{ \begin{array}{l} a(u_\infty,v)=\lambda_\infty \, b(u_\infty,v),\qquad \forall~v\in \VV_\infty,\\ \normab{u_\infty}=1. \end{array} \right. \end{equation*} Moreover, $\D\lambda_\infty = \lim_{k\to\infty} \lambda_k$ and there exists a subsequence $\{u_{k_m}\}_{m\in\NN_0}$ of $\{u_k\}_{k\in\NN_0}$ such that $$u_{k_m} \longrightarrow u_\infty \quad \textrm{in} \quad H^1_0(\Omega).$$ \end{theorem} \begin{remark}\label{R:subsubsequence} It is important to notice that from any subsequence $\{(\lambda_{k_m},u_{k_m})\}_{m\in\NN_0}$ of $\{(\lambda_k,u_k)\}_{k\in\NN_0}$, we can extract another subsequence $\{(\lambda_{k_{m_n}},u_{k_{m_n}})\}_{n\in\NN_0}$, such that $u_{k_{m_n}}$ converges in $H^1(\Omega)$ to some function $\tilde u_\infty\in\VV_\infty$ that satisfies \begin{equation*} \left\{ \begin{array}{l} a(\tilde u_\infty,v)=\lambda_\infty \, b(\tilde u_\infty,v),\qquad \forall~v\in \VV_\infty,\\ \normab{\tilde u_\infty}=1. \end{array} \right. \end{equation*} \end{remark} \section{Convergence of estimators}\label{S:convest} In this section we will prove that the global a posteriori estimator defined in Section~\ref{S:apost} tends to zero. We will follow the same steps as in~\cite{MSV_convergence}, providing the proofs of the results that are problem dependent. Those geometrical results that are consequences of the fact that we are only refining will be stated without proof, but with a precise reference to the result from~\cite{MSV_convergence} being used. In order not to clutter the notation, we will still write $\{u_k\}_{k\in\NN_0}$ for the subsequence $\{u_{k_m}\}_{m\in\NN_0}$, and $\{\Tau_k\}_{k\in\NN_0}$ for the sequence $\{\Tau_{k_m}\}_{m\in\NN_0}$.
Also, we will replace the subscript $\Tau_k$ by $k$ (e.g.\ ${\mathcal N}_k(T):= {\mathcal N}_{\Tau_k}(T)$ and $\omega_k(T):= \omega_{\Tau_k}(T)$), and whenever $\Xi$ is a subset of $\Tau_k$, $\eta_k(\Xi)^2$ will denote the sum $\sum_{T\in \Xi}\eta_k(T)^2$. The main result of this section is the following \begin{theorem}[Estimator's convergence]\label{T:est_conv} If $\{\Tau_k\}_{k\in\NN_0}$ denotes the triangulations corresponding to the convergent subsequence of discrete eigenpairs from Theorem~\ref{T:limiting function}, then $$\lim_{k\rightarrow\infty} \eta_k(\Tau_k)=0.$$ \end{theorem} In order to prove this theorem we consider the following decomposition of $\Tau_k$, which was first established in~\cite{MSV_convergence}. \begin{definition}\label{D:splitting} Given the sequence $\{\Tau_k\}_{k\in\NN_0}$ of triangulations, for each $k \in \NN_0$ we define the following (disjoint) subsets of $\Tau_k$. \begin{itemize} \item $\Tau_k^0:= \{T\in\Tau_k : T'~ \textrm{is refined at least} ~n_d~ \textrm{times, for all}~ T'\in {\mathcal N}_k(T)\}$; \item $\Tau_k^+:= \{T\in\Tau_k : T'~ \textrm{is never refined, for all}~T'\in {\mathcal N}_k(T)\}$; \item $\Tau_k^*:= \Tau_k \setminus (\Tau_k^0\cup \Tau_k^+)$. \end{itemize} We also define the three (overlapping) regions in $\Omega$: \begin{itemize} \item $\Omega_k^0:= \bigcup\limits_{T\in\Tau_k^0} \omega_k(T)$; \item $\Omega_k^+:= \bigcup\limits_{T\in\Tau_k^+} \omega_k(T)$; \item $\Omega_k^*:= \bigcup\limits_{T\in\Tau_k^*} \omega_k(T)$. \end{itemize} \end{definition} We will prove that $\eta_k(\Tau_k^0)$, $\eta_k(\Tau_k^*)$ and $\eta_k(\Tau_k^+)$ tend to zero as $k$ tends to infinity in Theorems~\ref{T:est_conv_1}, \ref{T:est_conv_2} and \ref{T:est_conv_3}. Since $\eta_k(\Tau_k)^2 = \eta_k(\Tau_k^0)^2 + \eta_k(\Tau_k^+)^2 + \eta_k(\Tau_k^*)^2 $, Theorem~\ref{T:est_conv} will follow from these results.
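To make the splitting of Definition~\ref{D:splitting} concrete, the classification can be sketched as follows (a hypothetical illustration, not part of the analysis), encoding the refinement history by the total number of bisections each element eventually undergoes along the sequence of meshes:

```python
# Hypothetical sketch of the splitting of Definition D:splitting.
# bisections[T] = number of bisections T eventually undergoes along
# the whole sequence of meshes; neighbors[T] = the set N(T) of
# elements touching T (T itself included).

def split_mesh(elements, neighbors, bisections, n_d):
    """Classify each element into T0 (fully refined neighborhood),
    Tplus (never-refined neighborhood) or Tstar (the rest)."""
    T0, Tplus, Tstar = set(), set(), set()
    for T in elements:
        counts = [bisections[N] for N in neighbors[T]]
        if all(c >= n_d for c in counts):
            T0.add(T)        # every neighbor is bisected >= n_d times
        elif all(c == 0 for c in counts):
            Tplus.add(T)     # no neighbor is ever refined
        else:
            Tstar.add(T)     # mixed neighborhood
    return T0, Tplus, Tstar
```

By construction the three sets are disjoint and their union is the whole mesh, mirroring the identity $\eta_k(\Tau_k)^2 = \eta_k(\Tau_k^0)^2 + \eta_k(\Tau_k^+)^2 + \eta_k(\Tau_k^*)^2$.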
\begin{definition}[Meshsize function] We define $h_k\in L^\infty(\Omega)$ as the piecewise constant function $$h_k|_T:=|T|^{1/d},\qquad \forall~T\in\Tau_k.$$ \end{definition} For almost every $x\in\Omega$, the sequence $\{h_k(x)\}_{k\in\NN_0}$ is monotonically decreasing and bounded from below by $0$. Therefore, $$h_\infty(x):=\lim\limits_{k\rightarrow\infty} h_k(x)$$ is well-defined for almost every $x\in\Omega$ and defines a function in $L^\infty(\Omega)$. Moreover, the following result holds~\cite[Lemma 4.3 and Corollary 4.1]{MSV_convergence}. \begin{lemma}\label{L:hktendstozero} The sequence $\{h_k\}_{k\in\NN_0}$ converges to $h_\infty$ uniformly, i.e., $$\lim_{k\rightarrow\infty} \|h_k-h_\infty\|_{L^\infty(\Omega)}=0,$$ and if $\chi_{\Omega_k^0}$ denotes the characteristic function of $\Omega_k^0$ then $$\lim_{k\rightarrow\infty} \|h_k \chi_{\Omega_k^0}\|_{L^\infty(\Omega)}=0.$$ \end{lemma} This lemma is a consequence of the fact that the sequence of triangulations is obtained by refinement only, and that every time an element $T \in \Tau_k$ is refined into $\Tau_{k+1}$, $h_{k+1}(x)\leq \left(\frac{1}{2}\right)^{1/d} h_k(x)$ for almost every $x\in T$; it is otherwise independent of the marking strategy. The next result is also independent of the marking strategy; it is just a consequence of the fact that $u_k\rightarrow u_\infty$, of the lower bound, and of the convergence of $\|h_k \chi_{\Omega_k^0}\|_{L^\infty(\Omega)}$ to zero.
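The reduction factor $\left(\frac{1}{2}\right)^{1/d}$ used above follows from the convention $h_k|_T=|T|^{1/d}$ and the fact that one bisection halves the volume; this can be checked with a trivial numerical sketch (ours, not part of the analysis):

```python
# A bisection halves the volume |T|; since h restricted to T equals
# |T|**(1/d), each bisection reduces the local meshsize exactly by
# the factor (1/2)**(1/d).

def meshsize(volume, d):
    """Local meshsize h|_T = |T|^(1/d)."""
    return volume ** (1.0 / d)

def bisect(volume):
    """Split an element of volume |T| into two children of volume |T|/2."""
    return [volume / 2.0, volume / 2.0]

d = 2                                   # space dimension
h_parent = meshsize(1.0, d)             # parent element of unit volume
h_child = meshsize(bisect(1.0)[0], d)   # meshsize after one bisection
# h_child / h_parent == (1/2)**(1/d) ~= 0.7071 for d = 2
```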
\begin{theorem}[Estimator's convergence: First part]\label{T:est_conv_1} If $\{\Tau_k\}_{k\in\NN_0}$ denotes the triangulations corresponding to the convergent subsequence of discrete eigenpairs from Theorem~\ref{T:limiting function}, then the contribution of $\Tau_k^0$ to the estimator vanishes in the limit, i.e., $$\lim_{k\rightarrow\infty} \eta_k(\Tau_k^0)= 0.$$ \end{theorem} \begin{proof} Using Corollary~\ref{C:lowerbound} with $\WW=\VV_\infty$, $w=u_\infty$, and $\mu=\lambda_\infty$ we have that \begin{align*} \eta_k(\Tau_k^0)^2 &= \sum_{T\in \Tau_k^0} \eta_k(T)^2\\ &\lesssim\sum_{T\in \Tau_k^0} \normkNT{\nabla(u_k-u_\infty)}^2+h_T^2 \normkNT{\lambda_\infty u_\infty}^2+h_T^2(1+\lambda_k)^2\|u_k\|_{H^1(\omega_k(T))}^2\\ &\lesssim\norm{\nabla(u_k-u_\infty)}^2+\|h_k \chi_{\Omega_k^0}\|_{L^\infty(\Omega)}^2\big(\norm{\lambda_\infty u_\infty}^2+(1+\lambda_k)^2\|u_k\|_{H^1(\Omega)}^2\big). \end{align*} Since $\lambda_k \rightarrow \lambda_\infty$ in $\RR$ and $u_k \rightarrow u_\infty$ in $H^1(\Omega)$, Lemma~\ref{L:hktendstozero} implies the claim. \end{proof} The following lemma was proved as the first step of the proof of~\cite[Proposition 4.2]{MSV_convergence}; it is also a consequence of the fact that the sequence of triangulations is obtained by refinement, without coarsening, and it is independent of the specific problem being considered.
\begin{lemma}\label{L:interfasetendstozero} If $\Omega_k^*$ is as in Definition~\ref{D:splitting}, then $$\lim_{k\rightarrow\infty}|\Omega_k^*|= 0.$$ \end{lemma} From Corollary~\ref{C:lowerbound}, Lemma~\ref{L:interfasetendstozero} and the fact that $u_k\rightarrow u_\infty$ in $H^1(\Omega)$ we obtain the following. \begin{theorem}[Estimator's convergence: Second part]\label{T:est_conv_2} If $\{\Tau_k\}_{k\in\NN_0}$ denotes the triangulations corresponding to the convergent subsequence of discrete eigenpairs from Theorem~\ref{T:limiting function}, then the contribution of $\Tau_k^*$ to the estimator vanishes in the limit, i.e., $$\lim_{k\rightarrow\infty}\eta_k(\Tau_k^*)= 0.$$ \end{theorem} \begin{proof} Let $(\lambda, u)$ be any eigenpair of~(\ref{E:cont-problem}). Then Corollary~\ref{C:lowerbound} with $\WW=H^1_0(\Omega)$, $w=u$, and $\mu=\lambda$ implies that \begin{align*} \eta_k(\Tau_k^*)^2 &= \sum_{T\in \Tau_k^*} \eta_k(T)^2\\ &\lesssim\sum_{T\in \Tau_k^*} \normkNT{\nabla(u_k-u)}^2+h_T^2 \normkNT{\lambda u}^2+h_T^2(1+\lambda_k)^2\| u_k\|_{H^1(\omega_k(T))}^2\\ &\lesssim \|\nabla(u_k-u)\|_{\Omega_k^*}^2+\lambda^2\|u\|_{\Omega_k^*}^2+ (1+\lambda_k)^2\|u_k\|_{H^1(\Omega_k^*)}^2\\ &\lesssim \|\nabla(u_k-u_\infty)\|_{\Omega}^2+\|\nabla(u_\infty-u)\|_{\Omega_k^*}^2+\lambda^2\|u\|_{\Omega_k^*}^2\\ &\quad+(1+\lambda_k)^2\|u_k-u_\infty\|_{H^1(\Omega)}^2+(1+\lambda_k)^2\|u_\infty\|_{H^1(\Omega_k^*)}^2. \end{align*} Taking into account that $\lambda_k \to \lambda_\infty$ in $\RR$ and $u_k \rightarrow u_\infty$ in $H^1(\Omega)$, and that, by Lemma~\ref{L:interfasetendstozero} and the absolute continuity of the integral, the terms over $\Omega_k^*$ tend to zero, the claim follows.
\end{proof} In order to prove that the estimator contribution from $\Tau_k^+$ vanishes in the limit, we make the following \begin{definition}\label{D:Tau-plus} Let $\Tau^+$ be the set of elements that are never refined, i.e., $$\Tau^+:= \bigcup_{k\geq 0}\bigcap_{m \geq k} \Tau_m,$$ and let the set $\Omega^+$ be defined as $$\Omega^+:= \bigcup_{T\in\Tau^+} T.$$ \end{definition} It is interesting to observe at this point that \begin{lemma}\label{L:h0<=>omega+} The set $\Omega^+$ is empty if and only if $\D \lim_{k\rightarrow\infty} \|h_k\|_{L^\infty(\Omega)}=0$. \end{lemma} \begin{proof} If $\Omega^+$ is empty, then $\Omega^+_k$ and $\Omega^*_k$ are empty for all $k\in\NN_0$, and $\|h_k\|_{L^\infty(\Omega)}=\|h_k\|_{L^\infty(\Omega^0_k)}$, which tends to zero by Lemma~\ref{L:hktendstozero}. Conversely, if $\lim_{k\rightarrow\infty} \|h_k\|_{L^\infty(\Omega)}=0$, then $\Omega^+$ must be empty; otherwise there would exist $T\in\Tau^+$, and for all $k$ we would have $\|h_k\|_{L^\infty(\Omega)}\geq |T|^{1/d}$. \end{proof} This lemma, like Lemma~\ref{L:hktendstozero}, is just a geometric observation, and a consequence of the fact that the sequence of triangulations is shape regular and obtained by refinement, but it is independent of the particular problem being considered. As an immediate consequence of Definition~\ref{D:Tau-plus} and Lemma 4.1 in \cite{MSV_convergence} we have that $$\Tau^+= \bigcup_{k\geq 0} \Tau_k^+.$$ \begin{remark} Theorems~\ref{T:est_conv_1} and~\ref{T:est_conv_2} hold independently of the marking strategy. In the next theorem, we will make use for the first time of assumption~\eqref{E:marking} made on the module \textsf{MARK}.
\end{remark} \begin{theorem}[Estimator's convergence: Third part]\label{T:est_conv_3} If $\{\Tau_k\}_{k\in\NN_0}$ denote the triangulations corresponding to the convergent subsequence of discrete eigenpairs from Theorem~\ref{T:limiting function}, then the contribution of $\Tau_k^+$ to the estimator vanishes in the limit, i.e., $$\lim_{k\rightarrow\infty} \eta_k(\Tau_k^+)=0.$$ \end{theorem} \begin{proof} Let $T\in\Tau^+$; then there exists $k_0$ such that $T\in \Tau_k$ for all $k\geq k_0$. Taking into account that all marked elements are refined at least once, we have that $T \notin \MM_k$ for all $k\geq k_0$. From assumption~\eqref{E:marking}, $\eta_k(T) \lesssim \eta_k(\MM_k)$. Since $\MM_k \subset \Tau_k^* \cup \Tau_k^0$, Theorems~\ref{T:est_conv_1} and~\ref{T:est_conv_2} imply that \[ \eta_k(T)^2 \lesssim \eta_k(\MM_k)^2 \le \eta_k(\Tau_k^*)^2 + \eta_k(\Tau_k^0)^2 \longrightarrow 0. \] We have thus proved that \begin{equation*} \eta_k(T)\longrightarrow 0, \qquad\text{for all $T\in\Tau^+$.} \end{equation*} Now, we will prove that, moreover, $$\sum_{T\in \Tau_k^+} \eta_k(T)^2\longrightarrow 0.$$ To prove this, we resort to a generalized majorized convergence theorem. We first define \begin{equation*} \epsilon_k|_T:=\frac{1}{|T|}\eta_k(T)^2, \quad\text{for all $T\in\Tau_k^+$,}\qquad\text{and}\quad \epsilon_k:= 0,\quad\text{otherwise}. \end{equation*} Then $\sum_{T\in \Tau_k^+} \eta_k(T)^2 = \int_{\Omega} \epsilon_k(x) \, dx$, and $\epsilon_k(x) \to 0$ as $k\to\infty$ for almost every $x\in\Omega$. It remains to prove that $\int_{\Omega} \epsilon_k(x) \, dx \to 0$ as $k\to\infty$. Let $k$ be fixed. Due to the definition of $\Tau_k^+$, for $T\in\Tau_k^+$ we have that $\omega_k(T)=\omega_j(T)$ for all $j\geq k$, and we can drop the subscript and call this set $\omega(T)$.
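For later use, we record the particular form of the generalized majorized convergence theorem from~\cite[p.1015]{Zeidler} that will be invoked at the end of this proof (restated here in the notation of this section): if $\{\epsilon_k\}_{k\in\NN_0}$, $\{M_k\}_{k\in\NN_0}$ and $M$ are integrable functions on $\Omega^+$ satisfying
\begin{equation*}
0\le \epsilon_k\le M_k \ \text{ a.e.\ in } \Omega^+, \qquad \epsilon_k\rightarrow 0 \ \text{ a.e.\ in } \Omega^+, \qquad M_k\rightarrow M \ \text{ in } L^1(\Omega^+),
\end{equation*}
then $\D\int_{\Omega^+}\epsilon_k(x)\,dx\rightarrow 0$ as $k\rightarrow\infty$.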
Using Corollary~\ref{C:lowerbound} we have that if $(\lambda,u)$ is any fixed eigenpair of~(\ref{E:cont-problem}), \begin{align*} \eta_k(T)^2&\lesssim\|\nabla(u_k-u)\|_{\omega(T)}^2+\|\lambda u\|_{\omega(T)}^2 + (1+\lambda_k)^2\|u_k\|_{H^1(\omega(T))}^2\\ &\lesssim\|\nabla(u_k-u_\infty)\|_{\omega(T)}^2+\|\nabla u_\infty\|_{\omega(T)}^2+\|\nabla u\|_{\omega(T)}^2+\|\lambda u\|_{\omega(T)}^2 \\ &\quad + (1+\lambda_0)^2\|u_k-u_\infty\|_{H^1(\omega(T))}^2+ (1+\lambda_0)^2\|u_\infty\|_{H^1(\omega(T))}^2\\ &\lesssim (1+\lambda_0)^2\left(\|u_k-u_\infty\|_{H^1(\omega(T))}^2 +c_T^2\right), \end{align*} where $$c_T^2:= \|u_\infty\|_{H^1(\omega(T))}^2+\|\lambda u\|_{\omega(T)}^2+\|\nabla u\|_{\omega(T)}^2,$$ and fulfills \begin{equation}\label{E:c_T} \sum_{T\in\Tau_k^+} c_T^2\lesssim\|u_\infty\|_{H^1(\Omega)}^2+\norm{\lambda u}^2+\norm{\nabla u}^2<\infty. \end{equation} Let now $M_k$ be defined by \begin{equation*} M_k|_T:=\frac{C}{|T|}\big(\|u_k-u_\infty\|_{H^1(\omega(T))}^2 +c_T^2\big), \quad\text{for all $T\in\Tau_k^+$,}\qquad\text{and}\quad M_k:= 0 ,\quad\text{otherwise}, \end{equation*} where $C$ is chosen so that $0\leq\epsilon_k(x)\leq M_k(x)$, for all $x\in\Omega$. If we define \begin{equation*} M|_T:= C\frac{c_T^2}{|T|}, \quad\text{for all $T\in\Tau^+$,} \qquad \text{and} \quad M:= 0 , \quad\text{otherwise}, \end{equation*} then \begin{align*} \int_{\Omega^+} |M_k(x)-M(x)|~dx &=\sum_{T\in\Tau^+\setminus \Tau^+_k} \int_T |M_k(x)-M(x)|~dx+\sum_{T\in\Tau^+_k} \int_T |M_k(x)-M(x)|~dx\\ &=\sum_{T\in\Tau^+\setminus \Tau^+_k} \int_T |M(x)|~dx+C\sum_{T\in\Tau^+_k} \|u_k-u_\infty\|_{H^1(\omega(T))}^2\\ &\lesssim C\sum_{T\in\Tau^+\setminus \Tau^+_k} c_T^2+C\|u_k-u_\infty\|_{H^1(\Omega)}^2. \end{align*} The terms in the right hand side tend to zero when $k$ tends to infinity, due to~(\ref{E:c_T}) and the fact that $u_k$ converges to $u_\infty$ in $H^1_0(\Omega)$. 
Therefore, $$M_k\longrightarrow M,\qquad \textrm{in}\quad L^1(\Omega^+).$$ Hence, using that $\epsilon_k(x)\rightarrow 0$, for almost every $x\in\Omega$, we can apply a generalized majorized convergence theorem \cite[p.1015]{Zeidler} to conclude that $$ \eta_k(\Tau_k^+)^2 = \sum_{T\in \Tau_k^+} \eta_k(T)^2=\int_{\Omega^+} \epsilon_k(x)~dx \longrightarrow 0, $$ as $k\rightarrow \infty$. \end{proof} We have proved in this section that $\eta_k(\Tau_k)\rightarrow 0$ as $k\rightarrow \infty$. In the next section we will use this result to conclude that $(\lambda_\infty, u_\infty)$ is an eigenpair of the continuous problem~(\ref{E:cont-problem}). \section{The limiting pair is an eigenpair}\label{S:eigenfunction} In this section we will prove that $(\lambda_\infty,u_\infty)$ is an eigenpair of the continuous problem~(\ref{E:cont-problem}). The idea used in~\cite{MSV_convergence} to prove that $u_\infty$ is the exact solution of the continuous problem consisted in using the \emph{reliability} of the a posteriori error estimators, that is, the fact that the error in energy norm is bounded (up to a constant) by the global error estimator. Such a bound does not hold in this case unless the underlying triangulation is sufficiently fine (see Theorem~\ref{T:upperbound}). We do not enforce such a condition on the initial triangulation $\Tau_0$, since the term \emph{sufficiently fine} is not easily quantifiable. Instead we resort to another idea: we bound $a(u_\infty,v)-\lambda_\infty b(u_\infty,v)$ by the residuals of the discrete problems, which are in turn bounded by the estimators, which were proved to converge to zero in the previous section. \begin{theorem}\label{T:eigenfunction} The limiting pair $(\lambda_\infty,u_\infty)$ of Theorem~\ref{T:limiting function} is an eigenpair of the continuous problem~\eqref{E:cont-problem}.
That is, \begin{equation*} \left\{ \begin{array}{l} a(u_\infty,v)=\lambda_\infty \, b(u_\infty,v),\qquad \forall~v\in H^1_0(\Omega),\\ \normab{u_\infty}=1. \end{array} \right. \end{equation*} \end{theorem} \begin{proof} We know that $\normab{u_\infty}=1$ due to Theorem~\ref{T:limiting function}. It remains to prove that $$a(u_\infty,v)=\lambda_\infty b(u_\infty,v),\qquad \forall~v\in H^1_0(\Omega).$$ Let $v\in H^1_0(\Omega)$, and let $v_k\in \VV_k$ be the Scott-Zhang interpolant~\cite{Scott-Zhang},\cite{Scott-Zhang92} of $v$, which satisfies $$\normT{v-v_k}\lesssim h_T \|\nabla v\|_{\omega_k(T)}\qquad \textrm{and}\qquad \normbT{v-v_k}\lesssim h_T^{1/2} \|\nabla v\|_{\omega_k(T)}.$$ From (\ref{E:kdisc-problem}) we have $$a(u_k,v_k)=\lambda_k b(u_k,v_k),$$ for all $k$, and then \begin{align} \notag |a(u_\infty,v)-&\lambda_\infty b(u_\infty,v)|= |a(u_\infty,v)-\lambda_\infty b(u_\infty,v)-a(u_k,v_k)+\lambda_k b(u_k,v_k)| \\ \notag &= |a(u_k,v-v_k)-\lambda_k b(u_k,v-v_k)+b(\lambda_k u_k-\lambda_\infty u_\infty,v)+a(u_\infty-u_k,v)| \\ \label{E:to-be-bounded} &\le |a(u_k,v-v_k)-\lambda_k b(u_k,v-v_k)|+|b(\lambda_k u_k-\lambda_\infty u_\infty,v)|+|a(u_\infty-u_k,v)|. \end{align} The second term in~\eqref{E:to-be-bounded} can be bounded as \begin{align*} |b(\lambda_k u_k-\lambda_\infty u_\infty,v)|&=|\lambda_k b(u_k- u_\infty,v)+(\lambda_k-\lambda_\infty)b(u_\infty,v)|\\ &\leq |\lambda_k| |b(u_k- u_\infty,v)|+|\lambda_k-\lambda_\infty||b(u_\infty,v)|\\ &\lesssim \lambda_0 \norm{u_k- u_\infty}\norm{v}+|\lambda_k-\lambda_\infty|\norm{u_\infty}\norm{v}\\ &\lesssim\left(\lambda_0 \norm{u_k- u_\infty}+|\lambda_k-\lambda_\infty| \norm{u_\infty}\right)\norm{v}. 
\end{align*} And the third term in~\eqref{E:to-be-bounded} is bounded by $$|a(u_\infty-u_k,v)| \lesssim \norm{\nabla (u_\infty-u_k)}\norm{\nabla v}.$$ Finally, the first term in~\eqref{E:to-be-bounded} can be bounded following the steps of the proof of the a posteriori upper bound, as follows: \begin{align*} |a(u_k,v-v_k)-\lambda_k b(u_k,v-v_k)|&=\left|\sum_{T\in\Tau_k} \int_T \AAA \nabla u_k\cdot \nabla (v-v_k) -\lambda_k \int_T \BB u_k(v-v_k)\right|\\ &=\left|\sum_{T\in\Tau_k} \int_T \big(-\nabla\cdot (\AAA \nabla u_k)-\lambda_k\BB u_k\big)(v-v_k) +\int_{\partial T} (v-v_k)\AAA\nabla u_k\cdot \overrightarrow{n}\right|\\ &=\left|\sum_{T\in\Tau_k} \int_T R(\lambda_k,u_k)(v-v_k) +\frac{1}{2}\int_{\partial T} (v-v_k)J(u_k)\right|, \end{align*} with $R(\lambda_k,u_k)$ and $J(u_k)$ as defined in~(\ref{E:element-residual}) and~(\ref{E:jump-residual}). Now, by H\"older and Cauchy-Schwarz inequalities we obtain \begin{align*} |a(u_k,v-v_k)-\lambda_k b(u_k,v-v_k)|&\leq\sum_{T\in\Tau_k}\normT{R(\lambda_k,u_k)}\normT{v-v_k}+\normbT{J(u_k)}\normbT{v-v_k}\\ &\lesssim\sum_{T\in\Tau_k}\normT{R(\lambda_k,u_k)}h_T\|\nabla v\|_{\omega_k(T)}+\normbT{J(u_k)}h_T^{1/2}\|\nabla v\|_{\omega_k(T)}\\ &\lesssim \left(\sum_{T\in\Tau_k}h_T^2\normT{R(\lambda_k,u_k)}^2+h_T\normbT{J(u_k)}^2\right)^{1/2}\norm{\nabla v}\\ &=\eta_k(\Tau_k)\norm{\nabla v}. \end{align*} Summarizing, we have that \[ |a(u_\infty,v)-\lambda_\infty b(u_\infty,v)| \lesssim \left((1+\lambda_0)\left\|u_k- u_\infty\right\|_{H^1(\Omega)}+|\lambda_k-\lambda_\infty|\norm{u_\infty}+ \eta_k(\Tau_k)\right) \left\|v\right\|_{H^1(\Omega)}. \] Using the convergence of $u_k$ to $u_\infty$ in $H^1(\Omega)$ and $\lambda_k$ to $\lambda_\infty$ in $\RR$ from Theorem~\ref{T:limiting function}, and the convergence of the global estimator to zero from Theorem~\ref{T:est_conv}, we conclude that $$|a(u_\infty,v)-\lambda_\infty b(u_\infty,v)|=0,$$ and the proof is completed. 
\end{proof} \section{Main result and concluding remarks}\label{S:main-result} We conclude this article by stating and proving our main result, which is a consequence of the results in the previous sections, and discussing its strengths and weaknesses. \begin{theorem}\label{T:cuasi main result} Let $\{(\lambda_k,u_k)\}_{k\in\NN_0}$ denote the whole sequence of discrete eigenpairs obtained through the adaptive loop stated in Section~\ref{S:adloop}. Then, there exists an eigenvalue $\lambda$ of the continuous problem~(\ref{E:cont-problem}) such that \begin{equation*} \lim_{k\rightarrow\infty} \lambda_k=\lambda \qquad \textrm{and}\qquad \lim_{k\rightarrow\infty} \dist_{H^1_0(\Omega)}(u_k,M(\lambda)) = 0. \end{equation*} \end{theorem} \begin{proof} By Theorem~\ref{T:limiting function}, taking $\lambda:= \lambda_\infty$, we have that $\lim_{k\rightarrow\infty} \lambda_k=\lambda,$ and by Theorem~\ref{T:eigenfunction}, $\lambda$ is an eigenvalue of the continuous problem~\eqref{E:cont-problem}. In order to prove that $\D \lim_{k\rightarrow\infty} \dist_{H^1_0(\Omega)}(u_k,M(\lambda)) = 0$ we argue by contradiction. If the result were not true, then there would exist a number $\epsilon>0$ and a subsequence $\{u_{k_m}\}_{m\in\NN_0}$ of $\{u_k\}_{k\in\NN_0}$ such that \begin{equation}\label{E:Absurd} \dist_{H^1_0(\Omega)}(u_{k_m},M(\lambda))>\epsilon,\qquad \forall~m\in\NN_0. \end{equation} By Remark~\ref{R:subsubsequence} it is possible to extract a subsequence of $\{u_{k_m}\}_{m\in\NN_0}$ which still converges to some function $\tilde u_\infty \in \VV_\infty$. By the arguments of Sections~\ref{S:convest} and~\ref{S:eigenfunction}, $\tilde u_\infty$ is an eigenfunction of the continuous problem~\eqref{E:cont-problem} corresponding to the same eigenvalue $\lambda$. That is, a subsequence of $\{u_{k_m}\}_{m\in\NN_0}$ converges to an eigenfunction in $M(\lambda)$, which contradicts~(\ref{E:Absurd}) and completes the proof.
\end{proof} \begin{remark} We have proved that the discrete eigenvalues converge to an eigenvalue of the continuous problem, and the discrete eigenfunctions converge to the set of the corresponding continuous eigenfunctions; this is the main result of this article. But there is still an open question: If $\lambda_k$ was chosen as the $j$-th eigenvalue of the discrete problem over $\Tau_k$, is it true that $\{\lambda_k\}_{k\in\NN_0}$ converges to the $j$-th eigenvalue of the continuous problem? The answer is affirmative for a large number of problems, but not necessarily for all. There could be some pathological cases in which, while seeking the $j$-th eigenvalue, the sequence converges to a larger one. \end{remark} We now state an assumption on problem~(\ref{E:cont-problem}) that we will prove to be sufficient to guarantee convergence to the desired eigenvalue/eigenfunction. More precise sufficient conditions on the problem data $\AAA$ and $\BB$ that guarantee that this assumption holds will be stated below. \begin{assumptions}[Non-Degeneracy Assumption]\label{A:non-deg} We will say that problem~(\ref{E:cont-problem}) satisfies the \emph{Non-Degeneracy Assumption} if whenever $u$ is an eigenfunction of~(\ref{E:cont-problem}), there is no nonempty open subset $\OO$ of $\Omega$ such that $u|_\OO\in\PP_\ell(\OO)$. \end{assumptions} \begin{theorem}\label{T:main-result} Let us suppose that the continuous problem~(\ref{E:cont-problem}) satisfies the Non-Degeneracy Assumption~\ref{A:non-deg}, let $\{(\lambda_k,u_k)\}_{k\in\NN_0}$ denote the whole sequence of discrete eigenpairs obtained through the adaptive loop stated in Section~\ref{S:adloop}, and let $\lambda$ denote the $j$-th eigenvalue of the continuous problem~(\ref{E:cont-problem}). Then, \begin{equation*} \lim_{k\rightarrow\infty} \lambda_k=\lambda \qquad \textrm{and}\qquad \lim_{k\rightarrow\infty} \dist_{H^1_0(\Omega)}(u_k,M(\lambda)) = 0.
\end{equation*} \end{theorem} Before embarking on the proof of this theorem, it is worth mentioning that the model case $\AAA \equiv I$ and $\BB \equiv 1$ satisfies Assumption~\ref{A:non-deg}, due to the fact that the eigenfunctions of the Laplacian are analytic. A weaker assumption on the coefficients $\AAA$ and $\BB$ that guarantees non-degeneracy of the problem is given in the following \begin{lemma} If $\AAA$ is continuous and piecewise $\PP_1$, and $\BB$ is piecewise constant, then problem~(\ref{E:cont-problem}) satisfies the Non-Degeneracy Assumption~\ref{A:non-deg}. \end{lemma} \begin{proof} We will argue by contradiction. Let us suppose that there exists an eigenfunction $u$ of~(\ref{E:cont-problem}) with corresponding eigenvalue $\lambda$, and a nonempty open subset $\OO$ of $\Omega$ such that $u|_\OO \in \PP_\ell(\OO)$. Without loss of generality, we may assume that $\AAA|_\OO \in \PP_1(\OO)$ and $\BB$ is constant over $\OO$. Then $$-\nabla \cdot (\AAA\nabla u)=\lambda \BB u, \qquad \textrm{in}~\OO.$$ Since $u|_\OO\in\PP_\ell(\OO)$, we have that $-\nabla \cdot (\AAA\nabla u)\in \PP_{\ell-1}(\OO)$, and the last equation implies that $u|_\OO\in \PP_{\ell-1}(\OO)$. Repeating this argument we finally obtain that $$u|_\OO\equiv 0,$$ which cannot be true. In fact, $u$ is a solution of a linear elliptic equation of second order with uniformly elliptic and Lipschitz leading coefficients, and therefore it cannot vanish in an open subset of $\Omega$ unless it vanishes over $\Omega$~\cite{Han}. \end{proof} \begin{comment} \begin{proof} We will argue by contradiction. Let us suppose that $\Omega^+$ is not empty, then there exists $T\in\Tau^+$, and thus there exists $k_0\in\NN_0$ such that $T\in\Tau_k$, for all $k\geq k_0$ and $u_k|_T \in\PP_\ell(T)$, for all $k\geq k_0$.
According to~(\ref{E:eta_k of T tends to zero}) we have $$\eta_k(T)^2= h_T^2\normT{-\nabla \cdot (\AAA\nabla u_k)-\lambda_k \BB u_k}^2 + h_T\normbT{J(u_k)}^2\longrightarrow 0.$$ From this equation and the facts that $u_k\rightarrow u_\infty$ in $H^1(T)$ and that $\lambda_k\rightarrow\lambda_\infty$ we have $$-\nabla \cdot (\AAA\nabla u_\infty)=\lambda_\infty \BB u_\infty, \qquad \textrm{on}~T.$$ Since $u_\infty|_T\in\PP_\ell(T)$, we have that $-\nabla \cdot (\AAA\nabla u_\infty)\in \PP_{\ell-1}(T)$, and the last equation implies that $u_\infty|_T\in \PP_{\ell-1}(T)$. Repeating this argument we finally obtain that $$u_\infty|_T\equiv 0,$$ which cannot be true. In fact, $u_\infty$ is a solution of a linear elliptic equation of second order with uniformly elliptic and Lipschitz leading coefficients and therefore, it cannot vanish on an element $T$ \cite{Han}. \end{proof} \end{comment} \begin{remark} Searching for other sufficient conditions on the coefficients to guarantee Assumption~\ref{A:non-deg} is beyond the scope of this article. We believe that in the assumptions of the previous lemma, $\AAA$ can be allowed to be piecewise continuous with discontinuities along Lipschitz interfaces. All that is needed is a proof of the fact that solutions to elliptic problems with such coefficients cannot vanish in an open subset of $\Omega$ unless they vanish over all of $\Omega$. We conjecture that this could be proved using Han's result~\cite{Han} in combination with Hopf's lemma~\cite{Gilbarg-Trudinger}, but this will be the subject of future work. \end{remark} We now proceed to prove Theorem~\ref{T:main-result}, which will be a consequence of the following lemma. \begin{lemma}\label{L:nondeg->h0} Let $\{ h_k \}_{k\in\NN_0}$ denote the sequence of meshsize functions obtained through the adaptive loop stated in Section~\ref{S:adloop}.
If the continuous problem~(\ref{E:cont-problem}) satisfies the \emph{Non-Degeneracy Assumption}~\ref{A:non-deg}, then $\| h_k \|_{L^\infty(\Omega)} \to 0$ as $k \to \infty$. \end{lemma} \begin{proof} We argue by contradiction. By Lemma~\ref{L:h0<=>omega+}, if $\| h_k \|_{L^\infty(\Omega)}$ does not tend to zero, then $\Omega^+$ is not empty; hence there exist $T\in\Tau^+$ and $k_0\in\NN_0$ such that $T\in\Tau_k$ for all $k\geq k_0$. Since $\|u_{k_m}-u_\infty\|_{L^2(T)}\rightarrow 0$ as $m\rightarrow \infty$ for the convergent subsequence of Theorem~\ref{T:limiting function}, and $u_{k_m}|_T \in\PP_\ell(T)$ for all $k_m\geq k_0$, using that $\PP_\ell(T)$ is a finite dimensional space we conclude that \begin{equation}\label{E:u_infty is a polynomial} u_\infty|_T\in\PP_\ell(T). \end{equation} Theorem~\ref{T:eigenfunction} states that $u_\infty$ is an eigenfunction of~(\ref{E:cont-problem}), and thus~(\ref{E:u_infty is a polynomial}) contradicts Assumption~\ref{A:non-deg}. \end{proof} \begin{remark} It is important to notice that the convergence of $h_k$ to zero is not an assumption, but a consequence of the fact that a subsequence converges to an eigenfunction $u_\infty$ and of the Non-Degeneracy Assumption~\ref{A:non-deg}. \end{remark} \begin{comment} \begin{proof} This is a trivial consequence of the fact that if $h_k \to 0$ pointwise, then this convergence also holds uniformly (see Lemma~\ref{L:hktendstozero}). The a priori result~\eqref{E:lambdak converges} implies the claim. \end{proof} \end{comment} \begin{proof}[Proof of Theorem~\ref{T:main-result}] In view of Theorem~\ref{T:cuasi main result} it remains to prove that $\lambda_k$ converges to the $j$-th eigenvalue of~(\ref{E:cont-problem}). By Lemma~\ref{L:nondeg->h0}, the result follows from~(\ref{E:lambdak converges}). \end{proof} We conclude the article with several remarks. \begin{remark} At first sight, the convergence of $\|h_k\|_{L^\infty(\Omega)}$ to zero looks like a very strong statement, especially in the context of adaptivity.
But the uniform convergence of the meshsize to zero should not be confused with quasi-uniformity of the sequence of triangulations $\{\Tau_k\}_{k\in\NN_0}$; the latter is not necessary for the former to hold. Thinking about this more carefully, we realize that if we wish to have (optimal) convergence of finite element functions to some given function in $H^1(\Omega)$, then $h_k$ \emph{must tend to zero everywhere} (pointwise) unless the target function is itself a polynomial of degree $\le \ell$ in an open region of $\Omega$. Lemma~\ref{L:hktendstozero} implies that the convergence of $h_k$ to zero is also uniform, and this does not necessarily destroy optimality~\cite{CKNS-quasi-opt,Stevenson,Garau-Morin-Zuppa-quasi-eig}. \end{remark} \begin{remark} A sufficient condition to guarantee that we converge to the desired eigenvalue is to assume that $h_k \to 0$ as $k\to\infty$. This condition is weaker than the Non-Degeneracy Assumption, but it is in general impossible to prove a priori. \end{remark} \begin{remark} Another option to guarantee convergence to the desired eigenvalue is to start with a sufficiently fine mesh. In view of the minimum-maximum principles, it is sufficient to start with a triangulation $\Tau_0$ that is fine enough to guarantee that $\lambda_{j,\Tau_0} < \lambda_{j_0}$, where $j_0 > j$ is the minimum index such that $\lambda_{j_0} > \lambda_{j}$. This condition is verifiable a posteriori if we have a method that computes approximations of the eigenvalues from below. Some ideas in this direction are presented in~\cite{Armentano-Duran}, where the effect of mass lumping on the computation of discrete eigenvalues is studied. \end{remark} \begin{comment} \begin{remark} In the case that $\AAA \equiv I$ and $\BB \equiv 1$, the problem~(\ref{E:cont-problem}) is the classical problem: \begin{equation*} \left\{ \begin{array}{l} \int_\Omega \nabla u \cdot \nabla v=\lambda \int_\Omega uv,\qquad \forall~v\in H^1_0(\Omega),\\ \norm{u}=1.
\end{array} \right. \end{equation*} For this problem we have the result of the Theorem~\ref{T:main-result}. \end{remark} \begin{remark} We know that the first eigenvalue of the problem~(\ref{E:cont-problem}) is simple. (Cf. \cite{Evans}, \cite{Gilbarg-Trudinger}). \end{remark} \end{comment} \bibliographystyle{amsalpha}
\section{Introduction} Vanadium oxide compounds exhibiting exotic transport phenomena are the subject of extensive interest. In particular, vanadium dioxide, VO$_2$, undergoes a first-order transition from a high-temperature metallic phase to a low-temperature insulating phase close to room temperature ($T=340\,$K) \cite{morin}. There are intensive efforts around the world to make devices such as switches, transistors, detectors, varistors, and phase-change memories exploiting the unique properties of VO$_2$. At low temperature VO$_2$ has a simple monoclinic structure with space group $P2_1/c$ ($M_1$~phase), while at high temperature it has a simple tetragonal rutile lattice with space group $P4_2/mnm$ ({\em R}-phase), as displayed in Fig.~\ref{fig:structure}. The lattice structures of the two phases are closely related, as emphasized in Fig.~\ref{fig:structure} by showing similar wedges of the different lattices: the $M_1$ unit cell is similar to the {\em R} unit cell when the latter is doubled along the rutile $c$-axis. For the sake of simplicity, we will use the notation $c$-axis for both the $M_1$ and {\em R} phases when referring to the axis equivalent to the rutile $c$-axis (in the $M_1$ phase this axis is sometimes called the $a$-axis). The $M_1$ phase is characterized by a dimerization of the vanadium atoms into pairs, as well as a tilting of these pairs with respect to the $c$-axis \cite{KL70,andresson}, as indicated by the bonds drawn between the paired V atoms in Fig.~\ref{fig:structure}b. \begin{figure}[!bt] \centering{ \includegraphics[height=0.5\linewidth]{fig1a.png} \includegraphics[height=0.5\linewidth]{fig1b.png} } \caption{(Color online) a): The high temperature metallic rutile structure. b): The low temperature insulating $M_1$ structure. {\em Large (red) spheres:} V atoms, {\em small (blue) spheres:} O atoms. The thin black box in the rutile structure emphasizes the similarity between the two phases.
{\em Thick black boxes:} the unstressed unit cells, {\em thick gray (orange) boxes:} the stressed unit cells. {\em Arrows:} the directions of the lattice vectors (note that the names of the axes differ from the usual notation \cite{E02}). The heights of the unstressed (stressed) unit cells are $c_0$ ($c=r c_0$) for the rutile and $\tilde{c}_0$ ($\tilde{c}=r \tilde{c}_0$) for the $M_1$ structure. Green lines connect the dimerized V atoms. The actual stress in theory and experiments is 10 times smaller than depicted in the figure. } \label{fig:structure} \end{figure} The electronic and transport properties are dramatically different for the two phases. The resistivity jumps by several orders of magnitude through the phase transition, and the crystal structure changes from the {\em R} phase at high temperature to the monoclinic $M_1$ phase at low temperature \cite{morin,allen}. While the rutile phase is a conductor, the $M_1$ phase is an insulator with a gap of $\sim0.6$~eV \cite{KHH+06} at the Fermi energy. VO$_2$ has also attracted a great deal of attention for its ultrafast optical response when switching between the {\em R} and the $M_1$ phases \cite{lysenko,cavalleri,Baum}. Despite the large number of experimental \cite{Baum,MH02,Maekawa,Qazilbash1,Qazilbash2,Qazilbash3,QBW+08,Eguchi,Ruzmetov,Braichovich,Dmitry,KHH+06} and theoretical \cite{BPL+05,Tomczak1,Tomczak2,Sakuma,Gatti,E02,RSA94,LCM06,LIB05} studies focusing on this material, the physics driving this phase transition and the resulting optical properties have still not been unambiguously identified. In theoretical works using the LDA \cite{E02,RSA94,allen} the formation of the gap was not found. The single-site DMFT approach~\cite{LCM06,LIB05} is known to correctly describe the metallic phase, but it cannot take into account the formation of the bonding states of the dimers in the insulating phase, which requires the cluster DMFT method \cite{BPL+05}.
As shown in Ref.~\cite{Tomczak2}, the fully interacting one-electron spectrum of the $M_1$ phase can be reproduced by an effective band structure description. The optical conductivities calculated using cluster DMFT show good agreement with experiment for both phases~\cite{TB09}. The metal-insulator transition (MIT) of VO$_2$ is usually attributed to two different physical mechanisms. One of them is the Peierls physics, i.e., the dimerization of the V atoms along the rutile $c$-axis \cite{g60} and the consequent opening of a gap in the Brillouin zone, which is reduced in the direction along the $c$-axis. In this case, the gap opening can be explained in the framework of effective one-electron theories. The other is the Mott mechanism, where the gap opens due to the strong Coulomb repulsion between the localized V-$3d$ orbitals \cite{ZM76} and the related dynamical effects. Understanding in detail the interplay and relative importance of both the Peierls and the Mott mechanisms for the electronic structure is crucial for controlling this material with an eye towards applications. For example, whether the driving force of this transition is electronic (i.e., occurring on femtosecond timescales) or structural (occurring on the picosecond timescale) is important for understanding the speed of the switching from the $M_1$ to the rutile phase. From the perspective of applications, in order to control the properties of vanadium dioxide, it is essential to identify the effects of compressive and tensile stress resulting from the various substrates on which the films are deposited \cite{MH02,Maekawa}. The experimental results of Ref.~\cite{MH02} showed that compressive uniaxial strain (along the $c$-axis) stabilizes the metallic phase. This result cannot be explained by applying a simple Peierls picture exclusively.
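The bonding--antibonding splitting invoked in the simple Peierls picture can be illustrated with a minimal single-orbital tight-binding sketch of one V--V pair (an illustration only; the on-site energy and the hopping values below are hypothetical placeholders, not the downfolded parameters of this work): the two dimer levels are $\epsilon\mp|t|$, so a compression that increases the intra-dimer hopping $|t|$ widens the splitting $2|t|$.

```python
import numpy as np

def dimer_levels(eps, t):
    """Eigenvalues of the 2x2 one-orbital V-V dimer Hamiltonian.

    Returns the (bonding, antibonding) levels eps - |t|, eps + |t|.
    """
    h = np.array([[eps, t],
                  [t, eps]], dtype=float)
    return np.linalg.eigvalsh(h)  # sorted ascending

# Hypothetical hoppings (eV): a shorter V-V bond means a larger |t|.
for t in (-0.30, -0.40):
    bonding, antibonding = dimer_levels(0.0, t)
    print(f"t = {t:+.2f} eV -> splitting = {antibonding - bonding:.2f} eV")
```

In a pure Peierls picture the insulating gap would simply track this splitting.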
The Peierls mechanism predicts that a compression along the $c$-axis increases the splitting between the bonding and antibonding states, formed by the combinations of particular $3d$ orbitals residing on the different V atoms in a dimer, and hence would promote the insulating state. The Peierls mechanism thus increases the tendency to open the gap in the $M_1$ phase, which would stabilize the insulating phase at the expense of the metallic phase. In this picture, the transition temperature would increase under uniaxial compressive strain, contrary to experiment~\cite{MH02}. The LDA calculations fail to reproduce the gap in the $M_1$ phase~\cite{E02}. This suggests that correlations are important in this material. Motivated by this and by the above-mentioned experiment, we have examined the influence of strain on the electronic structure of VO$_2$ microscopically, applying dynamical mean-field theory \cite{GKK+96} combined with the local density approximation of density functional theory (LDA+DMFT) \cite{KSH+06}. We carried out LDA+DMFT calculations of VO$_2$ under strain. We established that, in addition to the Peierls distortion, other factors, such as the position of the $e_g^\pi$ band and the bandwidth of the bonding state in the $M_1$ phase, which show a definite sensitivity to strain, play a defining role in driving the MIT in VO$_2$. Theoretical predictions for the strain dependence of many spectroscopic quantities are made, including photoemission, optical conductivity, inverse photoemission and XAS in the {\em R} and $M_1$ phases. The insights achieved in this study, together with the computational machinery developed, will serve as a basis for the rational material design of VO$_2$-based applications. \section{Method} The unit cell of the rutile structure contains two vanadium and four oxygen atoms, while the $M_1$ structure contains four vanadium atoms (two dimers) and eight oxygen atoms.
The calculations in the rutile phase were performed with doubled unit cells (four V and eight O atoms) to allow the formation of a bonding state between the V atoms separated along the $c$-axis, a mechanism which has a dramatic effect in the $M_1$ phase. The lattice structure parameters for the rutile (referred to as $a_0$, $b_0$ and $c_0$ in the following) and $M_1$ cases ($\tilde{a}_0$, $\tilde{b}_0$, $\tilde{c}_0$ and $\beta_0$) were published by McWhan {\it et al.} in Ref.~\cite{MMR+74} and by Kierkegaard and Longo in Ref.~\cite{KL70}, respectively. Note that our notation for the axes differs from the published data \cite{E02}, in order to emphasize the similarity of the {\em R} and $M_1$ structures. The strain was applied by changing the lattice constants but not the internal parameters of the atomic positions. The strain is characterized by the ratio between the lattice constant along the $c$-axis used in the calculation and the experimental one of the unstrained crystal: $r=c/c_0$ in the rutile and $r=\tilde{c}/\tilde{c}_0$ in the $M_1$ case. The applied strains correspond to the ratios $r=0.98$, $1.00$ and $1.02$ for both the {\em R} and $M_1$ phases, in order to make the calculations compatible with the experiment of Ref.~\cite{MH02}. The lattice parameters were changed under the constraints of preserving the volume of the original unit cell and of forming a structure that closely resembles the original one. This is illustrated in Fig.~\ref{fig:structure} for both phases. The constant-volume constraint in the rutile phase requires $a=a_0/\sqrt{r}$ and $b=b_0/\sqrt{r}$ for the other two lattice constants. For the $M_1$ structure, two other constraints were introduced: the two ratios $(\tilde{b}\sin\beta)/\tilde{a}=0.9988$ and $-2(\tilde{b}\sin\beta)/\tilde{c}=1.0096$ were kept constant.
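As a sanity check of the constant-volume construction for the $M_1$ cell, the resulting parametrization can be verified numerically. The sketch below encodes the strained lattice-constant formulas used in this work, takes the monoclinic cell volume as $V=\tilde{a}\,\tilde{b}\,\tilde{c}\sin\beta$, and uses illustrative (not the published) lattice constants:

```python
import math

def strained_m1_cell(a0, b0, c0, beta0, r):
    """Strained M1 lattice parameters for the strain ratio r = c/c0.

    Uses the parametrization c = r*c0, a = a0/sqrt(r),
    tan(beta) = tan(beta0)/r**1.5, and
    b = b0*sqrt(r**2*cos(beta0)**2 + sin(beta0)**2/r).
    """
    c = r * c0
    a = a0 / math.sqrt(r)
    beta = math.atan(math.tan(beta0) / r**1.5)
    if beta0 > math.pi / 2:   # keep beta on the same (obtuse) branch as beta0
        beta += math.pi
    b = b0 * math.sqrt(r**2 * math.cos(beta0)**2 + math.sin(beta0)**2 / r)
    return a, b, c, beta

def volume(a, b, c, beta):
    """Monoclinic cell volume V = a*b*c*sin(beta)."""
    return a * b * c * math.sin(beta)

# Illustrative lattice constants (angstroms) and monoclinic angle --
# placeholders, not the published structural data.
a0, b0, c0, beta0 = 4.52, 5.38, 5.75, math.radians(122.6)
V0 = volume(a0, b0, c0, beta0)
for r in (0.98, 1.00, 1.02):
    a, b, c, beta = strained_m1_cell(a0, b0, c0, beta0, r)
    print(f"r = {r:.2f}: V/V0 = {volume(a, b, c, beta) / V0:.6f}")
```

For each strain ratio the printed volume ratio equals $1.000000$: the parametrization preserves the cell volume exactly, since the components of the lattice vectors parallel to the $c$-axis scale with $r$ and the perpendicular ones with $1/\sqrt{r}$.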
Hence we used the following lattice constants and angles: $\tilde{c}=r\tilde{c}_0$, $\tilde{a}= \tilde{a}_0 /\sqrt{r}$, $\tan(\beta)= \tan(\beta_0) /r^{3/2} $ and $\tilde{b}=\tilde{b}_0\sqrt{r^2\cos^2(\beta_0)+\sin^2(\beta_0)/r}$. For the {\em R}-phase, we used a local coordinate system introduced by Eyert in Ref.~\onlinecite{E02}, as shown in Fig.~\ref{fig:orbitals}. In the rutile geometry the $x$-axis is parallel to the rutile $c$-axis, while the $z$-axis points along the $[110]$ direction, towards the adjacent oxygen atom in the vanadium plane. In the $M_1$ phase, our local coordinate system is somewhat tilted to align with the monoclinic geometry. The symmetry classification of the electronic orbitals is also adopted, namely $d_{3z^2-r^2}$ and $d_{xy}$ stand for the $e_g$ states and $d_{xz}$, $d_{yz}$ and $d_{x^2-y^2}$ for the $t_{2g}$ orbitals. The $t_{2g}$ manifold is divided further: the notation $a_{1g}$ is used for the $d_{x^2-y^2}$ state and $e^{\pi}_g$ for the $d_{xz}$ and $d_{yz}$ states. In this choice of the local coordinate system the dimerization strongly affects the $d_{x^2-y^2}$ states, which have a large electron density along the V-V bond. \begin{figure}[!bt] \begin{center}$ \begin{array}{ccc} \includegraphics[height=0.3\linewidth,clip=true, viewport=0pt 40pt 500pt 500pt]{fig2a.png}& \includegraphics[height=0.3\linewidth,clip=true, viewport=0pt 40pt 500pt 500pt]{fig2b.png}& \includegraphics[height=0.3\linewidth,clip=true, viewport=0pt 40pt 500pt 500pt]{fig2c.png}\\ d_{x^2-y^2}&d_{xz}&d_{yz} \end{array}$ \end{center} \caption{ (Color online) The sketch of vanadium $t_{2g}$ orbitals ($d_{x^2-y^2}$, $d_{xz}$, $d_{yz}$) together with the applied coordinate system and the V-V pairs considered in the rutile phase calculations. {\em Large (red) spheres:} V atoms, {\em small (blue) spheres:} O atoms.
} \label{fig:orbitals} \end{figure} First, a self-consistent LDA calculation was performed using the linear muffin-tin orbital method combined with the atomic sphere approximation (LMTO-ASA) \cite{lmto}. For a satisfactory description of the interstitial region, 48 empty spheres were added in the LDA-ASA calculations. In the next step we determined the {\em downfolded parameters} of the low-energy effective Hamiltonian, $H_{\mathrm{eff}}$, including only the $3d$ $t_{2g}$ subset of electronic states lying in the vicinity of the Fermi energy. The effective Hamiltonian is constructed as: \begin{equation} H_{\mathrm{eff}}= \sum\limits_{\sigma} \sum\limits_{\langle i,i'\rangle} \sum\limits_{ \alpha,\alpha'} \left( \epsilon_{i,\alpha,\sigma} \delta_{\alpha,\alpha'} \delta_{i,i'} + t_{i,\alpha,\sigma;i',\alpha',\sigma} \right) c_{i,\alpha,\sigma}^{\dagger} c_{i',\alpha',\sigma} \end{equation} with off-diagonal hopping ($t$) and diagonal one-electron energy ($\epsilon$) parameters belonging to different sites ($i$) and states ($\alpha =d_{xz}$, $d_{yz}$, $d_{x^2-y^2}$) with spin character $\sigma$. The applicability and accuracy of the above downfolding method is determined by the mutual positions and characters of the electronic bands. In VO$_2$, both in the $M_1$ and {\em R} phases, the bands in the proximity of the Fermi level have mainly $t_{2g}$ character and are well separated both from the low-lying $2p$ bands of the oxygen and from the $e_g$ bands of vanadium, due to the strong crystal-field splitting ($\sim 3.0-3.5$~eV \cite{E02}). The crystal-field splitting originates from the oxygen octahedra surrounding the individual V atoms. This mapping of the LDA band structure onto a tight-binding type Hamiltonian provides a reasonably good description of the LDA electronic structure and serves as a good starting point for the subsequent DMFT calculations.
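To illustrate the structure of such a downfolded Hamiltonian, the sketch below builds the two-site, three-orbital ($t_{2g}$) matrix of a single dimer for one spin channel and diagonalizes it; the on-site energies and the $e_g^\pi$ hoppings are placeholders, with only the $d_{x^2-y^2}$ intra-dimer hopping ($-0.61$~eV in the $M_1$ phase, quoted below) taken from the text:

```python
import numpy as np

# Minimal sketch of a downfolded dimer Hamiltonian: two V sites, three t2g
# orbitals each, one spin channel -> a 6x6 matrix. The on-site energies and
# the e_g^pi hoppings are placeholders, not the actual downfolded parameters.
def dimer_hamiltonian(eps, t):
    """eps: (3,) on-site energies; t: (3,3) intra-dimer hopping block."""
    h_site = np.diag(eps)
    return np.block([[h_site, t], [t.conj().T, h_site]])

eps = np.zeros(3)
t = np.diag([-0.61, -0.20, -0.20])   # x^2-y^2 from the text; xz, yz illustrative
H = dimer_hamiltonian(eps, t)
E = np.linalg.eigvalsh(H)            # bonding/anti-bonding pairs at -+|t_aa|
```

For a diagonal hopping block each orbital simply splits into a bonding/anti-bonding pair separated by $2|t_{\alpha\alpha}|$.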
The evolution of the intra-dimer V-V hopping matrix elements corresponding to the three different $t_{2g}$ orbitals induced by the strain is shown in Fig.~\ref{fig:hoppings}. Although the relative changes in the parameters are small, their manifestations in the low-energy spectral function, and therefore in the transport properties, are significant. \begin{figure}[!bt] \centering{ \includegraphics[width=0.49\linewidth]{fig3a.png} \includegraphics[width=0.49\linewidth]{fig3b.png} } \caption{ (Color online) The intra-dimer V-V hoppings corresponding to the $t_{2g}$ orbitals shown in Fig.~\ref{fig:orbitals} as a function of strain. The left (right) panel shows the parameters in the rutile ($M_1$) phase. The direction of strain is depicted in Fig.~\ref{fig:structure}. } \label{fig:hoppings} \end{figure} The effects of the electron-electron interaction beyond LDA are treated by means of the dynamical mean-field theory \cite{GKK+96,KSH+06}. The Coulomb interaction was taken into account for the V-V dimers by using the cluster extension of the DMFT \cite{KSH+06}. In this approach the quantities describing the electronic states of a cluster (Green's function, self-energy, {\it etc.}) are matrices in both the site and electronic state indices. Having two sites (two V atoms) and three states ($t_{2g}$ states), the resulting matrices have dimension $6\times6$ for non-magnetic calculations. For solving the impurity problem, we used the continuous time quantum Monte Carlo method (CTQMC)~\cite{H07} at the temperatures $T=232$~K and $T=390$~K, below and above the critical temperature, for the $M_1$ and {\em R} cases, respectively.
In the present calculations we assumed an on-site Coulomb interaction on the V atoms written as: \begin{equation} H_U= U \sum\limits_{i=1,2} \sum\limits_{ \alpha \alpha'} \sum\limits_{ \sigma \sigma'} c_{i,\alpha\sigma}^{\dagger} c_{i,\alpha\sigma} c_{i,\alpha'\sigma'}^{\dagger} c_{i,\alpha'\sigma'} (1-\delta_{\alpha\alpha'} \delta_{\sigma\sigma'}) \end{equation} which excludes the terms with identical orbital and spin indices. We fixed the parameter at $U=2.2$~eV to reproduce the experimentally measured gap in the $M_1$ phase, which shows a weak temperature dependence between 100 and 340~K, varying between $\sim0.75$ and $\sim0.6$~eV \cite{OFO01,KHH+06}. The strong dependence of the gap size on the parameter $U$ is shown in Fig.~\ref{fig:var_U}. \begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig4.png} } \caption{ (Color online) The variation of the $t_{2g}$ density of states of the $M_1$ phase as a function of the applied Coulomb repulsion parameter, $U$. Note the high sensitivity of the gap to $U$. } \label{fig:var_U} \end{figure} The linear $U$ dependence of the renormalization factor $Z$, calculated from the real part of the self-energy of the different orbitals ($\alpha$) as $Z_{\alpha}=(1-\partial \mathrm{Re}\Sigma_{\alpha} (\omega)/\partial \omega)^{-1}$, and the electronic specific heat, $\gamma=\sum\limits_{\alpha}\rho_{\alpha}(0)/Z_{\alpha}$, of the rutile phase are shown in Fig.~\ref{fig:Z_gamma}. The $Z$ value obtained at $U=2.2$~eV is in good agreement with the one published in Ref.~\cite{BPL+05}. An alternative way to determine the $Z$ factor is to take the experimentally measured {\em plasma frequency} in the rutile phase ($\omega_p^{exp}=2.75$~eV) \cite{QBW+08} and divide it by the band-theoretical LDA-LAPW value \cite{wien2k} ($\omega_p^{LDA}\approx4.1$~eV), i.e., $\omega_p^{exp}/\omega_p^{LDA}\approx0.67$, which agrees well with $Z\approx 0.62$ in the case of $U=2.2$~eV.
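Both $Z$ estimates above can be sketched in a few lines; the self-energy here is a synthetic linear stand-in with slope chosen to give $Z=0.62$, not actual CTQMC output:

```python
import numpy as np

def quasiparticle_Z(omega, re_sigma):
    """Z = (1 - dReSigma/domega)^(-1), slope from a linear fit near omega = 0."""
    slope = np.polyfit(omega, re_sigma, 1)[0]
    return 1.0 / (1.0 - slope)

# Synthetic Re Sigma(omega) whose low-frequency slope corresponds to Z = 0.62.
w = np.linspace(-0.1, 0.1, 21)
re_sigma = (1.0 - 1.0 / 0.62) * w
Z = quasiparticle_Z(w, re_sigma)

# Alternative estimate from the plasma-frequency ratio quoted in the text.
ratio = 2.75 / 4.1     # omega_p^exp / omega_p^LDA ~ 0.67
```

The two numbers agree at the few-percent level, as stated in the text.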
Although $U$ was chosen to reproduce the gap in the $M_1$ phase, it is satisfying that this value is also compatible with the alternative estimate, based exclusively on the optical and thermodynamic properties of the rutile phase. \begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig5.png} } \caption{ (Color online) Renormalization constants $Z_{\alpha}=(1-\partial \mathrm{Re}\Sigma_{\alpha}(\omega)/\partial \omega)^{-1}$ of the different orbitals in the {\em R} phase as a function of $U$. {\em Inset:} electronic specific heat, $\gamma=\sum\limits_{\alpha}\rho_{\alpha}(0)/Z_{\alpha}$, calculated at different temperatures and values of $U$. } \label{fig:Z_gamma} \end{figure} In order to achieve a structured (almost diagonal) self-energy matrix for the cluster, we used a basis of symmetric $(s)$ and anti-symmetric $(as)$ combinations of the states localized on the individual V atoms of the dimers, defined as \begin{equation} (c_{\alpha,\sigma }^{s(as)})^\dagger = \frac{1}{\sqrt{2}}(c_{1,\alpha,\sigma}^{\dagger} \pm c_{2,\alpha,\sigma}^{\dagger}) ,\quad (\alpha \in t_{2g}) \quad. \label{eq:b-ab} \end{equation} To obtain the physical properties at real energies we performed analytic continuation to the real axis using a recently developed method of expansion in terms of modified Gaussians and a polynomial fit at low frequencies, as described in detail in Ref.~\cite{HYK09}. \section{Results and Discussion} Fig.~\ref{fig:lda_dos} displays the LDA $t_{2g}$ densities of states (DOS) of the V atoms in both the rutile and $M_1$ phases for different $r$ ratios ($r=0.98$,~1.00,~1.02). The total band-width ($\sim2.6$~eV for both phases) and also the fine details of the DOS obtained by the LDA-LMTO agree well with previous studies \cite{E02,LCM+06}. The minor discrepancies are probably a consequence of the different electronic structure method or the slightly different geometry.
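As an aside on the cluster basis of Eq.~(\ref{eq:b-ab}): a minimal sketch (placeholder matrices, assuming a Hermitian intra-dimer hopping block) showing that the symmetric/anti-symmetric rotation block-diagonalizes the $6\times6$ cluster Hamiltonian into bonding and anti-bonding $3\times3$ blocks:

```python
import numpy as np

# The (s, as) rotation of Eq. (b-ab) as a 6x6 orthogonal matrix acting on
# (site-1 orbitals, site-2 orbitals). h and t are placeholder blocks.
I3 = np.eye(3)
U = np.block([[I3, I3], [I3, -I3]]) / np.sqrt(2.0)

h = np.diag([0.1, 0.2, 0.3])           # placeholder on-site block
t = np.diag([-0.61, -0.2, -0.2])       # placeholder intra-dimer hopping (Hermitian)
H = np.block([[h, t], [t.T, h]])

H_rot = U @ H @ U.T                    # upper block: h + t (bonding); lower: h - t
```

The same rotation makes the cluster self-energy nearly diagonal, which is why this basis is used for the impurity problem.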
The effect of the stress applied along the rutile {\it c}-axis on the electronic DOS can be clearly seen. The position of the $a_{1g}$ peak changes considerably, while the total band-width changes only slightly ($<0.1$~eV) under application of stress. Only the $d_{x^2-y^2}$ orbital changes significantly, while the two $e_{g}^\pi$ orbitals remain mostly unchanged. This is due to the sensitivity of the overlap integral of the $d_{x^2-y^2}$ states along the $c$-axis. It is clear from Fig.~\ref{fig:lda_dos} that the splitting of the $d_{x^2-y^2}$ peaks strongly increases with decreasing lattice parameter $c$, roughly following the linear dependence of the intra-dimer hopping parameter $t_{x^2-y^2,x^2-y^2}$ on the ratio $r$, shown in Fig.~\ref{fig:hoppings}. However, even under compressive stress of $r=0.98$, the splitting is not large enough to open a gap in the $M_1$ phase. The values of the intra-dimer hopping parameter corresponding to the $d_{x^2-y^2}$ states, which play a significant role in the formation of the electronic structure of VO$_2$, are $-0.30$~eV and $-0.61$~eV for the {\em R} and $M_1$ cases, respectively. These values are in good agreement with the published values of Ref.~\onlinecite{BPL+05}. One can observe that the splitting of the $d_{x^2-y^2}$ peaks is larger for the $M_1$ geometry ($\sim1.4$~eV) than for the rutile structure ($\sim0.76$~eV). This can be attributed to the reduced V-V distance along the $c$-axis due to the dimerization. The splitting of the bands can be roughly approximated by $2t_{x^2-y^2,x^2-y^2}$ for the different $r$ ratios. This behavior resembles the bonding and anti-bonding splitting of a dimer molecule, suggesting that the splitting of these states is determined mainly by the intra-dimer hopping, especially in the $M_1$ case. For the $e_g^\pi$ states, this correspondence is less clear, showing the importance of the inter-dimer hoppings.
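Plugging in the numbers above, the $2t$ molecular-dimer estimate reproduces the LDA splittings only roughly, consistent with the wording in the text (a back-of-the-envelope check, not a calculation):

```python
# 2|t| estimate of the d_{x^2-y^2} bonding/anti-bonding splitting versus the
# LDA peak splittings read off from the text.
t_R, t_M1 = -0.30, -0.61               # intra-dimer hoppings (eV)
split_est_R = 2 * abs(t_R)             # 0.60 eV vs ~0.76 eV observed
split_est_M1 = 2 * abs(t_M1)           # 1.22 eV vs ~1.4 eV observed
```

The residual $\sim0.2$~eV discrepancy in both phases reflects the neglected inter-dimer hoppings.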
It is worth noting that the LDA calculations can capture only the Peierls physics, which alone is not sufficient to explain the formation of the gap, even if a reduced V-V distance is considered. \begin{figure}[th!] \includegraphics[width=0.49\linewidth]{fig6a.png} \includegraphics[width=0.49\linewidth]{fig6b.png} \caption{ (Color online) LDA partial DOS of the $t_{2g}$ states of a V atom in the rutile (left) and $M_1$ (right) phases in the proximity of the Fermi energy with different $r$ ratios: $r=0.98$ (dotted, red), 1.00 (full, black) and 1.02 (dashed, green). Note the large splitting of the $d_{x^2-y^2}$ states and the barely changed DOS at the Fermi level. } \label{fig:lda_dos} \end{figure} To trace the effects of the electron-electron interactions, {\it e.g.}, the appearance of the gap in the $M_1$ phase and the reduction of the band-width, we carried out LDA+DMFT calculations. Our theoretically calculated {\it orbitally resolved} spectral functions can be directly compared to the measured angle integrated photoemission (PES) and x-ray absorption spectroscopy (XAS) spectra. In the last few years a large number of experimental studies of this kind were carried out, probing also the many-body character of the occupied and the unoccupied states in VO$_2$ and serving as a stringent test of the theoretical approach. This validation of our LDA+DMFT results is crucial before proceeding to make reliable predictions for the strained materials, for which such spectroscopic information is not yet available. \begin{figure}[!bt] \centering{ \includegraphics[width=0.49\linewidth]{fig7a.png} \includegraphics[width=0.49\linewidth]{fig7b.png} } \caption{ (Color online) Comparison of the V-$3d$, O-$2p$ and total DOS of the LDA+DMFT calculation in the rutile (left) and $M_1$ (right) phases ($r=1.00$) with angle integrated photoemission (PES) and x-ray absorption spectroscopy (XAS) measurements.
The experimental results are reproduced from Ref.~\cite{KHH+06} and the XAS results were shifted to obtain the best agreement with our theoretical results.} \label{fig:total} \end{figure} The upfolded density of states of the LDA+DMFT, which besides the V-$3d$-$t_{2g}$ states includes the O-$2p$ and the V-$3d$-$e_g$ states, is shown in Fig.~\ref{fig:total}. The dynamical correlation effects, not included in LDA, reduce the width of the quasiparticle $t_{2g}$ states by a factor of $\sim 0.6$, in agreement with our calculated quasiparticle renormalization amplitude $Z\approx 0.62$. In the rutile structure two Hubbard bands appear: a very weak lower Hubbard band around $-1.0$~eV (previously reported in theoretical \cite{BPL+05} and experimental \cite{KHH+06} studies) and a stronger upper Hubbard band at 2.0~eV, which overlaps with the $e_g$ band (better seen in Fig.~\ref{fig:resolved}, where the $e_g$ band is not shown). In the $M_1$ phase, the most prominent effect of the DMFT treatment is the appearance of a gap of $\sim 0.7$~eV at the chemical potential. For both the rutile and $M_1$ phases the positions of the O-$2p$ and the V-$3d$-$e_g$ states show good agreement with the experiments. Our calculated $Z$ factors for the $M_1$ phase are $\sim 0.9$ for the $e_{g}^\pi$ states and $\sim 0.7$ for the $a_{1g}$ state, confirming that the $M_1$ phase is ``less correlated'' than the rutile phase, as emphasized in Ref.~\onlinecite{Tomczak2}. \begin{figure}[!bt] \centering{ $\begin{array}{cc} \includegraphics[height=0.3\linewidth]{fig8a1.png}& \includegraphics[height=0.3\linewidth]{fig8b1.png}\\ \includegraphics[width=0.49\linewidth]{fig8a2.png}& \includegraphics[width=0.49\linewidth]{fig8b2.png} \end{array}$\\ } \caption{ (Color online) {\em Upper panel:} Brillouin zones and high symmetry points of the rutile $(a)$ and the $M_1$ $(b)$ structures. {\em Dashed arrows:} the lattice vectors, {\em thick arrows:} the reciprocal lattice vectors, {\em dashed lines:} guide to the eye.
{\em Lower panel:} The momentum resolved spectral function $A(\vec{k},\omega)$ along the high symmetry paths (shown in the Brillouin zones by blue lines) of the {\em R} (left) and $M_1$ (right) phases. For rutile the Brillouin zone corresponds to the doubled unit cell, as indicated by the lattice vectors. The full lines show the LDA bands, and the color coding shows the LDA+DMFT spectra. $\omega$ is in units of eV. } \label{Akw} \end{figure} Fig.~\ref{Akw} shows the momentum resolved spectra in the {\em R} and $M_1$ phases. Notice the downshift of the two bonding bands in the $M_1$ phase, primarily of $d_{x^2-y^2}$ character (there are four V atoms per unit cell, and hence two bonding $d_{x^2-y^2}$ bands). The states above the Fermi level move slightly up and shrink due to the many-body renormalization, similarly to the $t_{2g}$ states in the {\em R} phase. The bands around $1\,$eV also acquire a substantial lifetime broadening in both phases. The oxygen bands below $-1.5\,$eV and the $e_g$ bands above $2\,$eV are almost unchanged compared to LDA. The results for the orbitally resolved $3d$-$t_{2g}$ spectral functions are shown in Fig.~\ref{fig:resolved}. In the rutile phase one can see that the three $t_{2g}$ states are approximately equally occupied, predicting isotropic transport properties. For the $a_{1g}$ state the bonding anti-bonding structure can also be recognized, similarly to the $M_1$ phase, with a splitting of $\sim0.4$~eV, which is a factor of 2 smaller than in the LDA results. The calculated positions of the $t_{2g}$ states agree well with the XAS results. In the $M_1$ phase, the weight redistribution is very different: only a single state, namely the bonding $a_{1g}$ orbital, is occupied. One can observe that the bonding $a_{1g}$ state is shifted down by $\sim0.8$~eV, which is in good agreement with the previous theoretical results \cite{BPL+05}.
The spectral function at the upper edge of the gap has predominantly $e_g^\pi$ character, but there is also some weight of anti-bonding $a_{1g}$ character, in agreement with experiment, which shows that the spectral density above the gap does not have purely $e_{g}^\pi$ character \cite{KHH+06}. The first two peaks at $\sim0.6$~eV above the chemical potential are attributed mainly to the $e_g^\pi$ states, and the third one is due to the $a_{1g}$ state. This is consistent with the recent results of polarization dependent O K XAS experiments by Koethe {\em et al.} \cite{KHH+06}, where the orbital character of the states can be deduced by changing the polarization of the x-ray from parallel to the $c$-axis (O K XAS $\parallel$) to perpendicular polarization (O K XAS $\perp$). The anti-bonding $a_{1g}$ state lies at 1.3~eV above the Fermi level. This peak did not appear in the previous theoretical study of Ref.~\onlinecite{BPL+05}, but agrees well with current XAS results, which show a prominent $a_{1g}$ peak $\sim1.0$~eV above the $e_g^\pi$ peak \cite{KHH+06,AGF+91,SST+90}. It is interesting to note that the position of the anti-bonding $a_{1g}$ state is roughly the same in both the DMFT results and the pure LDA results. The calculated separation between the bonding and anti-bonding peaks of the $a_{1g}$ state is $\sim 2.1$~eV, which agrees reasonably with the experimentally found value ($2.5-2.8$~eV) \cite{KHH+06}. Finally, the peak around $3\,$eV of the XAS spectra may be assigned to the contribution from the $e_g$ states, included in Fig.~\ref{fig:total} but excluded in Fig.~\ref{fig:resolved}. This is consistent with the experimental finding of Ref.~\cite{KHH+06}, where a negligible change of the peak weight was observed across the MIT, but a strong sensitivity to the polarization was noticed. \begin{figure}[th!]
\includegraphics[width=0.4\linewidth]{fig9a.png} \includegraphics[width=0.4\linewidth]{fig9b.png} \caption{ (Color online) Comparison of the orbitally resolved $t_{2g}$ DOS of the LDA+DMFT calculation corresponding to a V atom in the rutile (left) and $M_1$ (right) phases ($r=1.00$) with angle integrated photoemission (PES) and x-ray absorption spectroscopy (XAS) measurements. The experimental results are reproduced from Ref.~\cite{KHH+06}. The XAS result is shifted to obtain the best agreement with our theoretical results. } \label{fig:resolved} \end{figure} To elucidate the effect of the strain along the $c$-axis, the spectral functions of the $t_{2g}$ orbitals for different $r$ ratios ($r=0.98$, 1.00 and 1.02) are shown in Fig.~\ref{fig:dos_strain} for both phases. In the rutile phase the $e_{g}^\pi$ states are hardly affected by the strain. In contrast, the bonding anti-bonding splitting of the $a_{1g}$ states shows a strong sensitivity to strain. The width of the upper $a_{1g}$ peak at $\sim0.5$~eV decreases with increasing $r$, which is due to the weaker hybridization between the V atoms, as indicated by the decreasing hopping integrals. Surprisingly, the spectral weight at the chemical potential is practically unaffected by the changes of the lattice constant $c$. A similar calculation using cluster DMFT was carried out for the rutile phase, allowing for the formation of split bonding-anti-bonding pairs along the $c$-axis. We did not find any appreciable sign of the development of the bonding-antibonding splitting even for the $r=0.98$ case, confirming that the single-site DMFT is quite accurate in the rutile phase. Fig.~\ref{fig:dos_strain} clearly shows that the width of the bonding $a_{1g}$ peak increases with decreasing $r$ ratio, which can be attributed to the increase in the inter-dimer hoppings. For all ratios $r$, the gap in orbital space is indirect, i.e., the valence band is of $a_{1g}$ character and the conduction band of $e_{g}^\pi$ character.
This is in agreement with the experimental findings of Ref.~\onlinecite{KHH+06}, demonstrating that the Peierls physics plays a secondary role in the gap opening in the $M_1$ phase. Due to the decreasing length of the $c$-axis, the $e_{g}^\pi$ states are \textit{shifted to} slightly \textit{lower energy}, which, together with the broadening of the bonding $a_{1g}$ peak, results in the \textit{contraction} of the gap, despite the increase in the bonding anti-bonding splitting of the $a_{1g}$ peaks. The decrease of the gap size due to the decreasing $c$-axis length is more apparent in the inset of Fig.~\ref{fig:optics}, which shows the total $t_{2g}$ DOS. This result is supported by the experiment of Muraoka and Hiroi \cite{MH02} demonstrating that a decrease in the lattice parameter $c$ leads to a decrease in the metal-insulator transition temperature. This is a clear indication that a smaller $c$-axis length leads to a weakened stability of the insulating $M_1$ phase, and consequently to a smaller gap in the $M_1$ phase. This behavior is not expected for a Peierls-type gap, which would increase as the lattice parameter decreases along the dimerized chains. \begin{figure}[th!] \includegraphics[width=0.4\linewidth]{fig10a.png} \includegraphics[width=0.4\linewidth]{fig10b.png} \caption{ (Color online) Orbitally resolved V-$3d$-$t_{2g}$ DOS of a V atom in the rutile (left) and $M_1$ (right) phases in case of $r=0.98$ (dotted, red line), 1.00 (full, black line) and 1.02 (dashed, green line). } \label{fig:dos_strain} \end{figure} The evolution of the gap is also reflected in the gap of the optical conductivity \cite{millis04}. In Fig.~\ref{fig:optics} the real part of the average optical conductivity $\sigma_{av}=\frac{1}{3}(\sigma_{\parallel}+ 2\sigma_{\perp})$ is shown, where $\sigma_{\parallel(\perp)}$ is the optical conductivity in the case where the polarization of the incident light is parallel (perpendicular) to the $c$-axis.
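The orientation average used here is a simple weighted mean; a one-line helper (the example values are arbitrary):

```python
import numpy as np

def sigma_average(sigma_par, sigma_perp):
    """sigma_av = (sigma_par + 2*sigma_perp)/3, as defined in the text."""
    return (sigma_par + 2.0 * sigma_perp) / 3.0

# Works equally for scalars or spectra sampled on a frequency grid.
sigma_av = sigma_average(np.array([3.0, 1.0]), np.array([0.0, 1.0]))
```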
The calculated optical gap and the intensity of the first peak corresponding to the $t_{2g}-t_{2g}$ excitations compare well with the experimental results for polycrystalline VO$_2$ films \cite{QBW+08}. The shoulder around $\sim 2.5$~eV in the experimental optical conductivity, which is primarily due to the inter-band transitions, is shifted slightly upwards ($\sim 0.5$~eV) in the theoretical result. This is an indication that the applied downfolding method describes the low-energy properties well, but the higher-energy interband excitations less accurately. \begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig11.png} } \caption{ (Color online) Real part of the averaged optical conductivity $\sigma_{av}=\frac{1}{3}(\sigma_{\parallel}+ 2\sigma_{\perp})$ of the $M_1$ phase with different $r$ ratios. Inset: total $t_{2g}$ density of states of the $M_1$ phase. The experimental result is taken from Ref.~\cite{QBW+08}. } \label{fig:optics} \end{figure} The fact that the gap in the density of states arises between orbitals of different symmetry indicates that the anisotropy of the transport properties in this phase will be very sensitive to disorder and grain boundaries, which can drastically alter the orientation of these orbitals, changing the matrix elements for hopping and conductivity. Fig.~\ref{fig:optics_anis} shows the calculated and experimentally measured optical conductivity for differently polarized light. It can be seen that the trends of the dependence of the optical conductivity on the polarization are in good agreement with the experimental results, although the values are slightly different. When the polarization is perpendicular to the $c$-axis, the optical response is practically unaffected by the strain. In the case of parallel polarization the optical conductivity is strongly modified by changing the lattice parameter $\tilde{c}$, especially at frequencies between 1.5 and 2.5~eV.
This region can be attributed to the $d_{x^2-y^2}-d_{x^2-y^2}$ excitations, as can be concluded from the positions of the bonding and antibonding peaks in Fig.~\ref{fig:dos_strain}. These results strongly indicate that the anisotropy of the transport properties is due to the directed V-V bonds along the $c$-axis. \begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig12.png} } \caption{ (Color online) Real part of the optical conductivity corresponding to light polarized parallel ($\sigma_{\parallel}$) and perpendicular ($\sigma_{\perp})$ to the $c$-axis in the $M_1$ phase with different $r$ ratios. The experimental results are taken from Refs.~\cite{QBW+08} (Exp. a) and \cite{VBB68,TB09} (Exp. b). } \label{fig:optics_anis} \end{figure} In Fig.~\ref{fig:imsig_optics}, the orientation-averaged optical conductivity of the {\em R} phase and the imaginary part of the self-energy at the Matsubara frequencies close to zero are shown. While the areas below the calculated and experimentally measured \cite{QBW+08} optical conductivity (plasma frequency) agree fairly well, the widths of the two Drude peaks differ. In order to improve the agreement with the experimental results \cite{QBW+08}, an imaginary part of 0.55~eV (scattering rate) was added to the self-energy for the low-frequency part of the optical conductivity, to simulate the experimentally measured broadening of the Drude peak. Inspecting the inset of Fig.~\ref{fig:imsig_optics}, one can see that even by employing larger $U$ values, the scattering rate ($\mathrm{Im}\Sigma(\omega \rightarrow 0 )$) is not large enough to reproduce the experimental results, and the calculated optical conductivity will not show {\em bad metal} behaviour at this temperature. From this result one can draw the conclusion that the experimentally measured large scattering rate is a consequence of an inhomogeneity of the system, as indicated by recent experiments \cite{Qazilbash3,CES+09}.
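The role of the added 0.55~eV scattering rate can be illustrated with a toy Drude form: a larger $\Gamma$ broadens the peak while the integrated weight (fixed by the plasma frequency) stays essentially unchanged. A schematic sketch, not the actual optics code:

```python
import numpy as np

def drude(omega, weight, gamma):
    """Lorentzian Drude peak: area ~ weight, half-width gamma."""
    return (weight / np.pi) * gamma / (omega**2 + gamma**2)

w = np.linspace(-20.0, 20.0, 200001)
dw = w[1] - w[0]
narrow = drude(w, 1.0, 0.05)   # small intrinsic scattering rate
broad = drude(w, 1.0, 0.55)    # with the added 0.55 eV scattering rate
area_narrow = narrow.sum() * dw
area_broad = broad.sum() * dw
```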
\begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig13.png} } \caption{ (Color online) Real part of the averaged optical conductivity $\sigma_{av}=\frac{1}{3}(\sigma_{\parallel}+ 2\sigma_{\perp})$ of the {\em R} phase. The experimental result is taken from Ref.~\cite{QBW+08}. {\em Inset:} Imaginary part of the self-energy corresponding to the $t_{2g}$ orbitals in the bonding anti-bonding basis at the imaginary Matsubara frequencies close to the real axis. } \label{fig:imsig_optics} \end{figure} \section{Phase diagram and limits of downfolding} \begin{figure}[!bt] \centering{ \includegraphics[width=0.5\linewidth]{fig14.png} } \caption{ (Color online) Sketch of the phase diagram of the rutile phase as a function of the temperature and the applied Coulomb repulsion $U$, based on the calculated points (dots). In the green region (M) only the metallic solution (shown in the left inset in terms of the Matsubara Green's function) is stable, while in the blue region only the insulating solution (right inset) can be found. In the red coexistence region (C) both solutions are stable. } \label{fig:phase} \end{figure} In Fig.~\ref{fig:phase}, a sketch of the phase diagram of the rutile structure, based on points calculated within the cluster DMFT, is displayed as a function of the temperature and the Coulomb repulsion parameter $U$. Three different regions are distinguished: the metallic (M), the insulating (I) and the coexistence (C) regions. To identify them, the imaginary part of the local Matsubara Green's function, $\mathrm{Im}\mathcal{G}(\omega_n)$, of the cluster was investigated. We consider a solution insulating when $\mathrm{Im}\mathcal{G}(\omega_n)$ converges to 0 at low imaginary frequencies for all orbitals, as shown in the example in the right inset of Fig.~\ref{fig:phase}.
The metallic solutions are those where $\mathrm{Im}\mathcal{G}(\omega_n)$ tends to a finite value for at least one of the orbitals (see the left inset in Fig.~\ref{fig:phase}). In the metallic region only the metallic solution is stable. In the insulating region only the insulating solution is stable, and in the coexistence region both are stable. In order to decide whether a solution is stable or not, the DMFT calculations were started from an ansatz of a specified type (metal or insulator); if the solution remains of the same type once self-consistency is reached, it can be regarded as a stable mean-field solution. In Fig.~\ref{fig:phase} it can be observed that below $U\approx2.9$~eV only the metallic solution is stable, while above $\sim3.8$~eV only the insulating one is stable. The nature of the insulating state in the rutile phase is very different from that in the $M_1$ phase. The lower Hubbard band in the rutile insulator is an almost equal mixture of all three $t_{2g}$ orbitals. However, the interaction strength needed to open a true Mott gap without the help of the Peierls mechanism is considerably larger. In the coexistence region, one can expect a crossover between the two phases governed by the free energy of the system \cite{PHK08}. There is a strong experimental indication that the rutile phase resides in the vicinity (but on the metallic side) of this crossover. Pouget and Launois showed that the metallic character of the rutile phase is very sensitive to substitutional alloying of VO$_2$ with Nb (V$_{1-x}$Nb$_{x}$O$_2$), which increases the $c/a$ ratio and results in the appearance of a gap, while keeping the rutile structure, at $x=0.2$ \cite{JL76}. Recently, Holman {\it et~al.} \cite{HMW+09} reported an insulator-to-metal transition in the V$_{1-x}$Mo$_x$O$_2$ system at $x\approx0.2$. All these experiments suggest that VO$_2$ is in the crossover region near the coexistence of the two solutions in the cluster DMFT phase diagram.
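The classification rule described above can be sketched as a small helper; the Matsubara data here are synthetic linear stand-ins (a real application would use the CTQMC $\mathrm{Im}\,\mathcal{G}(\omega_n)$, and the tolerance is a free choice):

```python
import numpy as np

# A solution is called insulating when Im G(i*omega_n) extrapolates to zero
# for every orbital at low Matsubara frequency, metallic when it stays finite
# for at least one orbital.
def classify(omega_n, im_g, tol=0.05):
    """omega_n: (n,) lowest Matsubara frequencies; im_g: (orbitals, n) Im G."""
    intercepts = [np.polyfit(omega_n, g, 1)[1] for g in im_g]
    return "insulator" if all(abs(c) < tol for c in intercepts) else "metal"

wn = np.array([0.1, 0.3, 0.5, 0.7])                        # illustrative (eV)
metallic = np.array([-1.0 - 0.2 * wn, -0.8 - 0.1 * wn])    # finite intercepts
insulating = np.array([-2.0 * wn, -1.5 * wn])              # Im G -> 0
```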
On the basis of our calculations we conjecture that the rutile phase might be able to support either a metallic or an insulating solution (with a very small gap), and hence either of the two phases can be stabilized by small external stimuli. On the other hand, the $M_1$ phase at the same interaction strength supports only an insulating solution, and it is unlikely that a small external perturbation can turn it into a metallic state. One possibility to improve the agreement between the experimental and theoretical results is to use different parameters in the downfolded model. A second possibility is that the material is strongly inhomogeneous, which has not been taken into account so far in any theoretical calculation. Finally, it is most likely that a calculation without invoking the downfolding approximation will result in a more accurate description of this material. This would place VO$_2$ closer to the Mott charge-transfer insulator boundary in the Zaanen-Sawatzky-Allen phase diagram. Work in this direction is in progress. \section{Conclusions} Our exploratory theoretical research set up the machinery for describing the subtle interplay of Coulomb correlations, orbital degeneracy and strain in determining the mechanism of the MIT in VO$_2$. Our theory, coupled with existing strain experiments, clearly shows that the Peierls distortion is only one element affecting the MIT and the switching mechanism of this material. The LDA+DMFT calculations for the unstrained material are in good agreement with experiments. We performed the first LDA+DMFT studies of the electronic structure of VO$_2$ under strain. Besides the Peierls increase in the $a_{1g}$ bonding-antibonding splitting, the lowering in energy of the $e^\pi_g$ orbital and the rapid change in the bandwidth of the $a_{1g}$ orbital due to the varying overlaps play an equally important role in controlling the position of the MIT.
These theoretical insights can be used for understanding and improving material properties by means of chemical substitutions. For a more accurate description it is mandatory to take into account the oxygen degrees of freedom and to perform total-energy calculations in the regions suggested by this exploratory work. \section{Acknowledgments} This research has been supported by Grants No. DARPA W911NF-08-1-0203, NSF-DMR 0806937, and OTKA F68726. We are grateful to Jan Tomczak for stimulating discussions.
\section{Preface} Nucleons are known to carry intense pion clouds \cite{tony}, which can be employed to study the pion properties. In particular, leading neutron production was measured in deep-inelastic scattering (DIS) at HERA \cite{zeus,h1}, aiming at the extraction of the pion structure function at small Bjorken $x$ from the data. In proton-proton collisions leading neutron production also offers access to the pion-proton total cross section, which so far has been measured directly only with pion beams within a restricted energy range in fixed-target experiments. With modern high-energy colliders the energy range for pion-nucleon collisions can be considerably extended. Measurements with polarized proton beams supply more detailed information about the interaction dynamics. Eventually, one can employ the unique opportunity to study pion-pion interactions in double-leading-neutron production in $pp$ collisions. These processes, the data and the theoretical developments are briefly overviewed below. \section{Leading neutrons in DIS} Fig.~\ref{fig:pion-pole} (left) illustrates how the pion structure function can be measured in the reaction $\gamma^*p\to Xn$. \begin{figure}[htb] \centerline{% \includegraphics[width=5.5cm]{pion-pole.eps} \hspace{10mm} \includegraphics[width=5.5cm]{4q.eps}} \caption{{\it Left:} graphical representation of the pion pole contribution to $\gamma^*p\to Xn$.
{\it Right:} absorption due to interaction of the debris from the $\gamma^*\pi$ inelastic collision.} \label{fig:pion-pole} \end{figure} The amplitude of this process in the Born approximation (no absorption corrections) has the form \cite{kpps-dis}, \beq A^B_{p\to n}(\vec q,z)= \bar\xi_n\left[\sigma_3\, q_L+ \frac{1}{\sqrt{z}}\, \vec\sigma\cdot\vec q_T\right]\xi_p\, \phi^B(q_T,z)\,, \label{100} \eeq where $\vec\sigma$ are Pauli matrices; $\xi_{p,n}$ are the proton and neutron spinors; $\vec q_T$ is the transverse momentum transfer; $q_L=(1-z)m_N/\sqrt{z}$; and $z$ is the fractional light-cone momentum of the initial proton carried by the final neutron. At small $1-z\ll1$ the pseudo-scalar amplitude $\phi^B(q_T,z)$ has the triple-Regge form \cite{kpp}, \beq \phi^B(q_T,z)=\frac{\alpha_\pi^\prime}{8}\, G_{\pi^+pn}(t)\,\eta_\pi(t)\, (1-z)^{-\alpha_\pi(t)} A_{\gamma^*\pi\to X}(M_X^2)\,, \label{120} \eeq where $M_X^2=(1-z)s$; the 4-momentum transfer squared has the form $t=-q_L^2- q_T^2/z$; and $\eta_\pi(t)$ is the phase (signature) factor, which is nearly real in the vicinity of the pion pole. The effective vertex function is $G_{\pi^+pn}(t)=g_{\pi^+pn}\exp(R_1^2t)$, where $g^2_{\pi^+pn}/8\pi=13.85$. The value of the slope parameter $R_1$ is small \cite{kpp,kpps-dis} and is dropped for clarity in what follows. Correspondingly, the fractional differential cross section of inclusive neutron production in the Born approximation reads, \beq \frac{1}{\sigma_{inc}}\,\frac{d\sigma^B_{p\to n}}{dz\,dq_T^2}= \left(\frac{\alpha_\pi^\prime}{8}\right)^2 \frac{|t|}{z}\,g_{\pi^+pn}^2\left|\eta_\pi(t)\right|^2 (1-z)^{1-2\alpha_\pi(t)}\, \frac{F_2^{\pi}(x_\pi,Q^2)}{F_2^p(x,Q^2)}\,, \label{155n} \eeq where $x_\pi=x/(1-z)$ and $\alpha_\pi^\prime=0.9\,\mbox{GeV}^{-2}$ is the pion Regge trajectory slope. 
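As a minimal numerical sketch of Eq.~(\ref{155n}), the kinematic prefactor multiplying the structure-function ratio can be evaluated directly; the sketch assumes a linear pion trajectory $\alpha_\pi(t)=\alpha_\pi^\prime(t-m_\pi^2)$, sets $\eta_\pi\approx1$ near the pole, and drops the slope $R_1$, as in the text:

```python
import numpy as np

M_N = 0.938        # nucleon mass, GeV
M_PI = 0.1396      # pion mass, GeV
ALPHA_PRIME = 0.9  # pion Regge trajectory slope, GeV^-2
G2 = 13.85 * 8 * np.pi  # g^2_{pi+ p n}, from g^2 / (8 pi) = 13.85

def born_prefactor(z, qT):
    """Kinematic factor of Eq. (155n): everything except the
    structure-function ratio F_2^pi / F_2^p."""
    qL = (1.0 - z) * M_N / np.sqrt(z)        # longitudinal momentum transfer
    t = -qL**2 - qT**2 / z                   # 4-momentum transfer squared
    alpha_pi = ALPHA_PRIME * (t - M_PI**2)   # linear trajectory, alpha(m_pi^2) = 0
    return ((ALPHA_PRIME / 8.0)**2 * (abs(t) / z) * G2
            * (1.0 - z)**(1.0 - 2.0 * alpha_pi))
```

The factor vanishes as $|t|\to 0$, reflecting the spin structure of the $\pi NN$ vertex, and the Regge exponent suppresses large $|t|$ at fixed $z$.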
The Born approximation, Eqs.~(\ref{120})-(\ref{155n}), is subject to strong absorption effects related to initial and final state interactions of the debris of the $\gamma^*\pi$ inelastic collision, which can be represented as two color-octet $\bar qq$ pairs, as illustrated in Fig.~\ref{fig:pion-pole} (right). At high energies and large $z$ such a dipole should be treated as a 4-quark Fock component of the projectile photon, $\gamma^*\to \{\bar qq\}_8-\{\bar qq\}_8$, which interacts with the target proton via $\pi^+$ exchange. This 4-quark state may also experience initial and final state interactions via vacuum quantum number (Pomeron) exchange with the nucleons (ladder-like strips in Fig.~\ref{fig:pion-pole} (right)). The absorption factor $S_{4q}(b)$ is naturally calculated in the impact parameter representation, relying on the well known parametrizations of the dipole cross section measured at HERA. The amplitude and the absorption factor factorize in impact parameters; one then performs an inverse Fourier transformation back to the momentum representation. The details of this procedure can be found in \cite{kpps-dis}. The results of the calculations are compared with data \cite{zeus} on the $Q^2$-dependence of the fractional cross section in Fig.~\ref{fig:data-dis} (left). \begin{figure}[htb] \centerline{ \includegraphics[width=4.5cm]{Q2-dep.eps} \hspace{15mm} \includegraphics[width=4.0cm]{kk07.eps}} \caption{{\it Left:} comparison of the calculated $Q^2$-dependence of the fractional cross section of neutron production with data from \cite{zeus}. {\it Right:} the fractional cross section calculated at $Q^2=1.5\,\mbox{GeV}^2$ and $\nu=8\,\mbox{GeV}$.} \label{fig:data-dis} \end{figure} The observed independence of $Q^2$ at large $z$ is a direct consequence of the absorption mechanism under consideration, shown in Fig.~\ref{fig:pion-pole} (right). 
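The impact-parameter procedure described above, multiplying the amplitude by $S_{4q}(b)$ in $b$-space and transforming back, can be illustrated schematically. The Gaussian profile and the suppression parameters below are toy assumptions for illustration only, not the HERA dipole parametrizations used in the actual calculation:

```python
import numpy as np

# transverse impact-parameter grid (units are illustrative)
n, b_max = 256, 12.0
b = np.linspace(-b_max, b_max, n, endpoint=False)
bx, by = np.meshgrid(b, b)
b2 = bx**2 + by**2

# toy Born amplitude profile in impact-parameter space
phi_born = np.exp(-b2 / (2.0 * 2.0**2))

# toy absorption factor: strongest suppression at small b, S -> 1 at large b
S_4q = 1.0 - 0.8 * np.exp(-b2 / (2.0 * 1.5**2))

# multiply in b-space, then return to the momentum representation
phi_abs = phi_born * S_4q
A_born = np.abs(np.fft.fft2(phi_born))[0, 0]  # forward (q = 0) amplitude
A_abs = np.abs(np.fft.fft2(phi_abs))[0, 0]
# since S_4q(b) <= 1 everywhere, the forward amplitude is suppressed:
# A_abs < A_born
```

Because the suppression is concentrated at small $b$, the absorbed amplitude is depleted most strongly in the forward direction, which is the qualitative origin of the reduced fractional cross section.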
These results include, besides the pion exchange, also contributions from other iso-vector Reggeons: the natural parity $\rho$ and $a_2$, and the unnatural parity $\tilde a_1$, which contains the weak $a_1$ pole and the strong $\rho$-$\pi$ Regge cut \cite{kpps-dis,kpss-AN}. Analogous measurements of leading proton production from a neutron target (deuteron) are planned at Jefferson Lab. An example of the expected fractional cross section, calculated at $Q^2=1.5\,\mbox{GeV}^2$ and $\nu=8\,\mbox{GeV}$, is presented in Fig.~\ref{fig:data-dis} (right). The relative contribution of the pion pole is smaller compared with low-$x$ processes; therefore the results are expected to be more model dependent. \section{Single neutron production in $pp$ collisions} Similar to DIS, production of leading neutrons at modern colliders (RHIC, LHC) offers a possibility to measure the pion-proton cross section at energies higher than has been available so far with pion beams. Conversely, if the $\pi$-$p$ cross section is known or guessed, one can predict the cross section of $pp\to nX$, which is given by the same expression as Eq.~(\ref{155n}), except that the last factor, the ratio $F^\pi_2/F^p_2$, should be replaced by $\sigma^{\pi p}_{tot}(M_X^2)/\sigma^{pp}_{tot}(s)$. The results of the Born approximation \cite{kpss} are depicted by the upper three curves in Fig.~\ref{fig:data} (left), which agree with ISR data at $\sqrt{s}=30.6$ and $62.7\,\mbox{GeV}$ \cite{isr}. \begin{figure}[htb] \centerline{ \includegraphics[width=4cm]{isr2.eps} \hspace{10mm} \includegraphics[width=4cm]{isr-zeus.eps}} \caption{{\it Left:} energy dependence of the differential cross section of forward neutron production, calculated in the Born approximation (upper) and absorption corrected (bottom). Data are from \cite{isr}. 
{\it Right:} comparison of fractional forward cross sections of neutron production in $pp$ collisions \cite{isr,na49} and in DIS \cite{zeus}.} \label{fig:data} \end{figure} Of course these Born approximation results should be corrected for the absorption effects, which were found in \cite{3n,ryskin} to be rather weak, in agreement with the ISR data. On the contrary, the absorption factor calculated in \cite{kpss} leads to a much stronger suppression, close to what was found for DIS in the previous section. As a consequence, the absorption corrected bottom three curves in Fig.~\ref{fig:data} (left) strongly underestimate the data. It was concluded in \cite{kpss} that the normalization of the data \cite{isr} is incorrect. Indeed, comparison with other currently available data in DIS \cite{zeus} and in $pp$ collisions \cite{na49}, plotted in Fig.~\ref{fig:data} (right), shows that these fractional cross sections are about a factor of two below the ISR data. One can hardly imagine that absorption in a photo-production process is stronger than in $pp$. The reason for the weak absorption found in \cite{3n,ryskin} is explained in Fig.~\ref{fig:graphs}. The third Reggeon graph ({\bf c}) was neglected because the 4-Reggeon vertex $2\Pom2\pi$ was claimed to be unknown. \begin{figure}[htb] \centerline{ \includegraphics[width=6cm]{graphs.eps} \hspace{15mm} \includegraphics[width=2.5cm]{vertex-1.eps}} \caption{{\it Left:} triple-Regge graphs contributing to the absorption corrections. {\it Right:} the structure of the $2\Pom2\pi$ vertex in graph ({\bf c}).} \label{fig:graphs} \end{figure} However, this vertex has a structure, shown in the right part of Fig.~\ref{fig:graphs}, which gives the largest contribution to the absorption effects. 
In addition to the cross section, the rich spin structure of the amplitude, Eq.~(\ref{100}), suggests a possibility of a stringent test of the dynamics of the process, supported by recent precise measurements of the single-spin asymmetry of neutron production at RHIC \cite{phenix,phenix2}. Of course the amplitude (\ref{100}) does not produce any spin asymmetry, because both terms have the same phase. However, the strong absorption corrections change the phase and spin effects appear. Nevertheless, the magnitude of $A_N(t)$ was found in \cite{kpss-AN} to be too small in comparison with the data. Interference with the natural parity Reggeons $\rho$ and $a_2$ is strongly suppressed at high energies. Only the spin-non-flip axial-vector Reggeon $a_1$ is a promising candidate. Its effective contribution $\tilde a_1$ also includes the $\rho$-$\pi$ Regge cut. The results of a parameter-free evaluation of $A_N(t)$ \cite{kpss-AN} due to $\pi$-$\tilde a_1$ interference, shown by stars in Fig.~\ref{fig:AN} (left), agree well with the data \cite{phenix,phenix2}. \begin{figure}[htb] \centerline{ \includegraphics[width=4.5cm]{An-pi-a.eps} \hspace{15mm} \includegraphics[width=3.0cm]{pi-pi.eps} } \caption{{\it Left:} comparison of the parameter-free calculations \cite{kpss-AN} (stars) with data \cite{phenix}. {\it Right:} double-pion exchange in the double-neutron production amplitude.} \label{fig:AN} \end{figure} \vspace{-5mm} \section{Double-neutron production} The large experiments at the RHIC and LHC colliders are equipped with zero-degree calorimeters (ZDC), which can detect small angle leading neutrons. This suggests a unique opportunity to detect two leading forward-backward neutrons. According to Fig.~\ref{fig:AN} (right), one can extract from data valuable information about pion-pion interactions at high energies. 
The cross section of this process in the Born approximation has a factorized form \cite{pi-pi}, \beq \frac{d\sigma^B(pp\to nXn)}{dz_1dz_2\,dq_{1}^2dq_{2}^2} = f^B_{\pi^+/p}(z_1,q_{1})\, \sigma^{\pi^+\pi^+}_{tot}(\tau s) f^B_{\pi^+/p}(z_2,q_{2}), \label{160} \eeq where the flux of pions carrying fractional momentum $1-z$ of the proton reads \cite{kpp}, \beq f^B_{\pi^+/p}(z,q)= -\frac{t}{z}\,G_{\pi^+pn}^2(t) \left|\frac{\alpha_\pi^\prime\eta_\pi(t)}{8}\right|^2 (1-z)^{1-2\alpha_\pi(t)}. \label{180} \eeq This flux at $q=0$ is plotted by the dashed curve in Fig.~\ref{fig:flux} (left). \begin{figure}[htb] \centerline{ \includegraphics[width=4.8cm]{f0-old.eps} \hspace{10mm} \includegraphics[width=4.8cm]{PHI.eps} } \caption{{\it Left:} the forward flux of pions $f_{\pi^+/p}^{(0)}(z,q)$, calculated in the Born approximation (dashed) and including absorption (solid). {\it Right:} the integrated flux of two pions at $z_{min}=0.5$ in the Born approximation (dotted), and absorption corrected (dashed). Solid curves with $z_{min}=0.5$-$0.9$ also include the feed-down corrections.} \label{fig:flux} \end{figure} The absorption corrected flux is plotted by the solid curve, demonstrating a considerable reduction \cite{pi-pi}. To maximize statistics, one can make use of all detected neutrons, fixing $M_X^2=\tau s=(1-z_1)(1-z_2)s$ to extract the $\pi\pi$ total cross section, $\sigma(pp\to nXn)_{z_{1,2}>z_{min}}= \Phi^B(\tau)\, \sigma^{\pi^+\pi^+}_{tot}(\tau s)$. The double-pion flux $\Phi(\tau)$ reads, \beqn \Phi(\tau)= \frac{d\sigma(pp\to nXn)_{z>z_{min}}} {\sigma^{\pi^+\pi^+}_{tot}(\tau s)}= \int\!\!\! \frac{dz_1}{1-z_1} F_{\pi^+/p}(z_1) F_{\pi^+/p}(z_2) D_{abs}^{NN}(s,z_1,z_2), \nonumber \eeqn where $D_{abs}^{NN}(s,z_1,z_2)$ is an extra absorption factor due to direct $NN$ interactions, which breaks down the pion-flux factorization. This factor was calculated in \cite{pi-pi}. The integrated pion flux $F_{\pi^+/p}(z)$ reads, \beq F_{\pi^+/p}(z)=-z\int\limits_{q_L^2}^\infty dt\, f_{\pi^+/p}(z,q). 
\label{197} \eeq Thus, detecting pairs of forward-backward neutrons with the ZDCs installed in all large experiments at RHIC and LHC provides a unique opportunity to study pion-pion interactions at high energies. However, the absorption effects are especially strong for this channel. \section{Summary} The higher Fock components of the proton, containing pions, allow one to obtain unique information about the pion structure and interactions at high energies, provided that the kinematics of neutron production is in the vicinity of the pion pole. In this short overview we presented several processes with neutron production: DIS on a proton, proton-proton collisions, including spin effects, and also double neutron production. However, even in close proximity of the pion pole, the analysis can hardly be performed in a model independent way. Strong absorption effects significantly suppress the cross sections. We identified the main mechanism for these effects, which had been missed in previous calculations. It arises from the initial/final state interactions of the debris of the pion collision. It was evaluated employing the well developed color-dipole phenomenology, based on DIS data from HERA.\\ {Acknowledgments:} B.Z.K. is thankful to the organizers of the EDS Blois Meeting for the invitation to speak. This work was supported in part by Fondecyt (Chile) grants 1130543, 1130549 and 1140390.
\section{Introduction} The double perovskites are a fascinating playground for different properties such as half-metallicity \cite{Retuerto,Philipp}, high temperature ferrimagnetism \cite{Feng,Krockenberger}, ferroelectricity \cite{Fukushima,Kumar,Retuerto1} and a rich variety of magnetic interactions \cite{Ou,Morrow1}. The choice of A, B and B$^{'}$ in their chemical formula A$_2$BB$^{'}$O$_6$, with transition metal ions B and B$^{'}$ residing on the two sublattices of a three-dimensional cubic lattice, offers great flexibility in designing suitable electronic properties for several potential applications. Recently the double perovskites Sr$_2$XOsO$_6$ (X$=$Sc, Mg), hosting a single magnetic ion (Os), have been considered for their high magnetic transition temperatures. The magnetization measurements of Sr$_2$ScOsO$_6$ indicate an antiferromagnetic transition to type-I magnetic ordering (the two Os sites in the unit cell are coupled antiferromagnetically with spins oriented in the ab-plane) at T$_N$ = 92 K, one of the highest transition temperatures of any double perovskite \cite{Sr2ScOsO6}. Using both neutron and x-ray powder diffraction, the crystal structure of Sr$_2$ScOsO$_6$ has been refined as monoclinic with symmetry $P2_1/n$ at all temperatures \cite{Sr2ScOsO6}. The compound Sr$_2$MgOsO$_6$ represents another rare example of a double perovskite with T$_N$ = 100 K, higher than Sr$_2$ScOsO$_6$, and with the same type-I magnetic ordering \cite{Sr2MgOsO6}. Its crystal structure has been refined in the tetragonal symmetry $I4/m$, and the temperature dependence of the resistivity of polycrystalline Sr$_2$MgOsO$_6$ indicates semiconductor-like behavior \cite{Sr2MgOsO6}. In both compounds the inverse susceptibility versus temperature clearly shows Curie-Weiss behaviour, signaling the existence of local magnetic moments \cite{Sr2ScOsO6,Sr2MgOsO6}. 
The differences in the crystal structure of Sr$_2$ScOsO$_6$ and Sr$_2$MgOsO$_6$ are rather small and related to the crystal group symmetries, in which the tiltings between neighboring layers of octahedral environments are $a^{-}a^{-}c^{+}$ and $a^{0}a^{0}c^{-}$ (in Glazer's notation). In terms of electronic structure, the double perovskites Sr$_2$ScOsO$_6$ and Sr$_2$MgOsO$_6$ are expected to have the transition metal Os ions in nominal 5$d^{3}$ and 5$d^{2}$ electronic configurations, which are combined with the ones of the Sc$^{3+}$ and Mg$^{2+}$ ions. For such electronic configurations the effect of spin-orbit coupling (SOC) in the 5d orbitals is expected to be rather different \cite{ChenBalents,Pardo,Lee2,Gangopadhyay}, as well as the role of electronic correlations \cite{GeorgesAnnuRev,DeMedici}. The explanation of the presence of localized electrons in Sr$_2$XOsO$_6$ (X$=$Sc, Mg), which can host high magnetic ordering temperatures \cite{Sr2ScOsO6,Sr2MgOsO6}, is hidden in the details of the interplay of SOC and electronic correlations in the Os 5d electrons \cite{Sr2ScOsO6,Sr2MgOsO6}. In this work, using a set of ab-initio calculations, we provide evidence of a correlated Mott magnetic state in the recently discovered double perovskites Sr$_2$XOsO$_6$ (X$=$Sc, Mg). We investigate the electronic structure of Sr$_2$XOsO$_6$ (X$=$Sc, Mg), pointing out how the interplay of SOC and electronic correlations favors the onset of localization in the 5d manifold and, as a consequence, the magnetic ordering. \section{Calculation Details} We perform first-principles density functional calculations within the Local Density Approximation (LDA) \cite{LDA} as implemented in the Vienna {\it Ab initio} Simulation Package (VASP) \cite{VASP}, with the projector augmented wave (PAW) method \cite{PAW} to treat the core and valence electrons. 
A kinetic cutoff energy of 500\,eV is used to expand the wavefunctions, and a $\Gamma$-centered 6$\times$6$\times$4 $k$-point mesh combined with the tetrahedron and Gaussian methods is used for Brillouin zone integrations. The ionic and cell parameters of Sr$_2$XOsO$_6$ (X$=$Sc, Mg) are fixed to the experimental ones \cite{Sr2ScOsO6,Sr2MgOsO6} (see Fig. \ref{fig0} for a schematic view of the unit cell). \begin{figure}[] \vspace{0.1cm} \includegraphics[width=0.875\columnwidth,angle=-0]{fig00} \\ \vspace{0.1cm} \caption{(Color online) Schematic view of the unit cell of Sr$_2$XOsO$_6$ (X$=$Sc, Mg). The Os octahedra are highlighted by a polyhedral shape.} \label{fig0} \end{figure} Since it is well known that the LDA often underestimates the effects of electronic correlations in systems with $d$ orbitals, we also calculate electronic properties within the rotationally invariant LDA+$U$ method introduced by Liechtenstein et al. \cite{DFTplusU}, which requires two parameters, the Hubbard parameter $U$ and the exchange interaction J$_h$. Performing magnetic spin polarized calculations in the AFM type-I order within this scheme, we vary the magnitude of the Hubbard parameter $U$ between 0 and 4\,eV for the Os $d$-states. Note that the standard spin-polarized LDA corresponds to $U=$J$_h$=0\,eV. We explicitly include the SOC in the LDA and LDA+U calculations. To treat local electronic correlations beyond mean-field theory, and to better understand the effects of electronic correlations in the double perovskites Sr$_2$XOsO$_6$ (X$=$Sc, Mg), we combine DFT and Dynamical Mean-Field Theory (DMFT) \cite{DMFT} and focus mainly on the paramagnetic state, where magnetic ordering is inhibited. From our point of view, the choice of paramagnetic solutions allows us to characterize the parent state from which the AFM order establishes due to the presence of localized electrons. 
In our implementation of the LDA+DMFT and LDA+SOC+DMFT approach, we first construct maximally localized Wannier orbitals (using the Wannier90 code \cite{wannier90}) for the $5d$ Os orbitals over the energy range spanned by the Os $t_{2g}$ states of the LDA band structure, without and with SOC included. When the SOC is included we use the numerical J,J$_z$ basis, which diagonalizes the onsite part of the Hamiltonian, to perform the calculations (in the following we refer to U as the matrix describing this rotation of the basis set); otherwise the Os $t_{2g}$ manifold is used. Then we solve the self-consistent impurity model using Exact Diagonalization \cite{Caffarel,Capone}, in which the impurity model is described with a number of levels $N_s=12$ \cite{Liebsch,2322,BaCrO3GG} and diagonalized by a parallel Arnoldi algorithm \cite{ARPACK}. In the LDA+SOC+DMFT the Coulomb interaction was treated in a parametrized form in which the interactions in the J,J$_z$ basis are calculated from the ones used in the LDA+DMFT between the $t_{2g}$ orbitals in the density-density approximation, as done in Ref. \cite{AritaSr2IrO4}. This corresponds to using the same matrix U defined above to rotate the interaction matrix in the density-density approximation as defined in the $t_{2g}$ orbitals (L,L$_z$ basis set). For the LDA+DMFT calculations we consider as interaction terms U (between electrons occupying the same orbital with opposite spins), U$^{'}$=U-2J$_h$ (between electrons occupying different orbitals with opposite spins), and U$^{''}$=U-3J$_h$ (between electrons occupying different orbitals with the same spin). The precise values of U and J$_h$ for the 5d Os orbitals in Sr$_2$XOsO$_6$ (X$=$Sc, Mg) are unknown; we therefore perform LDA+DMFT and LDA+SOC+DMFT calculations for different values of U at fixed ratios J$_h$/U of 0.15 and 0.3. 
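The density-density interaction matrix entering the impurity model can be written down explicitly. A minimal sketch for a t$_{2g}$ manifold in the standard Kanamori density-density convention (same orbital, opposite spin: U; different orbitals, opposite spin: U$-$2J$_h$; different orbitals, same spin: U$-$3J$_h$) is:

```python
import numpy as np

def kanamori_dd(U, Jh, n_orb=3):
    """Density-density interaction matrix for n_orb orbitals x 2 spins.
    Spin-orbital index i = s * n_orb + o (orbital o, spin s)."""
    n = 2 * n_orb
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:              # no self-interaction (Pauli principle)
                continue
            oi, si = i % n_orb, i // n_orb
            oj, sj = j % n_orb, j // n_orb
            if oi == oj:            # same orbital, opposite spin
                M[i, j] = U
            elif si != sj:          # different orbitals, opposite spin
                M[i, j] = U - 2.0 * Jh
            else:                   # different orbitals, same spin
                M[i, j] = U - 3.0 * Jh
    return M
```

In the J,J$_z$ scheme described above, this matrix would then be rotated with the same unitary that diagonalizes the onsite Hamiltonian, as done in Ref. \cite{AritaSr2IrO4}.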
\section{Results} \subsection{LDA and LDA+U calculations} \begin{figure}[] \vspace{0.1cm} \includegraphics[width=0.6\columnwidth,angle=-90]{fig1} \\ \vspace{0.1cm} \caption{(Color online) Band structure of Sr$_2$MgOsO$_6$ (a)) and Sr$_2$ScOsO$_6$ (b)) within the LDA and LDA+SOC schemes along high symmetry directions of the Brillouin zone. The Fermi level is set to zero. The t$_{2g}$ orbital (LDA) and total angular momentum J (LDA+SOC) projected density of states based on Wannier functions for Sr$_2$MgOsO$_6$ (c)) and Sr$_2$ScOsO$_6$ (d)).} \label{fig1} \end{figure} We start by determining the band structures of Sr$_2$XOsO$_6$ (X$=$Sc, Mg) by means of DFT paramagnetic calculations, as shown in Fig. \ref{fig1}a),b) \cite{notaBZ}. The LDA calculations clearly lead to a metallic solution with a sizable spectral weight at the Fermi level. The low-energy contribution to the spectral density is dominated by the bands arising from the t$_{2g}$ Os 5d electrons, which are very weakly entangled with the oxygen bands lying in energy windows well below the Fermi level. The octahedral environment splits the Os 5d orbitals into t$_{2g}$ and e$_g$ levels, stabilizing the 5$d^{3}$ and 5$d^{2}$ electronic configurations in Sr$_2$ScOsO$_6$ and Sr$_2$MgOsO$_6$, respectively. The bandwidths W of both double perovskites, related to the Os t$_{2g}$ orbitals, are narrow as a consequence of the alternating coordination of the Os sites with the non-magnetic Sc and Mg ions in their crystal structures. The values of W are $\sim$ 1.2 and 1.35 eV for X$=$Sc and Mg, respectively. This difference is related to the reduced degree of buckling in the crystal structure of Sr$_2$MgOsO$_6$ compared to that of Sr$_2$ScOsO$_6$ \cite{Sr2ScOsO6,Sr2MgOsO6}. The effect of the inclusion of SOC in the DFT paramagnetic computations is sizeable (see Fig. \ref{fig1}) and amounts to a broadening of the bandwidths of the Os t$_{2g}$ orbitals in Sr$_2$XOsO$_6$ (X$=$Sc, Mg). 
In our LDA+SOC framework the bandwidths W increase to $\sim$ 1.45 and 1.6 eV for X$=$Sc and Mg, respectively. \begin{figure}[] \vspace{0.5cm} \includegraphics[width=0.6\columnwidth,angle=-90]{fig2} \\ \vspace{0.5cm} \caption{(Color online) Magnetization M (a)) and charge gap $\Delta$ (b)) as a function of U within LDA+U and LDA+U+SOC calculations for Sr$_2$MgOsO$_6$ and Sr$_2$ScOsO$_6$ in the AFM type-I magnetic structure. Orbital M$_L$ and spin M$_S$ magnetization (c)) as a function of U within LDA+U+SOC calculations for Sr$_2$MgOsO$_6$ and Sr$_2$ScOsO$_6$ in the AFM type-I magnetic structure. } \label{fig2} \end{figure} To investigate the effect of SOC on the electronic structure of the Os t$_{2g}$ orbitals for X$=$Sc and Mg, we construct maximally localized Wannier orbitals \cite{wannier90} for the $5d$ Os orbitals over the energy range spanned by the Os $t_{2g}$ states in the LDA and LDA+SOC band structures (see Fig. \ref{fig1}). In these calculations for Sr$_2$XOsO$_6$ (X$=$Sc, Mg), within the t$_{2g}$ manifold the orbitals d$_{xz}$ and d$_{yz}$ are found to be degenerate and lower in energy by 0.08 eV with respect to the orbital d$_{xy}$ (see Fig. \ref{fig1}c) and d)). In Fig. \ref{fig1}c) and Fig. \ref{fig1}d) the total angular momentum resolved density of states (evaluated in the LDA+SOC approach in the Wannier basis set) clearly shows the formation of a lower J=$\frac{3}{2}$ quartet and a higher J=$\frac{1}{2}$ doublet, which are split by $\sim$ 0.5 eV. The same splitting is also found to be relevant in the electronic structure of the double perovskite Ba$_2$NaOsO$_6$ with d$^1$ configuration \cite{Lee}. In the LDA scheme the orbital occupancies (calculated from the Wannier orbitals) of the t$_{2g}$ (xy,xz,yz) orbitals are $\sim$ 0.5 and 0.33 $e$ per spin channel for X$=$Sc and Mg, respectively, while in LDA+SOC the occupancies of the set of J=($\frac{3}{2}$,$\frac{1}{2}$) orbitals become $\sim$ (0.7,0.1) and (0.45,0.1) $e$ per spin channel for X$=$Sc and Mg, respectively. 
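The quartet-doublet structure follows from diagonalizing the atomic SOC Hamiltonian in the t$_{2g}$ subspace, which carries an effective $l=1$ with reversed sign. A minimal sketch, where the coupling $\lambda$ is an illustrative assumption chosen only so that the splitting $3\lambda/2$ matches the $\sim$0.5 eV discussed above:

```python
import numpy as np

lam = 0.33  # effective SOC strength in eV (illustrative): 3*lam/2 ~ 0.5 eV

# effective l = 1 angular-momentum matrices, basis m = -1, 0, +1
lz = np.diag([-1.0, 0.0, 1.0])
lp = np.sqrt(2.0) * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])  # l_plus
lx = 0.5 * (lp + lp.T)
ly = -0.5j * (lp - lp.T)

# spin-1/2 matrices
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.diag([1.0, -1.0])

# t2g carries an effective l = 1 with reversed sign: H_SOC = -lam * l.s
H = -lam * (np.kron(lx, sx) + np.kron(ly, sy) + np.kron(lz, sz))
ev = np.linalg.eigvalsh(H)  # ascending: quartet at -lam/2, doublet at +lam
```

The six eigenvalues split into a four-fold level at $-\lambda/2$ (j$_{\rm eff}=\frac{3}{2}$) and a two-fold level at $+\lambda$ (j$_{\rm eff}=\frac{1}{2}$), with the j$_{\rm eff}=\frac{3}{2}$ quartet lower, as in Fig. \ref{fig1}c),d).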
To deepen our understanding of the interplay of the effects related to SOC and local electronic correlations in the AFM state of Sr$_2$XOsO$_6$ (X$=$Sc, Mg), we perform LDA+U and LDA+SOC+U calculations as a function of the strength of electronic correlations in the type-I ordered magnetic structure \cite{Sr2ScOsO6,Sr2MgOsO6,notaAFM}. The local magnetization M and charge gap $\Delta$ as a function of U are reported in Fig. \ref{fig2}a) and \ref{fig2}b) for LDA+U and LDA+SOC+U calculations (J$_h$=0). The Os-O hybridization reduces the Os spin moment from the expected 3(2)${\mu}_B$ corresponding to S=3/2(1) for Sr$_2$ScOsO$_6$ (Sr$_2$MgOsO$_6$). The orbital and spin moments, M$_L$ and M$_S$, are antiparallel to each other in both compounds (see Fig. \ref{fig2}c)). The orbital moment M$_L$ is almost constant in Sr$_2$ScOsO$_6$, while it increases in Sr$_2$MgOsO$_6$ over the whole range of U. The band gap $\Delta$ correlates strongly with the modulus of the Os magnetization, which increases with increasing U. As shown in Fig. \ref{fig2}, at the optimal values of U=2 and 3 eV we can match the values of the Os local magnetization (M) of 1.55 and 0.59 ${\mu}_B$ found experimentally for X$=$Sc and Mg, respectively \cite{Sr2ScOsO6,Sr2MgOsO6} (with band gaps $\Delta$ of 0.26 and 0.31 eV). The difference in the optimal values of U in Sr$_2$XOsO$_6$ (X$=$Sc, Mg) can probably be related to the different electronic configurations 5d$^3$ (X$=$Sc) and 5d$^2$ (X$=$Mg), but advanced cRPA calculations to evaluate the proper values of U would be very useful to elucidate this point \cite{Sasioglu}. \begin{figure}[] \vspace{0.5cm} \includegraphics[width=0.6\columnwidth,angle=-90]{fig5} \\ \vspace{0.5cm} \caption{(Color online) Band structure of Sr$_2$MgOsO$_6$ (a)) (U=3.0 eV) and Sr$_2$ScOsO$_6$ (b)) (U=2.0 eV) within the LDA+U+SOC scheme for the AFM type-I magnetic order along high symmetry directions of the Brillouin zone. The Fermi level is set to zero. 
The two (three) bands of Sr$_2$MgOsO$_6$ (Sr$_2$ScOsO$_6$) that were half filled in the J,J$_z$ basis set are highlighted in red.} \label{fig5} \end{figure} The inclusion of the SOC in Sr$_2$ScOsO$_6$ i) slightly decreases the Os local magnetization M and ii) increases the critical value of the interaction U needed to reach the insulating AFM state. This is a common feature with other half-filled 5d$^3$ Os based materials \cite{LiOsO3GGMC,LiOsO3LiNbO3,Jung}. In Sr$_2$MgOsO$_6$ the LDA+U+SOC calculations show a large reduction of i) the Os local magnetization and ii) the critical value of the interaction U needed to reach the insulating state with respect to the LDA+U scheme. We mention that different values of the Hund's coupling J$_h$ do not change the qualitative picture at the level of LDA+U+SOC calculations in the AFM type-I magnetic ordered state. Indeed, for the optimal values of U (U=2 and 3 eV for X$=$Sc and Mg, respectively) and J$_h$/U = 0.15 and 0.3 we find 1.51 and 1.47 ${\mu}_B$ for Sr$_2$ScOsO$_6$ and 0.50 and 0.42 ${\mu}_B$ for Sr$_2$MgOsO$_6$. In Fig. \ref{fig5} we show the band structure calculated in the AFM type-I magnetic order for Sr$_2$XOsO$_6$ (X$=$Sc, Mg) within the LDA+U+SOC scheme. For both compounds we find that the two (X=Mg) and three (X=Sc) half-filled bands in the J,J$_z$ basis set split into fully filled lower and unfilled upper Hubbard bands. Thus the combined effects of spin-orbit coupling and on-site Coulomb repulsion result in an AFM Mott insulating state in both compounds. \subsection{LDA+DMFT and LDA+SOC+DMFT calculations} To further relate the magnetic scenario found experimentally to the presence of electronic correlations and of a Mott state, we perform LDA+DMFT and LDA+SOC+DMFT calculations. In Fig. \ref{fig6} we show the quasiparticle weights in the LDA+DMFT and LDA+SOC+DMFT schemes for Sr$_2$MgOsO$_6$ and Sr$_2$ScOsO$_6$ at different values of the ratio J$_h$/U. 
\begin{figure}[] \vspace{0.5cm} \includegraphics[width=0.6\columnwidth,angle=-90]{fig6} \\ \vspace{0.5cm} \caption{(Color online) Quasiparticle weight Z of the t$_{2g}$ orbitals of Sr$_2$ScOsO$_6$ (a)) and Sr$_2$MgOsO$_6$ (b)) in the LDA+DMFT scheme for J$_h$/U=0.15 and 0.3. Quasiparticle weight Z of the J$=\frac{3}{2}$,$\frac{1}{2}$ orbitals for Sr$_2$ScOsO$_6$ (c) for J$_h$/U=0.15 and e) for J$_h$/U=0.3) and Sr$_2$MgOsO$_6$ (d) for J$_h$/U=0.15 and f) for J$_h$/U=0.3) in the LDA+SOC+DMFT scheme.} \label{fig6} \end{figure} The underlying physics related to the role of electronic correlations for the Os 5d$^2$ (Sr$_2$MgOsO$_6$) and 5d$^3$ (Sr$_2$ScOsO$_6$) configurations within our LDA+DMFT scheme (without taking into account SOC) follows the trends already described for model calculations in Refs. \cite{GeorgesAnnuRev,DeMedici} (see Fig. \ref{fig6} a) and b)). In Sr$_2$ScOsO$_6$ the Os t$_{2g}^3$ orbitals are close to degeneracy and half-filled (0.5 $e$ per spin channel), a configuration which is well known to be particularly prone to the effects of electronic correlations due to the Coulomb interactions and the Hund's coupling \cite{GeorgesAnnuRev,DeMedici}. The quasiparticle weights Z of all these half-filled Os t$_{2g}$ orbitals decrease with increasing Hund's coupling in Sr$_2$ScOsO$_6$ (see Fig. \ref{fig6}a)), and we find the critical value of U for the metal-insulator transition to be $\sim$ 1 eV for both values of the ratio J$_h$/U. The electronic configuration 5d$^2$ in Sr$_2$MgOsO$_6$, with t$_{2g}$ orbitals close to degeneracy and populated with 0.33 $e$ per spin channel, is still affected to a significant extent by electronic correlations \cite{GeorgesAnnuRev,DeMedici}. We find a two-fold effect of the Hund's coupling: a drastic reduction of the quasiparticle weight Z of all the Os t$_{2g}$ orbitals at small strengths of the electronic interactions, and an increase of the critical value of U needed to reach the Mott state \cite{GeorgesAnnuRev,DeMedici} (see Fig. \ref{fig6}b)). 
Although Sr$_2$MgOsO$_6$ is prone to bad metallic behaviour, the Mott insulating phase is pushed to values of the interaction strength U larger than 3 eV. When SOC is included at the level of LDA+SOC+DMFT calculations, the physical picture, in terms of the localization of the quasiparticle weights Z and of the critical values of U needed to reach the Mott state, changes dramatically (see Fig.~\ref{fig6}c), d), e) and f)) due to the reshaping of the interband and intraband interaction terms and the loss of orbital degeneracy in the J,J$_z$ basis set. The orbital differentiation between the J=$\frac{3}{2}$ quartet and the J=$\frac{1}{2}$ doublet is larger in Sr$_2$MgOsO$_6$ than in Sr$_2$ScOsO$_6$. The inclusion of the SOC in the LDA+SOC+DMFT calculations sets a smaller (larger) value of the strength of the Coulomb interactions in Sr$_2$MgOsO$_6$ (Sr$_2$ScOsO$_6$) than the one obtained in the LDA+DMFT scheme. \begin{figure}[] \vspace{0.5cm} \includegraphics[width=0.6\columnwidth,angle=-90]{fig4} \\ \vspace{0.5cm} \caption{(Color online) LDA+SOC+DMFT orbital J resolved spectral density for Sr$_2$ScOsO$_6$ (a)) for U=3.0 eV and J$_h$/U = 0.3 and Sr$_2$MgOsO$_6$ (b)) for U=2.5 eV and J$_h$/U = 0.15. } \label{fig4} \end{figure} The effect of the electronic correlations is to drive the orbital occupancies of the set of J=($\frac{3}{2}$,$\frac{1}{2}$) orbitals towards $\sim$ (0.5,0.5) and (0.5,0.0) $e$ per spin channel for X$=$Sc and Mg, respectively. In our computations we find that the onset of the Mott state is stabilized in Sr$_2$MgOsO$_6$ in correspondence with orbital occupancies of J=$\frac{3}{2}$ close to half-filling and empty J=$\frac{1}{2}$ states, while in Sr$_2$ScOsO$_6$ it is obtained when both the J=$\frac{3}{2}$,$\frac{1}{2}$ states are very close to half-filling. The smallest critical values of the on-site Coulomb repulsion U are 2.75 eV for X$=$Sc (J$_h$/U=0.3) and 2.25 eV for X$=$Mg (J$_h$/U=0.15), respectively. 
We observe that the quasiparticle weights Z and the critical values of the on-site Coulomb repulsion U decrease (increase) with increasing Hund's coupling for X$=$Sc (Mg). This is due to the interplay between the Coulomb interactions and the effective crystal-field splitting in the J,J$_z$ basis set, which determines the orbital occupancies of the J=$\frac{3}{2}$ and J=$\frac{1}{2}$ states and consequently controls their degree of electronic correlations. In Fig. \ref{fig4} we show the J,J$_z$ basis resolved spectral density in the paramagnetic insulating phase calculated within our LDA+SOC+DMFT scheme \cite{notaPade} for different values of the interactions. The insulating Mott state is clearly demonstrated by the presence of two (X=Mg, J=$\frac{3}{2}$) and three (X=Sc, J=$\frac{3}{2}$,$\frac{1}{2}$) fully filled lower and unfilled upper Hubbard bands, in close similarity to what we mentioned for our LDA+SOC+U results. We remark that the electronic structure of Sr$_2$XOsO$_6$ (X$=$Sc, Mg) is different from the case of Sr$_2$IrO$_4$, which is a one-band system with the J=$\frac{1}{2}$ states being half-filled, while the J=$\frac{3}{2}$ bands are completely filled \cite{AritaSr2IrO4,Zhang}. We therefore believe that the Sr$_2$XOsO$_6$ compounds (X=Sc, Mg) might be considered as a multiband version of Sr$_2$IrO$_4$. If the paramagnetic constraint is lifted, we find a strong tendency to form magnetic moments upon increasing the electronic correlations. The present AFM insulating phase is then the consequence of such substantial correlation effects. We argue that the two and three correlated bands in the J,J$_z$ basis might be strongly related to the high magnetic ordering temperatures of the double perovskites Sr$_2$XOsO$_6$ (X$=$Sc, Mg), as happens for other correlated half-filled materials in which the SOC is negligible \cite{Mravlje}. Further theoretical studies are needed to verify this hypothesis. 
\section{Conclusions} Using first-principles methods, we investigate the electronic and magnetic structure of Sr$_2$XOsO$_6$ (X$=$Sc, Mg). Our LDA+U+SOC calculations show that the double perovskite Sr$_2$ScOsO$_6$ presents larger effects of electronic correlations than Sr$_2$MgOsO$_6$, while in the latter the effect of the SOC is dominant. The effect of the SOC combined with the electronic correlations is to decrease the local magnetization M in both Sr$_2$XOsO$_6$ compounds and to increase (decrease) the critical value of the interaction strength U needed to obtain the insulating phase in Sr$_2$ScOsO$_6$ (Sr$_2$MgOsO$_6$). Within our LDA+U+SOC AFM calculations we identify the origin of the antiferromagnetic insulating phase as the presence of two (X=Mg) and three (X=Sc) half-filled J=$\frac{1}{2},\frac{3}{2}$ bands split into fully filled lower and unfilled upper Hubbard bands. By performing LDA+DMFT and LDA+SOC+DMFT calculations, we relate this AFM state to the presence of localized electrons close to the Mott state. The 5d$^2$ (X=Mg) and 5d$^3$ (X=Sc) electronic configurations in the LDA+DMFT scheme produce phases with enhanced electronic correlations that are, respectively, far from and close to a Mott state. However, when the SOC is taken into account, the combined effects of the effective crystal-field splitting in the J,J$_z$ basis and of the electronic correlations shift Sr$_2$MgOsO$_6$ closer to a Mott state associated with the J=$\frac{3}{2}$ bands, while in Sr$_2$ScOsO$_6$ they lead to a larger critical value of U for the Mott-Hubbard transition in the J=$\frac{1}{2},\frac{3}{2}$ bands. Our LDA+DMFT and LDA+SOC+DMFT calculations substantiate the importance of including the SOC when investigating the electronic structure of such compounds and give evidence of correlated J=$\frac{1}{2},\frac{3}{2}$ bands in the double perovskites Sr$_2$XOsO$_6$ (X$=$Sc, Mg), which might be candidates to host high magnetic ordering temperatures. \\ \section{ACKNOWLEDGMENTS} The author GG is indebted to M. Casula for fruitful discussions and thanks M. Aichhorn and A. Privitera for the careful and critical reading of the manuscript.
\section{Introduction} A professional hierarchy is a field in which an employee enters at a designated low level and gradually moves up the ranks. For instance, large businesses have interns through CEOs, hospitals have residents through head physicians, and academic institutions have undergraduates through full professors. Over time, women have generally become better represented in many industries (e.g., \cite{luckenbill2002educational, kaye2007progress, terjesen2009women,farrell2005additions}), but women are still poorly represented at the highest levels of most professional hierarchies (e.g., \cite{nelson2003national, carr2015inadequate,shapiro2015middle}), a phenomenon known as the `leaky pipeline' effect. Countless factors have been proposed to explain this effect: family responsibilities \cite{eagly2012women}, different professional interests between the genders \cite{konrad2000sex,ceci2010sex,van2004academic}, biological differences \cite{sapienza2009gender}, unconscious bias in the workplace \cite{easterly2011conscious,lee2005unconscious}, laws restricting gender discrimination \cite{chacko1982women,sape1971title}, societal gender roles \cite{britton2000epistemology,blackburn1995measurement,dean2015work}, and other entrenched cultural or psychological factors. Many of these qualitative theories require an implicit assumption that men and women make fundamentally different decisions, either as a result of biological differences or social indoctrination. Some quantitative models have attempted to study the ascension of women through certain fields without relying on intrinsic differences between the sexes. Shaw and Stanton \cite{shaw2012leaks} calculate the `inertia' of women through several academic hierarchies, and find that gender differences play a diminishing role in promotion over time. Holman et al.
\cite{holman2018gender} present the first quantitative model to our knowledge that attempts to predict the time required to reach gender parity in academic STEM fields, with estimates as high as several centuries in some disciplines. Their model assumes logistic growth to gender parity of the proportion of women in senior and junior academic roles (as estimated by last and first authorship on research papers, respectively). Although logistic growth and eventual gender parity is a reasonable assumption for their phenomenological model, we create a mechanistic model to examine the relative impact of two major sociological factors, homophilic (self-seeking) instincts and gender bias, on the progression of women through professional hierarchies. We find that gender parity is not guaranteed, and gender fractionation may never settle to an equilibrium. \section{Model} Broadly speaking, two classes of people influence the ascension of individuals through a professional hierarchy: people at lower levels choose to apply for higher positions, and people at higher levels choose to promote applicants into the next level. People at higher levels affect the promotion of individuals through their hiring biases, while the decisions to apply for promotion made by those at lower levels are affected by their own homophilic tendencies. Women in hierarchical professions tend to be promoted more slowly than men, even when accounting for differing productivity and attrition, indicating that gender is a salient factor in the hiring process \cite{tesch1995promotion,heilman2001description,eagly2007women,kumra2008study,winkler2000faculty}. If gender is the determining factor when deciding between equally qualified candidates, we will say that a gender bias exists. We define gender bias as all conscious or unconscious decisions made by the employer during the hiring process that are affected by the gender of the applicant. 
For simplicity, we will assume that gender-based hiring bias is constant across all hierarchy levels (i.e., employers will uniformly reduce or enhance female candidates' relative chance of promotion at all levels). Gendered differences in promotion also depend on gender differences in the applicant pool due to individuals self-selecting, consciously or unconsciously, whether or not to submit an application. When gender is a salient factor in deciding whether or not to submit an application, we will assume that such decisions are based largely on a homophilic instinct. In other words, when an individual considers whether or not to apply for a promotion, he or she looks at the demographics of those working at the level above and evaluates whether or not they `belong' in that higher level. While this assumption may seem simplistic, many studies show that people unconsciously self-segregate based on gender from an early age \cite{maccoby1987gender,alexander1994gender,moller1996antecedents,powlishta1995gender}. In fact, perceptions of gendered jobs perpetuate much of the occupational gender segregation we see today \cite{miller2004occupational,pan2015gender,ludsteck2014impact,alonso2012extent,bender2005job}. With the goal of understanding the relative roles that bias and homophily (self-segregation) play in the ascension of women through professional hierarchies, we derive a minimal mathematical model that incorporates both forces. To introduce the model, we begin with a simple example. \subsection{Example} Consider the decision process that occurs during the transition between two levels in a professional hierarchy (Figure \ref{fig:example}). Suppose the lower level is 40\% women, and gender is not a factor in eligibility for promotion; then the group eligible for promotion is also 40\% women. If women are not well-represented in the higher level, then women may not feel as comfortable applying for promotion as men.
To be clear, we do not suppose that women are intrinsically less likely to apply for promotion; rather we assume that the gender demographics in the upper level affect both men's and women's feelings of belonging (homophily) in that level. \begin{figure}[htb] \begin{center} \includegraphics[width=0.65\textwidth]{fig1.pdf} \end{center} \caption{Example of a potential decision process between two levels in a professional hierarchy.} \label{fig:example} \end{figure} Say men are twice as likely to apply for promotion due to these homophilic instincts. Then the applicant pool will shrink to 25\% women. If no bias towards or against women exists in hiring, then 25\% of those granted promotion will be women. However, if women are slightly less likely than men to be granted promotion due to bias, then the fraction of women hired will shrink again. We assume this decision process occurs between all levels in a professional hierarchy. The schematic in Figure \ref{fig:schematic} is a visualization of a generic hierarchy. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\textwidth]{fig2.pdf} \end{center} \caption{Schematic of an $L$-level hierarchy. The $j$th level in the hierarchy has a certain fraction of women $x_j$, and people retire or leave the field from each level at a rate $R_j$. The general population is assumed to be $1/2$ women at all times.} \label{fig:schematic} \end{figure} \subsection{Model derivation} We begin by assuming that the probability $P(u,v)$ of seeking promotion to the next level is a function of the fraction of people at the upper level who share the applicant's gender, $u$, and the fraction of like-gendered individuals in the applicant's current level, $v$. There exists a `one-third hypothesis' that supports the anecdotal evidence that an individual feels comfortable in a group environment when at least 30\% of the members share the individual's demographic status \cite{engelmann1967communication,srikantan1968curious}.
To our knowledge, this hypothesis has not been rigorously tested in the real world, so we allow $P$ to take a more flexible form. Specifically, we suppose that the threshold of comfort may depend on the environment in which a person currently resides. We also assume that the threshold does not delineate an instantaneous switch from 0\% comfort to 100\% comfort; instead the comfort level may gradually change around that threshold. One simple function that captures this behavior is the sigmoid \begin{equation} \label{eq:P} P(u,v) = \frac{1}{1+ e^{-\lambda(u-v)}}, \end{equation} \noindent where $u$ is the fraction of like-gendered individuals in the level above, $v$ is the fraction of like-gendered individuals in the current level, and $\lambda$ is the strength of the homophilic tendency. This function need not be a literal probability because only the relative likelihood of applying for promotion is relevant. Because we choose not to include inherent gender differences in the model, we assume that this function applies to both men and women. See Figure \ref{fig:P} for a sketch of this homophily function. \begin{figure}[htb] \begin{center} \includegraphics[width=0.75\textwidth]{fig3.pdf} \end{center} \caption{An example of the probability that a woman seeks promotion, dependent on the demographics of the level to which she is applying. In this example, a woman is more likely to apply for promotion if there are more women in the level above her. The probability changes most rapidly around the demographic split she is most accustomed to, the gender split in her current position.} \label{fig:P} \end{figure} Given this probability $P(u,v)$ of seeking promotion, the fraction of women in the applicant pool is \begin{align} \label{eq:f0} f_0(u,v) = \frac{v P(u,v)}{v P(u,v) + (1-v) P(1-u,1-v)}, \end{align} \noindent where $u$ is the fraction of women in the higher level and $v$ is the fraction of women in the current level.
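A minimal numerical sketch (ours, not part of the original analysis) of the homophily function \eqref{eq:P} and the applicant-pool fraction \eqref{eq:f0}; the parameter values $u=0.2$, $v=0.4$, $\lambda=4$ are purely illustrative:

```python
import math

def P(u, v, lam):
    """Probability of seeking promotion, eq. (eq:P):
    a sigmoid in (u - v) with homophily strength lam."""
    return 1.0 / (1.0 + math.exp(-lam * (u - v)))

def f0(u, v, lam):
    """Fraction of women in the applicant pool, eq. (eq:f0)."""
    w = v * P(u, v, lam)                  # female applicants
    m = (1.0 - v) * P(1.0 - u, 1.0 - v, lam)  # male applicants
    return w / (w + m)

# With lam = 0 (no homophily) the decision to apply ignores gender,
# so the applicant pool mirrors the current level.
print(round(f0(0.2, 0.4, 0.0), 6))    # 0.4
# With lam > 0 and few women in the level above (u < v), fewer women apply.
print(f0(0.2, 0.4, 4.0) < 0.4)        # True
```

Only the ratio $P(1-u,1-v)/P(u,v)$ matters for the pool composition, which is why $P$ need not be a literal probability.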
In addition to self-segregation dynamics, hiring bias towards or against women will change the proportion of female applicants who are promoted. We incorporate this constant bias $b$ as the female fraction of those promoted if the applicant pool has an equal number of men and women. For instance, a bias $b$ exceeding $\tfrac{1}{2}$ would imply that women are favored disproportionately, and a bias less than $\tfrac{1}{2}$ suggests that men are favored. The fraction of women promoted to the next level is then \begin{align} f(u,v; b) = \frac{b \, v \, P(u,v)}{b \, v \, P(u,v) + (1-b) (1-v) P(1-u,1-v)}. \end{align} \noindent This is not the only way to incorporate bias, but it is a simple way to ensure that bias does not leave vacancies or induce the promotion of those who have not applied. As an example, a naive choice to incorporate bias would be $f(u,v; b)=b f_0(u,v)$, where $b<1$ indicates bias against women. However, this choice permits $f>1$ if $b$ or $f_0$ is sufficiently large. Because professional hierarchies are frequently competitive, with each level smaller than the level below it, we assume that all vacancies will be filled. The vacancies are created by individuals who are promoted to the next level, those leaving the field at a particular level, or those retiring from the top level.
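The worked example from the previous subsection (a 40\% female lower level, men twice as likely to apply, promotion fraction 25\%) can be checked directly against $f(u,v;b)$. In this sketch (ours), $\lambda$ and $u$ are chosen so that the application-rate ratio $P(1-u,1-v)/P(u,v)=e^{-\lambda(u-v)}$ equals exactly 2:

```python
import math

def P(u, v, lam):
    return 1.0 / (1.0 + math.exp(-lam * (u - v)))

def f(u, v, b, lam):
    """Fraction of women among those promoted, with hiring bias b."""
    w = b * v * P(u, v, lam)                              # weighted female applicants
    m = (1.0 - b) * (1.0 - v) * P(1.0 - u, 1.0 - v, lam)  # weighted male applicants
    return w / (w + m)

# Choosing lam*(v - u) = ln 2 makes men twice as likely to apply.
lam = math.log(2.0) / 0.2              # paired with u = 0.2, v = 0.4 below
print(round(f(0.2, 0.4, 0.5, lam), 6))  # 0.25 -- matches the worked example
# Bias against women (b < 1/2) shrinks the promoted fraction further,
# and f always stays in (0, 1), unlike the naive choice b*f0.
print(f(0.2, 0.4, 0.45, lam) < 0.25)    # True
```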
The change in the number of women at each level, $x_j N_j$, is \begin{equation} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d} t} (x_L N_L) =& \, R_L N_L\, f\left(x_L,x_{L-1}; b \right) - R_L N_L \, x_L \\ \frac{\mathrm{d}}{\mathrm{d} t} (x_j N_j) =& \left( \sum_{k=j}^{L} R_k N_k \right) f\left(x_j,x_{j-1}; b\right) -R_j N_j x_j \\ &- \left( \sum_{k=j+1}^{L} R_k N_k \right) f\left(x_{j+1}, x_{j}; b\right) \,\,\,\,\,\,\, \text{for } 1< j < L \\ \frac{\mathrm{d}}{\mathrm{d} t} (x_1 N_1) =& \left( \sum_{k=1}^{L} R_k N_k \right) f(x_1, \tfrac{1}{2}; b) -R_1 N_1 x_1 - \left( \sum_{k=2}^{L} R_k N_k \right) f\left(x_{2}, x_{1}; b\right), \label{eq:derive} \end{aligned} \end{equation} \noindent where $L$ is the number of levels in the hierarchy, $x_j$ is the fraction of people in level $j$ who are women, $N_{j}$ is the number of people in the $j$th level, $R_j$ is the retirement/leave rate at the $j$th level, $\sum_{k=j+1}^{L} R_k N_k$ is the total number of retiring people above the $j$th level, $b$ is the bias parameter, and $f(\,\cdot \,)$ is the fraction of people promoted to the next level who are women. Because it may not be intuitive that the change in the number of women at lower levels depends on the total number of retiring people above the level, we provide a simple example to illustrate this feature in the Appendix. 
We normalize system \eqref{eq:derive} by dividing each equation by the number of people retiring/leaving the level (i.e., $R_j N_j$): \begin{equation} \begin{aligned} \frac{1}{R_L}\frac{\mathrm{d} x_L}{\mathrm{d} t} &= \overbrace{f\left(x_L,x_{L-1}; b\right)}^{\substack{\text{promoted from} \\ \text{lower level}}} - \overbrace{x_L}^{\substack{\text{retire out} \\ \text{of level}}} \\ \frac{1}{R_j}\frac{\mathrm{d} x_j}{\mathrm{d} t} &= (1+r_j) f\left(x_j,x_{j-1}; b \right) - x_j - r_j f\left(x_{j+1}, x_{j}; b\right) \,\,\,\,\,\,\, \text{for } 1< j < L \\ \frac{1}{R_1}\frac{\mathrm{d} x_1}{\mathrm{d} t} &= \underbrace{(1+r_1) f(x_1,\tfrac{1}{2}; b)}_{\substack{\text{hired from} \\ \text{general pool}}} - \underbrace{x_1}_{\substack{\text{leave} \\ \text{field}}} - \underbrace{r_1 f\left(x_{2}, x_{1}; b\right)}_{\substack{\text{promoted to} \\ \text{next level}}}, \label{eq:full} \end{aligned} \end{equation} \noindent where $r_j$ is the ratio of the total retiring people above the $j$th level to the retiring people in the $j$th level. Algebraically, this ratio is $ r_j = \sum_{k=j+1}^{L} R_k N_k/R_j N_j$. Note that this system can be condensed into one line by taking $r_L = 0$ and $x_0 = 1/2$. Refer to Table \ref{tab:parameters} for descriptions of the model variables and parameters. 
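The normalized system \eqref{eq:full} is straightforward to integrate numerically. The sketch below (our illustration; the three-level sizes and rates are invented, not taken from the datasets) uses forward Euler and checks the no-bias, no-homophily limit, in which every level should relax to a 50/50 split:

```python
import math

def P(u, v, lam):
    return 1.0 / (1.0 + math.exp(-lam * (u - v)))

def f(u, v, b, lam):
    w = b * v * P(u, v, lam)
    m = (1.0 - b) * (1.0 - v) * P(1.0 - u, 1.0 - v, lam)
    return w / (w + m)

def rhs(x, R, r, b, lam):
    """Right-hand side of the normalized system (eq:full),
    condensed with r_L = 0 and x_0 = 1/2."""
    L = len(x)
    xs = [0.5] + list(x)            # x_0 = 1/2: the general population
    dx = []
    for j in range(1, L + 1):
        out = f(xs[j + 1], xs[j], b, lam) if j < L else 0.0  # promoted out
        dx.append(R[j - 1] * ((1 + r[j - 1]) * f(xs[j], xs[j - 1], b, lam)
                              - xs[j] - r[j - 1] * out))
    return dx

# Illustrative 3-level hierarchy: leave rates R_j, level sizes N_j,
# and the ratios r_j = sum_{k>j} R_k N_k / (R_j N_j).
R = [0.25, 0.20, 1.0 / 6.0]
N = [13.0, 2.0, 1.0]
r = [sum(Rk * Nk for Rk, Nk in zip(R[j + 1:], N[j + 1:])) / (R[j] * N[j])
     for j in range(3)]

x = [0.30, 0.20, 0.10]              # initial fractions of women per level
dt, steps = 0.05, 20000             # integrate to t = 1000
for _ in range(steps):
    x = [xj + dt * dxj for xj, dxj in zip(x, rhs(x, R, r, b=0.5, lam=0.0))]
print([round(xj, 4) for xj in x])   # [0.5, 0.5, 0.5]
```

With $b=\tfrac{1}{2}$ and $\lambda=0$ the right-hand side reduces to the linear null model, so the computed equilibrium confirms the implementation.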
\begin{table}[htb] \begin{center} \begin{tabular}{| p{1.5cm} p{10cm} |} \hline Variable & Meaning \\ \hline $x_j$ & fraction of people in the $j$th level who are women \\ $L$ & number of levels in hierarchy \\ $R_j$ & retirement/leave rate at the $j$th level \\ $N_j$ & number of people in the $j$th level \\ $r_j$ & ratio of the total retiring people above the $j$th level to the retiring people in the $j$th level $\left(\sum_{k=j+1}^{L} R_k N_k/R_j N_j\right)$ \\ $P(\,\cdot\,)$ & likelihood of seeking promotion \\ $f(\,\cdot\,)$ & fraction of people promoted to next level who are women \\ $b$ & bias towards or against women ($b=1/2$ is no bias) \\ $\lambda$ & strength of homophilic tendency \\ \hline \end{tabular} \caption{Model variables and parameters for \eqref{eq:full}.} \label{tab:parameters} \end{center} \end{table} \subsection{Null model} Consider a null model with no hiring bias or homophily. In model \eqref{eq:full}, this would imply bias $b=\tfrac{1}{2}$ and the likelihood of seeking promotion $P$ is constant ($\lambda=0$). The model then reduces to the linear system \begin{equation} \frac{1}{R_j} \frac{\mathrm{d} x_j}{\mathrm{d} t} = (1+r_j) (x_{j-1} - x_j ) \,\,\,\,\,\,\, \text{for } 1\le j \le L. \label{eq:null} \end{equation} The only steady state is $\{ x_j^{*} \} = \{ \tfrac{1}{2} \}$. The Jacobian of the system evaluated at this state yields all real, negative eigenvalues. Therefore, $\{ x_j \} = \{ \tfrac{1}{2} \}$ is a stable sink of the null model. In other words, without bias or homophily, each level in the hierarchy will directly converge to equal gender representation, as seen in the model by Holman et al. \cite{holman2018gender}. The rate of convergence to parity for each level depends on the eigenvalues of the system: $\lambda_j=-R_j(1+r_j),$ for $j=1,\ldots,L$. The eigenvalues depend only on the level sizes $N_j$ and leave rates $R_j$. 
The convergence time to parity for the whole system is then given by $1/\min_j\{ R_j(1+r_j) \},$ the characteristic timescale of the system. See the Appendix for a more complete analysis of the null model. Figure \ref{fig:null} shows convergence to gender parity in a hypothetical academic hierarchy. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{fig4.pdf} \end{center} \caption{Example of direct convergence to 50/50 gender split under the null model \eqref{eq:null}. In this example, we consider a hypothetical academic hierarchy with six levels, $\{R_j\} = \{1/4,1/5,1/6,1/7,1/9,1/15\}$, and $\{N_j\} = \{13,8,5,3,2,1\}$.} \label{fig:null} \end{figure} \subsection{Homophily-free model} Now consider a model in which people do not use gender to decide whether to apply for a promotion (i.e., $\lambda = 0$), but employers are biased towards or against women (i.e., $b\ne \tfrac{1}{2}$). In this case, the model \eqref{eq:full} reduces to \begin{equation} \frac{1}{R_j} \frac{\mathrm{d} x_j}{\mathrm{d} t} = (1+r_j) \frac{b x_{j-1}}{b x_{j-1} + (1-b) (1-x_{j-1})} - x_j - r_j \frac{b x_{j}}{b x_{j} + (1-b) (1-x_{j})} \,\,\,\,\,\,\, \text{for } 1 \le j \le L. \label{eq:noseg} \end{equation} As in the null model \eqref{eq:null}, the homophily-free model has a single, attracting fixed point. The presence of bias, however, pushes the steady-state gender fractionation away from gender parity. This effect is more extreme in higher levels than in lower ones. In particular, if the bias is against women ($b<\frac{1}{2}$), \[ x^{*}_{L}<\ldots<x^{*}_{j}<\ldots<x^{*}_{1}<\frac{1}{2}. \] See the Appendix for details, and see Figure \ref{fig:noseg} for transient model behavior for a hypothetical academic hierarchy. \begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig5.pdf} \end{center} \caption{Examples of transient behavior of a 6-level homophily-free model \eqref{eq:noseg}.
\textbf{(a)} For strong bias against women ($b = 0.35$), all levels directly converge to male majority, with the strongest majority in the highest levels of leadership. \textbf{(b)} For weak bias against women ($b = 0.49$), the fraction of women in each level directly converges to a value near 50/50, though there are still more men in each level. \textbf{(c)} For weak bias favoring women ($b = 0.51$), the fraction of women in each level directly converges to a value near 50/50, though there are more women in each level. \textbf{(d)} For strong bias favoring women ($b = 0.65$), all levels directly converge to female majority, with the strongest majority in the highest levels of leadership.} \label{fig:noseg} \end{figure} \subsection{Bias-free model} Consider an alternative model in which people self-segregate by gender, but employers are not biased towards or against women (i.e., $b=\tfrac{1}{2}$). Then model \eqref{eq:full} reduces to \begin{equation} \frac{1}{R_j}\frac{\mathrm{d} x_j}{\mathrm{d} t} = (1+r_j) f_0\left(x_j,x_{j-1}\right) - x_j - r_j f_0\left(x_{j+1}, x_{j}\right) \,\,\,\,\,\,\, \text{for } 1\le j \le L. \label{eq:biasfree} \end{equation} We observe three qualitatively different model behaviors for \eqref{eq:biasfree}: for mild homophilic tendencies, the system converges to gender parity; for moderate homophily, the fraction of women oscillates in all levels; and for strong homophily, the system converges to either male or female dominance depending on the initial state. The emergence of oscillations in such a system may not seem intuitively obvious. We explain the onset of oscillations in the Appendix. Figure \ref{fig:nobias} shows the range of model behavior for a hypothetical academic hierarchy. See Figure \ref{fig:nobiasbif} for an example of a bifurcation diagram for the bias-free system. Although this diagram is representative of typical model behavior, the location of bifurcation points may shift as parameters vary. 
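Two properties of the bias-free model \eqref{eq:biasfree} can be verified numerically: the 50/50 state is an equilibrium for every homophily strength $\lambda$, and for mild homophily parity is still attracting. The sketch below (ours; the three-level parameters are illustrative, and the choice $\lambda=0.5$ is well below the oscillatory regime) checks both with a simple forward-Euler integration:

```python
import math

def P(u, v, lam):
    return 1.0 / (1.0 + math.exp(-lam * (u - v)))

def f0(u, v, lam):
    w = v * P(u, v, lam)
    m = (1.0 - v) * P(1.0 - u, 1.0 - v, lam)
    return w / (w + m)

def rhs(x, R, r, lam):
    """Bias-free system (eq:biasfree), condensed with r_L = 0, x_0 = 1/2."""
    L = len(x)
    xs = [0.5] + list(x)
    dx = []
    for j in range(1, L + 1):
        out = f0(xs[j + 1], xs[j], lam) if j < L else 0.0
        dx.append(R[j - 1] * ((1 + r[j - 1]) * f0(xs[j], xs[j - 1], lam)
                              - xs[j] - r[j - 1] * out))
    return dx

R = [0.25, 0.20, 1.0 / 6.0]         # illustrative leave rates
N = [13.0, 2.0, 1.0]                # illustrative level sizes
r = [sum(Rk * Nk for Rk, Nk in zip(R[j + 1:], N[j + 1:])) / (R[j] * N[j])
     for j in range(3)]

# (1) Gender parity is an equilibrium for any homophily strength:
# f0(1/2, 1/2) = 1/2, so every term on the right-hand side cancels.
for lam in (0.0, 2.0, 5.0):
    assert all(abs(d) < 1e-12 for d in rhs([0.5] * 3, R, r, lam))

# (2) For mild homophily, parity is still attracting.
x = [0.30, 0.20, 0.10]
for _ in range(40000):              # integrate to t = 2000 with dt = 0.05
    x = [xj + 0.05 * dxj for xj, dxj in zip(x, rhs(x, R, r, lam=0.5))]
print(max(abs(xj - 0.5) for xj in x) < 1e-6)  # True
```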
\begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig6.pdf} \end{center} \caption{Examples of transient behavior of a hypothetical academic 6-level bias-free model \eqref{eq:biasfree}. \textbf{(a)} For mild homophily ($\lambda = 2$), all levels converge to gender equity after oscillating above and below a 50/50 split. \textbf{(b)} For stronger homophily ($\lambda = 3$), the fraction of women in each level oscillates about the 50/50 split without converging. \textbf{(c)} For yet stronger homophily ($\lambda = 4.5$), limit cycles appear to behave like those of a relaxation oscillator. \textbf{(d)} For strong homophily ($\lambda = 5$), each level equilibrates to nearly all women (solid lines) or nearly all men (dashed lines), depending on the initial condition. For all examples, $\{R_j\} = \{1/4,1/5,1/6,1/7,1/9,1/15\}$, and $\{N_j\} = \{13,8,5,3,2,1\}$.} \label{fig:nobias} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig7.pdf} \end{center} \caption{Numerical bifurcation diagram for homophily parameter $\lambda$ in a 3-level bias-free system. Solid lines are stable equilibria/cycles, dashed lines are unstable equilibria/cycles, black dots are bifurcations of equilibria, and black lines are bifurcations of limit cycles. All limit cycles show the gender fractionation for the lowest level, $x_1$. Generated using AUTO\cite{doedel2007auto,ermentrout2002simulating} with $N_1=70, N_2=2, N_3=1, R_1=1/4, R_2=1/5, R_3=1/6$. Convergence to a degenerate pitchfork bifurcation at $\lambda=4$ as $r_j \to 0$ is shown in the Appendix.} \label{fig:nobiasbif} \end{figure} For the parameter values listed in the caption of Figure \ref{fig:nobias}, we see that as homophily increases from a small value, a supercritical Hopf bifurcation occurs, which initiates the onset of stable oscillations in all hierarchy levels. 
Although these oscillations are not identical, they have the same period at steady state, as suggested by the transient behavior in Figure \ref{fig:nobias}(b) and (c). At $\lambda\approx 3.9,$ the limit cycle in each hierarchy level undergoes a pitchfork bifurcation of limit cycles, after which no stable equilibria at, or steady oscillations about, gender parity occur. At $\lambda= 4,$ a degenerate pitchfork bifurcation occurs for all parameter values. At this point, $2L+1$ equilibria, several of which are unstable, emanate from the pitchfork as determined by a center manifold reduction. In Figure \ref{fig:nobiasbif}, we focus on the equilibrium at gender parity and a pair of equilibria which eventually become stable, through subcritical Hopf bifurcations at $\lambda\approx 4.35.$ All limit cycles eventually end at homoclinic bifurcations: the periodic orbit spends more and more time near a saddle point (not shown) as the period diverges. As $r_j \rightarrow 0$ for each level, the Hopf bifurcations converge at the pitchfork bifurcation. In that limit the pitchfork has a greater degeneracy, producing $3^{L}$ equilibria. Loosely speaking, hierarchies with small $r_j$ have very few people retiring relative to the number of people who would like to be promoted, making the hierarchies competitive. The limit $r_j \rightarrow 0$ is not realistic for any real-world hierarchy to our knowledge, but analysis near this limit aids numerical continuation; see the Appendix for details. \subsection{Model with homophily and bias} Finally, we explore the full model \eqref{eq:full} with bias $b\ne \tfrac{1}{2}$ and homophily $\lambda\ne0$. The long-term dynamics are similar to those of the bias-free model \eqref{eq:biasfree}. For small homophily, regardless of initial state, the hierarchy tends towards a `biased' fractionation profile. For large homophily, the gender fraction polarizes, with bistable equilibria at both large and small fractions of women at each level.
Figure \ref{fig:bif2}(a),(d),(e) show examples of transient behavior at these high and low homophily values. Figure \ref{fig:bif2} shows a slight perturbation of the system from the bias-free case, highlighting the degeneracy of the pitchfork bifurcation in Figure \ref{fig:nobiasbif}; branch colors correspond with the colors of related branches in Figure \ref{fig:nobiasbif}. Generically, for moderate levels of homophily, the limit cycles that emanate from the bifurcation `bend' in the direction of bias (e.g., for $b<\frac{1}{2},$ toward fewer women in each hierarchy level) as homophily increases through the supercritical Hopf bifurcation. The degenerate pitchfork bifurcation unfolds into several saddle-node bifurcations and a continuous fixed point curve. Similarly, the pitchfork bifurcation of limit cycles unfolds into a saddle-node bifurcation of limit cycles and a continuous limit cycle curve. As in Figure \ref{fig:nobiasbif}, all limit cycles end in homoclinic bifurcations. For lower values of bias $b$, the Hopf bifurcation from the equigender fixed point shifts along the branch of equilibria it emanates from, corresponding to a decrease in $x_1$ and an increase in homophily. At the same time, the length of the limit cycle branches emanating from the Hopf point decreases, and the Hopf point is eliminated in a Takens-Bogdanov bifurcation. For stronger bias ($b\approx 0.45$), long-term behavior manifests solely as equilibria, which includes the possibility of decaying oscillations. Limit cycles are no longer possible. See the Appendix for the co-dimension 2 bifurcation diagram, where both bias $b$ and homophily $\lambda$ are varied. \begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig8.pdf} \end{center} \caption{Numerical bifurcation diagram for homophily parameter $\lambda$ in a 3-level system with slight bias against women ($b=0.499$). 
Solid lines are stable equilibria/cycles, dashed lines are unstable equilibria/cycles, black dots are bifurcations of equilibria, and black lines are bifurcations of limit cycles. All curves show the gender fractionation for the lowest level, $x_1$. Generated using AUTO\cite{doedel2007auto,ermentrout2002simulating} with $N_1=70, N_2=2, N_3=1, R_1=1/4, R_2=1/5, R_3=1/6$. Examples of transient behavior for several positions within the bifurcation diagram are on the margins: (a) $\lambda = 3$, (b) $\lambda = 3.5$, (c) $\lambda = 4$, (d) $\lambda = 5$ with lower initial condition, and (e) $\lambda = 5$ with higher initial condition.} \label{fig:bif2} \end{figure} \section{Model validation} With this simple model, we aim to extract useful information from real-world hierarchies without claiming to fully explain their dynamics. For instance, we wish to predict when (or if) fields will reach gender parity, what sociological or psychological factors may be the main drivers of gender fractionation dynamics, and what interventions may help various fields reach gender parity more quickly. \subsection{Data} We collect time series data on the fraction of women in each level of many professional hierarchies \cite{nsf2017ncses,nsf2017nss,ies2017nces,acs2015cen,nasem2013seeking,apa2017sd,kff2017med,aamc2016med,abim2018med,brotherton2017graduate,brotherton2016graduate,brotherton2015graduate,topaz2016gender,lab_2017,asne2015,blount1998destined,dana2006women,cat2018www,aba2013law,nalp2018law,nawl2017law,nawl2018rut,pew2018gov,msu2017cihws,uscb2013nurse,hrsa2008nurse,lao2016orch}. Although most studies of this nature have focused on academia \cite{holman2018gender,shaw2012leaks}, the generality of our model allows us to examine a larger variety of hierarchies: medicine, law, politics, business, education, journalism, entertainment, and fine arts/music. Of the 23 hierarchy datasets we assembled, 16 are sufficiently comprehensive to attempt model fitting. 
Each dataset comprises the following components: \begin{itemize} \item A hierarchy structure (e.g., undergraduate $\to$ graduate $\to$ postdoctoral $\to$ assistant professor $\to$ associate professor $\to$ professor, in a typical academic hierarchy). In the real world, the hierarchical structure is not perfectly rigid, but we take the structure to be the `typical' route through the ranks. This structure determines the hierarchy size $L$ and the ordering of levels in our model \eqref{eq:full}. \item The fraction of women in each level of the hierarchy over time. We include datasets with at least a decade's worth of continuous yearly data for all levels. If there are missing years, we use linear interpolation to fill the gaps. Some datasets were available in a table, but others were extracted from graphical representations using WebPlotDigitizer \cite{rohatgi2018webplotdigitizer}. This determines the exact $x_j(t)$ for a range of discrete times. \item The approximate relative sizes of each level. Although fields may grow (e.g., medicine) or shrink (e.g., journalism) over time, we find that the relative level sizes generally stay approximately the same. Where data on the relative level sizes were not available, we made educated guesses. This information estimates $N_j$ in our model if we normalize the top level to $1$ ($N_L = 1$). \item The approximate yearly `leave' or `retirement' rates for each level. These statistics are not available for any hierarchies, to our knowledge. We made educated guesses for these parameters based on the expected amount of time spent in each level. For instance, the vast majority of undergraduate degrees are completed in approximately four years, and relatively few graduates continue on to doctoral study. Therefore, our initial estimate for the undergraduate leave rate is $0.25$ (i.e., approximately a quarter of undergraduates leave college each year without moving up the academic hierarchy).
We take these proxies for exit rates as estimates for $R_j$ in our model. \end{itemize} All compiled data, including datasets not sufficient for model fitting, are available at Northwestern's ARCH repository: \url{https://doi.org/10.21985/N2QF28}. \subsection{Model fitting} We wish to fit the model to each dataset in order to quantify the degree of bias and homophily in each field; with this information, we may predict the long-term fraction of women in each level of the hierarchies without any intervention, and we can suggest targeted interventions to reach gender parity more quickly. Theoretically, distinguishing between bias and homophily in the data should be straightforward because the qualitative effects of each parameter are different. Bias is the only parameter that independently `separates' levels (i.e., bias causes the female fractionation $x_j$ to differ among levels), while homophily is the only parameter that independently causes oscillations. There are many possible ways to fit the model to each dataset. One qualitative way to measure the degree of bias and homophily in each dataset is to look for separation between levels and indications of oscillations. Roughly speaking, datasets with strong bias either towards or against women will have large changes in the proportion of women as one ascends the hierarchy (e.g., see Figure \ref{fig:noseg}(a),(d)). On the other hand, datasets with weak bias and moderate homophily will show signs of oscillations in each level (e.g., see Figure \ref{fig:nobias}(b),(c)), although real datasets may not include enough time points to resolve a full period of the oscillations. Datasets with weak bias and strong homophily will appear male- or female-dominated without much separation between levels (e.g., see Figure \ref{fig:nobias}(d)). If both bias and homophily are strong, then the impact of each phenomenon will be difficult to deduce visually (see Appendix for phase diagram), and quantitative methods will be needed.
As a quantitative attempt at fitting, we perform a global minimization of error between the model and data. We first find a best fit of the model to each dataset by minimizing the sum of squared error between the model gender fractionation $\hat{x}_j$ and the data $x_j$ over time using the Nelder-Mead minimization algorithm \cite{nelder1965simplex}. The fitting parameters are $b, \lambda, R_j, N_j$ and the initial conditions. We include $R_j$ and $N_j$ as fitting parameters because we do not have exact values for these parameters, but we heuristically verify that the model fit does not select values far from our initial guesses. The initial condition is a fitting parameter to ensure the first data point does not contribute more weight to the fitting process than the subsequent data points in the time series. We seed the Nelder-Mead algorithm with 20 initial guesses for the fitting parameters $b$ and $\lambda$, selected uniformly from $b \in (0.2,0.8)$ and $\lambda \in (1.5, 6.5)$. All other parameter guesses are taken to be our best estimates from available data. After finding the best fit parameters $(\tilde{b}, \tilde{\lambda})$ from among the 20 seeded searches, we run a second search in the parameter space near the best fit. In this next step, we seed the Nelder-Mead algorithm with 10 new initial guesses for $b$ and $\lambda$, drawn from the normal distributions $b \sim \mathcal{N} (\tilde{b}, 0.05)$ and $\lambda \sim \mathcal{N} (\tilde{\lambda}, 0.1)$. We take, as our final fit, the best fit parameters after this second search. See Appendix for a visual representation of this algorithm. We present the best fits from two representative hierarchies in Figure \ref{fig:fit1}. Best fit parameters $\hat{b}$ and $\hat{\lambda}$ from all datasets are shown in Figure \ref{fig:bvh}. See Appendix for fit parameters and additional model predictions for all datasets. 
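The two-stage seeded search just described can be sketched with scipy. Here a toy error surface stands in for the actual model-versus-data sum of squared errors, and only $(b,\lambda)$ are varied; the full fit also varies $R_j$, $N_j$, and the initial condition.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sse(params):
    # Toy stand-in for the sum of squared errors between model and data,
    # with its minimum placed at b = 0.45, lambda = 3.0
    b, lam = params
    return (b - 0.45)**2 + 0.1 * (lam - 3.0)**2

# Stage 1: 20 uniform seeds for (b, lambda)
seeds = np.column_stack([rng.uniform(0.2, 0.8, 20), rng.uniform(1.5, 6.5, 20)])
fits = [minimize(sse, s, method="Nelder-Mead") for s in seeds]
best = min(fits, key=lambda r: r.fun)

# Stage 2: 10 normally distributed seeds near the stage-1 optimum
seeds2 = np.column_stack([rng.normal(best.x[0], 0.05, 10),
                          rng.normal(best.x[1], 0.10, 10)])
fits2 = [minimize(sse, s, method="Nelder-Mead") for s in seeds2]
final = min(fits + fits2, key=lambda r: r.fun)
```

The multi-seed stage guards against local minima, while the second, narrower stage refines the incumbent best fit.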
\begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig9.pdf} \end{center} \caption{Model fit to data from (a) clinical academic medicine\cite{kff2017med,abim2018med,aamc2016med} and (b) academic psychology\cite{nsf2017nss,ies2017nces,apa2017sd}.} \label{fig:fit1} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\textwidth]{fig10.pdf} \end{center} \caption{Bias and homophily best fit parameters for each hierarchy. Colors indicate the predicted long-term (equilibrium) female fractionation in the highest level of leadership; if the hierarchy is not predicted to reach equilibrium, then a time average over the limit cycle was taken. *May not be a strict hierarchy: although producers hire directors, producers do not typically `promote' directors to producer positions. Likewise for politics.} \label{fig:bvh} \end{figure} To address the concern of possible overfitting, we verify that the ratio of data points to fitting parameters is large. For each dataset, there are $3 L+1$ fitting parameters and $L T$ data points, where $L$ is the number of levels and $T$ is the number of years in the dataset. The datasets with the fewest levels and fewest years available should prompt the greatest concern regarding overfitting. Among our datasets, the smallest ratio of data points to parameters was for the journalism hierarchy, which had 51 data points and 10 parameters. The typical ratio of data points to parameters was about 10:1. Because our parameter search algorithm is not guaranteed to find the absolute minimum error between the model and data, we verify that our model results are not excessively sensitive to changes in our fitting procedure. To illustrate, we seed our algorithm's random number generator with ten different seeds and verify that the variation in predicted average gender fractionation is small. 
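The parameter-to-data bookkeeping above is simple to make explicit. The journalism values below assume $L=3$ levels and $T=17$ years; these are our inference from the quoted 10 parameters and 51 data points, not figures stated elsewhere.

```python
def fit_diagnostics(L, T):
    # From the text: 3L + 1 fitting parameters and L*T data points
    n_params = 3 * L + 1
    n_data = L * T
    return n_params, n_data, n_data / n_params

# Journalism hierarchy (inferred): L = 3 levels, T = 17 years
p, d, ratio = fit_diagnostics(3, 17)   # -> (10, 51, 5.1)
```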
We select the two most concerning datasets for this computationally intensive test: (1) journalism, due to its risk for overfitting, and (2) academic engineering, due to its unpredictable fitting results during early tests (see Appendix). \section{Discussion} The presented model vastly simplifies the process by which people choose to advance their careers, yet we may exploit the model to extract useful predictions and suggestions for interventions to reach gender parity. By fitting the model to data from over a dozen professional hierarchies, we may predict the time required to reach gender parity if there are no cultural or policy shifts within the fields. Unlike the model by Holman et al. \cite{holman2018gender}, we predict that many fields may never reach gender parity without intervention (see Appendix). For instance, fields that indicate especially strong homophily (e.g., engineering and nursing) are expected to become male- or female-dominated. Fields with apparently strong bias against women (e.g., academic chemistry, math, and computer science) are predicted to never reach sustained gender parity, at least in the highest levels of leadership. Fields with bias near $1/2$ and weak homophily (e.g., medicine and law) are predicted to eventually reach gender parity as fast as inertia allows, as modeled by Shaw and Stanton \cite{shaw2012leaks}. Effective affirmative action programs could artificially speed the process, but resources may be better spent in fields where gender parity is not inevitable. One benefit of our modeling approach is that we can extract the relative impact of two major decision-makers in a professional hierarchy: those who apply for promotion and those who grant promotion. For fields with strong bias against women ($b<1/2$), the decision-makers that should be targeted are hiring committees. 
For instance, hiring committees could be trained in unconscious bias, or policies could mandate that the number of promotions offered to women match the applicant pool. For fields with strong homophily, the decision-makers that should be targeted are women eligible for promotion. Because fewer women apply for promotion than are eligible in male-dominated fields, hiring committees could actively recruit women to apply for promotion or make the underrepresented gender more visible within the field. \subsection{Limitations} Of course, the predictions and interventions suggested by this simple model are subject to limitations. We assume that hierarchical structures remain constant over time, but this is not always the case. For instance, some fields that now require a college degree were once accessible to those with a high school education. We also assume that individuals must pass through each level linearly, but many academic fields may or may not include a postdoc, and political or business leaders may come from outside their field entirely. To avoid overfitting, we assume that bias and homophily are constant both across time and across the hierarchy structure. Naturally, the cultures and policies that shape these sociological properties are not constant; perhaps bias against women has diminished over time, but maybe bias is stronger at higher levels of leadership. Also, gender may be more salient to a young person deciding on a major than to an associate professor up for promotion. Therefore, we think of the fitting parameters $\hat{b}$ and $\hat{\lambda}$ as an average bias and homophily over time and the hierarchy structure. Finally, we have ignored the different decisions that men and women may make. Our model assumes that men and women on hiring committees are equally biased against a certain gender, that gender is equally salient to men and women, and that men and women are equally qualified for advancement. 
A more sophisticated model may break the symmetry between men and women. \subsection{Future steps} Allowing bias and homophily to change over time and across the hierarchy structure is a natural model extension. In addition to making the model more realistic, it would also permit interventions to be incorporated directly into the model. If the effect of an intervention is to change bias and/or homophily, then the model could serve as the basis of a control problem to find an optimal time-dependent intervention. Due to the generality of the model, it could also be extended to study the progression of underrepresented minorities through professional hierarchies. A few complications are introduced in this case: our model assumes that the gender distribution of the general population is constant in both space and time, but for racial minorities this is not true. Also, data collection may prove to be more complicated due to the evolving and sometimes overlapping definitions of various racial and ethnic groups. Finally, the model could be generalized to include a spectrum of gender identities, income levels, or socioeconomic privilege. Two major challenges are introduced with this model extension. First, the current system of $L$ ordinary differential equations may become a system of $L$ partial integro-differential equations, which will make model analysis more difficult. Second, data required to validate such a model will be more challenging to obtain. \section{Conclusion} We have developed a simple model of the progression of people through professional hierarchies, like academia, medicine, and business. The model assumes that gender is a salient factor in both the decision to apply for promotion and the decision to grant promotion, but that men and women do not make fundamentally different decisions. Unlike previous models of the phenomenon, our model predicts that gender parity is not inevitable in many fields. 
Without intervention, a few fields may even become male- or female-dominated in the long term. By fitting our model to available data, we extract the relative impact of the major decision-makers in the progression of women through 16 professional hierarchies. In some fields, like academic chemistry, bias of promotion and hiring committees may be the dominant reason that women are poorly represented. In other fields, like engineering, women not applying for promotion may be the dominant reason for the so-called leaky pipeline. With this information, we may suggest effective interventions to reach gender parity. \section{Acknowledgements} The authors wish to thank Danny Abrams, Yuxin Chen, Stephanie Ger, Joseph Johnson, and Rebecca Menssen for valuable conversations during the model development stage. Thanks are also due to Elizabeth Field, Alan Zhou, and the Illinois Geometry Lab for contributions to and support of model analysis and data collection. The authors additionally thank Chad Topaz for offering comments that greatly improved the manuscript. The authors also wish to thank Jo\~{a}o Moreira (Amaral Lab, Northwestern University), Peter Buerhaus and Dave Auerbach (Center for Interdisciplinary Health Workforce Studies, Montana State University), Roxanna Edwards (Bureau of Labor Statistics), and Karen Stamm (Center for Workforce Studies, American Psychological Association) for sharing unpublished data. This work was funded in part by National Science Foundation Graduate Research Fellowship DGE-1324585 and Mathways grant DMS-1449269 (SMC), Royal E.~Cabell Terminal Year Fellowship (KH, EEA), and National Science Foundation Research Training Grant DMS-1547394 (AJK). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
\section{Data availability} All data (Excel .xlsx file) and software (Matlab .m files and XPPAUT .ode files) are publicly available from the Northwestern ARCH repository (DOI:10.21985/N2QF28) at \url{https://doi.org/10.21985/N2QF28}. \section{Competing interests} The authors declare no competing interests. \bibliographystyle{ieeetr}
\section{Introduction} As is well known, rare radiative decays of $B$ mesons are particularly sensitive to contributions from new physics beyond the standard model (SM). Both inclusive and exclusive processes, such as the decays $B_s \to X\gamma$, $B_s \to \gamma\gamma$ and $B \to X_s\gamma$, have received considerable attention in the literature$^{[1-14]}$. In this paper, we present our results in the framework of technicolor theories. The one-generation technicolor model (OGTM)$^{[15-16]}$ is the simplest and most frequently studied technicolor model, and it contains fewer parameters than the SM. Like other models, the OGTM has its defects, such as an $S$ parameter that is large and positive$^{[17]}$. However, the constraints on the OGTM from the $S$ parameter can be relaxed by introducing three additional parameters $(V,W,X)$$^{[18]}$. The basic idea of the OGTM is to introduce a new set of asymptotically free gauge interactions, the technicolor force, which acts on technifermions. The technicolor interaction becomes strong at about $1\,$TeV and causes spontaneous breaking of the global flavor symmetry $SU(8)_L \times SU(8)_R\times U(1)_{Y}$. The result is $8^{2}-1=63$ massless Goldstone bosons. Three of these objects replace the Higgs field and give masses to the $W^{\pm}$ and $Z^0$ gauge bosons. The remaining Goldstone bosons acquire masses through the new strong interaction and become pseudo-Goldstone bosons (PGBs). For $B_s \to \gamma\gamma$, only the charged color-singlet and color-octet PGBs contribute. The gauge couplings of the PGBs are determined by their quantum numbers. In Table 1 we list the relevant couplings$^{[19]}$ needed in our calculation, where $V_{ud}$ is the corresponding element of the Cabibbo-Kobayashi-Maskawa matrix. The Goldstone boson decay constant$^{[20]}$ is taken to be $F_{\pi}=v/2=123\,$GeV, which corresponds to the vacuum expectation value of an elementary Higgs field. 
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|} \hline $P^+ P^- \gamma_\mu$ & $-ie(p_+ - p_-)_\mu$ \\ \hline $P^+_{8a} P^-_{8b} \gamma_\mu$ & $-ie(p_+ - p_-)_\mu \delta_{ab}$ \\\hline $P^+\; u\; d$ & $i\frac{V_{ud}}{2 F_\pi}\sqrt{\frac{2}{3}} [M_u (1-\gamma_5) - M_d (1 + \gamma_5) ]$ \\ \hline $P^+_{8a}\; u\; d$ & $i\frac{V_{ud}}{2 F_\pi} \lambda_a [M_u (1-\gamma_5) - M_d (1 + \gamma_5) ]$ \\ \hline $P^+_{8a} P^-_{8b} g_{c\mu}$ & $-g f_{abc}(p_a - p_b)_\mu $ \\ \hline \end{tabular} \end{center} \caption{The relevant gauge couplings and effective Yukawa couplings for the OGTM.} \label{sm4} \end{table} At leading order (LO) in QCD the effective Hamiltonian is \begin{equation} {\cal H}_{eff} =\frac{-4G_F}{\sqrt{2}} V_{tb}V_{ts}^* \displaystyle{\sum_{i=1}^{8} }C_i (M_W) O_i(M_W). \end{equation} Here, as usual, $G_{F}$ denotes the Fermi coupling constant and $V_{tb}V_{ts}^*$ is the relevant combination of Cabibbo-Kobayashi-Maskawa matrix elements. The current-current, QCD penguin, electromagnetic and chromomagnetic dipole operators are of the form \begin{eqnarray} O_1&=&(\overline{c}_{L\beta} \gamma^{\mu} b_{L\alpha}) (\overline{s}_{L\alpha} \gamma_{\mu} c_{L\beta})\;\\ O_2&=&(\overline{c}_{L\alpha} \gamma^{\mu} b_{L\alpha}) (\overline{s}_{L\beta} \gamma_{\mu} c_{L\beta})\;\\ O_3&=&(\overline{s}_{L\alpha} \gamma^{\mu} b_{L\alpha}) \sum_{q=u,d,s,c,b}(\overline{q}_{L\beta} \gamma_{\mu} q_{L\beta})\;\\ O_4&=&(\overline{s}_{L\alpha} \gamma^{\mu} b_{L\beta}) \sum_{q=u,d,s,c,b}(\overline{q}_{L\beta} \gamma_{\mu} q_{L\alpha})\;\\ O_5&=&(\overline{s}_{L\alpha} \gamma^{\mu} b_{L\alpha}) \sum_{q=u,d,s,c,b}(\overline{q}_{R\beta} \gamma_{\mu} q_{R\beta})\;\\ O_6&=&(\overline{s}_{L\alpha} \gamma^{\mu} b_{L\beta}) \sum_{q=u,d,s,c,b}(\overline{q}_{R\beta} \gamma_{\mu} q_{R\alpha})\;\\ O_7&=&(e/16\pi^2) m_b \overline{s}_L \sigma^{\mu\nu} b_{R} F_{\mu\nu}\;\\ O_8&=&(g/16\pi^2) m_b \overline{s}_{L} \sigma^{\mu\nu} T^a b_{R} G_{\mu\nu}^a\; \end{eqnarray} where $\alpha$ and $\beta$ are 
color indices, $a=1,\ldots,8$ labels the $SU(3)_c$ generators, $e$ and $g$ refer to the electromagnetic and strong coupling constants, while $F_{\mu\nu}$ and $G^{a}_{\mu\nu}$ denote the QED and QCD field strength tensors, respectively. Typical Feynman diagrams that contribute to the matrix element are shown in Fig.~1. \begin{figure}[th] {\includegraphics{G1.eps}} \caption {Examples of Feynman diagrams that contribute to the matrix element.} \label{G1} \end{figure} \begin{figure}[th] {\includegraphics{G2.eps}} \caption {The Feynman diagrams that contribute to the Wilson coefficients $C_7$, $C_8$.} \label{G2} \end{figure} In Fig.~2 the short-dashed lines represent the charged PGBs $P^\pm$ and $P^{\pm}_8$ of the OGTM. We first integrate out the top quark and the $W$ boson at the scale $\mu=M_{W}$, generating an effective five-quark theory, and then run the effective theory down to the $b$-quark scale using the renormalization group equation, which resums the leading-log QCD corrections. The Wilson coefficients are process independent; the coefficients $C_{i}(M_W)$ of the 8 operators, calculated from the diagrams of Fig.~2, read$^{[21]}$ \begin{eqnarray} C_i(M_W)=0, \;\; i=1,3,4,5,6, \;\;\; C_2(M_W)=1,\\ C_7(M_W)=-A(\delta) +\frac{B(x)}{3\sqrt{2}G_F F_{\pi}^2 } +\frac{8 B(y)}{3\sqrt{2}G_F F_{\pi}^2 } \label{c7}\\ C_8(M_W)=-C(\delta)+\frac{D(x)}{3\sqrt{2}G_F F_{\pi}^2}+\frac{8D(y) + E(y)}{3\sqrt{2}G_F F_{\pi}^2 }\label{c8} \end{eqnarray} with $\delta=M_W^2/m_t^2$, $x=(m(P^{\pm})/m_t)^2$ and $y=(m(P^{\pm}_8)/m_t)^2$. From Eqs.~(\ref{c7}) and (\ref{c8}) we see that the situation for the color-octet charged PGBs is more complicated than that for the color-singlet ones, because of the involvement of the color interactions. 
where \begin{eqnarray} A(\delta)&=&\frac{ \frac{1}{3} +\frac{5}{24} \delta -\frac{7}{24} \delta^2}{(1-\delta)^3} +\frac{ \frac{3}{4}\delta -\frac{1}{2}\delta^2}{(1-\delta)^4} \log[\delta] \\ B(y)& =& \frac{ -\frac{11}{36} +\frac{53}{72}y -\frac{25}{72}y^2}{(1-y)^3}\nonumber \\ &+&\frac{ -\frac{1}{4}y +\frac{2}{3}y^2 -\frac{1}{3}y^3} {(1-y)^4}\log[y]\\ C(\delta)&=&\frac{\frac{1}{8} -\frac{5}{8} \delta-\frac{1}{4} \delta^2}{(1-\delta)^3} -\frac{ \frac{3}{4}\delta^2}{(1-\delta)^4} \log[\delta] \\ D(y)& =&\frac{ -\frac{5}{24} +\frac{19}{24}y -\frac{5}{6}y^2}{(1-y)^3}\nonumber \\ &+&\frac{ \frac{1}{4}y^2 -\frac{1}{2}y^3}{(1-y)^4} \log[y]\\ E(y) & =&\frac{ \frac{3}{2}-\frac{15}{8}y -\frac{15}{8}y^2 }{(1-y)^3} +\frac{\frac{9}{4}y -\frac{9}{2}y^2}{(1-y)^4 }\log[y] \end{eqnarray} The functions $A$ and $C$ are obtained by calculating the $W$-exchange graphs in the SM, while $B$, $D$ and $E$ are obtained from the graphs with exchange of the color-singlet and color-octet charged PGBs in the OGTM. When $\delta < 1$ and $x,y \gg 1$, the OGTM contributions $B$, $D$ and $E$ always carry a relative minus sign with respect to the SM contributions $A$ and $C$. As a result, the OGTM contribution always interferes destructively with the SM contribution. The leading-order results for the Wilson coefficients of all operators entering the effective Hamiltonian in Eq.~(1) can be written in an analytic form. They are \begin{eqnarray} C_7^{eff}(m_b) &=& \eta^{16/23}C_7(M_W) +\frac{8}{3} ( \eta^{14/23}-\eta^{16/23} )\times \nonumber \\&&C_8(M_W)+C_2(M_W) \displaystyle \sum _{i=1}^{8} h_i \eta^{a_i}. \end{eqnarray} with $\eta = \alpha_s(M_W) /\alpha_s (m_b)$, \begin{eqnarray} h_i &=&(\frac{626126}{272277}, -\frac{56281}{51730}, -\frac{3}{7}, -\frac{1}{14},-0.6494,\nonumber \\&& -0.0380, -0.0186, -0.0057 ).\\ a_i &=&(\frac{14}{23}, \frac{16}{23}, \frac{6}{23}, -\frac{12}{23},\nonumber \\&& 0.4086, -0.4230, -0.8994, 0.1456 ). 
\end{eqnarray} To calculate $B_s \to \gamma\gamma$, one may follow a perturbative QCD approach which includes a proof of factorization, showing that soft gluon effects can be factorized into the $B_{s}$ meson wave function, and a systematic way of resumming large logarithms due to hard gluons with energies between $1\,$GeV and $m_{b}$. In order to calculate the matrix element of Eq.~(1) for $B_s \to \gamma\gamma$, we work in the weak binding approximation and assume that both the $b$ and the $s$ quarks are at rest in the $B_s$ meson; the $b$ quark carries most of the meson energy, and its four-velocity can be treated as equal to that of the $B_s$. Hence one may write the $b$ quark momentum as $p_{b}=m_{b}v$, where $v$ is the common four-velocity of the $b$ and the $B_{s}$. We have \begin{eqnarray} p_{b}\cdot k_1&=&m_bv\cdot k_1={1\over 2}m_bm_{B_s}=p_{b}\cdot k_2,\nonumber \\ p_{s}\cdot k_1&=&(p-k_1-k_2)\cdot k_1=\nonumber \\&&-{1\over2} m_{B_s}(m_{B_s}-m_b)=p_{s}\cdot k_2, \end{eqnarray} We compute the amplitude of $B_s \to \gamma\gamma$ using the following relations \begin{eqnarray} \left\langle 0\vert \bar{s}\gamma_{\mu}\gamma_5 b\vert B_s(P) \right\rangle &=& -if_{B_s}P_{\mu},\nonumber \\ \left\langle 0\vert \bar{s}\gamma_5 b\vert B_s(P) \right\rangle &=& if_{B_s}M_B, \end{eqnarray} where $f_{B_s}$ is the $B_s$ meson decay constant, which is about $200$ MeV. The total amplitude is now separated into a CP-even and a CP-odd part \begin{equation} T(B_s\to \gamma\gamma)=M^+F_{\mu\nu}F^{\mu\nu} +iM^-F_{\mu\nu}\tilde{F}^{\mu\nu}. \end{equation} We find that \begin{eqnarray} M^+&=&{-4{\sqrt 2}\alpha G_F\over 9\pi}f_{B_s}m_{B_{s}}V_{ts}^*V_{tb}\times\nonumber \\&& \left(\frac{m_b}{m_{B_s}}B K(m_b^2) +{3C_7\over 8\bar{\Lambda} }\right). 
\end{eqnarray} with $B= -(3C_6+C_5)/4$, $ \bar{\Lambda}=m_{B_s}-m_b$, and \begin{eqnarray} M^-&=&{4{\sqrt 2}\alpha G_F\over 9\pi}f_{B_s}m_{B_{s}}V_{ts}^*V_{tb}\times\nonumber \\&& \left(\sum_q A_qJ(m_q^2)+ \frac{m_b}{m_{B_s}}BL(m_b^2)+{3C_7\over 8\bar{\Lambda}} \right). \end{eqnarray} where \begin{eqnarray} A_u &=&(C_3-C_5)N_c+(C_4-C_6)\nonumber \\ A_d &=&{1\over 4}\left[(C_3-C_5)N_c+(C_4-C_6)\right]\nonumber \\ A_c &=&(C_1+C_3-C_5)N_c+(C_2+C_4-C_6) \nonumber \\ A_s &=&{1\over 4}\left[(C_3+C_4-C_5)N_c+(C_3+C_4-C_6)\right]. \end{eqnarray} The functions $J(m^2)$, $K(m^2)$ and $L(m^2)$ are defined by \begin{eqnarray} J(m^2)&=&I_{11}(m^2),\nonumber \\ K(m^2)&=&4(I_{11}(m^2)-I_{00}(m^2)) ,\nonumber \\ L(m^2)&=&I_{00}(m^2), \end{eqnarray} with \begin{equation} I_{pq}(m^2)=\int_{0}^{1}{dx}\int_{0}^{1-x}{dy}\frac{x^{p}y^{q}}{m^{2}-2xyk_{1}\cdot k_{2}-i\varepsilon} \end{equation} The decay width for $B_s\to \gamma\gamma$ is simply \begin{equation} \Gamma(B_s\to \gamma\gamma)={m_{B_s}^3\over 16\pi}({\vert M^+\vert }^2+{\vert M^-\vert }^2). \end{equation} In the SM, with $C_{2}=C_{2}(M_{W})=1$ and the other Wilson coefficients set to zero, we find $\Gamma( B_s\to \gamma\gamma)=1.3\times 10^{-10} \ {\rm eV}$, which amounts to a branching ratio $Br(B_s\to \gamma\gamma)=3.5\times 10^{-7}$ for the given $\Gamma^{total}_{B_s}=4\times 10^{-4} \ {\rm eV}$. In the numerical calculations we use the input parameters $M_W=80.22\;GeV$, $\alpha_s(m_Z)=0.117$, $m_c=1.5\;GeV$, $m_b=4.8\;GeV$ and $|V_{tb} V_{ts}^*|^2/ |V_{cb}|^2= 0.95$. The present experimental limit$^{[22]}$ on the decay $B_s\to\gamma\gamma$ is \begin{eqnarray} {\rm Br}(B_s \to\gamma\gamma)\leq 8.6\times 10^{-6}, \end{eqnarray} which is far above the theoretical results, so it does not yet constrain the masses of the PGBs. 
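Two of the ingredients above are easy to check numerically: the loop functions $A(\delta)$ and $B(y)$ can be transcribed directly (illustrating the decoupling $B(y)\to 0$ for heavy PGBs), and $I_{pq}(m^2)$ can be evaluated by two-dimensional quadrature. For $m^2=m_b^2$ the denominator never vanishes on the integration triangle (since $m_b^2 > m_{B_s}^2/4 \geq 2xy\,k_1\cdot k_2$), so the $-i\varepsilon$ may be dropped. The sketch assumes $m_t=174$ GeV and $m_{B_s}=5.37$ GeV, values not fixed in the text.

```python
import math
from scipy.integrate import dblquad

# --- Loop functions A(delta) and B(y), transcribed from the text ---
def A(delta):
    return ((1/3 + 5/24*delta - 7/24*delta**2) / (1 - delta)**3
            + (3/4*delta - 1/2*delta**2) / (1 - delta)**4 * math.log(delta))

def B(y):
    return ((-11/36 + 53/72*y - 25/72*y**2) / (1 - y)**3
            + (-1/4*y + 2/3*y**2 - 1/3*y**3) / (1 - y)**4 * math.log(y))

MW, mt, mb, mBs = 80.22, 174.0, 4.8, 5.37   # GeV; m_t and m_Bs are assumptions
delta = (MW / mt)**2
C7_SM = -A(delta)          # C7(M_W) with the PGB terms dropped (heavy-PGB limit)

# --- I_pq(m^2) by direct quadrature over the Feynman-parameter triangle ---
k1k2 = mBs**2 / 2          # 2 k1.k2 = m_Bs^2 for the two-photon final state

def I(p, q, m2):
    f = lambda y, x: x**p * y**q / (m2 - 2 * x * y * k1k2)
    val, _ = dblquad(f, 0, 1, 0, lambda x: 1 - x)
    return val

L_val = I(0, 0, mb**2)             # L(m_b^2) = I_00(m_b^2)
J_val = I(1, 1, mb**2)             # J(m_b^2) = I_11(m_b^2)
K_val = 4 * (J_val - L_val)        # K(m_b^2)
```

With these inputs $A(\delta)\approx 0.195$, so $C_7(M_W)\approx -0.195$ in the heavy-PGB limit, and $B(y)$ indeed tends to zero for large $y$. For the light-quark loops ($m_q^2 < m_{B_s}^2/4$) the integrand develops a pole and $I_{pq}$ acquires an imaginary part, so the $-i\varepsilon$ prescription must then be kept.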
Constraints on the masses of $P^\pm$ and $P^{\pm}_8$ can be obtained from the decay$^{[24]}$ $B\to X_s\gamma$: $m_{P^\pm_8} >400$ GeV. \begin{figure}[th] {\includegraphics{G3.eps}} \caption {The branching ratio $Br(B_{s}\to\gamma\gamma)$ as a function of the mass of $P_8^\pm$ for different values of $m_{P^\pm}$.} \label{G3.eps} \end{figure} \begin{figure}[th] {\includegraphics{G4.eps}} \caption {The branching ratio $Br(B_{s}\to\gamma\gamma)$ as a function of the mass of $P^\pm$ for different values of $m_{P_8^\pm}$.} \label{g4} \end{figure} Figures 3 and 4 show $Br(B_{s}\to\gamma\gamma)$ as a function of the mass of $P_8^\pm$ ($P^\pm$) for different values of $m_{P^\pm}$ ($m_{P_8^\pm}$). From Figs.~3 and 4 we find that the curves differ substantially from the SM one: the branching ratio can be enhanced by about one to two orders of magnitude relative to the SM prediction in the reasonable region of PGB masses. This would be a strong new-physics signal of the technicolor model. The branching ratio of $B_s\to \gamma\gamma$ decreases as the masses of $P_8^\pm$ and $P^\pm$ increase. This follows from the decoupling theorem for sufficiently heavy nonstandard bosons: when $m(P^{\pm})$ and $m(P^{\pm}_8)$ take large values, the contributions from the OGTM are small. Indeed, from the expressions for $B$, $D$ and $E$ above, these functions go to zero as $x, y\to \infty$. The branching ratio in Fig.~3 changes much faster than that in Fig.~4, because the contribution to $B_s\to\gamma\gamma$ from the color-octet $P_8^\pm$ is large compared with that from the color-singlet $P^\pm$. In conclusion, the size of the contribution to the rare decay $B_s\to\gamma\gamma$ from the PGBs depends strongly on the masses of the charged PGBs, in marked contrast with the SM case. By comparing the theoretical prediction with the data, one can extract the contributions of the PGBs $P^\pm$ and $P^\pm_8$ to $B_s\to\gamma\gamma$ and search for signals of new physics.
\section{Introduction} \label{sec:intro} The anomalous magnetic moment $g\!-\!2$\ of the electron is one of the most vigorously studied physical quantities at present, which provides a very stringent test of the validity of quantum electrodynamics (QED). To match the precision of the latest measurement of electron $g\!-\!2$ \cite{Hanneke:2008tm} the theory must include the QED radiative correction up to the eighth order \cite{Kinoshita:2005zr,Aoyama:2007dv,Aoyama:2007mn} as well as the hadronic contribution \cite{Hagiwara:2006jt,Jegerlehner:2009ry,Davier:2009zi,Krause:1996rf,Melnikov:2003xd,Bijnens:2007pz,Prades:2009tw,Nyffeler:2009tw} and the electroweak contribution \cite{Czarnecki:1996ww,Knecht:2002hr,Czarnecki:2002nt} within the context of the standard model. As a matter of fact, the largest theoretical uncertainty now comes from the tenth-order QED contribution which has not yet been evaluated and is given only a crude estimate \cite{Mohr:2008fa}. Thus it is an urgent matter to evaluate the actual value of the tenth-order term. To accomplish this task we started a systematic program several years ago to evaluate the complete tenth-order contribution \cite{Kinoshita:2005sm,Aoyama:2005kf,Aoyama:2007bs,Aoyama:2008gy,Aoyama:2008hz,Aoyama:2010yt}. The tenth-order QED contribution to the electron $g\!-\!2$\ consists of the mass-independent term $A_1^{(10)}$ and the mass-dependent terms $A_2^{(10)}$ and $A_3^{(10)}$ in which muon and/or tau lepton loop is involved, which may be expressed as \begin{equation} a_e^{(10)} = \left[ A_1^{(10)} + A_2^{(10)}(m_e/m_\mu) + A_2^{(10)}(m_e/m_\tau) + A_3^{(10)}(m_e/m_\mu,m_e/m_\tau) \right] \left(\frac{\alpha}{\pi}\right)^5. \end{equation} The mass-independent term $A_1^{(10)}$ may be classified into six sets and further divided into 32 gauge-invariant subsets according to the type of the closed lepton loop subdiagram. 
Thus far, the numerical evaluation of 21 subsets has been carried out and the results were published \cite{Kinoshita:2005sm,Aoyama:2005kf,Aoyama:2007bs,Aoyama:2008gy,Aoyama:2008hz,Aoyama:2010yt}. In this paper we focus our attention on the gauge-invariant set VI which consists of all diagrams containing a light-by-light-scattering subdiagram, one of whose photon vertices is external. (We call this an \textit{external} light-by-light-scattering subdiagram.) Of the eleven gauge-invariant subsets of Set VI, eight have been evaluated previously \cite{Kinoshita:2005sm}. The purpose of this paper is to report the evaluation of the remaining three gauge-invariant subsets: Sets VI(d), VI(g), and VI(h). In diagrams of Set VI(d) two virtual photon lines are attached to the open lepton line. This set contains 492 vertex diagrams. In diagrams of Set VI(g) one virtual photon line is attached to the open lepton line and the other virtual photon line is attached to the closed lepton loop. This set contains 480 vertex diagrams. In diagrams of Set VI(h) two virtual photon lines are attached to the closed lepton loop. This set contains 630 vertex diagrams. Typical diagrams of these sets are shown in Fig.~\ref{fig:set6dgh}. \begin{figure} \begin{center} \includegraphics[scale=1.2]{fig_set6dgh.ps} \end{center} \caption{% \label{fig:set6dgh} Typical diagrams of Set VI(d), Set VI(g), and Set VI(h). } \end{figure} Our numerical evaluation of Feynman diagrams is based on the parametric integration formula \cite{Cvitanovic:1974uf,Cvitanovic:1974sv,Kinoshita:1990}. To handle a relatively large number of diagrams systematically without errors, we developed an automated code-generating system called \textsc{gencode\textrm{LL}N} that produces FORTRAN codes for the numerical integration. It is an adaptation of the previously developed system for the type of diagrams without lepton loops \cite{Aoyama:2005kf,Aoyama:2007bs} to the diagrams containing an external light-by-light-scattering subdiagram. 
This paper is organized as follows. Section~\ref{sec:scheme} describes our scheme for numerical evaluation. Section~\ref{sec:result} gives the results of the numerical evaluation. Section~\ref{sec:discussion} is devoted to the summary and discussion. In Appendix~\ref{sec:loopmatrix} we describe an algorithm for identifying an independent set of loops in a diagram, which is required for constructing the amplitude. For simplicity the factor $(\alpha/\pi)^5$ is omitted in Secs.~\ref{sec:scheme} and~\ref{sec:result}. \section{Numerical evaluation scheme} \label{sec:scheme} In this section we describe our scheme for the numerical evaluation of the diagrams of Sets VI(d), VI(g), and VI(h). Each diagram that belongs to these sets consists of an open lepton line ($\ell_1$) and a closed lepton loop ($\ell_2$) that forms a light-by-light-scattering (\textit{l-by-l}) subdiagram, where $\ell_1$ and $\ell_2$ refer to the types of leptons, i.e. electron ($e$), muon ($m$), or tau lepton ($t$). The mass-dependence of these diagrams and amplitudes is characterized by ($\ell_1$,$\ell_2$) or by superscript ${}^{(\ell_1\ell_2)}$. We adopt a relation derived from the Ward-Takahashi identity \begin{equation} \Lambda^\nu(p,q) \simeq - q^\mu \left.\frac{\partial\Lambda_\mu(p,q)}{\partial q_\nu}\right|_{q\to 0} - \frac{\partial\Sigma(p)}{\partial p_\nu} \label{eq:wt} \end{equation} where $\Lambda^\nu(p,q)$ is the sum of proper vertex parts which are obtained by inserting an external photon vertex in the lepton lines of the self-energy function $\Sigma(p)$ of a diagram $\mathcal{G}$ in all possible ways. Taking account of the charge conjugation and time reversal symmetry the numbers of independent integrals to evaluate become 45 for Set VI(d) (see Fig.~\ref{fig:set6d}), 26 for Set VI(g) (see Fig.~\ref{fig:set6g}), and 27 for Set VI(h) (see Fig.~\ref{fig:set6h}). 
\begin{figure*} \begin{center} \includegraphics[scale=0.75]{fig_diagram6d.ps} \end{center} \caption{% \label{fig:set6d} The contribution of Set VI(d) is represented by 45 independent diagrams as listed. } \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.75]{fig_diagram6g.ps} \end{center} \caption{% \label{fig:set6g} The contribution of Set VI(g) is represented by 26 independent diagrams as listed. } \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.75]{fig_diagram6h.ps} \end{center} \caption{% \label{fig:set6h} The contribution of Set VI(h) is represented by 27 independent diagrams as listed. } \end{figure*} The amplitude of a Feynman diagram is turned into an integral over the Feynman parameters assigned to the lepton and photon lines by using the parametric integral formula \cite{Cvitanovic:1974uf}. Note that the contribution of the second term on the right-hand side of Eq.~(\ref{eq:wt}) vanishes due to Furry's theorem. In our numerical procedure the renormalization of the amplitude is carried out by the subtractive renormalization. The unrenormalized amplitude $M_\mathcal{G}$ of a diagram $\mathcal{G}$ is related to a finite calculable quantity $\Delta M_\mathcal{G}$ by appropriate subtraction terms of UV and IR divergences. These subtraction terms are prepared in the form of integrals over the same Feynman parameter space so that they cancel out the divergent behavior of the original unrenormalized integral {\it point-by-point}. The UV divergence arising from the l-by-l subdiagram must be regularized, e.g., by the Pauli-Villars (PV) regularization. However, in Eq.~(\ref{eq:wt}) the Ward-Takahashi-summed amplitude is given as the differentiation of $\Lambda^\mu(p,q)$ with respect to $q_\nu$. Therefore the divergence from the l-by-l loop is lifted, and the PV regularization is no longer needed. 
As a consequence the source of the UV divergences resides only in the vertex and self-energy subdiagrams of second and fourth order. These divergences are handled by the {\em K}~operation \cite{Cvitanovic:1974sv,Kinoshita:1981ww}. By definition the {\em K}~operation yields a subtraction integral which is analytically factorizable into a product or a sum of products of lower-order quantities. It is symbolically denoted by \begin{equation} \mathbb{K}_{\mathcal{S}} M_{\mathcal{G}} = L_{\mathcal{S}}^{\text{UV}} M_{\mathcal{G}/\mathcal{S}}, \end{equation} when $\mathcal{S}$ is a vertex subdiagram, and by \begin{equation} \mathbb{K}_{\mathcal{S}} M_{\mathcal{G}} = {\delta m}_{\mathcal{S}}^{\text{UV}} M_{\mathcal{G}/\mathcal{S}} + B_{\mathcal{S}}^{\text{UV}} M_{[\mathcal{G}/\mathcal{S},i]}, \end{equation} when $\mathcal{S}$ is a self-energy subdiagram. Here, the superscript $\text{UV}$ means that the leading UV-divergent part is taken for the vertex renormalization constant $L$, the mass renormalization constant $\delta m$, and the wave-function renormalization constant $B$, respectively. Note that ${\delta m}_2^{\text{UV}} = {\delta m}_2$. We also apply the {\em R}~subtraction \cite{Aoyama:2007bs}, by which the residual part $\widetilde{\delta m} \equiv {\delta m} - {\delta m}^{\text{UV}}$ of the fourth-order mass renormalization constant is subtracted away to accomplish the complete subtraction of $\delta m$. Some diagrams of Set VI(d) and Set VI(g) have IR divergences. For example, the diagram VIg04 of Set VI(g) shown in Fig.~\ref{fig:set6g}, in which a photon line attached to the open lepton line at both ends encloses an eighth-order l-by-l subdiagram, exhibits an IR divergence when this outermost photon goes soft. These divergences are subtracted away by the {\em I}~subtraction \cite{Aoyama:2007bs}.
By construction the {\em I} subtraction term factorizes as \begin{equation} \mathbb{I}_{\mathcal{S}} M_{\mathcal{G}} = M_{\mathcal{S}} \widetilde{L}_{\mathcal{G}/\mathcal{S},i}, \end{equation} where $\widetilde{L} \equiv L - L^{\text{UV}}$ denotes the residual part of the vertex renormalization constant. The finite amplitude $\Delta M_{\mathcal{G}}$ obtained so far differs from the standard renormalized quantity, because the subtraction terms involve only a fraction of the renormalization constants relevant to the divergences. To achieve the standard on-the-mass-shell renormalization, the differences, whose sum over the diagrams of the subset is finite, are collected and added to $\Delta M_{\mathcal{G}}$. This step is called the residual renormalization. The exactly renormalized contributions of Sets VI(d), VI(g), and VI(h) to the magnetic moment are given by the formulas: \begin{align} \label{eq:res-set6d} a_{\ell_1}^{(10)}[\text{VI(d)}^{(\ell_1\ell_2)}] & = {\Delta M}_{\text{VI(d)}}^{(\ell_1\ell_2)} - 4 \DeltaL\!B_{2}\,{\Delta M}_{\text{IVc}}^{(\ell_1\ell_2)} + \left( - 2 \DeltaL\!B_{4} + 5 (\DeltaL\!B_{2})^2 \right)\,{a}_{\text{6LL}}^{(\ell_1\ell_2)} \nonumber \\[1ex] & = \Delta M_{\text{VI(d)}}^{(\ell_1\ell_2)} - 4 \DeltaL\!B_{2}\,a_{\text{IVc}}^{(\ell_1\ell_2)} - \left( 2 \DeltaL\!B_{4} + 3 (\DeltaL\!B_{2})^2 \right)\,a_{\text{6LL}}^{(\ell_1\ell_2)}, \\[1ex] \label{eq:res-set6g} a_{\ell_1}^{(10)}[\text{VI(g)}^{(\ell_1\ell_2)}] & = \Delta M_{\text{VI(g)}}^{(\ell_1\ell_2)} - 2 \DeltaL\!B_{2}\,{\Delta M}_{\text{IVb}}^{(\ell_1\ell_2)} - 3 \DeltaL\!B_{2}\,{\Delta M}_{\text{IVc}}^{(\ell_1\ell_2)} + 6 (\DeltaL\!B_{2})^2\,{a}_{\text{6LL}}^{(\ell_1\ell_2)} \nonumber \\[1ex] & = \Delta M_{\text{VI(g)}}^{(\ell_1\ell_2)} - 2 \DeltaL\!B_{2}\,a_{\text{IVb}}^{(\ell_1\ell_2)} - 3 \DeltaL\!B_{2}\,a_{\text{IVc}}^{(\ell_1\ell_2)} - 6 (\DeltaL\!B_{2})^2\,a_{\text{6LL}}^{(\ell_1\ell_2)}, \\[1ex] \label{eq:res-set6h} a_{\ell_1}^{(10)}[\text{VI(h)}^{(\ell_1\ell_2)}] & = \Delta
M_{\text{VI(h)}}^{(\ell_1\ell_2)} - 5 \DeltaL\!B_{2}\,{\Delta M}_{\text{IVb}}^{(\ell_1\ell_2)} + \left( - 3 \DeltaL\!B_{4} + 9 (\DeltaL\!B_{2})^2 \right)\,{a}_{\text{6LL}}^{(\ell_1\ell_2)} \nonumber \\[1ex] & = \Delta M_{\text{VI(h)}}^{(\ell_1\ell_2)} - 5 \DeltaL\!B_{2}\,a_{\text{IVb}}^{(\ell_1\ell_2)} - \left( 3 \DeltaL\!B_{4} + 6 (\DeltaL\!B_{2})^2 \right)\,a_{\text{6LL}}^{(\ell_1\ell_2)}. \end{align} Here, ${\Delta M}_{\text{VI(d)}}$, ${\Delta M}_{\text{VI(g)}}$, and ${\Delta M}_{\text{VI(h)}}$ are the sums of the finite amplitudes of the diagrams within the subsets VI(d), VI(g), and VI(h), respectively. $a_{\text{6LL}}$ is the sixth-order anomalous magnetic moment containing the fourth-order l-by-l diagram. $a_{\text{IVb}}$ and $a_{\text{IVc}}$ are the eighth-order anomalous magnetic moments of the sets of diagrams containing the external l-by-l subdiagram with a virtual photon line attached to the lepton loop (IVb) or to the open lepton line (IVc). ${\Delta M}_{\text{IVb}}$ and ${\Delta M}_{\text{IVc}}$ are their finite parts defined in Ref.~\cite{Kinoshita:1981ww}. $\DeltaL\!B_2$ and $\DeltaL\!B_4$ are the sums of the finite parts of the vertex and wave-function renormalization constants of second and fourth order, respectively. The code-generating program \textsc{gencode\textrm{LL}N} takes a one-line representation of a diagram as an input and generates the numerical integration program formatted in FORTRAN. During this process it finds the form of the unrenormalized amplitudes, identifies the divergence structure, and constructs the UV- and/or IR-subtraction integrals. The symbolic manipulations concerning, e.g., the gamma-matrix calculus and the analytic integration using homemade integration tables are processed with the help of FORM \cite{Vermaseren:2000nd} and Maple. \section{Results} \label{sec:result} The numerical integration is carried out using the adaptive-iterative Monte-Carlo integration routine VEGAS \cite{vegas}.
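Each VEGAS estimate carries a statistical uncertainty, which then propagates through linear combinations such as the second line of Eq.~(\ref{eq:res-set6d}). A minimal sketch of this bookkeeping (all numerical values are placeholders, not the actual table entries, and correlations between terms sharing the same constants, which a real analysis must track, are ignored here):

```python
# Sketch: uncertainty propagation through a residual-renormalization formula
#   a = dM - 4*LB2*a_IVc - (2*LB4 + 3*LB2**2)*a_6LL
# All numbers are hypothetical placeholders; independence of the errors is
# assumed, which is only an approximation for quantities sharing constants.
import math

class U:
    """A value with an independent 1-sigma uncertainty (linear propagation)."""
    def __init__(self, val, err=0.0):
        self.val, self.err = float(val), float(err)
    @staticmethod
    def _coerce(x):
        return x if isinstance(x, U) else U(x)
    def __add__(self, o):
        o = U._coerce(o)
        return U(self.val + o.val, math.hypot(self.err, o.err))
    __radd__ = __add__
    def __sub__(self, o):
        o = U._coerce(o)
        return U(self.val - o.val, math.hypot(self.err, o.err))
    def __mul__(self, o):
        o = U._coerce(o)
        return U(self.val * o.val,
                 math.hypot(self.err * o.val, o.err * self.val))
    __rmul__ = __mul__
    def __pow__(self, n):
        # square of a single quantity: err(x^2) = 2|x| err(x)
        assert n == 2
        return U(self.val ** 2, 2.0 * abs(self.val) * self.err)
    def __repr__(self):
        return f"{self.val:.6f} +- {self.err:.6f}"

# Placeholder Monte-Carlo estimates (hypothetical numbers):
dM, LB2, LB4 = U(1.0, 0.01), U(0.5, 0.001), U(0.2, 0.002)
a_IVc, a_6LL = U(0.3, 0.005), U(0.4, 0.003)

a_set = dM - 4 * LB2 * a_IVc - (2 * LB4 + 3 * LB2 ** 2) * a_6LL
print("a =", a_set)
```

The central value here is just the arithmetic of the formula; the quoted uncertainty of each subset contribution is obtained from such propagated statistical errors.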
The numerical values of individual amplitudes of Set VI(d), Set VI(g), and Set VI(h) are listed in Tables~\ref{table:set6d},~\ref{table:set6g}, and~\ref{table:set6h}, respectively, for the mass-independent term and the mass-dependent terms in which $(\ell_1,\ell_2) = (e,m)$, $(m,e)$, and $(m,t)$. We use the muon-electron mass ratio $m_\mu/m_e = 206.768~282~3~(52)$ and the tau-muon mass ratio $m_\tau/m_\mu = 16.818~3~(27)$ for numerical evaluation \cite{Mohr:2008fa}. \subsection{Mass-independent contribution} \label{sec:result:a1} Let us first consider the case in which $\ell_1$ and $\ell_2$ are of the same type of lepton, i.e., $\ell_1 = \ell_2 = e, m, \text{or}\ t$. This gives a mass-independent contribution to the lepton $g\!-\!2$. The numerical values are listed in the second columns of Table~\ref{table:set6d} for Set VI(d), Table~\ref{table:set6g} for Set VI(g), and Table~\ref{table:set6h} for Set VI(h), respectively. The values of the sixth- and eighth-order amplitudes and the finite renormalization constants are listed in Table~\ref{table:residual}. Putting these values into Eqs.~(\ref{eq:res-set6d}),~(\ref{eq:res-set6g}), and~(\ref{eq:res-set6h}), the mass-independent contributions $A_1$ of the respective subsets are: \begin{align} \label{eq:a1-set6d} A_1^{(10)} [\text{Set VI(d)}] &= \phantom{+} 1.840~5~(95), \\[1ex] \label{eq:a1-set6g} A_1^{(10)} [\text{Set VI(g)}] &= -1.591~3~(65), \\[1ex] \label{eq:a1-set6h} A_1^{(10)} [\text{Set VI(h)}] &= \phantom{+} 0.179~7~(40). \end{align} \subsection{Mass-dependent contribution ({\it e,m})} \label{sec:result:a2-em} The mass-dependent contribution to the electron $g\!-\!2$\ in which the light-by-light-scattering subdiagram consists of the muon loop, i.e., $\ell_1 = e$ and $\ell_2 = m$, is found from the numerical values listed in the third columns of Table~\ref{table:set6d} for Set VI(d), Table~\ref{table:set6g} for Set VI(g), and Table~\ref{table:set6h} for Set VI(h), respectively. 
The values of the mass-dependent sixth- and eighth-order amplitudes are listed in Table~\ref{table:residual}. Putting these values into Eqs.~(\ref{eq:res-set6d}),~(\ref{eq:res-set6g}), and~(\ref{eq:res-set6h}), we obtain the mass-dependent contributions $A_2(m_e/m_\mu)$ of the respective subsets: \begin{align} \label{eq:a2-em-set6d} A_2^{(10)}(m_e/m_\mu) [\text{Set VI(d)}] &= \phantom{+} 0.001~276~(76), \\[1ex] \label{eq:a2-em-set6g} A_2^{(10)}(m_e/m_\mu) [\text{Set VI(g)}] &= -0.000~497~(29), \\[1ex] \label{eq:a2-em-set6h} A_2^{(10)}(m_e/m_\mu) [\text{Set VI(h)}] &= \phantom{+} 0.000~045~(10). \end{align} \subsection{Mass-dependent contribution ({\it m,e})} \label{sec:result:a2-me} Similarly, the mass-dependent contribution to the muon $g\!-\!2$\ in which the light-by-light-scattering subdiagram consists of the electron loop, i.e., $\ell_1 = m$ and $\ell_2 = e$, is found from the numerical values listed in the fourth column of Tables~\ref{table:set6d},~\ref{table:set6g}, and~\ref{table:set6h}. Their contributions are: \begin{align} \label{eq:a2-me-set6d} A_2^{(10)}(m_\mu/m_e) [\text{Set VI(d)}] &= -7.798~(801), \\[1ex] \label{eq:a2-me-set6g} A_2^{(10)}(m_\mu/m_e) [\text{Set VI(g)}] &= \phantom{+} 7.346~(489), \\[1ex] \label{eq:a2-me-set6h} A_2^{(10)}(m_\mu/m_e) [\text{Set VI(h)}] &= -8.546~(231). \end{align} \subsection{Mass-dependent contribution ({\it m,t})} \label{sec:result:a2-mt} The mass-dependent contribution of the tau-lepton loop to the muon $g\!-\!2$\ is also evaluated, and the numerical values are listed in the fifth column of Tables~\ref{table:set6d},~\ref{table:set6g}, and~\ref{table:set6h}. The results are \begin{align} \label{eq:a2-mt-set6d} A_2^{(10)}(m_\mu/m_\tau) [\text{Set VI(d)}] &= \phantom{+} 0.081~77~(161), \\[1ex] \label{eq:a2-mt-set6g} A_2^{(10)}(m_\mu/m_\tau) [\text{Set VI(g)}] &= -0.044~51~( 96), \\[1ex] \label{eq:a2-mt-set6h} A_2^{(10)}(m_\mu/m_\tau) [\text{Set VI(h)}] &= \phantom{+} 0.004~85~( 46).
\end{align} \input tableX6d.tex \input tableX6g.tex \input tableX6h.tex \input tableres.tex \section{Summary and discussion} \label{sec:discussion} In this paper we evaluated the tenth-order QED corrections to the anomalous magnetic moments of the electron and muon from the sets of diagrams VI(d), VI(g), and VI(h). For the electron $g\!-\!2$, the total contribution is the sum of the mass-independent terms (\ref{eq:a1-set6d}), (\ref{eq:a1-set6g}), and~(\ref{eq:a1-set6h}) and the mass-dependent terms involving the muon loops (\ref{eq:a2-em-set6d}), (\ref{eq:a2-em-set6g}), and~(\ref{eq:a2-em-set6h}): \begin{align} \label{eq:a_e-set6d} a_e [\text{Set VI(d)}] &= \phantom{+} 1.841~8~(95) \left(\frac{\alpha}{\pi}\right)^5, \\ \label{eq:a_e-set6g} a_e [\text{Set VI(g)}] &= -1.591~8~(65) \left(\frac{\alpha}{\pi}\right)^5, \\ \label{eq:a_e-set6h} a_e [\text{Set VI(h)}] &= \phantom{+} 0.179~7~(40) \left(\frac{\alpha}{\pi}\right)^5. \end{align} The tau-lepton contributions to $a_e$ are more than an order of magnitude smaller than Eqs.~(\ref{eq:a2-em-set6d}), (\ref{eq:a2-em-set6g}), and~(\ref{eq:a2-em-set6h}) and lie within the uncertainties of Eqs.~(\ref{eq:a_e-set6d}), (\ref{eq:a_e-set6g}), and~(\ref{eq:a_e-set6h}). Thus they are negligible at present. For the muon $g\!-\!2$, the contributions are the sums of the mass-independent terms (\ref{eq:a1-set6d}), (\ref{eq:a1-set6g}), and~(\ref{eq:a1-set6h}) and the mass-dependent terms involving the electron loops (\ref{eq:a2-me-set6d}), (\ref{eq:a2-me-set6g}), and~(\ref{eq:a2-me-set6h}) and the tau-lepton loop (\ref{eq:a2-mt-set6d}), (\ref{eq:a2-mt-set6g}), and~(\ref{eq:a2-mt-set6h}): \begin{align} \label{eq:a_m-set6d} a_\mu [\text{Set VI(d)}] &= -5.876~(802) \left(\frac{\alpha}{\pi}\right)^5, \\ \label{eq:a_m-set6g} a_\mu [\text{Set VI(g)}] &= \phantom{+} 5.710~(490) \left(\frac{\alpha}{\pi}\right)^5, \\ \label{eq:a_m-set6h} a_\mu [\text{Set VI(h)}] &= -8.361~(232) \left(\frac{\alpha}{\pi}\right)^5.
\end{align} \begin{acknowledgments} This work is supported in part by JSPS Grant-in-Aid for Scientific Research (C)19540322 and (C)20540261. T. K.'s work is supported in part by the U. S. National Science Foundation under Grant PHY-0757868, and the International Exchange Support Grants (FY2010) of RIKEN. T. K. thanks RIKEN for the hospitality extended to him while a part of this work was carried out. The numerical calculation was conducted on the RIKEN Super Combined Cluster (RSCC) and the RIKEN Integrated Cluster of Clusters (RICC) supercomputing systems. \end{acknowledgments}
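As a purely arithmetic cross-check, the totals quoted in Eqs.~(\ref{eq:a_e-set6d})--(\ref{eq:a_m-set6h}) follow from the component values given in Sec.~\ref{sec:result} by simple addition, with the statistical uncertainties combined in quadrature (a sketch of ours; the quoted uncertainties appear to be rounded up in the last digit):

```python
import math

def combine(terms):
    """Sum central values; combine independent 1-sigma errors in quadrature."""
    val = sum(v for v, _ in terms)
    err = math.sqrt(sum(e * e for _, e in terms))
    return val, err

# Component values (central, uncertainty) transcribed from the text:
A1   = {"VI(d)": (1.8405, 0.0095), "VI(g)": (-1.5913, 0.0065),
        "VI(h)": (0.1797, 0.0040)}
A2em = {"VI(d)": (0.001276, 0.000076), "VI(g)": (-0.000497, 0.000029),
        "VI(h)": (0.000045, 0.000010)}
A2me = {"VI(d)": (-7.798, 0.801), "VI(g)": (7.346, 0.489),
        "VI(h)": (-8.546, 0.231)}
A2mt = {"VI(d)": (0.08177, 0.00161), "VI(g)": (-0.04451, 0.00096),
        "VI(h)": (0.00485, 0.00046)}

a_e, a_mu = {}, {}
for s in A1:
    a_e[s] = combine([A1[s], A2em[s]])            # electron: + muon-loop term
    a_mu[s] = combine([A1[s], A2me[s], A2mt[s]])  # muon: + electron-, tau-loop terms
    print("%s: a_e = %.4f(%.4f), a_mu = %.3f(%.3f)"
          % (s, a_e[s][0], a_e[s][1], a_mu[s][0], a_mu[s][1]))
```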
\section{Introduction} An interconnection topology can be represented by a graph $G=(V,E)$, where $V$ denotes the processors and $E$ the communication links. The \emph{distance} $d_G(u,v)$ between two vertices $u,v$ of a graph $G$ is the length of a shortest path connecting $u$ and $v$. An \emph{isometric} subgraph $H$ of a graph $G$ is an induced subgraph such that for any vertices $u,v$ of $H$ we have $d_H(u,v)=d_G(u,v)$. The \emph{hypercube} of dimension $n$ is the graph $Q_n$ whose vertices are the binary strings of length $n$, where two vertices are adjacent if they differ in exactly one coordinate. The \emph{weight} of a vertex $u$, $w(u)$, is the number of $1$'s in the string $u$. Notice that the graph distance between two vertices of $Q_n$ is equal to the \emph{Hamming distance} of the strings, the number of coordinates in which they differ. The hypercube is a popular interconnection network because of its structural properties.\\ \indent Fibonacci cubes and Lucas cubes were introduced in \cite{Hsu1993Fibonacci} and \cite{munarini2001lucas} as new interconnection networks. They are isometric subgraphs of $Q_n$ and also have a recurrent structure.
\begin{figure}[!p] \centering \setlength{\unitlength}{1 mm} \begin{picture}(160, 30) \newsavebox{\gtwo} \savebox{\gtwo} (30,30)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(5,10){$01$} \put(5,20){$00$} \put(5,30){$10$} \newsavebox{\gthree} \savebox{\gthree} (40,30)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,10){\circle*{2}} \put(20,20){\circle*{2}} \put(10,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,10){\line(0,1){10}} \put(3,10){$001$} \put(3,20){$000$} \put(3,30){$010$} \put(21,20){$100$} \put(21,10){$101$} \newsavebox{\gfour} \savebox{\gfour} (40,30)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,10){\circle*{2}} \put(20,20){\circle*{2}} \put(20,30){\circle*{2}} \put(30,10){\circle*{2}} \put(30,20){\circle*{2}} \put(10,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,10){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(20,10){\line(0,1){10}} \put(20,20){\line(0,1){10}} \put(30,10){\line(0,1){10}} \put(20,12){$0001$} \put(20,22){$0000$} \put(20,32){$0010$} \put(31,20){$0100$} \put(1,20){$1000$} \put(1,10){$1001$} \put(1,30){$1010$} \put(31,10){$0101$} \newsavebox{\lthree} \savebox{\lthree} (40,30)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,20){\circle*{2}} \put(10,20){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \newsavebox{\lfour} \savebox{\lfour} (40,30)[bl] \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,10){\circle*{2}} \put(20,20){\circle*{2}} \put(20,30){\circle*{2}} \put(30,10){\circle*{2}} \put(30,20){\circle*{2}} \put(10,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(10,20){\line(0,1){10}} \put(20,10){\line(1,0){10}} 
\put(20,20){\line(1,0){10}} \put(20,10){\line(0,1){10}} \put(20,20){\line(0,1){10}} \put(30,10){\line(0,1){10}} \put(0,-5){\usebox{\gtwo}} \put(15,-5){\usebox{\gthree}} \put(43,-5){\usebox{\gfour}} \put(81,-5){\usebox{\lthree}} \put(100,-5){\usebox{\lfour}} \end{picture} \caption{$\Gamma_2=\Lambda_2$, $\Gamma_3$, $\Gamma_4$ and $\Lambda_3$, $\Lambda_4$} \end{figure} \begin{figure}[!p] \centering \setlength{\unitlength}{2mm} \begin{picture}(80, 35) \newsavebox{\ggfour} \savebox{\ggfour} (40,40)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,10){\circle*{2}} \put(20,20){\circle*{2}} \put(20,30){\circle*{2}} \put(30,10){\circle*{2}} \put(30,20){\circle*{2}} \put(10,10){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,10){\line(0,1){10}} \put(20,20){\line(0,1){10}} \put(30,10){\line(0,1){10}} \newsavebox{\gfive} \savebox{\gfive} (40,40)[bl] \put(0,0){\usebox{\ggfour}} \put(24,14){\circle*{2}} \put(24,24){\circle*{2}} \put(24,34){\circle*{2}} \put(34,14){\circle*{2}} \put(34,24){\circle*{2}} \put(24,14){\line(1,0){10}} \put(24,24){\line(1,0){10}} \put(24,14){\line(0,1){10}} \put(24,24){\line(0,1){10}} \put(34,14){\line(0,1){10}} \put(20,10){\line(1,1){4}} \put(20,20){\line(1,1){4}} \put(20,30){\line(1,1){4}} \put(30,10){\line(1,1){4}} \put(30,20){\line(1,1){4}} \newsavebox{\gsix} \savebox{\gsix} (40,40)[bl] \put(0,0){\usebox{\gfive}} \put(6,6){\circle*{2}} \put(6,16){\circle*{2}} \put(6,26){\circle*{2}} \put(16,6){\circle*{2}} \put(16,16){\circle*{2}} \put(16,26){\circle*{2}} \put(26,6){\circle*{2}} \put(26,16){\circle*{2}} \put(6,6){\line(1,0){10}} \put(16,6){\line(1,0){10}} \put(6,16){\line(1,0){10}} \put(16,16){\line(1,0){10}} \put(6,26){\line(1,0){10}} \put(6,6){\line(0,1){10}} \put(6,16){\line(0,1){10}} \put(16,6){\line(0,1){10}} \put(16,16){\line(0,1){10}} \put(26,6){\line(0,1){10}} 
\put(6,6){\line(1,1){4}} \put(6,16){\line(1,1){4}} \put(6,26){\line(1,1){4}} \put(16,6){\line(1,1){4}} \put(16,16){\line(1,1){4}} \put(16,26){\line(1,1){4}} \put(26,6){\line(1,1){4}} \put(26,16){\line(1,1){4}} \put(14,11){$000001$} \put(14,21){$000000$} \put(14,31){$000010$} \put(24,21){$000100$} \put(4,21) {$001000$} \put(18,25){$010000$} \put(10,17){$100000$} \put(4,11) {$001001$} \put(4,31) {$001010$} \put(-1,7){$101001$} \put(-1,17){$101000$} \put(-1,27){$101010$} \put(10,7){$100001$} \put(10,27){$100010$} \put(18,35){$010010$} \put(24,11){$000101$} \put(35,25){$010100$} \put(35,15){$010101$} \put(18,15){$010001$} \put(20,17){$100100$} \put(20,7){$100101$} \put(0,-5){\usebox{\gfive}} \put(14,6){$00001$} \put(14,16){$00000$} \put(14,26){$00010$} \put(25,16){$00100$} \put(4,16){$01000$} \put(25,20){$10000$} \put(4,26){$01010$} \put(4,6){$01001$} \put(25,10){$10001$} \put(25,06){$00101$} \put(35,20){$10100$} \put(35,10){$10101$} \put(25,30){$10010$} \put(40,-5){\usebox{\gsix}} \end{picture} \caption{$\Gamma_5$ and $\Gamma_6$} \end{figure} \begin{figure}[!p] \centering \setlength{\unitlength}{2mm} \begin{picture}(80, 35) \newsavebox{\gggfour} \savebox{\gggfour} (40,40)[bl] \put(10,10){\circle*{2}} \put(10,20){\circle*{2}} \put(10,30){\circle*{2}} \put(20,10){\circle*{2}} \put(20,20){\circle*{2}} \put(20,30){\circle*{2}} \put(30,10){\circle*{2}} \put(30,20){\circle*{2}} \put(10,10){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,10){\line(0,1){10}} \put(20,20){\line(0,1){10}} \put(30,10){\line(0,1){10}} \newsavebox{\lfive} \savebox{\lfive} (40,40)[bl] \put(0,0){\usebox{\gggfour}} \put(24,24){\circle*{2}} \put(24,34){\circle*{2}} \put(34,24){\circle*{2}} \put(24,24){\line(1,0){10}} \put(24,24){\line(0,1){10}} \put(20,20){\line(1,1){4}} \put(20,30){\line(1,1){4}} \put(30,20){\line(1,1){4}} \newsavebox{\lsix} \savebox{\lsix} 
(40,40)[bl] { \newsavebox{\gggfive} \savebox{\gggfive} (40,40)[bl] \put(0,0){\usebox{\gggfour}} \put(24,14){\circle*{2}} \put(24,24){\circle*{2}} \put(24,34){\circle*{2}} \put(34,14){\circle*{2}} \put(34,24){\circle*{2}} \put(24,14){\line(1,0){10}} \put(24,24){\line(1,0){10}} \put(24,14){\line(0,1){10}} \put(24,24){\line(0,1){10}} \put(34,14){\line(0,1){10}} \put(20,10){\line(1,1){4}} \put(20,20){\line(1,1){4}} \put(20,30){\line(1,1){4}} \put(30,10){\line(1,1){4}} \put(30,20){\line(1,1){4}} \put(0,0){\usebox{\gggfive}} \put(6,16){\circle*{2}} \put(6,26){\circle*{2}} \put(16,16){\circle*{2}} \put(16,26){\circle*{2}} \put(26,16){\circle*{2}} \put(6,16){\line(1,0){10}} \put(16,16){\line(1,0){10}} \put(6,26){\line(1,0){10}} \put(6,16){\line(0,1){10}} \put(16,16){\line(0,1){10}} \put(6,16){\line(1,1){4}} \put(6,26){\line(1,1){4}} \put(16,16){\line(1,1){4}} \put(16,26){\line(1,1){4}} \put(26,16){\line(1,1){4}} \put(0,-5){\usebox{\lfive}} \put(40,-5){\usebox{\lsix}} \end{picture} \caption{$\Lambda_5$ and $\Lambda_6$} \end{figure} A {\em Fibonacci string} of length $n$ is a binary string $b_1b_2\ldots b_n$ with $b_ib_{i+1}=0$ for $1\leq i<n$. The {\em Fibonacci cube} $\Gamma_n$ ($n\geq 1$) is the subgraph of $Q_n$ induced by the Fibonacci strings of length $n$. For convenience we also consider the empty string and set $\Gamma_0 = K_1$. Call a Fibonacci string $b_1b_2\ldots b_n$ a {\em Lucas string} if $b_1b_n \neq 1$. Then the {\em Lucas cube} $\Lambda_n$ ($n\geq 1$) is the subgraph of $Q_n$ induced by the Lucas strings of length $n$. We also set $\Lambda_0=K_1$.\\ Since their introduction $\Gamma_n$ and $\Lambda_n$ have been also studied for their graph theory properties and found other applications, for example in chemistry (see the survey \cite{Klavzarsurvey}). Recently different enumerative sequences of these graphs have been determined. 
Among them: the number of vertices of a given degree \cite{KlavzarDegree}, the number of vertices of a given eccentricity \cite{Castro}, the number of pairs of vertices at a given distance \cite{KlavzarWiener}, and the number of isometric subgraphs isomorphic to some $Q_k$ \cite{KlavzarCube}. The counting polynomial of this last sequence is known as the cube polynomial and has very nice properties \cite{cubepolmed}. We propose to study another enumeration and characterization problem. For a given interconnection topology it is important to characterize maximal hypercubes, for example from the point of view of embeddings. So let us consider {\em maximal hypercubes of dimension $p$}, i.e., induced subgraphs $H$ of $\Gamma_n$ (respectively $\Lambda_n$) that are isomorphic to $Q_p$ and such that there exists no induced subgraph $H'$ of $\Gamma_n$ (respectively $\Lambda_n$), $H\subset H'$, isomorphic to $Q_{p+1}$. Let $f_{n,p}$ and $g_{n,p}$ be the numbers of maximal hypercubes of dimension $p$ of $\Gamma_n$, respectively $\Lambda_n$, and let $C'(\Gamma_{n},x)=\sum_{p=0}^{\infty}{f_{n,p}x^p}$, respectively $C'(\Lambda_{n},x)=\sum_{p=0}^{\infty}{g_{n,p}x^p}$, be their counting polynomials. By direct inspection (see Figures 1 to 3) we obtain the first of them: \begin{eqnarray*} C'(\Gamma_{0},x) & = & 1\,\:\ \ \ \ \ \ \ \ \ \ C'(\Lambda_{0},x) = 1\, \\ C'(\Gamma_{1},x) & = & x\,\ \ \ \ \ \ \ \ \ \ C'(\Lambda_{1},x) = 1\, \\ C'(\Gamma_{2},x) & = & 2x\,\ \ \ \ \ \ \ \ \ C'(\Lambda_{2},x) = 2x \\ C'(\Gamma_{3},x) & = & x^2+x\,\ \ \ \ C'(\Lambda_{3},x) = 3x\, \\ C'(\Gamma_{4},x) & = & 3x^2\,\ \ \ \ \ \ \ \ C'(\Lambda_{4},x) = 2x^2\, \\ C'(\Gamma_{5},x) & = & x^3+3x^2\,\ C'(\Lambda_{5},x) = 5x^2 \\ C'(\Gamma_{6},x) & = & 4x^3+x^2\,\ C'(\Lambda_{6},x) = 2x^3+3x^2 \\ \end{eqnarray*} The intersection graph of maximal hypercubes (also called the cube graph) of a graph has been studied by various authors, for example in the context of median graphs \cite{Bresar}.
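The polynomials listed above can be reproduced by brute force for small $n$: enumerate the Fibonacci (respectively Lucas) strings, list all induced subcubes, and keep those that cannot be extended in any direction. A quick Python sketch of this check (function names are ours):

```python
from itertools import combinations, product

def fib_strings(n):
    """Binary strings of length n with no two consecutive 1s."""
    return [s for s in product((0, 1), repeat=n)
            if all(not (s[i] and s[i + 1]) for i in range(n - 1))]

def lucas_strings(n):
    """Fibonacci strings whose first and last bits are not both 1."""
    return [s for s in fib_strings(n) if n == 0 or not (s[0] and s[-1])]

def cube_vertices(bottom, support):
    """All vertices of the subcube of Q_n with the given bottom and support."""
    verts = []
    for bits in product((0, 1), repeat=len(support)):
        v = list(bottom)
        for i, b in zip(support, bits):
            v[i] = b
        verts.append(tuple(v))
    return verts

def maximal_cube_dims(n, vertices):
    """Return {dimension p: number of maximal induced hypercubes Q_p}."""
    V = set(vertices)
    cubes = []
    for bottom in V:
        zeros = tuple(i for i in range(n) if bottom[i] == 0)
        for p in range(len(zeros) + 1):
            for S in combinations(zeros, p):
                if all(v in V for v in cube_vertices(bottom, S)):
                    cubes.append((bottom, S))
    counts = {}
    for bottom, S in cubes:
        # The cube extends in direction i if flipping coordinate i of all its
        # vertices stays inside V; the extended cube has bottom_i = 0.
        extendable = False
        for i in range(n):
            if i in S:
                continue
            b2 = list(bottom)
            b2[i] = 0
            if all(v in V for v in cube_vertices(tuple(b2), S + (i,))):
                extendable = True
                break
        if not extendable:
            counts[len(S)] = counts.get(len(S), 0) + 1
    return counts

for n in range(2, 7):
    print(n, maximal_cube_dims(n, fib_strings(n)),
             maximal_cube_dims(n, lucas_strings(n)))
```

For instance, the run reproduces $4x^3+x^2$ for $\Gamma_6$ (four maximal $Q_3$'s and one maximal $Q_2$) and $2x^3+3x^2$ for $\Lambda_6$.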
Hypercubes play a role similar to that of cliques in clique graphs. Nice results have been obtained on the cube graphs of median graphs, and it is thus of interest, from the graph-theoretic point of view, to characterize maximal hypercubes in families of graphs and thus obtain nontrivial examples of such graphs. We will first characterize maximal induced hypercubes in $\Gamma_n$ and $\Lambda_n$ and then deduce the number of maximal $p$-dimensional hypercubes in these graphs. \section{Main results} For any vertex $x=x_1\dots x_n$ of $Q_n$ and any $i\in\{1,\dots,n\}$ let $x+\epsilon_i$ be the vertex of $Q_n$ defined by $(x+\epsilon_i)_i= 1-x_i$ and $(x+\epsilon_i)_j= x_j$ for $j\neq i$. Let $H$ be an induced subgraph of $Q_n$ isomorphic to some $Q_k$. The \emph{support} of $H$ is the subset of $\{1,\dots,n\}$ defined by $Sup(H)=\{i/\: \exists\: x,y\in V(H)$ with $x_i\neq y_i\}$. For $i\notin Sup(H)$ we denote by $H\widetilde{+}\epsilon_i$ the subgraph induced by $V(H)\cup\{x+\epsilon_i/x\in V(H)\}$. Note that $H\widetilde{+}\epsilon_i$ is isomorphic to $Q_{k+1}$. The following result is well known \cite{Klavzarnbhyper}. \begin{proposition}\label{pro:bt} In every induced subgraph $H$ of $Q_n$ isomorphic to $Q_k$ there exists a unique vertex of minimal weight, \emph{the bottom vertex} $b(H)$. There exists also a unique vertex of maximal weight, the \emph{top vertex} $t(H)$. Furthermore $b(H)$ and $t(H)$ are at distance $k$ and characterize $H$ among the subgraphs of $Q_n$ isomorphic to $Q_k$. \end{proposition} We can make this result more precise. A basic property of hypercubes is that if $x, x+\epsilon_i, x+\epsilon_j$ are vertices of $H$ then $x+\epsilon_i+\epsilon_j$ must be a vertex of $H$. By connectivity we deduce that if $x, x+\epsilon_i$ and $y$ are vertices of $H$ then $y+\epsilon_i$ must also be a vertex of $H$.
We thus have, by induction on $k$: \begin{proposition}\label{pro:btprec} If $H$ is an induced subgraph of $Q_n$ isomorphic to $Q_k$ then \begin{wlist} \item[(i)]$|Sup(H)|=k$ \item[(ii)]$\text{If } i\notin Sup(H) \text{ then }\forall x\in V(H) \ \;x_i=b(H)_i=t(H)_i$ \item[(iii)]$\text{If } i\in Sup(H) \text{ then }b(H)_i=0 \text{ and } t(H)_i=1 $ \item[(iv)]$V(H)= \{x=x_1\dots x_n/ \;\forall i\notin Sup(H)\ x_i=b(H)_i\}.$ \end{wlist} \end{proposition} If $H$ is an induced subgraph of $\Gamma_n$, or $\Lambda_n$, then, as a set of strings of length $n$, it also defines an induced subgraph of $Q_n$; thus Propositions \ref{pro:bt} and \ref{pro:btprec} are still true for induced subgraphs of Fibonacci or Lucas cubes. A Fibonacci string can be viewed as blocks of $0$'s separated by isolated $1$'s, or as isolated $0$'s possibly separated by isolated $1$'s. These two points of view give the two following decompositions of the vertices of $\Gamma_n$. \begin{proposition}\label{pro:dec0} Any vertex of weight $w$ from $\Gamma_n$ can be uniquely decomposed as $0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $p=w$; $\sum_{i=0}^p{l_i}=n-w$; $l_0,l_p \geq 0$ and $l_1,\dots,l_{p-1}\geq1$. \end{proposition} \begin{proposition}\label{pro:dec1} Any vertex of weight $w$ from $\Gamma_n$ can be uniquely decomposed as $1^{k_0}01^{k_1}\dots 01^{k_i}\dots01^{k_{q}}$ where $q=n-w$; $\sum_{i=0}^q{k_i}=w$ and $k_0,\dots,k_{q}\leq1$. \end{proposition} \begin{proof} A vertex from $\Gamma_n$, $n\geq2$, being the concatenation of a string of $V(\Gamma_{n-1})$ with $0$ or a string of $V(\Gamma_{n-2})$ with $01$, both properties are easily proved by induction on $n$. \qed \end{proof} Using the second decomposition, the vertices of weight $w$ from $\Gamma_n$ are thus obtained by choosing, in $\{0,1,\dots,q\}$, the $w$ values of $i$ such that $k_{i}=1$.
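This choice of $w$ positions among the $q+1=n-w+1$ available slots gives the classical count $\binom{n-w+1}{w}$ stated below, which a quick enumeration confirms (the code is ours, added as a check):

```python
from itertools import product
from math import comb

def fib_strings(n):
    # binary strings of length n with no two consecutive 1s
    return [s for s in product((0, 1), repeat=n)
            if all(not (s[i] and s[i + 1]) for i in range(n - 1))]

def weight_count(n, w):
    """Number of Fibonacci strings of length n with exactly w ones."""
    return sum(1 for s in fib_strings(n) if sum(s) == w)

# math.comb(m, k) is 0 for k > m, which matches the vanishing counts:
for n in range(1, 11):
    for w in range(n + 1):
        assert weight_count(n, w) == comb(n - w + 1, w)
print("weight counts match C(n-w+1, w) for n <= 10")
```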
We have then the classical result: \begin{proposition}\label{pro:nbw} For any $w\leq n$ the number of vertices of weight $w$ in $\Gamma_n$ is $\binom{n-w+1}{w}$. \end{proposition} Considering the constraint on the extremities of a Lucas string, we obtain the two following decompositions of the vertices of $\Lambda_n$. \begin{proposition}\label{pro:declucas0} Any vertex of weight $w$ in $\Lambda_n$ can be uniquely decomposed as $0^{l_0}10^{l_1}\dots 10^{l_i}\dots 10^{l_{p}}$ where $p=w$, $\sum_{i=0}^p{l_i}=n-w$, $l_0,l_p\geq0$, $l_0+l_p\geq1$ and $l_1,\dots,l_{p-1}\geq1$. \end{proposition} \begin{proposition}\label{pro:declucas1} Any vertex of weight $w$ in $\Lambda_n$ can be uniquely decomposed as $1^{k_0}01^{k_1}\dots 01^{k_i}\dots01^{k_{q}}$ where $q=n-w$; $\sum_{i=0}^q{k_i}=w$; $k_0+k_q\leq1$ and $k_0,\dots,k_{q}\leq1$. \end{proposition} From Propositions \ref{pro:btprec} and \ref{pro:dec0} it is possible to characterize the bottom and top vertices of maximal hypercubes in $\Gamma_n$. \begin{lemma} If $H$ is a maximal hypercube of dimension $p$ in $\Gamma_n$ then $b(H)=0^n$ and $t(H)=0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=0}^p{l_i}=n-p$; $0\leq l_0\leq 1$; $0\leq l_p\leq 1$ and $1\leq l_i\leq 2$ for $i=1,\dots,p-1$. Furthermore any such vertex is the top vertex of a unique maximal hypercube. \end{lemma} \begin{proof} Let $H$ be a maximal hypercube in $\Gamma_n$. Assume there exists an integer $i$ such that $b(H)_i = 1$. Then $i \notin Sup(H)$ by Proposition \ref{pro:btprec}. Therefore, for any $x \in V(H)$, $x_i= b(H)_i= 1$, thus $x+\epsilon_i \in V(\Gamma_n)$. Then $H\widetilde{+}\epsilon_i$ must be an induced subgraph of $\Gamma_n$, a contradiction with the maximality of $H$. Consider now $t(H)=0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$. If $l_0\geq2$ then for any vertex $x$ of $H$ we have $x_1=x_2=0$, thus $x+\epsilon_1 \in V(\Gamma_n)$. Therefore $H\widetilde{+}\epsilon_1$ is an induced subgraph of $\Gamma_n$, a contradiction with the maximality of $H$.
The case $l_p\geq2$ is similar by symmetry. Assume now $l_i\geq3$ for some $i\in\{1,\dots,p-1\}$. Let $j=i+\sum_{k=0}^{i-1}{l_k}$. We have thus $t(H)_j=1$ and $t(H)_{j+1}=t(H)_{j+2}=t(H)_{j+3}=0$. Then for any vertex $x$ of $H$ we have $x_{j+1}=x_{j+2}=x_{j+3}=0$, thus $x+\epsilon_{j+2} \in V(\Gamma_n)$ and $H$ is not maximal, a contradiction. Conversely, consider a vertex $z=0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=0}^p{l_i}=n-p$; $0\leq l_0\leq 1$; $0\leq l_p\leq 1$ and $1\leq l_i\leq 2$ for $i=1,\dots,p-1$. Then, by Propositions \ref{pro:bt} and \ref{pro:btprec}, $t(H)=z$ and $b(H)=0^n$ define a unique hypercube $H$ in $Q_n$ isomorphic to $Q_p$, and clearly all vertices of $H$ are Fibonacci strings. Notice that for any $i\notin Sup(H)$ the string $z+\epsilon_i$ is not a Fibonacci string, thus $H$ is maximal. \qed \end{proof} With the same arguments we obtain for Lucas cubes: \begin{proposition}\label{pro:lucas} If $H$ is a maximal hypercube of dimension $p\geq1$ in $\Lambda_n$ then $b(H)=0^n$ and $t(H)=0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=0}^p{l_i}=n-p$; $0\leq l_0\leq 2$; $0\leq l_p\leq 2$; $1\leq l_0+l_p\leq 2$ and $1\leq l_i\leq 2$ for $i=1,\dots,p-1$. Furthermore any such vertex is the top vertex of a maximal hypercube. \end{proposition} \begin{theorem}\label{th:fp} Let $0\leq p \leq n$ and $f_{n,p}$ be the number of maximal hypercubes of dimension $p$ in $\Gamma_n$. Then:\\ \[f_{n,p}=\binom{p+1}{n-2p+1}\]\\ \end{theorem} \begin{proof} This is clearly true for $p=0$, so assume $p\geq1$. Since maximal hypercubes of $\Gamma_n$ are characterized by their top vertex, let us consider the set $T$ of strings which can be written as $0^{l_0}10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=0}^p{l_i}=n-p$; $0\leq l_0\leq 1$; $0\leq l_p\leq 1$ and $1\leq l_i\leq 2$ for $i=1,\dots,p-1$. Let $l'_i =l_i-1$ for $i=1,\dots,p-1$; $l'_0 =l_0$; $l'_p =l_p$.
We thus have a one-to-one mapping between $T$ and the set of strings $D=\{0^{l'_0}10^{l'_1}\dots 10^{l'_i}\dots10^{l'_{p}}\}$ where $\sum_{i=0}^p{l'_i}=n-2p+1$ and $l'_i\leq 1$ for $i=0,\dots,p$. This set is in bijection with the set $E=\{1^{l'_0}01^{l'_1}\dots 01^{l'_i}\dots01^{l'_{p}}\}$. By Proposition \ref{pro:dec1}, $E$ is the set of Fibonacci strings of length $n-p+1$ and weight $n-2p+1$, and we obtain the expression of $f_{n,p}$ by Proposition \ref{pro:nbw}. \qed \end{proof} \begin{corollary} The counting polynomial $C'(\Gamma_{n},x)=\sum_{p=0}^{\infty}{f_{n,p}x^p}$ of the number of maximal hypercubes of dimension $p$ in $\Gamma_n$ satisfies: \begin{eqnarray*} C'(\Gamma_{n},x)& = &x(C'(\Gamma_{n-2},x)+C'(\Gamma_{n-3},x))\ \ \ (n\geq3)\\ C'(\Gamma_{0},x)& = &1,\ C'(\Gamma_{1},x)=x,\ C'(\Gamma_{2},x)=2x\\ \end{eqnarray*} The generating function of the sequence $\{C'(\Gamma_{n},x)\}$ is: $$\sum_{n\geq0}{C'(\Gamma_{n},x)y^n}=\frac{1+xy(1+y)}{1-xy^2(1+y)} $$ \end{corollary} \begin{proof} By Theorem \ref{th:fp} and the Pascal identity we obtain $f_{n,p}=f_{n-2,p-1}+ f_{n-3,p-1}$ for $n\geq3$ and $p\geq1$. Notice that $f_{n,0}=0$ for $n\neq 0$. The recurrence relation for $C'(\Gamma_{n},x)$ follows. Setting $f(x,y)=\sum_{n\geq0}{C'(\Gamma_{n},x)y^n}$, we deduce from the recurrence relation $f(x,y)-1-xy-2xy^2=x(y^2(f(x,y)-1)+y^3f(x,y))$, and thus the value of $f(x,y)$. \qed \end{proof} \begin{theorem}\label{th:gp} Let $1\leq p \leq n$ and $g_{n,p}$ be the number of maximal hypercubes of dimension $p$ in $\Lambda_n$. Then:\\ \[g_{n,p}=\frac{n}{p}\binom{p}{n-2p}.\]\\ \end{theorem}\begin{proof} The proof is similar to the previous one, with three cases according to the value of $l_{0}$: By Proposition \ref{pro:lucas} the set $T$ of top vertices that begin with $1$ is the set of strings which can be written as $10^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=1}^p{l_i}=n-p$ and $1\leq l_i\leq 2$ for $i=1,\dots,p$. Let $l'_i =l_i-1$ for $i=1,\dots,p$.
We thus have a one-to-one mapping between $T$ and the set of strings $D=\{10^{l'_1}\dots 10^{l'_i}\dots10^{l'_{p}}\}$ where $\sum_{i=1}^p{l'_i}=n-2p$ and $0\leq l'_i\leq 1$ for $i=1,\dots,p$. Removing the first $1$, and by complement, this set is in bijection with the set $E=\{1^{l'_1}\dots 01^{l'_i}\dots01^{l'_{p}}\}$. By Proposition \ref{pro:dec1}, $E$ is the set of Fibonacci strings of length $n-p-1$ and weight $n-2p$. Thus $|T|=\binom{p}{n-2p}$. The set $U$ of top vertices that begin with $01$ is the set of strings that can be written as $010^{l_1}\dots 10^{l_i}\dots10^{l_{p}}$ where $\sum_{i=1}^p{l_i}=n-p-1$; $1\leq l_i\leq 2$ for $i=1,\dots,p-1$ and $l_p\leq 1$. Let $l'_i =l_i-1$ for $i=1,\dots,p-1$ and $l'_p =l_p$. We thus have a one-to-one mapping between $U$ and the set of strings $F=\{010^{l'_1}\dots 10^{l'_i}\dots10^{l'_{p}}\}$ where $\sum_{i=1}^p{l'_i}=n-2p$ and $l'_i\leq 1$ for $i=1,\dots,p$. Removing the first $01$, and by complement, this set is in bijection with the set $G=\{1^{l'_1}\dots 01^{l'_i}\dots01^{l'_{p}}\}$. By Proposition \ref{pro:dec1}, $G$ is the set of Fibonacci strings of length $n-p-1$ and weight $n-2p$. Thus $|U|=\binom{p}{n-2p}$. The last set, $V$, of top vertices that begin with $001$, is the set of strings that can be written as $0010^{l_1}\dots 10^{l_i}\dots0^{l_{p-1}}1$ where $\sum_{i=1}^{p-1}{l_i}=n-p-2$ and $1\leq l_i\leq 2$ for $i=1,\dots,p-1$. Let $l'_i =l_i-1$ for $i=1,\dots,p-1$. We thus have a one-to-one mapping between $V$ and the set of strings $H=\{0010^{l'_1}\dots 10^{l'_i}\dots0^{l'_{p-1}}1\}$ where $\sum_{i=1}^{p-1}{l'_i}=n-2p-1$ and $l'_i\leq 1$ for $i=1,\dots,p-1$. Removing the first $001$ and the last $1$, this set, again by complement, is in bijection with the set $K=\{1^{l'_1}\dots 01^{l'_i}\dots01^{l'_{p-1}}\}$. The set $K$ is the set of Fibonacci strings of length $n-p-3$ and weight $n-2p-1$. Thus $|V|=\binom{p-1}{n-2p-1}$ and $g_{n,p}=2\binom{p}{n-2p}+\binom{p-1}{n-2p-1}=\frac{n}{p}\binom{p}{n-2p}$.
\qed \end{proof} \begin{corollary} The counting polynomial $C'(\Lambda_{n},x)=\sum_{p=0}^{\infty}{g_{n,p}x^p}$ of the number of maximal hypercubes of dimension $p$ in $\Lambda_n$ satisfies: \begin{eqnarray*} C'(\Lambda_{n},x)& = &x(C'(\Lambda_{n-2},x)+C'(\Lambda_{n-3},x))\ \ \ (n\geq5)\\ C'(\Lambda_{0},x)& = &1,\ C'(\Lambda_{1},x)=1,\ C'(\Lambda_{2},x)=2x,\ C'(\Lambda_{3},x)=3x,\ C'(\Lambda_{4},x)=2x^2\\ \end{eqnarray*} The generating function of the sequence $\{C'(\Lambda_{n},x)\}$ is: $$\sum_{n\geq0}{C'(\Lambda_{n},x)y^n}=\frac{1+y+xy^2+xy^3-xy^4}{1-xy^2(1+y)} $$ \end{corollary} \begin{proof} Assume $n\geq 5$. Here also, by Theorem \ref{th:gp} and Pascal's identity, we get $g_{n,p}=g_{n-2,p-1}+ g_{n-3,p-1}$ for $n\geq5$ and $p\geq2$. Notice that when $n\geq5$ this equality also holds for $p=1$, and $g_{n,0}=0$. The recurrence relation for $C'(\Lambda_{n},x)$ follows, and $g(x,y)=\sum_{n\geq0}{C'(\Lambda_{n},x)y^n}$ satisfies $g(x,y)-1-y-2xy^2-3xy^3-2x^2y^4=x(y^2(g(x,y)-1-y-2xy^2)+y^3(g(x,y)-1-y))$. \qed \end{proof} Notice that $f_{n,p}\neq0$ if and only if $\left\lceil \frac{n}{3} \right\rceil \leq p \leq \left\lfloor \frac{n+1}{2} \right\rfloor$ and $g_{n,p}\neq0$ if and only if $\left\lceil \frac{n}{3} \right\rceil \leq p \leq \left\lfloor \frac{n}{2} \right\rfloor$ (for $n\neq1$). Maximal induced hypercubes of maximum dimension are maximum induced hypercubes, and we obtain again that the cube polynomials of $\Gamma_n$, respectively $\Lambda_n$, are of degree $\left\lfloor \frac{n+1}{2} \right\rfloor$, respectively $\leq \left\lfloor \frac{n}{2} \right\rfloor$ \cite{KlavzarCube}.
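The closed forms above are easy to sanity-check numerically. The following brute-force count (a sketch, not part of the paper) enumerates the admissible gap sequences $(l_0,\dots,l_p)$ from the characterizations of top vertices and compares the totals with the binomial expressions of Theorems \ref{th:fp} and \ref{th:gp}:

```python
from itertools import product
from math import comb

def C(a, b):
    # Binomial coefficient, defined as zero outside the usual range.
    return comb(a, b) if 0 <= b <= a else 0

def count_tops_gamma(n, p):
    # Gap sequences (l_0,...,l_p) with sum n-p, l_0, l_p in {0,1} and
    # 1 <= l_i <= 2 in between: top vertices of maximal p-cubes in Gamma_n.
    ranges = [range(0, 2)] + [range(1, 3)] * (p - 1) + [range(0, 2)]
    return sum(1 for ls in product(*ranges) if sum(ls) == n - p)

def count_tops_lucas(n, p):
    # Lucas variant: l_0, l_p in {0,1,2} with the extra constraint
    # 1 <= l_0 + l_p <= 2.
    ranges = [range(0, 3)] + [range(1, 3)] * (p - 1) + [range(0, 3)]
    return sum(1 for ls in product(*ranges)
               if sum(ls) == n - p and 1 <= ls[0] + ls[-1] <= 2)

for n in range(2, 14):
    for p in range(1, n + 1):
        assert count_tops_gamma(n, p) == C(p + 1, n - 2 * p + 1)
        g = 2 * C(p, n - 2 * p) + C(p - 1, n - 2 * p - 1)
        assert count_tops_lucas(n, p) == g
        assert g * p == n * C(p, n - 2 * p)  # g_{n,p} = (n/p) binom(p, n-2p)
```

The last assertion checks the identity $2\binom{p}{k}+\binom{p-1}{k-1}=\frac{n}{p}\binom{p}{k}$ with $k=n-2p$ used at the end of the proof of Theorem \ref{th:gp}.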
\section{Introduction} Robots would be much more useful if they had the capability to move obstacles out of the way. Such robots could be deployed in search and rescue scenarios, where they reach places unreachable by humans and assist in the removal of rubble from disaster areas. As a step towards this vision, we explore the Navigation Among Movable Obstacles (NAMO) problem, in which the robot attempts to navigate from one side of a reconfigurable environment to a goal position while manipulating obstacles along the way. We assume a 2D environment where the agent has a polygon-shaped footprint and can push any object in the environment. In this paper, we propose a minimal collision path planner that uses an RRT-based heuristic. Our approach attempts to find a feasible path by iteratively making the area of the agent's footprint smaller, while at the same time planning pushes that would clear the space for the actual size of the agent. In simulation experiments, we benchmark our approach against straight-line navigation with pushes as well as a standard collision-free RRT. By interleaving path and push planning, the algorithmic complexity of the problem is greatly reduced. The organization of this paper is as follows. We examine the relevant literature in Section~\ref{sec:related-work}, including non-prehensile manipulation and physics simulators. Section~\ref{sec:problem_description} defines the problem domain and Section~\ref{sec:proposed-algorithm} describes our proposed algorithms in detail. Section~\ref{sec:experiments} discusses experimental results produced in simulation with variations in domain complexity. Section~\ref{sec:conclusion} places our planners in the broader context of clutter manipulation. \begin{figure}[t!]
\centering \includegraphics[clip, trim=0cm 0.2cm 0cm 0cm, width=0.49\textwidth]{./Images/main1.pdf} \includegraphics[clip, trim=0cm 0.2cm 0cm 0cm, width=0.49\textwidth]{./Images/main2.pdf} \caption{We study the problem of navigation among movable objects. We model the environment as a 2D world with polygonal objects and propose two algorithms for pushing obstacles to make enough space for the robot: 1) straight-line navigation with push planning (Top) and 2) RRT-based iterative minimal collision planning with nominal pushing using a reduced-area rigid body (Bottom).} \label{fig:intro} \end{figure} \section{Related Work} \label{sec:related-work} Our problem definition falls into the arena of manipulation planning. This involves planning the motion of a robot and the manipulation of one or more objects in the presence of clutter. This area includes manipulation planning among movable obstacles (MAMO) [6][16], rearrangement planning (RP) [11, 12], and navigation among movable obstacles (NAMO) [13, 14]. Wilfong et al. [7] showed that NAMO is NP-hard, and that rearrangement planning is PSPACE-hard. The complexity arises from the high-dimensional search space and the constraint that objects only move as a consequence of the robot's actions. Alami et al. [8] classify robot actions into two categories: transit actions, which are collision-free robot motions, and transfer actions, which manipulate objects. Planning transit actions is a classical robot motion planning problem, while planning transfer actions requires additional intelligence about the mechanics of manipulation. Non-prehensile pushing allows for more flexibility and efficiency in completing a task~\cite{cosgun2020,brock,choi}. Pushing is desirable because it is quicker to execute, may reduce uncertainty~\cite{dogar2010}, and can be exerted on heavy objects or on multiple objects at the same time. In rearrangement planning it is often more useful to push multiple objects at the same time than sequentially. Ben-Shahar et al.
[9] illustrate a rearrangement planner that allows the concurrent pushing of multiple objects. The algorithm performs a hill-climbing search on a sampling-based representation of the configuration space. It minimizes a cost function that represents the minimal cost to reach the goal. This cost is computed offline on the discrete search space using a reverse pushing model that compensates for multi-object contacts and non-quasistatic physics. Computing such a general reverse pushing model, however, is difficult, and only a simplified model is presented. Our approach presents a high-level planner that attempts to enable an agent to take a straight-line path from a start to a goal configuration in a cluttered environment by pushing movable obstacles out of the way while transporting a large object. This capability is beneficial in situations where a collision-free path is either too expensive to travel or does not exist at all. The planner attempts to accomplish this by (1) selecting a straight-line path from the start to the goal position by leveraging a heuristic that provides a path with minimal obstacle overlap in the environment and (2) determining a sequence of pushing actions that creates a straight unobstructed path for the agent to traverse from start to goal. The use of a straight-line path, as opposed to arbitrary trajectories, greatly reduces the search complexity of this algorithm. Despite significant differences, some of the concepts in this work are closely related to [15], which applies means-end analysis in order to efficiently compute plans for displacing objects with multiple interactions. We present a similar process of reverse search in our domain. This contribution builds on our work in [1] and [4], where nonprehensile push planning with dynamic obstacle interaction was leveraged to place large objects on a cluttered table.
\begin{figure*}[h] \subfloat[]{\includegraphics[clip, trim=0.3cm 0.3cm 17cm 0.3cm, width=0.3\textwidth]{./Images/fig2.pdf}} \subfloat[]{\includegraphics[clip, trim=8.25cm 0.2cm 9.2cm 0.1cm, width=0.3\textwidth]{./Images/fig2.pdf}} \subfloat[]{\includegraphics[clip, trim=16.4cm 0.2cm 0.3cm 0.1cm, width=0.33\textwidth]{./Images/fig2.pdf}} \caption{a) Path footprint b) Current Environment c) Heuristic path placement after convolution with the environment.} \end{figure*} This work is also related to combined task and motion planning (TMP), which is still an open problem in the research community and is key to implementing successful mobile manipulation solutions [21-25]. The goal of an agent moving an object from a start to a goal position in a cluttered room is a high-level discrete task, which is symbolically represented in TMP. The actual motion planning that accomplishes this effort takes many factors into consideration, including continuous reasoning, uncertainty, and replanning. This is the algorithmic portion of TMP. The motion planning and physics-based object interaction are accomplished in simulation in this work. Next steps will encompass implementation on a physical robot, which will include lower-level planning to facilitate the human-robot physical interactions and the necessary object manipulations in the real world. \section{Problem Description} \label{sec:problem_description} Consider a rectangular room, $R$, defined by bounding corners $((x_{\min},\ y_{\min}), (x_{\max},\ y_{\max}))$, and an assortment of object shapes $O = \{o_{1},\ o_{2},\ \dots,\ o_{n}\}$. The first $n-1$ shapes describe the objects residing in the room and $o_{n}$ defines the shape of a virtual path, which represents a straight-line path from the starting point to the goal.
Let $q= (P_{1},\ P_{2},\ \dots,\ P_{n})$ define the varying poses of all the objects, where $P_{j} = \{(x,\ y)\ :\ x\ \in \mathbb{R},\ y\ \in \mathbb{R},\ (x,\ y)\ \in\ o_{j}\}$ is the set of all points occupied by object shape $o_{j}$. A linear pushing action is defined as $u_{j,k} = (o_{j},\ \phi_{k},\ d_{j,k})$, where the action is exerted on object $o_{j}$ in the direction $\phi_{k}$ for a distance $d_{j,k} =d(o_{j},\ \phi_{k},\ q) >0$ with a constant velocity $\epsilon>0$. The dynamic interactions between objects are governed by a function $f$, determined by the physics simulator, where $q_{\mathrm{new}}=f(O,\ q,\ u)$. For the straight-line planner, finding a straight-line path from a starting point to a goal in a cluttered room requires an agent to find a sequence of push actions $U = (u^{1},\ u^{2},\ \dots,\ u^{t})$ that will result in a state $q_{\mathrm{end}}$. For the minimal collision planner, finding a minimal collision path from a starting point to a goal in a cluttered room requires the RRT~\cite{rrt} algorithm to calculate discrete waypoints in the SE(2) state space and an agent to find a sequence of push actions $U = (u^{1},\ u^{2},\ \dots,\ u^{t})$ that will result in a state $q_{\mathrm{end}}$, where three conditions are satisfied: \begin{enumerate} \item No objects intersect: $\bigcup_{P_{i},P_{j}\in q_{\mathrm{end}},i\neq j}(P_{i}\cap P_{j})=\emptyset$ \item Objects are within the room wall borders: $x_{\min} < x < x_{\max}$, $y_{\min}<y<y_{\max}$ for all $(x,\ y) \in P$, $P\in q_{\mathrm{end}}$. \item Objects are stationary: $\dot{q}=0.$ \end{enumerate} The potential search domain is infinite if there are no action constraints. The domain is limited to allow only one push per object in $U$, such that $o_{i}\neq o_{j}$ for every pair $(u_{i,k},\ u_{j,l}) = ((o_{i},\ \phi_{k},\ d_{i,k}),\ (o_{j},\ \phi_{l},\ d_{j,l}))$ in $U.$ Even in this domain, an exhaustive search is inefficient.
Given a set of $n$ objects and $g$ pushing angles, the branching factor would be $ng$ and the total number of nodes in the tree would be $(ng)^{n}$. For $n=g=10$, an exhaustive search tree produces $10^{20}$ nodes, a space far too large for standard hardware. This formulation creates a maximum of $n$ pushes in any completed plan. We allow $k$ pushes per object, where $k$ is the number of objects initially overlapping the start to goal path, resulting in a maximum of $kn$ pushes. By combining a reduced action space with informed heuristics and a minimal collision path, we attempt to simplify the solution. In order to achieve generality and efficiency, we separate the task of moving an object in a cluttered environment into two stages: the first stage determines the pose/shape $P_{n}$ for the proposed path $o_{n}$ that the human-robot team will attempt to traverse; the second stage finds the set of push motions $U$ that satisfy the three requirements listed above. In order to discover shorter plans, we iterate stages 1 and 2 gradually towards more complex plans. The start and goal positions are sampled within the bounds of the rectangle (the room). The samples are allowed anywhere on the y-axis, but only within the first 2 cm of the x-axis (start positions) and the last 2 cm (goal positions). A sample is only valid if the proposed straight-line path (for the straight-line planner) or the rigid body (for the minimal collision planner) fits within the rectangle. \begin{figure*}[h] \centering\includegraphics[clip, trim=0cm 0.25cm 0cm 0cm, width=.9\textwidth]{./Images/fig3.pdf} \caption{An illustrative example that demonstrates the push planning algorithm for a Level 1 search.} \end{figure*} \section{Proposed Algorithm} \label{sec:proposed-algorithm} \subsection{Goal Configuration Calculation} The technical details of the goal configuration calculation for the straight-line push planner are covered in our previous work [1].
This approach uses a heuristic that attempts to sample object placement locations with the least amount of overlap with clutter using a convolutional technique. In this work, we modify the algorithm to sample straight-line start to goal configuration paths with the least amount of overlap with obstacles for the straight-line planner. Figures 2(a), (b), and (c) show an example of a sampled path footprint, a cluttered environment, and the heuristically determined path footprint pose in the cluttered environment. Given a pose sample, we present a solution for push planning that clears the path footprint in Sections IV-B and IV-C. For the minimal collision planner, we utilize the Open Motion Planning Library~\cite{ompl} (OMPL) to attempt to calculate a collision-free path from a start to goal configuration in clutter using the RRT-Connect~\cite{rrt-connect} algorithm and a reduced-size rigid body. RRT-Connect is a probabilistically complete randomized algorithm used to solve single-query path planning problems. This algorithm incrementally builds two RRTs rooted at the start and the goal configurations. The human-robot team is represented as a rigid body with initial area $A_{0}$. Initially, the RRT-Connect algorithm attempts to calculate a collision-free path from the start to the goal configuration. If it is successful, then the algorithm succeeds. If not, it reduces the area of the rigid body by 10$\%$, setting $A_{n} = 0.9A_{n-1}$, and the algorithm is executed again. This is repeated recursively until a valid path is achieved or until the body has been reduced to a point robot without a solution being found. Samples of the discrete start to goal waypoints of the rigid body are input as the goal configuration in the push planner. The rigid body moves through the configuration space along these discrete sampled poses.
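The shrink-and-replan loop just described can be sketched as follows; here `plan_path` is a hypothetical stand-in for the OMPL/RRT-Connect query, and the 10\% reduction factor is the one stated above:

```python
def shrink_until_plannable(area, plan_path, shrink=0.9, min_area=1e-3):
    """Try planning with the full-size body; on failure shrink its
    area by 10% (A_n = 0.9 * A_{n-1}) and retry, down to a point robot."""
    while area > min_area:
        path = plan_path(area)   # hypothetical RRT-Connect query
        if path is not None:
            return area, path    # the reduced area that admitted a path
        area *= shrink
    return None                  # no solution even for a point robot

# Toy stand-in: pretend a path exists only once the area drops below 4.0.
result = shrink_until_plannable(10.125, lambda a: "path" if a < 4.0 else None)
```

The returned area then determines the reduced-footprint waypoints handed to the push planner.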
Since a version of the rigid body with a smaller area is able to navigate through a collision-free path, the larger rigid body's path will have minimal overlap with clutter in the environment. \begin{figure*}[h] \centering\includegraphics[width=.99\textwidth]{./Images/fig4.pdf} \caption{An illustrative example that demonstrates the push planning algorithm for a Level 2 search.} \end{figure*} \subsection{Path Clearing Planner} The straight-line planner and the minimal collision planner both utilize the same push planning algorithm. The following example applies to the straight-line planner; the same principles apply to the minimal collision planner. Given a potential straight-line start to goal path pose $\hat{P}_{n}$, we introduce a planning algorithm that clears the path defined by $(\hat{P}_{n},\ o_{n})$. First, we present the algorithm for the case where two objects $o_{13}$ and $o_{11}$ overlap the path $o_{n}$. Figure 3 illustrates the search tree for this example. This case only requires one level (Level 1) of push planning for each object. Next, we expand this approach to one object $o_{13}$ requiring two levels (Level 2) of push planning. The details of this algorithm are presented in Section IV-C and Figure 4 illustrates the search tree for this example. The planner uses Breadth-First Search (BFS) to attempt the possible pushing actions $U_{o_{1}} = \{u_{1,k} = (o_{1},\ \phi_{k},\ d_{1,k})\}$. It determines the pushing distance $d_{1,k}$ through the push termination conditions: (1) any object colliding with the room wall or making contact with an object $o\in O$ that cannot be pushed; (2) $o_{1}\cap o_{n}=\emptyset$. The interactions during the motion return the resulting $q_{\mathrm{new}}$ and the first blocking object $o_{j}$ to the planner. In order to constrain the large set of possible pushes, our algorithm begins by investigating only the set of pushing actions $U_{o_{1}}$ on overlapping object $o_{13}$.
Let's suppose every push $u_{1,k}$ terminates following rule (1). After each push of $o_{13}$, we detect any object $o_{j}$ that blocks the motion of $o_{13}$. In this example, we are able to clear $o_{13}$ from the path during the first level of push planning. The algorithm is repeated for $o_{11}$, which also overlaps the path, and it is likewise able to clear the path during the first level of push planning. Next we consider the case of one overlapping object $o_{13}$ that requires two levels of push planning to clear the path. Following each push of $o_{13}$, we detect any object $o_{j}$ that blocks the motion of $o_{13}$. After searching all single pushes of $o_{13}$, the planner resets to the initial configuration $q_{init}$ and searches over the pushing actions $U_{o_{j}}$ of each blocking object $o_{j}$. Given the subsequent configuration from each push of $o_{j}$, the planner backtracks to $o_{13}$ and searches over the pushes $U_{o_{1}}$, testing if any clears $o_{13}$ from the path footprint. This approach guides the search by means-end analysis. The planner attempts to remove the blocking object $o_{j}$ from the path of $o_{13}$. Since additional objects may constrain the movement of object $o_{j}$, the planner recursively follows this procedure until it clears the path footprint or reaches a maximum depth $L_{MAX}$. In this case, $o_{14}$ is cleared from the path of $o_{13}$. This results in a two-level, two-push plan to clear the path. In the following section we extend this algorithm to multiple objects overlapping the straight-line path or minimal collision path. \subsection{Iterative-Deepening Breadth-First Search} In the case of multiple overlapping objects, the push planner proceeds as follows. A path footprint is selected by rejection sampling from the probability distribution in Sec. IV-A. Each object that overlaps the path is a {\it sub-goal}. For each sub-goal, the planner applies Algorithm 1 in [1].
If a goal test succeeds then the planner continues with the next overlapping object starting from $q_{init}'$, where the sub-goal is satisfied. Algorithm 2 in [1] shows how the planner attempts prior pushing actions to evaluate if a sub-goal has been reached. A sub-goal succeeds if the number of overlapping objects has been reduced. Each configuration likely has many solutions. Our planner uses a heuristic to guide it towards solutions with fewer pushes. We extend our algorithm to use Iterative-Deepening Breadth-First Search (IDBFS). We add an outer loop around Algorithm 1, which iteratively increases the maximum allowed tree depth from $0$ to $L_{\max}$. IDBFS allows our BFS planner to run on a number of different straight-line start to goal paths in an attempt to find shorter plans requiring fewer pushes. Algorithm 3 in [1] explains the full IDBFS Push Planner, which takes into account multiple objects overlapping the path. Our search is inspired by Iterative-Deepening Depth-First Search (IDDFS)~\cite{iddfs}, which maintains the minimal solution length of BFS while taking advantage of Depth-First Search's efficiency. \section{Experiments} \label{sec:experiments} Our planners use OMPL to calculate collision-free or minimal collision paths from a start to goal configuration and the open-source 2D physics engine Box2D~\cite{box2d}. OMPL implements sampling-based motion planning including probabilistic roadmaps and tree-based planners. Box2D models contact, friction, and restitution as well as collision detection. We modeled room objects as rigid bodies of convex polygonal objects with equal densities. Push actions were applied with a rectangular rigid body $o_{g}$ of dimension $1\,\mathrm{cm}\times 4\,\mathrm{cm}$.
To apply $u_{i,k}=(o_{i},\ \phi_{k},\ d_{i,k})$, first $o_{g}$ is placed centered at the center of mass of $o_{i}$ with orientation $\phi_{k}$ and gradually moved in the $(\phi_{k} -\pi)$ direction while checking for collision between $o_{i}$ and $o_{g}$. At the configuration where $P_{i}\cap P_{g} = \emptyset$, a final test between $o_{g}$ and all other objects in $O$ verifies whether the rigid body can be placed without collision. If the configuration is collision free, the pushing action is feasible and $o_{g}$ is moved in the $\phi_{k}$ direction at a constant velocity. The algorithm is tested using gradually increasing clutter percentages. Clutter percentage is defined as the ratio of the area occupied by the objects to the total room area. The virtual room size was $38\,\mathrm{cm}\times 19\,\mathrm{cm}$ for all experiments; the object shapes are 20 squares that are nearly evenly distributed throughout the room. The side length of each square is increased by $0.25\,\mathrm{cm}$ for each clutter percentage increase. \begin{figure}[ht!] \subfloat[]{\includegraphics[clip, trim=0.3cm 0.3cm 16.45cm 0.3cm, width=0.5\textwidth]{./Images/three_images.pdf}} \\ \subfloat[]{\includegraphics[clip, trim=7.8cm 0.2cm 8.45cm 0.2cm, width=0.5\textwidth]{./Images/three_images.pdf}} \\ \subfloat[]{\includegraphics[clip, trim=15.9cm 0.32cm 0.3cm 0.25cm, width=0.5\textwidth]{./Images/three_images.pdf}} \caption{a) Straight-line push planner attempts to clear a path, but fails at 43\% room clutter. b) Minimal collision planner utilizes RRT-Connect to produce a valid path heuristic using a reduced-area rigid body at 43\% room clutter. c) Minimal collision planner uses the heuristic to execute a valid plan using the footprint of the robotic agent.} \end{figure} The path for the straight-line planner has a width of $1.5\,\mathrm{cm}$ and a length of $17.5\,\mathrm{cm}$.
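Under this setup the clutter percentage reduces to a one-line computation; the 4.5 cm side used in the check below is only an illustrative value, not necessarily one of the paper's actual settings:

```python
def clutter_percentage(side_cm, n_objects=20, room_cm=(38.0, 19.0)):
    """Ratio of the area occupied by the 20 equal squares to the
    38 cm x 19 cm room area, expressed as a percentage."""
    return 100.0 * n_objects * side_cm ** 2 / (room_cm[0] * room_cm[1])

# Growing the squares in 0.25 cm steps sweeps the clutter upward:
levels = [round(clutter_percentage(s / 4), 1) for s in range(10, 19)]  # sides 2.5 .. 4.5 cm
```

For instance, squares of side 4.5 cm would occupy roughly 56\% of the room.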
The simulator provides the room configuration $q_{init}$ as input to the planning algorithm for path placement. If a path placement candidate is found, the simulator executes the pushes and attempts to clear a straight-line path before selecting a new path. If the algorithm fails, the simulator resets the room to the initial configuration and repeats the procedure. The path for the minimal collision planner is provided as discrete poses by the RRT planner after utilizing a reduced-area version of the original rigid body. The original rigid body has a length of $4.5\,\mathrm{cm}$ and a width of $2.25\,\mathrm{cm}$. The agent utilizes the poses provided by the RRT to attempt to traverse from the start position to the goal position. To make the algorithm practical for real-world implementation, the maximum allowed tree level was set to 3, the maximum number of path configuration samples per allowed tree level was 20, and the push angle resolution was $\pi/12$, resulting in 24 push directions. If a plan was not found for all the candidate positions, the algorithm terminated with no solution. We performed 10 trial runs for each clutter percentage. Figure 5 illustrates a room configuration with 43\% clutter. a) The straight-line planner was unable to find a solution at this clutter percentage. However, after reducing the original rigid body area by 84\% and running the RRT (b), the minimal collision planner was able to find and execute a valid plan (c). Table~\ref{lab:table} shows the results for the collision-free planner (RRT-Connect), the straight-line push planner, and the minimal collision push planner at clutter percentages ranging from 18\% to 56\%. \setlength{\tabcolsep}{0.15cm} \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|} \hline {\small Clutter} & {\small RRT-Connect} & {\small Straight-Line} & {\small Min.
Collision}\\ \hline 18\% & 10 & 10 & 10 \\ \hline 37\% & 0 & 4 & 10 \\ \hline 43\% & 0 & 0 & 7 \\ \hline 49\% & 0 & 0 & 5 \\ \hline 56\% & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Experimental results showing the number of successful plans out of 10 trials vs. clutter percentage for each planning algorithm (RRT-Connect, straight-line, and minimal collision).} \label{lab:table} \end{table} The RRT planner's maximum clutter percentage for finding a valid plan for the original rigid body is 18\%. The straight-line push planner's maximum clutter percentage is roughly double that of the RRT planner, at 37\%. It was not able to find a solution at the next clutter percentage of 43\%. This can be attributed to the fact that the start to goal path encompasses a long rectangular region whose length is almost equal to the total length of the room. This greatly increases the number of potential overlapping objects and decreases the chance of clearing all of them from the straight-line path placement candidate. Since the length of the room is greater than the width of the room and the path's length is almost equal to the room's length, the number of possible unique poses is also constrained. The straight-line path constraint simplifies the search space at the cost of reducing the ability of the planner to find a solution. The minimal collision push planner is able to find a valid plan up to a clutter percentage of 49\%. The success at this higher clutter percentage can be attributed to the fact that the planner does not have to take a straight-line path, which may be sub-optimal at times. The RRT-based heuristic guides the push planner through a non-linear path that has minimal collisions, which is an advantage over the straight-line planner. \section{Conclusion} \label{sec:conclusion} We have presented an algorithm for clearing objects in a room to facilitate navigation among movable obstacles.
Exploiting the candidate path pose generation heuristic assists in finding shorter solutions by providing a variety of path candidates to the planner. Applications on a physical robot will benefit from this bias towards simpler solutions. Plans with a greater number of push actions during real-world operation induce a greater chance of divergence from the simulated physics used by the planner. Constraining the number and distance of these actions reduces this error, and the agent can perform the task more robustly. The results of the experiments suggest that in practice the agent can first attempt to find a collision-free path using a standard RRT. If this is not feasible, it can then execute the straight-line push planner, since moving in a straight line while pushing objects out of the way expends the least amount of energy. If the straight-line push planner fails, the RRT-based minimal collision push planner can be employed: it has demonstrated success at the highest clutter percentages, but results in the most energy consumed during execution of the plan. \vspace{-0.2cm}
\section{Introduction} \label{section} Hidden Markov models (HMMs) are a fundamental tool for data analysis and exploration. Many variants of the basic HMM have been developed in response to shortcomings in the original HMM formulation \cite{Rabiner89}. In this paper we address inference in the explicit state duration HMM (EDHMM). By state duration we mean the amount of time an HMM dwells in a state. In the standard HMM specification, a state's duration is implicit and, a priori, distributed geometrically. The EDHMM (or, equivalently, the hidden semi-Markov model \cite{Yu10}) was developed to allow explicit parameterization and direct inference of state duration distributions. EDHMM estimation and inference can be performed using the forward-backward algorithm; though only if the sequence is short or a tight ``allowable'' duration interval for each state is hard-coded a priori \cite{Yu2006}. If the sequence is short then forward-backward can be run on a state representation that allows for all possible durations up to the observed sequence length. If the sequence is long then forward-backward only remains computationally tractable if only transitions between durations that lie within pre-specified allowable intervals are considered. If the true state durations lie outside those intervals then the resulting model estimates will be incorrect: the learned duration distributions can only reflect what is allowed given the pre-specified duration intervals. Our contribution is the development of a procedure for EDHMM inference that does not require any hard pre-specification of duration intervals, is efficient in practice, and, as it is an asymptotically exact procedure, does not risk incorrect inference. The technique we use to do this is borrowed from sampling procedures developed for nonparametric Bayesian HMM variants \cite{vanGael2008}. 
Our key insight is simple: the machinery developed for inference in HMMs with a countable number of states is precisely the same as that which is needed for doing inference in an EDHMM with duration distributions over countable support. So, while the EDHMM is a distinctly parametric model, the tools from nonparametric Bayesian inference can be applied such that black-box inference becomes possible and, in practice, efficient. In this work we show specifically that a ``beam-sampling'' approach \cite{vanGael2008} works for estimating EDHMMs, learning both the transition structure and duration distributions simultaneously. In demonstrating our EDHMM inference technique we consider a synthetic system in which the state-cardinality is known and finite, but where each state's duration distribution is unknown. We show that the EDHMM beam sampler performs accurate tracking whilst capturing the duration distributions as well as the probability of transitioning between states. The remainder of the letter is organised as follows. In Section~\ref{sec:Model} we introduce the EDHMM; in Section~\ref{sec:inference} we review beam-sampling for the infinite Hidden Markov Model (iHMM) \cite{Beal2002} and show how it relates to the EDHMM inference problem; and in Section~\ref{sec:experiments} we show results from using the EDHMM to model synthetic data. \begin{figure}[t] \centering \subfloat[][]{ \includegraphics[width=0.5\textwidth]{EDHMM_graphical_model.pdf} \label{fig:graphical model} } \subfloat[][]{ \includegraphics[width=0.5\textwidth]{EDHMM_aux_graphical_model.pdf} \label{fig:aux graphical model} } \caption{a) The Explicit Duration Hidden Markov Model. The time left in the current state $x_t$ is denoted $d_t$. The observation at each point in time is denoted $y_t$. 
b) The EDHMM with the additional auxiliary variable $u_t$ used in the beam sampler.} \label{fig:graphs} \end{figure} \section{Explicit Duration Hidden Markov Model} \label{sec:Model} The EDHMM captures the relationships among state $x_t$, duration $d_t$, and observation $y_t$ over time $t$. It consists of four components: the initial state distribution, the transition distributions, the observation distributions, and the duration distributions. We define the observation sequence $\mathcal{Y} = \{y_1, y_2, \ldots, y_T\}$; the latent state sequence $\mathcal{X} = \{x_0, x_1, x_2, \ldots, x_T\}$; and the remaining time in each segment $\mathcal{D} = \{ d_1, d_2, \ldots, d_T\}$, where $x_t \in \{ 1, 2, \ldots, K\}$ with $K$ the maximum number of states, $d_t \in \{1, 2, \ldots \}$, and $y_t \in \mathbb{R}^n$. We assume that the Markov chain on the latent states is homogeneous, i.e., that $p(x_t = j | x_{t-1}=i, A) = a_{i,j}\;\forall\, t$ where $A$ is a $K\times K$ matrix with element $a_{i,j}$ at row $i$ and column $j.$ The prior on $A$ is row-wise Dirichlet with zero prior mass on self-transitions, i.e., $p(a_{i,:}) = \mathrm{Dir}(1/(K-1), \ldots, 0, \ldots, 1/(K-1))$ where $a_{i,:}$ is a row vector and the $i$th Dirichlet parameter is $0.$ Each state is imbued with its own duration distribution $p(d_t | x_t = k) = p(d_t | \lambda_{k})$ with parameter $\lambda_k$. Each duration distribution parameter is drawn from a prior $p(\lambda_k)$ which can be chosen in an application-specific way. The collection of all duration distribution parameters is $\lambda = \{\lambda_1, \ldots, \lambda_K\}$. Each state is also imbued with an observation generating distribution $p(y_t | x_t = k) = p(y_t | \theta_{k})$ with parameter $\theta_k$. Each observation distribution parameter is drawn from a prior $p(\theta_k)$ also to be chosen according to the application.
The set of all observation distribution parameters is $\theta.$ In the following exposition, explicit conditional dependencies on component distribution parameters are omitted to focus on the particulars unique to the EDHMM. In an EDHMM the transitions between states are only allowed at the end of a segment: \begin{equation} p(x_t | x_{t-1}, d_{t-1}) = \begin{cases} \delta(x_t, x_{t-1}) & \textrm{if $d_{t-1} > 1$} \\ p(x_t | x_{t-1}) & \textrm{otherwise} \end{cases} \end{equation} where the Kronecker delta $\delta(a,b) = 1$ if $a=b$ and zero otherwise. The duration distribution generates segment lengths at every state switch: \begin{equation} p(d_t | x_{t}, d_{t-1}) = \begin{cases} \delta(d_t, d_{t-1}-1) & \textrm{if $d_{t-1} > 1$} \\ p(d_t | x_{t}) & \textrm{otherwise.} \end{cases} \end{equation} The joint distribution of the EDHMM is \begin{multline} \label{eq:joint} p(\mathcal{X},\mathcal{D},\mathcal{Y}) = p(x_0)p(d_0) \\ \prod_{t=1}^T p(y_t | x_t, \theta) p(x_t | x_{t-1}, d_{t-1}, A) p(d_t | x_{t}, d_{t-1}, \lambda) \end{multline} corresponding to the graphical model in Figure \ref{fig:graphical model}. Alternative choices to define the duration variable $d_t$ exist; see \cite{Chiappa2011} for details. Algorithm \ref{alg:gen} illustrates the EDHMM as a generative model. \begin{figure*}[ttt!] 
\begin{minipage}[t]{2in} \begin{algorithm}[H] \caption{Generate Data} \label{alg:gen} \begin{algorithmic} \STATE sample $x_0 \sim p(x_0)$, $d_0 \sim p(d_0)$ \FOR {$t = 1, 2, \ldots, T$} \IF{$d_{t-1} = 1$} \STATE a new segment starts: \STATE sample $x_t \sim p(x_t | x_{t-1})$ \STATE sample $d_t \sim p(d_t | x_t)$ \ELSE \STATE the segment continues: \STATE $x_t = x_{t-1}$ \STATE $d_t = d_{t-1} - 1$ \ENDIF \STATE sample $y_t \sim p(y_t | x_t)$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}[t]{3.3in} \begin{algorithm}[H] \caption{Sample the EDHMM} \label{alg:beam} \begin{algorithmic} \STATE Initialise parameters $A$, $\lambda$, $\theta.$ Initialise $u_t$ small $\forall\, t$ \FOR{sweep $ \in \{1,2,3,\ldots \}$} \STATE \textbf{Forward}: run \eqref{eqn:scaled forward} to get $\hat{\alpha}_t(z_t)$ given $\mathcal{U}$ and $\mathcal{Y} \; \forall\, t$ \STATE \textbf{Backward}: sample $z_T \sim \hat{\alpha}_T(z_T)$ \FOR{$t \in \{T, T-1, \ldots, 1\}$} \STATE sample $z_{t-1} \sim \mathbb{I}(u_t < p(z_{t} | z_{t-1}))\hat{\alpha}_{t-1}(z_{t-1})$ \ENDFOR \STATE \textbf{Slice:} \FOR {$t \in \{1, 2, \ldots, T \}$} \STATE evaluate $l = p(d_t|x_t,d_{t-1})p(x_t|x_{t-1},d_{t-1})$ \STATE sample $u_{t} \sim \mathrm{Uniform}(0,l)$ \ENDFOR \STATE sample parameters $A$, $\lambda$, $\theta$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfill \end{figure*} \section{EDHMM Inference} \label{sec:inference} Our aim is to estimate the conditional posterior distribution of the latent states ($\mathcal{X}$ and $\mathcal{D}$) and parameters ($\theta, \lambda$ and $A$) given observations $\mathcal{Y}$ by samples drawn via Markov chain Monte Carlo. Sampling $\theta$ and $A$ given $\mathcal{X}$ follows standard textbook approaches \cite{Bishop06}. Sampling $\lambda$ given $\mathcal{D}$ is straightforward in most situations.
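For example, with Poisson duration distributions and Gamma priors on the rates (the setting used later in our experiments), the update for $\lambda_k$ given the segment lengths currently assigned to state $k$ is conjugate. A minimal sketch; the function and variable names are our own, and we adopt a shape-rate Gamma convention:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate(durations, shape0=1.0, rate0=1e-5):
    """Gibbs update for a Poisson duration rate lambda_k.

    durations : segment lengths currently assigned to state k
    Prior lambda_k ~ Gamma(shape0, rate0) (shape-rate convention);
    the conjugate posterior is Gamma(shape0 + sum(d), rate0 + n).
    """
    d = np.asarray(durations, dtype=float)
    # numpy's gamma takes a scale parameter, i.e. 1 / rate
    return rng.gamma(shape0 + d.sum(), 1.0 / (rate0 + d.size))
```

With a broad prior the posterior concentrates on the empirical mean segment length, so e.g. `sample_rate([15] * 200)` returns draws close to 15.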
Indirect Gibbs sampling of $\mathcal{X}$ is possible using auxiliary state-change indicator variables, but for reasons similar to those in \cite{Goldwater2009}, such a sampler will not mix well. The main contribution of this paper is to show how to generate posterior samples of $\mathcal{X}$ and $\mathcal{D}$. \subsection{Forward Filtering, Backward Sampling} We can, in theory, use the forward messages from the forward backward algorithm \cite{Rabiner89} to sample the conditional posterior distribution of $\mathcal{X}$ and $\mathcal{D}.$ To do this we treat each state-duration tuple as a single random variable (introducing the notation $z_t = \{x_t,d_t\}$). Doing so recovers the standard hidden Markov model structure and hence standard forward messages can be used directly. A forward filtering, backward sampler for $\mathcal{Z} = \{z_1, \ldots, z_T\}$ conditioned on all other random variables requires the classical forward messages: \begin{equation} \alpha_t(z_t) = \sum_{z_{t-1}} p(z_t | z_{t-1}) p(y_t|z_t) \alpha_{t-1}(z_{t-1}) \label{eqn:forward recursion} \end{equation} where the transition probability can be factorised according to our modelling assumptions: \begin{equation} p(z_{t} | z_{t-1}) = p(x_t | x_{t-1}, d_{t-1}) p(d_t | d_{t-1}, x_t). \end{equation} Unfortunately the sum in \eqref{eqn:forward recursion} has at worst an infinite number of terms in the case of duration distributions with countably infinite support and at best a very large number of terms in the case of long sequences. The standard approach to EDHMM inference involves truncating considered durations to only those that lie between $d_\mathrm{min}$ and $d_\mathrm{max}$ or computation involving all possible durations up to the observed length of the sequence ($d_\mathrm{min}=0, d_\mathrm{max}=T$). This leads to per-sample, forward-backward computational complexity of $O(T(K(d_\mathrm{max}-d_\mathrm{min}))^2)$. 
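To make the cost concrete, a truncated forward pass over the augmented variable $z_t = (x_t, d_t)$ can be sketched as follows. This is our own illustration, not the authors' code: the initial message (uniform over states, duration drawn from the duration distribution) and all names are assumptions, and durations are hard-truncated at `d_max` exactly as in the approach described above:

```python
import numpy as np

def forward_messages(Y, A, dur_pmf, obs_lik, d_max):
    """Truncated EDHMM forward recursion over z_t = (state, remaining duration).

    A       : K x K transition matrix (zero diagonal)
    dur_pmf : dur_pmf[k][d-1] = p(d | x = k) for d = 1, ..., d_max
    obs_lik : obs_lik(y, k) = p(y | x = k)
    Returns a list of dicts alpha[t][(k, d)] (unnormalised).
    """
    K = A.shape[0]
    # initial message: uniform over states, duration drawn from dur_pmf
    alpha = [{(k, d): dur_pmf[k][d - 1] / K * obs_lik(Y[0], k)
              for k in range(K) for d in range(1, d_max + 1)}]
    for y in Y[1:]:
        msg = {}
        for (k, d), a_prev in alpha[-1].items():
            if d > 1:                      # segment continues deterministically
                msg[(k, d - 1)] = msg.get((k, d - 1), 0.0) + a_prev
            else:                          # segment ends: switch state, draw new duration
                for j in range(K):
                    for d_new in range(1, d_max + 1):
                        p = A[k, j] * dur_pmf[j][d_new - 1]
                        msg[(j, d_new)] = msg.get((j, d_new), 0.0) + p * a_prev
        for (k, d) in msg:                 # fold in the observation likelihood
            msg[(k, d)] *= obs_lik(y, k)
        alpha.append(msg)
    return alpha
```

The inner double loop over successor states and durations is exactly the $O((K\,d_\mathrm{max})^2)$ per-step cost that the beam sampler below avoids.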
Truncation yields inference that will simply fail if an actual duration lies outside hard-coded allowable duration intervals. Considering all possible durations up to length $T$ is often computationally impossible. The beam sampler we propose behaves like a dynamic version of the truncation approach, automatically defining and scaling per-state duration truncation intervals. Better still, the way it does this yields an asymptotically exact sample with no risk of incorrect inference resulting from incorrectly pre-specified duration truncations. We do not characterize the computational complexity of the proposed beam sampler in this work but note that it is upper bounded by $O(T(KT)^2)$ (i.e., the beam sampler admits durations of length equal to the entire sequence) but in practice it is found to be as efficient as, or more efficient than, the risky hard-truncation approach. \subsection{EDHMM Beam Sampling} A recent contribution to inference in the infinite Hidden Markov Model (iHMM) \cite{Beal2002} suggests a way around truncation \cite{vanGael2008}. The iHMM is an HMM with a countable number of states. Computing the forward message for a forward filtering, backward sampler for the latent states in an iHMM also requires a sum over a countable number of elements. The ``beam sampling'' approach \cite{vanGael2008}, which we can apply largely without modification, is to truncate this sum by introducing a ``slice'' \cite{Neal2003} auxiliary variable $\mathcal{U} = \{u_1, u_2, \ldots,u_T\}$ at each time step. The auxiliary variables are chosen in such a way as to automatically limit each sum in the forward pass to a finite number of terms while still allowing all possible durations. The particular choice of auxiliary variable $u_t$ is important.
We follow \cite{vanGael2008} in choosing $u_t$ to be conditionally distributed given the current and previous state and duration in the following way (see the graphical model in Figure \ref{fig:aux graphical model}): \begin{equation} \label{eqn:slice} p(u_t | z_t, z_{t-1}) = \frac {\mathbb{I}(0 < u_t < p(z_t | z_{t-1}))} {p(z_t | z_{t-1})} \end{equation} where $\mathbb{I}(\cdot)$ returns one if its operand is true and zero otherwise. Given $\mathcal{U}$ it is possible to sample the state $\mathcal{X}$ and duration $\mathcal{D}$ conditional posterior. Using notation $\mathcal{Y}_{t_1}^{t_2} = \{y_{t_1}, y_{t_1+1}, \ldots,y_{t_2}\}$ to indicate sub-ranges of a sequence, the new forward messages we compute are: \begin{eqnarray} \hat{\alpha}_t(z_t) &=& p(z_t, \mathcal{Y}_1^t , \mathcal{U}_1^{t}) \label{eqn:scaled forward} = \sum_{z_{t-1}} p(z_t, z_{t-1} , \mathcal{Y}_1^t , \mathcal{U}_1^{t}) \\ &\propto& \sum_{z_{t-1}} p(u_{t} | z_t, z_{t-1}) p(z_t, z_{t-1} , \mathcal{Y}_1^t, \mathcal{U}_1^{t-1}) \nonumber \\ &=& \sum_{z_{t-1}} \mathbb{I}(0 < u_{t} < p(z_t | z_{t-1})) p(y_t|z_t) \hat{\alpha}_{t-1}(z_{t-1}) \nonumber. \end{eqnarray} The indicator function $\mathbb{I}$ results in non-zero probabilities in the forward message for only those states $z_t$ whose likelihood given $z_{t-1}$ is greater than $u_t$. The beam sampler derives its computational advantage from the fact that the set of $z_t$'s for which this is true is typically small. 
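The pruning effect of the indicator is easy to see in isolation. At a segment boundary $p(z_t | z_{t-1})$ contains the duration factor $p(d_t | x_t)$, so only durations whose probability exceeds the slice level survive; for a unimodal distribution with decaying tails, such as the Poisson used in our experiments, this is a small finite interval around the mode. A sketch, simplified to the duration factor alone (the function name and cap are our own):

```python
import math

def allowed_durations(u, lam, d_cap=200):
    """Durations d in {1, ..., d_cap} whose Poisson(lam) probability exceeds
    the slice level u.  The Poisson pmf is unimodal and decays to zero, so
    for any u > 0 this set is a finite interval around the mode -- exactly
    the dynamic truncation that the beam sampler exploits."""
    log_u = math.log(u)
    # work in log space to avoid overflow in lam**d / d!
    log_pmf = lambda d: -lam + d * math.log(lam) - math.lgamma(d + 1)
    return [d for d in range(1, d_cap + 1) if log_pmf(d) > log_u]
```

For instance, `allowed_durations(0.01, 15.0)` returns a short contiguous run of durations around 15, rather than all durations up to $T$.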
The backwards sampling step recursively samples a state sequence from the distribution $p(z_{t-1} | z_{t}, \mathcal{Y}, \mathcal{U})$ which can be expressed in terms of the forward variable: \begin{eqnarray} p(z_{t-1} | z_{t}, \mathcal{Y}, \mathcal{U}) &\propto& p(z_{t},z_{t-1}, \mathcal{Y}, \mathcal{U}) \label{eqn:backward} \\ & \propto & p(u_{t} | z_t, z_{t-1})p(z_{t}|z_{t-1}) \hat{\alpha}_{t-1}(z_{t-1}) \nonumber\\ & \propto & \mathbb{I}(0 < u_{t} < p(z_{t} | z_{t-1})) \hat{\alpha}_{t-1}(z_{t-1}).\nonumber \end{eqnarray} The full EDHMM beam sampler is given in Algorithm \ref{alg:beam}, which makes use of the forward recursion in \eqref{eqn:scaled forward}, the slice sampler in \eqref{eqn:slice}, and the backwards sampler in \eqref{eqn:backward}. \subsection{Related Work} The need to accommodate explicit state duration distributions in HMMs has long been recognised. Rabiner \cite{Rabiner89} details the basic approach, which expands the state space to include dwell time before applying a slightly modified Baum-Welch algorithm. This approach specifies a maximum state duration, limiting practical application to cases with short sequences and dwell times. The approach was later generalised under the name ``segmental hidden Markov models'' to include more general transitions than those Rabiner considered, allowing the next state and duration to be conditioned on the previous state and duration \cite{Gales93}. Efficient approximate inference procedures were developed in the context of speech recognition \cite{Ostendorf96}, speech synthesis \cite{Zen07}, and evolved into symmetric approaches suitable for practical implementation \cite{Yu2006}. Recently, a ``sticky'' variant of the hierarchical Dirichlet process HMM (HDP-HMM) has been developed \cite{Fox2008}. The HDP-HMM has countable state-cardinality \cite{Teh06} allowing estimation of the number of states in the HMM; the sticky aspect addresses long dwell times by introducing a parameter in the prior that favours self-transition.
\section{Experiments} \label{sec:experiments} \subsection{Synthetic Data} The first experiment uses the 500 data points (Figure \ref{fig:experiment1_data}) generated from a three state EDHMM. The duration distributions were Poisson with rates $\lambda_1 = 5$, $\lambda_2 = 15$, $\lambda_3 = 20$; each observation distribution was Gaussian with means of $\mu_1 = -3$, $\mu_2 = 0$, and $\mu_3 = 3$, each with a variance of 1. The transition distributions $A$ were set to \begin{equation*} \begin{bmatrix} 0 & 0.3 & 0.7 \\ 0.6 & 0 & 0.4 \\ 0.3 & 0.7 & 0 \end{bmatrix}. \end{equation*} \begin{figure} \subfloat[][]{ \includegraphics[width=\textwidth]{experiment_2_X.pdf} \label{fig:exp_1_state} } \\ \subfloat[][]{ \includegraphics[width=\textwidth]{experiment_2_Y.pdf} \label{fig:exp_1_data} } \caption{Example a) state and b) observation sequence generated by the explicit duration HMM. Here $K$ = 3; $p(y_t|x_t=j) = \mathrm{N}(\mu_j, 1)$ with $\mu_1 = -3$, $\mu_2 = 0$, and $\mu_3 = 3$; and $p(d_t|x_t=j) = \mathrm{Poisson}(\lambda_j)$ with $\lambda_1 = 5$, $\lambda_2 = 15$, and $\lambda_3 = 20$.} \label{fig:experiment1_data} \end{figure} Broad, uninformative priors were chosen for the parameters of the duration and observation distributions. The observation distribution parameters were given a normal-inverse-Wishart (N-IW) prior with parameters $\nu_0 = 2$, $\Lambda_0 = 1$, $\kappa=0.1$ and $\mu_0 = 0$. The rate parameters for all states were given $\mathrm{Gamma}(1, 10^{5})$ priors. One thousand samples were collected from the EDHMM beam sampler after a burn-in of 500 samples. The learned posterior distribution of the state duration parameters and means of the observation distributions are shown in Figure \ref{fig:experiment1_results}. The EDHMM achieves high accuracy in the estimated posterior distribution of the observation means, despite the overlap in observation distributions. 
The rate parameter distributions are reasonably estimated given the small number of observed segments. Figure \ref{fig:allowed} shows the mean number of transitions visited per time point over each iteration of the sampler. \begin{figure} \subfloat[][]{ \includegraphics[width=0.5\textwidth]{posterior_means.pdf} \label{fig:posterior_means} } \subfloat[][]{ \includegraphics[width=0.5\textwidth]{posterior_rates.pdf} \label{fig:posterior_rates} } \caption{Samples from the posterior distributions of a) the observation distribution means and b) the duration distribution rate parameters for the data shown in Figure \ref{fig:experiment1_data}.} \label{fig:experiment1_results} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{number_transitions_visited.pdf} \caption{Mean number of transitions considered per time point by the beam sampler for 1000 post-burn-in sweeps on data from Figure \ref{fig:experiment1_results}. Consider this in comparison to the $(KT)^2 = O(10^6)$ per time point transitions that would need to be considered by standard forward backward without truncation, a surely-safe, truncation-free, but computationally impractical alternative.} \label{fig:allowed} \end{figure} A second experiment was performed to demonstrate the ability of the EDHMM to distinguish between states having differing duration distributions but the same observation distribution. The same model and sampling procedure was used as above except here $\mu_1 = 0$, $\mu_2 = 0$, and $\mu_3 = 3$. Figure~\ref{fig:experiment2_results} shows that the sampler clearly separates the high state associated with $\mu_3$ from the other states and clearly reveals the presence of two low states with differing duration distributions. Figure~\ref{fig:exp_2_state} shows posterior samples that indicate that the model is mixing over ambiguities about states $0$ and $1$ as it should. 
\begin{figure} \subfloat[][]{ \includegraphics[width=\textwidth]{experiment_3_Y.pdf} \label{fig:exp_2_data} } \\ \subfloat[][]{ \includegraphics[width=\textwidth]{experiment_3_X.pdf} \label{fig:exp_2_state} } \\ \subfloat[][]{ \includegraphics[width=0.5\textwidth]{posterior_means_exp3.pdf} \label{fig:posterior_means_3} } \subfloat[][]{ \includegraphics[width=0.5\textwidth]{posterior_rates_exp3.pdf} \label{fig:posterior_rates_3} } \caption{Beam sampler results from a system with identical observation distributions but differing durations. Observations are shown in a); true states in b) overlaid with 20 state traces produced by the sampler. Here we have parameters $\mu_1 = \mu_2 = 0$, $\mu_3 = 3$ and $\lambda_1 = 5$, $\lambda_2 = 15$, $\lambda_3 = 20$. Samples from the posterior observation-mean and duration-rate distributions are shown in c) and d), respectively.} \label{fig:experiment2_results} \end{figure} \section{Discussion} \label{sec:dicussion} We presented a beam sampler for the explicit state duration HMM. This sampler draws state sequences from the true posterior distribution without any need to make truncation approximations. It remains future work to combine the explicit state duration HMM and the iHMM. Python code associated with the EDHMM is available online.\footnote{http://github.com/mikedewar/EDHMM} \bibliographystyle{plain}
\section{Introduction} There is a long (but sparse) debate in the community of atmospheric physics and meteorology about the prediction horizon of weather forecasts. As prominently pointed out by Lorenz (1963)\cite{Lorenz1963}, the chaotic nature of the atmosphere when seen as a dynamical system has the consequence of sensitive dependence on initial conditions. Hence, even infinitesimal errors in the initial condition of a forecast model compared to the real atmospheric state grow exponentially fast in time and eventually reach macroscopic scales. Then the model forecast has no similarity to the real state anymore, and the forecast time after which this is the case is called the prediction horizon. As argued in \cite{Bauer2015}, this horizon has been pushed forward by about 1 day/decade in the past 3-4 decades, and is now, with current observation technology including remote sensing, current data assimilation, today's physical understanding of atmospheric processes, and computer power, at around 10 days. It is also argued in \cite{Bauer2015} that this progress will continue and that one day one might be able to perform {\sl multi-seasonal} weather forecasts with high-resolution models. Indeed, this assumption is compatible with the conventional idea of {\sl exponential error growth} defined by positive Lyapunov exponents\cite{ott_book}: An initial perturbation of a state vector of size $E_0$ grows in time $t$ as $E(t)=E_0 e^{\lambda t}$, where $\lambda >0$ is the largest Lyapunov exponent of the system.
A reduction of $E_0$ by $1/e$ (by more accurate or more complete observations of the current state) will extend the prediction horizon linearly by one Lyapunov time $1/\lambda$, \begin{equation} E(t) = E_0 \mathrm{e}^{\lambda t},\;\;\; t_{\rm pred} = \frac{1}{\lambda}\left( \ln(E_\infty) - \ln(E_0)\right) \rightarrow \infty\;\mbox{ for }{E_0\to 0}, \end{equation} where $E_\infty$ is the diameter of the attractor, which is the saturation amplitude of any errors and which means complete loss of information about the true trajectory at time $t_{\rm pred}$, the prediction horizon. Even though it is commonly assumed that reducing initial errors by orders of magnitude for a linear gain in prediction horizon is infeasible, so that this classical notion of chaos usually implies unpredictability in the long run, at least in principle there is no limit to the prediction horizon. There have long been warnings in the atmospheric physics literature that error growth might be dramatically different here, starting with Thompson\cite{THOMPSON1957} in 1957, Robinson\cite{Robinson1967} in 1967, and Lorenz\cite{Lorenz1969} in 1969. In a recent paper Palmer et al. \cite{Palmer2014} coined the notion of 'the \textit{real} butterfly effect' for a strictly finite prediction horizon. In Refs. \cite{THOMPSON1957,Lorenz1969,Robinson1967,Robinson1971,Leith1972} the authors investigated the Navier-Stokes equation or some similar empirical flow equation in two and three dimensions, with and without dissipation and different energy spectra ranging from $ E(k) \sim k^{-5/3} $ to $ k^{-3} $. They all conclude that there exists a fundamental limit of predictability which is an \textit{intrinsic} property of the investigated flow equation.
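The linear trade-off expressed by Equation (1) is easily checked numerically: every $e$-fold reduction of $E_0$ buys exactly one Lyapunov time $1/\lambda$ of additional horizon, so the horizon is unbounded as $E_0 \to 0$. A sketch with arbitrary illustrative values of $\lambda$ and $E_\infty$:

```python
import math

lam, E_inf = 2.0, 22.0   # illustrative values, not fitted to any system

# prediction horizon from Eq. (1)
t_pred = lambda E0: (math.log(E_inf) - math.log(E0)) / lam

# gain in horizon per e-fold reduction of the initial error
gains = [t_pred(E_inf * math.e**-(k + 1)) - t_pred(E_inf * math.e**-k)
         for k in range(1, 6)]
# every entry equals 1/lam: the horizon grows without bound as E0 -> 0
```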
Applying the results to the atmosphere by setting similar energy spectra and time and length scales, the authors conclude that this fundamental limit of predictability of the atmosphere lies between 7 days \cite{THOMPSON1957}, 10 days \cite{Robinson1967} and approximately 14 days \cite{Lorenz1969,Smagorinsky1969}. A recent ECMWF study \cite{Zhang2019} of weather forecast systems arrives at an intrinsic limit of 15 days. Atmospheric dynamics takes place on a hierarchy of spatial and temporal scales which are coupled, see Figure~\ref{fig:Meteo_plot_atmosphere_time_spatial_scales}. Whereas synoptic scale structures of sizes of several 1000km (e.g., high and low pressure systems) live on time scales of several days, small scale structures such as clouds show dynamics on the scale of minutes to hours. It is plausible that, along with these lifetimes, error growth also takes place on different time scales: the smaller the spatial extent of some structure, the faster it evolves, and hence its prediction might fail correspondingly earlier. This has given rise to the notion of {\sl scale dependent error growth}: The conventional Lyapunov exponent should be replaced by a scale dependent quantity, e.g., a finite size Lyapunov exponent \cite{Cencini2013}. Indeed, in a study of scale dependent error growth in the Global Forecast System of the National Center for Environmental Prediction, Harlim et al.~\cite{Harlim2005} have shown that there is a scale dependent error growth rate which becomes very large if the errors become small (see Figure 1 in \cite{Harlim2005}). We propose that if the error growth rate with {\sl decreasing} error magnitude {\sl grows} sufficiently fast, then this will induce a finite prediction horizon.
This behavior could naturally occur in systems which are described by partial differential equations (PDEs), such as the Navier-Stokes equations, and would require that the dynamics creates a hierarchy of spatial and temporal scales as it exists in the atmosphere. In the mathematical sense, this would imply a maximal Lyapunov exponent of $\lambda=\infty$, which, however, would be inaccessible in standard numerical simulations because of coarse graining of the continuum and the resulting cut-offs in the spatial scale. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure1} \caption{Typical meteorological graph about time and length scales in the atmosphere. Own reproduction.} \label{fig:Meteo_plot_atmosphere_time_spatial_scales} \end{figure} In the remainder of this Letter, we will first introduce the idea of a power law dependence of the error growth rate on the error magnitude and show that this leads to a strictly finite prediction horizon. We then introduce a class of dynamical systems which shows exactly this behavior, and present as a specific example a hierarchy of coupled Lorenz96-1 models where numerical simulations validate a power law divergence of the error growth rate. Finally, we reinterpret data from the study by Harlim et al.\cite{Harlim2005} and show that what they observed is a power law divergence of error growth rates, which becomes evident in our new presentation of their data. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure2} \caption{Exponential error growth in chaotic systems in blue, power law error growth according to (3), prediction horizons indicated by vertical dashed lines.
The inset shows the divergence of scale dependent growth rate $\lambda(E)$ as opposed to a standard Lyapunov exponent.} \label{fig:Introduction_error_type_gesamt} \end{figure} \section{Power law divergence of scale dependent error growth} Let us assume that the dynamics exhibits a scale dependent error growth rate $\lambda(E) := \frac{\mathrm{d} \ln(E(t))}{\mathrm{d} t}|_{E(t)=E}$ where the rate of growth is a power law with an exponent $-\beta$ and some coefficient $a>0$ and where $E$ is the magnitude of a perturbation: \begin{equation}\label{eq:2} \frac{\mathrm{d} \ln(E)}{\mathrm{d} t} = \frac{\dot{E}}{E} = a E^{-\beta}. \end{equation} Integration by separation of variables leads to a power law growth of errors. \begin{equation}\label{eq.:power-law-error-growth} E(t) = (E_0^\beta + a \beta t)^{1/\beta}. \end{equation} As for classical exponential error growth, Equation (\ref{eq.:power-law-error-growth}) becomes invalid for very large times $t$ when the error saturates at a value $E_\infty$ related to the finite extent of the attractor. What is strikingly different here is the diverging error growth rate for small $E_0$ and small $t$. The linear increment $ \Delta t $ we gain in prediction time every time we cut the error $E_0$ into half becomes smaller and smaller (see Figure \ref{fig:Introduction_error_type_gesamt}). The overall prediction time converges to a finite value - a maximum prediction horizon $ t_{\rm{max}} $. If the tolerable maximal error is denoted by $E_{\rm tol}$ then $t_{\rm max}$ is given by \begin{equation}\label{eq:4} t_{\rm pred} = \frac{E_{\rm tol}^{\beta}-E_0^{\beta}}{a \beta} \rightarrow t_{\rm{max}} = \frac{E_{\rm tol}^{\beta}}{a \beta} < \infty \mbox{ for }{E_0\to 0} \end{equation} This is a new and severe form of chaos, which we propose to exist in systems with an infinite dimensional phase space. 
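The saturation of the prediction horizon described by Equations (\ref{eq.:power-law-error-growth}) and (\ref{eq:4}) can be illustrated directly; the parameter values below are arbitrary illustrations:

```python
# Power-law error growth: dE/dt = a * E**(1 - beta), i.e. Eq. (2).
a, beta, E_tol = 1.0, 0.5, 1.0

def t_pred(E0):
    """Time for the error to grow from E0 to E_tol, Eq. (4)."""
    return (E_tol**beta - E0**beta) / (a * beta)

t_max = E_tol**beta / (a * beta)   # finite limit of t_pred as E0 -> 0

# halving the initial error again and again gains less and less time:
horizons = [t_pred(1e-2 * 2.0**-k) for k in range(40)]
```

The list `horizons` is strictly increasing but approaches `t_max`, in contrast to the unbounded logarithmic gain of the exponential case.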
While we assume that under certain conditions this will be exhibited by PDEs which intrinsically form cascades such as the Richardson cascade in turbulence, we present here a paradigmatic model class which is based on coupled low-dimensional systems with a hierarchy of scales imposed by our choice of parameters. Given a chaotic $N$-dimensional dynamical system in terms of an ODE, $\dot{\vec x} = \vec F(\vec x)$, we introduce a hierarchical coupling by defining a family of spatial scaling factors $\alpha_i$ decreasing in $i$ and of temporal scaling factors $\tau_i$ increasing in $i$. Here $i$ denotes the level of the hierarchy, where $i=1$ is the top level and $i=L$ the lowest. For the $i$-th level, we replace $\vec x$ by $\vec x_i/\alpha_i$ and $t$ by $\tau_i t$ and we obtain the equations of motion \begin{equation}\label{eq:hier} \dot{\vec{x}}_i = \tau_i [ \alpha_i \vec F (\vec x_i/\alpha_i) + \vec C(\vec x_{i+1}, \vec x_{i-1})] \;, \end{equation} for $i=1,\ldots,L$. Here, $\vec C(\vec a,\vec b)$ denotes a {\sl weak} coupling term which in the simplest case might be linear, $\vec C(\vec a,\vec b)=\vec a + \vec b$ (weak global coupling is also conceivable). For a finite number of levels $L$, the non-existent coupling inputs $\vec x_0$ and $\vec x_{L+1}$ are set to zero, and the system then has $N L$ degrees of freedom. Since the coupling is weak, if the dynamics $\vec F$ generates just one positive Lyapunov exponent $\lambda$, the hierarchical system has $L$ positive Lyapunov exponents which are approximately $\lambda_i\approx\lambda \tau_i$. We choose the families $\alpha_i$ and $\tau_i$ to be monotonic in such a way that the top level of the hierarchy is slow but that the spatial extent of its attractor and the error saturation value $E_\infty$ is large, and that the lowest level is the fastest and its phase space range is the smallest.
For infinitesimal errors, the error growth is governed by the maximum Lyapunov exponent $\lambda \tau_L$ of the fastest time scale, but this error growth saturates at the scale $\alpha_L$ which is small. Then the second largest Lyapunov exponent of the second lowest level takes over, until this error growth also saturates at a scale of $\alpha_{L-1}$, and so on. This way we generate a scale dependent error growth rate, where the properties of this scale dependence are tunable. One specific tuning which generates the proposed power-law divergence of the error growth rate is to choose both families of scaling factors in a geometric way, i.e., $\tau_i = c^i$ and $\alpha_i = d^i$. As we will demonstrate with the help of the specific example below, the resulting power $\beta$ of the scaling of $\lambda(E)\propto E^{-\beta}$ is then $\beta= \ln c /\ln d$. Clearly, for finite $L$ this divergence is cut off at a maximum rate of $\lambda(E)\le \lambda\tau_L$. \section{Multi-hierarchical model L96-H}\label{sec:L96_H_novel_model} We now specify the general model class by choosing the model L96-1 introduced in Lorenz (1996)\cite{Lorenz1996} for the dynamics $\vec F(\vec x)$. Its governing equations, using the notation of \cite{Lorenz1996}, read \begin{equation} \dot{x}_n = x_{n-1} \left(x_{n+1} - x_{n-2} \right) - x_{n } + F \end{equation} with $ n = 1 \dots N $, indices cyclic such that $ x_{n\pm N} = x_n $, and $ F $ a constant driving force. For $ N > 6 $ and $ F > 8 $ all instances behave chaotically with increasing positive largest Lyapunov exponent for increasing $ N $ and $ F $. The equations of motion of the resulting hierarchical model for some inner level $i$ read \begin{eqnarray} \dot{x}_{n, i} &=& \tau_i \Big[ \frac{1}{\alpha_i} x_{n-1, i}\left(x_{n + 1, i} - x_{n - 2, i} \right) - x_{n, i} \\ && + \alpha_i F_i + x_{n, i + 1} + \frac{\alpha_{i+1}}{\alpha_{i-1}} x_{n, i - 1} \Big]\;.
\end{eqnarray} The system is $ L N $ dimensional and the state space can be divided into $ L $ subspaces of dimension $ N $ for each level of the hierarchy. We denote the state vector by $ \vec{X} = \{\vec{x}_1, \dots, \vec{x}_L \} $ with $ \vec{x}_i \in {\rm I\!R}^N $. The coupling is bidirectional with \textit{upwards} coupling from lower to higher level $ x_{n, i + 1} $ and with \textit{downwards} coupling $ \frac{\alpha_{i+1}}{\alpha_{i-1}} x_{n, i - 1} $. The pre-factor $ \frac{\alpha_{i+1}}{\alpha_{i-1}} $ is chosen such that the downwards coupling has the same magnitude as the upwards coupling. In the lowest level the undefined scale $\alpha_{L+1}$ is chosen as a continuation of the sequence of $\alpha_i$. The term $\alpha_i F_i + x_{n, i + 1} + \frac{\alpha_{i+1}}{\alpha_{i-1}} x_{n, i - 1} $ can be considered a time-dependent driving force $ F_i(t) $. It is important to make sure that $ F_i(t) > \alpha_i \cdot 8 $ for all times to ensure that each level is chaotic. In the following, we will present numerical results for this system for the parameters $ N = 7$, $F = 15$ for which the single level dynamics (without coupling) is chaotic with a maximal Lyapunov exponent of $\lambda\approx 2.66$ and an error saturation value $E_\infty\approx 22$. We define the \textit{error} $E(t)$ as the ensemble average of the Euclidean distance between a reference trajectory and an initially randomly perturbed error trajectory with perturbation strength $E_0 $. Thereby we distinguish between the error of the total system $ \vec{X} $ denoted $ E_{\rm{tot}}(t) $ and the error $ E_i(t) $ regarding only the subspace of one single level $ \vec{x}_i $. The scale dependent \textit{error growth rate} is defined as the time derivative of the logarithm of the error $\frac{\mathrm{d} \ln E}{\mathrm{d} t} $ as a function of the error magnitude $E(t)$ at time $t$.
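The right-hand side of the model is compact enough to state directly. The sketch below is our own illustration of Equation (7) with the boundary conventions described above; note the zero-based level index (row $0$ is the slow top level, row $L-1$ the fast bottom level), and that we use a single constant $F$ for all $F_i$:

```python
import numpy as np

def l96h_rhs(X, tau, alpha, F):
    """Right-hand side of the hierarchical L96-H model, Eq. (7).

    X     : array of shape (L, N), one level per row (row 0 = top level)
    tau   : temporal scale factors tau_i, length L
    alpha : spatial scale factors alpha_i, length L + 1
            (alpha[L] continues the sequence for the lowest level)
    """
    L, N = X.shape
    dX = np.empty_like(X)
    for i in range(L):
        x = X[i]
        # x_{n-1} (x_{n+1} - x_{n-2}) with cyclic indices
        advect = np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2))
        up = X[i + 1] if i + 1 < L else 0.0          # coupling from the faster level below
        down = (alpha[i + 1] / alpha[i - 1]) * X[i - 1] if i > 0 else 0.0
        dX[i] = tau[i] * (advect / alpha[i] - x + alpha[i] * F + up + down)
    return dX
```

As a sanity check, a single uncoupled level with $\alpha=\tau=1$ reduces to the standard L96-1 system, whose fixed point $x_n = F$ gives a vanishing right-hand side.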
Indeed, one can also study the propagation of the error from level to level by initial perturbations in selected levels only, which leads to interesting transient behaviors but eventually converges to error growth as for a global perturbation. We study a hierarchy of $ L = 5 $ levels with the scale factors $ \tau_i = 2^{i-5}$ and $ \alpha_i = 10^{5-i}$. The random initial perturbation has magnitude $E_0=10^{-2}$ while the saturation value is at $E_{\infty,1}\approx E_\infty\alpha_1 = 22\cdot 10^4$. In Figure \ref{fig:L96_H_5_power_law} the error growth is shown on a \textit{double logarithmic} plot. Both the total error $E_{\rm tot}(t)$ (blue) and the error growth of the levels $E_i(t)$ show power-law behavior, where the errors measured in the sub-spaces of a certain level have additional features to be discussed elsewhere. For the resulting power law error growth, the interplay of level-$i$-Lyapunov exponents $\lambda \tau_i$ and of their saturation scales $E_{\infty,i} \approx E_\infty\alpha_i$ is crucial. The inset shows the numerically determined error growth rates as a function of error magnitude. The parameters $ \tau_i $ and $ \alpha_i $ are chosen such that the error growth rate decreases by a factor of $ 1/2 $ every time the error becomes larger by a factor of $ 10 $. This proportionality is shown by the bold dashed line with $ \mathrm{d}(\ln E)/\mathrm{d} t \propto E^{-\ln(2)/\ln(10)} $. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure3} \caption{Power-law error growth of the model L96-H on a double-logarithmic plot.} \label{fig:L96_H_5_power_law} \end{figure} \section{Re-analysis of the Harlim et al. results} We have presented evidence that a power law divergence of error growth rates can be realized by a dynamical system with properties which resemble the observed coupling of time and length scales in the atmosphere. Here, we want to strengthen this concept by a study of a numerical weather forecast system.
Since the authors of this article have neither the skills nor the resources to do a study on real weather forecasts, we use published results to support our ideas. Harlim et al.\cite{Harlim2005} have performed extensive numerical experiments with the Global Forecast System of the National Center for Environmental Prediction (NCEP-GFS), focusing in their studies on mid-latitude wind prediction (vorticity). We recall here some details from their original publication; for more see\cite{Harlim2005}. They applied perturbations to reference trajectories and measured numerically the rate of divergence of two such trajectories as a function of their Euclidean distance in phase space. The results are depicted in Figure 1 of \cite{Harlim2005} as a scatter plot of error growth rate versus error magnitude. We used the free software ``WebPlotDigitizer'' \cite{WebPlotDigitizer} to obtain the coordinates of an essential sub-set of the dots in this diagram. These data, in the same representation as the original figure (inset), and on a doubly logarithmic scale, are shown in Figure \ref{fig:Harlimdata}. Evidently, the error growth rate in this system can be well described by a power law with divergence for small errors, Equation (\ref{eq:2}), and the estimated power $\beta$ is about $0.63$. There is hence considerable evidence that a real weather forecast model suffers not only from scale-dependent error growth, but that this is indeed governed by a power law with a maximum prediction horizon of 15-16 days, when we use Equation (\ref{eq:4}), insert $E_\infty=1$ (which is the saturation value in the normalization of \cite{Harlim2005}), a time unit of days, and $a=0.1$ and $\beta=0.63$ as obtained by our fit.
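This horizon estimate is easy to reproduce. Assuming (this is our reading of Equation (\ref{eq:4})) that the fitted power law $\mathrm{d}\ln E/\mathrm{d}t = a\,E^{-\beta}$ is integrated from an infinitesimal initial error up to saturation, the total time stays finite, $T = E_\infty^{\beta}/(a\beta)$:

```python
# Prediction horizon implied by a diverging error-growth-rate power law.
# Assumption (our reading of Equation (4)): d(ln E)/dt = a * E**(-beta), so
# dE/dt = a * E**(1-beta) and T = integral_0^Einf E**(beta-1)/a dE = Einf**beta/(a*beta).
a, beta, E_inf = 0.1, 0.63, 1.0   # fitted values; E_inf = 1 in the normalization of Harlim et al.
T_horizon = E_inf**beta / (a * beta)   # in days

# cross-check by forward-Euler integration from a tiny (but nonzero) initial error
E, t, dt = 1e-12, 0.0, 1e-4
while E < 0.99 * E_inf:
    E += dt * a * E**(1.0 - beta)
    t += dt
```

With $a=0.1$, $\beta=0.63$, and $E_\infty=1$ this gives $T \approx 15.9$ days, consistent with the 15--16 day horizon quoted above.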
\begin{figure} \includegraphics[width=0.9\columnwidth]{figure4} \caption{\label{fig:Harlimdata}The dots are taken from Figure 1 of Harlim et al.\cite{Harlim2005} and denote the error growth rate in units of 1/day as a function of error magnitude for a numerical weather model. The line is a power-law fit with power $\beta=0.63$. The inset shows the scanned data in a representation as in the original publication and verifies that our recording of the plotted data is reasonable.} \end{figure} \section{Conclusions} Based on the idea of scale-dependent error growth, which is motivated by meteorological evidence, we proposed the possibility of deterministic dynamical systems with a strict prediction horizon, which arises if the error growth rate diverges for small error magnitudes like a power law. We proposed a class of chaotic dynamical systems which exhibit such a behavior and illustrated this by a model system. The system models the spatial and temporal hierarchies present in atmospheric dynamics. We then re-analyzed data produced in \cite{Harlim2005} in terms of a power-law divergence of error growth rates and found that indeed in this weather forecast system this power-law divergence is present and that the maximum forecast range is limited to 15 days. We find it plausible that the same holds for other weather forecast systems, as already stated for the European IFS\cite{Zhang2019}. The dynamical origin of this phenomenon lies in the linkage of spatial and temporal scales of this multi-scale phenomenon, where the smallest scales are the fastest.\\[.3cm] \ack We are grateful for fruitful discussions with M. Firmbach, C. Dubslaff, and B. L\"unsmann.
\section{\label{Sec:Intro}Introduction} Liquid crystalline gels\cite{2003LCEWarner.M;Terentjev.E} are special soft materials that incorporate the symmetry properties of liquid crystalline\cite{1993PhysLC_Gennes.P;Prost.J} phases into crosslinked polymeric backbones, so that the translational response of the crosslinked polymeric network and the orientational response of the liquid crystalline mesogens are coupled together. Among all possible liquid crystalline gels, the nematic gel has the simplest symmetry: the crosslinked polymeric backbones are spontaneously elongated along one particular direction (usually the nematic director $\mathbf{\hat{n}}$) under the effect of the broken symmetry of the nematic solvent. The uniaxial prolate ellipsoidal polymer backbones can be described by a step length tensor $l_{ij}=l_\perp\delta_{ij}+(l_\parallel-l_\perp)n_in_j$, and the anisotropy parameter $r$ is defined as the ratio of the effective step lengths of the polymer coil parallel ($l_{\parallel}$) and perpendicular ($l_{\perp}$) to the nematic director $\mathbf{\hat{n}}$. The value of $r$ depends on the symmetry properties of the nematic solvent: $r$ increases as the system becomes more ordered, i.e. lower temperature for a thermotropic liquid crystal. When there are no rigid mechanical constraints, this relationship can be verified by observing the macroscopic shape change of the nematic gel between the isotropic phase and the nematic phase\cite{__In-Preparation_Vol:_Pg:_Meng.G;Meyer.R,2002_11_Physical-Review-Letters_Vol:89_Pg:225701_Selinger.J;Jeon.H;etal}. As the temperature decreases, a mono-domain nematic gel sample becomes elongated along the direction of the nematic director when the sample changes from the isotropic phase into the nematic phase.
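For concreteness, the step length tensor and the anisotropy parameter can be realized with a few lines of linear algebra; the numerical values of $l_\parallel$ and $l_\perp$ below are illustrative, not measured ones:

```python
import numpy as np

# step-length tensor l_ij = l_perp * delta_ij + (l_par - l_perp) * n_i * n_j
l_par, l_perp = 3.0, 1.0                  # illustrative values; anisotropy r = l_par / l_perp
n = np.array([0.0, 0.0, 1.0])             # nematic director n_hat
l = l_perp * np.eye(3) + (l_par - l_perp) * np.outer(n, n)

# one eigenvalue l_par along n, two degenerate eigenvalues l_perp transverse to n
eigvals = np.sort(np.linalg.eigvalsh(l))  # -> [l_perp, l_perp, l_par]
r = eigvals[-1] / eigvals[0]              # anisotropy parameter
```

The prolate (uniaxial, $r>1$) shape of the polymer coil is encoded entirely in the degeneracy structure of these eigenvalues.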
When the material is confined by rigid boundaries, for example along the direction of elongation during cooling, a buckling transition is expected to happen, and it has been experimentally observed as stripe patterns under polarized light microscopy\cite{2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal}. Here, we report another buckling transition in thin layers of the same nematic liquid crystalline gel within a different confined geometry. The physical reason for this buckling transition can be qualitatively interpreted as the coupling between the mechanical response of the crosslinked polymeric backbones and the orientational response of the nematic solvent. An instability analysis was applied to explain the experimental phenomena, such as the temperature dependence of the critical point and of the wavelength of the periodic patterns. The study of this buckling phenomenon helps to provide insight into the buckling transitions found in other soft materials, e.g. microtubules\cite{1996_Physical-Review-Letters_Vol.76_No.21_Pg.4078-4081_Elbaum.M;Fygenson.D;Libchaber.A_} and F-actin networks\cite{2007_Nature_Vol.445_No.7125_Pg.295-298_Chaudhuri.O;Parekh.S;Fletcher.D_}. \section{Material and Experimental} The nematic gel material was synthesized in Kornfield's group\cite{2004_11_Macromolecules_Vol:37_Pg:8730--8738_Kempe.M;Kornfield.J;etal,2004_05_Macromolecules_Vol:37_Pg:3569--3575_Kempe.M;Kornfield.J;etal,2004_03_Nature-Materials_Vol:3_Pg:139--140_Palffy-Muhoray.P;Meyer.R,2004_03_Nature-Materials_Vol:3_Pg:177--182_Kempe.M;Scruggs.N;etal}. Briefly, 5-wt\% of an ABA triblock copolymer, which consists of polystyrene end blocks and a side-group liquid crystalline polymer middle block, was dissolved into a nematic solvent (4-\emph{n}-pentyl-4'-cyanobiphenyl, 5CB). The formation of the weak physical network is controlled by the order parameter of the solvent: the polystyrene end blocks are soluble in the isotropic phase and aggregate in the nematic phase.
The phase transition temperature of this nematic gel ($T_{\mathrm{NI}}\approx 37$\textcelsius ) is very close to the transition temperature of 5CB ($T_{\mathrm{NI}}\approx 35$\textcelsius ), and the reversibility of the physical crosslinking mechanism allows repeatable experiments to be conducted easily on the same sample. The nematic gel is loaded in a 25$\mu\mathrm{m}$ thick homeotropic electro-optical cell, in which a thin layer of \emph{n},\emph{n}-dimethyl-\emph{n}-octadecyl-3-aminopropyl-trimethoxysilyl-chloride was spin-coated on the surface of the transparent indium-tin oxide conductors of the glass slides. In the presence of a strong applied electric field ($E_{0}=3\mathrm{V}/\mu\mathrm{m}$, 1kHz) across the cell, the nematic mesogens can be easily aligned vertically throughout the cell, with their long axes pointing perpendicular to the boundary surfaces. The temperature of the sample was controlled by a Peltier-based microscope stage during the observation. Initially, the sample was heated up to 45\textcelsius\ into the isotropic phase with no crosslinked polymer network. The sample was then cooled down (2\textcelsius/minute) across its $T_{\mathrm{NI}}$ to certain final temperatures ($T_{f}$), and a thin-layer film of mono-domain nematic gel was obtained during the gelation throughout the cell volume, in which $\mathbf{\hat{n}}$ points perpendicular to the boundary surfaces. The sample appeared homogeneously dark under crossed polarizers while the aligning field maintained its original magnitude. When the electric field was turned off, birefringent stripe patterns with a wavelength of about 5$\mu\textrm{m}$ appeared throughout the sample, as shown in Fig.~\ref{Fig:BucklingTransition}.
Both the wavelength of the stripe pattern and the critical field ($E_{\mathrm{C}}$), at which the sample changes from homogeneously dark to birefringent patterned, depend on the sample's final cooling temperature ($T_{f}$); this temperature dependence is recorded and plotted in Fig.~\ref{Fig:TempExperiment}. It can be seen that both $E_{\mathrm{C}}$ and the wavelength stay on a plateau for 10\textcelsius$<T_{f}<$24\textcelsius\ , while for $T_{f}>24$\textcelsius\ $E_{\mathrm{C}}$ decreases as $T_{f}$ increases, while the wavelength increases. \begin{figure} \includegraphics[width=0.45\textwidth]{Figure1.pdf} \caption{\label{Fig:BucklingTransition} Optical micrographs of the birefringent stripe pattern of the buckled nematic gel observed through polarized optical microscopy with the applied electric field decreased to zero at different temperatures. The scale bars represent 50$\mu\mathrm{m}$ in all the images. } \end{figure} \begin{figure} \subfigure[]{\label{Fig:WavelengthTemp}\includegraphics[width=0.225\textwidth]{Figure2a.pdf}} \subfigure[]{\label{Fig:FieldTemp}\includegraphics[width=0.225\textwidth]{Figure2b.pdf}} \caption{\label{Fig:TempExperiment} Experimental measurement of the temperature dependence of \subref{Fig:WavelengthTemp} the wavelength of stripes in the buckled state and \subref{Fig:FieldTemp} the critical field ($E_{\mathrm{C}}$) of the buckling transition in the nematic gel. } \end{figure} \section{Discussions} \begin{figure} \subfigure[ $T_{i}\lessapprox T_{\mathrm{NI}}$]{\label{Fig:BucklingInitial}\includegraphics[width=0.225\textwidth]{Figure3a}} \subfigure[ $T_{f}<T_{i}$]{\label{Fig:BucklingFinal}\includegraphics[width=0.225\textwidth]{Figure3b}} \caption{\label{Fig:BucklingDiagram} Diagrams of the buckling transition in the nematic gel.
The rigid boundaries cause the material to buckle within itself as the polymeric backbones elongate in the more ordered state \subref{Fig:BucklingFinal}, in comparison to the less ordered gelation state \subref{Fig:BucklingInitial}, at which the liquid crystal mesogens are aligned vertically by the applied electric field $\mathbf{E}$. } \end{figure} The driving force of this transition can be attributed to the thermo-mechanical-optical coupling between the nematic liquid crystalline solvent and the anisotropic crosslinked polymeric backbones in the nematic gel within a confined boundary condition. The diagrams in Fig.~\ref{Fig:BucklingDiagram} are used to interpret the physical reasons for this transition. Initially, the monodomain nematic gel is formed at a higher initial temperature ($T_{i}\lessapprox T_{\mathrm{NI}}$) within the glass cell while the electric field is applied across the cell, as shown in Fig.~\ref{Fig:BucklingInitial}. The microscopic picture of the nematic gel is sketched as the ellipsoid, and the macroscopic shape of the gel is represented by the square shape in the diagram. As the temperature is lowered to the final temperature ($T_{f}<T_{i}$), the system becomes more ordered and the anisotropic polymeric coil elongates along the nematic director's direction, as shown in the ellipsoid of Fig.~\ref{Fig:BucklingFinal}. If there are no boundaries to confine the shape of the material, the macroscopic shape of the mono-domain gel sample will elongate vertically, which is illustrated by the rectangular dashed line. Such an ``artificial muscle'' effect has been observed experimentally in nematic elastomers and gels\cite{2002_11_Physical-Review-Letters_Vol:89_Pg:225701_Selinger.J;Jeon.H;etal}. When the sample is put into an environment with rigid constraints, e.g. a cell with two glass slides glued together, the material has to buckle within the boundaries in order to gain the elongation along the direction perpendicular to the rigid boundaries.
Due to the coupling between the translational response of the polymeric coil and the rotational response of the liquid crystalline mesogens, the nematic director $\hat{\mathbf{n}}$ will rotate correspondingly. An applied electric field of sufficient magnitude can be used to keep the nematic director aligned vertically, and the material buckles spontaneously as the electric field is decreased, due to the instability. To support our physical explanation, detailed analytic calculations are conducted and discussed in the following. \subsection{Modeling and Free Energy Calculation} The coordinate origin is placed at the middle of the cell gap and the $z$-axis is perpendicular to the cell boundaries ($z=\pm d/2$). Initially, the aligning electric field is applied along the $z$-axis ($\mathbf{E}=E_{0}\hat{\mathbf{z}}$) and the nematic director is aligned vertically ($\mathbf{\hat{n}}^0=\hat{\mathbf{z}}$) as well, as shown in Fig.~\ref{Fig:BucklingInitial}. The superscript $0$ is attached to parameters of the initial gelation state of the material. The polymeric networks are formed during the gelation process at temperature $T_{i}$, with $r^{0}$ as the anisotropy parameter. When the material is cooled to a lower final temperature $T_{f}<T_{i}$, the crosslinked polymeric network becomes more elongated along the nematic director $\hat{\mathbf{n}}$ as the nematic solvent becomes more ordered, and the anisotropy parameter $r$ becomes larger than its initial value, $r>r^0$. When the electric field is decreased to zero ($\mathbf{E}=0$), the elongated polymeric coils within the nematic liquid crystalline gel buckle with a displacement field within the $xz$-plane: $\mathbf{R}=\zeta\cos{kx}\cos{qz}\hat{\mathbf{x}}+\eta\sin{kx}\sin{qz}\hat{\mathbf{z}}$, in which $k$ is the wavevector of the shear wave within the $xy$-plane, which we select along the $x$-axis, and $q$ is the wavevector along the $z$-direction.
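As a quick consistency check (with illustrative numbers, and with the partial derivatives of the ansatz written out by hand), this displacement field is divergence-free, i.e. volume-preserving to linear order, precisely when the two amplitudes satisfy $q\eta=k\zeta$:

```python
import numpy as np

# R = (zeta cos(kx)cos(qz), 0, eta sin(kx)sin(qz)); illustrative parameter values
d = 25e-6                                # cell gap
k, q, zeta = 2 * np.pi / 5e-6, np.pi / d, 1.0

def div_R(eta, x, z):
    # hand-computed partial derivatives of the ansatz
    dRx_dx = -zeta * k * np.sin(k * x) * np.cos(q * z)
    dRz_dz = eta * q * np.sin(k * x) * np.cos(q * z)
    return dRx_dx + dRz_dz

x = np.linspace(0.0, 5e-6, 41)
z = np.linspace(-d / 2, d / 2, 41)
X, Z = np.meshgrid(x, z)

ok = div_R(k * zeta / q, X, Z)           # eta chosen from q*eta = k*zeta: divergence vanishes
bad = div_R(0.5 * k * zeta / q, X, Z)    # any other eta: divergence is nonzero
```

The shear wave therefore carries no volume change, consistent with treating the gel as incompressible.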
The value of $q$ is determined by the sample thickness $d$ as $q=\pi/d$ for the first harmonic mode. $\zeta$ and $\eta$ are related by the incompressibility condition\cite{2003LCEWarner.M;Terentjev.E} as $q\eta=k\zeta$. Such a shear motion of the polymeric network induces the nematic director to rotate within the $xz$-plane with small amplitude $\xi$ about the $z$-axis: $\mathbf{\hat{n}}=\xi\cos{kx}\sin{qz}\hat{\mathbf{x}}+\hat{\mathbf{z}}$. We plug these expressions into the free energy density, which includes the Frank curvature elastic energy of the nematic solvent\cite{1993PhysLC_Gennes.P;Prost.J} and the nematic rubber elastic energy of the crosslinked polymeric backbones\cite{2003LCEWarner.M;Terentjev.E}: \begin{widetext} \begin{eqnarray}\label{Eq:NematicCurvature} F&=&\frac{1}{2}\left( K_{S}\left(\nabla\cdot\mathbf{\hat{n}}\right)^2+K_{B}\left(\mathbf{\hat{n}}\times\left(\nabla\times\mathbf{\hat{n}}\right)\right)^2\right)-\frac{1}{2}\epsilon_a(\mathbf{E\cdot\hat{n}})^2\nonumber\\ &&+\frac{1}{2}\mu\cdot Tr(\underline{\underline{l^0}}\cdot\underline{\underline{\Lambda}}^T\cdot\underline{\underline{l}}^{-1}\cdot \underline{\underline{\Lambda}})-\frac{1}{2}A\mu\cdot Tr(\underline{\underline{\Lambda}}^T\cdot\underline{n}\cdot\underline{\underline{\Lambda}}-\underline{n^0}\cdot\underline{\underline{\Lambda}}^T\cdot\underline{n}\cdot \underline{\underline{\Lambda}}). \end{eqnarray} \end{widetext} In Eq.~\ref{Eq:NematicCurvature}, $K_{S}$ and $K_{B}$ are the curvature elastic constants of the nematic solvent; $\mu$ is the shear modulus of the gel and $A$ is the semisoftness coefficient; $\underline{\underline{\Lambda}}$ is the Cauchy strain tensor, $\Lambda_{ij}=\delta_{ij}+\partial R_{i}/\partial x_{j}$. By averaging this free energy density over space, minimizing with respect to $\xi$, and keeping terms only to second order in $\zeta$, the free energy density $f$ of the material can be written as in Eq.~\ref{Eq:FinalFreeEnergy}.
\begin{widetext} \begin{eqnarray} \label{Eq:FinalFreeEnergy} f&=&\frac{\mu\zeta^2}{4 q^2r\big(1-r^{0}+r(A-1+\frac{\epsilon_aE^2+K_{S}k^2+K_{B}q^2}{\mu}+r^{0})\big)}\times\nonumber\\ &&\bigg(k^6r(1+Ar)\frac{K_{S}}{\mu} +q^4r^{0}\big(r+(A-1+\frac{\epsilon_aE^2+K_{B}q^2}{\mu})r^2+(r-1)r^{0}\big) \nonumber\\ &&+k^4r\big((1+Ar)(1+r^{0})-r-Ar^{0}+(1+Ar)\frac{\epsilon_aE^2+K_{B}q^2}{\mu}+(r+r^{0})\frac{K_{S}q^2}{\mu}-\frac{r^{0}}{r}\big)\nonumber\\ &&+k^2q^2\Big((3-r^{0})r^{0}+r\big(1+r^{0}(3A-6+\frac{\epsilon_aE^2+K_{B}q^2}{\mu}+r^{0})\big)\Big)\nonumber\\ &&+r^2\big(A-1+\frac{\epsilon_aE^2}{\mu}+3r^{0}-2Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)\bigg) +o(\zeta^4). \end{eqnarray} \end{widetext} The free energy is proportional to the square of the perturbation's amplitude, $\zeta^2$. If $f<0$, the perturbation lowers the free energy and the system undergoes a transition to a buckled state with $\zeta\ne0$; on the other hand, if $f>0$, the perturbation is energetically unfavorable and the system stays in its initial state with $\zeta=0$. $f=0$ is the critical point at which the transition starts. In this way, the values of the material's physical parameters determine the instability behavior under different external experimental conditions, e.g. applied field and temperature. We can plug the known parameters of the nematic liquid crystalline gel\cite{2004_03_Nature-Materials_Vol:3_Pg:177--182_Kempe.M;Scruggs.N;etal,2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal} into Eq.~\ref{Eq:FinalFreeEnergy}: $d=25 \mu\mathrm{m}$, $\mu=50\mathrm{Jm^{-3}}$, $A=0.1$, $K_{B}=10^{-11}\mathrm{Jm^{-1}}$, $K_{S}=1.5\times10^{-11}\mathrm{Jm^{-1}}$, $\epsilon_a=15\epsilon_0$. We choose $r^0=1.5$ for the initial gelation state at temperature $T_{i}$, while $r$, for the final state with elongated polymeric coils, depends on the order parameter of the nematic solvent at the specific final temperature $T_{f}$.
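To make the procedure concrete, the $k\to0$ free energy of Eq.~(\ref{Eq:EnergyField}) can be evaluated with the parameter values just listed and its zero located by bisection. The Python sketch below is our own illustration; it keeps only the sign-carrying, field-dependent factor of $f_{k\to0}$, so the resulting number is indicative of the order of magnitude (a few tenths of $\mathrm{V}/\mu\mathrm{m}$) rather than a precise fit:

```python
import numpy as np

# illustrative material parameters, as listed in the text
mu, A = 50.0, 0.1            # shear modulus [J m^-3], semisoftness coefficient
KB = 1e-11                   # bend elastic constant [J m^-1]
eps_a = 15 * 8.854e-12       # dielectric anisotropy, 15 * eps_0
d = 25e-6                    # cell gap [m]
q = np.pi / d                # first harmonic wavevector
r0, r = 1.5, 3.5             # gelation-state and final-state anisotropy parameters

def f_k0(E):
    """Sign-carrying factor of f_{k->0}: numerator/denominator of the k=0 limit."""
    g = (eps_a * E**2 + KB * q**2) / mu
    num = r + (r - 1) * r0 + r**2 * (A - 1 + g)
    den = 1 - r0 + r * (A - 1 + r0 + g)
    return num / den

# bisect for the critical field where f_{k->0} changes sign
lo, hi = 0.0, 3e6            # bracket in V/m
assert f_k0(lo) < 0 < f_k0(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f_k0(mid) < 0:
        lo = mid
    else:
        hi = mid
E_c = 0.5 * (lo + hi)        # critical field in V/m; E_c / 1e6 gives V/um
```

Below $E_{\mathrm{c}}$ the buckled state ($\zeta\ne0$) lowers the energy; above it the aligned state is retained, which is the behavior sketched in Fig.~\ref{Fig:FreeEnergyField}.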
\subsection{Instability Analysis: Critical Field} For the case of a homogeneous birefringence change as the basic mode of the transition, $f$ can be further simplified by setting $k=0$: \begin{widetext} \begin{equation}\label{Eq:EnergyField} f_{k\to0}=\frac{\mu \zeta^2q^2r^0}{4r} \left(\frac{r+(r-1)r^0+r^2\left(A-1+\frac{\epsilon_{a}E^2+K_{B}q^2}{\mu}\right)}{1-r^0+r\left(A-1+r^0+\frac{\epsilon_{a}E^2+K_{B}q^2}{\mu}\right)}\right). \end{equation} \end{widetext} Fig.~\ref{Fig:FreeEnergyField} shows the plot of $f_{k\to0}$ as a function of the electric field intensity $E$, which corresponds to the situation when we decrease the electric field at a certain final-state temperature. The free energy is positive when the electric field is still large enough to keep the vertical alignment of the nematic directors across the cell, so the buckling transition is energetically unfavorable; as the electric field is decreased further, the energy becomes negative, which means the system transforms into the buckled state to minimize the free energy. The critical electric field $E_{\mathrm{C}}$ is found at the point where the free energy equals zero. The relationship between $E_{\mathrm{C}}$ and $r$ can be studied numerically and is plotted in Fig.~\ref{Fig:FieldRPlot}, where $E_{\mathrm{C}}$ increases with $r$; in other words, $E_{\mathrm{C}}$ decreases with the final temperature $T_{f}$. This agrees qualitatively with our experimental measurement in Fig.~\ref{Fig:FieldTemp}: for $r=3.5$, the critical field is calculated as $E_{\mathrm{C}}=0.48\textrm{V}/\mu\mathrm{m}$, compared with the experimental value $0.56\textrm{V}/\mu\mathrm{m}$ at 31\textcelsius ; for $r=5.8$, corresponding to a lower $T_{f}$, the critical field is calculated as $E_{\mathrm{C}}=0.621\textrm{V}/\mu\mathrm{m}$, compared with the experimental value $0.852\textrm{V}/\mu\mathrm{m}$ at 25\textcelsius . \begin{figure} \centering \subfigure[ $f_{k\to0}$ vs.
$E$]{\label{Fig:FreeEnergyField}\includegraphics[width=0.225\textwidth]{Figure4a}} \subfigure[ $E_{\mathrm{C}}$ vs. $r$]{\label{Fig:FieldRPlot}\includegraphics[width=0.225\textwidth]{Figure4b}} \subfigure[ $f_{E\to0}$ vs. $k$]{\label{Fig:FreeEnergyK}\includegraphics[width=0.225\textwidth]{Figure4c}} \subfigure[ $\lambda$ vs. $r$]{\label{Fig:KR}\includegraphics[width=0.225\textwidth]{Figure4d}} \caption{ The free energy density ($f$) depends both on the \subref{Fig:FreeEnergyField} electric field and on the \subref{Fig:FreeEnergyK} wavevector $k$. The free energy is minimized by decreasing the applied field to zero and by a spatial modulation of the director field $\hat{\mathbf{n}}$ with a finite $k$. Both the critical field $E_{\mathrm{C}}$ and the wavelength depend on the anisotropic properties of the final buckled state: as the nematic gel is more ordered in its final state (corresponding to lower $T_{f}$), \subref{Fig:FieldRPlot} $E_{\mathrm{C}}$ increases and \subref{Fig:KR} the wavelength decreases. } \end{figure} \subsection{Instability Analysis: Stripe Wavelength} Since the nematic gel buckles when the applied electric field is removed ($E=0$), the free energy $f$ depends on the wavevector $k$ in the $xy$-plane: \begin{widetext} \begin{eqnarray}\label{Eq:EnergyK} f_{E\to0}&=&\frac{\mu\zeta^2}{4 q^2r\big(1-r^{0}+r(A-1+\frac{K_{S}k^2+K_{B}q^2}{\mu}+r^{0})\big)}\times\bigg(k^6K_{S}r\frac{1+Ar}{\mu} \nonumber \\ &&+k^4r\big(1+r^{0}-Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)+k^2q^2\Big((3-r^{0})r^{0}+r\big(1+r^{0}(3A-6+\frac{K_{B}q^2}{\mu}+r^{0})\big)\Big) \nonumber \\ &&+q^4r^{0}\big(r+(A-1+\frac{K_{B}q^2}{\mu})r^2+(r-1)r^{0}\big)+r^2\big(A-1+3r^{0}-2Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)\nonumber \\ &&+r^2\big(\frac{K_{S}q^2}{\mu}-1+A(1+\frac{K_{B}q^2r^{0}}{\mu})\big)-r^{0}\bigg). \end{eqnarray} \end{widetext} Fig.~\ref{Fig:FreeEnergyK} shows the plot of $f_{E\to0}$ as a function of the wavevector $k$.
It can be seen that the free energy is negative at a finite $k$, which corresponds to the stripe pattern observed experimentally. The spatial modulation of the translational order ($\mathbf{R}$) and the orientational order ($\hat{\mathbf{n}}$) further minimizes the free energy, similar to the stripe pattern found in previous experimental observations on a planar-aligned sample\cite{2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal}. Furthermore, the wavelength can be calculated numerically as a function of the sample's anisotropy parameter $r$ in the final state, which is plotted in Fig.~\ref{Fig:KR}. It can be seen that the wavelength decreases as $r$ increases; in other words, the stripe wavelength is smaller for lower final temperatures $T_{f}$. This agrees with our experimental observations in Fig.~\ref{Fig:WavelengthTemp}: for $r=3.5$, the wavelength is calculated as $6.48\mu\mathrm{m}$, compared with the experimental value $6.46\mu\mathrm{m}$ at 31\textcelsius ; for $r=5.8$, corresponding to a lower $T_{f}$, the wavelength is calculated as $3.82\mu\mathrm{m}$, compared with the experimental value $3.82\mu\mathrm{m}$ at 25\textcelsius . Currently, the analytic relationship between the temperature $T$ (or the order parameter of the nematic liquid crystal) and the anisotropy parameter $r$ of the polymeric networks is not known experimentally. Therefore we cannot fit our experimental measurements of the critical field and wavelength as functions of temperature, and the theoretical calculation can only be compared with the experimental data qualitatively; at this level, the agreement is good. \section{\label{Sec:Conclusion}Conclusions} In summary, spontaneous buckling transitions of thin layers of a nematic liquid crystalline gel in a homeotropic cell were observed by polarized light microscopy.
This is a good example of the coupling between the liquid crystalline ordering and the crosslinked polymer backbones inside the nematic gel material. As the nematic mesogens become more ordered when the gel is cooled down from the initial crosslinking stage at a higher temperature, the polymer network tends to elongate along the direction parallel to the initial nematic director, which is perpendicular to the rigid glass surfaces in the experimental setup. The shape change of such a confined gel sample leads to the spontaneous buckling transition. The applied electric field changes the instability behavior, and the spatially modulated stripe pattern in the orientational ordering of the nematic solvent helps to accommodate the buckling transformation of the gel network and to minimize the free energy. The experimental observations and measurements can be explained qualitatively at different temperatures. \section{Acknowledgments} \begin{acknowledgments} We gratefully thank Julia~A.~Kornfield's group at the California Institute of Technology for providing the nematic gel material. This research was supported by NSF Grant No. DMR-0322530. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} The study of black holes from an astrophysical point of view and by astronomers has blossomed in the last decade because of the dramatic increase in the number of black hole candidates from the sole candidate (Cygnus X-1) some 25 years ago. This in turn requires a deeper familiarity with black hole physics, and especially with {\it black hole radiation}, for astronomers and classical relativists. Hawking in his original work on black hole radiation ~(Ref.[4]) used a quantum field theoretical approach to arrive at this radiation. In the next few sections we will describe the classical essence of this radiation in a language which is free from the usual quantum field theoretic tools and therefore more suitable for astronomers and relativists. Our derivation of Hawking radiation will also establish the close connection between black hole radiation and the existence of an event horizon. \section{Schwarzschild black hole}\label{sec:Schwaz} We start with the simplest case of the one-parameter family of black holes, namely the Schwarzschild black hole, which was previously discussed in Ref.[6]. Consider a radial light ray in the Schwarzschild spacetime which propagates from $r_{in} = 2M + \epsilon $ at $t=t_{in}$ to the event ${\cal P}(t,r)$ where $r \gg 2M$ and $ \epsilon \ll 2M$. The trajectory can be found using the fact that for light rays $${\rm d}s^2=\left(1 - {2M \over r}\right){\rm d}t^2 - {1\over {1 - {2M \over r}}} {\rm d}r^2 - r^2 {\rm d}\Omega^2 = 0;$$ for radial light rays, ${\rm d}\theta = {\rm d}\phi =0$, we have $${{\rm d}r \over {\rm d}t} =1 - {2M \over r},\eqno(1)$$ from which the trajectory with the required initial condition is $$r=r_{in} - 2M{\rm ln}({r-2M \over 2M}) + 2M{\rm ln} ({r_{in} - 2M \over 2M})+ t - t_{in} \cong t - t_{in} + 2M {\rm ln} ({\epsilon \over 2M})\eqno(2)$$ where the last equality uses $r\gg 2M$ , $\epsilon \ll 2M$. The frequency of a wave will be redshifted as it propagates on this trajectory.
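The exponential character of this redshift can be checked with a few lines of arithmetic. The sketch below (geometric units $G=c=1$ and $M=1$, with an arbitrarily chosen observer radius) uses the exact trajectory (2) and verifies both that the escape time grows by $2M\ln 100$ each time $\epsilon$ shrinks by a factor of $100$, and that the late-time factor ${\rm exp}[-(t-t_{in}-r)/4M]$ appearing in Eq.~(4) below scales as $\epsilon^{1/2}$, i.e. as $\sqrt{g_{00}(2M+\epsilon)}$:

```python
import math

M = 1.0              # geometric units G = c = 1
r_obs = 100.0 * M    # far-away observer, r >> 2M (illustrative choice)

def escape_time(eps, t_in=0.0):
    # exact travel time of an outgoing radial light ray from r_in = 2M + eps
    # to r_obs, from integrating dr/dt = 1 - 2M/r (Eq. (2))
    r_in = 2.0 * M + eps
    return t_in + (r_obs - r_in) + 2.0 * M * math.log((r_obs - 2.0 * M) / eps)

def late_time_factor(eps):
    # the redshift factor exp(-(t - t_in - r)/4M) of Eq. (4)
    t = escape_time(eps)
    return math.exp(-(t - r_obs) / (4.0 * M))

delay = escape_time(1e-8) - escape_time(1e-6)            # ~ 2M ln(100)
ratio = late_time_factor(1e-6) / late_time_factor(1e-8)  # ~ sqrt(100) = 10
```

Rays released closer and closer to the horizon thus escape logarithmically later and arrive exponentially redshifted, which is the key input of the derivation that follows.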
This redshift is basically due to the fact that the frequency is measured in terms of the proper time $\tau$, which flows differently at different points of a stationary spacetime according to the relation $$\tau ={1\over c}\sqrt{g_{00}}\, x^0. \eqno(3)$$ The frequency $\Omega$ at $r\gg 2M$ will be related to the frequency $\Omega_{in}$ at $r =2M + \epsilon$ by $$\Omega \cong \Omega_{in}[g_{00}(r=2M+\epsilon)]^{1/2} \cong \Omega_{in} \left( \epsilon \over 2M \right)^{1/2} = \Omega_{in} {\rm exp}\left( -{{t - t_{in} - r}\over 4M}\right) \eqno(4)$$ If the wave packet, $\Phi(r,t)\propto {\rm exp}({\it i}\theta(t,r))$, centered on this null ray has a phase $\theta (r,t)$, then the instantaneous frequency is related to the phase by $(\partial \theta / \partial t) = \Omega$. Integrating (4) with respect to $t$, we find the relevant wave mode to be $$\Phi(t,r) \propto {\rm exp} {\it i}\int \Omega{\rm d} t \propto {\rm exp} \left[ -4M{\it i}\Omega_{in} {\rm exp} \left(-{{t - t_{in} - r} \over 4M}\right)\right]\eqno(5)$$ (This form of the wave can also be obtained by directly integrating the wave equation in the Schwarzschild spacetime with appropriate boundary conditions.) Equation (4) shows that, despite being in a static spacetime, the frequency of the wave (measured by an observer at fixed $r\gg 2M$) depends nontrivially on $t$, for fixed $t_{in}$ and $\epsilon$. Such an observer will not see monochromatic radiation. Therefore an observer using the time coordinate $t$ will Fourier decompose these modes with respect to the frequency $\omega$ defined using $t$ as: $$\Phi(t,r)={1\over 2\pi} \int^{\infty}_{-\infty} f(\omega) e^{-i\omega t} {\rm d}\omega \eqno(6)$$ where $$f(\omega)= \int^{\infty}_{-\infty} \Phi(t,r) e^{i\omega t}{\rm d}t \propto \int^{\infty}_0 x^{-4iM\omega - 1} {\rm exp}(-4Mix \Omega_{in}) {\rm d}x \eqno(7) $$ and $x={\rm exp} \left(\; [-t + t_{in} + r]/ 4M \right)$. To evaluate the above integral we rotate the contour to the imaginary axis, i.e.
$x \rightarrow y=ix$, $$f(\omega) \propto e^{-2\pi M \omega}\int^{i\infty}_0 Y^{z-1} e^{-Y} dY \eqno(8)$$ where $z=-4 i M \omega$ and $Y=-4 M\Omega_{in}y$. Using the fact that the integral on the right hand side of the above relation is one of the representations of the Gamma function, we get the corresponding power spectrum to be $$|f(\omega)|^2 \propto ({\rm exp}(8\pi M\omega) -1)^{-1}\eqno(9)$$ where we have used the fact that $|\Gamma (ix)|^2 = {\pi / (x\, {\rm sinh}\, \pi x)}$. In conventional units the above relation becomes $$|f(\omega)|^2 \propto ({\rm exp}({8\pi G M \omega \over c^3}) -1)^{-1} \equiv ({\rm exp}({\omega \over \omega_0}) -1)^{-1}\eqno(10)$$ where $$\omega_0 = {c^3\over 8\pi G M}.\eqno(11)$$ As one can see, no $\hbar$ appears in the above analysis, and $\omega_0$ can be thought of as the characteristic frequency of the problem by a radio astronomer who thinks in terms of frequency. On the other hand, an X-ray or a $\gamma$-ray astronomer, who thinks in terms of photons, will introduce the energy $E=\hbar \omega$ into the above relation in the following form: $$|f(\omega)|^2 \propto ({\rm exp}({\hbar \omega \over \hbar \omega_0}) -1)^{-1} \equiv ({\rm exp}({E \over k_{_B}T}) -1)^{-1}\eqno(12)$$ which shows that the corresponding power spectrum is Planckian at temperature $$T = {\hbar c^3 \over 8\pi G M k_{_B}} .\eqno(13)$$ \section{Reissner-Nordstrom black hole}\label{sec:R-N} The same approach can be used to study the radiation in the spacetime of static charged black holes, which are characterized by the two parameters M and Q.
The equation governing the outgoing null radial geodesics in the R-N spacetime has the form $${{\rm d}r \over {\rm d}t} = 1 - {2M \over r} + {Q^2 \over r^2}\eqno(14)$$ In conventional units the above equation takes the form $${{\rm d}r \over {\rm d}t} = c - {2M \over r}({G\over c}) + {Q^2 \over r^2} ({G\over c^3}).\eqno(15) $$ The event horizon of the Reissner-Nordstrom black hole is at $r_{+} = M + (M^2 - Q^2)^{1/2}$. Considering a light ray propagating from $r_{in} = r_{+} + \epsilon$ at $t=t_{in}$ to the event ${\cal P}(t,r)$, where $r \gg r_{+} $ and $ \epsilon \ll r_{+} $, we find the trajectory in the following form $$r \cong t - t_{in} + {{r_{+}}^2\over 2(M^2 - Q^2)^{1/2}}{\rm ln} \epsilon \eqno(16)$$ The redshifted frequency $\Omega$ will be related to the frequency at $r = r_{+} + \epsilon$ by $$\Omega \cong \Omega_{in}[g_{00}(r=r_{+}+\epsilon)]^{1/2} \propto \Omega_{in}\, \epsilon^{1/2} = \Omega_{in} {\rm exp}\left( -{{t - t_{in} - r}\over {(M+(M^2-Q^2)^{1/2})^2 \over (M^2-Q^2)^{1/2}}}\right) \eqno(17)$$ Now if we repeat the analysis of the Schwarzschild case for the R-N spacetime, in exactly the same way, we find that the corresponding power spectrum for a wave packet which has scattered off the R-N black hole and travelled to infinity at late times has the Planckian form $$|f(\omega)|^2 \propto \left( {\rm exp}\left[{2\pi[M+(M^2-Q^2)^{1/2}]^2 \over (M^2-Q^2)^{1/2}}\, \omega \right] - 1 \right)^{-1}\eqno(18)$$ at temperature $T = {(M^2-Q^2)^{1/2}\over 2\pi[M+(M^2-Q^2)^{1/2}]^{2}}$, which is the standard result and reduces to that of the Schwarzschild case when $Q=0$. \section{Hawking radiation of a Kerr black hole}\label{sec:Kerr} In applying the approach of the last two sections to the radiation of Kerr black holes we should be more careful because, unlike the Schwarzschild and R-N black holes, the event horizon and the infinite redshift surface do not coincide.
We will see that in this case the infinite redshift surface acts as a boundary for the outgoing null geodesics originating from inside the ergosphere, on which we should be concerned about the continuity problem. In Kerr spacetime the principal null congruences play the same role as the radial null geodesics in Schwarzschild and R-N spacetimes, so we consider them in our derivation of the Hawking radiation by Kerr black holes. The equation governing the principal null congruences ($\theta$ = const.) is given by $${{\rm d}r \over {\rm d}t} = {r^2 - 2Mr + a^2 \over r^2 + a^2}. \eqno(19)$$ If we restrict our attention to the case $a^2 < M^2$, the above equation can be integrated to give $$t=r+\left( M + {M^2\over (M^2 - a^2)^{1/2}}\right){\rm ln}|r-r_+| +\left( M - {M^2\over (M^2 - a^2)^{1/2}}\right){\rm ln}|r-r_-|,\eqno(20)$$ where $$r_{\pm} = M \pm (M^2 - a^2)^{1/2}\eqno(21)$$ are the outer and inner event horizons of the Kerr metric. Now, as in the previous sections, we consider a light ray propagating from the point $r_+ + \epsilon$ at $t=t_{in}$ to the event ${\cal P}(r,t)$, where $r,t \gg M $ and $\epsilon \ll M$.
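The closed-form trajectory (20) can be verified by differentiating it and comparing with $(r^2+a^2)/\Delta$, the integrand it encodes for the outgoing principal null congruence. The following illustrative sketch (geometrized units) does this by central finite differences.

```python
import math

M, a = 1.0, 0.6
s = math.sqrt(M*M - a*a)
rp, rm = M + s, M - s   # outer and inner horizons, eq. (21)

def t_of_r(r):
    # closed-form trajectory, eq. (20)
    return (r + (M + M*M/s)*math.log(abs(r - rp))
              + (M - M*M/s)*math.log(abs(r - rm)))

def dtdr_exact(r):
    # (r^2 + a^2)/Delta, with Delta = (r - r_+)(r - r_-)
    return (r*r + a*a)/((r - rp)*(r - rm))

for r in (2.5, 4.0, 10.0):
    h = 1e-6
    dtdr_num = (t_of_r(r + h) - t_of_r(r - h))/(2.0*h)
    assert abs(dtdr_num - dtdr_exact(r)) < 1e-5
```

The agreement follows from the partial-fraction coefficients $M \pm M^2/(M^2-a^2)^{1/2}$ appearing in eq. (20).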
Starting from a point very close to the outer event horizon ($r_+ +\epsilon$), the trajectory has the following form $$r \cong t - t_{in} + \left(M+{M^2\over (M^2 - a^2)^{1/2}}\right){\rm ln}\, \epsilon .\eqno(22)$$ The frequency $\Omega$ at $r$ will be related to the frequency $\Omega_{in}$ of a light ray emitted by a {\it locally nonrotating observer} (Ref.[1]) at $r =r_{+} + \epsilon$ (inside the ergosphere) by (see appendix A) $$\Omega = \Omega_{in}{\left( g_{00} - {g^2_{03}/ g_{33}}\right)^{1/2} \over (1+({g_{03}/ g_{33}})a\, {\rm sin}^2\theta )} \propto \Omega_{in}\, \epsilon^{1/2} = \Omega_{in} {\rm exp} \left( -{(t - t_{in} - r)(M^2-a^2)^{1/2}\over 2(M^2 +M(M^2-a^2)^{1/2}) }\right).\eqno(23)$$ Repeating the procedure of the last two sections for the above redshifted frequency, we find the following power spectrum for a wave packet scattered off the Kerr black hole at late times, $$|f(\omega)|^2 \propto \left( {\rm exp} \left[ {4\pi[M^2+M(M^2-a^2)^{1/2}] \over (M^2-a^2)^{1/2}}\, \omega \right] - 1 \right)^{-1},\eqno(24)$$ which is Planckian at temperature $$T = {(M^2-a^2)^{1/2}\over 4\pi[M^2+M(M^2-a^2)^{1/2}]}.\eqno(25)$$ This is again the standard result (Ref.[2]) and reduces to (13) for $a=0$. \section{Discussion}\label{sec:cnclsns} In this letter we gave a simple derivation of black hole radiation which strips the Hawking process to its bare bones and establishes the following two facts: (i) The key input which leads to the Planckian spectrum is the exponential redshift, given by equations (4, 17 \& 23), of modes which scatter off the black hole and travel to infinity at late times, which in turn requires the existence of an event horizon. It is well known that the frequencies of outgoing waves at late times in black hole evaporation correspond to super-Planckian energies of the ingoing modes near the horizon. One might ask where the ingoing modes corresponding to these outgoing modes come from.
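The reduction of the Kerr temperature (25) to the Schwarzschild result (13) for $a=0$, and its decrease with spin, can likewise be confirmed with a short numerical check (illustrative sketch, geometrized units):

```python
import math

def t_kerr(M, a):
    # eq. (25): T = sqrt(M^2 - a^2) / (4 pi [M^2 + M sqrt(M^2 - a^2)])
    s = math.sqrt(M*M - a*a)
    return s / (4.0*math.pi*(M*M + M*s))

M = 2.0
# a -> 0 recovers the Schwarzschild temperature 1/(8 pi M) of eq. (13)
assert math.isclose(t_kerr(M, 0.0), 1.0/(8.0*math.pi*M))
# spin lowers the temperature, which vanishes in the extremal limit a -> M
assert t_kerr(M, 0.9*M) < t_kerr(M, 0.3*M) < t_kerr(M, 0.0)
```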
This is where quantum field theory plays its role in black hole radiation. According to quantum field theory, the vacuum is a dynamical entity and space is nowhere free of vacuum fluctuations. The vacuum field fluctuations can be thought of as a superposition of ingoing and outgoing modes. A {\it collapsing star} will introduce a mismatch between these virtual modes, causing the appearance of a real particle at infinity. The calculation shows that the energy carried by the radiation is extracted from the black hole (Ref.[4]). What we have done is to mimic the essence of this process by considering a classical mode propagating from near the event horizon to infinity. (ii) The analysis given in the previous sections is entirely classical and no $\hbar $ appears anywhere. The mathematics of Hawking evaporation is purely classical and lies in the Fourier transform of an exponentially redshifted wave mode (for a more detailed discussion of classical versus quantum features see Ref.[7]). \section*{Appendix A : Gravitational redshift by a Kerr black hole} In this appendix we derive the gravitational redshift of a light ray emitted from inside the ergosphere and received by a Lorentzian observer at infinity (as given by equation (23) of the text). The general relation for the redshift between a source and an observer located at events ${\cal P}_1$ and ${\cal P}_2$ in a stationary spacetime is given by (Ref.[8]) $${\omega_{\;_{{\cal P}_1}} \over \omega_{\;_{{\cal P}_2}}} = {(k_a u^a)_ {{\cal P}_1}\over (k_a u^a)_{{\cal P}_2}},\eqno(A1)$$ where $k^a$ is the wave vector and the $u^a$ are the 4-velocities of the source and the observer. The numerator and denominator are evaluated at the events ${\cal P}_1$ and ${\cal P}_2$, respectively. One should note that the null geodesic (or, equivalently, its tangent vector) joining the source and the observer should be continuous over the boundary, which in this case is the infinite redshift surface.
The principal null congruences we are considering here indeed satisfy this condition. Since there are no static observers inside the ergosphere, we choose as our source the {\it locally nonrotating observer} (Ref.[1]), whose angular velocity in Boyer-Lindquist coordinates is given by (Ref.[5]) $$\Omega= -{g_{03}\over g_{33}}={2Mra \over (r^2 + a^2)^2 - \Delta a^2 {\rm sin}^2\theta},\eqno(A2)$$ so the 4-velocities of the source and the static Lorentzian observer are given by (Ref.[5]) $$u^a|_S = {1\over (g_{00} + 2\Omega g_{03} + \Omega^2 g_{33})^{1/2}} (1, 0, 0, \Omega)\;\;\ \& \;\; u^a|_\infty =(1/\sqrt{g_{00}}, 0, 0, 0).\eqno(A3)$$ Substituting (A2) and (A3) in (A1) we have $$\omega |_\infty = \omega|_S \left({k_0 |_{\infty}\over (k_0 u^0 + k_3 u^3)|_S}\right). $$ Using the fact that the frequency measured with respect to the coordinate time, $k_0$, is constant and that $k_3 / k_0 =-a\, {\rm sin}^2 \theta$ for the principal null congruences (Ref.[3]), we have $$\omega |_{\infty} = \omega|_S \left({1 \over (u^0 - a\, {\rm sin}^2 \theta\, u^3)|_S}\right).\eqno(A4)$$ Now substituting from (A2) and (A3) in (A4) we obtain the following result, $$\omega |_{\infty} = \omega|_S {\left( g_{00} - {g^2_{03} /g_{33}}\right)^{1/2} \over (1+({g_{03}/ g_{33}})a\, {\rm sin}^2\theta )},\eqno(A5)$$ which is the relation used in the text.\\
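The redshift factor built from (A1)-(A4) can be evaluated numerically. The sketch below is illustrative; it assumes the Boyer-Lindquist metric components in the $(+,-,-,-)$ signature, which are not written out in the text. It verifies the Schwarzschild limit $a \rightarrow 0$ and the near-horizon scaling $\propto \epsilon^{1/2}$ used in eq. (23).

```python
import math

def redshift_factor(M, a, r, theta):
    # omega_inf / omega_S from (A1)-(A4), for the locally nonrotating source;
    # Boyer-Lindquist components assumed, signature (+,-,-,-)
    sin2 = math.sin(theta)**2
    sigma = r*r + a*a*math.cos(theta)**2
    g00 = 1.0 - 2.0*M*r/sigma
    g03 = 2.0*M*a*r*sin2/sigma
    g33 = -(r*r + a*a + 2.0*M*a*a*r*sin2/sigma)*sin2
    Omega = -g03/g33                                             # (A2)
    u0 = 1.0/math.sqrt(g00 + 2.0*Omega*g03 + Omega*Omega*g33)    # (A3)
    u3 = Omega*u0
    return 1.0/(u0 - a*sin2*u3)                                  # (A4)

M, r, th = 1.0, 6.0, 1.2
# a -> 0 recovers the Schwarzschild redshift sqrt(1 - 2M/r)
assert math.isclose(redshift_factor(M, 1e-12, r, th),
                    math.sqrt(1.0 - 2.0*M/r), rel_tol=1e-9)

# near the outer horizon the factor scales like sqrt(epsilon), as in eq. (23)
a = 0.6
rp = M + math.sqrt(M*M - a*a)
r1 = redshift_factor(M, a, rp + 1e-6, th)
r2 = redshift_factor(M, a, rp + 4e-6, th)
assert abs(r2/r1 - 2.0) < 0.01
```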
\section{Introduction} Controlling the interaction of photons and electrons at the sub-wavelength electromagnetic regime has led to a wide variety of novel optical materials and applications in the territory of metamaterials and plasmonics relevant to computing, communications, defense, health, sensing, imaging, energy, and other technologies \cite{Fang534,taubner2006near,zhang2008superlenses,liu2007far,rho2010spherical,lu2012hyperlenses,sun2015experimental,lee2007development,valentine2008three,landy2008perfect,temnov2012ultrafast,aslam2012negative,sadatgol2016enhanced,choi2011terahertz,chen2012extremely,zhang2015extremely,schurig2006metamaterial,gwamuri2013advances,rockstuhl2008absorption,vora2014exchanging,vora2014multi,bulu2005compact,odabasi2013electrically,guney2009negative,smolyaninov2010metric,tame2013quantum,al2015quantum,asano2015distillation,jha2015metasurface,sperling2008biological,huang2006cancer,lal2008nanoshell,guo2010multifunctional,doi:10.1021/nl304208s,blankschien2012light,chen2010enhanced,sherlock2011photothermally,ahmadivand2015enhancement}. The prospect of circumventing Rayleigh's diffraction limit, thereby allowing super-resolution imaging has regained tremendous ground since Pendry theorised that a slab of negative (refractive) index material (NIM) can amplify and focus evanescent fields which contain information about the sub-wavelength features of an object \cite{Pendry}. A recent review of super-resolution imaging in the context of metamaterials is given in \cite{adams2016review}. However, a perfect NIM does not exist in nature and although recent developments in metamaterials have empowered their realization, a fundamental limitation exists. The presence of material losses in the near infrared and visible region is significant \cite{PhysRevLett.95.137404,RN4,Dolling892}. 
This compromises the performance of the theoretical perfect lens \cite{Webb1,Yang:05} since a significant portion of evanescent fields is below the noise floor of the detector and is indiscernible. Therefore, new efforts were directed towards the compensation of losses in metamaterials \cite{Nezhad:04,Popov:06,PhysRevB.80.125129}. Amongst the schemes that were developed, the use of gain media to compensate intrinsic losses gained popularity \cite{Anantha,refId0}. However, Stockman \cite{PhysRevLett.98.177404} demonstrated that the use of gain media involved a fundamental limitation. Using the Kramers-Kronig relations, he developed a rule based on causality which makes loss compensation with gain media difficult to realize. Recently, a new compensation scheme, called plasmon injection or $\Pi$ scheme \cite{PI}, was proposed. The $\Pi$ scheme was conceptualized with surface plasmon driven NIMs \cite{Aslam} and achieves loss compensation by coherently superimposing externally injected surface plasmon polaritons (SPPs) with local SPPs. Therefore, absorption losses in the NIM could be removed without a gain medium or non-linear effects. Although the $\Pi$ scheme was originally envisioned for plasmonic metamaterials \cite{PI,guney2011surface,aslam2012dual}, the idea is general and can be applied to any type of optical modes. In \cite{Wyatt}, Adams et al.\ used a post-processing technique equivalent to this method. They demonstrated that the process can indeed be used to amplify the attenuated Fourier components and thereby accurately resolve an object with sub-wavelength features. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth,height=5cm]{1.eps} \caption{Electric field magnitude squared $[Vm^{-1}]^2$ distribution in the object and image planes for an object with features separated by $\lambda _o/4$ with $\lambda _o = 1 \mu m$. The compensated image is reasonably well resolved with the equivalent inverse filter post-processing technique.
} \label{fig:Fig1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{2.eps} \caption{Fourier spectra $[Vm^{-1}]$ of the three Gaussians and the compensated image in figure \ref{fig:Fig1}, and the raw image obtained without loss compensation. The compensation filter is the inverse of the transfer function. The compensated image spectrum is obtained simply by multiplying the raw image spectrum with the compensation filter. Notice that the noise can be seen for high spatial frequencies, which is amplified by the compensation.} \label{fig:Fig2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth,height=5cm]{3.eps} \caption{Electric field magnitude squared $[Vm^{-1}]^2$ for the four Gaussians, in the object and image planes, separated by $\lambda _o/4$ with $\lambda _o = 1 \mu m$. The compensated image is very poorly resolved with one of the Gaussians missing.} \label{fig:Fig3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{4.eps} \caption{Fourier spectra $[Vm^{-1}]$ of the four Gaussians and the compensated image in figure \ref{fig:Fig3}, and the raw image along with the compensation filter. Notice that the feature at $\frac{k_y}{k_o} = 2$ is not discernible under the noise and cannot be recovered well with the compensation.} \label{fig:Fig4} \end{figure} Although this form of passive inverse filter provides compensation for absorption losses, it is also prone to noise amplification \cite{Wyatt}. This is illustrated in figure \ref{fig:Fig1}, which shows an object with three Gaussian features separated by $\lambda _o/4$, where $\lambda _o$ is the free space wavelength. Noise is prominent in the Fourier spectra beyond $\frac{k_y}{k_o} = 2.5$ as seen in figure \ref{fig:Fig2}. However, the compensated image is still reasonably well resolved. Consider now the object shown in figure \ref{fig:Fig3}, which has four Gaussians separated by $\lambda _o/4$.
The Fourier spectrum of the raw image, shown in figure \ref{fig:Fig4}, demonstrates how the feature at $\frac{k_y}{k_o} = 2$ is not distinguishable under the noise. The final compensated image, when subject to the same compensation scheme, is poorly resolved. We define a \textbf{\emph{feature}} as any spatial Fourier component that has substantial contribution to the shape of the object. It is clear that the Fourier components beyond $\frac{k_y}{k_o} = 2$ have a significant contribution to the four Gaussians and must be recovered from the image spectrum in order to accurately resolve the object. Therefore, noise presents a limitation which must be overcome to make the $\Pi$ scheme versatile. In the present work, we demonstrate how the $\Pi$ scheme can be significantly improved with the use of a physical auxiliary source to recover high spatial frequency features that are buried under the noise. We show that by using a convolved auxiliary source we can amplify the object spectrum in the frequency domain. The amplification makes the Fourier components that are buried in the noise distinguishable. This allows for the recovery of the previously inaccessible object features by adjusting the amount of compensation from the $\Pi$ scheme. The technique presented in this paper is based on the same negative index flat lens (NIFL) as in \cite{Wyatt}. We use the words \textbf{\emph{``passive''}} and \textbf{\emph{``active''}} to distinguish between the compensation schemes applied in \cite{Wyatt} and in this work, respectively. Therefore, the inverse filter post-processing used in \cite{Wyatt} and figures \ref{fig:Fig1}-\ref{fig:Fig4} to emulate the physical compensation of losses can be called the passive $\Pi$ scheme, since no external physical auxiliary is actively involved, as opposed to the active $\Pi$ scheme here, where the direct physical implementation using an external auxiliary source as originally envisioned in \cite{PI} is sought.
The active compensation scheme allows us to control noise amplification and hence extend the applicability of the $\Pi$ scheme to higher spatial frequencies. \section{Theory} We define the optical properties of the NIFL with the relative permittivity and permeability expressed as $\epsilon_r = \epsilon^{'} + i\epsilon^{''}$ and $\mu_r = \mu^{'} + i\mu^{''}$, where $\epsilon^{'} = -1$ and $\mu^{'} = -1$. COMSOL Multiphysics, the finite element method based software package that we use here, assumes $\exp(j\omega t)$ time dependence. Therefore, the imaginary parts of $\epsilon_r$ and $\mu_r$ are negative for passive media. In this paper we have used $0.1$ as the magnitude of the imaginary parts of both $\epsilon_r$ and $\mu_r$, which is a reasonable value given currently fabricated metamaterial structures \cite{RN1,Garc,Verhagen}. The geometry used to numerically simulate the NIFL in COMSOL is given in figure \ref{fig:Fig5}. The first step is characterizing the NIFL with a transfer function. For a detailed discussion on the geometry setup and transfer function calculations, the reader is referred to \cite{Wyatt}. Here, we present a brief mathematical description of the compensation scheme. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{18.eps} \caption{The geometry built in COMSOL to perform numerical simulations (not to scale). OP and IP are the object and image planes, respectively. Electric field is polarized along the z-axis (pointing out of plane). The object is defined as an electric field distribution $[E_z(y)]$ on the object plane. The operating wavelength is $\lambda _o = 1 \mu m$ and $2d = 0.5 \mu m$. Blue, white, and orange regions are the NIFL, air, and perfectly matched layer (PML), respectively. } \label{fig:Fig5} \end{figure} The spatial Fourier transforms of the electric fields in the object and image planes are related by the passive transfer function, $T_P(k_y)$, of the imaging system, which can be calculated with COMSOL.
This is expressed mathematically as \begin{equation} I(k_y) = T_P(k_y)O(k_y). \label{eq:TF} \end{equation} Here $O(k_y) = \mathcal{F}\{O(y)\}$ and $I(k_y) = \mathcal{F}\{I(y)\}$, where $O(y)$ and $I(y)$ are the spatial distributions of the electric fields in the object and image planes, respectively, and $\mathcal{F}$ is the Fourier transform operator. According to \cite{Wyatt} the passive compensation is defined by the inverse of the transfer function. Hence, the loss compensation is achieved by multiplying the raw image spectrum in Eq. \ref{eq:TF} with the inverse of the transfer function given by \begin{equation} C_P(k_y) = \bigg[T_P(k_y)\bigg]^{-1}. \label{eq:CF} \end{equation} For \emph{``active''} compensation we first define a mathematical expression given by \begin{equation} A(k_y) = 1+ P(k_y), \label{eq:AUX} \end{equation} \begin{equation} P(k_y) = P_o\exp\bigg[- \frac{(\frac{k_y}{k_o} - k_c)^2}{2\sigma ^2}\bigg], \label{eq:PUMP} \end{equation} where $P_o$ is a constant, $k_c$ controls the center frequency of the Gaussian, $k_o = \frac{2\pi}{\lambda _o}$ is the free space wave number, and $\sigma$ controls the full width at half maximum (FWHM) of the Gaussian. We convolve $A(y) = \mathcal{F}^{-1}\{A(k_y)\}$ with the object $O(y)$ in the spatial domain and denote the new object by $O^{'}(y)$. This is expressed as \begin{equation} O^{'}(y) = \int\limits^{\infty}_{-\infty} O(\alpha)A(y-\alpha) d\alpha . \label{eq:CONV} \end{equation} We shall refer to this convolved object as the \emph{total object}. Since convolution in the spatial domain is equivalent to multiplication in the spatial frequency domain, the Fourier spectrum of the total object, $O^{'}(k_y) = \mathcal{F}\{O^{'}(y)\}$, is related to the original object by \begin{equation} O^{'}(k_y) = O(k_y) + O(k_y)P(k_y). \label{eq:TOTALOBJ} \end{equation} The second term on the RHS will be referred to as the \emph{``auxiliary source,''} where $P_o$ in Eq. \ref{eq:PUMP} defines its amplitude at the center frequency $k_c$.
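The effect of Eq. \ref{eq:TOTALOBJ}, namely multiplication of the object spectrum by $1+P(k_y)$, can be illustrated with a short numerical sketch. The Gaussian object spectrum below is a stand-in for illustration only, not the one used in the paper.

```python
import numpy as np

k = np.linspace(0.0, 5.0, 1001)          # normalized spatial frequency k_y/k_o
O = np.exp(-0.5*(k/1.5)**2)              # stand-in object spectrum |O(k_y)|

def aux(k, P0, kc, sigma):
    # Gaussian auxiliary P(k_y) with amplitude P0 at center frequency kc
    return P0*np.exp(-0.5*((k - kc)/sigma)**2)

P = aux(k, P0=50.0, kc=3.0, sigma=0.13)
O_total = O*(1.0 + P)                    # convolution in y = product in k_y

# at the center frequency the spectrum is boosted by the factor (1 + P0)
i = np.argmin(np.abs(k - 3.0))
assert np.isclose(O_total[i]/O[i], 51.0, rtol=1e-3)
# far from kc the spectrum is essentially untouched
assert np.allclose(O_total[k < 2.0], O[k < 2.0], rtol=1e-3)
```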
Note that this term, which is a convolution of $O(y)$ with $P(y)= \mathcal{F}^{-1}\{P(k_y)\}$, represents amplification in the spatial frequency domain provided that $P(k_y) > 1$. Even though the auxiliary source is object dependent, as we will discuss later, the external field to generate the auxiliary source does not require prior knowledge about the object. Now, the Fourier transforms of the fields in the object and image planes are related to each other by the transfer function of the NIFL as defined by Eq. \ref{eq:TF}. Therefore, in response to the total object, the new field distribution in the image plane, expressed as $I^{'}(y) = \mathcal{F}^{-1}\{I^{'}(k_y)\}$, is transformed as \begin{equation} I^{'}(k_y) = T_P(k_y)O^{'}(k_y), \label{eq:IMGTOTALOBJ} \end{equation} where we can plug in the value of $O^{'}(k_y)$ from Eq. \ref{eq:TOTALOBJ} to obtain the convolved image \begin{equation} I^{'}(k_y) = T_P(k_y)O(k_y) + T_P(k_y)O(k_y)P(k_y). \label{eq:IMGTOTALOBJ1} \end{equation} \begin{figure}[h] \centering \includegraphics[width=\linewidth,height=5cm]{5.eps} \caption{Fourier spectra [$Vm^{-1}$] of an arbitrary object illustrating how the auxiliary amplitude can be tuned from the object plane to raise the image spectrum above the noise floor by controlling the amplification. $P(k_y)$ is centered at $k_c = \frac{k_y}{k_o} = 3$ with $\sigma = 0.13.$} \label{fig:Fig6} \end{figure} \noindent The second term on the RHS of Eq. \ref{eq:IMGTOTALOBJ1} is a measure of the residual amplification which managed to propagate to the image plane. Therefore, by controlling $P_o$ from the object plane, we can tune the necessary amplification of high spatial frequency features to raise the desired frequency spectrum above the noise floor in the image plane. This process is illustrated in figure \ref{fig:Fig6} for different auxiliary amplitudes. The new ``active'' loss compensation scheme must consider the extra power that is now available in the image spectrum.
We distinguish the compensation scheme from Eq. \ref{eq:CF} with the subscript ``A''. We start by defining the active transfer function of the NIFL as \begin{equation} T_{A}(k_y) = \frac{I^{'}(k_y) }{O(k_y) }. \label{eq:ACT} \end{equation} The numerator of Eq. \ref{eq:ACT} is the image of the total object which is given by Eq. \ref{eq:IMGTOTALOBJ1}. This transfer function is called ``active'' because it considers the auxiliary to be a part of the imaging system. Plugging in the value of $I^{'}(k_y)$ from Eq. \ref{eq:IMGTOTALOBJ1} into Eq. \ref{eq:ACT} we obtain the following expression for the active transfer function, \begin{equation} T_{A}(k_y) = T_{P}(k_y)+ T_{P}(k_y)P(k_y). \label{eq:TFA} \end{equation} The active compensation filter is simply defined as the inverse of the active transfer function and is expressed mathematically by \begin{equation} C_{A}(k_y) = \bigg[T_{P}(k_y)+ T_{P}(k_y)P(k_y)\bigg]^{-1}. \label{eq:CFA} \end{equation} \begin{figure}[h] \centering \includegraphics[width=\linewidth,height=5cm]{6.eps} \caption{Comparisons of the passive transfer function $T_P(k_y)$ and compensation filter $C_P(k_y)$ with the active transfer function $T_A(k_y)$ and compensation filter $C_A(k_y)$. $P(k_y)$ incorporated into the active compensation filter is centered at $k_c = \frac{k_y}{k_o}= 3$ with $ \sigma = 0.13$.} \label{fig:Fig7} \end{figure} Figure \ref{fig:Fig7} illustrates the active and passive transfer functions and the corresponding loss compensation schemes. The amount of active compensation drops within $2.5 < \frac{k_y}{k_o} < 3$. This indicates that in this region the auxiliary source is expected to provide compensation to the image. Therefore, the greater the auxiliary power, the lower is the required compensation through the inverse filter within that region of spatial frequencies. It is interesting to note at this point the similarity of the active transfer functions in figure \ref{fig:Fig7} and those in \cite{Chen:16}.
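In the noiseless limit the active filter of Eq. \ref{eq:CFA} inverts Eq. \ref{eq:IMGTOTALOBJ1} exactly, and where $P(k_y)$ is large the required inverse-filter gain drops. The toy sketch below illustrates this; the exponentially decaying transfer function is an assumed model for illustration, not the simulated $T_P$.

```python
import numpy as np

k = np.linspace(0.0, 5.0, 1001)
O = np.exp(-0.5*(k/1.5)**2)                     # stand-in object spectrum
T_p = np.exp(-1.2*np.maximum(k - 1.0, 0.0))     # toy decaying transfer function
P = 50.0*np.exp(-0.5*((k - 3.0)/0.13)**2)       # Gaussian auxiliary

I_total = T_p*O*(1.0 + P)                       # convolved image spectrum
C_a = 1.0/(T_p*(1.0 + P))                       # active compensation filter
O_rec = C_a*I_total                             # active reconstruction

# in the noiseless case the active filter recovers the object exactly
assert np.allclose(O_rec, O)
# where the auxiliary is strong, less inverse-filter gain is needed
i = np.argmin(np.abs(k - 3.0))
assert C_a[i] < 1.0/T_p[i]
```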
In the latter, however, highly stringent conditions are imposed on the negative index lens to obtain such a transfer function. \section{Noise Characterization} The active compensation scheme will be applied to an NIFL imaging system affected by noise where the noise process is a circular Gaussian random variable. Although there are many different sources of noise, they can be broadly classified into ``signal-dependent'' (SD) and ``signal-independent'' (SI). The random nature of noise manifests itself in the form of an uncertainty in the level of the desired signal. This uncertainty is quantified by the standard deviation $\sigma _n$. The actual distortion can be thought of as a random selection from an infinite set of values, and the selection process obeys a probability distribution function. The standard deviation describes the range of values which have the greatest likelihood of being selected. When the underlying signal is distorted by multiple independent sources of noise, each characterized by Gaussian distributions, then the variance $(\sigma _n ^{2})$ of the total noise is the sum of the variances of the individual noise sources \cite{Fiete}. SD noise, as the name implies, is characterized by a $\sigma _n$ that is intricately related to spatial (or temporal) variations in the incoming signal intensity. The magnitude of the signal distortion therefore also increases with the signal strength. Sources of SD noise in an imaging system can be present on the detector side or in the transmission medium. For example, the statistical nature of photons manifests itself as noise which has Poissonian statistics. In radiographic detection equipment, such sources of noise are called quantum mottle or quantum noise \cite{0031-9155-48-23-006,MP:MP5126}. Another source of SD noise originates from roughness of the transmission media, which in sub-wavelength imaging systems can be, for example, surface roughness of the NIFL.
One can think of surface irregularities as electromagnetic scatterers which radiate in different directions, distorting the propagating wave. Previous experiments on the impact of surface roughness \cite{Guo2014,Wang:11,Liu_Hong} showed that increasing material losses in the NIFL improved the image resolution of the perfect lens for relatively large surface roughness. Although this may seem counter-intuitive, it can be explained if roughness is modelled as a source of scattering. Adding material loss is equivalent to lowering the power transmission of the lens. This lowers the magnitude of the excitation field responsible for scattering effects and in turn reduces the magnitude of the scattered field. If material losses are kept constant, the scattering process will be proportional to the intensity of illumination provided to the object. Therefore, this kind of noise is amplified as the illumination intensity is increased. On the other hand, SI noise is quantified by a standard deviation which is not a function of the incoming signal. Therefore, the random nature of the noise will be visible only when the incoming signal amplitude is comparable to the distortions due to the SI noise. A good example of this is ``dark noise,'' which affects a CCD sensor even in the absence of illumination \cite{holst1998ccd}. A well-known model \cite{Walkup} used to describe the spatial distribution of a signal that has been distorted with both SD and SI sources of noise is \begin{equation} r(y) = s(y) + f(s(y))N_{1}(y) + N_{2}(y), \label{eq:NOISE} \end{equation} where $s(y)$ is the noiseless or ideal signal and $r(y)$ is the noisy version. $N_{1}(y)$ and $N_{2}(y)$ are two statistically independent random noise processes with zero mean and Gaussian probabilities with standard deviations $\sigma _{n1}$ and $\sigma _{n2}$, respectively. The noise processes $N_{1}(y)$ and $N_{2}(y)$ are signal independent.
The signal dependent nature of noise is modelled by modulating $N_{1}(y)$ using the function $f(s(y))$. Generally, $f(s(y))$ is a non-linear function of the ideal signal itself which is chosen based on the system which Eq. \ref{eq:NOISE} is attempting to describe. For example, $f(s(y))$ is usually considered to be the photographic density, which is unitless, when modelling signal-dependent film grain noise. Therefore, $f(s(y))N_{1}(y)$ represents the effective signal dependent noise term. We can re-write the expression in Eq. \ref{eq:NOISE} as \begin{equation} r(y) = s(y) + N_{SD}(y) + N_{SI}(y), \label{eq:NOISE1} \end{equation} where the standard deviation of $N_{SD}(y)$ is $f(s(y))\sigma _{n1}$ and the subscripts SD, SI distinguish between the sources of noise. The noise model of Eq. \ref{eq:NOISE}, referred to as the \emph{signal modulated noise model}, is used for signal estimation purposes with the Wiener filter. A detailed discussion on this can be found in \cite{Heine:06,Kasturi:83,Froehlich:81,Walkup}. However, we will use Eq. \ref{eq:NOISE1} in this paper for mathematical convenience to analyse the relative contributions of SD and SI noise. In the NIFL imaging system which we consider, the ideal signal $s(y)$ will be the electric field distribution on the image plane, that is, $I(y)$ and $I^{'}(y)$ for the passive and active schemes, respectively. In \cite{Chen:16}, Chen et al.\ adopted a $60 dB$ signal to noise ratio (SNR) in their negative index lens considering an experimental imaging system detector \cite{Akiba:10}. This corresponds to a SD standard deviation of $10^{-3}I(y)$. In this work, we adopt the same standard for the SD noise. Additionally, we assume a SI noise process in the imaging system by adopting a spatially invariant standard deviation of $10^{-3} [V/m]$. This means that even in the absence of illumination, there is a constant background noise of the order of $1 \ mV/m$ in the detector.
Although the value of the SI noise is chosen arbitrarily, this does not limit the results discussed in this paper, since the SI noise can be easily suppressed by additional auxiliary power. By taking into consideration the SNR standard used by Chen et al.\ \cite{Chen:16} and the mathematical form of Eq. \ref{eq:NOISE1}, we can frame the equations for the noisy images as \begin{equation} I_N(y) = I(y) + N_{SD}(y) + N_{SI}(y) \label{eq:NOISE_PASSIVE} \end{equation} and \begin{equation} I^{'}_N(y) = I^{'}(y) + N^{'}_{SD}(y) + N^{'}_{SI}(y), \label{eq:NOISE_ACTIVE} \end{equation} corresponding to the ideal images described by Eqs. \ref{eq:TF} and \ref{eq:IMGTOTALOBJ1}, respectively. The subscript N indicates the noisy image. The standard deviations of the noise processes are \begin{equation} \sigma _{n(SD)} = 10^{-3}I(y), \label{eq:SD_Passive} \end{equation} \begin{equation} \sigma ^{'} _{n(SD)} = 10^{-3}I^{'}(y), \label{eq:SD_Active} \end{equation} and \begin{equation} \sigma _{n(SI)} = \sigma^{'} _{n(SI)} = 10^{-3} \ V/m. \label{eq:SD_SI} \end{equation} Eqs. \ref{eq:SD_Passive} - \ref{eq:SD_SI} fully describe the random variables that are used to construct the SI and SD noise terms in Eqs. \ref{eq:NOISE_PASSIVE} and \ref{eq:NOISE_ACTIVE}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{7.eps} \caption{Electric field [$Vm^{-1}$] distributions in the object and image planes. The fields on the image planes are multiplied by the window function to reduce the errors in the Fourier transform. The total object has been scaled down by $\approx 10^4$.} \label{fig:Fig8} \end{figure} \section{Results} Having described how both SD and SI noise are added to the system, the next step is to evaluate the performance of active compensation and compare it with the passive version.
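The noise model of Eqs. \ref{eq:NOISE_PASSIVE}-\ref{eq:SD_SI} can be prototyped directly. The sketch below uses a stand-in Gaussian field for the ideal image, an assumption made for illustration only; the standard deviations match those quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(-5.0, 5.0, 4001)
s = np.exp(-0.5*y**2)                    # stand-in ideal image field [V/m]

# signal-dependent (60 dB SNR) plus signal-independent contributions
n_sd = 1e-3*s*rng.standard_normal(y.size)
n_si = 1e-3*rng.standard_normal(y.size)
r = s + n_sd + n_si

# where the signal vanishes, only the ~1 mV/m background noise remains
tail = np.abs(y) > 4.0
assert 0.5e-3 < np.std((r - s)[tail]) < 2e-3
# the total distortion stays at the 1e-3 level everywhere
assert np.max(np.abs(r - s)) < 1e-2
```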
We will attempt to image the previously exemplified object comprising four Gaussian features, separated by $\frac{\lambda _o}{4}$ with $\lambda _o = 1 \mu m$ and compare the results of passive and active compensation. Note that due to the finite extent of the image plane, it is necessary to multiply the electric fields with a window function to ensure that the field drops to zero where the image plane is abruptly terminated. Otherwise, errors are introduced in the Fourier transform calculations. Windowing the field distribution simply reduces these sources of error, which will be then visible only in the higher spatial frequencies. Since these errors are very small compared with the amplitude of the SD and SI sources of noise, they do not have a significant impact on the calculations. Increasing the length of the image plane along the y-axis can also reduce these errors, but because of computational constraints this may not be desirable. Figure \ref{fig:Fig8} shows the spatial electric field distributions on the object and image planes. Noise was artificially added to the fields on the image plane that were calculated with COMSOL. The resultant noisy images are indicated by the red and green lines in the figure. The Tukey (tapered cosine) window function was applied to the image plane only. The Fourier transforms of the images, with and without added noise are shown in figure \ref{fig:Fig9}. The black line, which corresponds to $I_{N}(k_y)$ in Eq. \ref{eq:NOISE_PASSIVE}, shows how the added noise has clearly affected the raw image spectrum beyond $\frac{k_y}{k_o} = 2$, where all of the object features are now completely buried under the noise and indiscernible. However, the blue line, which corresponds to $I^{'}_{N}(k_y)$ in Eq. \ref{eq:NOISE_ACTIVE}, shows how these features can be recovered with the convolved auxiliary. We propose the following iterative process to apply the auxiliary source. 
We then use the active compensation filter to reconstruct the image spectrum. \begin{enumerate} \item Select an arbitrary $k_c$ in the region where the noise has substantially degraded the spectrum. Choose a guess auxiliary amplitude by selecting $P_o$. \item Convolve the object with $A(y)$ to obtain the total object. \item Measure the electric fields on the image plane corresponding to the total object. \item Re-scale $P_o$ for the selected $k_c$ if necessary, to make sure that adequate amplification is available in the image plane and noise is not visible in the Fourier spectrum. \item Select another $k_c$ on the noise floor and ensure that there is sufficient overlap between the adjacent auxiliaries. \item Repeat the processes in $1-5$ by superimposing those multiple auxiliaries until the transfer function of the imaging system is reasonably accurate. We were restricted by the inaccuracy of the simulated passive transfer function $T_P(k_y)$ beyond $\frac{k_y}{k_o} = 4.7$, which prevented us from going further. \end{enumerate} Note that in the above steps the selection of $P_o$ does not require prior knowledge about the object. The blue and green lines of figure \ref{fig:Fig9} show the total images with and without added noise, respectively. They include four auxiliary sources with the center frequencies $k_c = k_y/k_o = 2.8, \ 3.1, \ 3.6, \ 4.2$ and $P_o = 3000, \ 10000, \ 4 \times 10^5, \ 10^6$, respectively, with the same $\sigma = 0.35$. The final $A(y)$ was then convolved with the object and the resulting total field distribution on the object plane is shown by the blue line in figure \ref{fig:Fig8}. The active compensation filter, defined by Eq. \ref{eq:CFA} and illustrated in figure \ref{fig:Fig10}, is then multiplied in the spatial frequency domain by the total image spectrum with the added noise. The resulting compensated spectrum is the red line of figure \ref{fig:Fig10}.
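The composite auxiliary produced by this procedure, with the four $(k_c, P_o)$ pairs quoted above and the common $\sigma = 0.35$, can be sketched numerically. This is illustrative only; in the experiment the auxiliaries are applied physically on the object plane.

```python
import numpy as np

k = np.linspace(0.0, 5.0, 2001)          # normalized spatial frequency k_y/k_o
# (k_c, P_o) pairs from the text, common sigma = 0.35
params = [(2.8, 3e3), (3.1, 1e4), (3.6, 4e5), (4.2, 1e6)]

A = np.ones_like(k)                      # A(k_y) = 1 + sum of auxiliaries
for kc, P0 in params:
    A += P0*np.exp(-0.5*((k - kc)/0.35)**2)

# each center frequency receives at least its own amplitude of boost
for kc, P0 in params:
    assert A[np.argmin(np.abs(k - kc))] >= P0
# adjacent Gaussians overlap, so no gap falls back to the noise floor
mid = A[np.argmin(np.abs(k - 2.95))]     # midpoint between the first two
assert mid > 0.3*3e3
```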
The noise, which was visible in the total image spectrum beyond $\frac{k_y}{k_o} = 4.5$, is also amplified in this reconstruction process. However, in the regions where the auxiliary source is sufficiently strong, suppression of noise amplification is evident. The reconstructed spectrum perfectly coincides with the original object shown by the black curve in figure \ref{fig:Fig10}. The light blue line corresponds to the passively compensated image obtained by multiplying Eq. \ref{eq:CF} (i.e., dark blue line in figure \ref{fig:Fig10}) with the noise added raw image (i.e., black line in figure \ref{fig:Fig9}). The advantage of the active compensation over its passive counterpart is therefore clearly evident from the reconstructed Fourier spectrum. After the loss compensation process, the spectrum is truncated at $\frac{k_y}{k_o} = 4.7$ because the simulated transfer function loses accuracy. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{8.eps} \caption{Amplitude of the Fourier transforms [$Vm^{-1}$] on a log scale. Red and green lines are the image spectra $I(k_y)$ and $I^{'} (k_y)$ with no added noise, respectively. The blue line shows the image of the total object with added noise, $I^{'}_{N}(k_y)$. Noise is visible at $\frac{k_y}{k_o} \ > \ 4.5$ due to the inadequate amplification. The apparent noise in the pink line is due to the numerical errors introduced by the Fourier transform and shifts to higher $ \frac{k_y}{k_o}$ values after applying the Tukey window, as seen in the red line.} \label{fig:Fig9} \end{figure} Figure \ref{fig:Fig7} shows that the passive transfer function starts to flatten beyond $\frac{k_y}{k_o} = 4.5$, even though the analytical transfer function monotonically decreases (see figure 3 in \cite{Chen:16}). This inaccuracy of the simulated transfer function $T_P(k_y)$ indicates that it is no longer possible to perform the required compensation accurately (see Eq. \ref{eq:CFA}).
More precisely, the imaging system requires more compensation than the transfer function predicts. The reconstructed spectrum therefore starts to deviate from the original object when $\frac{k_y}{k_o} > 4.5$ as seen in the red plot of figure \ref{fig:Fig10}, indicating inadequate compensation. This was one of the main reasons why we were unable to image beyond $\frac{k_y}{k_o} = 5$. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{9.eps} \caption{Fourier spectra [$Vm^{-1}$] of the reconstructed images illustrating the difference between active and passive compensation. The passive compensation has significantly amplified the noise whereas the active one does not.} \label{fig:Fig10} \end{figure} Additionally, reconstructing the object features successfully requires a strong amplification. The auxiliary amplitude necessary to produce this amplification is very high and it starts to generate substantial electric field oscillations towards the edges of the image plane. Because the image plane is finite along the y-axis and the electric field is abruptly cut at a point where it is non-zero, a computational error is introduced in the spatial Fourier transform. An artefact of this can be seen in figure \ref{fig:Fig10} where the red plot shows that the feature at $\frac{k_y}{k_o} = 1$ is slightly shifted. The error is more prominent when the intensity of illumination is increased. Extending the size of the image plane along the y-axis mitigates the error at the expense of computational or physical resources. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{10.eps} \caption{Reconstructed images showing the difference between active and passive compensation schemes. Note that the passively compensated image has been scaled down by $10^7$.} \label{fig:Fig11} \end{figure} Note that towards the tails of the amplification, or when $\frac{k_y}{k_o} > 4.5$, the amplification is not strong enough to overcome the noise. 
Hence, there should be sufficient overlap between adjacent auxiliaries. The FWHM of $P(k_y)$, controlled by $\sigma$, can be selected arbitrarily. In our simulations we were limited by the finite image plane. A very narrow $P(k_y)$ in the spatial frequency domain translates to a wide field distribution in the spatial domain. This created additional field oscillations towards the edges of the image plane, increasing the errors in the Fourier transform calculations. Figure \ref{fig:Fig11} shows the amplitude squared of the reconstructed fields, illustrating the improvement of the active over the passive compensation scheme. \section{Discussion} The active compensation scheme works because the convolved auxiliary source allows us to \emph{``selectively amplify''} spatial frequency features of the object. This amplification cannot be achieved simply by the superposition of the object with an object independent auxiliary source. This is illustrated in figure \ref{fig:Fig12}, where we set $O(k_y) = 1 Vm^{-1}$ in the second term of Eq. \ref{eq:TOTALOBJ} and use the same $P(k_y)$ distributions as in figure \ref{fig:Fig9}. The blue and green lines correspond to the images of the object superimposed with the object independent auxiliaries with and without added noise, respectively. The buried object spectrum at $\frac{k_y}{k_o} = 3$ shown in figure \ref{fig:Fig9} has gone undetected. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{17.eps} \caption{Fourier spectra [$Vm^{-1}$] of the raw image superimposed with several object independent auxiliaries. Such superposition does not provide amplification and hence the feature at $\frac{k_y}{k_o} = 3$ is not recovered.
Note that the black and red lines are the raw images with and without added noise, respectively, as also shown in figure \ref{fig:Fig9}.} \label{fig:Fig12} \end{figure} The convolution process to construct the auxiliary source that was described in this paper can be thought of as a form of structured light illumination or wavefront engineering \cite{Zhao,Kildishev1232009,Pors2013,PhysRevB.84.205428,Yu333,Xu:16,smith2003limitations,Cao:17}. Along these lines, for example, a plasmonic lens imaging system was discussed recently in \cite{Zhao}, where the authors described the fields on the image plane by Eq. \ref{eq:TF}, which contains an illumination function as a result of a phase shifting mask. Spatial filters based on hyperbolic metamaterials \cite{Rizza:12,schurig2003spatial,wood2006directed} may be promising for the implementation of the proposed convolution. For example, an object illuminated with a high intensity plane wave and projected on such spatial filters can physically implement the convolved auxiliary source corresponding to the second term in Eq. \ref{eq:TOTALOBJ} and used in step 2 of the iterative reconstruction process. Here, the spatial filter needs to be engineered to have a transfer function of a form similar to Eq. \ref{eq:PUMP}. The object (e.g., an aperture-based object illuminated by a plane wave) is to be placed on top of this additional metamaterial layer. The field distribution at the exit of this layer would be the convolution of the object field distribution with the point spread function of the layer, hence leading to the auxiliary source term in Eq. \ref{eq:TOTALOBJ} (i.e., the second term). One way to engineer such a transfer function is with hyperbolic metamaterials, which support high spatial frequency modes. In \cite{wood2006directed}, for example, the transmission coefficient for the transverse magnetic waves in a hyperbolic medium was shown to have multiple peaks in the high spatial frequency region.
The position of these peaks can be tuned by changing the filling fraction or the thickness of the hyperbolic medium. If one has engineered a metamaterial with a transfer function that has one transmission peak $P_0$ around a certain spatial frequency (i.e., the center spatial frequency $k_c$ in Eq. \ref{eq:PUMP}) and is zero everywhere else, the iterative process where $P_0$ is re-scaled (i.e., to control amplification) is functionally equivalent to re-scaling the amplitude of the plane wave $E_0$ illuminating the object. Physically, this means actively adjusting the coherent plasmon injection rate in the imaging system to compensate the losses, as conceptualized in \cite{PI}. On the other hand, controlling the center frequency will require multiple or tunable metamaterial structures where the transfer function can be tuned to show transmittance peaks at different center frequencies. Therefore, it would be advantageous to have a broad transmittance in a physical implementation as long as the noise amplification does not start to dominate. Another possible way to construct the necessary transfer function may be with the use of metasurfaces \cite{Genevet:17}, which are ultrathin nanostructures fabricated at the interface of two media. The scattering properties of the sub-wavelength resonant constituents of the metasurfaces can be engineered to control the polarization, amplitude, phase, and other properties of light \cite{Kildishev1232009,Pors2013,PhysRevB.84.205428,Yu333,pfeiffer2013metamaterial,holloway2012overview}. This can allow one to engineer an arbitrary field pattern from a given incident illumination \cite{pfeiffer2013metamaterial,holloway2012overview}. In order to understand how the active compensation enhances the resolution limit of the NIFL, we need a deeper understanding of the effect of the noise on the ideal image spectrum. Eqs.
\ref{eq:SD_Passive} and \ref{eq:SD_Active} tell us that the signal dependent noise is amplified proportionally with the illumination. But since the active compensation seems to work so well, can we say that the noise is not amplified to the same extent as the signal? Then, would it be possible to achieve the same results by simply increasing the intensity of the plane wave illuminating the object? To address these questions we will consider below the \emph{``weak illumination,''} \emph{``structured illumination''} and \emph{``strong illumination''} cases. We will take the Fourier transforms of Eqs. \ref{eq:NOISE_PASSIVE} and \ref{eq:NOISE_ACTIVE} and analyze how each noise term contributes to the total distortion of the ideal image under the three illumination schemes. The linearity of the Fourier transform allows us to plot each term in the equations separately and these are shown in figures \ref{fig:Fig21} - \ref{fig:Fig23} for the weak, strong, and structured illuminations, respectively. In the strong illumination case we have used a plane wave whose electric field is $10^8$ times stronger than the weak illumination. The green and black lines in figures \ref{fig:Fig21} - \ref{fig:Fig23} correspond to the images with and without added noise, respectively. The Fourier transforms of the SI and SD noise are the blue and gold lines, respectively, which add up to the total noise shown by the red line. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{21.eps} \caption{Fourier spectra [$Vm^{-1}$] of the raw images with and without added noise illustrating the contribution of the SD and SI noise to the total distortion of the image under the weak illumination case.} \label{fig:Fig21} \end{figure} To compare the performance of the imaging system under the three illumination schemes, we will see how closely the noise added images overlap with the images with no added noise. 
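The interplay of the two noise contributions discussed above can be sketched with a toy noise model; the scale factors and the Gaussian test image are assumptions, chosen only to illustrate how the SD and SI terms scale with the illumination.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(I_y, sigma_si=1e-3, c_sd=0.05):
    """Signal-independent (SI) additive noise plus signal-dependent (SD)
    noise whose standard deviation scales with the local image amplitude
    (cf. Eqs. SD_Passive / SD_Active).  Scale factors are illustrative."""
    n_si = rng.normal(0.0, sigma_si, I_y.shape)
    n_sd = rng.normal(0.0, 1.0, I_y.shape) * c_sd * np.abs(I_y)
    return I_y + n_si + n_sd, n_si, n_sd

y = np.linspace(-10.0, 10.0, 4096)
I_weak = np.exp(-y ** 2)                  # hypothetical noiseless image

# Scaling the illumination by g scales the image and the SD noise
# together, while the SI noise floor is unchanged.
g = 1e8
_, si_w, sd_w = add_noise(I_weak)
_, si_s, sd_s = add_noise(g * I_weak)
```

This reproduces in miniature the behavior analyzed below: a uniformly stronger illumination lifts the signal above the SI floor but drags the SD noise up with it by the same factor.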
We should note that the spatial distribution of the SD noise will be spread out over multiple Fourier components \cite{4518410} and therefore the random nature of the noise will not be visible in the spatial frequency domain. This can be seen in the gold plots, which are fairly smooth compared to the blue lines. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{22.eps} \caption{Fourier spectra [$Vm^{-1}$] of the raw images with and without added noise illustrating the contribution of the SD and SI noise to the total distortion of the image under the strong illumination case. The SD noise is amplified approximately by a factor of $10^8$ throughout the spectrum and the contribution of the SI noise is very small.} \label{fig:Fig22} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{23.eps} \caption{Fourier spectra [$Vm^{-1}$] of the convolved images with and without added noise illustrating the contribution of the SD and SI noise to the total distortion of the image under the structured illumination case. The SD noise is approximately at the same level as the weak illumination except that the noise is redistributed.} \label{fig:Fig23} \end{figure} If we compare the SD noise spectra in figures \ref{fig:Fig21} and \ref{fig:Fig22}, we immediately conclude that as we increase the intensity of the illumination, the SD noise is amplified throughout the spectrum. However, a slight improvement of the noisy spectrum over the weak illumination is visible within the region $2 < \frac{k_y}{k_o} < 3$. Additionally, if we analyze figure \ref{fig:Fig21}, where the gold line intersects the black line, we see that in the strong illumination case in figure \ref{fig:Fig22}, only the Fourier components up to this intersection point are recovered. The intersection marks the spatial frequency at which the ideal image (i.e., raw image with no added noise) $I(k_y)$ matches the Fourier transform of the SD noise $N_{SD}(k_y)$.
Beyond this point, we can say that the ideal image is completely buried under the SD noise alone. As we steadily increase the intensity of the illumination, $I(k_y)$ and $N_{SD}(k_y)$ increase by the same proportion and therefore, the value of $\frac{k_y}{k_o}$ where the two intersect does not change. We can therefore say that the improvement in the noisy spectrum in figure \ref{fig:Fig22} is due to the signal rising above the SI noise, which does not change with the illumination intensity. Further increments in the strength of the illumination will not improve the noisy image spectrum. On the other hand, if we study the SD noise spectrum in figure \ref{fig:Fig23}, we can see that it is approximately at the same average level as in figure \ref{fig:Fig21}. This is not surprising if we compare the green and red plots of figure \ref{fig:Fig8}. We can see that the spatial electric field distributions of the noise added images $I^{'}_{N}(y)$ and $I_{N}(y)$ are comparable. The only difference is that $I^{'}_{N}(y)$ has high spatial frequency features. Since the standard deviation of the SD noise is proportional to the amplitude of the image, according to Eqs. \ref{eq:SD_Passive} and \ref{eq:SD_Active}, we can see why the SD noise is approximately the same in both the weak and structured illumination schemes. Note that under the structured illumination, the noise added convolved image closely follows the ideal image until $\frac{k_y}{k_o} = 4.5$. This can be pushed to even higher spatial frequencies if the transfer function characterizing the lens is accurate. Also, note that the structured illumination has successfully suppressed the computational errors in the Fourier transform which are visible in the black line in figure \ref{fig:Fig21} beyond $\frac{k_y}{k_o} = 3$. These errors are amplified by a factor of $10^8$ in figure \ref{fig:Fig22}.
From the above discussion we conclude that by using structured illumination the SD noise is not amplified but redistributed when compared with the strong illumination. Therefore, it is possible to raise the high spatial frequency features of the object above the noise. This is the primary reason why structured illumination can accurately resolve the image while strong illumination fails. The technique is generally applicable to any arbitrary object with the use of any plasmonic or metamaterial lens, provided that an accurate transfer function for the imaging system is available. A selective amplification process is used to recover specific object features by controlling $P_0$ near and beyond where the noise floor is reached in the Fourier spectrum of the raw image. Therefore, no prior knowledge of the object is required. However, a necessary criterion is a sufficiently accurate transfer function in the region where the auxiliary is applied to correctly estimate the required amount of amplification. It should be noted that different objects may require different auxiliaries, since the spatial frequency at which the noise floor is reached may vary for different objects. Therefore, a tunability mechanism would be instrumental for the versatility of the imaging system. Even though a single narrowband auxiliary would still be sufficient to enhance the resolution of the raw image, further enhancement in the resolution would demand either superimposing multiple narrowband auxiliaries or a single sufficiently broadband auxiliary within the range of the accurate transfer function. Narrowband auxiliaries require a larger image plane and more post-processing, while a single broadband auxiliary requires less post-processing and a smaller image plane at the expense of possibly higher noise amplification. Similarly, an unnecessarily large amplitude of a narrowband auxiliary may excessively amplify the noise.
Another likely limitation arises from the increasingly large power loss in the deep subwavelength regime, which requires increasingly high amounts of amplification to reconstruct extremely fine details of the object. This not only reduces the efficiency but might also introduce undesired non-linear and thermal effects in the optical materials, hence limiting the resolution of the imaging system. \section{Conclusion} In summary, we proposed an active implementation of the recently introduced plasmon injection scheme \cite{PI} to significantly improve the resolution of Pendry's non-ideal negative index flat lens beyond the diffraction limit in the presence of realistic material losses and SD noise. Simply by increasing the illumination intensity, it is not generally possible to efficiently reconstruct the image due to the noise amplification. However, in the proposed active implementation one can counter the adverse noise amplification effect by using a convolved auxiliary source which allows for a selective amplification of the high spatial frequency features deep within the sub-wavelength regime. We have shown that this approach can be used to control the noise amplification while at the same time recovering features buried within the noise, thus enabling ultra-high resolution imaging far beyond the previous passive implementations of the plasmon injection scheme \cite{Wyatt,zhang2016enhancing}. The convolution process to construct the auxiliary source in the proposed active scheme may be realized physically by different methods, with metasurfaces \cite{Genevet:17,Kildishev1232009,Pors2013,PhysRevB.84.205428,Yu333,Xu:16,pfeiffer2013metamaterial,holloway2012overview} and hyperbolic metamaterials \cite{Rizza:12,schurig2003spatial,wood2006directed,zhang2015hyperbolic} being the primary candidates. A more detailed analysis of the design of such structures to implement the convolved auxiliary source will be the focus of our future research.
Finally, we should note that we purposefully focused here on the imperfect negative index flat lens, which poses a highly stringent and conservative problem. However, in the shorter term the proposed method can be relatively easily applied to experimentally available plasmonic superlenses \cite{Guo2014,Liu_Hong,Zhao,Fang534,taubner2006near,zhang2008superlenses} and hyperlenses \cite{liu2007far,rho2010spherical,lu2012hyperlenses,sun2015experimental,lee2007development}. Our findings also raise hopes for reviving Pendry's early vision of the perfect lens \cite{Pendry} by decoupling the loss and isotropy issues toward a practical realization \cite{soukoulis2010optical,soukoulis2011past,guney2009connected,guney2010intra,rudolph2012broadband,yang2016experimental}. \section{Funding Information} Office of Naval Research (award N00014-15-1-2684). \section{Acknowledgement} We thank Jeremy Bos at Michigan Technological University for fruitful discussions on noise characterization.
\section{Simple Drive Train Model} An automatic transmission provides gear shifts characterized by two features: (1) the torque transfer, in which a clutch corresponding to the target gear takes over the drive torque, and (2) the speed synchronization, in which slip from input to output speed of the clutch is reduced such that it can be closed or controlled at low slip. In this work, we consider the reference tracking control problem for a friction clutch during the synchronization phase. Its input is driven by a motor and the output is propagated through the gearbox to the wheels of the car. Speed control action can only be applied by the clutch when input and output force plates are in friction contact with slip. Generally, the aim in speed synchronization is to smoothly control the contact force of the clutch towards zero slip without jerking. This is where our RL approach for tracking control comes into play. For a smooth operation, a reference to follow during the friction phase is predesigned by the developer of the drive train. For easier understanding, a simple drive train model is used, which is derived from the motion equations of an ideal clutch \cite{quang_1998} extended by the influence of the gearbox \begin{align} J_{in} \, \dot{\omega}_{in} &= -T_{cl}+T_{in} \enspace, \label{eq:clutchMotA}\\ J_{out} \, \dot{\omega}_{out} &= \theta \, T_{cl} - T_{out} \enspace , \label{eq:clutchMotB} \end{align} where ${\omega}_{in}$ is the input speed on the motor side and ${\omega}_{out}$ is the output speed at the side of the wheels. Accordingly, $T_{in}$ is the input torque and $T_{out}$ is the output torque. The transmission ratio $\theta$ of the gearbox defines the ratio between the input and output speed. The input and output moments of inertia $J_{in}$, $J_{out}$ and the transmission ratio $\theta$ are fixed characteristics of the drive train. The clutch is controlled by varying the torque transmitted by the clutch $T_{cl}$.
The input torque $T_{in}$ is approximated as constant while the output torque is assumed to depend linearly on the output speed, $T_{out} = \eta \cdot \omega_{out}$, which changes \eqref{eq:clutchMotA} and \eqref{eq:clutchMotB} to \begin{align} J_{in} \, \dot{\omega}_{in} &= -T_{cl}+T_{in} \enspace, \\ J_{out} \, \dot{\omega}_{out} &= \theta \, T_{cl} - \eta \cdot \omega_{out} \enspace . \label{eq:clutchMot2} \end{align} Solving the differential equations for a time interval $\Delta T$ yields the discrete system equation \begin{equation} \begin{bmatrix} \omega_{in} \\ \omega_{out} \end{bmatrix}_{k+1} = \bol{A} \begin{bmatrix} \omega_{in} \\ \omega_{out} \end{bmatrix}_{k} + \bol{B}_1 \cdot T_{cl,k} + \bol{B}_2 \cdot T_{in} \enspace, \label{eq:systemEqCl} \end{equation} where \begin{align} \bol{A} &= \begin{bmatrix} 1 & 0 \\ 0 & \exp \left( {-\frac{ \eta \cdot \Delta T}{J_{out}}} \right) \end{bmatrix} \enspace, \\ \bol{B}_1 &= \begin{bmatrix} - \frac{\Delta T}{J_{in}} \\ \frac{\theta}{\eta} \left(1 - \exp \left( {-\frac{ \eta \cdot \Delta T}{J_{out}}} \right) \right) \end{bmatrix} \, , \enspace \bol{B}_2= \begin{bmatrix} \frac{\Delta T}{J_{in}} \\ 0 \end{bmatrix} \enspace, \end{align} with state $\vec{x}_k = [\omega_{in} \omega_{out}]^T_k$ and the control input $u_k = T_{cl,k}$. For a friction-plate clutch, the clutch torque $T_{cl,k}$ depends on the capacity torque $T_{cap,k}$ \begin{equation} T_{cl,k}=T_{cap,k} \cdot \text{sign} \left(\omega_{in,k} - \theta \cdot \omega_{out,k}\right), \enspace T_{cap,k} \geq 0 \enspace, \label{eq:Kupplungsmoment} \end{equation} which means that $T_{cl,k}$ changes its sign depending on whether the input or the output speed is higher. The capacity torque is proportional to the contact force applied on the plates, $T_{cap,k} \sim F_{N,k} $. In real drive trains, the change of the control input in a time step is limited due to underlying dynamics such as pressure dynamics.
To simulate this behavior, a low-pass filter is applied to the control inputs. We use a first-order lag element (PT1) with the cutoff frequency $f_g$ \begin{equation} T'_{cl,k} =\begin{cases} \left(T_{cl,k} - T'_{cl,k-1} \right) \cdot \left( 1- a \right) + T'_{cl,k-1} , & \text{if $T_{cl,k} > T'_{cl,k-1}$} \enspace, \\ \left(T'_{cl,k-1} - T_{cl,k} \right) \cdot a + T_{cl,k} , & \text{otherwise} \enspace, \end{cases} \end{equation} where \begin{equation} a = \exp { \left( - 2 \pi f_g \Delta T \right)} \enspace. \end{equation} In this case, the control input provided by the controller $T_{cl,k}$ is transformed to the delayed control input $T'_{cl,k}$ applied to the system. The simple drive train model is used in three different experiments of input speed synchronization, which are derived from use cases of real drive trains. In the first experiment, the references to be followed are smooth. This is associated with a usual gear shift. In real applications, delays, hidden dynamics, or change-of-mind situations can arise which make a reinitialization of the reference necessary. This behavior is modeled in the second experiment, where the references contain jumps or discontinuities. Jumps in the reference can lead to fast changes in the control inputs. To still maintain realistic behavior, the lag element is applied to the control inputs. In the third experiment, we again use smooth references but in different input speed ranges. Varying drive torque demands typically cause the gear shift to start at different input speed levels. Our approach will be evaluated on all three experiments to determine the performance for different use cases. Note that for demonstration purposes we consider speed synchronization by clutch control input only. In real applications, clutch control is often combined with input torque feedforward.
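A minimal sketch of the discrete model \eqref{eq:systemEqCl} and the PT1 lag, using the parameters of Table \ref{tab:paraClutch}; the cutoff frequency $f_g$ and the zero initial filter state are assumptions, since their values are not stated here.

```python
import numpy as np

# Parameters from Table paraClutch
J_in, J_out = 0.209, 86.6033        # kg m^2
theta, eta = 10.02, 2.0             # transmission ratio, Nms/rad
T_in, dT = 20.0, 0.01               # Nm, s

d = np.exp(-eta * dT / J_out)
A = np.array([[1.0, 0.0],
              [0.0, d]])
B1 = np.array([-dT / J_in, (theta / eta) * (1.0 - d)])
B2 = np.array([dT / J_in, 0.0])

def step(x, T_cl):
    """One discrete step of the system equation for x = [w_in, w_out]."""
    return A @ x + B1 * T_cl + B2 * T_in

def pt1(T_seq, f_g=2.0, prev=0.0):
    """PT1 lag on the commanded torques.  Both branches of the piecewise
    definition reduce to the same update T' = a*T'_prev + (1-a)*T;
    f_g and the initial state prev are illustrative."""
    a = np.exp(-2.0 * np.pi * f_g * dT)
    out = []
    for T in T_seq:
        prev = a * prev + (1.0 - a) * T
        out.append(prev)
    return np.array(out)
```

With $T_{cl,k} = T_{in}$ the input speed is left unchanged by `step`, which is exactly the property the reward function exploits in the next section.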
\section{Conclusion} In this work, proximal policy optimization for tracking control exploiting future reference information was presented. We introduced two variants of extending the argument of both actor and critic. In the first variant, we added global future reference values to the argument. In the second variant, the argument was defined in a novel kind of residual space between the current state and the future reference values. By evaluating our approach on a simple drive train model we could clearly show that both variants improve the performance compared to an argument taking only the current reference value into account. If the approach is applied to references with discontinuities, adding several future reference values to the argument is beneficial. The residual space variant shows its advantages especially for references with different offsets. In addition, our approach outperforms PI controllers commonly used in drive train control. Besides higher tracking quality, the generalization to different references is significantly better than using a PI controller. This guarantees an adequate performance on arbitrary, previously unseen, references. In future work, our approach will be applied to a more sophisticated drive train model where noise behavior, model inaccuracies, non-modeled dynamics, and control input range limits, as expected in a real drive train system, are systematically included in training. \section{Evaluation} As mentioned in the last chapter, we will evaluate our approach using three different experiments representing three use cases of the drive train. In every experiment, the three different arguments of the actor and the critic introduced in Chapter \ref{sec:PPO_TC} will be applied. For the drive train system \eqref{eq:systemEqCl}, the arguments of the actor and the critic have to be defined. The reference to be followed corresponds only to the input speed.
Since no reference for the output speed is given, we add the output speed $\omega_{out,k}$ as a global variable in all three arguments. Analogous to \eqref{eq:arg0}, \eqref{eq:arg1} and \eqref{eq:arg2}, the arguments of the drive train system are \begin{align} \vec{s}_k^{0,cl} &= \left[ \omega_{out,k}, \omega_{in,k}, \left( \omega^r_{in,k} - \omega_{in,k} \right) \right]^T \enspace, \\ \vec{s}_k^{1N,cl} &= \left[\omega_{out,k}, \omega_{in,k}, \omega^r_{in,k}, \omega^r_{in,k+1}, \ldots, \omega^r_{in,k+N} \right]^T \enspace, \\ \vec{s}_k^{2N,cl} &= \left[ \omega_{out,k}, \left(\omega^r_{in,k} - \omega_{in,k}\right), \left(\omega^r_{in,k+1} - \omega_{in,k}\right), \ldots, \left(\omega^r_{in,k+N} - \omega_{in,k}\right) \right]^T . \end{align} Without future reference values, as applied in \cite{zhang_2019}, the argument is $\vec{s}_k^{0,cl}$; with global future reference values it is $\vec{s}_k^{1N,cl}$; and in residual space with future reference values it is $\vec{s}_k^{2N,cl}$. The reward function for the simple drive train model derived from \eqref{eq:reward} is given as \begin{equation} r_k = -\left( \omega_{in,k+1} - {\omega}^r_{in,k+1}\right)^2 - \beta \cdot \left (T_{cl,k} - T_{in}\right)^2 \enspace. \label{eq:rewardCl} \end{equation} If $T_{cl,k} = T_{in}$, the input speed $\omega_{in}$ does not change from time step $k$ to time step $k+1$ according to \eqref{eq:systemEqCl}. Thus, deviations of $T_{cl,k}$ from $T_{in}$ are penalized to suppress control inputs which would cause larger changes in the state. All parameters used in the simple drive train model are given in Table \ref{tab:paraClutch}.
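The reward of Eq. \ref{eq:rewardCl} can be sketched directly; the penalty weight $\beta$ is an assumption, since its value is not stated in this excerpt.

```python
T_in = 20.0        # Nm, input torque from Table paraClutch

def reward(w_in_next, w_ref_next, T_cl, beta=1e-3):
    """Squared tracking error plus a penalty on control inputs that
    deviate from the holding torque T_in (beta is illustrative)."""
    return -(w_in_next - w_ref_next) ** 2 - beta * (T_cl - T_in) ** 2
```

Perfect tracking with $T_{cl,k} = T_{in}$ yields zero reward, the maximum attainable value.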
\begin{table}[tb] \caption{Parameters of the simple drive train model.} \label{tab:paraClutch} \centering \begin{tabularx}{8.5cm}{p{5.0cm}|X} \hline Parameter & Value \tabularnewline \hline \hline Input moment of inertia $J_{in}$ & \SI{0.209}{kg m^2} \\ \hline Output moment of inertia $J_{out}$ & \SI{86.6033}{kg m^2} \\ \hline Transmission ratio $\theta$ & 10.02 \\ \hline Input torque $T_{in}$ & \SI{20}{Nm} \\ \hline $\eta$ & \SI{2}{(Nms)/rad} \\ \hline Time step $\Delta T$ & \SI{10}{ms} \\ \hline \end{tabularx} \end{table} \subsection{Algorithm} \begin{algorithm}[tb!] \caption{PPO for tracking control } \label{algo:controlPPO} \begin{algorithmic}[1] \REQUIRE Replay buffer $\mathcal D$, critic parameters $\vec{\phi}$, actor parameters $\vec{\theta}$, actor learning rate $\alpha_a$, critic learning rate $\alpha_c$, target network delay factor $\tau$ \STATE Init target critic parameters $\vec{\phi}' \leftarrow \vec{\phi}$ and $h = 0$ \FOR {1 .. Number of episodes} \STATE Observe initial state $\vec{x}_0$ and new reference $\omega^r_{in}$ \FOR {1 .. K} \STATE Apply control input $u_k \leftarrow \pi_\theta(\vec{s}_k)$ \STATE Observe new state $\vec{x}_{k+1}$ and reward $r_k$ \STATE Add $(\vec{s}_k, u_k, r_k, \vec{s}_{k+1} )$ to replay buffer $\mathcal D$ \ENDFOR \FOR {1 .. Number of epochs} \STATE Sample training batch from $\mathcal D$ \STATE Update critic $\vec{\phi}_{h+1} \leftarrow \vec{\phi}_h + \alpha_c \nabla_{\vec{\phi}} C(\vec{\phi}_h)$ \STATE Calculate advantage $\vec{\hat A}$ using GAE \STATE Update actor $\vec{\theta}_{h+1} \leftarrow \vec{\theta}_h + \alpha_a \nabla_{\vec{\theta}} J(\vec{\theta}_h)$ \STATE $h \leftarrow h+1$ \STATE \textbf{Every m-th epoch} \\ \quad Update target critic $\vec{\phi}' \leftarrow (1 - \tau) \, \vec{\phi}' + \tau \, \vec{\phi}$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The PPO algorithm, applied for the evaluation, is shown in Algorithm \ref{algo:controlPPO}. 
We use two separate networks representing the actor and the critic. In order to improve data-efficiency we apply experience replay \cite{lin_1992}, \cite{lillicrap2015}. While interacting with the system, the tuples $(\vec{s}_k, u_k, r_k, \vec{s}_{k+1} )$ are stored in the replay buffer. In every epoch a training batch of size $L$ is sampled from the replay buffer to update the actor's and the critic's parameters. As introduced in Chapter \ref{sec:PPO}, the advantage function is determined via generalized advantage estimation~\eqref{eq:adv} with the GAE parameter set to $\lambda = 0$ and the discount $\gamma = 0.7$. To improve the stability of the critic network during the training, we added a target critic network~$V'$ \cite{lillicrap2015} which is only updated every m-th epoch (in our implementation $m=2$). The critic loss is defined as the mean squared temporal difference error~\eqref{eq:TDE} \begin{equation} C(\vec{\phi}) = \frac{1}{L} \sum^L \left( r_k + \gamma \, V'(\vec{s}_{k+1}) - V(\vec{s}_{k})\right)^2 \enspace . \end{equation} The output of the actor provides $T_{cap}$ in \eqref{eq:Kupplungsmoment} and the sign for $T_{cl}$ is calculated through the input and output speed. To ensure $T_{cap} \geq 0$ the last activation function of the actor is chosen as ReLU. In Table~\ref{tab:PPOpara}, all parameters used in the PPO algorithm are shown. In our trainings, we perform 2000 episodes each with 100 time steps. Each run through the system is followed by 100 successive training epochs. 
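The critic update of Algorithm \ref{algo:controlPPO} can be sketched as follows; the batch arrays stand in for network evaluations, so the shapes and values are illustrative.

```python
import numpy as np

gamma, tau = 0.7, 0.001   # discount and target delay factor from the text

def critic_loss(r, V_next_target, V_now):
    """Mean squared TD error over a batch; the bootstrap value comes
    from the delayed target critic V'."""
    return np.mean((r + gamma * V_next_target - V_now) ** 2)

def soft_update(phi_target, phi):
    """Polyak averaging of the target critic parameters, applied every
    m-th epoch (m = 2 in the text)."""
    return (1.0 - tau) * phi_target + tau * phi
```

Keeping the bootstrap target on the slowly moving parameters $\vec{\phi}'$ is what stabilizes the regression target while the online critic is updated every epoch.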
\begin{table}[tb] \caption{Parameters used in the PPO algorithm.} \label{tab:PPOpara} \centering \begin{tabularx}{11.5cm}{p{7.0cm}|X} \hline Parameter & Value \tabularnewline \hline \hline Hidden layer sizes of actor and critic networks & [400, 300] \\ \hline Activation functions of the actor network & [tanh, tanh, ReLU] \\ \hline Activation functions of the critic network & [tanh, tanh, -ReLU] \\ \hline Actor's learning rate $\alpha_{a}$ & $5 \cdot 10^{-5}$ \\ \hline Critic's learning rate $\alpha_{c}$ & $1.5 \cdot 10^{-3} $ \\ \hline Batch size $L$ & 100 \\ \hline Replay buffer size & 10000 \\ \hline Initial standard deviation of the stochastic actor & 10 \\ \hline Clipping parameter $c$ & 0.1 \\ \hline Target network delay factor $\tau$ & 0.001 \\ \hline Entropy coefficient $\mu$ & 0.01 \\ \hline \end{tabularx} \end{table} \subsection{Simulation procedure} \label{simu} In the following, we will evaluate our approach on three different experiments. For each experiment 15 independent simulations are performed. In each simulation the algorithm is trained using 2000 different training references (one for each episode). After every tenth episode of the training the actor is tested on an evaluation reference. Using the evaluation results, the best actor of the training is identified. In the next step, the best actor is applied to 100 test references to evaluate the performance of the algorithm. In the following, the mean episodic reward over all 100 test references and all 15 simulations will serve as quality criterion. The smooth references are cubic splines formed from eight random values around the mean of \SI{2000}{rpm}. In the second experiment, between one and 19 discontinuities are induced by adding periodic square waves of different period durations to the reference. The references in the third experiment are shifted by adding an offset to the spline. Five different offsets are used, resulting in data between approximately \SI{1040}{rpm} and \SI{4880}{rpm}.
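The smooth training references described above can be generated, for instance, as follows (a sketch; the equidistant knot spacing and the spread of \SI{500}{rpm} around the mean are our assumptions, not values taken from the experiments):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_reference(num_steps=100, dt=0.01, mean_rpm=2000.0,
                     spread=500.0, n_knots=8, rng=None):
    """Cubic spline through eight random knot values around the 2000 rpm mean,
    sampled at the controller time step dt."""
    if rng is None:
        rng = np.random.default_rng()
    t_knots = np.linspace(0.0, num_steps * dt, n_knots)
    knot_values = mean_rpm + rng.uniform(-spread, spread, size=n_knots)
    spline = CubicSpline(t_knots, knot_values)
    return spline(np.arange(num_steps) * dt)
```

Adding a periodic square wave to such a sampled reference yields the discontinuous references of the second experiment; adding a constant offset yields the third class.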
As mentioned before, the results will be compared to the performance of a PI controller. We determine the parameters of the PI controller by minimizing the cumulative costs over an episode, i.e., the negative of the episodic reward defined by the reward function \eqref{eq:rewardCl}. To avoid local minima, we first apply a grid search and then use a quasi-Newton method to determine the exact minimum. In the optimization, the cumulative costs over all training references (the same as in the PPO training) are used as quality criterion. \subsection{Results} \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_smooth6.pdf} \caption{Training over 2000 episodes on smooth references.} \label{fig:smoothTrain} \end{figure} \begin{table}[b] \caption{Quality evaluated on a test set of smooth references.} \label{tab:smoothTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -0.303 & 0.027 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -0.035 & 0.020 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -0.030 & 0.020 \\ \hline PI controller & PI & -0.069 & 1.534 \\ \hline \end{tabularx} \end{table} \begin{figure}[bt] \centering \subfloat[Input speeds and reference for one episode.] {\includegraphics[height=5.2cm]{figures/test_plot_smooth2.pdf}\label{fig:testStatesSmooth}} \subfloat[Control inputs for one episode.] {\includegraphics[height=5.2cm]{figures/test_plot_act_smooth2.pdf}\label{fig:testActionssmooth}} \caption{Performance of the best trained actors on a smooth test reference.
Note that in real applications reference trajectories are designed in such a way that the input speed is synchronized quickly but smoothly towards zero slip at the clutch.} \label{fig:smoothRef} \end{figure} In the following, the results of the different experiments with (1) a class of smooth references, (2) a class of references with discontinuities and (3) a class of smooth references shifted by an offset will be presented. \subsubsection*{Class of smooth references} For smooth references the weighting parameter~$\beta$ in the reward function \eqref{eq:rewardCl} is set to zero. The parameters of the PI controller were computed as described in Section \ref{simu}; the proportional gain was determined as $K_P = 20.88$ and the integral gain as $K_I = 11.08$. The training curves of the PPO approaches are drawn in Figure~\ref{fig:smoothTrain}. It can be clearly seen that the approaches using one future reference value in the argument of the actor and the critic (GPPO1, RPPO1) reach a higher mean episodic reward than the approach including only reference information of the current time step (CPPO). Furthermore, the approach with the argument defined in the residual space (RPPO1) achieves high rewards faster than the GPPO1 and the CPPO. As mentioned before, the trained actors are evaluated on a set of test references. The obtained results are given in Table \ref{tab:smoothTest}. The approach using global reference values in the argument (GPPO1) and the approach defining the argument in a residual space (RPPO1) provide the best results. The mean episodic reward of the CPPO is about ten times lower than those of the GPPO1 and the RPPO1. The performance of the classical PI controller lies in between, but its standard deviation is very high. This implies that, when optimizing the PI parameters on the training data, no single parameter set can be found that performs equally well on all the different references.
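The two-stage PI tuning used for this comparison (coarse grid search, then quasi-Newton refinement, as described in Section \ref{simu}) can be sketched as follows (function names and grid ranges are ours; `episode_cost` stands for the cumulative cost over all training references):

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def tune_pi(episode_cost, kp_grid, ki_grid):
    """Grid search over (K_P, K_I), then BFGS refinement from the best cell."""
    # coarse grid search to find a good starting point and avoid local minima
    best = min(product(kp_grid, ki_grid),
               key=lambda p: episode_cost(np.asarray(p)))
    # quasi-Newton refinement around the best grid cell
    result = minimize(episode_cost, x0=np.asarray(best), method="BFGS")
    return result.x
```

The grid resolution is a trade-off: it only needs to be fine enough that the best cell lies in the basin of attraction of the global minimum for the subsequent BFGS step.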
In Figure~\ref{fig:smoothRef}, the best actors are applied to follow a test reference. The tracks of the PI controller, the GPPO1 and the RPPO1 show similar behavior. Only in the valleys does the PI controller deviate slightly more from the reference. Since the CPPO has to decide on a control input knowing only the current reference value, its reaction always lags behind. In Figure~\ref{fig:testStatesSmooth}, it can be seen that the CPPO applies a control input that closes the gap between the current input speed and the current reference value. But in the next time step the reference value has already changed and is consequently not reached. The same effect leads to a shift of the control inputs in Figure~\ref{fig:testActionssmooth}. \subsubsection*{Class of references with discontinuities} As mentioned before, to avoid unrealistically fast changes of the control inputs we add a first-order lag element with cutoff frequency $f_g = \SI{100}{Hz}$ to the drive train model in the experiment setting for references with discontinuities. In addition, $\beta$ is set to $1/3000$ in the reward function \eqref{eq:rewardCl} to prevent the learning of large control inputs when discontinuities occur in the reference signal. The PI controller parameters were optimized on the same reward function, resulting in $K_P=18.53$ and $K_I=5.67$. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_jumps4.pdf} \caption{Training over 2000 episodes on references with discontinuities.} \label{fig:jumpTraining} \end{figure} Besides including the information of one future reference value, the performance of adding three future reference values (GPPO3 and RPPO3) is also investigated in this experiment. The learning curves of all five PPO approaches are illustrated in Figure~\ref{fig:jumpTraining}. All approaches with future reference values show similar results in the training.
The PPO approach containing only current reference values (CPPO) achieves a significantly lower mean episodic reward. A similar behavior can be observed in the evaluation on the test reference set, for which the results are given in Table~\ref{tab:jumpTest}. \begin{table}[bt] \caption{Quality evaluated on test references with discontinuities.} \label{tab:jumpTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -10.936 & 0.618 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -1.597 & 0.079 \\ \hline Global space with three future references ($\vec{s}_k^{13,cl}$)& GPPO3 & -1.499 & 0.082 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -1.614 & 0.088 \\ \hline Residual space with three future references ($\vec{s}_k^{23,cl}$) & RPPO3 & -1.510 & 0.086 \\ \hline PI controller & PI & -1.747 & 2.767 \\ \hline \end{tabularx} \end{table} Again, the PPO approaches with one global future reference (GPPO1) and one future reference in the residual space (RPPO1) show similar results and perform significantly better than the CPPO. Adding the information of more future reference values (GPPO3 and RPPO3) leads to an even higher mean episodic reward. The PI controller performs slightly worse than the PPO approaches with future reference information. Due to the large standard deviation of the PI controller's performance, it cannot be guaranteed to perform well on a specific reference. The performance of the best actors on a test reference with discontinuities is illustrated in Figure~\ref{fig:testStatesjumps}. Note that, for better visibility, the figure shows only a section of a full episode.
\begin{figure}[b] \centering \subfloat[Input speeds and reference.]{\includegraphics[height=5.2cm]{figures/test_plot_jumps2.pdf}\label{fig:testStatesjumps}} \subfloat[Control inputs.]{\includegraphics[height=5.2cm]{figures/test_plot_act_jumps2.pdf}\label{fig:testActionsjumps}} \caption{Performance of the best trained actors on a test reference with discontinuities.} \label{fig:testjumps} \end{figure} The PPO approaches including future reference values and the PI controller are able to follow the reference even if a discontinuity occurs. The PPO approach without future reference values performs even worse than on the smooth references. It can be clearly seen that the jump cannot be detected in advance and the reaction of the actor is delayed. The applied control inputs are shown in Figure~\ref{fig:testActionsjumps}: the approaches including three future reference values respond earlier to the upcoming discontinuities and cope with them using smaller control inputs, which also leads to a higher reward. The PI controller shows a similar behavior to the PPO approaches with one future reference value (GPPO1 and RPPO1). \subsubsection*{Class of smooth references with offsets} For the class of smooth references with offsets, the same settings as in the smooth references experiment are applied. The PI parameters are determined as $20.88$ for $K_P$ and $11.04$ for $K_I$. The training curves of the PPO approaches are illustrated in Figure~\ref{fig:offsetTrain}. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_offset.pdf} \caption{Training over 2000 episodes on smooth references with offsets.} \label{fig:offsetTrain} \end{figure} Compared to the learning curves of the smooth references experiment, the training on the references with offsets needs more episodes to reach the maximum episodic reward.
The training of the approach using global reference values (GPPO1) is extremely unstable. The reason might be that the training data contains only references in a finite number of small disjoint areas in a huge state space. Since the argument of the GPPO1 includes only global states and reference values, no data is shared between these areas and the policy is learned independently for each area. Consequently, the stability of the approaches with argument components in the residual space (CPPO and RPPO1) is not affected by this issue. However, the RPPO1 reaches higher rewards significantly faster than the CPPO, whose argument also contains the global input speed. \begin{table}[b] \caption{Quality evaluated on a test set for smooth references with offset.} \label{tab:offsetTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -0.377 & 0.091 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -0.102 & 0.089 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -0.031 & 0.018 \\ \hline PI controller & PI & -0.069 & 1.541 \\ \hline \end{tabularx} \end{table} In Table \ref{tab:offsetTest}, the performance on a set of test references is given. The mean episodic reward of the CPPO, the RPPO1 and the PI controller is in the same range as in the smooth references experiment. As in the other experiments, the PI controller shows a large standard deviation. Despite the instabilities in the training, the GPPO1 receives a higher mean episodic reward than the CPPO, but its performance is significantly worse than in the smooth references experiment. It can be clearly seen that residual space arguments lead to better performance for references with offsets.
\begin{figure}[t] \centering \subfloat[Input speeds and reference.]{\includegraphics[height=5.2cm]{figures/test_plot_offset2.pdf}\label{fig:offsetStates}} \subfloat[Control inputs.]{\includegraphics[height=5.2cm]{figures/test_plot_act_offset2.pdf}\label{fig:offsetActions}} \caption{Performance of the best trained actors on a smooth test reference with offsets.} \label{fig:offsetPlots} \end{figure} In Figure~\ref{fig:offsetPlots}, the performance of the best actors on a test reference is illustrated. For better visibility, only a section of a full episode is drawn. The results are similar to the smooth references experiment; only the GPPO1 shows slightly worse tracking performance. \section{Introduction} In cars with automatic transmissions, gear shifts should be performed in such a way that no discomfort is caused by the interplay of motor torque and clutch control action. In particular, the synchronization of the input speed to the gears' target speed is a sensitive process mainly controlled by the clutch. Ideally, the motor speed should follow a predesigned reference, but suboptimal feedforward control and disturbances in the mechanical system can cause deviations from the optimal behavior. The idea is to apply a reinforcement learning (RL) approach to control the clutch behavior, regulating the deviations from the optimal reference. The advantage of an RL approach over classical approaches such as PI control is that no extensive experimental parametrization for every gear and every clutch is needed, which is very complex for automatic transmissions. Instead, the RL algorithm is supposed to learn the optimal control behavior autonomously. Since the goal is to guide the motor speed along a given reference signal, the problem at hand belongs to the family of tracking control problems. In the following, a literature review on RL for tracking control is given.
In \cite{qin_2018}, the deep deterministic policy gradient method, introduced in \cite{lillicrap_2016}, is applied to learn the parameters of a PID controller. An adaptive PID controller is realized in \cite{carlucho_2017} using an incremental Q-learning for real-time tuning. A combination of Q-learning and a PID controller is presented in \cite{wang_2020}, where the applied control input is a sum of the PID control input and a control input determined by Q-learning. Another common concept applied to tracking control problems is model predictive control (MPC), which can also be combined with RL. A data-efficient model-based RL approach based on probabilistic model predictive control (MPC) is introduced in \cite{kamthe_2018}. The key idea is to learn the probabilistic transition model using Gaussian processes. In \cite{gros_2020}, nonlinear model predictive control (NMPC) is used as a function approximator for the value function and the policy in a RL approach. For tracking control problems with linear dynamics and quadratic costs a RL approach is presented in \cite{koepf_2020}. Here, a Q-function is analytically derived that inherently incorporates a given reference trajectory on a moving horizon. In contrast to the approaches presented above, which are derived from classical controller concepts, pure RL approaches for tracking control have also been developed. A model-based variant is presented in \cite{hu_2020}, where a kernel-based transition dynamics model is introduced. The transition probabilities are learned directly from the observed data without learning the dynamics model. The model of the transition probabilities is then used in a RL approach. A model-free RL approach is introduced in \cite{Yu_2017}, where the deep deterministic policy gradient approach \cite{lillicrap_2016} is applied to tracking control of an autonomous underwater vehicle.
In \cite{kamran_2019}, images of a reference are fed to a convolutional neural network for a model-free state representation of the path. A deep deterministic policy gradient approach \cite{lillicrap_2016} is applied where previous local path images and control inputs are given as arguments to solve the tracking control problem. Proximal policy optimization (PPO) \cite{schulman_2017} with generalized advantage estimation (GAE) \cite{schulman_2016} is applied to tracking control of a manipulator and a mobile robot in \cite{zhang_2019}. Here, the actor and the critic are represented by a long short-term memory (LSTM) and a distributed version of PPO is used. In this work, we apply PPO to a tracking control problem. The key idea is to extend the arguments of the actor and the critic to take into account information about future reference values and thus improve the tracking performance. Besides adding global reference values to the argument, we also define an argument based on residua between the states and the future reference values. For this purpose, a novel residual space with future reference values is introduced that is applicable to model-free RL approaches. Our approach is evaluated on a simple drive train model. The results are compared to a classical PI controller and a PPO approach which does not consider future reference values. \section{Preliminaries: Proximal Policy Optimization Algorithm} \label{sec:PPO} As the policy gradient method, proximal policy optimization (PPO) \cite{schulman_2017} is applied in this work. PPO is a simplification of trust region policy optimization (TRPO) \cite{schulman_2017_2}.
The key idea of PPO is a novel loss function design where the change of the stochastic policy $\pi_{\vec{\theta}}$ in each update step is limited by introducing a clip function \begin{equation} J (\vec{\theta}) = \mathbb{E}_{\left(\vec{s}_k, \vec{u}_k\right)} \lbrace \min \left(p_k (\vec{\theta}) A_k, \text{clip} \left( p_k(\vec{\theta}),1-c,1+c\right) A_k \right) \rbrace \enspace, \label{eq:actorloss} \end{equation} where \begin{equation} p_k(\vec{\theta}) = \frac{\pi_{\vec{\theta}} (\vec{u}_k \vert \vec{s}_k) }{\pi_{\vec{\theta}_{old}} (\vec{u}_k \vert \vec{s}_k)} \enspace . \label{eq:pPPO} \end{equation} The clipping discourages $p_k(\vec{\theta})$ from leaving the interval $[1-c,1+c]$. The argument $\vec{s}_k$ given to the actor commonly contains the system state $\vec{x}_k$, but can be extended by additional information. The loss function is used in a policy gradient method to learn the actor network's parameters~$\vec{\theta}_h$ \begin{equation} \vec{\theta}_{h+1} = \vec{\theta}_h + \alpha_{a} \cdot \nabla_{\vec{\theta}} J (\vec{\theta}) \vert_{\vec{\theta} = \vec{\theta}_h} \enspace, \end{equation} where $\alpha_{a}$ is referred to as the actor's learning rate and $h \in \lbrace 0,1,2, \ldots \rbrace$ is the policy update number. A proof of convergence for PPO is presented in \cite{holzleitner_2020}. The advantage function $A_k$ in \eqref{eq:actorloss} is defined as the difference between the Q-function and the value function $V$ \begin{equation} A_k(\vec{s}_k,\vec{u}_k) = Q(\vec{s}_k,\vec{u}_k) - V(\vec{s}_k) \enspace .
\end{equation} In \cite{schulman_2017}, generalized advantage estimation (GAE) \cite{schulman_2016} is applied to estimate the advantage function \begin{equation} \hat{A}_k^{GAE(\gamma, \lambda)} = \sum_{l = 0}^\infty \left( \gamma \lambda \right)^l \delta_{k+l}^V \enspace, \label{eq:adv} \end{equation} where $\delta_k^V$ is the temporal difference error of the value function~$V$ \cite{sutton_2018} \begin{equation} \delta_k^V = r_k + \gamma V(\vec{s}_{k+1}) -V(\vec{s}_{k}) \enspace . \label{eq:TDE} \end{equation} The discount $\gamma \in [0,1]$ reduces the influence of future incidences. $\lambda \in [0,1]$ is a design parameter of GAE. Note that the value function also has to be learned during the training process and serves as the critic in the approach. The critic is also represented by a neural network and receives the same argument as the actor. To ensure sufficient exploration, the actor's loss function \eqref{eq:actorloss} is commonly extended by an entropy bonus $S[\pi_{\vec{\theta}}] (\vec{s}_k)$ \cite{mnih_2016}, \cite{williams_1991} \begin{equation} J (\vec{\theta}) = \mathbb{E}_{(\vec{s}_k, \vec{u}_k)} \lbrace \min \left(p_k (\vec{\theta}) A_k, \text{clip} \left( p_k(\vec{\theta}),1-c,1+c\right) A_k \right) + \mu \, S[\pi_{\vec{\theta}}] (\vec{s}_k) \rbrace \enspace, \end{equation} where $\mu \geq 0$ is the entropy coefficient. \section{Proximal Policy Optimization for Tracking Control with Future References} \label{sec:PPO_TC} As mentioned before, the key idea of the presented approach is to add the information of future reference values to the argument $\vec{s}_k$ of the actor and the critic in order to improve the control quality.
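The two quantities above, the GAE estimate \eqref{eq:adv} over a finite trajectory and the clipped surrogate \eqref{eq:actorloss} for a single sample, amount to the following computations (a sketch with our own function names; `values` carries one extra bootstrap entry, and probability ratios are formed from log-probabilities for numerical stability):

```python
import numpy as np

def gae(rewards, values, gamma=0.7, lam=0.0):
    """Generalized advantage estimation over a finite trajectory."""
    # one-step TD errors (eq. TDE); values[-1] is the bootstrap value
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    acc = 0.0
    for k in reversed(range(len(deltas))):
        acc = deltas[k] + gamma * lam * acc
        advantages[k] = acc
    return advantages

def clipped_surrogate(log_prob_new, log_prob_old, advantage, c=0.1):
    """Pessimistic (min) of the unclipped and clipped surrogate for one sample."""
    p = np.exp(log_prob_new - log_prob_old)  # probability ratio p_k(theta)
    unclipped = p * advantage
    clipped = np.clip(p, 1.0 - c, 1.0 + c) * advantage
    return np.minimum(unclipped, clipped)
```

With $\lambda = 0$, as used in our experiments, the GAE estimate reduces to the one-step TD error. In the surrogate, a positive advantage is effectively capped at ratio $1+c$ and a negative one floored at $1-c$, so a single batch cannot push the policy far from $\pi_{\vec{\theta}_{old}}$.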
PPO was already applied to a tracking control problem in \cite{zhang_2019}, where the argument $\vec{s}_k^0$ contains the current system state $\vec{x}_k$ as well as the residuum of $\vec{x}_{k}$ and the current reference $\vec{x}^r_{k}$ \begin{equation} \vec{s}_k^0 = \left[ \vec{x}_k, \left(\vec{x}^r_{k} - \vec{x}_{k}\right) \right]^T \enspace . \label{eq:arg0} \end{equation} However, no information about future reference values is part of the argument. We take advantage of the fact that the future reference values are known and incorporate them into the argument of the actor and the critic. In the following, two variants will be discussed: (1) Besides the system state $\vec{x}_k$, $N$ future reference values are added to the argument \begin{equation} \vec{s}_k^{1N} = \left[ \vec{x}_k, \vec{x}^r_k, \vec{x}^r_{k+1}, \ldots, \vec{x}^r_{k+N} \right]^T , \enspace N \in \mathbb{N} \enspace . \label{eq:arg1} \end{equation} (2) We introduce a novel residual space where the future reference values are related to the current state and the argument is defined as \begin{equation} \vec{s}_k^{2N} = \left[ \left(\vec{x}^r_{k} - \vec{x}_{k}\right), \left(\vec{x}^r_{k+1} - \vec{x}_{k} \right), \ldots, \left(\vec{x}^r_{k+N} - \vec{x}_{k}\right) \right]^T . \label{eq:arg2} \end{equation} In Figure~\ref{fig:res}, the residual space is illustrated for two-dimensional states and references. Being in the current state~$\vec{x}_k = [x^1_k, x^2_k]^T$ (red dot), the residua between the current state and the future reference values (black arrows) indicate whether the components of the state $x^1_k$ and $x^2_k$ have to be increased or decreased to reach the reference values $\vec{x}^r_{k+1}$ and $\vec{x}^r_{k+2}$ in the next time steps. Thus, the residual argument gives the PPO algorithm sufficient information to control the system.
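The two argument variants \eqref{eq:arg1} and \eqref{eq:arg2} can be assembled as follows (a sketch assuming, for simplicity, that the reference has the same dimension as the state; in the clutch example only the input speed is tracked):

```python
import numpy as np

def global_argument(x_k, ref, k, N):
    """s_k^{1N}: current state plus the current and N future reference values."""
    return np.concatenate([x_k] + [ref[k + i] for i in range(N + 1)])

def residual_argument(x_k, ref, k, N):
    """s_k^{2N}: residua between current/future reference values and the
    current state; no future *states* are needed, so no model is required."""
    return np.concatenate([ref[k + i] - x_k for i in range(N + 1)])
```

Here `ref` is the known reference trajectory, indexed by time step; near the end of an episode the reference would have to be padded so that `ref[k + N]` stays defined.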
Please note that a residual space containing future states $\left(\vec{x}^r_{k+1} - \vec{x}_{k+1}\right),\ldots, \left( \vec{x}^r_{k+N} - \vec{x}_{k+N}\right)$ would suffer from two disadvantages. First, the model would have to be learned, which would increase the complexity of the algorithm. Second, the future states $\vec{x}_{k+1},\ldots,\vec{x}_{k+N}$ depend on the current policy; thus the argument itself is a function of the policy. Applied as the argument in~\eqref{eq:pPPO}, the policy becomes a function of itself, $\pi_{\vec{\theta}} (\vec{u}_k \vert \vec{s}_k(\pi_{\vec{\theta}}))$. This could be solved by calculating the policy in a recursive manner where several instances of the policy are trained in the form of different neural networks. This solution would lead to a complex optimization problem which is expected to be computationally expensive and hard to stabilize in the training. Therefore, we consider this solution impractical. On the other hand, the residual space defined in \eqref{eq:arg2} contains all information about the future course of the reference; consequently, a residual space including future states would not enhance the information content. \begin{figure} \centering \def\svgwidth{0.7\linewidth} \graphicspath{{figures/}} \input{figures/grafik_res6.pdf_tex} \caption{Residual space defined by the current state and future reference values.} \label{fig:res} \end{figure} In the residual space, the arguments are centered around zero, independent of the actual values of the state or the reference. The advantage, compared to the argument with global reference values, is that only the deviation of the state from the reference is represented, which scales down the range of the state space that has to be learned. However, the residual space argument is only applicable if the change in the state for a given control input is independent of the state itself. As part of the tracking control problem, a reward function has to be designed.
The reward should represent the quality of following the reference. Applying the control input $\vec{u}_k$ in the state $\vec{x}_k$ leads to the state $\vec{x}_{k+1}$ and, related to this, a reward depending on the difference between $\vec{x}_{k+1}$ and the reference $\vec{x}^r_{k+1}$. Additionally, a penalty on large control inputs can be added. The resulting reward function for time step $k$ is defined as \begin{equation} r_k = - \left\| \vec{x}_{k+1} - \vec{x}^r_{k+1}\right\|^2 - \beta \cdot \left\| \vec{u}_{k} \right\|^2 \enspace, \label{eq:reward} \end{equation} where $\beta \geq 0$ is a weighting parameter. \section{Problem Formulation} In this work, we consider a time-discrete system with non-linear dynamics \begin{equation} \Vec{x}_{k+1} = f \left( \Vec{x}_k, \Vec{u}_k\right) \enspace , \label{eq:sy} \end{equation} where $\vec{x}_k \in \mathbb{R}^{n_x}$ is the state and $\Vec{u}_k \in \mathbb{R}^{n_u}$ is the control input applied in time step $k$. The system equation \eqref{eq:sy} is assumed to be unknown to the RL algorithm. Furthermore, the states $\vec{x}_k$ are assumed to be exactly known. In the tracking control problem, the state or components of the state are supposed to follow a reference $\vec{x}^r_k \in \mathbb{R}^{n_h}$. Thus, the goal is to control the system in such a way that the deviation between the state $\vec{x}_k$ and the reference $\vec{x}^r_k$ becomes zero in all time steps. The reference is assumed to be given, and the algorithm should be able to track previously unseen references. To reach this goal, the algorithm can learn from interactions with the system. In policy gradient methods, a policy is determined which maps the RL states~$\vec{s}_k$ to a control input~$\vec{u}_k$. The states~$\vec{s}_k$ can be the system states~$\vec{x}_k$ but can also contain other related components. In actor-critic approaches, the policy is represented by the actor. Here, $\vec{s}_k$ is the argument given to the actor and, in most instances, also to the critic as input.
To prevent confusion with the system state $\vec{x}_k$, we will refer to $\vec{s}_k$ as the argument in the following. \section{Existing Solutions and Challenges} In existing policy gradient methods for tracking control, the argument $\vec{s}_k$ is either identical to the system state $\vec{x}_k$~\cite{Yu_2017} or is composed of the system state and the residuum between the system state and the reference in the current time step~\cite{zhang_2019}. Those approaches show good results if the reference is fixed. Applied to arbitrary references, the actor can only respond to the current reference value but is not able to act optimally with respect to the subsequent reference values. In this work, we will show that this results in poor performance. Another common concept applied to tracking control problems is model predictive control (MPC), e.g., \cite{kamthe_2018}. Here, the control inputs are determined by predicting the future system states and minimizing their deviation from the future reference values. In general, the optimization over a moving horizon has to be executed in every time step as no explicit policy representation is determined. Another disadvantage of MPC is the need to know or learn the model of the system. Our idea is to transfer the concept of utilizing the information given in form of known future reference values from MPC to policy gradient methods. A first step in this direction is presented in \cite{koepf_2020}, where an adaptive optimal control method for reference tracking was developed. Here, a Q-function could be analytically derived by applying dynamic programming. The resulting Q-function depends inherently on the current state but also on current and future reference values. However, the analytical solution is limited to the case of linear system dynamics and quadratic costs (rewards). In this work, we transfer those results to a policy gradient algorithm by extending the arguments of the actor and the critic with future reference values.
In contrast to the linear quadratic case, the Q-function cannot be derived analytically. Accordingly, the nonlinear dependencies are approximated by the actor and the critic. The developed approach can be applied to tracking control problems with nonlinear system dynamics and arbitrary references. In some applications, the local deviation of the state from the reference is more informative than the global state and reference values, e.g., when operating the drive train in different speed ranges. In this case, it can be beneficial if the argument is defined as the residuum between the state and the corresponding reference value, because this scales down the range of the state space that has to be explored. Thus, we introduce a novel kind of residual space between states and future reference values which can be applied without knowing or learning the system dynamics. The key ideas are (1)~extending the arguments of the actor and the critic of a policy gradient method by future reference values, and (2)~introducing a novel kind of residual space for model-free RL. \section{Simple Drive Train Model} An automatic transmission provides gear shifts characterized by two features: (1) the torque transfer, in which a clutch corresponding to the target gear takes over the drive torque, and (2) the speed synchronization, in which the slip from input to output speed of the clutch is reduced such that it can be closed or controlled at low slip. In this work, we consider the reference tracking control problem for a friction clutch during the synchronization phase. Its input is driven by a motor and its output is propagated through the gearbox to the wheels of the car. Speed control action can only be applied by the clutch when the input and output force plates are in friction contact with slip. Generally, the aim in speed synchronization is to smoothly control the contact force of the clutch towards zero slip without jerking.
This is where our RL approach for tracking control comes into play. For smooth operation, a reference to follow during the friction phase is predesigned by the developer of the drive train. For easier understanding, a simple drive train model is used which is derived from the motion equations of an ideal clutch \cite{quang_1998} extended by the influence of the gearbox \begin{align} J_{in} \, \dot{\omega}_{in} &= -T_{cl}+T_{in} \enspace, \label{eq:clutchMotA}\\ J_{out} \, \dot{\omega}_{out} &= \theta \, T_{cl} - T_{out} \enspace , \label{eq:clutchMotB} \end{align} where ${\omega}_{in}$ is the input speed on the motor side and ${\omega}_{out}$ is the output speed at the side of the wheels. Accordingly, $T_{in}$ is the input torque and $T_{out}$ is the output torque. The transmission ratio $\theta$ of the gearbox defines the ratio between the input and output speed. The input and output moments of inertia $J_{in}$, $J_{out}$ and the transmission ratio $\theta$ are fixed characteristics of the drive train. The clutch is controlled by varying the transmitted clutch torque $T_{cl}$. The input torque $T_{in}$ is approximated as constant, while the output torque is assumed to depend linearly on the output speed, $T_{out} = \eta \cdot \omega_{out}$, which changes \eqref{eq:clutchMotA} and \eqref{eq:clutchMotB} to \begin{align} J_{in} \, \dot{\omega}_{in} &= -T_{cl}+T_{in} \enspace, \\ J_{out} \, \dot{\omega}_{out} &= \theta \, T_{cl} - \eta \cdot \omega_{out} \enspace .
\label{eq:clutchMot2} \end{align} Solving the differential equations for a time interval $\Delta T$ yields the discrete system equation \begin{equation} \begin{bmatrix} \omega_{in} \\ \omega_{out} \end{bmatrix}_{k+1} = \bol{A} \begin{bmatrix} \omega_{in} \\ \omega_{out} \end{bmatrix}_{k} + \bol{B}_1 \cdot T_{cl,k} + \bol{B}_2 \cdot T_{in} \enspace, \label{eq:systemEqCl} \end{equation} where \begin{align} \bol{A} &= \begin{bmatrix} 1 & 0 \\ 0 & \exp \left( {-\frac{ \eta \cdot \Delta T}{J_{out}}} \right) \end{bmatrix} \enspace, \\ \bol{B}_1 &= \begin{bmatrix} - \frac{\Delta T}{J_{in}} \\ \frac{\theta}{\eta} \left(1 - \exp \left( {-\frac{ \eta \cdot \Delta T}{J_{out}}} \right) \right) \end{bmatrix} \, , \enspace \bol{B}_2= \begin{bmatrix} \frac{\Delta T}{J_{in}} \\ 0 \end{bmatrix} \enspace, \end{align} with state $\vec{x}_k = [\omega_{in} \; \omega_{out}]^T_k$ and the control input $u_k = T_{cl,k}$. For a friction-plate clutch, the clutch torque $T_{cl,k}$ depends on the capacity torque $T_{cap,k}$ \begin{equation} T_{cl,k}=T_{cap,k} \cdot \text{sign} \left(\omega_{in,k} - \theta \cdot \omega_{out,k}\right), \enspace T_{cap,k} \geq 0 \enspace, \label{eq:Kupplungsmoment} \end{equation} which means that $T_{cl,k}$ changes its sign according to whether the input speed or the transformed output speed $\theta \cdot \omega_{out,k}$ is higher. The capacity torque is proportional to the contact force applied to the plates, $T_{cap,k} \sim F_{N,k}$. In real drive trains, the change of the control input within a time step is limited due to underlying dynamics such as pressure dynamics. To simulate this behavior, a low-pass filter is applied to the control inputs. 
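As an illustrative sketch, the discrete dynamics \eqref{eq:systemEqCl} with the sign convention \eqref{eq:Kupplungsmoment} can be simulated as follows; the parameter values are those of Table \ref{tab:paraClutch}, while the function name is only illustrative:

```python
import numpy as np

# Parameter values taken from the drive train parameter table of the paper.
J_IN, J_OUT = 0.209, 86.6033     # moments of inertia [kg m^2]
THETA, ETA = 10.02, 2.0          # transmission ratio, output torque coefficient [Nms/rad]
T_IN, DT = 20.0, 0.01            # input torque [Nm], time step [s]

# Discrete system matrices obtained from solving the ODEs over one time step.
decay = np.exp(-ETA * DT / J_OUT)
A = np.array([[1.0, 0.0],
              [0.0, decay]])
B1 = np.array([-DT / J_IN, (THETA / ETA) * (1.0 - decay)])
B2 = np.array([DT / J_IN, 0.0])

def step(x, T_cap):
    """Advance the state x = [w_in, w_out] by one time step.

    The clutch torque changes its sign with the slip direction, and the
    capacity torque T_cap must be non-negative.
    """
    assert T_cap >= 0.0
    w_in, w_out = x
    T_cl = T_cap * np.sign(w_in - THETA * w_out)
    return A @ x + B1 * T_cl + B2 * T_IN
```

One property worth checking directly in this sketch: for positive slip and $T_{cl,k} = T_{in}$, the input speed stays constant from one step to the next.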
We use a first-order lag element (PT1) with the cutoff frequency $f_g$ \begin{equation} T'_{cl,k} =\begin{cases} \left(T_{cl,k} - T'_{cl,k-1} \right) \cdot \left( 1- a \right) + T'_{cl,k-1} , & \text{if $T_{cl,k} > T'_{cl,k-1}$} \enspace, \\ \left(T'_{cl,k-1} - T_{cl,k} \right) \cdot a + T_{cl,k} , & \text{otherwise} \enspace, \end{cases} \end{equation} where \begin{equation} a = \exp { \left( - 2 \pi f_g \Delta T \right)} \enspace. \end{equation} In this case, the control input provided by the controller $T_{cl,k}$ is transformed into the delayed control input $T'_{cl,k}$ applied to the system. The simple drive train model is used in three different experiments of input speed synchronization, which are derived from use cases of real drive trains. In the first experiment, the references to be followed are smooth. This corresponds to a usual gear shift. In real applications, delays, hidden dynamics, or change-of-mind situations can arise which make a reinitialization of the reference necessary. This behavior is modeled in the second experiment, where the references contain jumps, i.e. discontinuities. Jumps in the reference can lead to fast changes in the control inputs. To still maintain a realistic behavior, the lag element is applied to the control inputs. In the third experiment, we again use smooth references but in different input speed ranges. Varying drive torque demands typically cause the gear shift to start at different input speed levels. Our approach will be evaluated on all three experiments to determine the performance for different use cases. Note that, for demonstration purposes, we consider speed synchronization by clutch control input only. In real applications, clutch control is often combined with input torque feedforward. \section{Conclusion} In this work, proximal policy optimization for tracking control exploiting future reference information was presented. 
We introduced two variants of extending the argument of both actor and critic. In the first variant, we added global future reference values to the argument. In the second variant, the argument was defined in a novel kind of residual space between the current state and the future reference values. By evaluating our approach on a simple drive train model, we could clearly show that both variants improve the performance compared to an argument taking only the current reference value into account. If the approach is applied to references with discontinuities, adding several future reference values to the argument is beneficial. The residual space variant shows its advantages especially for references with different offsets. In addition, our approach outperforms PI controllers commonly used in drive train control. Besides higher tracking quality, the generalization to different references is significantly better than using a PI controller. This guarantees an adequate performance on arbitrary, previously unseen references. In future work, our approach will be applied to a more sophisticated drive train model where noise behavior, model inaccuracies, non-modeled dynamics, and control input range limits, as expected in a real drive train system, are systematically included in training. \section{Evaluation} As mentioned in the previous chapter, we will evaluate our approach using three different experiments representing three use cases of the drive train. In every experiment, the three different arguments of the actor and the critic introduced in Chapter \ref{sec:PPO_TC} will be applied. For the drive train system \eqref{eq:systemEqCl}, the arguments of the actor and the critic have to be defined. The reference to be followed corresponds only to the input speed. Since no reference for the output speed is given, we add the output speed $\omega_{out,k}$ as a global variable in all three arguments. 
Analogous to \eqref{eq:arg0}, \eqref{eq:arg1} and \eqref{eq:arg2}, the arguments of the drive train system are \begin{align} \vec{s}_k^{0,cl} &= \left[ \omega_{out,k}, \omega_{in,k}, \left( \omega^r_{in,k} - \omega_{in,k} \right) \right]^T \enspace, \\ \vec{s}_k^{1N,cl} &= \left[\omega_{out,k}, \omega_{in,k}, \omega^r_{in,k}, \omega^r_{in,k+1}, \ldots, \omega^r_{in,k+N} \right]^T \enspace, \\ \vec{s}_k^{2N,cl} &= \left[ \omega_{out,k}, \left(\omega^r_{in,k} - \omega_{in,k}\right), \left(\omega^r_{in,k+1} - \omega_{in,k}\right), \ldots, \left(\omega^r_{in,k+N} - \omega_{in,k}\right) \right]^T . \end{align} The argument without future reference values, as applied in \cite{zhang_2019}, is $\vec{s}_k^{0,cl}$; the argument with global future reference values is $\vec{s}_k^{1N,cl}$; and the argument in the residual space with future reference values is $\vec{s}_k^{2N,cl}$. The reward function for the simple drive train model derived from \eqref{eq:reward} is given as \begin{equation} r_k = -\left( \omega_{in,k+1} - {\omega}^r_{in,k+1}\right)^2 - \beta \cdot \left(T_{cl,k} - T_{in}\right)^2 \enspace. \label{eq:rewardCl} \end{equation} If $T_{cl,k} = T_{in}$, the input speed $\omega_{in}$ does not change from time step $k$ to time step $k+1$ according to \eqref{eq:systemEqCl}. Thus, deviations of $T_{cl,k}$ from $T_{in}$ are penalized to suppress control inputs which would cause larger changes in the state. All parameters used in the simple drive train model are given in Table \ref{tab:paraClutch}. 
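A minimal sketch of the three drive train arguments and the reward \eqref{eq:rewardCl}; the function names are illustrative:

```python
import numpy as np

def arguments(w_out, w_in, ref, k, N):
    """Build the three argument variants for the drive train system.

    `ref` is the input speed reference trajectory; N is the number of
    future reference values added to the argument.
    """
    s0 = np.array([w_out, w_in, ref[k] - w_in])                # current residuum only
    s1 = np.concatenate([[w_out, w_in], ref[k:k + N + 1]])     # global future values
    s2 = np.concatenate([[w_out], ref[k:k + N + 1] - w_in])    # residual space
    return s0, s1, s2

def reward(w_in_next, ref_next, T_cl, T_in, beta):
    """Negative squared tracking error plus a penalty on T_cl deviating from T_in."""
    return -(w_in_next - ref_next) ** 2 - beta * (T_cl - T_in) ** 2
```

Note that for $T_{cl,k} = T_{in}$ and perfect tracking the reward attains its maximum of zero.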
\begin{table}[tb] \caption{Parameters of the simple drive train model.} \label{tab:paraClutch} \centering \begin{tabularx}{8.5cm}{p{5.0cm}|X} \hline Parameter & Value \tabularnewline \hline \hline Input moment of inertia $J_{in}$ & \SI{0.209}{kg m^2} \\ \hline Output moment of inertia $J_{out}$ & \SI{86.6033}{kg m^2} \\ \hline Transmission ratio $\theta$ & 10.02 \\ \hline Input torque $T_{in}$ & \SI{20}{Nm} \\ \hline $\eta$ & \SI{2}{(Nms)/rad} \\ \hline Time step $\Delta T$ & \SI{10}{ms} \\ \hline \end{tabularx} \end{table} \subsection{Algorithm} \begin{algorithm}[tb!] \caption{PPO for tracking control } \label{algo:controlPPO} \begin{algorithmic}[1] \REQUIRE Replay buffer $\mathcal D$, critic parameters $\vec{\phi}$, actor parameters $\vec{\theta}$, actor learning rate $\alpha_a$, critic learning rate $\alpha_c$, target network delay factor $\tau$ \STATE Init target critic parameters $\vec{\phi}' \leftarrow \vec{\phi}$ and $h = 0$ \FOR {1 .. Number of episodes} \STATE Observe initial state $\vec{x}_0$ and new reference $\omega^r_{in}$ \FOR {1 .. K} \STATE Apply control input $u_k \leftarrow \pi_\theta(\vec{s}_k)$ \STATE Observe new state $\vec{x}_{k+1}$ and reward $r_k$ \STATE Add $(\vec{s}_k, u_k, r_k, \vec{s}_{k+1} )$ to replay buffer $\mathcal D$ \ENDFOR \FOR {1 .. Number of epochs} \STATE Sample training batch from $\mathcal D$ \STATE Update critic $\vec{\phi}_{h+1} \leftarrow \vec{\phi}_h + \alpha_c \nabla_{\vec{\phi}} C(\vec{\phi}_h)$ \STATE Calculate advantage $\vec{\hat A}$ using GAE \STATE Update actor $\vec{\theta}_{h+1} \leftarrow \vec{\theta}_h + \alpha_a \nabla_{\vec{\theta}} J(\vec{\theta}_h)$ \STATE $h \leftarrow h+1$ \STATE \textbf{Every m-th epoch} \\ \quad Update target critic $\vec{\phi}' \leftarrow (1 - \tau) \, \vec{\phi}' + \tau \, \vec{\phi}$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The PPO algorithm, applied for the evaluation, is shown in Algorithm \ref{algo:controlPPO}. 
We use two separate networks representing the actor and the critic. In order to improve data-efficiency, we apply experience replay \cite{lin_1992}, \cite{lillicrap2015}. While interacting with the system, the tuples $(\vec{s}_k, u_k, r_k, \vec{s}_{k+1} )$ are stored in the replay buffer. In every epoch, a training batch of size $L$ is sampled from the replay buffer to update the actor's and the critic's parameters. As introduced in Chapter \ref{sec:PPO}, the advantage function is determined via generalized advantage estimation~\eqref{eq:adv} with the GAE parameter set to $\lambda = 0$ and the discount $\gamma = 0.7$. To improve the stability of the critic network during the training, we added a target critic network~$V'$ \cite{lillicrap2015} which is only updated every m-th epoch (in our implementation $m=2$). The critic loss is defined as the mean squared temporal difference error~\eqref{eq:TDE} over the $L$ samples of the training batch \begin{equation} C(\vec{\phi}) = \frac{1}{L} \sum_{i=1}^{L} \left( r_{k_i} + \gamma \, V'(\vec{s}_{k_i+1}) - V(\vec{s}_{k_i})\right)^2 \enspace . \end{equation} The output of the actor provides $T_{cap}$ in \eqref{eq:Kupplungsmoment} and the sign for $T_{cl}$ is calculated from the input and output speeds. To ensure $T_{cap} \geq 0$, the last activation function of the actor is chosen as ReLU. In Table~\ref{tab:PPOpara}, all parameters used in the PPO algorithm are shown. In our training runs, we perform 2000 episodes, each with 100 time steps. Each run through the system is followed by 100 successive training epochs. 
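The critic update with the target network can be sketched as follows; the callables and parameter vectors are placeholders for the networks, not the actual implementation:

```python
import numpy as np

def critic_loss(V, V_target, batch, gamma):
    """Mean squared TD error over a batch of (s, u, r, s_next) tuples.

    V is the critic and V_target its delayed copy; both are callables
    mapping an argument vector to a scalar value.
    """
    errors = [r + gamma * V_target(s_next) - V(s) for (s, u, r, s_next) in batch]
    return float(np.mean(np.square(errors)))

def soft_update(phi_target, phi, tau):
    """Target critic update phi' <- (1 - tau) phi' + tau phi, applied every m-th epoch."""
    return (1.0 - tau) * phi_target + tau * phi
```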
\begin{table}[tb] \caption{Parameters used in the PPO algorithm.} \label{tab:PPOpara} \centering \begin{tabularx}{11.5cm}{p{7.0cm}|X} \hline Parameter & Value \tabularnewline \hline \hline Hidden layer sizes of actor and critic networks & [400, 300] \\ \hline Activation functions of actor network & [tanh, tanh, ReLU] \\ \hline Activation functions of critic network & [tanh, tanh, -ReLU] \\ \hline Actor's learning rate $\alpha_{a}$ & $5 \cdot 10^{-5}$ \\ \hline Critic's learning rate $\alpha_{c}$ & $1.5 \cdot 10^{-3} $ \\ \hline Batch size $L$ & 100 \\ \hline Replay buffer size & 10000 \\ \hline Initial standard deviation of stochastic actor & 10 \\ \hline Clip parameter $c$ & 0.1 \\ \hline Target network delay factor $\tau$ & 0.001 \\ \hline Entropy coefficient $\mu$ & 0.01 \\ \hline \end{tabularx} \end{table} \subsection{Simulation procedure} \label{simu} In the following, we will evaluate our approach on three different experiments. For each experiment, 15 independent simulations are performed. In each simulation, the algorithm is trained using 2000 different training references (one for each episode). After every tenth episode of the training, the actor is tested on an evaluation reference. Using the evaluation results, the best actor of the training is identified. In the next step, the best actor is applied to 100 test references to evaluate the performance of the algorithm. In the following, the mean episodic reward over all 100 test references and all 15 simulations will serve as quality criterion. The smooth references are cubic splines formed from eight random values around the mean of \SI{2000}{rpm}. In the second experiment, between one and 19 discontinuities are induced by adding periodic square waves of different period durations to the reference. The references in the third experiment are shifted by adding an offset to the spline. Five different offsets are used, resulting in data between approximately \SI{1040}{rpm} and \SI{4880}{rpm}. 
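The reference generation described above could be sketched roughly as follows, using SciPy's spline interpolation; the spread of the random knot values and the square-wave amplitude are assumptions for illustration, not values from this work:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_reference(K, mean=2000.0, spread=500.0, offset=0.0, rng=None):
    """Cubic spline through eight random knot values around `mean` (+ optional offset)."""
    rng = np.random.default_rng() if rng is None else rng
    knots = np.linspace(0, K - 1, 8)
    values = mean + offset + rng.uniform(-spread, spread, size=8)
    return CubicSpline(knots, values)(np.arange(K))

def add_square_wave(ref, amplitude, period):
    """Superimpose a periodic square wave to induce discontinuities in the reference."""
    k = np.arange(len(ref))
    return ref + amplitude * np.sign(np.sin(2.0 * np.pi * k / period))
```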
As mentioned before, the results will be compared to the performance of a PI controller. We determine the parameters of the PI controller by minimizing the cumulative costs over an episode, i.e. the negative of the episodic reward defined by the reward function \eqref{eq:rewardCl}. To avoid local minima, we first apply a grid search and then use a quasi-Newton method to determine the exact minimum. In the optimization, the cumulative costs over all training references (the same as in the PPO training) are used as quality criterion. \subsection{Results} \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_smooth6.pdf} \caption{Training over 2000 episodes on smooth references.} \label{fig:smoothTrain} \end{figure} \begin{table}[b] \caption{Quality evaluated on a test set for smooth references.} \label{tab:smoothTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -0.303 & 0.027 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -0.035 & 0.020 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -0.030 & 0.020 \\ \hline PI controller & PI & -0.069 & 1.534 \\ \hline \end{tabularx} \end{table} \begin{figure}[bt] \centering \subfloat[Input speeds and reference for one episode.] {\includegraphics[height=5.2cm]{figures/test_plot_smooth2.pdf}\label{fig:testStatesSmooth}} \subfloat[Control inputs for one episode.] {\includegraphics[height=5.2cm]{figures/test_plot_act_smooth2.pdf}\label{fig:testActionssmooth}} \caption{Performance of the best trained actors on a smooth test reference. 
Note that in real applications reference trajectories are designed in such a way that the input speed is synchronized quickly but smoothly towards zero slip at the clutch.} \label{fig:smoothRef} \end{figure} In the following, the results of the different experiments with (1) a class of smooth references, (2) a class of references with discontinuities and (3) a class of smooth references shifted by an offset will be presented. \subsubsection*{Class of smooth references} For smooth references, the weighting parameter~$\beta$ in the reward function \eqref{eq:rewardCl} is set to zero. The parameters of the PI controller were computed as described in Section \ref{simu}; the parameter of the proportional term was determined as $K_P = 20.88$ and the parameter of the integral term as $K_I = 11.08$. The training curves of the PPO approaches are drawn in Figure~\ref{fig:smoothTrain}. It can be clearly seen that the approaches using one future reference value in the argument of the actor and the critic (GPPO1, RPPO1) reach a higher mean episodic reward than the approach including only reference information of the current time step (CPPO). Furthermore, the approach with the argument defined in the residual space (RPPO1) achieves high rewards faster than the GPPO1 and the CPPO. As mentioned before, the trained actors are evaluated on a set of test references. The obtained results are given in Table \ref{tab:smoothTest}. The approach using global reference values in the argument (GPPO1) and the approach defining the argument in a residual space (RPPO1) provide the best results. The mean episodic reward of the CPPO is about ten times lower than those of the GPPO1 and the RPPO1. The performance of the classical PI controller lies in between, but its standard deviation is very high. This implies that, when optimizing the PI parameters on the training data, no parameter set can be determined that leads to equally good performance for all the different references. 
In Figure~\ref{fig:smoothRef}, the best actors are applied to follow a test reference. The tracks of the PI controller, the GPPO1 and the RPPO1 show similar behavior. Only in the valleys does the PI controller deviate slightly more from the reference. Since the CPPO has to decide on a control input knowing only the current reference value, its reaction always lags behind. In Figure~\ref{fig:testStatesSmooth}, it can be seen that the CPPO applies a control input that closes the gap between the current input speed and the current reference value. But in the next time step the reference value has already changed and is consequently not reached. The same effect leads to a shift of the control inputs in Figure~\ref{fig:testActionssmooth}. \subsubsection*{Class of references with discontinuities} As mentioned before, to avoid unrealistically fast changes of the control inputs, we add a first-order lag element with cutoff frequency $f_g = \SI{100}{Hz}$ to the drive train model in the experiment setting for references with discontinuities. In addition, $\beta$ is set to $1/3000$ in the reward function \eqref{eq:rewardCl} to prevent the learning of large control inputs when discontinuities occur in the reference signal. The PI controller parameters were optimized on the same reward function, resulting in $K_P=18.53$ and $K_I=5.67$. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_jumps4.pdf} \caption{Training over 2000 episodes on references with discontinuities.} \label{fig:jumpTraining} \end{figure} Besides including the information of one future reference value, the performance of adding three future reference values (GPPO3 and RPPO3) is also investigated in this experiment. The learning curves of all five PPO approaches are illustrated in Figure~\ref{fig:jumpTraining}. All approaches with future reference values show similar results in the training. 
The PPO approach containing only current reference values (CPPO) achieves a significantly lower mean episodic reward. A similar behavior can be observed in the evaluation on the test reference set, for which the results are given in Table~\ref{tab:jumpTest}. \begin{table}[bt] \caption{Quality evaluated on test references with discontinuities.} \label{tab:jumpTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -10.936 & 0.618 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -1.597 & 0.079 \\ \hline Global space with three future references ($\vec{s}_k^{13,cl}$) & GPPO3 & -1.499 & 0.082 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -1.614 & 0.088 \\ \hline Residual space with three future references ($\vec{s}_k^{23,cl}$) & RPPO3 & -1.510 & 0.086 \\ \hline PI controller & PI & -1.747 & 2.767 \\ \hline \end{tabularx} \end{table} Again, the PPO approaches with one global future reference (GPPO1) and one future reference in the residual space (RPPO1) show similar results and perform significantly better than the CPPO. Adding the information of more future reference values (GPPO3 and RPPO3) leads to an even higher mean episodic reward. The PI controller performs slightly worse than the PPO approaches with future reference information. Due to the large standard deviation of the PI controller's performance, it cannot be guaranteed to perform well on a specific reference. The performance of the best actors on a test reference with discontinuities is illustrated in Figure~\ref{fig:testjumps}. Note that, for better visibility, the figure shows only a section of a full episode. 
\begin{figure}[b] \centering \subfloat[Input speeds and reference.]{\includegraphics[height=5.2cm]{figures/test_plot_jumps2.pdf}\label{fig:testStatesjumps}} \subfloat[Control inputs.]{\includegraphics[height=5.2cm]{figures/test_plot_act_jumps2.pdf}\label{fig:testActionsjumps}} \caption{Performance of the best trained actors on a test reference with discontinuities.} \label{fig:testjumps} \end{figure} The PPO approaches including future reference values and the PI controller are able to follow the reference even if a discontinuity occurs. The PPO approach without future reference values performs even worse than on the smooth reference. It can be clearly seen that the jump cannot be detected in advance and the reaction of the actor is delayed. The applied control inputs are shown in Figure~\ref{fig:testActionsjumps}: the approaches including three future reference values respond earlier to the upcoming discontinuities and cope with them using smaller control inputs, which also leads to a higher reward. The PI controller shows similar behavior to the PPO approaches with one future reference value (GPPO1 and RPPO1). \subsubsection*{Class of smooth references with offsets} For the class of smooth references with offsets, the same settings as in the experiment for smooth references are applied. The PI parameters are determined as $K_P = 20.88$ and $K_I = 11.04$. The training curves of the PPO approaches are illustrated in Figure~\ref{fig:offsetTrain}. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{figures/PE-MeanTraining_offset.pdf} \caption{Training over 2000 episodes on smooth references with offsets.} \label{fig:offsetTrain} \end{figure} Compared to the learning curves of the smooth references experiment, the training on the references with offset needs more episodes to reach the maximum episodic reward. 
The training of the approach using global reference values (GPPO1) is extremely unstable. The reason might be that the training data contains only references in a finite number of small disjoint regions of a huge state space. As the argument of GPPO1 includes only global states and reference values, no data is shared between these regions and the policy is learned independently for each of them. Consequently, the stability of the approaches with argument components in the residual space (CPPO and RPPO1) is not affected by this issue. However, the RPPO1 receives higher rewards significantly faster than the CPPO, whose argument also contains the global input speed. \begin{table}[b] \caption{Quality evaluated on a test set for smooth references with offset.} \label{tab:offsetTest} \centering \begin{tabularx}{15.0cm}{p{5.0cm}|p{2.0cm}|X|X} \hline Arguments of actor and critic & Acronym & Mean episodic & Standard deviation \tabularnewline & & reward & episodic reward \tabularnewline \hline \hline Current state and residuum ($\vec{s}_k^{0,cl}$) & CPPO & -0.377 & 0.091 \\ \hline Global space with one future reference ($\vec{s}_k^{11,cl}$) & GPPO1 & -0.102 & 0.089 \\ \hline Residual space with one future reference ($\vec{s}_k^{21,cl}$) & RPPO1 & -0.031 & 0.018 \\ \hline PI controller & PI & -0.069 & 1.541 \\ \hline \end{tabularx} \end{table} In Table \ref{tab:offsetTest}, the performance on a set of test references is given. The mean episodic reward of the CPPO, the RPPO1 and the PI controller is in the same range as in the smooth references experiment. As in the other experiments, the PI controller shows a large standard deviation. Despite the instabilities in the training, the GPPO1 receives a higher mean episodic reward than the CPPO, but its performance is significantly worse than in the smooth references experiment. It can be clearly seen that residual space arguments lead to better performance for references with offsets. 
\begin{figure}[t] \centering \subfloat[Input speeds and reference.]{\includegraphics[height=5.2cm]{figures/test_plot_offset2.pdf}\label{fig:offsetStates}} \subfloat[Control inputs.]{\includegraphics[height=5.2cm]{figures/test_plot_act_offset2.pdf}\label{fig:offsetActions}} \caption{Performance of the best trained actors on a smooth test reference with offsets.} \label{fig:offsetPlots} \end{figure} In Figure~\ref{fig:offsetPlots}, the performance of the best actors on a test reference is illustrated. For better visibility, only a section of a full episode is drawn. The results are similar to the smooth reference experiment; only the GPPO1 shows slightly worse tracking performance. \section{Introduction} In cars with automatic transmissions, gear shifts should be performed in such a way that no discomfort is caused by the interplay of motor torque and clutch control action. In particular, the synchronization of the input speed to the gears' target speed is a sensitive process mainly controlled by the clutch. Ideally, the motor speed should follow a predesigned reference, but non-ideally operating feedforward control and disturbances in the mechanical system can cause deviations from the optimal behavior. The idea is to apply a reinforcement learning (RL) approach to control the clutch behavior, regulating the deviations from the optimal reference. The advantage of an RL approach over classical approaches such as PI control is that no extensive experimental parametrization for every gear and every clutch is needed, which is very complex for automatic transmissions. Instead, the RL algorithm is supposed to learn the optimal control behavior autonomously. Since the goal is to guide the motor speed along a given reference signal, the problem at hand belongs to the family of tracking control problems. In the following, a literature review on RL for tracking control is given. 
In \cite{qin_2018}, the deep deterministic policy gradient method, introduced in \cite{lillicrap_2016}, is applied to learn the parameters of a PID controller. An adaptive PID controller is realized in \cite{carlucho_2017} using incremental Q-learning for real-time tuning. A combination of Q-learning and a PID controller is presented in \cite{wang_2020}, where the applied control input is the sum of the PID control input and a control input determined by Q-learning. Another common concept applied to tracking control problems is model predictive control (MPC), which can also be combined with RL. A data-efficient model-based RL approach based on probabilistic MPC is introduced in \cite{kamthe_2018}. The key idea is to learn the probabilistic transition model using Gaussian processes. In \cite{gros_2020}, nonlinear model predictive control (NMPC) is used as a function approximator for the value function and the policy in an RL approach. For tracking control problems with linear dynamics and quadratic costs, an RL approach is presented in \cite{koepf_2020}. Here, a Q-function is analytically derived that inherently incorporates a given reference trajectory on a moving horizon. In contrast to the previously presented approaches derived from classical controller concepts, pure RL approaches for tracking control have also been developed. A model-based variant is presented in \cite{hu_2020}, where a kernel-based transition dynamics model is introduced. The transition probabilities are learned directly from the observed data without learning the dynamics model. The model of the transition probabilities is then used in an RL approach. A model-free RL approach is introduced in \cite{Yu_2017}, where the deep deterministic policy gradient approach \cite{lillicrap_2016} is applied to tracking control of an autonomous underwater vehicle. 
In \cite{kamran_2019}, images of a reference are fed to a convolutional neural network for a model-free state representation of the path. A deep deterministic policy gradient approach \cite{lillicrap_2016} is applied where previous local path images and control inputs are given as arguments to solve the tracking control problem. Proximal policy optimization (PPO) \cite{schulman_2017} with generalized advantage estimation (GAE) \cite{schulman_2016} is applied to tracking control of a manipulator and a mobile robot in \cite{zhang_2019}. Here, the actor and the critic are represented by a long short-term memory (LSTM) and a distributed version of PPO is used. In this work, we apply PPO to a tracking control problem. The key idea is to extend the arguments of the actor and the critic to take into account information about future reference values and thus improve the tracking performance. Besides adding global reference values to the argument, we also define an argument based on residua between the states and the future reference values. For this purpose, a novel residual space with future reference values is introduced that is applicable to model-free RL approaches. Our approach is evaluated on a simple drive train model. The results are compared to a classical PI controller and a PPO approach which does not consider future reference values. \section{Preliminaries: Proximal Policy Optimization Algorithm} \label{sec:PPO} As the policy gradient method, proximal policy optimization (PPO) \cite{schulman_2017} is applied in this work. PPO is a simplification of trust region policy optimization (TRPO) \cite{schulman_2017_2}. 
The key idea of PPO is a novel loss function design in which the change of the stochastic policy $\pi_{\vec{\theta}}$ in each update step is limited by introducing a clip function \begin{equation} J (\vec{\theta}) = \mathbb{E}_{\left(\vec{s}_k, \vec{u}_k\right)} \lbrace \min \left(p_k (\vec{\theta}) A_k, \text{clip} \left( p_k(\vec{\theta}),1-c,1+c\right) A_k \right) \rbrace \enspace, \label{eq:actorloss} \end{equation} where \begin{equation} p_k(\vec{\theta}) = \frac{\pi_{\vec{\theta}} (\vec{u}_k \vert \vec{s}_k) }{\pi_{\vec{\theta}_{old}} (\vec{u}_k \vert \vec{s}_k)} \enspace . \label{eq:pPPO} \end{equation} The clipping discourages $p_k(\vec{\theta})$ from leaving the interval $[1-c,1+c]$. The argument $\vec{s}_k$ given to the actor commonly contains the system state $\vec{x}_k$, but can be extended by additional information. The loss function is used in a policy gradient method to learn the actor network's parameters~$\vec{\theta}_h$ \begin{equation} \vec{\theta}_{h+1} = \vec{\theta}_h + \alpha_{a} \cdot \nabla_{\vec{\theta}} J (\vec{\theta}) \vert_{\vec{\theta} = \vec{\theta}_h} \enspace, \end{equation} where $\alpha_{a}$ is referred to as the actor's learning rate and $h \in \lbrace 0,1,2, \ldots \rbrace$ is the policy update number. A proof of convergence for PPO is presented in \cite{holzleitner_2020}. The advantage function $A_k$ in \eqref{eq:actorloss} is defined as the difference between the Q-function and the value function $V$ \begin{equation} A_k(\vec{s}_k,\vec{u}_k) = Q(\vec{s}_k,\vec{u}_k) - V(\vec{s}_k) \enspace . 
\end{equation} In \cite{schulman_2017}, generalized advantage estimation (GAE) \cite{schulman_2016} is applied to approximate the advantage function \begin{equation} \hat{A}_k^{GAE(\gamma, \lambda)} = \sum_{l = 0}^\infty \left( \gamma \lambda \right)^l \delta_{k+l}^V \enspace, \label{eq:adv} \end{equation} where $\delta_k^V$ is the temporal difference error of the value function~$V$ \cite{sutton_2018} \begin{equation} \delta_k^V = r_k + \gamma V(\vec{s}_{k+1}) -V(\vec{s}_{k}) \enspace . \label{eq:TDE} \end{equation} The discount $\gamma \in [0,1]$ reduces the influence of future events. $\lambda \in [0,1]$ is a design parameter of GAE. Note that the value function also has to be learned during the training process and serves as critic in the approach. The critic is also represented by a neural network and receives the same argument as the actor. To ensure sufficient exploration, the actor's loss function \eqref{eq:actorloss} is commonly extended by an entropy bonus $S[\pi_{\vec{\theta}}] (\vec{s}_k)$ \cite{mnih_2016}, \cite{williams_1991} \begin{equation} J (\vec{\theta}) = \mathbb{E}_{(\vec{s}_k, \vec{u}_k)} \lbrace \min \left(p_k (\vec{\theta}) A_k, \text{clip} \left( p_k(\vec{\theta}),1-c,1+c\right) A_k \right) + \mu \, S[\pi_{\vec{\theta}}] (\vec{s}_k) \rbrace \enspace, \end{equation} where $\mu \geq 0$ is the entropy coefficient. \section{Proximal Policy Optimization for Tracking Control with Future References} \label{sec:PPO_TC} As mentioned before, the key idea of the presented approach is to add the information of future reference values to the argument $\vec{s}_k$ of the actor and the critic in order to improve the control quality. 
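As an illustration, the clipped objective \eqref{eq:actorloss} and the GAE estimator \eqref{eq:adv} from the preliminaries above can be sketched for a single finite episode as follows; this is a NumPy sketch, not the network-based implementation:

```python
import numpy as np

def clipped_objective(p, A, c=0.1):
    """Per-sample PPO objective min(p*A, clip(p, 1-c, 1+c)*A)."""
    return np.minimum(p * A, np.clip(p, 1.0 - c, 1.0 + c) * A)

def gae(rewards, values, gamma, lam):
    """Generalized advantage estimation for a finite episode.

    `values` holds V(s_0), ..., V(s_K), i.e. one more entry than `rewards`.
    """
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD errors, cf. eq. (TDE)
    adv = np.zeros_like(deltas)
    acc = 0.0
    for t in reversed(range(len(deltas))):
        acc = deltas[t] + gamma * lam * acc
        adv[t] = acc
    return adv
```

For $\lambda = 0$ the estimator reduces to the one-step temporal difference error.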
PPO was already applied to a tracking control problem in \cite{zhang_2019}, where the argument $\vec{s}_k^0$ contains the current system state $\vec{x}_k$ as well as the residuum between $\vec{x}_{k}$ and the current reference $\vec{x}^r_{k}$ \begin{equation} \vec{s}_k^0 = \left[ \vec{x}_k, \left(\vec{x}^r_{k} - \vec{x}_{k}\right) \right]^T \enspace . \label{eq:arg0} \end{equation} However, no information about future reference values is part of the argument. We take advantage of the fact that the future reference values are known and incorporate them into the argument of the actor and the critic. In the following, two variants will be discussed: (1) Besides the system state $\vec{x}_k$, the current and the $N$ future reference values are added to the argument \begin{equation} \vec{s}_k^{1N} = \left[ \vec{x}_k, \vec{x}^r_k, \vec{x}^r_{k+1}, \ldots, \vec{x}^r_{k+N} \right]^T , \enspace N \in \mathbb{N} \enspace . \label{eq:arg1} \end{equation} (2) We introduce a novel residual space in which the future reference values are related to the current state, and the argument is defined as \begin{equation} \vec{s}_k^{2N} = \left[ \left(\vec{x}^r_{k} - \vec{x}_{k}\right), \left(\vec{x}^r_{k+1} - \vec{x}_{k} \right), \ldots, \left(\vec{x}^r_{k+N} - \vec{x}_{k}\right) \right]^T . \label{eq:arg2} \end{equation} In Figure~\ref{fig:res}, the residual space is illustrated for two-dimensional states and references. Starting from the current state~$\vec{x}_k = [x^1_k, x^2_k]^T$ (red dot), the residua between the current state and the future reference values (black arrows) indicate whether the components of the state $x^1_k$ and $x^2_k$ have to be increased or decreased to reach the reference values $\vec{x}^r_{k+1}$ and $\vec{x}^r_{k+2}$ in the next time steps. Thus, the residual argument gives the PPO algorithm sufficient information to control the system.
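The two argument variants \eqref{eq:arg1} and \eqref{eq:arg2} amount to simple concatenations. A minimal sketch (function names and the `ref` container are ours), where `ref` is a sequence of reference vectors indexed by time step:

```python
import numpy as np

def arg_global(x_k, ref, k, N):
    # Variant (1), Eq. (arg1): state plus current and N future reference values
    return np.concatenate([x_k] + [ref[k + i] for i in range(N + 1)])

def arg_residual(x_k, ref, k, N):
    # Variant (2), Eq. (arg2): residua of the current and future references
    # with respect to the *current* state x_k (no future states needed)
    return np.concatenate([ref[k + i] - x_k for i in range(N + 1)])
```

Note that `arg_residual` only uses quantities available at time step $k$, which is what makes the residual space model-free.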
Please note that a residual space containing future states $\left(\vec{x}^r_{k+1} - \vec{x}_{k+1}\right),\ldots, \left( \vec{x}^r_{k+N} - \vec{x}_{k+N}\right)$ would suffer from two disadvantages. First, the model would have to be learned, which would increase the complexity of the algorithm. Second, the future states $\vec{x}_{k+1},\ldots,\vec{x}_{k+N}$ depend on the current policy; thus, the argument is a function of the policy. Used as the argument in~\eqref{eq:pPPO}, the policy becomes a function of itself, $\pi_{\vec{\theta}} (\vec{u}_k \vert \vec{s}_k(\pi_{\vec{\theta}}))$. This could be resolved by calculating the policy in a recursive manner, where several instances of the policy are trained in the form of different neural networks. This solution would lead to a complex optimization problem which is expected to be computationally expensive and hard to stabilize during training. Therefore, we consider this solution impractical. On the other hand, the residual space defined in \eqref{eq:arg2} contains all information about the future course of the reference; consequently, a residual space including future states would not enhance the information content. \begin{figure} \centering \def0.7\linewidth{0.7\linewidth} \graphicspath{{figures/}} \input{figures/grafik_res6.pdf_tex} \caption{Residual space defined by current state and future reference values.} \label{fig:res} \end{figure} In the residual space, the arguments are centered around zero, independent of the actual values of the state or the reference. The advantage, compared to the argument with global reference values, is that only the deviation of the state from the reference is represented, which scales down the range of the state space that has to be learned. However, the residual space argument is only applicable if the change in the state for a given control input is independent of the state itself. As part of the tracking control problem, a reward function has to be designed.
The reward should represent the quality of following the reference. Applying the control input $\vec{u}_k$ in the state $\vec{x}_k$ leads to the state $\vec{x}_{k+1}$ and, related to this, a reward depending on the difference between $\vec{x}_{k+1}$ and the reference $\vec{x}^r_{k+1}$. Additionally, a penalty on large control inputs can be added. The resulting reward function for time step $k$ is defined as \begin{equation} r_k = - \left( \vec{x}_{k+1} - \vec{x}^r_{k+1}\right)^2 - \beta \cdot \vec{u}_{k}^2 \enspace, \label{eq:reward} \end{equation} where $\beta \geq 0$ is a weighting parameter. \section{Problem Formulation} In this work, we consider a discrete-time system with non-linear dynamics \begin{equation} \Vec{x}_{k+1} = f \left( \Vec{x}_k, \Vec{u}_k\right) \enspace , \label{eq:sy} \end{equation} where $\vec{x}_k \in \mathbb{R}^{n_x}$ is the state and $\Vec{u}_k \in \mathbb{R}^{n_u}$ is the control input applied in time step $k$. The system equation \eqref{eq:sy} is assumed to be unknown to the RL algorithm. Furthermore, the states $\vec{x}_k$ are assumed to be exactly known. In the tracking control problem, the state or components of the state are supposed to follow a reference $\vec{x}^r_k \in \mathbb{R}^{n_h}$. Thus, the goal is to control the system in such a way that the deviation between the state $\vec{x}_k$ and the reference $\vec{x}^r_k$ becomes zero in all time steps. The reference is assumed to be given, and the algorithm should be able to track previously unseen references. To reach this goal, the algorithm can learn from interactions with the system. In policy gradient methods, a policy is determined which maps the RL states~$\vec{s}_k$ to a control input~$\vec{u}_k$. The states~$\vec{s}_k$ can be the system states~$\vec{x}_k$ but can also contain other related components. In actor-critic approaches, the policy is represented by the actor. Here, $\vec{s}_k$ is the argument given to the actor and in most instances also to the critic as input.
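The reward \eqref{eq:reward} can be implemented directly. A sketch under the assumption that the squares in \eqref{eq:reward} denote squared Euclidean norms (the function name is ours):

```python
import numpy as np

def tracking_reward(x_next, x_ref_next, u, beta=0.01):
    # negative squared tracking error minus a weighted control-effort penalty
    error = float(np.sum((np.asarray(x_next) - np.asarray(x_ref_next)) ** 2))
    effort = float(np.sum(np.asarray(u) ** 2))
    return -error - beta * effort
```

The reward is non-positive and equals zero exactly when the state matches the reference and no control effort is penalized.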
To prevent confusion with the system state $\vec{x}_k$, we will refer to $\vec{s}_k$ as the argument in the following. \section{Existing solutions and challenges} In existing policy gradient methods for tracking control, the argument $\vec{s}_k$ is either identical to the system state $\vec{x}_k$~\cite{Yu_2017} or is composed of the system state and the residuum between the system state and the reference in the current time step~\cite{zhang_2019}. These approaches show good results if the reference is fixed. Applied to arbitrary references, the actor can only respond to the current reference value but is not able to act optimally with respect to the subsequent reference values. In this work, we will show that this results in poor performance. Another common concept applied to tracking control problems is model predictive control (MPC), e.g., \cite{kamthe_2018}. Here, the control inputs are determined by predicting the future system states and minimizing their deviation from the future reference values. In general, the optimization over a moving horizon has to be executed in every time step, as no explicit policy representation is determined. Another disadvantage of MPC is the need to know or learn the model of the system. Our idea is to transfer the concept of utilizing the information given in the form of known future reference values from MPC to policy gradient methods. A first step in this direction is presented in \cite{koepf_2020}, where an adaptive optimal control method for reference tracking was developed. There, a Q-function could be derived analytically by applying dynamic programming. The resulting Q-function depends not only on the current state but also on current and future reference values. However, the analytical solution is limited to the case of linear system dynamics and quadratic costs (rewards). In this work, we transfer those results to a policy gradient algorithm by extending the arguments of the actor and the critic with future reference values.
In contrast to the linear quadratic case, the Q-function cannot be derived analytically. Accordingly, the nonlinear dependencies are approximated by the actor and the critic. The developed approach can be applied to tracking control problems with nonlinear system dynamics and arbitrary references. In some applications, the local deviation of the state from the reference is more informative than the global state and reference values, e.g., operating a drive train in different speed ranges. In this case, it can be beneficial to define the argument as the residuum between the state and the corresponding reference value, because this scales down the range of the state space that has to be explored. Thus, we introduce a novel kind of residual space between states and future reference values which can be applied without knowing or learning the system dynamics. The key ideas are (1)~extending the arguments of the actor and the critic of a policy gradient method by future reference values, and (2)~introducing a novel kind of residual space for model-free RL.
\section{Introduction} \label{sec1} Let $\mathbb{R}^{m}$ be the $m$-dimensional Euclidean space of points $\bar{x} = (x_{1}, \ldots, x_{m})$ with real coordinates, and let $I^{m} = \{\bar{x} \in \mathbb{R}^{m}: \ 0 \leqslant x_{j} \leqslant 1, \ j = 1, \ldots, m \}=[0, 1]^{m} $ be the $ m $-dimensional cube. \smallskip Two nonnegative Lebesgue measurable functions $ f, $ $ g $ are called equimeasurable if $$ \mu\{\bar x\in I^{m}\colon f(\bar x) > \lambda\} = \mu\{\bar x\in I^{m}\colon g(\bar x) > \lambda\},\quad \lambda > 0, $$ where $\mu e $ is the Lebesgue measure of a set $e \subset I^{m}$. For a nonnegative measurable function $ f $, the nonincreasing rearrangement is the function $f^{*}(t)=\inf\{\lambda >0 : \,\, \mu\{\bar x\in I^{m}\colon f(\bar x) > \lambda\}< t\}$. It is known that the functions $f$, $f^{*}$ are equimeasurable (see \cite[Ch. 2, Sec. 2]{1}). Let $X$ be a Banach space of Lebesgue measurable functions $f$ on $I^{m}$ with norm $\|f\|_{X}$. The space $X$ is called symmetric if: 1) $|f(\bar x)|\leqslant |g(\bar x)|$ almost everywhere on $I^{m}$ and $g \in X $ imply that $ f \in X $ and $ \| f \|_{X} \leqslant \| g \|_{X}$; 2) $ f \in X $ and the equimeasurability of the functions $|f(\bar x)|$ and $|g(\bar x)|$ imply that $ g \in X $ and $ \|f\|_{X} = \|g\|_{X}$ (see \cite[Ch. 2, Sec. 4.1]{1}). The norm $\|\chi_{e}\|_{X}$ of the characteristic function $\chi_{e}(t)$ of a measurable set $e \subset I^{m}$ is called the fundamental function of the space $X$ and is denoted by $\varphi(\mu e) = \|\chi_{e}\|_{X} $. It is known that the non-increasing rearrangement of the characteristic function $ \chi_{e}$ of a measurable set $ e \subset [0, 1]^{m}$ equals the function $ \chi_{[0, t]}$, where $ t = \mu e$. Therefore, the fundamental function of the symmetric space $ X $ is the function $\varphi(t) = \|\chi_{[0, t]}\|_{X}$, defined on the segment $[0, 1]$.
It is a concave, non-decreasing, continuous function on $[0, 1]$ with $\varphi (0) = 0$ (see \cite[Ch. 2, Sec. 4.4]{1}). Such functions are called $ \Phi$--functions. For a given function $\varphi(t),$\ $t\in [0,1]$, we define $$ \alpha_{\varphi}={\underline\lim}_{t\rightarrow 0}\frac{\varphi(2t)}{\varphi(t)},\quad \beta_{\varphi}=\overline{\lim}_{t\rightarrow 0}\frac{\varphi(2t)}{\varphi(t)}. $$ A symmetric space $X$ with fundamental function $\varphi$ will be denoted by $X(\varphi) $ and its norm by $\|f\|_{X(\varphi)} $. It is known that for any symmetric space $X(\varphi)$ the inequalities $1 \leqslant \alpha_{\varphi} \leqslant \beta_{\varphi} \leqslant 2$ hold. One example of a symmetric space is the Lebesgue space $ L_{q}(\mathbb{T}^{m})$ with the norm $$ \|f\|_{q}=\biggl(\,\int\limits_{I^{m}}|f(2\pi\bar{x})|^{q}d\bar{ x}\,\biggr)^{1/q},\quad 1\leqslant q < \infty. $$ Here and in what follows, $\mathbb{T}^{m} = [0, 2\pi]^{m}$, and the functions $f$ are $2\pi$--periodic in each variable. \smallskip Let the function $\psi$ be continuous, concave and non-decreasing on $[0, 1]$ with $\psi(0)=0$, and let $0< \tau <\infty$. The generalized Lorentz space $L_{\psi, \tau}(\mathbb{T}^{m})$ is the set of measurable functions $f(\overline{x})=f(x_1,\ldots, x_m)$ of period $2\pi$ in each variable such that (see \cite{3}) $$ \|f\|_{\psi,\tau}^{*}=\bigg(\int\limits_{0}^{1} f^{*^\tau}(t) \psi^{\tau}(t)\frac{dt}{t}\bigg)^{1/\tau} <\infty. $$ It is known that under the conditions $ 1 <\alpha_{\psi}, \beta_{\psi} <2$ the space $ L_{\psi, \tau}(\mathbb{T}^{m})$ is a symmetric space with fundamental function $\psi$. Note that for $\psi (t) = t^{1/q}$ the space $L_{\psi, \tau}(\mathbb{T}^{m})$ coincides with the Lorentz space $L_{q, \tau}(\mathbb{T}^{m})$, $1<q, \tau <\infty$, which consists of all functions $f$ such that (see \cite[Ch. 1, Sec.
3]{2}) $$ \|f\|_{q,\tau}=\Bigg(\frac{\tau}{q}\int\limits_{0}^{1}\biggl(\int\limits_{0}^{t}f^{*}(y)dy\biggr)^{\tau}t^{\tau(\frac{1}{q}-1)-1}dt \Bigg)^{1/\tau} < \infty. $$ We consider the anisotropic symmetric space $ X(\bar\varphi)$ of $2\pi$--periodic functions of $ m $ variables, with the norm $\|f\|_{X(\bar\varphi)}^{*} = \|\ldots\|f^{*_{1},...,*_{m}}\|_{X(\varphi_{1})}\ldots\|_{X(\varphi_{m})}$, where $f^{*_{1},...,*_{m}}(t_{1},...,t_{m})$ is the non-increasing rearrangement of the function $|f(2\pi \bar{x})|$ in each variable $x_{j} \in [0, 1]$ with the other variables fixed (see \cite{5}), and $X(\varphi_{j})$ is a symmetric space in the variable $ x_{j}$ with fundamental function $\varphi_{j}$ (see \cite{4}). The space associated to the symmetric space $X(\bar\varphi)$ is the space of all measurable functions $ g $ for which (see \cite{4}) $$ \sup_{{}_{\|f\|_{X(\bar\varphi)}^{*}\leqslant 1}^{f\in X(\bar\varphi)}} \int_{I^{m}} f(2\pi\bar{x})g(2\pi\bar{x})d\bar{x} < \infty, $$ and is denoted by $X^{'}(\bar{\tilde{\varphi}})$; its norm is denoted by $\|g\|_{X^{'}(\bar{\tilde{\varphi}})}^{*}$, where $$ \bar{\tilde{\varphi}}(t)=(\tilde{\varphi}_{1}(t),...,\tilde{\varphi}_{m}(t)), \,\, \tilde{\varphi}_{j}(t)=\frac{t}{\varphi_{j}(t)} $$ for $t\in (0, 1]$ and $\tilde{\varphi}_{j}(0)=0$, $j=1,...,m$. It is known that $$ \left|\int_{I^{m}} f(2\pi\bar{x})g(2\pi\bar{x})d\bar{x}\right| \leqslant \|g\|_{X^{'}(\bar{\tilde{\varphi}})}^{*} \|f\|_{X(\bar\varphi)}^{*}, \,\, f\in X(\bar\varphi), g\in X^{'}(\bar{\tilde{\varphi}}).
$$ Let $\bar{x}=(x_{1},...,x_{m})\in \mathbb{I}^{m}=[0, 1]^{m}$, and let $\Phi$--functions $\psi_{j}(x_{j})$, $x_{j}\in[0, 1]$, and numbers $\tau_{j}\in[1,+\infty)$, $j=1,...,m,$ be given. We shall denote by $L_{\bar{\psi},\bar{\tau}}^{*}(\mathbb{T}^{m})$ the generalized Lorentz space with anisotropic norm, consisting of Lebesgue measurable functions $f(2\pi \bar{x})$ of period $2\pi$ in each variable such that the quantity $$ \|f\|_{\bar{\psi},\bar{\tau}}^{*}=\Bigl[\int_{0}^{1}\psi_{m}^{\tau_{m}} (t_{m}) \Bigl[\ldots\Bigl[\int_{0}^{1}\psi_{1}^{\tau_{1}}(t_{1})\left( f^{*_{1},...,*_{m}}(t_{1},...,t_{m}) \right)^{\tau_{1}}\frac{dt_{1}}{t_{1}}\Bigr]^ {\frac{\tau_{2}}{\tau_{1}}} \Bigr]^{\frac{\tau_{m}}{\tau_{m-1}}}\frac{dt_{m}}{t_{m}} \Bigr]^{\frac{1}{\tau_{m}}} $$ is finite. For the functions $\psi_{j}(t)=t^{\frac{1}{q_{j}}}$, $j=1,...,m$, the Lorentz space $L_{\bar{\psi},\bar{\tau}}^{*}(\mathbb{T}^{m})$ will be denoted by $L_{\bar{q},\bar{\tau}}^{*}(\mathbb{T}^{m})$, and instead of $\|\bullet\|_{\bar{\psi}, \bar{\tau}}^{*}$ we will write $\|\bullet\|_{\bar{q},\bar{\tau}}^{*}$ (see \cite{5}). Let $\overset{\circ \;\;} L_{\bar{\psi}, \bar{\tau}}^{*} \left(\mathbb{T}^{m} \right)$ be the set of functions $f\in L_{\bar{\psi}, \bar{\tau}}^{*}(\mathbb{T}^{m})$ such that $$ \int\limits_{0}^{2\pi }f\left(\overline{x} \right) dx_{j} =0,\;\;\forall j=1,...,m . $$ We will use the following notation: $a_{\overline{n} } (f)$ denotes the Fourier coefficients of $f\in L_{1}(\mathbb{T}^{m})$ with respect to the multiple trigonometric system, and $$ \delta _{\overline{s} } \left( f,\overline{x} \right) =\sum\limits_{\overline{n} \in \rho \left( \overline{s} \right) }a_{\overline{n} } \left( f\right) e^{i\langle\overline{n} ,\overline{x}\rangle } , $$ where $\langle\bar{y},\bar{x}\rangle=\sum\limits_{j=1}^{m}y_{j} x_{j}$ and $$ \rho (\bar{s})=\left\{ \overline{k} =\left( k_{1} ,...,k_{m} \right) \in \mathbb{Z}^{m}: \quad 2^{s_{j} -1} \leqslant \left| k_{j} \right| <2^{s_{j} } ,\ j=1,...,m\right\}.
$$ We will consider the Nikol'skii--Besov function class $$ S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B= \Bigl\{f\in \overset{\circ \;\;}X(\bar{\varphi}) : \quad \|f\|_{X(\bar{\varphi})}^{*} + \Bigl\|\Bigl\{\prod_{j=1}^{m} 2^{s_{j}r_{j}} \|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \Bigr\}_{\bar{s}\in \mathbb{Z}_{+}^{m}}\Bigr\|_{l_{\bar\theta}}\leqslant 1\Bigr\}, $$ where $\bar{\theta}=(\theta_{1},...,\theta_{m}),$ $\bar{r}=(r_{1},...,r_{m}),$ $1\leqslant\theta_{j}\leqslant+\infty,$ $0<r_{j}<+\infty,$ $j=1,...,m.$ In the case $X(\bar{\varphi})=L_{p}(\mathbb{T}^{m})$, $1\leqslant p < \infty$, the class $S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B$ was defined and studied in \cite{6}--\cite{8}. For a fixed vector $\bar{\gamma}=(\gamma_{1},\ldots,\gamma_{m})$, $\gamma_{j}>0$, $j=1,\ldots,m$, set $$ Q_{n}^{\bar\gamma}= \cup_{{}_{\langle\bar{s},\bar{\gamma}\rangle < n}}\rho(\bar{s}), \quad T(Q_{n}^{\bar \gamma})=\{t(\bar x)=\sum\limits_{\bar{k}\in Q_{n}^{\bar\gamma}}b_{\bar k} e^{i\langle\bar{k},\bar{x}\rangle}\}. $$ Let $E_{n}^{(\overline{\gamma})}(f)_{X(\bar{\varphi})}$ be the best approximation of a function $f\in X(\bar{\varphi})$ by polynomials in $T(Q_{n}^{\bar\gamma})$, and let $S_{n}^{\bar \gamma}(f,\bar{x})=\sum_{\bar{k}\in Q_{n}^{\bar \gamma}}a_{\bar k}(f)\cdot e^{i\langle\bar{k}, \bar{x}\rangle}$ be the corresponding partial sum of the Fourier series of $f$. We shall denote by $C(p,q,y,...)$ positive quantities which depend only on the parameters in parentheses and are not necessarily the same in distinct formulae. The notation $A\left( y\right) \asymp B\left( y\right)$ means that there exist positive constants $C_{1},\,C_{2} $ such that $C_{1} \cdot A\left( y\right) \leqslant B\left( y\right) \leqslant C_{2} \cdot A\left( y\right)$.
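To make the dilation conditions used below concrete, it may help to record the power-function case (this worked example is ours and is not part of the original text): for $\varphi(t)=t^{1/p}$ the ratio $\varphi(2t)/\varphi(t)$ is constant in $t$, so the lower and upper dilation indices coincide.

```latex
% For \varphi(t)=t^{1/p}:
\[
\frac{\varphi(2t)}{\varphi(t)} = \frac{(2t)^{1/p}}{t^{1/p}} = 2^{1/p},
\qquad\text{hence}\qquad \alpha_{\varphi}=\beta_{\varphi}=2^{1/p}.
\]
% Consequently, for \psi_j(t)=t^{1/q_j} and \varphi_j(t)=t^{1/p_j}, the condition
% 1<\alpha_{\psi_j}\leqslant\beta_{\psi_j}<\alpha_{\varphi_j}\leqslant\beta_{\varphi_j}<2
% becomes 1 < 2^{1/q_j} < 2^{1/p_j} < 2, i.e. 1 < p_j < q_j < \infty.
```

In particular, in the classical Lorentz-space setting the hypotheses of the theorems below reduce to the familiar index inequalities $1 < p_j < q_j < \infty$.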
Exact order estimates for the best approximation of functions of various classes in the Lebesgue space $L_{p}(\mathbb{T}^{m})$ are well known (see the survey articles \cite{9}--\cite{11}, the monograph \cite{12}, and the bibliographies therein). These questions in the space $L_{\bar{q}, \bar{\tau}}^{*}(\mathbb{T}^{m})$ were studied in \cite{13}--\cite{18}. The main aim of the present paper is to find the order of the quantity $$ E_{n}^{(\bar\gamma)}(S_{X(\bar{\varphi}),\bar{\theta}}^{\bar{r}}B) _{\bar{\psi},\bar{\tau}}\equiv \sup_{f\in S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B}E_{n}^{(\overline{\gamma})}(f)_{\bar{\psi},\bar{\tau}}. $$ This paper is organized as follows. In Section 1 we give auxiliary results. In Section 2 we prove the main results, which read as follows. {\bf Theorem 2.} \textit{ Let $1<\alpha_{\psi_{j}}\leqslant \beta_{\psi_{j}}< \alpha_{\varphi_{j}}\leqslant \beta_{\varphi_{j}}<2$, $1\leqslant\tau_{j}<+\infty$, $j=1,...,m$. If $f\in X(\bar{\varphi})$ and $$ \Bigl\{\prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j} (2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \Bigr\}_{\bar{s}\in{\mathbb Z}_{+}^{m}}\in l_{\bar{\tau}}, $$ then $f\in \overset{\circ}{L}_{\bar{\psi},\bar{\tau}}^{*}(\mathbb{T}^{m})$ and the following inequality holds: $$ \|f\|_{\bar{\psi},\bar{\tau}}^{*}\leqslant C \Bigl\|\Bigl\{\prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j} (2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \Bigr\}_{\bar{s}\in{\mathbb Z}_{+}^{m}} \Bigr\|_{l_{\bar\tau}}. $$ } Further, for brevity, we put $ \mu_{j}(s)=\frac{\psi_{j}\left(2^{-s}\right)} {\varphi_{j}\left(2^{-s}\right)}.
$ {\bf Theorem 5.} {\it Let $1\leqslant\theta_{j}\leqslant +\infty,$ $1\leqslant\tau_{j}<+\infty,$ $r_{j}>0$, $\frac{r_{j}}{r_{1}}$, $j=1,...,m$, let the functions $\varphi_{j},$ $\psi_{j}$ satisfy the conditions $1<\alpha_{\psi_{j}} \leqslant\beta_{\psi_{j}}<\alpha_{\varphi_{j}}\leqslant\beta_{\varphi_{j}}<2,$ $j=1,...,m$, and let $$ \Bigl[\sum_{s_{j}=0}^{\infty}\Bigl(\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j}(2^{-s_{j}} )}2^{-s_{j}r_{j}}\Bigr)^{\varepsilon_{j}}\Bigr]^{\frac{1}{\varepsilon_{j}}}<+\infty, $$ where $\varepsilon_{j}=\tau_{j}\beta_{j}'$, $\beta_{j}'= \frac{\beta_{j}}{\beta_{j}-1}$, $j=1,...,m$, if $\beta_{j}= \frac{\theta_{j}}{\tau_{j}}>1$, and $\varepsilon_{j}=+\infty$ if $\theta_{j}\leqslant\tau_{j}<\infty$, $j=1,...,m.$ 1) If $1\leqslant\tau_{j}<\theta_{j}<+\infty$, $j=1,...,m,$ then $$ E_{n}^{(\bar\gamma)}(S_{X(\bar{\varphi}),\bar{\theta}}^{\bar{r}}B) _{\bar{\psi},\bar{\tau}}\leqslant C\Bigl\|\Bigl\{\prod_{j=1}^{m}2^{-s_{j}r_{j}} \mu_{j}(s_{j})\Bigr\}_{\bar{s}\in Y^{m}(\bar{\gamma},n)} \Bigr\|_{l_{\bar\varepsilon}}. $$ 2) If $1\leqslant \theta_{j}\leqslant\tau_{j}<+\infty,$ $j=1,...,m,$ then $$ E_{n}^{\bar \gamma}(S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B) _{\bar{\psi},\bar{\tau}}\leqslant C\sup\Bigl\{\prod_{j=1}^{m}2^{-s_{j}r_{j}} \mu_{j}(s_{j}): \quad \bar{s}\in\mathbb{Z}_{+}^{m}, \langle\bar{s},\bar{\gamma}\rangle\geqslant n\Bigr\}. $$} Here $Y^{m}(\bar{\gamma},n)=\{\bar{s}\in\mathbb{Z}_{+}^{m}: \langle\bar{s},\bar{\gamma}\rangle\geqslant n \}$. \section{Auxiliary statements} \label{sec 1} In this section, we present some well-known results and prove several lemmas. The following generalization of the discrete Hardy inequality is known. \begin{lemma}\label{lem1} (see \cite{19}). {\it Let $0<\theta<+\infty$, and let positive numbers $a_{k}$, $b_{k}$, $k=0,1,2,\ldots$, be given.
a) If $ \sum\limits_{k=0}^{n}a_{k}\leqslant C\cdot a_{n},\,\, $ then $ \sum\limits_{n=0}^{\infty}a_{n}\Bigl(\sum\limits_{k=n}^{\infty} b_{k}\Bigr)^{\theta}\leqslant C\cdot\sum\limits_{n=0}^{\infty}a_{n}b_{n}^{\theta}. $ b) If $ \sum\limits_{k=n}^{\infty}a_{k}\leqslant C a_{n},\,\, $ then $ \sum\limits_{n=0}^{\infty}a_{n}\Bigl( \sum\limits_{k=0}^{n}b_{k}\Bigr)^{\theta}\leqslant C\cdot \sum\limits_{n=0}^{\infty}a_{n}\cdot b_{n}^{\theta}. $ } \end{lemma} The proof of Lemma 1 is also given in \cite{24}. \begin{lemma}\label{lem2} (see \cite{20}, \cite{21}) {\it If the $\Phi$--function $\psi (x)$, $x \in [0,1] $, satisfies $1<\alpha_{\psi},\beta_{\psi}<2$, then for any $ q> 0 $ the following relations hold: $$ \int\limits_{0}^{x}\frac{\psi^{q}(t)}{t}dt =O(\psi^{q}(x)), \quad x\rightarrow +0, $$ $$ \int\limits_{x}^{1}[t\psi^{q}(t)]^{-1}dt=O(\psi^{-q}(x)), \quad x\rightarrow +0. $$ } \end{lemma} \begin{lemma}\label{lem3} (see \cite{20}, \cite{21}) {\it If the $\Phi$--functions $\varphi(x), \psi(x)$, $x\in[0,1]$, satisfy the condition $\alpha_{\varphi}>\beta_{\psi}>1$, then for the function $$ g(x)=\left\{ \begin{array}{ll}\frac{\varphi(x)}{\psi(x)}, &\mbox{if} \,\, x\in(0,1]\\ 0, &\mbox{if} \,\, x=0 \end{array}\right. $$ there is a $\Phi$--function $g_{1}(x)$ such that $g(x)\asymp g_{1}(x)$, $x\in [0,1]$, and $\alpha_{g_{1}}>1$. } \end{lemma} \begin{lemma}\label{lem4} {\it If the $\Phi$--functions $\varphi(x), \psi(x)$, $x\in[0,1]$, satisfy the condition $1<\alpha_{\psi} \leqslant \beta_{\psi} <\alpha_{\varphi}\leqslant \beta_{\varphi}<2$ and $0< \theta < \infty$, then the following inequality holds: $$ \sum\limits_{s=0}^{n}\Bigl(\frac{\psi(2^{-s})}{\varphi(2^{-s})}\Bigr)^{\theta} \leqslant C \Bigl(\frac{\psi(2^{-n})}{\varphi(2^{-n})}\Bigr)^{\theta}, \,\, n\in \mathbb{N}. $$ } \end{lemma} {\bf Proof.} We will consider the function $$ g(x) = \left\{ \begin{array}{ll}\frac{\varphi(x)}{\psi(x)}, &\mbox{if} \,\, x\in(0,1]\\ 0 , &\mbox{if} \,\, x=0 . \end{array}\right.
$$ Since $1<\alpha_{\psi} \leqslant \beta_{\psi} <\alpha_{\varphi}$, according to Lemma 3 there exists a $\Phi$--function $ g_{1}$ such that $g(x) \asymp g_{1}(x) $, $ x \rightarrow + 0 $, and $\alpha_{g_{1}}> 1$. Therefore $$ \sum\limits_{s=0}^{n}\Bigl(\frac{\psi(2^{-s})}{\varphi(2^{-s})}\Bigr)^{\theta} \leqslant C\sum\limits_{s=0}^{n}g_{1}^{-\theta}(2^{-s}). \eqno (1) $$ Since $g_{1}$ is non-decreasing on $(0, 1]$, we have $$ \sum\limits_{s=0}^{n}g_{1}^{-\theta}(2^{-s})\leqslant \ln 2 \sum\limits_{s=0}^{n}\int\limits_{2^{-s-1}}^{2^{-s}} g_{1}^{-\theta}(t)\frac{dt}{t} = \ln 2 \int\limits_{2^{-n-1}}^{1} g_{1}^{-\theta}(t)\frac{dt}{t}. \eqno (2) $$ Now, from inequalities (1) and (2), according to Lemma 2, we obtain the assertion of the lemma. \begin{lemma}\label{lem5} {\it Let the $\Phi$--function $\psi$ satisfy the conditions $1<\alpha_{\psi} \leqslant \beta_{\psi} <2$ and let $0<\theta<\infty$; then the following inequality holds: $$ \sum\limits_{s=n}^{\infty}\psi^{\theta}(2^{-s}) \leqslant C\psi^{\theta}(2^{-n}), \quad n\in \mathbb{N}. $$ } \end{lemma} {\bf Proof.} Since $\frac{\psi(t)}{t} \downarrow$ on $(0, 1]$, we have $$ \psi^{\theta}(2^{-s}) \leqslant C\int\limits_{2^{-s-1}}^{2^{-s}} \psi^{\theta}(t)\frac{dt}{t}. $$ Therefore $$ \sum\limits_{s=n}^{\infty}\psi^{\theta}(2^{-s}) \leqslant C\sum\limits_{s=n}^{\infty} \int\limits_{2^{-s-1}}^{2^{-s}} \psi^{\theta}(t)\frac{dt}{t} = C\int\limits_{0}^{2^{-n}} \psi^{\theta}(t)\frac{dt}{t}. $$ Hence, according to Lemma 2, we obtain the assertion of the lemma. {\bf Remark.} In Lemma 4 and Lemma 5 the constants $C$ do not depend on $ n $. These lemmas were proved in integral form in \cite{20}. We now prove a multidimensional version of Lemma 1. \begin{lemma}\label{lem6} {\it Let positive numbers $b_{\bar{k}}=b_{k_{1},\ldots , k_{m}}$, $\bar{k}=(k_{1},\ldots , k_{m})\in \mathbb{Z}_{+}^{m}$, and $a_{k_{j}}$, $ k_{j}=0,1,2,\ldots$, be given, and let $1 \leqslant \theta_{j}<+\infty$, $j=1,\ldots , m$.
a) If $$ \sum\limits_{k_{j}=0}^{n_{j}}a_{k_{j}}\leqslant C a_{n_{j}},\,\, j=1,\ldots , m, $$ then $$ A_{\theta_{1},\ldots ,\theta_{m}}=\left\{\sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left( \sum\limits_{k_{m}=n_{m}}^{\infty}\ldots\sum\limits_{k_{1}=n_{1}}^{\infty}b_{\bar{k}} \right)^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{\theta_{m}}{\theta_{m-1}}}\right\}^{\frac{1}{\theta_{m}}} $$ $$ \leqslant C \left\{\sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}b_{\bar{n}}^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{\theta_{m}}{\theta_{m-1}}}\right\}^{\frac{1}{\theta_{m}}}. $$ b) If $$ \sum\limits_{k_{j}=n_{j}}^{\infty}a_{k_{j}}\leqslant Ca_{n_{j}},\,\,j=1,\ldots , m, $$ then $$ \left\{\sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left( \sum\limits_{k_{m}=0}^{n_{m}}\ldots\sum\limits_{k_{1}=0}^{n_{1}}b_{\bar{k}} \right)^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{\theta_{m}}{\theta_{m-1}}}\right\}^{\frac{1}{\theta_{m}}} $$ $$ \leqslant C \left\{\sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}b_{\bar{n}}^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{\theta_{m}}{\theta_{m-1}}}\right\}^{\frac{1}{\theta_{m}}}. $$ } \end{lemma} {\bf Proof.} Let us prove item a). For $ m = 1 $ the statement is known (see Lemma 1). Let $m=2$. 
Since $ 1 \leqslant \theta_{1} <\infty$, by the triangle inequality for the norm we have $$ \sum\limits_{n_{2}=0}^{\infty}a_{n_{2}} \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left( \sum\limits_{k_{2}=n_{2}}^{\infty}\sum\limits_{k_{1}=n_{1}}^{\infty}b_{\bar{k}} \right)^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}} \leqslant \sum\limits_{n_{2}=0}^{\infty}a_{n_{2}} \left[\sum\limits_{k_{2}=n_{2}}^{\infty}\left(\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left(\sum\limits_{k_{1}=n_{1}}^{\infty}b_{\bar{k}} \right)^{\theta_{1}}\right)^{\frac{1}{\theta_{1}}}\right]^{\theta_{2}} $$ Now, applying statement a) of Lemma 1 twice, we obtain from this $$ \sum\limits_{n_{2}=0}^{\infty}a_{n_{2}} \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left( \sum\limits_{k_{2}=n_{2}}^{\infty}\sum\limits_{k_{1}=n_{1}}^{\infty}b_{\bar{k}} \right)^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}} \leqslant \sum\limits_{n_{2}=0}^{\infty}a_{n_{2}} \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}b_{\bar{n}}^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}. $$ Now suppose the statement is true for $m-1$. Let us prove it for $m$.
Since $\theta_{j}\geqslant 1$, $ j = 1, \ldots, m-1$, by the triangle inequality for the norm and the induction hypothesis, we have $$ A_{\theta_{1},\ldots ,\theta_{m}}^{\theta_{m}} \leqslant $$ $$ \sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{k_{m}=n_{m}}^{\infty} \left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}\left( \sum\limits_{k_{m-1}=n_{m-1}}^{\infty}\ldots\sum\limits_{k_{1}=n_{1}}^{\infty}b_{\bar{k}} \right)^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{1}{\theta_{m-1}}}\right]^{\theta_{m}} $$ $$ \leqslant C\sum\limits_{n_{m}=0}^{\infty}a_{n_{m}}\left[\sum\limits_{k_{m}=n_{m}}^{\infty} \left[\sum\limits_{n_{m-1}=0}^{\infty}a_{n_{m-1}} \ldots \left[\sum\limits_{n_{1}=0}^{\infty}a_{n_{1}}b_{n_{1},\ldots, n_{m-1}, k_{m}} ^{\theta_{1}}\right]^{\frac{\theta_{2}}{\theta_{1}}}\ldots \right]^{\frac{1}{\theta_{m-1}}}\right]^{\theta_{m}} $$ Now, applying assertion a) of Lemma 1, we obtain from this the required inequality. Assertion $ b) $ is proved similarly. \begin{lemma}\label{lem7} {\it Let $1 < \alpha_{\varphi_{j}}\leqslant \beta_{\varphi_{j}} \leqslant 2, $ $j=1,...,m$, and let $E_{j}\subset [0, 2\pi)$, $ j=1,...,m,$ be measurable sets. Then for any trigonometric polynomial $$ T_{\bar n}(\bar x)=\sum_{k_{1}=-n_{1}}^{n_{1}}... \sum_{k_{m}=-n_{m}}^{n_{m}}b_{\bar k}e^{i\langle\bar{x},\bar{k}\rangle} $$ the following inequality holds: $$ \int_{E_{m}}...\int_{E_{1}} |T_{\bar n}(\bar x)|dx_{1}...dx_{m}\leqslant $$ $$ \leqslant C\prod_{j\in e} \frac{1}{\varphi_{j}(n_{j}^{-1})}\prod_{j\in e}|E_{j}|\prod_{j\in \bar e} \tilde{\varphi}_{j}(|E_{j}|) \|T_{\bar n}\|_{X(\bar\varphi)}^{*}, $$ where $e\subset \{1,...,m\}$, $\bar{e}$ is the complement of $e$, and $|E_{j}|$ is the Lebesgue measure of $E_{j}$.
} \end{lemma} {\bf Proof.} Let $e$ be an arbitrary subset of $\{1,...,m\}.$ Also let $E_{e}=\prod_{j\in e}E_{j}$ and $d^{e}\bar x=\prod_{j\in e}dx_{j}.$ It is known that for any trigonometric polynomial $T_{\bar n}(\bar x)$, for fixed $x_{j}, j\in \bar e,$ the following formula holds: $$ T_{\bar n}(\bar x)=\frac{1}{(2\pi)^{|e|}}\int_{I^{|e|}} T_{\bar n}(\bar y^{e},\bar x^{\bar e})\cdot \prod_{j\in e}D_{n_{j}}(x_{j}-y_{j})d^{e}\bar y, $$ where $D_{n}(t)$ is the Dirichlet kernel of the trigonometric system, $\bar{y}^{e}$ is the vector with the coordinates $y_{j}$ for $j\in e$, $\bar{x}^{\bar e}$ is the vector with the coordinates $x_{j}$ for $j\in \bar e$ ($\bar e$ is the set-theoretic complement of $e$), and $|e|$ is the number of elements of $e.$ By the properties of the Lebesgue integral, we have $$ \int_{E_{\bar e}}|T_{\bar n}(\bar x)|d^{\bar e}\bar x = \frac{1}{(2\pi)^{|e|}}\int_{E_{\bar e}}\Bigl|\int_{I^{|e|}} T_{\bar n}(\bar y^{e},\bar x^{\bar e})\cdot \prod_{j\in e}D_{n_{j}}(x_{j}-y_{j})d^{e}\bar y\Bigr|d^{\bar e}\bar x\leqslant $$ $$ \leqslant \frac{1}{(2\pi)^{|e|}}\int_{E_{\bar e}}\int_{I^{|e|}} |T_{\bar n}(\bar y^{e},\bar x^{\bar e})|\cdot \prod_{j\in e}|D_{n_{j}}(x_{j}-y_{j})|d^{e}\bar y d^{\bar e}\bar x = $$ $$ =\frac{1}{(2\pi)^{|e|}}\int_{I^{m}} |T_{\bar n}(\bar y^{e},\bar x^{\bar e})|\cdot \prod_{j\in e}|D_{n_{j}}(x_{j}-y_{j})| \cdot \prod_{j\in \bar e}\chi_{E_{j}}(x_{j}) d^{e}\bar y d^{\bar e}\bar x, $$ where $\chi_{E}$ is the characteristic function of the set $E.$ We now apply H\"{o}lder's inequality to the integral on the right-hand side of the last relation.
Then $$ \int_{E_{\bar e}}|T_{\bar n}(\bar x)|d^{\bar e}\bar x \leqslant \frac{1}{(2\pi)^{|e|}} \|T_{\bar n}\|_{X(\bar\varphi)}^{*} \|\prod_{j\in e}|D_{n_{j}}(x_{j}-\bullet)| \prod_{j\in \bar e}\chi_{E_{j}}\|_{X(\tilde{\bar\varphi})}^{*}= $$ $$ =\frac{1}{(2\pi)^{|e|}} \|T_{\bar n}\|_{X(\bar\varphi)}^{*} \prod_{j\in e}\|D_{n_{j}}(x_{j}-\bullet)\|_{X(\tilde{\varphi}_{j})}^{*} \prod_{j\in \bar e}\|\chi_{E_{j}}\|_{X(\tilde{\varphi}_{j})}^{*}. \eqno(3) $$ Since $$ \|\chi_{E_{j}}\|_{X(\tilde{\varphi}_{j})}^{*}=\tilde{\varphi}_{j}(|E_{j}|), $$ from inequality (3) we obtain $$ \int_{E_{\bar e}}|T_{\bar n}(\bar x)|d^{\bar e}\bar x \leqslant \frac{1}{(2\pi)^{|e|}} \|T_{\bar n}\|_{X(\bar\varphi)}^{*} $$ $$ \prod_{j\in \bar e}\tilde{\varphi}_{j}(|E_{j}|) \prod_{j\in e} \sup_{x_{j}\in [0, 2\pi]} \|D_{n_{j}}(x_{j}-\bullet)\|_{\tilde{\varphi}_{j}}^{*}, \eqno(4) $$ where $d^{\bar e}\bar x = \prod\limits_{j\in \bar e}dx_{j}$. In the one-dimensional case, the following estimate is known (see, for example, \cite{23}): $$ \sup_{x_{j}\in [0, 2\pi]} \|D_{n_{j}}(x_{j}-\bullet)\|_{X(\tilde{\varphi}_{j})}^{*}\leqslant C n_{j}\tilde{\varphi}_{j}(n_{j}^{-1}), \,\, j=1,\ldots, m . \eqno(5) $$ Now from inequalities (4) and (5) we have $$ \int_{E_{\bar e}}|T_{\bar n}(\bar x)|d^{\bar e}\bar x \leqslant C \|T_{\bar n}\|_{X(\bar\varphi)}^{*} \prod_{j\in \bar e}\tilde{\varphi}_{j}(|E_{j}|) \prod_{j\in e} n_{j}\tilde{\varphi}_{j}(n_{j}^{-1}) $$ for fixed $x_{j}, j\in e.$ Integrating both sides of this inequality with respect to the variables $x_{j}$, $j\in e,$ we get $$ \int_{E_{\bar e}}\int_{E_{e}}|T_{\bar n}(\bar x)|d^{e}\bar x d^{\bar e}\bar x \leqslant C \|T_{\bar n}\|_{X(\bar\varphi)}^{*} \prod_{j\in \bar e}\tilde{\varphi}_{j}(|E_{j}|)\prod_{j\in e} n_{j}\tilde{\varphi}_{j}(n_{j}^{-1})\prod_{j\in e}|E_{j}|. $$ The proof is finished. \section{Main results}\label{sec 2} We now prove the main results.
We set $$ G_{e}(\bar n)=\{\bar{s}=(s_{1},...,s_{m})\in \mathbb{N}^{m}: s_{j} \leqslant n_{j}, j\in e; \quad s_{j}>n_{j}, j\notin e\}, $$ where $e\subset\{1,...,m\}$; $$ U_{\bar n}(f,\bar{x})=\sum_{e\subset\{1,...,m\}}\sum_{\bar{s}\in G_{e}(\bar n)} \delta_{\bar s}(f,\bar{x}). $$ Let $$ \bar{f}(\bar t)=\sup_{|E_{m}|\geqslant t_{m}}\frac{1}{|E_{m}|} \int_{E_{m}}...\sup_{|E_{1}|\geqslant t_{1}}\frac{1}{|E_{1}|} \int_{E_{1}}|f(x_{1},...,x_{m})|dx_{1}...dx_{m}, $$ where $|E_{j}|$ is the Lebesgue measure of the set $E_{j}\subset [0,2\pi).$ \begin{theorem}\label{th1} {\it Let $\bar{\varphi}=(\varphi_{1},...,\varphi_{m})$ and let the functions $\varphi_{j}$ satisfy the conditions $1<\alpha_{\varphi_{j}}\leqslant \beta_{\varphi_{j}}<2$, $j=1,\ldots , m$. Then for each function $ f\in X(\bar{\varphi})$ the following inequality holds: $$ {\bar{f}}(\bar{t})\leqslant C\left\{\prod\limits_{j=1}^{m}\frac{1}{\varphi_{j}(t_{j})} \sum\limits_{s_{m}=n_{m}+1}^{\infty}...\sum\limits_{s_{1}=n_{1}+1}^{\infty} \|\delta_{\bar{s}}(f)\|_{X(\bar{\varphi})}^{*}+\right. $$ $$ \left.+\sum\limits_{e\subset \{1,...,m\}} \prod\limits_{j\notin e} \frac{1}{\varphi_{j}(t_{j})} \sum\limits_{\bar{s}\in G_{e}(\bar{n})}\prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-n_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*}\right\}, $$ for $\bar{t}=(t_{1},...,t_{m})\in (2^{-n_{1}-1},2^{-n_{1}}]\times...\times (2^{-n_{m}-1},2^{-n_{m}}],$ $n_{j}=1,2,...;$ \\ $j=1,...,m.$} \end{theorem} {\bf Proof.} Let $E_{j}\subset [0,2\pi)$, $j=1,\ldots,m,$ be Lebesgue measurable subsets. Then, by the properties of the integral, we get $$ \int_{E_{m}}...\int_{E_{1}}|f(x_{1},...,x_{m})|dx_{1}...dx_{m}\leqslant \int_{E_{m}}...\int_{E_{1}}|f(\bar x)-U_{\bar n}(f,\bar{x})|d{\bar x}+ $$ $$ +\int_{E_{m}}...\int_{E_{1}}|U_{\bar n}(f,\bar{x})|d\bar{x}.
\eqno(6) $$ Using H\"{o}lder's integral inequality we obtain $$ \int_{E_{m}}...\int_{E_{1}}|f(\bar x)-U_{\bar n}(f,\bar{x})|d\bar{x}\leqslant C\prod_{j=1}^{m}\frac{|E_{j}|}{\varphi_{j}(|E_{j}|)} \|f-U_{\bar n}(f)\|_{X(\bar{\varphi})}^{*}. \eqno(7) $$ Let $e \subset \{1,...,m\}$ be arbitrary. Then, applying Lemma 1, we obtain $$ \int_{E_{m}}...\int_{E_{1}}\left|\sum_{\bar{s}\in G_{e}(\bar n)}\delta_{\bar s}(f,\bar{x})\right|d\bar{x} \leqslant $$ $$ C\prod_{j\in e}|E_{j}| \prod_{j\notin e} \frac{|E_{j}|}{\varphi_{j}(|E_{j}|)}\sum_{\bar{s}\in G_{e}(\bar n)}\prod_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*}. \eqno(8) $$ Further, taking into account that $|E_{j}| \geqslant t_{j}$ and the properties of the functions $\varphi_{j}$, from inequalities (6)--(8) we have $$ \prod_{j=1}^{m}|E_{j}|^{-1}\int_{E_{m}}...\int_{E_{1}}|f(\bar x)|d\bar{x}\leqslant $$ $$ \leqslant C\cdot \left\{\prod_{j=1}^{m}\frac{1}{\varphi_{j}(t_{j})}\sum_{s_{m}=n_{m}+1}^{\infty} ...\sum_{s_{1}=n_{1}+1}^{\infty}\|\delta_{\bar s}(f) \|_{X(\bar{\varphi})}^{*}+\right. $$ $$ \left.+\sum_{e\subset \{1,...,m\}}\prod_{j\notin e}\frac{1}{\varphi_{j}(t_{j})} \sum_{\bar{s}\in G_{e}(\bar n)}\prod_{j\in e} \frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar s}(f) \|_{X(\bar{\varphi})}^{*}\right\}. $$ This implies the assertion of the theorem. \begin{theorem}\label{th2} {\it Let $1<\alpha_{\psi_{j}}\leqslant \beta_{\psi_{j}}< \alpha_{\varphi_{j}}\leqslant \beta_{\varphi_{j}}<2$, $1\leqslant\tau_{j}<+\infty$, $j=1,...,m$.
If $f\in X(\bar{\varphi})$ and $$ \left\{\prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j} (2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar{s}\in{\mathbb Z}_{+}^{m}}\in l_{\bar{\tau}}, $$ then $f\in L_{\bar{\psi},\bar{\tau}}^{*}$ and the following inequality holds $$ \|f\|_{\bar{\psi},\bar{\tau}}^{*}\leqslant C \left\|\left\{\prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j} (2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar{s}\in{\mathbb Z}_{+}^{m}} \right\|_{l_{\bar\tau}}. $$} \end{theorem} {\bf Proof.} According to Lemma 2 of \cite{4}, the following inequality holds: $$ f^{*_{1},...,*_{m}}(t_{1},...,t_{m})\leqslant \bar{f}(t_{1},...,t_{m})\equiv $$ $$ \equiv \sup_{|E_{m}|\geqslant t_{m}}\frac{1}{|E_{m}|}\int_{E_{m}}dx_{m}... \sup_{|E_{1}|\geqslant t_{1}}\frac{1}{|E_{1}|}\int_{E_{1}}|f(x_{1},...,x_{m})|dx_{1}. $$ Therefore $ \|f\|_{\bar{\psi},\bar{\tau}}^{*}\leqslant C\|\bar{f}\|_{ \bar{\psi},\bar{\tau}}^{*}, \,\, 1\leqslant\tau_{j}<+\infty, j=1,...,m. $ Taking into account the relation $$ \int\limits_{2^{-n-1}}^{2^{-n}}\psi_{j}(t)\frac{dt}{t}\asymp \psi_{j}(2^{-n}) \eqno(9) $$ and using Theorem 1, we have $$ \|f\|_{\bar{\psi},\bar{\tau}}^{*}\leqslant C\left[\left\{\sum_{n_{m}=0}^{\infty}\left(\frac{\psi_{m}(2^{-n_{m}})}{ \varphi_{m}(2^{-n_{m}})}\right)^{\tau_{m}} \left[...\right.\right.\right.
$$ $$ ...\left.\left.\left[\sum_{n_{1}=0} ^{\infty} \left(\frac{\psi_{1}(2^{-n_{1}})}{ \varphi_{1}(2^{-n_{1}})}\right)^{\tau_{1}} \left( \sum_{s_{m}=n_{m}+1}^{\infty}...\sum_{s_{1}=n_{1}+1}^{\infty} \|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*}\right)^{\tau_{1}} \right]^{\frac{\tau_{2}}{\tau_{1}}}...\right]^{\frac{\tau_{m}}{ \tau_{m-1}}} \right\}^{\frac{1}{\tau_{m}}}+ $$ $$ +\sum\limits_{e\subset\{1,...,m\}}\left\{\sum_{n_{m}=0}^{\infty}\int\limits_{2^{-n_{m}-1}}^{ 2^{-n_{m}}} \psi_{m}^{\tau_{m}}(t_{m})t_{m}^{-1}\left[...\left[\sum_{n_{1}=0} ^{\infty}\int\limits_{2^{-n_{1}-1}}^{2^{-n_{1}}}\psi_{1}^{\tau_{1}}(t_{1}) \cdot t_{1}^{-1} \times \right.\right.\right. $$ $$ \left.\times\left(\prod\limits_{j\notin e}\frac{1}{\varphi_{j}(t_{j})}\sum\limits_{\bar{s}\in G_{e}(\bar{n})}\prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*}\right)^{\tau_{1}}dt_{1} \right]^{\frac{\tau_{2}}{\tau_{1}}}... $$ $$ ...\left.\left.\left.\right]^{\frac{\tau_{m}}{ \tau_{m-1}}} dt_{m}\right\}^{\frac{1}{\tau_{m}}}\right]= C\cdot\left[J_{1}+\sum\limits_{e\subset\{1,...,m\}}J_{e}\right]. \eqno (10) $$ According to Lemma 4 and Lemma 5, the numbers $$ a_{s_{j}} = \frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j}(2^{-s_{j}})} $$ and $a_{s_{j}} = \psi_{j}(2^{-s_{j}})$, $j=1,\ldots,m,$ satisfy the conditions $ a) $ and $ b) $ of Lemma 6, respectively. Let $e=\{1,...,i\}, i\leqslant m$. Then, using relation (9) and successively applying the triangle inequality and assertions $ a) $ and $ b) $ of Lemma 6, we obtain $$ J_{e}\leqslant \left\{\sum\limits_{n_{m}=0}^{\infty} \left(\frac{\psi_{m}(2^{-n_{m}})}{ \varphi_{m}(2^{-n_{m}})}\right)^{\tau_{m}} \left[...\right. \right. $$ $$ ...\left[\sum_{n_{i+1}=0}^{\infty} \left(\frac{\psi_{i+1}(2^{-n_{i+1}})}{ \varphi_{i+1}(2^{-n_{i+1}})}\right)^{\tau_{i+1}} \left[ \sum_{n_{i}=0}^{\infty}\psi_{i}^{\tau_{i}}(2^{-n_{i}})\left[...\left[ \sum_{n_{1}=0}^{\infty}\psi_{1}^{\tau_{1}}(2^{-n_{1}})\times \right. \right. \right. \right.
$$ $$ \times \left(\sum_{s_{m}=n_{m}+1}^{\infty}...\sum_{s_{i+1}=n_{i+1}+1}^{\infty} \sum_{s_{i}=1}^{n_{i}}...\sum_{s_{1}=1}^{n_{1}}\prod_{j=1}^{i} \frac{1}{\varphi_{j}(2^{-s_{j}})}\times\right. $$ $$ \left.\left.\left.\left.\left. \times \|\delta_{\bar{s}}(f)\|_{X(\bar{\varphi})}^{*} \right)^{\tau_{1}} \right]^{\frac{\tau_{2}}{\tau_{1}}}...\biggr]^{\frac{\tau_{i}} {\tau_{i-1}}} \right]^{\frac{\tau_{i+1}}{\tau_{i}}}... \right]^{\frac{\tau_{m}}{\tau_{m-1}}}\right\}^{\frac{1}{\tau_{m}}} \leqslant $$ $$ \leqslant C\left\{\sum\limits_{n_{m}=0}^{\infty} \left(\frac{\psi_{m}(2^{-n_{m}})}{ \varphi_{m}(2^{-n_{m}})}\right)^{\tau_{m}} \left[\sum\limits_{n_{m-1}=0}^{\infty} \left(\frac{\psi_{m-1}(2^{-n_{m-1}})}{ \varphi_{m-1}(2^{-n_{m-1}})}\right)^{\tau_{m-1}}...\right. \right. $$ $$ ...\left.\left.\left[\sum\limits_{n_{1}=0}^{\infty} \left(\frac{\psi_{1}(2^{-n_{1}})}{ \varphi_{1}(2^{-n_{1}})}\right)^{\tau_{1}} \left(\|\delta_{\bar{s}}(f)\|_{X(\bar{\varphi})}^{*}\right)^{\tau_{1}} \right]^{\frac{\tau_{2}}{\tau_{1}}}...\right]^{\frac{\tau_{m}}{ \tau_{m-1}}} \right\}^{\frac{1}{\tau_{m}}}. \eqno(11) $$ Let $e \subset \{1,2,\ldots , m\},$ $e \neq \emptyset$ and $j_{0}=\min e$, $k_{0}=\max e$. 
If $\{j_{0}, j_{0}+1,\ldots , k_{0}\}\cap \bar{e} = \emptyset$, where $\bar{e}$ is the complement of the set $e$, then $$ \sum\limits_{\bar{s}\in G_{e}(\bar{n})}\prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*} = $$ $$ \sum_{s_{m}=n_{m}+1}^{\infty}...\sum_{s_{k_{0}+1}=n_{k_{0}+1}+1}^{\infty} \sum_{s_{k_{0}}=1}^{n_{k_{0}}}...\sum_{s_{j_{0}}=1}^{n_{j_{0}}} \sum_{s_{j_{0}-1}=n_{j_{0}-1}+1}^{\infty}...\sum_{s_{1}=n_{1}+1}^{\infty}\prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*}. $$ If $\{j_{0}, j_{0}+1,\ldots , k_{0}\}\cap \bar{e} = \{l_{0},\ldots , l_{1}\}$, then $$ \sum\limits_{\bar{s}\in G_{e}(\bar{n})}\prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*} = $$ $$ \sum_{s_{m}=n_{m}+1}^{\infty}...\sum_{s_{k_{0}+1}=n_{k_{0}+1}+1}^{\infty} \sum_{s_{k_{0}}=1}^{n_{k_{0}}}...\sum_{s_{l_{1}+1}=1}^{n_{l_{1}+1}} \sum_{s_{l_{1}}=n_{l_{1}}+1}^{\infty}...\sum_{s_{l_{0}}=n_{l_{0}}+1}^{\infty} \sum_{s_{l_{0}-1}=1}^{n_{l_{0}-1}}...\sum_{s_{j_{0}}=1}^{n_{j_{0}}} $$ $$ \sum_{s_{j_{0}-1}=n_{j_{0}-1}+1}^{\infty}...\sum_{s_{1}=n_{1}+1}^{\infty} \prod\limits_{j\in e}\frac{1}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar{s}}(f) \|_{X(\bar{\varphi})}^{*}. $$ Now, using these equalities and successively applying assertions $ a) $ and $ b) $ of Lemma 6, we obtain that inequality (11) holds for an arbitrary non-empty subset $e$. Further, applying assertion $ a) $ of Lemma 6 we also obtain $$ J_{1}\leqslant C\cdot \left\{\sum\limits_{n_{m}=0}^{\infty} \left(\frac{\psi_{m}(2^{-n_{m}})}{ \varphi_{m}(2^{-n_{m}})}\right)^{\tau_{m}} \left[...\left[\sum\limits_{n_{1}=0}^{\infty} \left(\frac{\psi_{1}(2^{-n_{1}})}{ \varphi_{1}(2^{-n_{1}})}\right)^{\tau_{1}} \left(\|\delta_{\bar{s}}(f)\|_{X(\bar{\varphi})}^{*}\right)^{\tau_{1}} \right]^{\frac{\tau_{2}}{\tau_{1}}}...\right]^{\frac{\tau_{m}}{ \tau_{m-1}}} \right\}^{\frac{1}{\tau_{m}}}.
\eqno(12) $$ Now the assertion of the theorem follows from inequalities (10)--(12). \begin{rem} In the case $X(\bar\varphi)=L_{\bar{\varphi}, \bar{\tau}}^{*}(\mathbb{T}^{m})$ --- the generalized Lorentz space, Theorem 2 was announced in \cite{25} without proof. \end{rem} \begin{theorem}\label{th3} {\it Let $1\leqslant\theta_{j}\leqslant +\infty$, $1\leqslant\tau_{j}<+\infty$, $j=1,...,m$, and let the functions $\varphi_{j}$, $\psi_{j}$ satisfy the conditions $1<\alpha_{\psi_{j}} \leqslant\beta_{\psi_{j}}<\alpha_{\varphi_{j}}\leqslant\beta_{\varphi_{j}}<2$, $j=1,...,m$. If $$ \left[\sum_{s_{j}=0}^{\infty}\left(\frac{\psi_{j}(2^{-s_{j}})} {\varphi_{j}(2^{-s_{j}})} 2^{-s_{j}r_{j}}\right)^{\varepsilon_{j}}\right]^{\frac{1}{\varepsilon_{j}}}<+\infty, \eqno(13) $$ where $\varepsilon_{j}=\tau_{j}\beta_{j}'$, $\beta_{j}'= \frac{\beta_{j}}{\beta_{j}-1}$, $j=1,...,m$, if $\beta_{j}= \frac{\theta_{j}}{\tau_{j}}>1$, and $\varepsilon_{j}=+\infty$, if $\theta_{j}\leqslant\tau_{j}$, $j=1,...,m,$ then the embedding $S_{X(\bar{\varphi}), \bar{\theta}}^{\bar r}B\subset L_{\bar{\psi},\bar{\tau}}^{*}(I^{m})$ holds and $$ \|f\|_{\bar{\psi}, \bar{\tau}}^{*}\leqslant C \|f\|_{S_{X(\bar{\varphi}), \bar{\theta}}^{\bar r}B}.
$$} \end{theorem} {\bf Proof.} If $\tau_{j}<\theta_{j}$, $j=1,...,m$, then applying H\"{o}lder's inequality with exponents $\beta_{j}=\frac{\theta_{j}}{\tau_{j}}$, $\frac{1}{\beta_{j}}+ \frac{1}{\beta_{j}'}=1$, $j=1,...,m$, we obtain $$ \sigma_{\bar{\tau}, \bar{\theta}, \bar{\varphi}}(f)=\left\|\left\{ \prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}} )}{\varphi_{j}(2^{-s_{j}})}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}\leqslant $$ $$ \leqslant \left\|\left\{ \prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j}(2^{-s_{j}})}2^{-s_{j}r_{j}} \right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\varepsilon}} \left\|\left\{\prod_{j=1}^{m}2^{s_{j}r_{j}}\|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\theta}}, \eqno(14) $$ where $\bar\varepsilon=(\varepsilon_{1},...,\varepsilon_{m}),$ $\varepsilon_{j}=\tau_{j}\beta_{j}',$ $j=1,...,m.$ If $\theta_{j}\leqslant\tau_{j},$ $j=1,...,m,$ then by Jensen's inequality we have $$ \sigma_{\bar{\tau},\bar{\theta}, \bar{\varphi}}(f) \leqslant \left\|\left\{ \prod_{j=1}^{m} 2^{-s_{j}r_{j}} \|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\theta}} \prod_{j=1}^{m}\sup_{s_{j}\in\mathbb{Z}_{+}} \frac{\psi_{j}(2^{-s_{j}})} {\varphi_{j}(2^{-s_{j}})}2^{-s_{j}r_{j}}. \eqno(15) $$ By condition (13) and Theorem 2, inequalities (14) and (15) imply the assertion of the theorem.
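For orientation, in the model case $\varphi_{j}(t)=t^{1/p_{j}}$, $\psi_{j}(t)=t^{1/q_{j}}$, $1<p_{j}<q_{j}<+\infty$ (an illustrative special case, not required in the proof), the first factor on the right-hand side of (14) factorizes as $$ \left\|\left\{ \prod_{j=1}^{m}\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j}(2^{-s_{j}})}2^{-s_{j}r_{j}} \right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\varepsilon}} = \prod_{j=1}^{m}\left(\sum_{s_{j}=0}^{\infty} 2^{-s_{j}\varepsilon_{j}\left(r_{j}-\frac{1}{p_{j}}+\frac{1}{q_{j}}\right)}\right)^{\frac{1}{\varepsilon_{j}}}, $$ since $\psi_{j}(2^{-s_{j}})/\varphi_{j}(2^{-s_{j}})=2^{s_{j}(\frac{1}{p_{j}}-\frac{1}{q_{j}})}$; the geometric series on the right is finite precisely when $r_{j}>\frac{1}{p_{j}}-\frac{1}{q_{j}}$, $j=1,\ldots,m$, which is the familiar condition of the power-weight case.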
\begin{theorem}\label{th4} {\it Let the functions $\psi_{j}$ satisfy the conditions $1< 2^{1/\lambda_{j}} < \alpha_{\psi_{j}}\leqslant \beta_{\psi_{j}}<2$, $1<\tau_{j}<+\infty$, $j=1,...,m.$ If $f\in L_{\bar{\psi}, \bar{\tau}}^{*}(\mathbb{T}^{m})$ and $$ f(\bar x)\sim \sum_{\bar{s}\in\mathbb{Z}_{+}^{m}}b_{\bar s} \sum_{\bar{k}\in\rho(\bar s)}e^{i\langle\bar{k},\bar{x}\rangle}, $$ then the following inequality holds $$ \|f\|_{\bar{\psi},\bar{\tau}}^{*}\geqslant C\times $$ $$ \times \left\{\sum_{s_{m}=1}^{\infty}2^{s_{m}\frac{\tau_{m}}{\lambda_{m}}}\psi_{m}^{\tau_{m}}(2^{-s_{m}})\left[...\left[ \sum_{s_{1}=1}^{\infty}2^{s_{1}\frac{\tau_{1}}{\lambda_{1}}}\psi_{1}^{\tau_{1}}(2^{-s_{1}})\left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right)^{\tau_{1}}\right]^{\frac{\tau_{2}}{\tau_{1}}}... \right]^{\frac{\tau_{m}}{\tau_{m-1}}}\right\}^{\frac{1}{\tau_{m}}}, $$ where the $b_{\bar s}$ are real numbers. } \end{theorem} {\bf Proof.} Let $S_{2^{\nu},...,2^{\nu}}(f,\bar{x})$ be the rectangular partial sum of the Fourier series of a function $f\in L_{\bar{\psi}, \bar{\tau}}^{*}(I^{m}).$ It is known that (see \cite{4}) $$ \|f\|_{\bar{\psi}, \bar{\tau}}^{*} \asymp \sup_{{}_{\|g\|_{\bar{\tilde{\psi}},\bar{\tau}'}^{*}\leqslant 1}^{g\in L_{\bar{\tilde{\psi}},\bar{\tau}'}^{*}}} \int_{I^{m}} f(2\pi\bar{x})g(2\pi\bar{x})d\bar{x}, \eqno(16) $$ where $\bar{\tau}'= (\tau_{1}',...,\tau_{m}'),$ $\frac{1}{\tau_{j}}+\frac{1}{\tau_{j}'}=1$, $j=1,...,m$, and $\bar{\tilde{\psi}}(t)=(\tilde{\psi}_{1}(t),...,\tilde{\psi}_{m}(t))$, $ \tilde{\psi}_{j}(t)=\frac{t}{\psi_{j}(t)} $ for $t\in (0, 1]$ and $\tilde{\psi}_{j}(0)=0$, $j=1,...,m$.
We will introduce the notation $$ \sigma_{\nu}(f)_{\bar{\tau}_{j}}= \left\{\sum_{s_{j}=1}^{\nu-1}2^{\frac{s_{j}\tau_{j}}{\lambda_{j}}}\psi_{j}^{\tau_{j}}(2^{-s_{j}})\left[...\left[ \sum_{s_{1}=1}^{\nu-1}2^{\frac{s_{1}\tau_{1}}{\lambda_{1}}}\psi_{1}^{\tau_{1}}(2^{-s_{1}})\left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right)^{\tau_{1}}\right]^{\frac{\tau_{2}}{\tau_{1}}}... \right]^{\frac{\tau_{j}}{\tau_{j-1}}}\right\}^{\frac{1}{\tau_{j}}}. $$ We consider the trigonometric polynomial $$ g_{\nu}(\bar x)=\sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1}b_{\bar{s},\nu} \sum_{\bar{k}\in\rho(\bar s)}e^{i\langle\bar{k},\bar{x}\rangle}, $$ where $$ b_{\bar{s},\nu}=\left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} $$ $$ \times \prod_{j=1}^{m}(2^{s_{j}}\psi_{j}(2^{-s_{j}}))^{\tau_{j}} \left(\prod_{j=1}^{m}2^{s_{j}}\right)^{-1} \prod_{j=2}^{m}2^{s_{j}(1-\frac{1}{\lambda_{j}})(\tau_{1}-\tau_{j})} |b_{\bar s}(f)|^{\tau_{1}-1}{\rm sign}(b_{\bar s}(f)) $$ and $\bar{\tau}_{j}=(\tau_{1}, \ldots , \tau_{j})$. Then, taking into account the orthogonality of the trigonometric system, we have $$ \int_{I^{m}} f(2\pi\bar{x})g_{\nu}(2\pi\bar{x})d\bar{x} = \int_{I^{m}} S_{2^{\nu},...,2^{\nu}}(f,2\pi\bar{x}) g_{\nu}(2\pi\bar x)d\bar{x} $$ $$ = \sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1} \int_{I^{m}}\delta_{\bar s}(f, 2\pi\bar{x})g_{\nu}(2\pi\bar x)d\bar{x}. \eqno(17) $$ We will prove that $\|g_{\nu}\|_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*} \leqslant C_{0},$ where $C_{0}$ is some positive constant independent of $\nu$.
Taking into account the relation (see (5)) $$ \Bigl \|\sum_{\bar{k}\in\rho(\bar s)}e^{i\langle\bar{k},\bar{x}\rangle} \Bigr\|_{\bar{\psi},\bar{\tau}}^{*}\asymp \prod_{j=1}^{m} 2^{s_{j}}\psi_{j}(2^{-s_{j}}),\,\, 1< \tau_{j}<+\infty, \,\, 1< \alpha_{\psi_{j}} \leqslant \beta_{\psi_{j}}<2, \eqno(18) $$ it is easy to verify the following inequality $$ \left(\|\delta_{\bar s}(g_{\nu})\|_{\bar{\lambda}', \bar{\tau}'}^{*} \right)^{\tau'_{1}}=\Bigl(|b_{\bar{s},\nu}| \Bigl\| \sum_{\bar{k}\in\rho(\bar s)}e^{i\langle\bar{k},\bar{x}\rangle} \Bigr\|_{\bar{\lambda}', \bar{\tau}'}^{*}\Bigr)^{\tau'_{1}}\leqslant $$ $$ \leqslant C\prod_{j=2}^{m}\Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}\tau'_{1}} \left(2^{\frac{s_{1}}{\lambda_{1}}}\psi_{1}(2^{-s_{1}})\right)^{(\tau_{1}+\tau'_{1})}\left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}} $$ $$ \times \left(\prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}}\right)^{\tau'_{1}}, \eqno(19) $$ where $\bar{\nu} = (\nu, \ldots , \nu )$. Further, by using inequality (19), we obtain $$ J(g_{\nu}):= \left\{\sum_{s_{m}=1}^{\nu-1}(2^{\frac{s_{m}}{\lambda_{m}^{'}}}\tilde{\psi}_{m}(2^{-s_{m}}))^{\tau'_{m}}\left[...\left[ \sum_{s_{1}=1}^{\nu-1}(2^{\frac{s_{1}}{\lambda_{1}^{'}}}\tilde{\psi}_{1}(2^{-s_{1}}))^{\tau'_{1}}\left(\|\delta_{\bar s}(g_{\nu})\| _{\bar{\lambda}', \bar{\tau}'}^{*} \right)^{\tau'_{1}}\right]^{\frac{\tau'_{2}}{\tau'_{1}}}...
\right]^{\frac{\tau'_{m}}{\tau'_{m-1}}}\right\}^{\frac{1}{\tau'_{m}}} $$ $$ \leqslant C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}\in\mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} $$ $$ \times \left\{\sum_{s_{m}=1}^{\nu-1}(2^{\frac{s_{m}}{\lambda_{m}^{'}}}\tilde{\psi}_{m}(2^{-s_{m}}))^{\tau'_{m}}\left[...\left[ \sum_{s_{1}=1}^{\nu-1}(2^{\frac{s_{1}}{\lambda_{1}^{'}}}\tilde{\psi}_{1}(2^{-s_{1}}))^{\tau'_{1}}\prod_{j=2}^{m} (2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}}))^{\tau_{j}\tau'_{1}} \right.\right.\right. $$ $$ \left.\left.\left.\times (2^{\frac{s_{1}}{\lambda_{1}}}\psi_{1}(2^{-s_{1}}))^{(\tau_{1}+\tau'_{1})} \left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}} \left(\prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}}\right)^{\tau'_{1}} \right]^{\frac{\tau'_{2}}{\tau'_{1}}}... \right]^{\frac{\tau'_{m}}{\tau'_{m-1}}}\right\}^{\frac{1}{\tau'_{m}}}. \eqno(20) $$ Since $\frac{1}{\lambda_{j}^{'}}=1-\frac{1}{\lambda_{j}}$, $\tau_{j}\tau'_{j}=\tau_{j}+\tau'_{j}$ and $\tilde{\psi}_{j}(t)=\frac{t}{\psi_{j}(t)}$, $j=1, \ldots, m,$ then $$ \Bigl(2^{\frac{s_{j}}{\lambda_{j}^{'}}}\tilde{\psi}_{j}(2^{-s_{j}})\Bigr)^{\tau'_{j}} \Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}\tau'_{j}} = \Bigl(\frac{2^{-\frac{s_{j}}{\lambda_{j}}}}{\psi_{j}(2^{-s_{j}})}\Bigr)^{\tau'_{j}} \Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}+\tau'_{j}} $$ $$ = \Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}}, \,\, j=1, \ldots, m.
$$ Now, taking into account this equality and the definition of the numbers $\sigma_{\nu}(f)_{\bar{\tau}_{j}},$ we can continue inequality (20) as follows: $$ J(g_{\nu})\leqslant C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}\in\mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} $$ $$ \times \left\{\sum_{s_{m}=1}^{\nu-1}\Bigl(2^{\frac{s_{m}}{\lambda_{m}}}\psi_{m}(2^{-s_{m}})\Bigr)^{\tau_{m}}\left[...\left[ \sum_{s_{1}=1}^{\nu-1}\Bigl(2^{\frac{s_{1}}{\lambda_{1}}}\psi_{1}(2^{-s_{1}})\Bigr)^{\tau_{1}} \left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}} \times\right.\right.\right. $$ $$ \left.\left.\left.\times \left(\prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}}\right)^{\tau'_{1}} \right]^{\frac{\tau'_{2}}{\tau'_{1}}}... \right]^{\frac{\tau'_{m}}{\tau'_{m-1}}}\right\}^{\frac{1}{\tau'_{m}}}= $$ $$ = \left\{\sum_{s_{m}=1}^{\nu-1}\Bigl(2^{\frac{s_{m}}{\lambda_{m}}}\psi_{m}(2^{-s_{m}})\Bigr)^{\tau_{m}}\left[...\left[ \sum_{s_{1}=1}^{\nu-1}\Bigl(2^{\frac{s_{1}}{\lambda_{1}}}\psi_{1}(2^{-s_{1}})\Bigr)^{\tau_{1}} \left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}} \right]^{\frac{\tau_{2}}{\tau_{1}}}... \right]^{\frac{\tau_{m}}{\tau_{m-1}}}\right\}^{\frac{1}{\tau_{m}}} $$ $$ \times C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}\in\mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \leqslant C. \eqno(21) $$ Since, by the hypothesis of the theorem, $1< 2^{\frac{1}{\lambda_{j}}} < \alpha_{\psi_{j}}$, we have $\beta_{\tilde{\psi}_{j}} < 2^{\frac{1}{\lambda_{j}^{'}}}$, $j=1,...,m.$ Therefore, according to Theorem 2 and inequality (21), we obtain $$ \|g_{\nu}\|_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*}\leqslant C J(g_{\nu}) \leqslant C_{0}.
$$ Hence the function $\varphi_{\nu}=\frac{1}{C_{0}}g_{\nu} \in L_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*}(I^{m})$ and $\|\varphi_{\nu}\|_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*}\leqslant 1.$ Further, according to the orthogonality property, we have $$ \int_{I^{m}}\delta_{\bar s}(f,\bar{x})g_{\nu}(\bar x)d\bar{x}= b_{\bar s}(f) b_{\bar{s},\nu} \int_{I^{m}}\left|\sum_{\bar{k}\in\rho(\bar s)} e^{i\langle\bar{k},\bar{x}\rangle}\right|^{2}d\bar{x}=(2\pi)^{m}\cdot b_{\bar s}(f)b_{\bar{s},\nu}\cdot \prod_{j=1}^{m}2^{s_{j}-1}. \eqno(22) $$ From the definition of the numbers $b_{\bar{s},\nu}$ it follows that $$ \prod_{j=1}^{m}2^{s_{j}}b_{\bar s}(f)b_{\bar{s},\nu}=\prod_{j=1}^{m}2^{s_{j}} \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} $$ $$ \times \prod_{j=1}^{m}2^{s_{j}\tau_{j}}\psi_{j}^{\tau_{j}}(2^{-s_{j}}) \left(\prod_{j=1}^{m}2^{s_{j}}\right)^{-1} \prod_{j=2}^{m}2^{s_{j}(1-\frac{1}{\lambda_{j}})(\tau_{1}-\tau_{j})} |b_{\bar s}(f)|^{\tau_{1}}= $$ $$ =\prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \prod_{j=1}^{m}2^{s_{j}\tau_{j}}\psi_{j}^{\tau_{j}}(2^{-s_{j}}) $$ $$ \times \prod_{j=2}^{m}2^{-s_{j}(1-\frac{1}{\lambda_{j}})\tau_{j}} 2^{-s_{1}(1-\frac{1}{\lambda_{1}})\tau_{1}} \left(\prod_{j=1}^{m} 2^{s_{j}(1-\frac{1}{\lambda_{j}})}|b_{\bar s}(f)|\right)^{\tau_{1}}\geqslant $$ $$ \geqslant C(\tau, \lambda) \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar
s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} $$ $$ \times \prod_{j=1}^{m}\Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}} \left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}}. \eqno(23) $$ Now, taking into account (22) and (23), we obtain $$ \sup_{{}_{\|g\|_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*} \leqslant 1}^{g\in L_{\bar{\tilde{\psi}}, \bar{\tau}'}^{*}}} \sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1} \int_{I^{m}}\delta_{\bar s}(f,\bar{x}) g(\bar x)d\bar{x}\geqslant \sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1} \int_{I^{m}}\delta_{\bar s}(f,\bar{x})\varphi_{\nu}(\bar x) d\bar{x}= $$ $$ =\frac{1}{C_{0}} \sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1} \int_{I^{m}}\delta_{\bar s}(f,\bar{x})g_{\nu}(\bar x) d\bar{x} $$ $$ \geqslant C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda},\bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}^{ -\frac{\tau_{m}}{\tau'_{m}}} \prod_{j=1}^{m-1}(\sigma_{\nu} (f)_{\bar{\tau}_{j}})^{\tau_{j+1}-\tau_{j}} $$ $$ \times \sum_{s_{m}=1}^{\nu-1}...\sum_{s_{1}=1}^{\nu-1} \prod_{j=1}^{m}\Bigl(2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\Bigr)^{\tau_{j}} \left(\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*}\right)^{\tau_{1}}= $$ $$ =C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}=\bar{1}}^{\bar{\nu}}\right\|_{l_{\bar\tau}}.
\eqno(24) $$ It follows from inequalities (17) and (24) that $$ \|f\|_{\bar{\psi}, \bar{\tau}}^{*}\geqslant C \left\|\left\{\prod_{j=1}^{m}2^{\frac{s_{j}}{\lambda_{j}}}\psi_{j}(2^{-s_{j}})\|\delta_{\bar s}(f)\|_{\bar{\lambda}, \bar{\tau}}^{*} \right\}_{\bar{s}\in\mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}. $$ The theorem is proved. \begin{rem} In the case $\psi_{j}(t)=t^{\frac{1}{q_{j}}}$, $\varphi_{j}(t)=t^{\frac{1}{p_{j}}}$, $j=1,\ldots , m$, Theorem 2 and Theorem 4 were proved in \cite{14}. \end{rem} \begin{theorem}\label{th5} {\it Let $1\leqslant\theta_{j}<+\infty,$ $1\leqslant\tau_{j}<+\infty,$ $j=1,...,m$, and let the functions $\varphi_{j},$ $\psi_{j}$ satisfy the conditions $1<\alpha_{\psi_{j}} \leqslant\beta_{\psi_{j}}<\alpha_{\varphi_{j}}\leqslant\beta_{\varphi_{j}}<2,$ $j=1,...,m$, and (13). 1) If $1\leqslant \tau_{j}<\theta_{j}<+\infty,$ $j=1,...,m,$ then $$ E_{n}^{\bar \gamma}(S_{X(\bar{\varphi}), \bar\theta}^{\bar r}B) _{\bar{\psi}, \bar{\tau}}\leqslant C\left\|\left\{\prod_{j=1}^{m}2^{-s_{j}r_{j}} \mu_{j}(s_{j})\right\}_{\bar{s}\in Y^{m}(\bar{\gamma},n)} \right\|_{l_{\bar\varepsilon}}, $$ where $\bar{\varepsilon}=(\varepsilon_{1},...,\varepsilon_{m}),$ $\varepsilon_{j}=\tau_{j}\beta_{j}',$ $\frac{1}{\beta_{j}}+ \frac{1}{\beta_{j}'}=1,$ $\beta_{j}=\frac{\theta_{j}}{\tau_{j}}.$ 2) If $1\leqslant \theta_{j}\leqslant \tau_{j}<+\infty,$ $j=1,...,m,$ then $$ E_{n}^{\bar \gamma}(S_{X(\bar{\varphi}), \bar\theta}^{\bar r}B) _{\bar{\psi},\bar{\tau}}\leqslant C\sup\left\{\prod_{j=1}^{m}2^{-s_{j}r_{j}} \mu_{j}(s_{j}): \quad \bar{s}\in\mathbb{Z}_{+}^{m}, \quad \langle\bar{s},\bar{\gamma}\rangle\geqslant n\right\}. $$ } \end{theorem} {\bf Proof.} Let $f\in S_{X(\bar{\varphi}), \bar\theta}^{\bar r}B.$ Then by Theorem 2 we obtain $$ \|f-S_{n}^{(\bar\gamma)}(f)\|_{\bar{\psi}, \bar{\tau}}^{*}\leqslant C\left\|\left\{\prod_{j=1}^{m} \mu_{j}(s_{j})\|\delta_{\bar s}(f-S_{n}^{(\bar\gamma)}(f))\|_{X(\bar{\varphi})}^{*}\right\}_{\bar s \in \mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\tau}}.
\eqno(25) $$ Since $\delta_{\bar s}(f-S_{n}^{(\bar\gamma)}(f))=0$ for $\bar{s} \notin Y^{m}(\bar{\gamma},n)$ and $\delta_{\bar s}(f-S_{n}^{(\bar\gamma)}(f))=\delta_{\bar s}(f)$ for $\bar{s} \in Y^{m}(\bar{\gamma},n),$ from (25) we obtain $$ \|f-S_{n}^{(\bar\gamma)}(f)\|_{\bar{\psi}, \bar{\tau}}^{*}\leqslant C(\theta, m)\left\|\left\{\prod_{j=1}^{m}\mu_{j}(s_{j}) \|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar{s}\in Y^{m}(\bar{\gamma},n)}\right\|_{l_{\bar\tau}}. \eqno(26) $$ We put $$ b_{\bar s}(n)=\prod_{j=1}^{m}\frac{\psi_{j}\left(2^{-s_{j}}\right)} {\varphi_{j}\left(2^{-s_{j}}\right)}2^{-s_{j}r_{j}} \,\, \text{for} \quad \bar{s}\in Y^{m}(\bar{\gamma},n), $$ and $b_{\bar s}(n)=0$ for $\bar{s}\notin Y^{m}(\bar{\gamma},n)$. We will prove item 1). Since $1\leqslant \tau_{j}<\theta_{j}<+\infty,$ $j = 1, ...,m,$ then applying H\"{o}lder's inequality with exponents $\beta_{j}=\frac{\theta_{j}}{\tau_{j}}$, $\frac{1}{\beta_{j}}+ \frac{1}{\beta_{j}'}=1$, $j=1,...,m$, we get $$ \sigma_{1}(f,n)=\left\|\left\{ \prod_{j=1}^{m} \mu_{j}(s_{j}) \|\delta_{\bar s}(f)\|_{X(\bar{\varphi})}^{*} \right\}_{\bar{s}\in Y^{m}(\bar{\gamma},n)}\right\|_{ l_{\bar\tau}}\leqslant C \left\|\left\{b_{\bar s}(n) \right\}_{\bar{s}\in\mathbb{Z}_{+}^{m}}\right\|_{l_{\bar\varepsilon}}, \eqno(27) $$ where $\bar{\varepsilon}=(\varepsilon_{1},...,\varepsilon_{m}),$ $\varepsilon_{j}=\tau_{j}\beta_{j}',$ \,\, $j=1,...,m.$ Let us prove item 2). If $\theta_{j} \leqslant \tau_{j}<+\infty,$ $j=1,..., m,$ then according to Jensen's inequality (see \cite[Ch. 3, Sec. 3]{6}) we obtain $$ \sigma_{1}(f,n)\leqslant \|f\|_{S_{X(\bar{\varphi}), \bar{\theta}}^{\bar r}B} \sup_{\bar{s}\in Y^{m}(\bar{\gamma},n)} \prod_{j=1}^{m}\mu_{j}(s_{j})2^{-s_{j}r_{j}}. \eqno(28) $$ Inequalities (26)--(28) imply the statements of items 1) and 2) of Theorem 5.
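For illustration (under assumptions made only for this example), comparison of (25) with Theorem 2 suggests $\mu_{j}(s_{j})=\frac{\psi_{j}(2^{-s_{j}})}{\varphi_{j}(2^{-s_{j}})}$. Then, in the model case $\varphi_{j}(t)=t^{1/p_{j}}$, $\psi_{j}(t)=t^{1/q_{j}}$, $1<p_{j}<q_{j}<+\infty$, with $\bar{\gamma}=(1,\ldots,1)$ and $r_{j}>\frac{1}{p_{j}}-\frac{1}{q_{j}}$, item 2) yields $$ E_{n}^{\bar \gamma}(S_{X(\bar{\varphi}), \bar\theta}^{\bar r}B)_{\bar{\psi},\bar{\tau}}\leqslant C\sup\Bigl\{2^{-\sum_{j=1}^{m}s_{j}\left(r_{j}-\frac{1}{p_{j}}+\frac{1}{q_{j}}\right)}: \ \bar{s}\in\mathbb{Z}_{+}^{m}, \ \sum_{j=1}^{m}s_{j}\geqslant n\Bigr\}= C\,2^{-n\min\limits_{1\leqslant j\leqslant m}\left(r_{j}-\frac{1}{p_{j}}+\frac{1}{q_{j}}\right)}, $$ since the supremum is attained by concentrating $\sum_{j}s_{j}=n$ on the coordinate with the smallest exponent; this is the classical rate of the power-weight case.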
\begin{rem} In the case $\varphi_{j}(t)=t^{1/p}$, $p_{j}=\tau_{j}^{(1)}=p $ and $ \psi_{j}(t)=t^{1/q}$, $q_{j}=\tau_{j}^{(2)}=q$, $1\leqslant \theta_{j}=\theta\leqslant \infty$ for $j = 1, ..., m$, Theorem 5 implies the previously known results of Ya.S. Bugrov, E.M. Galeev, V.N. Temlyakov and A.S. Romanyuk (see, for example, the bibliography in \cite{11}, \cite{12}, \cite{22}). Further, from Theorem 5 with $\varphi_{j}(t)=t^{1/p_{j}}$, $\psi_{j}(t)=t^{1/q_{j}}$, $1< p_{j}, q_{j}< \infty$, $ j = 1, ..., m, $ and $\gamma_{j}^{'} =\gamma_{j}=1$ for $j = 1, ..., \nu $ and $\gamma_{j}^{'} < \gamma_{j}$ for $j=\nu+1,...,m$, Theorem 2 in \cite{13} follows (see also \cite{14}), and for $1\leqslant \gamma_{j}^{'} \leqslant \gamma_{j}$, $ j = 1, ..., m $, Theorem 1 in \cite{16} follows (see also \cite{18}). \end{rem} \begin{rem} In the case $X(\bar\varphi)=L_{\bar{\varphi}, \bar{\tau}}^{*}(\mathbb{T}^{m})$ --- the generalized Lorentz space, Theorem 4 and Theorem 5 were proved in \cite{25}. \end{rem} This work was supported by a grant of the Ministry of Education and Science of the Republic of Kazakhstan (Project AP 08855579).
\section{Introduction} All of the long-baseline pulsar timing arrays (PTAs)---namely the North American Nanohertz Observatory for Gravitational Waves \citep[NANOGrav,][]{NANOGrav}, the European Pulsar Timing Array \citep[EPTA,][]{EPTA} and the Parkes Pulsar Timing Array \citep[PPTA,][]{PPTA}---have now reported evidence for the presence of a spectrally-common process in their latest datasets \citep{NG12p5_detection, epta_dr2_gwb, ppta_dr2_gwb}. However, the evidence for Hellings \& Downs (HD) cross correlations \citep{HD}, which is considered the definitive signature of a stochastic gravitational wave background (GWB), was not significant in any of these datasets. A spectrally-common process was also reported by the International Pulsar Timing Array \citep[IPTA,][]{IPTA} consortium in their second data release \citep[DR2,][]{ipta_dr2_dataset}, which used older versions of datasets from the aforementioned PTAs. However this dataset also lacked definitive evidence for the HD signature \citep{ipta_dr2_gwb}. Despite this, the detection of such a spectrally-common process is considered as the first step towards the eventual detection of a GWB \citep{romano_ac_v_cc}. Based on simulations of the NANOGrav 12.5 yr dataset \citep{ng12p5_timing}, and if the signal observed in the 12.5 yr dataset is an astrophysical GWB signal, NANOGrav is expected to have sufficient evidence to report the detection of HD-consistent correlations within the next few years \citep{astro4cast}. Once this signal is confirmed to be a GWB signal in future PTA datasets, the onus will be on characterizing the source of the GWB. An important part of this analysis will be measuring the spatial distribution of power in the GWB. For example, if the source of the GWB is a population of inspiraling supermassive black hole binaries \citep[SMBHBs,][]{Rajagopal1995, Jaffe2003, Sesana2004, BurkeSpolaor2019}, then their spatial distribution might track that of the local matter distribution. 
In particular, nearby galaxy clusters that host an overabundance of SMBHBs may show up as a hotspot of GW emission on the sky. For example, the Virgo cluster \citep{virgo} has an angular diameter of $\sim$10$^{\circ}$, and could show up as a hotspot on the GW sky at a multipole of $l_{\rm Virgo} = 180^{\circ} / \theta \approx 18$. On the other hand, a single SMBHB that is louder than the GWB will show up as a point source anisotropy, with multiple such single sources producing a ``popcorn''- or pixel-style spatial distribution of GWB power. However, if the GWB is produced by a cosmological source, such as cosmic strings \citep[e.g.,][]{cosmic_string_spectrum} or primordial GWs \citep[e.g.,][]{primordial_gw_spectrum}, then the GWB power distribution on the sky may not display the same anisotropies as that from a SMBHB-produced GWB. Thus, the anisotropy of the GWB, or the lack of it, allows us to make inferences about the source of the GWB. Multiple techniques have been developed to probe the anisotropy of a GWB with PTAs \citep[e.g.,][]{cornish_eigenmaps, ming_pta_anis, taylor_pta_anis, bumpy_bkgrnd}. While these methods differ in their choice of basis for modeling anisotropy, they all take the pulsar times-of-arrival (TOAs) as their initial data, and employ Bayesian techniques to constrain parameters that describe anisotropy-induced deviations away from the HD signature. Constraints on anisotropy were first presented by the EPTA as part of the analysis of their first data release \citep{epta_anisotropy}, where they showed that the strain amplitude in $l > 0$ spherical-harmonic multipoles is less than $40\%$ of the monopole value ($l = 0$, i.e., isotropy). As PTA datasets grow longer in timespan, add more pulsars, and become denser with higher cadence observations, the analysis time for Bayesian methods based on TOAs is going to increase dramatically.
Additionally, Bayesian model selection of anisotropy is predicated on having an appropriate hypothesis of the anisotropy, e.g., a power map built from galaxy catalogs, statistically populated with inspiraling SMBHBs. In other words, Bayesian model selection always requires two models to compare: a null model and a signal model. By contrast, frequentist techniques allow one to reject a null hypothesis (in this case, isotropy) if the data are in sufficient tension with it. Importantly, rejecting a null hypothesis at a probability given by some $p$-value is not the same as accepting a signal hypothesis with a certainty of $1 - p$. However, significant tension with the assumption of isotropy is an important indicator of beyond-HD signatures in the cross-correlation data. To overcome these challenges, we develop a frequentist framework that employs the cross-correlations between pulsar timing residuals across a PTA as sufficient data with which to search for anisotropy. These cross correlations are measured as part of the standard GWB detection pipelines \citep[e.g.,][]{NG12p5_detection, epta_dr2_gwb, ppta_dr2_gwb}, using established optimal two-point correlation techniques \citep{allen_romano_OS, anholm_OS, NG5yr_OS, noise_marg_os_vigeland}. Thus our framework can be easily incorporated into ongoing analysis campaigns. The data volume in our framework is also significantly lower than analyses starting at the TOA-data level, enabling rapid estimation of anisotropy. Finally, as mentioned above, the frequentist framework allows us to infer (although not necessarily claim detection of) anisotropy via rejection of the null hypothesis of isotropy, thereby simplifying the process of constraining anisotropy. Other frequentist techniques for searching for anisotropy with PTAs have been proposed.
These include, for example, the Fisher matrix formalism developed in \citet{ali-hamoud_1} and \citet{ali-hamoud-2}, where they employ ``principal maps'', which are the eigenmaps of the Fisher matrix, to search for anisotropy. \citet{hotinli}, on the other hand, decompose the timing residual power spectrum onto bipolar spherical harmonics to search for anisotropy using the correlations between pulsars in a PTA. These frequentist techniques also focus on the detection of anisotropy through rejection of the null hypothesis of isotropy. Cross correlations between detector baselines are also commonly used when searching for anisotropic GWBs in the LIGO \citep[e.g.,][]{thrane_ligo_anis, ligo_O3_anis_search} and LISA bands \citep[e.g.,][]{banagiri_blip}. We use our framework on simulations of idealized-PTA and near-future NANOGrav data, presenting, for the first time, projections on the sensitivity of NANOGrav and other PTAs to GWB anisotropy in the mid-to-late 2020s. As shown in \citet{astro4cast}, if the common-spectrum signal observed in the NANOGrav 12.5 yr dataset is a GWB, then NANOGrav can be expected to have sufficient statistical evidence to claim a detection of HD correlations within the next few years. By employing the same assumptions and simulation pipeline as \citet{astro4cast} to generate our simulations, we forecast the level of anisotropy that would result in a statistically significant rejection of the null hypothesis of isotropy by the NANOGrav data. The paper is organized as follows: in Section~\ref{sec:methods}, we describe the measurement of cross-correlations and their uncertainties, along with the maximum-likelihood framework and detection statistics that are used to constrain anisotropy, while Section~\ref{sec:connection} connects the cross correlation uncertainty to the noise properties, cadence, and timing baseline of the PTA.
Section~\ref{sec:sim_ideal_pta} presents scaling relations for anisotropy decision thresholds of an ``ideal PTA'' as a function of cross-correlation uncertainty and number of pulsars in a PTA, while Section~\ref{sec:sim_real_pta} presents them for a realistic PTA that is generated using the NANOGrav 12.5 yr dataset. Finally, we present a discussion of results and prospects for the future in Section~\ref{sec:discuss}. In Appendices \ref{appendix:real_os} and \ref{appendix:unc_scaling} we present derivations for computing the cross correlations in real PTA datasets, as well as scaling relations for the cross correlation uncertainty with respect to PTA specifications like timing baseline, white noise, and observation cadence. \section{Methods} \label{sec:methods} \subsection{The optimal cross correlation statistic and overlap reduction function} \label{subsec:os_orf} A GWB can be uniquely identified through the cross correlations it induces between the times-of-arrival (TOAs) of pulsars in a PTA \citep{HD, tiburzi_cc}.
The timing cross correlations between pulsar pairs, $\rho_{ab}$, and their uncertainties, $\sigma_{ab}$, can be written as \citep{NG5yr_OS, optimal_statistic_chamberlin, siemens_scaling_laws, noise_marg_os_vigeland} % \begin{align} \label{eq:cross_corr} \displaystyle \rho_{ab} &= \frac{\delta\textbf{t}^T_{a} \textbf{P}^{-1}_{a} \hat{\textbf{S}}_{ab} \textbf{P}^{-1}_{b} \delta\textbf{t}_{b}}{\textrm{tr}\left[ \textbf{P}^{-1}_{a} \hat{\textbf{S}}_{ab} \textbf{P}^{-1}_{b} \hat{\textbf{S}}_{ba} \right]}, \nonumber\\ \displaystyle \sigma_{ab} &= \left( \textrm{tr}\left[ \textbf{P}^{-1}_{a} \hat{\textbf{S}}_{ab} \textbf{P}^{-1}_{b} \hat{\textbf{S}}_{ba} \right] \right)^{-1/2}, \end{align} % where $\delta\mathbf{t}_a$ is a vector of timing residuals for pulsar-$a$, $\textbf{P}_a = \langle \delta \mathbf{t}_a \delta \mathbf{t}_a^T\rangle$ is the measured autocovariance matrix of pulsar-$a$, and $\hat{\textbf{S}}_{ab}$ is the template scaled-covariance matrix between pulsar-$a$ and pulsar-$b$. This scaled-covariance matrix is a template for the GWB spectral shape only, and is independent of the GWB's amplitude and the cross-correlation signature that it induces. It is related to the full covariance matrix by $\textbf{S}_{ab} = \langle \delta \mathbf{t}_a \delta \mathbf{t}_b^T\rangle = A_{\rm gwb}^2\chi_{ab}\hat{\textbf{S}}_{ab}$, where $A_{\rm gwb}$ is the GWB amplitude corresponding to a given strain-spectrum template, and $\chi_{ab}$ is the GWB-induced cross-correlation value for this pair of pulsars, e.g., the Hellings \& Downs factor in the case of an isotropic GWB. The cross-correlation statistic accounts for fitting and marginalization over the timing ephemeris of each pulsar, as well as its intrinsic white and red noise characteristics, where a power-law spectrum is usually assumed for the intrinsic red noise.
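As an illustration, the estimator in \autoref{eq:cross_corr} can be sketched with synthetic ingredients. The toy kernel, matrix sizes, and noise levels below are purely illustrative and stand in for the template and autocovariance matrices produced by a real timing analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                      # TOAs per pulsar (illustrative)
t = np.linspace(0.0, 10.0, n)

# Toy template scaled-covariance for the GWB spectral shape: a smooth
# (squared-exponential) kernel standing in for the red GWB process.
S_hat = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)

# Per-pulsar autocovariances: GWB template plus white measurement noise.
P_a = S_hat + 0.5 * np.eye(n)
P_b = S_hat + 0.8 * np.eye(n)

# Synthetic residuals drawn from those autocovariances.
dt_a = rng.multivariate_normal(np.zeros(n), P_a)
dt_b = rng.multivariate_normal(np.zeros(n), P_b)

Pinv_a = np.linalg.inv(P_a)
Pinv_b = np.linalg.inv(P_b)

# rho_ab is a matched-filter numerator over a normalizing trace, and
# sigma_ab is the inverse square root of that trace, per Eq. (cross_corr).
norm = np.trace(Pinv_a @ S_hat @ Pinv_b @ S_hat.T)
rho_ab = (dt_a @ Pinv_a @ S_hat @ Pinv_b @ dt_b) / norm
sigma_ab = norm ** -0.5
print(rho_ab, sigma_ab)
```

In a real pipeline the matrices $\textbf{P}_a$ and $\hat{\textbf{S}}_{ab}$ come from the per-pulsar noise analyses described in \autoref{appendix:real_os}; only the algebra of the estimator is shown here.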
Implementations of the statistic also usually assume a power-law strain spectrum template for the GWB following $f^{-2/3}$ \citep{phinney}, filtering this template across the pairwise-correlated timing residuals in order to extract an optimal measurement of the GWB amplitude \citep{optimal_statistic_chamberlin,noise_marg_os_vigeland}. However, we note that this statistic is flexible enough to allow for different parametrized spectral templates. See \autoref{appendix:real_os} for details of how this is implemented in real PTA searches for the GWB, including determining the noise weighting for each pulsar, setting the GWB's spectral template, and how the parameters of the pulsar timing ephemeris are marginalized over. In this paper we simulate and analyze data at the level of $\{\rho_{ab}, \sigma_{ab} \}$, then connect our results later to the underlying geometry and noise characteristics of the PTA (see \autoref{appendix:unc_scaling}). A PTA with $N_{\rm psr}$ pulsars has $N_{\rm cc} = N_{\rm psr} (N_{\rm psr} - 1) / 2$ distinct cross-correlation values. The angular dependence of these empirically measured cross-correlations can be modelled by the detector overlap reduction function (ORF)\footnote{We note that the term ORF in ground- and space-based literature usually involves a frequency dependence. However, this frequency dependence factors out in the PTA regime, such that we use the term ORF to denote only the angular dependence of the pairwise cross-correlated data.} \citep{orf_citation, ming_pta_anis, taylor_pta_anis, Gair_pta_cmb_anis, bumpy_bkgrnd} such that, for an unpolarized GWB, % \begin{align} \label{eq:orf} \Gamma_{ab} \propto \int_{S^2} d^2\hat\Omega \,\,P(\hat\Omega) &\left[ \mathcal{F}^+(\hat{p}_a,\hat\Omega)\mathcal{F}^+(\hat{p}_b,\hat\Omega) \right. \nonumber\\ &\left.
+\, \mathcal{F}^\times(\hat{p}_a,\hat\Omega)\mathcal{F}^\times(\hat{p}_b,\hat\Omega) \right], \end{align} % where $P(\hat\Omega)$ is the angular power of the GWB in direction $\hat\Omega$, normalized such that $\int_{S^2}d^2\hat\Omega \,\,P(\hat\Omega) = 1$, and $\mathcal{F}^A(\hat{p},\hat\Omega)$ is the antenna response pattern of a pulsar in unit-vector direction $\hat{p}$ to each GW polarization $A\in[+,\times]$: % \begin{equation} \displaystyle \mathcal{F}^A (\hat{p}, \hat{\Omega}) = \frac{1}{2} \frac{\hat{p}^i \hat{p}^j}{1 - \hat{\Omega} \cdot \hat{p}} e_{ij}^A (\hat{\Omega}), \label{eq:antenna_resp_def} \end{equation} % where $e_{ij}^A(\hat{\Omega})$ are polarization basis tensors, and $(i, j)$ are spatial indices. We can recast the sky integral in \autoref{eq:orf} as a sum over equal-area pixels \citep{Gair_pta_cmb_anis, bumpy_bkgrnd}. Assuming an unpolarized GWB, and ignoring random pulsar term contributions to the cross correlations, this can be written as % \begin{equation} \displaystyle \Gamma_{ab} \propto \sum_{k} P_{k} \left[\mathcal{F}^+_{a,k}\mathcal{F}^+_{b,k} + \mathcal{F}^\times_{a,k}\mathcal{F}^\times_{b,k}\right], \label{eq:orf_full} \end{equation} % where $k$ denotes the pixel index. Or, in general matrix form, % \begin{equation} \displaystyle \mathbf{\Gamma} = \mathbf{R} \textbf{P}, \label{eq:orf_matrix} \end{equation} % where $\mathbf{\Gamma}$ is an $N_\mathrm{cc}$ vector of ORF values for all distinct pulsar pairs, $\mathbf{P}$ is an $N_\mathrm{pix}$ vector of GWB power values at different pixel locations, and $\mathbf{R}$ is a $(N_\mathrm{cc}\times N_\mathrm{pix})$ overlap response matrix given by % \begin{equation} R_{ab,k} = \frac{3}{2N_{\rm pix}} \left[\mathcal{F}^+_{a,k}\mathcal{F}^+_{b,k} + \mathcal{F}^\times_{a,k}\mathcal{F}^\times_{b,k}\right], \end{equation} % where the normalization is chosen so that the ORF matches the Hellings \& Downs values in the case of an isotropic GWB with $P_k=1\,\,\forall k$.
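This normalization can be checked numerically. The sketch below builds one row of $\mathbf{R}$ from the antenna patterns of \autoref{eq:antenna_resp_def}, using a Fibonacci lattice as an illustrative stand-in for a HEALPix tessellation and an arbitrary pulsar pair, and verifies that for $P_k = 1\,\,\forall k$ the pixel sum reproduces the Hellings \& Downs value:

```python
import numpy as np

def fibonacci_sky(n_pix):
    """Roughly equal-area pixel centers on the sphere (stand-in for HEALPix)."""
    i = np.arange(n_pix) + 0.5
    theta = np.arccos(1.0 - 2.0 * i / n_pix)
    phi = np.pi * (1.0 + 5.0 ** 0.5) * i
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def antenna(p, omega):
    """F^{+,x} of Eq. (antenna_resp_def) for GW propagation directions omega."""
    z = np.array([0.0, 0.0, 1.0])
    m = np.cross(omega, z)                       # transverse basis (m, n)
    m /= np.linalg.norm(m, axis=-1, keepdims=True)
    n = np.cross(omega, m)
    pm, pn, po = p @ m.T, p @ n.T, p @ omega.T
    Fp = 0.5 * (pm ** 2 - pn ** 2) / (1.0 - po)  # e+ = mm - nn
    Fx = pm * pn / (1.0 - po)                    # ex = mn + nm
    return Fp, Fx

def hellings_downs(cos_th):
    x = 0.5 * (1.0 - cos_th)
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

pix = fibonacci_sky(20000)
p_a = np.array([1.0, 0.0, 0.0])
p_b = np.array([0.0, 1.0, 0.0])                   # pair separated by 90 degrees

Fpa, Fxa = antenna(p_a, pix)
Fpb, Fxb = antenna(p_b, pix)
R_row = 1.5 / len(pix) * (Fpa * Fpb + Fxa * Fxb)  # one row of R, all pixels
print(R_row.sum(), hellings_downs(0.0))           # both near -0.145
```

The quadrature error of the lattice is at the sub-percent level here; a production analysis would use the HEALPix pixelization and all pulsar pairs.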
Thus, in our notation the expected value of $\rho_{ab}$ is such that $\langle\rho_{ab}\rangle = A^2\Gamma_{ab}$. However, in the remainder of this paper we will deal with amplitude-scaled cross-correlation values, $\rho_{ab}/A^2$, where we assume that in a real search an initial fit for $A^2$ will be performed on $\{\rho_{ab}\}$ under the assumption of isotropy. These amplitude-scaled cross-correlation values can then be directly compared with the ORF model to probe anisotropy. We suppress the explicit amplitude scaling in the remainder of our notation, such that $\rho_{ab}$ henceforth implies amplitude-scaled cross-correlation values. Assuming stationary Gaussian statistics for the cross-correlation measurements, the likelihood function for the cross correlations can be written as % \begin{equation} \displaystyle p(\boldsymbol{\rho} | \mathbf{P}) = \frac{\textrm{exp} \left[ -\frac{1}{2} (\boldsymbol{\rho} - \mathbf{R} \mathbf{P})^T \, \mathbf{\Sigma}^{-1} \, (\boldsymbol{\rho} - \mathbf{R} \mathbf{P}) \right]}{\sqrt{\mathrm{det}(2\pi\mathbf{\Sigma})}}, \label{eq:anis_lkl} \end{equation} % where $\mathbf{\Sigma}$ is the diagonal covariance matrix of cross-correlation uncertainties. We now discuss several different bases on which to decompose the angular power vector, $\mathbf{P}$. \subsubsection{Pixel basis} The GWB power can be parametrized using HEALPix sky pixelization \citep{healpix}, where each equal-area pixel is independent of the pixels surrounding it % \begin{equation} \displaystyle P(\hat{\Omega}) = \sum_{\hat{\Omega}^{\prime}} P_{\hat{\Omega}^{\prime}} \delta^2(\hat{\Omega}, \hat{\Omega}^{\prime}). \label{eq:pix_basis} \end{equation} % The number of pixels is set by $N_{\rm pix} = 12 N_{\rm side}^2$, where $N_{\rm side}$ defines the tessellation of the HEALPix sky \citep{healpix}. The rule of thumb for PTAs is to have $N_{\rm pix} \leq N_{\rm cc}$ \citep{cornish_eigenmaps, romano_cornish_review}.
This basis is well suited for detection of pixel-scale anisotropy, which can arise from individual sources of GWs, and where an isotropic GWB would be represented by equal power (within uncertainties) in each pixel on the sky. \subsubsection{Spherical harmonic basis} Alternatively, the GWB power can be decomposed onto the spherical harmonic basis \citep[e.g.,][]{thrane_ligo_anis}, where the lowest order multipole ($l = 0$) defines an isotropic background, while higher multipoles add anisotropy. The GWB power in this basis can be written as \begin{equation} \displaystyle P(\hat{\Omega}) = \sum_{l = 0}^{\infty} \sum_{m = -l}^{l} c_{lm} Y_{lm}(\hat{\Omega}), \label{eq:sph_basis} \end{equation} where $Y_{lm}$ are the real valued spherical harmonics and $c_{lm}$ are the spherical harmonic coefficients of $P(\hat{\Omega})$. In this basis, the ORF anisotropy coefficient for the $l,m$ components between pulsars $a,b$ can be written as \citep{bumpy_bkgrnd}, % \begin{equation} \displaystyle \Gamma_{(lm)(ab)} = \kappa \sum_{k} c_{lm} Y_{lm,k} \left[ \mathcal{F}_{a,k}^{+} \mathcal{F}_{b,k}^{+} + \mathcal{F}_{a,k}^{\times} \mathcal{F}_{b,k}^{\times} \right], \label{eq:orf_sph_basis} \end{equation} % where $k$ represents the pixel index corresponding to $\hat{\Omega}$ and the constant $\kappa$ accounts for the pixel area in the HEALPix sky tessellation. This basis representation, in contrast to the pixel basis, is better suited for modeling large-scale anisotropies in the GWB. Based on diffraction-limit arguments, the highest order mode, $l_{\rm max}$, that can be used for modeling the anisotropy depends on the number of pulsars in the PTA, $l_{\rm max} \sim \sqrt{N_{\rm psr}}$ \citep{pen_boyle_pta_resolution, romano_cornish_review}.
However, \citet{higher_lmax_limit} have shown that while the diffraction limit is attuned to maximizing the significance of the detection of anisotropy, values of $l > l_{\rm max}$ can be included in spherical harmonic decompositions to improve the localization of any anisotropy after its detection. The results can be expressed in terms of $C_l$, which is the squared angular power in each mode $l$ % \begin{equation} \displaystyle C_l = \frac{1}{2l + 1} \sum_{m = -l}^{l} |c_{lm}|^2. \label{eq:angular_power} \end{equation} % Physically, $C_l$ represents the amplitude of statistical fluctuations in the angular power of the GWB at scales corresponding to $\theta = 180^{\circ} / l$. An isotropic background in this basis will contain power only in the $l = 0$ multipole, thus filling the entire sky, while an anisotropic background will have power in the higher $l$ multipoles. In turn, the variance of the angular power distribution can be written as \citep{alex_jenkins_disst} % \begin{equation} \displaystyle {\rm Var}[P(\hat{\Omega})] \approx \int \mathrm{d}(\ln l)\, \frac{l (l + 1)}{2\pi} C_l. \end{equation} % The quantity $l(l + 1)C_l / 2 \pi$ thus represents the variance per logarithmic multipole bin, and is what we use to present our results in this work. As pointed out in \citet{alex_jenkins_disst}, reporting $C_l$ is analogous to reporting the GWB strain power spectral density, with $C_l = $ constant representing a white angular power spectrum, while reporting $l(l + 1)C_l / 2 \pi$ is analogous to reporting the GWB energy density spectrum $\Omega_{\rm gw}(f)$, with $l(l + 1)C_l / 2 \pi = $ constant representing a scale-invariant angular power spectrum.
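The map from spherical-harmonic coefficients to $C_l$, and on to the variance per logarithmic multipole bin, is a direct transcription of \autoref{eq:angular_power}. The coefficients below are arbitrary toy values, with the monopole normalized so that $C_0 = 4\pi$:

```python
import numpy as np

def angular_power(clm):
    """C_l from real spherical-harmonic coefficients c_lm (Eq. angular_power).

    clm: dict mapping (l, m) -> coefficient.
    """
    lmax = max(l for l, _ in clm)
    Cl = np.zeros(lmax + 1)
    for (l, m), c in clm.items():
        Cl[l] += abs(c) ** 2 / (2 * l + 1)
    return Cl

# Toy coefficients: monopole normalized to C_0 = 4*pi, plus a weak dipole.
clm = {(0, 0): np.sqrt(4.0 * np.pi), (1, -1): 0.1, (1, 0): 0.2, (1, 1): 0.1}
Cl = angular_power(clm)
ell = np.arange(Cl.size)
var_per_log_l = ell * (ell + 1) * Cl / (2.0 * np.pi)   # variance per log-l bin
print(Cl, var_per_log_l)
```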
For these bases, since the problem is linear in the regression coefficients, the maximum likelihood solution can be derived analytically \citep{thrane_ligo_anis, romano_cornish_review, ivezic_book} % \begin{equation} \displaystyle \hat{\mathbf{P}} = \mathbf{M}^{-1} \mathbf{X}, \label{eq:max_lkl_power} \end{equation} % where $\mathbf{M} \equiv \mathbf{R}^{T} \mathbf{\Sigma}^{-1} \mathbf{R}$ is the Fisher information matrix, with the uncertainties on the $c_{lm}$ coefficients given by the diagonal elements of $\mathbf{M}^{-1}$, and $\mathbf{X} \equiv \mathbf{R}^{T} \mathbf{\Sigma}^{-1} \boldsymbol{\rho}$ is the ``dirty map'', an inverse-noise weighted representation of the total power on the sky as ``seen'' through the response of the pulsars in the array. \subsubsection{Square-root spherical harmonic basis} \label{subsubsec:sqrt_basis} A drawback of both the pixel and spherical harmonic bases is that they allow the GWB power to assume negative values, which is an unphysical realization of the GWB. While these tendencies can be curbed through the use of regularization techniques or rejection priors \citep{taylor_pta_anis}, this results in the addition of a hyperparameter to the analysis that requires further optimization or non-analytic priors \citep{taylor_pta_anis, bumpy_bkgrnd}. A more elegant solution is to use a basis that intrinsically conditions the GWB power to be positive over the whole sky. Such a basis can be generated by modeling the square-root of the GWB power, $P(\hat{\Omega})^{1/2}$, rather than modeling the power itself. This technique was introduced in a Bayesian context in \citet{cg_anis_ligo} for LIGO, \citet{banagiri_blip} for LISA, and in \citet{bumpy_bkgrnd} for PTAs.
Decomposing the square-root power onto spherical harmonics, the GWB power can be written as % \begin{equation} \displaystyle P(\hat{\Omega}) = [P(\hat{\Omega})^{1/2}]^2 = \left[ \sum_{L = 0}^{\infty} \sum_{M = -L}^{L} b_{LM} Y_{LM} \right]^2, \label{eq:sqrt_basis} \end{equation} % where $Y_{LM}$ are real valued spherical harmonics and $b_{LM}$ are the search coefficients. \citet{banagiri_blip} showed that the search coefficients in this basis can be related to the spherical harmonic coefficients via % \begin{equation} \displaystyle c_{lm} = \sum_{LM} \sum_{L^{\prime} M^{\prime}} b_{LM} b_{L^{\prime} M^{\prime}} \beta_{lm}^{LM, L^{\prime} M^{\prime}}, \label{eq:sqrt_to_sph} \end{equation} % where $\beta_{lm}^{LM, L^{\prime} M^{\prime}}$ is defined as % \begin{equation} \displaystyle \beta_{lm}^{LM, L^{\prime} M^{\prime}} = \sqrt{ \frac{(2L + 1) (2L^{\prime} + 1)}{4 \pi (2l + 1)}} C^{lm}_{LM, L^{\prime} M^{\prime}} C^{l0}_{L0, L^{\prime} 0}, \label{eq:cg_coeff} \end{equation} % with $C^{lm}_{LM, L^{\prime} M^{\prime}}$ being the Clebsch-Gordan coefficients. \citet{bumpy_bkgrnd} showed that even though a full reconstruction of $c_{lm}$ requires an infinite sum over the $b_{LM}$ coefficients, restricting the maximum mode to $L \leq L_{\rm max} = l_{\rm max}$ is sufficient to produce an accurate reconstruction of the GWB power. Since the problem in this basis is non-linear in the regression coefficients, the likelihood in \autoref{eq:anis_lkl} cannot be maximized analytically. The maximum likelihood solution thus has to be calculated through numerical optimization techniques. In this work, we use the {\sc lmfit} \citep{lmfit} Python package, where we use the Levenberg-Marquardt (LM) optimization algorithm \citep{levenberg1944method, marquardt1963algorithm} to determine the maximum likelihood solution. The goodness-of-fit is assessed through the reduced chi-squared statistic, $\chi_{\rm dof}^2$, reported by {\sc lmfit}.
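Two properties of this basis can be verified numerically: the power map is non-negative by construction, and for $l = 0$ the sum in \autoref{eq:sqrt_to_sph} collapses, by orthonormality of the $Y_{LM}$, to $c_{00} = \sum_{LM} b_{LM}^2 / \sqrt{4\pi}$. The sketch below checks both using closed-form real harmonics up to $L = 1$ and arbitrary toy coefficients:

```python
import numpy as np

# Closed-form real spherical harmonics up to L = 1.
def Y(L, M, x, y, z):
    c = np.sqrt(3.0 / (4.0 * np.pi))
    return {(0, 0): np.full_like(z, 0.5 / np.sqrt(np.pi)),
            (1, -1): c * y, (1, 0): c * z, (1, 1): c * x}[(L, M)]

# Quadrature grid on the sphere (phi periodic, endpoint excluded).
th = np.linspace(0.0, np.pi, 400)
ph = np.linspace(0.0, 2.0 * np.pi, 800, endpoint=False)
TH, PH = np.meshgrid(th, ph, indexing="ij")
x, y, z = np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH), np.cos(TH)

# Arbitrary toy search coefficients b_LM; the power map is a square,
# hence non-negative everywhere by construction.
b = {(0, 0): 1.0, (1, -1): -0.7, (1, 0): 0.4, (1, 1): 0.9}
sqrtP = sum(bv * Y(L, M, x, y, z) for (L, M), bv in b.items())
P = sqrtP ** 2
print(P.min())   # non-negative

# For l = 0, Eq. (sqrt_to_sph) collapses by orthonormality of the Y_LM
# to c_00 = sum_LM b_LM^2 / sqrt(4 pi); check by direct integration.
dth, dph = th[1] - th[0], ph[1] - ph[0]
c00_num = np.sum(P * Y(0, 0, x, y, z) * np.sin(TH)) * dth * dph
c00_ana = sum(bv ** 2 for bv in b.values()) / np.sqrt(4.0 * np.pi)
print(c00_num, c00_ana)
```

Higher-$l$ components of \autoref{eq:sqrt_to_sph} require the full Clebsch-Gordan machinery and are not reproduced here.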
\autoref{fig:realistic_example} shows an example of recovering an anisotropic background using this basis. To produce the simulated cross-correlation data, we inject a GW power map corresponding to the synthesized population of SMBHBs from \citet{bumpy_bkgrnd} into an ``ideal PTA'' consisting of 100 pulsars, with a constant cross-correlation measurement uncertainty of $0.01$. Note that this is an uncertainty on cross correlations that can assume any value between $-0.2$ and $0.5$, and thus represents an extremely accurate measurement of the cross correlations between pulsars in the PTA (see \S\ref{subsec:define_ideal_pta} for our definition of an ``ideal PTA''). We see that this basis is capable of reproducing the injected angular power spectrum, which translates into an accurate identification of the anisotropy in the GWB. \subsection{Detection statistics} In addition to estimating the values of the anisotropy search coefficients (i.e., pixels or spherical harmonic coefficients), we also need to quantify the evidence for the presence of anisotropy in the cross correlation data. As described earlier, when searching for anisotropy, we seek to reject the null hypothesis of isotropy. In this section, we present two frequentist detection statistics that can quantify the evidence for anisotropy by measuring the (in)compatibility of the data with the null hypothesis. \subsubsection{Signal-to-noise ratio} \label{subsubsec:sn} Confidence in the detection of anisotropy can be quantified through the signal-to-noise (S/N) ratio, which is computed from the ratio of the maxima of the likelihood functions of any two models \begin{equation} \displaystyle \textrm{S/N} = \sqrt{2 \textrm{ln}[\Lambda_{\rm ML}]}, \label{eq:sn_def} \end{equation} % where $\Lambda_{\rm ML} = p(\boldsymbol{\rho} | \mathbf{P}_{\mathrm{ML},1})\, /\, p(\boldsymbol{\rho} | \mathbf{P}_{\mathrm{ML},2})$ is the ratio of the maxima of the likelihood functions for the two models that are under consideration.
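A minimal sketch of \autoref{eq:sn_def} for the Gaussian likelihood of \autoref{eq:anis_lkl} with diagonal covariance; the ``signal'' vector here is an arbitrary stand-in for $A^2 \Gamma_{ab}$ rather than a physical ORF:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lkl(rho, model, sigma):
    """Gaussian log-likelihood of Eq. (anis_lkl) with diagonal covariance."""
    r = rho - model
    return (-0.5 * np.sum((r / sigma) ** 2)
            - np.sum(np.log(sigma * np.sqrt(2.0 * np.pi))))

# Toy cross correlations scattered about a known "signal" shape.
n_cc = 500
sigma = 0.1 * np.ones(n_cc)
signal = 0.3 * rng.normal(size=n_cc)   # arbitrary stand-in for A^2 * Gamma_ab
rho = signal + sigma * rng.normal(size=n_cc)

# S/N between the signal model and the spatially-uncorrelated noise model.
lnL_sig = log_lkl(rho, signal, sigma)
lnL_noise = log_lkl(rho, np.zeros(n_cc), sigma)
snr = np.sqrt(2.0 * (lnL_sig - lnL_noise))
print(snr)
```

The normalization term cancels in the likelihood ratio, so only the quadratic misfit difference matters; the same construction applies to each of the three S/N statistics defined next, with the appropriate pair of maximized models.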
When searching for anisotropy, we can define three S/N statistics which together provide a complete description of the evidence for the GWB signal present in the cross correlation data: \begin{enumerate} \item ``Total S/N'': This is defined as the ratio of the maximum likelihood values of an anisotropic model (i.e. $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}(l_{\rm max} > 0))$) to a model with only spatially-uncorrelated noise (i.e., $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}=0)$). The total S/N quantifies the evidence for the presence of any signal in the cross correlations. \item ``Isotropic S/N'': This is defined as the ratio of the maximum likelihood values of an isotropic model (i.e., $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}(l_{\rm max} = 0))$) to a model with noise only (i.e., $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}~=~0)$). Note that the isotropic S/N is equivalent to the optimal S/N statistic defined in \citet{optimal_statistic_chamberlin}. The isotropic S/N ratio quantifies how well the cross correlations are described by a purely isotropic model. \item ``Anisotropic S/N'': This is defined as the ratio of the maximum likelihood values of a model with anisotropy (i.e., $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}(l_{\rm max} > 0))$) to an isotropic model (i.e., $p(\boldsymbol{\rho} | \mathbf{P}_{\rm ML}(l_{\rm max} = 0))$). The anisotropic S/N ratio quantifies the evidence in favor of inclusion of modes $l > 0$. \end{enumerate} \subsubsection{Decision threshold} Another method for determining the significance of possible anisotropy is to assess the certainty with which the null hypothesis of isotropy can be rejected. If we can quantify the distribution of the angular power, $C_l$, under the null hypothesis, then we can also quantify how (in)consistent the measured angular power is with isotropy through a test statistic like the $p$-value{}. 
To calculate the null distribution of $C_l$ under the null hypothesis, we generate many realizations of cross-correlation data, where we assume that the measurements are Gaussian distributed around the Hellings \& Downs curve, with the spread of the distribution given by the uncertainty on the cross correlation values. We define the decision threshold, $C_l^{\rm th}$, as the value of $C_l$ corresponding to a $p$-value{} of $3\times10^{-3}$, where a measurement of angular power greater than this threshold would indicate a tension with the null hypothesis at the $\sim3\sigma$ level. \begin{figure} \centering \subfloat{\includegraphics[width = 0.4\textwidth]{"example_Cl".pdf}} \hfill \subfloat{\includegraphics[width = 0.4\textwidth]{"example_skymap".pdf}} \caption{Example recovery of an anisotropic GWB using the square root spherical harmonic basis described in \S\ref{sec:methods}. These simulations were based on an ideal PTA consisting of 100 pulsars, with a cross correlation uncertainty of 0.01 across all pulsar pairs, while the anisotropy is based on a realistic population of inspiraling SMBHBs \citep{bumpy_bkgrnd}. \textit{Top:} The true and recovered angular power spectrum, as well as the percent difference between them, where both are normalized such that the power in the $l = 0$ mode is $C_{0} = 4\pi$. \textit{Bottom:} The sky map of the GWB power corresponding to the recovered angular power spectrum. The contours represent the true distribution of the GWB power on the sky for the anisotropic GWB, while the stars represent the positions of the simulated pulsars on the sky.} \label{fig:realistic_example} \end{figure} \section{Cross-correlation uncertainties from PTA design} \label{sec:connection} The uncertainty on the cross-correlation measurements introduced in \autoref{eq:cross_corr} depends on the pulsars that are used to construct the PTA \citep{anholm_OS, optimal_statistic_chamberlin, siemens_scaling_laws}. 
As shown in \citet{siemens_scaling_laws}, the trace in \autoref{eq:cross_corr} can be written as % \begin{equation} \displaystyle \textrm{tr}\left[ \textbf{P}^{-1}_{a} \hat{\textbf{S}}_{ab} \textbf{P}^{-1}_{b} \hat{\textbf{S}}_{ba} \right] = \frac{2T}{A^4} \int_{f_l}^{f_h} df \frac{P_g^2(f)}{P_a(f) P_b(f)}, \label{eq:trace_eq} \end{equation} % where $T$ is the duration of time that a pulsar is observed, i.e., the timing baseline, $A$ is the amplitude of the GWB, $P_g(f)$ is the power spectrum of the timing residuals induced by the GWB, $P_a(f)$ and $P_b(f)$ are the intrinsic power spectra of pulsars $a$ and $b$, $f_l = 1/T$ and $f_h$ are the low and high frequency cutoffs used in the GWB analysis. Using the same analysis as presented in \citet{siemens_scaling_laws} and \citet{optimal_statistic_chamberlin}, we can show that (see Appendix~\ref{appendix:unc_scaling}) in the weak-signal regime, the cross-correlation uncertainty between a given pair of pulsars scales as % \begin{equation} \displaystyle \sigma \propto \frac{w^2 T^{-\gamma}}{c}, \label{eq:weak_sig_scaling} \end{equation} % while in the strong-signal regime % \begin{equation} \displaystyle \sigma \propto \frac{A^2}{\sqrt{cT}}, \label{eq:strong_sig_scaling} \end{equation} % where $w$ is the white noise RMS, $c=1/\Delta t$ is the observing cadence, $\gamma = 3 - 2\alpha$ \citep{NG12p5_detection} is the slope of the timing-residual power spectrum induced by the GWB, and $A$ is the amplitude of the GWB whose characteristic strain spectrum is given by $h_c(f) = A (f / f_{\rm yr})^{\alpha}$, with $\alpha = -2/3$ for a SMBHB GWB. These scaling relations imply that the uncertainty on the cross correlations can be reduced by observing pulsars for longer duration (timing baseline), or with a higher cadence. The effect of the timing baseline and cadence is strongest in the weak-signal regime while it becomes weaker as we move into the strong-signal regime. 
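As a numerical sanity check of \autoref{eq:weak_sig_scaling}, the trace integral of \autoref{eq:trace_eq} can be evaluated directly in the weak-signal limit, where $P_a$ and $P_b$ are white-noise dominated (constant in $f$) and $P_g \propto f^{-\gamma}$; all prefactors are dropped and frequency units are arbitrary:

```python
import numpy as np

alpha = -2.0 / 3.0           # SMBHB characteristic-strain slope
gamma = 3.0 - 2.0 * alpha    # = 13/3, slope of the GWB residual spectrum

def sigma_weak(T, f_h=100.0, n=4000):
    """Weak-signal sigma(T) up to constant prefactors.

    With constant P_a, P_b, the trace of Eq. (trace_eq) reduces to
    2 T * int P_g^2 df with P_g ~ f^{-gamma}, integrated on a log
    grid from f_l = 1/T (arbitrary frequency units).
    """
    u = np.linspace(np.log(1.0 / T), np.log(f_h), n)   # u = ln f
    g = np.exp((1.0 - 2.0 * gamma) * u)   # f^{-2 gamma} df = e^{(1-2g)u} du
    trace = 2.0 * T * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(u))
    return trace ** -0.5

ratio = sigma_weak(20.0) / sigma_weak(10.0)
print(ratio, 2.0 ** -gamma)   # doubling T shrinks sigma by 2^(-13/3) ~ 0.05
```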
Similarly, in the weak-signal regime, the cross-correlation uncertainty can be reduced by decreasing the intrinsic white noise by, for example, increasing the receiver bandwidth or increasing the integration time for the pulsars. The white noise does not affect the cross-correlation uncertainty in the strong-signal regime, where the GWB signal dominates over the intrinsic noise in the pulsars at all frequencies. \section{Simulations with an ideal PTA} \label{sec:sim_ideal_pta} \subsection{Defining an ideal PTA} \label{subsec:define_ideal_pta} In the framework described above, the rejection of isotropy in the GWB depends on three variables: the uncertainty on the cross correlation values, the number of pulsars in the PTA (which defines the number of cross correlation values that are measured), and the distribution of the pulsars on the sky. The first two variables primarily dictate the strength of the rejection of isotropy, while the third variable is important for the characterization of the anisotropy. In this section, we examine how the cross correlation uncertainty and the number of pulsars in the PTA affect an \textit{ideal} PTA, which we define as a PTA that has pulsars distributed uniformly on the sky, and all pulsars in the PTA having identical noise properties. The latter constraint implies that all measured cross-correlations have the same uncertainty. This is different from current, real PTAs, where each pulsar is unique and thus the uncertainties on the cross correlations between each pulsar pair are different. We examine realistic PTAs in \S\ref{sec:sim_real_pta}. \subsection{Scaling relations} \label{subsec:scaling_rel} For a given level of anisotropy, its detection significance will depend on the number of pulsars in the array, as well as on the accuracy with which the cross correlations between different pulsar pairs can be measured. 
\citet{romano_cornish_review} showed that the total S/N for linear anisotropy models can be written as \begin{equation} \displaystyle \textrm{total S/N} = \left(\mathbf{\hat{P}}^T \left[ \mathbf{R}^T \mathbf{\Sigma}^{-1} \mathbf{R} \right] \mathbf{\hat{P}}\right)^{1/2}, \label{eq:sn_scaling_unc} \end{equation} where $\mathbf{\hat{P}}$ is the maximum-likelihood estimate of the GWB power. Since $\mathbf{\Sigma}$ is a diagonal matrix of the squared cross correlation uncertainties, the total S/N $\propto \sigma^{-1}$. Similarly, the total S/N is proportional to the square root of the number of data points available for inference (assuming the cross correlation uncertainties are the same for all pairs), i.e., \begin{equation} \displaystyle \textrm{total S/N} \propto \sqrt{N_{\rm cc}} = \sqrt{\frac{N_{\rm psr} (N_{\rm psr} - 1)}{2}}, \label{eq:sn_scaling_npsr} \end{equation} where for sufficiently large $N_{\rm psr}$, the total S/N will scale linearly with the number of pulsars in the PTA. We show that both of these scaling relations for the total S/N are also satisfied when using the non-linear maximum likelihood approach described in \S\ref{subsubsec:sqrt_basis}. \autoref{fig:SN} and the right panel of \autoref{fig:sn_scalings} show that the total S/N scales inversely with the uncertainty on the cross correlations, while the left panel of \autoref{fig:sn_scalings} shows that the total S/N scales proportionally with the number of pulsars in the PTA. Since the injected signal here is an isotropic GWB, \autoref{fig:SN} shows that the total and isotropic S/N values are identically large, and decrease as the uncertainty on the cross correlations increases. The anisotropic S/N shows little evolution across uncertainties, though the better fit provided by the additional degrees of freedom in an anisotropic model prevents it from being consistent with zero for small cross-correlation uncertainties.
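Both scalings can be reproduced directly from \autoref{eq:sn_scaling_unc} with a toy response matrix; the i.i.d.\ entries of $\mathbf{R}$ below are a stand-in for the true pulsar-pair response, and the pixel count and power map are arbitrary:

```python
import numpy as np

def total_sn(n_psr, sigma, n_pix=192, seed=4):
    """Total S/N of Eq. (sn_scaling_unc) for a toy response matrix."""
    rng = np.random.default_rng(seed)
    n_cc = n_psr * (n_psr - 1) // 2
    R = rng.standard_normal((n_cc, n_pix))  # i.i.d. stand-in for the response
    P = np.ones(n_pix) / n_pix              # arbitrary fixed power map
    # With Sigma = sigma^2 * I, (P^T R^T Sigma^{-1} R P)^{1/2} = |R P| / sigma.
    return float(np.linalg.norm(R @ P) / sigma)

# Halving the cross-correlation uncertainty doubles the total S/N ...
print(total_sn(60, 0.05) / total_sn(60, 0.1))    # = 2 exactly
# ... while doubling N_psr roughly doubles it via sqrt(N_cc).
print(total_sn(100, 0.1) / total_sn(50, 0.1))    # ~ sqrt(4950 / 1225) ~ 2
```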
For large uncertainties, \autoref{fig:SN} shows that we lose the ability to detect and distinguish between isotropic and anisotropic signals in the data. \begin{figure} \centering \includegraphics[width = \columnwidth]{"SN".pdf} \caption{The evolution of the total, isotropic, and anisotropic S/N values over $10^4$ noise realizations as a function of cross-correlation uncertainty for an ideal PTA with $100$ pulsars and an isotropic GWB injection. The points represent the median, while the errorbars represent the 95\% confidence intervals on the distribution of the S/N values. The black dashed line corresponding to the scaling relation in \S\ref{subsec:scaling_rel} is shown for reference. For low cross correlation uncertainties, the total and isotropic S/N values have identically large values relative to the anisotropic S/N, implying strong evidence for an isotropic GWB in the data. These S/N values decrease as the cross-correlation uncertainty increases, implying a loss of confidence in the detection of a GWB, as well as of the ability to distinguish isotropy from anisotropy. The anisotropic S/N shows little evolution across uncertainties, though the better fit provided by the additional degrees of freedom in an anisotropic model prevents this S/N from being consistent with zero for small cross-correlation uncertainties.} \label{fig:SN} \end{figure} \begin{figure*} \centering \includegraphics[width = 1\textwidth]{"total_sn.pdf"} \caption{The evolution of the total S/N values for an ideal PTA with 100 pulsars and an isotropic injected GWB. The points represent the median values across $10^4$ noise realizations, while the errors represent 95\% confidence intervals. \textit{Left:} Evolution of the total S/N as a function of number of pulsars in the PTA for different values of cross-correlation uncertainty. The black dashed line corresponding to the scaling relation in \S\ref{subsec:scaling_rel} is shown for reference.
\textit{Right:} Evolution of the total S/N as a function of the cross-correlation uncertainty for different numbers of pulsars in the PTA. The black dashed line corresponding to the scaling relation in \S\ref{subsec:scaling_rel} is shown for reference. Together, these results show that the total S/N is larger for a PTA with small cross-correlation uncertainties and a large number of pulsars. } \label{fig:sn_scalings} \end{figure*} % % \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{"dec_thres.pdf"} \caption{The evolution of the decision threshold, $C_{l}^{th}$, for an ideal PTA with an isotropic injected GWB. The points represent the median values while the errors represent the 95\% confidence interval values across $10^4$ noise realizations. \textit{Top:} Evolution of the decision threshold per mode for different numbers of pulsars in the PTA. \textit{Bottom:} Evolution of the decision threshold for different cross-correlation uncertainties. These values were generated for an ideal PTA with 100 pulsars and a cross-correlation uncertainty of 0.1. These results show that the decision threshold is lower for PTAs that have a large number of pulsars and smaller cross-correlation uncertainties.} \label{fig:dec_thres_scalings} \end{figure} We can similarly compute scaling relations for the decision threshold as a function of the cross-correlation uncertainty and the number of pulsars in the PTA. Since this is an empirically constructed detection statistic, we do not have analytical expressions for its scaling relations, though we can derive the scaling expressions computationally, as shown in \autoref{fig:dec_thres_scalings}. As expected, as we increase the number of pulsars in the PTA, the decision threshold decreases across all multipoles. Similarly, as the uncertainty on the cross-correlation measurements decreases, so do the multipole-dependent decision thresholds, corresponding to an improved sensitivity to deviations away from isotropy.
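Because the decision threshold is constructed empirically, its computation amounts to reading off a tail quantile of noise-only realizations. A minimal toy sketch of this procedure is below; the chi-squared null distribution and its scale are purely illustrative stand-ins for the simulated isotropic-injection realizations, not quantities from our pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def decision_threshold(null_cl, p=3e-3):
    """Empirical decision threshold C_l^th: the angular-power value that
    only a fraction p of noise-only (isotropic-injection) realizations
    exceed by chance."""
    return np.quantile(null_cl, 1.0 - p)

# Hypothetical null distribution of a recovered multipole power C_l under
# isotropy (illustrative only; the real pipeline draws these from
# simulated noise realizations):
null_cl = rng.chisquare(df=3, size=200_000) * 1e-2
cl_th = decision_threshold(null_cl)
```

By construction, a measured $C_l$ above `cl_th` is inconsistent with isotropy at the $p = 3\times10^{-3}$ level; tightening the null distribution (more pulsars, smaller uncertainties) lowers the threshold.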
\subsection{Sensitivity figure of merit} \label{subsec:summ_stat} Rather than treating the number of pulsars and the cross-correlation uncertainty as separate variables, we can consider a combination that is inspired by the weighted arithmetic mean of cross-correlation measurements involved in, e.g., S/N calculations. In such calculations, operations like $\sum_{ab}(\cdots)/\sigma_{ab}^2$ are proportional to $N_\mathrm{cc} / \sigma^2 \propto (N_\mathrm{psr}/\sigma)^2$ in the limit of equal cross-correlation measurement uncertainties, or where we can characterize the distribution of uncertainties by its mean or median over pairs. We therefore define a sensitivity figure of merit (FOM), $N_{\rm psr} / \sigma$, and quantify how our detection statistics depend on it. This also allows us to quantify the trade-off between the number of pulsars in the PTA and the cross-correlation uncertainties, which, in turn, are related to the noise characteristics of the pulsars. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{"FOM".pdf} \caption{The evolution of detection statistics with the sensitivity FOM, $N_{\rm psr} / \sigma$, defined in \S\ref{subsec:summ_stat} for an ideal PTA with an isotropic injected GWB. The points represent the medians, while the errorbars represent the 95\% confidence intervals across $10^4$ noise realizations. \textit{Top:} Evolution of the total S/N as a function of the sensitivity FOM. This scaling relation implies that PTAs with either a large number of pulsars or small cross-correlation uncertainties (or both) will return a larger total S/N value than a PTA with fewer pulsars and/or larger cross-correlation uncertainty. \textit{Bottom:} The evolution of the decision threshold as a function of the sensitivity FOM.
This implies that PTAs with fewer pulsars or larger cross-correlation uncertainties will have higher decision thresholds, while PTAs with more pulsars and smaller cross-correlation uncertainties will have lower decision thresholds.} \label{fig:sum_stat} \end{figure} The relation of the total S/N and decision threshold to the sensitivity FOM is shown in \autoref{fig:sum_stat}. We confirm that the total S/N is proportional to the sensitivity FOM, $N_{\rm psr} / \sigma$ in logarithmic space. This implies that, as expected, fewer pulsars or larger uncertainties on the cross correlation measurements result in a reduced total S/N, while larger numbers of pulsars or smaller uncertainties result in an increase in the total S/N value. Similarly, \autoref{fig:sum_stat} shows the dependence of the decision threshold for each anisotropy multipole on $N_{\rm psr} / \sigma$. Similar to the total S/N, a PTA with fewer pulsars or larger uncertainties on the cross correlation measurements will be able to reject the null hypothesis with lower significance than a PTA with more pulsars and/or smaller uncertainties on the cross correlations. \section{Simulations with a realistic PTA} \label{sec:sim_real_pta} The ideal PTA described in \S\ref{sec:sim_ideal_pta} was useful in discerning scaling relationships that map between the PTA design and detection statistics. However, unlike the ideal PTA in \S\ref{sec:sim_ideal_pta}, real PTAs do not (yet) consist of pulsars distributed uniformly across the sky, nor are all the pulsars in the array identical. The latter fact implies that the cross-correlation uncertainties in a real PTA will be described by a distribution, rather than a constant value as assumed in \S\ref{sec:sim_ideal_pta}. To simulate a realistic PTA, we use the methods developed in \citet{astro4cast}. 
We base our simulations on the NANOGrav $12.5$~year dataset \citep{ng12p5_timing}, and extend the dataset to a $20$-year timing baseline to forecast the sensitivity of NANOGrav to anisotropies in the GWB. The TOA timestamps of the initial $12.5$~year portion are the same as those in the real NANOGrav dataset, while the radiometer uncertainties and pulse-phase jitter noise that are injected are obtained from the maximum-likelihood pulsar noise analysis performed as part of the NANOGrav $12.5$~year analysis \citep{NG12p5_detection}. The injection values for the intrinsic per-pulsar red noise were taken from a global PTA analysis that also modeled a common-spectrum process. This is done to isolate the intrinsic red noise in each pulsar's dataset so that it is not contaminated by the common process reported in \citet{NG12p5_detection}. Once the 45 simulated pulsars from the NANOGrav $12.5$~year dataset are generated using the above recipe, the dataset is then extended into the future by generating distributions for the cadence and measurement uncertainties using the last year's worth of data for each pulsar. We then draw TOAs using these distributions until the dataset has a maximum baseline of $20$~years. Finally, we inject $100$ statistically random realizations of an isotropic gravitational wave background with an amplitude of $A_{\rm GWB} = 2 \times 10^{-15}$ and spectral index $\alpha = -2/3$, consistent with the common process observed in \citet{NG12p5_detection}. \begin{figure} \centering \includegraphics[width = \columnwidth]{"a4c_sigma".pdf} \caption{The evolution of the cross-correlation uncertainty across all pulsars and 100 noise realizations of the realistic PTA dataset simulations described in \S\ref{sec:sim_real_pta}. 
The evolution of the median cross-correlation uncertainty can be approximately described by $\sigma\propto T^{-7/2}$, which is shallower than the scaling law prediction of $\sigma\propto T^{-13/3}$ for the weak-signal regime in \S\ref{subsec:scaling_rel}, but steeper than the strong-signal regime prediction of $\sigma\propto T^{-1/2}$. This implies that the NANOGrav PTA is in the intermediate signal regime, which is corroborated by the fact that the lowest frequencies of the PTA are now dominated by a common-spectrum process (interpreted as the GWB) as shown in \citet{NG12p5_detection}.} \label{fig:a4c_sigma} \end{figure} We then pass all 100 realizations of the dataset through the standard NANOGrav detection pipeline \citep{ng11_detection, NG12p5_detection, astro4cast} to calculate the cross correlations and their uncertainties between all pairs in the 45-pulsar dataset (see also \autoref{appendix:real_os}). The evolution of these cross-correlation uncertainties across the 100 realizations as a function of the timing baseline is shown in \autoref{fig:a4c_sigma}. As we can see, the median cross-correlation uncertainty decreases from $\sim$5 at $13$~years (similar to the total baseline of the $12.5$~year dataset) to $\sim$1 at $20$~years, implying a scaling relation $\sigma\propto T^{-7/2}$. This is shallower than the spectral index predicted for the weak-signal regime in \S\ref{subsec:scaling_rel}, implying that the NANOGrav PTA is in the intermediate-signal regime; this is corroborated by the fact that the lowest frequencies of the PTA are dominated by a common-spectrum process (interpreted as a GWB) as shown in \citet{NG12p5_detection}. Combining the median uncertainty with the 45 pulsars in the PTA, we obtain values for our sensitivity FOM $N_{\rm psr} / \sigma \approx 9$ at $13$~years, and $N_{\rm psr} / \sigma \approx 45$ at $20$~years.
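The quoted power-law index can be recovered from the two median values alone. The following two-point estimate is only a rough consistency check (not a fit to the full distribution of uncertainties), using the approximate medians quoted above:

```python
import math

# Approximate median cross-correlation uncertainties quoted in the text:
sigma_13yr, sigma_20yr = 5.0, 1.0
T1, T2 = 13.0, 20.0

# Two-point estimate of the power-law index k in sigma ∝ T^k:
k = math.log(sigma_20yr / sigma_13yr) / math.log(T2 / T1)
# k ≈ -3.7, close to the quoted sigma ∝ T^(-7/2)

# Sensitivity figure of merit N_psr / sigma for the 45-pulsar array:
fom_13yr = 45 / sigma_13yr   # ≈ 9
fom_20yr = 45 / sigma_20yr   # ≈ 45
```

The estimated index lies between the weak-signal ($-13/3$) and strong-signal ($-1/2$) predictions, consistent with the intermediate-signal interpretation above.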
\begin{figure*} \centering \includegraphics[width = 1\textwidth]{"a4c_sn".pdf} \caption{Evolution of the S/N values for the realistic simulations described in \S\ref{sec:sim_real_pta}. The total, isotropic, and anisotropic S/N are shown by the blue, green, and orange histograms, respectively. Since the injected GWB is isotropic, we see the total and isotropic S/N values increase as a function of timing baseline, while the anisotropic S/N stays consistent with zero for all baselines.} \label{fig:a4c_sn} \end{figure*} We pass the cross correlations measured from these $100$ realizations through the statistical framework described in \S\ref{sec:methods} to search for the presence of anisotropy under realistic PTA and data-quality conditions. The evolution of the three S/N statistics as a function of time is shown in \autoref{fig:a4c_sn}. Since the injected GWB is isotropic, the total and isotropic S/N increase with the timing baseline. This is consistent with the reduction in the uncertainties on the cross correlations allowing for a stronger detection of the isotropic background. By contrast, the anisotropic S/N does not increase with time, and has support at $\mathrm{S/N}=0$ for all baselines. Note that the total S/N seen in these realistic simulations is consistent with the prediction made in \autoref{fig:sum_stat} and \citet{astro4cast}. % \begin{figure*} \centering \includegraphics[width = \textwidth]{"a4c_dec_thres".pdf} \caption{Evolution of decision threshold for realistic simulations described in \S\ref{sec:sim_real_pta}. \textit{Left:} Evolution of the decision threshold as a function of timing baseline for all spherical harmonic multipoles. The decision threshold decreases with an increase in the timing baseline, and higher spherical harmonic multipoles have a higher decision threshold than lower multipoles. \textit{Right:} Evolution of the decision threshold across spherical harmonic multipoles for different timing baselines.
We also plot the Bayesian 95\% upper limits on anisotropy derived in \citet{epta_anisotropy} from the EPTA Data Release $1$. As these realistic simulations have 45 pulsars with different noise properties resulting in different cross-correlation uncertainties per pulsar pair, we see the sensitivity of the PTA saturate at higher multipoles.} \label{fig:a4c_dt} \end{figure*} % Similarly, \autoref{fig:a4c_dt} shows the evolution of the decision threshold as a function of spherical harmonic multipole $l$ and the timing baseline. We find that the anisotropy decision threshold is such that, in terms of the $C_l$ values, GWB anisotropies at levels $C_{l=1} / C_{l=0} \gtrsim 0.3$ (i.e. greater than 30\% of the power in the monopole) would be inconsistent with the null hypothesis of isotropy at the $p = 3\times10^{-3}$ level for the 20 yr baseline. For comparison, in \autoref{fig:a4c_dt} we also plot the Bayesian $95\%$ upper limits on GWB anisotropy using six pulsars from the EPTA's first data release \citep{epta_anisotropy} for a model extending to $l_{\rm max} = 4$. This dataset had a maximum baseline of $17.7$~years, which is toward the upper end of the baselines that we simulate for the NANOGrav data. However, the number of pulsars (6) in the EPTA analysis is significantly lower than the number of pulsars in our simulations (45). The longer EPTA timing baseline allows the Bayesian EPTA upper limits and NANOGrav anisotropy decision thresholds to be comparable at low multipoles until NANOGrav's timing baseline exceeds that of the EPTA. However, the larger number of pulsars in NANOGrav not only gives it higher spatial resolution (and thus access to higher multipoles), but also improves the sensitivity of NANOGrav at higher multipoles relative to the EPTA $2015$ limit. 
As shown in \citet{higher_lmax_limit}, access to these higher multipoles can aid in the localization of GWB anisotropies caused by individual GW sources, or due to finiteness in the source population constituting the GWB. This highlights the importance of including more pulsars in a PTA for spatial resolution, even with the shorter timing baseline of such new additions. \section{Discussion and Conclusion} \label{sec:discuss} We have explored the detection and characterization of stochastic GWB anisotropy through pulsar cross correlations in a PTA. Using a frequentist maximum likelihood approach, we can search for anisotropy by modeling GWB power in individual sky pixels or through a weighted sum of spherical harmonics. Anisotropy would then manifest in measured cross-correlations between pulsar timing residuals through a power-weighted overlap of pulsar GW antenna response functions. As a refinement on previous approaches, we prevent the GWB power from assuming unphysical negative values by adopting a model that naturally restricts this; we have referred to this as the \textit{square-root spherical harmonic basis} throughout our analysis. We have also defined two detection metrics: $(a)$ the signal-to-noise ratio, S/N, defined as the ratio between the maximum likelihood values between a signal and noise model, and $(b)$ the anisotropy decision threshold, $C_{l}^\mathrm{th}$, defined as the level at which the measured angular power is inconsistent with isotropy at the $p = 3\times10^{-3}$ ($\sim3\sigma$) level. The S/N comes in three flavors: $(i)$ the total S/N, which measures the strength of an anisotropic GWB model against noise alone; $(ii)$ the isotropic S/N which measures the strength of an isotropic GWB model against noise alone (this directly corresponds to the usual optimal statistic S/N used in PTA data analysis); and $(iii)$ the anisotropic S/N, which measures the statistical preference for anisotropy against isotropy. 
We examined the evolution of these detection statistics as a function of the uncertainty on the measured cross correlations, as well as on the number of pulsars in the PTA. We showed that the cross-correlation uncertainty and the number of pulsars in a PTA can be combined into a single figure of merit for the PTA sensitivity, $N_{\rm psr} / \sigma$, which succinctly maps the PTA configuration and noise specifications to the detectability of anisotropy. Our scaling relations show that increasing the number of pulsars in an array, while reducing the uncertainty on the cross-correlation measurements, leads to higher total S/N and lower anisotropy decision thresholds. As shown in \S\ref{sec:connection}, the cross-correlation uncertainty scales inversely with both the timing baseline and cadence of observation for each pulsar, with the power-law index dependent on the signal regime occupied by the PTA. The pulsar timing baseline is set to increase as PTAs continue operation into the future, as well as through IPTA data combinations \citep[e.g.,][]{ipta_dr2_dataset}. Improving the observing cadence is more challenging due to constraints on the available telescope time for each PTA, though once again IPTA data combinations can help alleviate this problem. In addition to IPTA data combinations, the CHIME radio telescope \citep{CHIME_pulsar} will offer $\sim$daily cadence to the NANOGrav PTA \citep{NANOGrav}, which is an order of magnitude improvement over the current $\sim$monthly cadence employed by NANOGrav. Finally, we examined the evolution of the S/N and anisotropy decision thresholds as a function of timing baseline using realistic NANOGrav data. Since we injected an isotropic GWB signal in these data, we found that the anisotropic S/N remains consistent with zero at all times and across all signal realizations. By contrast, the total and isotropic S/N increase with time, as expected \citep{siemens_scaling_laws}.
We find that any anisotropic GWB power distribution with $C_{l = 1} \gtrsim 0.3 C_{l = 0}$ would be in tension with an isotropic model at the $p = 3\times10^{-3}$ ($\sim3\sigma$) level. We note that these simulations held the number of pulsars in the array fixed to the $45$ that were included in the NANOGrav $12.5$~year dataset. However, this number will increase in future NANOGrav datasets. Based on the results in \S\ref{subsec:summ_stat}, this will lead to larger total S/N values and lower anisotropy decision thresholds, implying improved sensitivity to any anisotropy that might be present in the real GWB. IPTA data combinations will further increase the timing baseline and the number of pulsars in the array, allowing further improvements in the ability to detect anisotropy. Furthermore, new instruments such as ultra-wideband receivers, and new telescopes such as MeerKAT \citep{meerkat_psr}, SKA \citep{SKAPTA}, and DSA-2000 \citep{dsa2000}, will also aid in the detection and characterization of anisotropy with PTAs. The techniques and scaling relations that we have developed in this work are PTA-agnostic and can be projected onto any PTA specification, allowing for immediate usage by the broader PTA community. Yet we have made assumptions in our framework that can be generalized in the future. For example, while our techniques operate on PTA data at the level of cross correlations rather than TOAs, in order to get to that stage we have implicitly assumed that the GWB characteristic strain spectrum is well described by a power-law model. This follows the same approach as \citet{NG12p5_detection}, where an average power-law spectrum $h_c\propto f^{-2/3}$ is assumed for the GWB. For a GWB produced by a population of inspiraling SMBHBs, this power-law representation is an approximation to the true spectrum \citep{phinney}, where there are different SMBHBs contributing to the GWB at different frequencies \citep{Kelley_real_SMBHB_spectrum}.
Thus, a more appropriate way to characterize anisotropy in the SMBHB GW background would be to measure the cross correlations of pulsar TOAs as a function of GW frequency, rather than our current approach of computing cross correlations that are filtered against a power-law GWB spectral template (see \autoref{appendix:real_os}). We would then have a more general data structure that includes pulsar cross-correlations and uncertainties for each GW frequency, for which the methods developed here can be applied at each of those frequencies independently. We also note that the methods developed here can be modified to search for multiple backgrounds \citep{multiple_anis_bkgnd}, where an astrophysical background \citep[e.g. from SMBHBs,][]{Kelley_real_SMBHB_spectrum, Sesana2004, BurkeSpolaor2019} would be expected to be anisotropic, while a cosmological background \citep[e.g. from cosmic strings,][]{olmez_anisotropy} may be isotropic. We plan to explore these improvements and generalizations in future analyses in a bid to extract as much spatial and angular information as possible from the exciting new PTA datasets now under development. As mentioned, these techniques will not only aid in the detection of GWB anisotropy, but also in its characterization for the purposes of isolating regions of excess power that may be indicative of individually-resolvable GW sources, and as leverage for the separation of potentially multiple stochastic GW signals of astrophysical and cosmological origin. \begin{acknowledgements} NP was supported by the Vanderbilt Initiative in Data Intensive Astrophysics (VIDA) Fellowship. SRT acknowledges support from NSF AST-200793, PHY-2020265, PHY-2146016, and a Vanderbilt University College of Arts \& Science Dean's Faculty Fellowship. JDR was partially supported by start-up funds provided by Texas Tech University. This work has been carried out by the NANOGrav collaboration, which is part of the International Pulsar Timing Array. 
The NANOGrav project receives support from National Science Foundation (NSF) Physics Frontiers Center award number \#1430284 and \#2020265. This work was conducted using the resources of the Advanced Computing Center for Research and Education at Vanderbilt University, Nashville, TN. \section*{Software:} We use \texttt{lmfit} \citep{lmfit} for the non-linear least-squares minimization in the square-root spherical harmonic basis. We use \texttt{libstempo} \citep{libstempo} to generate our realistic PTA datasets and to inject the pulsar noise parameters and GWB signals in these datasets. We use the software packages \texttt{Enterprise} \citep{enterprise} and \texttt{enterprise\_extensions} \citep{enterpriseextensions} for model construction, along with \texttt{PTMCMCSampler} \citep{PTMCMC} as the Markov Chain Monte Carlo sampler for our realistic PTA Bayesian analyses. We also extensively used Matplotlib \citep{Matplotlib2007}, NumPy \citep{Numpy2020}, Python \citep{Python2007,Python2011}, and SciPy \citep{Scipy2020}. \end{acknowledgements}
\section{Introduction} In classical systems where a physical quantity (especially a dynamical variable) is a linear map of structure, the set of special structures with maximum or minimum values of that quantity is always restricted by the spatial constraints on the constituents of the given system. This can be determined quantitatively from the landscape of the so-called \textit{configurational polyhedra} (CP),\cite{cp} which give the maximum and minimum values that the basis functions describing structures (typically in the generalized Ising model\cite{ce} (GIM) description) can take: such special structures always lie on the vertices of the CP. Since the landscape of the CP can be determined without any information about the energy or the constituents, we can know a priori the set of candidate structures exhibiting extremal physical quantities once the spatial constraints on the constituents of the system are given. Using this characteristic, efficient prediction of alloy ground-state structures has been investigated extensively, and the concept of the CP has recently been extended to finite-temperature properties,\cite{ycp} where the configurational density of states (CDOS) along CP-based special coordinates characterizes the temperature dependence of the internal energy near the order-disorder phase transition. The practical problem with using the CP is that the number of its vertices increases exponentially with the dimension of the configuration space considered. It is thus fundamentally important to find as many extremal structures as possible in low-dimensional spaces based on geometrically low-dimensional figures, such as symmetry-nonequivalent pairs on the lattice.
However, it has been shown\cite{cp} that the set of structures at the vertices of the CP is invariant under linear transformations of the coordinates within a given subspace. This directly means that, to find further extremal structures within the conventional CP approach, we must explicitly include information about longer-range and/or higher-dimensional figures on the lattice; however, such a CP, projected onto the desired low-dimensional space, typically loses significant information about vertices that are originally found in the low-dimensional space based on low-dimensional figures. To overcome this problem of the GIM-based conventional CP, the present study proposes the construction of a CP based on an extended graph representation unified with the GIM description of crystalline solids, whose spectrum contains information not only about GIM pair correlations, but also about higher-dimensional figures (or links) consisting of the corresponding pairs. We will demonstrate that the proposed CP retains the characteristic vertices found for the conventional CP, and can also identify other special structures in terms of graphs, on the same low-dimensional space. \section{Methodology} Very recently, we constructed a unified representation of microscopic structures on a periodic lattice (i.e., atomic configurations) in terms of both the GIM and graph descriptions. The explicit relationship between the GIM and graph representations for a given figure $R$ on a given lattice is expressed by\cite{yg} \begin{eqnarray} \label{eq:g} \Braket{\psi \left(\vec{\sigma}\right)}_R = \rho_R^{-1} \sum_{m} C_m\cdot \textrm{Tr} \left[ \left(\sum _{l\in R} \bm{B}_l\left(\vec{\sigma}\right)\pm {}^t\!\bm{B}_l\left(\vec{\sigma}\right) \right)^{N_R} \right], \end{eqnarray} where the left-hand side corresponds to the multisite correlation for figure $R$ in the GIM description.
The matrix $\bm{B}$ denotes the upper triangular part of the extended graph Laplacian, defined by \begin{eqnarray} \label{eq:lp} \bm{B}_R^{\left(\alpha\right)}\left(i,j\right) = \begin{cases} \sqrt{\phi_p\left(\sigma_i\right)\phi_q\left(\sigma_j\right)} & \left(p,q \in \alpha,\ i,j\in R,\ i < j\right), \\ 0 & \textrm{(otherwise)} \end{cases} \end{eqnarray} where $p$ and $q$ denote the conventional GIM basis-function indices on lattice points $i$ and $j$, and $\sigma_k$ is the pseudo-spin variable specifying the atomic occupation. $\bm{A} = \bm{B} + {}^t\!\bm{B}$ corresponds to the adjacency matrix. $N_R$ is the dimension of the figure, $C_m$ is an integer depending on the type of figure, and $\rho_R$ denotes the number of possible paths constructing the considered figure $R$. Here, we construct the graph Laplacian (and adjacency matrix) from symmetry-equivalent neighboring edges; the summation on the right-hand side is therefore taken over the pair figures $l$ that are subfigures of figure $R$. From Eq.~(\ref{eq:g}), we have shown that information about higher-order links in a structure (i.e., multisite correlations for higher-dimensional figures) can be explicitly included in the landscape of the graph spectrum composed of the corresponding pair figures. It is thus naturally expected that when we construct a CP based on the extended graph representation, we will obtain additional atomic configurations on the vertices of the CP compared with the GIM-based CP. From Eq.~(\ref{eq:lp}), it is clear that the adjacency matrix for any given figure $R$ is traceless. To characterize the landscape of the resulting graph spectrum of the $\bm{A}$s, we therefore employ the graph energy, i.e., the sum of the absolute values of all eigenvalues of the graph spectrum.
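As a minimal illustration of these definitions, consider a hypothetical six-site ring with 1NN edges only, taking a single basis function $\phi(\sigma)=\sigma$ with $\sigma\in\{0,1\}$ (a simplification of Eq.~(\ref{eq:lp}), not the fcc supercell used in our calculations):

```python
import numpy as np

def adjacency(spins, pairs):
    """Adjacency matrix A = B + B^T for a single pair figure, taking
    phi(sigma) = sigma with sigma in {0, 1}: an edge between i and j
    carries weight sqrt(sigma_i sigma_j), i.e. it survives only when
    both sites are occupied by the A element."""
    n = len(spins)
    A = np.zeros((n, n))
    for i, j in pairs:
        A[i, j] = A[j, i] = np.sqrt(spins[i] * spins[j])
    return A

def graph_energy(A):
    """Graph energy: sum of the absolute values of all eigenvalues."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

# Hypothetical six-site ring with 1NN edges:
ring = [(i, (i + 1) % 6) for i in range(6)]
E_clustered   = graph_energy(adjacency([1, 1, 1, 0, 0, 0], ring))
E_alternating = graph_energy(adjacency([1, 0, 1, 0, 1, 0], ring))
```

The clustered A atoms form a three-site path (graph energy $2\sqrt{2}$), while the alternating occupation leaves no A-A edge (graph energy $0$): the graph energy distinguishes configurations of identical composition.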
The graph energy has been investigated extensively, e.g., to characterize graph regularity by determining upper and/or lower bounds on its value, and it has been applied to molecules in relation to their energetics.\cite{ge1,ge2,ge3,ge4} In the present study, we prepare all possible atomic configurations of an A-B binary system on a $4\times 4\times 4$ expansion of the fcc conventional unit cell, with a minimal unit consisting of up to 16 atoms, and calculate the corresponding graph spectra, where compositions $x$ (A$_x$B$_{1-x}$) of 0.5 and 0.75 are considered. The spin variable at each lattice point, $\sigma_i$, takes $+1$ ($0$) for occupation by the A (B) element. For these atomic configurations, we calculate the second-order moment $\mu_2$ of $\bm{A}$ for up to the 3rd nearest-neighbor (3NN) pair (i.e., GIM pair correlations up to the 3NN pair) to construct the conventional CP, and also calculate the graph energy for linear combinations of the same set of $\bm{A}$s. \section{Results and Discussions} Figure~\ref{fig:cp0.5} shows the constructed CP for the conventional GIM (left) and extended graph (right) representations at $x=0.5$, in terms of the adjacency matrices of the 1-3NN pairs. From Fig.~\ref{fig:cp0.5} (a), we can find five ordered structures, L1$_0$, "40", Z2, Block and 2-(110), at the vertices. L1$_0$ and "40" are known ground-state atomic configurations of the real alloys Cu-Au and Pt-Rh,\cite{ord1, ord2} and Z2, with alternating two-layer stacking along (001), has been predicted as the ground state of the Pt-Ru alloy by a systematic first-principles study:\cite{ord3} we can thus confirm that structures at the vertices of the CP are candidates for ground-state structures. When we construct the extended CP using linear combinations of the same $\bm{A}$s, the resulting CP is found to have more vertices than the conventional one, as shown in Fig.~\ref{fig:cp0.5} (b).
We can clearly see that three additional structures, T1, T2 and T3, are found on the extended CP, and that the five structures at the vertices of the conventional CP are all located at vertices of the extended CP, as confirmed in our previous study. When we focus on the lower two panels of Fig.~\ref{fig:cp0.5}, for the CP of the 2NN and 3NN pair figures, a similar tendency is found: three ordered structures, L1$_0$, "40" and Z2, can be found at the vertices of both the conventional and extended CP, and two additional structures, T1 and T3, are also found at those of the extended CP. Figure~\ref{fig:str0.5} shows the atomic configurations of the nine ordered structures at the vertices of the CPs in Fig.~\ref{fig:cp0.5}. These results indicate that the proposed extended CP based on graph theory not only captures the set of structures found at the vertices of the conventional CP, but also finds other characteristic structures in terms of graphs (i.e., structures having characteristic links). \begin{figure}[h] \begin{center} \includegraphics[width=1.00\linewidth]{mu-GE-x-0.5.eps} \caption{Configurational polyhedra (CP) at composition $x=0.5$. (Left) Conventional CP in terms of the second-order moments of $\bm{A}_1$, $\bm{A}_2$ and $\bm{A}_3$, corresponding to the GIM correlations for the 1-3NN pairs. (Right) Extended CP based on the proposed graph spectrum, using the graph energy for $\bm{A}_1$, $\bm{A}_2$ and $\bm{A}_3$. Structures at the vertices are emphasized by closed circles. } \label{fig:cp0.5} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth] {str-x-0.5.eps} \caption{ Atomic configurations of the nine ordered structures found at the vertices of the conventional and/or extended CP in Fig.~\ref{fig:cp0.5}. } \label{fig:str0.5} \end{center} \end{figure} In order to see the composition dependence of these tendencies, we also construct the conventional and extended CPs at $x=0.75$, based on $\bm{A}_1$, $\bm{A}_2$ and $\bm{A}_3$.
Figure~\ref{fig:cp0.75} shows the resulting CPs, in terms of the adjacency matrices of the 1-3NN pairs. From Fig.~\ref{fig:cp0.75} (a) and (c), we find the L1$_2$ and D0$_{22}$ ordered structures, which have been considered ground states of real alloys, together with three additional ordered structures, Rhombo, Z1 and Block-2. Using the same $\bm{A}$s, the extended CPs in Fig.~\ref{fig:cp0.75} (b) and (d) successfully retain these five ordered structures at their vertices, and also have four additional ordered structures, U1, U2, U3 and U4. These atomic arrangements are illustrated in Fig.~\ref{fig:str0.75}. Therefore, we can see that the proposed extended graph representation generally retains the vertices of the conventional CP and also provides other characteristic ordered structures using the same set of pair figures on a configuration space of the same dimension, which is a desired property as described above. \begin{figure}[h] \begin{center} \includegraphics[width=1.00\linewidth] {mu-GE-x-0.75.eps} \caption{Configurational polyhedra (CP) at composition $x=0.75$. (Left) Conventional CP in terms of the second-order moments of $\bm{A}_1$, $\bm{A}_2$ and $\bm{A}_3$, corresponding to the GIM correlations for the 1-3NN pairs. (Right) Extended CP based on the proposed graph spectrum, using the graph energy for $\bm{A}_1$, $\bm{A}_2$ and $\bm{A}_3$. Structures at the vertices are emphasized by closed circles.
} \label{fig:cp0.75} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth] {str-x-0.75.eps} \caption{Atomic configurations of the nine ordered structures found at vertices of the conventional and/or extended CP in Fig.~\ref{fig:cp0.75}.} \label{fig:str0.75} \end{center} \end{figure} The reason the proposed representation can find more characteristic structures than the conventional one is that the landscape of the proposed graph spectrum carries more structural information (especially, higher-order structural links) that cannot be explicitly included in the GIM description. For instance, the graph energy naturally includes information about the asymmetric landscape of the graph spectrum. When we quantify this asymmetry as the third-order moment of the spectrum, $\mu_3$, it can be explicitly given by, for instance, \begin{widetext} \begin{eqnarray} \label{eq:3m} &&\mu_3 \left[ \mathrm{Spec}\left(\bm{A}_1 - \bm{A}_3\right) \right] = N^{-1}\sum_i\sum_j\sum_k \Braket{i|A_{1\overline{3}}|j} \Braket{j|A_{1\overline{3}}|k} \Braket{k|A_{1\overline{3}}|i} \nonumber \\ &&= \mu_3 \left[ \mathrm{Spec}\left(\bm{A}_1 \right) \right] - \mu_3 \left[ \mathrm{Spec}\left( \bm{A}_3\right) \right] + N^{-1}\left\{\sum_{i,j,k\in \left(133\right)} \Braket{i|A_{13}|j} \Braket{j|A_{13}|k} \Braket{k|A_{13}|i} - \sum_{i,j,k\in \left(113\right)} \Braket{i|A_{13}|j} \Braket{j|A_{13}|k} \Braket{k|A_{13}|i} \right\}, \nonumber \\ \end{eqnarray} \end{widetext} where $\bm{A}_{13} = \bm{A}_1 + \bm{A}_3$ and $\bm{A}_{1\overline{3}} = \bm{A}_1 - \bm{A}_3$. The summation in the third term on the right-hand side is taken over symmetry-equivalent triplets composed of one 1NN and two 3NN pairs ($i,j,k\in\left(133\right)$) and of two 1NN and one 3NN pairs ($i,j,k\in\left(113\right)$).
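As a quick numerical check of the non-additivity expressed by Eq.~(\ref{eq:3m}), the following sketch computes $\mu_3 = N^{-1}\,\mathrm{tr}\,\bm{A}^3$ and the graph energy (the sum of the absolute eigenvalues of $\bm{A}$) for a toy four-site cluster. The two pair graphs below are purely illustrative (they are not an actual fcc 1NN/3NN neighbor graph); they are chosen so that neither pair graph alone closes a triangle, while their combination does.

```python
import numpy as np

def adjacency(n, edges):
    """Symmetric 0/1 adjacency matrix from an edge list."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def graph_energy(A):
    """Graph energy: sum of the absolute eigenvalues of A."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

def mu3(A):
    """Third-order spectral moment, mu3 = tr(A^3)/N."""
    return np.trace(A @ A @ A) / A.shape[0]

# Toy 4-site cluster (illustrative only, not an fcc neighbor graph):
# two "1NN" bonds and one "3NN" bond closing a mixed triangle 0-1-2.
N = 4
A1 = adjacency(N, [(0, 1), (1, 2)])  # 1NN pair graph
A3 = adjacency(N, [(0, 2)])          # 3NN pair graph

print(graph_energy(A1), graph_energy(A3))
# Neither pair graph alone contains a triangle, so mu3 vanishes ...
print(mu3(A1), mu3(A3))                  # 0.0 0.0
# ... but the linear combination closes a (113)-type triplet:
print(mu3(A1 - A3), mu3(A1) - mu3(A3))   # -1.5 vs 0.0
```

The difference $-1.5$ comes entirely from the closed triplet with two 1NN and one 3NN bonds, i.e., exactly the higher-order link information that the spectral description carries beyond the pair correlations.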
From the above equation, it is now clear that taking a linear combination of $\bm{A}$s results in characterizing additional higher-order links, since $\mu_3\left[\mathrm{Spec} \left(\bm{A}_1 - \bm{A}_3\right)\right] \neq \mu_3 \left[ \mathrm{Spec}\left(\bm{A}_1 \right) \right] - \mu_3 \left[ \mathrm{Spec}\left( \bm{A}_3\right) \right] $. More explicitly, the difference in asymmetry corresponds to the difference in the number of closed triplet links composed of the A element (i.e., the element with $\sigma = +1$) with one (two) 1NN and two (one) 3NN pairs. The graph energy contains information not only about the third-order moment, but also about the second-order moment (corresponding to GIM pair correlations) and higher-order moments (corresponding to higher-order structural links composed of the $\sigma=+1$ element), which successfully results in the desired property found for the extended CP based on the graph representation compared with the conventional one, as seen in Figs.~\ref{fig:cp0.5} and~\ref{fig:cp0.75}. \section{Conclusions} Based on graph theory and the generalized Ising model, we propose a theoretical approach to constructing configurational polyhedra (CP) for crystalline solids. The vertices of the extended CP not only include those found in the conventional CP, but also include other characteristic structures, obtained in a configuration space of the same dimension with the same set of figures composed of underlying lattice points; this gives the approach a significant advantage over the conventional one. We confirm that this desired property naturally follows from the fact that the proposed representation carries more structural information, especially about higher-order structural links for the selected element, than the conventional GIM description.
\section*{Acknowledgement} This work was supported by a Grant-in-Aid for Scientific Research (16K06704) from the MEXT of Japan, Research Grant from Hitachi Metals$\cdot$Materials Science Foundation, and Advanced Low Carbon Technology Research and Development Program of the Japan Science and Technology Agency (JST).
\section{Introduction} \vspace*{-0.9ex} Logic-based encodings of the verification problem are increasingly widespread in software verification~\cite{BGMR15}. However, the generated formulae are often too large to be directly handled by the back-end solver. Classical divide-and-conquer techniques suggest themselves to cope with such large problems. Work on interprocedural verification, e.g.\ \cite{KGC14,CDK+15}, follows the syntactic, procedural structure of the program to perform a decomposition of the formula. This does not seem ideal, but has been shown to significantly increase efficiency in comparison with monolithic solving. In recent work, we used a synthesis engine \cite{SK16,BJKS15} to solve for multiple predicates at once even when they are mutually dependent. Since this scales badly to large formulae, we have to decompose the formula in order to reduce the load on the synthesis engine. However, the decomposition may introduce additional abstractions, in particular when mutually dependent predicates are concerned. \paragraph{Outline} We first show the encoding of a verification problem using the example of universal termination verification. Then we discuss the challenges associated with decomposing this problem and the interdependencies with the solving process. \vspace*{-0.9ex} \section{Encoding} \vspace*{-0.9ex} We assume that programs are given in terms of call graphs, where individual {\function}s $f$ are given in terms of symbolic input/output transition systems.
Formally, the input/output transition system of a {\function} $f$ is a triple of characteristic predicates for relations $(\mathit{Init}_f,\mathit{Trans}_f,\mathit{Out}_f)$, where $\mathit{Trans}_f({\vec{\x}},{\vec{\x}'})$ is the transition relation; the input relation $\mathit{Init}_f({\vx^{in}}, {\vec{\x}})$ defines the initial states of the transition system and relates them to the inputs ${\vx^{in}}$; the output relation $\mathit{Out}_f({\vec{\x}},{\vx^{out}})$ connects the transition system to the outputs ${\vx^{out}}$ of the {\function}. Inputs are {\function} parameters, global variables, and memory objects that are read by $f$. Outputs are return values, global variables, and memory objects written by $f$. Internal states ${\vec{\x}}$ are usually the values of variables at the loop heads in $f$. These relations are given as \emph{first-order logic formulae} resulting from the logical encoding of the program semantics. Let $F$ denote the set $\{f_1,\ldots,f_n\}$ of {{procedures}} in a given program. $H_f$ is the set of {\function} calls to {{procedures}} $h \in F$ at calls sites $i$ in {\function} $f$. The vectors of input and output arguments ${\vx^{p\_in}}_{h_i}$ and ${\vx^{p\_out}}_{h_i}$ are intermediate variables in $\mathit{Trans}$. We denote the termination argument $RR_f$, i.e.\ the conditions that ensure the termination of {\function} $f$, such as a well-founded transition invariant. \paragraph{Example} By encoding Hoare-style verification rules (cf.~\cite{GLPR12}) into second-order logic,% \footnote{Mind that we use the notation $\exists_2$ to stress the fact that the quantifier binds a predicate.} we obtain the following formula. Its satisfiability guarantees universal termination of the program. 
\begin{equation}\label{equ:enc} \small \begin{array}{@{\hspace*{-1em}}lrl} \multicolumn{3}{@{\hspace*{-1em}}l}{\exists_2 \mathit{Summary}_{f_1},\ldots,\mathit{Summary}_{f_n}: \bigwedge_{f\in F}} \\ & \multicolumn{2}{l}{\exists_2 \mathit{Inv}_f, RR_f: \forall {\vx^{in}}_f, {\vec{\x}}_f, {\vec{\x}'}_f, {\vx^{out}}_f:} \\ && \mathit{Init}_f({\vx^{in}}_f,{\vec{\x}}_f) \Longrightarrow \mathit{Inv}_f({\vec{\x}}_f)\\ & \wedge &\mathit{Inv}_f({\vec{\x}}_f)\wedge\mathit{Trans}_f({\vec{\x}}_f,{\vec{\x}'}_f) \wedge \bigwedge_{h_i \in H_f} \mathit{Summary}_h({\x^{p\_in}}_{h_i},{\x^{p\_out}}_{h_i}) \Longrightarrow \mathit{Inv}_f({\vec{\x}'}_f) \wedge RR_f({\vec{\x}}_f,{\vec{\x}'}_f) \\ & \wedge & \mathit{Init}_f({\vx^{in}}_f,{\vec{\x}}_f)\wedge\mathit{Inv}_f({\vec{\x}'}_f)\wedge \mathit{Out}_f({\vec{\x}'}_f,{\vx^{out}}_f) \Longrightarrow \mathit{Summary}_f({\vx^{in}}_f,{\vx^{out}}_f) \end{array} \end{equation} \noindent In this formula, recursive {{procedures}} produce cyclic dependencies of their $\mathit{Summary}_f$ predicates. If abstractions are used to lazily solve the formula, the invariant $\mathit{Inv}_f$ and the termination argument $RR_f$ become interdependent. Similarly, invariants of nested loops are dependent on each other. Rewriting nested loops into a single loop with invariant $\mathit{Inv}_f$ only ``hides'' these dependencies as relational dependencies between the loop variables. \vspace*{-0.9ex} \section{Decomposition} \vspace*{-0.9ex} We can decompose a formula that encodes a verification problem, such as (\ref{equ:enc}) above, into a sequence of subproblems that are solved by the synthesis engine. The soundness of the analysis result is ensured by (1) the soundness of the analysis of individual subproblems, (2) the soundness of the combination of the subproblem results, and (3) induction over the decomposition hierarchy. Decomposition causes the following issues: (A) It may introduce additional interdependent predicates.
(B) The subproblems may be inference, and not verification problems; hence their solving requires optimisation (like our synthesis engine) instead of decision procedures. \begin{figure}[t] \begin{subfigure}{0.3\textwidth} \begin{tikzpicture}[scale=0.95] \node (invfo) at (0,0) {$\mathit{Inv}_f$}; \node (ccfo) at (-1,1) {$\mathit{CallCtx}_f$}; \node (sumfo) at (1.5,1) {$\mathit{Sum}_f$}; \node (rrf) at (1.5,0) {$RR_f$}; \node (sumsho) at (1.5,-1) {$\mathit{Sum}_h$}; \node (ccho) at (-1,-1) {$\mathit{CallCtx}_h$}; \draw[->] (invfo) -- (ccfo); \draw[->] (sumfo) -- (invfo); \draw[<->] (rrf) -- (invfo); \draw[->] (invfo) -- (sumsho); \draw[->] (ccho) -- (invfo); \draw[dashed,->] (sumsho) -- (ccho); \draw[blue, rotate=34] (-0.7,-0.15) ellipse (1.35 and 0.7); \draw[darkgreen, rotate=34] (0.95,0.05) ellipse (1.35 and 0.7); \draw[red] (1.5,0) ellipse (0.6 and 0.4); \end{tikzpicture} \caption{\label{fig:uniterm} Interprocedural universal termination verification problem } \end{subfigure} \hspace{1em} \begin{subfigure}{0.65\textwidth} \begin{tikzpicture}[scale=0.95] \node (invfo) at (0,0) {$\mathit{Inv}^o_f$}; \node (ccfo) at (-1,1) {$\mathit{CallCtx}^o_f$}; \node (sumfo) at (1.5,1) {$\mathit{Sum}^o_f$}; \node (rrf) at (2.75,0) {$RR_f$}; \node (sumsho) at (1.5,-1) {$\mathit{Sum}^o_h$}; \node (ccho) at (-1,-1) {$\mathit{CallCtx}^o_h$}; \node (invfu) at (5.5,0) {$\mathit{Inv}^u_f$}; \node (ccfu) at (7,1) {$\mathit{CallCtx}^u_f$}; \node (sumfu) at (4,1) {$\mathit{Sum}^u_f$}; \node (prefu) at (4,2) {$\mathit{Precond}^u_f$}; \node (sumshu) at (4,-1) {$\mathit{Sum}^u_h$}; \node (cchu) at (7,-1) {$\mathit{CallCtx}^u_h$}; \draw[->] (invfo) -- (ccfo); \draw[->] (sumfo) -- (invfo); \draw[->] (rrf) -- (invfo); \draw[->] (invfo) -- (sumsho); \draw[->] (ccho) -- (invfo); \draw[dashed,->] (sumsho) -- (ccho); \draw[->] (invfu) -- (ccfu); \draw[->] (sumfu) -- (invfu); \draw[->] (prefu) -- (sumfu); \draw[->] (invfu) -- (rrf); \draw[->] (rrf) -- (sumfu); \draw[->] (invfu) -- (sumshu); \draw[->] 
(cchu) -- (invfu); \draw[dashed,->] (sumshu) -- (cchu); \draw[->] (invfu) to [bend left=15] (invfo); \draw[blue, rotate=34] (-0.7,0.00) ellipse (1.35 and 0.7); \draw[darkgreen, rotate=34] (0.95,0.05) ellipse (1.35 and 0.7); \draw[darkgreen, rotate=-34] (3.6,3.1) ellipse (1.35 and 0.7); \draw[blue, rotate=-33] (5.6,3.15) ellipse (1.5 and 0.75); \draw[dashed,red] (4.1,0.4) ellipse (1.9 and 1.0); \end{tikzpicture} \caption{\label{fig:condterm} Interprocedural sufficient preconditions for termination inference } \end{subfigure} \caption{\label{fig:cyclic} Dependent predicates in the encodings and decompositions } \end{figure} \paragraph{Example} In \cite{CDK+15} we followed the classical approach of a procedural decomposition. We emulate the traversal of the call graph top-down, analysing each {\function} separately and propagating the summaries back up. This decomposition splits the $\mathit{Summary}_{h}$ predicate for a call to {\function} $h$ at call site $i$ into a \emph{calling context} predicate $\mathit{CallCtx}_{h}$ that transfers information from the caller to the callee,% \footnote{ The calling context can be inferred by synthesising a predicate $\mathit{CallCtx}_{h_i}$ s.t. $\forall {\vec{\x}}_f,{\vec{\x}'}_f,{\vx^{p\_in}}_{h_i},{\vx^{p\_out}}_{h_i}: \mathit{Inv}_f({\vec{\x}}_f) \wedge \mathit{Trans}_f({\vec{\x}}_f,{\vec{\x}'}_f) \Longrightarrow \mathit{CallCtx}_{h_i}({\vx^{p\_in}}_{h_i},{\vx^{p\_out}}_{h_i})$. } and a summary predicate $\mathit{Sum}_{h}$ that transfers information from the callee to the caller. These two predicates are mutually dependent as illustrated by the cycle in the dependency graph in Figure~\ref{fig:uniterm}. The dashed arrows are dependencies resulting from unfolding the diagram along the call graph. The blue, green and red ellipses indicate the decomposition, i.e.\ predicates that are solved for at once. The algorithm in~\cite{CDK+15} uses a greatest fixed point computation to resolve this dependency.
However, this is very imprecise for recursive {{procedures}}. Figure~\ref{fig:condterm} shows the predicate dependencies for the inference of sufficient preconditions for termination. Without going into details (see~\cite{CDK+15}), we want to direct attention to the dependency (red dashed ellipse) between the \emph{under-approximating} summary $\mathit{Sum}^u_f$ (of which the sufficient precondition is a projection), the termination argument, and the invariants -- which is a maximisation problem. \vspace*{-0.9ex} \section{Lessons Learned and Prospects} \vspace*{-0.9ex} We have to accommodate the following two conflicting goals: (1) Solving as large subformulae as possible to increase precision and reduce the need for later refinement. (2) Solving as small subformulae as possible to be scalable. In Figure~\ref{fig:uniterm}, we solve for invariants (green) and termination arguments (red) separately because our synthesis engine currently does not support product domains that could infer both at once, each with their optimised domains, thus eliminating cyclic dependencies. Some domains require least fixed point computations, others greatest; our engine is currently unable to combine both in a single query. Moreover, programs are rarely written with verification in mind; they are often badly structured. Therefore we need a property-, precision-, and capacity-driven dynamic (de)composition to achieve goals (1) and (2). Re-partitioning the verification problem by eliminating predicates and introducing new ones seems essential. Decompositions introducing cyclic dependencies should only be used if the solving capacity is exceeded. On the other hand, precision can be increased by expansion, i.e.\ unrolling of loops and inlining of recursions, if the capacity allows it. Many of these issues are akin to open problems in neighbouring areas of research, e.g.\ \cite{HW13}. \vspace*{-0.9ex} \bibliographystyle{eptcs}
\section{INTRODUCTION} Globular cluster (GC) systems are always present in large galaxies. It is generally believed that GCs form when starbursts occur in galaxies, and as `fossil records', contain vital information on the formation and evolution of their parent galaxies \citep{searle78,harris91,ashman98,west04,brodie06,mglee10a}. Despite the close interplay between GCs and field stars, comparative studies have uncovered a fundamental difference in the observed shapes of their metallicity distribution functions (MDFs), even within the relatively simple type of galaxies -- elliptical galaxies \citep{harris02,rejkuba05,harris07a,harris07b,bird10}. The cause of the discrepancy between these two prime stellar components of galaxies has been the topic of much interest both on theoretical \cite[e.g.,][]{beasley02,pipino07} and observational grounds \cite[e.g.,][]{forbes01,forte05,forte07,liu11} because the disagreement signifies highly decoupled evolutionary paths between GC systems and their parent galaxies. The most common technique for measuring metallicities of a substantially large sample of GCs is {\it photometry} -- obtaining their broadband colors, although it is no substitute for spectroscopy. Because GCs in the Milky Way and other galaxies are usually older than 10 Gyr, and age does not strongly affect the broadband colors of GCs this old, the main parameter governing GC colors is metallicity. Indeed, the overall, first-order feature of the color-metallicity relations (hereafter ``CMRs'') is that GC colors scale {\it linearly} with their metallicities. Empirical relationships between the most often used color, $V-I$, and [Fe/H], fitted mainly to the Galactic GCs \citep{couture90,kissler-patig97,kissler-patig98,kundu98,barmby00}, are approximately linear.
Using the more metallicity-sensitive colors $C-T_1$ or $C-R$, both \citet{harris02} and \citet{cohen03} found a mildly quadratic or broken linear relationship between [Fe/H] and color to be a better fit, thus improving on the relation of \citet{geisler90}. With the linear or mildly curved color-to-metallicity conversion, the now well-documented observation of bimodality in GC color distributions \cite[e.g.,][]{zepf93,ostrov93,whitmore95,mglee98,gebhardt99,harris01,kundu01,larsen01,peng06,harris06,jordan09,sinnott10,liu11} has been translated into bimodality of their MDFs. This is where a sharp distinction between GCs and field stars takes place; independent studies via {\it direct} photometry of spatially-resolved constituent field stars in a dozen nearby galaxies have shown that their MDFs have, in general, strongly-peaked, {\it unimodal} [Fe/H] distributions with broad metal-poor tails. More recent observations and modeling of old star clusters, however, suggest that the relations between metallicity and broadband color for GCs have a subtle, second-order feature; they appear to be nonlinear with a quasi-inflection at intermediate metallicities. For instance, \citet{peng06} presented an empirical relationship between the $g-z$ colors and spectroscopic metallicities for GCs in the Milky Way and the giant elliptical galaxies, M49 and M87 (see their Figure 11). With this dataset, they showed that the relationship between [Fe/H] and $g-z$ is steep for [Fe/H] $<$ $-$0.8, shallow up to [Fe/H] $\simeq$ $-$0.5, and then possibly steep again at higher metallicities. Independently, \citet[][hereafter Paper I]{yoon06} presented a theoretical metallicity-to-color relationship that has a significant inflection and thus reproduces well the observed feature. This nonlinear nature of the relation between intrinsic metallicity and its proxy, colors, may hold the key to understanding the color bimodality phenomenon. 
Paper I showed that the wavy feature projects equidistant metallicity intervals near the quasi-inflection point onto larger color intervals, and thus produces bimodal GC color distributions when the underlying distribution in [Fe/H] is broad, even if it is unimodal. The scenario gives a simple and cohesive explanation for the key observations, including ($a$) the overall shape of color histograms, ($b$) the number ratio of blue and red GCs as a function of host galaxy luminosity, and ($c$) the peak colors of {\it both} blue and red GCs as a function of host luminosity. If the bona-fide shape of the color-metallicity relationship is highly inflected, what has been thought to be the MDFs of GC systems may deviate significantly from the true distributions. In this paper, we present an alternative way of resolving the long-standing discrepancy in the MDFs between GCs and halo stars in bright elliptical galaxies. The nonlinear conversion from metallicities to colors (Paper I) is not irreversible, and here we {\it inverse-transform} color distributions of GCs into metallicity distributions using the nonlinear CMRs. Section 2 presents the pros and cons of the nonlinearity of CMRs, on which the present work is based. Section 3 applies the nonlinear color-to-metallicity conversion to the actual GC color distributions, and examines the inferred [Fe/H] distributions for M87 and M84 (\S\S\,3.1) and for the 100 early-type galaxies in the ACS Virgo Cluster Survey \cite[ACSVCS,][]{cote04} (\S\S\,3.2). Section 4 compares the inferred GC MDFs with the MDFs of spatially-resolved constituent stars of nearby galaxies (\S\S\,4.1 and \S\S\,4.2) and the MDFs from a simple chemical enrichment model of galaxies (\S\S\,4.3).
Section 5 discusses the implications of our results on the color bimodality issues (\S\S\,5.1), addresses the question of whether the GC formation is coupled with the bulk formation of the stellar population of host galaxies (\S\S\,5.2), and finally presents our view on the formation and evolution of GC systems and their parent galaxies (\S\S\,5.3). \section{NONLINEARITY OF COLOR-METALLICITY RELATIONS: PROS~AND~CONS} The core of this work is that the MDFs derived from GC optical color distributions are similar to those of constituent halo stars in galaxies. This work is based on the nonlinearity-CMR hypothesis (Paper I), which has been a subject of dispute since its announcement. The issue is important enough that we devote a section to discussing it. \subsection{Observed and Predicted Color-Metallicity Relations} Simple linear conversion of photometric colors is frequently used for estimating metallicities for large samples of extragalactic GCs. This is a reasonable first-order assumption for obtaining mean metallicities, but for investigating the detailed structure of the MDF, including possible subpopulations, the form of the CMR must be known to higher order. However, the best empirical color-metallicity calibrations currently available \cite[e.g.,][]{peng06,mglee08,beasley08,sinnott10,woodley11,alves11} exhibit notable observational scatter. Moreover, compared to tens of thousands of GCs in a typical giant elliptical, the calibration samples are still relatively small and sparsely populated at the high-metallicity end. Larger samples of high-quality spectroscopic metallicities are needed to establish the precise forms of CMRs. Such samples would implicitly include any correlations with age or other parameters, and would provide strong constraints on the theoretical models.
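The projection effect underlying the nonlinearity-CMR hypothesis is easy to reproduce numerically. The sketch below pushes a purely unimodal Gaussian MDF through an illustrative inflected CMR -- a tanh form chosen only for its shape, not one of the empirical or model calibrations discussed in this section -- and recovers a bimodal color histogram; all numerical values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Unimodal "MDF": a single broad Gaussian in [Fe/H] (illustrative values).
feh = rng.normal(-0.8, 0.45, 200_000)

def cmr(z):
    """Illustrative inflected color-metallicity relation (a tanh shape,
    NOT a fitted calibration): shallow at both ends and steep near the
    quasi-inflection at [Fe/H] ~ -0.8."""
    return 1.1 + 0.18 * (z + 0.8) + 0.35 * np.tanh((z + 0.8) / 0.18)

color = cmr(feh)

# The histogram of the projected colors develops a valley at the inflection.
hist, edges = np.histogram(color, bins=60, range=(0.4, 1.8))
mid = np.searchsorted(edges, 1.1) - 1   # bin containing the inflection color
valley = hist[mid]
blue_peak = hist[:mid].max()
red_peak = hist[mid + 1:].max()
print(valley, blue_peak, red_peak)      # valley well below both peaks
```

The valley appears because the steep segment of the CMR dilutes the color-space density near the inflection point, exactly the mechanism described in Paper I; no metallicity substructure is involved.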
In general, the slope of the dependence of a given photometric color on the logarithmic metallicity [Fe/H] will change as a function of metallicity, and several recent studies have found departures from linearity. For instance, it has been known for decades that the color of the giant branch in Galactic GCs is a nonlinear function of [Fe/H] \cite[e.g.,][]{michel84}. \citet{richtler06}, using the observed $C-T_1$ versus [Fe/H] relation of \citet{harris02}, showed that the shape of the color distribution would differ from that of the MDF, and a non-peaked metallicity distribution could result in a bimodal color distribution. \citet{mglee08} fitted nonlinear relations for $C-T_1$ color versus [Fe/H], and found that the inferred MDF changed significantly depending on the adopted relation. \citet{blakeslee10} fitted a quartic relation to the \citet{peng06} data and showed that the empirical fit produces bimodal $g-z$ colors from unimodal MDFs. As the observations have improved, so have the models, and these also tend to predict nonlinear CMRs (e.g., Lee \etal\ 2002; Paper I; Cantiello \& Blakeslee 2007). This is partly, but not totally, due to improved modeling of the horizontal branch. Kissler-Patig \etal\ (1998a) showed that the Worthey (1994) models predict a nonlinear relation between [Fe/H] and $V-I$, with a wavy form qualitatively similar to that found empirically by \citet{peng06}, although the inflection occurs at higher metallicity because the colors of these models are generally too red at the high-metallicity end (see Blakeslee \etal\ 2001). This is interesting because the Worthey models do not realistically model the horizontal-branch morphology, but treat it as a red clump near the giant branch with a position that varies according to age. 
The Lee \etal\ (2002) model colors clearly showed nonlinear behavior as a function of metallicity even without any horizontal-branch component, but the nonlinearity in the optical colors was more pronounced with the horizontal branch. Despite the advances, more work is needed, especially on the behavior of the horizontal branch in extragalactic GC systems, as this is a complex multi-parameter problem. Metallicity is the primary factor governing horizontal-branch temperature, and the transition occurs in a nonlinear way at intermediate metallicities (e.g., Lee \etal\ 1994). However, variations in other parameters, including age, helium content, and central density, can create significant scatter in horizontal-branch morphology at a given metallicity (e.g., Sandage \& Wildey 1967; Zinn 1980; Stetson \etal\ 1996, Sarajedini \etal\ 1997; Buonanno \etal\ 1997; Sweigart \& Catelan 1998; see also the recent discussions by Yoon \etal\ 2008; Gratton \etal\ 2010; Dotter \etal\ 2010). There are also intricate intercorrelations among these parameters, as well as correlations with GC mass. For instance, the color-magnitude relation found among the brightest GCs in external systems (e.g. Harris \etal\ 2006; Strader \etal\ 2006; Mieske \etal\ 2006, 2010; Peng \etal\ 2009) is likely the result of a mass-metallicity relation. In Galactic GCs, the presence of an extreme blue horizontal branch correlates strongly with GC mass (Lee \etal\ 2007), and is likely related to the presence of helium-enhanced subpopulations (e.g., Norris \etal\ 2004; Lee \etal\ 2005b; Piotto \etal\ 2005; Yoon \etal\ 2008; Han \etal\ 2009; Gratton \etal\ 2010). More massive GCs also have smaller half-light radii and higher central densities (van den Bergh 1996). 
Finally, there is some evidence for correlations between age and metallicity in both the Galactic and extragalactic GC systems (Puzia \etal\ 2005; Beasley \etal\ 2008; Mar{\'{\i}}n-Franch \etal\ 2009), but the degree of correlation must depend on the formation history of the GC system. To summarize, the best current data indicate that optical CMRs for GC systems are nonlinear, and multiple sets of models support this general result. However, exactly how colors vary with metallicity depends on ages, the age-metallicity relations, and variations in other parameters such as $\alpha$-element and helium contents. Thus, a complete picture of the multi-band color-metallicity behavior in extragalactic GC systems would include an understanding of the interplay of the various stellar population parameters. Lacking more stringent observational constraints, we consider only some simple, reasonable assumptions on the standard stellar parameters, and for the present work, we use our best predicted CMRs. \subsection{Assessing the Evidence for Metallicity Bimodality} \subsubsection{The Milky Way} The GC system of the Milky Way Galaxy follows a bimodal MDF (Zinn 1985). The bimodality is confirmed by the radial number density profiles of metal-poor and metal-rich GCs and their orbital characteristics. The metallicity distributions of GCs and field stars in our Galaxy are known much more accurately than those in any other galaxy. For instance, the metal-poor GCs in the Milky Way have formed with an efficiency $\sim$ 20 times greater than the metal-rich GCs with respect to their associated stellar populations. Although these facts were well known, there was little expectation that bimodality would be a general property of the much larger GC systems in giant ellipticals.
Such galaxies contain 10--100 times as many GCs as the Milky Way, and reside predominantly in cluster environments where they likely experienced very different evolutionary histories (e.g., Peng \etal\ 2008); thus, the analogy with our own Galaxy was unclear. In the context of the Toomre (1977) idea that elliptical galaxies are the remnants of dissipationally merged spirals, Ashman \& Zepf (1992) discussed bimodal MDFs with a population of higher metallicity GCs forming in the major merger. Nonetheless, given the simplicity of that model, and the advances in hierarchical structure formation theory, it came as a surprise when most giant ellipticals exhibited bimodal GC color distributions (interpreted as linearly reflecting metallicity), and other scenarios were proposed to account for the bimodality (e.g., Forbes \etal\ 1997; C\^ot\'e \etal\ 1998; Beasley \etal\ 2002; Kravtsov \& Gnedin 2005; Muratov \& Gnedin 2010). However, the presumed bimodality of the MDF is taken as an input constraint in these models, rather than being a clear prediction (apart from perhaps the original Ashman \& Zepf scenario, which is difficult to reconcile with the more complex assembly histories of ellipticals found in cosmological simulations). In light of this, it is worth reexamining the direct evidence for bimodal MDFs in elliptical galaxies, and other galaxies with comparably large GC systems. \subsubsection{M31} M31 is the largest galaxy in the Local Group and contains over 450 confirmed GCs, three times as many as the Milky Way, and hundreds of additional candidates (Galleti \etal\ 2004; Huxor \etal\ 2011; Caldwell \etal\ 2009, 2011). Barmby \etal\ (2000) studied the optical and near-IR color distributions for a large sample of M31 GCs. Unlike in most giant ellipticals, the colors did not appear bimodal, but Barmby \etal\ suggested that errors in reddening and photometry could ``wash out'' the bimodality.
A KMM analysis of the $V-K$ colors favored a double Gaussian model over a single Gaussian with 92\% confidence ($<2\sigma$), but although the distribution appeared asymmetric, it did not show two distinct components as in the Milky Way. The result was similar for the sample of $\sim160$ GCs with spectroscopic metallicities compiled by these authors (see their Figure~19). Galleti \etal\ (2009) presented a homogeneous set of metallicities from Lick index measurements for 245 GCs. Again, they found that multiple Gaussian component models were favored because the MDF is broad and asymmetric, but it lacks the two distinct metallicity peaks seen in the Milky Way (see their Figure~15). When the sample was restricted to M31 GCs with errors $<0.3$~dex, there was no appearance of bimodality. Galleti \etal\ (2009) conclude, ``The MD[F] of M31 GCs does not present any obvious structure like the bimodality encountered in the GC systems of the MW [Milky Way]. Nevertheless, the distribution for M31 clusters does not seem to be well represented by a single Gaussian distribution$....$ While clearly not conclusive, the above analysis suggests that there may be actual structures in the MD[F] of M31 GCs.'' In the most recent study of the M31 GC MDF, \citet{caldwell11} present high signal-to-noise spectroscopic data on the M31 GC system. Metallicities were estimated using a calibration of Lick indices with [Fe/H] provided by Galactic GCs. Although the Caldwell \etal\ sample does not include many outer-halo metal-poor GCs that would increase the significance of the metal-poor side of the MDF, the metallicity distribution of over 300 old GCs has a significant population of intermediate-metallicity GCs and is not generally bimodal, in strong distinction with the bimodal Galactic GC distribution.
The MDF shows a broad peak, centered at [Fe/H] = $-1$, possibly with minor peaks at [Fe/H] = $-1.4$, $-0.7$, and $-0.2$, suggesting that the GC systems of M31 and the Milky Way had different formation histories. Given the complex accretion history of M31 (McConnachie \etal\ 2009), it is not surprising that the M31 GC MDF would possess significant structure, but the best current data do not present evidence for bimodality in the M31 GC system. We note in passing that the Balmer absorption lines (H$\beta$, H$\gamma$ and H$\delta$) theoretically have nearly the same response to horizontal-branch stars as optical broadband colors in their index-metallicity relations (e.g., Lee \etal\ 2000; Chul \etal\ 2011, in prep.; S. Kim \etal\ 2011 (Paper IV)). Remarkably, the sample of \citet{caldwell11} shows strong nonlinearity in the Balmer lines vs. metal line ($<$Fe$>$) relations (see their Figure 10), and, as a result, exhibits clear Balmer strength bimodality (see their Figure 6), in close analogy with optical color bimodality. \subsubsection{Cen~A and the Sombrero} The case for MDF bimodality is better in NGC\,5128 (Cen~A), an S0~pec galaxy\footnote{``NED homogenized morphology,'' http://nedwww.ipac.caltech.edu/} at the center of its own small group (Karachentsev \etal\ 2007). Beasley \etal\ (2008) present spectroscopic metallicities for 207 GCs in this galaxy. The resulting MDF is skewed towards high metallicities and apparently has three closely spaced peaks (see their Figure 5), in contrast to the two well-separated peaks in the Milky Way. This difference in MDF structure likely reflects the very different accretion histories. A very similar MDF was found by Woodley \etal\ (2010) in a spectroscopic study of 72 NGC\,5128 GCs. In this case, a unimodal distribution provided statistically the best fit, but their [MgFe] index distribution was better fitted with a double Gaussian model.
In both studies, the NGC\,5128 GC metallicities and [MgFe] values lack the sharply bimodal appearance of the optical colors, especially of $B-V$ and $V-I$ (Peng \etal\ 2004b). However, Spitler \etal\ (2008) find that the optical--[3.6$\mu$m] IR color distributions for 146 NGC\,5128 GCs are distinctly bimodal, providing good evidence for MDF bimodality. They also found that similar data for a smaller sample of GCs in NGC\,4594 (the Sombrero) did not provide significant evidence for or against MDF bimodality in that galaxy. More recently, \citet{alves11} present a spectroscopic MDF for over 200 GCs in this galaxy, which is bimodal with peaks at [Fe/H] $\sim$ $-1.4$ and $-0.6$. \subsubsection{Giant elliptical galaxies} Various studies of GC metallicities in giant ellipticals from spectroscopic and near-IR/optical photometric data (Puzia \etal\ 2002, 2005; Cohen \etal\ 2003; Strader \etal\ 2007; Hempel \etal\ 2007; Kundu \& Zepf 2007) are discussed in detail by Blakeslee \etal\ (2010). We summarize here and refer the reader to that work for the full discussion. Despite the groundbreaking nature of many of these studies, the spectroscopic results tend to be limited by the sample sizes ($<1\%$ of the population), sample definitions, or sensitivity to the treatment of the data. Thus, Blakeslee \etal\ (2010) conclude that the \textit{direct} evidence for metallicity bimodality in giant ellipticals is weak. For example, the bimodality reported by Strader \etal\ (2007) in the sample of 47 M49 GC metallicities derived from the Cohen \etal\ (2003) Lick index measurements depended on the calibration above solar metallicity. Despite the very pronounced color bimodality, there was no significant evidence for bimodality in the metallicities reported by Cohen \etal\ (2003), which extended to higher metallicities, rather than forming a clump near the solar value as in Strader \etal\ (2007). 
Given the uncertainty in the high-metallicity calibration (see discussion in Cohen \etal), and the sample limitations (0.6\% of the GC population, observed with two spectroscopic masks), the issue remains unresolved for M49. More recently, Foster \etal\ (2010) have studied the {Ca}\,\textsc{ii} triplet (CaT) feature in a sample of 144 GCs in the Eridanus giant elliptical NGC\,1407, which has prominent optical color bimodality. In Galactic GCs, the CaT index is linearly related to metallicity, at least for $\hbox{[Fe/H]}<-0.4$ (Armandroff \& Zinn 1988), and models indicate that it is very insensitive to age (Vazdekis \etal\ 2003). Foster \etal\ (2010) find that bright GCs near the peaks of the color distribution have very similar CaT strengths, indicating very similar metallicities despite the wide separation in color space. The distribution of CaT-derived metallicities does not appear bimodal, but because of its asymmetry, it is better fitted by a double Gaussian model at the $2\sigma$ level. However, if this result were interpreted as MDF bimodality, then the components would differ significantly in amplitude, width, and position from those implied by the optical colors; i.e., it would be a different bimodality. Foster \etal\ remark that if the metallicities are taken at face value, then the very different color and metallicity distributions could be reconciled by a nonlinear CMR causing a unimodal MDF to appear bimodal in color space. In a recent near-IR/optical photometric study, \citet{kundu07} presented the $I-H$ color distribution of 80 GCs in M87, which shows bimodality. More recently, \citet{chies10,chies11a,chies11b} present (optical -- near-IR) colors for the GC systems in 14 early-type galaxies, and find that the bimodality becomes less evident in $g-K_s$ if compared to $g-z$ and even less pronounced in $z-K_s$. 
\citet{chies10} point out that the disappearance of bimodality in these colors while evident in the optical $g-z$ color could be attributed to a nonlinear-CMR effect, although the observational uncertainties could also account for it. Finally, we note that our new studies combining ACS and WFC3/IR data for NGC\,1399 in the Fornax galaxy cluster (Blakeslee \etal\ 2011, in prep.) and Subaru/MOIRCS near-IR and CTIO optical data for M60 and NGC 4365 in the Virgo cluster (S. Kim \etal\ 2011, in prep.) find independently that the $I-H$ and $I-K_s$ color distributions are not significantly bimodal, despite the strong bimodality in the optical colors of the same sample. Overall, the available data on the MDFs of GCs in giant ellipticals are at best ambiguous. In cases where spectroscopy -- a more {\it direct} measure of metallicity than colors -- suggests bimodality, it is much less apparent than the dramatic double-peaked histograms of colors for the same galaxies (e.g., Peng \etal\ 2006). Thus, at least \textit{some} of the observed color bimodality is likely due to nonlinear behavior of colors with metallicity. As discussed above, there is empirical evidence for such nonlinearity having a form that tends to produce bimodal color distributions. Pipino \etal\ (2007), considering the results of Puzia \etal\ (2005), also concluded that color-metallicity nonlinearity would help significantly in reconciling the photometric and spectroscopic data. The question of the relative importance of color-metallicity nonlinearity and metallicity bimodality in producing the observed GC color distributions remains open. In the meantime, besides the necessity of better spectroscopic samples, it is worthwhile to explore the possibility of deriving MDFs from the observed colors under the assumption of nonlinear CMRs. The following sections use the latest stellar population models to invert the colors for very large photometric samples and to examine the implications of the resulting MDFs.
\section{COLOR AND METALLICITY DISTRIBUTIONS OF EXTRAGALACTIC GLOBULAR CLUSTER SYSTEMS} Our main objective is to investigate the MDFs for GC systems when inferred from the nonlinear CMRs. Compared to the metallicity-to-color conversion shown in Paper I, the inverse-conversion from colors to metallicity is more susceptible to the inevitable incompleteness of current population synthesis models. With a theoretical CMR that is somewhat incorrect in the color direction, for example, the metallicity-to-color conversion will still give color distributions with the correct shape, but the inverse-conversion will yield erroneous metallicity distributions. Moreover, the inverse-conversion from colors to metallicity may be hampered by the varying observational uncertainties depending on the colors of interest. With these caveats in mind, however, careful inverse-conversions may shed light on the structure of the GC MDFs, including possible subpopulations. In this section, we apply the transformations of Paper I to the color distributions of GC systems and present their inferred MDFs for M87 and M84 (\S\S\,3.1) and for 100 early-type galaxies imaged in the ACSVCS (\S\S\,3.2). \subsection{Globular Cluster Systems in M87 and M84} In this subsection, we present the results of our multiband photometry for GC systems of M87 (NGC 4486) and M84 (NGC 4374). We have selected M87 and M84, giant elliptical galaxies in the Virgo cluster, because they both have GC systems with confirmed color bimodality in $g-z$ and are among the very few elliptical galaxies with deep $u$-band observations available. We refer the reader to Yoon \etal\ (2011, hereafter Paper II) for further details on the multiband photometry of the M87 GC system in the context of nonlinear CMRs. The archival F336W images of {\it HST}/WFPC2 and {\it HST}/WFC3 were used to obtain $u_{F336W}$ for GC candidates in M87 and M84, respectively.
Our $u_{F336W}$-band catalogs were matched with ACS/WFC $g_{F475W}$- and $z_{F850LP}$-band photometry of Jord\'{a}n \etal\ (2009). We hereafter refer to $u_{F336W}$, $g_{F475W}$, and $z_{F850LP}$ magnitudes as $u$, $g$, and $z$, respectively. Jord\'{a}n \etal\ (2009) selected {\it bona-fide} GCs with their magnitudes, $g-z$ colors, and sizes. We further employed color cuts in the $u$-band colors to filter out contaminating sources, especially background star-forming galaxies. We used 591 GCs in M87 and 306 GCs in M84 that have reliable $u$, $g$, and $z$ measurements in common. The samples are $u$-band limited. The merit of the multiband observations is clear: since the form of CMRs hinges on which color is used, the shape of the color distributions varies significantly depending on the colors in use. Hence a comparative analysis of the GC MDFs that are independently obtained from distributions of different colors will put the nonlinearity hypothesis to the test, as proposed in Paper I and Paper II. Among optical colors, the $u$-band related colors (e.g., $u-g$ and $u-z$) are theoretically predicted to exhibit CMRs most distinct from those of other commonly used colors (e.g., $g-z$), and are thus best suited to the task. Furthermore, the $u$-band colors are significantly less affected by the variation in the horizontal-branch mean temperature, having less inflected, ``smooth'' CMRs than $g-z$ for given ages. Therefore, for instance, the conversion from $u-g$ color distributions to MDFs via the ($u-g$)-[Fe/H] relation should be more straightforward than the case of $g-z$.
The reason why the CMRs for $u$-band colors are less inflected than the $g-z$ CMR is twofold: ($a$) the integrated $u$-band colors of main-sequence and red-giant-branch stars are smoother functions of metallicity compared to $g-z$, and ($b$) the $u$-band colors are less sensitive to the horizontal-branch temperature variation, because the blueing of the optical spectrum with increasing horizontal-branch temperature is held back by the Balmer discontinuity, in which the $u$ band is located (Yi \etal\ 2004). Such properties make the $u$-band colors good metallicity indicators for a wide range of age, and the $u$-band color distributions are expected to be significantly different from distributions of other optical colors such as $g-z$, $V-I$, and $C-T_1$. See Paper II for detailed discussion on the $u$-band colors as a tool to probe the nonlinearity of CMRs. Figure 1 shows the observed color distributions of the M87 GC system from $u$, $g$, and $z$ photometry and the inferred MDFs. First, Figures 1$a$ and 1$d$ present the $g-z$ vs. $u$ and $u-g$ vs. $u$ diagrams, respectively. Figures 1$b$ and 1$e$ present the $g-z$ and $u-g$ color distributions, respectively. The $g-z$ distribution of the M87 GCs unambiguously displays two peaks around $g-z$ = 1.0 and 1.4. In contrast, the $u-g$ distribution for the identical sample does not appear to have clear bimodality. One may argue that the larger observational uncertainties in $u$-band weaken the bimodality. Table 3 shows that the typical photometric error of $u-g$ is 2.2 times larger than that of $g-z$ for the entire sample, but at the same time the ranges spanned by the colors are $\Delta$($g-z$) = 1.1 mag and $\Delta$($u-g$) = 2.1 mag, that is, the baseline of $u-g$ is 1.9 times longer than that of $g-z$. As a result, the relative sizes of error bars are ($g-z$ : $u-g$) = (1.0 : 1.2). In a relative sense, the errors in the two colors are quite comparable to each other.
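The error-budget argument above reduces to simple arithmetic: the raw photometric-error ratio between two colors is divided by the ratio of the color baselines they span. The sketch below uses the ratios quoted in the text for M87, together with the analogous M84 values quoted later in this section:

```python
def relative_error_ratio(err_ratio, baseline_ratio):
    """Ratio of *relative* error bars between two colors: the raw
    photometric-error ratio divided by the ratio of color baselines."""
    return err_ratio / baseline_ratio

# M87: err(u-g)/err(g-z) = 2.2; baselines 2.1 mag vs 1.1 mag, i.e. ~1.9x
m87_ratio = relative_error_ratio(2.2, 1.9)
# M84 (quoted later in Sec. 3.1): error ratio 1.7, baseline ratio ~1.6
m84_ratio = relative_error_ratio(1.7, 1.6)
```

Rounded to one decimal these give the (1.0 : 1.2) and (1.0 : 1.1) relative error-bar sizes quoted in the text.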
Moreover, Paper II shows that the $u-z$ color, which has the {\it smallest} relative errors, still exhibits weaker bimodality in the color distribution compared to the $g-z$ distribution. It is, therefore, not likely that bimodality in the $u-g$ distribution of M87 GCs is simply blurred by larger observational errors in the $u$-band. The variation in the histogram shape for different colors may suggest that the form of the CMRs varies significantly depending on the colors in use. This is shown in Figures 1$c$ and 1$f$, along with our predictions from the Yonsei Evolutionary Population Synthesis (YEPS) model\footnote{The models in this study are constructed using the Yonsei Evolutionary Population Synthesis (YEPS) code. The YEPS model generates ($a$) synthetic color-magnitude diagrams for individual stars \cite[see, e.g.,][]{lee94,lee99,lee05b,rey01,yoon02,yoon08,han09} and ($b$) synthetic integrated spectra for colors and absorption indices of simple and composite stellar populations \cite[see, e.g.,][]{lee05a,par97,lee00,rey05,rey07,rey09,kaviraj05,kaviraj07a,kaviraj07b,kaviraj07c,ree07,yoon06,yyl09,yoon09,spitler08,mieske08,choi09,cho11,yoon11}. One of the main assets of our model is the consideration of the systematic variation in the mean color of horizontal-branch stars as functions of metallicity, age, and abundance mixture of stellar populations. The standard YEPS model employs the Yonsei-Yale stellar evolution models (Y. Kim \etal\ 2002; Han \etal\ 2011, in prep.) and the BaSeL flux library (Westera \etal\ 2002). The spectro-photometric model data of the entire parameter space are available at http://web.yonsei.ac.kr/cosmic/data/YEPS.htm.} (Chung et al. 2011; Yoon et al. 2011, {\it in prep.}). In Figure 1$c$, the $g-z$ colors are shown as a function of [Fe/H] for GCs in the Milky Way (open circles), and M49 and M87 (filled circles and triangles). The references to the observed data used in the relations are summarized in Table 1.
The fifth-order polynomial fit to our model data for 13.9-Gyr GCs is overlaid (thick solid line, see Table 2). For comparison, the straight grey line represents the linear least-squares fit to the data. Figure 1$f$ is the same as Figure 1$c$, but for the $u-g$ color. Open circles, blue filled squares, and red filled squares represent GCs in the Milky Way, M87, and NGC 5128, respectively. The $u-g$ colors of the GCs in the Milky Way and NGC 5128 were obtained from their $U-B$ colors via the equation, ($u-g$) = 1.014 ($U-B$) + 1.372, derived from model data for synthetic GCs with combinations of age (10 $\sim$ 15 Gyr at 1 Gyr intervals) and [Fe/H] ($-2.5$ $\sim$ 0.5 dex at 0.1 dex intervals). The $U$ and $B$ passbands are relatively close to the $u$ and $g$ passbands, respectively, and thus $U-B$ responds to the horizontal-branch morphology in a way very similar to $u-g$. Therefore, $U-B$ is a good proxy for $u-g$, and the $U-B$ vs. $u-g$ relationship is best described by a linear fit over a range of ages and metallicities (Table 1). Guided by the model, the form of the observed CMRs appears to vary from the ($g-z$)-[Fe/H] relation to ($u-g$)-[Fe/H]. One may argue, however, that current data appear to be fit both by the theoretical nonlinear relations and the empirical straight relations, as there is only a weak indication that the modeled relations are actually better fits to the $g-z$ vs. [Fe/H] and $u-g$ vs. [Fe/H] data. Given the current level of observational accuracy and inhomogeneity of the data, the purpose of our study is not to determine the exact shape of color-metallicity relationships, but to investigate the consequences and implications of the possible nonlinear transformations. We now consider the inferred GC MDFs. Figures 1$g$ and 1$h$ show the MDFs for the M87 GC system.
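The $U-B$ to $u-g$ proxy conversion quoted above is a one-line linear transformation:

```python
def ub_to_ug(u_b):
    """Proxy relation quoted in the text: (u-g) = 1.014 (U-B) + 1.372,
    a linear fit to synthetic GCs spanning 10-15 Gyr in age and
    [Fe/H] = -2.5 to +0.5 dex."""
    return 1.014 * u_b + 1.372
```

For example, a cluster with $U-B$ = 0.2 maps to $u-g$ = 1.5748; the zero point 1.372 is the predicted $u-g$ of a cluster with $U-B$ = 0.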
On the one hand, Figure 1$g$ presents the GC MDFs converted from $g-z$ (red histogram) and $u-g$ (blue histogram) colors that are based on the traditional {\it linear} color-to-metallicity conversion (thin grey lines in Figures 1$c$ and 1$f$), and thus are just replicas of their color histograms shown in Figures 1$b$ and 1$e$. Note that both the overall shape and the peak positions do not appear to agree between the GC MDFs from the two colors. On the other hand, Figure 1$h$ presents the GC MDFs converted from $g-z$ and $u-g$ colors that are based on the improved {\it inflected} relationship between color and metallicity (thick black lines in Figures 1$c$ and 1$f$). In contrast to Figure 1$g$, the inferred GC MDFs in Figure 1$h$ are modified drastically to have a strong metal-rich peak with a metal-poor tail. The two histograms in Figure 1$h$ are more consistent with each other in terms of their overall shape and peak positions than those shown in Figure 1$g$. We note that our stellar population models show that, for given input parameters, the {\it absolute} quantities of output are rather sensitive to the choice of model ingredients such as stellar evolutionary tracks and model flux libraries; the different choices can result in up to $\sim$ 0.2 mag variation in $g-z$ and $u-g$ among models, and correspondingly up to $\sim$ 0.5 dex in the inferred [Fe/H] values. Hence, one should put more weight on the {\it relative} values of inferred GC MDFs, i.e., the overall morphology of the MDFs and their unimodality. We, however, wish to emphasize that the typical GC MDF shape is obtained invariably from different colors, i.e., $g-z$ and $u-g$ for M87 GCs. Figure 2 is the same as Figure 1, but for the GC system in M84. First, Figures 2$a$ and 2$d$ present the $g-z$ vs. $u$ and $u-g$ vs. $u$ diagrams for the M84 GCs. Figures 2$b$ and 2$e$ present the $g-z$ and $u-g$ color distributions, respectively.
As for the possible role of observational uncertainties in weakening bimodality of the $u-g$ color distribution, the typical photometric error of $u-g$ is 1.7 times larger than that of $g-z$ for the entire sample (Table 3), but at the same time the ranges spanned by the colors are $\Delta$($g-z$) = 1.1 mag and $\Delta$($u-g$) = 1.8 mag, that is, the baseline of $u-g$ is 1.6 times longer than that of $g-z$. As a result, the relative sizes of error bars are ($g-z$ : $u-g$) = (1.0 : 1.1). In a relative sense, the errors in the two colors are comparable. It is, therefore, not likely that bimodality in the $u-g$ distribution of M84 GCs is simply blurred by larger observational errors in the $u$-band. Figures 2$c$ and 2$f$ are the same as Figures 1$c$ and 1$f$, respectively. Finally, Figures 2$g$ and 2$h$ show the inferred MDFs for the GC systems in M84. Again, the inferred GC MDFs in Figure 2$h$ have a strong metal-rich peak with a metal-poor tail. The two histograms in Figure 2$h$ show better agreement with each other in terms of their overall shape and peak positions than those in Figure 2$g$. In this section, we have obtained multi-band colors of GCs in the two representative giant elliptical galaxies, M87 and M84, and examined their color and metallicity distributions. We have found that the distributions of different colors can be transformed into unimodal metallicity distributions that are strongly peaked with a broad metal-poor tail. The implications of the typical shape of the inferred GC MDFs and its similarity to those from chemical evolution models and field-star observations (see \S\,4) will be discussed in \S\,5. We note, however, that the similarity itself between the GC MDFs from multiband colors does not necessarily represent evidence that the model is correct. Whether the similar MDFs from various colors can be taken as evidence for the nonlinear-CMR scenario for the color bimodality is a sufficiently involved issue that it is fully explored in Paper II.
\subsection{Globular Cluster Systems in the ACS Virgo Cluster Survey} Motivated by the findings above in \S\S 3.1 for the individual galaxies and to avoid possible small-number statistics, we now benefit from the 100 early-type galaxies in the ACSVCS \citep{cote04,peng06,jordan09}. We apply the color-to-metallicity transformation scheme to $\sim$10,000 GCs in ACSVCS, the largest and most homogeneous photometric database of extragalactic GCs currently available. Figure 3 presents the observed color distributions and inferred MDFs of GC systems in bins of host galaxy luminosity. In Figure 3$a$, we show the observed color histograms of GC systems for seven bins of host galaxy magnitude. The data are the same as in Figure 6 of \citet{peng06} and we list them in Table 4. The histograms are normalized by the GC number at their blue peaks, and multiplied by constants, $C$, for clarity. The magnitude bins are 1 mag wide and extend from $M_B$ $\simeq$ $-$21.5 ($-$22 $\leq$ $M_B$ $<$ $-$21, red, $C$ = 1.0) to $\simeq$ $-$15.5 ($-$16 $\leq$ $M_B$ $<$ $-$15, purple, $C$ = 0.4). A Gaussian kernel of $\sigma$($g-z$) = 0.05 is applied. Clearly, the histograms appear bimodal or asymmetric across the entire luminosity range, with all GC systems containing blue peaks and with more prominent red peaks in brighter hosts. Close scrutiny of Figure 3$a$ reveals the tendency of the dip positions (and blue peak positions) in the color histograms to become progressively bluer as the host luminosity decreases. In the context of nonlinear CMRs, this can be explained if GCs in fainter galaxies are slightly younger than those in brighter galaxies. This is because at younger ages, the blue horizontal branch develops at lower metallicity \citep{lee94,yoon08,dotter10} and, as a consequence, the predicted colors of the quasi-inflection points along the CMR move systematically towards the blue.
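The histogram construction described above (a Gaussian kernel of $\sigma$($g-z$) = 0.05 applied to each color, then normalization at the peak) can be sketched as follows; the colors here are synthetic stand-ins, not the ACSVCS data:

```python
import numpy as np

def smoothed_histogram(colors, grid, sigma=0.05):
    """Sum a Gaussian kernel of width `sigma` centered on each GC color
    (a kernel-density estimate), normalized to unit integral on `grid`."""
    dens = np.exp(-(grid[:, None] - colors) ** 2 / (2.0 * sigma ** 2)).sum(axis=1)
    dx = grid[1] - grid[0]
    return dens / (dens.sum() * dx)

def peak_normalize(dens):
    """Rescale so the highest peak has height 1, as in Figure 3a."""
    return dens / dens.max()

rng = np.random.default_rng(1)
# Synthetic stand-in for a bimodal g-z distribution (NOT the ACSVCS sample):
# a dominant blue component near 1.0 and a weaker red one near 1.4
colors = np.concatenate([rng.normal(1.0, 0.08, 200), rng.normal(1.4, 0.08, 120)])
grid = np.linspace(0.6, 1.8, 601)
dens = smoothed_histogram(colors, grid)
```

Smoothing with a kernel narrower than the intrinsic component widths preserves the peaks and dip while suppressing bin-to-bin noise, which is why the dip positions can be traced across host-luminosity bins.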
This effect is demonstrated in Figure 3$b$ by an example set of the YEPS model predictions with $\Delta$$t$(brightest--faintest) = 3 Gyr. In this example, ages range from 10.5 Gyr (purple) to 13.5 Gyr (red) in equal intervals of 0.5 Gyr. The YEPS $g-z$ data for 9 Gyr to 14 Gyr by steps of 0.5 Gyr are given in Table 2. The thin straight dotted line represents the linear least-squares fit to the data shown in Figures 1$c$ and 2$c$. Figures 3$c$\,--\,3$f$ present the results of the four different color-to-metallicity transformations. Like the color distributions in Figure 3$a$, the inferred GC MDFs are displayed for seven bins of $M_B$ from $M_B$ = $-$22 to $-$15 in steps of 1 mag. To obtain these MDFs, we converted the color of each GC into [Fe/H] using the ($g-z$)-[Fe/H] relations from the YEPS model. Each panel makes a different assumption on the systematic age sequence from the faintest host bin to the brightest. The modeled ages for host luminosity bins are shown in the insets of Figures 3$c$\,--\,3$f$, with age differences, $\Delta$$t$ = 0, 1, 2, and 3 Gyr, respectively, between the faintest ($M_B$ $\simeq$ $-$15.5, purple) and brightest ($-$21.5, red) bins. The brightest, oldest ($M_B$ $\simeq$ $-$21.5, red) bin is set to be 13.5 Gyr. The color distribution shown in Figure 3$a$ was transformed to MDFs via the model CMRs of the corresponding ages. Figure 3$c$ presents the case in which the age is assumed to be constant at 13.5 Gyr regardless of host luminosity, from $M_B$ $\simeq$ $-$15.5 (purple) to $-$21.5 (red). The GC MDFs for the luminous ($M_B$ $<$ $-$17) host bins (the first to fifth brightest bins) in particular are strongly peaked with a broad metal-poor tail. As Paper I suggested, the strong bimodality seen in the GC color distribution of luminous galaxies is not evident in the MDF once transformed by their wavy CMR. The MDFs in the faintest two bins, however, have broad peaks.
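The color-to-metallicity conversion described above amounts to inverting the model CMR. The sketch below uses a hypothetical monotone toy relation in place of the actual YEPS fifth-order polynomial (Table 2), since the inversion logic is the same either way:

```python
import numpy as np

# Hypothetical, monotone (g-z) vs [Fe/H] grid standing in for a single-age
# model CMR; the tanh term mimics a quasi-inflection near [Fe/H] = -1.
model_feh = np.linspace(-2.5, 0.5, 31)
model_gz = 1.2 + 0.35 * model_feh + 0.12 * np.tanh(3.0 * (model_feh + 1.0))

def feh_to_color(feh):
    """Forward direction: [Fe/H] -> color, by interpolating the model grid."""
    return np.interp(feh, model_feh, model_gz)

def color_to_feh(gz):
    """Invert the CMR by reading the model grid in the opposite direction.
    Requires the model color to increase monotonically with [Fe/H]."""
    return np.interp(gz, model_gz, model_feh)
```

Because the toy relation is strictly monotone, the inversion is exact; an inflected but monotone CMR compresses or stretches metallicity intervals, which is precisely how a unimodal MDF can map onto a bimodal color distribution.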
This is likely because the less luminous galaxies primarily have blue GCs and their colors are largely bluer than the inflection point in the CMR under the constant age assumption, i.e., $\Delta$$t$(brightest -- faintest) = 0 Gyr. Paper I mentioned that the position of the inflection is bluer for younger stellar populations, and so if fainter galaxies host GCs younger than their brighter counterparts, then a different CMR may apply. However, in this naive use of the single CMR, not all inferred [Fe/H] distributions appear to be unimodal. Comparative analysis shows that the non-zero age difference of $\Delta$$t$(brightest -- faintest) = 1\,$\sim$\,3 Gyr (Figures 3$d$\,--\,3$f$) yields GC MDFs for {\it all} host luminosity bins that are better fit by skewed Gaussian distributions with metal-poor tails. The grey dotted histogram in each panel represents the inferred MDF for the brightest (i.e., $M_B$ $\simeq$ $-$21.5) bin based on the simple straight fit in Figure 3$b$, and is intended for comparison to the red solid MDF. Obviously, the strong bimodality seen in the color distributions of luminous galaxies is no longer evident in the MDFs, once transformed via the inflected color-metallicity relationship. The inferred GC MDFs for all seven host luminosity bins have the same characteristic shape -- being sharply peaked with a broad metal-poor tail -- across three orders of magnitude in the host galaxy mass. In addition, the mean [Fe/H] and peak position of the GC MDFs are a strong function of the host luminosity, in the sense that, for brighter host galaxies, the mean [Fe/H] increases and the peak shifts to higher metallicity. The absolute ages of GC systems are less certain than their relative ages. To test the robustness of the result shown in Figure 3 against different absolute ages, Figure 4 makes differing assumptions on the age sequence from the faintest host galaxies to the brightest.
In this case, the center bin, i.e., the fourth brightest ($M_B$ $\simeq$ $-$18.5, green) bin, is set to be 13 Gyr. The modeled ages for the host luminosity bins are shown in the insets of Figures 4$c$\,--\,4$f$, with age differences of $\Delta$$t$ = 0, 1, 2, and 3 Gyr, respectively, between the faintest ($M_B$ $\simeq$ $-$15.5, purple) and brightest ($-$21.5, red) bins. In Figure 4$b$, another example set of our model prediction with $\Delta$$t$ = 3 Gyr is shown. In this example, ages range from 11.5 Gyr (purple) to 14.5 Gyr (red) in equal intervals of 0.5 Gyr (Table 2). Figures 4$c$\,--\,4$f$ show that the strong bimodality is no longer evident in the inferred MDFs, and the typical shape of the GC MDFs does not depend on the differing age assignments from the faintest host galaxies to the brightest. It also holds true that the peak positions of the GC MDFs are a strong function of the host luminosity in that, for brighter host galaxies, the mean [Fe/H] increases and the peak shifts to higher metallicity. We also tested the stability of the results against the putative age dispersion in the examined data. The typical form of the GC MDFs persists with up to $\sigma_t$ $\simeq$ 2.5 Gyr, where the age spread among GCs in a single host luminosity bin is parameterized by a Gaussian dispersion, $\sigma_t$. The data for the inferred GC MDFs in Figures 3$d$ and 4$d$ (they are identical) are given in Table 5. If one could assume that the intrinsic shapes of the GC MDFs are similar in all host luminosity bins, our results might give a tantalizing hint of varying ages among GC systems, in the sense that GCs in more luminous hosts are older than GCs in fainter galaxies. However, there is as yet no observational support for this assumption. Moreover, the {\it true} shape of CMRs suggested in this study is still unproven. Hence, our simulations are meant to determine neither the absolute ages of the GC systems nor the exact pattern of the age sequence from the faintest bin to the brightest.
Nevertheless, the possible age sequence appears not inconsistent with the ``galaxy downsizing'' picture \cite[e.g.,][]{cowie96} in which brighter galaxies are observed to form earlier. If confirmed, this can be regarded as another indication of the common characteristics shared by stellar populations of GCs and halo field stars (see \S\,4). \section{COMPARISON OF GC MDFS TO THOSE OF HALO FIELD STARS AND GALAXY CHEMICAL EVOLUTION MODELS} We find that the bimodal GC color distributions commonly observed in luminous early-type galaxies are transformed into unimodal metallicity distributions that are strongly peaked with a broad metal-poor tail. Our sample includes GC systems in M87 and M84 (\S\S\,3.1) and in the ACSVCS galaxies (\S\S\,3.2). A key to comprehending the connection between GCs and halo field stars in galaxies is a direct comparison of their MDFs. This section compares the inferred GC MDFs to those of resolved field stars in nearby early-type galaxies (\S\S\,4.1 and \S\S\,4.2). Also, the GC MDFs are compared to MDFs produced by chemical evolution models of galaxies (\S\S\,4.3). \subsection{Comparison of ACSVCS GCs to Spatially-resolved Halo Field Stars in Nearby Elliptical Galaxies} The necessity of obtaining photometry of spatially-resolved field stars limits us to nearby galaxies. There are several nearby, relatively massive elliptical galaxies whose stellar MDFs have been measured, and our inferred GC MDFs can be compared directly to such stellar MDFs. Figure 5 gives a comparison of the MDFs of the ACSVCS GCs to those of resolved stars of nearby elliptical galaxies \citep{harris02,rejkuba05,harris07a,harris07b,bird10}. The stellar MDFs of individual galaxies were obtained from color-magnitude diagrams of red-giant stars whose colors are highly sensitive to their metallicities. In Figure 5, we plot the GC MDF shown in Figures 3$d$ and 4$d$ (they are identical). The data for the plot are presented in Table 5.
The figure shows that the GC MDFs are similar to the MDFs of resolved constituent stars of nearby elliptical galaxies. In particular, the typical shape of the GC MDFs (colored solid line in each panel), characterized by a sharp peak with a metal-poor tail, is remarkably consistent with those of field stars in nearby galaxies. By contrast, GC MDFs obtained using the simple linear [Fe/H] vs. $g-z$ relation (grey dotted lines) do not agree with the stellar MDFs. We note that the metallicity spread of GCs tends to be broader than that of stars. As indicated in Figures 3$c$\,--\,3$f$ and Figures 4$c$\,--\,4$f$, the GC MDFs, on the whole, are broader than the stellar MDFs. This is likely due to the fact that each color histogram of the ACSVCS consists of GCs belonging to 3 $\sim$ 20 diverse host galaxies, giving each host luminosity bin an ensemble character. Moreover, the inferred MDFs become broader as the observational uncertainty in color is propagated to metallicity space. In addition to the noticeable similarities found in the shape of MDFs between GCs and stars, they share a common feature in that their mean metallicity increases with increasing host luminosity. As a result, the peak positions of GCs and stars are roughly coincident in each luminosity bin --- at [Fe/H] $\simeq$ $-$1.0 (faint hosts) and $-$0.5 (bright hosts). This, however, should be taken with great caution because the mean colors of both GCs \citep{dirsch03,jordan04,tamura06,mglee08} and field stars \citep{harris02,rejkuba05} depend significantly on the sampled radial location in a galaxy. Their mean metallicities gradually decrease with projected radius, as seen in Figures 5$d$ and 5$e$ for the outer- and inner-halo stars of NGC 5128, respectively. Moreover, observations show that the mean GC color is bluer than stars within an elliptical galaxy as a whole \citep{peng06}.
More importantly, GCs are on average bluer than the unresolved light of the galaxies at the same radii \cite[e.g.,][]{forte81,strom81,jordan04,tamura06,mglee08,forte09}. For old stellar populations such as those in elliptical galaxies and GCs, the observed bluer colors imply lower metallicities in all stellar models, thus leading to the conclusion that GCs are typically more metal-poor than field stars in a galaxy. Hence it may just be a coincidence that the GC MDFs line up with the field star MDF of the nearest elliptical galaxy. Despite these caveats, the peak and width of the MDFs of GCs and field stars are similar enough to suggest that both were formed in the same events, which built the major part of the galaxies. It should be noted that the [Fe/H] histogram for NGC 3379 stars (Figure 5$c$), despite the overall resemblance to the compared GC MDF, shows an excess of metal-poor stars with a bump at [Fe/H] $\simeq$ $-$1.2. \citet{harris07b} found the metal-poor stellar halo in NGC 3379 and provided an explanation for why their earlier studies only detected a metal-rich component in the same galaxy. The NGC 3379 {\it HST} field is at a distance of 12 effective radii, which is more than twice as far as the equally small fields of view for NGC 3377 (Figure 5$b$) and NGC 5128 (Figures 5$d$ and 5$e$). Studies of the stellar MDFs of large galaxies have generally been performed photometrically and concentrate on the more central regions of galaxies, where the metal-rich field star population resides. This radial bias seemed to under-sample metal-poor stars in the earlier studies. With the recent evidence of metal-poor halo stars emerging at great galactocentric distances, one should not assume that the stellar MDFs are unimodal at all positions. This suggests that our interpretation is more applicable to the main, inner parts of galaxies, and less so to the remote outskirts.
\subsection{Comparison between Spatially-resolved Halo Field Stars in Nearby Elliptical Galaxies and their Own GC Systems} As illustrated by the NGC 3379 case above, each galaxy has its own evolutionary history. A more direct way to assess the similarity between GCs and stars is therefore to compare the MDFs of spatially-resolved field stars in a galaxy with that of its {\it own} GC system. There are four galaxies (M87, NGC 5128, NGC 3377, and NGC 3379) in Figure 5 for which both the GC MDF and the stellar MDF are currently available. Figure 6 gives the comparison for M87 (the top row), NGC 5128 (the second row), NGC 3377 (the third row), and NGC 3379 (the bottom row). The stellar MDFs (grey histograms) are identical to those in Figures 5$b$, 5$c$, 5$d$, and 5$f$. The left-hand panels present the inferred GC MDFs (empty histograms) based on the traditional linear color-to-metallicity transformations. By contrast, the right-hand panels show the inferred GC MDFs (empty histograms) based on the inflected relations from the YEPS model (Table 2). Firstly, for M87, we exploit $g-z$ colors from the ACSVCS \citep{peng06,jordan09} to derive the GC MDF. Since the inferred GC MDFs shown in Figure 1$h$ are based on the $u$-band limited sub-sample, the ACSVCS $g$- and $z$-band data provide a more representative GC sample. Figure 6$a$ shows that the MDF (empty histogram) for 1745 GCs obtained using the simple linear [Fe/H] vs. $g-z$ relation (shown in Figure 1$c$) exhibits a fundamental difference from the stellar MDF \citep{bird10} measured in a similar region of the same galaxy. By contrast, in Figure 6$e$, the strong bimodality is no longer present once transformed by the inflected CMR (Table 2), and the GC MDF is very similar to that of the brightest host bin in the ACSVCS. As a consequence, the MDFs of GCs and field halo stars in M87 are similar in shape and line up remarkably well with each other. 
Secondly, for NGC 5128, \citet{peng04a,peng04b} presented the CTIO Blanco 4-m $U$-, $B$-, $V$-, and $I$-band photometry of the GC system. We use the $B-I$ color distribution to derive the GC MDF because $B-I$ is a reasonable substitute for $g-z$; our result on NGC 5128 is not affected by the choice of colors. Figure 6$b$ shows the MDF (empty histogram) for 210 GCs, which exhibits strong bimodality. On the other hand, in Figure 6$f$ we converted the $B-I$ colors into [Fe/H] using the YEPS relation (Table 2). The strong bimodality seen in Figure 6$b$ is not evident in the MDF, and the strongly peaked GC MDF with a broad metal-poor tail is in reasonably good agreement in shape with the stellar MDF \citep{rejkuba05}. An unavoidable uncertainty in this comparison is that the field stars are sampled from only one or two projected locations, whereas the GCs cover the wider halo. For M87, the stars were sampled in an inner region (R $\simeq$ 10 kpc) and the GCs were in the inner halo ($\lesssim$ 10 kpc). For NGC 5128, the stars were sampled in an inner region (R $\simeq$ 20 and 30 kpc) and the GCs were in the entire halo of the galaxy. This may partly explain the similarity of the peak positions between stellar and GC MDFs for M87, and the dissimilarity for NGC 5128. A further caveat for NGC 5128 is that, although the majority of its GCs ($\sim$ 90 \%) are known to be old ($>$ 10 Gyr) \citep{beasley08}, there should be a certain portion of young GCs for which one needs to apply a different color vs. metallicity relation to derive the GC MDF. Nonetheless, despite these sources of uncertainty, it is striking that the inferred GC MDFs based on the inflected color-to-metallicity transformations give better matches with the field star MDFs of M87 and NGC 5128 than those based on the traditional linear relations. 
Thirdly, for NGC 3377, \citet{cho11} presented the {\it HST}/ACS $g$- and $z$-band photometry of the GC system as part of a deep imaging study of 10 early-type galaxies in low-density environments. Figure 6$c$ shows that the MDF for 157 GCs derived from the simple linear [Fe/H] vs. $g-z$ relation has strong bimodality. On the other hand, in Figure 6$g$ we converted the $g-z$ color into [Fe/H] using the YEPS relation. The strong bimodality seen in Figure 6$c$ is not evident in the MDF, and the sharply peaked MDF with a broad metal-poor tail is in reasonably good agreement in shape with the stellar MDF \citep{harris07a,harris07b}. Lastly, for NGC 3379, \citet{whitlock03} and \citet{rhode04} carried out wide-field photometry of GCs. They verified earlier results that its GC population is quite small. \citet{harris07b} derived the GC MDF from Rhode \& Zepf (2004)'s $B-R$ histogram for 36 GCs with an improved empirical relation, [Fe/H] = 3.13 ($B-R$)$_0$ $-$ 5.04. Figure 6$d$ shows the GC MDF, which is highly concentrated toward the metal-poor side. If divided at [Fe/H] = $-$1.2, where the metal-poor bump of the field-star MDF is located, the metal-poor GCs outnumber the metal-rich GCs by 25 to 11. As a result, for the metal-rich half of the MDF, the numbers of GCs are much too small. By contrast, Figure 6$h$ displays the GC MDFs based on the YEPS relation (Table 2). With only 36 clusters, it is not yet clear whether the MDFs of GCs and stars have the same shape. It is interesting to note, however, that the GC metallicities are now more evenly distributed, and the proportions of the two metallicity subgroups, when divided at [Fe/H] = $-$1.2, become more comparable (21\,:\,15) than those in Figure 6$d$ (25\,:\,11). 
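As a quick illustration of how such subgroup counts follow from the adopted transformation, the linear relation quoted above can be applied directly to a set of colors and the result split at [Fe/H] = $-$1.2. The sketch below uses invented $(B-R)_0$ colors purely for demonstration; only the coefficients of the Harris \etal\ (2007b) relation come from the text.

```python
# Hypothetical illustration: apply the linear empirical relation
# [Fe/H] = 3.13*(B-R)_0 - 5.04 quoted in the text to a toy color sample,
# then count GCs on either side of [Fe/H] = -1.2.
# The (B-R)_0 values below are invented, NOT the actual NGC 3379 photometry.

def feh_from_br(br0):
    """Linear color-to-metallicity transformation quoted in the text."""
    return 3.13 * br0 - 5.04

# Toy (B-R)_0 colors for demonstration only.
colors = [1.10, 1.15, 1.20, 1.30, 1.40, 1.50]
feh = [feh_from_br(c) for c in colors]

metal_poor = [m for m in feh if m < -1.2]
metal_rich = [m for m in feh if m >= -1.2]
print(len(metal_poor), len(metal_rich))
```

Because the transformation is strictly linear, any peak or gap in the color histogram maps one-to-one into metallicity space, which is why the inferred subgroup ratio changes once an inflected relation is used instead.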
Interestingly, the inferred GC MDF in Figure 6$h$ seems very similar to the NGC 3379 field-star MDF in the western half (farther from the galaxy center) of the ACS/WFC field (shaded histogram in Figure 14 of Harris \etal\ 2007b). \subsection{Comparison to Chemical Enrichment Models of Galaxies} The new development in linking the observed GC colors to their intrinsic metallicities leads to GC MDFs that are strikingly similar in shape to the MDFs of resolved field stars in nearby elliptical galaxies. Both the inferred GC MDFs and the halo stellar MDFs are characterized by a sharp peak with a metal-poor tail. In this section, we proceed to compare the inferred GC MDFs to chemical enrichment models of galaxies. In Figure 7 we compare the inferred GC MDFs for the second brightest ($M_B$ $\simeq$ $-$20.5, orange) and the brightest ($M_B$ $\simeq$ $-$21.5, red) galaxy luminosity bins with the simple closed-box model of chemical evolution \cite[e.g.,][]{pagel75}. We plotted the MDFs for this simple model using yields of [Fe/H] = $-$0.685 and $-$0.505, respectively. With the improved nonlinear relationship between color and metallicity, the general shape of the GC MDFs is in remarkable agreement with that of galaxy chemical enrichment models. In contrast, the dotted line in each panel represents the inferred GC MDF for the corresponding bin based on the simple linear fit to the data. Despite the resemblance in general shape, the inferred GC MDFs have fewer metal-poor GCs than the simple chemical model. This is akin to the ``G-dwarf problem'' in the solar neighborhood \cite[e.g.,][]{vandenbergh62,schmidt63}. We note that the width of the chemical evolution model can be changed using various kinds of gas infall and stellar feedback, and an accreting-box model of chemical evolution yields narrower distributions that provide a better match with the inferred GC MDFs. 
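The simple closed-box MDF used in this comparison can be written as $dN/d{\rm [Fe/H]} \propto (Z/p)\,e^{-Z/p}$, with $Z = 10^{\rm [Fe/H]}$ and effective yield $p$. The following sketch is an illustration only (not the code behind Figure 7; the metallicity grid and normalization are arbitrary choices): it evaluates this form for the two quoted yields and confirms that the distribution peaks at the yield value.

```python
import math

# Minimal sketch of the simple closed-box MDF, dN/d[Fe/H] ~ (Z/p)*exp(-Z/p),
# for the two yields quoted in the text, [Fe/H]_yield = -0.685 and -0.505.
# The [Fe/H] grid and (unnormalized) amplitude are illustrative choices.

def closed_box_mdf(feh, feh_yield):
    z = 10.0 ** feh          # metallicity in linear (solar) units
    p = 10.0 ** feh_yield    # effective yield in the same units
    return (z / p) * math.exp(-z / p)

grid = [-3.0 + 0.1 * i for i in range(41)]   # [Fe/H] from -3.0 to +1.0
for feh_yield in (-0.685, -0.505):
    mdf = [closed_box_mdf(f, feh_yield) for f in grid]
    peak = grid[mdf.index(max(mdf))]
    print(feh_yield, round(peak, 2))   # peak sits at the yield value
```

Because the peak tracks the yield, a higher effective yield shifts the whole model MDF toward the metal-rich side, matching the brighter-bin GC MDF; the mismatch at the metal-poor tail is the G-dwarf-like problem noted above.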
For instance, \citet{harris02} remarked that the halo of NGC 5128 ($M_B$ = $-$20.9) also suffers from the G-dwarf problem in that it lacks metal-poor stars compared to the simple model. They were able to fit the field star MDF using an accreting-box model of chemical evolution, producing a narrower distribution that is also a better match to the inferred GC MDFs in this study. \section{DISCUSSION} There is substantial evidence that GCs are the remnants of star formation events in galaxies, and are linked to the star formation, chemical enrichment, and merging histories of their parent galaxies \cite[e.g.,][]{mclaughlin99}. However, ever since {\it direct} photometry of spatially-resolved constituent stars in a dozen nearby galaxies became possible thanks to the {\it HST} and large ground-based telescopes, the discrepancy between the MDFs of GCs and field stars has remained a conundrum. If GCs mirror field stars across the galaxy histories, the MDFs of GCs and field stars should be similar. Thus, the curious disagreement in metallicity has been interpreted in the context of highly decoupled formation and evolution histories between GCs and constituent stars of their parent galaxies. Current observational data and modeling point convincingly to nonlinear CMRs, which have significant implications for the interpretation of GC color distributions. We find that the strongly peaked [Fe/H] distributions inferred from nonlinear CMRs are qualitatively similar to the MDFs of field stars in the spheroidal component of nearby galaxies and to those produced by chemical evolution models of galaxies. However, whether the inferred GC MDFs represent the intrinsic, true ones is still unproven, and so it may be partly a coincidence. Nevertheless, if the MDFs obtained using stellar population models more closely represent the true GC MDFs, then this would change much of the current thought on the formation of GC systems and their host galaxies. 
The next two sections discuss the constraints our findings place on the formation of GC systems (\S\S\,5.1) and their host galaxies (\S\S\,5.2). In Section 5.3, we present our view on the formation and evolution of GC systems and their parent galaxies. \subsection{What Do the Inferred GC MDFs Imply?} Remarkable progress has occurred over the past few decades in our understanding of extragalactic GC systems. One of the most important discoveries is that many galaxies show bimodality in their color distributions, leading to the notion that galaxies possess two distinct subpopulations of GCs. We showed, however, that the typical GC MDF shape derived from color distributions is unimodal and characterized by a sharp peak with a metal-poor tail. If confirmed, the inferred GC MDFs may appreciably reduce the need for separate formation mechanisms to explain the metal-poor and metal-rich division of GCs. We warn, though, that the sample used in this study consists of the GC systems in the Virgo galaxy cluster obtained from the {\it HST} ACS/WFC, WFPC2, and WFC3 observations. The field of view of ACS/WFC, for example, covers galaxies' haloes within R $\simeq$ 0.6, 0.8, 7.2, 8.3, 8.9, 9.4, and 10.6 $R_e$ (in $z$-band, Ferrarese \etal\ 2006) for the seven host luminosity bins (from the brightest bin to the faintest) of the ACSVCS, respectively. Nevertheless, the dearth of metal-poor GCs in the MDFs for the inner spheroids of giant ($M_B$ $\lesssim$ $-20$) elliptical galaxies and nearly the entire spheroids of normal ($-20$ $\lesssim$ $M_B$ $\lesssim$ $-15$) ellipticals suggests that the inheritance of metal-poor GCs via dissipationless accretion from dwarf satellites is less significant than previously thought. On the outskirts of giant galaxies, accretion of metal-poor GCs from low-mass satellites and/or from surrounding regions may be an important channel for a galaxy to add GCs to the metal-poor part of the GC MDFs \citep{forte82,cote98,cote02,masters10,mglee10a}. 
Recall, however, that one of the main consequences of the nonlinear metallicity-color relations is that their steepness at the metal-poor end naturally creates a blue peak of GCs in color space, which is a direct cause of the conventional subpopulation of blue GCs. Therefore, even for the outskirts of giant galaxies in cluster environments, whether accretion of metal-poor GCs is solely responsible for the blue peaks of color distributions remains an open question. The strongest evidence against accretion models is that the blue peak colors are tightly correlated with the host galaxy luminosity (Larsen \etal\ 2001; Strader, Brodie \& Forbes 2004; Peng \etal\ 2006). This relation implies that metal-poor GCs, even though they formed at very high redshift and were accreted later on, already ``knew'' which galaxy they would ultimately belong to, and thus weakens the accretion scenario for the color bimodality. Alternatively, the nonlinear metallicity-color relation scenario (Paper I) gives a cohesive explanation for the observation that the mean colors of {\it both} blue and red GCs increase progressively for more luminous host galaxies. Further evidence against the accretion model is the significant fraction of blue GCs in massive cluster galaxies in relatively lower-density regions (Peng \etal\ 2006) and in massive field galaxies in isolation (Cho \etal\ 2011). In such environments, galaxies have few neighboring lower-mass galaxies, and it would be difficult to acquire many metal-poor GCs via accretion. Therefore, the accretion process seems most important when three conditions are met: (a) the {\it outskirts} (rather than the inner, main bodies) of (b) {\it giant} (rather than dwarf) ellipticals orbited by a large number of low-mass satellites in (c) {\it cluster} (rather than isolated) environments. In this regard, wide-field, multiband studies of GC systems in cluster and field environments are clearly needed. 
Wide-field photometry of nearby {\it cluster} galaxies in CTIO 4-m $U$-band (H. Kim \etal\ 2011, in prep.) and Subaru/MOIRCS {\it NIR} (S. Kim \etal\ 2011, in prep.), and a study of massive {\it field} galaxies in {\it HST}/ACS $g$ and $z$ (Cho et al. 2011), are complete or in progress to further investigate our alternative scenario. Current hierarchical models of galaxy formation in the $\Lambda$CDM cosmology predict that several thousand small building blocks were involved in the emergence of a single massive galaxy. This is significant because such complexity may leave little room for the existence of just two GC subpopulations in each massive galaxy. Indeed, unimodal, skewed MDFs arise naturally in an aggregate of a large number of protogalactic gas clouds through virtually continuous chemical evolution over many successive rounds of star formation. The strongly peaked unimodal GC MDFs point to GC formation on a relatively short, quasi-monolithic timescale. Remarkably, the typical GC MDF shape emerges across three orders of magnitude in host galaxy mass. This suggests that the processes of GC formation and chemical enrichment are quite universal among a variety of GC systems. \subsection{Do GC Systems Trace Star Formation in Galaxies?} We also address the important issue of whether or not the formation of GCs is coupled with the bulk formation of the stellar population of host galaxies. Or, equivalently, does GC formation really mirror star formation in a galaxy? We have shown that the inferred GC MDFs agree reasonably well with the stellar MDFs of nearby galaxies and the MDFs produced by models of galaxy chemical evolution. The results suggest that the evolutionary histories of GC systems and their parent galaxies are strongly coupled, and thus share a more common origin and closer subsequent evolution than previously thought. 
An important aspect of the GC-host galaxy co-evolution issue concerns the GC-to-star offset, in the sense that GCs are on average more metal-poor than the unresolved light of an elliptical galaxy at the same radial location \cite[e.g.,][]{forte81,strom81,jordan04,tamura06,mglee08,forte09}. In light of the typical shape of GC MDFs proposed in this study, the lower mean metallicity of GCs compared to that of field stars should not be attributed to a number excess of metal-poor GCs. Instead, it is more likely that, at a given radial location, the GC MDF as a whole is shifted toward the metal-poor side, with a peak at a lower [Fe/H] value than the field-star MDF. The GC-to-star offset points to a picture in which GCs are the remnants of vigorous starburst events in the early stages of galaxy formation, and thus preferentially trace the {\it major} mode of star formation in galaxies. If so, GC formation was less prolonged than field star formation. Recent observations show that GCs were at least an order of magnitude more massive at birth than now \cite[e.g.,][]{conroy11}, and the large masses may have caused their formation process to be truncated earlier than that of stars. Consequently, the chemical enrichment of a GC system appears to have ceased somewhat earlier than that of the field stellar population in each star formation episode. We further showed a possible age difference among GC systems, in that the GC systems in fainter galaxies are on average younger. We refer to this as ``GC system downsizing.'' The GC system downsizing phenomenon appears to further support the similar nature shared by the stellar populations of GCs and field stars. 
Interestingly, recent observations reveal that the GC systems in Milky Way satellites, and some Milky Way GCs believed to have been accreted from satellite dwarf galaxies, are relatively younger than the majority of Galactic GCs (e.g., Mar{\'{\i}}n-Franch \etal\ 2009). Galaxy downsizing is generally defined by more {\it prolonged} residual star formation in fainter galaxies. The ``GC system downsizing'' proposed here, however, involves the idea that the first generation of GCs in fainter hosts was created later than its counterpart in brighter galaxies, and may provide further information on galaxy downsizing itself. Provided that GC systems have been attached to their present host galaxies from the beginning, we can speculate that not only the first GCs but also the {\it first field stars} were formed earlier in brighter galaxies than in fainter ones. That is, massive galaxies today were likely where conditions first favored star formation, suggesting a prolonged epoch of galaxy formation in the universe. Combined with the fact that GCs and field stars in brighter galaxies have higher metallicities, a picture emerges in which the formation and accompanying metal enrichment of both GCs and halo stars started earlier and proceeded more rapidly and efficiently in massive galaxies, presumably in denser environments. It is also important to note that some galaxies have substructure in their stellar MDFs and are thus more complex than a smooth transition from metal-rich dominance to metal-poor dominance with increasing radius. Examples include the Milky Way \citep{ibata01,majewski03,yanny03,ivezic08} and M31 \citep{kalirai06,koch08,mcconnachie09}. The stellar MDF of the elliptical galaxy NGC 3379 (Figure 5$c$) also shows fine substructure \citep{harris07b}. The remote outer haloes of these galaxies are inhomogeneous in surface density and show metallicity distributions different from those of the inner haloes. 
This indicates that the stellar populations in the outer haloes of galaxies are not well mixed, and in turn supports the build-up of haloes via satellite accretion \citep{forte82,cote98,cote02,masters10}, which can have mixing timescales of a few Gyr in these outer regions \citep{johnston95}. The evidence for satellite accretion indicates that the outskirts of galaxies do not all build up their stellar populations in a single way. For instance, the NGC 3379 observation fits a model in which its outskirts were formed by a combination of earlier dissipative mergers and later accretion of dwarf satellites. The remote halo of NGC 3379 was on the fringes of active, violent gas-rich mergers at the early epochs of galaxy formation, and later experienced dry accretion of stars and GCs from its metal-poor satellite dwarfs. Therefore, one should not assume that the stellar MDFs are unimodal at {\it all} positions, and our new interpretation would be more applicable to the inner, main bodies of galaxies than to their remote outskirts. \subsection{An Alternative View on the Formation and Evolution of GC Systems and their Parent Galaxies} Our results may be an important step forward in resolving the long-standing disagreement between GCs and field stars, and in reconstructing the history of GC systems and their parent galaxies. Although the sample used in this study (the {\it HST} ACS/WFC, WFPC2, and WFC3 photometry for the GC systems in the Virgo galaxy cluster) confines our discussion to $R$ $\lesssim$ $R_e$ for giant ellipticals and $\lesssim$ 10 $R_e$ for normal ellipticals, our findings suggest that GC systems and their parent galaxies have shared a more common origin than previously thought, and hence greatly simplify theories of galaxy formation. 
The star formation and accompanying chemical evolution were virtually continuous via aggregates of a large number of protoclouds and via repeated gas-rich mergers with other galaxies, leading to the unimodal, skewed MDFs of both stars and GCs. The metal enrichment of both stars and GCs proceeded more rapidly and efficiently in massive galaxies, resulting in more metal-rich stars and GCs in those galaxies. The observed radial metallicity gradients are understood if the chemical enrichment in the dense centers was more rapid and efficient than in the less-dense outskirts. The typical GC MDF shape emerges across three orders of magnitude in host galaxy mass, suggesting that the processes of GC formation and chemical enrichment are quite universal among various GC systems, at least for the inner, main spheroids of giant ellipticals and nearly the entire haloes of normal ellipticals. The histories of GCs and their host galaxies are reconstructed as follows. \begin{enumerate} \item {\bf Giant ellipticals' inner, main haloes: } The inner, main spheroids (R $\lesssim$ 1 $R_{e}$) of today's giant elliptical galaxies ($M_B$ $\lesssim$ $-$20) were first created in dense regions of the universe via dissipational mergers of a large number of protogalactic gas clouds (e.g., Searle \& Zinn 1978). Their GC systems, with MDF peaks at [Fe/H] $\gtrsim$ $-0.7$ (Figures 3 and 4), were formed together with stars in the galaxies. The GC systems from low to high metallicities (i.e., both blue and red GCs), as a whole, do not need two separate formation mechanisms. There is strong similarity between blue and red GCs in their mass function; for the less evolved high-mass part of the mass function, these are approximately power laws with indices of $-1.8$ to $-2$. Although power-law distributions are a consequence of a variety of physical processes, their nearly identical power indices may support an identical formation history for blue and red GCs. 
The chemical enrichment of the galaxies was rapid, and the difference in the formation epochs of the metal-poor and metal-rich ends of the MDFs should be small ($\lesssim$ 1 Gyr), as evidenced by observations (e.g., Jord\'an \etal\ 2002). In this view, the metal-poor GCs in massive galaxies are the first generation of GCs in the universe. \item {\bf Normal ellipticals' entire haloes: } The spheroids (out to $R\lesssim$ 10 $R_e$) of normal galaxies ($M_B$ $\gtrsim$ $-$20) were created later than those of massive ones, via self-collapse or mergers of protogalactic clouds that did not take part in massive galaxy formation early on. They were outside the tumultuous, dense regions, and have evolved rather independently of central massive galaxies. Their GC systems, with MDF peaks at [Fe/H] $\lesssim$ $-0.7$ (Figures 3 and 4), were formed together with stars in the low-mass galaxies. As a result, the first generation of both stars and GCs in low-mass galaxies is younger than that of massive galaxies, indicating a prolonged epoch of galaxy formation in the universe. This picture is supported by recent observational evidence that the GC systems in Galactic satellites, and some Galactic GCs believed to have been accreted from satellite dwarf galaxies, are relatively younger than the majority of Milky Way GCs (e.g., Mar{\'{\i}}n-Franch \etal\ 2009). \item {\bf Remote outskirts of giant ellipticals in dense environments: } The remote outer haloes of giant elliptical galaxies were on the fringes of active, violent gas-rich mergers at the early epochs of galaxy formation, and later experienced dissipationless accretion of stars and GCs from their metal-poor satellite galaxies \citep{forte82,cote98,cote02,masters10,mglee10a} and/or from the surrounding regions (Tamura \etal\ 2006a,b; Bergond \etal\ 2007; Schuberth \etal\ 2008; Lee \etal\ 2010b; West \etal\ 2011). Such GCs with an accretion origin tend to be of lower metallicity and on highly elongated, high-energy orbits. 
The GCs generally favor the extended outskirts of giant galaxies (Lee et al. 1998, 2008b; Lee 2003; Dirsch et al. 2005), although those on highly elongated orbits must penetrate into the inner bodies of galaxies. This explains the observations that the bimodality in GC colors corresponds to a change in the kinematic properties of the GCs, such that the blue GCs are dynamically hotter and there is a ``bimodal'' distribution of velocity dispersion. The demarcating radii between the inner, main part (the product of mergers) and the outskirts (the product of mergers plus accretion) are not sharp and vary from galaxy to galaxy depending on their individual histories of mergers and accretion. This is in line with the considerable diversity in the kinematics of the GC systems in giant elliptical galaxies (e.g., Hwang \etal\ 2008; Lee \etal\ 2010a). \end{enumerate} Observationally, the intimately coupled histories of the GCs and constituent field stars of elliptical galaxies are encouraging in the context of mapping their metal contents, which is one of the most important yet least known properties of external galaxies. Even with the forthcoming larger telescopes, individual stars can be spatially resolved in only a few tens of nearby galaxies \citep{tolstoy06}. Hence, we anticipate that the estimation of GC metallicities based on their colors, equipped with a proper correction for the GC-to-star offset, will become a preferred practical method in the quest to comprehend the evolution of galaxies beyond the nearby universe. Further refinement of the exact shape of the CMRs for GCs, and the resulting implications for the formation of GC systems and galaxies, should be the topic of much future work. \clearpage \acknowledgments We would like to thank the anonymous referee for useful comments and suggestions. SJY acknowledges support from Mid-career Researcher Program (No. 2009-0080851) and Basic Science Research Program (No. 
2009-0086824) through the National Research Foundation (NRF) of Korea grant funded by the Ministry of Education, Science and Technology (MEST), and from the Korea Astronomy and Space Science Institute Research Fund 2011. SJY and YWL acknowledge support by the NRF of Korea to the Center for Galaxy Evolution Research. JPB thanks the Yonsei University Center for Galaxy Evolution Research for the generous hospitality during his visit. EWP acknowledges support from the Peking University 985 fund and grant 10873001 from the National Natural Science Foundation of China. SJY would like to thank Daniel Fabricant, Charles Alcock, Jay Strader, Dongsoo Kim, Jaesub Hong for their hospitality during his stay at Harvard-Smithsonian Center for Astrophysics as a Visiting Professor in 2011--2012. \vspace{1cm}
\section{Introduction} To explain the accelerated expansion of the universe, one class of models modifies the laws of gravity so that a late-time acceleration is produced. A family of these modified gravity models is obtained by replacing the Ricci scalar $R$ in the usual Einstein-Hilbert Lagrangian density by some function $f(R)$ \cite{carro} \cite{sm}. It is well-known \cite{maeda} \cite{soko} that the gravitational field equations derived from such fourth order gravity models can be conformally transformed to an Einstein frame representation with an extra scalar degree of freedom. This scalar degree of freedom may be regarded as a manifestation of the additional degree of freedom due to the higher order of the field equations in the Jordan frame. This feature raises questions about a possible correspondence between this geometric scalar field and the usual scalar fields such as quintessence and phantom fields. Along this line, we intend in the present contribution to explore phantom behavior of some $f(R)$ gravity models in both Jordan and Einstein conformal frames. In particular, we shall show that the behavior of the scalar field attributed to $f(R)$ models is similar to that of a coupled quintessence, a quintessence which interacts with the matter sector. \\ The plan of our paper is as follows: In section 2, we consider the cosmology of a scalar field with minimal coupling to gravity. We review the observation that a quintessence field with a minimal coupling cannot lead to crossing the phantom boundary. In section 3, we consider $f(R)$ gravity in both Jordan and Einstein conformal frames. We show that the generic feature of these models is that the scalar partner of the metric tensor in the Einstein frame is similar to a quintessence rather than a phantom field. The interaction of this scalar degree of freedom with the matter sector plays a key role in crossing the phantom barrier \cite{bis}. 
We then compare the phantom behavior of some viable $f(R)$ models in the two conformal frames. Our results are summarized in section 4. \section{Minimally Coupled Scalar Field} The simplest class of models that provides a redshift dependent equation of state parameter is a scalar field $\varphi$ minimally coupled to gravity, whose dynamics is determined by a properly chosen potential function $V(\varphi)$. Such models are described by the action \footnote{We work in units in which $\hbar=c=1$ and the signature is $(-,+,+,+)$.} \begin{equation} S_{\varphi}=\frac{1}{2} \int d^4 x\sqrt{-g}~(\frac{1}{k}R-\alpha ~g^{\mu\nu}\partial_{\mu}\varphi \partial_{\nu}\varphi-2V(\varphi))+S_{m}(g_{\mu\nu}, \psi) \label{ca1}\end{equation} where $k=8\pi G$ with $G$ being the gravitational constant, $g$ is the determinant of $g_{\mu\nu}$ and $R$ is the curvature scalar. Here $S_{m}$ is the action of dark matter, which depends on the metric $g_{\mu\nu}$ and some dark matter fields $\psi$. The constant $\alpha$ can take the values $\alpha=+1, -1$, which correspond to quintessence and phantom fields, respectively. The distinguishing feature of the phantom field is that its kinetic term enters (\ref{ca1}) with opposite sign relative to the quintessence or ordinary matter. 
The action (\ref{ca1}) gives the Einstein field equations \begin{equation} G_{\mu\nu}=k (T_{\mu\nu}^{\varphi}+T_{\mu\nu}^{m}) \label{ca2}\end{equation} with \begin{equation} T_{\mu\nu}^{\varphi}=\alpha~\nabla_{\mu}\varphi \nabla_{\nu}\varphi-\frac{1}{2}\alpha~g_{\mu\nu} \nabla_{\gamma}\varphi \nabla^{\gamma}\varphi-g_{\mu\nu} V(\varphi) \label{ca3}\end{equation} Here $T_{\mu\nu}^m$ is the stress-tensor of the matter system defined by \begin{equation} T_{\mu\nu}^m=\frac{-2}{\sqrt{-g}}\frac{\delta S_{m}(g_{\mu\nu},\psi)}{\delta g^{\mu\nu}} \label{a3}\end{equation} The two stress-tensors $T_{\mu\nu}^{\varphi}$ and $T_{\mu\nu}^{m}$ are separately conserved \begin{equation} \nabla^{\mu}T_{\mu\nu}^{\varphi}=\nabla^{\mu}T_{\mu\nu}^{m}=0 \label{aa3}\end{equation} We assume a spatially flat homogeneous and isotropic cosmology described by Friedmann-Robertson-Walker (FRW) spacetime \begin{equation} ds^2=-dt^2+a^2(t)(dx^2+dy^2+dz^2) \label{aa7}\end{equation} where $a(t)$ is the scale factor. In this cosmology, the gravitational field equations (\ref{ca2}) become \begin{equation} 3H^2=k(\rho_{\varphi}+\rho_{m}) \label{a08}\end{equation} \begin{equation} 2\dot{H}=-k[(\omega_{\varphi}+1)\rho_{\varphi}+\rho_{m}] \label{a09}\end{equation} where $H\equiv \frac{\dot{a}}{a}$ is the Hubble parameter and \begin{equation} \rho_{\varphi}=\frac{1}{2}\alpha \dot{\varphi}^2+V(\varphi)~,~~~~~p_{\varphi}=\frac{1}{2}\alpha \dot{\varphi}^2-V(\varphi) \label{ca4}\end{equation} \begin{equation} \omega_{\varphi}=\frac{\frac{1}{2}\alpha \dot{\varphi}^2-V(\varphi)}{\frac{1}{2}\alpha \dot{\varphi}^2+V(\varphi)} \label{ca5}\end{equation} The conservation equations (\ref{aa3}) take the form \begin{equation} \dot{\rho}_m+3H\rho_m=0 \label{ab1}\end{equation} \begin{equation} \dot{\rho}_{\varphi}+3H(\omega_{\varphi}+1)\rho_{\varphi}=0 \label{ab2}\end{equation} In the case of a quintessence field ($\alpha=+1$) with $V(\varphi)>0$ the equation of state parameter remains in the range $-1<\omega_{\varphi}<1$. 
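Equation (\ref{ab1}) integrates to the familiar dilution law $\rho_m \propto a^{-3}$. As a numerical illustration (not part of the paper's analysis; the matter-dominated scale factor $a \propto t^{2/3}$ is an assumed example), a crude Euler integration reproduces this scaling:

```python
# Illustrative check of the matter conservation equation
# rho_dot + 3*H*rho = 0, which integrates to rho ~ a**(-3).
# We assume a(t) = t**(2/3) (matter domination) so that H = (2/3)/t,
# and compare a forward-Euler integration with the exact scaling.

def integrate_rho(rho0, t0, t1, n=100000):
    dt = (t1 - t0) / n
    rho, t = rho0, t0
    for _ in range(n):
        H = (2.0 / 3.0) / t          # H = a_dot/a for a = t**(2/3)
        rho += -3.0 * H * rho * dt   # rho_dot = -3*H*rho
        t += dt
    return rho

rho0 = 1.0
rho_num = integrate_rho(rho0, 1.0, 2.0)
rho_exact = rho0 * (2.0 ** (2.0 / 3.0)) ** (-3)   # (a1/a0)**(-3) = 1/4
print(rho_num, rho_exact)
```

The numerical and exact values agree to the accuracy of the step size, confirming that $\rho_m$ simply dilutes with the expanding volume, as used in the argument that follows.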
This is also true for a phantom field ($\alpha=-1$) with a negative potential $V(\varphi)<0$. In the limit of a small kinetic term (slow-roll potentials \cite{slow}), it approaches $\omega_{\varphi}=-1$ but does not cross this line. For $\alpha=+1$, we have \begin{equation} \rho_{\varphi}+p_{\varphi}=(\omega_{\varphi}+1)\rho_{\varphi}=\alpha \dot{\varphi}^2>0 \label{wec}\end{equation} The phantom barrier can be crossed by a phantom field ($\alpha<0$) with $V(\varphi)>0$ when $2|V(\varphi)|>\dot{\varphi}^2$. This situation corresponds to \begin{equation} \rho_{\varphi}>0~~~~~,~~~~~p_{\varphi}<0~~~~~,~~~~~V(\varphi)>0 \label{a51}\end{equation} In this case, $(\omega_{\varphi}+1)\rho_{\varphi}<0$ and for a sufficiently negative pressure $p_{\varphi}$ the equation (\ref{a09}) gives $\dot{H}>0$. Let us look at this situation in a different way. We combine (\ref{a09}) with (\ref{ab1}) and (\ref{ab2}), which leads to \begin{equation} 2\dot{H}=\frac{k}{3H}(\dot{\rho}_{\varphi}+\dot{\rho}_{m}) \label{a19}\end{equation} In an expanding universe $\dot{\rho}_m<0$. In the case of a quintessence field, $(\omega_{\varphi}+1)\rho_{\varphi}>0$, and the equation (\ref{ab2}) then implies that $\dot{\rho}_{\varphi}<0$, which leads to $\dot{H}<0$. On the other hand, for a phantom field we have $\alpha<0$ and $(\omega_{\varphi}+1)\rho_{\varphi}<0$. In this case, $\dot{\rho}_{\varphi}>0$ and for an appropriate potential function $V(\varphi)$ the first term on the right hand side of the equation (\ref{a19}) dominates, resulting in $\dot{H}>0$ and crossing of the phantom barrier. The merit of (\ref{a19}) is that it implicitly shows that there is a possibility for an interacting quintessence field to cross the phantom boundary. We will return to this issue in the next section.\\ Here it is assumed that the scalar field has a canonical kinetic term $\pm \frac{1}{2}\dot{\varphi}^2$. 
It is shown in \cite{vik} that any minimally coupled scalar field with a generalized kinetic term (k-essence Lagrangian \cite{k}) cannot lead to crossing the PDL through a stable trajectory. However, there are models that employ Lagrangians containing multiple fields \cite{multi} or scalar fields with non-minimal coupling \cite{non} which can in principle achieve crossing the barrier. \section{f(R) Gravity} The $f(R)$ gravity models are based on a Lagrangian density which depends in a nonlinear way on the curvature scalar. In these models the dynamical variable of the vacuum sector is the metric tensor and the corresponding field equations are fourth order. This dynamical variable can be replaced by a new pair which consists of a conformally rescaled metric and a scalar partner. Moreover, in terms of the new set of variables the field equations are those of General Relativity. The original set of variables is commonly called the Jordan conformal frame and the transformed set, whose dynamics is described by the Einstein field equations, is called the Einstein conformal frame.\\ In general, the mathematical equivalence of these two conformal frames does not imply a physical equivalence. The physical status of the conformal frames is an open question which has not been completely resolved yet. In fact, authors fall into different categories according to their attitude towards the issue of which frame is the physical one \cite{soko} \cite{fa}. Based on the interaction of the scalar degree of freedom with the matter sector, some authors argue that the two conformal frames are physically equivalent provided that one accepts the idea that the units of mass, length and time are varying in the Einstein frame \cite{dick}. 
According to this idea, physics must be conformally invariant and the symmetry group of gravitational theories should be enlarged to include not only the group of diffeomorphisms but also conformal transformations. There are also arguments against this idea and in favor of one of the conformal frames. In particular, some authors go beyond mere theoretical arguments and take an appropriate phenomenon to determine whether the two frames are physically equivalent \cite{20}.\\ One important issue in the context of cosmological viability of an $f(R)$ gravity model is the capability of the model to exhibit phantom behavior due to curvature corrections. We shall explore this issue in both the Jordan and the Einstein conformal frames and take it as a relevant criterion to distinguish the physical status of the two conformal frames. \subsection{Jordan Frame Representation} The action for an $f(R)$ gravity theory in the Jordan frame is given by \begin{equation} S_{JF}=\frac{1}{2 k} \int d^{4}x \sqrt{-g} f(R) +S_{m}(g_{\mu\nu}, \psi) \label{a1}\end{equation} Stability issues should be considered to make sure that an $f(R)$ model is viable \cite{st}. In particular, stability in the matter sector (the Dolgov-Kawasaki instability \cite{dk}) imposes some conditions on the functional form of $f(R)$ models. These conditions require that the first and the second derivatives of the $f(R)$ function with respect to the Ricci scalar $R$ be positive definite. The positivity of the first derivative ensures that the scalar degree of freedom is not tachyonic and positivity of the second derivative tells us that the graviton is not a ghost.\\ The field equations can be derived by varying the action with respect to the metric tensor \begin{equation} f'(R) R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}f(R)-\nabla_{\mu}\nabla_{\nu}f'(R)+g_{\mu\nu} \Box f'(R)=k T_{\mu\nu}^m \label{a2}\end{equation} where the prime denotes differentiation with respect to $R$. 
The trace of (\ref{a2}) is \begin{equation} f'(R) R-2f(R)+3\Box f'(R)=k T^{m} \label{a4}\end{equation} where $T^m=g^{\mu\nu}T^m_{\mu\nu}$. It should be noted that when $f(R)=R$, the equation (\ref{a4}) reduces to $R=-kT^m$ which is the corresponding trace equation in General Relativity. In $f(R)$ gravity, $\Box f'(R)$ does not vanish and contrary to General Relativity the Ricci scalar relates differentially to the trace of the matter system. This is an indication of the fact that one finds a larger variety of solutions in $f(R)$ modified gravity models.\\ The field equations can also be written in the following form \begin{equation} G_{\mu\nu}=k_{eff}(T_{\mu\nu}^m +T_{\mu\nu}^{DE}) \label{a5}\end{equation} where $k_{eff}=k/f'(R)$ and \begin{equation} T_{\mu\nu}^{DE}=\frac{1}{k}[\frac{1}{2}(f(R)-Rf'(R))g_{\mu\nu}+\nabla_{\mu}\nabla_{\nu}f'(R)-g_{\mu\nu}\Box f'(R)] \label{a6}\end{equation} We apply the field equations to a spatially flat FRW spacetime described by (\ref{aa7}). We assume that the matter system is described by a pressureless perfect fluid with energy density $\rho_{m}$. The field equations become \begin{equation} 3H^2=k_{eff}(\rho_m+\rho_{DE}) \label{a8}\end{equation} \begin{equation} 2\dot{H}+3H^2=-k_{eff}~p_{DE} \label{a9}\end{equation} where \begin{equation} \rho_{DE}=\frac{1}{k}[\frac{1}{2}(Rf'(R)-f(R))-3H\dot{R}f''(R)] \label{a10}\end{equation} \begin{equation} p_{DE}=\frac{1}{k}[\frac{1}{2}(f(R)-Rf'(R))+\ddot{R}f''(R)+\dot{R}^2 f'''(R)+2H \dot{R}f''(R)] \label{a11}\end{equation} Then the effective equation of state parameter is \begin{equation} \omega_{eff}=\frac{p_{DE}}{\rho_{m}+\rho_{DE}}=\frac{1}{3H^2f'(R)}[\frac{1}{2}(f(R)-R f'(R))+\ddot{R}f''(R)+\dot{R}^2 f'''(R)+2H \dot{R}f''(R)] \label{a12}\end{equation} where we have used (\ref{a8}). This expression allows us to find those $f(R)$ functions that fulfill $\omega_{eff}<-1$. 
In general, to find such $f(R)$ gravity models one may start with a particular $f(R)$ function in the action (\ref{a1}) and solve the corresponding field equations to find the form of $H(z)$. One can then use this function in (\ref{a12}) to obtain $\omega_{eff}(z)$. However, this approach is not efficient in view of the complexity of the field equations. An alternative approach is to start from the best-fit parametrization $H(z)$ obtained directly from data and use this $H(z)$ for a particular $f(R)$ function in (\ref{a12}) to find $\omega_{eff}(z)$. In this approach, one needs a consistency check to ensure that the given parametrization is compatible with the $f(R)$ function. We will follow the latter approach to find $f(R)$ models that provide crossing of the phantom barrier. We begin with the Hubble parameter $H$, whose derivative with respect to cosmic time $t$ is \begin{equation} \dot{H}=\frac{\ddot{a}}{a}-(\frac{\dot{a}}{a})^2 \label{b11}\end{equation} Combining this with the definition of the deceleration parameter \begin{equation} q(t)=-\frac{\ddot{a}}{aH^2} \label{b12}\end{equation} gives \begin{equation} \dot{H}=-(q+1)H^2 \label{bb13}\end{equation} One may use $z=\frac{a(t_{0})}{a(t)}-1$, with $z$ being the redshift, and the relation (\ref{b12}) to write (\ref{bb13}) in its integrated form \begin{equation} H(z)=H_{0}~\exp[\int_{0}^{z} (1+q(u))d\ln(1+u)] \label{bc14}\end{equation} where the subscript ``0" indicates the present value of a quantity. Now if a function $q(z)$ is given, then we can find the evolution of the Hubble parameter. Here we use a two-parameter reconstruction function characterizing $q(z)$ \cite{wang}\cite{q}, \begin{equation} q(z)=\frac{1}{2}+\frac{q_{1}z+q_{2}}{(1+z)^2} \label{bc15}\end{equation} One of the advantages of this parametrization is that $q(z)\rightarrow \frac{1}{2}$ when $z \gg 1$, which is consistent with observations. Moreover, the behavior of $q(z)$ in this parametrization is quite general. 
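As a sketch (not part of the paper's numerics), the integral representation (\ref{bc14}) with the parametrization (\ref{bc15}) can be evaluated numerically and compared against the closed-form expression $H(z)=H_{0}(1+z)^{3/2}\exp[\frac{q_{2}}{2}+\frac{q_{1}z^2-q_{2}}{2(z+1)^2}]$ obtained by carrying out the integral analytically; the $q_1$, $q_2$ values are the Gold best-fit ones quoted below, and $H_0$ is an arbitrary normalization:

```python
import math

# Check the closed form of H(z) against direct numerical evaluation of (bc14)
# with q(z) = 1/2 + (q1*z + q2)/(1+z)^2.  Only the ratio H/H0 matters here.
q1, q2, H0 = 1.47, -1.46, 1.0

def q(z):
    return 0.5 + (q1 * z + q2) / (1.0 + z) ** 2

def H_closed(z):
    return H0 * (1.0 + z) ** 1.5 * math.exp(q2 / 2.0 + (q1 * z ** 2 - q2) / (2.0 * (z + 1.0) ** 2))

def H_integral(z, n=20000):
    # trapezoidal rule for int_0^z (1 + q(u)) du / (1 + u)
    h = z / n
    s = 0.5 * ((1.0 + q(0.0)) + (1.0 + q(z)) / (1.0 + z))
    for i in range(1, n):
        u = i * h
        s += (1.0 + q(u)) / (1.0 + u)
    return H0 * math.exp(s * h)

for z in (0.5, 1.0, 2.0):
    assert abs(H_closed(z) - H_integral(z)) / H_closed(z) < 1e-6
```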
If $q_1 > 0$ and $q_2 > 0$, then there is no acceleration at all. It is also possible that the Universe has been accelerating since some time in the past. If $q_1 < 0$ and $q_2 > -1/2$, then it is possible that the Universe is currently decelerating and has undergone both acceleration and deceleration in the past. In other words, it has the same behavior as the simple three-epoch model used in \cite{wang} \cite{shap}. The values of $q_1$ and $q_2$ and the behavior of $q(z)$ can be obtained by fitting the model to the observational data. Fitting this model to the Gold data set gives $q_{1}=1.47^{+1.89}_{-1.82}$ and $q_{2}=-1.46\pm 0.43$\footnote{Some other recent samples may be found in \cite{sam}.} \cite{q}. Using this in (\ref{bc14}) yields \begin{equation} H(z)=H_{0}(1+z)^{3/2}~\exp[\frac{q_{2}}{2}+\frac{q_{1}z^2-q_{2}}{2(z+1)^2}] \label{bc16}\end{equation} For the metric (\ref{aa7}), we have $R=6(\dot{H}+2H^2)$ and therefore $\dot{R}=6(\ddot{H}+4\dot{H}H)$. In terms of the deceleration parameter, one can write \begin{equation} R=6(1-q)H^2 \label{bc17}\end{equation}and \begin{equation} \dot{R}=6H^3 \{2(q^2-1)-\frac{\dot{q}}{H}\} \label{ba18}\end{equation} \begin{equation} \ddot{R}=6H^3\{6(q^2-1)\frac{\dot{H}}{H}+4q\dot{q}-2\frac{\dot{q}\dot{H}}{H^2}-\frac{\ddot{q}}{H}\} \label{bb18}\end{equation} The latter two equations are equivalent to \begin{equation} \dot{R}=6H^3 \{2(q^2-1)+(1+z)\frac{dq}{dz}\} \label{b19}\end{equation} \begin{equation} \ddot{R}=-6(z+1)H^3\{3\frac{dH}{dz}[2(q^2-1)+(z+1)\frac{dq}{dz}]+H[(4q+1)\frac{dq}{dz}+(z+1)\frac{d^2q}{dz^2}]\} \label{bb19}\end{equation} From the equations (\ref{bc15}) and (\ref{bc16}) we can also write \begin{equation} \frac{dq}{dz}=\frac{(q_{1}-2q_{2})-q_{1}z}{(z+1)^3} \label{b20}\end{equation} \begin{equation} \frac{d^2q}{dz^2}=\frac{2q_{1}(z-2)+6q_{2}}{(z+1)^4} \label{b21}\end{equation} \begin{equation} \frac{dH}{dz}=H_{0}(1+z)^{1/2}[\frac{3}{2}+\frac{q_{1}z+q_2}{(z+1)^2}]~\exp[\frac{q_{2}}{2}+\frac{q_{1}z^2-q_{2}}{2(z+1)^2}] 
\label{b22}\end{equation} It is now possible to find $R$, $\dot{R}$ and $\ddot{R}$ in terms of the redshift, and then for a given $f(R)$ function, the relation (\ref{a12}) determines the evolution of the equation of state parameter $\omega_{eff}(z)$.\\ As an illustration we apply this procedure to some $f(R)$ functions. Let us first consider the model \cite{cap} \cite{A} \begin{equation} f(R)=R+\lambda R_0(\frac{R}{R_0})^n \label{a13}\end{equation} Here $R_{0}$ is taken to be of the order of $H_{0}^2$ and $\lambda$, $n$ are constant parameters. In terms of the values attributed to these parameters, the model (\ref{a13}) falls into three cases \cite{A}. Firstly, when $n>1$ there is a stable matter-dominated era which is not followed by an asymptotically accelerated regime. In this case, $n = 2$ corresponds to Starobinsky's inflation and the accelerated phase exists in the asymptotic past rather than in the future. Secondly, when $0<n<1$ there is a stable matter-dominated era followed by an accelerated phase only for $\lambda<0$. Finally, in the case that $n<0$ there are no accelerated and matter-dominated phases for $\lambda>0$ and $\lambda<0$, respectively. Thus the model (\ref{a13}) is cosmologically viable in the region of the parameter space given by $\lambda<0$ and $0<n<1$.\\ For the model (\ref{a13}), one can write $$ f'(R)=1+n\lambda(\frac{R}{H_0^2})^{n-1} $$ \begin{equation} f''(R)=n(n-1)\lambda H_0^{-2}(\frac{R}{H_0^2})^{n-2} \label{a14}\end{equation} $$ f'''(R)=n(n-1)(n-2)\lambda H_0^{-4}(\frac{R}{H_0^2})^{n-3} $$ Putting these results together with (\ref{bc16}), (\ref{bc17}), (\ref{b19}) and (\ref{bb19}) into (\ref{a12}) gives $\omega_{eff}(z)$. Due to the complexity of the resulting $\omega_{eff}(z)$ function, we do not explicitly write it here and only plot it in fig.1a for some parameters. The figure indicates that $\omega_{eff}(z)$ crosses the phantom boundary for some values of the parameters. 
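The derivative chain (\ref{a14}) is easy to cross-check by finite differences; the following sketch is not part of the paper's numerics ($\lambda$ and $n$ are illustrative values in the viable region $\lambda<0$, $0<n<1$, and we set $H_0=1$ so that $f(R)=R+\lambda R^{n}$):

```python
# Finite-difference check of f', f'', f''' for f(R) = R + lam * R**n (H0 = 1).
# lam, n are illustrative (viable region: lam < 0, 0 < n < 1).
lam, n = -1.0, 0.5

def f(R):
    return R + lam * R ** n

def fp(R):
    return 1.0 + n * lam * R ** (n - 1)

def fpp(R):
    return n * (n - 1) * lam * R ** (n - 2)

def fppp(R):
    return n * (n - 1) * (n - 2) * lam * R ** (n - 3)

h = 1e-6
for R in (2.0, 5.0, 12.0):
    assert abs((f(R + h) - f(R - h)) / (2 * h) - fp(R)) < 1e-7
    assert abs((fp(R + h) - fp(R - h)) / (2 * h) - fpp(R)) < 1e-7
    assert abs((fpp(R + h) - fpp(R - h)) / (2 * h) - fppp(R)) < 1e-7
```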
\\ Now we consider the model presented by Starobinsky \cite{star} \begin{equation} f(R)=R-\gamma R_{0} \{1-[1+(\frac{R}{R_{0}})^2]^{-m}\} \label{d12}\end{equation} where $\gamma$, $m$ are positive constants and $R_{0}$ is again of the order of the presently observed effective cosmological constant. For this model, one obtains $$ f'(R)=1-2m\gamma \frac{R}{H_0^2}(1+\frac{R^2}{H_0^4})^{-m-1} $$ \begin{equation} f''(R)=2m\gamma H_0^{-2}(1+\frac{R^2}{H_0^4})^{-m-1} ~[2(m+1)\frac{R^2}{H_0^4}(1+\frac{R^2}{H_0^4})^{-1}-1] \label{a15}\end{equation} $$ f'''(R)=4m(m+1)\gamma \frac{R}{H_0^6}(1+\frac{R^2}{H_0^4})^{-m-3}~[3-(2m+1)\frac{R^{2}}{H_0^4}] $$ The corresponding $\omega_{eff}(z)$ function is plotted in fig.2a. The figure indicates that Starobinsky's model also realizes crossing of the phantom boundary. \subsection{Einstein Frame Representation} Any $f(R)$ gravity model described by the action (\ref{a1}) can be recast in terms of a new set of variables \begin{equation} \bar{g}_{\mu\nu} =\Omega~ g_{\mu\nu} \label{b2}\end{equation} \begin{equation} \phi = \frac{1}{2\beta \sqrt{k}} \ln \Omega \label{b3}\end{equation} where $\Omega\equiv f^{'}(R)$ and $\beta=\sqrt{\frac{1}{6}}$. This is indeed a conformal transformation which transforms the action (\ref{a1}) to the Einstein frame \cite{maeda} \cite{soko} \begin{equation} S_{EF}=\frac{1}{2} \int d^{4}x \sqrt{-\bar{g}}~\{ \frac{1}{k} \bar{R}-\bar{g}^{\mu\nu} \partial_{\mu} \phi~ \partial_{\nu} \phi -2V(\phi)\} + S_{m}(\bar{g}_{\mu\nu} e^{2\beta \sqrt{k}\phi}, \psi) \label{b4}\end{equation} In the Einstein frame, $\phi$ is a minimally coupled scalar field with a self-interacting potential which is given by \begin{equation} V(\phi(R))=\frac{Rf'(R)-f(R)}{2 k f'^2(R)} \label{b5}\end{equation} Note that the conformal transformation induces the coupling of the scalar field $\phi$ with the matter sector. 
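The derivatives (\ref{a15}) quoted above for Starobinsky's model can be verified in the same finite-difference manner; a sketch with illustrative $\gamma$ and $m$, in units where $H_0=1$ and $R_0=H_0^2$:

```python
# Finite-difference check of the derivatives of
# f(R) = R - gam * (1 - (1 + R^2)^(-m))   (units H0 = 1, R0 = 1).
gam, m = 2.0, 1.0

def fp(R):
    return 1.0 - 2.0 * m * gam * R * (1.0 + R ** 2) ** (-m - 1)

def fpp(R):
    return 2.0 * m * gam * (1.0 + R ** 2) ** (-m - 1) * (
        2.0 * (m + 1.0) * R ** 2 / (1.0 + R ** 2) - 1.0)

def fppp(R):
    return 4.0 * m * (m + 1.0) * gam * R * (1.0 + R ** 2) ** (-m - 3) * (
        3.0 - (2.0 * m + 1.0) * R ** 2)

h = 1e-5
for R in (0.3, 1.0, 3.0):
    assert abs((fp(R + h) - fp(R - h)) / (2 * h) - fpp(R)) < 1e-6
    assert abs((fpp(R + h) - fpp(R - h)) / (2 * h) - fppp(R)) < 1e-6
```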
The strength of this coupling, $\beta$, is fixed to be $\sqrt{\frac{1}{6}}$ and is the same for all types of matter fields. \\ Variation of the action (\ref{b4}) with respect to $\bar{g}_{\mu\nu}$ and $\phi$ gives, \begin{equation} \bar{G}_{\mu\nu}=k (T^{\phi}_{\mu\nu}+\bar{T}^{m}_{\mu\nu}) \label{b6}\end{equation} \begin{equation} \bar{\Box} \phi-\frac{d V}{d \phi}=-\beta \sqrt{k}~ \bar{T}^m \label{bbb6}\end{equation} where \begin{equation} \bar{T}^{m}_{\mu\nu}=\frac{-2}{\sqrt{-\bar{g}}}\frac{\delta S_{m}(\bar{g}_{\mu\nu} e^{2\beta \sqrt{k}\phi}, \psi)}{\delta \bar{g}^{\mu\nu}}\label{b7}\end{equation} \begin{equation} T^{\phi}_{\mu\nu}=\nabla_{\mu} \phi~\nabla_{\nu} \phi -\frac{1}{2}\bar{g}_{\mu\nu} \nabla_{\gamma} \phi~\nabla^{\gamma} \phi-V(\phi) \bar{g}_{\mu\nu} \label{b8}\end{equation} Here $\bar{T}^{m}_{\mu\nu}$ and $T^{\phi}_{\mu\nu}$ are the stress-tensors of the matter system and the minimally coupled scalar field $\phi$, respectively. The trace of (\ref{b6}) is \begin{equation} \nabla^{\gamma}\phi \nabla_{\gamma}\phi+4V(\phi)-\bar{R}/k=\bar{T}^m\label{b8-1}\end{equation} which differentially relates the trace of the matter stress-tensor $\bar{T}^{m}=\bar{g}^{\mu\nu}\bar{T}^m_{\mu\nu}$ to $\bar{R}$. It is important to note that the two stress-tensors $\bar{T}^m_{\mu\nu}$ and $T^{\phi}_{\mu\nu}$ are not separately conserved. Instead they satisfy the following equations \begin{equation} \bar{\nabla}^{\mu}\bar{T}^{m}_{\mu\nu}=-\bar{\nabla}^{\mu}T^{\phi}_{\mu\nu}= \beta \sqrt{k}~\nabla_{\nu}\phi~\bar{T}^{m}\label{b13}\end{equation} We apply the field equations to a spatially flat FRW metric which in the Einstein frame is given by \begin{equation} d\bar{s}^2=-d\bar{t}^2+\bar{a}^2(t)(dx^2+dy^2+dz^2) \label{aaa7}\end{equation} where $\bar{a}=\Omega^{\frac{1}{2}}a$ and $d\bar{t}=\Omega^{\frac{1}{2}}dt$. 
To do this, we take $\bar{T}^m_{\mu\nu}$ and $T^{\phi}_{\mu\nu}$ as the stress-tensors of a pressureless perfect fluid with energy density $\bar{\rho}_{m}$, and a perfect fluid with energy density $\rho_{\phi}=\frac{1}{2}\dot{\phi}^2+V(\phi)$ and pressure $p_{\phi}=\frac{1}{2}\dot{\phi}^2-V(\phi)$, respectively. In this case, for the metric (\ref{aaa7}) the equations (\ref{b6}) and (\ref{bbb6}) take the form \footnote{Hereafter we will use unbarred characters in the Einstein frame.} \begin{equation} 3H^2=k(\rho_{\phi}+\rho_{m}) \label{b14}\end{equation} \begin{equation} 2\dot{H}=-k[(\omega_{\phi}+1)\rho_{\phi}+\rho_m] \label{b14-1}\end{equation} \begin{equation} \ddot{\phi}+3H\dot{\phi}+\frac{dV(\phi)}{d\phi}=-\beta \sqrt{k}~\rho_{m} \label{b15}\end{equation} The trace equation (\ref{b8-1}) and the conservation equations (\ref{b13}) give, respectively, \begin{equation} \dot{\phi}^2+R/k-4V(\phi)=\rho_{m} \label{b16}\end{equation} \begin{equation} \dot{\rho}_{m}+3H\rho_{m}=Q \label{b17}\end{equation} \begin{equation} \dot{\rho}_{\phi}+3H(\omega_{\phi}+1)\rho_{\phi}=-Q \label{b18}\end{equation} where \begin{equation} Q=\beta \sqrt{k} \dot{\phi}\rho_{m} \label{b-18}\end{equation} is the interaction term. This term vanishes only for $\phi=const.$, which due to (\ref{b3}) corresponds to the case that $f(R)$ depends linearly on $R$. The direction of energy transfer depends on the sign of $Q$ or $\dot{\phi}$. For $\dot{\phi}>0$, the energy transfer is from dark energy to dark matter and for $\dot{\phi}<0$ the reverse is true.\\ The formulation of an $f(R)$ theory in the Einstein frame indicates that such a modified gravity model has one extra scalar degree of freedom compared with General Relativity. This suggests that it is this scalar degree of freedom which drives late-time acceleration or crossing of the PDL in a cosmologically viable $f(R)$ gravity. This raises questions about the role of this scalar degree of freedom as quintessence or phantom. 
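A short consistency sketch (random illustrative values, not from the analysis): the interaction term $Q$ of (\ref{b-18}) transfers energy between the two components in (\ref{b17}) and (\ref{b18}) but cancels in their sum, so the total energy density still satisfies the usual continuity equation:

```python
import math
import random

# Q = beta*sqrt(k)*phidot*rho_m appears with opposite signs in the two
# conservation equations, so it drops out of the total energy balance.
random.seed(0)
beta, k = math.sqrt(1.0 / 6.0), 1.0
for _ in range(100):
    H, rho_m, rho_phi, w_phi, phidot = (random.uniform(0.1, 2.0) for _ in range(5))
    Q = beta * math.sqrt(k) * phidot * rho_m
    rhodot_m = -3.0 * H * rho_m + Q                       # equation (b17)
    rhodot_phi = -3.0 * H * (w_phi + 1.0) * rho_phi - Q   # equation (b18)
    # total obeys d(rho_tot)/dt + 3H*(rho_m + (1 + w_phi)*rho_phi) = 0
    assert abs(rhodot_m + rhodot_phi
               + 3.0 * H * (rho_m + (w_phi + 1.0) * rho_phi)) < 1e-12
```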
Comparing the sign of the kinetic terms of the scalar field in the actions (\ref{b4}) and (\ref{ca1}) immediately reveals that this scalar degree of freedom actually appears as a quintessence rather than a phantom field. It is well-known that a minimally coupled quintessence field may lead to an accelerated expansion but it cannot lead to crossing the PDL. This may lead one to conclude that crossing the PDL cannot take place in models such as (\ref{b4}). However, there is a basic difference between the two actions (\ref{ca1}) and (\ref{b4}). In the former there is no interaction between the two fluids while in the latter the scalar field $\phi$ interacts with the matter sector. In the following we shall show that this interaction is actually responsible for a possible crossing of the PDL in a viable $f(R)$ model.\\ To do this, we first combine (\ref{b17}) and (\ref{b18}) with (\ref{b14-1}) to obtain \begin{equation} 2\dot{H}=\frac{k}{3H}(\dot{\rho}_{\phi}+\dot{\rho}_{m}) \label{d1}\end{equation} Apart from some similarities between the latter and the equation (\ref{a19}), there is an important difference between the two equations concerning the interaction between the scalar field $\phi$ and the matter part. In general, we can consider two different cases \footnote{The case of $Q=0$ corresponds to $\Lambda$CDM. In this case, the equation (\ref{b17}) states that $\rho_{m}$ always decreases with expansion of the universe.} :\\ a) Firstly, $Q>0$, which corresponds to energy transfer from dark energy (or the scalar degree of freedom $\phi$) to (dark) matter. Due to the fact that $\phi$ appears in our case as a quintessence, we always have $\rho_{\phi}(\omega_{\phi}+1)>0$ for both signs of $Q$. Thus in the case that $Q>0$, $\dot{\rho}_{\phi}$ is always negative in an expanding universe ($H>0$). From (\ref{b17}) one infers that $\dot{\rho}_m$ can take both positive and negative signs depending on the relative magnitudes of $Q$ and $3H\rho_m$. 
The equation (\ref{d1}) implies that when $\dot{\rho}_m>0$ one can consider the possibility that $\dot{H}>0$, and hence crossing of the phantom barrier.\\ b) Secondly, $Q<0$ corresponds to a reversed sign of energy transfer. In this case $\dot{\rho}_m$ is definitely negative in an expanding universe. Instead, $\dot{\rho}_{\phi}$ can be negative or positive depending on the relative magnitudes of $\rho_{\phi}(\omega_{\phi}+1)$ and $Q$. The case that $\dot{\rho}_{\phi}>0$ may lead the accelerating expansion to cross the PDL.\\ This argument emphasizes the role of the interaction in the sign of $\dot{H}$ even though it does not appear explicitly in the equation (\ref{d1}). The effective equation of state parameter is defined by \begin{equation} \omega_{eff}=\frac{p_{eff}}{\rho_{eff}}=\frac{\omega_{\phi}\rho_{\phi}}{\rho_{\phi}+\rho_m} \label{d2}\end{equation} We may write it in a more appropriate form. To do this, we can use (\ref{b14}), (\ref{b17}) and (\ref{b18}), which leads to \begin{equation} \omega_{eff}=-1-\frac{k}{9H^3}(\dot{\rho}_{\phi}+\dot{\rho}_{m})=-1+\frac{k}{3H^2}(\frac{3}{k}H^2+\omega_{\phi}\rho_{\phi}) \label{d4}\end{equation} Using $p_{\phi}=\omega_{\phi}\rho_{\phi}$, the latter takes the form \begin{equation} \omega_{eff}=-1+\frac{k}{3H^2}(\frac{3}{k}H^2+\frac{1}{2}\dot{\phi}^2-V(\phi)) \label{d5}\end{equation} Substituting (\ref{b3}) and (\ref{b5}) into this equation, we obtain \begin{equation} \omega_{eff}=-1+\frac{1}{3H^2}[3H^2-\frac{R}{2f'(R)}+\frac{f(R)}{2f'^2(R)}+\frac{3}{4}(\frac{f''(R)}{f'(R)})^2\dot{R}^2] \label{d6}\end{equation} It is now possible to find the evolution of the equation of state parameter $\omega_{eff}(z)$ with the same procedure used in section 3.1. In fig.1b and fig.2b, the resulting $\omega_{eff}(z)$ is plotted for the models (\ref{a13}) and (\ref{d12}). Both figures indicate that there are regions in the parameter spaces for which crossing of the phantom barrier is allowed. 
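The second equality in (\ref{d4}) rests only on the Friedmann constraint (\ref{b14}); its equivalence with the definition (\ref{d2}) can be checked with random values (a sketch, values illustrative):

```python
import random

# With 3H^2 = k*(rho_phi + rho_m), the definition (d2) and the rewritten
# form in (d4) give the same omega_eff.
random.seed(1)
k = 1.0
for _ in range(100):
    rho_phi = random.uniform(0.1, 5.0)
    rho_m = random.uniform(0.1, 5.0)
    w_phi = random.uniform(-0.99, 0.99)
    H2 = k * (rho_phi + rho_m) / 3.0            # Friedmann constraint (b14)
    w_d2 = w_phi * rho_phi / (rho_phi + rho_m)  # definition (d2)
    w_d4 = -1.0 + (k / (3.0 * H2)) * (3.0 * H2 / k + w_phi * rho_phi)
    assert abs(w_d2 - w_d4) < 1e-12
```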
\section{Conclusion} We have studied phantom behavior for some $f(R)$ gravity models in both the Jordan and the Einstein conformal frames. In the Jordan frame, we have used the reconstruction function $q(z)$ fitted to the Gold data set to find the evolution of the equation of state parameter for some cosmologically viable $f(R)$ models. Both models (\ref{a13}) and (\ref{d12}) indicate phantom behavior in some regions of the parameter space. \\ In the Einstein frame, the scalar partner of the metric tensor is separated. Comparing this scalar field with the usual notion of a phantom field, we have made the observation that the former appears as a \emph{quintessence} with a minimal coupling to gravity. However, this quintessence field interacts with the matter sector, which allows energy transfer between the two components. This interaction plays a key role in the cosmological behavior of a particular $f(R)$ model in the Einstein conformal frame. Then we have used the same reconstruction function for $q(z)$ to study the evolution of the effective equation of state parameter. We have found that both models (\ref{a13}) and (\ref{d12}) can display crossing of the phantom boundary in the same regions of the parameter spaces that are used in the Jordan conformal frame.\\ To compare the behavior of $\omega_{eff}(z)$ for each model in the two conformal frames, this function is plotted for a common set of parameters. From fig.1 and fig.2, one can see that the overall behaviors of the equation of state parameters are very similar. In particular, both models display crossing of the phantom boundary at nearly the same epoch in both conformal frames. Despite this similarity, the details of the equation of state parameters in the two frames are not exactly the same. This result stands against the idea (which is partly presented in \cite{fla}) that considers both conformal frames to have the same physical status.
\section*{Supplementary Material} \setlength{\tabcolsep}{6pt} \begin{table} \caption{Northern and Southern catalogs used in the \emph{a priori} defined source-list searches. For each source: equatorial coordinates (J2000) from 4FGL are given with the likelihood search results: best-fit number of astrophysical neutrino events $\hat{n}_s$, best-fit astrophysical power-law spectral index $\hat{\gamma}$, local pre-trial p-value -$\log_{10}(p_{local})$, 90\% CL astrophysical flux upper-limit ($\phi_{90\%}$). The neutrino 90\% CL flux upper-limit ($\phi_{90\%}$) is parametrized as: $\frac{dN_{\nu_\mu+\bar{\nu}_\mu}}{dE_\nu}=\phi_{90\%}\cdot\Big(\frac{E_\nu}{TeV}\Big)^{-2}\times10^{-13}\text{TeV}^{-1}\text{cm}^{-2}\text{s}^{-1}$. The four most significant sources with pre-trial p-values less than 0.01 are highlighted in \textbf{bold}. The sources are divided into Northern and Southern catalogs with a boundary at -5$^\circ$ in declination. \label{tab:srclist}} \begin{tabular}[c]{| c c c c c c c c|} \hline \multicolumn{8}{ c }{Source List Results}\\ \hline Name & Class & $\alpha\,[\mathrm{deg}]$ & $\delta\,[\mathrm{deg}]$ & $\hat{n}_s$ & $\hat{\gamma}$ & -$\log_{10}(p_{local})$ & $\phi_{90\%}$\\ \hline PKS 2320-035 & FSRQ & 350.88 & -3.29 & 4.8 & 3.6 & 0.45 & 3.3 \\ 3C 454.3 & FSRQ & 343.50 & 16.15 & 5.4 & 2.2 & 0.62 & 5.1 \\ TXS 2241+406 & FSRQ & 341.06 & 40.96 & 3.8 & 3.8 & 0.42 & 5.6 \\ RGB J2243+203 & BLL & 340.99 & 20.36 & 0.0 & 3.0 & 0.33 & 3.1 \\ CTA 102 & FSRQ & 338.15 & 11.73 & 0.0 & 2.7 & 0.30 & 2.8 \\ BL Lac & BLL & 330.69 & 42.28 & 0.0 & 2.7 & 0.31 & 4.9 \\ OX 169 & FSRQ & 325.89 & 17.73 & 2.0 & 1.7 & 0.69 & 5.1 \\ B2 2114+33 & BLL & 319.06 & 33.66 & 0.0 & 3.0 & 0.30 & 3.9 \\ PKS 2032+107 & FSRQ & 308.85 & 10.94 & 0.0 & 2.4 & 0.33 & 3.2 \\ 2HWC J2031+415 & GAL & 307.93 & 41.51 & 13.4 & 3.8 & 0.97 & 9.2 \\ Gamma Cygni & GAL & 305.56 & 40.26 & 7.4 & 3.7 & 0.59 & 6.9 \\ MGRO J2019+37 & GAL & 304.85 & 36.80 & 0.0 & 3.1 & 0.33 & 4.0 \\ MG2 J201534+3710 & FSRQ & 
303.92 & 37.19 & 4.4 & 4.0 & 0.40 & 5.6 \\ MG4 J200112+4352 & BLL & 300.30 & 43.89 & 6.1 & 2.3 & 0.67 & 7.8 \\ 1ES 1959+650 & BLL & 300.01 & 65.15 & 12.6 & 3.3 & 0.77 & 12.3 \\ 1RXS J194246.3+1 & BLL & 295.70 & 10.56 & 0.0 & 2.7 & 0.33 & 2.6 \\ RX J1931.1+0937 & BLL & 292.78 & 9.63 & 0.0 & 2.9 & 0.29 & 2.8 \\ NVSS J190836-012 & UNIDB & 287.20 & -1.53 & 0.0 & 2.9 & 0.22 & 2.3 \\ MGRO J1908+06 & GAL & 287.17 & 6.18 & 4.2 & 2.0 & 1.42 & 5.7 \\ TXS 1902+556 & BLL & 285.80 & 55.68 & 11.7 & 4.0 & 0.85 & 9.9 \\ HESS J1857+026 & GAL & 284.30 & 2.67 & 7.4 & 3.1 & 0.53 & 3.5 \\ GRS 1285.0 & UNIDB & 283.15 & 0.69 & 1.7 & 3.8 & 0.27 & 2.3 \\ HESS J1852-000 & GAL & 283.00 & 0.00 & 3.3 & 3.7 & 0.38 & 2.6 \\ HESS J1849-000 & GAL & 282.26 & -0.02 & 0.0 & 3.0 & 0.28 & 2.2 \\ HESS J1843-033 & GAL & 280.75 & -3.30 & 0.0 & 2.8 & 0.31 & 2.5 \\ OT 081 & BLL & 267.87 & 9.65 & 12.2 & 3.2 & 0.73 & 4.8 \\ S4 1749+70 & BLL & 267.15 & 70.10 & 0.0 & 2.5 & 0.37 & 8.0 \\ 1H 1720+117 & BLL & 261.27 & 11.88 & 0.0 & 2.7 & 0.30 & 3.2 \\ PKS 1717+177 & BLL & 259.81 & 17.75 & 19.8 & 3.6 & 1.32 & 7.3 \\ Mkn 501 & BLL & 253.47 & 39.76 & 10.3 & 4.0 & 0.61 & 7.3 \\ 4C +38.41 & FSRQ & 248.82 & 38.14 & 4.2 & 2.3 & 0.66 & 7.0 \\ PG 1553+113 & BLL & 238.93 & 11.19 & 0.0 & 2.8 & 0.32 & 3.2 \\ \textbf{GB6 J1542+6129} & \textbf{BLL} & \textbf{235.75} & \textbf{61.50} & \textbf{29.7} & \textbf{3.0} & \textbf{2.74} & \textbf{22.0} \\ B2 1520+31 & FSRQ & 230.55 & 31.74 & 7.1 & 2.4 & 0.83 & 7.3 \\ PKS 1502+036 & AGN & 226.26 & 3.44 & 0.0 & 2.7 & 0.28 & 2.9 \\ PKS 1502+106 & FSRQ & 226.10 & 10.50 & 0.0 & 3.0 & 0.33 & 2.6 \\ PKS 1441+25 & FSRQ & 220.99 & 25.03 & 7.5 & 2.4 & 0.94 & 7.3 \\ \textbf{PKS 1424+240} & \textbf{BLL} & \textbf{216.76} & \textbf{23.80} & \textbf{41.5} & \textbf{3.9} & \textbf{2.80} & \textbf{12.3} \\ NVSS J141826-023 & BLL & 214.61 & -2.56 & 0.0 & 3.0 & 0.25 & 2.0 \\ B3 1343+451 & FSRQ & 206.40 & 44.88 & 0.0 & 2.8 & 0.32 & 5.0 \\ S4 1250+53 & BLL & 193.31 & 53.02 & 2.2 & 2.5 & 0.39 & 5.9 \\ PG 
1246+586 & BLL & 192.08 & 58.34 & 0.0 & 2.8 & 0.35 & 6.4 \\ MG1 J123931+0443 & FSRQ & 189.89 & 4.73 & 0.0 & 2.6 & 0.28 & 2.4 \\ M 87 & AGN & 187.71 & 12.39 & 0.0 & 2.8 & 0.29 & 3.1 \\ ON 246 & BLL & 187.56 & 25.30 & 0.9 & 1.7 & 0.37 & 4.2 \\ 3C 273 & FSRQ & 187.27 & 2.04 & 0.0 & 3.0 & 0.28 & 1.9 \\ 4C +21.35 & FSRQ & 186.23 & 21.38 & 0.0 & 2.6 & 0.32 & 3.5 \\ W Comae & BLL & 185.38 & 28.24 & 0.0 & 3.0 & 0.32 & 3.7 \\ PG 1218+304 & BLL & 185.34 & 30.17 & 11.1 & 3.9 & 0.70 & 6.7 \\ PKS 1216-010 & BLL & 184.64 & -1.33 & 6.9 & 4.0 & 0.45 & 3.1 \\ B2 1215+30 & BLL & 184.48 & 30.12 & 18.6 & 3.4 & 1.09 & 8.5 \\ Ton 599 & FSRQ & 179.88 & 29.24 & 0.0 & 2.2 & 0.29 & 4.5 \\ \hline \end{tabular} \end{table} \begin{table} \begin{tabular}[c]{| c c c c c c c c|} \hline Name & Class & $\alpha\,[\mathrm{deg}]$ & $\delta\,[\mathrm{deg}]$ & $\hat{n}_s$ & $\hat{\gamma}$ & -$\log_{10}(p_{local})$ & $\phi_{90\%}$\\ \hline PKS B1130+008 & BLL & 173.20 & 0.58 & 15.8 & 4.0 & 0.96 & 4.4 \\ Mkn 421 & BLL & 166.12 & 38.21 & 2.1 & 1.9 & 0.38 & 5.3 \\ 4C +01.28 & BLL & 164.61 & 1.56 & 0.0 & 2.9 & 0.26 & 2.4 \\ 1H 1013+498 & BLL & 153.77 & 49.43 & 0.0 & 2.6 & 0.29 & 4.5 \\ 4C +55.17 & FSRQ & 149.42 & 55.38 & 11.9 & 3.3 & 1.02 & 10.6 \\ M 82 & SBG & 148.95 & 69.67 & 0.0 & 2.6 & 0.36 & 8.8 \\ PMN J0948+0022 & AGN & 147.24 & 0.37 & 9.3 & 4.0 & 0.76 & 3.9 \\ OJ 287 & BLL & 133.71 & 20.12 & 0.0 & 2.6 & 0.32 & 3.5 \\ PKS 0829+046 & BLL & 127.97 & 4.49 & 0.0 & 2.9 & 0.28 & 2.1 \\ S4 0814+42 & BLL & 124.56 & 42.38 & 0.0 & 2.3 & 0.30 & 4.9 \\ OJ 014 & BLL & 122.87 & 1.78 & 16.1 & 4.0 & 0.99 & 4.4 \\ 1ES 0806+524 & BLL & 122.46 & 52.31 & 0.0 & 2.8 & 0.31 & 4.7 \\ PKS 0736+01 & FSRQ & 114.82 & 1.62 & 0.0 & 2.8 & 0.26 & 2.4 \\ PKS 0735+17 & BLL & 114.54 & 17.71 & 0.0 & 2.8 & 0.30 & 3.5 \\ 4C +14.23 & FSRQ & 111.33 & 14.42 & 8.5 & 2.9 & 0.60 & 4.8 \\ S5 0716+71 & BLL & 110.49 & 71.34 & 0.0 & 2.5 & 0.38 & 7.4 \\ PSR B0656+14 & GAL & 104.95 & 14.24 & 8.4 & 4.0 & 0.51 & 4.4 \\ 1ES 0647+250 & BLL & 102.70 & 
25.06 & 0.0 & 2.9 & 0.27 & 3.0 \\ B3 0609+413 & BLL & 93.22 & 41.37 & 1.8 & 1.7 & 0.42 & 5.3 \\ Crab nebula & GAL & 83.63 & 22.01 & 1.1 & 2.2 & 0.31 & 3.7 \\ OG +050 & FSRQ & 83.18 & 7.55 & 0.0 & 3.2 & 0.28 & 2.9 \\ TXS 0518+211 & BLL & 80.44 & 21.21 & 15.7 & 3.8 & 0.92 & 6.6 \\ \textbf{TXS 0506+056} & \textbf{BLL} & \textbf{77.35} & \textbf{5.70} & \textbf{12.3} & \textbf{2.1} & \textbf{3.72} & \textbf{10.1} \\ PKS 0502+049 & FSRQ & 76.34 & 5.00 & 11.2 & 3.0 & 0.66 & 4.1 \\ S3 0458-02 & FSRQ & 75.30 & -1.97 & 5.5 & 4.0 & 0.33 & 2.7 \\ PKS 0440-00 & FSRQ & 70.66 & -0.29 & 7.6 & 3.9 & 0.46 & 3.1 \\ MG2 J043337+2905 & BLL & 68.41 & 29.10 & 0.0 & 2.7 & 0.28 & 4.5 \\ PKS 0422+00 & BLL & 66.19 & 0.60 & 0.0 & 2.9 & 0.27 & 2.3 \\ PKS 0420-01 & FSRQ & 65.83 & -1.33 & 9.3 & 4.0 & 0.52 & 3.4 \\ PKS 0336-01 & FSRQ & 54.88 & -1.77 & 15.5 & 4.0 & 0.99 & 4.4 \\ NGC 1275 & AGN & 49.96 & 41.51 & 3.6 & 3.1 & 0.41 & 5.5 \\ \textbf{NGC 1068} & \textbf{SBG} & \textbf{40.67} & \textbf{-0.01} & \textbf{50.4} & \textbf{3.2} & \textbf{4.74} & \textbf{10.5} \\ PKS 0235+164 & BLL & 39.67 & 16.62 & 0.0 & 3.0 & 0.28 & 3.1 \\ 4C +28.07 & FSRQ & 39.48 & 28.80 & 0.0 & 2.8 & 0.30 & 3.6 \\ 3C 66A & BLL & 35.67 & 43.04 & 0.0 & 2.8 & 0.30 & 3.9 \\ B2 0218+357 & FSRQ & 35.28 & 35.94 & 0.0 & 3.1 & 0.33 & 4.3 \\ PKS 0215+015 & FSRQ & 34.46 & 1.74 & 0.0 & 3.2 & 0.27 & 2.3 \\ MG1 J021114+1051 & BLL & 32.81 & 10.86 & 1.6 & 1.7 & 0.43 & 3.5 \\ TXS 0141+268 & BLL & 26.15 & 27.09 & 0.0 & 2.5 & 0.31 & 3.5 \\ B3 0133+388 & BLL & 24.14 & 39.10 & 0.0 & 2.6 & 0.28 & 4.1 \\ NGC 598 & SBG & 23.52 & 30.62 & 11.4 & 4.0 & 0.63 & 6.3 \\ S2 0109+22 & BLL & 18.03 & 22.75 & 2.0 & 3.1 & 0.30 & 3.7 \\ 4C +01.02 & FSRQ & 17.16 & 1.59 & 0.0 & 3.0 & 0.26 & 2.4 \\ M 31 & SBG & 10.82 & 41.24 & 11.0 & 4.0 & 1.09 & 9.6 \\ PKS 0019+058 & BLL & 5.64 & 6.14 & 0.0 & 2.9 & 0.29 & 2.4 \\ \hline \hline PKS 2233-148 & BLL & 339.14 & -14.56 & 5.3 & 2.8 & 1.26 & 21.4 \\ HESS J1841-055 & GAL & 280.23 & -5.55 & 3.6 & 4.0 & 0.55 & 4.8 \\ HESS 
J1837-069 & GAL & 279.43 & -6.93 & 0.0 & 2.8 & 0.30 & 4.0 \\ PKS 1510-089 & FSRQ & 228.21 & -9.10 & 0.1 & 1.7 & 0.41 & 7.1 \\ PKS 1329-049 & FSRQ & 203.02 & -5.16 & 6.1 & 2.7 & 0.77 & 5.1 \\ NGC 4945 & SBG & 196.36 & -49.47 & 0.3 & 2.6 & 0.31 & 50.2 \\ 3C 279 & FSRQ & 194.04 & -5.79 & 0.3 & 2.4 & 0.20 & 2.7 \\ PKS 0805-07 & FSRQ & 122.07 & -7.86 & 0.0 & 2.7 & 0.31 & 4.7 \\ PKS 0727-11 & FSRQ & 112.58 & -11.69 & 1.9 & 3.5 & 0.59 & 11.4 \\ LMC & SBG & 80.00 & -68.75 & 0.0 & 3.1 & 0.36 & 41.1 \\ SMC & SBG & 14.50 & -72.75 & 0.0 & 2.4 & 0.37 & 44.1 \\ PKS 0048-09 & BLL & 12.68 & -9.49 & 3.9 & 3.3 & 0.87 & 10.0 \\ NGC 253 & SBG & 11.90 & -25.29 & 3.0 & 4.0 & 0.75 & 37.7 \\ \hline \hline \end{tabular} \end{table} The effective area for this search quantifies the efficiency of the analysis cuts and the detector response for observing an astrophysical neutrino flux, as a function of energy and declination. The expected rate of muon neutrinos and anti-neutrinos ($\frac{dN_{\nu+\bar{\nu}}}{dt}$) from a point-like source at declination $\delta$ with a flux ($\phi_{\nu+\bar{\nu}}$) as a function of neutrino energy ($E_\nu$) is: \begin{equation} \frac{dN_{\nu+\bar{\nu}}}{dt}=\int_0^\infty A_{eff}^{\nu+\bar{\nu}}(E_\nu, \delta)\times \phi_{\nu+\bar{\nu}}(E_\nu)\,\mathrm{d}E_\nu \,. \end{equation} The resulting effective area for the IC86 2012-2018 event selection is shown in Fig.~\ref{fig:enPDF} as a function of simulated neutrino energy in declination bins. The combination of the effective area, the angular resolution shown in Fig.~\ref{fig:psf}, and the background data rate determines the analysis sensitivity to a point-like neutrino source. The updated event selection is used to scan each hemisphere for the single most significant point-like neutrino source and, in addition, to examine individual sources observed in $\gamma$-rays via the analyses described above. The result of the all-sky scan is discussed above and can be seen in Fig.~\ref{fig:skymap}.
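As an illustration of how this rate integral can be evaluated in practice, the sketch below numerically integrates a toy effective area against an assumed power-law flux on a logarithmic energy grid. Both model functions are hypothetical stand-ins, not the tabulated IceCube response:

```python
import numpy as np

def effective_area_cm2(e_gev):
    """Toy effective area rising with energy (hypothetical stand-in for
    the tabulated A_eff(E_nu, dec) of the event selection)."""
    return 1e2 * (e_gev / 1e3) ** 0.8

def flux(e_gev, phi0=1e-12, gamma=2.0):
    """Assumed power-law flux dN/dE in GeV^-1 cm^-2 s^-1, normalized at 1 TeV."""
    return phi0 * (e_gev / 1e3) ** (-gamma)

# Trapezoidal integration of dN/dt = \int A_eff(E) * phi(E) dE on a log grid.
e_grid = np.logspace(2, 8, 601)  # 100 GeV .. 100 PeV
integrand = effective_area_cm2(e_grid) * flux(e_grid)
rate_per_s = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_grid)))
events_per_year = rate_per_s * 86400.0 * 365.25
```

With these toy inputs the integrand falls with energy, so the rate is dominated by the lowest energies; for a real source the declination-dependent tabulated effective area would replace the toy function.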
The details of the source list and the individual results from examining each of the sources in the Northern and Southern catalogs (divided at a declination of $-5^\circ$) can be seen in Table~\ref{tab:srclist}, where the best-fit number of astrophysical neutrino events $\hat{n}_s$ is constrained to be $\geq0$. For sources where $\hat{n}_s=0$, the 90\% C.L. median sensitivity was used in place of an upper limit. \begin{figure} \centering \includegraphics[width=0.44\textwidth]{images/IC86II_enDist.pdf} \includegraphics[width=0.48\textwidth]{images/IC86II_effA.pdf} \caption{\textit{Left:} The 2D distribution of events in one year of data for the final event selection as a function of reconstructed declination and estimated energy. The 90\% energy ranges for the data (black) and for simulated astrophysical signal Monte Carlo (MC) with $E^{-2}$ and $E^{-3}$ spectra (magenta and orange, respectively) are shown as a guide to the relevant energy range of IceCube. \textit{Right:} The effective area as a function of neutrino energy for the IC86 2012-2018 event selection, averaged across the declination band, for several declination bins, using simulated data.} \label{fig:enPDF} \end{figure} The most significant excess from the Northern catalog was found in the direction of NGC 1068. Figure~\ref{fig:angErr} shows the distribution of observed events as a function of their distance from the 3FGL coordinates of NGC 1068 (blue) or their estimated angular error (orange). Both distributions are weighted by their signal-over-background likelihood for a point-like source hypothesis in the direction of NGC 1068 and the best-fit spectral shape of $E^{-3.2}$. A minimum angular uncertainty of $0.2^\circ$ is applied because the angular uncertainty $\sigma$ estimated for each event individually does not include systematic uncertainties.
It was verified that setting a minimum value up to $0.9^\circ$ does not significantly affect the result in the direction of NGC 1068, as most events contributing to the excess are reconstructed within $\sim1^\circ$ of the \textit{Fermi}-LAT NGC 1068 coordinates. \begin{figure}[ht] \includegraphics[width=0.65\textwidth]{images/TenYr_allskyscan_hotspots.pdf} \caption{Skymap of -$\log_{10}(p_{local})$, where $p_{local}$ is the local pre-trial p-value, for the sky between $\pm82^\circ$ declination in equatorial coordinates. The Northern and Southern hemisphere hotspots, defined as the most significant $p_{local}$ in each hemisphere, are indicated with black circles. } \label{fig:skymap} \end{figure} \begin{figure}[ht] \includegraphics[width=0.35\textwidth]{images/NGC1068_angular_dist.pdf} \caption{Distribution of the reconstructed angular uncertainty of the events (paraboloid~\cite{Neunhoffer:2004ha} $\sigma$, in orange) and of the angular distance between NGC 1068 and each event ($\Delta\Psi$, in blue), both weighted by their signal-over-background likelihood for a point-like source hypothesis in the direction of NGC 1068 and the best-fit spectral shape of $E^{-3.2}$. } \label{fig:angErr} \end{figure} Finally, to provide more context for this result, we show in Fig.~\ref{fig:ngc1068_mwl} the reconstructed muon-neutrino spectrum with its large uncertainty, compared to gamma-ray data from 7.5 yr of \textit{Fermi}-LAT observations and to upper limits obtained from 125 hrs of MAGIC observations and about 4 hrs of H.E.S.S. observations \cite{Acciari:2019raw,Aharonian:2005ar,2012ApJ...755..164A}. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{images/Northern_pop_unblinded.pdf} \includegraphics[width=0.35\textwidth]{images/Southern_pop_unblinded.pdf} \caption{\textit{Left:} Significance of the pre-trial probability of obtaining $k$ excesses with the significance of the $k^{th}$ source or higher from the Northern catalog, given background only.
\textit{Right:} Equivalent plot for the Southern catalog.} \label{fig:pop} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/ngc1068_mwl.pdf} \caption{The best-fit time-integrated astrophysical power-law neutrino flux obtained using the 10-year IceCube event selection in the direction of NGC 1068. The shaded regions represent the 1, 2 \& 3$\sigma$ error regions on the spectrum as seen in Fig.~\ref{fig:gammaScan}. This fit is compared to the $\gamma$ and corresponding $\nu$ AGN outflow models and the \textit{Fermi} Pass8 (P8) results found in \citet{Lamastra:2016axo} (which do not include modelled absorption effects \cite{Lamastra:2017iyo}). AGN-driven outflow parameters are set at $R_{out}$=100\,pc, $v_{out}$=200\,km/s, $p=2$, and $L_{kin}$=1.5$\times10^{42}$\,erg/s; violet: $L_{AGN}$=4.2$\times10^{44}$\,erg/s, $n_H$=$10^{4}$\,cm$^{-3}$, $F_{cal}=1$, $\eta_p=0.2$, $\eta_e=0.02$, $B_{ISM}=30\,\mu$G; magenta: $L_{AGN}$=2.1$\times10^{45}$\,erg/s, $n_H$=120\,cm$^{-3}$, $F_{cal}=0.5$, $\eta_p=0.5$, $\eta_e=0.4$, $B_{ISM}=250\,\mu$G; pale pink: $L_{AGN}$=4.2$\times10^{44}$\,erg/s, $n_H$=$10^4$\,cm$^{-3}$, $F_{cal}=1$, $\eta_p=0.3$, $\eta_e=0.1$, $B_{ISM}=600\,\mu$G. The upper limits from $\gamma$-ray observations are taken from H.E.S.S. (blue)~\citet{Aharonian:2005ar} and from MAGIC (black)~\citet{Acciari:2019raw}. } \label{fig:ngc1068_mwl} \end{figure} \renewcommand{\arraystretch}{1.0} \begin{table} \caption{Galactic sources examined in the stacked searches in three catalogs: Supernova Remnants (SNR), Pulsar Wind Nebulae (PWN), and Unidentified Objects (UNID). For each source, the equatorial coordinates (J2000) and the relative source weight used in the analysis are given.
\label{tab:gallist}} \begin{tabular}[c]{| c c c c c |} \hline \multicolumn{5}{ c }{Stacking Catalogs}\\ \hline Catalog & Name & $\alpha\,[\mathrm{deg}]$ & $\delta\,[\mathrm{deg}]$ & Weight\\ \hline SNR & HESS J1614-518 & 243.56 & -51.82 & 2.80$\times10^{-1}$\\ & HESS J1457-593 & 223.70 & -59.07 & 1.47$\times10^{-1}$\\ & HESS J1731-347 & 262.98 & -34.71 & 1.40$\times10^{-1}$\\ & HESS J1912+101 & 288.33 & 10.19 & 7.13$\times10^{-2}$\\ & SNR G323.7-01.0 & 233.63 & -57.20 & 6.91$\times10^{-2}$\\ & Gamma Cygni & 305.56 & 40.26 & 6.35$\times10^{-2}$\\ & CTB 37A & 258.64 & -38.55 & 5.01$\times10^{-2}$\\ & RX J1713.7-3946 & 258.36 & -39.77 & 3.94$\times10^{-2}$\\ & HESS J1745-303 & 266.30 & -30.20 & 2.77$\times10^{-2}$\\ & Cassiopeia A & 350.85 & 58.81 & 1.89$\times10^{-2}$\\ & HESS J1800-240B & 270.11 & -24.04 & 1.82$\times10^{-2}$\\ & W 51C & 290.82 & 14.15 & 1.65$\times10^{-2}$\\ & HESS J1800-240A & 270.49 & -23.96 & 1.48$\times10^{-2}$\\ & SN 1006 & 225.59 & -42.10 & 1.20$\times10^{-2}$\\ & W28 & 270.34 & -23.29 & 9.06$\times10^{-3}$\\ & CTB 37B & 258.43 & -38.17 & 8.19$\times10^{-3}$\\ & Vela Junior & 133.00 & -46.33 & 4.88$\times10^{-3}$\\ & LMC N132D & 81.26 & -69.64 & 4.83$\times10^{-3}$\\ & IC 443 & 94.51 & 22.66 & 2.51$\times10^{-3}$\\ & SNR G349.7+0.2 & 259.50 & -37.43 & 1.50$\times10^{-3}$\\ & Tycho SNR & 6.34 & 64.14 & 8.83$\times10^{-4}$\\ & W 49B & 287.75 & 9.10 & 5.04$\times10^{-4}$\\ & RCW 86 & 220.12 & -62.65 & 2.54$\times10^{-6}$\\ \hline PWN & HESS J1708-443 & 257.00 & -44.30 & 1.63$\times10^{-1}$\\ & HESS J1632-478 & 248.01 & -47.87 & 1.19$\times10^{-1}$\\ & Vela X & 128.29 & -45.19 & 1.06$\times10^{-1}$\\ & HESS J1813-178 & 273.36 & -17.85 & 6.91$\times10^{-2}$\\ & MSH 15-52 & 228.53 & -59.16 & 6.58$\times10^{-2}$\\ & HESS J1420-607 & 214.69 & -60.98 & 6.27$\times10^{-2}$\\ & HESS J1837-069 & 279.43 & -6.93 & 5.78$\times10^{-2}$\\ & HESS J1616-508 & 244.06 & -50.91 & 5.41$\times10^{-2}$\\ & HESS J1026-582 & 157.17 & -58.29 & 5.05$\times10^{-2}$\\ & 
HESS J1356-645 & 209.00 & -64.50 & 4.25$\times10^{-2}$\\ & PSR B0656+14 & 104.95 & 14.24 & 4.04$\times10^{-2}$\\ & HESS J1418-609 & 214.52 & -60.98 & 3.81$\times10^{-2}$\\ & HESS J1849-000 & 282.26 & -0.02 & 2.51$\times10^{-2}$\\ & Geminga & 98.48 & 17.77 & 2.26$\times10^{-2}$\\ & HESS J1825-137 & 276.55 & -13.58 & 1.90$\times10^{-2}$\\ & CTA 1 & 1.65 & 72.78 & 1.61$\times10^{-2}$\\ & SNR G327.1-1.1 & 238.63 & -55.06 & 8.37$\times10^{-3}$\\ & SNR G0.9+0.1 & 266.83 & -28.15 & 5.47$\times10^{-3}$\\ & SNR G054.1+00.3 & 292.63 & 18.87 & 5.11$\times10^{-3}$\\ & Crab nebula & 83.63 & 22.01 & 4.57$\times10^{-3}$\\ & HESS J1846-029 & 281.50 & -2.90 & 4.18$\times10^{-3}$\\ & SNR G15.4+0.1 & 274.50 & -15.45 & 3.99$\times10^{-3}$\\ & HESS J1119-614 & 169.81 & -61.46 & 3.49$\times10^{-3}$\\ & VER J2016+371 & 304.01 & 37.21 & 3.14$\times10^{-3}$\\ & HESS J1458-608 & 224.87 & -60.88 & 2.46$\times10^{-3}$\\ & HESS J1833-105 & 278.25 & -10.50 & 2.24$\times10^{-3}$\\ & N 157B & 84.44 & -69.17 & 1.52$\times10^{-3}$\\ & 3C 58 & 31.40 & 64.83 & 1.30$\times10^{-3}$\\ & HESS J1303-631 & 195.75 & -63.20 & 1.22$\times10^{-3}$\\ & DA 495 & 298.06 & 29.39 & 6.29$\times10^{-4}$\\ & HESS J1018-589 B & 154.09 & -58.95 & 3.22$\times10^{-4}$\\ & HESS J1718-385 & 259.53 & -38.55 & 2.56$\times10^{-4}$\\ & HESS J1640-465 & 250.12 & -46.55 & 1.56$\times10^{-5}$\\ \hline \end{tabular} \end{table} \begin{table} \begin{tabular}[c]{| c c c c c |} \hline \multicolumn{5}{ c }{Stacking Catalogs}\\ \hline Catalog & Name & $\alpha\,[\mathrm{deg}]$ & $\delta\,[\mathrm{deg}]$ & Weight\\ \hline UNID & HESS J1702-420 & 255.68 & -42.02 & 1.80$\times10^{-1}$\\ & MGRO J2019+37 & 304.01 & 37.20 & 1.17$\times10^{-1}$\\ & Westerlund 1 & 251.50 & -45.80 & 1.04$\times10^{-1}$\\ & HESS J1626-490 & 246.52 & -49.09 & 5.91$\times10^{-2}$\\ & HESS J1841-055 & 280.23 & -5.55 & 5.60$\times10^{-2}$\\ & HESS J1809-193 & 272.63 & -19.30 & 5.07$\times10^{-2}$\\ & HESS J1843-033 & 280.75 & -3.30 & 4.80$\times10^{-2}$\\ & MGRO 
J1908+06 & 287.17 & 6.18 & 4.67$\times10^{-2}$\\ & HESS J1857+026 & 284.30 & 2.67 & 2.91$\times10^{-2}$\\ & HESS J1813-126 & 273.35 & -12.77 & 2.90$\times10^{-2}$\\ & 2HWC J1814-173 & 273.52 & -17.31 & 2.61$\times10^{-2}$\\ & HESS J1831-098 & 277.85 & -9.90 & 1.90$\times10^{-2}$\\ & HESS J1852-000 & 283.00 & 0.00 & 1.77$\times10^{-2}$\\ & HESS J1427-608 & 216.97 & -60.85 & 1.71$\times10^{-2}$\\ & TeV J2032+4130 & 308.02 & 41.57 & 1.64$\times10^{-2}$\\ & Galactic Centre ridge & 266.42 & -29.01 & 1.24$\times10^{-2}$\\ & HESS J1708-410 & 257.10 & -41.09 & 1.17$\times10^{-2}$\\ & VER J2227+608 & 336.88 & 60.83 & 1.05$\times10^{-2}$\\ & HESS J1634-472 & 248.50 & -47.20 & 1.00$\times10^{-2}$\\ & 2HWC J1949+244 & 297.42 & 24.46 & 9.92$\times10^{-3}$\\ & HESS J1834-087 & 278.72 & -8.74 & 9.65$\times10^{-3}$\\ & HESS J1507-622 & 226.88 & -62.42 & 9.57$\times10^{-3}$\\ & 2HWC J1819-150 & 274.83 & -15.06 & 9.36$\times10^{-3}$\\ & 2HWC J0819+157\footnotemark[1] & 124.98 & 15.79 & 8.48$\times10^{-3}$\\ & HESS J1641-463 & 250.26 & -46.30 & 7.72$\times10^{-3}$\\ & HESS J1858+020 & 284.58 & 2.09 & 7.56$\times10^{-3}$\\ & HESS J1503-582 & 225.75 & -58.20 & 7.31$\times10^{-3}$\\ & 2HWC J1040+308\footnotemark[1] & 160.22 & 30.87 & 7.14$\times10^{-3}$\\ & Westerlund 2 & 155.75 & -57.50 & 6.80$\times10^{-3}$\\ & HESS J1804-216 & 271.12 & -21.73 & 6.60$\times10^{-3}$\\ & 2HWC J1309-054 & 197.31 & -5.49 & 4.19$\times10^{-3}$\\ & HESS J1828-099 & 277.25 & -9.99 & 4.16$\times10^{-3}$\\ & 2HWC J1928+177 & 292.15 & 17.78 & 3.32$\times10^{-3}$\\ & HESS J1848-018 & 282.12 & -1.79 & 3.03$\times10^{-3}$\\ & HESS J1729-345 & 262.25 & -34.50 & 2.91$\times10^{-3}$\\ & 2HWC J1955+285 & 298.83 & 28.59 & 2.78$\times10^{-3}$\\ & 2HWC J1852+013 & 283.01 & 1.38 & 2.76$\times10^{-3}$\\ & 2HWC J2024+417 & 306.04 & 41.76 & 2.71$\times10^{-3}$\\ & 2HWC J2006+341\footnotemark[2] & 301.55 & 34.18 & 2.64$\times10^{-3}$\\ & HESS J1808-204 & 272.00 & -20.40 & 2.05$\times10^{-3}$\\ & 2HWC J1829+070 & 277.34 & 7.03 
& 1.99$\times10^{-3}$\\ & Arc source & 266.58 & -28.97 & 1.99$\times10^{-3}$\\ & 2HWC J1921+131 & 290.30 & 13.13 & 1.69$\times10^{-3}$\\ & 2HWC J1953+294 & 298.26 & 29.48 & 1.65$\times10^{-3}$\\ & HESS J1832-085 & 278.13 & -8.51 & 1.55$\times10^{-3}$\\ & Terzan 5 & 267.02 & -24.78 & 1.54$\times10^{-3}$\\ & 2HWC J1914+117 & 288.68 & 11.72 & 1.51$\times10^{-3}$\\ & HESS J1741-302 & 265.25 & -30.20 & 1.49$\times10^{-3}$\\ & HESS J1844-030 & 281.17 & -3.10 & 1.33$\times10^{-3}$\\ & 2HWC J1938+238 & 294.74 & 23.81 & 9.80$\times10^{-4}$\\ & HESS J1832-093 & 278.19 & -9.37 & 9.22$\times10^{-4}$\\ & HESS J1826-130 & 276.50 & -13.09 & 9.21$\times10^{-4}$\\ & 2HWC J1902+048 & 285.51 & 4.86 & 6.17$\times10^{-4}$\\ & 2HWC J1907+084 & 286.79 & 8.50 & 5.08$\times10^{-4}$\\ & 30 Dor C & 83.96 & -69.21 & 3.07$\times10^{-4}$\\ & Galactic Centre & 266.42 & -29.01 & 1.83$\times10^{-4}$\\ & MAGIC J0223+403 & 35.67 & 43.04 & 9.46$\times10^{-5}$\\ & HESS J1746-308 & 266.57 & -30.84 & 7.88$\times10^{-5}$\\ \hline \hline \end{tabular} \footnotetext[1]{Assumed extension of 2.0$^\circ$} \footnotetext[2]{Assumed extension of 0.9$^\circ$} \end{table} \section{Conclusions}\label{sec:conclusion} This paper presents an updated event selection optimized for point-like neutrino source signals, applied to 10 years of IceCube data taken from April 2008 to July 2018. Multiple neutrino source searches are performed: an all-sky scan, a source catalog and corresponding catalog population study for each hemisphere, and three stacked Galactic-source searches. The results of these analyses, all searching for cumulative neutrino signals integrated over the 10 years of data-taking, are summarized in Table~\ref{tab:results_summary}.
The most significant source in the Northern catalog, NGC 1068, located 0.35$^\circ$ from the most significant excess in the Northern hemisphere, is inconsistent with a background-only hypothesis at 2.9$\,\sigma$, and the Northern source catalog as a whole shows a 3.3$\,\sigma$ inconsistency with a background-only hypothesis. The latter result comes from an excess of significant p-values in the directions of the Seyfert II galaxy NGC 1068, the blazar TXS 0506+056, and the BL Lacs PKS 1424+240 and GB6 J1542+6129. NGC 1068, at a distance of 14.4\,Mpc, is the most luminous Seyfert II galaxy detected by \textit{Fermi}-LAT \cite{2012ApJ...755..164A}. NGC 1068 is an observed particle accelerator: charged particles are accelerated in the jet of the AGN or in the AGN-driven molecular wind~\cite{2016A&A...596A..68L}, producing $\gamma$-rays and potentially neutrinos. Other work has previously indicated NGC 1068 as a potential CR accelerator \cite{2014ApJ...780..137Y,Lacki:2010vs,Loeb:2006tw}. Assuming that the observed excess is indeed of astrophysical origin and connected with NGC 1068, the best-fit neutrino spectrum inferred from this work is significantly higher than that predicted by models developed to explain the \emph{Fermi}-LAT gamma-ray measurements (see Fig.~\ref{fig:ngc1068_mwl}). However, the large uncertainty of our spectral measurement and the high X-ray and $\gamma$-ray absorption along the line of sight~\cite{2015A&A...584A..20W,Lamastra:2017iyo} prevent a straightforward connection. Time-dependent analyses and the possibility of correlating with multimessenger observations for this and other sources may provide additional evidence of neutrino emission and insights into its origin. Continued data-taking, more refined event reconstruction, and the planned upgrade of IceCube promise further improvements in sensitivity~\cite{vanSanten:2017chb}.
\section{Data Selection}\label{sec:data} The IceCube neutrino telescope is a cubic-kilometer array of digital optical modules (DOMs), each containing a 10'' PMT~\cite{2010NIMPA.139A} and on-board read-out electronics~\cite{Abbasi:2008aa}. These DOMs are arranged on 86 strings between 1.45 and 2.45\,km below the surface of the ice at the South Pole~\citep{Aartsen:2016nxy}. The DOMs are sensitive to Cherenkov light from energy losses of ultra-relativistic charged particles traversing the ice. This analysis targets astrophysical muon neutrinos and antineutrinos ($\nu_\mu$), which undergo charged-current interactions in the ice to produce a muon traversing the detector. The majority of the background for this analysis originates from CRs interacting with the atmosphere, producing showers of particles that include atmospheric muons and neutrinos. The atmospheric muons from the Southern hemisphere are able to penetrate the ice and are detected as track-like events in IceCube at a rate orders of magnitude higher than the corresponding atmospheric neutrinos~\cite{Aartsen:2016nxy}. Almost all of the atmospheric muons from the Northern hemisphere are filtered out by the Earth. However, poorly reconstructed atmospheric muons from the Southern sky create a significant background in the Northern hemisphere. Atmospheric neutrinos also produce muons from charged-current $\nu_\mu$ interactions, acting as an irreducible background in both hemispheres. Neutral-current interactions or $\nu_e$ and $\nu_\tau$ charged-current interactions produce particle showers with spherical morphology, known as cascade events. Tracks at $\sim$\,TeV energies are reconstructed with a typical angular resolution of $\lesssim 1^\circ$, while cascades have an angular resolution of $\sim10^\circ$--$15^\circ$~\citep{Aartsen:2017eiu}. This analysis selects track-like events because of their better angular resolution.
Tracks have the additional advantage that they can be used even if the neutrino interaction vertex is located outside of the detector. This greatly increases the detectable event rate. \begin{table} \caption{IceCube configuration, livetime, number of events, start and end dates, and the published reference in which the sample selection is described. } \label{tab:livetimesAbr} \centering \begin{tabular}{p{1.2cm} p{1.2cm} p{1.3cm} p{1.7cm} p{1.7cm} p{0.7cm}} \hline \\[-5pt] \multicolumn{6}{c}{Data Samples} \\[5pt] \hline Year & Livetime (Days) & Number of Events & Start Day & End Day & Ref.\\[5pt] \hline IC40 & 376.4 & 36900 & 2008/04/06 & 2009/05/20 & \citep{Abbasi:2010rd} \\[5pt] \hline IC59 & 352.6 & 107011 & 2009/05/20 & 2010/05/31 & \citep{Aartsen:2013uuv} \\[5pt] \hline IC79 & 316.0 & 93133 & 2010/06/01 & 2011/05/13 & \citep{Schatto:2014kbj} \\[5pt] \hline IC86-2011 & 332.9 & 136244 & 2011/05/13 & 2012/05/15 & \citep{Aartsen:2014cva} \\[5pt] \hline IC86-2012-18 & 2198.2 & 760923 & 2012/04/26\footnotemark[1] & 2018/07/10 & This work \\[5pt] \hline \end{tabular} \footnotetext[1]{Start date for test runs of the new processing. The remainder of this run began 2012/05/15.} \end{table} During the first three years of data included here, IceCube was incomplete and operated with 40, 59, and 79 strings. For these years, and also during the first year of data-taking of the full detector (IC86), the event selection and reconstruction were updated until they stabilized in 2012, as detailed in Table~\ref{tab:livetimesAbr}. Seven years of tracks were previously analyzed to search for point sources~\cite{Aartsen:2016oji}. Subsequently, an eight-year sample of tracks from the Northern sky used for diffuse muon neutrino searches was also analyzed for point sources~\cite{Aartsen:2018ywr}. The aim of this work is to introduce a selection which unifies the event filtering adopted in these two past searches.
Additionally, the direction reconstruction~\citep{Ahrens:2003fg,Aartsen:2013bfa} has been updated to use the deposited event energy in the detector. This improves the angular resolution by more than 10$\%$ for events above 10\,TeV compared to the seven-year study~\cite{Aartsen:2016oji}, and achieves a similar angular resolution to the eight-year Northern diffuse track selection~\cite{Aartsen:2018ywr} which also uses deposited event energy in the direction reconstruction (see Fig.~\ref{fig:psf}). The absolute pointing accuracy of IceCube has been demonstrated to be $\lesssim0.2^\circ$~\cite{Aartsen:2013zka} via measurements of the effect of the Moon shadow on the background CR flux. Different criteria are applied to select track-like events from the Northern and Southern hemisphere (with a boundary between them at declination $\delta=-5^\circ$), because the background differs in these two regions. Almost all the atmospheric muons in the Northern hemisphere can be removed by selecting high-quality track-like events. In the Southern hemisphere, the atmospheric background is reduced by strict cuts on the reconstruction quality and minimum energy, since the astrophysical neutrino fluxes are expected to have a harder energy spectrum than the background of atmospheric muons and neutrinos. This effectively removes almost all Southern hemisphere events with an estimated energy below $\sim10$\,TeV (see Fig.~\ref{fig:enPDF} in the supplementary material). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/IC86II_PSF.pdf} \caption{The median angle between simulated neutrino and reconstructed muon directions as a function of energy for the data selection used in the latest 6 years compared to that in Ref.~\cite{Aartsen:2016oji} (solid and dashed lines are for Northern and Southern hemispheres respectively) and in Ref.~\cite{Aartsen:2018ywr} for the Northern hemisphere. 
} \label{fig:psf} \end{figure} In both hemispheres, atmospheric muons and cascade events are further filtered using multi-variate Boosted Decision Trees (BDTs). In this analysis, a single BDT is trained to recognize three classes of events in the Northern hemisphere: single muon tracks from atmospheric and astrophysical neutrinos, atmospheric muons, and cascades, where neutrino-induced tracks are treated as signal. This BDT uses 11 variables related to event topology and reconstruction quality. The Northern BDT preserves $\sim90\%$ of the atmospheric neutrinos and $\sim0.1\%$ of the atmospheric muons from the initial selection of track-like events, also applied in previous muon neutrino searches~\citep{Aartsen:2016oji,Aartsen:2018ywr}. In the Southern hemisphere, the BDT and selection filters are taken from Ref.~\citep{Aartsen:2016oji}. The final all-sky event rate of $\sim2\,$mHz is dominated by muons from atmospheric neutrinos in the Northern hemisphere and by high-energy, well-reconstructed muons in the Southern hemisphere. This updated selection is applied to the final six years of data shown in Table~\ref{tab:livetimesAbr}. The preceding four years of data are handled exactly as in the past. \section{Methods and Results}\label{sec:method} The point-source searches conducted in this paper use the existing maximum-likelihood ratio method, which compares the hypothesis of a point-like signal plus diffuse background against a background-only null hypothesis. This technique, described in Refs.~\citep{Abbasi:2010rd,Braun:2008bg}, was also applied in the seven- and eight-year point-source searches~\cite{Aartsen:2016oji,Aartsen:2018ywr}. The all-sky scan and the selected source catalog searches look for directions which maximize the likelihood-ratio in the Northern and Southern hemispheres separately. Since this analysis assumes point-like sources, it has sub-optimal sensitivity to sources with extended neutrino emission regions.
The sensitivity of this analysis to a neutrino flux with an $E^{-2}$ spectrum, calculated according to \cite{Abbasi:2010rd}, shows a $\sim35\%$ improvement compared to the seven-year all-sky search~\citep{Aartsen:2016oji} due to the longer livetime and the updated event selection and reconstruction. While the sensitivity in the Northern hemisphere is comparable to the eight-year study for an $E^{-2}$ spectrum~\citep{Aartsen:2018ywr}, the analysis presented in this work achieves a $\sim30\%$ improvement in sensitivity to sources with a softer spectrum, such as $E^{-3}$. \paragraph{All-Sky Scan:} The brightest sources of astrophysical neutrinos may differ from the brightest sources observed in the electromagnetic (EM) spectrum. For example, cosmic accelerators can be surrounded by a dense medium which attenuates photon emission, while neutrinos can still be produced by cosmic-ray interactions in the medium. For this reason, a general all-sky search for the brightest single point-like neutrino source in each hemisphere is conducted that is unbiased by EM observations. This involves evaluating the signal-over-background likelihood-ratio on a grid of points across the entire sky with a finer spacing ($\sim0.1^\circ \times \sim0.1^\circ$) than the typical event angular uncertainty. The points within 8$^\circ$ of the celestial poles are excluded due to poor statistics and limitations in the background estimation technique. At each position on the grid, the likelihood-ratio function is maximized, resulting in a maximum test-statistic (TS), a best-fit number of astrophysical neutrino events ($\hat{n}_s$), and a best-fit spectral index ($\hat{\gamma}$) for an assumed power-law energy spectrum. The local pre-trial probability (p-value) of obtaining the given or a larger TS value at a certain location from background alone is estimated at every grid point by fitting the TS distribution from many background trials with a $\chi^2$ function.
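A minimal sketch of this trial-based local p-value estimate is shown below, with synthetic trials standing in for the scrambled-data TS values; the truncated-$\chi^2$ form and the 50\% over-fluctuation fraction are illustrative assumptions, and scipy's generic distribution fit replaces the analysis' actual fitting code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic background trials: under-fluctuations pile up at TS = 0 because
# n_s is bounded at zero; over-fluctuations follow roughly a chi2 shape.
n_trials = 20000
trials = np.where(rng.random(n_trials) < 0.5, 0.0, rng.chisquare(2.0, n_trials))

# Fit the positive tail with a chi2 (location and scale held fixed) and read
# off the local p-value of an observed TS from the fitted survival function.
positive = trials[trials > 0.0]
df_fit, _, _ = stats.chi2.fit(positive, floc=0.0, fscale=1.0)
eta = positive.size / trials.size        # fraction of over-fluctuating trials
ts_obs = 25.0                            # hypothetical observed TS value
p_local = eta * stats.chi2.sf(ts_obs, df_fit)
```

Fitting the tail analytically is what allows quoting p-values far smaller than 1/(number of trials), which a purely empirical count could not resolve.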
Each background trial is obtained from the data themselves by scrambling the right ascension, removing any clustering signal. The location of the most significant p-value in each hemisphere is defined to be the hottest spot. The post-trial probability is estimated by comparing the p-value of the hottest spot in the data with the distribution of hottest spots in the corresponding hemisphere from a large number of background trials. The most significant point in the Northern hemisphere is found at equatorial coordinates (J2000) right ascension $40.9^\circ$, declination $\text{-}0.3^\circ$, with a local p-value of $3.5\times10^{\text{-}7}$. The best-fit parameters at this spot are $\hat{n}_s=61.5$ and $\hat{\gamma}=3.4$. Accounting for the trials from examining the entire hemisphere reduces this significance to a post-trial p-value of 9.9$\times10^{\text{-}2}$. The probability skymap in a 3$^\circ$ by 3$^\circ$ window around the most significant point in the Northern hemisphere is plotted in Fig.~\ref{fig:nHS}. This point is found 0.35$^\circ$ from the active galaxy NGC 1068, which is also one of the sources in the Northern source catalog. The most significant hotspot in the Southern hemisphere, at right ascension $350.2^\circ$ and declination -$56.5^\circ$, is less significant, with a pre-trial p-value of $4.3\times10^{\text{-}6}$ and fit parameters $\hat{n}_s=17.8$ and $\hat{\gamma}=3.3$. The post-trial p-value of this hotspot is 0.75. Both hotspots alone are consistent with a background-only hypothesis. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{images/Northern_hotspot_paper3x3.pdf} \caption{Local pre-trial p-value map around the most significant point in the Northern hemisphere. The black cross marks the coordinates of the galaxy NGC 1068 taken from \textit{Fermi}-4FGL. } \label{fig:nHS} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/sens_plus_UL_ANT.pdf} \caption{90$\%$ C.L.
median sensitivity and 5\,$\sigma$ discovery potential as a function of source declination for a neutrino source with an $E^{-2}$ or $E^{-3}$ spectrum. The 90$\%$ upper limits excluding an $E^{-2}$ or $E^{-3}$ source spectrum are shown for the sources in the source list. The grey curves show the 90\% C.L. median sensitivity from 11 yrs of ANTARES data~\cite{Aublin:2019zzn}.} \label{fig:UL} \end{figure} \paragraph{Source Catalog Searches:} The motivation of this search is to improve the sensitivity to possible neutrino sources already observed in $\gamma$-rays. A new catalog composed of 110 sources has been constructed, which updates the catalog used in previous source searches~\citep{Aartsen:2016oji}. The new catalog uses the latest $\gamma$-ray observations and is based on rigorous application of a few simple criteria, described below. The size of the catalog was chosen to limit the trial factor applied to the most significant source in the catalog, such that a 5\,$\sigma$ p-value before trials would remain above 4\,$\sigma$ after trials. These 110 sources are composed of Galactic and extragalactic sources, which are selected separately. The extragalactic sources are selected from the \textit{Fermi}-LAT 4FGL catalog~\citep{2019arXiv190210045T}, since it provides the highest-energy unbiased measurements of $\gamma$-ray sources over the full sky. Sources from 4FGL are weighted according to the integral \textit{Fermi}-LAT flux above 1\,GeV divided by the sensitivity flux of this analysis at the respective source declination. The 5$\%$ highest-weighted BL Lacs and flat-spectrum radio quasars (FSRQs) are each selected. The minimum weighted integral flux from the combined selection of BL Lacs and FSRQs is used as a flux threshold to include sources marked as unidentified blazars and AGN. Eight 4FGL sources are identified as starburst galaxies.
Since these types of objects are thought to host hadronic emission~\citep{Loeb:2006tw,Murase:2013rfa}, they are all included in the final source list. To select Galactic sources, we consider measurements of VHE $\gamma$-ray sources from TeVCat~\cite{tevcat,2008ICRCTevCat} and gammaCat~\cite{gammacat}. Spectra of the $\gamma$-rays were converted to equivalent neutrino fluxes, assuming a purely hadronic origin of the observed $\gamma$-ray emission with $E_\gamma\simeq2E_\nu$, and compared to the sensitivity of this analysis at the declination of the source (Fig.~\ref{fig:UL}). Those Galactic objects with predicted energy fluxes $>50\%$ of IceCube's sensitivity limit for an $E^{-2}$ spectrum were included in the source catalog. A total of 12 Galactic $\gamma$-ray sources survived the selection. The final list of neutrino source candidates is a Northern-sky catalog containing 97 objects (87 extragalactic and 10 Galactic) and a Southern-sky catalog containing 13 sources (11 extragalactic and 2 Galactic). The large North-South difference is due to the difference in the sensitivity of IceCube between the Northern and Southern hemispheres. The post-trial p-value for each catalog describes the significance of the single most significant source in the catalog and is calculated as the fraction of background trials in which the pre-trial p-value of the most significant fluctuation is smaller than the pre-trial p-value found in data. The obtained pre-trial p-values are provided in Table~\ref{tab:srclist}, and the associated 90\% C.L. flux upper limits are shown in Fig.~\ref{fig:UL}, together with the expected sensitivity and discovery potential fluxes. The most significant excess in the Northern catalog of 97 sources is found in the direction of the galaxy NGC 1068, examined here by IceCube for the first time, with a local pre-trial p-value of $1.8\times10^{-5}$ (4.1\,$\sigma$).
The best fit parameters are $\gamma=3.2$ and $\hat{n}_s=50.4$, consistent with the results for the all-sky Northern hottest spot, $0.35^\circ$ away. From Figs.~\ref{fig:angErr} and \ref{fig:nHS} it can be inferred that the significance of the all-sky hotspot and the excess at NGC 1068 are dominated by the same cluster of events. The parameters of the best fit spectrum at the coordinates of NGC 1068 are shown in Fig.~\ref{fig:gammaScan}. When the significance of NGC 1068 is compared to the most significant excesses in the Northern catalog from many background trials, the post-trial significance is $2.9\,\sigma$. To study whether the $0.35^\circ$ offset between the all-sky hotspot and NGC 1068 is typical of the reconstruction uncertainty of a neutrino source, we inject a soft-spectrum source similar to the best-fit $E^{-3.2}$ flux at the position of NGC 1068 in our background samples. Scanning in a $5^\circ$ window around the injection point, we find that the median separation between the most significant hotspot and the injection point is 0.35$^\circ$. Thus, if the excess is due to an astrophysical signal from NGC 1068, the offset between the all-sky hotspot and \textit{Fermi}-LAT's coordinates is consistent with the IceCube angular resolution for such a source. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/GammaScanNGC1068.pdf} \caption{Likelihood map at the position of NGC 1068 as a function of the astrophysical flux spectral index and normalization at 1\,TeV. Contours show 1, 2, 3, and 4\,$\sigma$ confidence intervals assuming Wilks' theorem with 2 degrees of freedom~\citep{Wilks:1938dza}. The best fit spectrum is marked with ``$\times$''. } \label{fig:gammaScan} \end{figure} Out of the 13 different source locations examined in the Southern catalog, the most significant excess has a pre-trial p-value of 0.06 in the direction of PKS 2233-148. The associated post-trial p-value is 0.55, which is consistent with background.
Four sources in the Northern catalog have a pre-trial p-value $<0.01$: NGC 1068, TXS 0506+056, PKS 1424+240, and GB6 J1542+6129. Evidence has been presented for TXS 0506+056 to be a neutrino source~\citep{IceCube:2018cha} using an overlapping event selection in a time-dependent analysis. In this work, in which we only consider the cumulative signal integrated over ten years, we find a pre-trial significance of 3.6\,$\sigma$ at the coordinates of TXS 0506+056 for a best fit spectrum of $E^{-2.1}$, consistent with previous results. In addition to the single source search, a source population study is conducted to understand if excesses from several sources, each not yet at evidence level, can cumulatively indicate a population of neutrino sources in the catalog. The population study uses the pre-trial p-values of each source in the catalog and searches for an excess in the number of small p-values compared to the uniform background expectation. If the number of objects in the search catalog is $N$, and the number of sources below a given threshold $p_k$ is $k$, then the probability of background producing $k$ or more sources with p-values smaller than $p_k$ is given by the cumulative binomial probability: \begin{equation} p_\mathrm{bkg}=\sum _{i=k}^{N}P_\mathrm{binom}(i|p_k,N)=\sum _{i=k}^{N}\binom{N}{i}p_k^i(1-p_k)^{N-i}\text{ .} \end{equation} In order to maximize sensitivity to any possible population size of neutrino sources within the catalog, the probability threshold ($p_k$) is increased iteratively to vary $k$ between 1 and $N$. The result of this search is the most significant $p_\mathrm{bkg}$ from the $N$ different tested values of $k$; the post-trial p-value from this search must therefore take into account a trial factor for the different tested values of $k$. The most significant $p_\mathrm{bkg}$ from the Northern catalog population analysis is $3.3\times 10^{\text{-}5}$ (4.0$\,\sigma$), which is found when $k=4$ (see Fig.~\ref{fig:pop}).
The four most significant sources which contribute to this excess are those with p-value $<0.01$ as described above. When accounting for the fact that different signal population sizes are tested, the post-trial p-value is $4.8\times 10^{\text{-}4}$ (3.3$\,\sigma$). Since evidence has already been presented for TXS 0506+056 to be a neutrino source~\citep{IceCube:2018cha}, an \textit{a posteriori} search is conducted removing this source from the catalog. The resulting most significant excess is 2.3$\,\sigma$ post-trial due to the remaining three most significant sources. For the Southern catalog, the most significant $p_\mathrm{bkg}$ is 0.12, provided by 5 of the 13 sources. The resulting post-trial p-value is 0.36. \paragraph{Stacked Source Searches:} In the case of catalogs of sources that produce similar fluxes, stacking searches require a lower flux per source for a discovery than considering each source individually. Three catalogs of Galactic $\gamma$-ray sources are stacked in this paper. Sources are selected from VHE $\gamma$-ray measurements and categorized into pulsar wind nebulae (PWN), supernova remnants (SNR) and unidentified objects (UNID), with the aim of grouping objects likely to have similar properties as neutrino emitters. The final groups consist of 33 PWN, 23 SNR, and 58 UNID, as described in Table~\ref{tab:gallist}. A weighting scheme is adopted to describe the relative contribution expected from each source in a single catalog based on the integral of the extrapolated $\gamma$-ray flux above 10\,TeV.
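Both the 4FGL-based source selection and the stacked catalogs rely on relative source weights of the same schematic form: a flux measure, optionally divided by the declination-dependent sensitivity. A minimal sketch, with made-up numbers and a hypothetical `relative_weights` helper (for the stacking case one would pass unit sensitivities):

```python
def relative_weights(fluxes, sensitivities):
    """Weight each source by its integral flux divided by the detector
    sensitivity flux at its declination, normalized to sum to one."""
    raw = [f / s for f, s in zip(fluxes, sensitivities)]
    total = sum(raw)
    return [w / total for w in raw]

# Illustrative numbers in arbitrary units, not measured values:
fluxes = [3.0, 1.5, 0.6]   # integral gamma-ray fluxes of three sources
sens = [1.0, 0.5, 2.0]     # sensitivity flux at each source declination
weights = relative_weights(fluxes, sens)
```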
All three catalogs find p-values $>0.1$. \begin{table} \caption{Summary of final p-values (pre-trial and post-trial) for each point-like source search implemented in this paper.} \label{tab:results_summary} \centering \begin{tabular}{ p{1.7cm} p{1.4cm} p{2.5cm} p{2.6cm} }\hline \\[-5pt] Analysis & Category & Pre-trial significance ($p_{local}$) & Post-trial significance \\ \hline \hline All-Sky & North & $3.5\times10^{-7}$ & $9.9\times10^{-2}$ \\ Scan & South & $4.3\times10^{-6}$ & 0.75 \\ \hline Source List & North & $1.8\times10^{-5}$ & $2.0\times10^{-3}$ (2.9$\,\sigma$)\\ & South & $5.9\times10^{-2}$ & 0.55 \\ \hline Catalog & North & $3.3\times10^{-5}$ & $4.8\times10^{-4}$ (3.3$\,\sigma$) \\ Population & South & 0.12 & 0.36 \\ \hline Stacking & SNR & -- & 0.11 \\ Search & PWN & -- & 1.0 \\ & UNID & -- & 0.4 \\ \hline \end{tabular} \end{table}
\section{Introduction} One of the main goals of the future Electron - Ion Colliders (EIC) \cite{Raju,Boer,Accardi,LHeC,Aschenauer:2017jsk} is to perform a detailed investigation of the hadronic structure in the non-linear regime of Quantum Chromodynamics (QCD) and, in particular, to determine the presence of gluon saturation effects, the magnitude of the associated non-linear corrections and the correct theoretical framework for their description \cite{hdqcd}. Such expectations are motivated by the enhancement of the non-linear effects with the nuclear mass number through the nuclear saturation scale, $Q^2_{s,A}$, which determines the onset of non-linear effects in the QCD dynamics and is enhanced with respect to the nucleon one by a factor $\propto A^{\frac{1}{3}}$. Consequently, it is possible to access in electron - nucleus ($eA$) collisions the high parton densities that would be achieved in an electron - proton collider at energies at least one order of magnitude higher than those probed at HERA. A smoking gun of the gluon saturation effects in $eA$ collisions is the analysis of diffractive events, which are predicted to contribute half of the total cross section in the asymptotic limit of very high energies, with the other half being formed by all inelastic processes \cite{Nikolaev,simone2}. For the kinematical range of the future EIC, it is expected that the contribution of the diffractive events is $\approx 20 \%$ \cite{Nik_schafer,Kowalski_prc,erike_ea2}, which has motivated intense phenomenological studies of the implications of the gluon saturation effects for the diffractive production of different final states. A promising observable is the exclusive vector meson production off large nuclei \cite{vmprc,Caldwell,Lappi_inc,Toll,Lappi_bal,diego,Mantysaari:2017slo,Mantysaari:2018zdd}.
In the QCD dipole approach \cite{nik}, such a process can be factorized in terms of the fluctuation of the virtual photon into a $q \bar{q}$ color dipole, the dipole-nucleus scattering by a color singlet exchange and the recombination into the exclusive final state. An important characteristic of the exclusive vector meson production is that it is experimentally clean, with the final state being unambiguously identified by the presence of a rapidity gap. Moreover, such processes are driven by the gluon content of the target. As the cross section is proportional to the square of the scattering amplitude, the exclusive vector meson production is strongly sensitive to the underlying QCD dynamics. Another advantage of the study of this process in $eA$ collisions is the possibility of studying coherent and incoherent interactions, which provide different insights into the nuclear structure and the QCD dynamics at high energies. The coherent and incoherent vector meson production in $eA$ collisions are represented in Fig. \ref{fig:diagram}. If the nucleus scatters elastically, the process is called coherent production, and the associated cross section measures the average spatial distribution of gluons in the target. On the other hand, if the nucleus scatters inelastically, i.e., breaks up due to the $p_T$ kick given to the nucleus, the process is denoted incoherent production. In this case, one sums over all final states of the target nucleus, except those that contain particle production. The associated cross section probes the fluctuations and correlations in the gluon density. In both cases, the final state is characterized by a rapidity gap. It is expected that the coherent production dominates at small squared transverse momentum transfer $t$ ($|t|\cdot R_A^2/3 \ll 1$, where $R_A$ is the nuclear radius), with its signature being a sharp forward diffraction peak.
On the other hand, incoherent production should dominate at large $t$ ($|t|\cdot R_A^2/3 \gg 1$), with the associated $t$-dependence being to a good accuracy the same as in the production off free nucleons. As the momentum transfer is the Fourier conjugate of the impact parameter, the coherent and incoherent exclusive vector meson production are sensitive to different aspects of the geometric structure of the target, which at high energies can be identified with the spatial gluon distribution of the target. In the coherent case, the averaged profile of the gluon density is probed. In contrast, the incoherent cross sections constrain the event - by - event fluctuations of the gluonic fields in the target. Our goal in this paper is to present a detailed investigation of the coherent and incoherent exclusive vector meson electroproduction in $eA$ collisions considering the energy-dependent hot -- spot model proposed in Ref. \cite{Cepila:2016uku} for a proton target and extended to the nuclear case in Refs. \cite{Cepila:2017nef,Cepila:2018zky} (for similar approaches see, e.g., Refs. \cite{Mantysaari:2016ykx,Mantysaari:2016jaz,Traini:2018hxd}). In this model, the hadronic structure is described in terms of subnucleonic degrees of freedom representing regions of high gluon density, denoted hot -- spots, which increase in number with decreasing Bjorken - $x$. Such an energy dependence is motivated by the fact that the non - linear QCD dynamics predicts that the transverse density profile of the target changes with the energy. As demonstrated in Refs. \cite{Cepila:2017nef,Cepila:2018zky,Bendova:2018bbb}, such a model is able to describe the current data for the exclusive and dissociative production of vector mesons in $ep$ collisions, as well as to find satisfactory agreement with the data for the exclusive $J/\Psi$ photoproduction in ultraperipheral heavy ion collisions.
In this paper we will estimate the coherent and incoherent cross sections for the production of light ($\rho$ and $\phi$) and heavy ($J/\Psi$ and $\Upsilon$) vector mesons considering different nuclear targets ($A = Au, Xe$ and $Ca$) and assuming two distinct models for the nuclear profile. We will present predictions for the dependencies of the cross sections on the energy, atomic number, photon virtuality and squared momentum transfer. Our results demonstrate that the ratio between the incoherent and coherent cross sections is strongly sensitive to the presence of subnucleonic degrees of freedom in the form of hot spots. This paper is organized as follows. In the next Section, we present a brief review of the formalism and discuss the two models for the nuclear profile used in our calculations. In Section \ref{sec:results} we present our results for the coherent and incoherent cross sections, considering the kinematical range that will be probed by the electron -- ion facilities that are under design: the EIC in the USA and the LHeC project at CERN. Finally, in Section \ref{sec:sum} we summarize our main conclusions. \begin{figure}% \centering \subfigure{% \includegraphics[width=0.48\textwidth]{vm_eA_coh.pdf}}% \qquad \subfigure{% \includegraphics[width=0.48\textwidth]{vm_eA_incoh.pdf}} \caption{The coherent (left) and incoherent (right) exclusive vector meson production in $eA$ collisions.} \label{fig:diagram}% \end{figure} \section{Review of the formalism} The coherent and incoherent exclusive vector meson electroproduction in $eA$ collisions are represented in the left and right panels of Fig. \ref{fig:diagram}, respectively. The reaction is given by $e(l) + A(P) \rightarrow e(l^\prime) + Y(P^\prime) + V (P_V)$, where $Y = A$ in the coherent case and $Y = A^*$ for incoherent interactions. Moreover, $l$ and $l^\prime$ are the electron momenta in the initial and final state, respectively, while $P$ and $P^\prime$ are the initial and final nucleus momenta.
Finally, $P_V$ is the momentum of the vector meson in the final state. The kinematics is described by the following Lorentz invariant quantities: $Q^2 = - q^2 = - (l - l^\prime)^2$, $t = - (P^\prime - P)^2$, $W^2 = (P + q)^2$ and $x = (M^2 + Q^2 -t)/(W^2+Q^2)$ (note that this definition of $x$ differs from the one in \cite{Cepila:2016uku,Cepila:2017nef}), where $Q^2$ is the photon virtuality, $W$ is the center of mass energy of the virtual photon -- nucleus system and $M$ is the mass of the vector meson. In the color dipole formalism, the $eA \rightarrow e V Y$ process can be factorized in terms of the fluctuation of the virtual photon into a $q \bar{q}$ color dipole, the dipole-nucleus scattering by a color singlet exchange and the recombination into the exclusive final state $V$. The amplitude for producing a vector meson diffractively in an electron - nucleus scattering is given by \begin{eqnarray} {\cal A}_{T,L}({x},Q^2,\Delta) = i \,\int d^2\mbox{\boldmath $r$} \int \frac{dz}{4\pi} \int \, d^2\mbox{\boldmath $b$} \, e^{-i[\mbox{\boldmath $b$} -(1-z)\mbox{\boldmath $r$}]\cdot\mbox{\boldmath $\Delta$}} \,\, (\Psi^{V*}\Psi)_{T,L} \,\,\frac{d\sigma_{dA}}{d^2\mbox{\boldmath $b$}}({x},\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \label{amp} \end{eqnarray} where $T$ and $L$ denote the transverse and longitudinal polarizations of the virtual photon, $(\Psi^{V*}\Psi)_{i}$ denotes the wave function overlap between the virtual photon and vector meson wave functions, $\Delta = \sqrt{-t}$ is the momentum transfer and $\mbox{\boldmath $b$}$ is the impact parameter of the dipole relative to the target. The variables $\mbox{\boldmath $r$}$ and $z$ are the dipole transverse radius and the momentum fraction of the photon carried by the quark (the antiquark then carries $1-z$), respectively.
Moreover, ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ is the dipole-nucleus cross section (for a dipole at impact parameter $\mbox{\boldmath $b$}$) which encodes all the information about the hadronic scattering, and thus about the non-linear and quantum effects in the hadron wave function. Such a quantity depends on the $\gamma^*A$ center - of - mass reaction energy, $W$, the photon virtuality and the mass of the vector meson through the Bjorken - $x$ variable. Consequently, the cross section for the production of light vector mesons at low $Q^2$ is much more sensitive to low - $x$ effects than the one for the production of heavy mesons. In addition, the study of the $\rho$ and $\phi$ production at different photon virtualities allows one to probe the transition between the non-linear and linear regimes of the QCD dynamics. In principle, ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ can be derived using the Color Glass Condensate (CGC) formalism \cite{CGC}, which is characterized by an infinite hierarchy of equations, the so-called Balitsky-JIMWLK equations \cite{BAL,CGC}, which reduce in the mean field approximation to the Balitsky-Kovchegov (BK) equation \cite{BAL,kov}. As in the analysis presented in Ref. \cite{Cepila:2017nef}, in this paper we will describe the dipole - nucleus cross section using the Glauber-Gribov formalism \cite{gribov}, in which it is given by \begin{eqnarray} \frac{d\sigma_{dA}}{d^2\mbox{\boldmath $b$}} = 2\,\left( 1 - \exp \left[-\frac{1}{2} \, \sigma_{dp}(x,\mbox{\boldmath $r$}^2) \,T_A(\mbox{\boldmath $b$})\right]\right) \,\,, \label{enenuc} \end{eqnarray} where $\sigma_{dp}$ is the dipole-proton cross section and $T_A(\mbox{\boldmath $b$})$ is the nuclear profile function. Such an equation takes into account the multiple elastic rescattering diagrams of the $q \overline{q}$ pair and is justified in the large coherence length regime ($l_c \gg R_A$).
In this limit the transverse separation $\mbox{\boldmath $r$}$ of partons in the multiparton Fock state of the photon becomes a conserved quantity, {\it i.e.}, the size of the pair $\mbox{\boldmath $r$}$ becomes an eigenvalue of the scattering matrix. Following Refs. \cite{Cepila:2016uku,Cepila:2017nef}, we will assume that $\sigma_{dp}(x,\mbox{\boldmath $r$}^2) = \sigma_0 \, {\cal{N}}_p(x,\mbox{\boldmath $r$}^2)$, where the value of $\sigma_0$ is fixed by the value of the proton profile in impact parameter space and ${\cal{N}}_p$ is the forward dipole scattering amplitude, which we choose to be given by the model of Golec-Biernat and Wusthoff \cite{GBW,Cepila:2016uku,Cepila:2018zky} \begin{equation}\label{eq:N} {\cal{N}}_p (x,\mbox{\boldmath $r$}^2) = (1 - \exp[ -r^2 Q^2_s(x)/4]), \quad Q^2_s(x) = Q^2_0(x_0/x)^\lambda \end{equation} with the saturation scale, $Q^2_s$, determined by the parameters $\lambda, x_0$ and $Q_0^2$. It is important to emphasize that this model for ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ allows one to describe the current experimental data on the nuclear structure function \cite{armesto,Kowalski_prc,erike_ea2}. In order to estimate the amplitude ${\cal A}_{T,L}({x},Q^2,\Delta)$ we also have to assume a model for the overlap function $(\Psi^{V*}\Psi)_{i}$. In what follows we will assume the boosted-Gaussian model for the vector meson wave functions \cite{Nemchik:1994fp,Nemchik:1996cw}, with the numerical values of the parameters as in \cite{Kowalski:2006hc}. For coherent interactions, the nucleus is required to remain in its ground state, i.e., intact after the interaction, which corresponds to taking the average over the configurations of the nuclear wave function at the level of the scattering amplitude.
Consequently, the coherent cross section is obtained by averaging the amplitude before squaring it and the differential distribution is given by \begin{equation}\label{eq:xsec-coh} \left.\frac{d\sigma^{\gamma A \rightarrow V\,A}}{dt}\right|_{T,L} = \frac{1}{16\pi}\left| \left\langle \mathcal{A}(x,Q^2, \Delta)_{T,L} \right\rangle \right|^2. \end{equation} On the other hand, for incoherent interactions the average over configurations is taken at the cross section level, the nucleus can break up, and the resulting incoherent cross section is proportional to the variance of the amplitude with respect to the nucleon configurations of the nucleus, i.e., it measures the fluctuations of the gluon density inside the nucleus. In this case, the differential cross sections are expressed as follows: \begin{equation}\label{eq:xsec-inc} \left.\frac{d\sigma^{\gamma A \rightarrow V\,Y}}{dt}\right|_{T,L} = \frac{1}{16\pi} \left( \left\langle\left| \mathcal{A}(x,Q^2,\vec \Delta)_{T,L} \right|^2 \right\rangle - \left| \left\langle \mathcal{A}(x,Q^2,\vec \Delta)_{T,L} \right\rangle \right|^2\right), \end{equation} where $Y = A^*$ represents the dissociative state. In our calculations we include the skewedness correction by multiplying the coherent and incoherent cross sections by the factor $(R_g^{T,L})^2$ as given in Ref. \cite{Shuvaev:1999ce}. The coherent and incoherent cross sections depend on the description of the nuclear profile $T_A(\mbox{\boldmath $b$})$. It is common in the literature to estimate this quantity assuming a given model for the nuclear density function $\rho_A (\vec{r})$, which implies the smooth behaviour of $T_A(\mbox{\boldmath $b$})$ represented in the left panel of Fig. \ref{fig:TAcompareMap}. In principle, such a model is realistic if the observable in which we are interested is sensitive only to the averaged behaviour over the configurations in the nuclear wave function.
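Eqs. \eqref{eq:xsec-coh} and \eqref{eq:xsec-inc} can be illustrated with a toy Monte Carlo: the coherent cross section follows from the squared mean of the configuration-dependent amplitudes, the incoherent one from their variance. The amplitude ensemble below is a made-up placeholder, not an evaluation of Eq. \eqref{amp}:

```python
import math
import random

def coherent_incoherent(amplitudes):
    """Return (dsigma_coh/dt, dsigma_inc/dt) from an ensemble of complex
    amplitudes over configurations: coherent = |<A>|^2 / (16 pi),
    incoherent = (<|A|^2> - |<A>|^2) / (16 pi)."""
    n = len(amplitudes)
    mean = sum(amplitudes) / n
    mean_sq = sum(abs(a) ** 2 for a in amplitudes) / n
    coh = abs(mean) ** 2 / (16 * math.pi)
    inc = (mean_sq - abs(mean) ** 2) / (16 * math.pi)
    return coh, inc

# Toy ensemble: a fixed amplitude plus configuration-dependent noise,
# mimicking an ensemble of 200 sampled nucleon configurations.
random.seed(1)
amps = [complex(1.0 + 0.3 * random.gauss(0.0, 1.0),
                0.3 * random.gauss(0.0, 1.0)) for _ in range(200)]
coh, inc = coherent_incoherent(amps)
```

By construction the incoherent piece is non-negative (it is a variance) and vanishes when all configurations give the same amplitude.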
However, as discussed above, the incoherent cross section is sensitive to fluctuations in the configurations. Therefore, a more detailed model should be considered to estimate this observable. One possibility is to assume that each nucleon in the nucleus has a Gaussian profile of width $B_p$, centered at random positions $\mbox{\boldmath $b$}_i$ sampled from a Woods-Saxon nuclear profile \begin{equation}\label{eq:Ths0} T_A(\mbox{\boldmath $b$}) = \frac{1}{2\pi B_p} \sum_{i=1}^{A} \exp\left[ - \frac{(\mbox{\boldmath $b$} - \mbox{\boldmath $b$}_i)^2}{2B_p} \right] \,\,. \end{equation} A typical configuration for this model, denoted {\it nu} model hereafter, is represented in the central panel of Fig. \ref{fig:TAcompareMap}. On the other hand, we can also assume that the nucleons inside the nucleus are themselves made up of hot spots with a Gaussian profile of width $B_{hs}$, distributed according to a Gaussian of width $B_{p}$ inside the nucleon at positions $\mbox{\boldmath $b$}_j$ \begin{equation}\label{eq:Ths} T_A(\mbox{\boldmath $b$}) = \frac{1}{2\pi B_{hs}} \sum_{i=1}^{A} \frac{1}{N_{hs}} \sum_{j=1}^{N_{hs}} \exp\left[ - \frac{(\mbox{\boldmath $b$} - \mbox{\boldmath $b$}_i - \mbox{\boldmath $b$}_j)^2}{2B_{hs}} \right] \end{equation} where $N_{hs}$ is a random number drawn from a zero-truncated Poisson distribution \cite{Cepila:2017nef} whose underlying Poisson distribution has the mean value \begin{equation}\label{eq:Nhs} \langle N_{hs}(x) \rangle = p_0 x^{p_1} (1 + p_2 \sqrt{x})\,\,. \end{equation} A typical configuration of the hot - spot ({\it hs}) model is presented in the right panel of Fig. \ref{fig:TAcompareMap}. Some additional comments are in order. In the left panel, the nuclear profile is calculated from the nuclear density function $\rho_A(\vec r)$ (for Gold) by integration over the longitudinal coordinate $z$. Since it is an average, this nuclear profile has zero variance in an event-by-event calculation.
The nuclear profile of the $nu$ model (central panel) is represented by $A$ Gaussians seeded according to the distribution $\rho_A(\vec r)$. This model leads to a different configuration for every event, as was suggested, e.g., in \cite{Kowalski:2006hc, Mantysaari:2017dwh}, leading to a non-zero variance. Finally, in the $hs$ model (right panel), hot spots are generated for every nucleon of the $nu$ model, with their number controlled by Eq. \eqref{eq:Nhs} (here for $x=0.001$). Comparing the two visualizations, we find that the $hs$ model predicts a more dilute and non - uniform distribution in comparison to the $nu$ one. In what follows we will estimate the coherent and incoherent cross sections considering the {\it nu} and {\it hs} models of the nuclear profile function. Our goal is to verify whether these cross sections are able to discriminate between these two models for the description of the nucleon configurations in the nucleus and whether they are sensitive to the presence of hot - spots inside the nucleons. All parameters present in our calculations have been fixed from a comparison to data on $J/\psi$ and $\rho$ photoproduction off protons \cite{Cepila:2018zky}. The values of the parameters and the associated discussion can be found in \cite{Cepila:2016uku} and will not be repeated here. In particular, we would like to highlight the discussion in \cite{Cepila:2017nef,Cepila:2018zky,Bendova:2018bbb} of the values of $B_p$ chosen for the different vector mesons. Some comments are in order. In our study, following the previous studies performed in Refs. \cite{Cepila:2016uku,Cepila:2017nef,Cepila:2018zky}, we assume that $B_p$ and $B_{hs}$ are constants, with their values determined by fitting the HERA data. As demonstrated in \cite{Cepila:2016uku,Cepila:2017nef,Cepila:2018zky}, such a simplified approach describes the experimental data for the total cross sections and $t$-distributions. In principle, such quantities can be energy dependent.
An alternative is to assume that $B_{hs}$ is related to the saturation scale by $B_{hs} = 1/Q_s^2(x)$, which implies that at larger energies we will have smaller hot spots. Moreover, it is also possible to assume that the radius of the nucleons grows logarithmically with energy. This energy dependence would imply a change in the number of hot spots, such that when the nucleon is larger, more hot spots are needed to fill the phase space. In other words, there is a correlation between $B_p$ and $N_{hs}$. The study of such alternatives is the subject of a separate publication. The impact on our predictions for large nuclear targets is small. Finally, a comment about the numerical method used in our calculations is in order. The mean value in Eqs. \eqref{eq:xsec-coh} and \eqref{eq:xsec-inc} refers to the average and the variance over the configurations (events). Due to the enormous computer resources needed to evaluate the integrals in Eq. \eqref{amp}, we have only used 200 configurations. The error from the integration of an individual configuration is low ($\approx 2\%$). However, the statistical error of the set of 200 configurations is higher: roughly 10\% for the incoherent cross section (approximately 3\% for the coherent one), and for the $\rho$ meson it grows with increasing mass number $A$ and decreasing scale $Q^2$, reaching up to 30\% for Gold at $Q^2=0.05$~GeV$^2$. In our predictions we will present the associated uncertainty bands.
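The event-by-event machinery of this Section can be sketched end to end in a short Monte Carlo: nucleon centers are sampled from a Woods-Saxon density, the hot spots per nucleon from the zero-truncated Poisson of Eq. \eqref{eq:Nhs}, the profile $T_A(\mbox{\boldmath $b$})$ from Eq. \eqref{eq:Ths}, and the dipole - nucleus cross section from Eqs. \eqref{enenuc} and \eqref{eq:N}. All numerical parameter values below are rough placeholders, not the fitted values of Refs. \cite{Cepila:2016uku,Cepila:2017nef,Cepila:2018zky}; positions are in GeV$^{-1}$ and the widths $B$ in GeV$^{-2}$.

```python
import math
import random

# Placeholder parameters (illustrative only, not the fitted values):
B_P, B_HS = 4.0, 0.8               # Gaussian widths B_p, B_hs (GeV^-2)
SIGMA_0 = 20.0                     # normalization of sigma_dp (GeV^-2)
Q0_SQ, X0, LAM = 1.0, 3e-4, 0.2    # GBW saturation-scale parameters
P0, P1, P2 = 0.01, -0.56, 200.0    # parameters of the mean <N_hs(x)>

def poisson(lam):
    """Simple Poisson sampler (Knuth's method; fine for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

def n_hotspots(x):
    """Zero-truncated Poisson draw with the mean of Eq. (Nhs)."""
    mean = P0 * x ** P1 * (1 + P2 * math.sqrt(x))
    while True:
        n = poisson(mean)
        if n > 0:
            return n

def sample_nucleons(a_mass, r_a=32.3, d=2.7):
    """Sample nucleon centers from a Woods-Saxon density by rejection
    and keep the transverse coordinates (bx, by)."""
    pts = []
    while len(pts) < a_mass:
        x, y, z = (random.uniform(-2 * r_a, 2 * r_a) for _ in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        if random.random() < 1.0 / (1.0 + math.exp((r - r_a) / d)):
            pts.append((x, y))
    return pts

def t_a(b, nucleons, x):
    """Hot-spot nuclear profile T_A(b), Eq. (Ths), for one configuration."""
    total = 0.0
    for (bx, by) in nucleons:
        nhs = n_hotspots(x)
        for _ in range(nhs):
            hx = random.gauss(bx, math.sqrt(B_P))   # hot-spot position
            hy = random.gauss(by, math.sqrt(B_P))
            d2 = (b[0] - hx) ** 2 + (b[1] - hy) ** 2
            total += math.exp(-d2 / (2 * B_HS)) / (2 * math.pi * B_HS * nhs)
    return total

def dsigma_da(r, x, ta):
    """Glauber-Gribov dipole-nucleus cross section, Eq. (enenuc),
    with the GBW dipole-proton amplitude, Eq. (N)."""
    qs2 = Q0_SQ * (X0 / x) ** LAM
    sigma_dp = SIGMA_0 * (1.0 - math.exp(-r * r * qs2 / 4.0))
    return 2.0 * (1.0 - math.exp(-0.5 * sigma_dp * ta))

random.seed(0)
nucleons = sample_nucleons(197)            # a Gold-like nucleus
ta0 = t_a((0.0, 0.0), nucleons, x=1e-3)    # profile at b = 0
val = dsigma_da(r=1.0, x=1e-3, ta=ta0)     # bounded by 2 (unitarity)
```

Averaging amplitudes built from many such configurations, and taking their variance, reproduces the split into coherent and incoherent cross sections discussed above.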
\begin{figure}[!tbp] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[width=\textwidth]{avg.pdf} \end{minipage} \hfill \begin{minipage}[b]{0.325\textwidth} \includegraphics[width=\textwidth]{nu.pdf} \end{minipage} \hfill \begin{minipage}[b]{0.325\textwidth} \includegraphics[width=\textwidth]{hs.pdf} \end{minipage} \caption{Comparison of nuclear profiles, $T_{A}(b)$, calculated from the nuclear density function $\rho_A(\vec r)$ (left), calculated using the $nu$ model where each nucleon is represented by a Gaussian (center), and calculated using the hot spots model, where every nucleon from the $nu$ model is represented by a set of hot spots (right).} \label{fig:TAcompareMap} \end{figure} \section{Results} \label{sec:results} In what follows we present our predictions for the energy, photon virtuality, atomic number and momentum transfer dependencies of the coherent and incoherent cross sections, considering the two models for the nuclear profile function discussed in the previous Section. We will consider the production of two light vector mesons ($\rho$ and $\phi$) as well as two heavy vector mesons ($J/\Psi$ and $\Upsilon$). Our focus will be on the kinematical range that will probably be probed in the future electron - ion colliders under design: the EIC in the USA ($\sqrt{s} \approx 100$ GeV) and the LHeC at CERN ($\sqrt{s} \approx 1000$ GeV). As we are interested in the low - $x$ region, we will present results for $Q^2 \le 10$ GeV$^2$ and will consider different atomic nuclei ($A = Au, \, Xe$ and $Ca$).
\begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{017_Au_rho_vs_Q2_coh.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{019_Au_phi_vs_Q2_coh.pdf}}\\% \subfigure{% \includegraphics[width=0.4\textwidth]{021_Au_jpsi_vs_Q2_coh.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{023_Au_upsilon_vs_Q2_coh.pdf}}% \caption{Predictions for coherent exclusive vector meson production considering a Gold target as a function of energy $W$ for different values of $Q^2$. The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:xseccoh-energy}% \end{figure} In Fig. \ref{fig:xseccoh-energy} we present our predictions for the energy dependence of the coherent cross sections for different values of the photon virtuality $Q^2$, considering an $eAu$ collision. The dashed lines correspond to the predictions obtained assuming that the nuclear profile is made up of nucleons, denoted $nu$, and the solid lines to the nuclear profile made up of hot spots, $hs$. In agreement with previous studies \cite{vmprc,Caldwell,Lappi_inc,Toll,Lappi_bal,diego,Mantysaari:2017slo}, we find that the cross sections increase with energy and decrease with $Q^2$. Moreover, for a fixed $Q^2$ the increase with $W$ is steeper for heavier vector mesons, which is directly associated with the fact that for these mesons the cross section is dominated by smaller color dipoles and, therefore, the impact of non - linear effects in the QCD dynamics is reduced, independently of the value of the photon virtuality. In contrast, in the case of the light vector mesons, at small photon virtualities the main contribution comes from large size dipoles, with the dynamics being determined by the gluon saturation effects. When the photon virtuality increases, the contribution of smaller dipoles becomes larger, reducing the contribution of non - linear effects.
As a consequence, the cross sections for the light vector meson production become steeper with the energy at larger values of $Q^2$. Such behaviour is observed in the upper panels of Fig. \ref{fig:xseccoh-energy}. Regarding the impact of the modeling of $T_A(\mbox{\boldmath $b$})$, we find that the $nu$ and $hs$ predictions are almost identical, which is expected, since the coherent cross section probes the average over configurations of the nuclear wave function. Such behaviour is also verified when we consider electron - ion collisions for different nuclei, as presented in Fig. \ref{fig:xseccoh-nuclei}. \begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{049_rho_Q2-05_vs_nucl_coh.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{051_phi_Q2-05_vs_nucl_coh.pdf}}\\% \subfigure{% \includegraphics[width=0.4\textwidth]{053_jpsi_Q2-05_vs_nucl_coh.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{055_upsilon_Q2-05_vs_nucl_coh.pdf}}% \caption{Predictions for the energy dependence of the cross sections for the coherent exclusive vector meson production considering different values of the atomic number and $Q^2 = 0.05$ GeV$^2$. The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:xseccoh-nuclei}% \end{figure} \begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{018_Au_rho_vs_Q2_inc.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{020_Au_phi_vs_Q2_inc.pdf}}\\% \subfigure{% \includegraphics[width=0.4\textwidth]{022_Au_jpsi_vs_Q2_inc.pdf}} \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{024_Au_upsilon_vs_Q2_inc.pdf}} \caption{Predictions for incoherent exclusive vector meson production considering a Gold target as a function of energy $W$ for different values of $Q^2$.
The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:xsecinc-energy}% \end{figure} \begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{050_rho_Q2-05_vs_nucl_inc.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{052_phi_Q2-05_vs_nucl_inc.pdf}}\\% \subfigure{% \includegraphics[width=0.4\textwidth]{054_jpsi_Q2-05_vs_nucl_inc.pdf}} \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{056_upsilon_Q2-05_vs_nucl_inc.pdf}}% \caption{Predictions for the energy dependence of the cross sections for the incoherent exclusive vector meson production considering different values of the atomic number and $Q^2 = 0.05$ GeV$^2$. The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:xsecinc-nuclei}% \end{figure} In Figs. \ref{fig:xsecinc-energy} and \ref{fig:xsecinc-nuclei} we present our predictions for the exclusive vector meson production in incoherent interactions. Similarly to the coherent case, the cross sections decrease with $Q^2$ and increase with the energy, with the increase being dependent on the vector meson considered. However, for the incoherent production, the predictions are sensitive to the description of the nuclear profile. We have that the $hs$ model implies larger values for the incoherent cross sections, with the enhancement in comparison to the $nu$ model being present for all values of $Q^2$ and atomic number. In order to quantify this impact and reduce the systematic uncertainty in our calculations, we will estimate the ratio between the incoherent and coherent cross sections. Our predictions for the energy, $Q^2$ and atomic number dependencies of this ratio are presented in Figs. \ref{fig:ratio-energy} and \ref{fig:ratio-nuclei}. We have that both models predict that the ratio decreases at smaller values of the photon virtuality and larger nuclei. 
The difference between the $nu$ and $hs$ predictions is larger when the photon virtuality is increased, especially for lighter vector mesons. Such a result is associated with the fact that $x \propto Q^2 + M^2$ for a fixed energy. Consequently, at larger $Q^2$ we are probing larger values of $x$, where the number of hot - spots is smaller, which implies a larger variance in the event - by - event configurations. For heavy mesons, the predictions become sensitive to $Q^2$ only when $Q^2 \gg M^2$. The dependence of the ratio on $A$ is connected to the fact that lighter nuclei have a smaller number of nucleons along the longitudinal coordinate $z$, which leaves more inter-nucleon (and inter-hot-spot) space and, consequently, increases the variance between configurations. Finally, in contrast to the $nu$ model, the $hs$ model predicts that the ratio is strongly dependent on the energy, with the difference between the predictions being smaller at larger energies. This behaviour is directly associated with the energy dependence of the number of hot - spots. As the energy $W$ increases, and $x$ consequently decreases, the number of hot spots grows, as given by Eq. \eqref{eq:Nhs}. The growing number of hot spots tends to fill up the nucleon, and the two models approach each other. Our results indicate that future experimental analyses of this ratio can be useful to constrain the description of the nuclear profile and, in particular, to probe the presence of hot-spots inside the nucleons. 
\begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{089_Au_rho_vs_Q2_ratio.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{090_Au_phi_vs_Q2_ratio.pdf}}\\ \subfigure{% \includegraphics[width=0.4\textwidth]{091_Au_jpsi_vs_Q2_ratio.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{092_Au_upsilon_vs_Q2_ratio.pdf}}\\ \caption{Predictions for the ratio between the incoherent and coherent cross sections as a function of energy $W$ for different values of $Q^2$ considering $eAu$ collisions. The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:ratio-energy}% \end{figure} \begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{105_rho_Q2-05_vs_nucl_ratio.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{106_phi_Q2-05_vs_nucl_ratio.pdf}} \\ \subfigure{% \includegraphics[width=0.4\textwidth]{107_jpsi_Q2-05_vs_nucl_ratio.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{108_upsilon_Q2-05_vs_nucl_ratio.pdf}}% \caption{Predictions for the ratio between the incoherent and coherent cross sections as a function of energy $W$ for different atomic nuclei and $Q^2 = 0.05$ GeV$^2$. 
The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:ratio-nuclei}% \end{figure} \begin{figure}% \centering \subfigure{% \includegraphics[width=0.4\textwidth]{202_Au_rho_Q2-05_W-100_t.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{235_Au_rho_Q2-05_W-1000_t.pdf}}\\ \subfigure{% \includegraphics[width=0.4\textwidth]{211_Au_phi_Q2-05_W-100_t.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{250_Au_phi_Q2-05_W-1000_t.pdf}}\\ \subfigure{% \includegraphics[width=0.4\textwidth]{220_Au_jpsi_Q2-05_W-100_t.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{241_Au_jpsi_Q2-05_W-1000_t.pdf}}\\ \subfigure{% \includegraphics[width=0.4\textwidth]{228_Au_upsilon_Q2-05_W-100_t.pdf}}% \qquad \subfigure{% \includegraphics[width=0.4\textwidth]{205_Au_upsilon_Q2-05_W-1000_t.pdf}} \caption{Predictions for the $t$ - dependence of the coherent and incoherent cross sections for $\gamma^*Au$ interactions at $W=100$~GeV (left) and $W=1000$~GeV (right). The solid (dashed) lines correspond to the predictions of the $hs$ ($nu$) model for the nuclear profile.} \label{fig:tdist2}% \end{figure} Finally, in Fig. \ref{fig:tdist2} we present our predictions for the $t$ - distributions considering the exclusive vector meson production in coherent and incoherent interactions. We assume $Q^2 = 0.05$ GeV$^2$ and consider $W = 100$ and 1000 GeV. The results are presented in the left and right panels, respectively. The incoherent (coherent) predictions are represented by solid (dashed) lines. The coherent cross sections clearly exhibit the typical diffractive pattern and are characterized by a sharp forward diffraction peak. In contrast, the incoherent cross sections are characterized by a $t$ - dependence similar to that observed in the vector meson $\rho$ production off free nucleons. 
The incoherent processes dominate at large - $|t|$ and the coherent ones at small values of the momentum transfer. Such behaviour is expected: as the momentum kick given to the nucleus increases, the probability that it breaks up becomes larger. As a consequence, the vector meson production at large - $|t|$ is dominated by incoherent processes. The results presented in Fig. \ref{fig:tdist2} are in accordance with those presented previously in Refs. \cite{Cepila:2017nef,Mantysaari:2017dwh} for the $J/\Psi$ case, but are here extended to different vector mesons and energies. The difference between the incoherent cross sections for the $hs$ and $nu$ models increases with increasing momentum transfer $t$. An increasing difference can also be observed for the coherent cross section at large $t$; however, it plays a very small role in the integrated cross section because of the small absolute contributions in that region. Comparing the energies $W=100$~GeV (left panels) and 1000~GeV (right panels), we do not observe a significant difference, apart from the larger gap between the $nu$ and $hs$ models for the coherent cross section at large $t$. \section{Conclusions} \label{sec:sum} The study of exclusive processes in deep inelastic scattering (DIS) electron - nucleus collisions probes the QCD dynamics at high energies, driven by the gluon content of the nucleus, which is strongly subject to non-linear effects (parton saturation). Our goal in this paper was to present a comprehensive analysis of the energy, virtuality, nuclear mass number and transverse momentum dependencies of the cross sections for vector meson production in the kinematical range which could be accessed in future electron - ion colliders. In particular, our focus was on the incoherent vector meson production, which is sensitive to fluctuations in the transverse density profile of the target. 
In this study, we have considered two models for the profile functions, with one of them assuming that the nucleons have subnucleonic degrees of freedom, denoted hot - spots. Our results demonstrate that the impact of the hot - spots is larger for larger virtualities and lighter nuclei. In particular, future analyses of the ratio between the incoherent and coherent cross sections and of the momentum transfer distributions can be useful to constrain the description of the hadronic structure of the nucleus. \section{Acknowledgement} The authors are deeply grateful to J. G. Contreras and J. D. Tapia Takaki for very useful discussions. VPG is grateful to the members of the Department of Physics and Astronomy of the University of Kansas for the warm hospitality during the initial phase of this study. JC has been supported by the grant 17-04505S of the Czech Science Foundation (GACR). VPG was partially financed by the Brazilian funding agencies CNPq, FAPERGS and INCT-FNA (process number 464898/2014-5). MK was supported by the Conicyt Fondecyt grant Postdoctorado N.3180085 (Chile) and by the grant LTC17038 of the Ministry of Education, Youth and Sports of the Czech Republic. Access to computing and storage facilities of the National Grid Infrastructure MetaCentrum provided under the programme CESNET LM2015042 of the Czech Republic is greatly appreciated. \bibliographystyle{utphys} \addcontentsline{toc}{section}{References}
\section{\label{sec:level1}Introduction} The forward and backward hopping rates of a random walk on a one-dimensional lattice are linked to the average velocity ($v$) and diffusion coefficient ($D$). The latter macroscopic parameters can be estimated from the trajectory of the random walk, i.e., locations on the lattice measured with a regular time interval. Fitting of the mean square displacement (MSD) vs time lag data with $ MSD=\sigma^2+2D\tau +(v\tau)^2$, where $\sigma$ is the localization noise, has been widely used to estimate the macroscopic parameters. However, it is known that there are practical issues such as the optimization of the range of the time lags and the influence of motion blur when this approach is applied to the single particle dynamics of a biological molecule \cite{Michalet:2010hd, Berglund:2010ff, Michalet:2012do}. Recently, a novel approach based on the covariance between displacements at adjacent time points (covariance-based estimator, CVE) has been proposed as a simpler, more accurate and robust alternative for estimating the diffusion coefficient of an unbiased random walk \cite{Vestergaard:2016ko,Vestergaard:2015jx,Vestergaard:2014bk}. Here, we extend the CVE-based approach to the random walk on a two-dimensional (2D) lattice, in which a particle takes stochastic hops to the eight surrounding sites with distinct rates. We start with the well-known relations linking the velocity and the diffusion coefficient to the 1D hopping rates, and with the CVE of the diffusion coefficient in this case. We then show that, for a 2D random walk, the x- and y-velocities and diffusion coefficients plus four (higher-order) co-moments of the observed two-dimensional displacement series are linked to the eight hopping rates. The procedure for calculating the CVEs for the macroscopic parameters, i.e., the co-moments of the 2D displacements, is provided. 
This allows us to infer the eight hopping rates from the trajectories of a 2D random walk even with temporal and spatial resolutions at which individual hopping events cannot be captured. \section{\label{sec:level2}Results} \subsection{\label{sec:level2a}CVE-based approach for 1D diffusion with drift} Here we consider a potentially biased random walk $X(t)$ on a 1D lattice with grid size $a$ (Fig.~\ref{fig:stepping}a). Let $k_{+1}$ and $k_{-1}$ be the stochastic forward and backward hopping rates, respectively (we can assume $k_{+1}>k_{-1}>0$ without loss of generality). $X(t)$ is the sum of $x_i$, the displacements in a small fraction of time $\delta t = t/n$, i.e., $ X(t) = x_1 + x_2 + \ldots +x_n $. Since $x_i$ are indistinguishable but independent of each other, the ensemble averages of $x_i$ satisfy \begin{eqnarray*} \langle x_1 \rangle = \langle x_2 \rangle = \ldots = \langle x_n \rangle \equiv \langle x \rangle \\ \langle x_1^2 \rangle = \langle x_2^2 \rangle = \ldots = \langle x_n^2 \rangle \equiv \langle x^2 \rangle \end{eqnarray*} where $\langle x \rangle$ and $\langle x^2 \rangle$ are the mean displacement and the mean square displacement of $X(t)$ in $\delta t$, respectively. In general, $\langle x_i^2 \rangle \neq \langle x_i \rangle^2 $, while for $i \neq j$, $\langle x_ix_j \rangle = \langle x_i \rangle \langle x_j \rangle $. The ensemble average of $X(t)$ is \begin{eqnarray} \langle X(t) \rangle &=& \langle x_1 + x_2 + \ldots +x_n \rangle = n\langle x \rangle \nonumber \\ &=& n\bigl[ (+a)\times k_{+1}\delta t + (-a)\times k_{-1}\delta t\bigr] = a(k_{+1} - k_{-1}) \cdot n\delta t \nonumber \\ &=& a(k_{+1} - k_{-1})t \nonumber \end{eqnarray} \begin{figure*}[htb!] \includegraphics[width=0.6\textwidth, angle=0]{stepping.pdf} \caption{\label{fig:stepping} Hopping on 1D (a) and 2D (b) lattices. \newline The gray dot represents the current position on the lattice. 
It takes a stochastic hop to the 2 and 8 nearby sites at the indicated rates ($k_{+1}$ and $k_{-1}$ on the 1D lattice (a) and $k_1 \sim k_8 $ on the 2D lattice (b)). $k_F=k_1+k_2+k_3$, $k_B=k_6+k_7+k_8$, $k_L=k_1+k_4+k_6$, and $k_R=k_3+k_5+k_8$ are the rates of hopping towards forward, backward, left and right, respectively.} \end{figure*} With $\langle X(t)^2 \rangle = \langle (x_1 + x_2 + \ldots +x_n)^2 \rangle = \langle \sum_i x_i^2 + \sum_j \sum_{i, i \neq j} x_i x_j \rangle = n \langle x^2 \rangle + n(n-1) \langle x \rangle^2$, the variance of $X(t)$ is \begin{eqnarray} \mathrm{var} (X(t)) &=&\langle (X(t)-\langle X(t) \rangle)^2 \rangle =\langle X(t)^2 \rangle - \langle X(t) \rangle ^2 \nonumber \\ &=& n \langle x^2 \rangle + n(n-1) \langle x \rangle^2 - (n\langle x \rangle)^2 = n \langle x^2 \rangle - n \langle x \rangle^2 \nonumber \\ &=& n\bigl[ (+a)^2\times k_{+1}\delta t + (-a)^2\times k_{-1}\delta t\bigr] - n \bigl[ (+a)\times k_{+1}\delta t + (-a)\times k_{-1}\delta t\bigr]^2 \nonumber \\ &=& a^2(k_{+1} + k_{-1}) \cdot n\delta t - a^2(k_{+1} - k_{-1})^2 (n\delta t)\delta t \nonumber \\ &\rightarrow & a^2(k_{+1} + k_{-1})t \ \ (\delta t \rightarrow 0)\nonumber \end{eqnarray} Thus, the velocity of the constant drift and the diffusion coefficient, $v$ and $D$, respectively, are linked to the hopping rates \begin{eqnarray} v = a (k_{+1} - k_{-1}) \\ 2D = a^2 (k_{+1} + k_{-1}) \end{eqnarray} As is well known, this indicates that we can infer the stochastic hopping rates by determining the macroscopic parameters, $v$ and $D$. Hereafter, for simplicity, we omit $a$ by assuming that $X(t)$ is a dimensionless value measured with $a$ as the unit, i.e., the physical location on the lattice is $X(t)\cdot a$. Thus, \begin{eqnarray} v = k_{+1} - k_{-1} \\ 2D = k_{+1} + k_{-1} \end{eqnarray} In typical single molecule/particle observations, we can't determine the true coordinate $X(t)$ as a continuous function. 
We can only measure the positions at discrete time points $t=0, \Delta t, 2\Delta t, \dots , n \Delta t$ and the measured values $X_0, X_1, X_2, \dots X_n $ suffer from motion blur due to movement during image capture and from the error in determining the position of the molecule/particle by image analysis. Under this circumstance, $X_k$ is related to the true $X(t)$ as \begin{equation} X_k = \int_0^{\Delta t} s(t) X(t + (k-1)\Delta t) dt + \varepsilon_k \end{equation} where $s(t)$ defines the state of the shutter ($s(t)=0$ means closed shutter and $s(t)>0$ means open shutter, for normalization $\int_0^{\Delta t} s(t) dt =1$) and $\varepsilon_k$ the Gaussian error in localization by image analysis ($\langle \varepsilon_i\rangle =0$ and $\langle \varepsilon_i^2\rangle =\sigma^2$ represents the precision of the measurement when the target is immobile) \cite{Berglund:2010ff}. The traditional approaches to determine $v$ and $D$, for example, by fitting a quadratic curve to the mean square displacement data, suffer from complications arising from non-zero $s(t)$ and $\varepsilon$. However, for unbiased diffusion (i.e. $v=0$), it has recently been shown \cite{Vestergaard:2014bk, Vestergaard:2015jx, Vestergaard:2016ko} that a combination of the adjacent displacements $\Delta X_k = X_{k+1} - X_k$ and $\Delta X_{k+1} = X_{k+2} - X_{k+1}$ cancels out the terms containing $s(t)$ and $\varepsilon$ and results in a simple relation \begin{equation} \langle \Delta X_k^2 \rangle + 2 \langle \Delta X_{k+1}\cdot \Delta X_k \rangle = 2D \Delta t. \end{equation} $D$ calculated with this relation provides a more reliable estimator of the diffusion coefficient. The above relation doesn't hold in the presence of a bias in the hopping rates. 
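Before treating the biased case, the unbiased estimator above can be checked numerically. The sketch below is illustrative and not from the paper: all parameter values are arbitrary, and for a continuous-time hopper observed at interval $\Delta t$ the frame displacements are exactly independent differences of two Poisson hop counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustration parameters (not from the paper): a symmetric walk
# with k_{+1} = k_{-1} = k, so v = 0 and 2D = 2k (lattice unit a = 1).
k = 5.0            # hopping rate per unit time in each direction
dt = 0.1           # frame interval; several hops may occur within one frame
n_frames = 200_000

# Frame displacements of a continuous-time hopper: independent differences
# of two Poisson hop counts.
dX = rng.poisson(k * dt, n_frames) - rng.poisson(k * dt, n_frames)

# CVE:  <dX_k^2> + 2 <dX_{k+1} dX_k>  =  2 D dt
D_cve = (np.mean(dX**2) + 2.0 * np.mean(dX[1:] * dX[:-1])) / (2.0 * dt)
print(D_cve)  # close to D = (k_{+1} + k_{-1}) / 2 = 5.0
```

In this noise-free simulation the covariance term merely averages to zero; its role is to cancel the contributions of motion blur and localization error when real data are analysed.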
However, if we consider $Z(t) = X(t) - vt$, the deviation from the constant drift, and its observed counterpart $Z_k = X_k- v\cdot k\Delta t$, we obtain $\langle Z_k \rangle = 0$, and thus the series of its displacements $\Delta Z_k = Z_{k+1} - Z_k = \Delta X_k - v\Delta t$ satisfy \begin{equation} \langle \Delta Z_k^2 \rangle + 2 \langle \Delta Z_{k+1}\cdot \Delta Z_k \rangle = 2D \Delta t. \end{equation} Since $\langle (X(t) - \langle X(t) \rangle)^2 \rangle = \langle (Z(t) + vt - \langle Z(t) +vt\rangle)^2 \rangle = \langle (Z(t) - \langle Z(t) \rangle)^2 \rangle$, the value of $D$ determined with this formula gives the diffusion coefficient of the biased random walk $X(t)$. \subsection{\label{sec:level2b} Biased random walk on a 2D lattice: hopping rates and co-moments of the displacements} Here we discuss a biased random walk $(X(t), Y(t))$ on a 2D lattice measured with the x- and y-grid sizes $a$ and $b$, respectively. Hopping can occur to the eight surrounding sites with the distinct rates $k_1, k_2, k_3, k_4, k_5, k_6, k_7, k_8$. Let the velocities and diffusion coefficients of the movement along the grid axes be $v_x$ and $v_y$, and $D_x$ and $D_y$, respectively. Then \begin{eqnarray*} \langle X(t) \rangle = v_x t = (k_F-k_B) t = (k_1 + k_2 + k_3 - k_6 - k_7 - k_8) t \\ \langle ( X(t) - \langle X(t) \rangle)^2 \rangle = 2 D_x t = (k_F+k_B) t = (k_1 + k_2 + k_3 + k_6 + k_7 + k_8) t\\ \langle Y(t) \rangle = v_y t = (k_L-k_R) t= (k_1 + k_4 + k_6 - k_3 - k_5 - k_8) t\\ \langle ( Y(t) - \langle Y(t) \rangle)^2 \rangle = 2 D_y t = (k_L+k_R) t= (k_1 + k_4 + k_6 + k_3 + k_5 + k_8) t \end{eqnarray*} as we discussed in the above section. Here, we consider the covariance between $X(t)$ and $Y(t)$. 
\begin{eqnarray*} && \mathrm{cov}(X, Y) = \langle (X - \langle X \rangle)(Y - \langle Y \rangle) \rangle = \langle XY \rangle - \langle X\rangle \langle Y \rangle \\ &=& \langle (x_1+x_2+ \ldots + x_n)(y_1+y_2+ \ldots +y_n) \rangle - \langle x_1 + x_2 + \ldots + x_n \rangle\langle y_1 + y_2 + \ldots + y_n \rangle \\ &=& \langle \sum_{i=1}^n x_i y_i + \sum_{i\neq j}x_i y_j \rangle - n\langle x \rangle \cdot n \langle y \rangle \\ &=& n \langle xy \rangle + n(n-1) \langle x\rangle \langle y \rangle - n^2 \langle x \rangle \langle y \rangle \\ &=& n \langle xy \rangle - n \langle x\rangle \langle y \rangle \\ &=& n \cdot \bigl[ (+1)(+1)\cdot k_1\delta t + (-1)(+1)\cdot k_3\delta t + (+1)(-1)\cdot k_6\delta t + (-1)(-1)\cdot k_8\delta t \bigr] \\&&\ \ \ \ - n \cdot (k_F - k_B)\delta t \cdot (k_L - k_R)\delta t \\ &=& (k_1-k_3 -k_6+k_8) n\delta t - (k_F - k_B) (k_L - k_R) n\delta t \cdot \delta t \\ &\rightarrow& (k_1-k_3 -k_6+k_8)t\ \ (\delta t \rightarrow 0) \end{eqnarray*} We move on to higher-order moments. 
Since \begin{eqnarray} \langle X^2 Y \rangle &=& \Bigl\langle \sum_k \sum_j \sum_i x_i x_j y_k \Bigr\rangle \nonumber \\ &=& n \langle x^2 y \rangle +n(n-1) \langle x^2\rangle \langle y \rangle + 2n(n-1)\langle x \rangle \langle xy \rangle + n(n-1)(n-2) \langle x \rangle ^2 \langle y \rangle \nonumber , \end{eqnarray} we get \begin{eqnarray*} && \bigl\langle (X - \langle X \rangle)^2( Y - \langle Y \rangle) \bigr\rangle = \langle X^2 Y \rangle - \langle X^2 \rangle \langle Y \rangle -2 \langle X \rangle (\langle XY \rangle - \langle X \rangle \langle Y \rangle ) \\ &=& n \langle x^2 y \rangle - n \langle x^2 \rangle \langle y \rangle - 2n \langle x\rangle \langle xy \rangle +2n \langle x \rangle^2 \langle y \rangle \\ & =& (k_1 - k_3 +k_6 - k_8) n\delta t - \Bigl[(k_F + k_B)(k_L - k_R) + 2(k_F - k_B)(k_1 - k_3 -k_6 + k_8)\Bigr] n \delta t \cdot \delta t \\ && \ \ \ \ \ \ \ \ \ \ +2(k_F - k_B)^2 (k_L - k_R) n\delta t \cdot \delta t^2 \\ & \rightarrow & (k_1 - k_3 +k_6 - k_8)t\ \ (\delta t \rightarrow 0). 
\end{eqnarray*} Similarly, \begin{equation*} \bigl\langle (X - \langle X \rangle)( Y - \langle Y \rangle)^2 \bigr\rangle \rightarrow (k_1+k_3 - k_6 -k_8) t \ \ (\delta t \rightarrow 0) \end{equation*} Finally, using \begin{eqnarray*} && \langle X^2 Y^2 \rangle = \Bigl \langle \sum_l \sum_k \sum_j \sum_i x_i x_j y_k y_l \Bigr \rangle \\ &=& n\langle x^2 y^2 \rangle + 2n(n-1) \langle x \rangle \langle x y^2 \rangle + 2n(n-1) \langle y \rangle \langle x^2 y \rangle + 2n(n-1)\langle xy\rangle^2 \nonumber \\ &&\ \ \ \ + n(n-1)\langle x^2 \rangle \langle y^2 \rangle + n(n-1)(n-2) \langle x^2 \rangle \langle y \rangle^2 +4n(n-1)(n-2) \langle xy \rangle \langle x\rangle \langle y \rangle \nonumber \\ &&\ \ \ \ \ \ \ \ \ + n(n-1)(n-2)\langle x \rangle^2 \langle y^2 \rangle + n(n-1)(n-2)(n-3) \langle x \rangle^2 \langle y \rangle^2 \nonumber \end{eqnarray*} we get \begin{eqnarray*} &&\Bigl\langle (X- \langle X \rangle)^2 (Y- \langle Y \rangle)^2 \Bigr\rangle\\ &=& \langle X^2 Y^2 \rangle - 2\langle X^2 Y \rangle \langle Y \rangle -2\langle X Y^2 \rangle \langle X \rangle + \langle X^2 \rangle \langle Y \rangle^2 + 4\langle XY \rangle\langle X \rangle\langle Y \rangle +\langle Y^2 \rangle \langle X \rangle^2 -3 \langle X\rangle^2 \langle Y\rangle^2 \\ &=& n \langle x^2 y^2 \rangle + 2n(n-1) \langle xy \rangle ^2 + n(n-1)\langle x^2\rangle \langle y^2 \rangle \\ &&\ \ \ \ \ \ \ - 2n \bigl[ \langle x^2 y \rangle \langle y \rangle+\langle xy^2 \rangle \langle x \rangle \bigr] - n(n-2) \bigl[\langle x^2\rangle \langle y \rangle^2 +4 \langle xy \rangle \langle x \rangle \langle y \rangle + \langle y^2\rangle \langle x \rangle^2 -3\langle x \rangle^2 \langle y \rangle^2\bigr] \\ &\rightarrow & (k_1+k_3+k_6+k_8) t + 2(At)^2 + 2D_x t \cdot 2D_y t \ \ (\delta t \rightarrow 0) \end{eqnarray*} In summary, we have obtained a set of formulae that relate macroscopic observations to the microscopic hopping rates: \begin{eqnarray} \label{def_pars1} \langle X \rangle &=& v_x t \\ 
\label{def_pars2} \langle Y \rangle &=& v_y t \\ \label{def_pars3} \langle (X - \langle X \rangle)^2 \rangle &=& 2D_x t \\ \label{def_pars4} \langle (Y - \langle Y \rangle)^2 \rangle &=& 2D_y t \\ \label{def_pars5} \langle (X - \langle X \rangle)(Y - \langle Y \rangle) \rangle &=& A t \\ \label{def_pars6} \langle (X - \langle X \rangle)^2(Y - \langle Y \rangle) \rangle &=& B t \\ \label{def_pars7} \langle (X - \langle X \rangle)(Y - \langle Y \rangle)^2 \rangle &=& C t \\ \label{def_pars8} \langle (X - \langle X \rangle)^2(Y - \langle Y \rangle)^2 \rangle \nonumber - 2 \langle (X - \langle X \rangle)(Y - \langle Y \rangle) \rangle^2\ \ \ \ \ \ \ \ &&\\ - \langle (X - \langle X \rangle)^2 \rangle \langle (Y - \langle Y \rangle)^2 \rangle &=& E t \end{eqnarray} where \begin{equation} \label{pars_rates_relation} \begin{pmatrix} v_x \\ v_y \\ 2D_x \\ 2D_y \\ A \\ B \\ C \\ E \\ \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & -1 & -1 & -1 \\ 1 & 0 & -1 & 1 & -1 & 1 & 0 & -1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 & 0 & -1 & 0 & 1 \\ 1 & 0 & -1 & 0 & 0 & 1 & 0 & -1 \\ 1 & 0 & 1 & 0 & 0 & -1 & 0 & -1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \\ k_5 \\ k_6 \\ k_7 \\ k_8 \\ \end{pmatrix} \end{equation} \begin{figure*}[htb!] \includegraphics[width=0.8\textwidth, angle=0]{rates_pars.pdf} \caption{\label{fig:rates_pars} Visual presentation of the relationships between the macroscopic parameters ($v_x, v_y, D_x, D_y, A, B, C\ \mathrm{and\ } E$), which are the proportional coefficients of the time development of $\langle X \rangle \mathrm{\ and\ } \langle Y \rangle$ ($v_x$ and $v_y$) or the moments of the drift-adjusted displacements, $Z = X -v_x t, W=Y-v_y t$ ($D_x, D_y, A, B, C\ \mathrm{and\ } E$)(\ref{def_pars1} to \ref{def_pars8}), and the microscopic hopping rates ($k_1, k_2, \ldots, k_8$) on a 2D lattice. 
The red and blue colors on the grid indicate the signs (positive and negative, respectively) of the rates of hopping from the current position (gray) to the corresponding sites in assembling the macroscopic parameters (\ref{pars_rates_relation}).} \end{figure*} \subsection{\label{sec:level2c}Covariance-based estimator for $A$} Based on formula \ref{def_pars1} and \ref{def_pars2}, the velocities of constant drift, $v_x$ and $v_y$, can be estimated with \begin{eqnarray} \label{cve_vx} \langle \Delta X \rangle = v_x \Delta t \\ \label{cve_vy} \langle \Delta Y \rangle = v_y \Delta t \end{eqnarray} As discussed in section A, drift-adjusted displacements $\Delta Z_k = \Delta X_k - v_x \Delta t$ and $\Delta W_k = \Delta Y_k - v_y \Delta t$ based on $Z(t)=X(t) - v_x t$ and $W(t)=Y(t) - v_y t$ give the diffusion coefficients along x- and y-axes, $D_x$ and $D_y$, respectively. \begin{eqnarray} \label{cve_2Dx} \langle \Delta Z_k^2 \rangle + 2\langle \Delta Z_{k+1} \Delta Z_k \rangle = 2D_x \Delta t \\ \label{cve_2Dy} \langle \Delta W_k^2 \rangle + 2\langle \Delta W_{k+1} \Delta W_k \rangle = 2D_y \Delta t \end{eqnarray} Here we derive analogous covariance-based estimators of $A$, $B$, $C$, and $E$. Considering the symmetry between $Z(t)$ and $W(t)$, we guess the formula for $A$ as \begin{equation*} \sum_{\substack{\alpha, \beta=0\ \mathrm{or}\ 1 \\ \mathrm{not\ } \alpha=\beta=1}} \langle \Delta Z_{k+\alpha} \Delta W_{k+\beta}\rangle = \langle \Delta Z_k \Delta W_k \rangle + \langle \Delta Z_{k+1} \Delta W_k\rangle + \langle \Delta Z_k \Delta W_{k+1}\rangle = A \Delta t \end{equation*} We shall now prove this. 
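As a numerical sanity check of this guess before the analytic proof, the sketch below (illustrative only; the hopping rates, frame interval, and sample size are arbitrary and not from the paper) simulates exact frame increments of the 2D jump process and compares the estimator with $A = k_1 - k_3 - k_6 + k_8$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hopping rates k_1..k_8 (arbitrary) and the lattice
# displacements of the eight target sites: x = +1 for k_1,k_2,k_3 and -1 for
# k_6,k_7,k_8; y = +1 for k_1,k_4,k_6 and -1 for k_3,k_5,k_8.
rates = np.array([3.0, 1.0, 0.5, 2.0, 1.5, 0.8, 1.2, 1.9])
dx = np.array([+1, +1, +1, 0, 0, -1, -1, -1])
dy = np.array([+1, 0, -1, +1, -1, +1, 0, -1])

dt = 0.05
n_frames = 200_000

# Frame increments of the continuous-time jump process: the number of hops
# of each type within a frame is an independent Poisson count.
counts = rng.poisson(rates * dt, size=(n_frames, 8))
dX, dY = counts @ dx, counts @ dy

# Drift-adjusted displacement series (here the exact drift is known).
vx, vy = rates @ dx, rates @ dy
dZ, dW = dX - vx * dt, dY - vy * dt

# Guessed CVE:  <dZ_k dW_k> + <dZ_{k+1} dW_k> + <dZ_k dW_{k+1}>  =  A dt
A_cve = (np.mean(dZ * dW) + np.mean(dZ[1:] * dW[:-1])
         + np.mean(dZ[:-1] * dW[1:])) / dt

A_true = rates @ (dx * dy)  # k_1 - k_3 - k_6 + k_8
print(A_cve, A_true)        # the two values should agree closely
```

In this simulation, which has neither motion blur nor localization noise, the two lagged terms average to zero; as in the 1D case, they are needed to cancel the blur and noise contributions in real measurements.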
With \begin{eqnarray} \Delta Z_{k+\alpha} &=& \int_0^{\Delta t} s(t) \Bigl[ Z(t+(k+\alpha)\Delta t) - Z(t+(k+\alpha-1)\Delta t) \Bigr] dt +(\varepsilon_{k+\alpha+1} - \varepsilon_{k+\alpha}) \nonumber \\ \Delta W_{k+\beta} &=& \int_0^{\Delta t} s(t') \Bigl[ W(t'+(k+\beta)\Delta t) - W(t'+(k+\beta-1)\Delta t) \nonumber \Bigr] dt' +(\varepsilon'_{k+\beta+1} - \varepsilon'_{k+\beta}) \end{eqnarray} \begin{eqnarray*} &&\langle \Delta Z_{k+\alpha} \Delta W_{k+\beta} \rangle \\ &=& \int_0^{\Delta t} \int_0^{\Delta t} s(t)s(t') \Bigl \langle \Bigl( Z(t+(k+\alpha)\Delta t) - Z(t+(k+\alpha-1)\Delta t)\Bigr) \\ &&\ \ \times \Bigl( W(t'+(k+\beta)\Delta t) - W(t'+(k+\beta-1)\Delta t) \Bigr) \Bigr \rangle dt dt' +\langle (\varepsilon_{k+\alpha+1} - \varepsilon_{k+\alpha})(\varepsilon'_{k+\beta+1} - \varepsilon'_{k+\beta}) \rangle \end{eqnarray*} Since $(Z(t), W(t))$ is a random walk that fulfils $\langle Z(t)W(t) \rangle =At$, $\langle Z(t)W(t') \rangle =A\cdot \mathrm{min}(t, t')$. Using this, the factor to be integrated in the first term of $\langle \Delta Z_{k+\alpha} \Delta W_{k+\beta} \rangle $ is evaluated to be \begin{eqnarray*} &&\Bigl \langle \Bigl( Z(t+(k+\alpha)\Delta t) - Z(t+(k+\alpha-1)\Delta t)\Bigr) \times \Bigl( W(t'+(k+\beta)\Delta t) - W(t'+(k+\beta-1)\Delta t) \Bigr) \Bigr \rangle \\ &=& \bigl \langle Z(t+(k+\alpha)\Delta t) W(t'+(k+\beta)\Delta t) \bigr \rangle - \bigl \langle Z(t+(k+\alpha)\Delta t) W(t'+(k+\beta-1)\Delta t) \bigr \rangle\\ && - \bigl \langle Z(t+(k+\alpha-1)\Delta t) W(t'+(k+\beta)\Delta t) \bigr \rangle + \bigl \langle Z(t+(k+\alpha-1)\Delta t) W(t'+(k+\beta-1)\Delta t) \bigr \rangle \\ &=& A \Bigl[\{ k\Delta t + \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t)\} - \{k\Delta t + \mathrm{min}(t + \alpha \Delta t, t'+ (\beta-1) \Delta t)\} \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \{k\Delta t +\mathrm{min}(t + (\alpha-1) \Delta t, t'+ \beta \Delta t)\} + \{(k-1)\Delta t + \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t) \} 
\Bigr]\\ &=& A \Bigl[ \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t) - \mathrm{min}(t + \alpha \Delta t, t'+ (\beta-1) \Delta t) \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \mathrm{min}(t + (\alpha-1) \Delta t, t'+ \beta \Delta t) + \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t) -\Delta t \Bigr] \end{eqnarray*} Here, considering $ 0 \leq t \leq \Delta t, 0 \leq t' \leq \Delta t $, the factor within the above brackets \begin{eqnarray*} g(\alpha, \beta, t, t') &=& \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t) - \mathrm{min}(t + \alpha \Delta t, t'+ (\beta-1) \Delta t) \\ &&\ \ \ \ \ \ \ \ \ \ - \mathrm{min}(t + (\alpha-1) \Delta t, t'+ \beta \Delta t) + \mathrm{min}(t + \alpha \Delta t, t'+\beta \Delta t) -\Delta t \end{eqnarray*} is calculated to be \begin{eqnarray*} g(0, 0, t, t^{\prime}) &=& \Delta t - \bigl(t + t^{\prime} - 2\cdot \mathrm{min}(t, t^{\prime}) \bigr)\\ g(1, 0, t, t^{\prime}) &=& t^{\prime} - \mathrm{min}(t, t^{\prime}) \\ g(0, 1, t, t^{\prime}) &=& t - \mathrm{min}(t, t^{\prime}) \end{eqnarray*} Thus, $g(0, 0, t, t') + g(1, 0, t, t') + g(0, 1, t, t') = \Delta t$. The x- and y-localization errors at the same time point are not independent. 
With Kronecker's delta $\delta (\alpha, \beta) = 1\ (\alpha=\beta)\ \mathrm{or}\ 0\ (\alpha \neq \beta)$ , \begin{eqnarray*} &&\Bigl\langle ( \varepsilon_{k+\alpha+1} - \varepsilon_{k+\alpha}) ( \varepsilon'_{k+\beta+1} - \varepsilon'_{k+\beta}) \Bigr\rangle \\ &=& \langle \varepsilon_{k+\alpha+1}\varepsilon'_{k+\beta+1} \rangle - \langle \varepsilon_{k+\alpha+1} \varepsilon'_{k+\beta} \rangle - \langle \varepsilon_{k+\alpha} \varepsilon'_{k+\beta+1}\rangle + \langle \varepsilon_{k+\alpha} \varepsilon'_{k+\beta} \rangle \\ &=& \sigma_{xy}^2 (\delta(\alpha, \beta) - \delta(\alpha+1, \beta) - \delta(\alpha, \beta+1) + \delta(\alpha, \beta))\\ &=& \sigma_{xy}^2 (2 \delta(\alpha, \beta) - \delta(\alpha+1, \beta) - \delta(\alpha, \beta+1)) = \sigma_{xy}^2 \Omega(\alpha, \beta), \end{eqnarray*} where we define $\Omega(\alpha, \beta)=2 \delta(\alpha, \beta) - \delta(\alpha+1, \beta) - \delta(\alpha, \beta+1)$. Thus, $ \Omega(0, 0) + \Omega(0, 1)+ \Omega(1, 0) = (2\cdot 1 - 0 - 0) + ( 2\cdot 0 - 1 - 0) + (2\cdot 0 - 0 - 1) =0$. 
Finally, we get \begin{eqnarray} \label{cve_A} &&\sum_{\substack{\alpha, \beta=0\ \mathrm{or}\ 1 \\ \mathrm{not\ } \alpha=\beta=1}} \langle \Delta Z_{k+\alpha} \Delta W_{k+\beta}\rangle \nonumber \\ &=& \int_0^{\Delta t} \int_0^{\Delta t} s(t)s(t') A \cdot \bigl[g(0, 0, t, t') + g(1, 0, t, t') + g(0, 1, t, t')\bigr] dt dt' \nonumber \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\sigma_{xy}^2 \bigl[\Omega(0, 0) + \Omega(1, 0)+ \Omega(0, 1)\bigr] \nonumber \\ &=& A \Delta t \end{eqnarray} \subsection{\label{sec:level2d}Covariance-based estimators for $B$ and $C$ } The above results imply that for random walks $U(s), V(t),\ldots$, if a function $f(x, y,\ldots )$ exists such that $ f(U(s), V(t), \dots) = K \cdot \mathrm{min}(s, t, \dots)$ for some constant $K$ (for example, for $f(x, y)=\langle xy\rangle$, $f(X(s), X(t))=2D_x \cdot \mathrm{min}(s,t)$, and $f(X(s), Y(t))=A \cdot \mathrm{min}(s, t)$), then the equation below holds \begin{equation} \label{cve_recipe} \sum_{\substack{\alpha, \beta, \ldots =0\ \mathrm{or\ }1 \\ \mathrm{not\ }\alpha=\beta=\ldots = 1}} f(\Delta U_{k+\alpha}, \Delta V_{k+\beta},\dots) = K\Delta t \end{equation} where $ \Delta U_k = U_{k+1} - U_{k}$, $ \Delta V_k = V_{k+1} - V_{k}\ldots$ are the observed displacements (adjusted for the constant drift), providing the general recipe for the CVE for $K$. Below is a proof for the cases of $B$ and $C$. Since $\langle Z(t)^2 W(t) \rangle = B t$, we choose $ f(x, y, z) = \langle xyz \rangle $. 
\begin{eqnarray*} && f(\Delta Z_{k+\alpha}, \Delta Z_{k+\beta}, \Delta W_{k+\gamma} ) = \langle \Delta Z_{k+\alpha} \cdot \Delta Z_{k+\beta} \cdot \Delta W_{k+\gamma} \rangle \\ &=& \Bigl \langle \Bigl(\int_0^{\Delta t} s(t)\Bigl[Z(t + (k + \alpha)\Delta t) - Z(t + (k + \alpha-1)\Delta t)\Bigr] dt +(\varepsilon_{k+\alpha+1}- \varepsilon_{k+\alpha}) \Bigr) \\ &&\ \ \ \ \ \ \ \times \Bigl( \int_0^{\Delta t} s(t')\Bigl[Z(t' + (k + \beta)\Delta t) - Z(t' + (k + \beta-1)\Delta t)\Bigr] dt' +(\varepsilon_{k+\beta+1}- \varepsilon_{k+\beta}) \Bigr) \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \Bigl( \int_0^{\Delta t} s(t'')\Bigl[W(t'' + (k + \gamma)\Delta t) - W(t'' + (k + \gamma-1)\Delta t)\Bigr] dt'' +(\delta_{k+\gamma+1}- \delta_{k+\gamma}) \Bigr) \Bigr \rangle \\ &=& \int_0^{\Delta t} \int_0^{\Delta t} \int_0^{\Delta t} s(t)s(t')s(t'') \\ &&\ \ \ \ \ \ \Bigl \langle \Bigl[Z(t + (k + \alpha)\Delta t) - Z(t + (k + \alpha-1)\Delta t)\Bigr] \times \Bigl[Z(t' + (k + \beta)\Delta t) - Z(t' + (k + \beta-1)\Delta t)\Bigr] \\ &&\ \ \ \ \ \ \ \ \ \ \ \times \Bigl[W(t'' + (k + \gamma)\Delta t) - W(t'' + (k + \gamma-1)\Delta t)\Bigr] \Bigr \rangle dt dt' dt'' \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \Bigl \langle (\varepsilon_{k+\alpha+1}- \varepsilon_{k+\alpha}) (\varepsilon_{k+\beta+1}- \varepsilon_{k+\beta}) (\delta_{k+\gamma+1}- \delta_{k+\gamma}) \Bigr \rangle, \\ \end{eqnarray*} in which we used $ \langle Z(t+(k+\alpha)\Delta t) - Z(t+(k+\alpha-1)\Delta t) \rangle = 0$, etc. and $\langle \varepsilon_{k+\alpha+1}- \varepsilon_{k+\alpha} \rangle = 0$, etc. 
For unbiased random walks $P(s), Q(t), R(u)$ ($\langle P(s) \rangle=\langle Q(t) \rangle=\langle R(u) \rangle = 0$, and their displacements are independent for non-overlapping time sections), if we assume $s<t<u$, \begin{eqnarray} && \langle P(s) Q(t) R(u) \rangle \nonumber = \langle P(s) \rangle \langle (Q(t) - Q(s))(R(t)-R(s)) \rangle + \langle P(s)R(s)\rangle ( \langle Q(t) \rangle - \langle Q(s) \rangle) \nonumber \\ &&\ \ \ + \langle P(s)Q(s)\rangle ( \langle R(u) \rangle - \langle R(s) \rangle) + \langle P(s) Q(s) R(s) \rangle \label{wiener_process} = \langle P(s) Q(s) R(s) \rangle \end{eqnarray} In general, with $\tau = \mathrm{min}(s, t, u)$, $ \langle P(s) Q(t) R(u) \rangle = \langle P(\tau) Q(\tau) R(\tau) \rangle$. Applying this to $Z(t)$ and $W(t)$ that satisfy $\langle Z(t)^2 W(t) \rangle = B t$, we get $ \langle Z(t)Z(t')W(t'') \rangle = B\cdot \mathrm{min} (t, t', t'') $. Using this, we evaluate \begin{eqnarray*} &&\Bigl \langle \Bigl[Z(t + (k + \alpha)\Delta t) - Z(t + (k + \alpha-1)\Delta t)\Bigr] \times \Bigl[Z(t' + (k + \beta)\Delta t) - Z(t' + (k + \beta-1)\Delta t)\Bigr] \\ &&\ \ \ \ \ \ \ \ \ \ \ \times \Bigl[W(t'' + (k + \gamma)\Delta t) - W(t'' + (k + \gamma-1)\Delta t)\Bigr] \Bigr \rangle \\ & =& \bigl \langle Z(t + (k + \alpha)\Delta t) \cdot Z(t' + (k + \beta)\Delta t) \cdot W(t'' + (k + \gamma)\Delta t) \bigr \rangle \\ &&\ \ -\bigl \langle Z(t + (k + \alpha)\Delta t) \cdot Z(t' + (k + \beta)\Delta t) \cdot W(t'' + (k + \gamma-1)\Delta t) \bigr \rangle \\ &&\ \ -\bigl \langle Z(t + (k + \alpha)\Delta t) \cdot Z(t' + (k + \beta-1)\Delta t) \cdot W(t'' + (k + \gamma)\Delta t) \bigr \rangle \\ &&\ \ +\bigl \langle Z(t + (k + \alpha)\Delta t) \cdot Z(t' + (k + \beta-1)\Delta t) \cdot W(t'' + (k + \gamma-1)\Delta t) \bigr \rangle \\ &&\ \ - \bigl \langle Z(t + (k + \alpha-1)\Delta t) \cdot Z(t' + (k + \beta)\Delta t) \cdot W(t'' + (k + \gamma)\Delta t) \bigr \rangle \\ &&\ \ +\bigl \langle Z(t + (k + \alpha-1)\Delta t) \cdot Z(t' + (k +
\beta)\Delta t) \cdot W(t'' + (k + \gamma-1)\Delta t) \bigr \rangle \\ &&\ \ +\bigl \langle Z(t + (k + \alpha-1)\Delta t) \cdot Z(t' + (k + \beta-1)\Delta t) \cdot W(t'' + (k + \gamma)\Delta t) \bigr \rangle \\ &&\ \ - \bigl \langle Z(t + (k + \alpha-1)\Delta t) \cdot Z(t' + (k + \beta-1)\Delta t) \cdot W(t'' + (k + \gamma-1)\Delta t) \bigr \rangle \\ &=& B \cdot g(\alpha, \beta, \gamma, t, t', t''),\\ &&\mathrm{where}\\ &&g(\alpha, \beta, \gamma, t, t', t'') \\ &=& \mathrm{min}\bigl(t+\alpha\Delta t,t'+\beta\Delta t, t''+ \gamma\Delta t\bigr) - \mathrm{min}\bigl(t+\alpha\Delta t,t'+\beta\Delta t, t''+(\gamma-1)\Delta t\bigr)\\ &-& \mathrm{min}\bigl(t+\alpha\Delta t,t'+(\beta-1)\Delta t, t''+\gamma\Delta t\bigr) + \mathrm{min}\bigl(t+\alpha\Delta t,t'+(\beta-1)\Delta t, t''+(\gamma -1)\Delta t\bigr)\\ &-& \mathrm{min}\bigl(t+(\alpha-1)\Delta t,t'+\beta\Delta t, t''+\gamma\Delta t\bigr) + \mathrm{min}\bigl(t+(\alpha-1)\Delta t,t'+\beta\Delta t, t''+(\gamma-1)\Delta t\bigr)\\ &+& \mathrm{min}\bigl(t+(\alpha-1)\Delta t,t'+(\beta-1)\Delta t, t''+\gamma\Delta t\bigr) - \mathrm{min}\bigl(t+(\alpha-1)\Delta t,t'+(\beta-1)\Delta t, t''+(\gamma-1)\Delta t\bigr). \end{eqnarray*} $g(\alpha, \beta, \gamma, t, t', t'') $ can be calculated to be \begin{eqnarray*} g(0, 0, 0, t, t', t'') &=& \Delta t - (t + t' +t'') + \mathrm{min}(t', t'' ) + \mathrm{min}(t, t'' ) + \mathrm{min}(t, t' ) \\ g(1, 0, 0, t, t', t'') &=& \mathrm{min}(t', t'') - \mathrm{min}(t, t', t'')\\ g(0, 1, 0, t, t', t'') &=& \mathrm{min}(t, t'') - \mathrm{min}(t, t', t'') \\ g(0, 0, 1, t, t', t'') &=& \mathrm{min}(t, t') - \mathrm{min}(t, t', t'') \\ g(1, 1, 0, t, t', t'') &=& t'' - \mathrm{min}(t, t'' ) - \mathrm{min}(t', t'') + \mathrm{min}(t, t', t'' ) \\ g(1, 0, 1, t, t', t'') &=& t' - \mathrm{min}(t, t' ) - \mathrm{min}(t', t'') + \mathrm{min}(t, t', t'' ) \\ g(0, 1, 1, t, t', t'') &=& t - \mathrm{min}(t, t' ) - \mathrm{min}(t, t'') + \mathrm{min}(t, t', t'' ) . 
\end{eqnarray*} Thus, \begin{equation*} \sum_{\substack{\alpha, \beta, \gamma=0\ \mathrm{or\ }1 \\ \mathrm{not\ }\alpha=\beta=\gamma= 1}} g(\alpha, \beta, \gamma, t, t', t'') = \Delta t. \end{equation*} The error term \begin{equation*} \Omega(\alpha, \beta, \gamma) = \Bigl\langle ( \varepsilon_{k+\alpha+1} - \varepsilon_{k+\alpha}) ( \varepsilon_{k+\beta+1} - \varepsilon_{k+\beta}) ( \delta_{k+\gamma+1} - \delta_{k+\gamma}) \Bigr\rangle \end{equation*} is calculated to be \begin{eqnarray*} \Omega(0, 0, 0) &=& 0 \\ \Omega(1, 0, 0) &=& \Omega(0, 1, 0) = \Omega(0, 0, 1) = -\langle \varepsilon_k^2 \delta_k \rangle \\ \Omega(0, 1, 1) &=& \Omega(1, 0, 1) = \Omega(1, 1, 0) = \langle \varepsilon_k^2 \delta_k \rangle. \end{eqnarray*} Thus, \begin{equation*} \sum_{\substack{ \alpha, \beta, \gamma=0\ \mathrm{or\ }1\\ \mathrm{not\ }\alpha=\beta=\gamma= 1}} \Omega(\alpha, \beta, \gamma) = 0. \end{equation*} Finally, we obtain \begin{eqnarray} \label{cve_B} &&\sum_{\substack{ \alpha, \beta, \gamma=0\ \mathrm{or\ }1\\ \mathrm{not\ }\alpha=\beta=\gamma= 1}} \langle \Delta Z_{k+\alpha} \cdot \Delta Z_{k+\beta} \cdot \Delta W_{k+\gamma} \rangle \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \int_0^{\Delta t} \int_0^{\Delta t} \int_0^{\Delta t} s(t)s(t')s(t'') B \Delta t\, dt\, dt'\, dt'' = B\Delta t.
\end{eqnarray} Similarly, \begin{equation} \label{cve_C} \sum_{\substack{ \alpha, \beta, \gamma=0\ \mathrm{or\ }1\\ \mathrm{not\ }\alpha=\beta=\gamma= 1}} \langle \Delta Z_{k+\alpha} \cdot \Delta W_{k+\beta} \cdot \Delta W_{k+\gamma} \rangle = C\Delta t \end{equation} \subsection{\label{sec:level2e}Covariance-based estimators for $E$} $Z$ and $W$ are linked to $E$ by \begin{equation*} \langle Z(t)^2 W(t)^2 \rangle - 2\langle Z(t)W(t)\rangle ^2 - \langle Z(t)^2 \rangle \langle W(t)^2\rangle = Et \end{equation*} With $f(x, y, z, w) = \langle xyzw \rangle - \langle xy\rangle \langle zw\rangle - \langle xz\rangle \langle yw\rangle - \langle xw\rangle \langle yz\rangle $, this relation is written \begin{equation*} f(Z(t), Z(t), W(t), W(t)) = Et \end{equation*} Thus, the relation that gives the covariance-based estimator of $E$ is predicted to be \begin{equation} \label{cve_E} \sum_{\substack{\alpha, \beta, \gamma, \lambda =0\ \mathrm{or\ }1 \\ \mathrm{not\ }\alpha=\beta=\gamma=\lambda= 1}} f( \Delta Z_{k+\alpha}, \Delta Z_{k+\beta} , \Delta W_{k+\gamma}, \Delta W_{k+\lambda} ) = E\Delta t \end{equation} This can be proven by some algebra similar to the above proofs for the CVEs of $A$, $B$ and $C$ (see appendix A for details), confirming that the recipe for constructing CVEs (\ref{cve_recipe}) holds for the necessary cases to infer the anisotropic hopping rates based on the relation (\ref{pars_rates_relation}) (Fig. \ref{fig:rates_pars}). 
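The key identity behind these results, $\sum g = \Delta t$, can also be verified numerically; the sketch below (ours, for illustration) builds $g$ from the alternating-sign sum of $\mathrm{min}(\cdot)$ over the eight corner shifts and spot-checks the sum over the seven index combinations:

```python
from itertools import product
from random import Random

def g(a, b, c, t, t1, t2, dt):
    """g(alpha, beta, gamma, t, t', t'') as the alternating-sign sum of
    min(...) over the eight corner shifts (alpha or alpha-1, etc.)."""
    total = 0.0
    for da, db, dc in product((0, -1), repeat=3):
        sign = -1 if (da + db + dc) % 2 else 1  # minus for an odd number of -1 shifts
        total += sign * min(t + (a + da) * dt, t1 + (b + db) * dt, t2 + (c + dc) * dt)
    return total

# Spot-check: the seven terms with (alpha, beta, gamma) != (1, 1, 1)
# sum to dt for any 0 < t, t', t'' < dt.
rng, dt = Random(0), 1.0
checks = []
for _ in range(1000):
    t, t1, t2 = (rng.uniform(0, dt) for _ in range(3))
    s = sum(g(a, b, c, t, t1, t2, dt)
            for a, b, c in product((0, 1), repeat=3) if (a, b, c) != (1, 1, 1))
    checks.append(abs(s - dt) < 1e-9)
```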
\subsection{\label{sec:level2f}Inference of the hopping rates from the 2D trajectories simulated with motion blur and localization error} We now examine the versatility of the covariance-based estimators of $D_x, D_y, A, B, C,\ \mathrm{and}\ E$ (formulas \ref{cve_2Dx}, \ref{cve_2Dy}, \ref{cve_A}, \ref{cve_B}, \ref{cve_C}, and \ref{cve_E}), the coefficients of the time development of the co-moments of the displacements, and ask whether the anisotropic hopping rates can be inferred from the trajectory data of 2D random walks using the relations that link the macroscopic coefficients to the microscopic hopping rates (formula \ref{pars_rates_relation}). Let us first compare two example random walks, RW1 and RW2, generated by Monte Carlo simulation to have the same x- and y-velocities and diffusion coefficients ($v_x, v_y, D_x, D_y$) but different hopping rates (Fig. \ref{fig:trjs_pars} a and b). The velocities, $v_x = 16\ \mathrm{s}^{-1}$ and $v_y = 1\ \mathrm{s}^{-1}$ (hops per second), correspond, for example, to the helical motion of a kinesin-like motor protein with a helical pitch of $\sim$1.7 $\mu\mathrm{m}$ around a microtubule, whose surface presents a 2D lattice of discrete motor-binding sites consisting of 13 parallel protofilaments, i.e., linear arrays of tubulin subunits aligned at 8 nm periodicity \cite{Chretien:1991ct, Wade:1993ib}. To mimic typical image-acquisition conditions with an EM-CCD camera used for single-molecule/particle observation, a series of `true' positions was generated by the Gillespie algorithm \cite{Gillespie:1977ww} (blue lines in Fig.
\ref{fig:trjs_pars} c and d), and the `observed' positions at regular time points (every 0.1 s for 200 time points, corresponding to 20 s of observation) were calculated by averaging the positions during the open shutter (90\% of the cycle) and adding Gaussian noise with a standard deviation of half the lattice spacing (red lines in Fig. \ref{fig:trjs_pars} c and d). \begin{figure*}[htb!] \includegraphics[width=0.85\textwidth, angle=0]{trjs_pars.pdf} \caption{\label{fig:trjs_pars} Monte Carlo simulation of anisotropic 2D random walks observed with motion blur and localization errors.\\ (a, b) Two random walks, RW1 and RW2, with the same x- and y-velocities and diffusion coefficients were generated based on different sets of hopping rates. (c, d) Examples of simulated `true' positions on a 2D lattice (blue) and the `observed' trajectories (red) are shown. (e) Summary of the macroscopic parameters for the simulated RW1 and RW2 (magenta and green, respectively). For each random walk, 50 trajectories, each consisting of 200 observations at 0.1 s intervals, were simulated, and the CVEs of the parameters were calculated for each trajectory (gray dots). Means and standard errors are indicated in magenta and green. Black segments indicate the values expected by the theory (formula (\ref{pars_rates_relation})).} \end{figure*} Fifty trajectories each of RW1 and RW2 were generated, and the macroscopic parameters $v_x, v_y, D_x, D_y, A, B, C,\ \mathrm{and}\ E$ for each trajectory were calculated based on formulas \ref{cve_vx}, \ref{cve_vy}, \ref{cve_2Dx}, \ref{cve_2Dy}, \ref{cve_A}, \ref{cve_B}, \ref{cve_C}, and \ref{cve_E}. As shown in Fig. \ref{fig:trjs_pars} e, the covariance-based estimators calculated from the simulated trajectories (gray dots) were distributed around the theoretical values predicted by formula (\ref{pars_rates_relation}) (solid lines).
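A simulation of this kind can be sketched as follows; this is an illustrative reimplementation with hypothetical helper names and parameters, not the code used to produce the figures:

```python
import random
from bisect import bisect_right

def gillespie_2d(rates, t_total, rng):
    """'True' lattice positions: exponential waiting times with total rate
    sum(rates.values()); hop offsets drawn in proportion to their rates."""
    moves, weights = zip(*rates.items())
    k_tot = sum(weights)
    t, x, y = 0.0, 0, 0
    times, xs, ys = [0.0], [0], [0]
    while True:
        t += rng.expovariate(k_tot)
        if t > t_total:
            return times, xs, ys
        dx, dy = rng.choices(moves, weights=weights)[0]
        x, y = x + dx, y + dy
        times.append(t); xs.append(x); ys.append(y)

def observe(times, xs, ys, dt, shutter=0.9, sigma=0.5, n_sub=20, rng=None):
    """'Observed' positions: average the step-function position over the
    open-shutter part of each frame (motion blur) and add Gaussian
    localization noise with standard deviation sigma (lattice units)."""
    if rng is None:
        rng = random.Random(0)
    def pos(t):
        i = bisect_right(times, t) - 1
        return xs[i], ys[i]
    n_frames = int(times[-1] // dt)
    obs = []
    for k in range(n_frames):
        sub = [pos(k * dt + shutter * dt * (j + 0.5) / n_sub) for j in range(n_sub)]
        mx = sum(p[0] for p in sub) / n_sub + rng.gauss(0, sigma)
        my = sum(p[1] for p in sub) / n_sub + rng.gauss(0, sigma)
        obs.append((mx, my))
    return obs
```

With, e.g., `rates = {(1, 0): 16.0, (0, 1): 1.0}`, this reproduces the drift of the example above; the full simulation uses all eight neighbor offsets with the rates of RW1 or RW2.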
Although the values calculated from individual trajectories spread around the theoretical `true' values due to the intrinsic stochasticity of the random walk, the means over 50 trajectories were within twice the standard error of the corresponding theoretical values. Importantly, as expected, while there was no significant difference between RW1 and RW2 in $v_x, v_y, D_x, \mathrm{and\ } D_y$, a clear difference between them was found in the distribution of $A$. This suggests that our recipe gives reasonable estimates of the coefficients of the temporal development of the co-moments of the x- and y-displacements in a realistic scenario, allowing us to detect a difference between two random walks that have distinct preferences in hopping direction but look exactly the same if we only consider the velocities and diffusion coefficients along the two axes separately. We move on to examine the power of our approach to reveal the hopping behaviors of anisotropic random walks. With known macroscopic parameters $(v_x, v_y, D_x, D_y, A, B, C, E)$, we can calculate $(k_1, k_2, k_3, k_4, k_5, k_6, k_7, k_8)$, the hopping rates to the 8 surrounding sites, by solving formula (\ref{pars_rates_relation}). However, due to the intrinsically stochastic behavior of the random walks, a simple solution might result in negative values of the hopping rates. To avoid this, we performed Bayesian inference with a model in which the observed macroscopic parameters are probabilistic variables distributed around the true values defined by the microscopic hopping rates, which cannot take negative values (Fig. \ref{fig:rates_probs} a).
This was implemented with Stan \cite{Rstan:2020} on R \cite{R_core_team:2019}, with the overall hopping rate $k$ as a positive real number and the hopping preferences $\mathbf{p}=(p_1, p_2, \ldots, p_8)$ as a simplex variable ($0\leq p_i \leq 1, \sum p_i =1$), where the individual hopping rates are $k_i = k \cdot p_i$ with $k=\sum_i k_i$. Uniform priors were used. Fig. \ref{fig:rates_probs} b shows the distributions of the posterior probabilities of the hopping rates, obtained with the fifty sets of macroscopic parameters $(v_x, v_y, D_x, D_y, A, B, C, E)$ for each random walk from Fig. \ref{fig:trjs_pars} e as the data for the Bayesian inference. As expected, RW1 (magenta) and RW2 (green) showed clearly distinct patterns in the hopping preferences, consistent with the theoretical values used to generate the random walks by simulation. For example, the posterior probability of the hopping rate to the forward-left, $k_1$, of RW1 showed a distribution near the theoretical value $0\ \mathrm{s^{-1}}$, while that of RW2 had a peak near $2\ \mathrm{s^{-1}}$. This indicates that our approach can properly infer the microscopic parameters with precision sufficient to distinguish the two example random walks, which would look the same if we only analyzed the movements in the x- and y-directions separately. \begin{figure*}[htb!] \includegraphics[width=0.86\textwidth, angle=0]{rates_probs.pdf} \caption{\label{fig:rates_probs} Inference of the hopping rates and hopping preferences.\newline (a) Procedure of Bayesian inference of the hopping rates and hopping preferences from the simulated trajectories.
Covariance-based estimators of the macroscopic coefficients ($\textbf{D}$) calculated for individual trajectories were used as data to compute the posterior probabilities of the hopping rates ($\textbf{k}$) and hopping preferences ($\textbf{p}$), which are linked to each other via the overall hopping rate ($k$), based on a model in which the observed macroscopic coefficients are probabilistic variables distributed around the theoretical values, $M \textbf{k}$, where $M$ is the matrix that appears in formula \ref{pars_rates_relation}. (b) Posterior probability distributions of the hopping rates (RW1: magenta, RW2: green). Curves represent the results of four independent chains of Bayesian inference. Dashed lines indicate the theoretical values. (c to e) Influences of the data size (c), localization errors (d), and the correlation between the x- and y-localization errors (e). Dots linked with solid lines and shaded regions represent the averages of the means and 95\% credible intervals of 100 independent trials, respectively. Horizontal dashed lines are true values. Vertical dashed lines indicate the parameter values used in the simulation in (b). (f) Loss of estimation accuracy with increased localization errors and its recovery by expansion of the data size (the number of observations per trajectory or the number of trajectories). Means and 95\% credible intervals. } \end{figure*} If we look closer, we realize that the peak positions of the posterior probabilities of the hopping rates do not exactly match the theoretical values. This is likely due to the probabilistic uncertainties in the observed random walks, which are influenced by the length and number of the trajectories and by the precision of the measurements (motion blur and localization errors), in combination with the natural condition that the hopping rates cannot be less than zero.
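To illustrate the non-negativity issue, a simpler alternative to the full Bayesian treatment is a non-negative least-squares fit; the sketch below (our own, with a made-up toy matrix standing in for the matrix of formula (\ref{pars_rates_relation})) uses projected gradient descent:

```python
def nonneg_lsq(M, d, iters=5000):
    """Minimize ||M k - d||^2 subject to k >= 0 by projected gradient descent.
    M: list of rows mapping hopping rates to macroscopic coefficients;
    d: observed coefficients. A stand-in for the paper's Bayesian model."""
    m, n = len(M), len(M[0])
    lr = 1.0 / (2.0 * sum(M[i][j] ** 2 for i in range(m) for j in range(n)))
    k = [0.0] * n
    for _ in range(iters):
        r = [sum(M[i][j] * k[j] for j in range(n)) - d[i] for i in range(m)]
        grad = [2.0 * sum(M[i][j] * r[i] for i in range(m)) for j in range(n)]
        k = [max(0.0, kj - lr * gj) for kj, gj in zip(k, grad)]
    return k

# Toy 3-observation, 2-rate example: the unconstrained least-squares solution
# would make the second rate negative; the constraint clamps it to zero.
k_hat = nonneg_lsq([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, -0.5, 0.5])
```

Unlike the Bayesian model, this yields only a point estimate and no credible intervals, which is why the posterior-based treatment is used in the main analysis.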
To assess the effects of the measurement conditions, the cycle of generating the observed trajectories by simulation, estimating the coefficients for each trajectory, and inferring the rates and preferences of hopping was repeated a hundred times per condition. The means and 95\% credible intervals of the posterior distributions from the 100 independent trials were averaged and are shown in Fig. \ref{fig:rates_probs} c to e and Supplemental Figures. As expected, increasing the size of data by increasing the number of trajectories sharpened the distribution of the posterior probabilities of the rates and preferences of hopping (Fig. \ref{fig:rates_probs} c and Supplementary Figure 1). Importantly, their means, which deviated from the theoretical values when the number of input trajectories was small as mentioned above, asymptotically approached the respective theoretical values as the available trajectories increased. Similar sharpening of the distribution and approach to the theoretical values were observed when the length of each trajectory was increased instead (Supplementary Figure 2). The localization errors do not explicitly appear in the formulas of the CVEs (\ref{cve_vx}, \ref{cve_vy}, \ref{cve_2Dx}, \ref{cve_2Dy}, \ref{cve_A}, \ref{cve_B}, \ref{cve_C}, and \ref{cve_E}) because we took averages of infinitely many combinations of the displacements. With finite displacement data, however, the noise terms are not completely canceled out and affect the estimates. Indeed, increasing the amplitude of the localization errors significantly broadens the posterior probability distributions of the hopping preferences (and the hopping rates) (Fig. \ref{fig:rates_probs} d and Supplementary Figure 3).
For example, with data from 50 trajectories with 200 observations each, although the $p_4$ values of RW1 and RW2 were distinguishable in the presence of localization errors up to half the size of the lattice unit, they became indistinguishable when the amplitude of the errors was increased to the size of the lattice unit (Fig. \ref{fig:rates_probs} d). In contrast, the extent of correlation between the x- and y-measurement errors for each observation had little impact (Fig. \ref{fig:rates_probs} e, Supplementary Figure 4). Interestingly, the perturbation by the localization errors could be overcome by increasing the size of data, either by increasing the number of observations per trajectory or by observing more trajectories (Fig. \ref{fig:rates_probs} f). The difference between RW1 and RW2 that had been obscured by the increased localization errors (1 grid size) became distinguishable again with a 10-fold increase in the amount of data, obtained either by increasing the number of observations made per trajectory or by increasing the number of trajectories observed. Although the hopping rates of RW1 and RW2 were set to have identical theoretical values of $v_x, v_y, D_x, \mathrm{and\ } D_y$, their theoretical values of $A$ were different (0 vs 3 $\mathrm{s^{-1}}$). Thus, we cannot exclude the possibility that the above success in discriminating between the hopping patterns of RW1 and RW2 relied solely on the distinct values of $A$, irrespective of the coefficients for the other co-moments. To test whether our approach can distinguish two random walks that have identical $v_x, v_y, D_x, D_y, \mathrm{and\ }A$, we finally consider another random walk, RW3 (Fig. \ref{fig:rw3} a), whose theoretical $A$, in addition to the theoretical values of $v_x, v_y, D_x, \mathrm{and\ } D_y$, is identical to that of RW1.
As expected, the coefficients $v_x, v_y, D_x, D_y, \mathrm{and\ } A$ calculated from the 50 RW3 trajectories, each with 200 observations, showed distributions indistinguishable from the corresponding ones from the RW1 trajectories (Fig. \ref{fig:rw3} b). In contrast, the distributions of the calculated $C$ and $E$ values of RW3 were distinct from those of RW1. As expected, the posterior probabilities of the hopping rates of RW3 closely reproduced the values set for the simulation of the trajectories, exhibiting differences from the corresponding ones of RW1 ($k_1, k_4, k_6, \mathrm{and\ }k_7$) (Fig. \ref{fig:rw3} c). This suggests that our approach provides reasonable inference of the hopping behaviors under a realistic setting for single-particle observation, even in a case where $B$, $C$, or $E$, the coefficients for the higher-order co-moments of the drift-adjusted x- and y-displacements, are the sole observable clues. \begin{figure*}[htb!] \includegraphics[width=0.9 \textwidth, angle=0]{rw3.pdf} \caption{\label{fig:rw3} Further test of our approach with RW3, a random walk with theoretical values of $v_x, v_y, D_x, D_y, \mathrm{and\ } A$ identical to those of RW1. \newline (a) Hopping rates of random walk RW3, designed to result in the same theoretical values of $v_x, v_y, D_x, D_y, \mathrm{and\ } A$ as those of RW1. (b) Macroscopic coefficients of RW1 and RW3, each calculated from 50 trajectories with 200 observations generated by Monte Carlo simulation. The means and standard errors (magenta and blue, respectively) are shown with the values for individual trajectories (gray dots). Black segments indicate the theoretical values by formula (\ref{pars_rates_relation}). (c) Distributions of the posterior probabilities of the hopping rates inferred with the coefficients in (b). Curves represent the results of four independent chains performed for each random walk.
Dashed vertical lines indicate the theoretical values.} \end{figure*} \section{Discussion} Here we studied the anisotropic random walk on a 2D lattice and derived the relationship between the macroscopic coefficients of the time development of the drift-adjusted displacements and the microscopic hopping rates. We then extended the covariance-based estimators of the 1D diffusion coefficient to the higher-order co-moments of the drift-adjusted displacements and used them to infer the hopping rates from trajectory data affected by motion blur and localization errors. The versatility of this novel approach was evaluated with trajectory data generated by simulation. We demonstrated that our approach can distinguish 2D random walks that have exactly the same x- and y-velocities and diffusion coefficients but distinct hopping patterns. An important advantage of the covariance-based estimators \cite{Vestergaard:2016ko,Vestergaard:2015jx,Vestergaard:2014bk} is that the equations to calculate them do not explicitly contain terms for localization errors or motion blur after taking the average of infinitely many combinations of observed x- and y-displacements. This assumption, of course, does not hold for real-world data of finite size. With a finite size of data, the failure in cancellation of the interfering factors, as well as the intrinsic uncertainty of the random process, results in errors in the estimation of the coefficients that characterize the random walk. Although it had been demonstrated that the CVE-based approach is superior to traditional ones in the case of 1D unbiased diffusion \cite{Vestergaard:2016ko,Vestergaard:2015jx,Vestergaard:2014bk}, it was unclear how robust a 2D version of the CVE-based approach would be, since it involves higher-order terms of the displacements.
However, through the analysis of the simulated model cases of 2D random walks, it was confirmed that the CVE-based approach combined with Bayesian inference can reasonably estimate the anisotropic hopping patterns with a realistic size of data when the measurement error is smaller than the grid size. Even with lower measurement precision, increasing the size of data could restore the accuracy of the estimation. For real-world data, careful optimization will be necessary, considering the rather complicated and intertwined influences of the frequency of image acquisition, exposure time, and the intensity of illumination \cite{Shen:2017dq,Hoze:2017wa,Vestergaard:2016ko,Manzo:2015dc,Chenouard:2014kg}. It has recently been reported that some kinesin-like motor proteins show a helical motion around a microtubule \cite{Bormuth:2012kc, Mitra:2019eq, Bugiel:2018df, Mitra:2018bn}. The tubular structure of a microtubule consists of 13 or 14 protofilaments, i.e., linear arrays of $\alpha$- and $\beta$-tubulin heterodimers with 8 nm periodicity, presenting the motor-binding sites as a 2D lattice on the surface \cite{Chretien:1991ct, Wade:1993ib}. Interestingly, the reported pitches of the helical motion are intermediate between the shortest helical pitch of the lattice due to the staggered alignment of the protofilaments and the longest helical pitch observed in the 14-protofilament tubule. The observed helical motion corresponds to a protofilament switch that occurs once per $\sim$10 forward steps on average, implying stochastic stepping patterns to the neighboring sites. Our approach might be applicable to estimate such patterns based on the observed helical trajectories. On a 3D lattice, there are $26\ (=9+8+9)$ choices for stochastic hopping to a nearby site.
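The count of 26 neighbor sites follows from enumerating all nonzero offsets in $\{-1, 0, 1\}^3$; a one-line check:

```python
from itertools import product

# 26 = 9 + 8 + 9: nine sites in the layer below, eight in the same layer
# (the origin excluded), and nine in the layer above.
offsets = [v for v in product((-1, 0, 1), repeat=3) if v != (0, 0, 0)]
per_layer = {dz: sum(1 for (_, _, z) in offsets if z == dz) for dz in (-1, 0, 1)}
print(len(offsets), per_layer)  # 26 {-1: 9, 0: 8, 1: 9}
```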
The relation between the macroscopic coefficients and the hopping rates for a 3D lattice analogous to the 2D version (\ref{pars_rates_relation}) will contain 26 hopping rates and 26 coefficients for the time-development of the combinations of (higher-order co-)moments of the drift-adjusted displacements up to the 6th order. Whether the recipe to derive the covariance-based estimators of the coefficients for the higher-order co-moments developed here for a 2D lattice (\ref{cve_recipe}) is applicable to a 3D lattice will be a future question. \begin{acknowledgments} The author would like to thank Huong T. Vu (University of Warwick, UK), Matthew Turner (University of Warwick, UK), Junichiro Yajima (University of Tokyo, Japan) and Izumu Mishima (Durham University, UK) for insightful discussions and critical reading of the manuscript. \end{acknowledgments}
\section*{Abstract} In this work, we demonstrate the importance of zero velocity information for \ac{GNSS} based navigation. The effectiveness of using zero velocity information with \ac{ZUPT} for inertial navigation applications has been shown in the literature. Here we leverage this information and add it as a position constraint in a \ac{GNSS} factor graph. We also compare its performance to a \ac{GNSS}/\ac{INS} coupled factor graph. We tested our \ac{ZUPT} aided factor graph method on three datasets and compared it with the GNSS-only factor graph. \section{Introduction} Localization is an integral component of any mobile robot system and plays an important role in many core robotic capabilities, such as motion planning, obstacle avoidance, and mapping. One of the most common methods for obtaining positioning information for robots is \ac{GNSS}~\cite{enge1994global}. However, the availability of this system relies on the observation of multiple satellites~\cite{grovebook}, and it is frequently unable to obtain a precise and robust state estimate in urban and forested areas due to environmental constraints (e.g., poor satellite geometry and multipath effects)~\cite{merry2019smartphone}. An \ac{IMU} can be used in a standalone manner in an \ac{INS} to estimate states but suffers from drift due to the accumulation of errors through the integration of acceleration and angular velocity measurements \cite{kok2017using,chen2018ionet,brossard2020ai,narasimhappa2019mems}. A commonly adopted localization strategy is coupling \ac{GNSS} and \ac{INS}, which combines range measurements from satellites and IMU measurements to calculate states~\cite{zhao2016analysis,li2006low,miller2012sensitivity,hu2015derivative}. This coupling strategy alleviates some of the vulnerabilities that standalone \ac{GNSS} technology faces in urban environments and forested areas~\cite{merry2019smartphone}.
Though Kalman filters have long been the preferred choice for \ac{GNSS} and \ac{GNSS}/\ac{INS} state estimation, factor graphs~\cite{dellaert2017factor} have emerged as an alternative framework for solving the localization problem, largely because of the advent of open-source graph optimization libraries such as GTSAM~\cite{dellaert2012factor} and g2o~\cite{grisetti2011g2o}. Factor graph optimization has some advantages over Kalman filters. For example, it uses multiple iterations to solve all states in batch form instead of the single iteration of a Kalman filter, which is not affected by future measurements (unless smoothing is done). Factor graphs have also been shown to better exploit the time correlation between past and current epochs, which helps in outlier removal~\cite{wen2021factor}. The factor graph framework also makes it easy to add existing and new robust estimation methods~\cite{watson2018evaluation,watson2020robust,pfeifer2018robust}, which helps reduce localization failure due to large noise. In order to provide a cost-effective system, one can utilize pseudo-measurements available under certain conditions from the sensors already on board. For instance, \ac{ZUPT} is commonly used as an aiding process for pedestrian navigation~\cite{kwakkel2008gnss,zhang2017adaptive}. \ac{ZUPT} can bound the velocity error and help to calibrate \ac{IMU} sensor noise~\cite{skog2010zero}. This process helps to reduce the \ac{INS} positioning error growth from cubic to linear, since the error state model accounts for the correlation between the position and velocity errors in the error covariance matrix. Using \ac{ZUPT} in state estimation does not require any dedicated sensor to provide zero velocity information; this information can be obtained from the sensors already on board (e.g., \ac{IMU}, wheel encoders).
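As a concrete example of how such zero velocity information can be detected, a common heuristic thresholds the accelerometer variance and the gyroscope magnitude over a short window; the sketch below is a generic stationary detector with illustrative thresholds, not the exact test used in this work:

```python
from statistics import mean, pvariance

def is_stationary(accel, gyro, acc_var_max=0.05, gyro_mean_max=0.05):
    """Declare zero velocity when, over a short window of IMU samples,
    the variance of the specific-force magnitude (m^2/s^4) and the mean
    angular-rate magnitude (rad/s) both fall below thresholds."""
    a_mag = [(ax * ax + ay * ay + az * az) ** 0.5 for ax, ay, az in accel]
    w_mag = [(gx * gx + gy * gy + gz * gz) ** 0.5 for gx, gy, gz in gyro]
    return pvariance(a_mag) < acc_var_max and mean(w_mag) < gyro_mean_max
```

When `is_stationary(...)` holds, a zero-velocity pseudo-measurement can be added to the estimator.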
\ac{ZUPT} requires stationary conditions, and it can be used as an opportunistic navigational update if a wheeled robot stops for other reasons (e.g., obstacle avoidance, re-planning, waiting for pedestrians, stopping at traffic lights). Also, \ac{ZUPT} can be used to improve \ac{WO}/\ac{INS} proprioceptive localization with periodic stops in \ac{GNSS}-denied~(or degraded), poor lighting/feature areas~\cite{kilic2019improved} and with autonomous stops by deciding when to stop~\cite{kilic2020}. Small wheeled robots have more freedom to stop than autonomous cars, which makes them well suited for utilizing \ac{ZUPT}. This paper offers the following contributions: 1) detailing a \ac{GNSS}/\ac{WO}/\ac{INS} fusion strategy that exploits the zero velocity information for both \ac{GNSS} and \ac{INS} to be used for wheeled robots, 2) validation of the provided method on actual hardware in field tests, with detailed specifications of the implementation and hardware so that the interested reader can easily replicate our work, and 3) making our algorithm publicly available\footnote{\url{https://github.com/wvu-navLab/gnss-ins-zupt}} using open-access data~\cite{vz7z-jc84-20}. The rest of this paper is organized as follows. Section~\ref{method} details and explains the components of the algorithm. Section~\ref{experimental_results} provides the experimental results. Finally, Section~\ref{conclusion} provides a conclusion and insights for future work to improve the approach. \section{Methodology} \label{method} \subsection{GNSS Factor Graph} \label{FG} The state estimation framework used for processing \ac{GNSS} data is a factor graph~\cite{dellaert2017factor}. Factor graphs solve the \ac{MAP} estimation problem by maximizing the product of factors, which are probabilistic constraints between states at different time steps and between states and measurements at the current time steps.
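Under the Gaussian assumption, this maximization reduces to a least-squares problem; a scalar toy chain with one prior factor, one between-factor, and one measurement factor (our illustration, not the paper's GTSAM implementation) can be solved in closed form:

```python
def solve_toy_chain(prior, delta, z):
    """MAP for two scalar states with unit covariances: minimize
    (x0 - prior)^2 + (x1 - x0 - delta)^2 + (x1 - z)^2.
    Normal equations: [[2, -1], [-1, 2]] @ [x0, x1] = [prior - delta, delta + z]."""
    b0, b1 = prior - delta, delta + z
    det = 3.0  # determinant of [[2, -1], [-1, 2]]
    return (2 * b0 + b1) / det, (b0 + 2 * b1) / det

# Consistent data: prior 0, odometry step 1, measurement 1 -> states (0, 1).
x0, x1 = solve_toy_chain(0.0, 1.0, 1.0)
```

With conflicting data (e.g., a measurement of 1.3 instead of 1), the estimate spreads the discrepancy across all three factors instead of trusting any single one.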
A depiction of an example factor graph is given in Fig.~\ref{fig:factorgraph}. \begin{figure}[h] \centering \includegraphics[width=0.85\linewidth]{figures/factorgraph_L2.png} \caption{GNSS factor graph example. } \label{fig:factorgraph} \end{figure} A detailed description of creating factors with \ac{GNSS} observations is presented in~\cite{watson2018evaluation}. The states estimated in the GNSS factor graph are the position, tropospheric delay, clock bias, and phase bias. In Fig.~\ref{fig:factorgraph}, $\psi$ represents any constraint that might exist between the states and the measurements. For example, $\psi^p$ represents any prior belief on each state, $\psi^b$ is the motion constraint between two consecutive states along the trajectory, and $\psi^m$ is the measurement constraint between a state and the measurements perceived from that state. Under the Gaussian assumption, the maximum-product-of-factors problem converts to a non-linear least squares problem. Following GTSAM nomenclature, we refer to $\psi^b$ as the between-factor. The cost form of the factors is shown in Eq.~\ref{eq:factorcost}. Each component of the sum is a Mahalanobis cost: the left one is the prior factor cost, the middle one is the between-factor cost ($\Delta$ is the measured displacement), and the right one is the GNSS factor cost. 
\begin{equation} \hat{X}=\underset{x}{\operatorname{argmax}}\left\{\prod_{i=1}^{I} \psi_i^p \prod_{j=1}^{J} \psi_{j-1,j}^b \prod_{k=1}^{K} \psi_k^m\right\} \end{equation} \begin{equation} \label{eq:factorcost} \hat{X}=\underset{x}{\operatorname{argmin}}\left[\sum_{i=1}^{I}\left\|x_{o}-x_{i}\right\|_{\Sigma}^{2}+\sum_{j=1}^{J}\left\|(x_{j}-x_{j-1})-\Delta\right\|_{\Lambda}^{2}+\sum_{k=1}^{K}\left\|z_{k}-h_{k}\left(x_{k}\right)\right\|_{\Xi}^{2}\right] \end{equation} As new \ac{GNSS} measurements are received, new factors built from pseudo-range and phase measurements are added to the existing factor graph, which is solved incrementally at every epoch using the Incremental Smoothing and Mapping~(iSAM2) formulation~\cite{kaess2012isam2}. iSAM2 converts the factor graph into a Bayes tree data structure, whose vertices represent cliques, enabling efficient inference. Thus, adding a new constraint to the factor graph only results in the re-linearization of the states within the affected clique. \subsection{INS/WO Sensor Fusion (CoreNav)} \label{corenav} The proprioceptive localization framework adopted here is the method of~\cite{kilic2019improved}: an error-state \ac{EKF} that uses an \ac{IMU} for state propagation and \ac{WO} for state correction. It also uses \ac{ZUPT} and non-holonomic constraints as pseudo-measurements to reduce the solution drift. In this \ac{INS}/\ac{WO} sensor fusion framework, it is assumed that a small four-wheeled robot uses a set of sensors including a \ac{GNSS} receiver and antenna, an \ac{IMU}, and wheel encoders. The architecture of the \ac{INS}/\ac{WO} sensor fusion is provided in Fig.~\ref{fig:corenav}. This sensor fusion method is referred to as CoreNav in this paper. \begin{figure}[h] \centering \includegraphics[width=0.78\linewidth]{figures/cnalgo.png} \caption{The architecture of the \ac{INS}/\ac{WO} sensor fusion (CoreNav) framework. 
The state error is propagated with the \ac{IMU} and corrected with \ac{WO} measurements in an error-state \ac{EKF} framework. \ac{ZUPT} and the non-holonomic update provide valuable information to calibrate the \ac{IMU} biases. The \ac{ZUPT} is triggered under stationary conditions, whereas the non-holonomic update is performed at every state. } \label{fig:corenav} \end{figure} Following the formulation based on~\cite{grovebook,kilic2019improved}, the error state vector can be constructed in a local navigation frame, \begin{equation} \label{errorstate} \mathbf{x}_{err}^{n}={\biggl( \delta\mathbf{ \Psi}_{nb}^{n} \ \ \mathbf{\delta v}_{eb}^{n} \ \ \delta\mathbf{p}_{b} \ \ \mathbf{b}_a \ \ \mathbf{b}_g \biggr )}^{\mathbf{T}} \end{equation} where $\delta\mathbf{ \Psi}_{nb}^{n}$ is the attitude error, $\mathbf{\delta v}_{eb}^{n}$ is the velocity error, $\delta\mathbf{p}_{b}$ is the position error, $\mathbf{b}_a$ is the \ac{IMU} acceleration bias, and $\mathbf{b}_g$ is the \ac{IMU} gyroscope bias. For the sake of completeness, the measurement innovations for \ac{ZUPT}, the non-holonomic update, and \ac{WO} are provided in this section. The detailed derivations and implementation details can be found in~\cite{grovebook,kilic2019improved}. \subsubsection{Zero Velocity Update} \ac{ZUPT} can be used in three ways for wheeled robots: 1) passive \ac{ZUPT}, as an opportunistic navigational update when the rover needs to stop for various reasons, 2) active \ac{ZUPT}, with periodic stopping~\cite{kilic2019improved}, and 3) active \ac{ZUPT}, by deciding autonomously when to stop~\cite{kilic2020}. 
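Whichever mode is used, the filter must first detect that the rover is actually stationary before triggering a \ac{ZUPT}. A minimal, threshold-based sketch of such a detector is shown below; the window length and thresholds are illustrative assumptions, and practical detectors use statistical tests on \ac{IMU} and wheel-encoder data rather than hard thresholds.

```python
def is_stationary(accel_win, gyro_win, g=9.81, acc_tol=0.05, gyro_tol=0.01):
    """Toy zero-velocity detector: declare the platform stationary when the
    specific-force magnitude stays near gravity and the angular-rate magnitude
    stays near zero over a short window of IMU samples.

    acc_tol (m/s^2) and gyro_tol (rad/s) are illustrative thresholds, not
    values from the paper.
    """
    def mag(v):
        return sum(c * c for c in v) ** 0.5

    acc_ok = all(abs(mag(a) - g) < acc_tol for a in accel_win)
    gyro_ok = all(mag(w) < gyro_tol for w in gyro_win)
    return acc_ok and gyro_ok

still = [(0.0, 0.0, 9.81)] * 10       # specific force ~ gravity only
turning = [(0.3, 0.0, 9.7)] * 10      # lateral acceleration present
quiet_gyro = [(0.0, 0.0, 0.001)] * 10
print(is_stationary(still, quiet_gyro))    # True
print(is_stationary(turning, quiet_gyro))  # False
```

Once the detector fires, the filter applies the \ac{ZUPT} innovation given below.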
Assuming a stationary rover has zero velocity and zero angular rate (i.e., constant heading), the measurement innovation for a \ac{ZUPT} can be given as \begin{equation} \delta\mathbf{z}_{Z,k}^{n}=[-\mathbf{\hat{v}}_{eb}^{n} \ \ -\mathbf{\hat{\omega}}_{ib}^{b}]_{k}^{T} \end{equation} where $\delta\mathbf{z}_{Z,k}^{n}$ is the measurement innovation, $\mathbf{\hat{v}}_{eb}^{n}$ is the estimated velocity vector, and $\mathbf{\hat{\omega}}_{ib}^{b}$ is the estimated angular rate (which, under the stationary assumption, reflects the gyro bias). \subsubsection{Non-Holonomic Update} If a wheeled robot cannot move sideways with its wheels, then the rover's velocity is zero along the rotation axis of its wheels, assuming it does not experience side-slip. Additionally, a wheeled robot cannot move perpendicular to the traversal surface. For this reason, these motion constraints can be used as a non-holonomic pseudo-measurement update in the filter. The non-holonomic measurement innovation can be given as \begin{equation} \delta\mathbf{z}_{RC,k}^{n}=-\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} (\mathbf{C}_{n}^{b} \mathbf{v}_{eb}^{n} -\mathbf{\omega}_{ib}^{b} \times \mathbf{L}_{rb}^{b})_{k} \end{equation} where $\mathbf{C}_{n}^{b}$ is the coordinate transformation matrix from the locally level frame to the body frame, $\mathbf{L}_{rb}^{b}$ is the rear-wheel-to-body lever arm, and $\mathbf{\omega}_{ib}^{b}$ is the angular rate measurement. \subsubsection{Wheel Odometry} In the CoreNav method, the \ac{WO} measurements are used as an aiding state correction in the error-state \ac{EKF}. 
Following the same procedure as described in \cite{kilic2019improved}, the measurement innovation for the wheel odometry estimation can be given as \begin{equation} \delta \mathbf{z}_{O}=\begin{pmatrix} {\tilde{v}}_{lon,O}- {\tilde{v}}_{lon,i}\\ -{\tilde{v}}_{lat,i}\\-{\tilde{v}}_{ver,i}\\ \tilde{\dot{{\psi}}}_{nb,O}-\tilde{\dot{{\psi}}}_{nb,i}\overline{\cos{\hat{\theta}_{nb}}} \end{pmatrix} \end{equation} \begin{equation} \begin{aligned} \begin{bmatrix} {\tilde{v}}_{lat,i}\\ {\tilde{v}}_{lon,i}\\ {\tilde{v}}_{ver,i} \end{bmatrix}=&\frac{1}{\tau_0}\int_{t-\tau_{0}}^{t} \mathbf{I}_{3}\big(\mathbf{C}_{n}^{b}(\tau) \mathbf{v}_{eb}^{n}(\tau) +\mathbf{\omega}_{eb}^{b}(\tau) \times \mathbf{L}_{br}^{b}\big)d\tau \end{aligned} \end{equation} where $\tilde{v}_{lon}$, $\tilde{v}_{lat}$, and $\tilde{v}_{ver}$ are the estimated longitudinal, lateral, and vertical wheel speeds, respectively. The subscript $i$ denotes the \ac{INS} estimation, and $O$ denotes the \ac{WO} measurements. \subsection{GNSS/WO/INS Integration with ZUPT} \label{fusion} This section presents two factor graph strategies: 1) utilizing only the zero velocity information in the factor graph, and 2) leveraging the CoreNav position estimates (which are improved with ZUPT and non-holonomic constraints) in the factor graph. We used the GTSAM library for graph optimization and the standard squared loss function~(L2). The first method does not use \ac{INS} state estimates in the factor graph directly. It only uses zero velocity signals when the \ac{INS} detects that the rover has stopped. To do this, a high-certainty zero-displacement between-factor $\psi_{b,j}$, referred to in Fig.~\ref{fig:zuptfactorgraph} as $\psi^z$, is added between two adjacent state vertices that are known to be stationary. 
A low-certainty zero-displacement between-factor is also added between non-\ac{ZUPT} vertices, where the rover is moving, to represent process noise and let the \ac{ZUPT} information propagate throughout the graph and improve the overall solution. The $\psi^z$ factors are also referred to as the ZUPT factors in the results section. An illustration of this first method, called L2-ZUPT, is given in Fig.~\ref{fig:zuptfactorgraph}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/factorgraph_zuptL2.PNG} \caption{GNSS factor graph with ZUPT factors, L2-ZUPT } \label{fig:zuptfactorgraph} \end{figure} The second method couples the \ac{GNSS} and \ac{INS}/\ac{WO} parts more directly. Here, instead of utilizing the zero velocity information directly in the factor graph, the positioning solution from the CoreNav error-state \ac{EKF} sensor fusion method following \cite{kilic2019improved} (see Fig.~\ref{fig:corenav}) is used. Note that the CoreNav method uses the zero velocity information as a \ac{ZUPT} and also includes non-holonomic constraints to further improve the localization solution. To couple the \ac{EKF} estimates with the GNSS factor graph, the positioning estimates obtained from the CoreNav method are added as between-factors $\psi^{CN}$ among all state vertices. The $\psi^{CN}$ factors are referred to as the CoreNav factors. A depiction of the second method, called L2-CN, is given in Fig.~\ref{fig:cnfactorgraph}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/factorgraph_CNL3.png} \caption{GNSS factor graph with CoreNav factors, L2-CN. } \label{fig:cnfactorgraph} \end{figure} \section{Experimental Results} \label{experimental_results} \subsection{Robot System Design} \label{system_design} To collect the data for validating the method, the Pathfinder testbed rover~\cite{kilic2019improved} is used. 
The robot is a skid-steered, four-wheeled testing platform. A depiction of the rover is shown in Fig.~\ref{fig:pf1}. The localization sensor suite includes a Novatel pinwheel L1/L2 GNSS antenna~\cite{novatel2} and receiver~\cite{novatel1}, an Analog Devices ADIS-16495 \ac{IMU}~\cite{adis}, and quadrature wheel encoders. The robot is also equipped with an Intel RealSense T265 camera~\cite{intelt265}; however, this sensor is not used in the localization solution. The computer is an Intel NUC Board NUC7i7DN~\cite{intelNUC}, which hosts an i7-8650U processor. The software runs on the robot under the Robot Operating System (ROS)~\cite{quigley2009ros}. \begin{figure}[htb!] \centering \includegraphics[width=0.80\linewidth]{figures/pathfinder4.png} \caption{A small wheeled rover named Pathfinder, in an off-road environment located at Point Marion, PA, where the data were obtained. } \label{fig:pf1} \end{figure} \subsection{Field Test Datasets} To evaluate the methods, we use three datasets from \cite{vz7z-jc84-20}, referred to in the paper as Test~1 (\texttt{ashpile\_mars\_analog1.zip}), Test~2 (\texttt{ashpile\_mars\_analog2.zip}), and Test~3 (\texttt{ashpile\_mars\_analog3.zip}). We also generate a noisy version of these datasets by adding simulated multipath noise based on the elevation of the satellites, following the early-minus-late discriminator formulation in~\cite{liu2009tracking}. The noise is randomly added to 2\% of the data in each dataset to create the noisy versions. Histogram plots of the simulated range and phase noises are shown in Fig.~\ref{fig:multipath}. The rover stops 9 times in Test 1, 19 times in Test 2, and 20 times in Test 3 to obtain the zero velocity information. The distances covered in Tests 1, 2, and 3 are 671~m, 652~m, and 663~m, respectively. The reference position solutions are obtained by integer-ambiguity-fixed carrier-phase differential GPS (DGPS) processed with RTKLIB~\cite{rtklib}. 
\begin{figure}[h] \centering \includegraphics[width=0.68\linewidth]{figures/multipath2.png} \caption{Histogram of simulated multipath phase and range errors~\cite{liu2009tracking} } \label{fig:multipath} \end{figure} \subsection{Evaluation and Discussion} The three methods compared here are the standard \ac{GNSS} factor graph (Fig.~\ref{fig:factorgraph}), the GNSS factor graph with ZUPT factors (Fig.~\ref{fig:zuptfactorgraph}), and the GNSS factor graph with CoreNav factors (Fig.~\ref{fig:cnfactorgraph}). These are referred to in the figures and tables as L2, L2-ZUPT, and L2-CN, respectively. A comparison of the methods for the clean version of the datasets is given in Table~\ref{tab:results1}. The comparison for the noisy datasets is shown in Table~\ref{tab:results2}. Figures~\ref{fig:resultENUclean} and~\ref{fig:resultENU} show the time variation of the errors in the East-North-Up frame. The larger peaks of the standard factor graph results in Fig.~\ref{fig:resultENU} are due to the large simulated multipath noise (see Fig.~\ref{fig:multipath}). 
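For reference, the error metrics reported in the tables (RMSE per axis, 3-D RMSE, and the maximum 3-D error norm) can be computed from the per-epoch East-North-Up errors as in this short sketch; the sample values are illustrative, not data from the tests.

```python
def error_stats(errors_enu):
    """Per-axis RMSE, 3-D RMSE, and maximum 3-D error norm from a list of
    per-epoch (East, North, Up) position errors in metres."""
    n = len(errors_enu)
    # RMSE along each axis separately (the E/N/U columns of the tables)
    rmse_axis = tuple(
        (sum(err[i] ** 2 for err in errors_enu) / n) ** 0.5 for i in range(3)
    )
    # 3-D RMSE and maximum norm (the "3D" columns of the tables)
    norms_sq = [e * e + v * v + u * u for e, v, u in errors_enu]
    rmse_3d = (sum(norms_sq) / n) ** 0.5
    max_norm = max(s ** 0.5 for s in norms_sq)
    return rmse_axis, rmse_3d, max_norm

errs = [(0.3, 0.4, 0.0), (0.0, 0.0, 1.0)]  # two illustrative epochs
axis, rmse3d, mx = error_stats(errs)
print(round(rmse3d, 3), round(mx, 3))  # 0.791 1.0
```

Note that the squared 3-D RMSE equals the sum of the squared per-axis RMSEs, which makes the two views of the tables consistent.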
\vspace{10 pt} \begin{table} [htb] \centering \footnotesize \begin{threeparttable} \caption{Comparison of the methods for the clean datasets.} \label{tab:results1} \centering \begin{tabular}{@{}llccccc@{}} \hline \multicolumn{2}{c}{Clean Dataset} & \multicolumn{4}{c}{RMSE (m)} & \multicolumn{1}{c}{Max Norm Error (m)} \\ & & \scriptsize{E}& \scriptsize{N}& \scriptsize{U}& \scriptsize{3D}& \scriptsize{3D}\\ \hline\hline & L2 &0.62 &0.87 &3.82 &3.97 &5.65\\ Test 1 & L2-ZUPT &0.62 &\colorbox{green}{0.34} &\colorbox{green}{2.09} &\colorbox{green}{2.20} &\colorbox{green}{3.06} \\ & L2-CN &0.62 &0.78 &3.34 &3.49 &4.97 \\ \hline & L2 &0.49 &0.31 &1.30 &1.43 &4.31 \\ Test 2 & L2-ZUPT &\colorbox{green}{0.46} &0.93 &1.24 &1.62 &\colorbox{green}{2.62} \\ & L2-CN &0.51 &\colorbox{green}{0.24} &\colorbox{green}{0.90} &\colorbox{green}{1.06} &3.08 \\ \hline & L2 &0.16 &0.92 &3.86 &3.97 &6.14 \\ Test 3 & L2-ZUPT &0.16 &0.96 &\colorbox{green}{3.55} &\colorbox{green}{3.69} &\colorbox{green}{4.47} \\ & L2-CN &0.16 &0.92 &3.58 &3.70 &5.65 \\ \hline\hline \end{tabular} \begin{tablenotes} \item[*] The best results marked up with green boxes \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[htb!] \centering \includegraphics[width=0.96\linewidth]{figures/figall2clean.png} \caption{Time variation of the errors (m) in the East-North-Up frame for clean datasets. 
} \label{fig:resultENUclean} \end{figure} \begin{table} [htb] \centering \footnotesize \begin{threeparttable} \caption{Comparison of the methods for noisy datasets.} \label{tab:results2} \centering \begin{tabular}{@{}llccccc@{}} \hline \multicolumn{2}{c}{Noisy Dataset} & \multicolumn{4}{c}{RMSE (m)} & \multicolumn{1}{c}{Max Norm Error (m)} \\ & & \scriptsize{E}& \scriptsize{N}& \scriptsize{U}& \scriptsize{3D}& \scriptsize{3D}\\ \hline\hline & L2 &0.88 &0.87 &2.85 &3.10 &67.81 \\ Test 1 & L2-ZUPT &0.88 &0.78 &\colorbox{green}{2.27} &\colorbox{green}{2.55} &22.39 \\ & L2-CN &\colorbox{green}{0.67} &\colorbox{green}{0.77} &2.72 &2.90 &\colorbox{green}{7.29} \\ \hline & L2 &0.53 &0.40 &1.77 &1.90 &16.12 \\ Test 2 & L2-ZUPT &0.59 &0.33 &1.36 &1.52 &4.85 \\ & L2-CN &0.53 &\colorbox{green}{0.24} &\colorbox{green}{1.01} &\colorbox{green}{1.16} &\colorbox{green}{3.41} \\ \hline & L2 &0.23 &0.90 &3.59 &3.71 &7.52 \\ Test 3 & L2-ZUPT &\colorbox{green}{0.17} &0.93 &3.65 &3.77 &\colorbox{green}{5.08} \\ & L2-CN &0.23 &\colorbox{green}{0.88} &\colorbox{green}{3.55} &\colorbox{green}{3.66} &6.33 \\ \hline\hline \end{tabular} \end{threeparttable} \end{table} \begin{figure}[htb!] \centering \includegraphics[width=0.98\linewidth]{figures/figall2.png} \caption{Time variation of the errors (m) in the East-North-Up frame for noisy datasets. } \label{fig:resultENU} \end{figure} \newpage L2-ZUPT and L2-CN have better performance than the GNSS-only factor graph, L2, for clean and noisy datasets 1 and 2. We see comparable performances from all three methods for dataset 3. L2-ZUPT is found to perform best for clean data, and L2-CN performs best for noisy data. The effect of leveraging zero velocity information can be seen in the noisy data results, where the larger errors in the standard factor graph are dampened by constraining the states during the ZUPTs to be the same since the rover is stationary at that time. 
The better performance of L2-CN on noisy data can be explained by the fact that the factor graph utilizes more information, since it uses the IMU-WO solution from CoreNav. CoreNav also uses additional non-holonomic constraints, which are not used in the L2 and L2-ZUPT methods. The performance gap between L2-ZUPT and L2-CN is affected by the number of stops in the datasets. For example, we see a more significant performance gap between L2-ZUPT and L2-CN in Test 1, which has fewer stops. This gap is smaller in Tests 2 and 3, where the number of stops is larger than in Test 1, which indicates that using only ZUPT factors in the GNSS factor graph can provide localization performance similar to a GNSS/WO/INS coupled solution. \section{Conclusions and Future Work} \label{conclusion} This work presents GNSS/WO/INS fusion strategies with a factor graph that exploit the zero velocity information to improve the positioning solution for wheeled robots. The presented methods are compared with a standard GNSS factor graph. This comparison shows that the zero velocity information is a valuable constraint for further improving the positioning solution in a GNSS-degraded environment. Moreover, we observed that using only zero velocity information in a GNSS factor graph can provide positioning performance comparable to using a GNSS/INS/WO coupled position solution. To evaluate our \ac{ZUPT}-aided factor graph methods, we used open-access datasets~\cite{vz7z-jc84-20}. We also made our software implementation publicly available. Future work will involve testing with longer datasets with real multipath noise and investigating the connection with robust filtering algorithms. We also experimented with the incremental covariance estimation method discussed in \cite{watson2020robust}. The ZUPT factors are expected to help learn the measurement noise covariance model better. 
However, the joint parameter tuning of incremental covariance estimation and ZUPT proved challenging, and we could not obtain consistent results with these datasets, which could be attributed to the shorter length of the trajectories. \section*{Acknowledgements} This work was supported in part by NASA EPSCoR Research Cooperative Agreement WV-80NSSC17M0053, and in part by the Benjamin M. Statler Fellowship. \bibliographystyle{IEEEtran}
\section{Concluding remarks}\label{sec:conclusions} Over the last twenty years, subgraph counting has been under increased focus in the network science community, especially since the introduction of network motifs~\cite{milo2002network}, which cemented its status as an important tool for network analysis, and of graphlets~\cite{przulj2007}, which are now established measures for network alignment. In this survey we explored existing practical methods to solve the subgraph counting problem from three perspectives: (i) algorithms that efficiently perform exact counting, which is an intrinsically computationally hard task, (ii) algorithms that approximate subgraph frequencies, making the process faster while accounting for the accuracy of the estimation, and (iii) algorithms that efficiently exploit parallel architectures despite the unbalanced nature of subgraph counting. We showed that all three of these categories are still attracting new work, with new methods still emerging in an attempt to improve on previous work. The aim of this work was precisely to describe and classify the major algorithmic ideas in each of these three categories, offering a structured and thorough review of how they work and what their advantages and disadvantages are. Moreover, we provided more than two hundred references that allow further exploration of any aspects that might be of particular interest to the reader, including direct links to the existing practical implementations of the methods. We feel that this survey provides valuable insight both from a more practical point of view, offering solutions and application ideas for those who view subgraph counting as a tool for network analysis, and from a more methodological angle, being not only a very strong starting point for new researchers joining the area, but also a very useful and comprehensive summary of recent research results for more established researchers. 
\section{Approximate Counting}\label{sec:sampling} Despite the significant advances made towards faster subgraph counting algorithms, current state-of-the-art algorithms that determine exact frequencies still take hours, if not days, on very large networks. With the ever-increasing amount of data our society generates (e.g., in big social networks such as Twitter and Facebook, new members/nodes join every second), it is infeasible to count all possible subgraphs. To address this problem, subgraph counting research drifted towards approximating these frequencies, trading accuracy for speed. Additionally, in some applications, approximate subgraph counts might be sufficient~\cite{kashtan2004efficient,ribeiro2010estimation}. Throughout this section we make a distinction between algorithms that estimate (i) subgraph frequencies or (ii) subgraph concentrations. Estimating subgraph frequencies is harder, since the algorithm needs to know the magnitude of the values, whereas to estimate concentrations the algorithm only needs to know the relative proportions of each subgraph in the network. Obtaining subgraph concentrations from subgraph frequencies is trivial, but the reverse requires extra computational work. We further split the approximate counting algorithms into five broad categories: \textbf{randomised enumeration}, \textbf{enumerate-generalize}, \textbf{path sampling}, \textbf{random walk}, and \textbf{colour coding}. In each subsection, we provide an algorithmic overview of each strategy and delve into the individual algorithms that implement it and how they differ among themselves. Tables~\ref{tab:approx_algs} and~\ref{tab:restaccess_algs} summarize the algorithms we discuss in this section. We split the methods into algorithms where the full topology is assumed (Table~\ref{tab:approx_algs}) and algorithms tailored to networks with restricted access (Table~\ref{tab:restaccess_algs}). 
Although some algorithms from each category may work in the other setting, they excel for the task they were designed for and the distinction should be made clear. The tables summarize our proposed taxonomy composed of five aspects, ordered by their publication year: (i) the type of {\bf output} (frequencies or concentrations), (ii) \textbf{$k$-restrictions} (does the method only work for certain subgraph sizes?), (iii) \textbf{directed} (is the method applicable to directed graphs?), (iv) the {\bf strategy} it employs, according to our taxonomy, and (v) if code is \textbf{publicly available}. Note that some authors do not have executable versions publicly available, but will be happy to share them through email. We mark these algorithms with a \cmark in the code column of the table. \begin{table}[!h] \small % \centering \def1.0{1.0} \caption{Algorithms for approximate subgraph counting.} \label{tab:approx_algs} \begin{tabular}{$l^l^l^c^c^l^c } % \rowstyle{\bfseries} & Year & Output & $k$-restriction & Directed & Strategy & Code\\ & & & & \\[-8pt] \hline \textsc{ESA}~\cite{kashtan2004efficient} & 2004 & Conc. & None & \cmark & Random Walk & \cite{mfinder} \\[2pt] \textsc{RAND-ESU}~\cite{wernicke2005faster} & 2005 & Freq. & None & \cmark & Rand. Enum. & \cite{fanmod} \\[2pt] \textsc{TNP}~\cite{prvzulj2006efficient} & 2006 & Conc. & 5 & \xmark & Enum. - Generalize & \xmark \\[2pt] \textsc{RAND-GTrie}~\cite{ribeiro2010estimation} & 2010 & Freq. & None & \cmark & Rand. Enum. & \cite{gtries}\\[2pt] \textsc{GUISE}~\cite{bhuiyan2012guise} & 2012 & Conc. & 5 & \xmark & Random Walk & \cite{guise} \\[2pt] \textsc{RAND-SCMD}~\cite{wang2012symmetry} & 2012 & Freq. & None & \cmark & Enum. - Generalize & \xmark \\[2pt] \textsc{Wedge Sampling}~\cite{seshadhri2013triadic} & 2013 & Freq. & 3 & \cmark & Path Sampling & \cite{wedgesamp} \\[2pt] \textsc{GRAFT}~\cite{rahman2014graft} & 2014 & Freq. & 5 & \xmark & Enum. 
- Generalize & \cite{graft} \\[2pt] \textsc{PSRW \& MSS}~\cite{wang2014efficiently} & 2014 & Conc. & None & \xmark & Random Walk & \xmark \\[2pt] \textsc{MHRW}~\cite{saha2015finding} & 2015 & Conc. & None & \xmark & Random Walk & \cmark \\[2pt] \textsc{RAND-FaSE}~\cite{paredes2015rand} & 2015 & Freq. & None & \cmark & Rand. Enum. & \cite{fase} \\[2pt] \textsc{Path Sampling}~\cite{jha2015path} & 2015 & Freq. & 4 & \xmark & Path Sampling & \xmark \\[2pt] \textsc{$k$-profile sparsifier}~\cite{elenberg2015beyond,elenberg2016distributed} & 2016 & Freq. & 4 & \xmark & Enum. - Generalize & \cite{eelenberggit} \\[2pt] \textsc{MOSS}~\cite{wang2018moss} & 2018 & Freq. & 5 & \xmark & Path Sampling & \cite{moss} \\[2pt] \textsc{SSRW}~\cite{yang2018ssrw} & 2018 & Freq. & 7 & \xmark & Random Walk & \xmark \\[2pt] \textsc{CC}~\cite{bressan2018motif} & 2018 & Freq. & None & \xmark & Color Coding & \cite{bressancc} \\[2pt] \end{tabular} \end{table} \begin{table}[!h] \small % \centering \def1.0{1.0} \caption{Algorithms for approximate subgraph counting with restricted access.} \label{tab:restaccess_algs} \begin{tabular}{$l^l^l^c^c^l^c } % \rowstyle{\bfseries} & Year & Output & $k$-restriction & Directed & Strategy & Code\\ & & & & \\[-8pt] \hline \textsc{WRW}~\cite{han2016waddling} & 2016 & Conc. & None & \xmark & Random Walk & \xmark \\[2pt] \textsc{IMPR}~\cite{chen2016mining} & 2016 & Freq. & 5 & \xmark & Random Walk & \cite{impr} \\[2pt] \textsc{CSS} \& \textsc{NB-SRW}~\cite{chen2016general} & 2016 & Conc. & None & \xmark & Random Walk & \cmark \\[2pt] \textsc{Minfer}~\cite{wang2017inferring} & 2017 & Conc. & 5 & \cmark & Enumerate - Generalize & \xmark \\[2pt] \end{tabular} \end{table} \subsection{Randomised Enumeration} These algorithms are adaptations of older enumeration algorithms that perform exact counting. 
They have the particularity that they all induce a tree-like search space in the computation, where the leaves are the subgraph occurrences, and thus perform the approximation in a similar manner. Each level of the search tree is assigned a value, $p_i$, which denotes the probability of transitioning from parent node to the child node in the tree. In this scheme, each leaf in this tree is reachable with probability $P = \prod_{i=1}^{k} p_i$ and the frequency of each subgraph is estimated using the number of samples obtained of that subgraph divided by $P$. Figure~\ref{fig:trees_probs_samp} illustrates how probabilities are added to the search tree. In this specific example, which could be equivalent to searching subgraphs of size 4, the first two levels of the tree have probability 100\%, so their successors are all explored. On the other hand, in the last two levels, the probability of exploring a node in the tree is only 80\%, therefore some nodes, marked as grey, are not visited. \begin{figure}[h] \includegraphics[width=0.8\textwidth]{figures/probabilities_tree.png} \vspace{-0.35cm} \caption{Example of a tree-like search space induced by a Randomised Enumeration algorithm and a possible distribution of transition probabilities per tree level.} \label{fig:trees_probs_samp} \end{figure} The first algorithm to implement this strategy was \textsc{RAND-ESU} by~\citeAutRef{wernicke2005faster}, an approximate version of \textsc{ESU} (described in Section~\ref{sec:exact_class}). Recall that \textsc{ESU} maintains two sets $V_S$ and $V_E$, the set of vertices in the subgraph and the set of candidate vertices for extending the subgraph. When adding a vertex from $V_E$ to $V_S$, this vertex is added with probability $p_{|V_S|}$, where $|V_S|$ is the depth of the search tree. Using the more efficient {\it g-trie} data structure,~\citeAutRef{ribeiro2010estimation} proposed \textsc{RAND-GTrie} and~\citeAutRef{paredes2015rand} proposed \textsc{RAND-FaSE}. 
Each level of the {\it g-trie} is assigned a probability, $p_i$. When adding a new vertex to a subgraph of size $d$, corresponding to depth $d$ in the {\it g-trie}, this is done with probability $p_d$. \subsection{Enumerate-Generalize} The general idea of these algorithms is to perform an exact count on a smaller network that was obtained from the original one (e.g., a sample, or a compressed network). From the frequencies of each subgraph in the smaller network, the frequencies in the original network are estimated. Algorithms vary on (i) how the smaller network is obtained and on (ii) which estimator they use. The first example of an algorithm in this category is Targeted Node Processing (\textsc{TNP}) by~\citeAutRef{prvzulj2006efficient}. This algorithm is specially tailored for protein-protein interaction networks, which, according to the authors, have a periphery that is sparser than the more central parts of the network. Using this information, it performs an exact count of the subgraphs in the periphery of the network and uses their frequencies to estimate the frequencies in the rest of the network. The authors claim that, due to the uniformity of the aforementioned networks, the distribution of the subgraphs in the fringe is representative of the distribution in the rest of the network. \textsc{SCMD} by \citeAutRef{wang2012symmetry} (already covered in Section~\ref{sec:exact_encap}) allows the use of any approximate counting method in the compressed graph. There is no guarantee that subgraphs are counted uniformly in the compressed graph, introducing a bias that needs to be corrected. The authors give the example of this bias when using their method in conjunction with \textsc{RAND-ESU}. If each leaf (subgraph) of depth $k$ in the search tree is reached with probability $P$ and a specific subgraph in the compressed graph is sampled with probability $\rho$, then, to correct the sampling bias, the probability of decompressing the relevant $k$-subgraph is $P/\rho$. 
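Both the randomised-enumeration estimators and this bias correction rest on the same Horvitz-Thompson idea: divide the observed counts by the probability of observing them. The following toy \textsc{ESU}-style enumerator sketches the idea; the per-level probabilities and the example graph are illustrative, and setting all probabilities to 1 recovers the exact count.

```python
import random

def rand_esu(adj, k, probs, rng=None):
    """RAND-ESU-style estimate of the number of connected k-vertex subgraphs.

    adj maps each vertex to its set of neighbours; probs[d] is the probability
    of descending at depth d of the ESU search tree. The number of sampled
    leaves is divided by P = prod(probs) to unbias the estimate.
    """
    rng = rng or random.Random(42)
    hits = 0

    def extend(sub, ext, root):
        nonlocal hits
        if len(sub) == k:
            hits += 1
            return
        ext = set(ext)
        while ext:
            w = ext.pop()
            if rng.random() < probs[len(sub)]:   # descend with prob p_depth
                # exclusive neighbourhood: neighbours of w that are larger
                # than the root and not adjacent to the current subgraph
                new_ext = ext | {u for u in adj[w]
                                 if u > root and u not in sub
                                 and all(u not in adj[s] for s in sub)}
                extend(sub | {w}, new_ext, root)

    for v in adj:
        if rng.random() < probs[0]:
            extend({v}, {u for u in adj[v] if u > v}, v)

    p_total = 1.0
    for p in probs:
        p_total *= p
    return hits / p_total

# triangle with a pendant edge: connected 3-subgraphs are
# {0,1,2}, {0,2,3} and {1,2,3}
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(rand_esu(g, 3, [1.0, 1.0, 1.0]))  # 3.0 (exact when all probs are 1)
```

Lowering the probabilities of the deeper levels, as in Figure~\ref{fig:trees_probs_samp}, trades variance for speed while keeping the estimator unbiased.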
In \textsc{GRAFT}, \citeAutRef{rahman2014graft} provide a strategy for counting undirected graphlets of size up to 5 using edge sampling. The algorithm starts by picking an edge $e_g$ from each of the 29 graphlets and a set $\mathcal{S}$ of edges sampled from the graph without replacement. For each edge $e \in \mathcal{S}$ and for each graphlet $g$, the frequency of $g$ is calculated such that $e$ has the same position in $g$ as $e_g$ ($e$ is said to be aligned with $e_g$). These frequencies are summed over all edges and divided by a normalising factor, based on the automorphisms of each graphlet, which becomes the estimate for the frequency of that graphlet in the whole network. Note that if $\mathcal{S}$ is equal to $E(G)$, the algorithm outputs an exact answer. \citeAut{elenberg2015beyond} create estimators for the frequency of size 3~\cite{elenberg2015beyond} and size 4~\cite{elenberg2016distributed} subgraphs. A major difference between this work and previous ones is that \citeAut{elenberg2015beyond} estimate the frequencies of subgraphs that are not connected, besides the usual connected ones. The authors start by removing each edge from the network with a certain probability and computing the exact counts in this ``sub-sampled'' network. Then, they craft a set of linear equations that relate the exact counts on this smaller network to those of the original network. Using these equations, the estimation of the frequency of the subgraphs in the original network follows. \citeAutRef{wang2017inferring} introduce an algorithm that aims to estimate the subgraph concentrations of a network when only a fraction of its edges is known. They call this a ``RESampled Graph'', obtained from the real network through random edge sampling, a common scenario in applications such as network traffic analysis. 
A key aspect of this algorithm is the number of non-induced subgraphs of a size $k$ graphlet that are isomorphic to another size $k$ graphlet; an example of this calculation can be found in Table~\ref{tab:noninduc_rel4}. Using this number and the proportion of edges sampled to form the smaller network, the authors compute the probability that a subgraph in the ``RESampled Graph'' is isomorphic to another subgraph in the original graph. Then, an exact counting algorithm is applied to the ``RESampled Graph'' and, by composing the results from this algorithm with the aforementioned probability, the subgraph concentrations in the original network are estimated. \subsection{Path Sampling} This family of algorithms relies on the idea of sampling path subgraphs to estimate the frequencies of the other subgraphs. Path subgraphs are composed of 2 exterior nodes and $k-2$ interior nodes (where $k$ is the size of the subgraph) arranged in a single line; the interior nodes all have degree 2, while the exterior nodes have degree 1. Examples of these are the subgraphs $G_1$, $G_3$ and $G_9$ in Figure~\ref{fig:345undirgraphlets}. The main idea of these algorithms, mainly for $k \geq 4$, is relating the numbers of non-induced occurrences of each subgraph of size $k$ in the other size $k$ subgraphs. For example, when $k = 4$, there are 4 non-induced occurrences of $G_3$ in $G_5$ or 12 non-induced occurrences of $G_3$ in $G_8$. Table~\ref{tab:noninduc_rel4} shows this full relationship when $k = 4$.
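As a concrete instance of such non-induced relationships, the number of non-induced occurrences of the 4-node path can be obtained in closed form; the Python sketch below (our own illustration) counts, for each central edge $(u,v)$, the $(d(u)-1)(d(v)-1)$ candidate paths and removes the coinciding-endpoint cases, of which there are exactly 3 per triangle.

```python
def non_induced_three_paths(adj):
    """Counts non-induced 4-node paths (paths with 3 edges), the basic
    sampling primitive behind path sampling. Each edge (u, v), taken as
    the central edge of a path, contributes (d(u)-1)(d(v)-1) candidate
    end-node pairs; the pairs where both candidates coincide are exactly
    the triangles through (u, v), i.e., 3 configurations per triangle."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    # each triangle is seen once per each of its 3 edges
    triangles = sum(len(adj[u] & adj[v]) for u, v in edges) // 3
    centred = sum((len(adj[u]) - 1) * (len(adj[v]) - 1) for u, v in edges)
    return centred - 3 * triangles
```

On the 4-clique this yields $4!/2 = 12$ non-induced paths, consistent with the corresponding $g_3$ entry of Table~\ref{tab:noninduc_rel4} if $g_8$ denotes the clique.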
\begin{figure}[!ht] \includegraphics[width=0.85\textwidth]{figures/undir_graphlets.pdf} \vspace{-0.35cm} \caption{The 29 isomorphic classes of undirected subgraphs between size 3 and 5.} \label{fig:345undirgraphlets} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline & $g_3$ & $g_4$ & $g_5$ & $g_6$ & $g_7$ & $g_8$\\ \hline \hline $g_3$ & 0 & 1 & 2 & 4 & 6 & 12\\ \hline $g_4$ & 1 & 0 & 1 & 0 & 2 & 4\\ \hline $g_5$ & 0 & 0 & 0 & 1 & 1 & 3\\ \hline $g_6$ & 0 & 0 & 1 & 0 & 4 & 12\\ \hline $g_7$ & 0 & 0 & 0 & 0 & 1 & 6\\ \hline $g_8$ & 0 & 0 & 0 & 0 & 0 & 1\\ \hline \end{tabular} \caption{Number of non-induced occurrences of each undirected graph of size 4 in each other. Position $(i,j)$ in the table indicates the number of times that graph $i$ occurs non-induced in graph $j$.} \label{tab:noninduc_rel4} \end{center} \end{table} \citeAutRef{seshadhri2013triadic} introduced the idea of \emph{wedge sampling}, where wedges denote size 3 path subgraphs. The premise of the algorithm is simple: a number of wedges is selected uniformly at random and each is checked for being closed or not. The fraction of closed wedges sampled is an estimation of the clustering coefficient, from which the number of triangles can be derived. Building on the idea of \emph{wedge sampling},~\citeAutRef{jha2015path} propose \emph{path sampling} to estimate the frequency of size 4 graphlets. The main primitive of the algorithm is sampling non-induced occurrences of $G_3$ and determining which graphlet is induced by that sample. The estimator relies on both the number of induced subgraphs counted via the sampling and information contained in Table~\ref{tab:noninduc_rel4}. Finally, the authors determine an equation to count the number of stars with 4 nodes ($G_4$) based on the frequencies of the other graphlets, since $G_4$ does not have any non-induced occurrence of $G_3$.
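A minimal Python sketch of the wedge-sampling idea (ours; the original algorithm includes further refinements): sample wedge centres proportionally to $\binom{d(v)}{2}$, measure the fraction of closed wedges, and rescale by the total wedge count, with each triangle accounting for 3 closed wedges.

```python
import random

def wedge_sample_triangles(adj, trials, rng):
    """Wedge-sampling sketch in the spirit of Seshadhri et al.: sample
    random wedges, estimate the closed fraction (the clustering
    coefficient), and derive a triangle-count estimate from it."""
    nodes = list(adj)
    # number of wedges centred at each node: d(v) choose 2
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    total_wedges = sum(weights)
    closed = 0
    for _ in range(trials):
        centre = rng.choices(nodes, weights=weights)[0]
        a, b = rng.sample(sorted(adj[centre]), 2)
        closed += b in adj[a]
    closed_fraction = closed / trials           # clustering coefficient estimate
    return closed_fraction * total_wedges / 3   # each triangle closes 3 wedges
```

On a complete graph every sampled wedge is closed, so the estimate is exact regardless of the number of trials.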
Applying the same concepts to size 5 subgraphs, \citeAutRef{wang2018moss} present \textsc{MOSS}-5. For size 5, sampling paths is not enough to estimate the frequencies of all different subgraphs, as there are 3 subgraphs that do not have a non-induced occurrence of a path: $G_{10}$, $G_{11}$ and $G_{14}$. On the other hand, $G_{11}$ does not have a non-induced occurrence in 3 subgraphs as well ($G_9$, $G_{10}$ and $G_{15}$). Using this knowledge, the authors create an algorithm divided into two parts: first, it samples non-induced size 5 paths ($G_{9}$), similarly to~\citeAutRef{jha2015path}; then the procedure is repeated, but sampling occurrences of $G_{11}$ instead. Combining the results from these two sampling schemes, the authors are able to estimate the frequency of every size 5 subgraph. To the best of our knowledge, \textsc{MOSS}-5 is the algorithm that achieves the best trade-off between accuracy and time to estimate the frequency of $5$-subgraphs, as it is able to reach very small errors (magnitude $10^{-2}$) with a very limited number of samples, even for large networks. However, the ideas behind \textsc{MOSS}-5 are not easily extendable to directed subgraphs and larger undirected subgraphs due to the ever-increasing number of dependencies between the numbers of non-induced occurrences, making it harder to use the information contained in a table similar to Table~\ref{tab:noninduc_rel4} for these cases. \subsection{Random Walk} A random walk in a graph $G$ is a sequence of nodes, $R$, of the form $R = (n_1, n_2, \ldots)$, where $n_1$ is the seed node and $n_i$ is the $i$th node visited in the walk. A random walk can also be seen as a Markov chain. We identify two main approaches to sample subgraphs using random walks. The first is incrementing the size of the walk until a sequence of $k$ distinct nodes is drawn, forming a $k$-subgraph, which is then identified by an isomorphism test.
The second approach is considering a graph of relationships between subgraphs, where two subgraphs are connected if one can be obtained from the other by adding or removing a node or an edge. A random walk is then performed on this graph instead of on the original one. \citeAutRef{kashtan2004efficient}, in their seminal work commonly called \textsc{ESA} (Edge Sampling), implemented one of the first subgraph sampling methods in the \textsc{MFinder} software. The authors propose to do a random walk on the graph, sampling one edge at a time until a set of $k$ nodes is found, from which the subgraph induced by that set of nodes is discovered. This method results in a biased estimator. To correct the bias, the authors propose to re-weight the sample, which takes exponential time in the size of the subgraphs. \citeAutRef{bhuiyan2012guise} develop \textsc{GUISE}, which computes the graphlet degree distribution for subgraphs of size 3, 4 and 5 in undirected networks. The algorithm is based on Markov Chain Monte Carlo (MCMC) sampling. It works by sampling a seed graphlet, calculating its neighbourhood (a set of other graphlets), picking one randomly and calculating an acceptance probability to transition to this new graphlet. This process is then repeated until a predefined number of samples is taken from the graph. The neighbourhood of a graphlet is similar to the graph of relationships previously mentioned, but to obtain a $k$-graphlet from another $k$-graphlet, a node from the original one is removed and, if the remaining $k-1$ nodes are connected, their adjacency lists are concatenated and nodes are picked from there to form the new $k$-graphlet. A similar approach to \textsc{GUISE} is used by~\citeAutRef{saha2015finding}, where MCMC sampling is also used to compute subgraph concentrations.
A difference to \textsc{GUISE} is that the size of graphlets is theoretically unbounded and only a specific size $k$ is counted, whereas \textsc{GUISE} counts graphlets of size 3, 4 and 5 simultaneously. They also suggest a modified version where the acceptance probability is always one (that is, there is always a transition to the new subgraph), which introduces a bias towards graphlets with high-degree nodes. In turn, they propose an estimator that re-weights the concentration to remove this bias. \citeAutRef{wang2014efficiently} propose a random walk based method to estimate subgraph concentrations that aims to improve on the approach taken by \textsc{GUISE}. The main improvement over \textsc{GUISE} is that no samples are rejected, avoiding the cost of sampling without any gain of information. The authors use a graph of relationships between connected induced subgraphs, where two $k$-subgraphs are connected if they share $k-1$ nodes, but this graph is not explicitly built, reducing memory costs. The basic algorithm is just a simple random walk over this graph of relationships. The authors also present two improvements: \emph{Pairwise Subgraph Random Walk} (\textsc{PSRW}), which estimates size $k$ subgraphs by looking at the graph of relationships composed of $(k-1)$-subgraphs; and \emph{Mixed Subgraph Sampling} (\textsc{MSS}), which estimates subgraphs of size $k-1$, $k$ and $k+1$ simultaneously. \citeAutRef{han2016waddling} present an algorithm to estimate subgraph concentrations based on random walks. Their algorithm, \emph{Waddling Random Walk} (\textsc{WRW}), gets its name from how the random walk is performed, sampling nodes not only on the path of the walk but also by querying random nodes in the neighbourhood. Let $l$ be the number of vertices (with repetition) in the shortest path of a particular $k$-graphlet. The goal of the waddling is to reduce the number of steps the walk has to take to identify graphlets with $l > k$.
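The role of the acceptance probability in these MCMC samplers can be isolated in a small Metropolis--Hastings sketch (our own, over an abstract state space standing in for graphlet states): proposing a uniform neighbour and accepting with $\min(1, |N(cur)|/|N(prop)|)$ yields a uniform stationary distribution, whereas always accepting, as in the modified version discussed above, biases the walk towards states with large neighbourhoods.

```python
import random

def mh_uniform_walk(neigh, start, steps, rng):
    """Metropolis-Hastings sketch of the mechanism behind GUISE-style
    MCMC sampling. neigh: dict state -> list of neighbouring states.
    Proposals are uniform over the current neighbourhood and accepted
    with min(1, |N(cur)| / |N(prop)|), which makes the stationary
    distribution uniform over states even when neighbourhood sizes
    differ widely. Returns per-state visit counts."""
    cur, visits = start, {}
    for _ in range(steps):
        prop = rng.choice(neigh[cur])
        if rng.random() < min(1.0, len(neigh[cur]) / len(neigh[prop])):
            cur = prop
        visits[cur] = visits.get(cur, 0) + 1
    return visits
```

On a 3-state path (a middle state with two neighbours, two end states with one), long runs visit all states roughly equally often, which would not hold if every proposal were accepted.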
While executing a random walk to identify a $k$-subgraph, the waddling approach limits the number of nodes explored to the size of the subgraph, $k$. \citeAutRef{chen2016mining} propose a random walk based algorithm to estimate graphlet counts in online social networks, which are often restricted and whose full topology is hidden behind a prohibitive query cost. With this context in mind, the authors introduce the concepts of \emph{touched} and \emph{visible} subgraphs. The former are subgraphs composed of vertices whose neighbourhoods are accessible. The latter possess one and only one vertex with an inaccessible neighbourhood. Their method, \textsc{IMPR}, works by generating $(k-1)$-node \emph{touched} subgraphs via random walk and combining them with their nodes' neighbourhoods to obtain $k$-node \emph{visible} subgraphs, which form the $k$-node samples. \citeAutRef{chen2016general} introduce a new framework that incorporates \textsc{PSRW} as a special case. To sample $k$-subgraphs, the authors also use a graph of relationships between connected induced $d$-subgraphs, $d \in \{1,..,k-1\}$, and perform a random walk over this graph. The difference is that \textsc{PSRW} only uses $d = k-1$, which becomes ineffective as $k$ grows. The authors also augment this method of sampling with a different re-weight coefficient to improve estimation accuracy and add non-backtracking random walks, which eliminate invalid states in the Markov chain that do not contribute to the estimation. \citeAutRef{yang2018ssrw} introduce another algorithm using random walks, \emph{Scalable subgraph Sampling via Random Walk} (\textsc{SSRW}), able to compute both frequencies and concentrations of undirected subgraphs of size up to 7. The next nodes in the random walk are picked from the concatenation of the neighbourhoods of all nodes previously selected to be part of the sampled subgraph.
The authors present an unbiased estimator and compare it against \citeAutRef{chen2016general} and \citeAutRef{han2016waddling}, getting better results than both for the single network tested. \subsection{Colour Coding} The technique of colour coding~\cite{alon1995color} has been adapted to the problem of approximating subgraph frequencies by \citeAutRef{zhao2010subgraph}, \citeAutRef{zhao2012sahad} and \citeAutRef{slota2013fast}. However, all these works focus on specific categories of subgraphs; for example, \textsc{SAHad}~\cite{zhao2012sahad} only finds subgraphs that are in the form of a tree. More recently, \citeAutRef{bressan2018motif} present a general algorithm using colour coding that works for undirected subgraphs of theoretically unbounded size. The algorithm works in two phases. The first, based on the original description of~\cite{alon1995color}, counts the number of non-induced trees, \emph{treelets}, in the graph, but with a particularity: the nodes are previously partitioned into $k$ sets, each attributed a label (a \emph{colour}). These treelets must then consist solely of nodes with different colours. This part of the algorithm outputs counters $C(T,S,v)$, for every $v \in V(G)$, which are the numbers of treelets rooted in $v$, isomorphic to $T$, whose colours span the colour set $S$. The second phase of the algorithm is the sampling part, which is focused on sampling treelets uniformly at random. To pick a treelet with $k$ nodes, the authors choose a random node $v$, a treelet $T$ with probability proportional to $C(T,[k],v)$, and then pick one of the treelets that is rooted in $v$, is isomorphic to $T$ and is coloured by $[k]$. Given a treelet $T_k$, the authors consider the graphlet $G_k$ induced by the nodes of $T_k$ and increment its frequency by $\frac{1}{\sigma(G_k)}$, where $\sigma(G_k)$ is the number of spanning trees of $G_k$.
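A toy Python sketch of the colour-coding principle (ours; the actual algorithms work on treelets rather than triangles): colour nodes uniformly at random with $k$ colours, count the occurrences that are \emph{colourful} (all colours distinct), and rescale by the inverse probability $k^k/k!$ that any fixed occurrence becomes colourful.

```python
import math
import random
from itertools import combinations

def colour_coding_triangles(adj, trials, rng, k=3):
    """Colour-coding sketch in the spirit of Alon et al.: a fixed
    k-node occurrence is colourful with probability k!/k^k under a
    uniform random colouring, so averaging (k^k/k!) * #colourful over
    many colourings gives an unbiased estimate of the true count.
    Here the occurrences counted are triangles (k = 3)."""
    inv_p = k ** k / math.factorial(k)   # 27/6 = 4.5 for k = 3
    nodes = sorted(adj)
    est = 0.0
    for _ in range(trials):
        colour = {v: rng.randrange(k) for v in nodes}
        colourful = sum(
            1 for a, b, c in combinations(nodes, 3)
            if b in adj[a] and c in adj[a] and c in adj[b]
            and len({colour[a], colour[b], colour[c]}) == 3)
        est += inv_p * colourful
    return est / trials
```

On the 4-clique (4 triangles) a few thousand colourings already concentrate the estimate tightly around the true value.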
\section{Parallel Strategies}\label{sec:parallel} By this point it should be clear that subgraph counting is a computationally hard problem. As discussed in Section~\ref{sec:exact}, analytic approaches are much more efficient than enumeration algorithms; however, they are specific to certain sets of small subgraphs. Sampling strategies can produce results in a fraction of the time, but there is a trade-off between time and accuracy. Therefore, speeding up subgraph counting remains a crucial task. The availability of parallel environments, such as multicores, hybrid clusters, and GPUs, gave rise to strategies that leverage these resources. Here we follow a different organizational approach from Sections~\ref{sec:exact} and~\ref{sec:sampling}: we first give a historical overview of the parallel algorithms put forward throughout the years and then discuss the strategies on a higher level. This is done because most parallel algorithms have a sequential counterpart (already described in previous sections) and many common aspects can be found between the parallel strategies. Table~\ref{tab:paralgs1} summarizes our proposed taxonomy, composed of seven aspects (algorithms are ordered by publication year): (i) the computational \textbf{platform}, (ii) the \textbf{initial work-units} (what part of the graph is divided initially), (iii) the \textbf{runtime work-units} (what part of the graph is divided during runtime), (iv) the \textbf{search traversal} strategy (how the graph is explored), (v) the \textbf{work division} strategy (how work-units are distributed), (vi) how \textbf{work sharing} is performed (if applicable) between workers (e.g., CPU processors, or CPU/GPU threads), and (vii) whether code is \textbf{publicly available}.
\begin{table}[!h] % \footnotesize \centering \def\arraystretch{1.0} \caption{Parallel algorithms for subgraph counting.} \label{tab:paralgs1} \begin{tabular}{c^c^c^c^c^c^c^c^c^l } % \rowstyle{\bfseries} & \multirow{2}{*}{Year} & \multirow{2}{*}{Platform} & \multicolumn{2}{c}{\bf Work-units} & Search & Work & Work & Public\\ \rowstyle{\bfseries} & & & Initial & Runtime & Traversal & Division & Sharing & Code \\[2pt] \hline & & & & & & & & \\[-6pt] \rowstyle{} \textsc{ParWang}~\cite{wang2005parallel} & 2005 & DM & Vertices & \xmark & DFS & Static & \xmark & \xmark \\[2pt] \textsc{DM-Grochow}~\cite{schatz2008parallel} & 2008 & DM & Isoclasses & Isoclasses & DFS & First-Fit & \xmark & \xmark \\[2pt] \textsc{MPRF}~\cite{liu2009mapreduce} & 2009 & MapReduce & Edges & Subgraphs & BFS & Static & \xmark & \xmark \\ \textsc{DM-ESU}~\cite{ribeiro2010parallelesu} & 2010 & DM & Vertices & Subgraph-trees & DFS & Diagonal & M-W & \xmark \\ \textsc{DM-Gtries}~\cite{ribeiro2010efficient} & 2010 & DM & Vertices & Subgraph-trees & DFS & Diagonal & W-W & \xmark \\ \textsc{SM-Gtries}~\cite{aparicio2014parallel} & 2014 & SM & Vertices & Subgraph-trees & DFS & Diagonal & W-W &\cite{gtscanner}\\ \textsc{SM-FaSE}~\cite{aparicio2014scalable} & 2014 & SM & Vertices & Subgraph-trees & DFS & Diagonal & W-W & \cite{gtscanner}\\ \textsc{Subenum}~\cite{shahrivari2015fast} & 2015 & SM & Edges & Subgraphs & DFS & First-Fit & \xmark & \cite{subenum}\\ \textsc{GPU-Orca}~\cite{milinkovic14contribution} & 2015 & GPU & Vertices & Subgraphs & BFS & Static & \xmark & \xmark\\ \textsc{Lin}~\cite{lin2015network} & 2015 & GPU & Vertices & Subgraphs & BFS & Static & \xmark & \xmark\\ \textsc{MRSUB}~\cite{shahrivari2015distributed} & 2015 & MapReduce & Edges & Subgraphs & BFS & Static & \xmark & \xmark\\ \textsc{PGD}~\cite{ahmed2015efficient} & 2015 & SM & Edges & \xmark & DFS & Static & \xmark & \cite{PGD}\\ \textsc{GPU-PGD}~\cite{rossi2016leveraging} & 2016 & CPU+GPU & Edges & Subgraph-trees & BFS & First-Fit & W-W &
\xmark \\ \textsc{Elenberg}~\cite{elenberg2016distributed} & 2016 & DM & Vertices & Subgraphs & DFS & First-Fit & \xmark & \cite{eelenberggit}\\ \textsc{MR-Gtries}~\cite{ahmad2017scalable} & 2017 & MapReduce & Vertices & Subgraph-trees & DFS & Timed & M-W & \xmark\\ \end{tabular} \end{table} \subsection{Historical Overview}\label{sec:par_hist} One key aspect necessary to achieve a scalable parallel computation is finding a balanced work division (i.e., splitting work-units \emph{evenly} between workers -- parallel processors/threads). A naive possibility for subgraph counting is to assign $\frac{|V(G)|}{|P|}$ nodes from network $G$ to each worker $p \in P$. This egalitarian division is a poor choice since two nodes can induce very different search spaces; for instance, \emph{hub}-like nodes induce many more subgraph occurrences than nearly-isolated nodes. Instead of performing an egalitarian division, \citeAutRef{wang2005parallel} discriminate nodes by their degree and distribute them among workers, the idea being that each worker gets roughly the same amount of \textit{hard} and \textit{easy} work-units. Despite achieving a more balanced division than the naive version, there is still no guarantee that the node degree is sufficient to determine the actual complexity of the work-unit. Distributing work immediately (without runtime adjustments) is called a \textbf{static division}. Wang et al. did not assess scalability in~\cite{wang2005parallel}, but they showed that their parallel algorithm was faster than \textsc{Mfinder}~\cite{milo2002network} in an \emph{E. coli} transcriptional regulation network. Since their method was not named, we refer to it as \textsc{ParWang} henceforth. The first parallel strategy with a \textbf{single-subgraph-search} algorithm at its core, namely \textsc{Grochow}~\cite{grochow2007network}, was by \citeAutRef{schatz2008parallel}. Since the algorithm was not named, and it targets a \textbf{distributed memory (DM)} architecture (i.e., a parallel cluster), we refer to it as \textsc{DM-Grochow}. In order to distribute query subgraphs (also called \textbf{isoclasses}) among workers, they employed two strategies: naive and \textbf{first-fit}. The naive strategy is similar to \textsc{ParWang}'s. In the first-fit model, each slave processor requests a subgraph type (or \textbf{isoclass}) from the master and enumerates all occurrences of that type (e.g., cliques, stars, chains). This division is \textbf{dynamic}, as opposed to static, but it is not balanced since different isoclasses induce very different search trees. For instance, in sparse networks $k$-cliques are faster to compute than $k$-chains. Using 64 cores, Schatz et al. obtained $\approx$10-15x speedups over the sequential version on a yeast PPI network. They also tried another novel approach by partitioning the network instead of partitioning the subgraph-set. However, finding adequate partitions for subgraph counting is a very hard problem due to partition overlaps and subgraphs traversing different partitions, and no speedup was obtained using this strategy. We should note that parallel graph partitioning remains an active research problem to this day~\cite{bulucc2016recent,meyerhenke2017parallel}, but it is outside the scope of this work.
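The degree-based static division used by \textsc{ParWang} can be sketched in a few lines of Python (an illustration of the idea, not the original implementation):

```python
def degree_balanced_division(adj, n_workers):
    """Static division sketch in the spirit of ParWang: sort vertices by
    degree and deal them round-robin, so every worker receives a similar
    mix of 'hard' (high-degree) and 'easy' (low-degree) work-units.
    Returns one list of vertices per worker."""
    by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    return [by_degree[w::n_workers] for w in range(n_workers)]
```

As noted above, even this division offers no guarantee of balance, since degree only approximates the true cost of a vertex's search space.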
All parallel algorithms mentioned so far traverse occurrences in a \textbf{depth-first (DFS)} fashion, since doing so avoids having to store intermediate states. By contrast, \citeAutRef{liu2009mapreduce} use a \textbf{breadth-first search (BFS)} where, at each step, all subgraph occurrences found in the previous one are expanded by one node. Their algorithm, \textsc{MPRF}, is implemented following a \textbf{MapReduce} model~\cite{dean2008mapreduce}, which is intrinsically a BFS-like framework. In \textsc{MPRF}, mappers extend size $k$ occurrences to size $k+1$ and reducers remove repeated occurrences. At each BFS-level, \textsc{MPRF} divides work-units evenly among workers. We still consider this to be a static division since no adjustments are made at runtime. Thus, in our terminology, static divisions can be performed only once (at the start of computation in DFS-like algorithms) or multiple times (once per level in BFS-like algorithms). Overhead caused by reading and writing to files reduces \textsc{MPRF}'s efficiency, but the authors report speedups of $\approx$7x on a 48-node cluster, when compared to the execution on a single processor. DFS-based algorithms discussed so far either perform a complete work-division right at the beginning (\textsc{ParWang}), or they perform a partial work-division at the beginning and then workers request work when idle (\textsc{DM-Grochow}). In both cases, a worker has to finish a work-unit before proceeding to a new one. Therefore, it is possible that a worker gets stuck processing a very computationally heavy work-unit while all the others are idle. This has to do with work-unit granularity: work-units at the top of the DFS search space have high (coarse) granularity since the algorithm has to explore a large search space. BFS-based algorithms mitigate this problem because work-units are much more fine-grained (usually a worker only extends its work-unit(s) by one node).
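A sequential Python sketch of the BFS-level expansion underlying MapReduce approaches such as \textsc{MPRF} (our own illustration; a real implementation distributes each level across workers): the `map' phase extends every stored occurrence by one neighbouring node, and the `reduce' phase removes repeated occurrences via a canonical set representation.

```python
def bfs_level_counting(adj, k):
    """BFS/MapReduce-style sketch in the spirit of MPRF: each level
    extends every stored occurrence by one neighbouring node ('map'),
    and duplicates are removed through a canonical frozenset key
    ('reduce'). Returns the number of connected induced k-subgraphs."""
    level = {frozenset([v]) for v in adj}
    for _ in range(k - 1):
        nxt = set()
        for occ in level:                    # map: extend by one neighbour
            for v in occ:
                for u in adj[v]:
                    if u not in occ:
                        nxt.add(occ | {u})   # reduce: the set dedupes repeats
        level = nxt
    return len(level)
```

The sketch also makes the storage problem visible: every intermediate level must be kept in full, which is exactly the file/RAM overhead discussed for MapReduce implementations.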
The work by \citeAutRef{ribeiro2010parallelesu} was the first to implement \textbf{work sharing} during parallel subgraph counting, alleviating the problem of coarse work-unit granularity of DFS-based subgraph counting algorithms. Workers have a splitting threshold that dictates how likely they are to put part of a work-unit in a global work queue instead of fully processing it. A work-unit is divided using \textbf{diagonal work splitting}, which gathers unprocessed nodes at level $k$ (i.e., nodes that are reached by expanding the current work-unit) and recursively goes up in the search tree, also gathering unprocessed nodes of level $k-i$, $i < k$, until reaching level $1$. This process results in a set of finer-grained work-units that induces a more balanced search space than static and first-fit divisions. In \cite{ribeiro2010parallelesu}, Ribeiro et al. use \textsc{ESU} as their core enumeration algorithm and propose a \textbf{master-worker (M-W)} architecture where a master node manages a work queue and distributes its work-units among slave workers. This strategy, \textsc{DM-ESU}, was the first to achieve near-linear speedups ($\approx$128x on a 128-node cluster) on a set of heterogeneous networks. A subsequent version~\cite{ribeiro2010efficient} used \textsc{GTries} as its base algorithm and implemented a \textbf{worker-worker (W-W)} architecture where workers perform work stealing. \textsc{DM-Gtries} improves upon \textsc{DM-ESU} by using a faster enumeration algorithm (\textsc{GTries}) and having all workers perform subgraph enumeration (without dedicating a node to work-queue management). Similar implementations (based on W-W sharing and diagonal splitting) of \textsc{GTries} and \textsc{FASE} were also developed for \textbf{shared memory (SM) environments}, which achieved near-linear speedups on a 64-core machine~\cite{aparicio2014parallel,aparicio2014scalable}.
The main advantages of SM implementations are that work sharing is faster (since no message passing is necessary) and that SM architectures (such as multicores) are a commodity while DM architectures (such as a cluster) are not. Instead of developing efficient work sharing strategies, \citeAutRef{shahrivari2015fast} try to avoid the unbalanced computation induced by vertex-based work-unit division. \textsc{Subenum} is an adaptation of \textsc{ESU} which uses edges as starting work-units, achieving near-linear speedup ($\approx$10x on a 12-core machine). Using edges as starting work-units is also more suitable for the MapReduce model, since edges are finer-grained work-units than vertices. In a follow-up work~\cite{shahrivari2015distributed}, Shahrivari and Jalili propose a MapReduce algorithm, \textsc{MRSUB}, which greatly improves upon~\cite{liu2009mapreduce}, reporting a speedup of $\approx$34x on a 40-core machine. Like \textsc{Subenum}, \textsc{MRSUB} does not support work sharing between workers. A MapReduce algorithm with work sharing was put forward by \citeAutRef{ahmad2017scalable}, henceforth called \textsc{MR-Gtries}. Using work sharing with \textbf{timed redistribution} (i.e., after a certain time, every worker stops and work is fully redistributed), they report a speedup of $\approx$26x on a 32-core machine. While the efficiency of \textsc{MRSUB} and \textsc{MR-Gtries} is comparable ($\approx80\%$), the latter has a much faster sequential algorithm at its core; therefore, in terms of absolute runtime, \textsc{MR-Gtries} is the fastest MapReduce subgraph counting algorithm that we know of. Graphics processing units (\textbf{GPUs}) are processors specialized in image generation, but numerous general purpose tasks have been adapted to them~\cite{fang2008parallel,hong2011efficient,merrill2012scalable}.
GPUs are appealing due to their large number of cores, reaching hundreds or thousands of parallel threads, whereas commodity multicores typically have no more than a dozen. However, algorithms that rely on graph traversal are not best suited for the GPU framework due to branching code, non-coalesced memory accesses and coarse work-unit granularity~\cite{merrill2012scalable}. \citeAutRef{milinkovic14contribution} were among the first to follow a GPU approach (\textsc{GPU-Orca}), with limited success. \citeAutRef{lin2015network} put forward a GPU algorithm (henceforth referred to as \textsc{Lin}, since it was unnamed) mostly targeted at network motif discovery but also with some emphasis on efficient subgraph enumeration. \textsc{Lin} avoids duplicates in a similar fashion to \textsc{ESU}~\cite{wernicke2005faster}, and auxiliary arrays are used to mitigate uncoalesced memory accesses. A BFS-style traversal is used (extending each subgraph one node at a time) to better balance work-units among threads. They compare \textsc{Lin} running on a 2496-core GPU (Tesla K20) against parallel CPU algorithms and report a speedup of $\approx$10x over a 6-core execution of the fastest CPU algorithm, \textsc{DM-GTries}. \citeAutRef{rossi2016leveraging} proposed the first algorithm that \textbf{combines multiple GPUs and CPUs}. Their method dynamically distributes work between CPUs and GPUs, where unbalanced computation is given to the CPU whereas GPUs compute the more regular work-units. Since their method was not named, we refer to it as \textsc{GPU-PGD}. Their hybrid CPU-GPU version achieves speedups of $\approx 20$x to $\approx 200$x when compared to sequential \textsc{PGD}, depending largely on the network. As mentioned in Section~\ref{sec:exact}, \textsc{PGD} is one of the fastest methods for sequential subgraph counting. As such, \textsc{GPU-PGD} is the fastest subgraph counting algorithm currently available, as far as we know.
However, \textsc{GPU-PGD} is limited to 4-node subgraphs, while \textsc{DM-GTries} is the fastest general approach. \subsection{Platform} Different parallel platforms offer distinct advantages and are more suited for particular strategies. Next we discuss the strategic differences between platforms. \subsubsection{Distributed Memory (DM)} A parallel cluster offers the opportunity to use multiple (heterogeneous) machines to speed up computation. Clusters can have hundreds of processors and therefore, if speedup is linear, computation time is reduced from weeks to just a few hours. For work sharing to be efficiently performed on DM architectures, one can either have a master node mediating work sharing~\cite{ribeiro2010parallelesu} or have workers directly steal work from each other~\cite{ribeiro2010efficient,ribeiro2012parallel}. Usually DM approaches are implemented directly using MPI~\cite{wang2005parallel,schatz2008parallel,ribeiro2010efficient,ribeiro2010parallel,ribeiro2012parallel}, but higher level software, such as GraphLab, can also be used~\cite{elenberg2016distributed}. DM has the drawback of workers having to send messages through the network, making network bandwidth a bottleneck. \subsubsection{Shared Memory (SM)} SM approaches have the advantage that their underlying hardware (multicore computers) is a commodity. Furthermore, workers in an SM environment do not communicate via network messages (since they can communicate directly in main memory), thus avoiding a bottleneck in the network bandwidth. However, the number of cores is usually very low when compared to DM, MapReduce, and GPU architectures. Algorithms on multicores tend to traverse the search space in a DFS fashion~\cite{aparicio2014parallel,aparicio2014scalable,shahrivari2015fast,ahmed2015efficient}, thus avoiding the storage of a large number of subgraph occurrences in disk or main memory.
\subsubsection{MapReduce} The MapReduce paradigm has been successfully applied to problems where each worker executes very similar tasks, which is the case for subgraph counting. MapReduce is an inherently BFS method, whereas most subgraph counting algorithms are DFS-based. The biggest setback of using MapReduce is the huge amount of subgraph occurrences that are stored in files between each BFS-level iteration (corresponding to a node expansion)~\cite{liu2009mapreduce,shahrivari2015distributed}. To avoid this setback, one can instead store them in RAM when the number of occurrences fits in memory~\cite{ahmad2017scalable}. \subsubsection{GPU} GPUs are very appealing due to their large amount of parallel threads. Despite linear speedups being rare on the GPU, since they have such a large number of cores the gains can still be substantial. However, they are not well-suited for graph traversal algorithms. One of the current best pure BFS algorithms on the GPU~\cite{merrill2012scalable} only achieves a speedup of $\approx 8$x (on a 448-core NVIDIA C2050) when compared to a 4-core CPU BFS algorithm~\cite{leiserson2010work}. By contrast, Monte Carlo calculations on an NVIDIA C2050 GPU achieve a speedup of $\approx 30$x~\cite{ding2011evaluation} when compared to a 4-core CPU implementation. This is mainly due to branching problems, uncoalesced memory accesses and coarse work-unit granularity, sometimes leading to almost non-existent speedups in subgraph counting~\cite{milinkovic14contribution}. Using additional memory to efficiently store neighbours and smart work division helps achieve some speedup~\cite{lin2015network}. Another approach is to combine CPUs and GPUs: CPUs handle unbalanced computation while GPUs execute regular computation~\cite{rossi2016leveraging}. \subsection{Work-units} When a program can be split into a series of (nearly) independent tasks, efficient parallelism greatly reduces execution times.
Each \emph{worker} (be it a thread, CPU-core, or GPU-core) is assigned \emph{work-units} (parallel tasks), either at the start of the computation (\emph{initial work-units}) or during runtime (\emph{runtime work-units}). Work-units in subgraph counting can be either (a) \emph{vertices}, (b) \emph{edges}, (c) \emph{subgraphs}, (d) isomorphic classes (or \emph{isoclasses} for short), or (e) \emph{subgraph-trees}, and each option is described next. Figure~\ref{fig:workunits} illustrates each base work-unit and gives an example of how each can be divided. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{figures/workunits} \vspace{-0.2cm} \caption{Different base work-units, and their division to two workers $P$ and $Q$.} \label{fig:workunits} \end{figure} \subsubsection{Vertices}\label{sec:wu_vertices} One possibility is to consider each vertex $v \in V(G)$ as a work-unit and split them among workers. A worker $p$ then computes all size-$k$ subgraph occurrences that contain vertex $v$. Naive approaches have different workers finding repeated occurrences that need to be removed~\cite{wang2005parallel}, but efficient sequential algorithms have canonical representations that eliminate this problem~\cite{wernicke2005faster,kashani2009kavosh,ribeiro2014g}, making each work-unit independent. Using vertices as work-units has the drawback of creating very coarse work-units: different vertices induce search spaces with very different computational costs. For instance, counting all the subgraph occurrences that start (or eventually reach) a hub-like node is much more time-consuming than counting occurrences of a nearly isolated node. For algorithms with vertices as work-units to be efficient, they must either try to find a good initial division \cite{wang2005parallel} or enable work sharing between workers \cite{ribeiro2010parallel,ribeiro2012parallel,aparicio2014parallel,aparicio2014scalable}.
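As a toy illustration (our own sketch, not code from any of the cited implementations), the simplest initial static division of vertex work-units is a round-robin split:

```python
def static_vertex_division(vertices, num_workers):
    """Round-robin split of vertex work-units among workers (minimal sketch).

    Each vertex stands for the task of counting all k-subgraphs found from it
    under a canonical rule, so work-units are independent. Real systems pair
    this with cost estimates or runtime work sharing, since equally sized
    lists can still carry very different amounts of work (e.g. hub vertices).
    """
    return [vertices[p::num_workers] for p in range(num_workers)]

# Seven vertices split between two workers:
print(static_vertex_division([0, 1, 2, 3, 4, 5, 6], 2))
# -> [[0, 2, 4, 6], [1, 3, 5]]
```

Note that the lists have balanced sizes but not necessarily balanced cost, which is exactly the coarseness problem discussed above.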
Each of these work division strategies is discussed in Section~\ref{sec:workdivision}. \subsubsection{Edges} Due to the unbalanced search tree induced by vertex division, some algorithms use edges as work-units \cite{liu2009mapreduce,shahrivari2015fast,ahmed2015efficient,shahrivari2015distributed}. The idea is similar to vertex division: distribute all $e(v_i, v_j) \in E(G)$ evenly among the workers. An initial edge division guarantees that all workers have an equal number of 2-node subgraphs, which is not true for vertex division. However, for $k \geq 3$ this strategy offers no guarantees in terms of workload balancing. Therefore, in regular networks (i.e., networks where all nodes have similar clustering coefficients) this strategy achieves a good speedup, but it is not scalable in general. Some methods~\cite{schatz2008parallel,rossi2016leveraging} perform dynamic first-fit division (discussed in Section~\ref{sec:firstfit}) instead of the simple static division described above. \subsubsection{Subgraphs} At the start of computation, only vertices and edges from the network are known. As the $k$-subgraph counting process proceeds, subgraphs of sizes $k-i, i < k$ are found. Thus, the work-units divided among threads can be these intermediate states (incomplete subgraphs). Some BFS-based algorithms~\cite{liu2009mapreduce,milinkovic14contribution,shahrivari2015distributed,lin2015network} begin with either edges or vertices as initial work-units and, at the end of each BFS-level, intermediate subgraphs are found and divided among workers. DFS-based methods expand each subgraph work-unit by one node until they reach a $k$-subgraph~\cite{shahrivari2015distributed, elenberg2016distributed}. \subsubsection{Isoclass} Instead of partitioning the graph, like the previous three strategies do, one can instead choose to partition the set of isoclasses being enumerated~\cite{schatz2008parallel}.
Work-units split in this fashion have problems similar to those previously discussed: isomorphic classes do not induce computationally equivalent search spaces. For instance, in sparse networks it is much more time-consuming to enumerate chains or hubs than cliques. \subsubsection{Subgraph-trees} This approach is applicable only to DFS-like algorithms where, since the search tree is explored in a depth-first fashion, a work-tree is implicitly built during enumeration: when the algorithm is at level $k$ of the search, unexplored candidates of stages $\{k-1, k-2, ... , 1\}$ were previously generated. Then, instead of splitting top vertices from stage 1 only (as described in Section~\ref{sec:wu_vertices}), the search-tree is split among sharing processors \cite{ribeiro2010efficient,ribeiro2010parallel,aparicio2014parallel,aparicio2014scalable} (more details on this in Section~\ref{sec:diagonalsplit}). Subgraph-tree work-units are \emph{expected} to induce similar amounts of work since both coarse- and fine-grained work-units are generated. Nevertheless, it is not guaranteed that work-units from the same level of the search tree induce similar work. This strategy also incurs the additional complexity of building the candidate-set of each level and splitting it among workers. \subsection{Search Traversal} Discounting the analytic approaches presented in Section~\ref{sec:anal}, subgraph counting algorithms typically count occurrences by traversing the graph. How graph traversal is performed greatly influences the parallel performance and is dependent on the platform, as discussed next. \subsubsection{Breadth-First Search}\label{sec:bfs} Algorithms that adopt this strategy are typically MapReduce methods~\cite{liu2009mapreduce,shahrivari2015distributed,ahmad2017scalable} or GPU~\cite{milinkovic14contribution,lin2015network,rossi2016leveraging} approaches. MapReduce works intrinsically in BFS fashion, and GPUs are very inefficient when work is unbalanced and contains branching code.
BFS starts by (i) splitting edges among workers; (ii) each worker extends its edges (size-2 subgraphs) into size-3 patterns; (iii) the patterns of size-3 are themselves split among workers; and (iv) this process is repeated until the desired size-$k$ patterns are obtained. The idea of BFS is to give large amounts of fine-grained work-units to each worker, thus making work division more balanced since these work-units induce similar work, making this approach more suitable for methods that require regular data. However, the main drawback is that these algorithms need to store partial results (which grow exponentially as $k$ increases) and synchronize at the end of each BFS-level. \subsubsection{Depth-First Search} To avoid the cost of synchronization and of storing partial results, most subgraph counting algorithms traverse the search space in a depth-first fashion \cite{wang2005parallel,grochow2007network,ribeiro2010efficient,ribeiro2010parallel,ribeiro2012parallel,aparicio2014parallel,aparicio2014scalable,shahrivari2015fast,ahmed2015efficient}. This means that the algorithm starts with $V_{sub} = \{v\}$ and incrementally adds a new node to $V_{sub}$ until it obtains a match of the desired size, backtracking to find new matches. This strategy leads to unbalanced search spaces, caused by coarse-grained work-units, that need to be controlled. \subsection{Work Division}\label{sec:workdivision} Splitting work is obviously essential for a parallel approach. Work can be divided at two moments: (i) an initial work division before subgraph counting starts and/or (ii) divisions during runtime. \subsubsection{Static} The simplest form of work division is to produce an initial distribution of work-units and proceed with the parallel computation, without ever spending time dividing work during runtime.
Trying to obtain an estimation of the work beforehand~\cite{wang2005parallel,rossi2016leveraging} is valuable but limited: if the estimation is done quickly but is not very precise (such as using node degrees or clustering coefficients to estimate work-unit difficulty), few guarantees are offered that the work division is balanced, and obtaining a very precise estimation is as computationally expensive as doing subgraph enumeration itself. Following a BFS approach~\cite{liu2009mapreduce,milinkovic14contribution,shahrivari2015distributed} helps balance the work-units, and a static work division at each BFS-level is usually sufficient to obtain good results. However, those strategies have limitations, as discussed in Section~\ref{sec:bfs}. Some analytic works, which do not rely on explicit subgraph enumeration, do not need advanced work division strategies because their algorithm is almost embarrassingly parallel~\cite{rossi2017estimation}. \subsubsection{Dynamic: First-fit}\label{sec:firstfit} Instead of trying to estimate a good division, one can generate work on-demand during runtime. One isomorphic class~\cite{schatz2008parallel} or small portions of the graph~\cite{shahrivari2015fast} can initially be given to each processor and, when that computation is done, idle processors request more work. This strategy has the penalty of maintaining a global queue of work-units to be processed. Furthermore, the last $|P|$ work-units (where $|P|$ is the number of workers) can have different granularities (and thus computational cost), so the speedup is largely dependent on how well-balanced they are. \subsubsection{Dynamic: Diagonal Work Splitting}\label{sec:diagonalsplit} Algorithms that employ this strategy~\cite{ribeiro2010efficient,ribeiro2010parallel,aparicio2014parallel,aparicio2014scalable} perform an initial static work division.
They do not need a sophisticated criterion for choosing to whom work-units are assigned because work will be dynamically redistributed during runtime: whenever workers are idle, some work will be relocated from busy workers to them. Furthermore, instead of simply giving half of its top-level work-units away and keeping the other half, a busy worker fully splits its work tree. The main idea is to build work-units of both fine- and coarse-grained sizes, and this is particularly helpful in cases where a worker becomes stuck managing a very complex initial work-unit; this way, that work-unit is split in half, and it can be split iteratively among other workers if needed. These work-units can then either be stored in a global work queue, which a master worker is responsible for managing~\cite{ribeiro2010efficient,ribeiro2010parallel}, or sharing is conducted between the worker threads themselves~\cite{aparicio2014parallel,aparicio2014scalable} (more details in Sections~\ref{sec:masterworker} and~\ref{sec:workerworker}, respectively). \subsubsection{Dynamic: Timed Redistribution}\label{sec:adaptiveredist} Timed Redistribution is a way to avoid estimating work during runtime while guaranteeing that every worker (eventually) has work. Workers first receive work and try to process as much as they can. After a certain time, they all stop and work is redistributed. This strategy is especially useful when worker communication is not practical, such as in a MapReduce environment~\cite{ahmad2017scalable} or on the GPU. Setting an adequate threshold for work redistribution has a great impact: redistributing work too quickly has the drawback of wasting too much time in work division, while redistributing work too late has the drawback of having idle workers.
One solution is to use an adaptive threshold~\cite{ahmad2017scalable}: if workers are too often without work, the threshold of the next iteration is lowered; if workers too often have much work left to compute, the threshold of the next iteration is raised. \subsection{Work Sharing} Since work is unbalanced for enumeration algorithms, work sharing can be used in order to balance work during runtime. \subsubsection{Master-Worker (M-W)}\label{sec:masterworker} This type of work sharing is mostly adopted in distributed memory (DM) environments, since workers do not share memory locations that they can easily access and use to communicate. A master worker initially splits the work-units among the workers (slaves) and then manages load balancing. Load balancing can be achieved by managing a global queue where slaves put some of their work, to be later redistributed by the master~\cite{ribeiro2010parallelesu}. This strategy implies that the master is not used for the enumeration itself and that communication over the network is needed. \subsubsection{Worker-Worker (W-W)}\label{sec:workerworker} Shared memory (SM) environments allow for direct communication between workers, therefore a master node is redundant. In this strategy, an idle worker asks a random worker for work~\cite{aparicio2014parallel,aparicio2014scalable}. One could try to estimate which worker should be polled for work (which is computationally costly), but random polling has been established as an efficient heuristic for dynamic load balancing \cite{sanders1994detailed}. After the sharing process, computation resumes with each worker involved in the exchange computing its part of the work. Computation ends when all workers are polling for work. This strategy achieves a balanced work division during runtime, and the penalty caused by worker communication is negligible~\cite{aparicio2014parallel,aparicio2014scalable}.
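As a toy illustration of W-W random polling (the queue representation and the take-half policy are our own assumptions, not the exact mechanics of the cited implementations), one sharing step can be sketched as:

```python
import random

def poll_for_work(worker_queues, idle_id, rng=random.Random(0)):
    """One W-W sharing step (illustrative sketch): the idle worker polls a
    randomly chosen other worker and, if that victim has queued work-units,
    takes roughly half of them. Returns True if any work was transferred."""
    victim = rng.choice([w for w in range(len(worker_queues)) if w != idle_id])
    stolen = worker_queues[victim][::2]          # every other unit, ~half
    worker_queues[victim] = worker_queues[victim][1::2]
    worker_queues[idle_id].extend(stolen)
    return bool(stolen)

# Worker 1 is idle and steals half of worker 0's queue:
queues = [[1, 2, 3, 4], []]
poll_for_work(queues, 1)
print(queues)  # -> [[2, 4], [1, 3]]
```

In a real implementation each queue would live in shared memory behind a lock, and computation would end once every worker is polling, as described above.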
Most implementations of W-W sharing are built on top of relatively homogeneous systems, such as multicore CPUs~\cite{shahrivari2015fast} or clusters of similar processors~\cite{schatz2008parallel}. In these systems, since all workers are equivalent, it is irrelevant which ones get a specific easy (or hard) work-unit, thus only load balancing needs to be controlled. Strategies that combine CPUs with GPUs, for instance, can split tasks in a way that takes advantage of both architectures: GPUs are very fast for regular tasks while CPUs can deal with irregular ones. For instance, a shared deque can be kept onto which workers, either GPUs or CPUs, put work and from which they take work~\cite{rossi2016leveraging}; the deque is ordered by complexity: complex tasks are placed at the front, and simple tasks at the end. The main idea is that CPUs handle just a few complex work-units from the front of the deque while GPUs take large bundles of work-units from the back. \section{Exact Counting} \label{sec:exact} As subgraph counting evolved over the years, a multitude of algorithms and methods were developed that address the problem in different ways and for distinct purposes. As such, it is useful, although not easy, to group strategies together in order to facilitate their understanding as well as to learn why and how they came about. With this in mind we divided this section into two major groups of algorithms, namely enumeration and analytic approaches, which are further subdivided in their respective sections. Table~\ref{tab:over_exact} summarizes our proposed taxonomy, composed of six aspects, with methods ordered by publication year: (i) \textbf{approach} (enumeration or analytic), (ii) \textbf{type} (a subgroup of the underlying approach), (iii) \textbf{$k$-restriction} (does the method only work for certain subgraph sizes?), (iv) \textbf{orbit awareness} (does the method also count orbits?), (v) \textbf{directed} (is the method applicable to directed graphs?)
and (vi) if code is \textbf{publicly available}. At the end of this section, we also present some related theoretical results that influenced some of the algorithms we discuss. \begin{table}[H] \small \centering \def\arraystretch{1.0} \caption{Overview of all major exact algorithms.} \label{tab:over_exact} \vspace{-0.2cm} \begin{tabular}{$l^c^c^c^c^c^c^c} \rowstyle{\bfseries} & Year & Approach & Type & $k$-restriction & Orbit & Directed & Code \\ \hline \textsc{Mfinder} \cite{milo2002network} & 2002 & Enum. & Classical & None & \xmark & \cmark & \cite{mfinder}\\ \textsc{ESU} \cite{wernicke2005faster,wernicke2006fanmod} & 2005 & Enum. & Classical & None & \xmark & \cmark & \cite{fanmod} \\ \textsc{Itzhack} \cite{itzhack2007optimal} & 2007 & Enum. & Classical & $\leq 5$ & \xmark & \cmark & \xmark \\ \textsc{Grochow} \cite{grochow2007network} & 2007 & Enum. & Single-subgraph & None & \xmark & \cmark & \xmark \\ \textsc{Kavosh} \cite{kashani2009kavosh} & 2009 & Enum. & Classical & None & \xmark & \cmark & \cite{kavosh}\\ \textsc{Gtries} \cite{ribeiro2010g,ribeiro2014g} & 2010 & Enum. & Encapsulation & None & \cmark & \cmark & \cite{gtries}\\ \textsc{Rage} \cite{marcus2010efficient,marcus2012rage} & 2010 & Analytic & Decomposition & $\leq 5$ & \xmark & \cmark & \cite{rage} \\ \textsc{NeMo} \cite{koskas2011nemo} & 2011 & Enum. & Single-subgraph& None & \xmark & \cmark & \cite{nemo} \\ \textsc{Netmode} \cite{li2012netmode} & 2012 & Enum. & Encapsulation & $\leq 6$ & \xmark & \cmark & \cite{netmode}\\ \textsc{SCMD} \cite{wang2012symmetry} & 2012 & Enum. & Encapsulation & None & \xmark & \xmark & \xmark\\ \textsc{acc-Motif} \cite{meira2012accelerated,meira2014acc} & 2012 & Analytic & Decomposition & $\leq 6$ & \xmark & \cmark & \cite{accmotif}\\ \textsc{ISMAGS} \cite{demeyer2013index,houbraken2014index} & 2013 & Enum. & Single-subgraph & None & \xmark & \cmark & \cite{ismags}\\ \textsc{Quatexelero} \cite{khakabimamaghani2013quatexelero} & 2013 & Enum.
& Encapsulation & None & \xmark & \cmark & \cite{quatexelero}\\ \textsc{FaSE} \cite{paredes2013towards} & 2013 & Enum. & Encapsulation & None & \xmark & \cmark & \cite{gtscanner} \\ \textsc{ENSA} \cite{zhang2014motif} & 2014 & Enum. & Encapsulation & None & \xmark & \cmark & \xmark \\ \textsc{Orca} \cite{hovcevar2014combinatorial,hovcevar2017combinatorial} & 2014 & Analytic & Matrix-based & $\leq 5$ & \cmark & \xmark & \cite{orca}\\ \textsc{Hash-ESU} \cite{zhao2015hashesu} & 2015 & Enum. & Encapsulation & None & \xmark & \cmark & \xmark \\ \textsc{Song} \cite{song2015method} & 2015 & Enum. & Encapsulation & None & \xmark & \cmark & \xmark \\ \textsc{Ortmann} \cite{ortmann2016quad,ortmann2017efficient} & 2016 & Analytic & Matrix-based & $\leq 4$ & \cmark & \cmark & \xmark\\ \textsc{PGD} \cite{ahmed2015efficient,ahmed2016estimation} & 2016 & Analytic & Decomposition & $\leq 4$ & \cmark & \xmark & \cite{PGD}\\ \textsc{Patcomp} \cite{jain2017impact} & 2017 & Enum. & Encapsulation & None & \xmark & \cmark & \xmark \\ \textsc{Escape} \cite{pinar2017escape} & 2017 & Analytic & Decomposition & $\leq 5$ & \cmark & \xmark & \cite{escape} \\ \textsc{Jesse} \cite{melckenbeeck2017,melckenbeeck2019optimising} & 2017 & Analytic & Matrix-based & None & \cmark & \xmark & \cite{jesse} \\ \end{tabular} \end{table} \subsection{Enumeration approaches} A significant part of the history of practical subgraph counting algorithms is intertwined with network motif analysis. This is because when motifs were first proposed~\cite{milo2002network}, they raised the interest and necessity for efficient subgraph counting, which has since been growing and establishing itself as a very important graph analysis primitive with multidisciplinary applicability. Exact subgraph counting consists of \emph{counting} and \emph{categorizing} (i.e., determining the isomorphic class of) all subgraph occurrences. 
Early methods \emph{first enumerate} all connected subgraphs with $k$ vertices and \emph{only afterwards categorize} each subgraph found using a graph isomorphism tool like \textsc{nauty} \cite{mckay2003nauty}. We refer to these as {\bf classical methods}. Many methods followed this strategy, until new methods appeared that counted the frequency of a single subgraph category instead, thus avoiding the categorization step required by the classical methods. This was done by only enumerating one particular subgraph of interest. Even though they were not the fastest methods for a network-centric application, they were an important milestone towards the methods that followed. We refer to these as {\bf single-subgraph-search methods}. The next step was to combine the two previous ideas into a more efficient approach: merge the enumeration and categorization steps together. This was achieved in different ways, such as using common topological features of subgraphs or pre-computing some information about subgraphs to avoid repeated isomorphism computations. We refer to these as {\bf encapsulation methods}. The next sections thoroughly delve into the most well-known methods of each category, giving a historical perspective on each, in an effort to understand each method's breakthroughs and drawbacks, and how subsequent algorithms built upon them to reduce (or mitigate) their limitations. \subsubsection{Classical methods} \label{sec:exact_class} In their seminal work, \citeAutRef{milo2002network} first defined the concept of network motif and also proposed \textsc{MFinder}, an algorithm to count subgraphs. \textsc{MFinder} is a recursive backtracking algorithm that is applied to each edge of the network. A given edge is initially stored on a set $S$, which is recursively grown using edges that are not in $S$ but share one endpoint with at least one edge in $S$.
When $|S| = k$, the algorithm checks if the subgraph induced by $S$ has been found for the first time by keeping a hash table of subgraphs already found. If the subgraph was reached for the first time, the algorithm categorizes it and updates the hash table (otherwise, the subgraph is ignored). Another very important work, by \citeAutRef{wernicke2005faster}, proposed a new algorithm called \textsc{ESU}, also known as \textsc{FANMOD} due to the graphical tool that uses \textsc{ESU} as its core algorithm~\cite{wernicke2006fanmod}. This algorithm greatly improved on \textsc{MFinder} by never counting the same subgraph twice, thus avoiding the need to store all subgraphs in a hash table. \textsc{ESU} applies the same recursive method to each vertex $v$ of the input graph $G$: it uses two sets $V_S$ and $V_E$, which are initially set as $V_S = \{v\}$ and $V_E = \{u \in N(v) : L(u) > L(v)\}$. Then, for each vertex $u$ in $V_E$, it removes $u$ from $V_E$, adds it to the subgraph being enumerated ($V_S = V_S \cup \{u\}$), and updates $V_E = V_E \cup \{w \in N_{exc}(u, V_S) : L(w) > L(v)\}$ (where $v$ is the vertex from which the enumeration started). The $N_{exc}$ here makes sure we only grow the list of candidates with vertices that are neither in $V_S$ nor adjacent to it, and the condition $L(w) > L(v)$ is used to break symmetries, consequently preventing any subgraph from being found twice. This process is repeated recursively until $V_S$ has $k$ elements, which means $V_S$ contains a single occurrence of a $k$-subgraph. At the end of the process, \textsc{ESU} performs isomorphism tests to assess the category of each subgraph occurrence, which is a considerable bottleneck. \citeAutRef{itzhack2007optimal} proposed a new algorithm that was able to count subgraphs using constant memory (in relation to the size of the input graph). Itzhack et al. did not name their algorithm, so we will refer to it as \textsc{Itzhack} from here on.
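Stepping back to \textsc{ESU}: the recursion just described can be sketched in a few lines of Python (a simplified, enumeration-only sketch; vertex labels themselves play the role of $L(\cdot)$, and the final categorization step is omitted):

```python
def esu(graph, k):
    """Enumerate every connected k-vertex subgraph exactly once (ESU sketch).

    `graph` maps each vertex to its set of neighbours; vertex labels must be
    comparable, playing the role of L(v) in the text.
    Returns one frozenset of vertices per subgraph occurrence.
    """
    occurrences = []

    def extend(v_sub, v_ext, root):
        if len(v_sub) == k:
            occurrences.append(frozenset(v_sub))
            return
        ext = set(v_ext)
        while ext:
            w = ext.pop()
            # Exclusive neighbourhood of w: vertices neither in v_sub nor
            # adjacent to it, restricted to labels greater than the root's.
            excl = {u for u in graph[w]
                    if u > root and u not in v_sub
                    and all(u not in graph[s] for s in v_sub)}
            extend(v_sub | {w}, ext | excl, root)

    for v in graph:
        extend({v}, {u for u in graph[v] if u > v}, v)
    return occurrences

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(len(esu(triangle, 3)))  # -> 1: the triangle is found exactly once
```

Because every recursion branch extends only with higher-labelled, not-yet-reachable vertices, no hash table of previously seen subgraphs is needed, which is precisely the improvement over \textsc{MFinder}.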
\textsc{Itzhack} avoids explicitly computing the isomorphism class of each counted subgraph by caching it for each different adjacency matrix, seen as a bitstring. This strategy only works for subgraphs of size up to $k = 5$, since it would use too much memory for higher values. Additionally, the enumeration algorithm is also different from \textsc{ESU}. This method is based on counting all subgraphs that include a certain vertex, then removing that node from the network and repeating the same procedure for the remaining nodes. For each vertex $v$, first the algorithm considers the tree composed of the $k$-neighborhood of $v$, that is, a tree of all vertices at a distance of $k - 1$ or less from $v$. This is very similar to the tree obtained from performing a breadth-first search starting on $v$, with the difference that vertices that appear on previous levels of the tree are excluded if visited again. This tree can be traversed in a way that avoids actually creating it by following neighbors, and thus only using constant memory. To perform the actual search, the method uses the concept of {\it counting patterns}, which are different combinatorial ways of choosing vertices from different levels of the tree. For instance, if we are searching for 3-subgraphs, and considering that at the tree root level we can only have one vertex, we could have the combinations with pattern 1-2 (one vertex at root level 0, two vertices at level 1) or with pattern 1-1-1 (one vertex at root level 0, one at level 1 and one at level 2). In an analogous way, 4-subgraphs would lead to patterns 1-1-1-1, 1-1-2, 1-2-1 and 1-3. Itzhack et al. claimed that \textsc{Itzhack} is over 1,000 times faster than \textsc{ESU}; however, the author of \textsc{ESU} disputed this claim in \cite{wernicke2011comment}, stating that the experimental setup was faulty and that \textsc{Itzhack} is only slightly faster than \textsc{ESU} (its speedup attributable mainly to the caching procedure).
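The counting patterns described above are exactly the compositions of $k-1$ prefixed by the single root vertex, so they can be generated with a short sketch (our own illustration; the function names are not from the original paper):

```python
def counting_patterns(k):
    """All level patterns for k-subgraphs: a leading 1 (the single root
    vertex) followed by a composition of k-1, i.e. how many vertices are
    taken from each subsequent level of the neighborhood tree."""
    def compositions(n):
        # All ordered sequences of positive integers summing to n.
        if n == 0:
            yield []
            return
        for first in range(1, n + 1):
            for rest in compositions(n - first):
                yield [first] + rest

    return [[1] + c for c in compositions(k - 1)]

print(counting_patterns(3))  # the two patterns 1-1-1 and 1-2, in some order
```

For $k = 4$ this reproduces the four patterns 1-1-1-1, 1-1-2, 1-2-1 and 1-3 listed in the text.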
\citeAutRef{kashani2009kavosh} proposed a new algorithm called \textsc{Kavosh}. Like \textsc{ESU} and \textsc{Itzhack}, the core idea of \textsc{Kavosh} is to find all subgraphs that include a particular vertex, then remove that vertex and continue from there iteratively. Its functioning is very similar to that of \textsc{Itzhack}: it builds an implicit breadth-first search tree and then uses a concept similar to the counting patterns used by \textsc{Itzhack}. However, it is a more general method since it does not perform any caching of isomorphism information, allowing the enumeration of larger subgraphs. \subsubsection{Single-subgraph-search methods} The idea that it is possible to obtain a very efficient method for counting a single subgraph category was first noticed by \citeAutRef{grochow2007network}. Their base method consists of a backtracking algorithm that is applied to each vertex. It tries to build a partial mapping from the input graph to the target subgraph (the subgraph it is trying to count) by building all possible assignments based on the number of neighbours. Grochow and Kellis also suggested an improvement based on symmetry breaking, using the automorphisms of the target subgraph to build a set of conditions, of the form $L(a) < L(b)$, that prevent the same subgraph from being counted multiple times. This symmetry breaking idea allowed for considerable improvements in runtime, especially for higher values of $k$. Grochow and Kellis did not name their algorithm, so we will refer to it as the \textsc{Grochow} algorithm from here on. \citeAutRef{koskas2011nemo} presented a new algorithm which they called \textsc{NeMo}. This method draws some ideas from \textsc{Grochow}, since it performs a backtracking-based search with symmetry breaking in a similar fashion.
However, instead of using conditions on vertex labels, it finds the orbits of the target subgraph and forces an ordering between the labels of the vertices from the input graph that match vertices in the target subgraph with the same orbit. Additionally, it uses a few heuristics to prune the search early, such as ordering the vertices of the target subgraph so that, for all $1 \leq i \leq k$, its first $i$ vertices are connected. \textsc{ISMAGS}, which is based on its predecessor \textsc{ISMA} \cite{demeyer2013index}, was proposed by \citeAutRef{houbraken2014index}. The base idea of this method is similar to the one in \textsc{Grochow}; however, the authors use a clever node ordering and other heuristics to speed up the partial mapping procedure. Additionally, their symmetry breaking conditions are significantly improved by applying several heuristic techniques based on group theory. \subsubsection{Encapsulation methods} \label{sec:exact_encap} The ideas applied in \textsc{Grochow} introduced a way of escaping the classic setup of enumerating and then categorizing subgraphs, albeit focusing on a single subgraph. The next step would be to extend this idea to a more general algorithm, appropriate for full subgraph counting. This was first done by \citeAutRef{ribeiro2010g} using a new data-structure they called the {\it g-trie}, for {\it graph trie}. The {\it g-trie} is a prefix tree for graphs: each node represents a different graph, and the graph of a parent node is a common substructure of the graphs of its child nodes, each child graph extending its parent's graph by one additional vertex. The root represents the one-vertex graph and has one child, a node representing the edge graph, which in turn has two children representing the triangle graph and the 3-path, and so on. This tree can be augmented by giving each node symmetry breaking conditions similar to those from \textsc{Grochow}.
The authors show how to efficiently build this data-structure and augment it with the symmetry breaking conditions for any set of graphs. They also describe a subgraph counting algorithm based on using this data-structure along with an enumeration technique similar to that of \textsc{Grochow}. However, since this data-structure encapsulates the information of multiple graphs in a hierarchical order, it achieves a much faster full subgraph counting algorithm. The usage of this data-structure has been significantly extended since its original publication, such as a version for colored networks \cite{ribeiro2014discovering} or an orbit aware version \cite{aparicio2016extending}. A more detailed discussion of the data-structure and the subgraph counting algorithm is presented in \cite{ribeiro2014g}. Also, even though the subgraph counting algorithm was not named, we will refer to it as the \textsc{Gtrie} algorithm from here on. \textsc{Gtrie} encapsulates common topological information of the subgraphs being counted, but there are other approaches, such as that of \citeAutRef{li2012netmode}, who developed \textsc{Netmode}. It builds on \textsc{Kavosh}, using its enumeration algorithm, but instead of using \textsc{nauty} to perform the categorization step, it makes use of a cache to store isomorphism information and thus is able to perform this step in constant time. This is very similar to what \textsc{Itzhack} does; however, Li et al. suggested an improvement that allows \textsc{Netmode} to scale to $k = 6$ without using too much memory. This improvement is based on the {\it reconstruction conjecture}~\cite{harary1974survey}, which states that two graphs with 3 or more vertices are isomorphic if their deck (the set of isomorphism classes of all vertex-deleted subgraphs of a graph) is the same.
This is known to be false for directed graphs with $k = 6$, but there are very few counter-examples, which can be directly stored as in the $k \leq 5$ case; thus \textsc{Netmode} applies the conjecture for all the remaining cases by building their deck, hashing its value and storing its count in a table. \citeAutRef{wang2012symmetry} proposed a new method called \textsc{SCMD} that counts subgraphs in compressed networks. \textsc{SCMD} applies a symmetry compression method that finds sets of vertices that are in a homeomorphism to cliques or empty subgraphs, with the additional property that any other vertex that connects to one vertex in the set is connected to all other vertices in the set. These sets of vertices form a partition of the graph that is obtained using a method published in~\cite{macarthur2008symmetry}, which is based on looking at vertices in the same orbit. This is a versatile method that can use algorithms like \textsc{ESU} or \textsc{Kavosh} to enumerate all subgraphs of sizes from 1 to $k$ in the compressed network. Finally, \textsc{SCMD} ``decompresses'' the results by looking at all the different enumerated subgraphs and calculating all the combinations that can form a decompressed subgraph. For example, for $k = 3$, suppose a compressed 2-subgraph is found containing two vertices: one compressed vertex representing a clique of 5 uncompressed vertices and another representing a single vertex from the uncompressed graph. It results in $\binom{5}{2} + \binom{5}{3}$ triangles from the uncompressed graph: $\binom{5}{2}$ of them are obtained by taking two vertices from the clique vertex plus the single vertex (which are all pairwise connected and thus form a triangle), and the remaining $\binom{5}{3}$ by taking three vertices from the clique vertex. The authors argue that most complex networks exhibit high symmetries and thus are improved by the application of this technique.
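As a sanity check (our own illustration, not code from \textsc{SCMD}), the decompression arithmetic of the example above can be verified directly:

```python
from math import comb

# A compressed 2-subgraph: one vertex standing for a 5-clique and one
# ordinary vertex that, by the compression property, is adjacent to every
# clique member. Uncompressed triangles come in two flavours:
mixed = comb(5, 2)    # two clique members plus the ordinary vertex
inside = comb(5, 3)   # three clique members (any triple of a clique)
print(mixed + inside)  # -> 20 uncompressed triangles
```

The same counting-by-cases idea generalizes to every compressed subgraph found during enumeration.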
Even though their work only includes undirected graphs, the authors affirm it is easy to extend the same concepts to directed networks. {\bf Xu et al.} described another algorithm that enumerates subgraphs on compressed networks, called \textsc{ENSA}~\cite{xu2014new,zhang2014motif}. Their method is based on a heuristic graph isomorphism algorithm, and they also discuss an optimization based on identifying vertices with unique degrees. Following the ideas first applied in \textsc{Gtrie}, \citeAutRef{khakabimamaghani2013quatexelero} proposed a new algorithm they called \textsc{Quatexelero}. \textsc{Quatexelero} is built upon any incremental enumeration algorithm, like \textsc{ESU}, and it implements a data structure similar to a quaternary tree. Each node in the tree represents a graph that can be built by looking at the nodes on the path from it to the root of the tree. Additionally, all graphs represented by a single node belong to the same isomorphism class. To fill the tree, initially a pointer to the root of the tree is set. Whenever a new vertex is added to the partial enumeration mapping, \textsc{Quatexelero} looks at the existing edges between the newly added vertex and the previously existing vertices in the mapping and stores this information in the quaternary tree. For each vertex in the mapping, depending on whether there is no edge, an inedge, an outedge or a biedge between it and the newly added vertex, the pointer is assigned to one of its four children, creating it if it did not exist. In parallel with the publication of \textsc{Quatexelero}, \citeAutRef{paredes2013towards} proposed \textsc{FaSE}. The idea of \textsc{FaSE} is similar to that of \textsc{Quatexelero}; however, instead of using a quaternary tree, it uses a data-structure similar to the g-trie, albeit without the symmetry breaking condition augmentation. 
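The four-way branching rule that drives the quaternary tree can be sketched as follows; the dict-based tree and all names are illustrative, not the authors' implementation.

```python
# Adjacency types between the newly mapped vertex and each vertex
# already in the partial enumeration mapping.
NONE, IN, OUT, BI = range(4)

def edge_type(g, u, v):
    """g is a set of directed edges (a, b) meaning a -> b."""
    fwd, bwd = (u, v) in g, (v, u) in g
    return BI if fwd and bwd else OUT if fwd else IN if bwd else NONE

def descend(tree, g, mapping, new_v):
    """Walk (and extend) one tree level per previously mapped vertex,
    then bump the occurrence counter at the reached node."""
    node = tree
    for u in mapping:
        node = node.setdefault(edge_type(g, u, new_v), {})
    node["count"] = node.get("count", 0) + 1
    return node

g = {(0, 1), (1, 2), (2, 1)}   # 0 -> 1 <-> 2
tree = {}
descend(tree, g, [0, 1], 2)    # types: (0,2) = NONE, (1,2) = BI
print(tree[NONE][BI]["count"])  # 1
```

Since all occurrences reaching the same tree node share an isomorphism class, \textsc{nauty} needs to be called only once per node rather than once per occurrence.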
This data-structure has the same property as the quaternary tree that every node represents a graph, and each node is built using the adjacency information of a newly added vertex in relation to the vertices present in its parent. Other works that extend these ideas have been proposed subsequently. For example, \citeAutRef{zhao2015hashesu} proposed \textsc{Hash-ESU}, an algorithm based on the same idea as \textsc{Quatexelero} and \textsc{FaSE}, but which hashes the adjacency information instead of storing it in a tree. Another example is the work by \citeAutRef{song2015method}. They describe a method that starts by enumerating all $k = 3$ subgraphs using \textsc{ESU} and then uses dynamic programming to grow connected sets and perform the counting. Their algorithm was not named, so we will refer to it as the \textsc{Song} algorithm from here on. Both \textsc{Quatexelero} and \textsc{FaSE} have potential memory issues, since there may be several nodes representing the same graph, which is not a problem for \textsc{Gtrie} since it only stores one copy of each possible graph. To address this, \citeAutRef{jain2017impact} proposed \textsc{Patcomp}. Their method compresses the quaternary tree using a technique similar to a radix tree; however, it is 2 to 3 times slower and only saves around $10\%$ of the memory usage. \subsection{Analytic approaches}\label{sec:anal} Since the overall goal of the problem we are aiming to solve is to count subgraphs, it is not necessary to explicitly enumerate each connected set of size $k$. Here lies the difference between {\it counting} and {\it enumerating} or {\it listing}. It was with this in mind that a new class of methods emerged, striving to avoid enumerating all subgraphs in a graph. We can point out two main approaches to this type of counting. The first one tries to relate the frequency of each subgraph with the frequencies of other subgraphs of the same or smaller size. 
This permits constructing a system of linear equations relating subgraph frequencies that can be solved using traditional linear algebra methods. We refer to these as {\bf matrix based methods}. The second approach targets each subgraph individually by decomposing it into several smaller patterns of graph properties, like common neighbors, or triangles that touch two vertices. We refer to these as {\bf decomposition methods}. \subsubsection{Matrix based methods} The first known method to apply a practical analytic approach based on matrix multiplication to subgraph counting was \textsc{ORCA}, a work by \citeAutRef{hovcevar2014combinatorial}, which is based on counting orbits and not directly subgraphs. Their original work was targeted at orbits in subgraphs up to 5 vertices and, because of that, they count induced subgraphs specifically, while most analytic approaches count non-induced occurrences. \textsc{ORCA} works by setting up a system of linear equations per vertex of the input graph that relate different orbit frequencies, which are the system's variables. This system of linear equations contains information about the input graph. By construction, the matrix has a rank equal to the number of orbits minus 1; thus, to solve it one only needs to find the value of one of the orbit frequencies and use any standard linear algebra method to solve the system. Usually, the orbit pertaining to the clique is chosen, since there are efficient algorithms to count this orbit and, for sparse enough networks, it is usually the one with the fewest occurrences, making it less expensive to count. Later, the authors of \textsc{ORCA} extended their work by suggesting a way of producing equations for arbitrarily sized subgraphs~\cite{hovcevar2017combinatorial}, although their available practical implementation is still limited to size 5~\cite{orca}. 
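A toy instance of such a frequency relation, in the spirit of fixing one cheap class (as \textsc{ORCA} does with the clique orbit) and deriving the rest: for undirected size-3 subgraphs, the non-induced 2-path count follows from degrees alone, and each triangle accounts for three non-induced 2-paths. This is an illustration of the idea only, not \textsc{ORCA} itself.

```python
from itertools import combinations

def size3_census(n, edges):
    """Frequencies of the two connected undirected 3-subgraphs."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    # "cheap" class: triangles, counted directly
    tri = sum(1 for a, b, c in combinations(range(n), 3)
              if b in adj[a] and c in adj[a] and c in adj[b])
    # non-induced 2-paths from degrees alone: sum of C(deg, 2)
    noninduced_p2 = sum(len(a) * (len(a) - 1) // 2 for a in adj)
    # each triangle hides three non-induced 2-paths
    induced_p2 = noninduced_p2 - 3 * tri
    return tri, induced_p2

# triangle 0-1-2 plus a pendant vertex 3 attached to vertex 2
print(size3_census(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))  # (1, 2)
```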
Another possible extension of \textsc{ORCA} was proposed by~\cite{melckenbeeck2017} with the \textsc{Jesse} algorithm, which was further complemented with a strategy for optimizing the computation by carefully selecting less expensive equations~\cite{melckenbeeck2019optimising}. Similar to \textsc{ORCA}, but using a different strategy, \citeAutRef{ortmann2016quad} proposed a new method, which they further improved and better described in \cite{ortmann2017efficient}. They also target orbits, but for subgraphs of size up to 4. Their approach is based on looking into non-induced subgraphs and using them to build linear equations that are less expensive to compute. Additionally, they also apply an improved clique counting algorithm. \citeAutRef{ortmann2016quad} did not name their algorithm, so we will refer to it as the \textsc{Ortmann} algorithm from here on. \subsubsection{Decomposition methods} Before \textsc{ORCA} was proposed, the first ever practical method that used an analytic approach to subgraph counting was \textsc{Rage}, by {\bf Marcus and Shavitt}~\cite{marcus2010efficient,marcus2012rage}. Their method is based on~\cite{gonen2009approximating}, which employs similar techniques but with a more theoretical focus. \textsc{Rage} targets non-induced subgraphs and orbits of size 3 and 4. It does so by running a different algorithm for each of the 8 existing subgraphs. Each algorithm is based on merging the neighborhoods of pairs of vertices to ensure that a given quartet of vertices has the desired edges to form a certain subgraph. \textsc{acc-Motif}, which was proposed by \citeAutRef{meira2012accelerated} and then further improved in \cite{meira2014acc}, was also one of the first methods to employ an analytic strategy, but stands out as the only known analytic method that also works for directed subgraphs. \textsc{acc-Motif} also targets non-induced subgraphs and its latest version supports subgraphs up to size 6. 
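The neighborhood-merging primitive at the core of these decomposition methods reduces to intersecting the neighborhoods of a vertex pair; a minimal sketch for the triangle case (illustrative, not any of the cited implementations):

```python
def per_edge_triangles(adj):
    """For each edge (u, v), |N(u) & N(v)| gives the triangles through
    that edge; summing over edges and dividing by 3 gives the total
    triangle count, since each triangle is seen once per edge."""
    total = 0
    for u in range(len(adj)):
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                total += len(adj[u] & adj[v])
    return total // 3

# K4: every one of the 6 edges lies in 2 triangles -> 12 / 3 = 4
k4 = [set(range(4)) - {i} for i in range(4)]
print(per_edge_triangles(k4))  # 4
```

The same intersection step, with different bookkeeping, feeds the counts of the larger size-4 patterns.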
Another method that followed this trend of decomposition methods is \textsc{PGD}, proposed by {\bf Ahmed et al.}~\cite{ahmed2015efficient,ahmed2017graphlet}. This method builds on the classic triangle counting algorithm to count several primitives that are then used to obtain the frequency of each subgraph and orbit. It is currently one of the fastest methods; however, it can only count undirected subgraphs of size 3 and 4. Additionally, like most analytic methods, it is highly parallelizable. Due to its versatile nature, \textsc{PGD} has been expanded to other frequency metrics and it stands out as one of the only available efficient methods that can count motifs incident to a vertex or edge of the graph~\cite{ahmed2016estimation}, in what is called a ``local subgraph count''. More recently, \textsc{ESCAPE} was proposed by \citeAutRef{pinar2017escape}. This method is based on a divide and conquer approach that identifies substructures of each subgraph being counted to partition them into smaller patterns. It is a very general method, but with the correct choices for decomposition, it is possible to describe a set of formulas to compute the frequency of each subgraph. The original paper only describes the resulting formulas for subgraphs up to size 5, although larger sizes can be obtained with some effort. As of this writing, it is possibly the most efficient algorithm to count undirected subgraphs and orbits up to size 5. \subsection{Theoretical Results} Even though the focus of this work is to look at the proposed practical algorithms, it is important to note that some of the existing work drew inspiration from numerous more theoretically-oriented works. Thus, it is of relevance to briefly summarize some of the achievements in this area, and we will do so with a special interest in those that directly influenced some of the algorithms discussed in this section. The first interest in subgraph counting stemmed from the world of enumeration algorithms. 
The book ``Enumeration in Graphs''~\cite{bezem1987enumeration} surveyed several methods to enumerate different structures in a graph, such as cycles, trees or cliques. Even though these are specific subpatterns, they often represent the fundamental computation that needs to be done in order to enumerate any subgraph. These ideas were translated into works that count subgraphs by efficiently enumerating simpler substructures like these~\cite{kloks2000finding,itzhack2007optimal}. Approximation schemes can also be developed with this in mind, which approximate the frequency of several subgraph families like cycles or paths and then generalize these for all size 4 subgraphs~\cite{gonen2009approximating}. Another example of an initially purely theoretical technique is the work by~\citeAutRef{kowaluk2013counting}, which was one of the inspirations for the multitude of matrix based analytic algorithms for counting subgraphs. In fact, the most efficient algorithms are based on several theoretical foundations that allow a tighter analysis of runtime. Due to this interplay, it is worth mentioning a few more recent papers on subgraph counting and enumerating. There is an interest in finding efficient algorithms that are parameterized by or sensitive to certain properties of the graph, such as independent sets~\cite{williams2013finding} or its maximum degree~\cite{bjorklund2018counting}. Another current interest is in counting and enumerating subgraphs in a dynamic or online environment~\cite{lin2012arboricity}. Finally, another active theoretical topic is to find optimal algorithms for enumeration, as in~\cite{ferreira2013efficiently}, as well as proving lower bounds on their time complexity, as~\citeAutRef{bjorklund2014listing} does for triangle listing. 
\subsection{Applications and Related Problems}\label{sec:applications} \subsubsection{Subgraph Isomorphism} Given two graphs $G$ and $H$, the \textit{subgraph isomorphism} problem is the computational task of determining if $G$ contains a subgraph isomorphic to $H$. Although efficient solutions might be found for specific graph types (e.g., linear solutions exist for planar graphs~\cite{eppstein2002subgraph}), this is a known NP-Complete problem for general graphs~\cite{cook1971complexity}, and can be seen as a much simpler version of counting, that is, determining if the number of occurrences is bigger than zero. This task is closely related to the \emph{graph isomorphism} problem~\cite{mckay1981practical, mckay2014practical}, that is, the task of determining if two given graphs are isomorphic. Since many subgraph counting approaches rely on finding the subgraphs contained in a large graph and then checking which isomorphic class the found subgraphs belong to, subgraph isomorphism can be seen as an integral part of them. The well known and very fast \textsc{nauty} tool~\cite{mckay2003nauty} is used by several subgraph counting algorithms to assess the type of the subgraph found~\cite{wernicke2006fanmod,paredes2013towards,ribeiro2014g}. \subsubsection{Subgraph Frequencies} The small patterns found in large graphs can offer insights about the networks. By considering the frequency of all $k$-subgraphs, we have a very powerful and rich feature vector that characterizes the network. There has been a long tradition of using the triad census in the analysis of social networks~\cite{wasserman1994social}, and it has been used as early as the 1970s to describe local structure~\cite{holland1976local}. 
Examples of applications in this field include studying social capital features such as brokerage and closure~\cite{prell2008looking}, discovering social roles~\cite{doran2014triad}, seeing the effect of individual psychological differences on network structure~\cite{kalish2006psychological} or characterizing communication~\cite{uddin2013dyad} and social networks~\cite{charbey2019stars}. Given the ubiquity of graphs, these frequencies have also been used in many other domains, such as in biological~\cite{sole2007spontaneous}, transportation~\cite{wandelt2015evolution} or interfirm networks~\cite{madhavan2004two}. \subsubsection{Network Motifs} A subgraph is considered a \textit{network motif} if it is somehow exceptional. Instead of simply using a frequency vector, motif based approaches construct a \textit{significance profile} that associates an importance to each subgraph, typically related to how overrepresented it is. This concept first appeared in 2002, when it was defined as subgraphs that occur more often than expected when compared against a null model~\cite{milo2002network}. The most common null model is to keep the degree sequence, and with this we can obtain characteristic network fingerprints that have been shown to be very rich and capable of classifying networks into distinct superfamilies~\cite{milo2004superfamilies}. Network motif analysis has since been used in a vast range of applications, such as in the analysis of biological networks (e.g., brain~\cite{sporns2004motifs}, regulation and protein interaction~\cite{yeger2004network} or food webs~\cite{bascompte2005simple}), social networks (e.g., co-authorship~\cite{choobdar2012comparison} or online social networks~\cite{duma2014network}), sports analytics (e.g., football passing~\cite{bekkers2017flow}) or software networks (e.g., software architecture~\cite{valverde2005network} or function-call graphs~\cite{wu2018software}). 
In order to compute the significance profile of motifs in a graph $G$, most conceptual approaches rely on generating a large set $R(G)$ of similar randomized networks that serve as the desired null model. Thus, subgraph counting needs to be performed both on the original network and on the set of randomized networks. If the frequency of a subgraph $S$ is \emph{significantly bigger} in $G$ than its average frequency in $R(G)$, we can consider $S$ to be a network motif of $G$~\cite{kashtan2004efficient}. Other approaches try to avoid exhaustive generation of random networks, and thus also avoid counting subgraphs on them, by following a more analytical approach capable of providing estimations of the expected frequencies (e.g., using an expected degree model~\cite{picard2008assessing, schbath2008assessing,micale2018fast} or a scale-free model~\cite{stegehuis2019variational}). Nevertheless, there is always the need to count subgraphs in the original network. While network motifs are usually about induced subgraph occurrences~\cite{milo2002network,wong2012biological}, there are some motif algorithms that count non-induced occurrences instead~\cite{omidi2009moda,li2012netmode}. Moreover, although most of the network motif usages assume the previously mentioned statistical view on significance as overrepresentation, there are other possible approaches~\cite{xia2019survey} such as using information theory concepts (e.g., motifs based on entropy~\cite{adami2011information,choodbdar2012weighted}, subgraph covers~\cite{wegner2014subgraph}, or minimum description length~\cite{bloem2017large}). We should also note that some approaches try to better navigate the space of ``interesting'' subgraphs, so that larger motif sizes can be reached not by searching all possible larger $k$-subgraphs, but instead by leveraging computations of smaller motifs~\cite{patra2018motif,luo2018efficient}. 
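The standard way to make "significantly bigger" precise is a z-score against the randomized ensemble; a minimal sketch, where the 2.0 threshold is illustrative (studies use varying cutoffs):

```python
from statistics import mean, stdev

def z_score(freq_original, freqs_random):
    """How many standard deviations the observed frequency of a
    subgraph sits above its mean frequency in the null model R(G)."""
    mu, sigma = mean(freqs_random), stdev(freqs_random)
    return (freq_original - mu) / sigma if sigma > 0 else float("inf")

observed = 40
randomized = [12, 15, 11, 14, 13, 16, 12, 14]  # toy null-model counts
print(z_score(observed, randomized) > 2.0)      # True -> motif candidate
```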
Finally, we should note that several authors use the term motif to refer to small subgraphs, even when it does not imply any significance value beyond simple frequency on the original network. \subsubsection{Orbit-Aware Approaches and Network Alignment} When authors use the term \textit{graphlet}, they commonly take orbits into consideration, and use metrics such as the graphlet-degree distribution (GDD, see details in Section~\ref{sec:terminology}), a concept that appeared in 2007~\cite{przulj2007}. In this way, graphlet algorithms count how many times each node appears in each orbit. Unlike motifs, graphlets do not usually need a null model (i.e., networks are directly compared by comparing their respective GDDs). These orbit-aware distributions can be used for comparing networks. For instance, they have been used to show that protein interaction networks are more akin to random geometric graphs than to traditional scale-free networks~\cite{przulj2007}. Moreover, they are also used to compare nodes (using graphlet-degree vectors). This makes them useful for \textit{network alignment} tasks, where one needs to establish topological similarity between nodes from different networks~\cite{milenkovic2010optimal}. Several graphlet-based network alignment algorithms have been proposed and shown to work very well for aligning biological networks~\cite{kuchaiev2010topological,kuchaiev2011integrative,malod2015graal,sun2015simultaneous,aparicio2019temporal}. \subsubsection{Frequent Subgraph Mining (FSM)}\label{sec:fsm} FSM algorithms find subgraphs that have a \emph{support} higher than a given threshold. The most prevalent branch of FSM takes as input a collection of networks and finds which subgraphs appear in a vast number of them, referred to as \emph{graph transaction based FSM}~\cite{jiang2013survey}. These algorithms~\cite{yan2002gspan,huan2003efficient,nijssen2005gaston} heavily rely on the Downward Closure Property (DCP) to efficiently prune the search space. 
Algorithms for subgraph counting, which is our focus, cannot, in general, rely on the DCP, since it is not possible to know if growing an infrequent $k$-node subgraph will result, or not, in a frequent $k+1$ subgraph. Furthermore, we are not only interested in frequent subgraphs but in all of them, since rare subgraphs can also give information about the network's topology. A less prominent branch of FSM, \emph{single graph based FSM}, targets frequent subgraphs in a single large network, much like our subgraph counting problem. However, these approaches adopt various support metrics that allow the DCP to be verified, which, as stated previously, is not the case in the general subgraph counting problem~\cite{jiang2013survey}. \section{Introduction} Networks (or graphs) are a very flexible and powerful way of modeling many real-world systems. In essence, they capture the interactions of a system, by representing entities as nodes and their relations as edges connecting them (e.g., people are nodes in social networks and edges connect those that have some relationship between them, such as friendships or citations). Networks have thus been used to analyze all kinds of social, biological and communication processes~\cite{costa2011analyzing}. Extracting information from networks is therefore a vital interdisciplinary task that has been emerging as a research area by itself, commonly known as Network Science~\cite{lewis2011network,barabasi2016network}. One very common and important methodology is to look at the networks from a subgraph perspective, identifying the characteristic and recurrent connection patterns. For instance, network motif analysis~\cite{milo2002network} has identified the feed-forward loop as a recurring and crucial functional pattern in many real biological networks, such as gene regulation and metabolic networks \cite{mangan2003structure, zhu2005structural}. 
Another example is the usage of graphlet-degree distributions to show that protein-protein interaction networks are more akin to geometric graphs than to traditional scale-free models~\cite{przulj2007}. At the heart of these topologically rich approaches lies the subgraph counting problem, that is, the ability to compute subgraph frequencies. However, this is a very hard computational task. In fact, determining if one subgraph exists at all in another larger network (i.e., \textit{subgraph isomorphism}~\cite{ullmann1976algorithm}) is an \mbox{NP-Complete} problem~\cite{cook1971complexity}. Determining the exact frequency is even harder, and millions or even billions of subgraph occurrences are typically found even in relatively small networks. Given both its usefulness and hard tractability, subgraph counting has been raising a considerable amount of interest from the research community, with a large body of published literature. This survey aims precisely to organize and summarize these research results, providing a comprehensive overview of the field. Our main contributions are the following: \begin{itemize} \item \textbf{A comprehensive review of algorithms for \textit{exact} subgraph counting.} We give a structured historical perspective on algorithms for computing exact subgraph frequencies. We provide a complete overview table in which we employ a taxonomy that allows classifying all algorithms on a set of key characteristics, highlighting their main similarities and differences. We also identify and describe the main conceptual ideas, giving insight on their main advantages and possible limitations. We also provide links to existing implementations, exposing which approaches are readily available. \item \textbf{A comprehensive review of algorithms for \textit{approximate} subgraph counting.} Given the hardness of the problem, many authors have resorted to approximation schemes, which allow trading some accuracy for faster execution times. 
As in the exact case, we provide historical context, links to implementations and a classification and description of key properties, explaining how the existing approaches deal with the balance between precision and running time. \item \textbf{A comprehensive review of \textit{parallel} subgraph counting methodologies.} It is only natural that researchers have tried to harness the power of parallel architectures to provide scalable approaches that might decrease the needed computation time. As before, we provide a historical overview, coupled with a classification on a set of important aspects, such as the type of parallel platform or the availability of an implementation. We also give particular attention to how the methodologies tackle the unbalanced nature of the search space. \end{itemize} We complement this journey through the algorithmic strategies with a \textit{clear formal definition of the subgraph counting problem} being discussed here, an \textit{overview of its applications}, and a large number of \textit{references to related work} that is not directly in the scope of this article. We believe that this survey provides the reader with an insightful and complete perspective on the field, both from a methodological and an application point of view. The remainder of this paper is structured as follows. Section~\ref{sec:preliminaries} presents the necessary terminology, formally describes subgraph counting, and describes possible applications related to subgraph counting. Section~\ref{sec:exact} reviews exact algorithms, divided between full enumeration and analytical methods. Approximate algorithms are described in Section~\ref{sec:sampling} and parallel strategies are presented in Section~\ref{sec:parallel}. Finally, in Section~\ref{sec:conclusions} we give our concluding remarks. 
\section{Preliminaries}\label{sec:preliminaries} \subsection{Concepts and Common Terminology} \label{sec:terminology} This section introduces concepts and terminology related to subgraph counting that will be used throughout this paper. A \emph{network} is modeled with the mathematical object \emph{graph}, and the two terms are used interchangeably. Networks considered in this work are \emph{simple labeled graphs}. Here we are interested in algorithms that count \emph{small, connected, non-isomorphic subgraphs} on a single network. \smallskip \begin{itemize}[label={},leftmargin=*,itemsep=7pt] \item \textbf{Graph:} A graph $G$ is comprised of a set $V(G)$ of \emph{vertices/nodes} and a set $E(G)$ of \emph{edges/connections}. Nodes represent entities and edges correspond to relationships between them. Edges are represented as pairs of vertices of the form $(u, v)$, where $u, v \in V(G)$. In \emph{directed} graphs, edges $(u, v)$ are ordered pairs ($u \rightarrow v$) whereas in \emph{undirected} graphs there is no order since nodes are always reciprocally connected ($u \rightleftarrows v$). The \emph{size} of a graph is the number of vertices in the graph and it is written as $|V(G)|$. A $k$-graph is a graph of size $k$. A graph is considered \emph{simple} if it does not contain multiple edges (two or more edges connecting the same vertex pair) nor self-loops (an edge connecting a vertex to itself). Nodes are \emph{labeled} from 0 to $|V(G)|-1$, and $L(u) < L(v)$ means that $u$ has a smaller label than $v$. \item \textbf{Neighborhood and Degree:} The \emph{neighborhood} of vertex $u \in V(G)$, denoted as $N(u)$, is composed by the set of vertices $v \in V(G)$ such that $(u,v) \in E(G)$. The \emph{degree} of $u$, written as $deg(u)$, is given by $|N(u)|$. The exclusive neighborhood $N_{exc}(u, S)$ is the set of neighbors of $u$ that are not neighbors of any $v \in S$ with $u \neq v$. 
\item \textbf{Graph Isomorphism:} A \emph{mapping} of a graph is a bijection where each vertex is assigned a value. In the context of this work, since graphs are \emph{labeled}, a mapping is a permutation of the node labels. Two graphs $G$ and $H$ are said to be isomorphic if there is a one-to-one mapping between the vertices of both graphs, such that there is an edge between two vertices of $G$ if and only if their corresponding vertices in $H$ also form an edge (preserving direction in the case of directed graphs). More informally, isomorphism captures the notion of two networks having the same edge structure -- the same topology -- if we ignore distinction between individual nodes. Figure~\ref{fig:isomorphic} illustrates this concept. Despite looking different, the structure of the graphs is the same, and they are isomorphic. The labels in the nodes illustrate mappings that would satisfy the conditions given for isomorphism. \begin{figure}[h] \vspace{-0.1cm} \centering \includegraphics[width=0.65\linewidth]{figures/isomorphic} \vspace{-0.2cm} \caption{Four isomorphic undirected graphs of size 6.} \label{fig:isomorphic} \vspace{-0.4cm} \end{figure} \item \textbf{Subgraphs:} A \emph{subgraph} $G_k$ of a graph $G$ is a $k$-graph such that $V(G_k) \subseteq V(G)$ and $E(G_k) \subseteq E(G)$. A subgraph is \emph{induced} if $\forall u,v \in V(G_k): (u,v) \in E(G_k) \leftrightarrow (u,v) \in E(G)$, and is said to be \emph{connected} when all pairs of vertices have a sequence of edges connecting them. \textit{Graphlets}~\cite{przulj2007} are small, connected, non-isomorphic, induced subgraphs. Figure~\ref{fig:graphlets_u4} presents all 4-node undirected graphlets. 
\begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{figures/graphlets_u4} \vspace{-0.2cm} \caption{All non-isomorphic undirected subgraphs (or graphlets) of size 4.} \label{fig:graphlets_u4} \end{figure} \item \textbf{Orbit:} The set of isomorphisms of a graph into itself is called the group of \emph{automorphisms}: two vertices are said to be equivalent when there exists some automorphism that maps one vertex into the other. This equivalence relation partitions $V(G)$ into equivalence classes, which we refer to as \emph{orbits}. Therefore, \emph{orbits} are all the unique positions of a subgraph. For instance, a $k$-hub has $k$ nodes but only 2 orbits: one \emph{center-orbit} inhabited by a single node and a \emph{leaf-orbit} where the remaining $k-1$ nodes are. Nodes at the same orbit are topologically equivalent. Figure~\ref{fig:orbits_u4} shows all different orbits of the graphlets from Figure~\ref{fig:graphlets_u4}. \begin{figure}[h] \vspace{-0.1cm} \centering \includegraphics[width=0.65\linewidth]{figures/orbits_u4} \vspace{-0.3cm} \caption{The 10 orbits of all 4-node undirected graphlets.} \label{fig:orbits_u4} \vspace{-0.4cm} \end{figure} \item \textbf{Match and Frequency:} A \emph{match} of graph $H$ in graph $G$ occurs when there is a set of nodes from $V(G)$ that induces a subgraph $G_k$ isomorphic to $H$. Figure~\ref{fig:orbits_occ} shows the matches of three different subgraphs ($A$, $B$ and $C$) on graph $G$. The \emph{frequency} of $H$ in $G$ is the number of \emph{different} $G_k \subseteq G$ that induce $H$. Two matches are considered different if they do not share all nodes and edges. \item \textbf{Graphlet-Degree Distribution:} It is an extension of the node-degree distribution and both can be used for graph characterization and comparison. Notice that the node-degree can be seen as simply the orbit $a$ in Figure~\ref{fig:orbits_occ}. 
The graphlet-degree vector $GDV(v)$ is a feature vector of $v$ specifying how many times it occurs in each orbit. The graphlet-degree distribution $GDD_G$ is a feature matrix of graph $G$ where cell $(i,j)$ indicates the number of nodes that appear $i$ times in orbit $j$, and can be constructed from $Fr_G$, the frequency matrix where each line is the $GDV$ of a single node. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{figures/orbit_occ} \caption{$GDV(v)$ obtained by enumerating all undirected graphlet-orbits of sizes 2 and 3 ($A$, $B$ and $C$) touching $v$, and resulting $Fr_G$ and $GDD_G$ matrices for the complete $3$-subgraph census} \label{fig:orbits_occ} \end{figure} \end{itemize} \subsection{Problem statement} Making use of the previous concepts and terminology, we now give a more formal definition of the problem tackled by this survey: \begin{mydef2}[\textbf{Subgraph Counting}] \label{def:genproblem} Given a set $\mathcal{G}$ of non-isomorphic subgraphs and a graph $G$, determine the frequency of all induced matches of the subgraphs $G_s \in \mathcal{G}$ in $G$. Two occurrences are considered different if they have at least one node or edge that they do not share. \end{mydef2} This problem is also known as \textit{subgraph census}. In short, one wants to extract the occurrences of all subgraphs of a given size, or just a smaller set of ``interesting'' subgraphs, contained in a large graph $G$. Note that here the input is a single graph, in contrast with Frequent Subgraph Mining (FSM), where collections of graphs are more commonly used (differences between Subgraph Counting and FSM are discussed in Section~\ref{sec:fsm}). Approaches diverge on which subgraphs are counted in $G$. \textit{Network-centric} methods extract all $k$-node occurrences in $G$ and then assess each occurrence's isomorphic type. 
On the other end of the spectrum, \textit{subgraph-centric} methods first pick an isomorphic class and then only count occurrences matching that class in $G$. Therefore, subgraph-centric methods are preferable to network-centric algorithms when only one or a few different subgraphs are to be counted. \textit{Set-centric} approaches are middle-ground algorithms that take as input a set of interesting subgraphs and only count those on $G$. This work is mainly focused on network-centric algorithms, while not limited to them, since: (a) exploring all subgraphs offers the most information possible when applying subgraph counting to a real dataset, (b) hand-picking a set of interesting subgraphs might be hard or impossible and could be heavily dependent on our knowledge of the dataset, (c) it is intrinsically the most general approach. It is obviously possible to use subgraph-centric methods to count all isomorphic classes, simply by executing the method once per isomorphic type. However, that option is only feasible for small subgraph sizes: larger $k$ values produce too many subgraphs (see Table~\ref{tab:n_subgraphs}), and a network is likely to contain only a small subset of them. The method would thus spend a considerable amount of time looking for features that do not exist, whereas network-centric methods always do useful work, since they count occurrences that are actually present in the network.
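As a concrete illustration of a (naive) network-centric census, the sketch below enumerates every 3-node subset of a toy graph, keeps the connected ones, and classifies each induced occurrence as a path or a triangle. The graph and the function name are our own illustrative choices, not taken from any surveyed algorithm:

```python
# A minimal network-centric census for k = 3 (undirected): enumerate every
# 3-node subset, keep the connected ones, and classify each induced subgraph.
from itertools import combinations

def census3(n, edges):
    """Count induced connected 3-node subgraphs: paths (P3) and triangles (K3)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = {"path": 0, "triangle": 0}
    for a, b, c in combinations(range(n), 3):
        # number of edges induced by {a, b, c}
        m = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if m == 3:
            counts["triangle"] += 1
        elif m == 2:          # two edges on three nodes: always a path
            counts["path"] += 1
        # m <= 1: disconnected triple, not a graphlet occurrence
    return counts

# Toy graph: 4-cycle 0-1-2-3-0 plus the chord 0-2
counts = census3(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
assert counts == {"path": 2, "triangle": 2}
```

Practical network-centric algorithms avoid this exhaustive $\binom{n}{3}$ enumeration, but the input/output contract is the same.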
\begin{table*}[!h] \centering \small \caption{Number of different undirected and directed subgraphs (i.e., isomorphic classes), as well as their respective orbits, depending on the size of the graphlets.} \label{tab:n_subgraphs} \begin{tabular}{|c||c|c||c|c|} \cline{2-5} \multicolumn{1}{c|}{}& \multicolumn{2}{c||}{Undirected} & \multicolumn{2}{c|}{Directed} \\ \hline $k$ & \#Subgraphs & \#Orbits & \#Subgraphs & \#Orbits \\ \hline \hline 2 & 1 & 1 & 2 & 3 (1.5 $\times$ \#Subgraphs) \\ \hline 3 & 2 & 3 & 15 & 30 (2.0 $\times$ \#Subgraphs) \\ \hline 4 & 6 & 11 & 214 & 697 (3.3 $\times$ \#Subgraphs) \\ \hline 5 & 21 & 58 & 9,578 & 44,907 (4.7 $\times$ \#Subgraphs)\\ \hline 6 & 112 & 407 & 1,540,421 & 9,076,020 (5.9 $\times$ \#Subgraphs) \\ \hline 7 & 823 & 4,306 & 872,889,906 & $\approx$ 7 $\times$ \#Subgraphs \\ \hline 8 & 11,117 & 72,489 & 1,792,473,955,306 & $\approx$ 8 $\times$ \#Subgraphs \\ \hline 9 & 261,080 & 2,111,013 & 13,026,161,682,466,252 & $\approx$ 9 $\times$ \#Subgraphs \\ \hline \end{tabular} \end{table*} Here we are mainly interested in algorithms that count induced subgraphs, but non-induced subgraph counting algorithms are also considered. Counting one or the other is equivalent since it is possible to obtain induced occurrences from non-induced occurrences, and vice-versa. However, we should note that, at the end of the counting process, induced occurrences need to be obtained by the algorithm. This choice penalizes non-induced subgraph counting algorithms since the transformation is quadratic in the number of subgraphs~\cite{floderus2015induced}. Some algorithms count orbits instead of subgraphs~\cite{hovcevar2014combinatorial}. However, counting orbits can be reduced to counting subgraphs and, therefore, these algorithms are also considered.
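For $k=3$ undirected graphlets, the non-induced-to-induced transformation reduces to a small linear correction: every triangle contains three non-induced copies of the 3-path, so the induced path count is the non-induced count minus three times the triangle count. A minimal sketch (the toy graph and helper names are ours):

```python
# Induced vs. non-induced counts for k = 3 (undirected):
#   induced_P3 = non_induced_P3 - 3 * triangles
# since each triangle holds three non-induced 3-paths.
from itertools import combinations
from math import comb

def noninduced_paths(n, edges):
    """Non-induced P3 count: one per unordered pair of neighbors of each center."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(comb(d, 2) for d in deg)

def triangles(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum((b in adj[a]) and (c in adj[a]) and (c in adj[b])
               for a, b, c in combinations(range(n), 3))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle with a chord
t = triangles(4, edges)                           # 2 triangles
induced_paths = noninduced_paths(4, edges) - 3 * t
assert induced_paths == 2
```

For larger $k$ the same idea becomes a linear system over all graphlet types, which is where the quadratic cost of the transformation comes from.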
We should note that we only consider the most common and well-studied subgraph frequency definition, in which different occurrences might share a partial subset of nodes and edges, but there are other possible frequency concepts, in which this overlap is explicitly disallowed~\cite{schreiber2005frequency,elhesha2016identification}. \subsection{Algorithms Not Considered} In this work we focus on practical algorithms that are capable of counting all subgraphs of a given size. Therefore, algorithms that only target specific subgraphs are not considered (e.g., triads~\cite{schank2005finding}, cliques~\cite{finocchi2015clique, aliakbarpour2018sublinear}, stars~\cite{gonen2011counting,danisch2018listing} or subtrees~\cite{li2018mtmo}). Furthermore, given our focus on generalizability, we do not consider algorithms that are only capable of counting subgraphs in specific classes of graphs (e.g., bipartite networks~\cite{sanei2018butterfly}, trees~\cite{czabarka2018number}), or that only count local subgraphs~\cite{dave2017clog}. Graphs used throughout this work are simple, have a single layer of connectivity, and do not distinguish node or edge types with qualitative or quantitative features. Therefore we do not discuss here algorithms that use colored nodes or edges~\cite{guillemot2013finding,gholami2013rangi,ribeiro2014discovering}, nor those that consider networks that are heterogeneous~\cite{gu2018homogeneous,rossi2019heterogeneous}, multilayer~\cite{ren2019finding,boekhout2019efficiently}, labelled/attributed~\cite{mongiovi2018glabtrie}, probabilistic~\cite{sarkar2018new} or weighted~\cite{williams2013finding}. Finally, the networks we consider are static and do not change their topology. We should however note that there has been an increasing interest in temporal networks, which evolve over time~\cite{holme2012temporal}.
Some algorithms beyond the scope of this survey try to tackle temporal subgraph counting, either by considering temporal networks as a series of static snapshots~\cite{hulovatyy2015exploring,aparicio2018graphlet}, by timestamping edges~\cite{paranjape2017motifs}, or by considering a stream of small updates to the graph topology~\cite{schiller2015stream,silva2017network,cannoodt2018incgraph,kallaugher2018sketching}. \input{sections/Applications} \subsection{Other Surveys and Related Work} To the best of our knowledge, there is no other work comparable to this survey in terms of scope, thoroughness and recency. Most of the existing surveys that deal with subgraph counting are directly related to network motif discovery. Some of them are from before 2015 and therefore predate many of the most recent algorithmic advances~\cite{ribeiro2009strategies,wong2012biological,masoudi2012building,kim2013network,tran2014current}, and all of them only present a small subset of the strategies discussed here. There are more recent review papers, but they all differ from our work and have a much smaller scope. \citeAutRef{al2018triangle} only consider triangle counting, \citeAutRef{xia2019survey} focus mainly on significance metrics, and finally, while we here present a structured overview of more than 50 exact, approximate and parallel algorithmic approaches, \citeAutRef{jain2019network} presents a much simpler description of 5 different algorithms.
\section{Introduction} In many scientific and application domains, data is supported by a graph or network structure. To deal with such data, a collection of graph convolution networks (GCNs) have been proposed by generalizing architectures from the Euclidean domain to the graph one. Among them, message passing GCNs (MPGCNs) \citep{bruna2013spectral,gilmer2017neural,velikovi2018graph,hamilton2017inductive,xu2018how,klicpera2019diffusion}, built on diverse message passing (MP) schemes, are the most prevalent thanks to their flexible and intuitive formulations of graph convolution, and have achieved state-of-the-art performance in a series of tasks, such as node classification and graph classification. The message passing schemes in MPGCNs, however, mostly focus on the low-frequency characteristics of the data. For example, GCN \citep{kipf2016semi} performs as Laplacian smoothing \citep{li2018deeper}, and the mean aggregator adopted in GraphSage \cite{hamilton2017inductive} has naturally low-pass properties. Yet, in addition to low-frequency components, the middle- and high-frequency components that correspond to large signal variations may also contain rich information in graph data. Moreover, given that the importance of different frequency components may vary with the task, it is beneficial to be able to exploit more than the low-frequency information in an adaptive way. Furthermore, MPGCNs usually adopt a single message passing and aggregation scheme. This limits the capability of MPGCNs to handle graphs whose features or signals, as in most applications, are multi-channel and encode heterogeneous information with different frequency characteristics. For instance, node features corresponding to user profiles in social networks may include gender, age, hobbies, etc., and the gender and age features will naturally present different variations across the graph.
In order to address these issues, we propose a novel message passing graph convolution operator, termed BankGCN, that utilizes an adaptive and learnable filter bank to process heterogeneous graph data in the spectral (frequency) domain, as presented in Fig.~\ref{fig.frame}. Firstly, we decompose multi-channel graph signals into a collection of subspaces through projection, in order to separate the input data according to its spectral characteristics. In each subspace, a learnable filter is employed to capture the frequency properties of the particular signals. Notably, the filter is based on the universal design \citep{tremblay2018design} that is defined over a continuous range in the spectral domain instead of a specific discrete spectrum, and is thereby adaptable to graphs with arbitrary topologies \citep{levie2019transferability}. Moreover, it is designed as a finite impulse response (FIR) filter and corresponds to a local message passing scheme in the spatial domain that jointly considers multi-hop topological information and signal information. The filters of all the subspaces together form a filter bank, and they are learned from data simultaneously with the subspace projections. Furthermore, a diversity condition is proposed to regularize the filters in the filter bank to have diverse frequency responses, in order to capture and handle various patterns in the graph signal. In this way, BankGCN is more powerful than most MPGCNs, such as GCN \citep{kipf2016semi}, GraphSage \citep{hamilton2017inductive}, and GIN \citep{xu2018how}, in that it is able to exploit more than `low-pass' features in the data and adapts to its heterogeneous properties. It further adopts a group of learnable message passing strategies, and exploits multi-hop rather than one-hop information per layer.
In contrast with spectral methods like ChebNets \citep{defferrard2016convolutional} and CayleyNets \citep{levie2018cayleynets}, it largely reduces the number of free parameters and leads to better generalization in our experimental validation. The proposed convolution operator is stackable, like most MPGCNs, and can be optimized together with other modules in GCNs, such as graph pooling. In this paper, we evaluate it in the task of graph classification on a collection of benchmark datasets, where it achieves superior performance. Moreover, BankGCN can promisingly be extended to tasks such as link prediction and to non-Euclidean data such as 3-D point clouds. The remainder of the paper is organized as follows. Section~\ref{sec.related} briefly overviews the related work on graph convolution and spectral filtering. In Section~\ref{sec.pb}, some preliminaries are introduced and a collection of graph convolution operators are compared in terms of spectral filtering. Section~\ref{sec.method} elaborates the proposed BankGCN algorithm. Evaluations on graph classification tasks are presented in Section~\ref{sec.eval}. Finally, Section~\ref{sec.con} concludes this paper. \section{Related Work}\label{sec.related} We briefly overview below several spatial GCNs in terms of their respective message passing schemes. Then we introduce spectral filtering as well as the design of filters and filter banks in graph signal processing (GSP), and compare several spectral GCNs. \textbf{Message Passing Graph Convolution Networks.} Several MPGCNs \citep{bruna2013spectral,gilmer2017neural,velikovi2018graph,hamilton2017inductive,xu2018how,klicpera2019diffusion} have been proposed to generalize the convolution operation to graph data with a variety of message propagation and aggregation schemes in the spatial domain.
For instance, messages are aggregated with the node-wise mean or max in a localized neighborhood in GraphSage \citep{hamilton2017inductive}, or based on attention scores in GAT \citep{velikovi2018graph}. A more expressive scheme, GIN \citep{xu2018how}, sums the features in a neighborhood and applies a multi-layer perceptron (MLP) to approximate any injective function on multisets. However, these methods are constrained to a single message passing (MP) strategy, mostly capturing `low-pass' characteristics, and the MP range is usually of one-hop neighborhood per layer. Focusing on these issues, \citeauthor{klicpera2019diffusion} expand the MP range to multi-hop neighborhoods by adopting graph diffusion convolution (GDC) \citep{klicpera2019diffusion}; diffusion wavelets are introduced in Scattering GCN \citep{MinWW20} from the geometric scattering transform \citep{gao2019geometric} to complement the `low-pass' features of GCN \citep{kipf2016semi} with `band-pass' features; and multiple aggregations, like mean, sum, and standard deviation, are used to aggregate neighborhood information in \citep{corso2020principal}. In contrast to these methods, we employ an adaptive and learnable filter bank rather than predefined wavelets and impose a proper decoupling between sets of learnable filters, which correspond to different MP schemes within multi-hop neighborhoods. \textbf{Filtering on Graphs.} Frequency filtering, or spectral filtering, is generalized to graph data via spectral graph theory \citep{shuman2013emerging}. Correspondingly, several spectral GCNs are derived with graph filters, or equivalently graph convolutions, designed in the graph spectral domain directly, as pioneered by \citet{bruna2013spectral}.
To achieve constant learning complexity and avoid the expensive eigendecomposition of the graph Laplacian, similarly to dictionary learning methods \citep{thanou2014learning} in GSP, the filters are adopted as specific functions of the eigenvalues of the graph Laplacian, such as Chebyshev polynomials in ChebNets \citep{defferrard2016convolutional} and TIGraNet \citep{khasanova2017graph}, Cayley polynomials in CayleyNets \citep{levie2018cayleynets}, and auto-regressive moving average (ARMA) filters in \citep{bianchi2019graph}. In contrast with these spectral convolution methods that focus on the implementation of filters with desirable properties, like localization and narrowband specialization, our paper focuses on the design of the filter bank, whose filters can be built on polynomial kernels, in order to reduce the number of filters and thereby the number of parameters, and improve the generalization. Finally, there are several papers in the graph signal processing literature on the design of graph filter banks for graph signal decomposition \citep{narang2012perfect,tanaka2014m} and multiscale analysis \citep{hammond2011wavelets,narang2013compact}. However, they usually work under a rigid constraint, namely perfect reconstruction, so that the (sparse) representations they produce may not be flexible enough to adapt to diverse tasks in machine learning. In contrast, our method relaxes the perfect reconstruction condition and rather regularizes the filters in a filter bank to be different in the spectral domain. It further learns the filters and subspace projections jointly through end-to-end optimization in GCNs to flexibly handle multi-channel signals. \begin{figure}[tp] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.99\columnwidth]{figures/fig_1.pdf}} \caption{Illustration of the framework. The proposed convolution operator and its equivalent filter banks are respectively presented in (a) and (b).
Graph signal $\bm{x}$ is mapped to a collection of subspaces and processed by an adaptive filter bank to extract information corresponding to distinct frequency properties. As illustrated in (b), an adaptive filter bank permits the proposed convolution operator to adaptively extract information from diverse combinations of different frequency components to produce graph representations.} \label{fig.frame} \end{center} \vskip -0.2in \end{figure} \section{Preliminaries and Problems} \label{sec.pb} In this paper, we consider data that is represented on undirected graphs. An input graph is described as $\mathcal{G}=(\mathcal{V},\mathcal{E})$, with $\mathcal{V}$ and $\mathcal{E}$ respectively denoting the set of vertices and the set of edges. We generally use capital letters for matrices and bold lowercase letters for vectors. The graph topology is characterized by the adjacency matrix $A$, with a non-zero value $(A)_{ij}$ indicating the weight of the edge connecting vertices $v_i$ and $v_j$. If available \footnote{If node attributes are not available, we adopt structural information, like the one-hot encoding of the node degree and the local clustering coefficient, as node signals.}, the $d$-channel signals or attributes of nodes are represented as $\bm{x}(v_{m}) \in \mathbb{R}^{d}$ for $\forall v_m \in \mathcal{V}$, and $X=[\bm{x}(v_{1}),\bm{x}(v_{2}), \cdots, \bm{x}(v_{n})]^T$ is for the whole graph $\mathcal{G}$ with $n=| \mathcal{V}|$ nodes. In particular, $x_i(v_{m})$ indicates the $i$-th channel of the signal on vertex $v_m$, and $\bm{x_i}=[x_i(v_{1}),x_i(v_{2}), \cdots, x_i(v_{n})]^T$ represents this signal channel on the whole graph. For the hierarchical structure of GCNs, we employ a subscript $l$ to indicate variables or parameters belonging to the $l$-th layer. In most cases, we omit $l$ for clarity when it causes no confusion.
The Graph Fourier Transform (GFT) is defined based on the graph Laplacian matrix $L$ \citep{shuman2013emerging}. We adopt the symmetric normalized graph Laplacian operator, \emph{i.e.}, $L=I- D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ with the diagonal degree matrix $D$ defined as $(D)_{ii}=\sum_{j}(A)_{ij}$. The eigendecomposition of the Laplacian matrix is denoted by $L=U \Lambda U^{*}$, where $U=\lbrack \bm{u_1},\bm{u_2}, \dots, \bm{u_{n}} \rbrack$ is composed of the eigenvectors $\bm{u_k}$, $k=1,\cdots,n$ and $ \Lambda$ is a diagonal matrix with eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_{n}$, with $0=\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_{n}\leq 2$. The eigenvalues $\{ \lambda_k\}$ constitute the spectrum of the graph $\mathcal{G}$ \footnote{In analogy to classical Fourier analysis, the eigenvalues provide a corresponding notion of frequency and lead to frequency filtering.}, and the GFT of a signal $\bm{x}$ is calculated as the inner-product between each of its components $\bm{x_i}$ and the eigenvectors $\{\bm{u_k}\}$ \citep{shuman2013emerging}: \begin{equation}\label{e.gft} \hat{x}_i(\lambda_k)=\sum_{m=1}^{n}x_i(v_m)u_k^*(v_m), \end{equation} where $*$ indicates the conjugate transpose and $u_k^*(v_m)$ denotes the $m$-th element in $\bm{u}^*_k$. The Inverse Graph Fourier Transform (IGFT) is defined as \begin{equation}\label{e.igft} x_i(v_m)=\sum_{k=1}^{n} \hat{x}_i(\lambda_k)u_k(v_m). \end{equation} According to the convolution theorem, the convolution between the $i$-th channel of the signal ${x}_i(v_m)$ and the corresponding filter $g(v_m)$ (with frequency response $\hat{g}(\lambda_k)$) is defined as: \begin{equation}\label{e.conv} (x_i \ast g)(v_m)=\sum_{k=1}^{n}\hat{x}_i(\lambda_k)\hat{g}(\lambda_k)u_k(v_m). \end{equation} Let us define $\hat{g}(\Lambda)={\rm diag}([\hat{g}(\lambda_1), \hat{g}(\lambda_2), \dots, \hat{g}(\lambda_n)])$. For the whole signal $\bm{x_i}$, we then have \begin{equation} (\bm{x_i} \ast g)=U\hat{g}(\Lambda)U^*\bm{x_i}.
\end{equation} Finally, a graph filter bank is composed of a set of filters $\{\hat{g}_i(\Lambda)\}$ to decompose a graph signal into a series of signals with different frequency components \citep{narang2012perfect}. We now show how filtering can be implemented by message passing, which is used in many state-of-the-art graph representation learning methods. Filtering in the frequency domain and MP in the spatial domain are closely related \citep{shuman2013emerging}. Taking the most popular GCN \citep{kipf2016semi} as an example, according to its initial formulation, the $j$-th channel of a filtered signal is \begin{equation}\label{e.mpfrec} \bm{h_j}=\sum_{i=1}^{d_l}(I+D^{-\frac{1}{2}}AD^{-\frac{1}{2}})\Theta_{i,j} \bm{x_i}=\sum_{i=1}^{d_l}(2I-L)\Theta_{i,j}\bm{x_i}=\sum_{i=1}^{d_l}\Theta_{i,j}U\hat{g}(\Lambda)U^*\bm{x_i} \end{equation} where $\Theta$ denotes learnable parameters, and $\hat{g}(\Lambda)=2-\Lambda$ is a low-pass filter, which is consistent with the analysis in \citep{nt2019revisiting,MinWW20}. The renormalized version of GCN and GIN can be formulated similarly. Please refer to \citep{balcilar2021analyzing} for their specific equivalent supports of $\hat{g}(\Lambda)$. Besides, GraphSage uses the mean or max operator to aggregate information in neighborhoods. Although it is hard to explicitly formulate its corresponding frequency response, the element-wise mean operation is a low-pass operation by nature, and the element-wise max operator performs as an envelope extraction that suppresses the high-frequency components. Another line of work, spectral GCNs, is directly derived from spectral filtering.
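The equivalence in Eq.~\eqref{e.mpfrec} between the spatial support $2I-L$ and spectral filtering with $\hat{g}(\lambda)=2-\lambda$ can be checked numerically; the NumPy sketch below uses an illustrative 4-node cycle and is not part of any released implementation:

```python
# Numerical sketch of Eq. (e.mpfrec): the (un-renormalized) GCN support
# I + D^{-1/2} A D^{-1/2} = 2I - L acts as the low-pass spectral filter
# g_hat(lambda) = 2 - lambda. Toy 4-node cycle graph for illustration.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency of a 4-cycle
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(4) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

lam, U = np.linalg.eigh(L)                  # spectrum lies in [0, 2]
x = np.array([1.0, -2.0, 3.0, 0.5])

spatial = (2 * np.eye(4) - L) @ x           # message-passing view
spectral = U @ ((2 - lam) * (U.T @ x))      # filtering view, g_hat = 2 - lambda
assert np.allclose(spatial, spectral)
```

Since $\hat{g}(\lambda)=2-\lambda$ decreases over $[0,2]$, high-frequency components are attenuated, which is the low-pass behavior discussed above.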
For ChebNets and CayleyNets, as representatives of spectral GCNs, the filters are defined in the frequency domain as $\hat{g}(\lambda)= \sum_{k=0}^{K} \theta^{(k)}T_{k}(\lambda)$ using the Chebyshev polynomial or Cayley polynomial basis $\{T_{k}(\cdot)\}$, and correspondingly \begin{equation}\label{e:conv.spec} \bm{h_j}=\sum_{i=1}^{d_l} U\hat{g}_{ij}(\Lambda)U^*\bm{x_i}=\sum_{i=1}^{d_l} U\sum_{k=0}^{K} \Theta^{(k)}_{i,j}T_{k}(\Lambda) U^*\bm{x_i}, \end{equation} where $K$ is the order of the polynomial filters and $\{ \Theta^{(k)}\}$ are learnable parameters. Notably, the above MPGCNs employ just one single filter to handle all the channels of signals for each input graph, and then take different (linear) combinations of the filtered signals to obtain the output signals of different channels. Thereby, most MPGCNs are restricted in the number and types of filters and have a limited capacity from a spectral perspective. On the contrary, ChebNets and CayleyNets adopt $d_l \times d_{l+1}$ ($d_l$, $d_{l+1}$ denote the respective number of channels of input and output signals) different $K$-order polynomial filters, as presented in Eq.~\eqref{e:conv.spec}. The number $K$ determines the order and thereby the capacity of the filters. Spectral convolution operators thus employ a large number of learnable filters, which are powerful enough to handle various graph signals but come at the expense of numerous parameters to learn, which can lead to overfitting. Based on the analysis of the limitations of existing spectral and message passing graph convolutions, we now focus on the following problems for designing the message passing scheme and graph convolution operator. \begin{itemize} \item Q1: How to design a message passing scheme that can capture diverse features beyond mere low-frequency components for different machine learning tasks like graph classification?
\item Q2: How to design a graph convolution operator that is capable of handling heterogeneous node features adaptively and differently without introducing excessive extra parameters? \end{itemize} \section{The BankGCN Algorithm}\label{sec.method} In this section, we first outline the main elements of BankGCN. Subsequently, the role of subspace projections, the design of filters in the filter bank, and the regularization for the filter bank to guarantee diversity are introduced in detail. \subsection{Graph Convolution with Adaptive Filter Banks} Multi-channel graph signals (features) are usually composed of multiple patterns, and different signal channels vary differently over the nodes, leading to diverse frequency characteristics. To handle the multi-channel signals, we employ a filter bank composed of a set of filters with different frequency responses in the design of BankGCN. Furthermore, subspace projection is adopted to decompose the multi-channel information into several components with similar spectral properties. For a multi-channel input node signal $\bm{x}(v_{m})$, we first adaptively decompose it into different subspaces and then employ different filters to deal with the signal components in each subspace separately. The decomposition aims to map the components of signals with similar frequency characteristics into the same subspace in order to facilitate the subsequent adaptive filtering. This is implemented with learnable subspace projections. Mathematically, the input signal $\bm{x}(v_{m})$ is projected into $s$ subspaces with a group of projection functions denoted by $\{f_{[p]}(\cdot)\}$, \begin{equation} \bm{r}_{[p]}(v_{m})=f_{[p]}(\bm{x}(v_{m})), \quad p=1, 2, \dots, s, \ \forall v_m \in \mathcal{V}. \end{equation} Here, the subscript ``$[p]$'' indicates the terms belonging to the $p$-th subspace and $\bm{r}_{[p]}(v_{m})$ is the projected signal. The choices for the projection functions will be introduced in Section~\ref{sec.subpro}.
Subsequently, an adaptive filter $g_{[p]}(\cdot)$ is employed to learn and represent the spectral components independently in each subspace, \begin{align} \bm{h}_{[p]}(v_{m})&=(g_{[p]} \ast \bm{r}_{[p]}) (v_{m}), \end{align} where we reuse `*' to denote the convolution between each channel of the signal $\bm{r}_{[p]}$ and the filter $g_{[p]}$. The filtered signals $\bm{h}_{[p]}(v_{m})$, $p=1,2,\cdots,s$, of all the subspaces are concatenated to produce the output features. A Rectified Linear Unit (ReLU) is then used as a non-linear activation function, \begin{gather} \bm{h}(v_{l,m})={\rm Concat}(\bm{h}_{[1]}(v_{l,m}),\bm{h}_{[2]}(v_{l,m}), \dots, \bm{h}_{[s]}(v_{l,m})), \\ \bm{x}(v_{l+1,m})=\text{ReLU}(\bm{h}(v_{l,m})), \end{gather} where the subscript $l$ indicates variables or parameters in the $l$-th layer to describe the forward propagation between different layers in a hierarchical architecture. In particular, considering that some filters are supported only on medium-to-high frequency components, which correspond to large signal variations in the graph domain, we further introduce a shortcut in each subspace, corresponding to a full-pass in the frequency domain, in order to make the filtered signals stable. Correspondingly, the filtered signal $\bm{h}_{[p]}(v_{m})$ in each subspace becomes \begin{equation} \bm{h}_{[p]}(v_{m})=(g_{[p]} \ast \bm{r}_{[p]}) (v_{m}) +\bm{r}_{[p]}(v_{m}). \end{equation} The shortcut can further facilitate the back propagation of gradients, as for CNNs on grid-like data \citep{he2016deep}. The proposed graph convolution operator is stackable, and the subspace mapping function in the upper layers will further combine features from different subspaces in the preceding layers to enable information interaction between different channels. \subsection{Subspace Projection}\label{sec.subpro} For the projection function $f_{[p]}(\cdot)$, there is a variety of design choices.
Here, we take the linear mapping as an example, since it is simple yet able to separate and recombine different channels of signals. Specifically, the $d$-channel \footnote{ For an input signal with $d=1$, we set $s$ to the number of feature channels in the neural network in the first layer, so that the signal is scaled differently in each subspace. In the following hidden layers, it can be used as in the other cases to handle the multi-channel features produced by the previous layers in GCNs. } signal $ \bm{x}(v_m)$ is projected into $s$ different subspaces with learnable matrices $\{W_{[p]}\}_{p=1}^{s}$. For the sake of simplicity, all the subspaces have the same dimension in this paper. To increase flexibility, we further introduce a learnable bias $\bm{b}_{[p]}$ for each subspace: \begin{gather}\label{e.linmap} \bm{r}_{[p]}(v_m)=f_{[p]}(\bm{x}(v_m))=W_{[p]}^T \bm{x}(v_m)+\bm{b}_{[p]}, \quad p=1, 2, \dots, s. \end{gather} The introduction of subspace projection brings two advantages: (i) it simplifies the learning process, since the multi-channel graph signals have been decomposed and the filter in each subspace just needs to learn to capture the frequency characteristics of the corresponding signal components; (ii) by projecting signals into low-dimensional subspaces, it limits the dimension of the output features produced by the adaptive filter bank, and thereby reduces the number of free parameters and computations in the following layers. \subsection{Design of Filters}\label{sec.filter} In order to handle graphs with arbitrary topologies and diverse signals, we adopt universal and adaptive filters to construct the filter bank.
For any function $t: \mathcal{D}=[0,2] \rightarrow \mathbb{R}$, we can obtain a corresponding filter whose frequency response is $\hat{g}_{[p]}(\lambda)=t(\lambda)$, and its spatial-domain form is computed through the IGFT: \begin{equation} g_{[p]}(v_m)=\sum_{i=1}^{n} \hat{g}_{[p]}(\lambda_i)u_i(v_m), \end{equation} with $\bm{u_i}$, $i=1,\cdots,n$, the eigenvectors of the symmetric normalized graph Laplacian of a graph. Notably, we directly design the frequency response of the filter on the continuous range $\mathcal{D}=[0,2]$, in which the spectrum of an arbitrary graph lies, as introduced in Section~\ref{sec.pb}. In other words, given the discrete spectrum $\{\lambda_1, \lambda_2, \dots, \lambda_n\}$ of an arbitrary graph, we have $\lambda_i \in \mathcal{D}$ and obtain the corresponding filter values on these specific discrete values as $\hat{g}_{[p]}(\lambda_i)=t(\lambda_i)$, for $i=1, 2, \dots, n$. Thereby, the filter $g_{[p]}(\cdot)$ is adaptable to any graph, even with different topologies, \emph{i.e.,} it has a universal form \citep{tremblay2018design,levie2019transferability}. However, with such graph filters, the computation of the graph convolution (Eq.~\eqref{e.conv}) requires the computation-intensive eigendecomposition of the graph Laplacian. To reduce the computational complexity, we constrain the filter to the $K$-order polynomial function space, similarly to parametric dictionary learning \citep{thanou2014learning} in GSP. It then corresponds to an FIR filter. Mathematically, the frequency response of a filter can be represented as \begin{equation}\label{e.filter} \hat{g}_{[p]}(\lambda)=\sum_{k=0}^{K} \alpha^{(k)}_{[p]}T_{k}(\lambda), \end{equation} where $\{T_{k}\}$ denotes a specific polynomial basis, such as the Chebyshev polynomials, and $\{\alpha^{(k)}_{[p]}\}$ indicates the corresponding coefficients. With $\{\alpha^{(k)}_{[p]}\}$ learnable, we obtain an adaptive filter whose frequency response adapts to the data and to the target task.
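As a sketch of Eq.~\eqref{e.filter}, the code below evaluates a $K$-order polynomial filter directly in the spatial domain through the Chebyshev three-term recurrence, avoiding any eigendecomposition, and cross-checks the result against the spectral-domain definition. The rescaling $L-I$ (mapping the spectrum $[0,2]$ to $[-1,1]$) and the coefficient values are our illustrative assumptions, not learned parameters:

```python
# K-order FIR filtering as in Eq. (e.filter), applied in the spatial domain
# via the Chebyshev three-term recurrence T_{k+1}(y) = 2 y T_k(y) - T_{k-1}(y).
import numpy as np

def cheb_filter(L, x, alpha):
    """Return sum_k alpha[k] * T_k(L - I) x without eigendecomposition."""
    n = L.shape[0]
    Lt = L - np.eye(n)              # rescale spectrum from [0, 2] to [-1, 1]
    t_prev, t_cur = x, Lt @ x       # T_0 x and T_1 x
    out = alpha[0] * t_prev + (alpha[1] * t_cur if len(alpha) > 1 else 0)
    for k in range(2, len(alpha)):
        t_prev, t_cur = t_cur, 2 * (Lt @ t_cur) - t_prev
        out = out + alpha[k] * t_cur
    return out

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy 4-node graph
dis = 1.0 / np.sqrt(A.sum(1))
L = np.eye(4) - dis[:, None] * A * dis[None, :]

alpha = [0.5, -0.3, 0.2]            # illustrative K = 2 coefficients
x = np.array([1.0, 0.0, -1.0, 2.0])
h = cheb_filter(L, x, alpha)

# Cross-check against the spectral-domain definition U g_hat(Lam) U^T x,
# with T_0 = 1, T_1(y) = y, T_2(y) = 2y^2 - 1 evaluated at y = lambda - 1.
lam, U = np.linalg.eigh(L)
g_hat = alpha[0] + alpha[1] * (lam - 1) + alpha[2] * (2 * (lam - 1) ** 2 - 1)
assert np.allclose(h, U @ (g_hat * (U.T @ x)))
```

Each recurrence step costs one sparse matrix-vector product, which is where the $O(K|\mathcal{E}|d)$ complexity quoted later comes from.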
Correspondingly, for the signal projected to the $p$-th subspace, $\bm{r}_{[p]}(v_m)$, the filtered signal is calculated as \begin{equation}\label{e.adpconv} \bm{h}_{[p]}(v_m)=(\bm{r}_{[p]} \ast g_{[p]})(v_m)=\sum_{i=1}^{n}u_i(v_m)\hat{g}_{[p]}(\lambda_i) \bm{\hat{r}}_{[p]}(\lambda_i)=\sum_{k=0}^{K}\alpha_{[p]}^{(k)} (T_k(L)R_{[p]})^T_m \end{equation} where $R_{[p]}=[\bm{r}_{[p]}(v_1),\bm{r}_{[p]}(v_2), \dots, \bm{r}_{[p]}(v_n)]^T$. Or equivalently, \begin{equation} \bm{h}_{[p]}(v_m)= c_{mm}\bm{r}_{[p]}(v_m) + \sum_{v_o \in N^K(v_m) } c_{mo}\bm{r}_{[p]}(v_o), \label{eq.eqamp.1} \end{equation} with \begin{equation} c_{mo}=\sum_{k=0}^{K}\alpha_{[p]}^{(k)} (T_k(L))_{m,o} \quad \forall m, o \in \{1,2, \dots, n \}. \label{eq.eqamp.2} \end{equation} Eq.~\eqref{eq.eqamp.1} and Eq.~\eqref{eq.eqamp.2} imply that the filtering strategy corresponds to a message passing scheme within a $K$-hop neighborhood. In other words, the adaptive filter in each subspace equivalently defines a $K$-hop message passing scheme that is learned from the data and exploits the multi-hop topological information of graphs through polynomials of the graph Laplacian. Signal information is also taken into consideration through the learnable parameters $\{\alpha_{[p]}^{(k)}\}$ as well as in the subspace projection step. More importantly, it permits representing features that do not only have ``low-pass'' properties and learning the specific frequency components in a data-driven manner. \subsection{Diversity Regularization for Filter Banks} The filters constituting a filter bank should ideally have diverse frequency responses, so that the signal is decomposed through filtering into a series of signals with different frequency characteristics. In GSP, the filters are usually band-pass and divide the spectrum into different parts.
Considering that strict band-pass filters are difficult to fit with polynomial functions, and that the filter bank is used here to extract useful representations rather than to reconstruct input signals, we relax this strict band-pass requirement and instead target filters with diverse frequency responses, which we call the ``diversity condition''. With the filter $\hat{g}_{[p]}(\lambda)$ given as a $K$-order polynomial function, the regularization on the filter is imposed on the respective polynomial coefficients $\{ \alpha^{(k)}_{[p]} \}_{p,k}$ in Eq.~\eqref{e.filter}. To achieve the diversity condition, we regularize the polynomial coefficients to be well distributed in the parameter space. Since the coefficient vectors of two filters that differ only by a scale factor may still be far apart in Euclidean distance, we adopt the cosine distance to measure the distance between the polynomial coefficients of filters. Specifically, the regularization term is \begin{equation}\label{e.diverse} \Omega(\alpha)=\max_{p \neq q} \frac{|\langle \bm{\alpha}_{[p]}, \bm{\alpha}_{[q]} \rangle|}{\|\bm{\alpha}_{[p]} \|_2 \|\bm{\alpha}_{[q]}\|_2}, \end{equation} where $\bm{\alpha}_{[p]}=[\alpha_{[p]}^{(0)}, \alpha_{[p]}^{(1)}, \dots, \alpha_{[p]}^{(K)}]^T$. The max function captures the maximum similarity between the polynomial coefficients of any pair of filters. By minimizing Eq.~\eqref{e.diverse}, the most similar filters are pushed toward different orientations in the parameter space, so that all pairs of filters tend to be different. When $\{T_k(L)\}_k$ is an orthogonal basis, such as the Chebyshev polynomials, the diversity of the polynomial coefficients $\{\bm{\alpha}_{[p]}\}_p$ implies that the filters defined in Eq.~\eqref{e.filter} are different in the frequency domain. More intuitively, with diverse $\{\bm{\alpha}_{[p]}\}_p$, the message passing schemes in the spatial domain induced by the filters are different, as presented in Eq.~\eqref{eq.eqamp.1} and Eq.~\eqref{eq.eqamp.2}.
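The regularizer itself is straightforward to implement; a minimal NumPy sketch (ours, for illustration only):

```python
import numpy as np

def diversity_penalty(alphas):
    """Omega(alpha): maximum absolute cosine similarity over distinct
    pairs of filter coefficient vectors, as in Eq. (e.diverse)."""
    A = np.asarray(alphas, dtype=float)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)   # scale invariance
    G = np.abs(A @ A.T)
    np.fill_diagonal(G, 0.0)                           # ignore the p = q pairs
    return G.max()

# Orthogonal coefficient vectors attain the optimum Omega = 0.
assert diversity_penalty([[1.0, 0.0], [0.0, 1.0]]) == 0.0
# Two filters that are scaled copies of each other give Omega = 1.
assert abs(diversity_penalty([[1.0, 2.0], [2.0, 4.0]]) - 1.0) < 1e-9
```

Note that coefficient vectors with disjoint supports, such as one-hot vectors, also attain the optimum $\Omega(\alpha)=0$.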
We note that, if the filter is defined on the basis composed of rectangular pulse functions (which is not the case in this paper), \emph{i.e.,} \begin{equation} T_k(\lambda)= \begin{cases} 1,& \frac{2k}{K+1} \leq \lambda < \frac{2(k+1)}{K+1},\\ 0,& \text{otherwise,} \end{cases} \end{equation} then the ideal subband filter bank, whose filters have different passbands in GSP, corresponds exactly to the optimal solution of the regularization, with $\Omega(\alpha)=0$ in Eq.~\eqref{e.diverse}, when $K \geq s$. With $T_{\Theta}(\mathcal{G}, Y)$ generally representing a target function, the objective function is then formulated as \begin{equation}\label{e.obj} \min_{\Theta}\ T_{\Theta}(\mathcal{G}, Y) + \gamma \ \Omega(\alpha), \end{equation} where $Y$ indicates the ground truth labels, $\Theta$ denotes the parameter set including $\{\bm{\alpha}_{[p]}, W_{[p]}, \bm{b}_{[p]}\}_{p=1}^s$, and $\gamma$ is a hyperparameter adjusting the contribution of the regularization term. Like most popular graph convolution operators, BankGCN can be optimized via gradient-based methods, together with modules such as graph pooling operators in GCNs. It achieves linear computational complexity of $O(K|\mathcal{E}|d)$ and constant learning complexity, similarly to most existing MPGCNs. \section{Experiments}\label{sec.eval} In this section, we evaluate the proposed BankGCN on graph classification tasks on several benchmark datasets, and compare it with several popular GCNs.
\subsection{Experimental Settings} \begin{table}[tp] \centering \caption{Dataset statistics and properties (L indicates node categorical features and A denotes node attributes).}\label{t:0} \resizebox{.99\textwidth}{!}{ \begin{tabular}{l|ccccccccc} \toprule Dataset &ENZ & D\&D & PROT & NCI1 &NCI109&MUTA& FRAN & CIFAR-10&Ogbg-molhiv\\ \midrule Avg $|\mathcal{V}|$ &32.63&284.32&39.06&29.87&29.68&30.32&16.90&117.63&25.51\\ Avg $|\mathcal{E}|$ &62.14&715.66&72.82&32.30&32.13&30.77&17.88&564.86&27.47\\ Node feature&L+A&L&L&L&L&L&A&A&A\\ Dim(feat) &3+18 &89 &3 &37 &38 &14 &780&5&9\\ \#Classes&6&2&2&2&2&2&2&10&2\\ \#Graphs &600&1,178&1,113&4,110&4,127&4,337&4,337&60,000 &41,127\\ \bottomrule \end{tabular}} \end{table} \textbf{Datasets and data splits.} For the TU datasets \citep{KKMMN2016}, we conduct experiments on seven widely used public benchmark graph classification datasets: ENZYMES, D\&D, PROTEINS, NCI1, NCI109, MUTAGENICITY, and FRANKENSTEIN\footnote{The datasets can be downloaded from https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets}. We adopt node categorical features (one-hot encoded) and node attributes as node signals, depending on their availability in each dataset. Specifically, we use node attributes on FRANKENSTEIN, node categorical features together with node attributes (normalized to the range $[0, 1]$) on ENZYMES, and node categorical features on the other datasets. The statistics and properties of the datasets are summarized in Table~\ref{t:0}. Following \citep{DBLP:conf/icml/LeeLK19}, we use stratified sampling to randomly split each dataset into training, validation and test sets with a ratio of 8:1:1. The trained model with the best validation performance is selected for testing. In order to alleviate the impact of the data partition and network initialization, we conduct 20 random runs with different data splits and network initializations on each dataset, and report the mean accuracy and standard deviation over these 20 test results.
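The stratified 8:1:1 split can be sketched as follows (a simplified, pure-Python stand-in for the actual splitting code; the function name and details are assumptions of ours):

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split sample indices into train/val/test while preserving the
    class proportions of `labels` within each split."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    splits = ([], [], [])
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n = len(idxs)
        n_tr, n_va = int(ratios[0] * n), int(ratios[1] * n)
        splits[0].extend(idxs[:n_tr])                 # training
        splits[1].extend(idxs[n_tr:n_tr + n_va])      # validation
        splits[2].extend(idxs[n_tr + n_va:])          # test
    return splits
```

Each class is shuffled and sliced independently, so every split keeps the dataset's label distribution.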
We further adopt two large benchmark datasets, CIFAR-10 \citep{dwivedi2020benchmarking} and Ogbg-molhiv \citep{hu2020open}, in the experiments. The graph version of CIFAR-10 is constructed from the superpixels of images, with RGB intensities and normalized coordinates forming the node signals. Ogbg-molhiv is a molecule graph dataset with 9-dimensional node features, including the atomic number, chirality, and additional atom features. We split these two datasets in accordance with \citep{dwivedi2020benchmarking} and \citep{hu2020open}, respectively. The results are obtained with 3 runs on CIFAR-10 and 10 runs on Ogbg-molhiv in order to alleviate the impact of network initialization. \textbf{Network architectures.} In the experiments, we adopt an architecture similar to \citep{xu2018representation}: for all the datasets, the network consists of four convolution layers, one graph-level readout module, and one prediction module. Specifically, the graph convolution layer is designed as introduced in Section~\ref{sec.method} or as defined by the various baseline models. An $l_{2}$ normalization function is applied in each convolution layer to stabilize and accelerate the training process, as in \citep{hamilton2017inductive}. A graph-level readout module then aggregates the graph features from all the convolution layers to generate the graph representation $h_{\mathcal{G}}$. It consists of node-wise mean and maximum operators (or only the mean operator on Ogbg-molhiv, as in \citep{hu2020open}), represented by $\omega(\cdot)$: \begin{equation} h_{\mathcal{G}}={\rm Concat}(\omega( X_{l})\,|\, l=1,2,3,4). \end{equation} Finally, a prediction module composed of a linear fully connected layer and a softmax layer predicts the category of the input graph.
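The readout can be sketched in a few lines of NumPy (an illustration of ours; $X_l$ denotes the node feature matrix output by layer $l$):

```python
import numpy as np

def graph_readout(layer_features):
    """h_G = Concat(omega(X_l) for all layers), where omega applies
    node-wise mean and max pooling to each layer's features."""
    parts = []
    for X in layer_features:          # each X has shape [n_nodes, d_l]
        parts.append(X.mean(axis=0))  # node-wise mean
        parts.append(X.max(axis=0))   # node-wise max
    return np.concatenate(parts)
```

With four layers of $d$ features each, the graph representation has dimension $2 \times 4d$ (or $4d$ when only the mean operator is used).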
The cross-entropy loss (represented as $T_{\Theta}(\mathcal{G}, Y)$) is adopted, which together with the regularization $\Omega(\alpha)$ for the diversity condition composes the objective function: \begin{equation} T_{\Theta}(\mathcal{G}, Y) + \gamma \ \Omega(\alpha), \end{equation} where $\Theta$ denotes all the free parameters and $Y$ indicates the ground truth categorical labels. In more detail, we consider two versions of the architecture in the experiments, in order to evaluate the models under the same number of features and under the same number of parameters, respectively. In the first case, all models have the same number of feature maps per hidden layer, whereas in the second case the models are composed of nearly the same number of parameters per hidden layer. We use the first version on the TU datasets in Section~\ref{exp.tu}, with 64 feature maps per hidden layer, and the second version in Section~\ref{exp.real}, with nearly 16,500 and 65,800 free parameters per layer for all the models on CIFAR-10 and Ogbg-molhiv, respectively. \textbf{Configurations.} We implement the proposed models in PyTorch \citep{paszke2017automatic} with the PyTorch Geometric package \citep{Fey/Lenssen/2019}, and optimize all of the models with the Adam optimizer \citep{DBLP:journals/corr/KingmaB14}, on workstations with a GeForce GTX 1080 Ti GPU for the TU datasets as well as CIFAR-10, and with an RTX 2080 Ti GPU for Ogbg-molhiv. For the TU datasets, the learning rate is 0.001 and the batch size is 64. The number of training epochs is set to 500, and early stopping is employed with a patience of 30. We obtain the following optimal hyper-parameters through grid search: weight decay $\in \{0, 1e^{-5}, 1e^{-4}\}$ and $\gamma \in \{0, 0.1, 10\}$. For CIFAR-10 and Ogbg-molhiv, the batch size is increased to 256 due to their large scale.
Similarly to \citep{dwivedi2020benchmarking}, we adopt a dynamic learning rate that is initialized to 0.001 and decays by a factor of 0.1 when the validation loss has not improved for 20 epochs, until the minimum learning rate $1e^{-5}$ is reached. The number of training epochs is 500, with early stopping (patience 50). The other settings are the same as for the TU datasets. For the polynomial basis in the filter design of BankGCN, we adopt the widely used Chebyshev polynomials: \begin{equation} T_0(\lambda)=1, \quad T_1(\lambda)=\lambda, \quad T_k(\lambda)=2\lambda T_{k-1}(\lambda)-T_{k-2}(\lambda). \end{equation} The graph Laplacian is shifted as $\tilde{L}=L-I$ for numerical stability, similarly to \citep{defferrard2016convolutional}. \textbf{Baselines.} We compare BankGCN with several state-of-the-art graph convolution methods. Among MPGCNs, we consider GCN \citep{kipf2016semi}, GraphSage \citep{hamilton2017inductive} with mean aggregation, GAT ($8$ heads) \citep{velikovi2018graph}, and GIN using SUM-MLP (2 layers), which achieves the best performance in \citep{xu2018how}. Since the filters in our method are defined with Chebyshev polynomials, it is necessary to compare with their counterpart among spectral graph convolution operators, ChebNets \citep{defferrard2016convolutional}. For a fair comparison, the results of the baseline models are obtained with the same configurations as BankGCN, using the public implementations provided in the PyTorch Geometric package \citep{Fey/Lenssen/2019}.
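With the recurrence above, the filtering never requires an eigendecomposition. A small NumPy sketch (ours, not the paper's implementation) of the recurrence applied to the shifted Laplacian, checked against explicit spectral filtering on a toy path graph:

```python
import numpy as np

def chebyshev_filter(L_tilde, x, alpha):
    """Apply sum_k alpha_k T_k(L~) to signal x via the three-term recurrence.
    Only matrix-vector products with L~ are needed, i.e. O(K|E|) per channel
    for a sparse Laplacian."""
    T_prev, T_curr = x, L_tilde @ x                  # T_0(L~)x and T_1(L~)x
    out = alpha[0] * T_prev
    if len(alpha) > 1:
        out = out + alpha[1] * T_curr
    for k in range(2, len(alpha)):
        T_prev, T_curr = T_curr, 2.0 * (L_tilde @ T_curr) - T_prev
        out = out + alpha[k] * T_curr
    return out

# Toy 5-node path graph with symmetric normalized Laplacian.
A = np.diag(np.ones(4), 1); A = A + A.T
d = A.sum(1)
L = np.eye(5) - A / np.sqrt(np.outer(d, d))
L_tilde = L - np.eye(5)                              # spectrum shifted into [-1, 1]

lam, U = np.linalg.eigh(L_tilde)
alpha = [0.5, -0.3, 0.2]                             # K = 2 coefficients
x = np.arange(5.0)

# Frequency response on the eigenvalues: T_0 = 1, T_1 = lam, T_2 = 2 lam^2 - 1.
resp = alpha[0] + alpha[1] * lam + alpha[2] * (2 * lam**2 - 1)
assert np.allclose(chebyshev_filter(L_tilde, x, alpha), U @ (resp * (U.T @ x)))
```

The final assertion confirms the equivalence between the spatial recurrence and filtering through the graph Fourier transform, as stated in Eq.~\eqref{e.adpconv}.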
\begin{table*}[tp] \caption{Results on graph classification with 20 runs for different datasets.}\label{t:1} \vskip 0.15in \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|ccccccc|c} \toprule &ENZY&DD&NCI1&PROT&NCI109&MUTA&FRAN&\#Para/layer\\ \midrule GCN&62.75 $\pm$ 5.83&77.75 $\pm$ 3.55&79.00 $\pm$ 1.93&74.87 $\pm$ 4.08&78.90 $\pm$ 1.52&81.34 $\pm$ 1.61&62.21 $\pm$ 2.41&4160\\ GraphSage&66.75 $\pm$ 6.31&75.21 $\pm$ 2.72&80.97 $\pm$ 1.87&75.13 $\pm$ 4.04&79.54 $\pm$ 2.24&82.30 $\pm$ 1.48&63.91 $\pm$ 1.96&8256 \\ GIN&61.08 $\pm$ 4.92&75.42 $\pm$ 3.31&81.19 $\pm$ 2.27&74.91 $\pm$ 3.88&80.71 $\pm$ 2.38&81.66 $\pm$ 2.48&68.11 $\pm$ 2.09&8320 \\ GAT &62.67 $\pm$ 7.52&77.50 $\pm$ 2.14&79.43 $\pm$ 2.38&75.09 $\pm$ 4.05&79.16 $\pm$ 1.85&81.28 $\pm$ 2.20&63.89 $\pm$ 1.53&4288\\ ChebNets ($K=2$)&66.75 $\pm$ 4.79&77.67 $\pm$ 2.91&81.80 $\pm$ 2.35&74.64 $\pm$ 4.75&81.27 $\pm$ 1.89&82.50 $\pm$ 1.58&68.35 $\pm$ 2.65 &12353\\ \midrule BankGCN \small ($K=2$, $s=8$)&\textbf{68.00 $\pm$ 5.23}&\textbf{78.14 $\pm$ 2.81}&\textbf{82.06 $\pm$ 1.75}&75.67 $\pm$ 4.19&81.54 $\pm$ 2.13&\textbf{82.89 $\pm$ 1.61}&67.82 $\pm$ 2.30&4163\\ BankGCN \small ($K=2$, $s=16$)&66.83 $\pm$ 5.19&77.42 $\pm$ 3.50&81.93 $\pm$ 2.15&\textbf{76.12 $\pm$ 5.08}&\textbf{81.62 $\pm$ 1.87}&82.57 $\pm$ 1.61&\textbf{68.43 $\pm$ 1.98}&4163\\ \bottomrule \end{tabular}} \vskip -0.1in \end{table*} \begin{table*} \caption{Study on the order $K$ of filters and the number of subspaces $s$ per layer. 
}\label{t:2} \vskip 0.15in \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|ccccccc|c} \toprule \multicolumn{2}{c}{} &ENZY&DD&NCI1&PROT&NCI109&MUTA&FRAN&\#Para/layer\\ \midrule $K=1$& \multirow{4}{*}{$s=8$}&67.17 $\pm$ 5.68&76.99 $\pm$ 2.99&81.02 $\pm$ 1.88&\textbf{75.89 $\pm$ 5.07}&80.92 $\pm$ 1.66&82.40 $\pm$ 1.89&66.95 $\pm$ 1.91&4162\\ $K=2$&&\textbf{68.00 $\pm$ 5.23}&\textbf{78.14 $\pm$ 2.81}&82.06 $\pm$ 1.75&75.67 $\pm$ 4.19&\textbf{81.54 $\pm$ 2.13}&\textbf{82.89 $\pm$ 1.61}&67.82 $\pm$ 2.30&4163\\ $K=3$&&65.75 $\pm$ 5.54&77.75 $\pm$ 2.66&81.85 $\pm$ 1.92&74.96 $\pm$ 5.77&80.82 $\pm$ 1.84&82.53 $\pm$ 1.56&68.35 $\pm$ 2.13&4164\\ $K=4$&&65.17 $\pm$ 6.62&77.75 $\pm$ 3.01&\textbf{82.46 $\pm$ 1.98}&75.09 $\pm$ 4.94&81.17 $\pm$ 2.11&82.26 $\pm$ 1.71& \textbf{68.35 $\pm$ 1.92}&4165 \\ \midrule \multirow{4}{*}{$K=2$}&$s=1$&63.58 $\pm$ 6.31&76.40 $\pm$ 2.34&80.46 $\pm$ 2.34&74.38 $\pm$ 4.80&79.23 $\pm$ 2.29&82.09 $\pm$ 1.51&65.52 $\pm$ 2.44&4163\\ &$s=4$&66.75 $\pm$ 5.61&78.14 $\pm$ 2.81&81.62 $\pm$ 1.84&75.67 $\pm$ 4.61&81.19 $\pm$ 2.08&82.70 $\pm$ 1.63&67.48 $\pm$ 2.09&4163\\ & $s=8$&\textbf{68.00 $\pm$ 5.23}&\textbf{78.14 $\pm$ 2.81}&\textbf{82.06 $\pm$ 1.75}&75.67 $\pm$ 4.19&81.54 $\pm$ 2.13&\textbf{82.89 $\pm$ 1.61}&67.82 $\pm$ 2.30&4163\\ &$s=16$&66.83 $\pm$ 5.19&77.42 $\pm$ 3.50&81.93 $\pm$ 2.15&\textbf{76.12 $\pm$ 5.08}&\textbf{81.62 $\pm$ 1.87}&82.57 $\pm$ 1.61&\textbf{68.43 $\pm$ 1.98}&4163\\ \bottomrule \end{tabular}} \vskip -0.1in \end{table*} \begin{figure*}[tp] \subfigure[\tiny $K=2, \gamma=0$.]{ \centering \includegraphics[width=0.18\linewidth]{figures/fig_2_a.pdf} } \subfigure[\tiny $K=2, \gamma=0.1$.]{ \centering \includegraphics[width=0.18\linewidth]{figures/fig_2_b.pdf} } \subfigure[\tiny $K=2, \gamma=10$.]{ \centering \includegraphics[width=0.18\linewidth]{figures/fig_2_c.pdf} } \subfigure[\tiny $K=3, \gamma=10$.]{ \centering \includegraphics[width=0.18\linewidth]{figures/fig_2_d.pdf} } \subfigure[\tiny $K=2, \gamma=0$.]{ \centering 
\includegraphics[width=0.18\linewidth]{figures/fig_2_e.pdf} } \caption{Comparisons of the frequency responses of the learned filters of BankGCN ($s=8$) in the first layer of the networks. (a) $\sim$ (d) are on NCI109 and (e) on FRANKENSTEIN.} \label{fig.3} \vskip -0.1in \end{figure*} \subsection{Results and Analysis on TU-benchmarks}\label{exp.tu} As presented in Table~\ref{t:1}, BankGCN outperforms all the MP baselines with nearly the same number of parameters (BankGCN even has fewer parameters than GraphSage and GIN). It also achieves better performance than its spectral counterpart ChebNets with far fewer free parameters, \emph{i.e.}, about $1/(K+1)$ as many. Fig.~\ref{fig.3} shows that the learned filters do not focus only on low-frequency components. Furthermore, the filters in a filter bank have different frequency responses: some suppress high-frequency components, while others focus on mid-frequency components, as demonstrated in Fig.~\ref{fig.3}. With such a bank of filters, BankGCN handles multi-channel signals flexibly and thereby achieves superior performance. We then go one step further and evaluate the adaptive filtering capabilities, related to question Q1 presented in Section~\ref{sec.pb}. As presented in Fig.~\ref{fig.3}, the learned filters have different frequency responses on the various datasets, as they adapt to the data characteristics. Tables~\ref{t:1} and~\ref{t:2} show that BankGCN ($s=1$), employing one single adaptive filter, still outperforms GCN with its `low-pass' filtering on most datasets. These results validate the benefits of adopting adaptive filters to flexibly capture the frequency characteristics of the data. \textbf{Study on the order of filters.} The order of the polynomials determines the function space of the filters. In the frequency domain, as demonstrated in Fig.~\ref{fig.3}, the bandpass property of the filters can be better realized with larger $K$, but this may overfit the spectrum of the training data.
In the graph domain, the value of $K$ corresponds to the neighborhood range from which information is aggregated, and a large $K$ harms the locality of the signals; a tradeoff is therefore needed. As shown in Table~\ref{t:2}, BankGCN with $K=2$ achieves the best performance on most datasets. We notice that for cases with complex node signals, like FRANKENSTEIN with $780$-channel node attributes, a relatively larger $K$ is needed in order to exploit their various frequency characteristics, while a smaller $K$ is preferred on simpler datasets, such as the PROTEINS dataset with $3$-channel node categorical features. \begin{table*}[tp] \caption{Ablation study on the diversity regularization with BankGCN ($K=2$, $s=8$).}\label{t:3} \vskip 0.15in \centering \resizebox{\textwidth}{!}{ \begin{tabular}{ll|ccccccc} \toprule & &ENZY&DD&NCI1&PROT&NCI109&MUTA&FRAN\\ \midrule &$\gamma=0$ &65.83 $\pm$ 6.66&77.03 $\pm$ 4.08&81.89 $\pm$ 1.95&75.36 $\pm$ 4.68&81.03 $\pm$ 1.95&82.44 $\pm$ 1.69&\textbf{67.82 $\pm$ 2.30}\\ &$\gamma=0.01$&66.50 $\pm$ 6.39&77.54 $\pm$ 3.21&81.63 $\pm$ 2.33&75.49 $\pm$ 4.07&81.27 $\pm$ 2.16&82.87 $\pm$ 1.94&67.71 $\pm$ 1.96\\ &$\gamma=0.1$ &\textbf{68.00 $\pm$ 5.23}&\underline{78.09 $\pm$ 2.18}&\textbf{82.06 $\pm$ 1.75}&75.67 $\pm$ 4.19&\underline{81.34 $\pm$ 1.92}&82.83 $\pm$ 1.87&67.68 $\pm$ 2.10 \\ &$\gamma=1$ &66.42 $\pm$ 6.20&77.97 $\pm$ 3.57&81.81 $\pm$ 1.97&\textbf{76.29 $\pm$ 4.84}&81.00 $\pm$ 2.21&\textbf{82.95 $\pm$ 1.44}&\underline{67.76 $\pm$ 1.65}\\ &$\gamma=10$ &66.75 $\pm$ 5.90&\textbf{78.14 $\pm$ 2.81}&\underline{82.06 $\pm$ 2.10}&75.31 $\pm$ 4.64&\textbf{81.54 $\pm$ 2.13}&\underline{82.89 $\pm$ 1.61}&67.20 $\pm$ 1.67 \\ &$\gamma=100$ &\underline{67.08 $\pm$ 5.42}&77.75 $\pm$ 3.26&81.81 $\pm$ 2.18&\underline{75.98 $\pm$ 4.40}&81.14 $\pm$ 1.94&82.44 $\pm$ 1.71&67.52 $\pm$ 1.83 \\ \bottomrule \end{tabular}} \vskip -0.15in \end{table*} \textbf{Study on the number of filters.} Furthermore, we study the impact of the number of filters in the filter bank
(equivalently, $s$, the number of subspaces) on the classification performance, to show the benefits of the subspace projection described in question Q2 in Section~\ref{sec.pb}. With a group of filters, the ability of the convolution operator to handle information is enhanced. As presented in the bottom part of Table~\ref{t:2}, as $s$ increases from $1$ to $8$, the performance of BankGCN improves on most datasets. This validates the necessity of using more than one filter. With $s$ further increased to $16$, the performance degrades on several datasets. Given that the total dimension of all the subspaces is fixed, the dimension of each subspace decreases, and the representation capacity of each subspace probably declines as $s$ grows. \begin{table}[t] \renewcommand{\baselinestretch}{1.0} \renewcommand{\arraystretch}{1.0} \renewcommand{\abovecaptionskip}{0pt} \centering \caption{Classification accuracy on CIFAR-10 and Ogbg-molhiv (no edge attributes) datasets. }\label{t:4} \begin{tabular*}{0.99\columnwidth}{@{\extracolsep{\fill}}l|cc|c|c} \toprule Method &CIFAR-10 &\multicolumn{2}{c|}{ CIFAR-10 (1000)}& Ogbg-molhiv\\ & Acc & Acc & Decrease & ROC-AUC \\ \midrule GCN&55.64 $\pm$ 0.11&36.47 $\pm$ 0.31 &\small-34.5\% &75.18 $\pm$ 1.85\\ GraphSage&63.51 $\pm$ 0.40&40.03 $\pm$ 0.56 &\small-37.0\%&75.39 $\pm$ 1.64\\ GIN&50.04 $\pm$ 0.06&31.97 $\pm$ 0.20 &\small -36.1\%&71.52 $\pm$ 1.45\\ GAT&60.34 $\pm$ 0.19&36.08 $\pm$ 0.04 &\small-40.2\%&75.08 $\pm$ 0.39\\ ChebNets ($K=2$)&64.33 $\pm$ 0.14&39.46 $\pm$ 0.75 &\small-38.7\%&74.69 $\pm$ 2.08\\ ChebNets ($K=3$)&63.62 $\pm$ 0.23&37.91 $\pm$ 0.40 &\small-40.4\%&73.17 $\pm$ 1.57\\ \midrule BankGCN($K=2, s=16$)&\textbf{66.17 $\pm$ 0.34}&42.82 $\pm$ 0.33 &\small-35.3\%&\textbf{77.95 $\pm$ 1.96}\\ BankGCN($K=3, s=16$)&66.00 $\pm$ 0.51&\textbf{42.95 $\pm$ 0.49} &\small-34.9\%&75.72 $\pm$ 1.45 \\ \bottomrule \end{tabular*} \vskip -0.1in \end{table} \textbf{Ablation Study.} We further evaluate the effect of the diversity
regularization proposed in Section 4.4. In Table~\ref{t:3}, we consider values of $\gamma \in \{0, 0.01, 0.1, 1, 10, 100\}$ to adjust its contribution to the objective function in Eq.~\eqref{e.obj}. The regularization improves the classification performance on almost all the datasets. In most cases, the best performance is achieved with $\gamma=0.1$ or $\gamma=10$, and we therefore consider $\gamma \in \{0, 0.1, 10\}$ when tuning the contribution of the regularization in the experiments. On the FRANKENSTEIN dataset, whose signal is composed of $780$-channel attributes, the regularization is not helpful. We infer that the information in such a high-dimensional signal is complex enough to induce different filters on its own, as presented in Fig.~\ref{fig.3}(e). Fig.~\ref{fig.3}(a)-(c) show that the learned filters in a filter bank with regularization present better diversity in terms of frequency response than those without regularization. For example, the filters drawn in blue and red in Fig.~\ref{fig.3}(a) have similar frequency responses, while they are more diverse in Fig.~\ref{fig.3}(b) and Fig.~\ref{fig.3}(c). This is further verified by the maximum similarity scores of the polynomial coefficients that define the filters, $\Omega(\alpha)=0.997$, $0.744$, and $0.649$ (computed as in Eq.~\eqref{e.diverse}) for Fig.~\ref{fig.3}(a)-(c), respectively. \subsection{Results and Analysis on CIFAR-10 and Ogbg-molhiv}\label{exp.real} BankGCN achieves the best performance on both CIFAR-10 and Ogbg-molhiv, as presented in Table~\ref{t:4}, with all the models having a similar number of free parameters per hidden layer. Furthermore, we construct a reduced CIFAR-10 (1000) dataset by taking 100 graphs per category to form the training set, while keeping the validation and test sets unchanged. BankGCN is still superior on the reduced CIFAR-10 and is among the models with the smallest performance loss compared with the full dataset.
Since ROC-AUC is a measure of the generalization ability of a model, BankGCN also performs well in the sense of generalization, especially when compared with its spectral counterpart ChebNets. \section{Conclusion}\label{sec.con} In this paper, we propose a novel graph convolution operator, termed BankGCN, built on an adaptive filter bank to process the heterogeneous information of graph data. The filter bank is equivalent to a group of learnable message passing schemes over $K$-hop neighborhoods. Together with the subspace projection, BankGCN presents a powerful capacity to adaptively handle information of diverse frequency components with significantly fewer parameters than its competitors, and achieves excellent performance on graph classification tasks. An interesting direction for future research is to analyze the capacity of the proposed graph convolution operator in terms of the graph isomorphism test. It may also be promising to employ BankGCN in a variety of tasks on non-Euclidean data, such as 3-D point cloud classification and segmentation.
\section{Introduction}\label{intro} One of the crowning achievements of the Golden Age of Relativity is the discovery that black holes exhibit thermodynamic properties. A black hole has a natural temperature associated with its surface gravity and an entropy associated with its area. These quantities follow the classical laws of thermodynamics. In the semi-classical treatment, black holes radiate and eventually evaporate. Though the Schwarzschild black hole in an asymptotically flat space-time has negative specific heat, and is thus thermodynamically unstable, the Schwarzschild black hole in an asymptotically anti-de Sitter (AdS) space possesses positive specific heat at high temperature and is therefore thermodynamically stable. In their remarkable work \cite{H-P}, Hawking and Page further showed that these AdS-Schwarzschild black holes acquire negative free energy relative to AdS space-time at high temperatures and exhibit a first order phase transition as one tunes the temperature. More recently, the study of black holes in AdS space-time has gained a lot of attention due to Maldacena's discovery of the AdS/CFT conjecture \cite{Maldacena}. Within this context, the physics of black holes or, more precisely, the thermodynamical properties of the black hole in the bulk AdS space-time play a crucial role in triggering novel behaviour, including phase transitions, of the strongly coupled dual gauge theories that reside on the boundary of the asymptotically AdS space. This line of investigation started with the work of Witten, who showed that the phase transition that takes place between the thermal AdS at low temperatures and the AdS-Schwarzschild black hole at high temperatures could be realized as the confinement/deconfinement transition in the language of the boundary $SU(N_c),\, \,{\mathcal N = 4}$ SYM theory \cite{Witten1, Witten2}. Subsequently, several other extensions of this work appeared.
These include the consideration of R-charges \cite{Chamblin, Myers, Rabin}, and the addition of Gauss-Bonnet \cite{Nojiri, Cai, Cvetic, Cho, Tori, Dey1, Myung1} corrections or of a Born-Infeld \cite{Fernando, Dey2, Cai2, Fernando2, Myung} term (separately, and in combination \cite{Dey3, Zou, Wei, Zou1}) to the action of the AdS-Schwarzschild black hole. There has also been interest in searching for the gravity dual of $SU(N_c),\, \, {\mathcal N = 4}$ SYM theory coupled to $N_f$ massless fundamental flavors at finite temperature and baryon density \cite{Guen, Karch, Head, Big, shankha, Kumar}. The fundamental flavors in the dual closed string representation of $SU(N_c),\, \, {\mathcal N = 4}$ SYM theory correspond to adding an open string sector, with one end of each string attached to the boundary of the AdS space and the body hanging into the bulk, extending up to the center of the AdS space or the horizon of the black hole. In the dual gauge theory, the attached end point of the string corresponds to a quark or an anti-quark, and the body of the string corresponds to the gluonic field of the dual gauge theory. In \cite{Head}, the authors also studied the stability of the gravity configurations through a free energy calculation. The free energy is a function of the Polyakov-Maldacena loop (PML). The loop is computed from the area of a certain minimal surface in the dual supergravity background. To fix the (average) area of the appropriate minimal surface, they introduce a Lagrange multiplier term into the bulk action. This term, which can also be viewed as a chemical potential for the PML, contributes to the bulk stress tensor like a string stretching from the horizon to the boundary. They find the corresponding ``hedgehog'' black hole solutions numerically, within the $SO(6)$ preserving ansatz.
Motivated by these developments, in this paper we first study the thermodynamics of the recently developed AdS-Schwarzschild black hole in the presence of an external string cloud \cite{shankha}. In that work, the authors considered the gravity action of the AdS-Schwarzschild space-time with the contribution of external matter comprising uniformly distributed strings, each of which has one end stuck on the boundary. We observe that the black hole configuration is stable at any temperature compared to the AdS configuration. Even at zero temperature, there is a black hole with a minimum radius, whose size depends crucially on the density of the string cloud. We see that the density of this cloud not only plays an important role in determining the minimum radius of the black hole, it is also an important parameter controlling the number of black holes present at any given temperature. If the cloud density is greater than a critical value, there exists only one black hole. For a cloud density less than the critical value, and for curvature constant equal to one, there exist three black holes within a certain range of temperature; beyond this temperature range we again have a single black hole configuration. Depending on their sizes, we call them the small, medium and large black holes. Among these three holes, the small and the large come with positive specific heat, while the remaining one has a negative specific heat. Therefore, except for the medium one, the other two can be stable. Due to the presence of a large number of strings, the AdS background is deformed into a small black hole background. We study the stability of these two black holes by analyzing their free energies as a function of the temperature, and the Landau function as a function of their radii (at different temperatures).
Our observation is that there is a critical temperature below which the small black hole is the stable configuration, and above which the large black hole is the stable one. At the critical temperature, a transition between the small and the large black holes takes place. Though our approach is different, in the context of the stability of the space-time configuration our result is almost the same as that of \cite{Head}. We therefore suspect that this may lead to an instability of the bound states of quark and anti-quark pairs in the dual gauge theory. In order to test this suspicion, we finally study the dual gauge theory along the lines of \cite{Yang, Yuan}. In the dual gauge theory, we consider a probe string whose end points are attached to the boundary of the black hole background and whose body hangs into the bulk space-time. In the bulk space-time, there are two configurations of the open string. One is the U-shaped configuration, where the body of the string reaches up to a maximum distance from the boundary. The other is the straight configuration, where the open string reaches down to the horizon of the black hole. The first configuration corresponds to the confined state of quark and anti-quark pairs, and the second configuration corresponds to the deconfined state of quark and anti-quark pairs. We see that for any black hole background, an open string is in the U-shaped configuration for a short distance between the quark and anti-quark pair, and is in the straight configuration for a large distance. Thus, the black hole configuration of the gravity side corresponds to a deconfined state of quark and anti-quark pairs in the dual gauge theory. We have organized our paper as follows: we start by writing the action of the AdS-Schwarzschild black hole with the matter contribution coming from the infinitely long strings, together with the corresponding black hole solution, in section \ref{gravitydual}. Then we compute the thermodynamical quantities in section \ref{thermo}.
Section \ref{phase} is devoted to the study of the different phases of the black holes. Before summarising our work, we study the dual gauge theory in section \ref{gaugetheory}. Finally, we summarize in section \ref{conclude}. \section{AdS-Schwarzschild black hole in the presence of an external string cloud}\label{gravitydual} We start this section by considering the $(n + 1)$ dimensional gravitational action in the presence of a cosmological constant, with the contribution of the external string cloud, \begin{equation} \mathcal{S} = \frac{1}{16 \pi G_{n+1}} \int dx^{n+1} {\sqrt {-g}}( R - 2 \Lambda ) + S_m, \label{totac} \end{equation} where $S_m$ represents the contribution of the string cloud and can be expressed in the following way: \begin{equation} S_m = -{\frac{1}{2}} \sum_i {\cal{T}}_i \int d^2\xi {\sqrt{-h}} h^{\alpha \beta} \partial_\alpha X^\mu \partial_\beta X^\nu g_{\mu\nu}, \label{matac} \end{equation} where $g_{\mu\nu}$ and $h^{\alpha \beta}$ are the space-time and world-sheet metrics respectively, with $\mu, \nu$ representing the space-time directions and $\alpha, \beta$ standing for the world-sheet coordinates. $S_m$ is a sum over all the string contributions, and ${\cal{T}}_i$ is the tension of the $i$'th string. The integration in ({\ref{matac}}) is taken over the two dimensional string world-sheet coordinates. The action (\ref{totac}) possesses black hole solutions, and the metric of such a black hole can be written as \begin{equation} ds^2 = -g_{tt}(r) dt^2 + g_{rr}(r) dr^2 + r^2 g_{ij} dx^i dx^j. \label{genmet} \end{equation} Here $g_{ij}$ is the metric on the $(n-1)$ dimensional boundary and \begin{equation} g_{tt}(r) = K +\frac{r^2}{l^2} - \frac{2 m}{r^{n-2}} - \frac{2 a}{(n-1) r^{n-3}}= \frac{1}{g_{rr}}, \label{comp} \end{equation} where $K = 0, 1, -1$ depending on whether the $(n-1)$ dimensional boundary is flat, spherical or hyperbolic respectively, with boundary curvature $(n-1)(n-2) K$ and volume $V_{n-1}$.
The uniformly distributed string cloud density $a$ can be written as \begin{equation} a(x) = {\cal T} \sum_i \delta_i^{(n-1)}(x - X_i),~~~{\rm with} ~a > 0. \label{denfun} \end{equation} In writing $g_{tt}(r)$, the cosmological constant is parameterized as $\Lambda = - n(n-1)/(2 l^2)$. With equation (\ref{comp}), the metric (\ref{genmet}) represents a black hole with a singularity at $r=0$ and a horizon located where $g_{tt}(r) =0$. The horizon radius, denoted by $r_+$, satisfies the equation \begin{equation} K +\frac{r_+^2}{l^2} - \frac{2 m}{r_+^{n-2}} - \frac{2 a}{(n-1) r_+^{n-3}} = 0. \label{hor} \end{equation} This allows us to write the integration constant $m$ in terms of the horizon radius as \begin{equation} m = K\frac{r_+^{n-2}}{2}+\frac{(n-1) r_+^n - 2 a l^2 r_+}{2 (n-1) l^2}. \label{mas} \end{equation} The integration constant $m$ is related to the ADM mass $M$ of the black hole as \begin{equation} M = \frac{(n-1) V_{n-1} m}{8 \pi G_{n+1}}. \end{equation} Therefore the mass of the black hole can finally be written in the form \begin{equation} M =\frac{(n-1) V_{n-1} }{8 \pi G_{n+1}} \Big[K\frac{r_+^{n-2}}{2}+\frac{(n-1) r_+^n - 2 a l^2 r_+}{2 (n-1) l^2}\Big]. \label{massBH} \end{equation} Having the black hole metric and mass at hand, we discuss the thermodynamics of these black holes in the following sections. We first compute the thermodynamical quantities. \section{Thermodynamical quantities}\label{thermo} It has been well understood that black holes behave as thermodynamic systems. The laws of black hole mechanics become similar to the usual laws of thermodynamics after appropriate identifications between the black hole parameters and the thermodynamical variables.
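As an intermediate step (a short sketch using only (\ref{comp}) and (\ref{mas})), the Hawking temperature is fixed by the surface gravity at the horizon, $T = \kappa/2\pi$ with $\kappa = \frac{1}{2}\,g_{tt}'(r_+)$. Eliminating $m$ through (\ref{mas}), the derivative takes the compact form
\begin{equation}
4\pi T = g_{tt}'(r_+) = \frac{n\, r_+}{l^2} + \frac{(n-2)K}{r_+} - \frac{2a}{(n-1)\, r_+^{\,n-2}},
\end{equation}
which, after putting all terms over a common denominator, reproduces the temperature quoted in the next paragraph.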
In order to study the thermodynamics of these black holes we first compute the various thermodynamical quantities. The temperature of the black hole follows from the standard formula \begin{equation} T = \frac{1}{4 \pi}\frac{d g_{tt}}{dr}\bigg|_{r = r_+} = \frac{ n(n-1)r_+^{n+2} + K(n-1)(n-2)l^2r_+^n - 2 a l^2 r_+^3}{4 \pi (n-1) l^2 r_+^{n+1}}. \label{flattemp} \end{equation} To find the entropy we assume that these black holes satisfy the first law of thermodynamics. The entropy then takes the form \begin{equation} S = \int T^{-1} dM, \end{equation} which satisfies the universal area law of the entropy, \begin{equation} S = \frac{V_{n-1}r_+^{n-1}}{4 G_{n+1}}. \end{equation} Next, the specific heat associated with these black holes is found to be \begin{equation} C = \frac{\partial M}{\partial T} = \frac{(n-1)V_{n-1}}{4 G_{n+1}}\Big[\frac{n(n-1)r_+^{2n-2}+K(n-1)(n -2)l^2 r_+^{2n-4} - 2 a l^2 r_+^{n-1}}{n(n-1) r_+^{n-1} -K(n-1)(n-2)l^2r_+^{n-3} + 2(n-2) a l^2}\Big]. \label{sh} \end{equation} The free energy can be calculated using \begin{equation}\label{fenergy} F=E-TS= \frac{V_{n-1}}{16\pi G_{n+1}}\Big[K r_+^{n-2}-\frac{r_+^n}{l^2}-\frac{(n-2)2a r_+}{(n-1)}\Big], \end{equation} where $E$ is the energy of the black hole, which we take equal to its mass. Finally, to gain a better understanding of the phase structure, we also compute the Landau function around the critical point, taking the radius of the black hole as the order parameter. The Landau function depends on the order parameter $r_+$ and the temperature $T$ as \begin{equation}\label{landau} G=\frac{V_{n-1}}{16\pi l^2G_{n+1}}\Big[(n-1)r_+^n -4\pi l^2 T r_+^{n-1}+ K (n-1)l^2r_+^{n-2}-2 a l^2 r_+\Big].
\end{equation} At the extremum of this function, that is, when $\frac{\partial G}{\partial r_+} =0$, we recover the expression for the temperature given in (\ref{flattemp}). Also, if we substitute the expression for the temperature into (\ref{landau}), $G$ reduces to the free energy given in (\ref{fenergy}). Many interesting features of these black holes, related to local and global stability, can be studied from a detailed analysis of the thermodynamic quantities. We study the thermodynamical phases of these black holes in the next section. \section{Phases of the black hole}\label{phase} In this section we consider black holes in five dimensions $(n = 4)$; the results can easily be extended to higher dimensions. We begin by introducing the two dimensionless quantities $\bar a = \frac{a}{l}$ and $\bar r = \frac{r_+}{l}$. In terms of these, the temperature can be expressed as \begin{equation} \bar T=\frac{1}{6\pi l \bar r^2}\big[6\bar r^3 + 3 K \bar r - \bar a\big]. \end{equation} The behaviour of the temperature as a function of $\bar r$, for string cloud density $\bar a$ below and above a critical value $\bar a_c$ and for $K =1$, is shown in figure \ref{temp}. \begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{T}{$\bar T$} \psfrag{r}{$\bar r$} \mbox{\subfigure[]{\includegraphics[width=6.5 cm]{temperature.eps}} \quad \subfigure[]{\includegraphics[width=6.5 cm]{temperature2.eps}}} \end{psfrags} \caption{The plot (a) is for $\bar a=0.3 < \bar a_c, K=1 \,{\rm and}\, l=2$ and plot (b) is for $\bar a=0.5 > \bar a_c, K=1 \,{\rm and}\, l=2$. } \label{temp} \end{center} \end{figure} For $\bar a < \bar a_c$, we notice that a black hole exists even at zero temperature. The radius of the zero-temperature black hole can be found as a power series in $\bar a$, \begin{equation} \bar r_{0} = \frac{\bar a}{3} + \mathcal{O}(\bar a^2).
\label{rmin} \end{equation} From figure \ref{temp}(a) it is further observed that at low temperature only one black hole exists. As the temperature increases, the size of this black hole slowly increases, and at a critical temperature two new black holes nucleate alongside the existing one. As the temperature increases further, the radius of one of these two new black holes decreases while that of the other increases. Depending on their sizes we call the three black holes small, medium and large. Up to a certain temperature all three black holes exist; beyond it the small and medium black holes merge together and vanish. Finally, only the large black hole exists at high temperature. For $\bar a > \bar a_c$, the figure shows that at any temperature only one black hole exists. However, when the size of the black hole becomes small, the associated temperature is negative. To avoid a negative temperature, the radius of the black hole must be bounded below by a minimum size, equal to $\bar r_{0}$. Therefore, for any value of the string cloud density there is a black hole of finite size with non-zero entropy. From the above discussion it is clear that the space-time configuration depends crucially on the critical value $\bar a_c$ of the string cloud density. Later on we will see that the behaviour of the thermodynamical quantities also depends on $\bar a_c$. We should therefore determine the critical value $\bar a_c$. To calculate it, we focus on figure \ref{temp}. If $\bar a$ is above the critical value $\bar a_c$, the temperature is a monotonically increasing function of the black hole radius. Below the critical value, however, the temperature has two extrema at two real values of the radius. So $\frac{\partial \bar T}{\partial \bar r} =0$ should have two real positive roots.
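This condition can be made explicit (a short derivation using only the dimensionless temperature quoted above, with $K=1$):
\begin{equation}
\frac{\partial \bar T}{\partial \bar r} = \frac{1}{6\pi l \bar r^3}\big[6\bar r^3 - 3\bar r + 2\bar a\big],
\end{equation}
so the extrema of $\bar T$ are the positive roots of the cubic $p(\bar r) = 6\bar r^3 - 3\bar r + 2\bar a$. Since $p(0) = 2\bar a > 0$ and $p(\bar r)\to\infty$ for large $\bar r$, two positive roots exist only if the local minimum of $p$, located at $\bar r = 1/\sqrt 6$, is negative:
\begin{equation}
p\!\left(\tfrac{1}{\sqrt 6}\right) = -\frac{2}{\sqrt 6} + 2\bar a < 0 \quad\Longrightarrow\quad \bar a < \frac{1}{\sqrt 6}.
\end{equation}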
For two real roots to exist, we find that the string cloud density $\bar a$ has to be less than the critical value $$\bar a_c =\frac{1}{\sqrt 6}\approx 0.408.$$ In order to study the stability of these black holes we examine the associated specific heat. In terms of the above dimensionless quantities the specific heat can be written as \begin{equation} \bar C=\frac{3 l^3 V_3}{4 G_5}\Big[\frac{6\bar r^6 + 3 K \bar r^4 - \bar a \bar r^3}{6\bar r^3 - 3 K \bar r + 2\bar a}\Big]. \end{equation} The specific heat is plotted as a function of $\bar r$ in figure \ref{specific}. \begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{CP}{$\bar C$} \psfrag{r}[][]{$\bar r$} \mbox{\subfigure[]{\includegraphics[width=6.5 cm]{specificheat.eps}} \quad \subfigure[]{\includegraphics[width=6.5 cm]{specificheat2.eps}}} \end{psfrags} \caption{The plot (a) is for $\bar a=0.39 < \bar a_c, K=1 \,{\rm and}\, l=2$ and plot (b) is for $\bar a=0.5 > \bar a_c, K=1 \,{\rm and}\, l=2$.} \label{specific} \end{center} \end{figure} From figure \ref{specific} it is evident that for $\bar a < \bar a_c$ the specific heat is positive for the small and large black holes, while it is negative for the medium black hole. Therefore the black holes with positive specific heat are expected to be locally stable, while the black hole with negative specific heat is unstable. For $\bar a > \bar a_c$, the specific heat increases monotonically from zero with the radius of the black hole, so this black hole can also be stable. We then analyse the free energy to check the stability further.
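Before doing so, note that the location of the divergence of the specific heat is not accidental (a consistency check using only (\ref{flattemp}) and (\ref{sh})). Writing $C = (\partial M/\partial r_+)/(\partial T/\partial r_+)$ and differentiating (\ref{flattemp}), one finds
\begin{equation}
\frac{\partial T}{\partial r_+} = \frac{n(n-1)r_+^{n-1} - K(n-1)(n-2)l^2 r_+^{n-3} + 2(n-2)\,a\, l^2}{4\pi(n-1)l^2 r_+^{n-1}},
\end{equation}
whose numerator is precisely the denominator of (\ref{sh}). The specific heat therefore diverges exactly at the extrema of $T(r_+)$, i.e.\ at the temperatures where the small/medium and medium/large branches merge in figure \ref{temp}(a).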
The free energy in terms of the dimensionless quantity $\bar r$ can be rewritten as \begin{equation} \bar F=\frac{V_3 l^2 \bar r}{16\pi G_5}\big[K \bar r -\bar r^3- \frac{4}{3}\bar a\big]. \end{equation} \begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{F}[][]{$\bar F$} \psfrag{r}[][]{$\bar r$} \mbox{\subfigure[]{\includegraphics[width=6.5 cm]{freeenergy.eps}} \quad \subfigure[]{\includegraphics[width=6.5 cm]{freeenergy2.eps}}} \end{psfrags} \caption{The plot (a) is for $\bar a=0.2 < \bar a_c, K=1 \,{\rm and}\, l=2$ and plot (b) is for $\bar a=0.45 > \bar a_c, K=1 \,{\rm and}\, l=2$.} \label{freeenergy} \end{center} \end{figure} Figure \ref{freeenergy} shows that for $\bar a < \bar a_c$ the free energy starts from zero at $\bar r = 0$ and initially becomes more negative as the black hole radius increases. At a certain radius the free energy reaches a minimum, then rises to a maximum as the radius increases further. It then drops back into the negative region and keeps decreasing with increasing radius. Therefore the first extremum, which corresponds to the small black hole, is a preferable configuration compared to the AdS configuration, since its free energy is lower. However, the free energy of the small black hole is greater than that of the large black hole. The large black hole should therefore be more stable than the small one, and there is a possibility of a phase transition between the two. For $\bar a > \bar a_c$, the free energy again decreases monotonically with the radius, so the black hole configuration is the stable one. To verify the possibility of a phase transition between these black holes we now study the free energy as a function of temperature.
\begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{F}{$\bar F$} \psfrag{T}{$\bar T$} \mbox{\subfigure[]{\includegraphics[height=6.0cm,width=6.5 cm]{freeenergy-temp.eps}} \quad \subfigure[]{\includegraphics[height=4.5cm,width=6.5 cm]{freeenergy-temp2.eps}}} \end{psfrags} \caption{The plot (a) is for $\bar a=0.2 < \bar a_c, K=1 \,{\rm and}\, l=2$ and plot (b) is for $\bar a=0.45 > \bar a_c, K=1 \,{\rm and}\, l=2$.} \label{F-T} \end{center} \end{figure} Figure \ref{F-T} shows the free energy as a function of temperature. From the plot we note the following. For $\bar a > \bar a_c$, there is only one branch, with negative free energy; this branch is therefore stable. For $\bar a < \bar a_c$, at low temperature the free energy has only one branch (I), and as the temperature is increased two new branches (II and III) with positive free energy appear at temperature $\bar T_1$. As the temperature increases further, the free energy of both new branches continues to decrease. Branch III crosses branch I at temperature $\bar T_2$ and becomes more and more negative, until at temperature $\bar T_3$ branch II meets branch I and both disappear. These three branches represent the small, intermediate and large black holes respectively. Of the three, the intermediate black hole is unstable, with negative specific heat, while the other two are stable, with positive specific heat. Below temperature $\bar T_1$ only branch I exists, with free energy lower than that of the AdS configuration. In the temperature range between $\bar T_1$ and $\bar T_2$, the free energy of branch III is higher than that of branch I, so branch I is the more stable configuration. Once the temperature crosses $\bar T_2$ the situation is reversed and branch III becomes the stable configuration.
Therefore at low temperature there is only the small black hole; as the temperature increases and approaches $\bar T_1$, nucleation of the medium and large black holes occurs, the medium one being unstable. At $\bar T_2$ a crossover from the small black hole to the large black hole takes place. Above this temperature only the large black hole exists. Therefore the AdS configuration cannot be the most stable one; only the small or the large black hole configurations survive. To lend further support to this scenario, we now study the Landau function. In terms of the dimensionless quantities the Landau function can be written as follows: \begin{equation} \bar G=\frac{V_3 l^2 }{16\pi G_5}\big[3 \bar r^4 -4\pi T l \bar r^3 + 3 K\bar r^2- 2\bar a \bar r\big]. \end{equation} To analyze the different phases we plot this Landau function with respect to $\bar r$ for different temperatures. \begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{G}[][]{$\bar G$} \psfrag{r}[][]{$\bar r$} \mbox{\subfigure[]{\includegraphics[width=6.5 cm]{gibbsfreeenergy.eps}} \quad \subfigure[]{\includegraphics[width=6.5 cm]{gibbsfreeenergy2.eps}}} \end{psfrags} \caption{The plot (a) is for $\bar a=0.3 < \bar a_c, K=1 \,{\rm and}\, l=2$ and plot (b) is for $\bar a=0.45 > \bar a_c, K=1 \,{\rm and}\, l=2$. The blue curve corresponds to the temperature $\bar T_1 =0.208$, the green curve corresponds to $\bar T_2 = 0.21$ and the pink curve corresponds to $\bar T_3 = 0.213$ such that $\bar T_1 < \bar T_2 < \bar T_3$.} \label{gibbs} \end{center} \end{figure} In figure \ref{gibbs}(a), for $\bar a < \bar a_c$, we plot the Landau function for three different temperatures. Two black hole solutions exist at temperature $\bar T_1$, but the Landau function of the small black hole is lower than that of the large one, so the small one is stable. Similarly, at temperature $\bar T_2$ the two black hole solutions coexist.
Finally, at temperature $\bar T_3$ the Landau function of the large black hole is lower than that of the small black hole, and the large black hole is the stable configuration. We therefore conclude that at low temperature the small black hole is the stable configuration, above a critical temperature the large one is stable, and at the critical temperature there is a phase transition between the small and large black holes. For $\bar a > \bar a_c$, the Landau function has a single minimum with negative value, and its qualitative behaviour is the same for different temperatures; therefore there is only one stable black hole. All the above calculations were done for curvature constant $K = 1$. For $K = 0$ and $-1$ we find that the qualitative features of the above thermodynamical quantities are similar to the case $K = 1$ with $\bar a > \bar a_c$. So at any temperature there exists a single black hole phase of finite size with non-zero entropy. As a concluding remark, the AdS space is not a stable configuration; either the small or the large black hole configuration is the stable one. Therefore, in the dual gauge theory we may expect that bound states of quark and anti-quark pairs do not exist. In the next section we study the dual gauge theory to check the stability of such bound states. \section{Dual gauge theory}\label{gaugetheory} In the dual gauge theory we consider a probe string whose end points are attached to the boundary of the black hole background while the body of the string hangs into the bulk space-time. In the bulk there are two configurations of the open string. One is the U-shaped configuration, where the body of the string reaches a maximum distance from the boundary. The other is the straight configuration, where the open string extends down to the horizon of the black hole.
The first configuration corresponds to the confined state of quark and anti-quark pairs and the second to the deconfined state. Since we are going to study the existence of bound states of quark and anti-quark pairs, we must study the distance between the quark and the anti-quark living on the boundary of the black hole space-time. It is therefore convenient to trade the radial coordinate for an inverse one adapted to the boundary. With this aim we replace the radial coordinate $r$ by $\frac{l^2}{u}$, and the metric (\ref{genmet}) reduces to the form \begin{equation}\label{dualgrav} ds^2= f(u)\big[-h(u)dt^2 + dx^2 + dy^2 + dz^2 + {du^2\over h(u)}\big], \end{equation} \begin{equation} f(u)= {l^2\over u^2} \quad\quad{\rm and}\quad\quad h(u)= 1+ \frac{u^2}{l^2}-{2m u^4\over l^6} - {2\over 3}{\bar a u^3\over l^3}.\nonumber \end{equation} In this coordinate system the boundary is at $u = 0$ and the horizon radius is obtained by solving \begin{eqnarray} h(u_{+})= 1+ \frac{u_+^2}{l^2} -{2m u_{+}^4\over l^6} - {2\over 3}{\bar a u_{+}^3\over l^3}=0. \label{horu} \end{eqnarray} Large and small black holes correspond to small and large $u_+$ respectively. In order to study the distance between the quark and anti-quark we start with the world-sheet action of a probe string in the above black hole background. The world-sheet action can be written as \begin{equation} S =\int d^2\xi \mathcal{L} = \int d^2\xi \sqrt{-\det h_{\alpha \beta}}, \end{equation} where the induced metric is $h_{\alpha\beta} = \partial_\alpha X^\mu\partial_\beta X^\nu g_{\mu\nu}$. We prefer to work in the static gauge $\xi^0 = t,\; \xi^1 = x$. With these choices the induced metric in the string frame can be written as \begin{equation} ds^2 = f(u)\big[-h(u)dt^2 + \big\{1 + \frac{u'^2}{h(u)}\big\}dx^2\big]. \end{equation} Here $u'$ denotes the derivative of $u$ with respect to $x$.
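For completeness, the Lagrangian used in the next step follows directly from this induced metric: its determinant is
\begin{equation}
\det h_{\alpha\beta} = -f(u)h(u)\cdot f(u)\Big[1+\frac{u'^2}{h(u)}\Big] = -f^2(u)\big[h(u)+u'^2\big],
\end{equation}
so that $\sqrt{-\det h_{\alpha\beta}} = f(u)\sqrt{h(u)+u'^2}$.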
The Lagrangian and Hamiltonian of the quark and anti-quark pair are easily calculated as \begin{equation} \mathcal {L} = \sqrt{-\det h_{\alpha \beta}} = f(u) \sqrt{h(u) + u'^2}, \end{equation} \begin{equation}\label{hamiltonian} \mathcal { H} = \Big(\frac{\partial \mathcal L}{\partial u'}\Big)u' -\mathcal {L} = -f(u) \frac{h(u)}{\sqrt{h(u) + u'^2}}. \end{equation} Imposing the boundary conditions \begin{equation} u\big(x = \pm \tfrac{L}{2}\big) = 0, \quad u(x=0) = u_0 \,\,{\rm and}\,\, u'(x=0)=0, \end{equation} we obtain the conserved energy of the quark and anti-quark pair as \begin{equation} \mathcal {H}(x=0) = -f(u_0) \sqrt{h(u_0)}. \end{equation} From equation (\ref{hamiltonian}), $u'$ can then be found as \begin{equation} u' = \sqrt{h(u)\Big[\frac{\sigma^2(u)}{\sigma^2(u_0)}-1\Big]}, \end{equation} where $$\sigma(u)= f(u)\sqrt{h(u)}.$$ Finally, the distance $L$ between the quark and the anti-quark can be calculated as \begin{equation} L = \int_{-\frac{L}{2}}^{\frac{L}{2}}dx = 2 \int_0^{u_0} \frac{1}{u'}du =2 \int_0^{u_0}du \Big[ h(u)\Big\{\frac{\sigma^2(u)}{\sigma^2(u_0)}-1\Big\} \Big]^{-\frac{1}{2}}, \end{equation} where $u_0$ is the maximum depth that the string can reach towards the black hole horizon. \begin{figure}[h] \begin{center} \begin{psfrags} \psfrag{f}[][]{$L$} \psfrag{u}[][]{$u_0$} \mbox{\subfigure[]{\includegraphics[width=6.5 cm]{distancebetweenqqbar11.eps}} \quad \subfigure[]{\includegraphics[width=6.5 cm]{distancebetweenqqbar15.eps}}} \end{psfrags} \caption{Both the plots are drawn for $K=1 \,{\rm and}\, l=1$. In plot (a) $u_+ = 8,\; \bar a=0.3(\rm green\; curve)$ and $\bar a=0(\rm blue\; curve)$. In plot (b) $u_+ = 20,\; \bar a=0.5(\rm pink\; curve),\;\bar a=0.3(\rm red\; curve)$ and $\bar a=0(\rm blue\; curve)$. } \label{distance} \end{center} \end{figure} In figure \ref{distance} we plot the distance $L$ between the quark and anti-quark pair as a function of $u_0$.
Notice that for the large black hole, that is for small $u_+$, irrespective of the string cloud density the distance $L$ initially increases as $u_0$ approaches the black hole horizon, takes its maximum value when $u_0$ comes close to the horizon, and then drops to zero when $u_0$ reaches the horizon $u_+$. For the small black hole, that is for large $u_+$, there are two scenarios. If the string cloud density is zero, the behaviour of the distance $L$ is, as expected, similar to that for the large black hole. For non-zero string cloud density, the distance $L$ attains a local maximum and then drops to zero before $u_0$ reaches the horizon. Thus, for any black hole background, an open string is in the U-shaped configuration for short separations between the quark and anti-quark and in the straight configuration for large separations; only the maximum depth $u_0$ of the probe string changes with the cloud density. Therefore, for any black hole configuration, only a stable deconfined phase of quark and anti-quark pairs exists in the dual gauge theory. This does not match the result of \cite{Yang}, since our background is different from theirs. Although we have shown the plots only for $\bar a < \bar a_c$ and $K= 1$, the qualitative nature of the curves is the same for other values of $\bar a$ and $K$; we therefore do not provide the plots for those cases or repeat the same analysis. \section{Summary}\label{conclude} In this work we first studied the thermodynamics of the AdS-Schwarzschild black hole in the presence of an external string cloud. We observed that for all values of the curvature constant the black hole configuration is stable compared to the AdS configuration.
However, when the curvature constant equals one and the string cloud density is less than a critical value, there are three black holes within a certain range of temperature, while outside this range there is only one. Depending on their sizes we call these three black holes small, medium and large. Among them, the small and large black holes come with positive specific heat while the medium one has negative specific heat. Due to the presence of a large number of strings, the AdS background is deformed into a small black hole background. In order to test their stability we study the free energy and the Landau function. We observe that within the aforesaid temperature regime a phase transition takes place between the small and large black holes, which leads us to conclude that bound states of quark and anti-quark pairs may not exist. We therefore study the existence of such bound states in the dual gauge theory, and show that any black hole configuration corresponds to the usual deconfined state of the gauge theory. We conclude that, due to the presence of the string cloud, the AdS space-time is deformed into a black hole space-time; depending on the string cloud density either the large or the small black hole background becomes stable, and in the gauge theory only the deconfined phase is stable. \section*{Acknowledgements} I would like to thank Shankhadeep Chakrabortty, Pronita Chettri, Sudipta Mukherji and Pei-Hung Yang for going through the draft and giving their valuable comments.
\section{Introduction \label{sec_intro}} There exist classes of stars that differ from the vast majority of known stars in that they are hydrogen-deficient. They are called extreme Helium stars (eHe), hydrogen-deficient Carbon stars (HdC) and R Coronae Borealis (RCB) stars. The last two are carbon-rich supergiant stars and therefore show strong spectroscopic similarity, but only RCBs are known to undergo unpredictable fast and large photometric declines (up to 9 mag over a few weeks) due to carbon clouds formed close to the line of sight that obscure the photosphere. Such particular events in such peculiar stars have made RCBs much-followed objects among many generations of astronomers. Nowadays, RCBs have become even more interesting as they are increasingly suspected to result from the merger of two white dwarfs (one CO and one He white dwarf), the so-called double degenerate (DD) scenario. The DD model has been strongly supported by the observations of an $^{18}$O overabundance in HdC and cool RCB stars \citep{2007ApJ...662.1220C,2010ApJ...714..144G}, and of surface abundance anomalies of a few elements, fluorine in particular \citep{2008ApJ...674.1068P,2011MNRAS.414.3599J}. RCBs are actually the favoured candidates to be the lower-mass counterpart of Supernovae type Ia objects in a DD scenario \citep{2008ASPC..391..335F,2008arXiv0811.4646D}; therefore detailed studies of their peculiar and disparate atmospheric compositions would help us to constrain simulations of merging events \citep{2011MNRAS.414.3599J}. RCB stars are rare: we currently know about 50 of them in the Milky Way \citep[see][and references therein]{2008A&A...481..673T}, which is only a factor of two higher than in the Magellanic Clouds, where 23 RCBs are known \citep{2001ApJ...554..298A,2004A&A...424..245T,2009A&A...501..985T}. The small number of known RCB stars and the bias due to their discovery from different surveys prevent us from having a clear picture of their true spatial distribution.
Different views are found in the literature. \citet{1985ApJS...58..661I} reported a scale height of $h\sim400$ pc assuming $\mathrm{M_{Bol} = -5}$ and concluded that RCBs are part of an old disk-like distribution. However, \citet{1998PASA...15..179C} noted that the Hipparcos velocity dispersion of RCB stars is similar to those of other cool Hydrogen-deficient carbon stars and extreme Helium stars, suggesting that RCB stars might have a bulge-like distribution. Recently, adding to the confusion, \citet{2008A&A...481..673T} found that the majority of Galactic RCB stars seem to be concentrated in the bulge with the surprising peculiarity of being distributed in a thin disk structure ($\mathrm{61<h^{RCB}_{bulge}<246}$ pc, 95\% c.l.). It is therefore necessary to increase the number of known RCB stars to constrain their spatial distribution and their age, but also to understand their past evolution. I note that with an RCB phase lifetime of about $10^5$ years, as predicted by theoretical evolution models \citep{2002MNRAS.333..121S}, and an estimated He-CO white dwarf merger birthrate between $\sim10^{-3}$ and $\sim5\times10^{-3}$ per year \citep{2001A&A...365..491N,2009ApJ...699.2026R}, we can expect between 100 and 500 RCB stars to exist in our Galaxy. RCB stars are very bright, $\mathrm{-5\leqslant M_V\leqslant-3.5}$ \citep[Fig. 3]{2009A&A...501..985T}, and can therefore easily be found anywhere in our Galaxy. However, numerous observations are necessary to look for their main signature, large declines in luminosity. Well-sampled light curves, with a limiting magnitude of $\sim18$ for a large fraction of the sky, will not be available until the arrival of the LSST telescope \footnote{LSST: Large Synoptic Survey Telescope \citep{2008SerAJ.176....1I}}. Fortunately, RCBs are also known to possess relatively warm and bright circumstellar shells, easily detectable in the mid-infrared.
These shells are made of amorphous carbon dust which translates to an almost featureless mid-infrared spectrum \citep{2011ApJ...739...37A}, unlike the spectra of classical old stars which have silicate and hydrogen rich dust shells. Therefore, one can imagine finding RCBs using only publicly available infrared catalogues. Such attempts have been made: \citet{2011A&A...529A.118T} found two new RCBs in the Galactic bulge using mainly mid-infrared Spitzer\footnote{Spitzer is a space telescope launched in 2003 to do infrared imaging and spectroscopy \citep{2004ApJS..154....1W}} GLIMPSE\footnote{GLIMPSE: Galactic Legacy Infrared Mid-Plane Survey Extraordinaire \citep{2009PASP..121..213C}} data and OGLE-III\footnote{OGLE-III: Optical Gravitational Lensing Experiment \citep{2003AcA....53..291U}} light curves, and \citet{2005ApJ...631L.147K} found cool objects in the Small Magellanic Cloud (SMC) with featureless spectra using the Spitzer spectrograph. Therefore it is interesting to look at how the first data release of the all-sky mid-infrared survey WISE (Wide-Field Infrared Survey Explorer) \citep{2010AJ....140.1868W} can improve our search for RCBs. This is the goal of the research described in this article. In Section~\ref{sec_wise}, I describe briefly the WISE survey and catalogue and explain the impact of the [4.6] band bias observed on bright objects using the spectral energy distribution of known RCB stars. I describe in Sect.~\ref{sec_ana} all the criteria used in the analysis to select a small subsample of the WISE catalogue, enriched in RCB stars. Section~\ref{sec_discu} is a discussion of the outcome of the analysis and of the characteristics of the newly formed catalogue.
\section{WISE catalogue and the known RCB stars \label{sec_wise}} The Wide-field Infrared Survey Explorer (WISE) mapped the entire sky during 2010 in 3.4, 4.6, 12 and 22 $\mu$m with, respectively, 6.1, 6.4, 6.5 and 12.0 arcsec in angular resolution and 0.08, 0.11, 1.0 and 6.0 mJy in point source sensitivities at 5 sigma \citep{2010AJ....140.1868W}. A few months later, a preliminary data release (WISE-PDR1) was delivered from the first 105 days of observation. The catalogue contains 257 million sources spread over 57\% of the sky. Fortunately for an RCB star search, the released sky area covers the entire Galactic bulge (see Figure~\ref{map_lb}), where most of the known RCBs are located. Indeed, 46 of 52 Galactic and 4 of 23 Magellanic RCBs are catalogued in WISE-PDR1, as are all 5 known HdCs and 4 DY Per type stars. All WISE magnitudes and the associated 1-sigma errors are listed in Table~\ref{tab.WISE} for each object. In Table~\ref{tab.AKARI}, I also list fluxes and 1-sigma errors of all bright Galactic RCBs observed by the AKARI satellite, which performed a mid-infrared all-sky survey in 2006 in 6 bands (centred at 9 and 18 $\mu$m with the IRC camera, and 65, 90, 140 and 160 $\mu$m with the FIS camera) \citep{2007PASJ...59S.369M}. Photometry of bright non-saturated sources in WISE-PDR1 has an accuracy of about 2\% in [3.4], [4.6] and [12], and about 3\% for [22] \citep[see for more details][the WISE preliminary release explanatory supplement document]{2011wise.rept....1C}. The sensitivity varies significantly due to the different depth of coverage ($\sim10$ epochs on average), the background emission, and the source confusion. Saturation begins to affect sources brighter than approximately 8.0, 6.7, 3.8 and -0.4 mag respectively in all four bands. Most of the known RCBs are brighter than these limits, therefore only the PSF-fitting magnitudes will be used in the study. Measurements of saturated sources are realised with the non-saturated pixels in the objects' wings.
Profile-fit photometry begins to fail for sources brighter than 1.0, 0.0, -2.0 and -6.0 mag, which are fortunately brighter than the brightest known RCB star. Photometric bias due to saturation remains small ($<0.1$ mag) for the [3.4], [12] and [22] WISE bands; however, an over-estimate in brightness is observed in the [4.6] band, up to nearly 1 mag for objects brighter than 3 mag above the saturation limit (which corresponds to $[4.6]\sim3.5$ mag). The impact of this bias on bright [4.6] objects will be discussed several times in this article, especially in Sect.~\ref{satbias}. \begin{figure} \centering \includegraphics[scale=0.4]{./Gal_lb_map_article.eps} \caption{Representation of the Galactic map, \textit{b} vs \textit{l}, with all objects selected by the analysis (black dots) and the known Galactic RCB stars (larger red dots). The black lines represent the approximate limits of the sky area released in the WISE-PDR1 catalogue. } \label{map_lb} \end{figure} \subsection{WISE variability and classification flags\label{WISEflags}} RCB shell brightness varies. \citet{1997MNRAS.285..317F} have shown that one can also observe in the L band (3.5 $\mu$m) the short-term oscillations that are observed in the optical ($\pm$0.5 mag), as the shell reflects light from the photosphere. However, the typical RCB large photometric declines are not observed in the mid-infrared, as only the photosphere gets obscured by clouds. The effect of these clouds on the shell luminosity is different. Clouds produced in any direction, not just in the line of sight, will obscure the light coming from the star and therefore make the shell luminosity gradually fainter. Mid-infrared monitoring could therefore be used as a good indicator of change in the dust formation rate around the photosphere of an RCB star. A clear example is presented by \citet{1997MNRAS.285..317F} with UW Cen, where a variation of up to 3 mag over $\sim1000$ days was observed.
This phenomenon is also well described by \citet{1999ApJ...517L.143C}. WISE-PDR1 gives a variability flag for each band, so one could expect a positive variability flag for some of the catalogued RCBs. However, none of them was flagged as variable, which is surprising, as pulsations with periods between 30 and 60 days and amplitudes of $\sim0.5$ mag should have been noticed. The variability flag was calculated only if an object was observed at least 11 times, and only 45\% of the RCBs passed this threshold (Z Umi holds the record with 34 measurements, followed by R CrB and Y Mus with 20 measurements each). Furthermore, successive observations of a WISE field were made on a short time scale, spanning at most a few days, which could be too short to reveal variability in RCBs. A classification (star/galaxy) flag is also available in WISE-PDR1, but it will not be used in the following analysis, as most of the catalogued RCBs were found to be inconsistent with a point source. \begin{table*} \caption{The optical and near-infrared maximum apparent magnitudes determined from the AAVSO, ASAS, DENIS and 2MASS datasets. No interstellar extinction correction was applied to the magnitudes.
\label{tab.RCBsMAG}} \medskip \centering \begin{tabular}{lccccccccc} \hline \hline Name & B & V & R & I & J & H & K & $\mathrm{E(B-V)}$ $^{a}$ & $\mathrm{A_V}$ $^{b}$ \\ \hline \object{DY Cen} & 13.4 & 13.1 & 12.9 & 12.6 & 12.180 & 11.974 & 11.563 & 0.37 & 1.15 \\ \object{ES Aql} & 12.75 & 11.68 & -- & -- & -- & -- & -- & 0.32 & 0.97 \\ \object{FH Sct} & 13.6 & 12.2 & 11.5 & 10.7 & 9.212 & 8.549 & 7.585 & 0.82 (0.59)$^c$ & 2.52 (1.80)$^c$\\ \object{GU Sgr} & 11.7 & 10.35 & 9.8 & 9.326 & 8.545 & 8.269 & 7.993 & 0.49 & 1.52 \\ \object{MV Sgr} & 13.6 & 13.3 & 13.15 & 12.7 & 11.054 & 9.856 & 8.873 & 0.38 & 1.17 \\ \object{R CrB} & 6.6 & 5.8 & -- & -- & 5.364 & 5.089 & 4.564 & 0.03 & 0.11 \\ \object{RS Tel} & 10.66 & 9.9 & 9.35 & 9.12 & 8.707 & 8.449 & 7.915 & 0.09 & 0.27 \\ \object{RT Nor} & 11.25 & 10.2 & 9.6 & 9.2 & 8.684 & 8.457 & 8.171 & 0.24 & 0.73 \\ \object{RZ Nor} & 11.15 & 10.4 & 9.5 & 9.433 & 8.4 & 7.9 & 7.4 & 0.57 (0.24)$^c$ & 1.76 (0.73)$^c$\\ \object{RY Sgr} & 7.0 & 6.3 & 6.05 & 5.75 & 5.577 & 5.423 & 5.139 & 0.09 & 0.27 \\ \object{S Aps} & 11.0 & 9.7 & 9.0 & 8.25 & 7.269 & 6.844 & 6.412 & 0.14 & 0.42 \\ \object{SU Tau} & 10.8 & 9.8 & -- & -- & 7.60 & 7.05 & 6.5 & 0.73 & 2.24 \\ \object{SV Sge} & 12.25 & 10.45 & -- & 8.4 & 6.951 & 6.434 & 5.899 & 0.88 & 2.71 \\ \object{U Aqr} & 12.2 & 11.2 & -- & -- & 9.562 & 9.283 & 8.961 & 0.04 & 0.11 \\ \object{UV Cas} & 12.0 & 10.6 & 10.3 & 9.1 & 7.823 & 7.455 & 7.181 & 1.26 (0.76)$^c$ & 3.87 (2.35)$^c$\\ \object{UW Cen} & 10.0 & 9.4 & 8.8 & 8.5 & 8.0 & 7.4 & 6.6 & 0.39 & 1.19 \\ \object{UX Ant} & 12.6 & 12.0 & 11.7 & 11.5 & 11.148 & 10.976 & 10.695 & 0.09 & 0.29 \\ \object{V1157 Sgr} & -- & 11.5 & -- & -- & 8.497 & 7.843 & 7.083 & 0.14 & 0.43 \\ \object{V1783 Sgr} & -- & 10.6 & -- & 9.074 & 7.837 & 7.367 & 6.838 & 0.58 & 1.78 \\ \object{V2552 Oph} & -- & 10.9 & -- & -- & 8.686 & 8.387 & 8.163 & 0.82 & 2.52 \\ \object{V348 Sgr} & -- & 11.6 & -- & 11.353 & 10.253 & 8.824 & 7.260 & 0.34 & 1.03 \\ \object{V3795 Sgr} & -- 
& 10.95 & -- & 10.098 & 9.133 & 8.754 & 8.257 & 0.83 (0.29)$^c$ & 2.56 (0.91)$^c$\\ \object{V4017 Sgr} & -- & -- & -- & -- & 9.260 & 8.853 & 8.395 & 0.26 & 0.80 \\ \object{V482 Cyg} & 12.3 & 10.70 & 9.85 & 9.1 & 8.090 & 7.733 & 7.474 & 1.13 (0.70)$^c$ & 3.49 (2.17)$^c$\\ \object{V517 Oph} & -- & 11.4 & -- & -- & 7.507 & 6.803 & 6.104 & 0.71 & 2.19 \\ \object{V739 Sgr} & -- & 12.90 & -- & -- & 9.467 & 8.851 & 8.105 & 0.39 & 1.21 \\ \object{V854 Cen} & 7.6 & 7.05 & 6.85 & 6.55 & 6.106 & 5.695 & 4.875 & 0.12 & 0.36 \\ \object{V CrA} & -- & 9.7 & -- & -- & 8.700 & 8.307 & 7.491 & 0.11 & 0.34 \\ \object{VZ Sgr} & 11.1 & 10.4 & 9.9 & 9.5 & 9.013 & 8.743 & 8.243 & 0.31 & 0.95 \\ \object{WX CrA} & -- & 10.6 & -- & -- & 7.9 & 7.45 & 6.5 & 0.16 & 0.49 \\ \object{XX Cam} & 7.8 & 7.3 & 6.9 & 6.4 & 5.812 & 5.635 & 5.506 & 1.30 (0.25)$^c$ & 4.00 (0.76)$^c$\\ \object{Y Mus} & -- & 10.25 & -- & 9.55 & 8.602 & 8.406 & 8.243 & 0.83 (0.14)$^c$ & 2.56 (0.42)$^c$\\ \object{Z Umi} & 12.0 & 10.9 & 10.1 & 9.25 & 8.411 & 7.922 & 7.309 & 0.09 & 0.29 \\ \hline \object{DY Per} & -- & 10.6 & -- & -- & 5.809 & 4.772 & 4.120 & 0.55 & 1.71 \\ \hline \hline \multicolumn{10}{l}{$^a$ From the \citet{1998ApJ...500..525S} map, using a 4-pixel interpolation. $^b$ Calculated using $\mathrm{A_V/E(B-V)=3.08}$ } \\ \multicolumn{10}{l}{$^c$ Values in parentheses are the reddening $\mathrm{E(B-V)}$ and the extinction $\mathrm{A_V}$ determined using the photosphere}\\ \multicolumn{10}{l}{ \hspace{2 mm} effective temperatures found by \citet{2000A&A...353..287A}. The related stars are close to the Galactic plane ($\mathrm{\vert b \vert\leqslant5}$ deg).} \\ \end{tabular} \end{table*} \subsection{Spectral energy distributions} The spectral energy distributions (SEDs) of all bright Galactic RCB and HdC stars were reconstructed using catalogues from 6 different surveys carried out over the last 10 years (see Figures \ref{SED1} to \ref{SED4}).
In the optical, it is important to measure the maximum brightness of an RCB, and only well-sampled monitoring on a long time scale can give a confident result. Therefore, light curves published by the AAVSO\footnote{American Association of Variable Star Observers, URL: http://www.aavso.org/vstar/vsots/0100.shtml} (4 bands: B, V, R and I) and ASAS-3\footnote{All Sky Automated Survey \citep{1997AcA....47..467P}, URL: http://www.astrouw.edu.pl/asas/?page=main} (V band) surveys have been analysed. The DENIS\footnote{DENIS: The Deep Near-Infrared Southern Sky Survey \citep{1994Ap&SS.217....3E}} I magnitude was also used if its epoch corresponded to a maximum brightness phase, as shown by the AAVSO or ASAS-3 light curves. Overall, the maximum brightness measured in each optical band is accurate to $\sim0.05$ mag (1-sigma standard error). In the near-infrared, the 2MASS\footnote{2MASS: Two Micron All Sky Survey \citep{2006AJ....131.1163S}} J, H and K magnitudes were used. Due to the pulsational variability of RCB star brightness, the accuracy of the maximum brightness measurement in these 3 bands is lower, estimated to be $\sim0.3$ mag. A carbon extinction correction was applied if the 2MASS epoch corresponded to a fading phase of an RCB. This was the case for only 7 stars: SU Tau, UW Cen, RZ Nor, V517 Oph, WX CrA, V348 Sgr and DY Per. For such corrections, I used the $\Delta V$ magnitude variation observed at that particular epoch and the absorption coefficients of pure amorphous carbon dust presented by \citet[Fig. 2]{1995A&A...293..463G}. All magnitudes used in this study are listed in Table~\ref{tab.RCBsMAG}, together with the interstellar reddening and extinction factors, $\mathrm{E(B-V)}$ and $\mathrm{A_V}$, obtained from the COBE/DIRBE\footnote{COBE: COsmic Background Explorer; DIRBE: Diffuse Infrared Background Experiment} map \citep{1998ApJ...500..525S} with a 4-pixel interpolation.
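As a worked illustration, the extinction correction just described can be sketched as follows. The conversion $\mathrm{A_V = 3.08\,E(B-V)}$ is the standard ratio used in Table~\ref{tab.RCBsMAG}; the per-band ratios $\mathrm{A_\lambda/A_V}$ below are approximate values for a standard extinction law and are assumptions for illustration only, not the exact coefficients applied in this work.

```python
# Minimal sketch of the interstellar dereddening step.
# The A_lambda/A_V ratios are illustrative near-infrared values for a
# standard R_V ~ 3.1 extinction law (assumed, not the exact curve used).
A_LAMBDA_OVER_AV = {"J": 0.28, "H": 0.18, "K": 0.11}

def deredden(mag, band, ebv, rv=3.08):
    """Return a magnitude corrected for interstellar extinction,
    given the Schlegel et al. (1998) reddening E(B-V)."""
    av = rv * ebv                        # total visual extinction A_V
    return mag - A_LAMBDA_OVER_AV[band] * av

# Example with the R CrB values of Table 3 (K = 4.564, E(B-V) = 0.03):
# A_V ~ 0.09 mag, so the K-band correction is only ~0.01 mag.
k_corrected = deredden(4.564, "K", 0.03)
```

For a heavily reddened star such as SV Sge ($\mathrm{E(B-V)=0.88}$), the same arithmetic gives $\mathrm{A_V\sim2.7}$ mag, matching the tabulated value.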
The 1-sigma error on E(B-V) is about 0.1 mag and increases with extinction, up to 0.3 mag at $\mathrm{E(B-V)\sim1.2}$ mag \citep{2008A&A...481..673T}. These extinction coefficients were applied to all optical and near-infrared magnitudes using the \citet{1992ApJ...395..130P} extinction curves, except for 7 stars located at low Galactic latitude ($\mathrm{\vert b\vert\leqslant5}$ deg), where \citet{1998ApJ...500..525S} note that the calculated reddening is uncertain and untrustworthy as no contaminating sources were removed\footnote{Indeed, for 5 of the 7 RCBs, the fitted photosphere effective temperatures were higher than 12000 K when the \citet{1998ApJ...500..525S} map was used to correct for interstellar reddening; this is about 5000 K higher than expected.}. For these stars, the reddening was calculated from the \citet{2000A&A...353..287A} photosphere effective temperature determined using high resolution spectra, and the $\mathrm{V-I}$ colour index of the stars before reddening correction. The SEDs were fitted using the program DUSTY \citep{2000ASPC..196...77N} with two (or three if necessary) black bodies, a shell made entirely of amorphous carbon grains and an MRN \citep{1977ApJ...217..425M} size distribution from 0.005 to 0.25 $\mu$m. The DUSTY models used have a photosphere temperature ranging from 2800 to 20000 K with a step of 100 K and a shell temperature ranging from 300 to 1200 K with a step of 50 K. The shell visual optical depth, $\mathrm{\tau_V}$, also varied, between $10^{-3}$ and 10 on a logarithmic scale with 40 increments. The results are presented in Figures \ref{SED1} to \ref{SED4} together with the fitted effective temperatures. \citep[As mentioned earlier, for 7 RCBs located at low Galactic latitude, the photosphere effective temperature was fixed to the temperature determined by][]{2000A&A...353..287A}. In the mid-infrared, only three WISE magnitudes were used: the [4.6] band was not considered.
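The model grid just described can be made concrete with a short sketch; this only constructs the parameter grid scanned with DUSTY and does not reproduce the radiative-transfer computation itself.

```python
# Sketch of the DUSTY model grid described in the text:
# photosphere temperatures 2800-20000 K in steps of 100 K,
# shell temperatures 300-1200 K in steps of 50 K,
# and 40 visual optical depths log-spaced between 1e-3 and 10.
t_star = list(range(2800, 20001, 100))      # photosphere temperatures (K)
t_shell = list(range(300, 1201, 50))        # shell temperatures (K)
tau_v = [10.0 ** (-3.0 + 4.0 * i / 39.0) for i in range(40)]  # optical depths

# Total number of (T_star, T_shell, tau_V) combinations scanned:
n_models = len(t_star) * len(t_shell) * len(tau_v)
```

The grid amounts to $173 \times 19 \times 40 = 131480$ parameter combinations per star, which explains why a coarse step (100 K and 50 K) was adopted.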
AKARI magnitudes were fitted if no WISE magnitudes were available (the case for UV Cas, U Aqr and V482 Cyg) or if a clear third cold black body was observed (as for DY Cen, MV Sgr, UW Cen and WX CrA). As described in Sect.~\ref{WISEflags}, RCB shell brightness varies. It is therefore not surprising to observe differences in luminosity between the WISE and AKARI data, which were taken 4 years apart. Such differences are visible in the SEDs presented, particularly for RT Nor, where a $\sim2$ mag variation is observed, but also for V1783 Sgr and V3795 Sgr. I note that the 2MASS K band can also be affected by shell brightness variation in the case of a hot shell (see the WX CrA SED for example). The K flux was not used in the SED fit if it clearly contradicted the mid-infrared fluxes. On average, the accuracy of the temperature and optical depth estimates is no better than $\sim10$\%, because of the combined effects of the interstellar dust extinction uncertainty (the main effect), the missing U band magnitude and the fact that I did not use a model of a carbon-rich photosphere\footnote{Carbon molecules CN and $C_2$ create strong absorption features in the optical and the near-infrared continuum.} \citep[see][]{1998A&A...330..659L}. I compared the photosphere effective temperatures found with the more reliable ones estimated by \citet{2000A&A...353..287A} for 17 RCBs using high resolution spectra \citep[temperatures from][are also indicated on the SED Figures]{2000A&A...353..287A}. Clearly, the SED method overestimates the photosphere effective temperature, by about 600 K on average.
On the other hand, for 4 of the 7 RCB stars (RZ Nor, Y Mus, UV Cas and V3795 Sgr) located at low Galactic latitude, the best SED model found with a photosphere temperature fixed to the one estimated by \citet{2000A&A...353..287A}, and with an interstellar reddening deduced from this temperature, does not fit the near-infrared magnitudes well, indicating that the photosphere temperatures could be underestimated. \begin{figure} \centering \includegraphics[scale=0.5]{./distrib_temp_tau_article.eps} \caption{From left to right, distribution of the fitted effective temperature of the RCBs' photosphere and shell, and the shell optical depth.} \label{DistribTemp} \end{figure} Nevertheless, with this simple exercise, I can confirm that RCB stars have a wide range of photosphere temperatures, with the vast majority having a temperature between $\sim4000$ and 8000 K (spectral type K to F), and a few exceptions being hotter than 10000 K (DY Cen, MV Sgr and V348 Sgr) \citep{2002AJ....123.3387D}. RCBs also have a wide range of shell temperatures, ranging from $\sim400$ to $\sim1000$ K. The typical RCB shell temperature is about 700 K, with a visual optical depth of $\mathrm{\tau_V\sim0.4}$. The distributions of these three parameters are presented in Figure~\ref{DistribTemp}. The SEDs of HdC stars, presented in Figure~\ref{SED4}, confirm that their spectral types are F-G, with an effective temperature ranging from 5500 to 7000 K. As no brightness declines have ever been recorded for HdC stars, it is not surprising that no mid-infrared excess is detected in their SEDs; interestingly, however, for one HdC star, HD 175893, one can fit a second blackbody corresponding to a shell with an effective temperature of $\sim500$ K. HD 175893 could therefore be an RCB star going through a long period of low activity, corresponding to a low, if any, dust production rate. Furthermore, I note that HD 175893 has passed all the RCB selection criteria described below in Sect.~\ref{sec_ana}.
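The two-blackbody decomposition underlying these fits can be illustrated with a toy model. This uses bare Planck functions only; the shell-to-photosphere emitting-area ratio below is an assumed, purely illustrative number, and the actual fits rely on DUSTY with an amorphous-carbon opacity, not bare blackbodies.

```python
import math

# Physical constants (SI)
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
AREA_RATIO = 1e4   # assumed shell/star emitting-area ratio (illustrative only)

def planck(lam_um, temp):
    """Planck spectral radiance B_lambda (SI units) at wavelength lam_um (microns)."""
    lam = lam_um * 1e-6
    return (2.0 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * KB * temp))

def star_plus_shell(lam_um, t_star=6000.0, t_shell=700.0):
    """Toy star + dust-shell SED for a typical RCB configuration."""
    return planck(lam_um, t_star) + AREA_RATIO * planck(lam_um, t_shell)
```

With these numbers, the 6000 K photosphere dominates in J (1.25 $\mu$m) while the 700 K shell, whose emission peaks near 4 $\mu$m, dominates at 12 $\mu$m, reproducing qualitatively the mid-infrared excess that the selection criteria below exploit.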
\subsubsection{WISE [4.6] band bias on bright objects\label{satbias}} An interesting output of this work comes from the difference between the [4.6] magnitude published by WISE and the one expected from the best SED model found. Figure~\ref{4.6ObsModel} summarises the situation. As mentioned earlier, the WISE team reported a bias for bright objects in the [4.6] band, whose brightness can be overestimated by almost 1 mag. I confirm this effect. One can see that for RCBs brighter than $\mathrm{[4.6]\sim5.0}$ mag, the overestimation of the [4.6] brightness increases gradually, reaching $\sim0.9$ mag at $\mathrm{[4.6]\sim3.0}$ mag, and then decreases back toward the nominal brightness for brighter objects. RCBs whose [4.6] magnitude is certainly affected (with $\mathrm{1.5\leqslant [4.6] \leqslant 5.0}$ mag) are indicated in Table~\ref{tab.WISE}. The WISE team has not yet found the source of this bias. This significant effect will be taken into account in the analysis described below. \begin{figure} \centering \includegraphics[scale=0.29]{./4.6_Obs_Model.eps} \caption{Diagram representing the measured WISE [4.6] magnitudes of known RCB stars versus the difference in magnitude between the WISE measurement and the best SED model found. For objects brighter than $\mathrm{[4.6]\sim5.0}$, the [4.6] WISE brightness is overestimated, reaching $\sim0.9$ mag at $\mathrm{[4.6]\sim3.0}$ mag. The red line is a representation of this bias.} \label{4.6ObsModel} \end{figure} \section{Analysis \label{sec_ana}} Using the WISE and 2MASS catalogues, a series of pragmatic selection criteria was developed and optimised to create a reasonably sized catalogue enriched in RCB stars located within 50 kpc. The criteria are described below. The detection efficiency of each cut was monitored using 52 known RCBs (see Table~\ref{tab.Selection}).
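The [4.6] photometric bias described in Sect.~\ref{satbias} can be approximated by a piecewise-linear correction. The exact shape below is an assumed reading of the red line in Figure~\ref{4.6ObsModel} (in particular, taking the bias to vanish again near $\mathrm{[4.6]\sim1.5}$ mag), not a published calibration.

```python
def bias_46(mag46):
    """Assumed piecewise-linear model of the [4.6] brightness overestimate
    (in mag) as a function of the measured [4.6] magnitude:
    zero for sources fainter than 5.0 mag, rising to ~0.9 mag at [4.6] ~ 3.0,
    then falling back to zero near [4.6] ~ 1.5 (assumed endpoint)."""
    if mag46 >= 5.0:
        return 0.0
    if mag46 >= 3.0:
        return 0.9 * (5.0 - mag46) / 2.0      # rising branch, 5.0 -> 3.0
    if mag46 >= 1.5:
        return 0.9 * (mag46 - 1.5) / 1.5      # falling branch, 3.0 -> 1.5
    return 0.0

def corrected_46(mag46):
    """Measured magnitudes are too bright, so the bias is added back."""
    return mag46 + bias_46(mag46)
```

A source measured at $\mathrm{[4.6]=3.0}$ mag would thus be corrected to $\sim3.9$ mag before applying the colour cuts of the analysis.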
Colour-colour cuts on all stars detected in all four WISE and three 2MASS broadbands are the main selection criteria used, as they are independent of the distance of the RCB. These selection criteria are as follows. \begin{enumerate} \item The first selection criterion is simply to keep all objects detected and listed in the WISE catalogue. This process is made difficult by the brightness of RCB stars (often saturated), but also by blending at low Galactic latitude, where many RCB stars are expected to be. 52 known RCBs are expected to have been catalogued in WISE-PDR1 from the sky area released (see Figure~\ref{map_lb}). However, two are missing, OGLE-GC-RCB-1 \& -2, which had already been discovered from their shell brightness and colours using the Spitzer-GLIMPSE catalogue \citep{2011A&A...529A.118T}. These 2 RCBs are located at low Galactic latitude ($b\sim3.0$ and $\sim-1.8$ deg) in a very crowded field. They are clearly bright and distinguishable in the WISE reference images, in all bands, but the WISE source finder did not succeed in retrieving them. This first criterion therefore has an important effect on the overall analysis and the completeness of the final outcome. However, this issue may be corrected in future WISE data releases. \item The objects selected have to have an entry in all four WISE bands. The main WISE criterion for accepting an object in the catalogue is that it should have been detected with a signal-to-noise ratio higher than 7 in at least 1 band and in a minimum of 4 frames. At high Galactic latitude, the detected source distribution remains relatively uniform up to $\mathrm{\left[3.4\right]<16.5}$, $\mathrm{\left[4.6\right]<16.0}$, $\mathrm{\left[12\right]<12.5}$ and $\mathrm{\left[22\right]<9.0}$ mag.
The fact that the faintest Magellanic RCB detected in WISE-PDR1, EROS2-LMC-RCB-5, has magnitudes of $\sim11.9$, $\sim11.4$, $\sim8.6$ and $\sim7.4$ mag, respectively, in the four WISE bands gives us strong confidence that all RCBs located within $\sim50$ kpc can be detected. RY Sgr is the only RCB star that did not pass this criterion. The problem is related to the other end of the source brightness distribution, as RY Sgr is the second brightest known RCB star in the optical, after R CrB, but the brightest in the mid-infrared. RY Sgr failed to have an entry in the $\left[12\right]$ band. The impact on the overall detection efficiency is minimal, as the vast majority of the RCBs we are looking for are fainter. \begin{figure} \centering \includegraphics[scale=0.5]{./plot_fig1_article.eps} \caption{Diagram of [4.6]-[12] vs [12]-[22]. The black dots represent 3\% of all objects that have been catalogued in all the WISE and 2MASS bands. Larger green dots represent bright known Galactic RCB stars (the two outliers are discussed in the text); red dots correspond to the Galactic RCBs found toward the bulge (with a majority located inside the bulge) \citep{2008A&A...481..673T, 2005AJ....130.2293Z}; blue dots are 4 RCB stars located in the south-eastern area of the Large Magellanic Cloud. The purple stars correspond to the 5 known HdC stars and the circles to DY Per type stars, with a colour coding identical to that of the RCB stars. The red lines represent the selection cuts used in the analysis.} \label{M4M12M22big} \end{figure} \item All selected objects have to have an entry in all J, H and K bands of the 2MASS catalogue. All 49 remaining known RCBs passed this selection. The 2MASS point source magnitude limits are $\sim15.8$, $\sim15.1$ and $\sim14.3$ mag, respectively, in J, H and K.
With $\mathrm{M_K\sim-6}$, an RCB star located within $\sim50$ kpc should have an entry in the 2MASS catalogue unless there is very strong interstellar extinction on the line of sight ($\mathrm{A^{inter.}_V\gtrsim 15}$ mag). Also, the maximum extinction due to a cloud of carbon soot is known to be $\Delta\sim9$ mag in V \citep{1996PASP..108..225C}, which corresponds to $\sim3$ mag in J and $\sim1.6$ mag in K. Even in that extreme scenario, an RCB star located within $\sim50$ kpc should have been detected by the 2MASS survey under reasonable interstellar extinction conditions ($\mathrm{A^{inter.}_V\lesssim4}$ mag). \item The first colour-colour selection cut was applied to the WISE catalogue to select a reasonable number of objects to work with: \begin{equation} 0.75<[4.6]-[12]<3.0 \hspace{3 mm}\&\hspace{3 mm} [12]-[22]<1.3. \label{eq.cut1} \end{equation} The selection is illustrated in Figure~\ref{M4M12M22big}. One can see that this selection is very effective, as it selects all objects with a clear shell signal and eliminates most of the galaxies located in the top-right corner of the diagram. It keeps only 5\% of all objects catalogued, but rejects only 2 of the RCB stars used as reference. These 2 RCBs are DY Cen and MV Sgr. They do not resemble the majority of known RCB stars, as they are hot ($\mathrm{T_{eff}\sim12000}$ K, see Figure~\ref{SED1}) and surrounded by multiple shells. Furthermore, and more interestingly, RCB stars are located in an underpopulated area of the diagram, well separated from the vast majority of the cooler AGB stars. This selection is refined in criterion 6, where the impact of the bias that affects bright objects in [4.6] is also discussed. \item RCB stars are known to present a near-infrared excess compared to classical supergiant stars \citep[see][Fig.4]{2009A&A...501..985T}, due to the warm circumstellar shell that contributes significantly to the near-infrared fluxes.
A selection has therefore been applied in the $\mathrm{J-H}$ vs. $\mathrm{H-K}$ diagram. Because the interstellar extinction affects these magnitudes significantly, cuts were developed for the 3 following Galactic latitude ranges, A, B and C: A: $\mathrm{\vert b\vert>2}$ deg; B: $\mathrm{1\leqslant \vert b\vert < 2}$ deg (which corresponds to $\mathrm{\left\langle A_V \right\rangle \sim2}$ mag); and C: $\mathrm{\vert b\vert< 1}$ deg (with $\mathrm{\left\langle A_V \right\rangle \sim5}$ mag). Three selection cuts were applied and are described below. The parameters for each Galactic latitude range are given in square brackets as [A,B,C]: \begin{eqnarray} && \hspace{5 mm}(H-K)>[0.2,0.32,0.5] \nonumber \\ && \mathrm{if}\hspace{1 mm}(H-K)\leqslant[0.8,0.92,1.1] : \nonumber \\ && \hspace{5 mm}(J-H)<(H-K)+[0.2,0.28,0.4] \nonumber \\ && \mathrm{if}\hspace{1 mm}(H-K)>[0.8,0.92,1.1] : \nonumber \\ && \hspace{5 mm}(J-H)<(5/3)(H-K)-1/3 \label{eq.cut2} \end{eqnarray} The interstellar extinction vector corresponds to $\mathrm{(J-H)/(H-K)\sim5/3}$; therefore the last selection cut is identical for all Galactic latitude ranges. These 3 selection cuts are represented in Figure~\ref{JHK_M4M12M22}. Only 2\% of the remaining objects passed this criterion, and only two RCB stars, Y Mus and XX Cam, were eliminated. \begin{figure} \centering \includegraphics[scale=0.5]{./plot_fig4_article.eps} \caption{Colour-magnitude diagram K vs $\mathrm{ J-K}$. Black points represent the remaining objects after application of selection criterion 6. The large dots correspond to RCB or HdC stars with the same colour coding as in Figure~\ref{M4M12M22big}. Extinction due to amorphous carbon grains (C) and interstellar dust (I) is represented with two vectors.
The red lines represent selection cut number 7.} \label{K_JK} \end{figure} \item Selection criterion 4 is readjusted to reject most of the AGB stars; the applied cut is illustrated in Figure~\ref{JHK_M4M12M22}: \begin{equation} [4.6]-[12] > 1.8([12]-[22]) + 0.03. \label{eq.cut3} \end{equation} As discussed in Sect.~\ref{satbias}, the brightness of bright objects in [4.6] is overestimated. RCBs affected by this bias are indicated in Table~\ref{tab.WISE}, and the estimated corrections are represented in Figure~\ref{JHK_M4M12M22} with black vertical lines. After correction, one can see that none of the RCBs reached the upper limit set at 3.0 mag on $\mathrm{[4.6]-[12]}$. The impact of the [4.6] bias on this selection criterion is therefore very limited. On the other hand, two of the three RCBs that are eliminated by the present selection would have been selected if the bias did not exist. As many of the RCBs we are looking for can be expected to be relatively bright in the [4.6] band, all 3 known rejected RCBs will nevertheless be counted in the detection efficiency estimation. \item The present criterion targets the RCBs' brightness and therefore their distance modulus. A selection was applied in the colour-magnitude diagram K vs $\mathrm{J-K}$. The $\mathrm{J-K}$ colour of RCBs results from three effects: the interstellar reddening, the warm circumstellar shell contribution to the near-infrared fluxes and, for some RCBs, the reddening due to clouds of carbon soot during an extinction event (at maximum $\mathrm{\Delta(J-K)_{cloud}\sim 1.4}$ mag). The RCBs' $\mathrm{J-K}$ colour is therefore not simple, and the slopes of the selection limits were chosen with care. They are illustrated in Figure~\ref{K_JK}. The goal of this analysis is to create a catalogue enriched with RCB stars located within 50 kpc; therefore all RCBs discovered in the Large Magellanic Cloud (LMC) were represented in this diagram (they all have an entry in the 2MASS catalogue).
They were used to estimate the faint end of the brightness cut. Only 2 of the 19 LMC RCBs, EROS2-LMC-RCB-2 \& -3, were not selected. \begin{equation} (5/8)(J-K)+4 \hspace{1 mm}< K < \hspace{1 mm}(5/8)(J-K)+12. \label{eq.cut4} \end{equation} At the bright end of the selection, 2 of the brightest known RCBs, R CrB and V854 Cen, are located close to the upper limit. Very bright RCBs are not the primary target of this analysis, as very few are expected; therefore only a small margin was used in this choice. \begin{figure*} \centering \includegraphics[scale=0.45]{./plot_fig2_lines_article.eps} \caption{In both figures, all points represent objects that have passed the first 4 selection criteria. They are colour coded to represent their Galactic latitude \textit{b} and therefore the interstellar reddening that affects their near-infrared J, H and K magnitudes: black points correspond to objects with $\textit{b}>2$ deg, light blue ones are objects with $\mathrm{1\leqslant\textit{b}\leqslant2}$ deg (corresponding to an average extinction of $\mathrm{\left\langle A_V \right\rangle \sim2}$ mag), and purple ones are objects located close to the Galactic plane, with $\mathrm{\textit{b}<1}$ deg (with $\mathrm{\left\langle A_V \right\rangle \sim5}$ mag). The larger dots and stars represent known RCB and HdC stars with a colour coding identical to Figure~\ref{M4M12M22big}. The dashed and solid red lines represent the selection cuts number 5 (left) and 6 (right) used in the analysis. Left: $\mathrm{J-H}$ vs $\mathrm{H-K}$ diagram. Right: $\mathrm{[4.6]-[12]}$ vs $\mathrm{[12]-[22]}$ diagram. The black vertical lines correspond to the brightness correction that needs to be applied to bright RCBs in the [4.6] band due to the photometric bias (see Sect.~\ref{satbias}). } \label{JHK_M4M12M22} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.45]{./plot_fig5-6_article.eps} \caption{Both diagrams represent $\mathrm{J-K}$ versus $\mathrm{J-[12]}$.
Left: the black dots represent the remaining objects after application of selection criterion 7. The red line represents selection cut number 8. The larger dots correspond to RCB or HdC stars that have also passed the first 7 criteria, with the same colour coding as in Figure~\ref{M4M12M22big}. The vector represents the interstellar reddening. The black curve corresponds to the combination of blackbodies consisting of a 6000 K star and a 700 K shell made of amorphous dust grains, in various proportions ranging from all ``star'' to all ``shell''. Right: the symbols represent the classification found in SIMBAD. About 15\% of the objects selected with the first seven criteria have an entry in the SIMBAD database within a matching radius of 5 arcsec. The dashed line represents selection cut number 8.} \label{JK_J12} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.45]{./I_DENIS_USNO_article.eps} \caption{Left: distribution of the I magnitude for 711 of the 1602 objects selected by the analysis. Only 711 objects have a valid I band magnitude in the DENIS and/or USNO-B1 catalogues. The dotted lines represent the same distribution for known RCB stars. Right: $\mathrm{I_{DENIS}}$ vs $\mathrm{I_{DENIS}-I_{USNO-B1}}$ for the 285 of the 1602 objects that have a valid I magnitude in both catalogues. The difference in magnitude between the two catalogues spans about 9 magnitudes, showing that many selected objects undergo large photometric variations. } \label{Denis_USNO_I} \end{figure*} \item As mentioned earlier, the RCBs' warm circumstellar shell can contribute significantly to the near-infrared fluxes. This effect can be understood in a $\mathrm{J-K}$ vs $\mathrm{J-[12]}$ colour-colour diagram. Indeed, a correlation between both colours is observed for RCBs (see Fig.~\ref{JK_J12}). When the RCB shell becomes brighter, its flux contribution to the K band increases and, as a consequence, $\mathrm{J-K}$ becomes redder.
This effect combines with the interstellar reddening. Interestingly, it is also relatively simple in that diagram to reject the remaining main contaminants, the Mira type stars. Indeed, they are located in an isolated area of the diagram. The applied cuts were designed to optimise the rejection of Miras without impacting the RCB detection efficiency too much. The second selection cut follows the interstellar reddening vector: \begin{equation} J-K\leqslant2.25 \hspace{2 mm} \mathrm{if}\hspace{2 mm} (J-[12])<6, \end{equation} \begin{equation} J-K\leqslant0.61 (J-[12])-1.43 \hspace{2 mm}\mathrm{if}\hspace{2 mm} (J-[12])\geqslant6. \label{eq.cut5} \end{equation} Only 2 known RCB stars, out of the 42 that passed all the preceding criteria, were rejected by the present selection: EROS2-RCB-CG-12 and MACHO-308-38099.66. It is interesting to note that the former has a light curve that presents large oscillations and may therefore be a Mira type star instead of an RCB \citep[see][Fig.9]{2008A&A...481..673T}. \end{enumerate} \begin{table*} \caption{Number of selected objects after each selection criterion.
\label{tab.Selection}} \medskip \centering \begin{tabular}{lrcl} \hline \hline Selection criterion & Number of & Number of & RCB stars \\ & WISE objects & known RCBs & eliminated\\ \hline 0: Located in WISE-PDR1 sky area & & 52 & \\ 1: Catalogued by WISE & 257310278 & 50 & OGLE-GC-RCB-1 \& -2\\ \hspace{3 mm} Catalogued in the [22] band & 12192351 & 50 & \\ 2: Catalogued in all four WISE bands & 9636678 & 49 & RY Sgr\\ 3: Catalogued in all three 2MASS bands & 6832051 & 49 & \\ 4: Cut on ($\mathrm{[4.6]-[12]}$ vs $\mathrm{[12]-[22]}$) & 337359 & 47 & DY Cen and MV Sgr \\ 5: Cut on ($\mathrm{J-H}$ vs $\mathrm{H-K}$) & 9660 & 45 & Y Mus and XX Cam\\ 6: Cut on ($\mathrm{[4.6]-[12]}$ vs $\mathrm{[12]-[22]}$) & 4146 & 42 & EROS2-RCB-CG-3 \& -5 and SV Sge\\ 7: Cut on (K vs $\mathrm{J-K}$) & 3058 & 42 & \\ 8: Cut on ($\mathrm{J-K}$ vs $\mathrm{J-[12]}$) & 1643 & 40 & EROS2-RCB-CG-12 and MACHO-308-38099.66\\ \hline \hspace{2 mm}Final (without the known RCBs + HD 175893) & 1602 & - & \\ \hline \end{tabular} \end{table*} \section{Discussion \label{sec_discu}} 1602 objects passed the pragmatic selection criteria just enumerated. Of the 52 already known RCBs that are located inside the sky area covered by the WISE survey preliminary data release and used as reference, 40 passed all 8 criteria, which corresponds to a detection efficiency of about 77\%. For the detection efficiency calculation, it would have been preferable to use a Monte-Carlo simulation of RCBs with a range of photosphere and shell temperatures, but our knowledge of the distributions of these parameters is very limited. It is worth noting that the RCB sample used as reference is biased toward bright RCBs, which has a negative impact on the detection efficiency (see criteria 2 and 6). Bright RCBs are not the primary target of this analysis, as very few of them, if any, are expected to remain undiscovered.
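For reference, the colour cuts applied in criteria 4 to 8 above can be restated compactly as boolean predicates. This is an illustrative sketch only: the magnitudes are assumed to be already extracted (and, where needed, bias-corrected) from the WISE and 2MASS catalogues, and the latitude key follows the [A,B,C] ranges defined in criterion 5.

```python
def cut4_wise(c46_12, c12_22):
    """Criterion 4: 0.75 < [4.6]-[12] < 3.0 and [12]-[22] < 1.3."""
    return 0.75 < c46_12 < 3.0 and c12_22 < 1.3

def cut5_nir(jh, hk, lat_range):
    """Criterion 5: J-H vs H-K cuts, with [A,B,C] Galactic-latitude bins."""
    p1 = {"A": 0.2, "B": 0.32, "C": 0.5}[lat_range]
    p2 = {"A": 0.8, "B": 0.92, "C": 1.1}[lat_range]
    p3 = {"A": 0.2, "B": 0.28, "C": 0.4}[lat_range]
    if hk <= p1:
        return False
    if hk <= p2:
        return jh < hk + p3
    return jh < (5.0 / 3.0) * hk - 1.0 / 3.0   # reddening-vector slope

def cut6_wise(c46_12, c12_22):
    """Criterion 6: [4.6]-[12] > 1.8*([12]-[22]) + 0.03."""
    return c46_12 > 1.8 * c12_22 + 0.03

def cut7_kjk(k, jk):
    """Criterion 7: (5/8)(J-K) + 4 < K < (5/8)(J-K) + 12."""
    return (5.0 / 8.0) * jk + 4.0 < k < (5.0 / 8.0) * jk + 12.0

def cut8_jk_j12(jk, j12):
    """Criterion 8: Mira rejection in the J-K vs J-[12] diagram."""
    if j12 < 6.0:
        return jk <= 2.25
    return jk <= 0.61 * j12 - 1.43
```

Note that the two branches of criterion 5 join continuously at $\mathrm{H-K}$ equal to the second parameter, since the last cut follows the interstellar reddening vector of slope 5/3.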
At the other end, the selection is biased against RCB stars with faint shells and RCB stars in crowded environments (see criteria 1 and 5). In the latter case, a correction of the detection algorithm in a future WISE data release may resolve the issue. About 70\% of the objects selected are located towards the Galactic bulge (see Figure~\ref{map_lb}), at $\pm$5 deg from the Galactic plane, and 85 ($\sim5$\%) are located in the Large Magellanic Cloud, whose sky area was only partially covered in WISE-PDR1. The final catalogue was cross-matched with the DENIS and USNO-B1\footnote{USNO: United States Naval Observatory \citep{2003AJ....125..984M}} catalogues, using a strict matching radius of 1 arcsec. Only 895 objects have an entry in the DENIS catalogue and 538 in the USNO-B1 one. DENIS surveyed the entire southern sky in the I, J and K bands, while USNO-B1 covers the entire sky in B, R and I (2 epochs in B and R). Overall, using both catalogues, 711 objects have a valid I band magnitude, but surprisingly only 285 have a valid entry in both catalogues. The I band magnitude distribution is presented in Figure~\ref{Denis_USNO_I}, as well as the difference in magnitude between $\mathrm{I_{DENIS}}$ and $\mathrm{I_{USNO-B1}}$. With an absolute magnitude of $\mathrm{M_I\sim-5}$ mag \citep{2009A&A...501..985T}, an RCB star located at 50 kpc would have an apparent magnitude of $I\sim13.5$ mag during a maximum brightness phase, which is brighter than the median ($\sim$14) of the distribution presented. The $\mathrm{I_{DENIS}}$ and $\mathrm{I_{USNO-B1}}$ magnitudes should be used to prioritise further follow-up. During large photometric decline phases, some RCB stars could have been observed at fainter magnitudes, down to the magnitude limit ($\mathrm{I_{DENIS,lim}\sim18.5}$ mag, or $\sim17$ mag in crowded areas): Figure~\ref{Denis_USNO_I} (right) shows that some of the selected objects present known photometric variations of up to 6 mag between the two survey epochs.
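The apparent-magnitude estimates quoted above follow directly from the distance modulus, $m = M + 5\log_{10}(d/10\,\mathrm{pc})$; a minimal worked check (extinction neglected):

```python
import math

def apparent_mag(abs_mag, d_pc):
    """Apparent magnitude from the distance modulus (no extinction)."""
    return abs_mag + 5.0 * math.log10(d_pc / 10.0)

# An RCB at 50 kpc (distance modulus ~ 18.5):
i_50kpc = apparent_mag(-5.0, 50000.0)   # ~13.5 mag, as quoted in the text
k_50kpc = apparent_mag(-6.0, 50000.0)   # ~12.5 mag, within the 2MASS K limit (~14.3)
```

This confirms that, at maximum brightness, an RCB star within 50 kpc sits comfortably above the 2MASS K-band limit, with $\sim1.8$ mag of margin for cloud or interstellar extinction.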
The fact that only $\sim33$\% of the 1602 selected objects have an entry in the USNO-B1 catalogue indicates that most of those 1602 objects are faint, with a visual magnitude fainter than 20. Interstellar extinction certainly influences this low percentage, but it is not the whole explanation, as half of these faint objects are located 2 degrees or more from the Galactic plane ($\mathrm{\mid\textit{b}\mid >2}$ deg), where the extinction $\mathrm{A_V}$ is lower than $\sim2$ mag. Under such extinction, an RCB star at maximum brightness and located within 50 kpc is still detectable; therefore, these objects could be intrinsically optically faint and thus contaminants of our search. However, they (or a fraction of them) may also be RCB stars that are heavily dust-enshrouded. If no long phase of high dust-production rate is expected during the lifetime of an RCB star, then these objects should be considered weak candidates. However, it is worth noting that the RCB star V854 Cen remained 7 mag below its maximum brightness for about 50 years \citep{1986IAUC.4245....1G}. I also note that a third of the 1602 selected objects have a WISE $\mathrm{[4.6]}$ magnitude brighter than 5 mag. The $\mathrm{[4.6]}$ brightness of these objects is therefore overestimated (see Sect.~\ref{satbias}). The selected list of objects was also cross-matched with the SIMBAD database with a matching radius of 5 arcsec. Figure~\ref{JK_J12}, right side, summarises the situation. 569 of the 1602 objects were found to have a classification, but 307 of those are classified only as IRAS sources and 30 others only as stars, without further information. Of the 232 remaining, interestingly, 62 are classified as Carbon stars; these will need to be examined closely in future photometric and spectroscopic follow-ups. Also, 46 objects are classified as Variable, Pulsating, Semi-regular or RV Tauri stars.
RCB stars are known to present periodic pulsating variability with an amplitude of $\sim0.5$ mag; therefore these stars will be interesting to monitor and study more closely. Although many of the objects classified as Mira or OH/IR stars were rejected by selection criterion 8, 49 remain in the final sample. Most of them are certainly genuine, particularly at $\mathrm{J-[12]>9}$ mag, but it is also worth indicating that a few Miras are located in the $\mathrm{J-K}$ vs $\mathrm{J-[12]}$ diagram at a position not expected for this type of object and can therefore be suspected of misclassification. There are 14 such objects with $\mathrm{J-K<3}$ mag. A visual inspection of the ASAS-3 lightcurves of these objects has shown that 2 of them present Mira-type photometric variation (Id 45 and 1197), 2 remain stable (Id 623 and 1409), but, more interestingly, 3 present clear photometric variations typical of RCB stars: Id 683, 793 and 917, which are named respectively \object{V653 Sco}, \object{IO Nor} and \object{V581 CrA}. The remaining 7 objects have not been catalogued and followed up by the ASAS-3 survey. I note that 26 objects are classified as galaxies or clusters of galaxies. These objects are faint, with $K>12$ mag, indicating that galaxies could be a major contaminant below this limit (270 of the 1602 selected objects are fainter than $K=12$ mag). Finally, 17 objects are classified as Emission stars; such objects could be contaminants in our search for RCB stars, but it is interesting to note that hot RCBs, such as DY Cen, MV Sgr and V348 Sgr, also present an emission-type spectrum \citep{2002AJ....123.3387D}. Furthermore, during faint phases, RCB spectra generally present many emission lines \citep{1996PASP..108..225C}. No reference sample of stars exists with which to estimate the fraction of RCB stars potentially present in the catalogue.
Furthermore, little is known about the different phases during the lifetime of an RCB star and, as mentioned earlier, a significant fraction of RCB stars could be heavily dust-enshrouded and therefore faint. However, one can use the objects that have been classified in the SIMBAD database to obtain a crude estimate of this fraction: $\sim15$\% of these objects are RCBs, which would then correspond to about 240 new RCBs. The RCB enriched catalogue is available from the following URL: http://www.mso.anu.edu.au/$\sim$tisseran/RCB/ and will also be available through the VizieR\footnote{URL: http://vizier.u-strasbg.fr/viz-bin/VizieR} catalogue service. A short version of it is given as an example in Table~\ref{tab.short_version}. The equatorial and Galactic coordinates, as well as the four WISE, three 2MASS and three DENIS magnitudes and their 1-sigma errors, are listed for all 1602 selected objects. The five USNO-B1 magnitudes are also listed, but not the individual measurement errors, as these were not provided in the original catalogue \citep[see][for an estimate of the photometric accuracy]{2003AJ....125..984M}. If a magnitude was not available, its value was set to -99. Also, if more than one epoch was available in the DENIS or USNO-B1 catalogues for a particular object, only the epoch with the brightest magnitudes was kept. The last column of the catalogue gives the SIMBAD classification\footnote{URL of the object types in SIMBAD (classification version: 12-Jul-2011): http://simbad.u-strasbg.fr/simbad/sim-display?data=otypes}, as of July 2011, found using a 5 arcsec matching radius. An underscore character was assigned to the objects that had no classification in SIMBAD. \section{Conclusion\label{sec_concl}} Using both the 2MASS and WISE preliminary data release catalogues, I have created a catalogue enriched with Galactic RCB stars that lie within a distance of $\sim50$ kpc.
The criteria used to select a subsample of objects with mid- and near-infrared properties similar to those of RCBs were mostly based on colour-colour diagrams. This catalogue contains only 1602 entries, with $\sim70$\% of the selected objects being located toward the Galactic bulge. A reference sample of 52 known RCB stars was used to monitor the detection efficiency of the selection. About 77\% of them were recovered. Such a high detection efficiency gives strong support to the RCB content of this catalogue. Each selected object will now need to be followed up spectroscopically to reveal its true nature. It is encouraging to observe that 3 of these 1602 selected objects, Id 683, 793 and 917 (resp. \object{V653 Sco}, \object{IO Nor} and \object{V581 CrA}), were found to present ASAS-3 lightcurves with brightness variations typical of RCB stars, but are misclassified in the SIMBAD database as Miras. Also, \citet{2011PASP..123.1149K} has recently discovered that NSV 11154 is a new RCB star. This star is listed in the catalogue under Id 1240. An analysis of the spectral energy distributions of known RCB stars confirms that RCB effective temperatures range mostly between 4000 and 8000 K (spectral type K to F), with a few exceptions being hotter than 10000 K. The RCB shell effective temperatures range between 400 and 1000 K. The typical shell temperature is $\sim700$ K, with a visual optical depth of $\tau_V\sim0.4$. HD 175893 is the only HdC star (spectral type G to F) that presents an infrared excess indicating the existence of a warm circumstellar shell. Furthermore, HD 175893 has passed all the selection criteria used to create the RCB enriched catalogue; therefore, HD 175893 should be considered an RCB star in a phase of low dust-production activity. \begin{figure*} \centering \includegraphics[scale=0.9]{./SED_Stars_KnownRCBs_Model_1.eps} \caption{Spectral energy distributions of known bright Galactic RCB stars, normalised to flux in V.
Red dots represent fluxes in the optical (B, V, R and I) and the near-infrared (J, H and K) obtained from AAVSO and the ASAS, DENIS and 2MASS surveys. The blue and black dots represent mid-infrared fluxes from the WISE and AKARI surveys respectively. The red line is the best fit found with DUSTY models (see text for more details); the related effective temperatures found ($\sim10$\% level accuracy) are listed at the bottom left corner in the following order, from top to bottom: photosphere, first and second circumstellar shell. If the photosphere effective temperature is between square brackets, its value was fixed during the fit and corresponds to the photosphere effective temperature determined by \citet{2000A&A...353..287A}, indicated below the name for 17 RCB stars. The values in brackets on the right side of the shell temperatures correspond to the visual optical depths found. The broken black line represents a simple blackbody function with the photosphere temperature.} \label{SED1} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.9]{./SED_Stars_KnownRCBs_Model_2.eps} \caption{Spectral energy distributions of known bright Galactic RCB stars, normalised to flux in V. Same caption as Figure~\ref{SED1}.} \label{SED2} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.9]{./SED_Stars_KnownRCBs_Model_3.eps} \caption{Spectral energy distributions of known bright Galactic RCB stars, normalised to flux in V. Same caption as Figure~\ref{SED1}.} \label{SED3} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.8]{./SED_Stars_HdC.eps} \caption{Spectral energy distributions of the 5 known HdC stars, normalised to flux in V. Red dots represent fluxes in the optical and the near-infrared obtained from AAVSO and the ASAS, DENIS and 2MASS surveys. The blue and black dots represent the mid-infrared fluxes from the WISE and AKARI surveys respectively. The red line is simply a spline function that connects the different fluxes.
The broken black line represents a simple blackbody function with the temperature of the photosphere. In the case of HD175893, the dotted line represents the sum of two blackbodies, the photosphere and the shell. The black bodies' effective temperatures of the photosphere (top) and the circumstellar shell (bottom) are listed at the bottom left corner.} \label{SED4} \end{figure*} \begin{table*} \caption{WISE magnitudes of catalogued RCB, HdC and DY Per stars. \label{tab.WISE}} \medskip \centering \begin{tabular}{lcccccccc} \hline \hline Name & [3.0] & $\sigma_{[3.0]}$ & [4.6] & $\sigma_{[4.6]}$ & [12] & $\sigma_{[12]}$ & [22] & $\sigma_{[22]}$ \\ \hline \hline & & & Galactic RCB stars & & & & &\\ \hline \object{DY Cen} & 10.438 & 0.025 & 9.176 & 0.021 & 4.106 & 0.014 & 2.398 & 0.013 \\ \object{ES Aql} & 5.954 & 0.044 & 4.660$^b$ & 0.035 & 3.344 & 0.020 & 2.837 & 0.017 \\ \object{FH Sct} & 6.076 & 0.044 & 4.811$^b$ & 0.031 & 3.589 & 0.019 & 3.004 & 0.021 \\ \object{GU Sgr} & 5.993 & 0.051 & 4.289$^b$ & 0.039 & 3.021 & 0.019 & 2.624 & 0.016 \\ \object{MV Sgr} & 8.144 & 0.024 & 7.400 & 0.021 & 4.695 & 0.019 & 2.234 & 0.017 \\ \object{R CrB} & 3.505 & 0.067 & 1.920$^b$ & 0.002 & -0.273 & 0.003 & -0.833 & 0.019 \\ \object{RS Tel} & 7.075 & 0.031 & 5.672 & 0.027 & 3.676 & 0.018 & 3.057 & 0.021 \\ \object{RT Nor} & 5.877 & 0.044 & 3.987$^b$ & 0.035 & 2.856 & 0.020 & 2.628 & 0.018 \\ \object{RZ Nor} & 6.687 & 0.030 & 4.820$^b$ & 0.031 & 2.836 & 0.021 & 2.087 & 0.018 \\ \object{RY Sgr} & 2.615 & 0.004 & 1.434$^b$ & 0.001 & -- & -- & -0.564 & 0.012 \\ \object{S Aps} & 6.064 & 0.044 & 5.498 & 0.027 & 3.757 & 0.018 & 2.854 & 0.018 \\ \object{SU Tau} & 4.928 & 0.055 & 3.041$^b$ & 0.028 & 1.484 & 0.008 & 0.987 & 0.013 \\ \object{SV Sge} & 5.600 & 0.048 & 5.025 & 0.033 & 3.322 & 0.019 & 2.191 & 0.015 \\ \object{UW Cen} & 5.705 & 0.029 & 3.322$^b$ & 0.016 & 1.442 & 0.006 & 0.655 & 0.012 \\ \object{V1157 Sgr} & 6.291 & 0.038 & 4.582$^b$ & 0.031 & 3.118 & 0.022 & 2.549 & 0.020 \\ 
\object{V1783 Sgr} & 6.128 & 0.041 & 5.082 & 0.027 & 3.133 & 0.021 & 2.333 & 0.018 \\ \object{V2552 Oph} & 7.360 & 0.027 & 6.224 & 0.023 & 4.760 & 0.019 & 4.285 & 0.028 \\ \object{V348 Sgr} & 5.406 & 0.020 & 3.042$^b$ & 0.009 & 1.838 & 0.013 & 1.361 & 0.015 \\ \object{V3795 Sgr} & 7.153 & 0.029 & 5.439 & 0.025 & 3.045 & 0.020 & 2.410 & 0.017 \\ \object{V4017 Sgr} & 7.276 & 0.028 & 6.345 & 0.023 & 4.653 & 0.018 & 3.941 & 0.021 \\ \object{V517 Oph} & 4.969 & 0.058 & 3.121$^b$ & 0.033 & 2.031 & 0.018 & 1.624 & 0.016 \\ \object{V739 Sgr} & 6.448 & 0.037 & 5.283 & 0.028 & 3.818 & 0.019 & 3.225 & 0.021 \\ \object{V854 Cen} & 3.120 & 0.011 & 1.813$^b$ & 0.002 & 0.448 & 0.003 & 0.398 & 0.011 \\ \object{V CrA} & 6.008 & 0.050 & 4.211$^b$ & 0.046 & 2.387 & 0.012 & 1.624 & 0.013 \\ \object{VZ Sgr} & 7.405 & 0.028 & 6.474 & 0.019 & 4.938 & 0.019 & 4.260 & 0.027 \\ \object{WX CrA} & 6.206 & 0.041 & 5.100 & 0.031 & 3.798 & 0.017 & 3.259 & 0.023 \\ \object{XX Cam} & 5.431 & 0.053 & 4.760$^b$ & 0.033 & 3.427 & 0.019 & 2.867 & 0.018 \\ \object{Y Mus} & 8.133 & 0.025 & 7.855 & 0.022 & 5.846 & 0.018 & 4.612 & 0.025 \\ \object{Z Umi} & 5.982 & 0.045 & 4.673$^b$ & 0.030 & 3.429 & 0.018 & 2.933 & 0.017 \\ \object{MACHO-135.27132.51} & 8.337 & 0.024 & 7.033 & 0.019 & 5.043 & 0.013 & 4.323 & 0.031 \\ \object{MACHO-301.45783.9} & 8.540 & 0.027 & 7.177 & 0.021 & 5.351 & 0.018 & 4.578 & 0.030 \\ \object{MACHO-308.38099.66} & 7.647 & 0.023 & 6.495 & 0.021 & 4.829 & 0.016 & 4.071 & 0.025 \\ \object{MACHO-401.48170.2237} & 5.901 & 0.044 & 4.576$^b$ & 0.035 & 3.407 & 0.019 & 2.858 & 0.028 \\ \object{EROS2-CG-RCB-1} & 6.666 & 0.039 & 5.036 & 0.026 & 3.216 & 0.019 & 2.334 & 0.018 \\ \object{EROS2-CG-RCB-3} & 5.768 & 0.047 & 3.954$^b$ & 0.068 & 3.099 & 0.018 & 2.585 & 0.020 \\ \object{EROS2-CG-RCB-4} & 6.411 & 0.036 & 5.080 & 0.030 & 3.635 & 0.017 & 2.998 & 0.027 \\ \object{EROS2-CG-RCB-5} & 6.319 & 0.035 & 4.867$^b$ & 0.030 & 3.784 & 0.015 & 3.144 & 0.022 \\ \object{EROS2-CG-RCB-6} & 7.140 & 0.031 
& 5.933 & 0.025 & 4.314 & 0.020 & 3.670 & 0.034 \\ \object{EROS2-CG-RCB-7} & 7.210 & 0.031 & 6.143 & 0.023 & 4.628 & 0.018 & 3.985 & 0.029 \\ \object{EROS2-CG-RCB-8} & 7.050 & 0.032 & 5.795 & 0.026 & 4.189 & 0.019 & 3.595 & 0.036 \\ \object{EROS2-CG-RCB-9} & 7.250 & 0.028 & 5.406 & 0.027 & 3.423 & 0.018 & 2.649 & 0.023 \\ \object{EROS2-CG-RCB-10} & 6.739 & 0.036 & 4.485$^b$ & 0.039 & 2.546 & 0.023 & 1.958 & 0.022 \\ \object{EROS2-CG-RCB-11} & 6.581 & 0.035 & 5.456 & 0.026 & 4.138 & 0.016 & 3.523 & 0.024 \\ \object{EROS2-CG-RCB-12} & 8.067 & 0.026 & 7.380 & 0.022 & 6.452 & 0.019 & 6.263 & 0.055 \\ \object{EROS2-CG-RCB-13} & 6.839 & 0.034 & 5.541 & 0.026 & 3.985 & 0.016 & 3.378 & 0.026 \\ \object{EROS2-CG-RCB-14} & 6.375 & 0.034 & 4.695$^b$ & 0.031 & 3.292 & 0.019 & 2.678 & 0.019 \\ \hline & & & Magellanic RCB stars & & & & &\\ \hline \object{MACHO-12.10803.56} & 10.287 & 0.025 & 9.367 & 0.021 & 7.744 & 0.020 & 7.208 & 0.055 \\ \object{EROS2-LMC-RCB-4} & 10.001 & 0.025 & 8.551 & 0.021 & 6.637 & 0.018 & 6.139 & 0.034 \\ \object{EROS2-LMC-RCB-5} & 11.854 & 0.025 & 11.365 & 0.022 & 8.623 & 0.022 & 7.437 & 0.067 \\ \object{EROS2-LMC-RCB-6} & 10.604 & 0.025 & 9.082 & 0.022 & 6.857 & 0.018 & 6.102 & 0.035 \\ \hline & & & & & & & &\\ & & & HdC stars & & & & &\\ \hline \object{HD137613} & 5.090 & 0.059 & 4.909$^b$ & 0.033 & 5.000 & 0.018 & 4.854 & 0.028 \\ \object{HD148839} & 6.787 & 0.033 & 6.804 & 0.023 & 6.731 & 0.019 & 6.770 & 0.063 \\ \object{HD173409} & 8.095 & 0.018 & 8.067 & 0.019 & 8.021 & 0.025 & 8.508 & 0.349 \\ \object{HD175893} & 7.165 & 0.029 & 6.832 & 0.021 & 5.104 & 0.019 & 4.175 & 0.025 \\ \object{HD182040} & 4.953 & 0.054 & 4.511$^b$ & 0.033 & 4.730 & 0.019 & 4.739 & 0.032 \\ \hline & & & DY Per stars & & & & &\\ \hline \object{DY Per} & 3.030 & 0.089 & 2.254$^b$ & 0.007 & 1.941 & 0.009 & 2.221 & 0.022 \\ \object{EROS2-CG-RCB-2} & 8.189 & 0.035 & 7.856 & 0.030 & 7.388 & 0.045 & 7.376 & null \\ \object{MACHO-15.10675.10} & 9.525 & 0.023 & 9.344 & 0.020 & 
8.751 & 0.021 & 8.679 & 0.135 \\ \object{EROS2-LMC-DYPer-5} & 10.304 & 0.024 & 10.178 & 0.021 & 9.616 & 0.027 & 9.091 & 0.240 \\ \hline \hline \multicolumn{9}{l}{$^b$[4.6] magnitude affected by bias (see Fig.~\ref{4.6ObsModel}) } \\ \end{tabular} \end{table*} \begin{table*} \caption{AKARI fluxes (mJy) of catalogued bright Galactic RCB stars and DY Persei. \label{tab.AKARI}} \medskip \centering \begin{tabular}{lcccccccccccc} \hline \hline Name & [9.0] & $\sigma_{[9.0]}$ & [18] & $\sigma_{[18]}$ & [65] & $\sigma_{[65]}$ & [90] & $\sigma_{[90]}$ & [140] & $\sigma_{[140]}$ & [160] & $\sigma_{[160]}$ \\ \hline \hline \object{DY Cen} & 0.538 & 0.022 & 0.889 & 0.016 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{ES Aql} & 1.935 & 0.127 & 0.875 & 0.036 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{FH Sct} & 1.397 & 0.032 & 0.661 & 0.053 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{GU Sgr} & 1.290 & 0.140 & 0.751 & 0.100 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{MV Sgr} & 0.330 & 0.019 & 1.008 & 0.008 & -- & -- & 0.496 & 0.103 & -- & -- & -- & -- \\ \object{R CrB} & 52.990 & 2.440 & 21.480 & 0.029 & 2.656 & 0.170 & 1.494 & 0.114 & -- & -- & 0.606 & 0.333 \\ \object{RS Tel} & 1.870 & 0.102 & 0.767 & 0.011 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{RT Nor} & 0.257 & 0.037 & 0.228 & 0.021 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{RZ Nor} & 3.119 & 0.252 & 1.695 & 0.076 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{RY Sgr} & 48.000 & 3.660 & 20.180 & 1.020 & 3.495 & 0.104 & 2.605 & 0.139 & -- & -- & -- & -- \\ \object{S Aps} & 1.971 & 0.016 & 1.001 & 0.021 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{SU Tau} & 14.720 & 0.050 & 6.161 & 0.046 & 0.351 & -- & 1.179 & 0.080 & -- & -- & -- & -- \\ \object{SV Sge} & 3.849 & 0.441 & 2.293 & 0.095 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{U Aqr} & 1.260 & 0.142 & 0.750 & 0.002 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{UV Cas} & 1.125 & 0.046 & 0.802 & 0.026 & -- & -- 
& -- & -- & -- & -- & -- & -- \\ \object{UW Cen} & 9.760 & 0.070 & 5.697 & 0.047 & 6.645 & 0.413 & 7.302 & 0.332 & 4.179 & 0.349 & 2.809 & 1.020 \\ \object{UX Ant} & 0.101 & 0.007 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V1157 Sgr} & 2.569 & 0.004 & 1.145 & 0.015 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V1783 Sgr} & 3.918 & 0.252 & 1.900 & 0.032 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V348 Sgr} & -- & -- & 2.813 & 0.028 & -- & -- & 1.638 & 0.018 & -- & -- & -- & -- \\ \object{V3795 Sgr} & 4.574 & 0.144 & 1.951 & 0.069 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V482 Cyg} & 1.128 & 0.037 & 0.459 & 0.020 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V517 Oph} & 6.756 & 0.243 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V739 Sgr} & 1.638 & 0.065 & 0.840 & 0.001 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{V854 Cen} & 22.970 & 1.170 & 7.364 & 0.033 & -- & -- & 0.705 & 0.036 & -- & -- & -- & -- \\ \object{V CrA} & 3.606 & 0.210 & 2.166 & 0.013 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{VZ Sgr} & -- & -- & 0.283 & 0.098 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{WX CrA} & 1.742 & 0.097 & 0.810 & 0.001 & -- & -- & 1.548 & 0.105 & 2.919 & 6.340 & -- & -- \\ \object{XX Cam} & -- & -- & 1.391 & 0.065 & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{Y Mus} & 0.230 & 0.017 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\ \object{Z Umi} & 1.724 & 0.029 & 0.765 & 0.024 & -- & -- & -- & -- & -- & -- & -- & -- \\ \hline \object{DY Per} & 12.210 & 0.823 & 2.600 & 0.157 & -- & -- & -- & -- & -- & -- & -- & -- \\ \hline \hline \end{tabular} \end{table*} \begin{sidewaystable*} \caption{First 10 rows of the published catalogue. 
\label{tab.short_version}} \centering \begin{tabular}{lccccccccccccccc} \hline ID & Ra & Dec & l & b & \multicolumn{2}{c}{WISE} & \multicolumn{2}{c}{WISE} & \multicolumn{2}{c}{WISE} & \multicolumn{2}{c}{WISE} & \multicolumn{2}{c}{2MASS} \\ & (deg) & (deg) & (deg) & (deg) & [3.0] & $\sigma_{[3.0]}$ & [4.6] & $\sigma_{[4.6]}$ & [12] & $\sigma_{[12]}$ & [22] & $\sigma_{[22]}$ & J mag & $\sigma_{J}$ \\ \hline \hline 1 & 105.4079971 & -41.4713669 & 251.85892 & -15.80245 & 4.852 & 0.058 & 3.017 & 0.016 & 1.736 & 0.017 & 1.197 & 0.016 & 11.324 & 0.024 \\ 2 & 259.1376648 & -33.8375053 & 352.49854 & 2.49227 & 8.529 & 0.029 & 6.471 & 0.022 & 3.917 & 0.016 & 2.809 & 0.019 & 17.348 & -99 \\ 3 & 109.6635208 & -24.9482384 & 238.16979 & -5.55101 & 9.701 & 0.026 & 8.951 & 0.021 & 7.202 & 0.020 & 6.375 & 0.085 & 11.004 & 0.022 \\ 4 & 107.7039337 & -6.3668070 & 220.80645 & 1.38275 & 6.906 & 0.029 & 4.715 & 0.034 & 2.462 & 0.022 & 1.557 & 0.021 & 13.736 & 0.030 \\ 5 & 273.2704468 & -25.0299644 & 6.43576 & -3.35900 & 10.990 & 0.078 & 11.098 & 0.085 & 9.010 & 0.158 & 8.046 & 0.329 & 12.274 & -99 \\ 6 & 258.5602112 & -21.4371471 & 2.42136 & 10.03461 & 6.800 & 0.032 & 5.829 & 0.022 & 4.418 & 0.019 & 4.017 & 0.023 & 9.783 & 0.024 \\ 7 & 273.8857422 & -25.5371208 & 6.25302 & -4.08940 & 10.657 & 0.050 & 10.018 & 0.040 & 8.367 & 0.031 & 7.509 & 0.178 & 12.799 & -99 \\ 8 & 273.0597229 & -25.4018135 & 6.01729 & -3.36891 & 10.507 & 0.068 & 9.501 & 0.038 & 7.216 & 0.025 & 6.009 & 0.070 & 12.616 & 0.029 \\ 9 & 272.6348572 & -25.3729763 & 5.85848 & -3.01816 & 9.329 & 0.035 & 9.041 & 0.026 & 8.114 & 0.030 & 8.043 & 0.289 & 10.410 & 0.021 \\ 10 & 273.3207092 & -25.5264320 & 6.02011 & -3.63545 & 9.215 & 0.028 & 7.982 & 0.021 & 6.220 & 0.018 & 5.708 & 0.043 & 14.477 & 0.068 \\ \hline \multicolumn{16}{c}{}\\ \multicolumn{16}{c}{ Columns continued }\\ \hline \multicolumn{2}{c}{2MASS} & \multicolumn{2}{c}{2MASS} & \multicolumn{2}{c}{DENIS} & \multicolumn{2}{c}{DENIS} & \multicolumn{2}{c}{DENIS} &
\multicolumn{5}{c}{USNO-B1} & SIMBAD \\ H mag & $\sigma_{H}$ & K mag & $\sigma_{K}$ & I mag & $\sigma_{I}$ & J mag & $\sigma_{J}$ & K mag & $\sigma_{K}$ & B1 mag & B2 mag & R1 mag & R2 mag & I mag & Class. \\ \hline \hline 9.154 & 0.024 & 7.287 & 0.027 & 15.768 & 0.06 & 12.201 & 0.06 & 8.076 & 0.06 & -99 & 17.06 & -99 & 16.97 & 15.91 & C* \\ 16.261 & -99 & 13.424 & 0.050 & -99 & -99 & -99 & -99 & 12.049 & 0.11 & -99 & -99 & -99 & -99 & -99 & \_ \\ 10.798 & 0.026 & 10.549 & 0.023 & -99 & -99 & -99 & -99 &-99 & -99 & -99 & -99 & -99 & -99 &-99 & *iC \\ 10.807 & 0.027 & 8.466 & 0.023 & -99 & -99 & 13.650 & 0.09 & 8.349 & 0.07 & -99 & -99 & -99 & -99 & -99 & \_ \\ 11.694 & 0.049 & 11.312 & 0.038 & -99 & -99 & -99 & -99 & 11.294 & 0.08 & -99 & -99 & -99 & -99 & -99 & \_ \\ 9.036 & 0.024 & 8.479 & 0.021 & 10.806 & 0.04 & 9.393 & 0.07 & 8.261 & 0.25 & 14.13 & 11.90 & 15.26 & 11.89 & 10.42 & C* \\ 12.844 & 0.064 & 12.196 & 0.034 & -99 & -99 & -99 & -99 & -99 & -99 & -99 & -99 & -99 & -99 & -99 & \_ \\ 12.018 & 0.027 & 11.493 & 0.021 & 14.293 & 0.04 & 12.961 & 0.06 & 11.776 & 0.09 & 16.67 & 14.12 & 16.13 & 15.39 & 13.81 & \_ \\ 10.008 & 0.025 & 9.735 & 0.021 & 12.816 & 0.32 & 11.404 & 0.40 & 10.819 & 0.55 & 14.20 & 12.67 & 14.15 & 13.05 & 13.38 & \_ \\ 12.586 & 0.094 & 11.067 & 0.039 & 16.327 & 0.09 & 13.232 & 0.13 & 10.515 & 0.13 & -99 & -99 & -99 & -99 & -99 & V* \\ \hline \end{tabular} \end{sidewaystable*} \begin{acknowledgements} I thank personally Tony Martin-Jones, Julia Jane Karrer and Mike Bessell for their careful reading and comments. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. 
This publication also makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Centre, California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. The DENIS data have also been used. DENIS is the result of a joint effort involving human and financial contributions of several Institutes mostly located in Europe. It has been supported financially mainly by the French Institut National des Sciences de l'Univers, CNRS, and French Education Ministry, the European Southern Observatory, the State of Baden-Wuerttemberg, and the European Commission under networks of the SCIENCE and Human Capital and Mobility programs, the Landessternwarte, Heidelberg and Institut d'Astrophysique de Paris. This article is also based on observations with AKARI, a JAXA project with the participation of ESA. Finally, I thank the AAVSO association, which provides variable star light curves based on observations contributed by observers worldwide, and the ASAS south survey, which provides light curves of bright stars over the entire southern sky. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The Beilinson conjectures \cite{bei1,bei2} are very general statements, extending the class number formula, which relate the values of $L$-functions at integers to regulators. For an elliptic curve $E$ over $\mathbb{Q}$, the conjecture concerning $L(E, 2)$ is originally due to Bloch \cite{blo1,blo2} and was proved by Bloch himself when $E$ has complex multiplication. The non-CM case follows from Beilinson's work on modular curves \cite{bei2} and the modularity of $E$ due to Wiles. The regulator map that we consider is given by \begin{equation*} r_{\mathscr{D}}: \hspace{2mm} H^{2}_{\mathscr{M}}(E, \mathbb{Q}(2))_{\mathbb{Z}} \longrightarrow H^{2}_{\mathscr{D}}(E_{\mathbb{R}}, \mathbb{R}(2)) \end{equation*} from the motivic cohomology to the Deligne cohomology (see Section \ref{regulator}). Let $E_{N}$ be an elliptic curve over $\mathbb{Q}$ of conductor $N$. In this paper, we treat the cases $N=$ 27, 32 and 64, i.e. \begin{align*} E_{27} : y^{2} &= x^{3}- \frac{27}{4}, \\ E_{32} : y^{2} &= x^{3}+ 4x, \\ E_{64} : y^{2} &= x^{3}- 4x . \end{align*} Note that $E_{27}$ is isogenous to the Fermat curve of degree $3$ and has complex multiplication by $\mathbb{Z}[(-1+\sqrt{-3})/2]$, and each of $E_{32}$ and $E_{64}$ is a quotient of the Fermat curve of degree $4$ and has complex multiplication by $\mathbb{Z}[\sqrt{-1}]$. For Fermat curves, Ross \cite{ross1,ross2} constructed an element of the motivic cohomology group. Otsubo \cite{otsubo2,otsubo} expressed its regulator image in terms of special values of generalized hypergeometric functions $_{3}F_{2}$ \begin{align*} {_{3}F_{2}}\left[ \left. \begin{matrix} a,b,c \\ e,f \end{matrix} \right| z \right] &:= \sum_{n=0}^{\infty} \frac{(a)_{n} (b)_{n} (c)_{n}}{(e)_{n} (f)_{n}}\frac{z^{n}}{n!} \end{align*} where $(a)_{n} := \Gamma(a+n)/\Gamma(a)$ denotes the Pochhammer symbol (see Theorems \ref{n=3noreg}, \ref{n=4noregfor} and \ref{n=4noregfor2}).
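As a purely numerical aside (not part of the paper's argument), the defining series above can be evaluated directly from the Pochhammer symbols. The sketch below sums it term by term and sanity-checks the elementary reduction ${_3F_2}(a,b,c;b,c;z) = (1-z)^{-a}$, which follows from cancelling $(b)_{n}$ and $(c)_{n}$:

```python
from math import factorial, sqrt

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1) = Gamma(a+n)/Gamma(a)."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def hyp3f2(a, b, c, e, f, z, terms=60):
    """Partial sum of the 3F2 series; converges for |z| < 1
    (and at z = 1 when Re(e + f - a - b - c) > 0)."""
    return sum(poch(a, n) * poch(b, n) * poch(c, n)
               / (poch(e, n) * poch(f, n) * factorial(n)) * z ** n
               for n in range(terms))

# When the lower parameters repeat two upper ones, the series collapses
# to the binomial series (1 - z)^(-a); here (1 - 1/2)^(-1/2) = sqrt(2):
val = hyp3f2(0.5, 1.0, 1.0, 1.0, 1.0, 0.5)
err = abs(val - sqrt(2))
```

Note that the $_{3}F_{2}$ values at $z=1$ appearing in the theorems converge only polynomially, so in practice they would need series acceleration or a dedicated arbitrary-precision library rather than this naive partial sum.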
By comparing the regulator image of Bloch's element with that of Ross' element, Otsubo \cite{otsubo} expressed the values $L^{\prime}(E_{27}, 0)$ and $L^{\prime}(E_{32}, 0)$ in terms of values of $_{3}F_{2}$ at $z=1$. Note that we have the functional equation (cf.\cite{d.w.}) \begin{equation} L^{\prime}(E_{N}, 0) = \pm\frac{N}{(2\pi)^{2}} L(E_{N}, 2). \label{functional} \end{equation} On the other hand, in \cite{r.z}, Rogers and Zudilin expressed the value $L(E_{27}, 2)$ in terms of values of $_{3}F_{2}$ at $z=1$ directly by an analytic method (see Theorem \ref{n=3nol}). The first purpose of this paper is to prove the following formulas. \\ {\bf Theorem} (Theorems \ref{n=4nol} and \ref{cond64}) \begin{align*} L(E_{32}, 2) &= \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{1}{4}\right)}{32\sqrt{2}} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{3}{4} \end{matrix} \right| 1 \right] - \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{3}{4}\right)}{8\sqrt{2}} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{5}{4} \end{matrix} \right| 1 \right] , \\ L(E_{64}, 2) &= \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{1}{4}\right)}{32} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{4}, \frac{1}{4}, 1 \\ \frac{1}{2}, \frac{5}{4} \end{matrix} \right| 1 \right] - \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{3}{4}\right)}{48} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{3}{4}, \frac{3}{4}, 1 \\ \frac{3}{2}, \frac{7}{4} \end{matrix} \right| 1 \right] . \end{align*} Rogers \cite[p.4036, (46)]{rog1} expressed the value $L(E_{32},2)$ in terms of a value of $_{3}F_{2}$ at $z = 1/2$. Zudilin \cite[p.391, Theorem 3]{zud2} expressed $L(E_{32}, 2)$ as a sum of values of $_{3}F_{2}$ at $z=1$. Our new representation of the value $L(E_{32}, 2)$, however, is appropriate for the comparison with regulators. To prove the formulas above, we follow an analogous method to that of Rogers and Zudilin \cite{r.z}. 
The modularity theorem shows that the $L$-function of an elliptic curve is equal to the Mellin transform of a weight-two modular form. We know that the modular form corresponding to $E_{32}$ (resp. $E_{64}$) is $\eta^{2}(q^{4})\eta^{2}(q^{8})$ (resp. $\frac{\eta^{8}(q^{8})}{\eta^{2}(q^{4})\eta^{2}(q^{16})}$) (cf.\cite{onoken}), where $\eta (q)$ is the Dedekind eta function. Hence we have \begin{align} L(E_{32}, 2) &= - \int_{0}^{1} \eta^{2}(q^{4})\eta^{2}(q^{8})\log q \frac{dq}{q}, \label{e32}\\ L(E_{64}, 2) &= - \int_{0}^{1} \frac{\eta^{8}(q^{8})}{\eta^{2}(q^{4})\eta^{2}(q^{16})} \log q \frac{dq}{q} \label{e64}. \end{align} By Jacobi's triple product formula and Jacobi's imaginary transformation formula, each integral can be expressed as an integral of a product of Jacobi's theta functions. A certain transformation then reduces each of $L(E_{32}, 2)$ and $L(E_{64}, 2)$ to an integral of elementary functions. The second purpose of this paper is to compare the regulators with the values $L^{\prime}(E_{N}, 0)$ via hypergeometric functions. Let $e_{E_{N}} \in H^{2}_{\mathscr{M}}(E_{N}, \mathbb{Q}(2))_{\mathbb{Z}}$ be an element which is constructed by mapping Ross' element, $\omega_{E_{N}}$ be the normalized real holomorphic differential form on $E_{N}$ and $\Omega_{\mathbb{R}}$ be its real period (see Section \ref{regulator}). By comparing the representations of the regulators due to Otsubo \cite{otsubo} with the representations of the values of $L$-functions at $s=2$ explained above, we prove the Beilinson conjectures for $E_{27}$, $E_{32}$ and $E_{64}$.
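The identification of $L(E_{32}, s)$ with the Mellin transform of $\eta^{2}(q^{4})\eta^{2}(q^{8})$ can be spot-checked numerically: the $q$-expansion coefficients of the eta product must equal the traces of Frobenius $a_{p} = p + 1 - \#E_{32}(\mathbb{F}_{p})$. A small brute-force sketch (a standard consistency check, not a computation from the paper):

```python
def eta_quotient_coeffs(nmax):
    """q-expansion of eta(q^4)^2 eta(q^8)^2 = q * prod_n (1-q^{4n})^2 (1-q^{8n})^2."""
    c = [0] * (nmax + 1)
    c[0] = 1                                   # expansion of the product, before the q prefactor
    for d in list(range(4, nmax + 1, 4)) + list(range(8, nmax + 1, 8)):
        for _ in range(2):                     # each factor (1 - q^d) appears squared
            for k in range(nmax, d - 1, -1):   # in-place multiplication by (1 - q^d)
                c[k] -= c[k - d]
    return [0] + c[:nmax]                      # shift by the leading factor q

def a_p(p):
    """Trace of Frobenius of E_32 : y^2 = x^3 + 4x over F_p, by brute-force point count."""
    count = 1                                  # the point at infinity
    squares = {(y * y) % p for y in range(1, p)}
    for x in range(p):
        rhs = (x ** 3 + 4 * x) % p
        if rhs == 0:
            count += 1                         # a single point with y = 0
        elif rhs in squares:
            count += 2                         # two square roots +/- y
    return p + 1 - count

# e.g. the coefficient of q^5 is -2, matching a_p(5) = -2; primes p = 3 mod 4
# give a_p = 0, as expected for this CM curve.
```

Equality for all good primes is the content of the modularity theorem; checking a handful of small primes is merely a quick guard against misidentifying the eta quotient.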
\\ {\bf Theorem} (Theorems \ref{comp1}, \ref{comp2} and \ref{comp3}) \begin{align*} r_{\mathscr{D}}(e_{E_{27}}) &= - \frac{3}{2}L^{\prime}(E_{27}, 0) \Omega_{\mathbb{R}}(\omega_{E_{27}} - \overline{\omega_{E_{27}}}), \\ r_{\mathscr{D}}(e_{E_{32}}) &= - \frac{1}{2} L^{\prime}(E_{32},0)\Omega_{\mathbb{R}}(\omega_{E_{32}} - \overline{\omega_{E_{32}}}), \\ r_{\mathscr{D}}(e_{E_{64}}) &= - \frac{1}{2} L^{\prime}(E_{64},0)\Omega_{\mathbb{R}}(\omega_{E_{64}} - \overline{\omega_{E_{64}}}) . \end{align*} Note that, for $E_{27}$ and $E_{32}$, Otsubo \cite[Propositions 5.1 and 5.4]{otsubo} compared $r_{\mathscr{D}}(e_{E_{N}})$ with the regulator image of Bloch's element. Hence the first two formulas can also be obtained by Bloch's result. On the other hand, for $E_{64}$, $r_{\mathscr{D}}(e_{E_{64}})$ and the regulator image of Bloch's element have not been compared. Our result gives a rigorous proof of the formula which was found numerically in \cite{otsubo3}. By Martin and Ono \cite{onoken}, those CM elliptic curves whose corresponding cusp form is an eta quotient are $E_{27}$, $E_{32}$, $E_{36}$, $E_{64}$ and $E_{144}$. The remaining cases are \begin{align*} E_{36} : \hspace{2mm} y^{2} &= x^{3} + 1, \\ E_{144} : \hspace{2mm} y^{2} &= x^{3} - 1. \end{align*} Note that each of $E_{36}$ and $E_{144}$ is a quotient of the Fermat curve of degree $6$. Otsubo \cite{otsubo} expressed $r_{\mathscr{D}}(e_{E_{36}})$ and $r_{\mathscr{D}}(e_{E_{144}})$ in terms of values of $_{3}F_{2}$ at $z=1$. Hence a similar study of $L(E_{36}, 2)$ and $L(E_{144}, 2)$ would lead to a proof of the Beilinson conjectures. Further, for more general CM abelian varieties, one might be able to approach the Beilinson conjectures via hypergeometric functions. The structure of this paper is as follows. In Section \ref{l-formulas}, we express the values $L(E_{32}, 2)$ and $L(E_{64}, 2)$ in terms of values of $_{3}F_{2}$ at $z=1$. Their proofs are analogous to the method of Rogers and Zudilin.
In Section \ref{comparison}, we establish the relationship between $L^{\prime}(E_{27}, 0)$, $L^{\prime}(E_{32}, 0)$ and $L^{\prime}(E_{64}, 0)$ and the regulators via hypergeometric functions, and in this way prove our main results. \section{$L$-values}\label{l-formulas} In this section, we express the values $L(E_{32}, 2)$ and $L(E_{64}, 2)$ in terms of values of hypergeometric functions. \subsection{Conductor 32}\label{conductor 32} First we prove the following formula. \begin{prop}\label{thetaprod} \begin{equation*} L(E_{32}, 2) = \frac{\pi}{32}\int_{0}^{1} \theta_{2}(q)\theta_{3}(q)\left(\theta_{3}^{2}(q) - \theta_{2}^{2}(q) \right) \log \left( \frac{\theta_{3}(q^{2})}{\theta_{2}(q^{2})}\right) \frac{dq}{q}. \end{equation*} \end{prop} We know that the modular form corresponding to $E_{32}$ is $\eta^{2}(q^{4})\eta^{2}(q^{8})$ (cf.\cite[p.3173, Theorem 2]{onoken}). To show the formula above, we express this eta product as a product of Jacobi's theta functions. \begin{lemm}\label{etatotheta} \begin{equation*} \eta^{2}(q^{4})\eta^{2}(q^{8}) = \frac{1}{4}\theta_{2}^{2}(q^{2})\theta_{4}^{2}(q^{4}) \end{equation*} \end{lemm} \begin{proof} By Jacobi's triple product formula, we have (cf.\cite[p.3174]{onoken}) \begin{equation*} \eta^{3}(q^{8}) = \sum_{n=0}^{\infty} (-1)^{n}(2n+1)q^{(2n+1)^{2}}, \hspace{4mm} \frac{\eta^{2}(q)}{\eta(q^{2})} = \sum_{n=-\infty}^{\infty} (-1)^{n}q^{n^{2}} = \theta_{4}(q). \end{equation*} If we use the following formula \cite[p.68, Proposition 3.1]{borwein}, \cite[p.40, Entry 25]{ramanujan3} \begin{equation*} 2\sum_{n=0}^{\infty} (-1)^{n}(2n+1)q^{(2n+1)^{2}} = \theta_{2}(q^{4})\theta_{3}(q^{4})\theta_{4}(q^{4}), \hspace{2mm} 2\theta_{2}(q^{2})\theta_{3}(q^{2})=\theta_{2}^{2}(q), \end{equation*} then we obtain the lemma. \end{proof} By \eqref{e32} and Lemma \ref{etatotheta}, we have \begin{align*} L(E_{32}, 2) = - \frac{1}{4}\int_{0}^{1} \theta_{2}^{2}(q^{2})\theta_{4}^{2}(q^{4}) \log q \frac{dq}{q} . 
\end{align*} By substituting $q^{2} \mapsto q$ and setting $q = e^{-2\pi u}$, we obtain \begin{equation*} L(E_{32}, 2) = \frac{\pi^{2}}{4}\int_{0}^{\infty}\theta_{2}^{2}(e^{-2\pi u})\cdot u\theta_{4}^{2}(e^{-4\pi u})du. \end{equation*} We use the following Lambert series expansion \begin{equation*} \theta_{2}^{2}(q) =4\sum_{n=0}^{\infty} \frac{q^{n+1/2}}{1+q^{2n+1}} =4 \sum_{n=1}^{\infty} \frac{\chi_{-4}(n)q^{n/2}}{1-q^{n}} =4\sum_{n,k=1}^{\infty} \chi_{-4}(n)q^{n(k-1/2)} \end{equation*} where $\chi_{-4} (n) := {\rm Im} ( i^{n})$. This follows from the following formulas \cite[p.115, (8.2)]{ramanujan3}, \cite[p.35, (2.1.8)]{borwein} \begin{equation*} \theta_{3}^{2} (q) =1 + 4\sum_{n=1}^{\infty} \frac{q^{n}}{1+q^{2n}}, \hspace{7mm} \theta_{2}^{2}(q^{2}) = \theta_{3}^{2}(q) - \theta_{3}^{2}(q^{2}) . \end{equation*} By Jacobi's imaginary transformation formula \cite[p.40, (2.3.3)]{borwein}, we have \begin{equation*} u\theta_{4}^{2}(e^{-4\pi u}) = \frac{1}{4} \theta_{2}^{2}(e^{-\frac{\pi}{4u}}) = \sum_{r,s=1}^{\infty} \chi_{-4}(r)e^{-\frac{\pi r(s-1/2)}{4u}}. \end{equation*} Therefore we obtain \begin{align*} L(E_{32}, 2) = \pi^{2} \int_{0}^{\infty} \sum_{n,k,r,s=1}^{\infty} \chi_{-4}(nr) e^{-2\pi un(k-1/2)} \cdot e^{-\frac{\pi r(s-1/2)}{4u}} du. \end{align*} By substituting $u \mapsto ru/(k-1/2)$, we have \begin{equation} L(E_{32}, 2) =\pi^{2}\int_{0}^{\infty} \left( \sum_{n,r=1}^{\infty}r\chi_{-4}(nr)e^{-2\pi unr}\right) \left( \sum_{k,s=1}^{\infty}\frac{e^{-\frac{\pi}{4u}(s-1/2)(k-1/2)}}{k-\frac{1}{2}} \right) du . \label{totyuu} \end{equation} We compute the two series in the integral in the following lemmas. \begin{lemm}\label{seriescal1} \begin{equation*} \sum_{k,s=1}^{\infty}\frac{e^{-\frac{\pi}{4u}(s-1/2)(k-1/2)}}{k-\frac{1}{2}} = \frac{1}{2} \log \frac{\theta_{3}(q^{8})}{\theta_{2}(q^{8})} \end{equation*} where $q = e^{-2\pi u}$. 
\end{lemm} \begin{proof} We have \begin{align*} &\sum_{k,s=1}^{\infty}\frac{e^{-\frac{\pi}{4u}(s-1/2)(k-1/2)}}{k-\frac{1}{2}} =\log \prod_{s\geqq 1}\left| \frac{1+e^{-\frac{\pi (2s-1)}{16u}}}{1-e^{-\frac{\pi (2s-1)}{16u}}} \right| \\ &=\log \prod_{s\geqq 1}\left| \frac{\left( 1-e^{-\frac{\pi s}{8u}} \right)^{3}}{\left( 1-e^{-\frac{\pi s}{4u}} \right) \left( 1-e^{-\frac{\pi s}{16u}} \right)^{2}} \right| =\log \left| \frac{\eta^{3}(e^{-\frac{\pi}{8u}})}{\eta(e^{-\frac{\pi}{4u}})\eta^{2}(e^{-\frac{\pi}{16u}})} \right|. \end{align*} Now, if we apply the involution for the eta function \begin{equation*} \eta(e^{\frac{-2\pi i}{\tau}}) = \sqrt{-i \tau} \eta(e^{2\pi i \tau}), \end{equation*} then we obtain \begin{align*} \frac{\eta^{3}(e^{-\frac{\pi}{8u}})}{\eta(e^{-\frac{\pi}{4u}})\eta^{2}(e^{-\frac{\pi}{16u}})} =\frac{\eta^{3}(e^{-32\pi u})}{\sqrt{2} \eta(e^{-16\pi u}) \eta^{2}(e^{-64\pi u})} =\frac{\eta^{3}(q^{16})}{\sqrt{2} \eta(q^{8}) \eta^{2}(q^{32})} \end{align*} where we set $q = e^{-2\pi u}$. If we use the formulas \cite[p.3174]{onoken} \begin{align} \frac{\eta^{5}(q^{2})}{\eta^{2}(q)\eta^{2}(q^{4})} = \theta_{3}(q), \hspace{2mm} \frac{\eta^{2}(q)}{\eta(q^{2})}=\theta_{4}(q), \label{jacobifor1} \end{align} and \begin{align} \eta^{3}(q^{8}) = \frac{1}{2}\theta_{2}(q^{4})\theta_{3}(q^{4})\theta_{4}(q^{4}),\label{jacobifor2} \end{align} then \begin{align*} \frac{\eta^{3}(q^{16})}{\sqrt{2}\eta(q^{8}) \eta^{2}(q^{32})} &= \frac{\eta^{5}(q^{16})}{\eta^{2}(q^{8}) \eta^{2}(q^{32})} \cdot \frac{\eta(q^{8})\eta(q^{16})}{\sqrt{2}\eta^{3}(q^{16})} \\ &=\theta_{3}(q^{8}) \cdot \frac{\frac{1}{\sqrt{2}}\theta_{2}^{\frac{1}{2}}(q^{8})\theta_{3}^{\frac{1}{2}}(q^{8})\theta_{4}(q^{8})} {\sqrt{2}\cdot \frac{1}{2}\theta_{2}(q^{8})\theta_{3}(q^{8})\theta_{4}(q^{8})} =\left( \frac{\theta_{3}(q^{8})}{\theta_{2}(q^{8})}\right) ^{\frac{1}{2}}. \end{align*} Hence we have the lemma. 
\end{proof} \begin{lemm}\label{seriescal2} \begin{equation*} \sum_{n,r=1}^{\infty}r\chi_{-4}(nr)q^{nr} = \frac{1}{2} \theta_{2}(q^{4})\theta_{3}(q^{4})\left(\theta_{3}^{2}(q^{4}) - \theta_{2}^{2}(q^{4}) \right). \end{equation*} \end{lemm} \begin{proof} Since $\chi_{-4} (n) = {\rm Im}( i^{n} ) $, \begin{align*} \sum_{n,r=1}^{\infty}r\chi_{-4}(nr)q^{nr} = {\rm Im}\left( \sum_{n,r \geqq 1} r \left( i q \right) ^{nr} \right) = {\rm Im} \left( \sum_{r \geqq 1} \frac{r \left( i q \right) ^{r}}{1-\left( i q \right) ^{r}} \right) = -\frac{1}{24}{\rm Im}\left( L(iq)\right) \end{align*} where \begin{equation*} L(q) := 1 -24\sum_{n=1}^{\infty} \frac{nq^{n}}{1-q^{n}}. \end{equation*} Ramanujan proved \cite[p.377, Entry 38]{ramanujan4} that \begin{equation*} 3\theta_{3}^{4}(q) = 4L(q^{4}) - L(q), \end{equation*} hence we have \begin{align*} \sum_{n,r=1}^{\infty}r \chi_{-4}(nr)q^{nr} = \frac{1}{8}{\rm Im}\left( \theta_{3}^{4}(iq) \right). \end{align*} If we use the following formula \cite[p.73]{borwein} \begin{equation*} \theta_{3}(iq) = \theta_{3}(q^{4}) + i\theta_{2}(q^{4}), \end{equation*} then we obtain \begin{align*} {\rm Im}\left( \theta_{3}^{4}(iq) \right) = 4\theta_{3}^{3}(q^{4})\theta_{2}(q^{4}) - 4\theta_{3}(q^{4})\theta_{2}^{3}(q^{4}). \end{align*} Therefore we have the lemma. \end{proof} By applying Lemmas \ref{seriescal1} and \ref{seriescal2} to \eqref{totyuu}, we obtain \begin{equation*} L(E_{32}, 2)= \frac{\pi}{8}\int_{0}^{1} \theta_{2}(q^{4})\theta_{3}(q^{4})\left(\theta_{3}^{2}(q^{4}) - \theta_{2}^{2}(q^{4}) \right) \left( \log \frac{\theta_{3}(q^{8})}{\theta_{2}(q^{8})} \right) \frac{dq}{q}. \end{equation*} Then, by substituting $q^{4} \mapsto q$, we have Proposition \ref{thetaprod}. $\qed$ \\ Now we express the value $L(E_{32}, 2)$ in terms of the values of $_{3}F_{2}$ at $z=1$. \begin{theo}\label{n=4nol} \begin{equation*} L(E_{32}, 2) = \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{1}{4}\right)}{32\sqrt{2}} {_{3}F_{2}}\left[ \left. 
\begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{3}{4} \end{matrix} \right| 1 \right] - \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{3}{4}\right)}{8\sqrt{2}} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{5}{4} \end{matrix} \right| 1 \right] . \end{equation*} \end{theo} \begin{proof} Let \begin{equation*} z(x) := {_{2}}F_{1} \left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2} \\ 1 \end{matrix} \right| x \right], \hspace{5mm} y(x) := \pi\frac{z(1-x)}{z(x)}. \end{equation*} Set \begin{equation*} q = e^{-y(x)} . \end{equation*} We know \cite[p.87, Entry 30]{ramanujan2}, \cite[p.101, Entry 6]{ramanujan3} that \begin{equation*} \theta_{3}^{4}(q)\frac{dq}{q} = \frac{dx}{x(1-x)}. \end{equation*} It is also known \cite[Entry 10, 11]{ramanujan3} that \begin{align*} \theta_{3}(q) = \sqrt{z(x)}, &\hspace{5mm} \theta_{3}(q^{2}) = \sqrt{\frac{z(x)}{2}}\left( 1+\sqrt{1-x} \right) ^{\frac{1}{2}}, \\ \theta_{2}(q) = \sqrt{z(x)}x^{\frac{1}{4}}, &\hspace{5mm} \theta_{2}(q^{2}) = \sqrt{\frac{z(x)}{2}}(1- \sqrt{1-x})^{\frac{1}{2}}. \end{align*} Therefore, \begin{align*} L(E_{32}, 2) &=-\frac{\pi}{32}\int_{0}^{1} \frac{\theta_{2}(q)}{\theta_{3}(q)}\left(1 - \frac{\theta_{2}^{2}(q)}{\theta_{3}^{2}(q)} \right) \log \left( \frac{\theta_{2}(q^{2})}{\theta_{3}(q^{2})}\right) \theta_{3}^{4}(q)\frac{dq}{q} \\ &=-\frac{\pi}{32} \int_{0}^{1} x^{1/4}\left(1 - x^{1/2} \right) \log \left( \frac{(1- \sqrt{1-x})^{\frac{1}{2}}}{ (1+\sqrt{1-x}) ^{\frac{1}{2}}} \right) \frac{dx}{x(1-x)} \\ &=-\frac{\pi}{32}\int_{0}^{1} x^{1/4}\left(1 - x^{1/2} \right) \log \left( \frac{1- (1-x)^{1/2}}{ x^{1/2}}\right) \frac{dx}{x(1-x)} . 
\end{align*} If we use the formula \begin{equation*} \log \left( \frac{1- (1-x)^{1/2}}{ x^{1/2}}\right) = \sum_{n=1}^{\infty} \frac{(1-x)^{n}-2(1-x)^{\frac{n}{2}}}{2n}, \end{equation*} and perform term-by-term integration using beta integrals \begin{equation*} B(\alpha, \beta) := \int_{0}^{1} x^{\alpha -1}(1-x)^{\beta -1} dx, \end{equation*} we obtain \begin{align*} L(E_{32},2) =-\frac{\pi}{32} \sum_{n=1}^{\infty}\frac{1}{2n}\left( B\left(\frac{1}{4},n\right) - 2B\left(\frac{1}{4},\frac{n}{2}\right) - B\left(\frac{3}{4},n \right) + 2 B\left(\frac{3}{4}, \frac{n}{2} \right) \right). \end{align*} In terms of Pochhammer symbols, the values of the beta function are represented as follows: \begin{align*} B\left( \frac{1}{4}, n \right) &= \frac{\Gamma\left( \frac{1}{4}\right)\Gamma (n)}{\Gamma\left( \frac{1}{4}+n \right)} = \frac{\Gamma(n)}{\left( \frac{1}{4} \right)_{n}} ,\hspace{5mm} B\left( \frac{3}{4}, n \right) = \frac{\Gamma\left( \frac{3}{4}\right)\Gamma (n)}{\Gamma\left( \frac{3}{4}+n \right)} = \frac{\Gamma(n)}{\left( \frac{3}{4} \right)_{n}}, \\ B\left( \frac{1}{4}, \frac{n}{2} \right) &= \left\{ \begin{aligned} &B\left( \frac{1}{4}, \frac{2m}{2} \right) = \frac{\Gamma(m)}{\left( \frac{1}{4} \right)_{m}} &(n \equiv 0 \mod 2), \\ &B\left( \frac{1}{4}, \frac{2m+1}{2} \right) = \frac{\Gamma\left( \frac{1}{4}\right)\Gamma\left( \frac{1}{2}\right)\left( \frac{1}{2} \right)_{m}} {\Gamma\left( \frac{3}{4} \right)\left( \frac{3}{4} \right)_{m}} &(n \equiv 1 \mod 2), \end{aligned} \right. \\ B\left( \frac{3}{4}, \frac{n}{2} \right) &= \left\{ \begin{aligned} &B\left( \frac{3}{4}, \frac{2m}{2} \right) = \frac{\Gamma(m)}{\left( \frac{3}{4} \right)_{m}} &(n \equiv 0 \mod 2), \\ &B\left( \frac{3}{4}, \frac{2m+1}{2} \right) = \frac{4\Gamma\left( \frac{3}{4}\right)\Gamma\left( \frac{1}{2}\right)\left( \frac{1}{2}\right)_{m}} {\Gamma\left( \frac{1}{4}\right)\left( \frac{5}{4}\right)_{m}} &(n \equiv 1 \mod 2). \end{aligned} \right. 
\end{align*} Hence we obtain \begin{align*} L(E_{32}, 2) = \frac{\pi\Gamma\left( \frac{1}{4}\right)\Gamma\left( \frac{1}{2}\right)}{32\Gamma\left( \frac{3}{4}\right)} \sum_{m=0}^{\infty} \frac{\left( \frac{1}{2} \right)_{m}}{(2m+1)\left( \frac{3}{4} \right)_{m}} - \frac{\pi\Gamma\left( \frac{3}{4}\right)\Gamma\left( \frac{1}{2}\right)}{8\Gamma\left( \frac{1}{4}\right)} \sum_{m=0}^{\infty} \frac{\left( \frac{1}{2} \right)_{m}}{(2m+1)\left( \frac{5}{4} \right)_{m}} . \end{align*} Note that these series are hypergeometric functions. In fact, we have \begin{align*} \sum_{m=0}^{\infty} \frac{\left( \frac{1}{2} \right)_{m}}{(2m+1)\left( \frac{3}{4} \right)_{m}} = \sum_{m=0}^{\infty} \frac{\left( \frac{1}{2} \right)_{m}\left( \frac{1}{2} \right)_{m}(1)_{m}} {(2m+1)\left( \frac{1}{2} \right)_{m}\left( \frac{3}{4} \right)_{m}(1)_{m}} ={_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{3}{4} \end{matrix} \right| 1 \right]. \label{n=4nobaainol1} \end{align*} Similarly, we have \begin{align*} \sum_{m=0}^{\infty} \frac{\left( \frac{1}{2} \right)_{m}}{(2m+1)\left( \frac{5}{4} \right)_{m}} ={_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{5}{4} \end{matrix} \right| 1 \right]. \end{align*} Hence we obtain \begin{align*} L(E_{32}, 2) = \frac{\pi\Gamma\left( \frac{1}{4}\right)\Gamma\left( \frac{1}{2}\right)}{32\Gamma\left( \frac{3}{4} \right)} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{3}{4} \end{matrix} \right| 1 \right] - \frac{\pi\Gamma\left( \frac{3}{4}\right)\Gamma\left( \frac{1}{2}\right)}{8\Gamma\left( \frac{1}{4} \right)} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{5}{4} \end{matrix} \right| 1 \right] . 
\end{align*} If we use the formulas \begin{equation*} \Gamma\left( \frac{1}{4}\right) \Gamma\left( \frac{3}{4}\right) = \sqrt{2}\pi, \hspace{5mm} \Gamma\left( \frac{1}{2}\right)=\sqrt{\pi}, \end{equation*} then we finally obtain the theorem. \end{proof} \subsection{Conductor 64} We know that the modular form corresponding to $E_{64}$ is $\frac{\eta^{8}(q^{8})}{\eta^{2}(q^{4})\eta^{2}(q^{16})}$ (cf.\cite{onoken}). \begin{prop}\label{lnothetadenohyouji2} \begin{equation*} L(E_{64}, 2) = \frac{\pi}{64}\int_{0}^{1} \theta_{2}(q)\theta_{3}(q)\left(\theta_{3}^{2}(q) - \theta_{2}^{2}(q) \right) \log \left( \frac{\theta_{3}(q^{4})}{\theta_{2}(q^{4})}\right) \frac{dq}{q}. \end{equation*} \end{prop} \begin{proof} We have the identity \begin{equation*} \frac{\eta^{8}(q^{8})}{\eta^{2}(q^{4})\eta^{2}(q^{16})} = \frac{1}{4}\theta_{2}^{2}(q^{2})\theta_{4}^{2}(q^{8}). \end{equation*} This identity follows from \eqref{jacobifor1}, \eqref{jacobifor2} and $\theta_{3}(q) \theta_{4}(q) = \theta_{4}^{2}(q^{2})$ \cite[p.34, (2.1.7ii)]{borwein}. Therefore we obtain \begin{equation*} L(E_{64}, 2) = -\frac{1}{4}\int_{0}^{1} \theta_{2}^{2}(q^{2})\theta_{4}^{2}(q^{8}) \log q \frac{dq}{q} = -\frac{1}{16}\int_{0}^{1} \theta_{2}^{2}(q)\theta_{4}^{2}(q^{4}) \log q \frac{dq}{q}. \end{equation*} By similar calculations as in Proposition \ref{thetaprod}, we have the proposition. \end{proof} Now we express the value $L(E_{64}, 2)$ in terms of values of $_{3}F_{2}$ at $z=1$. \begin{theo}\label{cond64} \begin{equation*} L(E_{64}, 2) = \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{1}{4}\right)}{32} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{4}, \frac{1}{4}, 1 \\ \frac{1}{2}, \frac{5}{4} \end{matrix} \right| 1 \right] - \frac{\sqrt{\pi}\Gamma^{2}\left( \frac{3}{4}\right)}{48} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{3}{4}, \frac{3}{4}, 1 \\ \frac{3}{2}, \frac{7}{4} \end{matrix} \right| 1 \right] . 
\end{equation*} \end{theo} \begin{proof} If we set \begin{equation*} q = e^{-y(x)}, \end{equation*} then we know \cite[Entry 10, 11]{ramanujan3} \begin{align*} \theta_{3}(q) = \sqrt{z(x)}, &\hspace{5mm} \theta_{3}(q^{4}) = \frac{1}{2}\sqrt{z(x)}(1+(1-x)^{1/4}), \\ \theta_{2}(q) = \sqrt{z(x)}x^{\frac{1}{4}}, &\hspace{5mm} \theta_{2}(q^{4}) = \frac{1}{2}\sqrt{z(x)}(1-(1-x)^{1/4}). \end{align*} Therefore, \begin{align*} L(E_{64}, 2) &=-\frac{\pi}{64}\int_{0}^{1} \frac{\theta_{2}(q)}{\theta_{3}(q)}\left(1 - \frac{\theta_{2}^{2}(q)}{\theta_{3}^{2}(q)} \right) \log \left( \frac{\theta_{2}(q^{4})}{\theta_{3}(q^{4})}\right) \theta_{3}^{4}(q)\frac{dq}{q} \\ &=\frac{\pi}{64} \int_{0}^{1} x^{1/4}\left(1 - x^{1/2} \right) \log \left( \frac{1+(1-x)^{1/4}}{1-(1-x)^{1/4}} \right) \frac{dx}{x(1-x)}. \end{align*} If we use the following formula \begin{equation*} \log \left( \frac{1+(1-x)^{1/4}}{1-(1-x)^{1/4}} \right) = -2\sum_{n=1}^{\infty} \frac{(1-x)^{n/2}- 2(1-x)^{n/4}}{2n}, \end{equation*} then \begin{align*} L(E_{64},2) =-\frac{\pi}{32} \sum_{n=1}^{\infty}\frac{1}{2n}\left( B\left(\frac{1}{4},\frac{n}{2}\right) - 2B\left(\frac{1}{4},\frac{n}{4}\right) - B\left(\frac{3}{4},\frac{n}{2} \right) + 2 B\left(\frac{3}{4}, \frac{n}{4} \right) \right). \end{align*} By calculations similar to those in the proof of Theorem \ref{n=4nol}, we obtain the theorem. \end{proof} \begin{remk} {\rm After writing this paper, the author learned that the same formula was obtained independently in the unpublished notes of Rogers \cite[Theorem 5]{rog2}.} \end{remk} \section{Comparisons}\label{comparison} In this section, we compare the regulators with the $L$-values for $E_{27}$, $E_{32}$ and $E_{64}$ via hypergeometric functions. \subsection{Regulator of Curves}\label{regulator} Here we recall the Beilinson regulator map for curves (cf.\cite{sch}). Let $C$ be a projective smooth curve over $\mathbb{Q}$. 
The {\it regulator map} $r_{\mathscr{D}}$ defined by Beilinson is a canonical map from the {\it integral part of the motivic cohomology group} $H^{2}_{\mathscr{M}}(C, \mathbb{Q}(2))_{\mathbb{Z}}$ to the {\it real Deligne cohomology group} $H^{2}_{\mathscr{D}}(C_{\mathbb{R}}, \mathbb{R}(2))$ (cf.\cite{sch}) \begin{equation*} r_{\mathscr{D}}: \hspace{2mm} H^{2}_{\mathscr{M}}(C, \mathbb{Q}(2))_{\mathbb{Z}} \longrightarrow H^{2}_{\mathscr{D}}(C_{\mathbb{R}}, \mathbb{R}(2)). \end{equation*} We have an isomorphism (cf.\cite{jan}) \begin{equation*} H^{2}_{\mathscr{M}}(C, \mathbb{Q}(2)) \cong {\rm Ker}\left( \tau \otimes \mathbb{Q} : K_{2}^{M}(\mathbb{Q}(C)) \otimes \mathbb{Q} \longrightarrow \bigoplus_{x \in C^{(1)}} \kappa(x)^{*} \otimes \mathbb{Q} \right). \end{equation*} Here $C^{(1)}$ is the set of closed points on $C$, $\kappa(x)$ is the residue field, and $\tau = (\tau_{x})$ is the {\it tame symbol} on $K_{2}^{M}(\mathbb{Q}(C))$ \begin{equation*} \tau_{x}(\{f, g \}) = (-1)^{{\rm ord}_{x}f{\rm ord}_{x}g} \left(\frac{f^{{\rm ord}_{x}g}}{g^{{\rm ord}_{x}f}} \right)(x). \end{equation*} The {\it integral part} $H^{2}_{\mathscr{M}}(C, \mathbb{Q}(2))_{\mathbb{Z}}$ is defined to be the image of the $K$-group of a regular model of $C$ proper and flat over $\mathbb{Z}$. On the other hand, we have an isomorphism (cf.\cite{e.v}) \begin{equation*} H^{2}_{\mathscr{D}}(C_{\mathbb{R}}, \mathbb{R}(2)) \cong H^{1}(C(\mathbb{C}), \mathbb{R}(1))^{+} . \end{equation*} Here $+$ denotes the part fixed by the {\it de Rham conjugation} $F_{\infty} \otimes c_{\infty}$, where the {\it infinite Frobenius} $F_{\infty}$ is the complex conjugation acting on $C(\mathbb{C})$ and $c_{\infty}$ is the complex conjugation on the coefficients. Let $E$ be an elliptic curve. 
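To make the tame symbol formula above concrete, here is a toy computation (not from the paper) for polynomials on the affine line over $\mathbb{Q}$; for $f = t$ and $g = 1 - t$, the shape of Ross' element $\{1-u, 1-v\}$, the symbol is trivial at the points tested. A minimal Python sketch:

```python
from fractions import Fraction

def _ev(cs, x):
    # evaluate a polynomial given by low-to-high coefficients cs at x (Horner)
    r = Fraction(0)
    for c in reversed(cs):
        r = r * x + c
    return r

def _div(cs, a):
    # synthetic division of cs by (t - a); only called when the remainder is 0
    q, acc = [], Fraction(0)
    for c in cs[::-1]:
        acc = acc * a + c
        q.append(acc)
    return q[:-1][::-1]

def ord_and_unit(poly, a):
    # write a nonzero polynomial as poly = (t - a)^m * u with u(a) != 0
    cs = [Fraction(c) for c in poly]
    m = 0
    while _ev(cs, a) == 0:
        cs = _div(cs, a)
        m += 1
    return m, _ev(cs, a)

def tame(f, g, a):
    # tau_a({f, g}) = (-1)^(mn) * (f^n / g^m)(a), m = ord_a f, n = ord_a g
    m, u = ord_and_unit(f, a)
    n, v = ord_and_unit(g, a)
    return Fraction((-1) ** (m * n)) * u ** n / v ** m

f, g = [0, 1], [1, -1]          # f = t, g = 1 - t
assert tame(f, g, 0) == 1       # ord_0 f = 1, ord_0 g = 0
assert tame(f, g, 1) == 1       # ord_1 f = 0, ord_1 g = 1
assert tame([0, 1], [-2, 1], 0) == Fraction(-1, 2)   # f = t, g = t - 2
```

Only the "unit parts" $u(a)$, $v(a)$ enter the value, since the powers of $(t-a)$ cancel exactly in $f^{n}/g^{m}$.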
Let $\omega_{E} \in H^{0}(E(\mathbb{C}), \Omega ^{1})^{+}$ be the real holomorphic differential form normalized so that \begin{equation*} \frac{1}{2\pi \sqrt{-1}} \int_{E(\mathbb{C})} \omega_{E} \wedge \overline{\omega_{E}} = -1 \end{equation*} where $\overline{\omega_{E}} := c_{\infty}\omega_{E} = F_{\infty}\omega_{E}$. Note that $H^{1}(E(\mathbb{C}), \mathbb{R}(1))^{+}$ is generated by $\omega_{E} - \overline{\omega_{E}}$. Let $E(\mathbb{R})^{0}$ be the connected component of the origin with the orientation such that the real period \begin{equation*} \Omega_{\mathbb{R}} = \int_{E(\mathbb{R})^{0}} \omega_{E} \end{equation*} is positive. Then the Beilinson conjectures predict that there exists an element $e \in H_{\mathscr{M}}^{2}(E, \mathbb{Q}(2))_{\mathbb{Z}}$ such that \begin{equation*} r_{\mathscr{D}}(e) = L^{\prime}(E, 0) \Omega_{\mathbb{R}}(\omega_{E} - \overline{\omega_{E}}) . \end{equation*} Let $X_{n}$ be the Fermat curve of degree $n$ \begin{equation*} X_{n} : \hspace{2mm} u^{n} + v^{n} = 1. \end{equation*} We have a finite map $f: X_{n} \longrightarrow E_{N}$ which is defined by \begin{align*} \begin{aligned} &f(u,v) = \left( \frac{3v}{1-u}, \frac{9(1+u)}{2(1-u)} \right) &\mbox{for}\hspace{2mm} (n, N) = (3, 27), \\ &f(u,v) = \left( \frac{2(1-v^{2})}{u^{2}}, \frac{4(1-v^{2})}{u^{3}} \right) &\mbox{for}\hspace{2mm} (n, N) = (4, 32), \\ &f(u,v) = \left( \frac{2(u^{2}-1)}{v^{2}}, \frac{4u(u^{2}-1)}{v^{3}} \right) &\mbox{for}\hspace{2mm} (n, N) = (4, 64). \end{aligned} \end{align*} Let $e_{n} := \{1-u, 1-v \} \in H^{2}_{\mathscr{M}}(X_{n}, \mathbb{Q}(2))_{\mathbb{Z}}$ be Ross' element \cite{ross2}, and set \cite{otsubo2} \begin{equation*} e_{E_{N}} := f_{*}(e_{n}) \in H^{2}_{\mathscr{M}}(E_{N}, \mathbb{Q}(2))_{\mathbb{Z}} . 
\end{equation*} Otsubo \cite{otsubo2, otsubo} expressed its regulator image in terms of values of hypergeometric functions $\tilde{F}$ \begin{equation*} \tilde{F}(\alpha, \beta) := \left(\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)} \right)^{2} {_{3}}F_{2} \left[ \left. \begin{matrix} \alpha, \beta, \alpha + \beta -1 \\ \alpha + \beta , \alpha + \beta \end{matrix} \right| 1 \right]. \end{equation*} This is monotonically decreasing with respect to each parameter \cite[Proposition 4.25]{otsubo2}. If we use Thomae's formula \cite[p.14, (1)]{bailey} \begin{equation*} {_{3}F_{2}}\left[ \left. \begin{matrix} a,b,c \\ e,f \end{matrix} \right| 1 \right] = \frac{\Gamma(e)\Gamma(f)\Gamma(s)}{\Gamma(a)\Gamma(b+s)\Gamma(c+s)} {_{3}F_{2}}\left[ \left. \begin{matrix} e-a, f-a , s \\ s+c, s+b \end{matrix} \right| 1 \right] \end{equation*} where $s:= e+f - (a+b+c)$, we have \begin{align} \tilde{F}(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\beta\Gamma(\alpha+\beta)} {_{3}}F_{2} \left[ \left. \begin{matrix} \beta, \beta, 1 \\ \alpha + \beta , \beta +1 \end{matrix} \right| 1 \right]. \label{dixon} \end{align} \subsection{Conductor 27} Otsubo proved the following formula. See \cite{otsubo}, Section 5.2 for the relation of $\omega_{E_{27}}$ with a form on the Fermat curve. \begin{theo}[{\cite[Theorem 3.2]{otsubo}}]\label{n=3noreg} With the notations as above, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{27}}) = -\frac{1}{6}\sqrt{\frac{\sqrt{3}}{2\pi}} \left( \tilde{F}\left( \frac{1}{3}, \frac{1}{3} \right) - \tilde{F}\left(\frac{2}{3} , \frac{2}{3} \right) \right) \left( \omega_{E_{27}} - \overline{\omega_{E_{27}}}\right) . \end{equation*} and $\tilde{F}\left( \frac{1}{3}, \frac{1}{3} \right) - \tilde{F}\left(\frac{2}{3} , \frac{2}{3} \right) \neq 0$. \end{theo} On the other hand, Rogers and Zudilin proved the following formula. 
\begin{theo}[{\cite[Theorem 1]{r.z}}] \label{n=3nol} \begin{equation*} L(E_{27}, 2) = \frac{\Gamma^{3}\left(\frac{1}{3} \right)}{27} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{3}, \frac{1}{3}, 1 \\ \frac{2}{3}, \frac{4}{3} \end{matrix} \right| 1 \right] - \frac{\Gamma^{3}\left(\frac{2}{3} \right)}{18} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{2}{3}, \frac{2}{3}, 1 \\ \frac{4}{3}, \frac{5}{3} \end{matrix} \right| 1 \right] . \end{equation*} \end{theo} By comparing the formulas above, we prove the Beilinson conjectures for $E_{27}$. \begin{theo}\label{comp1} With the notations as above, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{27}}) = - \frac{3}{2}L^{\prime}(E_{27}, 0) \Omega_{\mathbb{R}}(\omega_{E_{27}} - \overline{\omega_{E_{27}}}). \end{equation*} \end{theo} \begin{proof} By \eqref{dixon}, we have \begin{align*} \tilde{F}\left( \frac{1}{3}, \frac{1}{3} \right) &= \frac{3\sqrt{3}}{2\pi}\Gamma^{3}\left(\frac{1}{3}\right) {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{3}, \frac{1}{3}, 1 \\ \frac{2}{3}, \frac{4}{3} \end{matrix} \right| 1 \right], \\ \tilde{F}\left(\frac{2}{3} , \frac{2}{3}\right) & =\frac{9\sqrt{3}}{4\pi}\Gamma^{3}\left(\frac{2}{3}\right) {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{2}{3}, \frac{2}{3}, 1 \\ \frac{4}{3}, \frac{5}{3} \end{matrix} \right| 1 \right]. \end{align*} Hence, by Theorems \ref{n=3noreg} and \ref{n=3nol}, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{27}}) = -\frac{27}{4\pi}\sqrt{\frac{3\sqrt{3}}{2\pi}} L(E_{27}, 2)\left(\omega_{E_{27}} - \overline{\omega_{E_{27}}} \right) . \end{equation*} By the functional equation \eqref{functional} and the fact that the root number (the sign of the functional equation) is $1$, we have \begin{align*} r_{\mathscr{D}}(e_{E_{27}}) = -\sqrt{\frac{3\pi\sqrt{3}}{2}} L^{\prime}(E_{27}, 0)\left( \omega_{E_{27}} - \overline{\omega_{E_{27}}} \right) . \end{align*} We know $\Omega_{\mathbb{R}} = \sqrt{\frac{2\pi}{\sqrt{3}}}$ (cf.\cite{otsubo}), hence we obtain the theorem. 
\end{proof} \subsection{Conductor 32} The formula for $r_{\mathscr{D}}(e_{E_{32}})$ due to Otsubo is as follows (see \cite{otsubo}, Section 5.2 for the formula of $f^{*}\omega_{E_{32}}$). \begin{theo}[{\cite[Theorem 3.2]{otsubo}}]\label{n=4noregfor} Let $e_{E_{32}} \in H^{2}_{\mathscr{M}}(E_{32}, \mathbb{Q}(2))_{\mathbb{Z}}$ be the element defined in Section \ref{regulator}. Then we have \begin{equation*} r_{\mathscr{D}}(e_{E_{32}}) = -\frac{\sqrt{2}}{16\sqrt{\pi}}\left( \tilde{F}\left(\frac{1}{4}, \frac{1}{2} \right) - \tilde{F}\left(\frac{3}{4},\frac{1}{2}\right) \right) \left( \omega_{E_{32}} - \overline{\omega_{E_{32}}} \right) \end{equation*} and $\tilde{F}\left(\frac{1}{4}, \frac{1}{2} \right) - \tilde{F}\left(\frac{3}{4},\frac{1}{2}\right) \neq 0$. \end{theo} By comparing the formula above with Theorem \ref{n=4nol}, we prove the Beilinson conjectures for $E_{32}$. \begin{theo}\label{comp2} With notations as above, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{32}}) = - \frac{1}{2} L^{\prime}(E_{32},0)\Omega_{\mathbb{R}}(\omega_{E_{32}} - \overline{\omega_{E_{32}}}). \end{equation*} \end{theo} \begin{proof} By \eqref{dixon}, we have \begin{align*} \tilde{F}\left( \frac{1}{4}, \frac{1}{2} \right) &= \sqrt{\frac{2}{\pi}}\Gamma^{2}\left(\frac{1}{4}\right) {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{3}{4} \end{matrix} \right| 1 \right], \\ \tilde{F}\left(\frac{3}{4} , \frac{1}{2}\right) & =4\sqrt{\frac{2}{\pi}}\Gamma^{2}\left(\frac{3}{4}\right) {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{1}{2}, \frac{1}{2}, 1 \\ \frac{3}{2}, \frac{5}{4} \end{matrix} \right| 1 \right]. \end{align*} Hence, by Theorems \ref{n=4nol} and \ref{n=4noregfor}, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{32}}) = -\frac{4\sqrt{2}}{\pi\sqrt{\pi}} L(E_{32}, 2)\left(\omega_{E_{32}} - \overline{\omega_{E_{32}}} \right) . 
\end{equation*} By the functional equation \eqref{functional} and the fact that the root number is $1$, we have \begin{align*} r_{\mathscr{D}}(e_{E_{32}}) = -\frac{\sqrt{2\pi}}{2} L^{\prime}(E_{32}, 0)\left( \omega_{E_{32}} - \overline{\omega_{E_{32}}} \right) . \end{align*} We know $\Omega_{\mathbb{R}} = \sqrt{2\pi}$ (cf.\cite{otsubo}), hence we obtain the theorem. \end{proof} \subsection{Conductor 64} The formula for $r_{\mathscr{D}}(e_{E_{64}})$ due to Otsubo is as follows. It is not difficult to see that $f^{*}\omega_{E_{64}}$ is proportional to $\widetilde{\omega}_{4}^{1,1}$ (see \cite[Section 3.2]{otsubo} for the notation). Then similarly as in loc. cit., we obtain $f^{*}\omega_{E_{64}} = \frac{\sqrt{\pi}}{2}\widetilde{\omega}_{4}^{1,1}$. \begin{theo}[{\cite[Theorem 3.2]{otsubo}}]\label{n=4noregfor2} Let $e_{E_{64}} \in H^{2}_{\mathscr{M}}(E_{64}, \mathbb{Q}(2))_{\mathbb{Z}}$ be the element defined in Section \ref{regulator}. Then we have \begin{equation*} r_{\mathscr{D}}(e_{E_{64}}) = -\frac{1}{16\sqrt{\pi}}\left( \tilde{F}\left(\frac{1}{4}, \frac{1}{4} \right) - \tilde{F}\left(\frac{3}{4},\frac{3}{4}\right) \right) \left( \omega_{E_{64}} - \overline{\omega_{E_{64}}} \right) \end{equation*} and $\tilde{F}\left(\frac{1}{4}, \frac{1}{4} \right) - \tilde{F}\left(\frac{3}{4},\frac{3}{4}\right) \neq 0$. \end{theo} By comparing the formula above with Theorem \ref{cond64}, we prove the Beilinson conjectures for $E_{64}$. \begin{theo}\label{comp3} With notations as above, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{64}}) = - \frac{1}{2} L^{\prime}(E_{64},0)\Omega_{\mathbb{R}}(\omega_{E_{64}} - \overline{\omega_{E_{64}}}). \end{equation*} \end{theo} \begin{proof} By \eqref{dixon}, we have \begin{align*} \tilde{F}\left( \frac{1}{4}, \frac{1}{4} \right) &= \frac{4\Gamma^{2}\left(\frac{1}{4}\right)}{\sqrt{\pi}} {_{3}F_{2}}\left[ \left. 
\begin{matrix} \frac{1}{4}, \frac{1}{4}, 1 \\ \frac{1}{2}, \frac{5}{4} \end{matrix} \right| 1 \right], \\ \tilde{F}\left(\frac{3}{4} , \frac{3}{4}\right) & =\frac{8\Gamma^{2}\left(\frac{3}{4}\right)}{3\sqrt{\pi}} {_{3}F_{2}}\left[ \left. \begin{matrix} \frac{3}{4}, \frac{3}{4}, 1 \\ \frac{3}{2}, \frac{7}{4} \end{matrix} \right| 1 \right]. \end{align*} Hence, by Theorems \ref{cond64} and \ref{n=4noregfor2}, we have \begin{equation*} r_{\mathscr{D}}(e_{E_{64}}) = -\frac{8}{\pi\sqrt{\pi}} L(E_{64}, 2)\left(\omega_{E_{64}} - \overline{\omega_{E_{64}}} \right) . \end{equation*} By the functional equation \eqref{functional} and the fact that the root number is $1$ (cf.\cite[p.84, Theorem]{kob}), we have \begin{align*} r_{\mathscr{D}}(e_{E_{64}}) = -\frac{\sqrt{\pi}}{2} L^{\prime}(E_{64}, 0)\left( \omega_{E_{64}} - \overline{\omega_{E_{64}}} \right) . \end{align*} We know $\Omega_{\mathbb{R}} = \sqrt{\pi}$ (cf.\cite{otsubo}), hence we obtain the theorem. \end{proof} \section*{Acknowledgment} This paper is based on the author's master's thesis at Chiba University. I am very grateful to Noriyuki Otsubo and Shigeki Matsuda for their valuable advice. I would like to thank Mathew Rogers and Wadim Zudilin for helpful comments.
\section{Introduction} \label{Introduction} Calibration and Direction-of-Arrival (DoA) estimation are major issues in array processing \cite{stoica1990maximum,vorobyov2005maximum}. The latter has been studied in several applications, e.g., radar, sonar, satellite, wireless communication and radio interferometric systems \cite{van2013signal,godara1997application}, where we commonly use widely distributed sensor elements aiming to achieve high resolution. In all these sensor network applications, calibration is required as some parameters are not exactly known due to imperfect instrumentation or propagation conditions \cite{ng1996sensor}. Let us note that calibration algorithms are distinguished by the presence \cite{ng1995active} or absence \cite{weiss1989array} of one or more cooperative sources, named calibrator sources. Indeed, prior source information can be available \cite{ng1996sensor} and consists mainly of the true/nominal directions and powers of calibrator sources (i.e., without any perturbation effects or antenna imperfections). Furthermore, most calibration algorithms are based on the least squares approach, with a sequential procedure updating each parameter alternately \cite{van2013signal}. The least squares estimator is indeed equivalent to the Maximum Likelihood (ML) method under an (unrealistic) Gaussian noise model. The aim of the proposed methodology here is to estimate successively the unknown sensor gains and phase errors, along with the calibrator and noise parameters, through minimization of an appropriately weighted cost function. In this work, uncertainties are estimated from the array covariance matrix, since dealing directly with time series data and operating on the signal domain quickly becomes computationally unfeasible for a large number of samples \cite{wijnholds2016blind}. 
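As a minimal sketch of why one operates on the covariance domain (synthetic data and pure-Python loops; this is not the estimator developed here), the sample covariance $\hat{\mathbf{R}} = \frac{1}{N}\sum_{n} \mathbf{x}(n)\mathbf{x}^{H}(n)$ compresses $N$ snapshots into a single $P \times P$ statistic:

```python
import random

# Toy example: build the sample covariance of N synthetic snapshots; all sizes
# and statistics below are arbitrary choices made for illustration.
random.seed(0)
P, N = 3, 5000
s = 2 ** -0.5  # std of each real/imaginary part -> unit power per sensor

R = [[0j] * P for _ in range(P)]
for _ in range(N):
    x = [complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(P)]
    for i in range(P):
        for j in range(P):
            R[i][j] += x[i] * x[j].conjugate() / N

# R is Hermitian by construction, and its diagonal estimates per-sensor power
assert abs(R[0][1] - R[1][0].conjugate()) < 1e-9
for i in range(P):
    assert abs(R[i][i].real - 1.0) < 0.1
```

Whatever the number of samples $N$, downstream calibration then works with the fixed-size matrix $\hat{\mathbf{R}}$ rather than with the $P \times N$ time series.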
The scenario under study is general but could be adapted to any practical application as in the radio astronomy context, where the number of parameters to estimate is tremendous and frequency bands are wide. In the multi-frequency scenario, a suboptimal way to perform calibration is to consider one wavelength bin at a time, with only one centralized processor, which has access to data in the whole available range of wavelengths. In this work, we study an accelerated version based on the scalable form of the Alternating Direction Method of Multipliers (ADMM) \cite{boyd2011distributed,kazemi2013clustered} with a specific network topology: there is no fusion center and agents exchange information only among themselves. The goal is to reduce the complexity of the operation flow and of the signaling exchanges \cite{erseghe2012distributed,shi2014linear,mota2013d,ollierfast,yatawatta2016fine,yatawatta2012gpu}. For estimation of the directional gains, the compressive sensing framework, especially the sparse representation method, is well-adapted and has already been applied for source localization in fully and partially calibrated arrays \cite{malioutov2005sparse,steffens2014direction,haardt2014subspace,ollier2015article}. The notation used throughout this paper is the following: $(.)^*$, $(.)^{T}$, $(.)^{H}$, $(.)^{\odot \alpha}$, $\Re(.)$ and $[.]_{n}$ denote, respectively, the complex conjugate, transpose, Hermitian operator, element-wise raising to $\alpha$, real part and the $n$-th element of a vector. The expectation operator is $\mathcal{E}\{.\}$; $\otimes$, $\circ$ and $\odot$ denote, respectively, the Kronecker, the Khatri-Rao and the Hadamard products. The operator $\diag(.)$ converts a vector into a diagonal matrix, $\blkdiag(.)$ is the block-diagonal operator, whereas $\vectdiag(.)$ produces a vector from the main diagonal of a matrix and $\vect(.)$ stacks the columns of a matrix on top of one another. 
The operators $\left\|.\right\|_2$ and $\left\|.\right\|_{\mathsf{F}}$ refer to the $l_2$ and Frobenius norms, respectively. Finally, $\mathbf{I}_P$ is the $P \times P$ identity matrix and $ | \cdot|$ refers to the cardinality of a set. \section{Model setup} \label{setup} Let us consider $Q$ emitting signal sources and $P$ sensor elements in the array. Each source $q \in \{1, \hdots,Q\}$ has a direction defined by a 2-dimensional vector $\mathbf{d}_q = \left[d_q^{l},d_q^{m}\right]^{T}$, such that all nominal/true known directions, without any disturbances, are stacked in $\mathbf{D}^{\mathrm{K}} = \left[ \mathbf{d}_{1}^{\mathrm{K}} , \ldots , \mathbf{d}_{Q}^{\mathrm{K}} \right] \in \mathbb{R}^{2 \times Q}$. Propagation conditions induce wavelength dependent distortions, leading to apparent source directions $\mathbf{D}_{\lambda}=[\mathbf{d}_{1,\lambda}, \hdots,\mathbf{d}_{Q,\lambda}]$ different from the true ones. Under the narrowband assumption, the array response matrix reads $\mathbf{A}_{\mathbf{D}_{\lambda}} = \frac{1}{\sqrt{P}} \exp \left( -j \frac{2\pi}{\lambda} \boldsymbol{\Xi} \mathbf{D}_{\lambda} \right) $ in which $\boldsymbol{\Xi} =[\boldsymbol{\xi}_1,\hdots,\boldsymbol{\xi}_P]^T \in \mathbb{R}^{P \times 2}$ contains the known Cartesian coordinates of each sensor location in the array, i.e., for $p \in \{1, \hdots,P\}$, $\boldsymbol{\xi}_p = [x_p,y_p]^T$. 
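As a minimal numerical sketch of the array response matrix $\mathbf{A}_{\mathbf{D}_{\lambda}}$ above, the following numpy snippet evaluates it for a toy geometry (the sensor coordinates, source directions and wavelength are hypothetical values chosen only for illustration):

```python
import numpy as np

def array_response(Xi, D, lam):
    """Narrowband array response A_D = exp(-j 2*pi/lam * Xi @ D) / sqrt(P)."""
    P = Xi.shape[0]
    return np.exp(-2j * np.pi / lam * (Xi @ D)) / np.sqrt(P)

# Hypothetical toy geometry: P = 4 sensors on a unit square, Q = 2 sources.
Xi = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # (P, 2)
D = np.array([[0.1, -0.2],
              [0.3, 0.05]])                                      # (2, Q)
A = array_response(Xi, D, lam=0.5)
print(A.shape)                                  # (4, 2)
print(np.allclose(np.abs(A), 1 / np.sqrt(4)))   # entries have modulus 1/sqrt(P): True
```

Each column of `A` is the steering vector of one apparent source direction, with the $1/\sqrt{P}$ normalization used in the text.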
Therefore, the $P \times 1$ vector of narrowband signals measured by all antennas is written as follows, for the \textit{n}-th time sample and wavelength $\lambda$, \begin{equation} \label{start} \mathbf{x}_{\lambda}(n) = \mathbf{G}_{\lambda}\mathbf{A}_{\mathbf{D}_{\lambda}}\boldsymbol{\Gamma}_{\lambda}\mathbf{s}_{\lambda}(n) + \mathbf{n}_{\lambda}(n) \end{equation} where the undirectional antenna gains are collected in the complex diagonal matrix $\mathbf{G}_{\lambda}= \mathrm{diag}\{\mathbf{g}_{\lambda}\}\in \mathbb{C}^{P \times P}$ and the directional gain responses, assumed identical for all antennas, are modeled by the diagonal matrix $\boldsymbol{\Gamma}_{\lambda}\in \mathbb{C}^{Q \times Q}$. Finally, $\mathbf{s}_{\lambda}(n) \sim \mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma}_{\lambda})$ and $\mathbf{n}_{\lambda}(n) \sim \mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma}^{n}_{\lambda})$ are the i.i.d. calibrator source signal and additive Gaussian thermal noise vectors, with corresponding diagonal covariance matrices $\boldsymbol{\Sigma}_{\lambda} = \mathrm{diag}\{\boldsymbol{\sigma}_{\lambda}\} \in \mathbb{R}^{ Q \times Q}$ and $\boldsymbol{\Sigma}^n_{\lambda} = \mathrm{diag}\{\boldsymbol{\sigma}^n_{\lambda}\} \in \mathbb{R}^{P \times P}$, respectively. 
From (\ref{start}), we deduce the following covariance matrix \footnotemark \begin{equation} \label{model_cov} \mathbf{R}_{\lambda}( \mathbf{p}_{\lambda})= \allowbreak \mathcal{E} \left\lbrace \mathbf{x}_{\lambda} \mathbf{x}_{\lambda}^{\mathsf{H}} \right\rbrace = \mathbf{E}_{\mathbf{D}_{\lambda}} \mathbf{M}_{\lambda}\mathbf{E}^H_{\mathbf{D}_{\lambda}} + \boldsymbol{\Sigma}^{n}_{\lambda} \end{equation} where $ \mathbf{E}_{\mathbf{D}_{\lambda}} = \mathbf{G}_{\lambda} \mathbf{A}_{\mathbf{D}_{\lambda}} \boldsymbol{\Sigma}^{1/2}_{\lambda}$ and $ \mathbf{M}_{\lambda} = \boldsymbol{\Gamma}_{\lambda} \boldsymbol{\Gamma}^H_{\lambda} = \mathrm{diag}\{\mathbf{m}_{\lambda}\}.$ \footnotetext{ As in \cite{martinjournalnew}, some commonly used assumptions are considered here to overcome scaling ambiguities, such as a fixed phase for the first element and one reference source with fixed direction and directional gain/apparent power. } In this context, the calibration problem consists in estimating the parameter vector of interest $ \mathbf{p} = [ \mathbf{p}^T_{\lambda_1}, \hdots, \mathbf{p}^T_{\lambda_F} ]^T$, with $F$ the total number of available wavelengths and $ \mathbf{p}_{\lambda} = [ \mathbf{g}^T_{\lambda}, \mathbf{d}^T_{1,\lambda}, \hdots, \mathbf{d}^T_{Q,\lambda},\mathbf{m}^T_{\lambda}, \boldsymbol{\sigma}^{\mathrm{n}^T}_{\lambda}]^T$. To this end, we exploit the sample covariance matrices $\hat{\mathbf{R}}_{\lambda}$, defined as $\hat{\mathbf{R}}_{\lambda} = \frac{1}{N} \sum _{n=1}^N \mathbf{x}_{\lambda}(n) \mathbf{x}^H_{\lambda}(n)$ for wavelength $\lambda$. In estimation theory, the ML estimator is well-known for its statistical efficiency but is not always easy to implement in practice. The Weighted Least Squares approach is an appropriate alternative, as it is asymptotically equivalent to the ML for a large number of samples $N$. 
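The covariance model (\ref{model_cov}) can be checked numerically: generating synthetic snapshots according to (\ref{start}) and averaging them, the sample covariance $\hat{\mathbf{R}}_{\lambda}$ approaches $\mathbf{R}_{\lambda}$ as $N$ grows. A toy numpy sketch follows (all dimensions, gains and powers are assumed values for illustration, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, N = 6, 2, 50000
# Toy steering matrix (random here, only for the covariance check).
A = (rng.standard_normal((P, Q)) + 1j * rng.standard_normal((P, Q))) / np.sqrt(2 * P)
g = np.exp(1j * rng.uniform(0, 2 * np.pi, P))   # undirectional gains g
sigma = np.array([2.0, 1.0])                    # calibrator powers (diag of Sigma)
m = np.array([0.8, 1.2])                        # squared directional gains m
sigma_n = 0.5 * np.ones(P)                      # noise powers (diag of Sigma_n)

# Model covariance R = E M E^H + Sigma_n, with E = G A Sigma^{1/2}.
E = np.diag(g) @ A @ np.diag(np.sqrt(sigma))
R = E @ np.diag(m) @ E.conj().T + np.diag(sigma_n)

# Snapshots x(n) = G A Gamma s(n) + n(n), with Gamma = diag(sqrt(m)).
S = (rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))) * np.sqrt(sigma / 2)[:, None]
W = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) * np.sqrt(sigma_n / 2)[:, None]
X = np.diag(g) @ A @ np.diag(np.sqrt(m)) @ S + W
R_hat = X @ X.conj().T / N
print(np.linalg.norm(R_hat - R) / np.linalg.norm(R))  # small, O(1/sqrt(N))
```

The relative Frobenius error decays like $1/\sqrt{N}$, which is the usual regime in which the weighted least squares criterion below is asymptotically equivalent to ML.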
Therefore, we wish to minimize the following local cost function, associated with wavelength $\lambda$, \begin{equation} \label{cost_here} \kappa_{\lambda}(\mathbf{p}_{\lambda}) = || \left ( \mathbf{R}_{\lambda} ( \mathbf{p}_{\lambda}) - \hat{\mathbf{R}}_{\lambda} \right) \odot \boldsymbol{\Omega}_{\lambda} ||^2 _{F} \end{equation} where $\boldsymbol{\Omega}_{\lambda} = (\boldsymbol{\sigma}_{\lambda}^{\mathrm{n}} \boldsymbol{\sigma}_{\lambda}^{\mathrm{n}^T})^{\odot - \frac{1}{2}}$. Most sources are assumed buried beneath the noise, and the antennas in the array are assumed identical with negligible mutual coupling. The aim of the designed calibration algorithm is to minimize the global cost function $\kappa(\mathbf{p}) = \sum_{\lambda \in \Lambda} \kappa_{\lambda}(\mathbf{p}_{\lambda})$ in a parallel and step-wise approach, with $\Lambda=\{\lambda_1,\hdots,\lambda_F\}$ the total set of available wavelengths. Usually, minimization is conducted w.r.t. one specific parameter while fixing the others in $ \mathbf{p}_{\lambda}$ \cite{martinjournalnew}. Here, our approach is different: we propose an accelerated version where estimation is performed directly w.r.t. the consensus (hidden) variables, as described in Algorithm 1 and detailed in the following. \section{Description of the proposed estimator} To achieve multi-frequency calibration in the sensor array, coherence is imposed along wavelength subbands for both directional and undirectional gains, by imposing available constraints or enforcing smooth variation. The choice of the basis functions is motivated by the application under analysis and can be adapted accordingly. \subsection{Coherence model for the undirectional antenna gains} \label{cohe_undi} To impose coherence along subbands, we introduce a set of smooth wavelength dependent basis functions and express the gains as linear combinations of these functions. 
Let us define $\boldsymbol{\alpha}_{p}= [\alpha_{1,p}, \hdots, \alpha_{K_g,p} ]^T \in \mathbb{C}^{K_g}$, the consensus vector for the $p$-th sensor with unknown linear coefficients. Therefore, for $p \in \{ 1, \hdots, P\}$ and $\lambda \in \Lambda$, $ [\mathbf{g}_{\lambda}]_{p} = \sum_{k=1}^{K_g} b_{k, \lambda} \alpha_{k,p} = \mathbf{b}_{\lambda}^{\mathsf{T}} \boldsymbol{\alpha}_{p},$ in which $\mathbf{b}_{\lambda} = \left[b_{1, \lambda}, \ldots, b_{K_g, \lambda} \right]^{\mathsf{T}} \in \mathbb{R}^{K_g}$ stands for the polynomial terms, describing the variation of the undirectional gains w.r.t.~wavelength. For instance, we can consider the typical basis function $b_{k, \lambda} = \left(\frac{f-f_0}{f_0}\right)^{k-1}$ in which $f=c / \lambda$ is the studied frequency of interest with $c$ the speed of light and $f_0$ is the reference frequency \cite{martinjournalnew,yatawatta2015distributed}. By stacking all vectors $\boldsymbol{\alpha}_{p}$, we obtain the global consensus vector $\boldsymbol{\alpha} = \left[\boldsymbol{\alpha}_{1}^{\mathsf{T}}, \ldots, \boldsymbol{\alpha}_{P}^{\mathsf{T}} \right]^{\mathsf{T}} \in \mathbb{C}^{P K_g},$ leading to \begin{equation} \label{express_gains_undi} \mathbf{g}_{\lambda} = \mathbf{B}_{\lambda} \boldsymbol{\alpha}, \end{equation} with $\mathbf{B}_{\lambda}= \left( \mathbf{I}_P \otimes \mathbf{b}_{\lambda}^{\mathsf{T}} \right)$. 
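A short numpy sketch of the coherence model (\ref{express_gains_undi}), using the typical polynomial basis $b_{k,\lambda} = ((f-f_0)/f_0)^{k-1}$ quoted above (the dimensions, coefficients and frequencies are toy values), verifies that the stacked form $\mathbf{g}_{\lambda} = \mathbf{B}_{\lambda}\boldsymbol{\alpha}$ matches the per-sensor expression $[\mathbf{g}_{\lambda}]_p = \mathbf{b}_{\lambda}^{\mathsf{T}}\boldsymbol{\alpha}_p$:

```python
import numpy as np

P, Kg = 3, 4
c, f0 = 3e8, 30e6   # speed of light, reference frequency (toy setup)

def b(lam):
    """Basis vector b_lambda with entries ((f - f0)/f0)^(k-1), k = 1..Kg."""
    f = c / lam
    return np.array([((f - f0) / f0) ** (k - 1) for k in range(1, Kg + 1)])

rng = np.random.default_rng(1)
alpha = rng.standard_normal(P * Kg) + 1j * rng.standard_normal(P * Kg)

lam = c / 29.8e6
B = np.kron(np.eye(P), b(lam)[None, :])     # B_lambda = I_P (x) b_lambda^T
g = B @ alpha                               # stacked form (4)
# Per-sensor evaluation [g_lambda]_p = b_lambda^T alpha_p must match.
g_direct = np.array([b(lam) @ alpha[p * Kg:(p + 1) * Kg] for p in range(P)])
print(np.allclose(g, g_direct))  # True
```

The Kronecker structure makes $\mathbf{B}_{\lambda}$ block diagonal, with one copy of $\mathbf{b}_{\lambda}^{\mathsf{T}}$ per sensor, which is what keeps the later per-sensor decomposition of the cost separable.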
\subsection{Coherence model for the directional gains} As for the undirectional gains, the coherence model is defined as follows: let us consider $\boldsymbol{\alpha}_{\mathbf{m}_{q}} \in \mathbb{R}^{K_m}$, for $q \in \{ 1, \ldots, Q\}$, such that for $\lambda \in \Lambda$, \begin{equation} \label{express_gains_di_q} [\mathbf{m}_{\lambda}]_{q} = \mathbf{b}_{\mathbf{m}_{\lambda}}^{\mathsf{T}} \boldsymbol{\alpha}_{\mathbf{m}_{q}}, \end{equation} in which $\boldsymbol{\alpha}_{\mathbf{m}_{q}}$ is the vector of hidden variables for the $q$-th calibrator source, associated with the directional gains $\mathbf{m}_{\lambda}$, while $\mathbf{b}_{\mathbf{m}_{\lambda}}$ is the corresponding basis vector. As in section \ref{cohe_undi}, all $\boldsymbol{\alpha}_{\mathbf{m}_{q}}$ are stacked in $ \boldsymbol{\alpha}_{\mathbf{m}} = \left[\boldsymbol{\alpha}^T_{\mathbf{m}_{1}}, \ldots, \boldsymbol{\alpha}_{\mathbf{m}_{Q}}^{\mathsf{T}} \right]^{\mathsf{T}} \in \mathbb{R}^{Q K_m}$, finally leading to \begin{equation} \label{express_gains_di} \mathbf{m}_{\lambda} = \mathbf{B}_{\mathbf{m}_{\lambda}} \boldsymbol{\alpha}_{\mathbf{m}} \end{equation} with $\mathbf{B}_{\mathbf{m}_{\lambda}}= \left( \mathbf{I}_Q \otimes \mathbf{b}_{\mathbf{m}_{\lambda}}^{\mathsf{T}} \right)$. We assume identical behavior for all sources, but the process can be straightforwardly adapted to source-dependent behaviors. In \cite{martinjournalnew}, the directional gains in $\boldsymbol{\Gamma}_{\lambda}$ were assumed inversely proportional to $\lambda$, but here the algorithm can be adjusted to any existing model. \subsection{Distributed network with a fusion center} \label{estim_alpha_z} Dealing with the large data volumes delivered by advanced sensor array systems requires computationally efficient calibration algorithms, with a huge number of unknowns to solve for. To improve both computational cost and estimation accuracy, distributed calibration has been proposed, exploiting data parallelism across frequency. 
Contrary to a centralized hardware architecture, which processes all frequency bands at a single location and is therefore computationally challenging, distributed optimization introduces multiple compute agents that analyze the data simultaneously across smaller frequency intervals \cite{boyd2011distributed}. By distributing the total computations across the network, we gain a significant reduction in operational and energy cost, and each agent receives information indirectly from the whole frequency range, thus improving the calibration accuracy. To this end, let us consider $Z$ computational agents arranged in a network. Each agent has access to some wavelengths $\lambda \in \Lambda_{z} =\{\lambda^{z}_{1}, \ldots, \lambda^{z}_{J_{z}}\} \subset \Lambda$. The corresponding unknown parameters in $\mathbf{p}$ are estimated locally, and consensus is enforced among agents by imposing the constraints in (\ref{express_gains_undi}) and (\ref{express_gains_di}). To start with, let us focus on the estimation of the undirectional sensor gains of section \ref{cohe_undi}. We define $\boldsymbol{\alpha}^{z}$ as the local copy of the common optimization variable $\boldsymbol{\alpha}$ for the $z$-th agent, and we denote by $\{\boldsymbol{\alpha}^{z}\}_{\mathcal{Z}} = \{\boldsymbol{\alpha}^{1}, \ldots, \boldsymbol{\alpha}^{Z}\}$ the set of all $\boldsymbol{\alpha}^{z}$ in the network. 
Calibration is reformulated as the following constrained problem \begin{equation} \begin{aligned} \hat{\boldsymbol{\alpha}} = \argmin_{\boldsymbol{\alpha}, \{\boldsymbol{\alpha}^{z}\}_{\mathcal{Z}}} \sum_{z=1}^{Z} \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) \ \ \text{subject to~} \boldsymbol{\alpha}^{z} = \boldsymbol{\alpha} \text{~for~} z \in \{1,\ldots,Z\} \end{aligned} \end{equation} where $\kappa^{z}\left(\boldsymbol{\alpha}^{z}\right)$ is the cost function for the $z$-th agent, i.e., for $\lambda \in \Lambda_{z}$, which depends on the local variable $\boldsymbol{\alpha}^{z}$ and is associated with the data $\left\lbrace \hat{\mathbf{R}}_{\lambda}\right\rbrace_{\lambda \in \Lambda_{z}}$. To solve this problem, we use the augmented Lagrangian, given by \cite{LiADMMcomplex} $L \left( \{\boldsymbol{\alpha}^{z}\}_{\mathcal{Z}}, \boldsymbol{\alpha}, \{\mathbf{y}^{z}\}_{\mathcal{Z}} \right) = \sum_{z=1}^{Z} \left[ \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) + \Re\left\{\mathbf{y}^{z \mathsf{H}} \left(\boldsymbol{\alpha}^{z} - \boldsymbol{\alpha} \right)\right\} + \frac{\rho}{2} \left\| \boldsymbol{\alpha}^{z} - \boldsymbol{\alpha} \right\|_{2}^{2} \right]$ where $\{\mathbf{y}^{z}\}_{\mathcal{Z}}$ are the $Z$ Lagrange multipliers and $\rho$ is the penalty parameter. We resort to the consensus ADMM in the scaled form by introducing the scaled dual variable $\mathbf{u}^{z}=\frac{1}{\rho}\mathbf{y}^{z}$ \cite{boyd2011distributed}. 
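To make the scaled-form consensus scheme concrete, here is a minimal numpy sketch on surrogate quadratic local costs $\kappa^z(\boldsymbol{\alpha}) = \|\mathbf{A}_z\boldsymbol{\alpha}-\mathbf{b}_z\|_2^2$ (the data and dimensions are hypothetical stand-ins for the actual calibration costs): each agent solves its local regularized problem in parallel, a fusion center averages the local copies plus scaled duals, and the iterates converge to the centralized minimizer.

```python
import numpy as np

# Consensus ADMM (scaled form) with a fusion center, on surrogate quadratic
# local costs kappa^z(a) = ||A_z a - b_z||^2 (toy data, not the paper's cost).
rng = np.random.default_rng(2)
Z, K, rho = 3, 4, 1.0
As = [rng.standard_normal((6, K)) for _ in range(Z)]
bs = [rng.standard_normal(6) for _ in range(Z)]

a_glob = np.zeros(K)
a_loc = [np.zeros(K) for _ in range(Z)]
u = [np.zeros(K) for _ in range(Z)]
for t in range(500):
    # Local steps (one per agent, embarrassingly parallel):
    # argmin ||A_z a - b_z||^2 + rho/2 ||a - a_glob + u_z||^2.
    for z in range(Z):
        a_loc[z] = np.linalg.solve(2 * As[z].T @ As[z] + rho * np.eye(K),
                                   2 * As[z].T @ bs[z] + rho * (a_glob - u[z]))
    # Fusion-center averaging, then dual updates.
    a_glob = np.mean([a_loc[z] + u[z] for z in range(Z)], axis=0)
    for z in range(Z):
        u[z] += a_loc[z] - a_glob

# Compare with the centralized solution of sum_z ||A_z a - b_z||^2.
a_star = np.linalg.lstsq(np.vstack(As), np.concatenate(bs), rcond=None)[0]
print(np.linalg.norm(a_glob - a_star))  # small: matches the centralized solution
```

The local solve mirrors the closed-form regularized least squares updates derived later for the actual calibration costs.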
The three updates of the iterative algorithm are therefore given by \begin{align} \boldsymbol{\alpha}^{z [t+1]} &= \argmin_{\boldsymbol{\alpha}^{z}} \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) + \frac{\rho}{2} \|\boldsymbol{\alpha}^{z}-\boldsymbol{\alpha}^{[t]}+\mathbf{u}^{z [t]}\|_{2}^{2} = \argmin_{\boldsymbol{\alpha}^{z}} \tilde{L}^{z}\left(\boldsymbol{\alpha}^{z},\boldsymbol{\alpha}^{[t]}, \mathbf{u}^{z [t]}\right)\label{eq:zetastep_first} \\ \boldsymbol{\alpha}^{[t+1]} &= \argmin_{\boldsymbol{\alpha}} \sum_{z=1}^{Z} \|\boldsymbol{\alpha}^{z [t+1]}-\boldsymbol{\alpha}+\mathbf{u}^{z [t]}\|_{2}^{2} \label{eq:zetastep} \\ \mathbf{u}^{z [t+1]} &= \mathbf{u}^{z [t]} + \left( \boldsymbol{\alpha}^{z [t+1]} - \boldsymbol{\alpha}^{[t+1]} \right) \end{align} where $t$ is the iteration counter. Minimization (\ref{eq:zetastep}) leads to the following average, computed at the fusion center and sent to all agents in the network, \begin{equation} \boldsymbol{\alpha}^{[t+1]} = \frac{1}{Z} \sum_{z=1}^{Z} \left (\boldsymbol{\alpha}^{z [t+1]} + \mathbf{u}^{z [t]} \right ), \end{equation} from which the undirectional gains can be directly deduced with (\ref{express_gains_undi}). The local minimization step in (\ref{eq:zetastep_first}) is computationally the most expensive one. To handle it, we adopt an iterative approach and notice that the problem is separable w.r.t. each $\boldsymbol{\alpha}^{z}$, i.e., w.r.t. each agent. Let us treat $\boldsymbol{\alpha}^{z}$ and $(\boldsymbol{\alpha}^{z})^{\ast}$ as two independent variables \cite{salvini2014fast}. We then minimize $\tilde{L}^{z}\left(\boldsymbol{\alpha}^{z}, (\boldsymbol{\alpha}^{z})^{\ast},\boldsymbol{\alpha}, \mathbf{u}^{z}\right)$ w.r.t. $\boldsymbol{\alpha}^{z}$, considering $(\boldsymbol{\alpha}^{z})^{\ast}$ as fixed and neglecting the diagonal elements in the cost function. In this case, the local cost function becomes separable w.r.t. 
the sub-vectors of $\boldsymbol{\alpha}^{z}$, i.e., $\boldsymbol{\alpha}^{z} = \left[\boldsymbol{\alpha}_{1}^{z \mathsf{T}}, \ldots, \boldsymbol{\alpha}_{P}^{z \mathsf{T}} \right]^{\mathsf{T}},$ where $\boldsymbol{\alpha}_{p}^{z}$ is the local consensus vector for the $p$-th sensor at the $z$-th agent. The following decompositions w.r.t. the sensor elements are also possible \begin{align} \label{decompose} \kappa^{z}(\boldsymbol{\alpha}^{z}) = \sum_{p=1}^{P} \kappa^{z}_{p}(\boldsymbol{\alpha}^{z}_{p}) \end{align} and $ \tilde{L}^{z}\left(\boldsymbol{\alpha}^{z};\boldsymbol{\alpha}, \mathbf{u}^{z}\right) = \sum_{p=1}^{P} \tilde{L}^{z}_{p}\left(\boldsymbol{\alpha}^{z}_{p};\boldsymbol{\alpha}_p, \mathbf{u}^{z}_p\right)$ with $\tilde{L}^{z}_{p}\left(\boldsymbol{\alpha}^{z}_{p};\boldsymbol{\alpha}_p, \mathbf{u}^{z}_p\right) = \kappa^{z}_{p}(\boldsymbol{\alpha}^{z}_{p}) + \frac{\rho}{2} \|\boldsymbol{\alpha}^{z}_{p}-\boldsymbol{\alpha}_{p}+\mathbf{u}^{z}_{p}\|_{2}^{2}$ where $\kappa^{z}_{p}(\boldsymbol{\alpha}^{z}_{p})$ corresponds to the cost function for the $p$-th row of $\left\{\hat{\mathbf{R}}_{\lambda}\right\}_{\lambda \in \Lambda_{z}}$, which only depends on $\boldsymbol{\alpha}^{z}_{p}$ since the remaining parameters are considered as fixed in this step. Let us define the operator $\mathcal{S}_p(.)$, which extracts the $p$-th row of a matrix as a vector and removes the $p$-th element of this vector. 
We also introduce the quantity $\mathbf{R}^{\text{\tiny K}}_{\lambda} = \mathbf{A}_{\mathbf{D}_{\lambda}} \boldsymbol{\Sigma}_{\lambda}\mathbf{M}_{\lambda} \mathbf{A}^{\mathsf{H}}_{\mathbf{D}_{\lambda}}$ (reference source model) and the following vectors \begin{align} \hat{\mathbf{r}}^{\lambda}_{p} = \mathcal{S}_p\left(\hat{\mathbf{R}}_{\lambda}\right) \odot \boldsymbol{\omega}^{\lambda}_{p},\ \ \ \ \ \ \mathbf{z}^{\lambda}_{p} = \mathcal{S}_p \Big(\mathbf{R}^{\text{\tiny K}}_{\lambda} \diag\left(\mathbf{B}_{\lambda} (\boldsymbol{\alpha}^{z})^{\ast}\right) \Big) \odot \boldsymbol{\omega}^{\lambda}_{p} \end{align} in which $\boldsymbol{\omega}^{\lambda}_{p} = \mathcal{S}_p\left(\boldsymbol{\Omega}_{\lambda} \right) $. In addition, let us consider the $J_{z} \times K_g$ matrix $ \mathbf{B}^{z} = \left[\mathbf{b}_{\lambda^{z}_{1}}, \ldots, \mathbf{b}_{\lambda^{z}_{J_{z}}}\right]^{\mathsf{T}}$, $\hat{\mathbf{r}}^{z}_{p} = \left[\hat{\mathbf{r}}^{\lambda^{z}_{1} \mathsf{T}}_{p}, \ldots, \hat{\mathbf{r}}^{\lambda^{z}_{J_{z}} \mathsf{T}}_{p}\right]^{\mathsf{T}} \in \mathbb{C}^{(P-1) J_z \times 1}$, $\mathbf{Z}^{z}_{p} = \blkdiag\left(\mathbf{z}^{\lambda^{z}_{1}}_{p}, \ldots, \mathbf{z}^{\lambda^{z}_{J_{z}} }_{p}\right) \in \mathbb{C}^{(P-1) J_z \times J_z}$ and $\tilde{\mathbf{Z}}^{z}_{p} = \mathbf{Z}^{z}_{p} \mathbf{B}^{z}$. We can thus write $\kappa^{z}_{p}(\boldsymbol{\alpha}^{z}_{p})$ in (\ref{decompose}) as $\kappa^{z}_{p}(\boldsymbol{\alpha}^{z}_{p}) = \left\| \hat{\mathbf{r}}^{z}_{p} - \tilde{\mathbf{Z}}^{z}_{p} \boldsymbol{\alpha}^{z}_{p} \right\|_2^2$ and finally obtain the following estimate \begin{equation} \hat{\boldsymbol{\alpha}}^{z}_{p} = \left( 2\tilde{\mathbf{Z}}^{z \mathsf{H}}_{p} \tilde{\mathbf{Z}}^{z}_{p} + \rho \mathbf{I}_{K_g} \right)^{-1} \left(2 \tilde{\mathbf{Z}}^{z \mathsf{H}}_{p} \hat{\mathbf{r}}^{z}_{p} + \rho \left(\boldsymbol{\alpha}_{p} - \mathbf{u}^{z}_{p} \right)\right). 
\end{equation} \subsection{Distributed network with no fusion center} \label{sec_no_fus} We consider a specific formulation of the ADMM where every node in the network performs calibration locally and consensus is reached only with clearly identified neighbours, without any fusion center \cite{erseghe2012distributed}. We denote by $\mathcal{N}_{z}$ the index set corresponding to the neighbours of the $z$-th agent. The considered network architecture is shown in Figure 1, where, for example, $\mathcal{N}_{3} = \{2, 4\}$. We define the quantity $(\cdot)^{z,y}$ as the copy available at the $z$-th agent, transferred to the $y$-th agent. In such a context, the minimization problem becomes \begin{equation} \label{first_problem} \begin{aligned} \hat{\boldsymbol{\alpha}} &= \argmin_{\{\boldsymbol{\alpha}^{z}, \boldsymbol{\beta}^{z, y}, \forall y \in \mathcal{N}_{z} \}_{\mathcal{Z}}} \sum_{z=1}^{Z} \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) \\ &\text{subject to~} \boldsymbol{\alpha}^{z} = \boldsymbol{\beta}^{z, y}, \ \ \boldsymbol{\beta}^{y,z} = \boldsymbol{\beta}^{z,y}, \forall y \in \mathcal{N}_{z}, \text{~for~} z\in \{1,\ldots,Z\} \end{aligned} \end{equation} where the auxiliary variables $\boldsymbol{\beta}^{z,y}$ impose consensus constraints on two neighboring agents and are meant to be local copies of $\boldsymbol{\alpha}$. The decentralized strategy enables the agents to cooperatively minimize a sum of local objective functions, the final aim being to converge to a common value, with fast convergence speed and good estimation performance \cite{erseghe2011fast}. 
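A toy numpy sketch of this neighbour-based consensus scheme, again on surrogate quadratic local costs $\kappa^z(\boldsymbol{\alpha}) = \|\mathbf{A}_z\boldsymbol{\alpha}-\mathbf{b}_z\|_2^2$ with a hypothetical 3-node chain topology, illustrates how the edge variables $\boldsymbol{\beta}^{z,y}$ (averages of the exchanged messages) drive all local copies to the centralized minimizer without any fusion center:

```python
import numpy as np

# Decentralized scaled ADMM sketch (no fusion center) on toy quadratic costs.
# Agents exchange messages gamma^{z,y} only with neighbours; consensus is
# enforced through beta^{z,y} = (gamma^{z,y} + gamma^{y,z}) / 2.
rng = np.random.default_rng(3)
Z, K, rho = 3, 3, 1.0
neigh = {0: [1], 1: [0, 2], 2: [1]}            # chain topology (cf. Figure 1)
As = [rng.standard_normal((5, K)) for _ in range(Z)]
bs = [rng.standard_normal(5) for _ in range(Z)]

a = [np.zeros(K) for _ in range(Z)]
beta = {(z, y): np.zeros(K) for z in range(Z) for y in neigh[z]}
u = {(z, y): np.zeros(K) for z in range(Z) for y in neigh[z]}
for t in range(600):
    # Local step: (2 A^T A + rho N_z I) a = 2 A^T b + rho sum_y (beta - u).
    for z in range(Z):
        Nz = len(neigh[z])
        rhs = 2 * As[z].T @ bs[z] + rho * sum(beta[(z, y)] - u[(z, y)] for y in neigh[z])
        a[z] = np.linalg.solve(2 * As[z].T @ As[z] + rho * Nz * np.eye(K), rhs)
    # Message passing and projection onto the symmetry set, then dual update.
    gamma = {(z, y): a[z] + u[(z, y)] for z in range(Z) for y in neigh[z]}
    for z in range(Z):
        for y in neigh[z]:
            beta[(z, y)] = 0.5 * (gamma[(z, y)] + gamma[(y, z)])
            u[(z, y)] += a[z] - beta[(z, y)]

a_star = np.linalg.lstsq(np.vstack(As), np.concatenate(bs), rcond=None)[0]
print(max(np.linalg.norm(a[z] - a_star) for z in range(Z)))  # small for all agents
```

With a connected graph, every local copy converges to the same global solution, which is the behavior exploited by the calibration algorithm.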
To obtain a more compact form of the problem in (\ref{first_problem}), we define $\boldsymbol{\beta}^{z} = \begin{bmatrix} \{\boldsymbol{\beta}^{z, y}\}_{y \in \mathcal{N}_{z}} \end{bmatrix} \ \ \text{and} \ \ \boldsymbol{\beta} = \begin{bmatrix} \{\boldsymbol{\beta}^{z}\}_{z\in\{1,\ldots, Z\}} \end{bmatrix} $, leading to \begin{equation} \begin{aligned} \hat{\boldsymbol{\alpha}} = \argmin_{\{\boldsymbol{\alpha}^{z}, \boldsymbol{\beta}^{z}\}_{\mathcal{Z}}} \sum_{z=1}^{Z} \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) \ \ \text{subject to~} \mathbf{H}^{z} \boldsymbol{\alpha}^{z} = \boldsymbol{\beta}^{z}, \text{~for~} z\in\{1,\ldots,Z\}, \ \ \boldsymbol{\beta} \in \mathcal{B} \end{aligned} \end{equation} with $\mathcal{B} = \Big\{\boldsymbol{\beta} |\boldsymbol{\beta}^{z,y}=\boldsymbol{\beta}^{y,z}, \forall y \in \mathcal{N}_{z}, \text{~for~} z\in\{1,\ldots,Z\}\Big\}$ and $\mathbf{H}^{z} = \mathbf{1}_{N_{z} \times 1} \otimes \mathbf{I}_{K_g P}$ where $N_{z}= |\mathcal{N}_{z}|$. As in section \ref{estim_alpha_z}, the scaled version of the ADMM leads to \begin{align} \boldsymbol{\alpha}^{z [t+1]} &= \argmin_{\boldsymbol{\alpha}^{z}} \kappa^{z}\left(\boldsymbol{\alpha}^{z}\right) + \frac{\rho_{z}^{[t+1]}}{2} \|\mathbf{H}^{z} \boldsymbol{\alpha}^{z}-\boldsymbol{\beta}^{z [t]}+\mathbf{u}^{z [t]}\|_{2}^{2} = \argmin_{\boldsymbol{\alpha}^{z}} \tilde{L}^{z}\left(\boldsymbol{\alpha}^{z}, \boldsymbol{\beta}^{z [t]}, \mathbf{u}^{z [t]}\right)\label{eq:alpha_d_new} \\ \{\boldsymbol{\beta}^{z [t+1]}\}_{\mathcal{Z}} &= \argmin_{\{\boldsymbol{\beta}^{z}\}_{\mathcal{Z}} \in \mathcal{B}} L \left( \{\boldsymbol{\alpha}^{z [t+1]}, \boldsymbol{\beta}^{z}, \mathbf{u}^{z [t]}\}_{\mathcal{Z}} \right)\label{eq:beta_d_new} \\ \mathbf{u}^{z [t+1]} &= \mathbf{u}^{z [t]} + \left(\mathbf{H}^{z} \boldsymbol{\alpha}^{z [t+1]} - \boldsymbol{\beta}^{z [t+1]} \right) \label{eq:u_d} \end{align} and through decomposition of the problem in (\ref{eq:alpha_d_new}) w.r.t. 
sensor dependence, we obtain \begin{equation} \label{eq:alpha_z_estim} \hat{\boldsymbol{\alpha}}^{z}_{p} = \left( 2\tilde{\mathbf{Z}}^{z \mathsf{H}}_{p} \tilde{\mathbf{Z}}^{z}_{p} + \rho N_{z} \mathbf{I}_{K_g} \right)^{-1} \left(2 \tilde{\mathbf{Z}}^{z \mathsf{H}}_{p} \hat{\mathbf{r}}^{z}_{p} + \rho \mathbf{H}_{p}^{z \mathsf{H}} \left(\boldsymbol{\beta}^{z}_{p}-\mathbf{u}^{z}_{p} \right)\right) \end{equation} with $ \mathbf{H}^{z}_{p} = \mathbf{1}_{N_{z} \times 1} \otimes \mathbf{I}_{K_g}$. The selected variables $\boldsymbol{\beta}^{z}_{p}$ and $\mathbf{u}^{z}_{p}$ are obtained from $\boldsymbol{\beta}^{z}$ and $\mathbf{u}^{z}$ via an appropriate selection matrix. After considering the projection onto $\mathcal{B}$ and denoting the messages passed between the agents as \begin{equation} \label{eq:gamma} \boldsymbol{\gamma}^{z [t+1]} = \begin{bmatrix} \{\boldsymbol{\gamma}^{z,y [t+1]}\}_{y \in \mathcal{N}_{z}} \end{bmatrix} = \mathbf{H}^{z} \boldsymbol{\alpha}^{z [t+1]} + \mathbf{u}^{z [t]}, \end{equation} we solve (\ref{eq:beta_d_new}) thanks to \begin{align} \label{eq:beta} \boldsymbol{\beta}^{z,y [t+1]} = \frac{1}{2} \left(\boldsymbol{\gamma}^{y,z [t+1]} + \boldsymbol{\gamma}^{z,y [t+1]} \right). \end{align} The steps of the proposed distributed method for the calibration of the sensor gains are summarized in Algorithm 1.2. \subsection{Estimation of directional gains} In this section, we describe the part of the algorithm dedicated to the estimation of the DoAs $\mathbf{D}_{\lambda}$ and directional gains $ \mathbf{m}_{\lambda} $, for fixed sensor gains, with a sparse and distributed implementation. Assuming a sparse observed scene, we define dictionaries of steering matrices, for $q \in \{1,\ldots,Q\}$ and $\lambda \in \Lambda$, as $ \tilde{\mathbf{A}}_{\lambda} = \left[ \tilde{\mathbf{A}}_{1, \lambda}, \ldots ,\tilde{\mathbf{A}}_{Q, \lambda} \right] \in \mathbb{C}^{P \times N_g},$ where $N_g = \sum_{q=1}^{Q} N_q$ denotes the total number of directions on the grid. 
The sparse vectors in $\tilde{\mathbf{m}}_{\lambda} = \left[ \tilde{\mathbf{m}}_{1, \lambda}^{\mathsf{T}}, \ldots ,\tilde{\mathbf{m}}_{Q, \lambda}^{\mathsf{T}} \right]^{\mathsf{T}} \in \mathbb{R}^{N_g}$ contain the corresponding squared direction dependent gains. The covariance model is rewritten as $ \mathbf{R}_{\lambda} = \tilde{\mathbf{E}}_{\lambda} \tilde{\mathbf{M}}_{\lambda} \tilde{\mathbf{E}}_{\lambda}^{\mathsf{H}} + \boldsymbol{\Sigma}^{\mathrm{n}}_{\lambda} \text{,}$ in which $\tilde{\mathbf{M}}_{\lambda} = \diag(\tilde{\mathbf{m}}_{\lambda})=\left(\mathbf{I}_{N_{g} } \otimes \mathbf{b}^T_{\lambda}\right)\blkdiag \left(\boldsymbol{\alpha}_1,\ldots, \boldsymbol{\alpha}_{N_g}\right)$, $\tilde{\mathbf{E}}_{\lambda} = \mathbf{G}_{\lambda} \tilde{\mathbf{A}}_{\lambda} \tilde{\boldsymbol{\Sigma}}^{\frac{1}{2}}_{\lambda}$ and $\tilde{\boldsymbol{\Sigma}}_{\lambda} = \blkdiag \left(\mathbf{I}_{N_1 } \left[\boldsymbol{\sigma}_{\lambda}\right]_1,\ldots, \mathbf{I}_{N_Q }\left[\boldsymbol{\sigma}_{\lambda}\right]_Q\right)$. To handle the DoA estimation and satisfy both sparsity and positivity requirements, we use the Distributed Iterative Hard Thresholding (IHT) \cite{blumensath2010normalized,patterson2014distributed}. Contrary to \cite{martinjournalnew}, the hard-thresholding operator $\mathcal{H}_1 \left( \sum_{\lambda \in \Lambda} \left(\check{\mathbf{V}}_{\lambda}^{q \mathsf{T}} \check{\hat{\mathbf{r}}}_{\lambda}^{q}\right)^{\odot 2} \right)$ is considered here to provide access to the DoA of the $q$-th source, and a first estimate of the directional gain $\check{\mathbf{m}}^z_{q}$. The quantity $(\cdot)^q$ refers to the $q$-th column of a matrix, the expression $\check{(\cdot)}$ discards the elements corresponding to the diagonal of $\hat{\mathbf{R}}_{\lambda}$, and the hard thresholding operator $\mathcal{H}_s(.)$ keeps the $s$ largest components and sets the remaining entries equal to zero. 
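For clarity, the hard-thresholding operator $\mathcal{H}_s(.)$ can be sketched in a few lines of numpy (here selection is by magnitude, which coincides with the text's usage since the operator is applied to a nonnegative vector of squared correlations; the input values are arbitrary):

```python
import numpy as np

def hard_threshold(v, s):
    """H_s(v): keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]   # indices of the s largest components
    out[keep] = v[keep]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0])
print(hard_threshold(v, 2))  # [ 0. -3.  0.  2.]
```

With $s=1$, as in the operator $\mathcal{H}_1$ above, only the single grid point with the strongest accumulated correlation survives, which is what localizes the $q$-th source on the grid.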
Finally, thanks to (\ref{express_gains_di_q}) and dealing with the consensus variables as in section \ref{sec_no_fus}, the minimization problem becomes \begin{equation} \begin{aligned} \hat{\boldsymbol{\alpha}}_{\mathbf{m}_{q}} = \argmin_{\{\boldsymbol{\alpha}^{z,z}_{\mathbf{m}_{q}}, \{\boldsymbol{\alpha}^{z,y}_{\mathbf{m}_{q}}\}_{y \in \mathcal{N}_{z}} \}_{\mathcal{Z}}} \sum_{z=1}^{Z} \eta^{z}_{q}\left(\boldsymbol{\alpha}^{z,z}_{\mathbf{m}_{q}}\right) \ \ \text{subject to~} \boldsymbol{\alpha}^{z,z}_{\mathbf{m}_{q}}= \boldsymbol{\alpha}^{y,z}_{\mathbf{m}_{q}}, \forall y \in \mathcal{N}_{z}, \text{~for~} z \in \{1,\ldots,Z\} \end{aligned} \end{equation} where we benefit from the previous hard-thresholding estimate to define $\eta^{z}_{q}\left(\boldsymbol{\alpha}_{\mathbf{m}_{q}} \right) = \sum_{\lambda \in \Lambda_{z}} \left\| \check{m}_{q,\lambda} - \mathbf{b}^T_{\mathbf{m}_{\lambda}} \boldsymbol{\alpha}_{\mathbf{m}_{q}} \right\|_{2}^{2} = \left\| \check{\mathbf{m}}^{z}_{q} - \mathbf{B}^{z}_{\mathbf{m}} \boldsymbol{\alpha}_{\mathbf{m}_{q}} \right\|_{2}^{2}$ with $\check{\mathbf{m}}^{z}_{q} = [\check{m}_{q,\lambda_{1}^{z}}, \ldots, \check{m}_{q,\lambda_{J_{z}}^{z}}]^T$ and $\mathbf{B}_{\mathbf{m}}^{z} = \left[\mathbf{b}_{\mathbf{m}_{\lambda_{1}^{z}}}, \ldots, \mathbf{b}_{\mathbf{m}_{\lambda_{J_{z}}^{z}}} \right]^{\mathsf{T}}$. 
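The fit behind $\eta^{z}_{q}$ is an ordinary least squares regression of the per-wavelength estimates $\check{m}_{q,\lambda}$ on the basis rows $\mathbf{b}_{\mathbf{m}_{\lambda}}^{\mathsf{T}}$. A toy numpy sketch (the polynomial basis, sample points and noise level are assumed values, not the paper's):

```python
import numpy as np

# Recover K_m polynomial coefficients alpha_m_q from noisy per-wavelength
# directional-gain estimates m_check (toy surrogate for the eta^z_q fit).
Km = 3
x = np.linspace(0.0, 1.5, 6)                 # surrogate (f - f0)/f0 values
B_m = np.vander(x, Km, increasing=True)      # rows play the role of b_m_lambda^T
alpha_true = np.array([1.0, -0.5, 0.2])

rng = np.random.default_rng(4)
m_check = B_m @ alpha_true + 1e-3 * rng.standard_normal(len(x))
alpha_hat = np.linalg.lstsq(B_m, m_check, rcond=None)[0]
print(np.allclose(alpha_hat, alpha_true, atol=0.05))  # True
```

In the distributed algorithm the same fit is performed per agent on its own wavelengths, with the consensus constraints tying the local coefficient vectors together.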
As previously, we impose consensus between neighbours thanks to some auxiliary variables but, due to lack of space, we only present here the resulting local update for $\boldsymbol{\alpha}^{z}_{\mathbf{m}_{q}}$, \begin{align} \label{eq:estim_alpha_m} \hat{\boldsymbol{\alpha}}^{z}_{\mathbf{m}_{q}} &= \left( 2 \mathbf{B}_{\mathbf{m}}^{z \mathsf{T}} \mathbf{B}_{\mathbf{m}}^{z} + \rho^{z} \mathbf{H}_{\mathbf{m}}^{z^{\mathsf{T}}} \mathbf{H}_{\mathbf{m}}^{z} \right)^{-1} \left( 2 \mathbf{B}_{\mathbf{m}}^{z \mathsf{T}} \check{\mathbf{m}}^{z}_{q} + \rho^{z} \mathbf{H}_{\mathbf{m}}^{z^{\mathsf{T}}} \left( \boldsymbol{\beta}_{\mathbf{m}}^{z} - \mathbf{u}_{\mathbf{m}}^{z} \right) \right) \end{align} where $\mathbf{H}^{z}_{\mathbf{m}} = \mathbf{1}_{N_{z} \times 1} \otimes \mathbf{I}_{K_m}.$ From $\hat{\boldsymbol{\alpha}}^{z}_{\mathbf{m}_{q}}$, we obtain an estimate of $[\mathbf{m}_{\lambda}]_q$ and process the next source, as shown in Algorithm 1.3. \section{Numerical simulations} In order to evaluate the method, we consider realistic simulations in the radio astronomy context, where the new generation of phased array systems, such as the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA), requires the development of new advanced signal processing techniques for calibration purposes \cite{van2013signal,ollier2016relaxed}. Indeed, lack of calibration leads to dramatic effects and distortions in the reconstructed images. We consider $P=60$ antennas spread over a five-armed spiral \cite{wijnholds2004sky,ollierjournal}, which corresponds to LOFAR's Initial Test Station. Let us assume a sky model with $Q=3$ strong calibrator sources and $Q^{\text{\tiny U}}=8$ weak unknown sources in the background. The reference frequency $f_0 $ is set to $ \SI{30}{MHz}$ and we consider frequencies ranging from $\SI{29.6}{MHz}$ to $\SI{30.4}{MHz}$, with $Z=3$ agents in the network and $N_{z}=2$. The polynomial orders are chosen as $K_g=K_m=3$. 
The consensus variables $\boldsymbol{\alpha}$ and $\boldsymbol{\alpha}_{\mathbf{m}}$ are initialized with zeros and the squared directional gains are generated thanks to power law functions $(\lambda / \lambda_0)^{k-1}$ for $k \in \{1,\hdots,K_m\}$. \subsection{Influence of the number of frequency channels} First of all, we investigate the statistical performance of the proposed distributed algorithm as a function of the number of samples $N$ or the Signal-to-Noise Ratio (SNR). The SNR is defined as the ratio between the sum of the apparent powers of all $Q$ sources and the noise power. Results are averaged over $100$ Monte-Carlo runs. In Figure 2, we plot the three following cases: $F=3$ where each agent handles one frequency, i.e., $J_z=1$ (green curve), $F=9$ with $J_z=3$ (blue curve) and $F=27$ with $J_z=9$ (red curve). In Figure 2 (a), we plot the Root Mean Square Error (RMSE) as a function of $N$ for the undirectional gains $\mathbf{g}_{\lambda}$, defined as $ \epsilon^{\mathbf{g}_{\lambda}}_{\mathrm{RMSE}} = \frac{1}{\sqrt{PF}} \sum_{\lambda \in \Lambda} \left\| \mathbf{B}_{\lambda}\hat{\boldsymbol{\alpha}} - \mathbf{B}_{\lambda}\boldsymbol{\alpha} \right\|_{2},$ for fixed SNR $= -36$ dB. A similar study is presented in Figure 2 (b), for the source directions $\mathbf{D}_{\lambda}$, as a function of the SNR and fixed $N=2^{8}$. We illustrate the performance by comparing with the mono-calibration scenario, where each agent handles one single frequency independently. We notice that the distributed procedure, in which the whole information flows through the entire network, clearly improves upon mono-calibration. \subsection{Influence of the network architecture} We aim to show the advantages, in terms of complexity, of the proposed distributed network with no fusion center and only local exchange of information between neighboring agents. 
With a similar number of iterations in all loops of the algorithm, different estimation performances are attained in Figure 2 (a), while a similar RMSE is reachable in Figure 2 (b) but with an additional computational cost if there is a fusion center (an increase of at least a factor 5 in computing time). \subsection{Convergence analysis} We illustrate the convergence behavior of the proposed algorithm by analyzing the following residuals as a function of the iteration number. For a given iteration of Algorithm 1, we plot the primal residual as a function of the iteration number $t$ of Algorithm 1.2, defined as $\epsilon_p^{[t]} = \frac{1}{\sqrt{PK_gZN_z}} \sum_{z=1}^{Z} \left\| \mathbf{H}^{z}\boldsymbol{\alpha}^{z[t]} - \boldsymbol{\beta}^{z[t]} \right\|_{2} .$ Likewise, we also study the differences between the agents' estimates through $ \epsilon^{[t]}_{\mathrm{DIFF}} = \frac{1}{\sqrt{PK_gZ(Z-1)}} \sum_{z,z'=1}^{Z} \left\| \boldsymbol{\alpha}^{z [t]} - \boldsymbol{\alpha}^{z' [t]} \right\|_{2} .$ A similar statistical behavior is obtained for the corresponding residuals in Algorithm 1.3. \section{Conclusion} In this work, we proposed an iterative algorithm for parallel calibration, applied in a general context of sensor array processing: complex electronic gains are imprecisely known and propagation disturbances lead to deviations in the source locations. In order to reduce the communication overhead, the specific variation of the parameters across wavelength is exploited in a distributed network with no fusion center and local exchange of information between adjacent connected nodes. The two main steps of the algorithm are based on the scalable form of the ADMM and on distributed IHT procedures. We highlighted the effectiveness and time efficiency of the proposed method using simulated data, even in the presence of non-calibrator sources at unknown directions. 
\begin{algorithm}[h] \KwIn{$\big\lbrace \hat{\mathbf{R}}_{\lambda}\big\rbrace_{\lambda \in \Lambda}, \mathbf{D}^{\text{\tiny K}}, \eta_{\mathbf{p}}$\;} \textbf{Initialize:} set $i = 0, \big\lbrace \mathbf{g}_{\lambda} = \mathbf{g}_{\lambda}^{[0]},\mathbf{D}_{\lambda}=\mathbf{D}^{\text{\tiny K}}, \mathbf{m}_{\lambda} = \mathbf{m}_{\lambda}^{[0]}, \boldsymbol{\Omega}_{\lambda} = \mathbf{1}_{P \times P} \big\rbrace_{\lambda \in \Lambda}$\; \Repeat{$\left\| \mathbf{p}^{[i-1]}-\mathbf{p}^{[i]} \right\|_2 \le \left\| \mathbf{p}^{[i]} \right\|_2 \eta_{\mathbf{p}}$}{ \nl $i = i+1$\; \nl Estimate in parallel $\big\lbrace \mathbf{g}_{\lambda}^{[i]} \big\rbrace_{\lambda \in \Lambda}$ \label{lst:line:g} with \textbf{Algorithm 1.2}\; \nl Estimate in parallel $\big\lbrace \mathbf{D}^{[i]}_{\lambda}, \mathbf{m}_{\lambda}^{[i]}, \boldsymbol{\sigma}^{\mathrm{n}[i]}_{\lambda} \big\rbrace_{\lambda \in \Lambda}$ \label{lst:line:doa} with \textbf{Algorithm 1.3}\; \nl Update locally $\big\lbrace \boldsymbol{\Omega}_{\lambda}^{[i]} \big\rbrace_{\lambda \in \Lambda}$\;} \KwOut{$\hat{\mathbf{p}} = \big[\mathbf{p}_{\lambda_1}^{[i]\mathsf{T}}, \ldots, \mathbf{p}_{\lambda_F}^{[i]\mathsf{T}} \big]^{\mathsf{T}}$\;} \caption[]{Proposed calibration algorithm} \end{algorithm} \begin{algorithm}[ht] \SetAlgoRefName{1.2} \KwIn{$\big\lbrace \hat{\mathbf{R}}_{\lambda} \big\rbrace_{\lambda \in \Lambda}, \mathbf{p}^{[i-1]}, \eta_{\boldsymbol{\alpha}}$\;} \textbf{Initialize:} set $t = 0, \boldsymbol{\alpha}^{z} = \boldsymbol{\alpha}^{[i-1]}$, $\mathbf{R}^{\text{\tiny K}}_{\lambda} = \mathbf{A}_{\mathbf{D}^{[i-1]}_{\lambda}} \boldsymbol{\Sigma}_{\lambda} \mathbf{M}^{[i-1]}_{\lambda} \mathbf{A}^{\mathsf{H}}_{\mathbf{D}^{[i-1]}_{\lambda}}$\; \While{stop criterion unreached}{ \nl $t = t+1$ \; \nl Estimate locally $\boldsymbol{\alpha}^{z [t]}$ with \textbf{Algorithm 1.2.2}\; \nl Calculate locally $\boldsymbol{\gamma}^{z [t]}$ with (\ref{eq:gamma})\; \nl $\Rightarrow$ Broadcast values $\boldsymbol{\gamma}^{z,y 
[t]}$ to region $y \in \mathcal{N}_{z}$\; \nl $\Leftarrow$ Receive values $\boldsymbol{\gamma}^{y,z [t]}$ from region $y \in \mathcal{N}_{z}$\; \nl Estimate locally $\boldsymbol{\beta}^{z [t]}$ with (\ref{eq:beta})\; \nl Update locally $\mathbf{u}^{z [t]}$ with (\ref{eq:u_d})\; \nl Possibly update $\rho^{[t]}_{z}$ with \cite{boyd2011distributed,ghadimi2015optimal}\; } \caption{Distributed estimation of consensus variables for undirectional gains} \end{algorithm} \begin{algorithm}[ht] \SetAlgoRefName{1.2.2} \KwIn{$\big\lbrace \hat{\mathbf{R}}_{\lambda}, \mathbf{R}^{\text{\tiny K}}_{\lambda}\big\rbrace_{\lambda \in \Lambda_{z}}, \boldsymbol{\alpha}^{z [t-1]}, \boldsymbol{\beta}^{z [t-1]}, \mathbf{u}^{z [t-1]}, \eta_{\boldsymbol{\alpha}^{z}}$\;} \textbf{Initialize:} set $t^{z} = 0, \boldsymbol{\alpha}^{z} = \boldsymbol{\alpha}^{z [t-1]}$\; \While{$\left\| \boldsymbol{\alpha}^{z [t^{z}-1]}-\boldsymbol{\alpha}^{z [t^{z}]}\right\|_2 \ge \left\| \boldsymbol{\alpha}^{z [t^{z}]} \right\|_2 \eta_{\boldsymbol{\alpha}^{z}}$}{ \nl $t^{z} = t^{z}+1$ \; \For{$p \in \{ 1, \ldots, P\}$}{ \nl Update $\tilde{\mathbf{Z}}^{z}_{p}$\; \nl Estimate $\hat{\boldsymbol{\alpha}}^{z [t^{z}]}_{p}$ with (\ref{eq:alpha_z_estim})\; \nl Update $(\hat{\boldsymbol{\alpha}}^{z}_{p})^{\ast}$\; } } \KwOut{$\hat{\boldsymbol{\alpha}}^{z} = \boldsymbol{\alpha}^{z [t^{z}]}$\;} \caption{Local estimation of $\boldsymbol{\alpha}^{z}$} \end{algorithm} \begin{algorithm}[ht] \SetAlgoRefName{1.3} \KwIn{$\big\lbrace\hat{\mathbf{R}}_{\lambda}\big\rbrace_{\lambda \in \Lambda}, \mathbf{p}^{[i-1]}, \eta_{\boldsymbol{\alpha}_{\mathbf{m}}},\eta_{\mathbf{D}}, \hat{\mathbf{r}}_{\lambda} = \vect\left(\hat{\mathbf{R}}_{\lambda} \odot \boldsymbol{\Omega}_{\lambda}\right)\text{,}$ $ \mathbf{V}_{\lambda} = \left ( \boldsymbol{\Sigma}^{\mathrm{n}}_{\lambda} \right )^{-\frac{1}{2}} \tilde{\mathbf{E}}^{*}_{\lambda} \otimes \left ( \boldsymbol{\Sigma}^{\mathrm{n}}_{\lambda} \right )^{-\frac{1}{2}} \tilde{\mathbf{E}}_{\lambda}$\;} \textbf{Initialize:} set $k = 0$, $\mathbf{g}_{\lambda}^{[k]} = \mathbf{g}_{\lambda}^{[i]}$, $\mathbf{M}_{\lambda}^{[k]} = \mathbf{M}^{[i-1]}_{\lambda}$, $\mathbf{D}_{\lambda}^{[k]} = \mathbf{D}^{[i-1]}_{\lambda}, \boldsymbol{\sigma}_{\lambda}^{\mathrm{n} [k]} = \boldsymbol{\sigma}_{\lambda}^{\mathrm{n}[i-1]}$\; \While{$\left\| \boldsymbol{\alpha}_{\mathbf{m}}^{[k-1]}- \boldsymbol{\alpha}_{\mathbf{m}}^{[k]} \right\|_2 \ge \left\| \boldsymbol{\alpha}_{\mathbf{m}}^{[k]} \right\|_2 \eta_{\boldsymbol{\alpha}_{\mathbf{m}}}$ and $\sum_{\lambda \in \Lambda} \left\| \mathbf{D}_{\lambda}^{[k-1]}- \mathbf{D}_{\lambda}^{[k]} \right\|_{\mathsf{F}} \ge \eta_{\mathbf{D}}$}{ \nl $k = k+1$\; \For{$q\in \{1,\ldots, Q\}$}{ \ForEach{$\mathsf{A}_z, z \in \{ 1, \ldots, Z\}$}{ \ForEach{$\lambda \in \Lambda_{z}$}{ \nl Calculate locally the residual $\check{\hat{\mathbf{r}}}_{\lambda}^{q}$ as indicated in \cite{martinjournalnew}\; } \nl Compute $\mathcal{H}_1 \left( \sum_{\lambda \in \Lambda} \left(\check{\mathbf{V}}_{\lambda}^{q \mathsf{T}} \check{\hat{\mathbf{r}}}_{\lambda}^{q}\right)^{\odot 2} \right)$ as indicated in \cite{martinjournalnew}\; } \nl Deduce $\hat{\mathbf{d}}_{q,\lambda}$ and $\check{\mathbf{m}}^z_{q}$ for each wavelength and agent\; \nl Estimate $[\mathbf{m}_{\lambda}]_q$ with a procedure similar to \textbf{Algorithm 1.2} and (\ref{eq:estim_alpha_m})\; } } \nl Estimate locally $\boldsymbol{\sigma}_{\lambda}^{\mathrm{n}}$ as indicated in \cite{martinjournalnew}\; \caption{Distributed estimation of $\left\lbrace \mathbf{m}_{\lambda}, \mathbf{D}_{\lambda}, \boldsymbol{\sigma}_{\lambda}^{\mathrm{n}}\right\rbrace_{\lambda \in \Lambda} \label{alg:iht}} \KwOut{$\big\lbrace\hat{\mathbf{m}}_{\lambda}, \hat{\mathbf{D}}_{\lambda}, \hat{\boldsymbol{\sigma}}_{\lambda}^{\mathrm{n}}\big\rbrace_{\lambda \in \Lambda}$ \;} \end{algorithm} \begin{figure} \centering \input{tikz_dessin.tex} \caption*{Figure 1: Example of distributed network with no fusion
center.} \label{fig:dessin} \end{figure} \begin{figure}[t!] \centering \subfigure[]{\label{fig:res2-a}\includegraphics[width=8cm, height=7cm]{Fig1_try.eps}} \hspace{0.5pt} % \subfigure[]{\label{fig:res2-b}\includegraphics[width=8cm, height=7cm]{Fig2_try.eps}} \caption*{Figure 2: (a) RMSE on the undirectional gains as a function of the number of samples $N$, (b) RMSE on the apparent source directions as a function of the SNR. } \end{figure} \begin{figure}[t!] \centering \subfigure[]{\label{fig:res3-a}\includegraphics[width=8cm, height=7cm]{Fig3_try.eps}} \hspace{0.5pt} % \subfigure[]{\label{fig:res3-b}\includegraphics[width=8cm, height=7cm]{Fig4_try.eps}} \caption*{Figure 3: (a) Statistical comparison between different network topologies for the same computational cost, (b) Statistical comparison between different network topologies for different computational costs. } \end{figure} \begin{figure}[t!] \centering \subfigure[]{\label{fig:res4-a}\includegraphics[width=8cm, height=7cm]{Fig5_try.eps}} \hspace{0.5pt} % \subfigure[]{\label{fig:res4-b}\includegraphics[width=8cm, height=7cm]{Fig6_try.eps}} \caption*{Figure 4: (a) Primal residual $\epsilon_{p}$ and (b) estimates difference $\epsilon_{\mathrm{DIFF}}$ of the local consensus variables among agents as a function of the iteration $t$ in Algorithm 1.2, for different values of the iteration $i$ in Algorithm 1. } \end{figure} \newpage \bibliographystyle{elsarticle-num}
\section{Introduction} \setcounter{equation}{0} We are concerned with the orbital stability of the standing wave solutions for the following semilinear Schr\"{o}dinger equation \begin{equation}\label{original.problem} i\psi_t+\Delta \psi+\left(\frac{N-2}{2}\right)^2\frac{\psi}{|x|^2}+|\psi|^{q-2}\psi=0,\;\;\;\;\;\;\; x \in \mathbb{R}^N, \end{equation} where $\psi:\mathbb{R}^N\times \mathbb{R}\rightarrow \mathbb{C},\; N\geq 3$ and $2<q<\frac{4}{N}+2$. A standing wave is a solution of the form \[ \psi(x,t)=e^{i\lambda t}u(x),\;\;\; \lambda \in \mathbb{R},\;\; u:\mathbb{R}^N\rightarrow \mathbb{R}, \] so that (\ref{original.problem}) reduces to the stationary equation \begin{equation}\label{original.stationary.eq} -\Delta u-\left(\frac{N-2}{2}\right)^2\frac{u}{|x|^2}+\lambda u-|u|^{q-2}u=0,\;\;\;\;\;\;\; x \in \mathbb{R}^N. \end{equation} It is well-known that Hardy's inequality \[ \left(\frac{N-2}{2}\right)^2 \int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}dx \leq \int_{\mathbb{R}^N}|\nabla u|^2dx,\;\;\;\; \mbox{for all}\; u\in C_0^\infty(\mathbb{R}^N),\;\; N\geq 3, \] is closely related to elliptic and parabolic equations involving inverse square potentials. The optimal constant $c_{*}:=\left(\frac{N-2}{2}\right)^2$ is the natural borderline separating existence from nonexistence. We know from \cite{vz00} that the linear heat equation with an inverse square potential, \begin{equation} \label{heat equation} u_t=\Delta u+\frac{c}{|x|^2}u, \end{equation} admits a global solution for $c<c_{*}$, both in the bounded and the whole space case. On the other hand, there are no solutions for $c>c_{*}$, even locally in time, due to instantaneous blow-up. In the critical case $c=c_{*}$, a global solution is constructed in \cite{vzog1,vzog2,vz00}, but the functional framework is more complicated. There is quite an extensive literature on the Schr\"{o}dinger operator equipped with an inverse square potential, $H_c:=-\Delta-c|x|^{-2}$.
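For orientation, the optimality of $c_{*}$ can be checked on the explicit profiles $u_\varepsilon(x)=|x|^{-\frac{N-2}{2}+\varepsilon}$ (up to a harmless cutoff near $\partial B_1$): a direct computation on the unit ball gives
\[
\frac{\displaystyle\int_{B_1}|\nabla u_\varepsilon|^2\,dx}
     {\displaystyle\int_{B_1}\frac{|u_\varepsilon|^2}{|x|^2}\,dx}
=\left(\frac{N-2}{2}-\varepsilon\right)^2
\longrightarrow \left(\frac{N-2}{2}\right)^2=c_{*},
\qquad \varepsilon \downarrow 0,
\]
so that no constant larger than $c_{*}$ may appear in Hardy's inequality.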
Regarding the stationary problem, we shall refer to some papers related to the semilinear equation \begin{equation} H_cu=f(x,u). \end{equation} In \cite{dupaigne}, the author exhibits some existence, uniqueness and regularity results on various types of solutions for the case $0<c\leq c_{*}$, posed on a bounded domain containing the origin, where $f(x,u)=u^q+tg(x)$, $g\geq 0$ is a smooth, bounded function and $t>0$. The case $t=0$ on a ball of $\mathbb{R}^N$ appears in \cite{brezis.dupaigne}. In the strictly subcritical case $c\in (0,c_{*})$, we refer indicatively to \cite{terracini} for some existence and nonexistence results in $D^{1,2}(\mathbb{R}^N)$, concerning the case $f(x,u)=u^{\frac{N+2}{N-2}}$. We also mention \cite{smets}, where the case $f(x,u)=K(x)u^{\frac{N+2}{N-2}}$ is considered, yielding a nonexistence result for $c\geq c_{*}$, where $K(x)$ is a positive and bounded weight. In \cite{deng.jin}, some existence results are obtained in $H^1(\mathbb{R}^N)$, where $f(x,u)=a(x)u+|u|^{\frac{4}{N-2}}u+g(x,u)$ for certain functions $a,\; g$. In the context of kernel estimates for the semigroup $e^{-tH_c},\;0<c<c_{*}$, we refer to \cite{davies}; see also \cite{bft} for the subcritical case of the potential. Analogous bounds on the Schr\"{o}dinger heat kernel for the case of the Laplace-Beltrami operator on a Riemannian manifold can be found in \cite{zhang.only}. A result on the long-time behavior of the magnetic Schr\"{o}dinger operator with the critical potential is obtained in \cite{cazacu}. Finally, we refer to \cite{adimurthi.esteban} for the spectral analysis, in $D^{1,2}(\mathbb{R}^N)$, of the operator \[ L_{q,\eta}u:=-\Delta u-c_{*}\frac{qu}{|x|^2}-\eta u, \] where $0\leq q\leq 1$ and $\eta$ satisfies certain growth conditions. We also mention \cite{ef} for some estimates on the moments of the negative eigenvalues of the Schr\"{o}dinger operator in the critical case.
In \cite{su.wang.willem}, the existence of radially symmetric ground state solutions is established for the following semilinear equation \begin{equation}\label{plaplacian} -\Delta u+V(|x|)u=Q(|x|)u^{q-1},\;\;\;\;\; x\in \mathbb{R}^N, \end{equation} where $V(|x|)$ is nonnegative and presents a singular behavior at the origin of order $|x|^{-s},s\in [2,N)$. Analogous results are obtained in \cite{su.wang.willem.jde}, by the same authors, for an equation similar to (\ref{plaplacian}), driven by the p-Laplacian. In both cases, the procedure is based on proving compact inclusions for certain weighted Sobolev spaces. Also, we refer to the variational approach of \cite{costa} for the case $V(|x|)\geq -(c_{*}-\alpha)|x|^{-2},\; \alpha>0,\; V(|x|)|x|^{2}\rightarrow +\infty$, whenever $|x|\rightarrow 0$ or $|x|\rightarrow \infty$. Finally, we refer to \cite{bellazzini.bonanno} for the case of a nonnegative potential $V(x)=l^2|x|^{-2}+|x|^{-a}$, where $a>0$ and $l\in \mathbb{Z}$. In the time-dependent case, nonlinear equations of the general form \begin{equation}\label{general.setting} i\psi_t+\Delta \psi+V(x)\psi+f(x,\psi)=0, \end{equation} arise in various contexts of mathematical physics (see for example \cite{rose.weinstein} and the references therein). In many circumstances, the orbital stability of the standing wave solutions is perhaps the only available information on the asymptotic behavior of nonlinear phenomena governed by (\ref{general.setting}). When $V(x)\equiv 0$, we refer to the stability result in \cite{cazenave.lions} (see also \cite{shibata}). In the same paper, a Hartree type equation is considered, equipped with a potential of the form $V(x)=\sum_1^m\frac{c_i}{|x-x_i|}$, where $c_1,...,c_m$ are positive constants, and $x_1,...,x_m$ are given poles. The case $V\in L^\infty(\mathbb{R}^N)$ can be found in \cite{bellazzini.visciglia,rose.weinstein}.
We especially point out \cite{genoud} where, among other results, existence and stability are obtained for a singular potential $V(x)\sim |x|^{-b},\;b\in (0,2)$, near zero. In \cite{debouard.fukuizumi,genoud.stuart,jeanjean}, the authors prove the orbital stability for weighted nonlinearities with a certain singularity, that is, they consider the case $f(x,\psi)=Q(x)|\psi|^{q-1}\psi$, where $Q$ is allowed to behave like $|x|^{-b},b\in (0,2)$. Concerning singularities at infinity, we refer to the instability results in \cite{chen,chen.liu} for the case of harmonic potentials $V(x)=|x|^2$. A case of quasilinear equations is treated in \cite{guo.chen}. In the subcritical case $V(|x|)=c|x|^{-2},\; c<c_{*}$, we refer to \cite{zhang.zheng}, where the long-time behavior is studied by applying scattering theory in the $H^1(\mathbb{R}^N)$ setting. For some Strichartz type estimates for the linear Schr\"{o}dinger and the linear wave equation, respectively, equipped with a subcritical inverse square potential, we refer to \cite{burq}. To the best of our knowledge, the results obtained here are new; they cover the critical inverse square potential, involving the best constant, on the whole space. In Section \ref{sec.space.setting}, we formulate the functional framework.
The proper norm for the setting of problems (\ref{original.problem}) and (\ref{original.stationary.eq}) is the following \begin{equation}\label{right.norm.introd} \left\|u\right\|^2_H:=\lim_{\varepsilon \downarrow 0}(I_{B_\varepsilon^c}(u)-\Lambda_{\varepsilon}(u)) + \left\|u\right\|^2_{L^2(\mathbb{R}^N)}, \end{equation} where \[ I_{B_\varepsilon^c}(u):=\int_{B_\varepsilon^c}|\nabla u|^2dx-\left(\frac{N-2}{2}\right)^2 \int_{B_\varepsilon^c}\frac{|u|^2}{|x|^2}dx, \] is the Hardy functional on the complement of the ball of radius $\varepsilon$ centered at the origin, and the surface integral \[ \Lambda_{\varepsilon}(u):=\frac{N-2}{2}\varepsilon^{-1}\int_{|x|=\varepsilon}|u|^2dS, \] represents the Hardy energy at the singularity. This unexpected energy formulation was first introduced in \cite{vzog1} in order to overcome a functional difficulty caused by the eigensolutions presenting the maximal singularity. Further applications of the above considerations to linear singular parabolic problems can be found in \cite{trach.zogr,vzog2}. The present approach is based on the existence of critical points of the following functional \[ E(u):=\frac{1}{2}\, \lim_{\varepsilon \downarrow 0}(I_{B_\varepsilon^c}(u)-\Lambda_{\varepsilon}(u))-\frac{1}{q}\int_{\mathbb{R}^N}|u|^qdx, \] on certain subsets of the energy space $H$ with prescribed $L^2$-norm. Namely, we define the $L^2$-sphere of $H$, \[ \Gamma:=\left\{u\in H:\int_{\mathbb{R}^N}|u|^2dx=\gamma \right\}, \] for a given $\gamma>0$, and we study the following minimization problem \begin{equation}\label{original.minimization.problem} \left \{ \begin{array}{ll} u\in \Gamma, \\ E(u)=\min \left\{E(z):z\in \Gamma,\; z\; \mbox{is radial}\right\}, \end{array} \right. \end{equation} that is, we are interested in radial ground state solutions.
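The restriction $2<q<\frac{4}{N}+2$ is precisely what keeps $E$ bounded from below on $\Gamma$. The following formal sketch (anticipating the weighted interpolation inequality of Lemma \ref{lemma.ckn.radial}, written in terms of $u$) indicates why: writing $\mathcal{E}(u):=\lim_{\varepsilon \downarrow 0}(I_{B_\varepsilon^c}(u)-\Lambda_{\varepsilon}(u))$ for the generalized Hardy energy, on $\Gamma$ one has
\[
E(u)\;\geq\;\frac{1}{2}\,\mathcal{E}(u)-\frac{C}{q}\,
\mathcal{E}(u)^{\frac{N(q-2)}{4}}\,\gamma^{\frac{2q-N(q-2)}{4}},
\]
and since $q<\frac{4}{N}+2$ is equivalent to $\frac{N(q-2)}{4}<1$, Young's inequality yields constants $\delta>0$ and $K<\infty$ with $E(u)\geq \delta\,\mathcal{E}(u)-K$ on $\Gamma$.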
The set of all global minimizers of (\ref{original.minimization.problem}) is denoted by \[ S_\gamma:=\left\{u\in \Gamma:u\; \mbox{is a solution of}\; (\ref{original.minimization.problem}) \right\}. \] \vspace{0.2cm} In Section \ref{sec.radial.case}, we recall the notion of orbital stability and prove the main result, which is stated as follows: \begin{theorem}\label{main.result} Assume that $2<q<\frac{4}{N}+2,\;N\geq 3$, and $\gamma>0$ is a given constant. Then, the set $S_\gamma$ is orbitally stable. \end{theorem} Based on a proper transformation as in \cite{vzog1,vzog2}, we derive the behavior of the standing waves at the origin. More precisely, we establish that the minimizers of (\ref{original.minimization.problem}) behave exactly as $|x|^{-(N-2)/2}$ at the singularity, thus they do not belong to $H^1 (\mathbb{R}^N)$. In this case, we address the appearance of the Hardy singularity term at the origin. \begin{theorem} \label{behavior.origin} For each $\gamma>0$, every minimizer of (\ref{original.minimization.problem}) behaves at the origin like $|x|^{-(N-2)/2}$. \end{theorem} At this point we mention that the first nonexistence result in $H_0^1 (\Omega)$ was given in \cite{filippas.tertikas}, concerning the linear problem corresponding to (\ref{original.stationary.eq}). The first exact description of the behavior at the singularity, and thus nonexistence in the classical Sobolev spaces, was given in \cite{vzog1,vzog2} for all minimizers. In almost all cases, this behavior corresponds to the appearance of the Hardy singularity term as a well-defined real number (positive or negative depending on the case); the Hardy functional is also well-defined and finite as a principal value (again positive or negative depending on the case). The exception is the case of the so-called $k$-improved Hardy functional (see \cite{vzog2}), where both the associated Hardy functional and the Hardy singularity term tend to infinity.
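The exponent in Theorem \ref{behavior.origin} is the one dictated by the operator itself: a direct computation shows that $|x|^{-\frac{N-2}{2}}$ is precisely the radial profile annihilated by the critical Hardy operator. Indeed, for $u(x)=|x|^{-a}$ one has
\[
\Delta |x|^{-a}=a\,(a+2-N)\,|x|^{-a-2}\quad\text{in }\mathbb{R}^N\setminus\{0\},
\]
and the choice $a=\frac{N-2}{2}$ gives $a\,(a+2-N)=-\left(\frac{N-2}{2}\right)^2$, that is,
\[
-\Delta |x|^{-\frac{N-2}{2}}-\left(\frac{N-2}{2}\right)^2\frac{|x|^{-\frac{N-2}{2}}}{|x|^{2}}=0\quad\text{in }\mathbb{R}^N\setminus\{0\}.
\]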
Finally, we mention that results concerning the behavior at the singularity, obtained by a different method, may be found in \cite{felli.ferrero}. Furthermore, we confirm the orbital stability for the following Schr\"{o}dinger equation \begin{equation}\label{equation.hardy.at.infty} i|x|^{-4}w_t+\Delta w+\left(\frac{N-2}{2}\right)^2\, \frac{w}{|x|^2}=-h(x)\,w,\;\;\;\;\;\;\; x \in \mathbb{R}^N, \end{equation} for some $h$ decaying to zero at infinity, which also presents a Hardy-type energy. The somehow unexpected fact is that this energy appears in an additive way in the total energy, as an effect that comes from infinity, and may represent the main part of it. The result is based on the arguments of \cite{vzog1, vzog2}. If we consider the linear heat equation corresponding to (\ref{equation.hardy.at.infty}), \begin{equation}\label{darkrn1} \left\{\begin{array}{ccll} |x|^{-4}\, u_t &=& \Delta u + \left(\frac{N-2}{2}\right)^2\, \displaystyle\frac{u }{|x|^2},\ &x \in \mathbb{R}^N,\; t>0,\\ u(x,0) &=& u_0(x),\;\; &\mbox{for}\;\; x \in \mathbb{R}^N, \\ u(x,t)&\to& 0,\;\; &\mbox{for} \;\; |x| \to \infty,\;t>0\,, \end{array}\right. \end{equation} then, using similarity variables, we may prove the existence of the Hardy-type energy. This energy comes from infinity, is additive to the total energy and constitutes the main part of it. In Section \ref{sec.general.case}, we extend Theorem \ref{main.result} by removing the hypothesis of radial symmetry on $\psi$ and $u$. Nevertheless, in the general case, the cost we have to pay is the introduction of a certain weight function on the nonlinearity. Note that there is no possibility for a ``non-weighted'' Hardy-Sobolev inequality to hold, thus it is unclear to us whether standard methods (e.g. see \cite{cazenave}) may be applied. On the other hand, in Subsection \ref{IHS} we prove a weighted (or improved) Hardy-Sobolev inequality.
More precisely, we prove the orbital stability of the following equation \begin{equation}\label{general.original.problem} i\psi_t+\Delta \psi+\left(\frac{N-2}{2}\right)^2\frac{\psi}{|x|^2}+g(x)|\psi|^{q-2}\psi=0, \end{equation} on $\mathbb{R}^N\times \mathbb{R}$, for weight functions $g:\mathbb{R}^N\rightarrow [0,+\infty)$ decaying, with a certain rate, to zero at infinity. \section{Space Setting}\label{sec.space.setting} \setcounter{equation}{0} In this section, we introduce a more convenient function space in order to bypass the effect of the Hardy term $c_{*}|x|^{-2}$. We define the space $\H$ to be the completion of the $C^{\infty}_0 (\mathbb{R}^N)$-functions under the norm \begin{equation}\label{normv} ||v||^2_{\H} = \int_{\mathbb{R}^N} |x|^{-(N-2)}\, |\nabla v|^2\, dx + \int_{\mathbb{R}^N} |x|^{-(N-2)}\, |v|^2\, dx. \end{equation} Notice that, setting \begin{equation}\label{brezis.transf.} u(x)=|x|^{-\frac{N-2}{2}}v(x), \end{equation} in (\ref{normv}), the first integral recovers the Hardy functional, since \[ \int_{\mathbb{R}^N} |x|^{-(N-2)}\, |\nabla v|^2\, dx=\int_{\mathbb{R}^N}|\nabla u|^2dx-\left(\frac{N-2}{2}\right)^2\int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}dx, \] at least for $C_0^\infty(\mathbb{R}^N)$-functions. In the sequel, we denote transformation (\ref{brezis.transf.}) by $u={\cal T}(v)$, which is an isometry between the spaces $X=L^2(\mathbb{R}^N)$ and $\widetilde{X}=L^2(\mathbb{R}^N,d\mu),\; d\mu=|x|^{-(N-2)}dx$. Our setting is based on this equivalence, as we shall see below. It is clear that $\H$, equipped with the scalar product \begin{equation} (v_1,v_2)_{\cal H}:=\mbox{Re}\int_{\mathbb{R}^N}|x|^{-(N-2)}\nabla v_1 \nabla v_2dx+\mbox{Re}\int_{\mathbb{R}^N}|x|^{-(N-2)}v_1v_2dx, \end{equation} is a well-defined real Hilbert space. An equivalent definition of $\H$ is the following: \begin{lemma}\label{density.first} Every function $v$ with $||v||_{\H} < \infty$ belongs to $\H$.
\end{lemma} \emph{Proof.} We adapt the arguments of \cite{vzog1} in our case. First, we prove the result for $v \in L^{\infty}(\mathbb{R}^N)$. For $\varepsilon>0$ small enough and $t>0$, we define the cutoff function $\rho_{\varepsilon}(t) \in C_0 (\mathbb{R}_{+} \backslash \{0\})$, $0 \leq \rho_{\varepsilon} \leq 1$, as: \[ \rho_{\varepsilon}(t)= \left \{ \begin{array}{lllll} 0, & 0 < t < \varepsilon^2, \\ (\log 1/\varepsilon)^{-1}\, \log (t/\varepsilon^2), & \varepsilon^2 < t < \varepsilon, \\ 1, & \varepsilon < t < \frac{1}{\varepsilon}, \\ (\log \varepsilon)^{-1}\, \log (t \varepsilon^2), & \frac{1}{\varepsilon} < t < \frac{1}{\varepsilon^2}, \\ 0, & t > \frac{1}{\varepsilon^2}. \end{array} \right. \] For a fixed $v \in L^{\infty}(\mathbb{R}^N)$, $||v||_{\H} < \infty,$ we define $v_{\varepsilon}(x) = \rho_{\varepsilon} (|x|)\, v(x)$. Note that \[ |\nabla \rho_{\varepsilon}(|x|)|=\frac{c_{\varepsilon}}{|x|}, \qquad \qquad c_{\varepsilon}:=(\log (1/{\varepsilon}))^{-1}, \] for $x \in A_{\varepsilon} \cup A_{\frac{1}{\varepsilon}}$, where $A_{\varepsilon} = \{x \in \mathbb{R}^N: \ \varepsilon^2 < |x| <\varepsilon \}$ and $A_{\frac{1}{\varepsilon}} = \{x \in \mathbb{R}^N: \ \varepsilon^{-1} < |x| < \varepsilon^{-2} \}$, being zero otherwise. Then, we have \begin{eqnarray}\label{H1a} ||v_{\varepsilon} - v||^2_{\H} &\leq& 2\int_{A_\varepsilon \cup A_{\frac{1}{\varepsilon}}} |x|^{-(N-2)}\, |\nabla \rho_{\varepsilon} (|x|)|^2\, |v|^2\, dx \nonumber \\ && + 2 \int_{B_{\varepsilon} \cup B^c_{\frac{1}{\varepsilon}}} |x|^{-(N-2)}\, (1-\rho_{\varepsilon})^2\, (|\nabla v|^2 + |v|^2)\, dx. \end{eqnarray} We will prove that letting $\varepsilon \downarrow 0$, the integrals in (\ref{H1a}) tend to zero. The only one that is delicate is the first. 
We have \[ \displaystyle \int_{A_\varepsilon} |x|^{-(N-2)}\, |\nabla \rho_{\varepsilon} (|x|)|^2\, |v|^2\, dx \le C\|v\|_\infty^2 \int_{{\varepsilon}^2}^{{\varepsilon}}c_{\varepsilon}^2\frac{dr}{r}= C\|v\|_\infty^2(\log (1/{\varepsilon}))^{-1}, \] and \[ \displaystyle \int_{A_{\frac{1}{\varepsilon}}} |x|^{-(N-2)}\, |\nabla \rho_{\varepsilon} (|x|)|^2\, |v|^2\, dx \le C\|v\|_\infty^2 \int_{{\varepsilon}^{-1}}^{{\varepsilon}^{-2}}c_{\varepsilon}^2\frac{dr}{r}= C\|v\|_\infty^2(\log (1/{\varepsilon}))^{-1}, \] and both tend to zero as ${\varepsilon}\to 0$. Finally, we will prove that any $v$ with $||v||_{\mathcal{H}} < \infty$ can be approximated by a sequence of bounded functions in $\H$. Indeed, if we define $v_n$ as follows \[ v_n(x) = \left \{ \begin{array}{ll} v(x), & \mbox{if} \;\;\; |v(x)|\leq n, \\ n\,\frac{v(x)}{|v(x)|}, & \mbox{if} \;\;\; |v(x)|>n, \end{array} \right. \] then, we have \begin{equation} ||v_n-v||^2_{\H} \leq \int_{C_n} |x|^{-(N-2)}\, |\nabla v|^2\,dx + \int_{C_n} |x|^{-(N-2)}\, |v|^2\,dx< \infty, \end{equation} where $C_n=\{x\in \mathbb{R}^N: \ |v(x)|>n \}$. Now, it is clear that the sets $C_n$ form a decreasing family whose intersection has measure zero and, since $||v||_{\H}<\infty$, the above integrals go to zero as $n\to\infty$. The proof is complete. $\blacksquare$ Now, the approximation of the functions $v_{\varepsilon}$, which vanish around the origin, by $C_0^{\infty} (\mathbb{R}^N \backslash \{0\})$ is standard. \begin{corollary}\label{density.second} The set $C_0^{\infty} (\mathbb{R}^N \backslash \{0\})$ is dense in $\H$. \end{corollary} We introduce the space $H$ to be the isometric image of $\mathcal{H}$ under $\mathcal{T}$, that is, $H$ is defined as the completion of the set \[ \{u=|x|^{-\frac{N-2}{2}}v:\; v\in C_0^{\infty} (\mathbb{R}^N) \}=\mathcal{T}(C_0^{\infty} (\mathbb{R}^N)), \] under the norm \begin{equation}\label{norm.for.u} N^2(u):=\int_{\mathbb{R}^N}|x|^{-(N-2)}|\nabla(|x|^{\frac{N-2}{2}}u)|^2dx+\int_{\mathbb{R}^N}|u|^2dx.
\end{equation} Then, the following is immediate: \begin{corollary} The set $C_0^\infty(\mathbb{R}^N \backslash \{0\})$ is dense in $H$. \end{corollary} Note also that \begin{equation} \left\|v\right\|_{{\cal H}}=\left\|u\right\|_H,\;\;\; \mbox{for all}\; u\in C_0^\infty(\mathbb{R}^N \backslash \{0\}). \end{equation} Arguing as in \cite{vzog1}, the norm of $H$ is related to the Hardy functional by means of the formula (\ref{right.norm.introd}). \section{The Radial Case}\label{sec.radial.case} \setcounter{equation}{0} \subsection{Space Properties} In the following, $2^{*}:=2N/(N-2)$ stands for the critical Sobolev exponent. \begin{lemma} \label{compactradial} Let $\H_r$ be the subspace of $\H$ consisting of radial functions. Then, \[ \H_r \hookrightarrow L^{q}(\mathbb{R}^N,|x|^{-\frac{N-2}{2}q}dx), \] with compact inclusion for any $2 < q < 2^{*},\; N\geq 3$. \end{lemma} \emph{Proof.} Let $v_n (r),\; r=|x|$, be a bounded sequence of $C_0^{\infty} (\mathbb{R}^N \backslash \{0\})$- functions in $\H$; without loss of generality we assume that $v_n$ converges weakly to $0$. We will prove that \begin{equation} \label{eq2.3.1} \int_{\mathbb{R}^N} |x|^{-\frac{N-2}{2}q}\, |v_n|^q\, dx = \int_{B_1} |x|^{-\frac{N-2}{2}q}\, |v_n|^q\, dx + \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, |v_n|^q\, dx \to 0, \end{equation} as $n \to \infty$. We claim that, for $v_n$ in $\H(B_1)$, it holds $v_n \in H^1(B_1 \subset \mathbb{R}^2)$. This is clear from the following \[ ||v_n||_{\H(B_1)} \sim \int_0^1 r\, |v_n'|^2\, dr + \int_0^1 r\, |v_n|^2\, dr. \] Using now the compact imbedding \begin{equation}\label{compimbH1} H^1(B_1 \subset \mathbb{R}^2) \hookrightarrow L^p (B_1 \subset \mathbb{R}^2),\;\;\;\;\;\; \mbox{for any}\;\;\;\; p \in (1,+\infty), \end{equation} we obtain \[ \int_{B_1} |x|^{-(N-2)}\, |v_n|^p\, dx \to 0,\;\;\;\;\;\; \mbox{for any}\;\;\;\; p \in (1,+\infty), \] as $n \to \infty$. 
Then, for $p$ large enough, \begin{eqnarray*} \int_{B_1} |x|^{-\frac{N-2}{2}q}\, |v_n|^q\, dx & = & \int_{B_1} |x|^{-(N-2)\frac{p-2}{2p}q}\, | |x|^{-\frac{N-2}{p}} v_n |^q\, dx \\ & \leq & \left ( \int_{B_1} |x|^{-(N-2)\frac{p-2}{2(p-q)}q}\, dx \right )^{\frac{p-q}{p}}\, \left ( \int_{B_1} |x|^{-(N-2)}\, |v_n|^p\, dx \right )^{\frac{q}{p}}. \end{eqnarray*} The first integral in the right hand side is finite for $2<q<2^{*}$. Therefore, \begin{equation} \int_{B_1}|x|^{-q\frac{N-2}{2}}|v_n|^qdx\rightarrow 0,\;\;\;\; \mbox{as}\; n\rightarrow \infty. \end{equation} In order to obtain the same for the second integral in (\ref{eq2.3.1}), we use the equivalence of the norm of $\H_r$ with the norm of $H^1_r (\mathbb{R}^2)$, the radial subspace of $H^1 (\mathbb{R}^2)$, and the compact imbedding \[ H^1_r (\mathbb{R}^2) \hookrightarrow L^q (\mathbb{R}^2), \] for any $2 < q < 2^{*}$. Then, \[ \int_{\mathbb{R}^N} |x|^{-(N-2)}\, |v_n|^q\, dx \to 0, \] and finally \begin{equation} \label{eq2.3.1b} \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, |v_n|^q\, dx \leq \int_{B_1^c} |x|^{-(N-2)}\, |v_n|^q\, dx \to 0, \end{equation} as $n \to \infty$. $\blacksquare$ \vspace{0.2cm}\\ By unitary equivalence, the result holds in terms of $u$, hence the inclusion \[ H_r\hookrightarrow L^q(\mathbb{R}^N),\;\;\; 2<q<2^{*}, \] is compact, where $H_r$ is the subspace of $H$ consisting of radial functions. The following weighted interpolation inequality is immediate by setting $\alpha=\beta=\gamma=-\frac{N-2}{2}$ in the classical paper \cite{ckn}. \begin{lemma}\label{lemma.ckn.radial} Let $2<q<\frac{2N}{N-2}$ and $N\geq 3$. Then, there exists a constant $C=C(N,q)>0$, such that \begin{equation}\label{CKN.without.weight} \int_{\mathbb{R}^N}|x|^{-q\frac{N-2}{2}}|v|^qdx\leq C\left(\int_{\mathbb{R}^N}|x|^{-(N-2)}|\nabla v|^2dx\right)^{\frac{N(q-2)}{4}}\left(\int_{\mathbb{R}^N}|x|^{-(N-2)}|v|^2dx\right)^{\frac{2q-N(q-2)}{4}}, \end{equation} for any $v\in\mathcal{H}$. 
\end{lemma} \begin{remark} Notice that inequality (\ref{CKN.without.weight}) holds without any hypothesis of radial symmetry. \end{remark} \subsection{Global Solution} We quote the basic points of the standard theory of semilinear Schr\"{o}dinger equations (cf. \cite{cazenave,cazenave.haraux}) ensuring the global well-posedness of the initial value problem \begin{equation}\label{IVP.in.subsection} \left \{ \begin{array}{ll} i\psi_t+\Delta \psi+\left(\frac{N-2}{2}\right)^2\frac{\psi}{|x|^2}+|\psi|^{q-2}\psi=0, \\ \psi(0)=\psi_0. \end{array} \right. \end{equation} The operator defined by \begin{equation} \left \{ \begin{array}{ll} D(\widetilde{{\cal L}}):=\{\varphi \in {\cal H}_r:\widetilde{{\cal L}}\varphi \in \widetilde{X}\}, \\ \widetilde{{\cal L}}\varphi:=|x|^{N-2}\nabla \cdot(|x|^{-(N-2)}\nabla \varphi), \end{array} \right. \end{equation} is self-adjoint with $\widetilde{{\cal L}}\leq 0$. Therefore, by unitary equivalence, the operator defined by \begin{equation} \left \{ \begin{array}{ll} D({\cal L}):=\left\{\psi \in H_r:{\cal L} \psi \in X\right\}, \\ {\cal L}\psi:=\Delta \psi+\left(\frac{N-2}{2}\right)^2\frac{\psi}{|x|^2}, \end{array} \right. \end{equation} is also self-adjoint with ${\cal L}\leq 0$, where $\psi=|x|^{-(N-2)/2}\varphi$. Furthermore, $i{\cal L}$ defines a skew-adjoint operator and generates a group of isometries on $H_r$. Adapting step by step the arguments of \cite[Theorem 3.3.9]{cazenave}, we may easily conclude the following: \begin{theorem}\label{thm.local.existence} Let $N\geq 3$ and $2<q<\frac{4}{N}+2$. Then, for all $\psi_0 \in H_r$, there exist $T=T(\psi_0)>0$ and a unique solution \[ \psi \in C([0,T),H_r)\cap C^1([0,T),H_r^{-1}), \] of (\ref{IVP.in.subsection}), where $H^{-1}_{r}$ is the dual space of $H_r$.
In addition, for all $t\in [0,T)$, the following properties hold: \begin{equation}\label{cons.law.charge} \int_{\mathbb{R}^N}|\psi(t)|^2dx=\int_{\mathbb{R}^N}|\psi_0|^2dx \;\;\;\; \mbox{(conservation of charge)}, \end{equation} and \begin{equation}\label{cons.law.energy} E(\psi(t))=E(\psi_0) \;\;\;\; \mbox{(conservation of energy)}. \end{equation} \end{theorem} Set \[ F(u):=\frac{1}{q}\int_{\mathbb{R}^N}|u|^qdx. \] Interpreting inequality (\ref{CKN.without.weight}) in terms of $u$, it follows that there exists $\varepsilon \in (0,1)$, such that \begin{equation} F(u)\leq \frac{1-\varepsilon}{2}\left\|u\right\|_{H}^2+C(\left\|u\right\|_{L^2(\mathbb{R}^N)}),\;\;\; \mbox{for all}\; u\in H_r. \end{equation} Therefore, in analogy to \cite[Theorem 3.4.1]{cazenave}, we may set $T(\psi_0)=\infty$ in Theorem \ref{thm.local.existence}, for all $\psi_0 \in H_r$. \subsection{Stability}\label{stability} We introduce the functional $J:{\cal H}_r\rightarrow \mathbb{R}$ defined by \[ J(v):=E(|x|^{-\frac{N-2}{2}}v)+\frac{1}{2}\int_{\mathbb{R}^N}|x|^{-(N-2)}|v|^2dx. \] Clearly, the minimization problem (\ref{original.minimization.problem}) is equivalent to the following, \begin{equation}\label{minimum.J.v.radial} \left \{ \begin{array}{ll} v\in \widetilde{\Gamma}, \\ J(v)=\min \{J(z):z\in \widetilde{\Gamma} \}, \end{array} \right. \end{equation} where \[ \widetilde{\Gamma}:=\{v\in {\cal H}_r:|x|^{-\frac{N-2}{2}}v\in \Gamma \}. \] Both minimization problems are well-defined since, by inequality (\ref{CKN.without.weight}) and the property $2<q<\frac{4}{N}+2$, there exist $\delta>0$ and $K<\infty$ such that \begin{equation} J(v)\geq \delta \left\|v\right\|^2_{{\cal H}_r}-K,\;\;\; \mbox{for all}\; v\in \widetilde{\Gamma}. \end{equation} Let \[ \widetilde{k}_\gamma:=\inf \{J(v):v\in \widetilde{\Gamma} \}. \] In the following lemma we establish the precompactness of any minimizing sequence with respect to $\widetilde{k}_\gamma$. \begin{lemma}\label{precompactness.property} Let $\gamma>0$. 
Then, any sequence $(v_n)$ in ${\cal H}_r$ satisfying \begin{equation} J(v_n)\rightarrow \widetilde{k}_\gamma,\;\;J'(v_n)\rightarrow 0,\;\; v_n\in \widetilde{\Gamma}, \end{equation} contains a convergent subsequence in ${\cal H}_r$. Furthermore, its limit solves the minimization problem (\ref{minimum.J.v.radial}). \end{lemma} \emph{Proof.} First, observe that for $n$ large enough, \begin{eqnarray} \widetilde{k}_\gamma+o(1) & \geq & J(v_n)-\frac{1}{q}<J'(v_n),v_n> \nonumber \\ & = & \frac{q-2}{2q}\left\|v_n\right\|_{{\cal H}_r}^2, \end{eqnarray} that is, $\left\|v_n\right\|_{{\cal H}_r}$ is bounded. Passing, if necessary, to a subsequence, still denoted by $(v_n)$, there exists $v\in {\cal H}_r$ such that $v_n\rightharpoonup v$ in ${\cal H}_r$. By Lemma \ref{compactradial}, $v_n\rightarrow v$ in $L^q(\mathbb{R}^N,|x|^{-q\frac{N-2}{2}}dx)$.\\ Also, notice that \begin{eqnarray} \left\|v_n-v\right\|_{{\cal H}_r}^2 & = & <J'(v_n)-J'(v),v_n-v> \nonumber \\ && +\: \int_{\mathbb{R}^N}|x|^{-q\frac{N-2}{2}}(|v_n|^{q-2}v_n-|v|^{q-2}v)(v_n-v)dx. \end{eqnarray} Evidently, \begin{equation} <J'(v_n)-J'(v),v_n-v>\; \rightarrow \; 0,\;\;\; \mbox{as}\; n\rightarrow \infty. \end{equation} Moreover, it follows by H\"{o}lder's inequality that \begin{equation} \left|\int_{\mathbb{R}^N}|x|^{-q\frac{N-2}{2}}(|v_n|^{q-2}v_n-|v|^{q-2}v)(v_n-v)dx\right|\leq C\left(\int_{\mathbb{R}^N}|x|^{-q\frac{N-2}{2}}|v_n-v|^qdx\right)^{1/q}, \end{equation} which tends to zero as $n \rightarrow \infty$. Therefore, we have found a $v\in {\cal H}_r$ and a subsequence $(v_n)$ such that \begin{equation} v_n\rightarrow v,\;\;\; \mbox{in}\; {\cal H}_r. \end{equation} It is obvious that $v\in \widetilde{\Gamma}$. Finally, by the weak lower semicontinuity of $J$ and the definition of $\widetilde{k}_\gamma$, we obtain $J(v)=\widetilde{k}_\gamma$, and the proof is complete.
$\blacksquare$ \vspace{0.1cm} By a solution to (\ref{original.stationary.eq}) we mean a pair $(\lambda_\gamma,u_\gamma)\in \mathbb{R}\times H_r$, where $\lambda_\gamma$ is the Lagrange multiplier associated to the critical point $u_\gamma$ of $E$ on $\Gamma$. If $v_\gamma$ is a global minimizer of $J$ on $\widetilde{\Gamma}$, then there exists a pair $(\lambda_\gamma,v_\gamma)\in \mathbb{R}\times {\cal H}_r$ solving the elliptic equation \begin{equation}\label{elliptic.eq.in.vgamma} -\nabla \cdot (|x|^{-(N-2)}\nabla v_\gamma)+\lambda_\gamma |x|^{-(N-2)}v_\gamma-|x|^{-q\frac{N-2}{2}}|v_\gamma|^{q-2}v_\gamma=0. \end{equation} By unitary equivalence, $S_\gamma \neq \emptyset$ and $e^{i\lambda_\gamma t}|x|^{-\frac{N-2}{2}}v_\gamma$ corresponds to a standing wave of (\ref{original.problem}). Observe that if $u_\gamma \in S_\gamma$, then $e^{i\lambda_\gamma t}u_\gamma \in S_\gamma$, for all $t\geq 0$. Also, $e^{i\lambda_\gamma t}u_\gamma$ is periodic in time, hence we may say that $S_\gamma$ consists of a class of closed orbits. In this context, the following definition of (local) orbital stability makes sense. \begin{definition} The set $S_\gamma$ is said to be orbitally stable, if for any $\varepsilon>0$, there exists a $\delta>0$, such that for any global solution $\psi(t)$ of (\ref{IVP.in.subsection}) with \begin{equation} dist(\psi_0,S_\gamma)<\delta, \end{equation} it holds that \begin{equation} dist(\psi(t),S_\gamma)<\varepsilon,\;\;\;\; \mbox{for all}\; t\geq 0, \end{equation} where \begin{equation} dist(w,S_\gamma):=\inf_{z\in S_\gamma}\left\|w-z\right\|_{H_r}.
\end{equation} \end{definition} \textit{Proof of Theorem \ref{main.result}.} Arguing by contradiction, assume that there exist sequences $(\psi_{0n})\subset H_r,\;(t_n)\subset \mathbb{R}_{+}$, and $\varepsilon_0>0$ with \begin{equation}\label{contadiction.argument.a} \left\|\psi_{0n}-u\right\|_{H_r} \rightarrow 0, \end{equation} for some $u\in S_\gamma$, and such that the global solutions $\psi_n$ with initial values $\psi_{0n}$ satisfy \begin{equation}\label{contadiction.argument.b} \inf_{z\in S_\gamma}\left\|\psi_n(t_n)-z\right\|_{H_r} \geq \varepsilon_0. \end{equation} Let us set \[ u_n:=\psi_n(t_n). \] Clearly, if \[ k_\gamma:=\min \left\{E(u):u\in \Gamma,\; u\; \mbox{is radial} \right\}, \] then \[ \widetilde{k}_\gamma=k_\gamma+\frac{\gamma}{2}. \] We deduce from (\ref{contadiction.argument.a}) that \begin{equation}\label{convergence.of.initial.values} E(\psi_{0n})\rightarrow k_\gamma \;\;\;\; \mbox{and}\;\;\;\; \int_{\mathbb{R}^N}|\psi_{0n}|^2dx\rightarrow \gamma. \end{equation} Applying the conservation laws (\ref{cons.law.charge}) and (\ref{cons.law.energy}), we obtain \begin{equation}\label{convergence.of.maximal.solutions} E(u_n)\rightarrow k_\gamma \;\;\;\; \mbox{and}\;\;\;\; \int_{\mathbb{R}^N}|u_n|^2dx\rightarrow \gamma. \end{equation} We choose a sequence $\beta_n\rightarrow 1$, such that \begin{equation} E(\beta_nu_n)\rightarrow k_\gamma \;\;\;\; \mbox{and}\;\;\;\; \int_{\mathbb{R}^N}|\beta_nu_n|^2dx=\gamma, \end{equation} that is, $\beta_nu_n$ is a minimizing sequence of problem (\ref{original.minimization.problem}). For example, we may take \begin{equation} \beta_n=\left(\frac{\gamma}{\int_{\mathbb{R}^N}|u_n|^2dx}\right)^{1/2}. \end{equation} Setting $v_n=|x|^{\frac{N-2}{2}}u_n$, we obtain \begin{equation} J(\beta_nv_n)\rightarrow \widetilde{k}_\gamma \;\;\;\; \mbox{and}\;\;\;\; \int_{\mathbb{R}^N}|x|^{-(N-2)}|\beta_nv_n|^2dx=\gamma.
\end{equation} Ekeland's variational principle \cite[Theorem 2.4]{willem} yields the existence of another sequence $\zeta_n\in \widetilde{\Gamma}$ satisfying \begin{equation} J(\zeta_n)\rightarrow \widetilde{k}_\gamma,\;\;\; J'(\zeta_n)\rightarrow 0,\;\;\; \left\|\zeta_n-\beta_nv_n\right\|_{{\cal H}_r}<\frac{1}{n}. \end{equation} By Lemma \ref{precompactness.property}, we deduce that there exist a subsequence in ${\cal H}_r$, still denoted by $\zeta_n$, and a $\zeta \in S_\gamma$ such that $\zeta_n\rightarrow |x|^{\frac{N-2}{2}}\zeta$ in ${\cal H}_r$. Therefore, \begin{eqnarray} \inf_{z\in S_\gamma}\left\|u_n-z\right\|_{H_r} & \leq & ||v_n-|x|^{\frac{N-2}{2}}\zeta ||_{{\cal H}_r} \nonumber \\ & = & ||v_n-\beta_nv_n+\beta_nv_n-\zeta_n+\zeta_n-|x|^{\frac{N-2}{2}}\zeta ||_{{\cal H}_r} \nonumber \\ & \leq & |1-\beta_n|\left\|v_n\right\|_{{\cal H}_r}+\frac{1}{n}+||\zeta_n-|x|^{\frac{N-2}{2}}\zeta ||_{{\cal H}_r}\rightarrow 0, \end{eqnarray} which contradicts (\ref{contadiction.argument.b}), and the proof is complete. $\blacksquare$ \vspace{0.1cm} \subsection{Behavior at the Origin} In this subsection, we describe the behavior of the standing wave solutions, obtained in Subsection \ref{stability}, around the origin. \vspace{0.2cm} \\ \emph{Proof of Theorem \ref{behavior.origin}}.\ \ Consider a global minimizer $u_\gamma \in S_\gamma$ of problem (\ref{original.minimization.problem}) for some $\gamma>0$, which can be chosen to be nonnegative on $\mathbb{R}^N \backslash \{0\}$. Then, there exists a Lagrange multiplier $\lambda_\gamma$, such that the pair $(\lambda_\gamma,v_\gamma)\in \mathbb{R}\times \mathcal{H}_r$ solves the stationary problem (\ref{elliptic.eq.in.vgamma}), where $v_\gamma=|x|^{(N-2)/2}u_\gamma$. \\ We introduce the following transformation \begin{equation} \widetilde{w}_\gamma(t)=v_\gamma(r),\;\;\;\;\; t=(-\log r)^{-\frac{1}{N-2}},\;\; r=|x|, \end{equation} for $0<r<1$.
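Since $t=(-\log r)^{-\frac{1}{N-2}}$, the substitution is invertible with $r=\exp(-t^{-(N-2)})$, and $r\to 1^{-}$ corresponds to $t\to +\infty$. A quick numerical sanity check of this, in the concrete case $N=3$ (the helper names are ours):

```python
import math

N = 3  # dimension; fixed for concreteness in this sketch

def t_of_r(r):
    # t = (-log r)^{-1/(N-2)} for 0 < r < 1
    return (-math.log(r)) ** (-1.0 / (N - 2))

def r_of_t(t):
    # the inverse substitution: r = exp(-t^{-(N-2)})
    return math.exp(-t ** (-(N - 2)))

# the round trip recovers r
for r in (0.1, 0.5, 0.9):
    assert abs(r_of_t(t_of_r(r)) - r) < 1e-12

# r -> 1^- corresponds to t -> +infinity
print(t_of_r(0.999), t_of_r(0.999999))
```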
Note that \[ \lim_{t\rightarrow +\infty}\widetilde{w}_\gamma(t)=\lim_{r\rightarrow 1^{-}}v_\gamma(r). \] Then, $\widetilde{w}_\gamma$ satisfies \[ -\Delta \widetilde{w}_\gamma+\lambda_\gamma (N-2)^2V_1(t)\widetilde{w}_\gamma+(N-2)^2V_2(t)|\widetilde{w}_\gamma|^{q-2}\widetilde{w}_\gamma=0, \] where \[ V_1(t)=\exp \left(-2t^{-(N-2)}\right)t^{-2(N-1)} \;\;\; \mbox{and}\;\;\; V_2(t)=\exp \left(\left(q\frac{N-2}{2}-2\right)t^{-(N-2)}\right). \] If we set $V_1(0)=V_2(0)=0$, then $V_1,\; V_2$ are continuous functions. Standard regularity results imply that $\widetilde{w}_\gamma (0)$ is a well-defined real number, while the maximum principle implies that this number cannot be zero. Thus $v_\gamma(0)$ is well-defined and positive, which means that $u_\gamma$ behaves at zero exactly like $|x|^{-(N-2)/2}$.\ $\blacksquare$ \vspace{0.1cm} \\ Consequently, the results of \cite[Section 2.5]{vzog1} yield the presence of a correcting term in the energy norm. The exact value of the norm of $u_\gamma$ is given by the formula \[ \left\|u_\gamma\right\|^2_H=I_{\mathbb{R}^N}(u_\gamma)-\Lambda (u_\gamma)+\left\|u_\gamma\right\|^2_{L^2(\mathbb{R}^N)}, \] where \[\Lambda(u_\gamma)=\frac{N(N-2)}{2}\ \omega_N\ v^2_\gamma(0). \] By contrast, in the next subsection we shall examine a case where the new energy term acts in an additive way to the total energy. \subsection{The Case of Hardy Energy at Infinity} We consider the following Cauchy problem \begin{equation}\label{IVP.hardy.at.infinity} \left \{ \begin{array}{ll} i|x|^{-4}w_t+\Delta w+\left(\frac{N-2}{2}\right)^2\frac{w}{|x|^2}=-|x|^{q(N-2)-2N}|w|^{q-2}w, \\ w(0)=w_0, \end{array} \right. \end{equation} where $w=w(|x|),\; N\geq 3$ and $2<q<\frac{4}{N}+2$. The approach is based on the unitary equivalence with problem (\ref{IVP.in.subsection}). To this end, we introduce the Kelvin transformation in the form \begin{equation} \psi(y)=|x|^{N-2}w(x),\;\;\; x=\frac{y}{|y|^2}, \end{equation} denoted by $\psi=\mathcal{K}(w)$.
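A quick finite-difference check, in the concrete case $N=3$, that the Kelvin transformation intertwines the two Laplacians via $\Delta_y\psi(y)=|x|^{N+2}\Delta_xw(x)$, $x=y/|y|^2$ (the test function, evaluation point, and step size below are arbitrary choices of ours):

```python
import math

N = 3  # dimension, fixed for this numerical sketch

def kelvin(y):
    # x = y/|y|^2
    r2 = sum(t * t for t in y)
    return tuple(t / r2 for t in y)

def w(x):
    # an arbitrary smooth test function
    return math.exp(-sum(t * t for t in x))

def psi(y):
    # Kelvin transform: psi(y) = |x|^{N-2} w(x) = |y|^{-(N-2)} w(y/|y|^2)
    r = math.sqrt(sum(t * t for t in y))
    return r ** (2 - N) * w(kelvin(y))

def laplacian(f, p, h=1e-3):
    # second-order central-difference Laplacian
    out = 0.0
    for i in range(N):
        pp = list(p); pm = list(p)
        pp[i] += h; pm[i] -= h
        out += (f(tuple(pp)) - 2.0 * f(p) + f(tuple(pm))) / (h * h)
    return out

y0 = (0.7, 0.3, 0.5)
x0 = kelvin(y0)
lhs = laplacian(psi, y0)                                          # Delta_y psi(y0)
rhs = sum(t * t for t in x0) ** ((N + 2) / 2) * laplacian(w, x0)  # |x0|^{N+2} Delta_x w(x0)
print(lhs, rhs)  # the two sides agree up to discretization error
```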
Clearly, for smooth functions, it holds that \[ \Delta_y\psi(y)=|x|^{N+2}\Delta_xw(x) \;\;\; \mbox{and}\;\;\;\; \frac{\psi(y)}{|y|^2}=|x|^{N+2}\frac{w(x)}{|x|^2}, \] and the equations are equivalent for $y\neq 0$. The differences appear in the energy near the singularity versus the energy at infinity. As mentioned earlier, the energy space corresponding to (\ref{IVP.in.subsection}) is $H_r$ with norm given by (\ref{right.norm.introd}). We shall use this formulation for the definition of the energy space $\mathcal{W}$ corresponding to (\ref{IVP.hardy.at.infinity}). Indeed, following exactly the proposal stated in \cite{vzog1}, the space $\mathcal{W}$ is defined as the isometric image of $H_r$ under the transformation $\mathcal{K}$, and the correct energy norm is given by the following formula \[ \left\|w\right\|_{\mathcal{W}}^2=\lim_{\varepsilon \downarrow 0}(I_{1/\varepsilon}(w)+\Lambda_{1/\varepsilon}(w))+\int_{\mathbb{R}^N}|x|^{-4}|w|^2dx, \] where \[ I_{1/\varepsilon}(w):=\int_{B_{1/\varepsilon}}|\nabla w|^2dx-\left(\frac{N-2}{2}\right)^2\int_{B_{1/\varepsilon}}\frac{|w|^2}{|x|^2}dx, \] and \[ \Lambda_{1/\varepsilon}(w):=\frac{N-2}{2}\varepsilon \int_{|x|=\frac{1}{\varepsilon}}|w|^2dS. \] More precisely, $\mathcal{W}$ is defined as the completion of the $C_0^\infty(\mathbb{R}^N \backslash \{0\})$-functions under the norm \begin{equation} \left\|w\right\|_{\mathcal{W}}^2=I_{\mathbb{R}^N}(w)+\int_{\mathbb{R}^N}|x|^{-4}|w|^2dx. \end{equation} \begin{remark} The question of well-posedness, as well as the existence and orbital stability of standing waves for (\ref{IVP.hardy.at.infinity}) in the space $\mathcal{W}$, is understood through the unitary equivalence with $H_r$.
\end{remark} \begin{remark} Note that in the case of a function $w$, which behaves at infinity like $|x|^{-(N-2)/2}$, we have again the appearance of a correction energy term with measure \begin{equation} \Lambda_{\infty}(w)=\frac{N(N-2)}{2}\ \omega_N\ v^2(0), \end{equation} where $v(x/|x|^2)=|x|^{\frac{N-2}{2}}w(x)$. However, there is a notable difference with problem (\ref{IVP.in.subsection}), since in this case the singularity effect acts in an additive way to the usual Hardy functional. \end{remark} \section{The General Case}\label{sec.general.case} \setcounter{equation}{0} In this section, we are concerned with the orbital stability of standing waves for the Schr\"{o}dinger equation with the critical inverse square potential, without assuming any symmetry hypothesis. We consider the problem \begin{equation}\label{IVP.in.nonradial.case} \left \{ \begin{array}{ll} i\psi_t+\Delta \psi+\left(\frac{N-2}{2}\right)^2\frac{\psi}{|x|^2}+g(x)|\psi|^{q-2}\psi=0, \\ \psi(0)=\psi_0, \end{array} \right. \end{equation} where $g\in L^1(\mathbb{R}^N)$ is a nonnegative function. \\ The approach is an adaptation of the method used in the radial case. Therefore, we only outline the basic steps. \begin{lemma} \label{compactv} Let $1 \leq q < 2^{*}$. If $g$ satisfies \begin{equation}\label{condition.on.weight.g} g\sim r^\omega,\;\; \mbox{with}\; \left \{ \begin{array}{ll} \omega>-N+\frac{q(N-2)}{2}, & \mbox{at}\;\;\; 0, \\ \omega<-N+\frac{q(N-2)}{2}, & \mbox{at}\;\;\; \infty, \end{array} \right. \end{equation} then, the inclusion $\H \hookrightarrow L_g^q (\mathbb{R}^N,|x|^{-\frac{N-2}{2}q}dx)$ is compact, where \[ L_g^q (\mathbb{R}^N,|x|^{-\frac{N-2}{2}q}dx):=\left\{v\in L^1(\mathbb{R}^N):\int_{\mathbb{R}^N}g|x|^{-q\frac{N-2}{2}}|v|^qdx<\infty \right\}. \] \end{lemma} \emph{Proof.} Let $v_n$ be a bounded sequence of $C_0^{\infty} (\mathbb{R}^N \backslash \{0\})$-functions in $\H$; without loss of generality, we assume that $v_n$ converges weakly to $0$.
Then, we split the norm of $v_n$ as \[ ||v_n||_{\H}^2 = ||v_n||_{\H(B_1)}^2 + ||v_n||_{\H(B_1^c)}^2. \] In addition, we study separately the radial parts $v_n^r=v_n^r(r),\; r=|x|$, and the nonradial parts $v_n^{nr}$ of $v_n$. The proof consists of four steps. \emph{Step 1: The radial part $v_n^r$ in $B_1$}. Arguing as in Lemma \ref{compactradial}, we obtain that $v_n^r \in H^1(B_1 \subset \mathbb{R}^2)$, whenever $v_n^r\in \H(B_1)$. Therefore, \[ \int_{B_1} |x|^{-(N-2)}\, |v_n^r|^p\, dx \to 0,\;\;\;\;\;\; \mbox{for any}\;\;\;\; p \in (1,+\infty), \] as $n \to \infty$. Then, for $p$ large enough, \begin{eqnarray*} \int_{B_1} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^r|^q\, dx & = & \int_{B_1} |x|^{-(N-2)\frac{p-2}{2p}q}\, g\, ||x|^{-\frac{N-2}{p}} v_n^r|^q\, dx \\ & \leq & \left ( \int_{B_1} |x|^{-(N-2)\frac{p-2}{2(p-q)}q}\, g^{\frac{p}{p-q}}\, dx \right )^{\frac{p-q}{p}}\, \left ( \int_{B_1} |x|^{-(N-2)}\, |v_n^r|^p\, dx \right )^{\frac{q}{p}}. \end{eqnarray*} Let $g \sim r^{\omega}$ at $0$. Then, the first integral of the right hand side is finite if \begin{equation}\label{omegaB1r} \omega > (N-2)\frac{q}{2} - N. \end{equation} Finally, we get that \[ \int_{B_1} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^r|^q\, dx \to 0,\;\;\;\;\;\; \mbox{as}\;\;\;\; n \to \infty, \] for $g \sim r^{\omega}$, at $0$, and $\omega$ satisfying (\ref{omegaB1r}). \vspace{0.2cm} \\ \emph{Step 2: The nonradial part $v_n^{nr}$ in $B_1$}. For the nonradial parts $v_n^{nr}$ of $v_n$, we observe that \begin{equation} \label{nonradialH1} |x|^{-(N-2)/2}\, v_n^{nr} \in H^1 (\mathbb{R}^N). \end{equation} This follows as in \cite{vz00} or as in \cite[p.~196]{filippas.tertikas}; setting $u_n^{nr}=|x|^{-(N-2)/2}\, v_n^{nr}$, we have \[ ||v_n^{nr}||_{\H}^2 = I_{\mathbb{R}^N}(u_n^{nr}) + \int_{\mathbb{R}^N} |u_n^{nr}|^2\, dx. \] However, using decomposition into spherical harmonics, \[ I_{\mathbb{R}^N}(u_n^{nr}) \geq c\, \int_{\mathbb{R}^N} |\nabla u_n^{nr}|^2\, dx, \] and (\ref{nonradialH1}) holds.
Moreover, \[ ||v_n^{nr}||_{\H} \geq c\, ||v_n^{nr}||_{H^1(\mathbb{R}^N)}, \] so $v_n^{nr}$ is also bounded in $H^1(\mathbb{R}^N)$. Thus, $|x|^{-(N-2)/2}\, v_n^{nr} \in H^1 (B_1)$, and $H^1 (B_1)$ is compactly embedded into $L^{p} (B_1)$ for $1 \leq p < 2^{*}$. For $\varepsilon >0$ small enough, we set \begin{equation} \label{Aqe} A_{q,\varepsilon} = \left ( 1 - \frac{q}{2^{*} - \varepsilon} \right )^{-1}. \end{equation} It is clear that $A_{q,\varepsilon}$ is increasing as a function of $\varepsilon$, so \[ A_{q,\varepsilon} > \left ( 1 - \frac{q}{2^{*}} \right )^{-1}, \] for any $\varepsilon >0$ small enough. Then, \begin{eqnarray*} \int_{B_1} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^{nr}|^q\, dx &=& \int_{B_1} g\, ||x|^{-\frac{N-2}{2}} v_n^{nr}|^q\, dx \\ &\leq& \left ( \int_{B_1} g^{A_{q,\varepsilon}}\, dx \right )^{\frac{1}{A_{q,\varepsilon}}}\, \left ( \int_{B_1} ||x|^{-\frac{N-2}{2}}\, v_n^{nr}|^{2^{*} -\varepsilon}\, dx \right )^{\frac{q}{2^{*} -\varepsilon}}. \end{eqnarray*} Let $g \sim r^{\omega}$ at $0$. Then, the first integral of the right hand side is finite if (\ref{omegaB1r}) holds. Finally, we get that \[ \int_{B_1} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^{nr}|^q\, dx \to 0,\;\;\;\;\;\; \mbox{as}\;\;\;\; n \to \infty, \] for $g \sim r^{\omega}$, at $0$, and $\omega$ satisfying (\ref{omegaB1r}). \vspace{0.2cm} \\ \emph{Step 3: The radial part $v_n^{r}$ in $B^c_1$}. In the case of the exterior domain $B_1^c$, both in the radial and the nonradial case, we use the following Kelvin transform. We set \begin{equation}\label{kelvintrans} v(x) = w(y),\;\;\;\;\;\; y=\frac{x}{|x|^2}. \end{equation} The determinant of the Jacobian of the Kelvin transformation in dimension $d \geq 2$ is equal to $-|x|^{-2d}$ and \[ \left | \nabla_x w \left ( \frac{x}{|x|^2} \right ) \right |^2 = |x|^{-4}\, |\nabla_y w (y)|^2. \] Then, \begin{equation}\label{normB1w} ||v_n||^2_{\H(B_1^c)} = \int_{B_1} |y|^{-(N-2)}\, |\nabla w_n|^2\, dy + \int_{B_1} |y|^{-N-2}\, |w_n|^2\, dy.
\end{equation} We restrict now to the radial case. For the radial functions $w_n^r$, we have \[ \int_{B_1} |y|^{-(N-2)}\, |\nabla w_n^r|^2\, dy + \int_{B_1} |y|^{-N-2}\, |w_n^r|^2\, dy \geq ||w_n^r||^2_{H^1(B_1 \subset \mathbb{R}^2)}. \] Hence, as in Step 1, \[ |y|^{-\frac{N-2}{p}}\, w_n^r\;\;\;\;\; \mbox{converges to 0 in}\;\; L^p(B_1),\;\; p \geq 1. \] Moreover, \begin{eqnarray*} \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^{r}|^q\, dx & = & \int_{B_1} |y|^{\frac{N-2}{2}q -2N}\, g\, |w_n^r|^q\, dy \\ & = & \int_{B_1} |y|^{\frac{N-2}{2}q -2N + \frac{N-2}{p}q}\, g\, ||y|^{-\frac{N-2}{p}}\, w_n^r|^q\, dy \\ & \leq & \left ( \int_{B_1} |y|^{(N-2)\frac{p+2}{2(p-q)}q - \frac{2Np}{p-q}}\, g^{\frac{p}{p-q}}\, dy \right )^{\frac{p-q}{p}}\, \\ && \left ( \int_{B_1} |y|^{-(N-2)}\, |w_n^r|^p\, dy \right )^{\frac{q}{p}}. \end{eqnarray*} Let $g \sim |x|^{\omega}$ at $\infty$. Then, the first integral of the right hand side is finite if \[ - \omega > -q(N-2)\frac{p+2}{2p} + N + \frac{q}{p}N, \] or, for $p$ large enough, \begin{equation}\label{omegaB1cr} \omega < \frac{q(N-2)}{2} - N. \end{equation} Finally, we conclude \[ \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^r|^q\, dx \to 0,\;\;\;\;\;\; \mbox{as}\;\;\;\; n \to \infty, \] for $g \sim r^{\omega}$, at $\infty$, and $\omega$ satisfying (\ref{omegaB1cr}). \vspace{0.2cm} \\ \emph{Step 4: The nonradial part $v_n^{nr}$ in $B^c_1$}. Using transformation (\ref{kelvintrans}), we have that \begin{equation}\label{normRNw} ||v_n||^2_{\H} = \int_{\mathbb{R}^N} |y|^{-(N-2)}\, |\nabla w_n|^2\, dy + \int_{\mathbb{R}^N} |y|^{-N-2}\, |w_n|^2\, dy. \end{equation} Moreover, as in Step 2, we may obtain \[ ||v_n^{nr}||^2_{\H} \geq \int_{\mathbb{R}^N} |\nabla ( |y|^{-\frac{N-2}{2}}\, w_n^{nr} ) |^2\, dy + \int_{\mathbb{R}^N} |y|^{-4}\, ||y|^{-\frac{N-2}{2}}\, w_n^{nr}|^2\, dy.
\] Restricting ourselves to $B_1$, we have \[ |y|^{-\frac{N-2}{2}}\, w_n^{nr}\;\;\;\;\;\; \mbox{is bounded in}\;\;\; H^1(B_1), \] so \[ |y|^{-\frac{N-2}{2}}\, w_n^{nr}\;\;\;\;\;\; \mbox{converges to 0 in}\;\;\; L^p(B_1),\;\; 1 \leq p < 2^{*}. \] Then, \begin{eqnarray*} \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^{nr}|^q\, dx & = & \int_{B_1} |y|^{\frac{N-2}{2}q -2N}\, g\, |w_n^{nr}|^q\, dy \\ & = &\int_{B_1} |y|^{(N-2)q -2N}\, g\, | |y|^{-\frac{N-2}{2}}\, w_n^{nr} |^q\, dy \\ & \leq & \left ( \int_{B_1} ( |y|^{(N-2)q -2N}\, g )^{A_{q,\varepsilon}}\, dy \right )^{\frac{1}{A_{q,\varepsilon}}}\, \\ && \left ( \int_{B_1} | |y|^{-\frac{N-2}{2}}\, w_n^{nr} |^{2^{*} -\varepsilon}\, dy \right )^{\frac{q}{2^{*} -\varepsilon}}, \end{eqnarray*} where $A_{q,\varepsilon}$ is defined in (\ref{Aqe}). Let $g \sim |x|^{\omega}$ at $\infty$. Then, the first integral of the right hand side is finite if \[ - \omega > 2N - (N-2)q - \frac{N}{A_{q,\varepsilon}} > N - \frac{N-2}{2}q, \] or $\omega$ satisfies (\ref{omegaB1cr}). Finally, we get that \[ \int_{B_1^c} |x|^{-\frac{N-2}{2}q}\, g\, |v_n^{nr}|^q\, dx \to 0,\;\;\;\;\;\; \mbox{as}\;\;\;\; n \to \infty, \] for $g \sim r^{\omega}$, at $\infty$, and $\omega$ satisfying (\ref{omegaB1cr}). $\blacksquare$ \begin{remark} \label{remcomp} (i) Since \begin{equation}\label{negative} -N + \frac{N-2}{2}q < 0, \end{equation} for every $1 \leq q < 2^{*}$, we conclude that $g$ may be constant at the origin. On the other hand, it should decay to zero at infinity. As we saw in Lemma \ref{compactradial}, this is not the case if we restrict to radial functions. (ii) Consider the functions $u = |x|^{-(N-2)/2}\, v$. Then, \[ \int_{\mathbb{R}^N} |x|^{-\frac{N-2}{2}q}\, g\, |v|^q\, dx = \int_{\mathbb{R}^N} g\, |u|^q\, dx. \] In the special case $q=2$, $g$ should be a function such that \[ g \sim r^{\omega},\;\;\; \mbox{with} \; \left \{ \begin{array}{ll} \omega > -2, & \mbox{at}\;\;\; 0, \\ \omega < -2, & \mbox{at}\;\;\; \infty. \end{array} \right.
\] as is natural from Hardy's inequality. (iii) Any function $g$ with $g \in L^{1} (\mathbb{R}^N) \cap L^{\frac{2^{*}}{2^{*} -q}} (\mathbb{R}^N)$ satisfies (\ref{condition.on.weight.g}). \end{remark} By Lemma \ref{lemma.ckn.radial} and the modified behavior of $g$ at zero, the following is immediate: \begin{corollary}\label{ckn.nonradial} Let $2< q<2^{*},\;N\geq 3$. If $g$ satisfies \begin{equation}\label{condition.on.weight.g.modified} g\sim r^\omega,\;\; \mbox{with}\; \left \{ \begin{array}{ll} \omega \geq 0, & \mbox{at}\;\;\; 0, \\ \omega<-N+\frac{q(N-2)}{2}, & \mbox{at}\;\;\; \infty, \end{array} \right. \end{equation} then, there exists a constant $C=C(N,q)>0$, such that \begin{eqnarray} \int_{\mathbb{R}^N}g|x|^{-q\frac{N-2}{2}}|v|^qdx & \leq & C\left(\int_{\mathbb{R}^N}|x|^{-(N-2)}|\nabla v|^2dx\right)^{\frac{N(q-2)}{4}} \nonumber \\ && \left(\int_{\mathbb{R}^N}|x|^{-(N-2)}|v|^2dx\right)^{\frac{2q-N(q-2)}{4}}, \end{eqnarray} for all $v\in {\cal H}$. \end{corollary} The energy functional naturally associated to (\ref{IVP.in.nonradial.case}) is defined by \[ E_g(u):=\frac{1}{2}\lim_{\varepsilon \downarrow 0}(I_{B_\varepsilon^c}(u)-\Lambda_{\varepsilon}(u))-\frac{1}{q}\int_{\mathbb{R}^N}g|u|^qdx. \] \begin{theorem} Let $N\geq 3$ and $2<q<\frac{4}{N}+2$. If $g$ satisfies (\ref{condition.on.weight.g.modified}), then, for all $\psi_0 \in H$, there exists a unique solution \[ \psi \in C([0,\infty),H)\cap C^1([0,\infty),H^{-1}), \] of (\ref{IVP.in.nonradial.case}), where $H^{-1}$ is the dual space of $H$. In addition, for all $t\geq 0$, the following conservation laws hold: \begin{equation} \int_{\mathbb{R}^N}|\psi(t)|^2dx=\int_{\mathbb{R}^N}|\psi_0|^2dx, \end{equation} and \begin{equation} E_g(\psi(t))=E_g(\psi_0). \end{equation} \end{theorem} Consider the minimization problem \begin{equation}\label{minimum.Eg.nonradial} \left \{ \begin{array}{ll} u\in \Gamma, \\ E_g(u)=\min \left\{E_g(z):z\in \Gamma \right\}, \end{array} \right.
\end{equation} which is well-defined due to Corollary \ref{ckn.nonradial}. By using similar arguments to those used in Lemma \ref{precompactness.property}, we may obtain the existence of a standing wave solution $e^{i\lambda_\gamma t}u_\gamma \in H$ of (\ref{IVP.in.nonradial.case}), for a given $\gamma>0$. If \[ S_{g,\gamma}:=\left\{u\in H:u\; \mbox{is a solution of}\; (\ref{minimum.Eg.nonradial}) \right\} \] denotes the set of global minimizers, then a nonradial version of Theorem \ref{main.result} is stated in the following: \begin{theorem} Assume that $N\geq 3,\; 2<q<\frac{4}{N}+2$ and $g$ satisfies condition (\ref{condition.on.weight.g.modified}). Then, the set $S_{g,\gamma}$ is orbitally stable for problem (\ref{IVP.in.nonradial.case}). \end{theorem} \subsection{Improved Hardy-Sobolev Inequality} \label{IHS} As already mentioned in the Introduction, there is no possibility for a Hardy-Sobolev inequality to hold. This is clear since functions behaving at the origin like $|x|^{-(N-2)/2}$ do not belong to $L^{2^{*}}$. As in the bounded domain case, we have to consider Improved Hardy-Sobolev (IHS) inequalities; i.e., we have to find a weight function $h(x)$ such that \begin{equation} \label{IHSRN} ||\phi||_H \geq \left ( \int_{\mathbb{R}^N} h(x)\, |\phi|^{2^{*}} \right )^{(N-2)/N},\;\;\;\;\;\; \mbox{for any}\;\;\; \phi \in C_0^{\infty} (\mathbb{R}^N). \end{equation} Unfortunately, in the case of the whole space we cannot apply the method described in \cite[Lemma 9.1]{vzog2}. This method provides a way of constructing optimal IHS inequalities (i.e. it provides $h(x)$) and it was used in \cite{trach.zogr, vzog2, zogr.jfa}. However, we are in a position to present an IHS inequality, although it is not optimal. Before doing so, we remark the following: \begin{enumerate} \item The nonradial part of $\phi$ belongs to $H^1 (\mathbb{R}^N)$ (see (\ref{nonradialH1})), so in this case (\ref{IHSRN}) holds with $h(x)=1$.
\item The radial part $\phi_r$ of $\phi$ satisfies (\ref{IHSRN}) with $h(x)=|x|^2$. This may be obtained by following the arguments of Lemma \ref{compactradial}; in this case $|x|^{(N-2)/2} \phi_r \in H^1 (\mathbb{R}^2)$. \item For any $R>0$ and $\phi \in C_0^{\infty} (B_R)$, inequality (\ref{IHSRN}) holds with \[ h(x) = \left (-\log \left( \frac{|x|}{D}\right) \right )^{-\frac{2(N-1)}{N-2}},\;\;\;\; D>R. \] This optimal (IHS) inequality was proved in \cite{filippas.tertikas}. In the case of radial functions, $D$ is allowed to be equal to $R$ (see \cite{zogr.jfa}). Note that, since $D$ or $R$ appear in the inequalities, these inequalities depend on the domain. \end{enumerate} Directly from 1. and 2. we have the following. \begin{lemma} Assume that $h$ is defined as \[ h(x) = \left \{ \begin{array}{ll} |x|^2, & |x|<1, \\ 1, & |x| \geq 1. \end{array} \right. \] Then, inequality (\ref{IHSRN}) is true. \end{lemma} However, from the point of view of 3., this inequality is not optimal. \vspace{1cm} \noindent \textsc{Acknowledgment.} The first author was supported by $E\Lambda KE$, National Technical University of Athens (grant number 65193600). \ {\small \bibliographystyle{amsplain}
\section{Introduction} In their beautiful recent paper \cite{MM}, Matsumoto and Matui proved that a simple Cuntz--Krieger algebra remembers the flow equivalence class of the irreducible shift of finite type defining it, provided that the canonical diagonal subalgebra is considered as a part of the data. A key tool for obtaining this groundbreaking result was the realisation that diagonal-preserving isomorphism translates directly to isomorphism of the groupoids associated to the shift spaces, reducing the problem to establishing that when two one-sided irreducible shifts of finite type are continuously orbit equivalent in the sense developed by Matsumoto, then the corresponding two-sided shift spaces are flow equivalent. Having such rigidity results for $C^*$-algebras associated to general shift spaces of finite type would provide a better understanding of the classification problem for general Cuntz--Krieger algebras recently solved in \cite{ERRS} and \cite{ERRS2}. From the point of view of symbolic dynamics, it is also of interest to determine the class of shift spaces for which continuous orbit equivalence implies flow equivalence. The groupoid component of the proof in \cite{MM} has in \cite{BCW} and \cite{CRS} been generalised to a much more general setting, but the argument leading from diagonal-preserving isomorphism to flow equivalence in \cite{MM} goes through a deep result about the ordered cohomology of irreducible shifts of finite type by Boyle and Handelman (\cite{BH}) which does not readily extend to the reducible case. In addition, several of the arguments used in \cite{MM} rely on the assumption that the shifts of finite type in question do not contain isolated periodic points. 
In the present paper we give a direct proof that continuously orbit equivalent shifts of finite type are also flow equivalent (Theorem~\ref{ib}), thereby generalising \cite[Theorem 3.5]{MM} from irreducible one-sided Markov shifts to general (possibly reducible) shifts of finite type. We do this by producing a concrete flow equivalence from a given orbit equivalence between general shift spaces with continuous cocycles under added hypotheses on the given orbit equivalence and cocycles (Proposition~\ref{diego}), and then proving by methods related to the original proof in \cite{MM} that when the shift spaces are of finite type, then these hypotheses may always be arranged (Proposition~\ref{sven} and Proposition~\ref{john}). As a corollary to Proposition~\ref{diego}, we generalise in Corollary~\ref{cor:scoe} \cite[Theorem 5.5]{Mat2} from irreducible topological Markov chains with no isolated points to general shift spaces by showing that for general shift spaces, strongly continuous orbit equivalence implies two-sided conjugacy. We also prove that the groupoids of two one-sided shifts of finite type are isomorphic if and only if the shift spaces are continuously orbit equivalent (Theorem~\ref{orbit}), and by combining this with a result of Matui \cite{Matui} and results in \cite{CRS} and \cite{ERRS}, we obtain that these groupoids are stably isomorphic if and only if the corresponding two-sided shift spaces are flow equivalent (Theorem~\ref{thm:1}).
As applications, we show in Corollary~\ref{cor:2} that the one-sided edge shifts of two finite directed graphs with no sinks and no sources are continuously orbit equivalent if and only if the corresponding graph $C^*$-algebras are isomorphic by a diagonal-preserving isomorphism (if and only if the corresponding Leavitt path algebras are isomorphic by a diagonal-preserving isomorphism), and we show in Corollary~\ref{cor:1} that the graphs are move equivalent, as defined in \cite{Sor}, if and only if the corresponding graph $C^*$-algebras are stably isomorphic by a diagonal-preserving isomorphism (if and only if the corresponding Leavitt path algebras are stably isomorphic by a diagonal-preserving isomorphism). We also apply our results to Cuntz--Krieger algebras and topological Markov chains and directed graphs of $\{0,1\}$-matrices, and thereby generalise \cite[Theorem 2.3]{MM} and \cite[Corollary 3.8]{MM} from the irreducible to the general case (Corollary~\ref{CK} and Corollary~\ref{cor:3}). \subsection*{Acknowledgements} This work was partially supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92), by VILLUM FONDEN through the network for Experimental Mathematics in Number Theory, Operator Algebras, and Topology, and by the Danish Council for Independent Research $|$ Natural Sciences (7014-00145B). Some of the work was done while all four authors were attending the research program \emph{Classification of operator algebras: complexity, rigidity, and dynamics} at the Mittag-Leffler Institute, January--April 2016. We thank the institute and its staff for the excellent work conditions provided. \section{Definitions and notation} In this section we briefly recall the definitions of \emph{shift spaces}, \emph{shifts of finite type}, \emph{continuous orbit equivalence of shift spaces}, and \emph{flow equivalence of shift spaces}, and introduce notation.
We let $\mathbb{N}$ denote the set of positive integers, and $\mathbb{N}_0$ the set of non-negative integers. \subsection{One-sided shift spaces} A \emph{one-sided shift space} (or \emph{one-sided subshift}) is a closed, and hence compact, subset $X$ of $\boldsymbol{a}^{\mathbb{N}_0}$, where $\boldsymbol{a}$ is a finite set equipped with the discrete topology and $\boldsymbol{a}^{\mathbb{N}_0}$ is equipped with the product topology, such that $X$ is invariant under the shift transformation $$\sigma:\boldsymbol{a}^{\mathbb{N}_0}\to \boldsymbol{a}^{\mathbb{N}_0}$$ given by $(\sigma((x_i)_{i\in\mathbb{N}_0}))_j=x_{j+1}$ for $j\in\mathbb{N}_0$ (i.e., $\sigma(X)= X$). When $X$ is a one-sided shift space, then we let $\osh:X\to X$ denote the restriction of $\sigma$ to $X$. For $n\in\mathbb{N}_0$ we denote by $\osh^n$ the $n$-fold composition of $\osh$ with itself (when $n=0$, then $\osh^n$ denotes the identity map on $X$). Two one-sided shift spaces $X$ and $Y$ are \emph{conjugate} if there is a \emph{conjugacy} between them, i.e., a homeomorphism $h:X\to Y$ such that $\osh[Y]\circ h=h\circ\osh$. Let $X$ be a one-sided shift space. We say that $x\in X$ is \emph{periodic} if $\osh^p(x)=x$ for some $p\in\mathbb{N}$, and that $x$ is \emph{eventually periodic} if $\osh^n(x)$ is periodic for some $n\in\mathbb{N}_0$. When $x\in X$ is eventually periodic, then we call the number $$\operatorname{lp}(x):=\min\{p\in\mathbb{N}:\exists n,m\in\mathbb{N}_0:p=n-m\text{ and }\osh^n(x)=\osh^m(x)\}$$ \emph{the least period} of $x$. When $X$ is a shift space, we write $\mathcal{L}(X)$ for the \emph{language of $X$} (i.e., the set of finite words, including the empty word $\emptyset$, that appear in elements of $X$). Given a word $v$ in $\mathcal{L}(X)$, we denote by $|v|$ the length of $v$, and for $m\in\mathbb{N}$, we let $\mathcal{L}^m(X)$ be the set of words in $\mathcal{L}(X)$ of length $m$.
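Concretely, if $x$ consists of a finite prefix followed by a word $w$ repeated forever, then $\operatorname{lp}(x)$ equals the smallest rotation period of $w$. A minimal sketch computing it (the string encoding of points is our own device, not part of the text):

```python
def least_period(preperiod, cycle):
    # x = preperiod followed by cycle repeated forever; lp(x) is the
    # smallest p such that rotating the repeating word by p fixes it
    # (the minimal rotation period always divides len(cycle)).
    n = len(cycle)
    for p in range(1, n + 1):
        if n % p == 0 and cycle == cycle[p:] + cycle[:p]:
            return p
    return n

print(least_period("ab", "0101"))  # -> 2: the tail is (01)(01)...
print(least_period("", "011"))     # -> 3
```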
Given $x\in X$ and $n,m\in \mathbb{N}_0$ with $n\leq m$, we define the word $x_{[n,m]}:=(x_n,\ldots,x_m)\in \mathcal{L}^{m-n+1}(X)$. For $v\in\mathcal{L}(X)\setminus\{\emptyset\}$, we write $Z(v)$ for the \emph{cylinder set} $\{x\in X:x_{[0,|v|)}=v\}$ where $x_{[0,|v|)}:=x_{[0,|v|-1]}$. \subsection{Shifts of finite type} A \emph{one-sided shift of finite type} is a one-sided shift space $X$ such that there is an $m\in\mathbb{N}$ with the property that if $v\in\mathcal{L}(X)$ has length $m$ and $uv,vw\in\mathcal{L}(X)$, then $uvw\in\mathcal{L}(X)$. The shift map $\osh$ is a local homeomorphism if and only if $X$ is a shift of finite type, in which case $\osh^n$ is a local homeomorphism for all $n\in\mathbb{N}_0$. \subsection{Continuous orbit equivalence} Let $X$ and $Y$ be two one-sided shift spaces. Following \cite{Mat}, we say that a homeomorphism $h:X\to Y$ is a \emph{continuous orbit equivalence} if there exist continuous maps $k,l:X\to \mathbb{N}_0$ and $k',l':Y\to \mathbb{N}_0$ such that \begin{equation}\label{eq:1} \osh[Y]^{k(x)}(h(\osh(x)))=\osh[Y]^{l(x)}(h(x)) \end{equation} for $x\in X$, and \begin{equation}\label{eq:2} \osh^{k'(y)}(h^{-1}(\osh[Y](y)))=\osh^{l'(y)}(h^{-1}(y)) \end{equation} for $y\in Y$. Observe that in this case $h^{-1}:Y\rightarrow X$ is also a continuous orbit equivalence. We say that $X$ and $Y$ are \emph{continuously orbit equivalent} if there exists a continuous orbit equivalence $h:X\to Y$ (it is routine to check that the composition of two continuous orbit equivalences is a continuous orbit equivalence, and thus that continuous orbit equivalence is indeed an equivalence relation on one-sided shift spaces, cf.~\cite[Lemma 2.3]{Mat2}). If $h:X\to Y$ is a continuous orbit equivalence, then we say that a pair $(k,l)$ of continuous maps $k,l:X\to \mathbb{N}_0$ satisfying \eqref{eq:1} is an \emph{$h$-cocycle pair}. \subsection{Flow equivalence} Let $X$ be a one-sided shift space.
Given $\mathbf{x}=(x_n)_{n\in \mathbb{Z}}\in \boldsymbol{a}^\mathbb{Z}$ and $m\in \mathbb{Z}$, we define $$\mathbf{x}_{[m,\infty)}:=(x_m,x_{m+1},\ldots)\in\boldsymbol{a}^{\mathbb{N}_0}\,.$$ The \emph{two-sided shift space associated to $X$} is defined to be $$\mathbf{X}:=\{\mathbf{x}\in\boldsymbol{a}^\mathbb{Z}: \mathbf{x}_{[m,\infty)}\in X\text{ for all }m\in \mathbb{Z}\}\,.$$ The set $\mathbf{X}$ is a closed, and hence compact, subset of $\boldsymbol{a}^\mathbb{Z}$ with the induced product topology of $\boldsymbol{a}^\mathbb{Z}$, and is invariant under the shift transformation $$\tsh:\mathbf{X}\to \mathbf{X}$$ given by $(\tsh((x_i)_{i\in\mathbb{Z}}))_j=x_{j+1}$ for $j\in\mathbb{Z}$. Notice that $X\leftrightarrow \mathbf{X}$ is a bijective correspondence between the class of one-sided shift spaces and the class of two-sided shift spaces (i.e., the class of closed subsets $\mathbf{X}$ of $\boldsymbol{a}^\mathbb{Z}$ satisfying $\tsh(\mathbf{X})=\mathbf{X}$). Two two-sided shift spaces $\mathbf{X}$ and $\mathbf{Y}$ are \emph{conjugate} if there is a \emph{conjugacy} between them, i.e., a homeomorphism $\boldsymbol{\varphi}:\mathbf{X}\to \mathbf{Y}$ such that $\tsh[\mathbf{Y}]\circ \boldsymbol{\varphi}=\boldsymbol{\varphi}\circ\tsh$. If $X$ and $Y$ are conjugate, then $\mathbf{X}$ and $\mathbf{Y}$ are conjugate (but $\mathbf{X}$ and $\mathbf{Y}$ can be conjugate without $X$ and $Y$ being conjugate). We say that $\mathbf{x}\in\mathbf{X}$ is \emph{periodic} if $\tsh^p(\mathbf{x})=\mathbf{x}$ for some $p\in\mathbb{N}$. When $\mathbf{x}\in \mathbf{X}$ is periodic, we call the number $$\operatorname{lp}(\mathbf{x}):=\min\{p\in\mathbb{N}:\tsh^p(\mathbf{x})=\mathbf{x}\}$$ \emph{the least period} of $\mathbf{x}$.
Let $\sim$ be the smallest equivalence relation on $\mathbf{X}\times\mathbb{R}$ such that $(\tsh^n(\mathbf{x}),t)\sim (\mathbf{x},t+n)$ for $\mathbf{x}\in\mathbf{X}$, $t\in\mathbb{R}$ and $n\in\mathbb{Z}$, and let $[(\mathbf{x},t)]$ denote the equivalence class of $(\mathbf{x},t)$. The \emph{suspension} $S\mathbf{X}$ of $\mathbf{X}$ is the quotient $\mathbf{X}\times\mathbb{R}/\sim$ equipped with the quotient topology of the product topology on $\mathbf{X}\times\mathbb{R}$. A \emph{flow equivalence} between the suspensions of two two-sided shift spaces $\mathbf{X}$ and $\mathbf{Y}$ is a homeomorphism $\psi:S\mathbf{X}\to S\mathbf{Y}$ that maps flow lines onto flow lines in an orientation-preserving way: so if $\mathbf{x}\in\mathbf{X}$, $\mathbf{y}\in\mathbf{Y}$, $r,s,t,u\in\mathbb{R}$, $s,u>0$ and $\psi([(\mathbf{x},t)])=[(\mathbf{y},r)]$, then there is a $v>0$ such that $\psi([(\mathbf{x},t+s)])=[(\mathbf{y},r+v)]$, and a $w>0$ such that $\psi^{-1}([(\mathbf{y},r+u)])=[(\mathbf{x},t+w)]$. Two two-sided shift spaces $\mathbf{X}$ and $\mathbf{Y}$ are \emph{flow equivalent} if there exists a flow equivalence between $S\mathbf{X}$ and $S\mathbf{Y}$. It is routine to check that the composition of two flow equivalences is a flow equivalence, and thus that flow equivalence is an equivalence relation on two-sided shift spaces. If $\mathbf{X}$ and $\mathbf{Y}$ are conjugate, then $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent, but $\mathbf{X}$ and $\mathbf{Y}$ can be flow equivalent without being conjugate. \subsection{The cohomology of a shift space} \label{cohomology} Let $X$ be a one-sided shift space. Following \cite{MM}, we let $H^X$ be the group \begin{equation*} H^X:=C(X,\mathbb{Z})/\{f-f\circ\osh:f\in C(X,\mathbb{Z})\} \end{equation*} with addition defined by $[f]+[g]=[f+g]$, and we let \begin{equation*} H^X_+:=\{[f]\in H^X: f(x)\ge 0\text{ for all }x\in X\}.
\end{equation*} It follows from \cite[Lemma 3.1]{MM} that the preordered group $(H^X,H^X_+)$ is isomorphic to the ordered cohomology group $(G^{\tsh},G^{\tsh}_+)$ of $(\mathbf{X},\tsh)$ defined in \cite{BH} (\cite[Lemma 3.1]{MM} is only stated for irreducible shifts associated with $\{0,1\}$ matrices, but it is easy to see that its proof holds for general shift spaces). \subsection{The groupoid of a one-sided shift space of finite type} The groupoid $\mathcal{G}_X$ of a one-sided shift of finite type $X$ has unit space $\mathcal{G}^{(0)}:=X$ and morphisms \begin{equation*} \mathcal{G}_X:=\{(x,n,x')\in X\times\mathbb{Z}\times X:\exists i,j\in\mathbb{N}_0:n=i-j \text{ and } \osh^i(x)=\osh^j(x')\}. \end{equation*} The range and source maps $r,s:\mathcal{G}_X\to \mathcal{G}^{(0)}$ are defined by $r((x,n,x'))=x$ and $s((x,n,x'))=x'$, and the product and inverse operators by $(x,n,x')(x',n',x'')=(x,n+n',x'')$ and $(x,n,x')^{-1}=(x',-n,x)$. We let $c:\mathcal{G}_X\to\mathbb{Z}$ be the map defined by $c((x,n,x'))=n$. There is a topology on $\mathcal{G}_X$ that has a basis consisting of sets of the form \begin{equation*} \{(x,i-j,x'):x\in U,\ x'\in U',\ \osh^i(x)=\osh^j(x')\} \end{equation*} where $i,j\in\mathbb{N}_0$ and $U$ and $U'$ are open subsets such that $\osh^i$ restricted to $U$ is injective, $\osh^j$ restricted to $U'$ is injective, and $\osh^i(U)=\osh^j(U')$. If we identify $X$ with the subspace $\{(x,0,x):x\in X\}$ of $\mathcal{G}_X$, then the topology of $X$ coincides with the subspace topology. With the topology described above, $\mathcal{G}_X$ is an \emph{ample} Hausdorff groupoid, i.e., the product and inverse operators are continuous and the topology is Hausdorff and has a basis of compact open bisections (a subset $A$ of a groupoid $\mathcal{G}$ is a \emph{bisection} if both the restriction of the range map and the restriction of the source map to $A$ are injective). 
In particular, $\mathcal{G}_X$ is étale (i.e., the range and source maps are local homeomorphisms). As in \cite{MM}, we let $\hom(\mathcal{G}_X,\mathbb{Z})$ be the set of continuous maps $\omega:\mathcal{G}_X\to\mathbb{Z}$ such that $\omega(\eta^{-1})=-\omega(\eta)$ for $\eta\in\mathcal{G}_X$ and $\omega(\eta_1\eta_2)=\omega(\eta_1)+\omega(\eta_2)$ for $\eta_1,\eta_2\in\mathcal{G}_X$ with $s(\eta_1)=r(\eta_2)$. For $f\in C(X,\mathbb{Z})$, the map $\partial(f):\mathcal{G}_X\to\mathbb{Z}$ defined by $\partial(f)(\eta)=f(r(\eta))-f(s(\eta))$ belongs to $\hom(\mathcal{G}_X,\mathbb{Z})$. As in \cite{MM}, we denote by $H^1(\mathcal{G}_X)$ the group \begin{equation*} H^1(\mathcal{G}_X):=\hom(\mathcal{G}_X,\mathbb{Z})/\{\partial(f):f\in C(X,\mathbb{Z})\} \end{equation*} with addition defined by $[f]+[g]=[f+g]$. We shall in Proposition \ref{erik} see that there is an isomorphism $\Phi:H^1(\mathcal{G}_X)\to H^X$ such that $\Phi([f])=[g]$, where $g\in C(X,\mathbb{Z})$ is given by $g(x)=f((x,1,\osh(x)))$, and $\Phi([f])\in H^X_+$ if and only if $f((x,\operatorname{lp}(x),x))\ge 0$ for every eventually periodic point $x\in X$, cf.~\cite[Proposition 3.4]{MM}. A \emph{homomorphism} between two topological groupoids $\mathcal{G}_1$ and $\mathcal{G}_2$ is a continuous map $\phi:\mathcal{G}_1\to\mathcal{G}_2$ such that $\phi(\eta^{-1})=\phi(\eta)^{-1}$ for every $\eta\in\mathcal{G}_1$, and $\phi(\eta_1)\phi(\eta_2)$ is defined and equal to $\phi(\eta_1\eta_2)$ for all $\eta_1,\eta_2\in\mathcal{G}_1$ for which $\eta_1\eta_2$ is defined. An \emph{isomorphism} between two topological groupoids $\mathcal{G}_1$ and $\mathcal{G}_2$ is a bijective homomorphism $\phi:\mathcal{G}_1\to\mathcal{G}_2$ such that $\phi^{-1}:\mathcal{G}_2\to\mathcal{G}_1$ is also a homomorphism. 
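The algebraic structure of $\mathcal{G}_X$ (product, inverse, and the canonical cocycle $c$) can be sketched in a few lines of code. This is a hypothetical illustration of the algebra only: elements are plain triples $(x,n,x')$, the points are stand-in labels, and the topology and the tail-equivalence condition relating $x$ and $x'$ are ignored.

```python
def compose(g1, g2):
    """Groupoid product: (x, n, x')(x', n', x'') = (x, n + n', x'')."""
    x, n, x1 = g1
    y, m, x2 = g2
    if x1 != y:                      # composable only when s(g1) = r(g2)
        raise ValueError("not composable")
    return (x, n + m, x2)

def inverse(g):
    """Groupoid inverse: (x, n, x')^{-1} = (x', -n, x)."""
    x, n, x1 = g
    return (x1, -n, x)

def c(g):
    """The canonical cocycle c((x, n, x')) = n."""
    return g[1]

g1 = ("x", 2, "y")
g2 = ("y", -1, "z")
assert compose(g1, g2) == ("x", 1, "z")
assert c(compose(g1, g2)) == c(g1) + c(g2)        # c is additive
assert compose(g1, inverse(g1)) == ("x", 0, "x")  # a unit of the groupoid
```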
\section{Orbit equivalence and flow equivalence for general shift spaces} One of the goals of this paper is to show that continuous orbit equivalence implies flow equivalence for shifts of finite type, and thereby generalise \cite[Theorem 3.5]{MM} from irreducible one-sided Markov shifts to general (possibly reducible) shifts of finite type. In this section we prove Proposition~\ref{diego}, which gives sufficient conditions for when continuous orbit equivalence implies flow equivalence for general shift spaces. These conditions are related to the preordered cohomology groups of one-sided shift spaces introduced in Section~\ref{cohomology} (see the discussion right after Remark~\ref{Olga}). As a corollary (Corollary~\ref{cor:scoe}), we generalise \cite[Theorem 5.5]{Mat2} and show that for general shift spaces, strongly continuous orbit equivalence implies two-sided conjugacy. In this paper, we only apply Proposition~\ref{diego} to shifts of finite type, but we hope that it can also be used to prove that orbit equivalence implies flow equivalence for other classes of shift spaces. Our strategy for proving Proposition~\ref{diego} is to use techniques and ideas related to those used in \cite{MM} and \cite{MM2} to construct a discrete flow equivalence from a continuous orbit equivalence satisfying the conditions of Proposition~\ref{diego}, and then construct a flow equivalence from the discrete flow equivalence. Since we work with shift spaces that might not be irreducible and might contain isolated points, we have to modify the approach of \cite{MM} and \cite{MM2} a bit. \subsection{A sufficient condition for flow equivalence} Let $X$ and $Y$ be two one-sided shift spaces and let $h:X\to Y$ be a continuous orbit equivalence. We say that $h$ \emph{maps eventually periodic points to eventually periodic points} if $h(x)$ is eventually periodic exactly when $x$ is eventually periodic.
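A degenerate but instructive instance of \eqref{eq:1}: any conjugacy $h$ is a continuous orbit equivalence with the constant cocycles $k\equiv 0$ and $l\equiv 1$, since then \eqref{eq:1} reduces to $h\circ\osh=\osh[Y]\circ h$. The sketch below (ours, not from the paper) checks this numerically for a $1$-block conjugacy, namely a relabelling of symbols, by comparing finitely many coordinates.

```python
RELABEL = {0: "a", 1: "b"}

def shift(x):
    """sigma: drop the first coordinate."""
    return lambda i: x(i + 1)

def h(x):
    """A 1-block code (a relabelling of symbols), hence a conjugacy."""
    return lambda i: RELABEL[x(i)]

def agree(y1, y2, coords=50):
    """Compare two sequences on their first `coords` coordinates."""
    return all(y1(i) == y2(i) for i in range(coords))

x = lambda i: (i % 3) % 2            # the point 0 1 0 0 1 0 0 1 0 ...
# eq. (1) with k(x) = 0 and l(x) = 1:
#   sigma_Y^0(h(sigma(x))) = sigma_Y^1(h(x))
assert agree(h(shift(x)), shift(h(x)))
```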
\begin{remark} Matsumoto and Matui prove in \cite[Proposition 3.5]{MM2} that if $X$ and $Y$ are the one-sided shift spaces associated with two irreducible square $\{0,1\}$-matrices that satisfy Condition (I) introduced by Cuntz and Krieger in \cite{CK}, then any continuous orbit equivalence between $X$ and $Y$ maps eventually periodic points to eventually periodic points. By inspecting the proof, one sees that it actually holds for any pair of one-sided shift spaces $X$ and $Y$ with the property that the complement of the set of eventually periodic points is dense. We prove in Proposition~\ref{sven} that any continuous orbit equivalence between shifts of finite type maps eventually periodic points to eventually periodic points. We do not know if there are continuous orbit equivalences between one-sided shift spaces that do not map eventually periodic points to eventually periodic points. \end{remark} Let $X$ and $Y$ be two one-sided shift spaces and let $h:X\to Y$ be a continuous orbit equivalence that maps eventually periodic points to eventually periodic points. We say that an $h$-cocycle pair $(k,l)$ is \emph{least period preserving} if $$\operatorname{lp}(h(x))=\sum_{i=0}^{\operatorname{lp}(x)-1}\bigl(l(\osh^i(x))-k(\osh^i(x))\bigr)$$ for every eventually periodic point $x\in X$ (this terminology is justified by Proposition~\ref{john}). \begin{proposition}\label{diego} Let $X$ and $Y$ be two one-sided shift spaces and suppose that $h:X\to Y$ is a continuous orbit equivalence that maps eventually periodic points to eventually periodic points, that $(k,l)$ is a least period preserving $h$-cocycle pair, that $(k',l')$ is a least period preserving $h^{-1}$-cocycle pair, and that $b:X\to\mathbb{Z}$, $n:X\to\mathbb{N}_0$, $b':Y\to\mathbb{Z}$ and $n':Y\to\mathbb{N}_0$ are continuous maps such that $l(x)-k(x)=n(x)+b(x)-b(\osh(x))$ and $l'(y)-k'(y)=n'(y)+b'(y)-b'(\osh[Y](y))$ for $x\in X$ and $y\in Y$. Then $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent.
\end{proposition} The rest of this section contains the proof of Proposition~\ref{diego}. We assume in the rest of this section that $X$, $Y$, $h$, $k$, $l$, $k'$, $l'$, $b$, $b'$, $n$, and $n'$ are as specified in the proposition. We shall construct an explicit flow equivalence $\psi:S\mathbf{X}\to S\mathbf{Y}$ from this data. We begin by constructing a continuous map $\varphi:X\rightarrow Y$ and establishing some properties of it in Claim~\ref{lemma1} and Claim~\ref{yap}. In Claim~\ref{lemma_n} we show that the map $n$ satisfies a condition which we need in order to construct a continuous map $\boldsymbol{\varphi}:\mathbf{X}\to \mathbf{Y}$ in Claim~\ref{proposition1}. We then prove some properties of $\boldsymbol{\varphi}$ in Claim~\ref{lemma2}, Claim~\ref{robert} and Claim~\ref{periodic}, before constructing, for each $\mathbf{x}\in\mathbf{X}$, an increasing piecewise linear homeomorphism $r_\mathbf{x}:\mathbb{R}\to\mathbb{R}$. In Claim~\ref{harrison} we establish a relationship between $r_\mathbf{x}$ and $r_{\tsh^p(\mathbf{x})}$, before finally showing in Claim~\ref{super} that there is a flow equivalence $\psi:S\mathbf{X}\to S\mathbf{Y}$ given by $\psi([(\mathbf{x},t)])=[(\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t))]$. Since $l$ and $b$ are bounded, we can, by adding a constant to $b$ if necessary, assume that $b(x)\ge l(x)$ for every $x\in X$. Similarly, we can assume that $b'(y)\ge l'(y)$ for every $y\in Y$. We let $\varphi:X\rightarrow Y$ be the continuous map defined by \begin{equation}\label{function_1} \varphi(x)=\osh[Y]^{b(x)}(h(x)) \end{equation} for $x\in X$. \begin{claim}\label{lemma1} The function $\varphi$ defined in (\ref{function_1}) is finite-to-one, i.e., $|\varphi^{-1}(y)|<\infty$ for every $y\in Y$. \end{claim} \begin{proof} Recall that $h$ is a homeomorphism and $b$ is bounded. Let $j\in\mathbb{N}_0$ be such that $0\leq b(x)\leq j$ for every $x\in X$.
For $y\in Y$ we have that $$\varphi^{-1}(y)\subseteq \bigcup_{i=0}^j h^{-1}(\osh[Y]^{-i}(y))\,.$$ Since $\osh[Y]^{i}$ is finite-to-one, so is $\osh[Y]^i\circ h$. It follows that $\varphi$ is finite-to-one. \end{proof} \begin{claim}\label{yap} For $x\in X$ we have that \begin{equation}\label{equation_2}\varphi(\osh(x))=\osh[Y]^{n(x)}(\varphi(x))\,. \end{equation} \end{claim} \begin{proof} Since $b(\osh(x))=n(x)-l(x)+k(x)+b(x)$, $n(x)-l(x)+b(x)\ge b(x)-l(x)\geq 0$, and $\osh[Y]^{k(x)}(h(\osh(x)))=\osh[Y]^{l(x)}(h(x))$, it follows that \begin{align*} \varphi(\osh(x))& =\osh[Y]^{b(\osh(x))}(h(\osh(x)))=\osh[Y]^{n(x)-l(x)+k(x)+b(x)}(h(\osh(x)))\\ & =\osh[Y]^{n(x)-l(x)+b(x)}(\osh[Y]^{k(x)}(h(\osh(x))))=\osh[Y]^{n(x)-l(x)+b(x)}(\osh[Y]^{l(x)}(h(x))) \\ & = \osh[Y]^{n(x)+b(x)}(h(x))=\osh[Y]^{n(x)}(\varphi(x))\,.\qedhere \end{align*} \end{proof} For $j\in\mathbb{N}$ and $x\in X$, we set $n^j(x):=\sum_{i=1}^{j}n(\osh^{i-1}(x))$ and $n^0(x):=0$. Observe that then \begin{equation}\label{eq:56} \varphi(\osh^j(x))=\varphi(\osh(\osh^{j-1}(x)))=\osh[Y]^{n(\osh^{j-1}(x))}(\varphi(\osh^{j-1}(x)))=\cdots=\osh[Y]^{n^j(x)}(\varphi(x))\,, \end{equation} by an iteration of (\ref{equation_2}). \begin{claim}\label{lemma_n} Given $\mathbf{x}\in \mathbf{X}$ and $i_0\in\mathbb{Z}$, there exist $i,j\in \mathbb{Z}$ such that $i<i_0$ and $n(\mathbf{x}_{[i,\infty)})\neq 0$ and $j>i_0$ and $n(\mathbf{x}_{[j,\infty)})\neq 0$. \end{claim} \begin{proof} We first show that $n(\mathbf{x}_{[j,\infty)})\neq 0$ for some $j>i_0$. Assume, for contradiction, that $n(\mathbf{x}_{[j,\infty)})=0$ for every $j>i_0$. Then $n^{j-i_0}(\mathbf{x}_{[i_0,\infty)})=\sum_{i=i_0}^{j-1}n(\mathbf{x}_{[i,\infty)})=0$ for every $j>i_0$. 
An application of \eqref{eq:56} gives us that $$\varphi(\mathbf{x}_{[j,\infty)})=\varphi(\osh^{j-i_0}(\mathbf{x}_{[i_0,\infty)}))=\osh[Y]^{n^{j-i_0}(\mathbf{x}_{[i_0,\infty)})}(\varphi(\mathbf{x}_{[i_0,\infty)}))=\varphi(\mathbf{x}_{[i_0,\infty)})$$ for every $j>i_0$, and since $\varphi$ is finite-to-one (Claim~\ref{lemma1}), it follows that $\{\mathbf{x}_{[j,\infty)}:j>i_0\}$ is finite, and thus that $\mathbf{x}_{[j,\infty)}$ is periodic for some $j>i_0$. But then $$\operatorname{lp}(h(\mathbf{x}_{[j,\infty)}))=\sum_{i=0}^{\operatorname{lp}(\mathbf{x}_{[j,\infty)})-1}\left(l(\mathbf{x}_{[i+j,\infty)})-k(\mathbf{x}_{[i+j,\infty)})\right)=\sum_{i=0}^{\operatorname{lp}(\mathbf{x}_{[j,\infty)})-1}n(\mathbf{x}_{[i+j,\infty)})=0\ ,$$ which cannot be the case. Similarly, if $n(\mathbf{x}_{[i,\infty)})= 0$ for every $i<i_0$, then $$\varphi(\mathbf{x}_{[i_0,\infty)})=\varphi(\osh^{i_0-i}(\mathbf{x}_{[i,\infty)}))=\osh[Y]^{n^{i_0-i}(\mathbf{x}_{[i,\infty)})}(\varphi(\mathbf{x}_{[i,\infty)}))=\varphi(\mathbf{x}_{[i,\infty)})$$ for every $i<i_0$, and since $\varphi$ is finite-to-one, it follows that $\{\mathbf{x}_{[i,\infty)}:i<i_0\}$ is finite, and thus that $\mathbf{x}$ is periodic. It follows from the first part of the proof that there is an $i\in \mathbb{N}_0$ such that $n(\mathbf{x}_{[i,\infty)})\neq 0$, but since $\mathbf{x}$ is periodic, there is a $j\in\mathbb{N}$ with $-j<i_0$ such that $\mathbf{x}_{[-j,\infty)}=\mathbf{x}_{[i,\infty)}$, from which it follows that $n(\mathbf{x}_{[-j,\infty)})=n(\mathbf{x}_{[i,\infty)})\neq 0$.
\end{cases} \end{equation*} Then $m_\mathbf{x}:\mathbb{Z}\to\mathbb{Z}$ is a weakly increasing function (i.e., $m_\mathbf{x}(i)\le m_\mathbf{x}(j)$ if $i<j$), and it follows from Claim~\ref{lemma_n} that $m_\mathbf{x}(j)\to\pm\infty$ for $j\to\pm\infty$. It is straightforward to check that if $\mathbf{x}\in\mathbf{X}$ and $i,j\in\mathbb{Z}$, then \begin{equation}\label{bobs} m_\mathbf{x}(i+j)=m_\mathbf{x}(i)+m_{\tsh^i(\mathbf{x})}(j). \end{equation} \begin{claim}\label{proposition1} There is a continuous map $\boldsymbol{\varphi}:\mathbf{X}\to \mathbf{Y}$ such that $\boldsymbol{\varphi}(\mathbf{x})_{[m_\mathbf{x}(-i),\infty)}=\varphi(\mathbf{x}_{[-i,\infty)})$ for $i\in\mathbb{N}_0$. \end{claim} \begin{proof} Let $\mathbf{x}\in \mathbf{X}$. Since $m_\mathbf{x}(-i)\to -\infty$ for $i\to \infty$, it follows that there is at most one $\mathbf{y}\in\mathbf{Y}$ such that $\mathbf{y}_{[m_\mathbf{x}(-i),\infty)}=\varphi(\mathbf{x}_{[-i,\infty)})$ for $i\in\mathbb{N}_0$. That there is such a $\mathbf{y}\in\mathbf{Y}$ follows from the fact that \begin{equation*} \osh[Y]^{n(\mathbf{x}_{[-i-1,\infty)})}(\varphi(\mathbf{x}_{[-i-1,\infty)})) =\varphi(\osh(\mathbf{x}_{[-i-1,\infty)})) =\varphi(\mathbf{x}_{[-i,\infty)}) \, \end{equation*} for $i\in\mathbb{N}_0$. Since, for fixed $i\in\mathbb{N}_0$, the function $\mathbf{x}\mapsto m_\mathbf{x}(-i)$ is a continuous and thus locally constant function from $\mathbf{X}$ to $\mathbb{Z}$, and $\varphi$ is continuous, it follows that $\boldsymbol{\varphi}$ is continuous. \end{proof} \begin{claim}\label{lemma2} $\tsh[\mathbf{Y}]^{m_\mathbf{x}(j)}(\boldsymbol{\varphi}(\mathbf{x}))=\boldsymbol{\varphi}(\tsh^j(\mathbf{x}))$ for $\mathbf{x}\in\mathbf{X}$ and $j\in\mathbb{Z}$. \end{claim} \begin{proof} Let $\mathbf{x}'\in\mathbf{X}$ and $i,j'\in\mathbb{N}_0$. 
It follows from \eqref{bobs} that \begin{align*} (\tsh[\mathbf{Y}]^{-m_{\mathbf{x}'}(j')}(\boldsymbol{\varphi}(\tsh^{j'}(\mathbf{x}'))))_{[m_{\mathbf{x}'}(-i),\infty)} &=(\boldsymbol{\varphi}(\tsh^{j'}(\mathbf{x}')))_{[m_{\mathbf{x}'}(-i)-m_{\mathbf{x}'}(j'),\infty)}\\ &=(\boldsymbol{\varphi}(\tsh^{j'}(\mathbf{x}')))_{[m_{\tsh^{j'}(\mathbf{x}')}(-i-j'),\infty)}\\ &=\varphi((\tsh^{j'}(\mathbf{x}')_{[-i-j',\infty)}))\\ &=\varphi(\mathbf{x}'_{[-i,\infty)})\\ &=\boldsymbol{\varphi}(\mathbf{x}')_{[m_{\mathbf{x}'}(-i),\infty)}. \end{align*} Thus, \begin{equation}\label{eq:33} \tsh[\mathbf{Y}]^{-m_{\mathbf{x}'}(j')}(\boldsymbol{\varphi}(\tsh^{j'}(\mathbf{x}')))=\boldsymbol{\varphi}(\mathbf{x}'). \end{equation} If $j\ge 0$, then an application of \eqref{eq:33} with $\mathbf{x}'=\mathbf{x}$ and $j'=j$ gives us that $\tsh[\mathbf{Y}]^{m_\mathbf{x}(j)}(\boldsymbol{\varphi}(\mathbf{x}))=\boldsymbol{\varphi}(\tsh^j(\mathbf{x}))$, and if $j<0$, then an application of \eqref{eq:33} with $\mathbf{x}'=\tsh^{j}(\mathbf{x})$ and $j'=-j$ gives us together with \eqref{bobs} that \begin{equation*} \boldsymbol{\varphi}(\tsh^{j}(\mathbf{x})) =\tsh[\mathbf{Y}]^{-m_{\tsh^{j}(\mathbf{x})}(-j)}(\boldsymbol{\varphi}(\tsh^{-j}(\tsh^{j}(\mathbf{x})))) =\tsh[\mathbf{Y}]^{-m_{\tsh^{j}(\mathbf{x})}(-j)}(\boldsymbol{\varphi}(\mathbf{x})) =\tsh[\mathbf{Y}]^{m_{\mathbf{x}}(j)}(\boldsymbol{\varphi}(\mathbf{x})). 
\end{equation*} \end{proof} Similarly to how we constructed $\varphi$, $m_\mathbf{x}$ and $\boldsymbol{\varphi}$, we can for each $\mathbf{y}\in\mathbf{Y}$ construct a weakly increasing function $m'_\mathbf{y}:\mathbb{Z}\to\mathbb{Z}$ and continuous functions $\varphi':Y\to X$ and $\boldsymbol{\varphi}':\mathbf{Y}\to\mathbf{X}$ such that \begin{equation*} m'_\mathbf{y}(j)= \begin{cases} -\sum_{i=1}^{-j} n'(\mathbf{y}_{[-i,\infty)})&\text{if }j<0,\\ 0&\text{if }j=0,\\ \sum_{i=0}^{j-1} n'(\mathbf{y}_{[i,\infty)})&\text{if }j>0, \end{cases} \end{equation*} $\varphi'(y)=\osh^{b'(y)}(h^{-1}(y))$, and $\boldsymbol{\varphi}'(\mathbf{y})_{[m'_\mathbf{y}(-i),\infty)}=\varphi'(\mathbf{y}_{[-i,\infty)})$ for $y\in Y$, $\mathbf{y}\in\mathbf{Y}$, $i\in\mathbb{N}_0$, and $j\in\mathbb{Z}$. For $j\in\mathbb{N}$ and $y\in Y$, we set $(n')^j(y):=\sum_{i=1}^{j}n'(\osh[Y]^{i-1}(y))$ and $(n')^0(y):=0$. \begin{claim} \label{robert} Given $\mathbf{x}\in\mathbf{X}$ and $\mathbf{y}\in\mathbf{Y}$, there exist $d,d'\in\mathbb{Z}$ such that $\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))=\tsh^d(\mathbf{x})$ and $\boldsymbol{\varphi}(\boldsymbol{\varphi}'(\mathbf{y}))=\tsh[\mathbf{Y}]^{d'}(\mathbf{y})$. \end{claim} \begin{proof} Let $\mathbf{x}\in\mathbf{X}$. 
Then we have for $j\in\mathbb{N}_0$ that \begin{align*} \boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))_{[m'_{\boldsymbol{\varphi}(\mathbf{x})}(m_\mathbf{x}(-j)),\infty)} &=\varphi'(\boldsymbol{\varphi}(\mathbf{x})_{[m_\mathbf{x}(-j),\infty)})\\ &=\varphi'(\varphi(\mathbf{x}_{[-j,\infty)}))\\ &=\varphi'(\osh[Y]^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)})))\\ &=\osh^{(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))}(\varphi'(h(\mathbf{x}_{[-j,\infty)})))\\ &=\osh^{(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))+b'(h(\mathbf{x}_{[-j,\infty)}))}(\mathbf{x}_{[-j,\infty)})\\ &=\mathbf{x}_{[-j+(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))+b'(h(\mathbf{x}_{[-j,\infty)})),\infty)}. \end{align*} Let us first set $d=(n')^{b(\mathbf{x}_{[0,\infty)})}(h(\mathbf{x}_{[0,\infty)}))+b'(h(\mathbf{x}_{[0,\infty)}))$. By letting $j=0$ we see that $\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))_{[0,\infty)}=\mathbf{x}_{[d,\infty)}$. Since $m_\mathbf{x}(-j)\to -\infty$ as $j\to\infty$, it follows that $$m'_{\boldsymbol{\varphi}(\mathbf{x})}(m_\mathbf{x}(-j))\to -\infty \text{ as } j\to\infty,$$ and since $b$ and $b'$ are bounded functions, and $(n')^i$ is bounded for each $i\in\mathbb{N}_0$, we get that $$-j+(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))+b'(h(\mathbf{x}_{[-j,\infty)}))\to -\infty \text{ as } j\to\infty.$$ It follows that if $\mathbf{x}$ is periodic, then $\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))$ is also periodic, and $\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))=\tsh^d(\mathbf{x})$. Suppose then that $\mathbf{x}$ is not periodic. Then there is a $j\in\mathbb{N}_0$ such that $$\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))_{[m'_{\boldsymbol{\varphi}(\mathbf{x})}(m_\mathbf{x}(-j)),\infty)}=\mathbf{x}_{[-j+(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))+b'(h(\mathbf{x}_{[-j,\infty)})),\infty)},$$ is not periodic. 
It follows that if we now set $$d=-m'_{\boldsymbol{\varphi}(\mathbf{x})}(m_\mathbf{x}(-j))-j+(n')^{b(\mathbf{x}_{[-j,\infty)})}(h(\mathbf{x}_{[-j,\infty)}))+b'(h(\mathbf{x}_{[-j,\infty)})),$$ then $\boldsymbol{\varphi}'(\boldsymbol{\varphi}(\mathbf{x}))=\tsh^d(\mathbf{x})$. That there is, for $\mathbf{y}\in\mathbf{Y}$, a $d'\in\mathbb{Z}$ such that $\boldsymbol{\varphi}(\boldsymbol{\varphi}'(\mathbf{y}))=\tsh[\mathbf{Y}]^{d'}(\mathbf{y})$ can be proved in a similar way. \end{proof} \begin{claim} \label{periodic} Let $\mathbf{x}\in\mathbf{X}$. Then $\boldsymbol{\varphi}(\mathbf{x})$ is periodic if and only if $\mathbf{x}$ is, in which case $\operatorname{lp}(\boldsymbol{\varphi}(\mathbf{x}))=m_{\mathbf{x}}(\operatorname{lp}(\mathbf{x}))$. \end{claim} \begin{proof} Suppose $\mathbf{x}$ is periodic with period $p$. Since $m_\mathbf{x}(j)$ goes monotonically to $\infty$ as $j\to\infty$, it follows from \eqref{bobs} that $m_\mathbf{x}(p)\ne 0$ (if $m_\mathbf{x}(p)$ were $0$, then \eqref{bobs} and $\tsh^p(\mathbf{x})=\mathbf{x}$ would give $m_\mathbf{x}(kp)=0$ for every $k\in\mathbb{N}$). It thus follows from Claim~\ref{lemma2} that $\boldsymbol{\varphi}(\mathbf{x})$ is periodic with period $m_\mathbf{x}(p)$. Analogously, if $\boldsymbol{\varphi}(\mathbf{x})$ is periodic with period $q$, then $\mathbf{x}$ is periodic with period $m'_{\boldsymbol{\varphi}(\mathbf{x})}(q)$. Suppose again that $\mathbf{x}$ is periodic. Then $\mathbf{x}_{[0,\infty)}$ is also periodic. Since $h$ maps eventually periodic points to eventually periodic points, it follows that $h(\mathbf{x}_{[0,\infty)})$ is eventually periodic. It is clear that $\operatorname{lp}(\mathbf{x}_{[0,\infty)})=\operatorname{lp}(\mathbf{x})$ and $\operatorname{lp}(h(\mathbf{x}_{[0,\infty)}))=\operatorname{lp}(\boldsymbol{\varphi}(\mathbf{x}))$.
Since the $h$-cocycle pair $(k,l)$ is least period preserving, it follows that \begin{align*} \operatorname{lp}(\boldsymbol{\varphi}(\mathbf{x}))&=\operatorname{lp}(h(\mathbf{x}_{[0,\infty)})) =\sum_{i=0}^{\operatorname{lp}(\mathbf{x}_{[0,\infty)})-1}(l(\osh^i(\mathbf{x}_{[0,\infty)}))-k(\osh^i(\mathbf{x}_{[0,\infty)})))\\ &=\sum_{i=0}^{\operatorname{lp}(\mathbf{x})-1}n(\mathbf{x}_{[i,\infty)})=m_{\mathbf{x}}(\operatorname{lp}(\mathbf{x})).\qedhere \end{align*} \end{proof} Let $\mathbf{x}\in\mathbf{X}$. Let functions $i_\mathbf{x},j_\mathbf{x}:\mathbb{R}\to\mathbb{Z}$ be given by $i_\mathbf{x}(t):=\max\{i\le t:n(\mathbf{x}_{[i,\infty)})\ne 0\}$ and $j_\mathbf{x}(t):=\min\{j> t:n(\mathbf{x}_{[j,\infty)})\ne 0\}$ (it follows from Claim~\ref{lemma_n} that $i_\mathbf{x}(t)$ and $j_\mathbf{x}(t)$ are well-defined), and let \begin{equation*} r_\mathbf{x}(t):=m_\mathbf{x}(i_\mathbf{x}(t))+\frac{t-i_\mathbf{x}(t)}{j_\mathbf{x}(t)-i_\mathbf{x}(t)}n(\mathbf{x}_{[i_\mathbf{x}(t),\infty)}). \end{equation*} Then $r_\mathbf{x}:\mathbb{R}\to\mathbb{R}$ is an increasing piecewise linear homeomorphism such that $r_\mathbf{x}(i)=m_\mathbf{x}(i)$ for those $i\in\mathbb{Z}$ for which $n(\mathbf{x}_{[i,\infty)})\ne 0$. \begin{claim} \label{harrison} $r_\mathbf{x}(t+p)=r_{\tsh^p(\mathbf{x})}(t)+m_\mathbf{x}(p)$ for $\mathbf{x}\in\mathbf{X}$, $t\in\mathbb{R}$ and $p\in\mathbb{Z}$. \end{claim} \begin{proof} Since $i_\mathbf{x}(t+p)=i_{\tsh^p(\mathbf{x})}(t)+p$ and $j_\mathbf{x}(t+p)=j_{\tsh^p(\mathbf{x})}(t)+p$, it follows from \eqref{bobs} that \begin{equation*} r_\mathbf{x}(t+p)=r_{\tsh^p(\mathbf{x})}(t)+m_\mathbf{x}(i_{\tsh^p(\mathbf{x})}(t)+p)-m_{\tsh^p(\mathbf{x})}(i_{\tsh^p(\mathbf{x})}(t)) =r_{\tsh^p(\mathbf{x})}(t)+m_\mathbf{x}(p).\qedhere \end{equation*} \end{proof} It is now routine to construct a flow equivalence $\psi:S\mathbf{X}\to S\mathbf{Y}$ from $\boldsymbol{\varphi}$ and $r_\mathbf{x}$ (cf. \cite{BCE} and \cite{PS}). 
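The combinatorics of $m_\mathbf{x}$, the cocycle identity \eqref{bobs}, and Claim~\ref{harrison} can also be checked numerically. The sketch below is an informal illustration under simplifying assumptions: a point $\mathbf{x}$ is represented only through the integer sequence $i\mapsto n(\mathbf{x}_{[i,\infty)})$ (here chosen periodic and not identically zero, in accordance with Claim~\ref{lemma_n}), shifting the point corresponds to shifting this sequence, and the helper names are ours.

```python
import math

def make_m(n_at):
    """m_x(j) built from the values n_at(i) = n(x_[i,oo)),
    following the three-case definition in the text."""
    def m(j):
        if j > 0:
            return sum(n_at(i) for i in range(0, j))
        if j < 0:
            return -sum(n_at(-i) for i in range(1, -j + 1))
        return 0
    return m

def make_r(n_at):
    """The piecewise linear map r_x interpolating m_x between the
    integers i with n_at(i) != 0."""
    m = make_m(n_at)
    def i_x(t):  # max{i <= t : n_at(i) != 0}
        i = math.floor(t)
        while n_at(i) == 0:
            i -= 1
        return i
    def j_x(t):  # min{j > t : n_at(j) != 0}
        j = math.floor(t) + 1
        while n_at(j) == 0:
            j += 1
        return j
    def r(t):
        i, j = i_x(t), j_x(t)
        return m(i) + (t - i) / (j - i) * n_at(i)
    return r

n_at = lambda i: 2 if i % 3 == 0 else 0      # a sample periodic n-sequence
shifted = lambda p: (lambda k: n_at(p + k))  # n-sequence of tau^p(x)

m, r = make_m(n_at), make_r(n_at)

# the cocycle identity from the text: m_x(i + j) = m_x(i) + m_{tau^i(x)}(j)
assert all(m(i + j) == m(i) + make_m(shifted(i))(j)
           for i in range(-6, 7) for j in range(-6, 7))

# the relation of Claim (harrison): r_x(t + p) = r_{tau^p(x)}(t) + m_x(p)
assert all(math.isclose(r(t / 4 + p), make_r(shifted(p))(t / 4) + m(p))
           for t in range(-8, 9) for p in range(-6, 7))
```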
\begin{claim} \label{super} There is a flow equivalence $\psi:S\mathbf{X}\to S\mathbf{Y}$ such that $$\psi([(\mathbf{x},t)])=[(\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t))]$$ for $\mathbf{x}\in\mathbf{X}$ and $t\in\mathbb{R}$. \end{claim} \begin{proof} It follows from Claim~\ref{lemma2} and Claim~\ref{harrison} that \begin{align*} [(\boldsymbol{\varphi}(\tsh^p(\mathbf{x})),r_{\tsh^p(\mathbf{x})}(t))] &=[(\tsh[\mathbf{Y}]^{m_\mathbf{x}(p)}(\boldsymbol{\varphi}(\mathbf{x})),r_{\tsh^p(\mathbf{x})}(t))]\\ &=[(\boldsymbol{\varphi}(\mathbf{x}),r_{\tsh^p(\mathbf{x})}(t)+m_\mathbf{x}(p))]\\ &=[(\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t+p))]. \end{align*} It follows that there is a map $\psi:S\mathbf{X}\to S\mathbf{Y}$ such that $\psi([(\mathbf{x},t)])=[(\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t))]$ for $\mathbf{x}\in\mathbf{X}$ and $t\in\mathbb{R}$. We check that $\psi$ is injective. Suppose $\psi([(\mathbf{x},t)])=\psi([(\mathbf{x}',t')])$. Then there is a $p\in\mathbb{Z}$ such that $\boldsymbol{\varphi}(\mathbf{x})=\tsh[\mathbf{Y}]^p(\boldsymbol{\varphi}(\mathbf{x}'))$ and $r_\mathbf{x}(t)+p=r_{\mathbf{x}'}(t')$. It then follows from Claim~\ref{lemma2} and Claim~\ref{robert} that there is a $q\in\mathbb{Z}$ such that $\mathbf{x}'=\tsh^q(\mathbf{x})$. So $[(\mathbf{x}',t')]=[(\mathbf{x},s)]$ for some $s\in\mathbb{R}$. If $\mathbf{x}$ is not periodic, then $\boldsymbol{\varphi}(\mathbf{x})$ is not periodic either, so $\psi([(\mathbf{x},s)])=\psi([(\mathbf{x},t)])$ implies that $r_\mathbf{x}(s)=r_\mathbf{x}(t)$, and since $r_\mathbf{x}$ is injective, it follows that $s=t$ and thus that $[(\mathbf{x}',t')]=[(\mathbf{x},s)]=[(\mathbf{x},t)]$. Suppose that $\mathbf{x}$ is periodic. Then it follows from Claim~\ref{periodic} that $\boldsymbol{\varphi}(\mathbf{x})$ is periodic and that $\operatorname{lp}(\boldsymbol{\varphi}(\mathbf{x}))=m_\mathbf{x}(\operatorname{lp}(\mathbf{x}))$. 
So $\psi([(\mathbf{x},s)])=\psi([(\mathbf{x},t)])$ implies that $r_\mathbf{x}(s)=r_\mathbf{x}(t)+i\ m_\mathbf{x}(\operatorname{lp}(\mathbf{x}))$ for some $i\in\mathbb{Z}$. It follows from Claim~\ref{harrison} that $r_\mathbf{x}(t)+i\ m_\mathbf{x}(\operatorname{lp}(\mathbf{x}))=r_\mathbf{x}(t+i\operatorname{lp}(\mathbf{x}))$, and since $r_\mathbf{x}$ is injective, it follows that $s=t+i\operatorname{lp}(\mathbf{x})$ and thus that $[(\mathbf{x}',t')]=[(\mathbf{x},s)]=[(\mathbf{x},t+i\operatorname{lp}(\mathbf{x}))]=[(\mathbf{x},t)]$. Next, we show that $\psi$ is surjective. Let $[(\mathbf{y},s)]\in S\mathbf{Y}$. It follows from Claim~\ref{robert} that $[(\mathbf{y},s)]=[(\boldsymbol{\varphi}(\boldsymbol{\varphi}'(\mathbf{y})),r)]$ for some $r\in\mathbb{R}$. Since $r_{\boldsymbol{\varphi}'(\mathbf{y})}$ is surjective, it follows that there is a $t\in\mathbb{R}$ such that $\psi([(\boldsymbol{\varphi}'(\mathbf{y}),t)])=[(\boldsymbol{\varphi}(\boldsymbol{\varphi}'(\mathbf{y})),r)]=[(\mathbf{y},s)]$. Let us then show that $\psi$ is continuous. It suffices to show that the map $(\mathbf{x},t)\mapsto (\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t))$ is a continuous map from $\mathbf{X}\times\mathbb{R}$ to $\mathbf{Y}\times\mathbb{R}$. Let $(\mathbf{x}_i,t_i)$ be a sequence that converges to $(\mathbf{x},t)$ in $\mathbf{X}\times\mathbb{R}$. Then $\mathbf{x}_i\to\mathbf{x}$ in $\mathbf{X}$. Since $\boldsymbol{\varphi}$ is continuous, it follows that $\boldsymbol{\varphi}(\mathbf{x}_i)\to\boldsymbol{\varphi}(\mathbf{x})$. Since the map $n$ is continuous, it follows that there is an $M\in\mathbb{N}$ such that $i_{\mathbf{x}_i}(s)=i_\mathbf{x}(s)$ and $j_{\mathbf{x}_i}(s)=j_\mathbf{x}(s)$ for $i\ge M$ and $s\in (t-1,t+1)$, and thus that there is an $N\in\mathbb{N}$ such that $r_{\mathbf{x}_i}(s)=r_\mathbf{x}(s)$ for $i\ge N$ and $s\in (t-1,t+1)$. Since $r_\mathbf{x}$ is continuous, it follows that $r_{\mathbf{x}_i}(t_i)\to r_\mathbf{x}(t)$. 
Thus, $(\boldsymbol{\varphi}(\mathbf{x}_i),r_{\mathbf{x}_i}(t_i))\to (\boldsymbol{\varphi}(\mathbf{x}),r_\mathbf{x}(t))$. We have now shown that $\psi$ is bijective and continuous. Since $S\mathbf{X}$ is compact and $S\mathbf{Y}$ is Hausdorff, it follows that $\psi$ is a homeomorphism. Since $r_\mathbf{x}$ is an increasing homeomorphism from $\mathbb{R}$ to $\mathbb{R}$, it follows that $\psi$ maps flow lines onto flow lines in an orientation-preserving way. So $\psi$ is a flow equivalence. \end{proof} \subsection{Strongly continuous orbit equivalence} Following \cite{Mat2}, we say that two one-sided shift spaces $X$ and $Y$ are \emph{strongly continuous orbit equivalent} if there is a continuous orbit equivalence $h:X\to Y$, an $h$-cocycle pair $(k,l)$, and a continuous map $b:X\to\mathbb{Z}$ such that \begin{equation*} l(x)-k(x)=1+b(x)-b(\osh(x)) \end{equation*} for all $x\in X$. Matsumoto proved in \cite[Theorem 5.5]{Mat2} that if two irreducible topological Markov chains $X$ and $Y$ with no isolated points are strongly continuous orbit equivalent, then the corresponding two-sided shift spaces $\mathbf{X}$ and $\mathbf{Y}$ are conjugate. We now generalise this result to arbitrary shift spaces. \begin{corollary}\label{cor:scoe} If two shift spaces $X$ and $Y$ are strongly continuous orbit equivalent, then the corresponding two-sided shift spaces $\mathbf{X}$ and $\mathbf{Y}$ are conjugate. \end{corollary} \begin{proof} If $X$ and $Y$ are strongly continuous orbit equivalent, then we can choose the function $n:X\to\mathbb{N}$ in Proposition~\ref{diego} to be constantly equal to $1$. Then $m_\mathbf{x}(j)=j$, $i_\mathbf{x}(j)=j$, $j_\mathbf{x}(j)=j+1$, and $r_\mathbf{x}(j)=j$ for all $\mathbf{x}\in\mathbf{X}$ and all $j\in\mathbb{Z}$. Consequently, $\boldsymbol{\varphi}:\mathbf{X}\to\mathbf{Y}$ is a conjugacy.
\end{proof} \section{Orbit equivalence and flow equivalence for shifts of finite type} In this section we use Proposition~\ref{diego} to prove the following theorem. \begin{theorem}\label{ib} Suppose that $X$ and $Y$ are one-sided shifts of finite type and that they are continuously orbit equivalent. Then $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent. \end{theorem} If $X$ and $Y$ are irreducible, then the result of Theorem~\ref{ib} easily follows from \cite[Theorem 3.5]{MM} and the fact that every one-sided shift of finite type is conjugate to a one-sided topological Markov shift. \begin{remark} \label{Olga} It follows from \cite[Theorem 1.5]{BH} and \cite[Lemma 3.1]{MM} that if $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent, then there is an isomorphism from $H^X$ to $H^Y$ that maps $H^X_+$ onto $H^Y_+$. Theorem~\ref{ib} can therefore be seen as a generalisation of \cite[Theorem 3.5]{MM} (it will also follow directly from Proposition~\ref{sven} and Proposition~\ref{erik} that if $X$ and $Y$ are continuously orbit equivalent, then there is an isomorphism from $H^X$ to $H^Y$ that maps $H^X_+$ onto $H^Y_+$). \end{remark} To prove Theorem~\ref{ib}, we will prove that if $X$ and $Y$ are one-sided shifts of finite type and $h:X\to Y$ is a continuous orbit equivalence, then there exist functions $k$, $l$, $k'$, $l'$, $b$, $b'$, $n$, and $n'$ with the property specified in Proposition~\ref{diego}. We do this by closely following \cite{MM} and using the groupoid of a one-sided shift of finite type. However, since we are working with general shifts of finite type and not just irreducible shifts of finite type with no isolated periodic points as in \cite{MM}, we cannot simply follow the approach of \cite{MM}. In particular, the possibility that our shift spaces contain isolated periodic points implies that we need to make adjustments to the approach used in \cite{MM} (see Proposition~\ref{sven} and Remark~\ref{vm}).
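To illustrate the phenomenon that forces these adjustments, consider the simplest possible example: let $X=\{aaa\dots\}$ be the one-sided shift over the one-letter alphabet $\{a\}$, consisting of a single fixed point $x$. Then $x$ is isolated and periodic with $\operatorname{lp}(x)=1$, and $\mathcal{G}_X=\{(x,p,x):p\in\mathbb{Z}\}$ is, as a discrete groupoid, isomorphic to the group $\mathbb{Z}$. The map $\phi((x,p,x)):=(x,-p,x)$ is an isomorphism of $\mathcal{G}_X$ which restricts to the identity on the unit space, but $$\phi((x,\operatorname{lp}(x),x))=(x,-\operatorname{lp}(x),x),$$ so $\phi$ does not preserve least periods; this is exactly the behaviour at isolated periodic points described in Remark~\ref{vm}, and it cannot occur in the setting of \cite{MM}.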
The conditions in Proposition~\ref{diego} are equivalent to the condition that there is an isomorphism $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$, and such that $\phi$ induces an isomorphism from $H^Y$ to $H^X$ that maps the class of the constant function 1 into $H^X_+$. We show in Proposition~\ref{sven} that $h$ maps eventually periodic points to eventually periodic points and that there is an isomorphism $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$, and then we generalise \cite[Proposition 3.4]{MM} in Proposition~\ref{erik} and show that there is an isomorphism from $H^1(\mathcal{G}_X)$ to $H^X$ that maps the class of a function $f\in \hom(\mathcal{G}_X,\mathbb{Z})$ into $H^X_+$ if and only if $f((x,\operatorname{lp}(x),x))\ge 0$ for every eventually periodic point $x\in X$. From this we deduce in Proposition~\ref{john} that if $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ is an isomorphism with the above-mentioned properties, then there exist functions $k$, $l$, $k'$, $l'$, $b$, $b'$, $n$, and $n'$ with the property specified in Proposition~\ref{diego}. We end the section by putting it all together and giving the proof of Theorem~\ref{ib}. We begin with two lemmas which we need for the proof of Proposition~\ref{sven}. \begin{lemma}\label{kurt} Let $X$ be a one-sided shift of finite type. \begin{enumerate} \item If $x$ is an isolated point in $X$, then $x$ is eventually periodic.
\item If $x$ is not an isolated point in $X$, $U$ is an open neighbourhood of $x$, $W$ is an open subset of $X$, $\alpha:U\to W$ is a homeomorphism, $k,l:U\to\mathbb{N}_0$ are continuous, and $\osh^{k(x')}(\alpha(x'))=\osh^{l(x')}(x')$ for every $x'\in U$, then there is a unique $n\in\mathbb{Z}$ with the property that there exist $k_0,l_0\in\mathbb{N}_0$ and an open subset $V$ such that $n=l_0-k_0$, $x\in V\subseteq U$ and $\osh^{k_0}(\alpha(x'))=\osh^{l_0}(x')$ for every $x'\in V$. \end{enumerate} \end{lemma} \begin{proof} (1): Suppose $x$ is an isolated point in $X$. Because $X$ is a shift of finite type, there is an $m\in\mathbb{N}$ such that if $v\in\mathcal{L}(X)$ has length $m$ and $uv,vw\in\mathcal{L}(X)$, then $uvw\in\mathcal{L}(X)$. Choose $n$ such that $Z(x_{[0,n-1]})=\{x\}$. Since there are only finitely many words of length $m$ in $\mathcal{L}(X)$, it follows that there are $p,q\in\mathbb{N}$ such that $p\ge n$, $q-p\ge m$ and $x_{[p,p+m-1]}=x_{[q,q+m-1]}$. Since $q-p\ge m$, the infinite sequence $x_{[0,p-1]}x_{[p,q-1]}x_{[p,q-1]}x_{[p,q-1]}\dots$ belongs to $X$ and thus to $Z(x_{[0,n-1]})$, so it must be equal to $x$. This shows that $x$ is eventually periodic. (2): Let $x,U,W,k,l,\alpha$ be given as specified. We first show the existence of an $n\in\mathbb{Z}$, $k_0,l_0\in\mathbb{N}_0$ and an open subset $V$ such that $n=l_0-k_0$, $x\in V\subseteq U$ and $\osh^{k_0}(\alpha(x'))=\osh^{l_0}(x')$ for every $x'\in V$. Let $k_0:=k(x)$, $l_0:=l(x)$ and $n:=l_0-k_0$. Since $k,l:U\to\mathbb{N}_0$ are continuous, there is an open subset $V$ such that $x\in V\subseteq U$ and $\osh^{k_0}(\alpha(x'))=\osh^{l_0}(x')$ for every $x'\in V$. Suppose then that $n'\in\mathbb{Z}$, $k_0',l_0'\in\mathbb{N}_0$ and $V'$ is an open subset such that $n\ne n'=l_0'-k_0'$, $x\in V'\subseteq U$ and $\osh^{k_0'}(\alpha(x'))=\osh^{l_0'}(x')$ for every $x'\in V'$. Let $U':=V\cap V'$, $k_0'':=\max\{k_0,k_0'\}$, $h:=l_0+k_0''-k_0$ and $j:=l_0'+k_0''-k_0'$. 
Then $U'$ is open, $x\in U'\subseteq U$, $h\ne j$ and $\osh^h(x')=\osh^{k_0''}(\alpha(x'))=\osh^j(x')$ for every $x'\in U'$. Let $p=\max\{h,j\}$ and $q=\min\{h,j\}$. Then $p>q$ because $h\ne j$. Choose $r\ge p$ such that $Z(x_{[0,r-1]})\subseteq U'$. Then $x'=x_{[0,p-1]}x_{[q,p-1]}x_{[q,p-1]}\dots $ for every $x'\in Z(x_{[0,r-1]})$, but this contradicts the assumption that $x$ is not an isolated point in $X$. \end{proof} \begin{lemma}\label{henrik} Let $X$ and $Y$ be two one-sided shifts of finite type. Suppose $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ is an isomorphism and $h:X\to Y$ is a homeomorphism such that $\phi((x',0,x'))=(h(x'),0,h(x'))$ for all $x'\in X$. If $x\in X$ is eventually periodic, then $h(x)$ is eventually periodic and $\phi((x,\operatorname{lp}(x),x))$ is either equal to $(h(x),\operatorname{lp}(h(x)),h(x))$ or to $(h(x),-\operatorname{lp}(h(x)),h(x))$. If $x$ is not isolated in $X$, then $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$. \end{lemma} \begin{proof} The proof uses ideas from \cite[Lemma 3.3]{MM}. Suppose $x\in X$ is eventually periodic. Since $\phi$ is an isomorphism and $\phi((x',0,x'))=(h(x'),0,h(x'))$ for all $x'\in X$, it follows that $\phi((x,\operatorname{lp}(x),x))=(h(x),n,h(x))$ for some $n\in\mathbb{Z}$ different from 0. It follows that $h(x)$ is eventually periodic. Since $\phi$ is an isomorphism, it follows that either $n=\operatorname{lp}(h(x))$ or $n=-\operatorname{lp}(h(x))$. Suppose $n=-\operatorname{lp}(h(x))$. We will show that $x$ is then isolated in $X$. Choose $m\in\mathbb{N}$ such that if $v\in\mathcal{L}(X)$ has length $m$ and $uv,vw\in\mathcal{L}(X)$, then $uvw\in\mathcal{L}(X)$, and choose $r,s\in\mathbb{N}_0$ such that $r-s=\operatorname{lp}(x)$ and $\osh^r(x)=\osh^s(x)$. Then \begin{equation*} A:=\{(x'',\operatorname{lp}(x),x'):x''\in Z(x_{[0,r+m-1]}),\ x'\in Z(x_{[0,s+m-1]}),\ \osh^r(x'')=\osh^s(x')\} \end{equation*} is an open bisection containing $(x,\operatorname{lp}(x),x)$. 
It follows that $s(A)=Z(x_{[0,s+m-1]})$ and $r(A)=Z(x_{[0,r+m-1]})$ and the map $\alpha_A:s(A)\to r(A)$ defined by $\alpha_A(s(\xi))=r(\xi)$ for $\xi\in A$ is a homeomorphism (cf.~\cite[Proposition 3.3]{BCW}) such that \begin{equation}\label{eq:3} \alpha_A(x')=x_{[0,r+m-1]}\osh^{s+m}(x') \end{equation} for $x'\in s(A)$. Notice that $r(A)\subseteq s(A)$. It follows from \eqref{eq:3} that $\lim_{i\to\infty}\alpha_A^i(x')= x$ for all $x'\in s(A)$. Choose $m'\in\mathbb{N}$ such that if $v\in\mathcal{L}(Y)$ has length $m'$ and $uv,vw\in\mathcal{L}(Y)$, then $uvw\in\mathcal{L}(Y)$. Since $\phi((x,\operatorname{lp}(x),x))=(h(x),-\operatorname{lp}(h(x)),h(x))$, there is a $j\in\mathbb{N}$ such that $\osh[Y]^j(h(x))=\osh[Y]^{j+\operatorname{lp}(h(x))}(h(x))$, and such that the open bisection \begin{multline*} \{(y'',-\operatorname{lp}(h(x)),y'):y''\in Z(h(x)_{[0,j+m'-1]}),\\ y'\in Z(h(x)_{[0,j+\operatorname{lp}(h(x))+m'-1]}),\ \osh[Y]^j(y'')=\osh[Y]^{j+\operatorname{lp}(h(x))}(y')\} \end{multline*} is contained in $\phi(A)$. Let $y\in h(s(A))$. Then $\lim_{i\to\infty}\alpha_A^i(h^{-1}(y))= x$. It follows that there is an $I\in\mathbb{N}$ such that $h(\alpha_A^i(h^{-1}(y)))\in Z(h(x)_{[0,j+\operatorname{lp}(h(x))+m'-1]})$ for $i\ge I$. Let $y':=h(\alpha_A^I(h^{-1}(y)))$ and $y'':=h(x)_{[0,j-1]}\osh[Y]^{j+\operatorname{lp}(h(x))}(y')$. Then $(y'',-\operatorname{lp}(h(x)),y')\in\phi(A)$. It follows that $y''=h(\alpha_A(h^{-1}(y')))\in Z(h(x)_{[0,j+\operatorname{lp}(h(x))+m'-1]})$, and thus that $y'\in Z(h(x)_{[0,j+2\operatorname{lp}(h(x))+m'-1]})$. By repeating this argument, we see that $y'\in Z(h(x)_{[0,j+i\operatorname{lp}(h(x))+m'-1]})$ for all $i\in\mathbb{N}$. It follows that $y'=h(x)$ and thus that $y=h(x)$. This shows that $h(x)$ is isolated in $Y$. Since $h$ is a homeomorphism, it follows that $x$ is isolated in $X$. \end{proof} \begin{proposition}\label{sven} Let $X$ and $Y$ be two one-sided shifts of finite type and let $h:X\to Y$ be a continuous orbit equivalence.
Then $h$ maps eventually periodic points to eventually periodic points, and there is an isomorphism $\phi:\mathcal{G}_X\to \mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and such that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$. \end{proposition} \begin{proof} We begin by constructing the isomorphism $\phi:\mathcal{G}_X\to \mathcal{G}_Y$. We first define what $\phi(\eta)$ is when $s(\eta)$ is an isolated point in $X$, and then what $\phi(\eta)$ is when $s(\eta)$ is not an isolated point in $X$. For $x\in X$, let $[x]:=\{x'\in X:\exists\eta\in\mathcal{G}_X\text{ such that }r(\eta)=x\text{ and }s(\eta)=x'\}$. Notice that if $x'\in [x]$, then $x$ is isolated in $X$ if and only if $x'$ is. It follows from Lemma~\ref{kurt}(1) that if $x$ is an isolated point in $X$, then $[x]$ contains a periodic point. Choose for each $A\in\{[x]:x\text{ is an isolated point in } X\}$, a periodic point $x_A\in A$. For $x\in A$, let $j_x:=\min\{j\in\mathbb{N}_0:\osh^j(x)=x_A\}$. If the source of $\eta\in\mathcal{G}_X$ is an isolated point, then $r(\eta)$ is also an isolated point, $[r(\eta)]=[s(\eta)]$, and $j_{r(\eta)}-j_{s(\eta)}-c(\eta)=n\operatorname{lp}(x_{[r(\eta)]})$ for some $n\in\mathbb{Z}$. We write $n_\eta$ for this $n$. We similarly let $[y]:=\{y'\in Y:\exists\eta\in\mathcal{G}_Y\text{ such that }r(\eta)=y\text{ and }s(\eta)=y'\}$ for $y\in Y$, choose for each $A\in\{[y]:y\text{ is an isolated point in } Y\}$ a periodic point $y_A\in A$, let $j_y:=\min\{j\in\mathbb{N}_0:\osh[Y]^j(y)=y_A\}$ for $y\in A$, and let $n_\eta$ be the unique integer such that $j_{r(\eta)}-j_{s(\eta)}-c(\eta)=n_\eta\operatorname{lp}(y_{[r(\eta)]})$ for $\eta\in\mathcal{G}_Y$ with $s(\eta)$ an isolated point in $Y$. Let $\eta\in\mathcal{G}_X$ and suppose $s(\eta)$ is an isolated point in $X$.
Then $r(\eta)$ is also an isolated point in $X$, $h(r(\eta))$ and $h(s(\eta))$ are isolated points in $Y$, and $$(h(r(\eta)),j_{h(r(\eta))}-j_{h(s(\eta))}-n_\eta\operatorname{lp}(y_{[h(r(\eta))]}),h(s(\eta)))\in\mathcal{G}_Y.$$ We let $$\phi(\eta):=(h(r(\eta)),j_{h(r(\eta))}-j_{h(s(\eta))}-n_\eta\operatorname{lp}(y_{[h(r(\eta))]}),h(s(\eta))).$$ Suppose then that $s(\eta)$ is not an isolated point in $X$. Choose an open bisection $A$ such that $\eta\in A$. Then $s(A)$ and $r(A)$ are open in $X$ and the map $\alpha_A:s(A)\to r(A)$ defined by $\alpha_A(s(\xi))=r(\xi)$ for $\xi\in A$ is a homeomorphism with the property that there are continuous maps $k,l:s(A)\to\mathbb{N}_0$ such that $\osh^{k(x)}(\alpha_A(x))=\osh^{l(x)}(x)$ for every $x\in s(A)$ (cf.~\cite[Proposition 3.3]{BCW}). Since $h:X\to Y$ is a continuous orbit equivalence, it follows that there are a homeomorphism $\alpha'_A:h(s(A))\to h(r(A))$ such that $\alpha'_A(h(x))=h(\alpha_A(x))$ for $x\in s(A)$, and continuous maps $k',l':h(s(A))\to\mathbb{N}_0$ such that $\osh[Y]^{k'(y)}(\alpha'_A(y))=\osh[Y]^{l'(y)}(y)$ for every $y\in h(s(A))$ (cf.~the proof of \cite[Proposition 3.4]{BCW}). Since $h(s(\eta))$ is not an isolated point in $Y$, it follows from Lemma~\ref{kurt}(2) that there is a unique $n\in\mathbb{Z}$ such that $\osh[Y]^{k_0}(\alpha'_A(y))=\osh[Y]^{l_0}(y)$ for all $y$ in some open neighbourhood $V\subseteq h(s(A))$ of $h(s(\eta))$ and some $k_0,l_0\in\mathbb{N}_0$ satisfying $l_0-k_0=n$. Then $(h(r(\eta)),n,h(s(\eta)))\in\mathcal{G}_Y$.
Notice that $n$ does not depend on the particular choice of $A$ because if $B$ is another open bisection containing $\eta$, then $A\cap B$ is also an open bisection containing $\eta$ and $\alpha_A(x)=\alpha_{A\cap B}(x)=\alpha_B(x)$ for every $x\in s(A\cap B)$, so if $n'$ and $n''$ are integers such that $\osh[Y]^{k'_0}(\alpha'_B(y))=\osh[Y]^{l'_0}(y)$ for all $y$ in some open neighbourhood $V'\subseteq h(s(B))$ of $h(s(\eta))$ and some $k'_0,l'_0\in\mathbb{N}_0$ satisfying $l'_0-k'_0=n'$, and $\osh[Y]^{k''_0}(\alpha'_B(y))=\osh[Y]^{l''_0}(y)$ for all $y$ in some open neighbourhood $V''\subseteq h(s(A\cap B))$ of $h(s(\eta))$ and some $k''_0,l''_0\in\mathbb{N}_0$ satisfying $l''_0-k''_0=n''$, then it follows from the uniqueness of $n$, $n'$ and $n''$ that $n=n''=n'$. We let $\phi(\eta):=(h(r(\eta)),n,h(s(\eta)))$. We have now defined a map $\phi:\mathcal{G}_X\to \mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and such that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every isolated point in $X$. It is straightforward to check that $\phi$ is a bijection, that $\phi(\eta)^{-1}=\phi(\eta^{-1})$ for every $\eta\in\mathcal{G}_X$, and that $\phi(\eta_1\eta_2)=\phi(\eta_1)\phi(\eta_2)$ for $\eta_1,\eta_2\in\mathcal{G}_X$ with $s(\eta_1)=r(\eta_2)$. Since $x\in X$ is eventually periodic if and only if $\{\eta\in\mathcal{G}_X: s(\eta)=r(\eta)=x\}$ is infinite, and $h(x)$ is eventually periodic if and only if $\{\eta\in\mathcal{G}_Y: s(\eta)=r(\eta)=h(x)\}$ is infinite, it follows that $h$ maps eventually periodic points to eventually periodic points. We will now show that $\phi$ is continuous. We will do that by, for $\eta\in\mathcal{G}_X$ and an open neighbourhood $V$ of $\phi(\eta)$, constructing an open neighbourhood $U$ of $\eta$ such that $\phi(U)\subseteq V$.
If $s(\eta)$ is an isolated point in $X$, then $\eta$ is an isolated point in $\mathcal{G}_X$, so we can just take $U$ to be equal to $\{\eta\}$ in that case. Suppose that $s(\eta)$ is not an isolated point in $X$. Choose $m'\in\mathbb{N}$ such that if $v\in\mathcal{L}(Y)$ has length $m'$ and $uv,vw\in\mathcal{L}(Y)$, then $uvw\in\mathcal{L}(Y)$, and choose $k',l'\in\mathbb{N}$ such that $\osh[Y]^{k'}(r(\phi(\eta)))=\osh[Y]^{l'}(s(\phi(\eta)))$ and \begin{equation*} \phi(\eta)\in\{(r(\phi(\eta))_{[0,k'-1]}y,k'-l',s(\phi(\eta))_{[0,l'-1]}y):y\in Z(r(\phi(\eta))_{[k',k'+m'-1]})\}\subseteq V. \end{equation*} Then choose $m\in\mathbb{N}$ such that if $v\in\mathcal{L}(X)$ has length $m$ and $uv,vw\in\mathcal{L}(X)$, then $uvw\in\mathcal{L}(X)$, and choose $k,l\in\mathbb{N}$ such that $\osh^k(r(\eta))=\osh^l(s(\eta))$, $k-l=c(\eta)$, $h(Z(r(\eta)_{[0,k-1]}))\subseteq Z(r(\phi(\eta))_{[0,k'+m'-1]})$, and $h(Z(s(\eta)_{[0,l-1]}))\subseteq Z(s(\phi(\eta))_{[0,l'+m'-1]})$. Then \begin{equation*} A:=\{(r(\eta)_{[0,k-1]}x,k-l,s(\eta)_{[0,l-1]}x):x\in Z(r(\eta)_{[k,k+m-1]})\} \end{equation*} is a bisection that contains $\eta$. Choose an open neighbourhood $V'\subseteq h(s(A))$ of $h(s(\eta))$ and $n'\in\mathbb{N}_0$ such that $\osh[Y]^{k'+n'}(h(\alpha_A(h^{-1}(y))))=\osh[Y]^{l'+n'}(y)$ for all $y\in V'$, and $n\in\mathbb{N}_0$ such that $h(Z(s(\eta)_{[0,l+n-1]}))\subseteq V'$, and let \begin{equation*} U:=\{(r(\eta)_{[0,k+n-1]}x,k-l,s(\eta)_{[0,l+n-1]}x):x\in Z(r(\eta)_{[k+n,k+n+m-1]})\}. \end{equation*} Then $U$ is an open neighbourhood of $\eta$ such that $$\phi(\xi)\in \{(r(\phi(\eta))_{[0,k'-1]}y,k'-l',s(\phi(\eta))_{[0,l'-1]}y):y\in Z(r(\phi(\eta))_{[k',k'+m'-1]})\}\subseteq V$$ if $\xi\in U$ and $s(\xi)$ is not an isolated point in $X$. We claim that $\phi(\xi)\in V$ for $\xi\in U$ also when $s(\xi)$ is an isolated point in $X$. Suppose $\xi\in U$ and that $s(\xi)$ is an isolated point in $X$. If $x$ is an isolated point in $X$ and periodic, then $Z(x_{[0,m-1]})=\{x\}$.
It follows that $\osh^i(s(\xi))\ne x_{[s(\xi)]}$ for $i\in\{0,1,\dots,l+n-1\}$ and that $\osh^j(r(\xi))\ne x_{[s(\xi)]}$ for $j\in\{0,1,\dots,k+n-1\}$ because if $\osh^i(s(\xi))= x_{[s(\xi)]}$ and $i\in\{0,1,\dots,l+n-1\}$, then $\osh^i(s(\eta))\in Z((x_{[s(\xi)]})_{[0,m-1]})$, and if $\osh^j(r(\xi))= x_{[s(\xi)]}$ and $j\in\{0,1,\dots,k+n-1\}$, then $\osh^j(r(\eta))\in Z((x_{[s(\xi)]})_{[0,m-1]})$. Thus, $j_{r(\xi)}-j_{s(\xi)}=k-l$ and $n_\xi=0$. Similarly, $j_{h(r(\xi))}-j_{h(s(\xi))}=k'-l'$, so \begin{multline*} \phi(\xi)=(h(r(\xi)),k'-l',h(s(\xi)))\\\in \{(r(\phi(\eta))_{[0,k'-1]}y,k'-l',s(\phi(\eta))_{[0,l'-1]}y):y\in Z(r(\phi(\eta))_{[k',k'+m'-1]})\}\subseteq V. \end{multline*} This shows that $\phi$ is continuous. That $\phi^{-1}$ is continuous can be proved in a similar way. Thus, $\phi$ is an isomorphism such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and such that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every isolated point in $X$. Finally, it follows from Lemma~\ref{henrik} that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point that is not an isolated point in $X$. \end{proof} \begin{remark}\label{vm} Let $X$ and $Y$ be two one-sided shifts of finite type and let $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ be an isomorphism. Then the map $h:X\to Y$ given by $\phi((x,0,x))=(h(x),0,h(x))$ is a continuous orbit equivalence (see Proposition~\ref{john}), and it follows from Lemma~\ref{henrik} that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$ that is not isolated in $X$, but it might be the case that $\phi((x,\operatorname{lp}(x),x))=(h(x),-\operatorname{lp}(h(x)),h(x))$ if $x$ is isolated in $X$. 
\end{remark} We have shown in Proposition~\ref{sven} that if two one-sided shifts of finite type $X$ and $Y$ are continuously orbit equivalent by an orbit equivalence $h:X\to Y$, then there is an isomorphism $\phi:\mathcal{G}_X\to \mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and such that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$. From this, it is not difficult to construct the least period preserving cocycle pairs for $h$ and $h^{-1}$ we need to apply Proposition~\ref{diego} (see the proof of Proposition~\ref{john}). To construct the continuous maps $b:X\to\mathbb{Z}$, $n:X\to\mathbb{N}_0$, $b':Y\to\mathbb{Z}$ and $n':Y\to\mathbb{N}_0$ required to apply Proposition~\ref{diego}, we need the following proposition which is a generalisation of \cite[Proposition 3.4]{MM} to general shifts of finite type. The proof of Proposition~\ref{erik} is essentially identical to the proof of \cite[Proposition 3.4]{MM}, but we have included it for completeness. \begin{proposition}\label{erik} Let $X$ be a shift of finite type. Then there is an isomorphism $\Phi:H^1(\mathcal{G}_X)\to H^X$ such that $\Phi([f])=[g]$ where $g\in C(X,\mathbb{Z})$ is given by $g(x)=f((x,1,\osh(x)))$. Moreover, $\Phi([f])\in H^X_+$ if and only if $f((x,\operatorname{lp}(x),x))\ge 0$ for every eventually periodic point $x\in X$. \end{proposition} \begin{proof} It is straightforward to check that $[f]\mapsto [g]$ where $g\in C(X,\mathbb{Z})$ is given by $g(x)=f((x,1,\osh(x)))$, defines an isomorphism $\Phi:H^1(\mathcal{G}_X)\to H^X$, with inverse $\Phi^{-1}([g])((x,r-s,y))=\sum_{i=0}^{r-1}g(\osh^i(x))-\sum_{j=0}^{s-1}g(\osh^j(y))$ for every $(x,r-s,y)\in\mathcal{G}_X$. It is also easy to check that if $\Phi([f])\in H^X_+$, then $f((x,\operatorname{lp}(x),x))\ge 0$ for every eventually periodic point $x\in X$. 
Suppose $f\in\hom(\mathcal{G}_X,\mathbb{Z})$ and that $f((x,\operatorname{lp}(x),x))\ge 0$ for every eventually periodic point $x\in X$. We shall prove that $\Phi([f])\in H^X_+$ by constructing $h\in C(X,\mathbb{Z})$ such that $f((x,1,\osh(x)))+h(x)-h(\osh(x))\ge 0$ for all $x\in X$. Since $X$ is a shift of finite type and $f$ is continuous, there is an $m\ge 2$ such that if $v\in\mathcal{L}(X)$ has length $m$ and $uv,vw\in\mathcal{L}(X)$, then $uvw\in\mathcal{L}(X)$, and such that if $x_{[0,m]}=x'_{[0,m]}$, then $f((x,1,\osh(x)))=f((x',1,\osh(x')))$. Let $E=(E^0,E^1,r,s)$ be the finite directed graph with $E^0=\{v\in\mathcal{L}(X):v\text{ has length }m\}$, $E^1=\{w\in\mathcal{L}(X):w\text{ has length }m+1\}$, $s(w)=w_{[0,m-1]}$ and $r(w)=w_{[1,m]}$ for $w\in E^1$. Let $\omega:E^1\to\mathbb{Z}$ be defined by $\omega(w)=f((x,1,\osh(x)))$ for some $x\in X$ with $x_{[0,m]}=w$. Let $w^1w^2\dots w^n$ be a cycle on $E$ (so $w^1,w^2,\dots,w^n\in E^1$, $r(w^i)=s(w^{i+1})$ for $i=1,2,\dots,n-1$, and $r(w^n)=s(w^1)$). Then there is a periodic $x\in X$ such that $x_{i+kn}=w^{i+1}_1$ ($w^{i+1}_1$ is the first letter of $w^{i+1}$) for $i=0,1,\dots,n-1$ and $k\in\mathbb{N}_0$. It follows that there is a $j\in\mathbb{N}$ such that $n=j\operatorname{lp}(x)$ and that \begin{equation*} \sum_{i=1}^n\omega(w^i)=\sum_{i=1}^nf((\osh^{i-1}(x),1,\osh^i(x)))=jf((x,\operatorname{lp}(x),x))\ge 0. \end{equation*} It therefore follows from \cite[Proposition 3.3(2)]{BH} that there is a function $\kappa:E^0\to\mathbb{Z}$ such that $\omega(w)+\kappa(s(w))-\kappa(r(w))\ge 0$ for $w\in E^1$. Define $h:X\to\mathbb{Z}$ by $h(x)=\kappa(x_{[0,m-1]})$. Then $h\in C(X,\mathbb{Z})$ and \begin{equation*} f((x,1,\osh(x)))+h(x)-h(\osh(x))=\omega(x_{[0,m]})+\kappa(s(x_{[0,m]}))-\kappa(r(x_{[0,m]}))\ge 0 \end{equation*} for $x\in X$. \end{proof} \begin{proposition}\label{john} Let $X$ and $Y$ be two one-sided shifts of finite type and suppose that $\phi:\mathcal{G}_X\to\mathcal{G}_Y$ is an isomorphism.
Then there is a continuous orbit equivalence $h:X\to Y$, an $h$-cocycle pair $(k,l)$ such that \begin{equation}\label{eq:5} \begin{split} \phi\bigl((&x,r-s,x')\bigr)\\&=\left(h(x),\sum_{i=0}^{r-1}[l(\osh^i(x))-k(\osh^i(x))]-\sum_{j=0}^{s-1}[l(\osh^j(x'))-k(\osh^j(x'))],h(x')\right) \end{split} \end{equation} for $(x,r-s,x')\in\mathcal{G}_X$ with $\osh^r(x)=\osh^s(x')$, and an $h^{-1}$-cocycle pair $(k',l')$ such that \begin{equation}\label{eq:6} \begin{split} \phi^{-1}&\bigl((y,r'-s',y')\bigr)\\&=\left(h^{-1}(y),\sum_{i=0}^{r'-1}[l'(\osh[Y]^i(y))-k'(\osh[Y]^i(y))]-\sum_{j=0}^{s'-1}[l'(\osh[Y]^j(y'))-k'(\osh[Y]^j(y'))],h^{-1}(y')\right) \end{split} \end{equation} for $(y,r'-s',y')\in\mathcal{G}_Y$ with $\osh[Y]^{r'}(y)=\osh[Y]^{s'}(y')$. The $h$-cocycle pair $(k,l)$ and the $h^{-1}$-cocycle pair $(k',l')$ are least period preserving if and only if $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic $x\in X$, in which case there exist continuous maps $b,n:X\to\mathbb{N}_0$ and $b',n':Y\to\mathbb{N}_0$ such that $l(x)-k(x)=n(x)+b(x)-b(\osh(x))$ and $l'(y)-k'(y)=n'(y)+b'(y)-b'(\osh[Y](y))$ for $x\in X$ and $y\in Y$. \end{proposition} \begin{proof} The restriction of $\phi$ to $\mathcal{G}_X^{(0)}=X$ is a homeomorphism $h:X\to Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$. We shall prove that $h$ is a continuous orbit equivalence by constructing an $h$-cocycle pair $(k,l)$ satisfying \eqref{eq:5} and an $h^{-1}$-cocycle pair $(k',l')$ satisfying \eqref{eq:6}. Let $x\in X$ be a point that is not isolated in $X$, and let $A$ be an open bisection containing $(\osh(x),-1,x)$.
Then $\phi(A)$ is an open bisection containing $\phi((\osh(x),-1,x))$, so the map $\alpha_{\phi(A)}:s(\phi(A))\to r(\phi(A))$ defined by $\alpha_{\phi(A)}(s(\xi))=r(\xi)$ for $\xi\in \phi(A)$ is a homeomorphism with the property that there are continuous maps $k,l:s(\phi(A))\to\mathbb{N}_0$ such that $\osh[Y]^{k(y)}(\alpha_{\phi(A)}(y))=\osh[Y]^{l(y)}(y)$ for every $y\in s(\phi(A))$ (cf. \cite[Proposition 3.3]{BCW}). It follows from Lemma~\ref{kurt} and the fact that $Y$ is a locally compact and totally disconnected Hausdorff space that there is a clopen neighbourhood $V$ of $h(x)$ and $k_x,l_x\in\mathbb{N}_0$ such that $\osh[Y]^{k_x}(\alpha_{\phi(A)}(y))=\osh[Y]^{l_x}(y)$ for $y\in V$. Let $U_x=h^{-1}(V)$. Then $U_x$ is a clopen neighbourhood of $x$ and \begin{equation*} \osh[Y]^{k_x}(h(\osh(x'))) =\osh[Y]^{k_x}(\alpha_{\phi(A)}(h(x'))) =\osh[Y]^{l_x}(h(x')) \end{equation*} for all $x'\in U_x$. If $x\in X$ is isolated in $X$, then we let $U_x=\{x\}$ and choose $k_x,l_x\in\mathbb{N}_0$ such that $\osh[Y]^{k_x}(h(\osh(x)))=\osh[Y]^{l_x}(h(x))$ (we can do that because $\phi((\osh(x),-1,x))\in\mathcal{G}_Y$). Since $X$ is compact, it follows that there is a finite $F\subseteq X$ and mutually disjoint clopen sets $\{U'_x:x\in F\}$ such that $x\in U'_x\subseteq U_x$ for $x\in F$ and $\bigcup_{x\in F}U'_x=X$. If we define $k,l:X\to\mathbb{N}_0$ by setting $k(x)=k_{x'}$ and $l(x)=l_{x'}$ for $x\in U'_{x'}$, then $(k,l)$ is an $h$-cocycle pair satisfying \eqref{eq:5}. We can in a similar way construct an $h^{-1}$-cocycle pair $(k',l')$ satisfying \eqref{eq:6}.
It follows from \eqref{eq:5} that if $x\in X$ is eventually periodic, then \begin{equation*} \phi((x,\operatorname{lp}(x),x))=\left(h(x),\sum_{i=0}^{\operatorname{lp}(x)-1}(l(\osh^i(x))-k(\osh^i(x))),h(x)\right), \end{equation*} and it follows from \eqref{eq:6} that if $y\in Y$ is eventually periodic, then \begin{equation*} \phi^{-1}((y,\operatorname{lp}(y),y))=\left(h^{-1}(y),\sum_{i=0}^{\operatorname{lp}(y)-1}(l'(\osh[Y]^i(y))-k'(\osh[Y]^i(y))),h^{-1}(y)\right). \end{equation*} It follows that $(k,l)$ and $(k',l')$ are least period preserving if and only if $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic $x\in X$. It follows from Proposition~\ref{erik} and \eqref{eq:5} that there is an isomorphism $\Psi:H^Y\to H^X$ such that $\Psi([f])=[g]$ where $g\in C(X,\mathbb{Z})$ is given by \begin{equation*} g(x)=\sum_{i=0}^{l(x)-1}f(\osh[Y]^i(h(x)))-\sum_{j=0}^{k(x)-1}f(\osh[Y]^j(h(\osh(x)))). \end{equation*} Suppose $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic $x\in X$. It then follows from Proposition~\ref{erik} that $\Psi(H^Y_+)=H^X_+$. Let $1_Y$ be the function that sends every $y\in Y$ to $1$. Then $[1_Y]\in H^Y_+$, so $\Psi([1_Y])\in H^X_+$. Since $\Psi([1_Y])=[l-k]$, it follows that there are continuous maps $b,n:X\to\mathbb{N}_0$ such that $l(x)-k(x)=n(x)+b(x)-b(\osh(x))$ for $x\in X$. The existence of continuous maps $b',n':Y\to\mathbb{N}_0$ such that $l'(y)-k'(y)=n'(y)+b'(y)-b'(\osh[Y](y))$ for $y\in Y$, can be proved in a similar way. \end{proof} \begin{proof}[Proof of Theorem \ref{ib}] Let $h:X\to Y$ be a continuous orbit equivalence. 
It follows from Proposition~\ref{sven} that $h$ maps eventually periodic points to eventually periodic points, and that there is an isomorphism $\phi:\mathcal{G}_X\to \mathcal{G}_Y$ such that $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$ for $\eta\in\mathcal{G}_X$ and such that $\phi((x,\operatorname{lp}(x),x))=(h(x),\operatorname{lp}(h(x)),h(x))$ for every eventually periodic point $x\in X$. It follows from Proposition~\ref{john} that there are a least period preserving $h$-cocycle pair $(k,l)$, a least period preserving $h^{-1}$-cocycle pair $(k',l')$, and continuous maps $b,n:X\to\mathbb{N}_0$ and $b',n':Y\to\mathbb{N}_0$ such that $l(x)-k(x)=n(x)+b(x)-b(\osh(x))$ and $l'(y)-k'(y)=n'(y)+b'(y)-b'(\osh[Y](y))$ for $x\in X$ and $y\in Y$. It therefore follows from Proposition~\ref{diego} that $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent. \end{proof} \section{Flow equivalence and orbit equivalence for shifts of finite type and isomorphisms of their groupoids} The following theorem follows directly from Proposition~\ref{sven} and Proposition~\ref{john} (it also follows from \cite[Corollary 4.6]{CW} and the fact that a shift of finite type can be represented by a finite graph that has no sinks). \begin{theorem}\label{orbit} Let $X$ and $Y$ be two one-sided shifts of finite type. Then $X$ and $Y$ are continuously orbit equivalent if and only if the groupoids $\mathcal{G}_X$ and $\mathcal{G}_Y$ are isomorphic. \end{theorem} If $X$ and $Y$ are irreducible, then the result of Theorem~\ref{orbit} easily follows from \cite[Theorem 2.3]{MM} and the fact that every one-sided shift of finite type is conjugate to a one-sided topological Markov shift. If $X$ and $Y$ have no isolated periodic points, then the result of Theorem~\ref{orbit} follows from \cite[Proposition 3.2]{Renault}. 
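For a simple illustration of one direction of Theorem~\ref{orbit}, suppose $h:X\to Y$ is a conjugacy, i.e., $h\circ\osh=\osh[Y]\circ h$. Then $\osh[Y]^r(h(x))=h(\osh^r(x))$ for all $r\in\mathbb{N}_0$, so if $(x,r-s,x')\in\mathcal{G}_X$ with $\osh^r(x)=\osh^s(x')$, then $\osh[Y]^r(h(x))=\osh[Y]^s(h(x'))$, and $$\phi((x,r-s,x')):=(h(x),r-s,h(x'))$$ defines an isomorphism from $\mathcal{G}_X$ to $\mathcal{G}_Y$ satisfying $r(\phi(\eta))=h(r(\eta))$ and $s(\phi(\eta))=h(s(\eta))$. The content of Theorem~\ref{orbit} is that such an isomorphism exists whenever $h$ is merely a continuous orbit equivalence.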
In the rest of this section we prove an analogue of Theorem~\ref{orbit} (Theorem~\ref{thm:1}), which, among other things, says that the groupoids of two one-sided shifts of finite type are stably isomorphic if and only if the corresponding two-sided shift spaces are flow equivalent. We will use results of \cite{CRS}, \cite{ERRS}, and \cite{Matui} for this. Before we state and prove Theorem~\ref{thm:1}, we need to introduce some notation and a lemma. Suppose $X$ is a one-sided shift space and $f\in C(X,\mathbb{N})$. Let $$X_f:=\{(x,i)\in X\times\mathbb{N}_0:i<f(x)\, \},$$ and equip $X_f$ with the subspace topology of $X\times\mathbb{N}_0$, where the latter is equipped with the product topology of the topology of $X$ and the discrete topology on $\mathbb{N}_0$. Then $X_f$ is compact. Define $\osh[X_f]:X_f\to X_f$ by $$\osh[X_f](x,0)=(\osh(x),f(\osh(x))-1\, )$$ and $\osh[X_f](x,i)=(x,i-1)$ for $x\in X$ and $i\in\{1,2,\dots,f(x)-1\}$. Then $\osh[X_f]$ is continuous and surjective. Let $\boldsymbol{a}$ be the alphabet of $X$ and $\mathcal N:=\{0,1,\dots,\max\{f(x):x\in X\}-1\}$. Define $\iota_{X_f}:X_f\to (\boldsymbol{a}\times\mathcal{N})^{\mathbb{N}_0}$ by \begin{multline*} \iota_{X_f}(x,i)=(a_0,i)(a_0,i-1)\dots (a_0,0)(a_1,f(\osh(x))-1)(a_1,f(\osh(x))-2)\\\dots (a_1,0)(a_2,f(\osh^2(x))-1)\dots \end{multline*} where $x=a_0a_1a_2\dots$. Then $\iota_{X_f}$ is continuous and injective and $\iota_{X_f}\circ\osh[X_f]=\sigma\circ\iota_{X_f}$, where $\sigma$ is the shift map on $(\boldsymbol{a}\times\mathcal{N})^{\mathbb{N}_0}$. It follows that $X^f:=\iota_{X_f}(X_f)$ is a subshift of $(\boldsymbol{a}\times\mathcal{N})^{\mathbb{N}_0}$, and that $(X_f,\osh[X_f])$ and $(X^f,\osh[X^f])$ are conjugate. Let $X_0:=\iota_{X_f}(X\times\{0\})$ and define $\iota_X:X\to X_0$ by $\iota_X(x)=\iota_{X_f}(x,0)$.
Then $X_0$ is a cross section of $X^f$ (i.e., $X_0$ is a compact and open subset of $X^f$, $X^f=\{\osh[X^f]^k(x):x\in X_0,\ k\in\mathbb{N}_0\}$, there exists for all $x\in X_0$ a $k\in\mathbb{N}$ such that $\osh[X^f]^k(x)\in X_0$, and $\operatorname{frt}_{X_0}:X_0\to\mathbb{N}$ defined by $\operatorname{frt}_{X_0}(x)=\min\{k\in\mathbb{N}:\osh[X^f]^k(x)\in X_0\}$ is continuous), and $\iota_X$ is a conjugacy between $(X,\osh)$ and $(X_0,\osh[X_0])$ where $\osh[X_0]$ is the \emph{first return map} defined by $\osh[X_0](x)=\osh[X^f]^{\operatorname{frt}_{X_0}(x)}(x)$. It follows that the two-sided shift spaces $\mathbf{X}$ and $\mathbf{X}^f$ are flow equivalent. So $X^f$ is of finite type if and only if $X$ is. If $X$ is of finite type, then we let $(\mathcal{G}_X)_f$ denote the groupoid \begin{equation*} \{(\eta,i,j)\in\mathcal{G}_X\times\mathbb{N}_0\times\mathbb{N}_0: 0\le i<f(r(\eta)),\ 0\le j<f(s(\eta))\, \}, \end{equation*} introduced in \cite{Matui}. \begin{lemma}\label{lem:groupoid} Let $X$ be a one-sided shift space and $f\in C(X,\mathbb{N})$. Then $(\mathcal{G}_X)_f$ and $\mathcal{G}_{X^f}$ are isomorphic. \end{lemma} \begin{proof} It is routine to check that the map $$((x,l-k,y),i,j)\mapsto \left(\iota_{X_f}(x,i),i-j+\sum_{h=1}^{l}f(\osh^h(x))-\sum_{h=1}^{k}f(\osh^h(y)),\iota_{X_f}(y,j)\right)$$ where $x,y\in X$, $i,j,k,l\in\mathbb{N}_0$, $i<f(x)$, $j<f(y)$, and $\osh^l(x)=\osh^k(y)$, is an isomorphism between $(\mathcal{G}_X)_f$ and $\mathcal{G}_{X^f}$. \end{proof} Suppose $\mathcal{G}$ is a groupoid with unit space $\mathcal{G}^{(0)}$ and range and source maps $r,s:\mathcal{G}\to\mathcal{G}^{(0)}$, and that $Z$ is a subset of $\mathcal{G}^{(0)}$. We let $\mathcal{G}|_Z:=s^{-1}(Z)\cap r^{-1}(Z)$, and say that $Z$ is \emph{$\mathcal{G}$-full} or just \emph{full} if $r(s^{-1}(Z))=\mathcal{G}^{(0)}$. Suppose $\mathcal{G}_1$ and $\mathcal{G}_2$ are two ample groupoids (see for example \cite{CRS}). 
As in \cite{CRS} and \cite{Matui}, we say that $\mathcal{G}_1$ and $\mathcal{G}_2$ are \emph{Kakutani equivalent} if, for $i=1,2$, there is a $\mathcal{G}_i$-full clopen subset $Z_i\subseteq\mathcal{G}_i^{(0)}$ such that $\mathcal{G}_1|_{Z_1}$ and $\mathcal{G}_2|_{Z_2}$ are isomorphic as topological groupoids, and we say that $\mathcal{G}_1$ and $\mathcal{G}_2$ are \emph{groupoid equivalent} if there is a $\mathcal{G}_1$--$\mathcal{G}_2$ equivalence in the sense of \cite[Definition 2.1]{MRW}. As in \cite{CRS}, we write $\mathcal{R}$ for the full countable equivalence relation $\mathbb{N}_0\times\mathbb{N}_0$ regarded as a discrete principal groupoid with unit space $\mathbb{N}_0$. \begin{theorem}\label{thm:1} Let $X$ and $Y$ be one-sided shifts of finite type. The following are equivalent. \begin{enumerate} \item $\mathcal{G}_X$ and $\mathcal{G}_Y$ are Kakutani equivalent. \item $\mathcal{G}_X$ and $\mathcal{G}_Y$ are groupoid equivalent. \item $\mathcal{G}_X\times\mathcal{R}$ and $\mathcal{G}_Y\times\mathcal{R}$ are isomorphic as topological groupoids. \item There exist full open sets $X'\subseteq X$ and $Y'\subseteq Y$ such that $\mathcal{G}_X|_{X'}$ and $\mathcal{G}_Y|_{Y'}$ are isomorphic as topological groupoids. \item There exist $f\in C(X,\mathbb{N})$ and $g\in C(Y,\mathbb{N})$ such that $X^f$ and $Y^g$ are continuously orbit equivalent. \item $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent. \end{enumerate} \end{theorem} \begin{proof} The equivalence of (1)--(4) follows from \cite[Theorem 3.2]{CRS}. (1)$\implies$(5): It follows from \cite[Lemma 4.6]{Matui} that there are $f\in C(X,\mathbb{N})$ and $g\in C(Y,\mathbb{N})$ such that $(\mathcal{G}_X)_f$ and $(\mathcal{G}_Y)_g$ are isomorphic. 
Since $(\mathcal{G}_X)_f$ is isomorphic to $\mathcal{G}_{X^f}$, and $(\mathcal{G}_Y)_g$ is isomorphic to $\mathcal{G}_{Y^g}$, it follows that $\mathcal{G}_{X^f}$ and $\mathcal{G}_{Y^g}$ are isomorphic, so an application of Theorem~\ref{orbit} gives us that $X^f$ and $Y^g$ are continuously orbit equivalent. (5)$\implies$(6): It follows from Theorem~\ref{ib} that $\mathbf{X}^f$ and $\mathbf{Y}^g$ are flow equivalent. Since $\mathbf{X}$ and $\mathbf{X}^f$ are flow equivalent, and $\mathbf{Y}$ and $\mathbf{Y}^g$ are flow equivalent, it follows that $\mathbf{X}$ and $\mathbf{Y}$ are flow equivalent. (6)$\implies$(3): This is probably well-known to the experts, but we have not been able to find a proof in the literature, so we give one here for completeness. Let $E_1$ and $E_2$ be finite directed graphs with no sinks and no sources such that the one-sided edge shift of $E_1$ is conjugate to $X$ and the one-sided edge shift of $E_2$ is conjugate to $Y$. Then the two-sided edge shifts of $E_1$ and $E_2$ are flow equivalent, so it follows from \cite[Lemma 5.1]{ERRS} that $E_1$ and $E_2$ are move equivalent. Since the groupoid of $E_1$ is isomorphic to $\mathcal{G}_X$ and the groupoid of $E_2$ is isomorphic to $\mathcal{G}_Y$, an application of \cite[Corollary 4.9]{CRS} gives that $\mathcal{G}_X\times\mathcal{R}$ and $\mathcal{G}_Y\times\mathcal{R}$ are isomorphic as topological groupoids. \end{proof} \section{Graph $C^*$-algebras and Leavitt path algebras} We now apply the results of the previous section to strengthen some of the results of \cite{ABHS,BCaH,BCW,CRS} in the special case of finite directed graphs with no sinks and no sources. Suppose $E$ is a directed graph and $R$ is a unital ring. 
We write $\mathcal{G}_E$ for the groupoid of $E$ (see for example \cite{AER,BCW,Carlsen,CW,CS}), $C^*(E)$ for the $C^*$-algebra of $E$ (see for example \cite{AER,BCW,Carlsen,ERRS,Sor,Tomforde}), $D(E)$ for the $C^*$-subalgebra $\overline{\operatorname{span}}\{s_\mu s_\mu^*:\mu\in E^*\}$ of $C^*(E)$, $L_R(E)$ for the Leavitt path $R$-algebra of $E$ (see for example \cite{Carlsen,Tomforde2}), $D_R(E)$ for the $*$-subalgebra $\operatorname{span}_R\{\mu \mu^*:\mu\in E^*\}$ of $L_R(E)$, and $X_E$ for the one-sided edge shift of $E$. The next result depends on the equivalence of orbit equivalence as defined in \cite{BCW} and isomorphisms of the associated groupoids. This was established in \cite{BCW} in the presence of the so-called Condition (L), and generalised to the given setting in the independent articles \cite{AER} and \cite{CW}. We will follow \cite{CW} since it is much better aligned with the approach in the paper at hand. Combining this equivalence further with results of \cite{BCW} and \cite{CR2}, we obtain the following corollary. \begin{corollary}\label{cor:2} Let $E$ and $F$ be finite directed graphs with no sinks and no sources and let $R$ be a unital commutative integral domain. The following are equivalent. \begin{enumerate} \item $X_E$ and $X_F$ are continuously orbit equivalent. \item $\mathcal{G}_E$ and $\mathcal{G}_F$ are isomorphic. \item There is a $*$-isomorphism $\phi:C^*(E)\to C^*(F)$ such that $\phi(D(E))=D(F)$. \item There is a ring isomorphism $\beta:L_R(E)\to L_R(F)$ such that $\beta(D_R(E))=D_R(F)$. \item There is a $*$-algebra isomorphism $\gamma:L_R(E)\to L_R(F)$ such that $\gamma(D_R(E))=D_R(F)$. \item $E$ and $F$ are orbit equivalent as defined in \cite{BCW}. \end{enumerate} \end{corollary} \begin{proof} The equivalence of (1) and (2) is proved in Theorem~\ref{orbit}. The equivalence of (2) and (3) follows from \cite[Theorem 5.1]{BCW}, and the equivalence of (2), (4), and (5) follows from \cite[Corollary 4.2]{CR2}. 
Finally, the equivalence of (2) and (6) follows from \cite[Corollary 4.6]{CW}. \end{proof} \begin{remark} It follows from \cite[Corollary 6]{Carlsen} that if $R=\mathbb{Z}$ (or more generally, if $R$ is a \emph{kind} $*$-subring of $\mathbb{C}$, see \cite[Section 3]{Carlsen}), then the condition ``$L_R(E)$ and $L_R(F)$ are isomorphic $*$-rings'' can be added to the list of equivalent conditions in Corollary \ref{cor:2}. \end{remark} Suppose $E$ is a directed graph. We write $\mathbf{X}_E$ for the two-sided edge shift of $E$, and as in \cite{CRS} and \cite{CW}, we write $SE$ for the directed graph obtained by attaching a head to each vertex of $E$ (see \cite[Definition 4.2]{Tomforde}). We write $\mathcal{K}$ for the $C^*$-algebra of compact operators on $l^2(\mathbb{N}_0)$ and $\mathcal{C}$ for the maximal abelian subalgebra of $\mathcal{K}$ consisting of diagonal operators. Suppose $R$ is a unital ring. We write $M_\infty(R)$ for the ring of finitely supported, countably infinite square matrices over $R$, and $D_\infty(R)$ for the abelian subring of $M_\infty(R)$ consisting of diagonal matrices. By combining Theorem~\ref{thm:1} with results of \cite{CR2,CRS,CW,ERRS}, we obtain the following corollary. \begin{corollary}\label{cor:1} Let $E$ and $F$ be finite directed graphs with no sinks and no sources and let $R$ be a unital commutative integral domain. The following are equivalent. \begin{enumerate} \item $E$ and $F$ are move equivalent as defined in \cite{Sor}. \item $\mathbf{X}_E$ and $\mathbf{X}_F$ are flow equivalent. \item $\mathcal{G}_E$ and $\mathcal{G}_F$ are Kakutani equivalent. \item $\mathcal{G}_E$ and $\mathcal{G}_F$ are groupoid equivalent. \item $\mathcal{G}_E\times\mathcal{R}$ and $\mathcal{G}_F\times\mathcal{R}$ are isomorphic as topological groupoids. \item $\mathcal{G}_{SE}$ and $\mathcal{G}_{SF}$ are isomorphic as topological groupoids. \item $SE$ and $SF$ are orbit equivalent as defined in \cite{BCW}. 
\item There is a $*$-isomorphism $\phi:C^*(E)\otimes\mathcal{K}\to C^*(F)\otimes\mathcal{K}$ such that $\phi(D(E)\otimes\mathcal{C})=D(F)\otimes\mathcal{C}$. \item There is a ring isomorphism $\beta:L_R(E)\otimes M_\infty(R)\to L_R(F)\otimes M_\infty(R)$ such that $\beta(D_R(E)\otimes D_\infty(R))=D_R(F)\otimes D_\infty(R)$. \item There is a $*$-algebra isomorphism $\gamma:L_R(E)\otimes M_\infty(R)\to L_R(F)\otimes M_\infty(R)$ such that $\gamma(D_R(E)\otimes D_\infty(R))=D_R(F)\otimes D_\infty(R)$. \item There is a $*$-isomorphism $\psi:C^*(SE)\to C^*(SF)$ such that $\psi(D(SE))=D(SF)$. \item There is a ring isomorphism $\rho:L_R(SE)\to L_R(SF)$ such that $\rho(D_R(SE))=D_R(SF)$. \item There is a $*$-algebra isomorphism $\tau:L_R(SE)\to L_R(SF)$ such that $\tau(D_R(SE))=D_R(SF)$. \item There exist projections $p_E\in D(E)$ and $p_F\in D(F)$ and a $*$-isomorphism $\omega:p_EC^*(E)p_E\to p_FC^*(F)p_F$ such that $p_E$ is full in $C^*(E)$, $p_F$ is full in $C^*(F)$, and $\omega(p_ED(E))=p_FD(F)$. \item There exist idempotents $p_E\in D_R(E)$ and $p_F\in D_R(F)$ and a ring isomorphism $\zeta:p_EL_R(E)p_E\to p_FL_R(F)p_F$ such that $p_E$ is full in $L_R(E)$, $p_F$ is full in $L_R(F)$, and $\zeta(p_ED_R(E))=p_FD_R(F)$. \item There exist projections $p_E\in D_R(E)$ and $p_F\in D_R(F)$ and a $*$-algebra isomorphism $\kappa:p_EL_R(E)p_E\to p_FL_R(F)p_F$ such that $p_E$ is full in $L_R(E)$, $p_F$ is full in $L_R(F)$, and $\kappa(p_ED_R(E))=p_FD_R(F)$. \end{enumerate} \end{corollary} The implication $(2)\implies (8)$ of Corollary~\ref{cor:1} was originally proved for irreducible graphs satisfying condition (L) by Cuntz and Krieger in \cite[Theorem 4.1]{CK} (in the setting of Cuntz--Krieger graphs), and for reducible graphs satisfying condition (L) by Cuntz in \cite[Theorem 2.4]{Cuntz} (also in the setting of Cuntz--Krieger graphs). 
If the graphs $E$ and $F$ both satisfy condition (L), then the equivalence of (3) and (14) of Corollary~\ref{cor:1} follows from \cite[Theorem 5.4]{Matui}. \begin{proof}[Proof of Corollary~\ref{cor:1}] The equivalence of (1) and (2) is proved in \cite[Lemma 5.1]{ERRS}. Since $\mathcal{G}_E=\mathcal{G}_{X_E}$ and $\mathcal{G}_F=\mathcal{G}_{X_F}$, the equivalence of (2)--(5) follows from Theorem~\ref{thm:1}. The equivalence of (5), (6), (8), and (11) follows from \cite[Theorem 4.2]{CRS}, the equivalence of (3) and (14) follows from \cite[Corollary 4.5]{CRS}, and the equivalence of (6) and (7) follows from \cite[Corollary 4.6]{CW}. Finally, the equivalence of (9)--(16) is proved in \cite[Corollary 4.8]{CR2}. \end{proof} \begin{remark} It follows from \cite[Proposition 5]{Carlsen} and an argument similar to the one used in the proof of \cite[Proposition 6.1]{JS} that if $R=\mathbb{Z}$ (or more generally, if $R$ is a \emph{kind} $*$-subring of $\mathbb{C}$, see \cite[Section 3]{Carlsen}), then the following three conditions can be added to the list of equivalent conditions in Corollary \ref{cor:1}. \begin{enumerate} \item[(17)] $L_R(E)\otimes M_\infty(R)$ and $L_R(F)\otimes M_\infty(R)$ are isomorphic $*$-rings. \item[(18)] $L_R(SE)$ and $L_R(SF)$ are isomorphic $*$-rings. \item[(19)] There are projections $p_E\in D_R(E)$ and $p_F\in D_R(F)$ such that $p_E$ is full in $L_R(E)$, $p_F$ is full in $L_R(F)$, and $p_EL_R(SE)p_E$ and $p_FL_R(SF)p_F$ are isomorphic $*$-rings. \end{enumerate} \end{remark} \section{Cuntz--Krieger algebras} In this final section, we apply the results of Section 5 and Section 6 to Cuntz--Krieger algebras, topological Markov chains, and directed graphs of $\{0,1\}$-matrices in order to generalise \cite[Theorem 2.3]{MM} and \cite[Corollary 3.8]{MM} from the irreducible to the general case. Let $A$ be an $N\times N$-matrix with entries in $\{0,1\}$, and assume that every row and every column of $A$ is nonzero. 
As in \cite{CK,MM}, we denote by $\mathcal{O}_A$ the Cuntz--Krieger algebra of $A$ with canonical abelian subalgebra $\mathcal{D}_A$, by $X_A$ the one-sided shift space $\{(x_i)_{i\in\mathbb{N}_0}\in\{1,2,\dots,N\}^{\mathbb{N}_0}:A(x_i,x_{i+1})=1\text{ for all }i\in\mathbb{N}_0\}$ associated with $A$, and by $\TSS$ the two-sided subshift of $X_A$ (if $A$ does not satisfy Condition (I), then we let $\mathcal{O}_A$ denote the universal Cuntz--Krieger algebra $\mathcal{A}\mathcal{O}_A$ introduced in \cite{aHR}). Notice that the groupoid $\mathcal{G}_{X_A}$ is equal to the groupoid $G_A$ considered in \cite{MM}. We get from Theorem~\ref{orbit} and \cite[Theorem 5.1]{BCW} (or Corollary~\ref{cor:2}) the following generalisation of \cite[Theorem 2.3]{MM}. \begin{corollary}\label{CK} Let $A$ and $B$ be finite square matrices with entries in $\{0,1\}$, and assume that every row and every column of $A$ and $B$ is nonzero. The following conditions are equivalent. \begin{enumerate} \item $X_A$ and $X_B$ are continuously orbit equivalent. \item The groupoids $\mathcal{G}_{X_A}$ and $\mathcal{G}_{X_B}$ are isomorphic. \item There is a $*$-isomorphism $\Psi:\mathcal{O}_A\to\mathcal{O}_B$ such that $\Psi(\mathcal{D}_A)=\mathcal{D}_B$. \end{enumerate} \end{corollary} \begin{proof} The equivalence of (1) and (2) follows from Theorem~\ref{orbit}. Let $E_A$ be the graph of $A$, i.e., $E_A^0$ is the index set of $A$, $E_A^1=\{(i,j)\in E_A^0\times E_A^0: A(i,j)=1\}$, and $r((i,j))=j$ and $s((i,j))=i$ for $(i,j)\in E_A^1$. Then $\mathcal{G}_{X_A}$ is isomorphic to the groupoid $\mathcal{G}_{E_A}$ of $E_A$ defined in \cite{BCW}. It is well-known that there is an isomorphism $\Psi:\mathcal{O}_A\to C^*(E_A)$ satisfying $\Psi(\mathcal{D}_A)=\mathcal{D}(E_A)$. The equivalence of (2) and (3) therefore follows from \cite[Theorem 5.1]{BCW} (or Corollary~\ref{cor:2}). \end{proof} Similarly, we get from Corollary~\ref{cor:1} the following strengthening of \cite[Corollary 3.8]{MM}. 
\begin{corollary} \label{cor:3} Let $A$ and $B$ be finite square matrices with entries in $\{0,1\}$, and assume that every row and every column of $A$ and $B$ is nonzero. The following are equivalent. \begin{enumerate} \item There is a $*$-isomorphism $\Psi:\mathcal{O}_A \otimes\mathcal{K}\to \mathcal{O}_B \otimes\mathcal{K}$ such that $\Psi(\mathcal{D}_{A}\otimes\mathcal{C})=\mathcal{D}_{B}\otimes\mathcal{C}$. \item There are projections $p_A\in \mathcal{D}_A$ and $p_B\in \mathcal{D}_B$ and an isomorphism $\phi:p_A\mathcal{O}_Ap_A\to p_B\mathcal{O}_Bp_B$ such that $p_A$ is full in $\mathcal{O}_A$, $p_B$ is full in $\mathcal{O}_B$, and $\phi(p_A\mathcal{D}_A)=p_B\mathcal{D}_B$. \item $\TSS$ and $\TSS[B]$ are flow equivalent. \item $\mathcal{G}_{X_A}$ and $\mathcal{G}_{X_B}$ are Kakutani equivalent. \item $\mathcal{G}_{X_A}$ and $\mathcal{G}_{X_B}$ are groupoid equivalent. \item $\mathcal{G}_{X_A}\times\mathcal{R}$ and $\mathcal{G}_{X_B}\times\mathcal{R}$ are isomorphic. \end{enumerate} \end{corollary} \begin{proof} Let $E_A$ be as in the proof of Corollary~\ref{CK}. Then the two-sided shift spaces $\TSS$ and $\mathbf{X}_{E_A}$ are conjugate, the groupoids $\mathcal{G}_{E_A}$ and $\mathcal{G}_{X_A}$ are isomorphic, and there is a $*$-isomorphism $\Psi:\mathcal{O}_A\to C^*(E_A)$ satisfying $\Psi(\mathcal{D}_A)=D(E_A)$. The result therefore follows from Corollary~\ref{cor:1}. \end{proof}
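As a concrete illustration of the graph $E_A$ used in the proofs above (with source $s((i,j))=i$ and range $r((i,j))=j$), the following Python sketch, with function names of our own choosing, builds $E_A$ from a $\{0,1\}$-matrix $A$ and checks that $E_A$ has no sinks and no sources exactly when every row and every column of $A$ is nonzero.

```python
def graph_of_matrix(A):
    """Return (vertices, edges) of the directed graph E_A of a {0,1}-matrix A:
    the vertices are the indices of A, and (i, j) is an edge with source i
    and range j exactly when A[i][j] == 1."""
    n = len(A)
    vertices = list(range(n))
    edges = [(i, j) for i in range(n) for j in range(n) if A[i][j] == 1]
    return vertices, edges

def no_sinks_no_sources(vertices, edges):
    """A sink emits no edge (a zero row of A); a source receives no edge
    (a zero column of A)."""
    emitting = {i for (i, j) in edges}
    receiving = {j for (i, j) in edges}
    return all(v in emitting for v in vertices) and \
           all(v in receiving for v in vertices)

# every row and every column of this A is nonzero
A = [[1, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
V, E = graph_of_matrix(A)
```

A matrix with a zero column produces a source in $E_A$, so the check fails for it, matching the standing assumption on $A$ and $B$ above.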
\section{Introduction} Recently, particles with a long lifetime have been studied extensively at colliders (see e.g.\ Ref. \cite{Alimena:2019zri} for a review). At the large hadron collider (LHC), the long-lived particles have been searched for in channels with displaced dilepton vertices \cite{Aad:2015rba, Aaboud:2018jbr, CMS:2014hka, Khachatryan:2014mea, Aad:2019tcc} and in channels with displaced jet vertices \cite{Aad:2012zn, Chatrchyan:2012sp, Aad:2013gva, Khachatryan:2015jha, Aaboud:2016dgf, Khachatryan:2016sfv, Aaboud:2017iio, Aaboud:2019opc, CMS:2016azs}. Searches for displaced vertices of collimated leptons or light hadrons with low $p_T$, which originate from light neutral particles such as dark photons, have been carried out at the LHC, e.g.\ by the ATLAS collaboration \cite{Aad:2014yea, Aad:2019tua}. Typically, for a particle to be considered as a long-lived particle at the LHC, the decay length has to be larger than ${\cal O}(1)$ mm so that the displaced vertex can be detected by the spatial resolution of the LHC detectors. Thus, the coupling strength between long-lived particles and the standard model (SM) particles is usually significantly reduced. For example, an electrophilic vector boson $A'$ that couples with electrons via $g A'_\mu \bar e \gamma^\mu e$ has a decay width $\Gamma = g^2 m_{A'}/(12\pi)$ and a decay length $d = \gamma v \tau$, where $v$ ($\tau$) is the velocity (lifetime) of the vector boson $A'$, and $\gamma$ is the Lorentz boost factor. For such a particle to have a macroscopic decay length so that it can give rise to a long-lived particle signature at the LHC, the decay coupling is typically small. For example, for a typical decay length $d \simeq 1$ m, one has $g \sim {\cal O}(10^{-6})$ in the case where $m_{A'} = 1$ GeV and $\gamma = 100$. Thus simple long-lived vector boson models usually lead to a suppressed LHC cross section due to the small coupling constant needed for the large decay length. 
In this paper, we construct a model that predicts a long-lived dark photon (LLDP) with a GeV scale mass. Unlike many other dark photon models, the production cross section of the GeV LLDP in our model at the LHC is not suppressed. This is because the production process of the LLDP is different from its decay process. We use the Stueckelberg mechanism to mediate the interaction between the hidden sector and the SM sector; the production and decay processes of the LLDPs are mediated by different Stueckelberg mass terms. Recently, models in which LLDPs can have a sizable collider signal have been proposed in the literature. For example, Ref.\ \cite{Buschmann:2015awa} introduced a second boson with couplings to both SM quarks and the hidden fermion to produce dark photons at colliders. Ref.\ \cite{Arguelles:2016ney} used a dimension-five operator between a scalar $SU(2)_L$ triplet, the SM $SU(2)_L$ gauge bosons and the dark gauge boson to generate a non-abelian kinetic mixing term, which can enhance the LLDP signal. Potential large LLDP collider signals can also arise via top-partner decays \cite{Kim:2019oyh}, or via a Higgs portal interaction to the hidden QED \cite{Krovi:2019hdl}. \section{The model} \label{sec:model} We introduce two Abelian gauge groups in the hidden sector: $U(1)_F$ with gauge boson $X^{\mu}$ and $U(1)_W$ with gauge boson $C^{\mu}$. We use the Stueckelberg mechanism to provide masses to the two new gauge bosons in the hidden sector, and also to mediate the interactions between the hidden sector and the SM sector \cite{Kors:2005uz, Feldman:2006ce, Feldman:2006wb, Feldman:2007wj, Feldman:2009wv}. The Lagrangian for the extension is given by ${\cal L} = {\cal L}_F + {\cal L}_W$ where \begin{eqnarray} &-4 {\cal L}_{F} = X_{\mu\nu}^2 +2 ( \partial_\mu \sigma_1 + m_{1}\epsilon_1 B_{\mu} + m_{1} X_{\mu} )^2, \nonumber \\ &-4{\cal L}_{W} = C_{\mu\nu}^2 + 2( \partial_\mu \sigma_2 + m_{2}\epsilon_2 B_{\mu} + m_{2} C_{\mu} )^2. 
\nonumber \end{eqnarray} Here $B_\mu$ is the hypercharge boson in the SM, $\sigma_1$ and $\sigma_2$ are the axion fields {in the Stueckelberg mechanism}, and $m_1$, $m_2$, $m_{1}\epsilon_1$, and $m_{2}\epsilon_2$ are mass terms in the Stueckelberg mechanism. {The dimensionless parameters $\epsilon_1$ and $\epsilon_2$ are assumed to be small in our analysis: $\epsilon_1 \sim {\cal O}(10^{-7})$ and ${\epsilon_2} \sim {\cal O}(10^{-2})$ for our current analysis at the LHC.} ${\cal L}_{F}$ is invariant under $U(1)_Y$ gauge transformations $ \delta_Y B_{\mu} = \partial_\mu \lambda_B $ and $ \delta_Y \sigma_1 = - m_{1}\epsilon_1 \lambda_B $; ${\cal L}_{F}$ is also invariant under $U(1)_F$ gauge transformation $ \delta_F X_{\mu} = \partial_\mu \lambda_X , $ and $ \delta_F \sigma_1 = - m_{1} \lambda_X $. Similarly, ${\cal L}_{W}$ is gauge invariant under both $U(1)_{Y}$ and $U(1)_{W}$. In the hidden sector, we further introduce one Dirac fermion $\psi$ that is charged under both $U(1)_{F}$ and $U(1)_{W}$. Vector current interactions between the Dirac fermion and the gauge bosons in the hidden sector are assumed, i.e., $ g_F \bar \psi \gamma^\mu \psi X_{\mu} + g_W \bar \psi \gamma^\mu \psi C_{\mu} $ where $g_{F}$ and $g_W$ are the gauge couplings for $U(1)_{F}$ and $U(1)_{W}$ respectively. 
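The $U(1)_Y$ invariance of ${\cal L}_F$ claimed above can be verified term by term: the field strength $X_{\mu\nu}$ is unaffected by $\delta_Y$, while in the Stueckelberg combination the shift of $\sigma_1$ cancels that of $B_\mu$,

```latex
\delta_Y\bigl(\partial_\mu \sigma_1 + m_1 \epsilon_1 B_\mu + m_1 X_\mu\bigr)
  = \partial_\mu\bigl(-m_1 \epsilon_1 \lambda_B\bigr)
    + m_1 \epsilon_1\, \partial_\mu \lambda_B = 0 .
```

The $U(1)_F$ invariance works in the same way, with $\partial_\mu(-m_1\lambda_X)$ cancelling against $m_1\,\partial_\mu\lambda_X$, and the checks for ${\cal L}_W$ are identical.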
The two by two mass matrix in the neutral gauge boson sector in the SM is now extended to a four by four mass matrix, which, in the gauge basis $V= ( C,X, B, A^3)$, is given by \begin{equation} M^2 = \begin{pmatrix} m_{2}^2 & 0 & m_{2}^2 \epsilon_2 & 0\cr 0 & m_{1}^2 & m_{1}^2 \epsilon_1 & 0 \cr m_{2}^2 \epsilon_2 & m_{1}^2 \epsilon_1 & \sum\limits_{i=1}^2 m_{i}^2 \epsilon_i^2 + {g'^2 v^2 \over 4} & - {g'g v^2 \over 4} \cr 0 & 0 & - {g'g v^2 \over 4} & {g^2 v^2 \over 4} \end{pmatrix} \label{2zpstmass} \end{equation} where $g$ and $g'$ are gauge couplings for the SM $SU(2)_L$ and $U(1)_Y$ gauge groups respectively, $A^3$ is the third component of the $SU(2)_L$ gauge bosons, and $v$ is the Higgs vacuum expectation value. The determinant of the mass square matrix vanishes which ensures the existence of a massless mode to be identified as the SM photon. The mass matrix can be diagonalized via an orthogonal transformation ${\cal O}$ such that ${\cal O}^T M^2 {\cal O} = {\rm diag} (m^2_{Z'}, m^2_{A'}, m^2_{Z}, 0)$; the mass {basis} $E= ( Z', A', Z, A)$ {is} related to the gauge {basis} $V$ via $E_i={\cal O}_{ji} V_{j}$. 
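The vanishing determinant, and hence the massless photon mode, can be checked numerically. The sketch below builds the mass-squared matrix of Eq.\ (\ref{2zpstmass}) with illustrative parameter values of our own choosing; for these inputs the smallest eigenvalue is zero and the remaining masses land near $m_1$, the physical $Z$ mass, and $m_2$.

```python
import numpy as np

# Illustrative inputs only (our choices, in GeV): Stueckelberg masses,
# small mixings, SU(2)_L and U(1)_Y couplings, and the Higgs vev.
m1, m2 = 3.0, 700.0
eps1, eps2 = 1e-7, 1e-2
g, gp, v = 0.65, 0.36, 246.0

# mass-squared matrix in the gauge basis V = (C, X, B, A^3)
M2 = np.array([
    [m2**2,        0.0,          eps2 * m2**2,       0.0],
    [0.0,          m1**2,        eps1 * m1**2,       0.0],
    [eps2 * m2**2, eps1 * m1**2,
     (eps1 * m1)**2 + (eps2 * m2)**2 + gp**2 * v**2 / 4, -g * gp * v**2 / 4],
    [0.0,          0.0,          -g * gp * v**2 / 4, g**2 * v**2 / 4],
])

eig = np.sort(np.linalg.eigvalsh(M2))   # ascending eigenvalues
masses = np.sqrt(np.maximum(eig, 0.0))  # (photon, ~m_A', ~m_Z, ~m_Z')
```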
In the mass basis, $A$ is the SM photon, $Z$ is the SM $Z$ boson, $A'$ is the dark photon with {a GeV-scale} mass and $Z'$ is the {heavy boson with a TeV-scale mass.} Diagonalization of the mass matrix leads to interactions between $Z/A$ in the SM and the fermion $\psi$ in the hidden sector, and also interactions between $Z'/A'$ in the hidden sector and SM fermions $f$; both $\psi-Z/A$ and $f-Z'/A'$ couplings are suppressed by the small $\epsilon_1$ and $\epsilon_2$ parameters, {and vanish in the $\epsilon_1=0=\epsilon_2$ limit.} We parameterize the interactions between the fermions and the mass eigenstates of the neutral bosons via \begin{equation} \bar f \gamma_\mu (v^f_i - \gamma_5 a^f_i) f E^\mu_i + v^\psi_i \bar \psi \gamma_\mu \psi E^\mu_i \end{equation} where the vector and axial-vector couplings are given by \begin{eqnarray} v^f_i &=& (g {\cal O}_{4i} - g' {\cal O}_{3i})T^{3 }_f/2 + g' {\cal O}_{3i}Q_f , \\ \label{eq:vcouplings} a^f_i &=& (g {\cal O}_{4i} - g' {\cal O}_{3i})T^{3 }_f/2, \\ \label{eq:acouplings} v^\psi_i &=& g_{W} {\cal O}_{1i} + g_{F} {\cal O}_{2i}, \label{eq:couplings} \end{eqnarray} where $Q_f$ is the electric charge of fermion $f$ and $T_f^3$ is the quantum number of the left-handed chiral component of the fermion $f$ {under SU(2)$_L$}. \section{Experimental constraints} \label{sec:constraints} Here we discuss various constraints on the model, including electroweak constraints from LEP, constraints from the LHC, and also cosmological constraints. In our analysis, we will fix most model parameters so that a sizable LHC signal is expected. We first discuss these default parameter values. The heavy $Z'$ boson in our model mostly originates from the $U(1)_W$ boson $C_\mu$ whose mass is fixed to be $m_2 = 700$ GeV in the remainder of the analysis. 
In order to obtain a sufficiently large $\psi\bar\psi$ production cross section at the LHC, we choose $g_W = 1$; a relatively large $U(1)_F$ coupling constant is also chosen, $g_F = 1.5$, so that a rather sizable dark radiation rate for the $\psi$ particle can be achieved in the model. We use $c\tau = \hbar c /\Gamma = 1$ m as the characteristic value for the proper lifetime of the dark photon, where $\Gamma$ is the dark photon decay width; the dark photon decay widths are given in the Appendix. We find that small modifications around $c\tau = 1$ m do not lead to significant changes in the collider signatures. The above values are the default ones used throughout the analysis, if not explicitly specified. {\it $Z$ invisible decay:} The $Z$ invisible decay width is measured to be $\Gamma_{\rm inv}^{\rm Z} \pm \delta \Gamma^{\rm Z}_{\rm inv} = 499$ MeV $\pm 1.5$ MeV \cite{ALEPH:2005ab}. The $Z$ boson can decay into the $\psi\bar\psi$ final state, if $m_Z > 2 m_\psi$, with a decay width \begin{equation} \Gamma_{Z \to \psi \bar{\psi}}= { m_{Z} \over 12 \pi} (v^{\psi}_3)^{2} \sqrt{1-4 x_{\psi Z}} (1+2 x_{\psi Z}), \end{equation} where $x_{\psi Z} \equiv (m_{\psi}/m_{Z})^{2}$, and $v^{\psi}_3$ is the vector coupling between the $Z$ boson and $\psi$, as given in Eq. ({\ref{eq:couplings}}). Equating the invisible decay width due to the $\psi\bar\psi$ final state to the experimental uncertainty $\delta \Gamma^{\rm Z}_{\rm inv}$, one obtains an upper bound on $v^{\psi}_3$, which is shown in Fig.\ ({\ref{fig:limit-mchi-eps2}}). For light $\psi$ mass, one has $v^{\psi}_3 \lesssim 2.5 \times 10^{-2}$. {\it Electroweak constraint on the $Z$ mass: } The mass of the $Z$ boson is modified due to the enlarged neutral gauge boson mass matrix, as given in Eq.\ (\ref{2zpstmass}). 
For the parameter space of interest in our analysis, i.e., $\epsilon_1 \ll \epsilon_2$, the mass shift on the $Z$ boson can be estimated as \begin{equation} \left| {\Delta m_{Z} \over m_{Z}}\right| \simeq {\epsilon_2^{2} \over 2} s^2_W \left(1- {m_{Z}^{2} \over m_{2}^{2}} \right)^{-1}, \label{MZdev} \end{equation} where $s^2_W \equiv \sin ^{2} \theta_{W} = 0.22343$ \cite{Tanabashi:2018oca}, with $\theta_{W}$ being the weak mixing angle. We adopt the methodology in Ref.\ \cite{Feldman:2006ce} to estimate the electroweak constraints. The experimental uncertainty of the $Z$ mass is given by \cite{Feldman:2006ce} \begin{equation} \bigg[ {\delta m_Z \over m_Z} \bigg]^{2}= \bigg[ \left(c_W^{-2}-2 t_W^2\right) {\delta m_{W} \over m_W}\bigg]^{2} +\frac{t_W^4(\delta \Delta r)^{2}}{4(1-\Delta r)^{2}}, \label{deltaMZ} \end{equation} where $c_W \equiv \cos\theta_{W}$, and $t_W \equiv \tan\theta_{W}$. Here we take into account the recent analysis on the uncertainty of the $W$ boson mass, $m_W \pm \delta m_W = 80.387 \pm 0.016$ GeV \cite{Tanabashi:2018oca}, and on the radiative correction, $\Delta r \pm \delta \Delta r = 0.03672 \mp 0.00017 \pm 0.00008$ \cite{Tanabashi:2018oca}, where the first uncertainty is due to the top quark mass and the second is due to the fine structure constant $\alpha(m_Z)$ at the $m_Z$ scale. Adding the two in quadrature, we obtain $\delta \Delta r = 0.00019$. Equating the mass shift on the $Z$ boson given in Eq.\ (\ref{MZdev}) to the experimental uncertainty, one obtains an upper bound on $\epsilon_2$ \begin{equation} |\epsilon_2| \lesssim 0.036 \,\sqrt{1-\left(m_{Z} / m_{2}\right)^{2}}. 
\label{eq:eps2MZ} \end{equation} {\it Di-lepton constraint on $Z'$ decays: } {Dilepton final states, which are produced at the LHC from the heavy $Z'$ boson via the Drell--Yan process, can be searched for by reconstructing their invariant mass.} Fig.\ ({\ref{fig:limit-mchi-eps2}}) shows the 95\% CL upper bound on $\epsilon_2$ in the dilepton channel from ATLAS \cite{ATLAS:2019vcr}. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.9\columnwidth]{./figures/limit_mchi_eps2_v3} \caption{The $95\%$ CL exclusion limits on $\epsilon_2$ as a function of $m_\psi$. We use the following parameters: $m_1 = 3$ GeV, $m_2 = 700$ GeV, $g_F = 1.5$, $g_W = 1.0$, and $c\tau = 1$ m. Shown are the millicharged particle searches at colliders (shaded light gray) \cite{Davidson:2000hf}, electroweak constraints due to the $Z$ mass shift (dashed red), the $Z$ invisible decay (dash-dotted green) \cite{ALEPH:2005ab}, the di-lepton high mass resonance search at ATLAS (dash-dotted blue) \cite{ATLAS:2019vcr}, and the monojet search at ATLAS (solid black) \cite{Aaboud:2017phn}. } \label{fig:limit-mchi-eps2} \end{centering} \end{figure} \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.9\columnwidth]{./figures/mchi_relic} \caption{The fraction of $\psi$ in the total DM in the Universe as a function of the $\psi$ mass. Here, we take the canonical DM cross section to be $\langle \sigma v\rangle_{\rm DM} = 1$ pb. The magenta dashed line represents the current limit from Refs. \cite{Munoz:2018pzp, Boddy:2018wzy, dePutter:2018xte, Kovetz:2018zan}. } \label{fig:relic} \end{centering} \end{figure} {\it Constraints on millicharge}: Due to the off-diagonal elements in the four by four mass matrix, the $\psi$ particle interacts weakly with the $Z$ boson and the photon in the SM. Thus the $\psi$ particle carries a small electric charge, $\delta = v_4^\psi/e$, with respect to the photon. 
In the parameter space of interest of the model, we have $\delta \simeq 0.88 \,\epsilon_2/e$. The small electric charge is often referred to as ``millicharge''; the $\psi$ particle remains undetectable in typical particle detectors due to the minute electric charge. Fig.\ (\ref{fig:limit-mchi-eps2}) shows the collider constraints on millicharge in the ($m_\psi$, $\epsilon_2$) plane \cite{Davidson:2000hf}. The collider constraints on millicharge are the most stringent constraints on $\epsilon_2$ for $m_\psi \lesssim 0.6$ GeV. Because the $\psi$ particle is charged under $U(1)_W$ and $U(1)_F$ in the hidden sector, it is stable and thus can be a dark matter (DM) candidate. For the parameter space of interest, i.e., $m_\psi>m_{A'}$, the annihilation channel into on-shell dark photons, $\psi \bar{\psi} \to A' A'$, is the dominant process for the relic abundance of the $\psi$ particle; the annihilation cross section can be approximated as follows \cite{Cline:2014dwa} \begin{equation} \langle \sigma v\rangle_{\psi \bar{\psi} \to A' A'} \simeq \frac{(v^\psi_2)^4}{16 \pi \,m_{\psi}^2} \frac{(1-r^2)^{3/2}}{(1-r^2/2)^{2}}, \label{eq:chichiz1z1} \end{equation} where $v^\psi_2$ is the coupling between the dark photon and $\psi$, as given in Eq.~(\ref{eq:couplings}), and $r = m_{A'}/m_{\psi}$. We compute the ratio between the $\psi$ relic abundance and the total DM relic abundance via $f_{\psi} = 2 \langle \sigma v\rangle_{\rm DM}/\langle \sigma v\rangle_{\psi \bar{\psi} \to A' A'}$, where $\langle \sigma v\rangle_{\rm DM} = 1$ pb is the canonical DM cross section, and the factor of 2 accounts for the Dirac nature of the $\psi$ particle. Because $v^\psi_2\simeq g_F$, the annihilation cross section given in Eq.~(\ref{eq:chichiz1z1}) is much larger than the canonical annihilation cross section needed for the cold DM relic density in the Universe \cite{Ade:2015xua}, for the case $m_{A'} \sim {\cal O}$(1) GeV and $m_\psi \sim {\cal O}$(10) GeV. 
Thus, the contribution of the $\psi$ particle to the DM in the Universe is less than 0.1\% when $m_\psi < 100$ GeV, as shown in Fig.\ (\ref{fig:relic}). This is consistent with the cosmological limits on millicharged DM, which constrain the fraction of the millicharged DM to be $ \lesssim 1\%$ of the total DM in the Universe \cite{Munoz:2018pzp, Boddy:2018wzy, dePutter:2018xte, Kovetz:2018zan}. The $\psi$ DM is efficiently stopped by the rock above underground labs of DM direct detection experiments, unless the millicharge is extremely small. Adapting the estimation in Refs. \cite{Foot:2003iv,Cline:2012is} for 1 km of rock, we find that DM direct detection is only sensitive to mixing parameters $\epsilon_{2} < 10^{-6}$ in our model. Thus the current underground DM direct detection experiments do not constrain the model. {\it Monojet constraints}: Searches for invisible particles which are produced in association with an initial state radiation (ISR) jet have been carried out at ATLAS \cite{Aaboud:2017phn} and CMS \cite{Sirunyan:2017jix}. Here {we} recast the ATLAS result \cite{Aaboud:2017phn} to set constraints on our model. We use MadGraph5\_aMC@NLO (MG5) \cite{Alwall:2014hca} to generate events for the process $p p \to \psi \bar{\psi} j $, which are then passed to Pythia 8 for showering and hadronization \cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk}. The MadAnalysis 5 package \cite{Dumont:2014tja, Sengupta} is further used to analyze the ATLAS results \cite{Aaboud:2017phn}. We use the same detector cuts as in Ref.\ \cite{Aaboud:2017phn}; the optimal selection region for our model is found to be the window $E_{T}^{\rm miss} \in (300,350)$ GeV (the EM2 region in Ref.\ \cite{Aaboud:2017phn}). The $95\%$ {CL} exclusion limit on $\epsilon_2$ from the monojet channel in ATLAS \cite{Aaboud:2017phn} is shown in Fig.\ (\ref{fig:limit-mchi-eps2}). 
When $m_{\psi} < m_Z/2$, the $Z$ boson diagram, i.e., the $p p \to Z j \to \psi \bar{\psi} j$ process, gives the dominant contribution to the monojet signal; when $m_{\psi} > m_Z/2$, the $Z'$ boson diagram, i.e., the $p p \to Z' j \to \psi \bar{\psi} j$ process, becomes more important than the $Z$ process. Because of the large $Z'$ mass, the $Z'$ boson diagram is suppressed as compared to the $Z$ diagram. Thus the monojet channel provides a constraint comparable to the other constraints only for $m_{\psi} < m_Z/2$. As shown in Fig.\ (\ref{fig:limit-mchi-eps2}), the electroweak constraint on the $Z$ mass shift provides the best constraint on the parameter space of interest in our model, except for the $m_\psi \lesssim 0.6$ GeV region where collider constraints on millicharged particles become strong. \section{Timing detector} Recently, several precision timing detectors have been proposed for installation at CMS \cite{CMStiming}, ATLAS \cite{Allaire:2018bof, Allaire:2019ioj} and LHCb \cite{LHCbtiming}. These timing detectors, which aim to reduce the pile-up rate at {the high luminosity LHC (HL-LHC)}, can also be used in long-lived particle searches \cite{Liu:2018wte, Mason:2019okp,Kang:2019ukr, Cerri:2018rkm}. In this analysis, we focus on one of these timing detectors, the minimum ionizing particle (MIP) timing detector to be installed in the CMS detector (hereafter MTD CMS) \cite{CMStiming}, which has a $\sim$30 pico-second timing resolution. At the MTD CMS, the timing detection layer, which is proposed to be installed between the inner tracker and the electromagnetic calorimeter, is about 1.17 m away from the beam axis and 6.08 m long in the beam axis direction. A time delay at the LHC due to long-lived particles can be measured by the new precision timing detectors, which can enhance the collider sensitivity to such models \cite{Liu:2018wte}. 
\begin{figure}[htbp] \begin{centering} \includegraphics[width=0.35\textwidth]{./figures/feyn_diag} \caption{Feynman diagram for the dark photon production at the LHC.} \label{fig:feyndiag} \end{centering} \end{figure} At the LHC, the hidden sector particle $\psi$ can be pair-produced via $pp \to Z/Z' \to \bar \psi \psi$, which subsequently radiates dark photons $\psi \to \psi A'$; the corresponding Feynman diagram is shown in Fig.\ (\ref{fig:feyndiag}). Due to the feeble interaction strength to SM fermions, the $A'$ boson travels a macroscopic distance away from its production point and then decays into a pair of SM particles which are detected by the timing layers. Here we use the di-lepton final states measured by the timing layers to detect the LLDP in our model. The time delay between the leptons from the LLDP and SM particles produced at the primary vertex is given by \begin{equation} \Delta t = {L_{A'}/v_{A'}} + L_\ell - L_{\rm SM}, \end{equation} where the $L$'s are the distances traveled by the various particles, in units where $v=c=1$ for the SM particles \cite{Liu:2018wte}. The time delay is significant if the LLDP moves non-relativistically. We select the leading leptons with transverse momentum $p_{T}^{\ell} > 3$ GeV to suppress fake signals from hadrons produced with low $p_T$ \cite{Sirunyan:2017zmn} and to ensure that the lepton is moving relativistically and its trajectory is not significantly bent \cite{Liu:2018wte}. The point where the dark photon decays is required to have a radial distance away from the beam axis of $0.2 ~{\rm m} < L^T_{A'} < 1.17 ~{\rm m}$ and a longitudinal distance along the beam axis of $|z_{A'}| < 3.04 ~{\rm m}$. Following Ref.\ \cite{Liu:2018wte}, an ISR jet with $p_T^j > 30$ GeV and $|\eta_j| <2.5$ is required to {time stamp} the hard collision. The time delay is required to be $\Delta t > 1.2 $ ns in order to suppress the background. 
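The time delay formula can be sketched numerically as follows; the kinematic inputs in the example are hypothetical illustration values, not taken from our simulation:

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def time_delay_ns(m_Ap, p_Ap, L_Ap_m, L_lep_m, L_sm_m):
    """Delta t = L_A'/v_A' + L_lep/c - L_SM/c, with the lepton and the
    prompt SM reference particle taken to move at v = c."""
    E = math.hypot(p_Ap, m_Ap)   # E = sqrt(p^2 + m^2), in GeV
    beta = p_Ap / E              # dark photon velocity in units of c
    return L_Ap_m / (beta * C_M_PER_NS) + (L_lep_m - L_sm_m) / C_M_PER_NS

# hypothetical kinematics: a slow 20 GeV A' with p = 10 GeV decaying 1 m
# from the beam line; the prompt reference path is 1.5 m in total
print(time_delay_ns(20.0, 10.0, 1.0, 0.5, 1.5))  # ~4.1 ns, passing the 1.2 ns cut
```

A relativistic dark photon ($\beta \to 1$) would give $\Delta t \to 0$ in this sketch, which is why the signal relies on non-relativistic LLDPs.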
The dominant SM backgrounds come from multiple trackless jets in the same-vertex (SV) hard collisions and in the pile-up (PU) events \cite{Liu:2018wte}, as well as photons in SV hard collisions \cite{Mason:2019okp}. The SV background arises because of the finite timing resolution; the PU background is due to the fact that, within one bunch crossing, two hard collisions occurring at two different times can lead to a time delay signal. We simulate dijet events at the LHC with $\sqrt{s}=13$ TeV by using MG5 \cite{Alwall:2014hca} and Pythia 8 \cite{Sjostrand:2014zea}. We select events in which the leading jet has $p_T > 30$ GeV and $|\eta(j)| < 2.5$ to time stamp the primary collision, and the subleading jets have $p_T^j > 3$ GeV and $|\eta(j)| < 2.5$. Under these detector cuts, the inclusive jet cross section is $\sigma_j \approx 1\times 10^{8}$ pb. The inclusive photon production cross section at NLO is $\sigma_{\gamma} \approx 2 \times 10^{8}$ pb at the LHC with $\sqrt{s}=13$ TeV, computed using JETPHOX \cite{Catani:2002ny} with the CT10 PDF. The detector cuts are $p_T^{\gamma} > 3$ GeV and $|\eta^{\gamma}| < 2.5$ for the photon and $p_T^{j} > 30$ GeV and $|\eta^{j}| < 2.5$ for the leading jet. At the 13 TeV LHC, the SV background events can be estimated as \cite{Liu:2018wte, Mason:2019okp} \begin{equation} N_{\rm SV} = \sigma_{\gamma}{\cal L} + \sigma_{j}{\cal L} f_{\gamma} \sim 6\times 10^{14}, \end{equation} where ${\cal L} = 3{\rm \, ab^{-1}}$ is the integrated luminosity, and $f_{\gamma} \approx 10^{-4}$ is the rate for a jet to fake a photon or a lepton \cite{Mason:2019okp}. 
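As a numerical cross-check of the background estimates in this section, the sketch below reproduces the SV and PU event counts and the Gaussian-tail suppression from the time delay cut; all cross sections, fake rates, pile-up parameters, and time spreads are the values quoted in the text:

```python
import math

LUMI = 3.0e6              # 3 ab^-1 in pb^-1
sigma_gamma = 2.0e8       # inclusive photon cross section, pb
sigma_j = 1.0e8           # inclusive jet cross section (leading-jet cuts), pb
sigma_j_all = 1.0e11      # dijet cross section, all jets pT > 3 GeV, pb
sigma_inc = 8.0e10        # 80 mb inelastic pp cross section, in pb
f_gamma, f_j, n_pu = 1.0e-4, 1.0e-3, 100.0

N_SV = sigma_gamma * LUMI + sigma_j * LUMI * f_gamma                      # ~6e14
N_PU = sigma_j * LUMI * (n_pu * sigma_j_all / sigma_inc) * f_gamma * f_j  # 3.75e9

def tail(dt_ns, delta_t_ns):
    """Gaussian tail probability P(Delta t > dt) for time spread delta_t."""
    return 0.5 * math.erfc(dt_ns / (math.sqrt(2.0) * delta_t_ns))

print(N_SV, N_PU)
print(N_PU * tail(1.0, 0.19))  # ~2.6e2 PU events survive Delta t > 1 ns
print(N_PU * tail(1.2, 0.19))  # ~0.5 events for Delta t > 1.2 ns
```

The 190 ps (30 ps) time spread makes the PU (SV) tail beyond 1.2 ns negligible, which motivates the $\Delta t > 1.2$ ns cut.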
The PU background events can be estimated as \cite{Liu:2018wte, Mason:2019okp} \begin{equation} N_{\rm PU} = \sigma_{j} {\cal L} (n_{\rm PU} \frac{\sigma'_{j} } {\sigma_{\rm inc}}) f_{\gamma} f_{j} \sim 3.75\times10^{9}, \end{equation} where $f_{j} \sim 10^{-3}$ \cite{Mason:2019okp} is the rate for a jet to be trackless, $\sigma'_{j} \approx 1\times 10^{11}$ pb is the dijet cross section with the requirement on all jets of $p_T^j > 3$ GeV and $|\eta(j)| < 2.5$, $\sigma_{\rm inc} = 80$ mb \cite{Aaboud:2016mmw} is the inelastic cross section of $pp$ collisions at 13 TeV, and $n_{\rm PU} \approx 100$ \cite{Sopczak:2017mvr} is the average pile-up number at the HL-LHC. The time delay distribution of the SM background can be described by a Gaussian distribution \cite{Liu:2018wte} \begin{equation} \frac{d{\cal P}(\Delta t)}{d\Delta t} = \frac{1}{\sqrt{2\pi} \delta_t} e^{\frac{-\Delta t^2}{2\delta_t^2}}, \end{equation} where $\delta_t$ is the time spread. For the PU background, the time spread $\delta_t = 190$ ps is determined by the beam properties; for the SV background, $\delta_t = 30$ ps is determined by the timing resolution \cite{CMStiming}. We find that under the detector cut $\Delta t > 1$ ns, the SV background is negligible and the PU background is about 260 events with ${\cal L} = 3{\rm \, ab^{-1}}$; the PU background also becomes negligible, $N_{\rm PU} \lesssim 0.5$, if the time delay $\Delta t > 1.2$ ns is required. Thus, we take $\Delta t > 1.2$ ns as the detector cut in our analysis. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.9\columnwidth]{./figures/pTl_final} \caption{ The $p_T$ distribution of the leading lepton. We choose $m_2 = 700$ GeV, $m_{\psi} = 15$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$. The red, blue and black lines indicate a dark photon mass of 0.1 GeV, 6 GeV and 20 GeV respectively. 
} \label{fig:pTl-final} \end{centering} \end{figure} \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.9\columnwidth]{./figures/deltaT_final} \caption{The distribution of the time delay $\Delta t$, with the integrated luminosity ${\cal L} = 3 {\rm ab}^{-1}$. $p_{T} > 3$ GeV is required for the leading lepton. The bin width is 0.5 (0.1) ns for $\Delta t > 1$ ns ($\Delta t < 1$ ns). We choose $m_2 = 700$ GeV, $m_{\psi} = 15$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$. The solid blue, red and black lines indicate the dark photon mass of 0.1 GeV, 6 GeV and 20 GeV. The solid and dashed magenta curves represent the PU and SV backgrounds respectively.} \label{fig:delt-final} \end{centering} \end{figure} \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.5\textwidth]{./figures/eff_final_run2_lead} \caption{The cut efficiency $\epsilon_{\rm cut}$ as a function of $m_{A'}$ and $m_\psi$. We set $m_2 = 700$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$. } \label{fig:eff} \end{centering} \end{figure} \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.45\textwidth]{./figures/Nevent_finalv2_run2_lead} \caption{The contour of the expected signal events at the 13 TeV LHC, as a function of $m_{A'}$ and $m_\psi$. We choose $m_2 = 700$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$. The blue and magenta contours indicate the needed integrated luminosity of 70 fb$^{-1}$ and 250 fb$^{-1}$ respectively, to generate 10 events. } \label{fig:limit} \end{centering} \end{figure} We perform a full MC simulation and study the efficiency of the detector cuts in the parameter space of interest. We first implement the model into {the} FeynRules package \cite{Alloul:2013bka} and pass the UFO model file into MG5 \cite{Alwall:2014hca} to generate $8\times 10^4$ events of the $\psi$ pair production associated {with the time stamping ISR jet}, i.e., $p p \to \psi \bar{\psi} j$. 
The dark showering is simulated in Pythia 8 \cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk}. Fig.~(\ref{fig:pTl-final}) shows the transverse momentum distribution of the leading lepton, for three different dark photon masses. We choose $m_2 = 700$ GeV, $m_{\psi} = 15$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$, as the benchmark point. The final state leptons from dark photon decays are generally not very energetic in the models shown in Fig.~(\ref{fig:pTl-final}). In particular, the lepton events are highly suppressed under the detector cut $p_T > 3$ GeV, for the $0.1$ GeV dark photon case. Fig.~(\ref{fig:delt-final}) shows the distribution of the time delay $\Delta t$. The model parameters in Fig.~(\ref{fig:delt-final}) are the same as in Fig.~(\ref{fig:pTl-final}). The SM backgrounds are negligible when the time delay $\Delta t > 1.2$ ns. When the dark photon becomes heavier, more events with a larger {time delay} appear, as shown {in} Fig.~(\ref{fig:delt-final}), since in this case the dark photon has a higher probability to move non-relativistically. The increase in events with larger {time delay}, however, is offset by the smaller dark photon radiation rate of the heavier dark photon. Fig.~(\ref{fig:eff}) shows the cut efficiency as a function of $m_{A'}$ and $m_{\psi}$, where 1370 grid points are simulated. We set $m_2 = 700$ GeV, $c\tau = 1$ m, $\epsilon_2 = 0.01$, $g_F = 1.5$ and $g_W = 1.0$. As shown in Fig.~(\ref{fig:pTl-final}), the detector cut $p_T>3$ GeV on the leading lepton significantly reduces the efficiency for light dark photon masses. The low efficiency in the heavy mass region in Fig.~(\ref{fig:eff}) is primarily due to the low radiation rate \cite{Chen:2018uii}. It turns out that the region with significant cut efficiency has $5 \,{\rm GeV} < m_{A'}, m_\psi < 35 \,{\rm GeV}$, with the highest efficiency $\sim 0.18\%$. 
We note that, for the $m_{A'} > 2 m_{\psi}$ region, the dark photon is no longer a long-lived particle since it can decay into a pair of $\psi$. Fig.~(\ref{fig:limit}) shows the regions that can be probed at the 13 TeV LHC, with the discovery criterion: $S=10$, as a function of the dark photon mass and the $\psi$ mass. The number of signal events is computed via $S =\epsilon_{\rm cut}\, {\cal L} \, \sigma(p p \to \psi \bar{\psi} j) $, where ${\cal L}$ is the integrated luminosity, $\sigma(p p \to \psi \bar{\psi} j)$ is the production cross section at the LHC, and $\epsilon_{\rm cut}$ is the cut efficiency as shown in Fig.~(\ref{fig:eff}). The model parameters are the same as in Fig.~(\ref{fig:eff}). The blue and magenta contours indicate the needed integrated luminosity to generate 10 signal events. Therefore, with an integrated luminosity of $70$ $\rm{fb}^{-1}$ at the HL-LHC, the LLDP can be discovered in the time delay channel in the mass region: $5$ GeV $< m_{A'},m_{\psi} < 21$ GeV, with the rest of the model parameters fixed as in Fig.~(\ref{fig:eff}). A larger mass region: 3 GeV$< m_{A'}, m_{\psi} <$ 30 GeV, can be discovered if $250$ $\rm{fb}^{-1}$ data are accumulated at the HL-LHC. Fig.\ (\ref{fig:signalregion}) shows the integrated luminosity needed to probe the parameter space spanned by $\epsilon_2$ and $m_{Z'}$. We choose $m_1 = 6$ GeV, $m_{\psi} = 15$ GeV, $c \tau = 1$ m, $g_F = 1.5$ and $g_W = 1$ as a benchmark. With an integrated luminosity of $\sim 4.0$ fb$^{-1}$, one can probe the $\epsilon_2$ value that saturates the electroweak constraint on the $Z$ mass shift. To discover a long-lived dark photon model in which $\epsilon_2 \simeq 10^{-3}$, however, one needs about 3000 fb$^{-1}$ data at the HL-LHC. When $\epsilon_2 \simeq {\cal O} (10^{-7})$, the LLDP signal approaches the value in the conventional LLDP scenario, so that it is no longer enhanced by the production channel mediated by the $\epsilon_2$ parameter. 
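The luminosity contours quoted above follow from inverting $S = \epsilon_{\rm cut}\, {\cal L}\, \sigma$. A minimal sketch, where the cross section value is purely illustrative (not a simulated number) and $\epsilon_{\rm cut} \sim 0.18\%$ is the peak cut efficiency:

```python
def lumi_needed_fb(S, eff, sigma_pb):
    """Invert S = eff * L * sigma for the integrated luminosity L (fb^-1)."""
    return S / (eff * sigma_pb) / 1.0e3  # convert pb^-1 -> fb^-1

# 10 events at the peak efficiency ~0.18%, with an assumed 0.08 pb cross section
print(lumi_needed_fb(10.0, 0.0018, 0.08))  # ~70 fb^-1
```

With these inputs the required luminosity comes out near the 70 fb$^{-1}$ contour; away from the efficiency peak the requirement grows inversely with $\epsilon_{\rm cut}\,\sigma$.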
\begin{figure}[htbp] \begin{centering} \includegraphics[width=0.5\textwidth]{./figures/mz2_eps2_BPnew} \caption{ The integrated luminosity needed at the HL-LHC to probe the parameter region spanned by $m_{Z'}$ and $\epsilon_2$. We choose $m_1 = 6$ GeV, $m_{\psi} = 15$ GeV, $c \tau = 1$ m, $g_F = 1.5$ and $g_W = 1$. The integrated luminosities needed are $\sim 4.0$ fb$^{-1}$ (black solid), $250$ fb$^{-1}$ (blue solid), and $3000$ fb$^{-1}$ (blue dashed). The gray shaded region is excluded by the $Z$ mass shift constraint, as given in Eq.\ (\ref{eq:eps2MZ}). Below the red line, the dark photon production cross section via $\epsilon_1$ dominates.} \label{fig:signalregion} \end{centering} \end{figure} \section{LHCb} \begin{figure}[htbp] \begin{centering} \hspace{0.2cm} \includegraphics[width=0.45\textwidth]{./figures/limit_mz1_esp1_1d2_new} \caption{LHC current and future sensitivity contours to the LLDP parameter space spanned by $\epsilon_1$ and the dark photon mass. We take $m_{\psi} =5$ GeV and $\epsilon_2 = 0.01$. The blue solid and dashed contours indicate the regions probed by the LHCb with $5.5$ $\rm{fb}^{-1}$ and $15$ $\rm{fb}^{-1}$ data respectively. The regions probed by the future MTD CMS with $250$ ${\rm fb}^{-1}$ and $3000$ ${\rm fb}^{-1}$ data are shown as red solid and dashed contours respectively. The gray islands at $\epsilon_1 \sim (10^{-4}-10^{-5})$ are the LHCb exclusion regions for the {conventional} dark photon scenario \cite{Aaij:2019bvg}. Various experimental constraints on the {conventional} dark photon scenario are shown as color shaded regions. } \label{fig:LHCb} \end{centering} \end{figure} Due to the excellent mass resolution (7-20 MeV) and vertex resolution ($\sim 10 \,{\rm \mu m}$ in the transverse plane) as well as the particle identification capability ($\sim 90\%$ for muons) \cite{Aaij:2015bpa, Benson:2015yzo}, the LHCb detector is able to discover new elusive particles beyond the SM. 
% An upcoming upgrade with an increased luminosity and a {more} advanced trigger system with only software triggers will further improve the capability of the LHCb detector to probe new physics phenomena, such as LLDPs \cite{Benson:2015yzo}.
Recently, a search for LLDPs in the kinetic mixing model has been carried out at the LHCb via displaced muon pairs \cite{Ilten:2016tkc, Aaij:2017rft, Aaij:2019bvg}. In our model, because the LLDP has a larger production cross section at the LHC than in conventional LLDP models, the LHCb search \cite{Aaij:2019bvg} can probe a much larger parameter space. To analyze the LHCb constraints, we choose a benchmark point with $m_{\psi} =5$ GeV and $\epsilon_2 = 0.01$, while the rest of the parameters take the default values. For this benchmark point, the $\psi$ production cross section is $\sigma(p p \to \psi \bar{\psi}) \sim 4.3$ pb, which is dominated by the $Z$-boson exchange channel. We use MG5 \cite{Alwall:2014hca} to generate the LHC events for each model point on the $\epsilon_1$-$m_{A'}$ plane, which are then passed to Pythia 8 \cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk} for showering (including showering in the hidden sector) and hadronization. We follow the LLDP search criteria in Refs.\ \cite{Aaij:2017rft, Aaij:2019bvg} to analyze the signal. In particular, we require the transverse distance of the dark photon decay vertex to satisfy $6 \,{\rm mm}<l_T(A')<22\,{\rm mm}$ and the pseudo-rapidity of the dark photons and muons to satisfy $ 2 < \eta(A', \mu^{\pm}) < 4.5$. These requirements ensure that the displaced vertex is sufficiently separated from the beam line and registered in the Vertex Locator (VELO), where the dimuon can be reconstructed with good efficiency. Furthermore, in order to suppress the background from fake muons, we also require the momentum and transverse momentum of the muons to be greater than $10$ GeV and $0.5$ GeV respectively. 
The dominant background includes photon conversions in the VELO, muons produced from b-hadron decay chains, and pions from $K^0_s$ decays which are misidentified as muons. Ref.\ {\cite{Ilten:2016tkc}} estimated the background events as $B = 25$ for ${\cal L} = 15\, \rm{fb}^{-1}$, which is adopted in our analysis and also rescaled for the ${\cal L} = 5.5\, \rm{fb}^{-1}$ case. We compute the exclusion region by demanding that $S/\sqrt{B} > 2.71$, where $S$ is the signal event number. Fig.\ ({\ref{fig:LHCb}}) shows the LHCb exclusion region in the parameter space spanned by $\epsilon_1$ and the dark photon mass $m_{A'}$. With the current luminosity of $5.5$ ${\rm fb}^{-1}$, LHCb can probe the parameter space of our model with $200\, {\rm MeV} < m_{A'} < 9 \, {\rm GeV}$ and $ 2 \times 10^{-7} < \epsilon_1 < 6 \times 10^{-5}$. The exclusion region in the {conventional} dark photon scenario is, however, much smaller; it is shown as two small gray islands at $\epsilon_1 \sim (10^{-4}-10^{-5})$. Thus, the current LLDP search at the LHCb can probe a significantly larger region of parameter space in our model than in the {conventional} dark photon model. A projected limit from the Run 3 data is also computed; LHCb can probe the parameter space $200 \, {\rm MeV} < m_{A'} < 10 \, {\rm GeV}$ and $ 10^{-7} < \epsilon_1 < 10^{-4}$, if $15$ $\rm{fb}^{-1}$ of integrated luminosity can be accumulated in the LHC Run 3. We note in passing that the shape of the exclusion contours is primarily due to the detector cut on the dark photon decay length: a smaller $\epsilon_1$ value is needed in the larger dark photon mass region so that the dark photon has the desired decay width to disintegrate in the VELO region. Also, the dip at $m_{A'} \simeq 0.8$ GeV is due to the $\omega$ resonance, which suppresses BR($A' \to \mu^+\mu^-$). 
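The exclusion criterion and the luminosity rescaling of the background can be sketched as follows, taking $B = 25$ at 15 fb$^{-1}$ from the text and scaling it linearly with luminosity:

```python
import math

def b_scaled(lumi_fb, B_ref=25.0, lumi_ref_fb=15.0):
    """Background rescaled linearly with luminosity from B = 25 at 15 fb^-1."""
    return B_ref * lumi_fb / lumi_ref_fb

def excluded(S, lumi_fb):
    """A model point is excluded at 95% CL if S / sqrt(B) > 2.71."""
    return S / math.sqrt(b_scaled(lumi_fb)) > 2.71

print(b_scaled(5.5))        # ~9.2 expected background events at 5.5 fb^-1
print(excluded(10.0, 5.5))  # True: 10 signal events would be excluded
print(excluded(5.0, 5.5))   # False
```

The signal count $S$ for each point on the $\epsilon_1$-$m_{A'}$ plane comes from the MC chain described above.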
Fig.\ ({\ref{fig:LHCb}}) also shows the exclusion limits on the conventional dark photon from various experiments; the limits are taken from the Darkcast package \cite{darkcast}. Fig.\ ({\ref{fig:LHCb}}) also shows the sensitivities of the future MTD CMS detector via the time delay measurement. As mentioned before, the time delay signal from the final state leptons becomes more significant if the LLDPs have long lifetimes and move non-relativistically. Therefore, the timing detector probes the heavy dark photon mass region with a smaller mixing parameter $\epsilon_1$, which is currently almost inaccessible at the LHCb. In particular, with a luminosity of $250 \, \rm{fb}^{-1}$ at the HL-LHC, the MTD CMS detector can probe the parameter space $m_{A'} > 3.3 $ GeV and $ 10^{-8} < \epsilon_1 < 10^{-7}$. An even larger parameter space in our model, $m_{A'} > 2.0 $ GeV and $ 10^{-9} < \epsilon_1 < 2\times 10^{-7}$, can be reached with $3000 \, \rm{fb}^{-1}$ of data accumulated at the HL-LHC. Interestingly, this MTD CMS sensitivity region partly overlaps with the LHCb sensitivity region with 15 fb$^{-1}$ data. Thus, if the LLDP is discovered in this overlap region, the timing detector can be used to verify the LHCb results. We note that, in the region of $m_{A'} > 2 m_{\psi}$, the dark photon will dominantly decay into ${\psi}$ pairs, so that it can no longer be searched for in the visible channel by the LHCb detector and the future precision timing detectors. \section{Summary} We construct a long-lived dark photon model which has an enhanced dark photon collider signal. We extend the standard model by a hidden sector which contains two gauge bosons and one Dirac fermion $\psi$; the two gauge bosons interact with the SM sector via different Stueckelberg mass terms. 
The GeV-scale dark photon $A'$ interacts with the SM fermions via a very small Stueckelberg mass term (parametrized by the dimensionless quantity $\epsilon_1$) such that it has a macroscopic decay length, which can lead to a displaced vertex or a time delay signal at the LHC. The TeV-scale $Z'$ boson interacts with the SM via a relatively larger mass term (parametrized by the dimensionless quantity $\epsilon_2$). Because the dark photon $A'$ is mainly produced at the LHC via the $\psi$ dark radiation processes, in which the effective coupling strength is of the size of $\epsilon_2$, the LHC signal of $A'$ is significantly enhanced. Various experimental constraints on the model are analyzed, including the electroweak constraint on the $Z$ boson mass shift, the constraint from the $Z$ invisible decay, LHC constraints, collider constraints on millicharge, and cosmological constraints on millicharge. The electroweak constraint on the $Z$ mass turns out to be the most stringent one, leading to an upper bound $\epsilon_2 \lesssim 0.036$ in the parameter space of interest. Two types of LHC signals from the LLDP in our model are investigated: the time delay signal measured by the precision timing detectors at the HL-LHC, and the current LHCb searches for LLDPs. If the LLDP is produced non-relativistically at the LHC, it has a significant time delay $\Delta t$, which can be measured by the precision timing detectors. Under the detector cut $\Delta t > 1.2$ ns, the SV and PU backgrounds are found to be negligible. The parameter space of 3 GeV$< m_{A'}, m_{\psi} <$ 30 GeV in our model can be probed by the timing detector with 250 ${\rm fb^{-1}}$ of data at the HL-LHC. Due to the different search strategy, the current LHCb analysis is more sensitive to lighter dark photon masses than the time delay searches. 
We find that the parameter space probed by the current LHCb analysis is much larger in our model than in the conventional dark photon model investigated in the LHCb experimental analysis. A comparison between the LHCb search and the time delay search is also made; they typically probe different regions of the parameter space but can overlap in some small regions. We note that a model similar to ours can be constructed by introducing two kinetic mixing parameters, of magnitude ${\cal O}(10^{-2})$ and ${\cal O}(10^{-7})$, which are responsible for the dark photon production and decay processes, respectively. \section{Acknowledgement} We thank Jinhan Liang and Lei Zhang for helpful discussions and correspondence. The work is supported in part by the National Natural Science Foundation of China under Grant Nos.\ 11775109 and U1738134.
\section{Introduction} Testing the nature of black holes has been an important goal for decades. There is a plethora of observations that pave the way towards determining whether the spacetime around such compact objects is indeed described by the Kerr metric \cite{Berti:2015itd}. Especially with the advance of gravitational wave astronomy, as well as the X-ray observations and the observations of black hole shadows, we are closer than ever to putting strong limits on the deviations from general relativity (GR) or, even more interesting, to observing a phenomenon that does not fit well in the standard picture and is a signal for a modification of Einstein's gravity. The advance in observations calls for a further advance in theory. That is why it is very important to understand as well as possible the deviations from the Schwarzschild or Kerr metric that different alternative theories of gravity offer, and especially their astrophysical implications. Even though this topic has advanced a lot in the last decades, there is still a lot to be done. One of the reasons is that for a large class of modified theories of gravity there are uniqueness theorems stating that the solutions should be the same as in GR. Thus, it is not always easy to find a theory predicting a viable alternative to the Kerr solution while still being in agreement with the weak field observations, where GR is tested with remarkable accuracy \cite{Will:2005va,Faraoni:2010pgm,Berti:2015itd,CANTATA:2021mgk}. Among the interesting theories that fall into this category are the extended scalar-tensor theories allowing for the development of so-called scalarization. Their essence is that they are perturbatively equivalent to GR for weak fields, while second order phase transitions to a scalarized state of the compact object can occur for strong fields, which is called spontaneous scalarization \cite{Damour:1993hw,Damour:1996ke,Doneva:2017bvd,Silva:2017uqg,Antoniou:2017acq}. 
Even though initially this idea was developed in the framework of neutron stars \cite{Damour:1993hw,Damour:1996ke}, it was recently found that in certain types of modified theories of gravity the spacetime curvature itself can act as a source of the scalar field. This was first discovered in the scalar-Gauss-Bonnet (sGB) theory \cite{Doneva:2017bvd,Silva:2017uqg,Antoniou:2017acq,Cunha:2019dwb,Collodel:2019kkx,Dima:2020yac,Herdeiro:2020wei,Berti:2020kgk} but later it was demonstrated that a much larger class of modified theories of gravity admits such scalarization \cite{Herdeiro:2018wub,Andreou:2019ikc,Gao:2018acg,Doneva:2021dcc}. The dynamics of these black holes was also addressed in a series of papers \cite{Witek:2018dmd,Witek:2020uzz,Silva:2020omi,Doneva:2021dqn,Kuan:2021lol,East:2021bqk}. Further studies in sGB gravity have focused on the linear stability of these spontaneously scalarized black holes \cite{Blazquez-Salcedo:2018jnn,Blazquez-Salcedo:2020rhf,Blazquez-Salcedo:2020caw} and on interesting effects that one can have if the coupling function between the scalar field and the Gauss-Bonnet invariant is modified or a potential is included for the scalar field \cite{Minamitsuji:2018xde,Silva:2018qhn,Doneva:2019vuh,Macedo:2019sem,Bakopoulos:2020dfg}. All these studies considered the case of standard scalarization when we have the Schwarzschild (or Kerr) black hole being destabilized below a certain black hole mass and the scalarized solutions branching out at that point. Thus any initially arbitrarily small perturbation of the Schwarzschild black hole, when in the unstable range, will result in an exponential growth of the scalar field that will be quenched at a certain point by nonlinear mechanisms forming an equilibrium black hole configuration endowed by scalar hair. For certain scalar field couplings, though, another very interesting effect can be observed that we call fully nonlinear scalarization. 
That is, the Schwarzschild black hole is always linearly stable, but black holes with nonzero scalar hair still exist that are energetically more favourable \cite{Doneva:2021tvn}. An analogous phenomenon has been observed in the case of charged black holes \cite{Blazquez-Salcedo:2020nhs,LuisBlazquez-Salcedo:2020rqp,Blazquez-Salcedo:2020crd}. We call them scalarized phases of the Schwarzschild black hole, and they can be excited only if a strong enough perturbation is imposed. It was found in \cite{Doneva:2021tvn} that the spectrum of hairy black hole solutions can be complicated, and a natural question that arises is that of their stability. This is the topic of the present paper, where we study the radial stability of the scalarized phases with the aim of identifying the stable and hyperbolic parts of the branches, which will also be of interest for astrophysics. The structure of the paper is as follows. In Section \ref{sec:theory_and_ansatz} we present the basics of the employed sGB theory and the construction of the background black hole solutions. Section \ref{sec:Radial_Perturbations} is devoted to the presentation of the approach we follow in order to perturb the field equations and solve the radial perturbation problem. The main results of the paper about the stability and hyperbolicity of the scalarized phases are presented in Section \ref{sec:StabilityAnalysis}. The paper ends with Conclusions. \section{Theory and ansatz} \label{sec:theory_and_ansatz} The general form of the action in sGB gravity can be written in the following way \begin{eqnarray} S=&&\frac{1}{16\pi}\int d^4x \sqrt{-g} \Big[R - 2\nabla_\mu \varphi \nabla^\mu \varphi + \lambda^2 f(\varphi){\cal R}^2_{GB} \Big] ,\label{eq:quadratic} \end{eqnarray} where $R$ is the Ricci scalar with respect to the spacetime metric $g_{\mu\nu}$ and $\varphi$ is the scalar field. 
The Gauss-Bonnet invariant entering the action ${\cal R}^2_{GB}$ is defined as ${\cal R}^2_{GB}=R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$ where $R_{\mu\nu}$ is the Ricci tensor and $R_{\mu\nu\alpha\beta}$ is the Riemann tensor. $f(\varphi)$ is the coupling function between the scalar field and ${\cal R}^2_{GB}$, while $\lambda$ is the Gauss-Bonnet coupling constant that has dimension of $length$. We will consider static and spherically symmetric spacetimes as well as static and spherically symmetric scalar field configurations. Thus the ansatz for the spacetime metric can be taken as \begin{eqnarray} ds^2= - e^{2\Phi(r)}dt^2 + e^{2\Lambda(r)} dr^2 + r^2 (d\theta^2 + \sin^2\theta d\phi^2 ). \end{eqnarray} Using it we can derive the dimensionally reduced field equations that will describe the spacetime around the unperturbed black holes \begin{eqnarray} &&\frac{2}{r}\left[1 + \frac{2}{r} (1-3e^{-2\Lambda}) \Psi_{r} \right] \frac{d\Lambda}{dr} + \frac{(e^{2\Lambda}-1)}{r^2} - \frac{4}{r^2}(1-e^{-2\Lambda}) \frac{d\Psi_{r}}{dr} - \left( \frac{d\varphi}{dr}\right)^2=0, \label{DRFE1}\\ && \nonumber \\ &&\frac{2}{r}\left[1 + \frac{2}{r} (1-3e^{-2\Lambda}) \Psi_{r} \right] \frac{d\Phi}{dr} - \frac{(e^{2\Lambda}-1)}{r^2} - \left( \frac{d\varphi}{dr}\right)^2=0,\label{DRFE2}\\ && \nonumber \\ && \frac{d^2\Phi}{dr^2} + \left(\frac{d\Phi}{dr} + \frac{1}{r}\right)\left(\frac{d\Phi}{dr} - \frac{d\Lambda}{dr}\right) + \frac{4e^{-2\Lambda}}{r}\left[3\frac{d\Phi}{dr}\frac{d\Lambda}{dr} - \frac{d^2\Phi}{dr^2} - \left(\frac{d\Phi}{dr}\right)^2 \right]\Psi_{r} \nonumber \\ && \hspace{0.9cm} - \frac{4e^{-2\Lambda}}{r}\frac{d\Phi}{dr} \frac{d\Psi_r}{dr} + \left(\frac{d\varphi}{dr}\right)^2=0, \label{DRFE3}\\ && \nonumber \\ \end{eqnarray} \begin{eqnarray} && \frac{d^2\varphi}{dr^2} + \left(\frac{d\Phi}{dr} \nonumber - \frac{d\Lambda}{dr} + \frac{2}{r}\right)\frac{d\varphi}{dr} \nonumber \\ && \hspace{0.9cm} - \frac{2\lambda^2}{r^2} 
\frac{df(\varphi)}{d\varphi}\Big\{(1-e^{-2\Lambda})\left[\frac{d^2\Phi}{dr^2} + \frac{d\Phi}{dr} \left(\frac{d\Phi}{dr} - \frac{d\Lambda}{dr}\right)\right] + 2e^{-2\Lambda}\frac{d\Phi}{dr} \frac{d\Lambda}{dr}\Big\} =0, \label{DRFE4} \end{eqnarray} where \begin{eqnarray} \Psi_{r}=\lambda^2 \frac{df(\varphi)}{d\varphi} \frac{d\varphi}{dr}. \end{eqnarray} We further assume that the cosmological value of the scalar field vanishes, $\varphi|_{r\rightarrow\infty}\equiv\varphi_{\infty}=0$. We will focus our study on three different coupling functions, all of them leading to nonlinearly scalarized black hole solutions in the spirit of \cite{Doneva:2021tvn}: \begin{equation} f_I(\varphi)= \frac{1}{4\kappa}\left(1-e^{-\kappa \varphi^4}\right), \label{eq:coupling_function_I} \end{equation} \begin{equation} f_{II}(\varphi)= \frac{1}{6\kappa}\left(1-e^{-\kappa \varphi^6}\right), \label{eq:coupling_function_II} \end{equation} \begin{equation} f_{III}(\varphi)= \frac{1}{2\beta}\left(1-e^{-\beta (\varphi^2+\kappa\varphi^4)}\right). \label{eq:coupling_function_III} \end{equation} All the coupling functions are constructed so that the conditions $f(0)=0$ and $df/d\varphi(0)=0$ are fulfilled. The former condition can be imposed since the field equations include only the first derivative of the coupling function but not the coupling function itself, while the latter condition guarantees that the Schwarzschild solution is also a solution to the sGB field equations with $\varphi=0$. The first two couplings, with a pure quartic and sextic scalar field term in the exponent, though, have an important difference compared to the third one, since they satisfy different conditions on the second derivative with respect to the scalar field. More specifically, \begin{eqnarray} \frac{d^2f_{(I\;{\rm or}\;II)}}{d\varphi^2}(0)=0,\;\; {\rm while}\;\; \frac{d^2f_{III}}{d\varphi^2}(0)>0. 
\label{eq:CouplingFuncConditions} \end{eqnarray} The first condition, for $f_I$ and $f_{II}$, means that no tachyonic instability is possible and the Schwarzschild solution is always stable against linear scalar perturbations. Thus if black holes with scalar hair exist for such a type of coupling, they should form a new black hole phase coexisting with the stable Schwarzschild black hole phase, which we will call a scalarized black hole phase. The second condition in \eqref{eq:CouplingFuncConditions}, for $f_{III}$, leads to a tachyonic destabilization of the Schwarzschild black hole below a certain mass, which is the mechanism for the appearance of the standard spontaneously scalarized black holes \cite{Doneva:2017bvd,Silva:2017uqg,Antoniou:2017acq}. The quartic scalar field term in $f_{III}(\varphi)$, though, changes the spectrum of the solutions considerably, and for certain combinations of parameters the co-existence of both nonlinearly and standard scalarized black holes is allowed. We refer the reader to \cite{Doneva:2021tvn} for a detailed discussion of all these types of scalarization. In order to construct the black hole solutions we have to impose the following conditions at the black hole horizon $r=r_H$ \begin{eqnarray} e^{2\Phi}|_{r\rightarrow r_H} \rightarrow 0, \;\; e^{-2\Lambda}|_{r\rightarrow r_H} \rightarrow 0. \label{eq:BC_rh} \end{eqnarray} If one requires regularity of the metric functions and the scalar field, then the following condition should also be satisfied \begin{equation} r_H^4 > 24 \lambda^4 \left(\frac{df}{d\varphi}(\varphi_{H})\right)^2, \label{eq:BC_sqrt_rh} \end{equation} where $\varphi_{H}$ is the value of the scalar field at the horizon.
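As a simple illustration (not part of the numerical code used for the results below), the horizon regularity condition \eqref{eq:BC_sqrt_rh} is elementary to evaluate for the coupling $f_I$, whose first derivative is $df_I/d\varphi=\varphi^3 e^{-\kappa\varphi^4}$; the following Python sketch checks the condition in units of the Gauss-Bonnet length $\lambda$:

```python
import numpy as np

def f_I(phi, kappa):
    """Coupling f_I(phi) = (1 - exp(-kappa*phi^4)) / (4*kappa)."""
    return (1.0 - np.exp(-kappa * phi**4)) / (4.0 * kappa)

def df_I(phi, kappa):
    """First derivative df_I/dphi = phi^3 * exp(-kappa*phi^4)."""
    return phi**3 * np.exp(-kappa * phi**4)

def horizon_is_regular(rH_over_lambda, phi_H, kappa):
    """Regularity condition r_H^4 > 24 lambda^4 (df/dphi(phi_H))^2,
    written in units of the Gauss-Bonnet length lambda."""
    return rH_over_lambda**4 > 24.0 * df_I(phi_H, kappa)**2

def singular_limit(phi_H, kappa):
    """r_H/lambda on the singular limit curve for a given phi_H."""
    return (24.0 * df_I(phi_H, kappa)**2) ** 0.25

# Example (illustrative parameter values): a point just above the limit
# curve admits a regular horizon, a point just below it does not.
phi_H, kappa = 0.3, 25.0
r_lim = singular_limit(phi_H, kappa)
```

The same two helper functions, with $df/d\varphi$ replaced accordingly, apply to $f_{II}$ and $f_{III}$.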
In the calculations below we will use the following natural definition of a singular limit, described by the curve $(r_H/\lambda)^4 - 24 \left(\frac{df}{d\varphi}(\varphi_{H})\right)^2=0$ in the $(\varphi_{H},r_H/\lambda)$ plane, which separates the region of possible existence of hairy black holes from the region where the regularity condition is violated and only the Schwarzschild black hole can be a solution of the field equations. At infinity, asymptotic flatness means \begin{eqnarray} \Phi|_{r\rightarrow\infty} \rightarrow 0, \;\; \Lambda|_{r\rightarrow\infty} \rightarrow 0,\;\; \varphi|_{r\rightarrow\infty} \rightarrow 0\;\;. \label{eq:BH_inf} \end{eqnarray} The leading order scalar field asymptotic for the black hole solutions we will be interested in, assuming zero background scalar field at infinity, has the form \begin{equation} \varphi|_{r\rightarrow\infty} \sim \frac{D}{r}, \end{equation} where the constant $D$ denotes the scalar charge of the black hole solutions. \section{Radial Perturbations}\label{sec:Radial_Perturbations} \subsection{Ansatz and equations} Having discussed the equations that determine the background black hole solutions, let us now turn to their radial perturbations, with the end goal of deriving the master equation governing these perturbations. In what follows, we adopt the same procedure as in \cite{Blazquez-Salcedo:2018jnn}. Namely, we assume the following form of the metric and scalar field perturbations: \begin{eqnarray} \label{metric_pert} ds^2 &=& -e^{2\Phi(r)+\epsilon F_t(r,t)}dt^2 + e^{2\Lambda(r) + \epsilon F_r(r,t)} dr^2 + r^2 (d\theta^2 + \sin^2\theta d\phi^2), \nonumber \\ \varphi &=& \varphi_0(r) + \epsilon \varphi_1(r,t), \end{eqnarray} with $\epsilon$ being the control parameter of the perturbations.
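As a brief practical aside, the scalar charge $D$ defined above through the tail $\varphi\sim D/r$ can be estimated from a numerical background solution by a least-squares fit of the far-field data. The sketch below is our own illustration on synthetic data (the $1/r^2$ subleading term is an assumed model, not taken from the paper):

```python
import numpy as np

def scalar_charge(r, phi):
    """Estimate the scalar charge D from the tail phi(r) ~ D/r + c2/r^2,
    assuming a vanishing scalar field at infinity (phi_inf = 0)."""
    x = 1.0 / np.asarray(r, dtype=float)
    A = np.column_stack([x, x**2])      # model: phi = D*(1/r) + c2*(1/r^2)
    coef, *_ = np.linalg.lstsq(A, np.asarray(phi, dtype=float), rcond=None)
    return coef[0]                      # leading 1/r coefficient is D

# Synthetic far-field data with a known charge D = 0.7 (illustrative)
r = np.linspace(20.0, 200.0, 200)
phi = 0.7 / r - 0.3 / r**2
D = scalar_charge(r, phi)
```

In practice the fit window must lie far enough outside the horizon for the $1/r$ tail to dominate.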
By writing the field equations in terms of this metric and keeping only first order terms in $\epsilon$, one can derive a system of perturbation equations for the metric and scalar field perturbations $F_t(r,t)$, $F_r(r,t)$ and $\varphi_1(r,t)$. This system can be simplified into a single second order differential equation of the form \begin{equation} \label{wave_eq} g^2(r) \frac{\partial^2\varphi_1}{\partial t^2} - \frac{\partial^2\varphi_1}{\partial r^2} + C_1(r) \frac{\partial\varphi_1}{\partial r} + U(r) \varphi_1=0, \end{equation} where the functions $U(r)$, $g(r)$ and $C_1(r)$ depend only on the background metric and scalar field, with complicated expressions that can be found in \cite{Blazquez-Salcedo:2018jnn}. In order to study the mode stability of the background configuration, we decompose the perturbation function $\varphi_1$ as \begin{equation} \varphi_1(r,t) = \varphi_1(r) e^{i\omega t}, \end{equation} where $\omega$ is in general a complex number. {We obtain the master equation for the eigenvalue problem, namely} \begin{eqnarray} \label{master_eq} \frac{d^2\varphi_1}{dr^2} = C_1(r) \frac{d\varphi_1}{dr} + \left[ U(r) - \omega^2 g^2(r) \right] \varphi_1(r). \end{eqnarray} {The master equation (\ref{master_eq}) can be cast into the standard Schr\"odinger form by defining the function $Z(r)$:} \begin{eqnarray} \varphi_1(r) = C_0(r) Z(r), \end{eqnarray} {where $C_0(r)$ is the solution of the following differential equation} \begin{equation} \frac{1}{C_0}\frac{dC_0}{dr}= C_1 - \frac{1}{g}\frac{dg}{dr}. \end{equation} {Thus we obtain} \begin{eqnarray} \label{master_eq_simp} \frac{d^2Z}{dR^2} = \left[ V(R) - \omega^2 \right] Z, \end{eqnarray} where we have defined the tortoise coordinate $R$ and the effective potential as \begin{eqnarray} \frac{dR}{dr} &=& g, \label{eq:g}\\ V(R) &=& \frac{1}{g^2} \left( U + \frac{C_1}{C_0}\frac{dC_0}{dr}-\frac{1}{C_0}\frac{d^2C_0}{dr^2} \right). 
\label{eq:potential} \end{eqnarray} % \subsection{Boundary conditions and numerical method} We want the perturbation to have the form of an outgoing wave at infinity and an ingoing wave at the horizon. Since we are interested in the stability analysis of the background solutions, we will focus on perturbations with purely imaginary eigenfrequencies: $\omega = i\omega_I$. In such a case the asymptotic behaviour of the perturbation function $Z$ becomes: \begin{eqnarray} Z \xrightarrow[r \to \infty]{} e^{i\omega(t-R)} = e^{-\omega_I (t-R)}, \\ Z \xrightarrow[r \to r_H]{} e^{i\omega(t+R)} = e^{-\omega_I (t+R)}. \end{eqnarray} The asymptotic behaviour of unstable perturbations possessing $\omega_I < 0$ simplifies considerably, and it is straightforward to show that the function $Z$ must satisfy the following boundary conditions: \begin{eqnarray} Z|_{r=\infty}=Z|_{r=r_H}=0. \label{bc:Z} \end{eqnarray} % Our main goal in this paper is to check the desired black hole solutions for stability using the radial perturbation equation \eqref{master_eq_simp}. In order to do so we solve this equation together with the boundary conditions (\ref{bc:Z}), and calculate numerically the unstable modes with $\omega_I<0$, if they exist. The numerical procedure is the same as in \cite{Blazquez-Salcedo:2018jnn}, and we refer there for more details. The free parameters we have in our system are $r_H$, $\lambda$ and the coupling constant $\kappa$ (and $\beta$ in the case of $f_{III}$). The numerical procedure allows us to explore the dependence of the unstable mode on these parameters. % A lot of information can be gained, though, only on the basis of the behaviour of the potential $V(R)$ in eq.~\eqref{eq:potential} and more precisely on the sign of the integral \begin{eqnarray} \int^{\infty}_{-\infty} V(R) dR \ . \label{int:V} \end{eqnarray} A negative value is a signal for the instability of the black hole solutions. 
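To make the criterion concrete, a minimal numerical sketch (ours, not the production code of \cite{Blazquez-Salcedo:2018jnn}) discretizes the Schr\"odinger problem \eqref{master_eq_simp} with the boundary conditions \eqref{bc:Z} by finite differences: eigenvalues with $\omega^2<0$ correspond to growing modes, and the integral \eqref{int:V} is obtained by simple quadrature. As a check we use a P\"oschl-Teller well $V(R)=-2/\cosh^2 R$, whose single bound state has $\omega^2=-1$ exactly:

```python
import numpy as np

def schroedinger_spectrum(V, R_min, R_max, n=800):
    """Finite-difference spectrum of -Z'' + V(R) Z = omega^2 Z
    with Dirichlet boundary conditions Z(R_min) = Z(R_max) = 0."""
    R = np.linspace(R_min, R_max, n + 2)
    h = R[1] - R[0]
    Ri = R[1:-1]                      # interior grid points
    Vi = V(Ri)
    H = (np.diag(2.0 / h**2 + Vi)
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return Ri, Vi, np.linalg.eigvalsh(H), h

# Toy effective potential with exactly one bound state at omega^2 = -1
V = lambda R: -2.0 / np.cosh(R)**2
Ri, Vi, omega2, h = schroedinger_spectrum(V, -15.0, 15.0)

integral_V = h * Vi.sum()             # crude quadrature of the integral of V
unstable = omega2[omega2 < 0.0]       # omega^2 < 0 signals a growing mode
```

Here both indicators agree: the integral of $V$ is negative and the spectrum contains one negative eigenvalue, i.e. one unstable mode.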
That is why we monitor the sign of the potential and the value of this integral as we explore the domain of existence of the solutions. % Another important property of the perturbation equation is its hyperbolicity. As is well known, the time-dependent field equations in Gauss-Bonnet theory suffer from loss of hyperbolicity both in their linear and nonlinear form \cite{Blazquez-Salcedo:2018jnn,Ripley:2019hxt,Blazquez-Salcedo:2020rhf,East:2021bqk,Kuan:2021lol}. For the standard scalarization this normally happens for small mass black holes. It is interesting to observe whether such hyperbolicity loss is also present for the scalarized black hole phases with the coupling functions listed above. In order to explore this, we study the positivity of the function $(1-\frac{r_H}{r})^2 g^2$. We say that hyperbolicity of a configuration is lost if this function takes zero or negative values. Hence we also monitor the behaviour of this function as we vary the free parameters of our system. Finally, let us point out that what we consider in the present paper is only the loss of hyperbolicity with respect to the radial perturbation equations. As demonstrated in \cite{Blazquez-Salcedo:2020rhf}, the black hole mass where the evolution equation governing the axial perturbations changes its character can be somewhat different. The two points for the radial and axial perturbations are very close, though, at least for the case of standard scalarization \cite{Blazquez-Salcedo:2018jnn,Blazquez-Salcedo:2020rhf}, and that is why we expect that the conclusions made in the present paper will not be significantly altered even if other types of perturbations are considered. \section{Stability analysis}\label{sec:StabilityAnalysis} Now we start analysing the stability of the black hole solutions separately for each of the three coupling functions \eqref{eq:coupling_function_I}--\eqref{eq:coupling_function_III}.
We will comment in detail on the spectrum of solutions and their domain of existence, the stability properties, and the possible loss of hyperbolicity of the radial perturbation equation. \subsection{Results for coupling $f_I(\varphi)= \frac{1}{4\kappa}\left(1-e^{-\kappa \varphi^4}\right)$} This is perhaps the simplest type of coupling function that can lead to the existence of stable scalarized black hole phases of the stable Schwarzschild solution. Indeed, a similar phenomenon might appear for simpler types of coupling when $f(\varphi)$ is proportional to $\varphi^4$, but our studies in \cite{Doneva:2021tvn} have shown that even if scalarized phases exist in this case, they are always unstable and thus not of interest. Other studies of coupling functions containing some type of $\varphi^4$ term, which did not consider the phenomenon of nonlinear scalarized phases, were performed e.g. in \cite{Antoniou:2017hxj,Minamitsuji:2018xde,Silva:2018qhn,Corelli2020}. Let us start with a brief description of the domain of existence and the structure of the scalarized black hole branches. It changes with the value of $\kappa$, but essentially seems to fall into one of three possible cases. \begin{figure}[H] \centering \includegraphics[width=0.38\textwidth,angle=-90]{plot_ks_rh_phih_full_phi4_v2.eps} \caption{Domain of existence of black holes parametrized by $r_H/\lambda$ and $\varphi_H$ for the first coupling $f_I$ and different values of $\kappa$ (blue, red and green for $\kappa=25$, $100$ and $1000$ respectively). Dashed lines correspond to black hole solutions while the solid line represents the singular limit curve.} \label{Fig:domain_I} \end{figure} In figure \ref{Fig:domain_I} we show $r_H/\lambda$ vs $\varphi_H$ for the first coupling $f_I$, eq.~\eqref{eq:coupling_function_I}, and $\kappa=25,100,1000$ in blue, red and green, respectively, representing small, intermediate and large values of $\kappa$.
The solid lines correspond to the singular limit curve, which is determined by eq.~\eqref{eq:BC_sqrt_rh} and separates the region where black hole solutions with scalar hair can exist from the region where their existence is not possible due to violation of the regularity condition at the black hole horizon. The dashed lines represent the scalarized black hole phases. For $\kappa=1000$ (green), all solutions are smoothly connected, and the solution curve lies above the limit curve for this value of $\kappa$. As one decreases $\kappa$, the maximum of the limit curve rises and eventually intersects the curve of solutions, dividing it into two pieces. This is the case for $\kappa=100$ (red), where it can be seen that the solutions are separated into two different sets, one to the left of the limit curve and another one to the right. Interestingly, as one further decreases $\kappa$, the right part of the curve of solutions deforms into a closed loop. This is the case for $\kappa=25$ (blue). Remember that for every value of $\kappa$, including the closed-loop case, we have a branch of hairy black hole solutions spanning from zero black hole mass to the intersection with the line of existence. For resolution reasons, though, it is difficult to distinguish these branches for different $\kappa$ in Fig.~\ref{Fig:domain_I} (see for instance how the solutions for all $\kappa$ are superimposed in the bottom left corner of this figure). Below we will consider the stability of each of these cases separately. Let us just summarize in advance that the results indicate that only large and intermediate values of $\kappa$ lead to both radially stable and hyperbolic solutions. In addition, all conclusions regarding radial stability are in agreement with the thermodynamic analysis performed in \cite{Doneva:2021tvn}.
\subsubsection{Coupling $f_{I}$, large $\kappa=1000$} \begin{figure}[H] \centering \includegraphics[width=0.38\textwidth,angle=-90]{plot_k1000_rh_phih_full_phi4_v2.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_k1000_M_D_full_phi4_v2.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_k1000_M_omegaI_full_phi4_v2.eps} \caption{The results for coupling $f_I$ and large $\kappa=1000$. \textit{(top-left panel)} The normalized black hole radius $r_H/\lambda$ as a function of $\varphi_H$. \textit{(top-right panel)} The normalized black hole scalar charge $D/\lambda$ as a function of the black hole mass $M/\lambda$. The solid black line represents the singular limit curve while the lines with different colors correspond to different branches of hairy black holes. \textit{(bottom panel)} The imaginary part of the frequency normalized with the black hole mass, $M \omega_I$, for the only unstable branch, b1, as a function of $M/\lambda$.} \label{Fig:k1000_I} \end{figure} In figure \ref{Fig:k1000_I} we plot the results for $f_I$ and $\kappa=1000$, which represents the large $\kappa$ case discussed above. In the top-left panel we show again $r_H/\lambda$ vs $\varphi_H$, but with a clear separation between the different parts of the branch that possess different properties, such as (in)stability and (loss of) hyperbolicity. We have introduced the following naming convention for the branches: We call b1 (in red) the part of the branch that spans from zero black hole mass up to the maximum of $r_H$, while b2 (in blue and orange) is the second part of the sequence after this maximum. Additional information about the black hole branches can be gained from the top-right panel of figure \ref{Fig:k1000_I}, where we show the scalar charge $D/\lambda$ vs $M/\lambda$. It can be seen there that the solutions in b1 always lie below the solutions in b2 and thus have a lower scalar charge.
Solutions in b1 (red curve) are always unstable, since the integral of the potential (\ref{int:V}) is negative and it is possible to find an unstable mode for all these configurations. This can be seen in the bottom panel of figure \ref{Fig:k1000_I}, where the imaginary part of the frequency $\omega_I$ for the b1 branch is plotted as a function of $M/\lambda$ \footnote{Remember that for unstable modes the real part of the frequency is always zero.}. The black hole with the maximum value of $M/\lambda$ possesses a zero mode. The solutions belonging to the b2 branch are stable. However, for small enough values of $M/\lambda$, the radial perturbations become non-hyperbolic (orange curve in the top panels in Fig. \ref{Fig:k1000_I}). This non-hyperbolic part is smaller for larger $\kappa$, while it can increase significantly with the decrease of $\kappa$, as we will see below. % \subsubsection{Coupling $f_{I}$, intermediate $\kappa=100$} \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth,angle=-90]{plot_k100_rh_phih_full_phi4_v2.eps} \includegraphics[width=0.34\textwidth,angle=-90]{plot_k100_M_D_full_phi4_v2.eps} \includegraphics[width=0.36\textwidth,angle=-90]{plot_k100_M_omegaI_full_phi4_v2.eps} \caption{The same type of plots as in Fig. \ref{Fig:k1000_I} but for $\kappa=100$ and coupling $f_I$.} \label{Fig:k100_I} \end{figure} Now let us turn to the case of intermediate values of $\kappa$, which is perhaps the most complicated of all. The behavior of the solutions as well as their stability is represented in figure \ref{Fig:k100_I}. Let us first focus on the top-left panel of the figure, which shows $r_H/\lambda$ vs $\varphi_H$. In red we show the solutions from branch b1, which now end on the limit curve. In blue and in orange we show configurations from branch b2. However, a new branch b3 is found in this case, and it is represented by the pink and purple curves. Configurations in b3 are continuously connected with the solutions in b2.
However, branch b3 does not stop at the singular limit curve (solid black line). Instead it is smoothly connected to a new branch of solutions that we label by b4 (green curve). % As a matter of fact, we suspect that more branches like this exist, connecting with b4. If we look at the top-right diagram, where $D/\lambda$ is represented as a function of $M/\lambda$, we can see how these branches inspiral around a finite value of the mass and scalar charge, each branch being considerably shorter than the previous one. This is a very interesting phenomenon observed also for other hairy black hole solutions \cite{Herdeiro:2014goa,Kunz:2019bhm} (or for black strings \cite{Kleihaus:2006ee,Kalisch:2016fkm,Kalisch:2017bin}). It is, though, notoriously difficult to obtain these branches from a numerical point of view. As we will see later, they are also unstable {or non-hyperbolic} and that is why we have limited our calculations up to b4. % Similar to the large $\kappa$ case, solutions in b1 are always unstable but always hyperbolic. We show the unstable mode in the bottom panel of figure \ref{Fig:k100_I}, red curve. For the configuration in b1 that ends on the critical solution, the mode has a finite value. Solutions in b3 are also unstable and possess an unstable mode (pink and purple line). The mode becomes a zero mode at the solution with maximum mass. Interestingly, part of the b3 branch also loses hyperbolicity (purple curve) as we approach the configuration with minimum mass. In fact, as branch b4 connects with b3, it remains non-hyperbolic as it inspirals, and we expect that the additional branches inspiraling after b4 will also be either unstable or non-hyperbolic. Solutions in b2 are again stable (blue), but similar to the large $\kappa$ case, the radial perturbations become non-hyperbolic (orange curve) for small enough values of $M/\lambda$.
With the further decrease of $\kappa$, the part of the branch that is hyperbolic shrinks until it completely disappears, making the full b2 branch non-hyperbolic, as we will see in the next subsection. \subsubsection{Coupling $f_{I}$, small $\kappa=25$} \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth,angle=-90]{plot_k25_rh_phih_full_phi4_v2.eps} \includegraphics[width=0.34\textwidth,angle=-90]{plot_k25_M_D_full_phi4_v2.eps} \includegraphics[width=0.36\textwidth,angle=-90]{plot_k25_M_omegaI_full_phi4_v2.eps} \caption{The same type of plots as in Fig. \ref{Fig:k1000_I} but for $\kappa=25$ and coupling $f_I$.} \label{Fig:k25_I} \end{figure} In figure \ref{Fig:k25_I} the results for coupling $f_I$ and small $\kappa=25$ are shown. The spectrum of solutions in this case is somewhat simpler. Let us first look in detail at the top-left panel of the figure, where $r_H/\lambda$ vs $\varphi_H$ is plotted. In red we show solutions from branch b1, with similar properties for all $\kappa$ ranges. In orange we have the solutions from b2, which are continuously connected with the b3 solutions (purple). In the top-right panel of figure \ref{Fig:k25_I} the $D/\lambda$ vs $M/\lambda$ dependence is represented. Branch b2 and branch b3 form a closed loop that lies between two solutions with minimum and maximum mass. Now let us look at the stability of the solutions, which can be concluded from the bottom panel of the figure, where the imaginary parts of the unstable modes are represented. Solutions in b1 (red curve) are always unstable and hyperbolic, as can be seen from the unstable mode frequency plotted there for b1. Solutions in b2 (orange) and b3 (purple) are always non-hyperbolic.
\subsection{Results for coupling $f_{II}(\varphi)= \frac{1}{6\kappa}\left(1-e^{-\kappa\varphi^6}\right)$} \begin{figure}[H] \centering \includegraphics[width=0.38\textwidth,angle=-90]{plot_ks_rh_phih_full_phi6_v1.eps} % \caption{The normalized horizon radius $r_H/\lambda$ as a function of the scalar field at the horizon for $f_{II}$ and several values of $\kappa$. Dashed lines correspond to black hole solutions while the solid line represents the singular limit curve.} \label{Fig:results_II} \end{figure} Here we analyze the coupling $f_{II}$, which contains a sextic scalar field term in the exponent instead of the quartic term in $f_I$. It is interesting because it also allows for the existence of scalarized phases. Essentially, the results are similar to those for the first coupling $f_I$, and that is why we will present them here only briefly. In figure \ref{Fig:results_II} we show $r_H/\lambda$ vs $\varphi_H$ for $f_{II}$ and $\kappa=500,3\times 10^3,2\times 10^5,1.8\times 10^6$ in blue, red, green and purple, respectively. The solid lines correspond to the singular limit curves, while the dashed lines represent all scalarized solutions. The general structure of the space of solutions is very similar to the one obtained for $f_I$, where three distinct types of cases, depending on the value of $\kappa$, can be recognized: For large values of $\kappa$, all solutions are smoothly connected. For intermediate values, the limit curve separates the solutions in two parts, to the left and to the right. For small values of $\kappa$, the branches on the right side close, forming a loop.
% \begin{figure}[H] \centering \includegraphics[width=0.38\textwidth,angle=-90]{plot_k1.8e6_rh_phih_full_phi6_v1.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_k2e5_rh_phih_full_phi6_v1.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_k3000_rh_phih_full_phi6_v1.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_k500_rh_phih_full_phi6_v1.eps} \caption{The horizon radius $r_H/\lambda$ as a function of $\varphi_H$ for $f_{II}$ and $\kappa=1.8\times10^6$ (upper-left), $\kappa=2\times10^5$ (upper-right), $\kappa=3000$ (bottom-left), and $\kappa=500$ (bottom-right). % The solid black line represents the singular limit curve while the lines with different colors correspond to different branches of hairy black holes. The stability and hyperbolicity of the branches are explicitly marked in the figure. % } \label{Fig:k1.8e6_II} \end{figure} % Since the stability properties of the branches in this case are qualitatively very similar to the $f_I$ coupling, we will present the results for all interesting ranges of $\kappa$ in the single figure \ref{Fig:k1.8e6_II}. {There we show $r_H/\lambda$ vs $\varphi_H$, with a single plot for each interesting $\kappa$ range and a representative value of $\kappa$ chosen. The stability and hyperbolicity of the branches are explicitly marked.} As one can see, the b1 branches are all unstable. All b3 branches in the figures are also either unstable or non-hyperbolic. The only branch that can be stable is b2. This branch, too, loses hyperbolicity for small black hole masses, and the non-hyperbolic region gets bigger with the decrease of $\kappa$. The stable hyperbolic part of the branch completely disappears for small enough $\kappa$, as can be observed in the bottom-right panel of Fig.~\ref{Fig:k1.8e6_II}.
On the other hand, for intermediate values of $\kappa$, we also expect b3 to be smoothly connected to branches that inspiral around some particular values of the black hole parameters (see for instance the inset in the top right figure). But these branches will most likely be unstable or non-hyperbolic. % \subsection{Results for coupling function $f_{III}(\varphi)= \frac{1}{2\beta}\left(1-e^{-\beta (\varphi^2+\kappa\varphi^4)}\right)$} \begin{figure}[H] \centering \includegraphics[width=0.38\textwidth,angle=-90]{plot_b6_ks_M_D_full_phi2+4_v3.eps} \includegraphics[width=0.38\textwidth,angle=-90]{plot_b6_k4_M_omegaI_full_phi2+4_v3.eps} \caption{Results for the scalarized black hole branches and their stability for the third coupling $f_{III}$. \textit{(left panel)} The normalized scalar charge $D/\lambda$ as a function of $M/\lambda$. \textit{(right panel)} The imaginary part of the frequency normalized with the black hole mass, $M \omega_I$, for the unstable branches in the left panel, as a function of $M/\lambda$.} \label{Fig:results_III} \end{figure} Here we will analyze the $f_{III}$ coupling function, which allows for the presence of both the standard scalarization and the nonlinear scalarized phases. The structure of the solution branches in this case is simpler compared to $f_I$ and $f_{II}$, and it is presented in Fig.~\ref{Fig:results_III}. We fix $\beta=6$, meaning that the quadratic part of the coupling is always on, and only $\kappa$ is varied. This parameter is, loosely speaking, responsible for the appearance and the behavior of the nonlinear scalarized phases. We consider {four values of $\kappa=0,1,4,16$}, i.e.\ we gradually deform the purely quadratic coupling. In figure \ref{Fig:results_III} (left) we show $D/\lambda$ vs $M/\lambda$. For $\kappa=0$ we have the well known curve of standard scalarized solutions: these black holes possess a non-trivial scalar field with no nodes.
They branch off from the Schwarzschild solution at the point where it becomes unstable and extend down to $M/\lambda=0$. These black holes are radially stable, but for small values of the mass they lose hyperbolicity, as indicated in the plot. When $\kappa=1$, the contribution from the $\varphi^4$ coupling is still small, and the qualitative behavior of the standard scalarized branch is the same as in the $\kappa=0$ case. We should note that in the figure we have plotted only the so-called fundamental branch of solutions that have no nodes of the scalar field. More branches of solutions exist for smaller masses, with an increasing number of scalar field nodes, but all of them are unstable \cite{Blazquez-Salcedo:2018jnn} and thus we will not pay special attention to them. With the increase of $\kappa$, though, the picture gradually changes, and already for $\kappa=4$ the deviations due to the quartic coupling are more significant. After the bifurcation point the scalarized branch turns right instead of left, which is normally a signal of instability, observed also in the case of pure standard scalarization with $\kappa=0$ but for smaller $\beta$ \cite{Doneva:2017bvd,Doneva:2018rou,Silva:2018qhn}. Thus we expect that here as well part of the fundamental branch is no longer stable. In fact, uniqueness on this branch with respect to the total mass is lost: It is possible to find two node-less solutions with the same mass and different scalar charges. However, the solution with lower scalar charge is unstable up to the maximum of the mass. This can be seen in figure \ref{Fig:results_III} (right), where we plot the unstable radial mode for these solutions. Note that for $\kappa=4$ there are two zero modes, corresponding to the two branching solutions: Schwarzschild ($M/\lambda=0.587$) and the maximum mass configuration ($M/\lambda=0.622$). {From this maximum mass configuration emerges the second branch that turns out to be radially stable.
Thus, our interpretation is that we can formally divide the scalarized solution curve into two branches, before and after the maximum of the mass, similar to the other coupling functions $f_I$ and $f_{II}$. The red curve would correspond to the branch b1 while the blue and orange correspond to b2.} Similar to what we find for $\kappa=0,1$, though, these solutions become non-hyperbolic for small enough values of the mass. Apart from the visual analogy with the other coupling functions, we also have a physical motivation for associating the larger $D/\lambda$ branch with {b2}. The reason is that it is stable but unconnected to Schwarzschild (at least not through a sequence of stable solutions). Part of it also spans a region where the Schwarzschild solution is stable. In order to ``jump'' from the GR branch to the one with stable hairy black holes, a large initial perturbation will be required, which is exactly the mechanism for the formation of black hole scalarized phases described above. In figure \ref{Fig:results_III} (left) we also show solutions with $\kappa=16$. In this case the limit curve separates the solutions into two different sets. After the bifurcation point, the fundamental scalarized branch turns right, and it is possible to obtain an unstable mode along this branch (see figure \ref{Fig:results_III} (right)). The stable branch is no longer connected with the unstable solutions, but instead is surrounded by non-hyperbolic branches, very similar to what we obtain for the couplings $f_I$ and $f_{II}$ for intermediate values of $\kappa$. This is another motivation to associate the larger $D/\lambda$ branch with b2. \section{Conclusions} In the present paper we have studied the radial stability and hyperbolicity of {the so-called nonlinear scalarized phases of Schwarzschild black holes within the beyond spontaneous scalarization scenario}.
These are black hole solutions with nontrivial scalar field that co-exist with the (always) linearly stable Schwarzschild solutions and can be excited only nonlinearly, if a strong enough perturbation is imposed. The spectrum of these solutions can be very complicated, possessing several distinct branches, and it is interesting to identify the stable solutions that are potentially relevant for astrophysics. Depending on the exact choice of the coupling function between the scalar field and the Gauss-Bonnet invariant, the number and the structure of hairy black hole branches can vary significantly. In some cases these branches are smoothly connected to each other, and in others they are terminated because they start violating the regularity condition at the black hole horizon. For certain values of the parameter $\kappa$ in the coupling functions \eqref{eq:coupling_function_I} and \eqref{eq:coupling_function_II} we have even found a series of inspiraling branches, which is a new result, not observed before in sGB gravity. Despite the complicated structure of solutions, it turns out that only one scalarized branch can be stable, which coincides with the results from the thermodynamical analysis in \cite{Doneva:2021tvn}. This branch either spans from zero black hole mass to a maximum mass, as happens for large $\kappa$, or it is limited between two finite nonzero values of the mass for small $\kappa$. As far as hyperbolicity is concerned, it turns out that it is lost for small black hole masses in the former case, while in the latter we can easily have a situation where the whole {potentially stable} branch consists of non-hyperbolic solutions with respect to the radial perturbation equation. This effectively means that for certain coupling parameters it is not possible to have both stable and hyperbolic scalarized phases of the Schwarzschild black hole.
We should point out, though, that this undesired behavior happens only in a limited range of small $\kappa$, while for the majority of cases we can have stable and hyperbolic black hole solutions that are viable astrophysical candidates. We have also focused our analysis on a second type of coupling function, given by \eqref{eq:coupling_function_III}, that can be viewed as a connection between the standard scalarization and the nonlinear scalarized phases. Namely, the Schwarzschild black hole still becomes unstable below a certain mass, but the standard scalarized black holes branching out at this point are unstable and smoothly connected to a new branch of solutions whose properties closely resemble the scalarized phases. More specifically, this new branch also exists for values of the parameters where the Schwarzschild black hole is stable, and solutions can form scalar hair only if a sufficiently large nonlinear perturbation is imposed. The radial stability analysis indeed shows that this new branch of solutions is stable and, similar to the case of standard scalarization, it loses hyperbolicity for small black hole masses. All these results place the nonlinear scalarized phases of the Schwarzschild black hole on a solid basis, showing that they are both stable and hyperbolic for a sufficiently large range of parameters. The fact that they are not smoothly connected with the stable Schwarzschild black hole offers a variety of astrophysical implications. Namely, the transition from one to the other will happen with a jump that will leave a distinct gravitational wave and electromagnetic signature. In order to study such processes in detail, though, we need to examine the nonlinear dynamics of the system, which will be the topic of future work. \bibliographystyle{unsrt}
\section{Introduction} \label{sec1} The BV/BRST formalism was developed as a general method to quantize an arbitrary field theory with gauge redundancies \cite{Batalin:1981jr,Batalin:1983ggl}.\footnote{Useful expositions of the subject include the book \cite{HT} and the review \cite{Gomis:1994he}.} It is particularly useful, and in fact necessary, for field theories whose gauge structure is reducible, namely when not all gauge transformations are independent, or whose gauge algebra closes either only up to equations of motion or with structure ``constants'' that depend on the fields of the theory. This is the case in many supersymmetric field theories, in supergravity and string field theory, and more generally whenever differential form fields of degree higher than one are present in the theory, but also in a variety of topological field theories such as the A-model and B-model \cite{Witten:1991zz,Witten:1988xj}, Chern-Simons theory \cite{Deser:1981wh,Witten:1988hf} and BF theories. The construction of topological quantum field theories in the spirit of the BV/BRST formalism found a geometric foundation through the work of Alexandrov, Kontsevich, Schwarz and Zaboronsky, who showed that solutions to the classical master equation correspond to graded supermanifolds with a QP structure \cite{Alexandrov:1995kv}. QP manifolds are characterised by the existence of a cohomological vector field Q of degree 1 and of a graded symplectic structure P, which are compatible in the sense that the graded symplectic form is Q-invariant. The AKSZ construction utilizes this structure to build a ladder of topological field theories that corresponds to the geometric ladder of QP manifolds in diverse dimensions.
The first instances include the 2D Poisson sigma model \cite{SchallerStrobl,Ikeda}---and the A-model and B-model through its gauge fixing---and an extension of the 3D Chern-Simons theory from algebras to algebroids, called the Courant sigma model \cite{Ikeda:2000yq,Ikeda:2002wh,Hofman:2002jz,Roytenberg:2006qz}. Both have found numerous applications, notably in the deformation quantization of Poisson manifolds \cite{Kontsevich:1997vb,Cattaneo:1999fm}, which to some extent sparked the development of noncommutative field theory \cite{Szabo:2001kg}, and in the description of general string backgrounds with fluxes \cite{Mylonas:2012pg,Chatzistavrakidis:2015vka,Chatzistavrakidis:2018ztm,Bessho:2015tkk,Heller:2016abk} and T-duality \cite{Severa:2016prq}. On the other hand, it is fair to say that the BV/BRST formalism is more general than the AKSZ construction, since the target space of a given gauge field theory need not possess an underlying QP structure. This is indeed the case in a variety of topological field theories, such as the 2D H-twisted WZW-Poisson sigma model \cite{Klimcik:2001vg}, the 2D Dirac sigma models \cite{Kotov:2004wz} and a class of models with twisted R-Poisson structure in any spacetime dimension that was recently described in Ref. \cite{Chatzistavrakidis:2021nom} and extended further in Ref. \cite{Ikeda:2021rir} to higher Dirac structures. The underlying reason for the obstruction to QP-ness can be either the presence of a Wess-Zumino term that renders the Q and P structures of the graded manifold incompatible or, more drastically, the very absence of a P structure.\footnote{See for example \cite{Alkalaev:2013hta} for the case of presymplectic structures.} In such cases the original AKSZ construction does not apply. Nevertheless, it was demonstrated recently in the case of the H-twisted Poisson sigma model that a solution to the classical master equation can be determined \cite{Ikeda:2019czt}.
This solution, found using the traditional approach to the BV formalism, involves a single term quadratic in antifields, which modifies the one of the untwisted case by $H$-dependent terms, $H$ being the closed 3-form on the target space that gives rise to the Wess-Zumino term. The full term is controlled by the so-called basic curvature associated to the notion of an $E$-connection in the case $E=T^{\ast}M$. $E$-connections and $E$-covariant derivatives generalize the usual notions of vector bundle connections and covariant derivatives when the tangent bundle $TM$ is replaced by a general vector bundle $E$ with suitable structure, for example a Lie or Courant algebroid \cite{Blaom, Abad-Crainic, Kotov-Strobl2}. Such higher geometric structures appear to play a central role in the BV/BRST formalism, notably also for topological field theories without a QP structure in their target space. The main goal of the present paper is to determine the solution to the classical master equation for 4-form-twisted R-Poisson sigma models in 3D. This is a special case of a general class of topological field theories with a $(p+2)$-form-twisted R-Poisson structure on the target space in any world volume dimension $p+1$. This structure comprises a smooth target space manifold $M$ endowed with a Poisson bivector $\Pi\in \G(\wedge^2TM)$ together with a fully antisymmetric multivector $R$ of order $p+1$ and a closed $(p+2)$-form $H$ such that{\footnote{We have corrected a minus sign in comparison to the definition in Ref. \cite{Chatzistavrakidis:2021nom}.}} \begin{equation} [\Pi,R]=(-1)^{p+1}\langle\otimes^{p+2}\Pi, H\rangle, \end{equation} where the bracket on the left-hand side is the Schouten-Nijenhuis bracket of multivector fields. Thus a twisted R-Poisson manifold is given as the quadruple $(M,\Pi,R,H)$. For vanishing $R$ and $H$ it reduces to a Poisson manifold, and thus higher-dimensional generalizations of the Poisson sigma model are included as special cases.
A graded-geometric incarnation of the structure is achieved by considering the graded supermanifold ${\cal M}=T^{\ast}[p]T^{\ast}[1]M$ equipped with a cohomological vector field, a Q-structure. Being a degree-shifted cotangent bundle, i.e. a generalized phase space, this manifold has a natural graded symplectic structure P. When $H$ vanishes, the two structures are compatible, rendering ${\cal M}$ a QP manifold; however, this ceases to be true in the presence of the $(p+2)$-form $H$, to which we refer as the twist. Once a sigma model with this target space is written down, the twist appears as a Wess-Zumino term. The theory contains four different types of fields, scalars $X^{i}$, spacetime 1-forms $A_i$, spacetime $(p-1)$-forms $Y^{i}$ and spacetime $p$-forms $Z_i$, where $i$ is a target space index. Since there are differential forms of degree higher than 1, the theory is reducible at multiple stages and moreover its gauge algebra does not close off-shell. This means that the BV formalism must be used to quantize the theory. As already mentioned, this is a standardized task in the absence of a Wess-Zumino term, since one can simply apply the AKSZ construction. For instance, the theory in 3D is a specific version of the Courant sigma model. On the other hand, turning on the Wess-Zumino term leads to a departure from this simple picture. Since there is no standardized procedure \`a la AKSZ for determining the BV action in such cases, one should find it in a direct way, as in the case of the $H$-twisted Poisson sigma model. However, compared to this 2D case, new features appear in higher dimensions. Notably, we will see that the square of the BRST operator on the top form $Z_i$ contains field equations not only linearly but also non-linearly, namely products of field equations. This is not the case in 2D models, like the $H$-twisted Poisson sigma model.
Nevertheless, a careful application of the BV formalism leads to the correct BV action in the 3D case, with all modifications due to the twist taken into account. This is then the first example of a 4-form-twisted Courant sigma model, in the sense of \cite{Hansen:2009zd}, whose BV action is determined. The mathematical structure underlying this model is within the class of pre-Courant algebroids \cite{Vaisman:2004msa,preca2,prehequiv}. The problem can actually be formulated in arbitrary world volume dimensions. Using the gauge invariances of the classical action, the ghost and ghost-for-ghost content of the theory is easily identified and the BRST operator and its square can be calculated for the full tower of fields and ghosts in the presence of the $(p+2)$-form twist. As one would expect, the square of the BRST operator turns out to be nonvanishing, yet proportional to the classical field equations or products thereof. A genuine BV operator, i.e. one that is strictly nilpotent, then requires the introduction of antifields in the theory. In general dimensions, we provide a universal set of closed formulas for the BV operator on all fields and ghosts in the case of vanishing twist. Although such expressions have been written down explicitly for 3D Courant sigma models \cite{Ikeda:2000yq} (see also \cite{Grewcoe:2020ren} for an $L_{\infty}$ approach), here we derive an elegant set of formulas for topological sigma models in any dimension. The essential content of these formulas could also be found via the AKSZ construction in the untwisted case; nevertheless, we have determined them in a direct way, which results in very explicit and compact expressions. As already mentioned, the problem becomes more tractable in 3 world volume dimensions, in which case we determine the full BV operator including the twist and consequently the full BV action for 4-form-twisted Courant sigma models of twisted R-Poisson type. The rest of the paper is organised as follows.
In Section \ref{sec2} we describe the main features of twisted R-Poisson sigma models and the geometry of their target space. In Section \ref{sec21} twisted R-Poisson manifolds are defined and their description as Q-manifolds in graded geometry is explained. The corresponding topological field theory is discussed in Section \ref{sec22}, together with its gauge symmetries and field equations. Section \ref{sec31} contains the identification of the ghosts and ghosts for ghosts that must be included in the theory in arbitrary dimensions. In the same section we determine the BRST operator for all fields and ghosts and show that it is nilpotent only on-shell. We proceed in Section \ref{sec32} with the introduction of antifields and antighosts and the extension of the BRST operator to the BV operator, which is nilpotent off-shell, in the untwisted case. We then focus on 3-dimensional world volumes with 4-form Wess-Zumino term in Section \ref{sec4}, where we fully determine all the $H$-dependent corrections to the AKSZ R-Poisson-Courant sigma model. Finally, Section \ref{sec5} contains our conclusions and outlook. \section{Twisted R-Poisson topological field theory} \label{sec2} \subsection{Twisted R-Poisson manifolds} \label{sec21} The geometrical structure of the target space for the topological field theories considered in this paper is a twisted R-Poisson manifold. This is an extension of Poisson and twisted Poisson structures that includes multisymplectic structures. Before explaining it in more detail, let us briefly recall some basic facts about ordinary Poisson geometry that will be useful in what follows. Our goal is to characterize a Poisson structure in several different yet equivalent ways. The most common one is the Lie bracket characterization, where a Poisson manifold is a smooth manifold $M$ equipped with a bilinear, skew-symmetric map $\{\cdot,\cdot\}: C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)$ that satisfies the Jacobi identity and the Leibniz rule.
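As a quick illustration of the bracket characterization, the following snippet checks the Jacobi identity symbolically for the linear (Lie-Poisson) bracket on $\mathbb{R}^3$ associated with $\mathfrak{su}(2)$. The example and all names in it are our own illustrative choices, not taken from the paper.

```python
import sympy as sp

# Coordinates on M = R^3 with the su(2) Lie-Poisson bivector
# Pi^{ij} = epsilon^{ijk} x_k  (an illustrative choice).
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
Pi = [[sum(sp.LeviCivita(i, j, k) * X[k] for k in range(3)) for j in range(3)]
      for i in range(3)]

def bracket(f, g):
    """Poisson bracket {f,g} = Pi^{ij} d_i f d_j g."""
    return sp.expand(sum(Pi[i][j] * sp.diff(f, X[i]) * sp.diff(g, X[j])
                         for i in range(3) for j in range(3)))

# Bilinearity and the Leibniz rule hold by construction; the nontrivial
# input is the Jacobi identity, equivalent to [Pi,Pi] = 0:
f, g, h = x1 * x2, x2**2 + x3, x1 + x3**2
jacobiator = (bracket(f, bracket(g, h)) + bracket(g, bracket(h, f))
              + bracket(h, bracket(f, g)))
print(sp.simplify(jacobiator))  # 0
```

Any linear bivector built from Lie algebra structure constants passes this test, which is the component content of the condition $[\Pi,\Pi]=0$ discussed next.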
In our approach we will mostly work with the equivalent definition of a Poisson manifold $(M,\Pi)$, with $\Pi\in\G(\wedge^2 TM)$ an antisymmetric bivector field (Poisson structure) that satisfies \begin{equation} [\Pi,\Pi]=0 \end{equation} with respect to the Schouten-Nijenhuis bracket on multivector fields $[\cdot,\cdot]:\wedge^pTM\times \wedge^q TM\to \wedge^{p+q-1}TM$. It is worth mentioning that this structure can be twisted in a specific way once a closed 3-form $H_3$ on $M$ is considered. This gives rise to the so-called twisted Poisson manifold $(M,\Pi,H_3)$ \cite{Severa:2001qm} with defining conditions \bea \label{twistedPoisson} \frac 12 \, [\Pi,\Pi]&=&\langle \otimes^{3}\Pi,H_3\rangle\,, \\[4pt] \mathrm{d} H_3&=&0\,, \end{eqnarray} where $\mathrm{d}$ is the de Rham differential and the contraction of the tensor field $\otimes^3\Pi$ with the 3-form in a given local coordinate system is taken in the odd-order indices of each $\Pi$. Both of the above structures may be seen as the geometry underlying certain Lie algebroids. Recall that a Lie algebroid is a vector bundle $E$ over $M$ with a Lie algebra structure on its sections, i.e. a Lie bracket $[\cdot,\cdot]_{E}:\G(E)\times \G(E)\to \G(E)$ that satisfies a Leibniz rule, like the ordinary Lie bracket of vector fields on the tangent bundle $TM$, with the help of a smooth bundle (anchor) map $\rho: E\to TM$: \begin{equation} [e,f e']_E=f[e,e']_E+\rho(e)f \, e'\,, \quad e,e'\in \G(E), f\in C^{\infty}(M)\,.
\end{equation} Poisson structures and their twisted extension are related to this concept when one makes the choice that the vector bundle is the cotangent bundle, namely $E=T^{\ast}M$, the map $\rho$ is the ``musical'' isomorphism $\Pi^{\sharp}:T^{\ast}M\to TM$ induced by a (possibly twisted, in the above sense) Poisson structure on $M$ and the bracket on sections of the cotangent bundle (that is, 1-forms) is the (twisted) Koszul bracket{\footnote{Note that the action of $\Pi^{\sharp}$ on $e$ is $\Pi^{\sharp}(e)=\Pi^{ij}e_{i}\partial_j.$}} \begin{equation} [e,e']_K={\cal L}_{\Pi^{\sharp}(e)}e'-{\cal L}_{\Pi^{\sharp}(e')}e-\mathrm{d} (\Pi(e,e'))-H_3(\Pi^{\sharp}(e),\Pi^{\sharp}(e'))\,. \end{equation} Then $(T^{\ast}M,\Pi^{\sharp},[\cdot,\cdot]_{K})$ is a Lie algebroid if and only if the defining conditions \eqref{twistedPoisson} hold. Alternatively, a more modern way to think about the above structures is in the context of graded differential geometry and $Q$-structures. Specifically, Vaintrob showed in Ref. \cite{Vaintrob} that a Lie algebroid on $E$ is in one-to-one correspondence with a $Q$-manifold ${\cal M}=E[1]$, that is a graded manifold whose fiber coordinates are assigned degree 1, equipped with a cohomological vector field $Q_{E}$ of degree 1, namely one that satisfies $Q^{2}=\frac 12\{Q,Q\}=0$. In the present case, the graded manifold is $T^{\ast}[1]M$ and the degree 1 vector field reads \begin{equation} \label{Qhp} Q_{T^{\ast}M}=\Pi^{ij}(x)\xi_i\partial_{x^{j}}-\frac 12 (\partial_i\Pi^{jk}+\Pi^{jl}\Pi^{km}H_{ilm})\xi_{j}\xi_{k}\partial_{\xi_{i}}\,, \end{equation} where $(x^{i},\xi_{i})$ are degree 0 and 1 coordinates on the graded manifold respectively and we introduced the notation $\partial_{x^{i}}=\partial/\partial x^{i}$ and $\partial_{\xi_{i}}=\partial/\partial\xi_{i}$. In accordance with the above statements, this vector field is of degree 1 and it satisfies the condition $Q^{2}=0$ if and only if \eqref{twistedPoisson} hold, with or without $H_3$. 
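A simple way to probe the defining conditions \eqref{twistedPoisson} in components is the standard nondegenerate example of a twisted Poisson structure: an invertible bivector $\Pi$ together with the twist $H_3=\mathrm{d}(\Pi^{-1})$. The short symbolic check below is our own illustration; the sign and contraction conventions are the ones fixed inside the code and may differ from those of the paper by an overall sign.

```python
import itertools
import sympy as sp

x = sp.symbols('x1:5')
n = 4

# Invertible antisymmetric bivector on R^4; the x3-dependence makes dB nonzero.
a = 1 + x[2]**2
Pi = sp.Matrix([[0,  a,  0, 0],
                [-a, 0,  0, 0],
                [0,  0,  0, 1],
                [0,  0, -1, 0]])

# 2-form B = Pi^{-1} and its exterior derivative, the closed 3-form H = dB:
# H_{kij} = d_k B_{ij} + d_i B_{jk} + d_j B_{ki}
B = Pi.inv()
def H(k, i, j):
    return sp.diff(B[i, j], x[k]) + sp.diff(B[j, k], x[i]) + sp.diff(B[k, i], x[j])

# Components of (1/2)[Pi,Pi] (Schouten-Nijenhuis bracket of Pi with itself):
def half_schouten(i, j, k):
    return sp.simplify(sum(Pi[i, l] * sp.diff(Pi[j, k], x[l])
                           + Pi[j, l] * sp.diff(Pi[k, i], x[l])
                           + Pi[k, l] * sp.diff(Pi[i, j], x[l]) for l in range(n)))

# Contraction of three Pi's with H, one index of each Pi hitting H:
def contraction(i, j, k):
    return sp.simplify(sum(Pi[i, l] * Pi[j, m] * Pi[k, r] * H(l, m, r)
                           for l in range(n) for m in range(n) for r in range(n)))

# The twisted Poisson condition holds component by component:
for i, j, k in itertools.combinations(range(n), 3):
    assert sp.simplify(half_schouten(i, j, k) - contraction(i, j, k)) == 0

print(half_schouten(3, 0, 1))  # a nonvanishing component: the twist is essential
```

Since $[\Pi,\Pi]\neq 0$ here, the same $\Pi$ fails the untwisted condition, which makes the role of $H_3$ in \eqref{twistedPoisson} explicit.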
After this very brief introduction to twisted Poisson manifolds and their corresponding Lie algebroids, let us move on to the main concept underlying the field theories we consider in this paper, namely twisted R-Poisson manifolds. These are equipped with the additional structure of a fully antisymmetric multivector field $R$ of order $p+1$. This can give rise to a bracket that generalizes the Poisson bracket{\footnote{Note that every multivector field defines a multiderivation on a manifold, namely a multilinear map $C^{\infty}(M)\times\dots\times C^{\infty}(M)\to C^{\infty}(M)$ which is totally antisymmetric and a $C^{\infty}$-derivation in each of the arguments \cite{Pham}. In addition, the space of multiderivations and the space of multivector fields of the same order are in one-to-one correspondence.}} $\{\cdot,\cdot\}$; however, we directly describe the structure in terms of the alternative formulation based on the Schouten-Nijenhuis bracket. Therefore, we consider the quadruple $(M,\Pi,R,H)$ consisting of a smooth manifold $M$ equipped with a bivector $\Pi\in\G(\wedge^{2}TM)$, an antisymmetric multivector $R\in \G(\wedge^{p+1}TM)$ of order $p+1$ and a $(p+2)$-form $H\in\G(\wedge^{p+2}T^{\ast}M)$. This is called a twisted R-Poisson manifold of order $p+1$ when the following conditions hold \cite{Chatzistavrakidis:2021nom} \bea [\Pi,\Pi]&=&0\,, \\[4pt] [\Pi,R]&=&(-1)^{p+1}\langle \otimes^{p+2}\Pi,H\rangle\,, \\[4pt] \mathrm{d} H&=&0\,. \end{eqnarray} For completeness, and in the absence of any other structure, one may include in the definition the requirement that $[R,R]=0$ with respect to the Schouten-Nijenhuis bracket, although this does not appear in the field theoretic incarnation of twisted R-Poisson target spaces. Evidently, for vanishing $H$ this should be called an (ordinary or untwisted) R-Poisson structure.
Moreover, when in addition $R$ is absent, this reduces to a Poisson structure.{\footnote{We note that an extension of the above to what is called a bi-twisted R-Poisson structure is possible in special cases and reduces to a \emph{twisted} Poisson structure in absence of $H$ and $R$. We refer to \cite{Chatzistavrakidis:2021nom} for more details, since we are not dealing with this more general situation in the present paper.}} As already discussed in \cite{Chatzistavrakidis:2021nom}, a twisted R-Poisson structure has a characterization in terms of $Q$-manifolds too. Instead of the degree-shifted cotangent bundle that was associated to the Poisson case, one should now consider the degree-shifted second-order bundle \begin{equation} {\cal M}=T^{\ast}[p]T^{\ast}[1]M\,. \end{equation} In a local patch, this manifold can be described by four types of graded coordinates $(x^{i},a_{i},y^{i},z_{i})$ of degrees $(0,1,p-1,p)$ respectively. The $Q$-structure on this graded manifold is given by the degree 1 vector field \bea Q&=&\Pi^{ji}a_j\partial_{x^{i}}-\frac 12 \partial_i\Pi^{jk}a_ja_k\partial_{a_{i}}+\left((-1)^{p}\Pi^{ji}z_j-\partial_j\Pi^{ik}a_ky^{j}+\frac 1{p!}R^{ij_1\dots j_p}a_{j_1}\dots a_{j_p}\right)\partial_{y^{i}} \, + \nonumber\\[4pt] && +\, \left(\partial_i\Pi^{jk}a_kz_j-\frac {(-1)^{p}}2 \partial_i\partial_j\Pi^{kl}y^{j}a_ka_l+\frac {(-1)^{p}}{(p+1)!} f_{i}^{k_1\dots k_{p+1}}a_{k_1}\dots a_{k_{p+1}}\right)\partial_{z_{i}}\,,\label{Qp+1} \end{eqnarray} where $f_{i}^{k_1\dots k_{p+1}}= \partial_iR^{k_1\dots k_{p+1}}+\prod_{r=1}^{p+1}\Pi^{k_rl_r}H_{il_1\dots l_{p+1}}$. This vector field is cohomological if and only if $(M,\Pi,R,H)$ is a twisted R-Poisson manifold of order $p+1$ \cite{Chatzistavrakidis:2021nom}. In both cases above, namely the twisted Poisson and twisted R-Poisson structures, the graded $Q$-manifold we described possesses a graded symplectic structure P given in terms of a graded symplectic 2-form $\omega$. 
This is evident from the fact that the underlying graded manifold is a cotangent bundle and therefore the graded coordinates form pairs of generalized ``coordinates'' and ``momenta'' as in ordinary Hamiltonian mechanics; such pairs are $(x,\xi)$ in the (twisted) Poisson case and $(x,z)$ and $(a,y)$ in the (twisted) R-Poisson case. The graded symplectic structure is of degree 2 in the first case and of degree $p$ in the second case. One may then ask whether the graded manifold has a QP structure. As mentioned in the introduction, this is true if and only if the twist $H$ vanishes. Then the graded symplectic 2-form is indeed $Q$-invariant, namely its Lie derivative along the vector field $Q$ vanishes. However, the presence of $H$ introduces an obstruction to this invariance, as explained in detail in \cite{Ikeda:2019czt} and \cite{Chatzistavrakidis:2021nom} respectively for each of the two cases, and a genuine QP structure does not exist. \subsection{Twisted R-Poisson sigma models} \label{sec22} Given a twisted R-Poisson structure of order $p+1$, there exists a topological field theory in $p+1$ dimensions with target space the corresponding twisted R-Poisson manifold $M$ \cite{Chatzistavrakidis:2021nom}.
The fields of the theory are of four different types, specifically (a) a set of scalar fields $X^{i}$, $i=1,\dots, \text{dim}\,M$, which are identified with the components of a sigma model map $X: \Sigma_{p+1} \to M$, where $\Sigma_{p+1}$ is the $(p+1)$-dimensional spacetime where the theory is defined (the world volume; in the few instances when we consider a local coordinate system on it, we refer to its coordinates as $\sigma^{\mu}$ with $\mu=0,\dots, p$), (b) world volume 1-forms $A_{i}=A_{i\mu}(\sigma)\,\mathrm{d} \sigma^{\mu}$ taking values in the pull-back bundle $X^{\ast}T^{\ast}M$, (c) world volume $(p-1)$-forms $Y^{i}$ taking values in the pull-back bundle $X^{\ast}TM$ and (d) world volume $p$-forms $Z_i$ taking values in the pull-back bundle $X^{\ast}T^{\ast}M$. Summarizing, the field content of the theory is \begin{equation} (X^{i},A_i,Y^{i},Z_i) \quad \text{of form degrees} \quad (0,1,p-1,p)\,. \end{equation} With the above field content, one can write down a general action functional in $p+1$ dimensions with $p\ge 1$, which has the form of a topological sigma model, specifically \bea \label{Sp+1} S^{(p+1)}&=&\int_{\Sigma_{p+1}}\left(Z_i\wedge \mathrm{d} X^{i}- A_i\wedge\mathrm{d} Y^{i}+\Pi^{ij}(X)Z_i\wedge A_j -\frac 12 \partial_k\Pi^{ij}(X)Y^{k}\wedge A_i\wedge A_j \,+\right. \nonumber \\[4pt] &&\qquad \,\,\,\,\,\,\,\left. +\,\frac 1{(p+1)!}R^{i_1\dots i_{p+1}}(X)A_{i_1}\wedge \dots \wedge A_{i_{p+1}}\right)+\int_{\Sigma_{p+2}}X^{\ast}H\,, \end{eqnarray} where the last term is a Wess-Zumino term, obtained as the pull-back of the $(p+2)$-form $H$ on $M$, \begin{equation} X^{\ast}H=\frac 1{(p+2)!}H_{i_1\dots i_{p+2}}(X)\,\mathrm{d} X^{i_1}\wedge\dots \wedge\mathrm{d} X^{i_{p+2}} \end{equation} and supported on an open $(p+2)$-brane $\Sigma_{p+2}$ whose boundary is $\Sigma_{p+1}$.
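For orientation, and since the explicit BV analysis later in the paper is carried out in 3D, it is useful to record the $p=2$ case of \eqref{Sp+1}, obtained by mechanically setting $p=2$ in the formula above; all fields then live on a 3-dimensional world volume, $A_i$ and $Y^{i}$ are 1-forms, $Z_i$ are 2-forms and $H$ is a 4-form:

```latex
S^{(3)}=\int_{\Sigma_{3}}\left(Z_i\wedge \mathrm{d} X^{i}-A_i\wedge \mathrm{d} Y^{i}
+\Pi^{ij}(X)\,Z_i\wedge A_j-\frac 12\,\partial_k\Pi^{ij}(X)\,Y^{k}\wedge A_i\wedge A_j
+\frac 1{3!}\,R^{ijk}(X)\,A_i\wedge A_j\wedge A_k\right)+\int_{\Sigma_{4}}X^{\ast}H\,.
```

This is the R-Poisson version of the Courant sigma model mentioned in the introduction.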
The $(p+2)$-form $H$ is further assumed to be closed, $\mathrm{d} H=0$, so that its variation reduces to a boundary term and it contributes to the field equations only through the map $X$ and not through the extension of $X$ that is needed to define the higher-dimensional term in \eqref{Sp+1}. As usual, the quantum theory is well-defined provided that the homology class $[X(\Sigma_{p+1})] \in H_{p+1}(M)$ vanishes and that $H$ defines an integer cohomology class \cite{Figueroa-OFarrill:2005vws}. Although the action functional \eqref{Sp+1} is written on a local patch of the target space $M$, it can be naturally defined globally once the relevant target space connections are introduced. Then the apparently non-tensorial coefficient $\partial_k\Pi^{ij}$ is completed to a tensor and the full theory can be written without using a local coordinate system on the target. Since this is not necessary for the purposes of the present paper, we refer to \cite{Chatzistavrakidis:2021nom}, where a complete discussion of the covariant formulation appears.
It was shown in \cite{Chatzistavrakidis:2021nom} that, provided $(M,\Pi,R,H)$ is a twisted R-Poisson manifold of order $p+1$, the theory given by \eqref{Sp+1} is invariant under the following set of gauge transformations: \bea \label{gt1} \d X^{i}&=&\Pi^{ji}\epsilon_{j}\,,\\[4pt] \label{gt2} \d A_i&=&\mathrm{d}\epsilon_i+\partial_i\Pi^{jk}A_j\epsilon_k \,,\\[4pt] \d Y^{i}&=&(-1)^{p-1}\mathrm{d}\chi^{i}+\Pi^{ji}\, \psi_j -\partial_j\Pi^{ik}\left(\chi^{j}A_k+Y^{j}\epsilon_k\right)+\frac 1{(p-1)!}R^{iji_1\dots i_{p-1}}A_{i_1}\dots A_{i_{p-1}}\epsilon_j\,,\nonumber\\ \label{gt3}\\[4pt] \d Z_i&=&(-1)^{p}\mathrm{d}\psi_i+\partial_i\Pi^{jk}\left(Z_j\epsilon_k+\psi_jA_k\right) -\partial_i\partial_j\Pi^{kl}\left(Y^jA_k\epsilon_l-\frac 12 \, A_kA_l\chi^{j}\right) \, + \nonumber\\ &&\qquad \qquad \quad + \, \frac {(-1)^p}{p!} \, \partial_iR^{ji_1\dots i_{p}}A_{i_1}\dots A_{i_{p}}\epsilon_j-\frac 1{(p+1)!}\Pi^{kj}H_{ijl_1\dots l_p}\Omega^{l_1\dots l_p}\epsilon_k \,, \label{gt4} \end{eqnarray} where wedge products between differential forms are implicit. There are three gauge parameters $(\epsilon_i,\chi^{i},\psi_i)$ of form degrees $(0,p-2,p-1)$ respectively. The gauge transformation of the scalar fields is controlled by the Poisson structure $\Pi$. Notably, the components of the $(p+2)$-form $H$ appear only in the last term of the transformation of the highest-degree form field $Z_i$. They are combined with the world volume $p$-form $\Omega^{l_1\dots l_p}$ defined as \bea \label{Omega} \Omega^{l_1\dots l_p}=\sum_{r=1}^{p+1}(-1)^{r}\prod_{s=1}^{r-1}\mathrm{d} X^{l_{s}}\prod_{t=r}^{p}\Pi^{l_tm_t}A_{m_t}\,, \end{eqnarray} which contains all possible combinations of $\mathrm{d} X$ and $\Pi(A)$ that yield a $p$-form. This term is essentially tailor-made to cancel the contribution of the Wess-Zumino term to the gauge variation of $S^{(p+1)}$.
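To make the definition \eqref{Omega} concrete, consider $p=2$, where the sum contains three terms, $r=1,2,3$ (wedge products, implicit in the paper's notation, are displayed here):

```latex
\Omega^{l_1 l_2}=-\,\Pi^{l_1 m_1}A_{m_1}\wedge \Pi^{l_2 m_2}A_{m_2}
+\mathrm{d} X^{l_1}\wedge \Pi^{l_2 m_2}A_{m_2}
-\mathrm{d} X^{l_1}\wedge \mathrm{d} X^{l_2}\,,
```

so each term is indeed a 2-form built from $\mathrm{d} X$ and $\Pi(A)$.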
The field equations obtained from the action of the twisted R-Poisson sigma model read \bea \label{eom1} F^{i}&:=&\mathrm{d} X^{i}+\Pi^{ij}A_j=0\,, \\[4pt] \label{eom2} G_{i}&:=&\mathrm{d} A_i+\frac 12 \partial_i\Pi^{jk}A_j\wedge A_k=0\,, \\[4pt] \label{eom3} {\cal F}^{i}&:=&\mathrm{d} Y^{i}+(-1)^{p}\Pi^{ij}Z_j+\partial_k\Pi^{ij}A_j\wedge Y^{k}-\frac 1{p!}R^{ij_1\dots j_p}A_{j_1}\wedge\dots\wedge A_{j_{p}}=0\,, \\[4pt] {\cal G}_{i}&:=&(-1)^{p+1}\mathrm{d} Z_i+\partial_i\Pi^{jk}\,Z_j\wedge A_k-\frac 12 \partial_i\partial_j\Pi^{kl}\,Y^{j}\wedge A_k\wedge A_l\, +\nonumber\\[4pt] && \,+ \, \frac 1{(p+1)!}\partial_iR^{j_1\dots j_{p+1}}A_{j_1}\wedge\dots\wedge A_{j_{p+1}}+\frac 1{(p+1)!}H_{ij_1\dots j_{p+1}}\mathrm{d} X^{j_1}\wedge\dots\wedge \mathrm{d} X^{j_{p+1}}=0\,. \nonumber\\ \label{eom4} \end{eqnarray} Using the first field equation \eqref{eom1}, i.e. the one obtained by varying the top form $Z_i$, the gauge transformation rule of $Z_i$ can be rewritten in an equivalent and more useful form as \bea \d Z_i&=&(-1)^{p}\mathrm{d}\psi_i+\partial_i\Pi^{jk}\left(Z_j\epsilon_k+\psi_jA_k\right) -\partial_i\partial_j\Pi^{kl}\left(Y^jA_k\epsilon_l-\frac 12 \, A_kA_l\chi^{j}\right) \, + \nonumber\\ &+& \, \frac {1}{p!} \, f_i^{i_1\dots i_{p}j}A_{i_1}\dots A_{i_{p}}\epsilon_j \, - \nonumber\\[4pt] &-&\frac 1{(p+1)!}\Pi^{kj}H_{ijl_1\dots l_p}\sum_{r=1}^{p}(-1)^{r+1}\binom{p+1}{r+1}\prod_{s=1}^{r} F^{l_{s}}\prod_{t=r+1}^{p}\Pi^{l_tm_t}A_{m_t}\epsilon_k \,, \end{eqnarray} where we have defined \begin{equation} \label{fdef} f_{i}^{i_1\dots i_{p+1}}:=\partial_iR^{i_1\dots i_{p+1}}+H_{i}{}^{i_1\dots i_{p+1}}\,, \end{equation} and we introduced the short-hand notation of raising the indices of the components of the $(p+2)$-form $H$ via the bivector $\Pi$, specifically \begin{equation} H_{i}{}^{i_1\dots i_{p+1}}=\Pi^{i_1l_1}\dots \Pi^{i_{p+1}l_{p+1}}H_{il_1\dots l_{p+1}}\,, \end{equation} and accordingly for other index structures.
We can now readily observe that the transformation of $Z_i$ contains the field strength of the scalar fields $X^{i}$, which is the field equation of $Z_i$. Remarkably, although for the 2D model ($p=1$) this transformation contains $F^{i}$ only linearly and not multiplied by the field $A_i$, this ceases to be true in every dimension higher than two. For example, in 3D one finds that{\footnote{In this paper, we use the subset symbol $\supset$ to mean that the right-hand side appears in the full expression of the left-hand side along with other terms that are not shown. It will mostly be used to provide heuristic explanations that clarify the often complicated structure of the quantities we compute. }} \begin{equation} \d Z_i\supset \frac 1 {2} H_{il}{}^{mk}F^{l}A_{m}\epsilon_{k}+\frac 1 {3!}H_{ilm}{}^{k}F^{l}F^{m}\epsilon_{k}\,. \end{equation} Thus both a product of $F^{i}$ with $A_i$ and a term quadratic in the field equations appear. Clearly the situation becomes even more non-linear in higher dimensions. This general feature of this class of theories is unusual, and it reproduces itself in the closure of the gauge algebra and in the square of the BRST operator that we will encounter in the next section. Although in gauge theories we are used to gauge algebras that close only on-shell or BRST operators that are nilpotent only on-shell, we are not aware of particular examples where products of field equations appear. This should not be discouraging, however, since the general statements of on-shell closure and on-shell nilpotency are still valid. Therefore one expects that these features can still be treated within the BV/BRST formalism, and we show in the next sections that this is indeed the case.
\section{BRST/BV formalism for R-Poisson sigma models} \label{sec3} \subsection{Ghosts and the BRST operator} \label{sec31} Having reviewed the classical action functional, the gauge transformations and the field equations of the theory, the next step is to prepare the theory for quantization. Therefore, we are interested in determining the classical BV action, which is the solution to the classical master equation. We recall that the BV extension is necessary when the theory is reducible as a constrained Hamiltonian system and when the gauge algebra closes only on-shell or the BRST operator is nilpotent only on-shell. Both these features are present in the class of theories we study, as we now describe in more detail. The first step toward quantization is to construct the classical basis of fields and ghosts. The ghosts correspond to the gauge parameters of the theory, promoted to fields of ghost number 1. To avoid introducing too much new notation, we denote the ghosts by the same letters as the gauge parameters. Thus the degree-1 ghosts are $(\epsilon_{i},\chi^{i},\psi_{i})$. However, the theory contains differential forms of form degree greater than 1 and therefore there necessarily exist gauge transformations that are not independent. This means that the theory is highly reducible as a constrained Hamiltonian system and we must introduce additional ghosts for ghosts that take care of this redundancy. Indeed, the ghosts for the higher differential forms $Y^{i}$ and $Z_{i}$ are $\chi^{i}$ and $\psi_{i}$ of form degree $p-2$ and $p-1$ respectively. Since these are differential forms themselves, we must include in the theory fields of ghost degree 2, say $\chi^{i}_{(1)}$ and $\psi_{i}^{(1)}$ of differential form degree $p-3$ and $p-2$. This process continues until we reach the top ghosts for ghosts for each of the $\chi$ and $\psi$ series, which are spacetime scalars.
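The counting implied by this tower can be made explicit with a few lines of code (our own illustration; the names are ad hoc): for world volume dimension $p+1$ one obtains the four classical fields, the ghost $\epsilon_i$, the $p-1$ ghosts of the $\chi$-series and the $p$ ghosts of the $\psi$-series.

```python
def classical_basis(p):
    """(name, ghost degree, form degree) of fields and ghosts for
    the twisted R-Poisson sigma model in p+1 dimensions (p >= 2)."""
    basis = [("X", 0, 0), ("A", 0, 1), ("Y", 0, p - 1), ("Z", 0, p),
             ("eps", 1, 0)]
    # chi-series: chi_(r) has ghost degree r+1 and form degree p-2-r
    basis += [("chi_(%d)" % r, r + 1, p - 2 - r) for r in range(p - 1)]
    # psi-series: psi^(r) has ghost degree r+1 and form degree p-1-r
    basis += [("psi^(%d)" % r, r + 1, p - 1 - r) for r in range(p)]
    return basis

for p in range(2, 8):
    b = classical_basis(p)
    assert len(b) == 2 * p + 4               # total count of the classical basis
    assert all(fd >= 0 for _, _, fd in b)    # no negative form degrees

print(len(classical_basis(2)))  # 8: the field content of the 3D Courant sigma model
```

The degrees used here are the ones collected in Table \ref{table1}.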
Thus we find that the classical basis contains the fields \begin{equation} \label{basis} (X^{i}, A_{i}, Y^{i}, Z_{i}, \epsilon_{i}, \chi^{i}_{(r)}, \psi_{i}^{(r)})\,, \end{equation} where the counter $r$ takes values from 0 to $p-2$ for the $\chi$-series $\chi^{i}_{(r)}$ and from 0 to $p-1$ for the $\psi$-series $\psi_{i}^{(r)}$.{\footnote{This discrepancy in the range is irrelevant; one could just state that the upper value is $p-1$ and the ghost $\chi^{i}_{(p-1)}$ does not exist since otherwise it would have negative form degree.}} Thus the classical basis contains a total of $2p+4$ fields of diverse ghost and form degree. At this stage, this is in accordance with the AKSZ construction; for example in the 3D case where $p=2$ we find 8 fields (4 ordinary fields, 3 ghosts and 1 ghost for ghost) as expected for the Courant sigma model \cite{Ikeda:2000yq,Ikeda:2002wh,Hofman:2002jz,Roytenberg:2006qz}. We collect these fields along with their ghost and differential form degrees in Table \ref{table1}. \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline \multirow{3}{5.2em}{Field/Ghost} &&&&&&& \\ & $X^{i}$ & $A_i$ & $Y^i$ & $Z_{i}$ & $\epsilon_i$ & $\chi^{i}_{(r)}$ & $\psi_i^{(r)}$ \\ &&&&&&& \\ \hhline{|=|=|=|=|=|=|=|=|} \multirow{3}{6.0em}{Ghost degree} &&&&&&& \\ & $0$ & $0$ & $0$ & $0$ & $1$ & $r+1$ & $r+1$ \\ &&&&&&& \\\hline \multirow{3}{5.5em}{Form degree} &&&&&&& \\ & $0$ & $1$ & $p-1$ & $p$ & $0$ & $p-2-r$ & $p-1-r$ \\ &&&&&&& \\\hline \end{tabular}\end{center}\caption{The fields and ghosts of the twisted R-Poisson sigma model in $p+1$ dimensions. The range of $r$ is $r=0,\dots,p-1$ and we make the identifications $\chi^{i}_{(0)}\equiv \chi^{i}$, $\psi^{(0)}_i\equiv \psi_i$, so that we use a collective notation for the $p-1$ ghosts $\chi$ and the $p$ ghosts $\psi$. Obviously, $\chi_{(p-1)}^{i}=0$, since this ghost does not exist. 
}\label{table1}\end{table} Since all fields of the theory are bi-graded, having one grading $\textrm{fd}(\cdot)$ as differential forms and one grading $\textrm{gh}(\cdot)$ as ghosts, we must choose a sign convention for their commutation. Hereafter, for any two fields (including, later on, antifields) $\varphi_1$ and $\varphi_2$ we use the convention \begin{equation} \varphi_1\varphi_2=(-1)^{\textrm{gh}(\varphi_1)\textrm{gh}(\varphi_2)+\textrm{fd}(\varphi_1)\textrm{fd}(\varphi_2)}\varphi_2\varphi_1\,. \end{equation} Next we define the BRST operator on the fields and ghosts, denoted $s_0$, which raises the ghost degree by 1. Its action on the fields is simply the gauge transformation rule appearing in \eqref{gt1}-\eqref{gt4} with the gauge parameters replaced by the corresponding ghosts. Since we use the same notation for ghosts, we do not repeat these expressions here. The BRST operator should be nilpotent on-shell, \begin{equation} s_0^2 \,(\cdot) \overset{!}\approx 0\,, \end{equation} where $(\cdot)$ is a placeholder for any field or ghost and $\approx$ denotes that the field equations of the theory have been taken into account, in other words that the square of the BRST operator is proportional to equations of motion. This requirement fixes the BRST transformation of the ghost fields. In particular, observing that \begin{equation} s_0^2X^{i}= \Pi^{lk}\partial_k\Pi^{ji}\epsilon_l\epsilon_j+\Pi^{ji}s_{0}\epsilon_{j}\,, \end{equation} and since $X^{i}$ is a scalar and we can require that the BRST transformations of $X^i$ and $\epsilon_i$ do not contain field equations, we directly observe that due to $\Pi$ being a Poisson bivector the BRST transformation of the ghost $\epsilon_i$ is completely fixed to be \begin{equation} s_0\epsilon_i=-\frac 12 \partial_i\Pi^{jk}\epsilon_j\epsilon_k\,.
\end{equation} One may then check that $s_0^2\epsilon_i=0$, as it should. Knowing the BRST transformation of $\epsilon_i$ allows us to compute the square of the BRST operator on $A_i$ and find \begin{equation} \label{s02A} s_0^2A_i=-\frac 12 \partial_i\partial_l\Pi^{jk}F^{l}\epsilon_j\epsilon_k\,. \end{equation} We observe that it is proportional to the field equation for $Z_i$ and thus it vanishes only on-shell. This already dictates that the BV formalism must be used. Following this logic for the rest of the fields leads to the BRST transformations of the ghosts in the $\chi$ and $\psi$ series. They all follow the same pattern and therefore they can be presented collectively as \bea s_0\chi^{i}_{(r)}&=&\mathrm{d}\chi^i_{(r+1)}+\partial_k\Pi^{ij}\left(A_j\chi^k_{(r+1)}-\epsilon_j\chi^k_{(r)}\right)-(-1)^{p+r}\Pi^{ij}\psi_j^{(r+1)}-\nonumber\\[4pt] && -\,\frac{\beta_{(r)}}{(r+2)!(p-r-2)!}R^{ij_1\ldots j_{r+2}k_1\ldots k_{p-r-2}}\epsilon_{j_1}\ldots\epsilon_{j_{r+2}} A_{k_1}\ldots A_{k_{p-r-2}}\,,\label{s0chi}\\[4pt] s_0\psi_i^{(r)} &=& \mathrm{d}\psi_i^{(r+1)}+\partial_i\Pi^{jk}\left(A_j\psi_k^{(r+1)}-\epsilon_j\psi_k^{(r)}\right)+\nonumber\\ && +(-1)^{p+r}\partial_i\partial_l\Pi^{jk}\left(\frac{1}{2}\epsilon_j\epsilon_k\chi^l_{(r-1)}-\epsilon_j A_k\chi^l_{(r)}-\frac{1}{2}A_j A_k \chi^l_{(r+1)}\right)\,+\nonumber\\[4pt] &&+\frac{(-1)^p\beta_{(r)}}{(r+2)!(p-r-1)!}f_i^{j_1\ldots j_{r+2}k_1\ldots k_{p-r-1}} \epsilon_{j_1}\ldots\epsilon_{j_{r+2}} A_{k_1}\ldots A_{k_{p-r-1}}+\nonumber\\[4pt] &&+ \sum_{s=1}^{p-r-1}\frac{(-1)^{p(s+1)}\beta_{(r)}}{(s+1)!(r+2)!(p-r-s-1)!}\tensor{H}{_{il_1\ldots l_s}^{j_1\ldots j_{r+2}k_1\ldots k_{p-r-s-1}}}\times\nonumber\\[4pt] && \qquad\qquad\times\epsilon_{j_1}\ldots\epsilon_{j_{r+2}}A_{k_1}\ldots A_{k_{p-r-s-1}} F^{l_1}\ldots F^{l_s}\,, \label{s0psi} \end{eqnarray} where \begin{equation} \beta_{(r)}=(-1)^{p+r(r+1)/2}\,.
\end{equation} A useful remark is that the fields $Y^{i}$ and $Z_i$ may be seen as the ``$-1$'' elements in the $\chi$ and $\psi$ series, as confirmed by inspection of the degrees in Table \ref{table1}. Indeed, if we identify \begin{equation} \chi^i_{(-1)}:=(-1)^{p+1}Y^i \qquad\text{and}\qquad \psi_i^{(-1)}:=(-1)^pZ_i\,,\label{ids1} \end{equation} then the general formulas \eqref{s0chi} and \eqref{s0psi} are identical to the BRST transformations of $Y^{i}$ and $Z_{i}$ for $r=-1$, given in \eqref{gt3} and \eqref{gt4} with the gauge parameters replaced by the corresponding ghosts. This includes the term in $s_0Z_i$ that contains field equations explicitly. Note that none of the ghosts in the $\chi$-series contains explicit equation-of-motion terms in its BRST transformation, whereas all ghosts in the $\psi$-series do, save for the top one, which is in any case a scalar. The advantage of this identification is that once we compute the square of the BRST operator on the ghosts, the result for the fields $Y^{i}$ and $Z_i$ simply follows.
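As a quick cross-check of these identifications (a sketch in our own notation), extending the degree assignments of Table 1 to $r=-1$ indeed reproduces the degrees of $Y^{i}$ and $Z_i$:

```python
# Check (conventions from Table 1) that the chi/psi degree formulas at
# r = -1 reproduce the ghost and form degrees of Y^i and Z_i, which is
# what makes the identifications above consistent.

def chi_degrees(p, r):
    return (r + 1, p - 2 - r)   # (gh, fd) of chi^i_(r)

def psi_degrees(p, r):
    return (r + 1, p - 1 - r)   # (gh, fd) of psi_i^(r)

for p in range(2, 7):
    assert chi_degrees(p, -1) == (0, p - 1)   # the degrees of Y^i
    assert psi_degrees(p, -1) == (0, p)       # the degrees of Z_i
```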
A straightforward calculation leads to the results \bea s_0^2\chi^i_{(r)} &=& -\frac{\beta_{(r+1)}}{(r+3)!(p-r-4)!}R^{i l j_1\ldots j_{r+3}k_1\ldots k_{p-r-4}}\epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-4}} G_l-\nonumber\\[4pt] &&-\,\frac{(-1)^p\beta_{(r+1)}}{(r+3)!(p-r-3)!}\partial_l R^{i j_1 \ldots j_{r+3} k_1 \ldots k_{p-r-3}}\epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-3}} F^l+\nonumber\\[4pt] &&+\sum_{s=1}^{p-r-2}\frac{(-1)^{(p+1)s}\beta_{(r)}}{(s+1)!(r+3)!(p-r-s-2)!}\tensor{H}{_{l_1\ldots l_s}^{ij_1\ldots j_{r+3}k_1\ldots k_{p-r-s-2}}}\times\nonumber\\[4pt] &&\qquad\qquad\times \epsilon_{j_1}\ldots \epsilon_{j_{r+3}} A_{k_1}\ldots A_{k_{p-r-s-2}}F^{l_1}\ldots F^{l_s}+\nonumber\\[4pt] &&+\,\partial_k\partial_l\Pi^{ij}F^k\left(A_j\chi^l_{(r+2)}-\epsilon_j \chi^l_{(r+1)}\right)+\partial_k\Pi^{ij}\left(G_j\chi^k_{(r+2)}-\psi_j^{(r+2)} F^k\right)\,,\label{s02chir} \end{eqnarray} for the $\chi$-series of ghosts, and \bea s_0^2\psi_i^{(r)} &=& \frac{(-1)^{p}\beta_{(r)}}{(r+3)!(p-r-3)!}\partial_i R^{lj_1\ldots j_{r+3}k_1\ldots k_{p-r-3}}\epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-3}} G_l-\nonumber\\[4pt] && -\,\frac{\beta_{(r)}}{(r+3)!(p-r-2)!}\partial_l\partial_i R^{j_1\ldots j_{r+3}k_1\ldots k_{p-r-2}}\epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-2}} F^l+\nonumber\\[4pt] &&+\sum_{s=0}^{p-r-3}\frac{(-1)^{p(s+1)}\beta_{(r)}}{(s+2)!(r+3)!(p-r-s-3)!}\tensor{H}{_{il_1\ldots l_s}^{mj_1\ldots j_{r+3}k_1\ldots k_{p-r-s-3}}}\times\nonumber\\[4pt] &&\qquad\qquad\times G_m\epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-s-3}}F^{l_1}\ldots F^{l_s}+\nonumber\\[4pt] &&+2\sum_{s=1}^{p-r-1}\frac{(-1)^{p(s+1)+s}\beta_{(r)}}{(s+1)!(r+3)!(p-r-s-1)!}\partial_{(i}\tensor{H}{_{l_1)l_2\ldots l_s}^{j_1\ldots j_{r+3}k_1\ldots k_{p-r-s-1}}}\times\nonumber\\[4pt] &&\qquad\qquad\times \epsilon_{j_1}\ldots\epsilon_{j_{r+3}}A_{k_1}\ldots A_{k_{p-r-s-1}}F^{l_1}\ldots F^{l_s}+\nonumber\\[4pt] &&+\,\partial_i\Pi^{jk} 
G_j \psi_k^{(r+2)}+\partial_i\partial_j\Pi^{kl}F^j\left(A_k\psi_l^{(r+2)}-\epsilon_k\psi_l^{(r+1)}\right)+\nonumber\\[4pt] &&+\,(-1)^{p+r}\partial_i\partial_l\Pi^{jk}G_j\left(A_k\chi^l_{(r+2)}-\epsilon_k\chi^l_{(r+1)}\right)+\nonumber\\[4pt] &&+\,(-1)^{p+r}\partial_i \partial_j\partial_m \Pi^{kl} F^j\left(\frac{1}{2}A_k A_l\chi^m_{(r+2)}+\epsilon_k A_l\chi^m_{(r+1)}-\frac{1}{2}\epsilon_k\epsilon_l \chi^m_{(r)}\right)\,, \label{s02psir} \end{eqnarray} for the $\psi$-series of ghosts. We observe that in both cases the field equations of $Y^{i}$ and $Z_{i}$ appear in all terms on the right-hand side. Moreover, according to the discussion above, the square of the BRST operator on $Y^{i}$ is found to be \bea s_0^2Y^i &=& \frac{1}{2(p-3)!}R^{i l j_1 j_2 k_1\ldots k_{p-3}}\epsilon_{j_1}\epsilon_{j_2}A_{k_1}\ldots A_{k_{p-3}} G_l +(-1)^p\partial_k\Pi^{ij}\left(\psi_j^{(1)} F^k-G_j\chi^k_{(1)}\right)+\nonumber\\[4pt] &&+\,\frac{(-1)^p}{2(p-2)!}\partial_l R^{i j_1 j_2 k_1 \ldots k_{p-2}}\epsilon_{j_1}\epsilon_{j_2}A_{k_1}\ldots A_{k_{p-2}} F^l+(-1)^p\partial_k\partial_l\Pi^{ij}F^k\left(\epsilon_j \chi^l-A_j\chi^l_{(1)}\right)-\nonumber\\[4pt] &&-\sum_{s=1}^{p-1}\frac{(-1)^{(p+1)s}}{2(s+1)!(p-s-1)!}\tensor{H}{_{l_1\ldots l_s}^{ij_1 j_2 k_1\ldots k_{p-s-1}}}\epsilon_{j_1}\epsilon_{j_2}A_{k_1}\ldots A_{k_{p-s-1}}F^{l_1}\ldots F^{l_s}\,.
\label{s0Y} \end{eqnarray} For the corresponding expression of $Z_i$, which could alternatively be calculated directly from \eqref{gt4}, we find \bea s_0^2 Z_i &=& \frac{(-1)^p}{2(p-2)!} \partial_i R^{lj_1 j_2 k_1\ldots k_{p-2}}\epsilon_{j_1}\epsilon_{j_2}A_{k_1}\ldots A_{k_{p-2}} G_l+(-1)^p\partial_i\Pi^{jk} G_j \psi_k^{(1)}-\nonumber\\[4pt] &-&\frac{1}{2(p-1)!}\partial_l \partial_i R^{j_1 j_2 k_1\ldots k_{p-1}}\epsilon_{j_1}\epsilon_{j_2}A_{k_1}\ldots A_{k_{p-1}} F^l+(-1)^p\partial_j\partial_i\Pi^{kl}F^j\left(A_k\psi_l^{(1)}-\epsilon_k\psi_l\right)+\nonumber\\[4pt] &+&\partial_i\partial_l\Pi^{jk}G_j\left(\epsilon_k \chi^l - A_k\chi^l_{(1)}\right) + \frac{(-1)^{p}}{2}\partial_i\partial_l\Pi^{jk}\epsilon_j\epsilon_k{\cal F}^{l} \, -\nonumber\\[4pt] &-&\partial_i\partial_j\partial_m\Pi^{kl} F^j\left(\frac{1}{2}A_k A_l\chi^m_{(1)}+\epsilon_k A_l\chi^m+\frac{(-1)^p}{2}\epsilon_k\epsilon_l Y^m\right)\, + \nonumber\\[4pt] &+& \sum_{s=0}^{p-2}\frac{(-1)^{p(s+1)}}{2(s+2)!(p-s-2)!}\tensor{H}{_{il_1\ldots l_s}^{mj_1j_2k_1\ldots k_{p-s-2}}}G_m\epsilon_{j_1}\epsilon_{j_2} A_{k_1}\ldots A_{k_{p-s-2}}F^{l_1}\ldots F^{l_s}+\nonumber\\[4pt] &+&\sum_{s=1}^p\frac{(-1)^{p(s+1)+s}}{(s+1)!(p-s)!}\partial_{(i}\tensor{H}{_{l_1)l_2\ldots l_s}^{j_1j_2k_1\ldots k_{p-s}}}\epsilon_{j_1}\epsilon_{j_2} A_{k_1}\ldots A_{k_{p-s}}F^{l_1}\ldots F^{l_s}\,. \label{s0Z} \end{eqnarray} This completes the calculation of the square of the BRST operator on all fields. Since in most cases it does not vanish off-shell, the BV formalism is necessary to solve the classical master equation. Nevertheless, before proceeding with the BV formalism, it is worth listing the fields and ghosts on which the BRST operator is already nilpotent off-shell. First, we saw that this is the case for $X^{i}$ and $\epsilon_i$. There exist, however, two more ghosts with this property.
These are the top ghosts in each of the $\chi$ and $\psi$ series, namely $\chi^{i}_{(p-2)}$ and $\psi_{i}^{(p-1)}$, both being spacetime scalars. The general formulas yield \bea s_0\chi^{i}_{(p-2)}&=&-\partial_k\Pi^{ij}\epsilon_j\chi^{k}_{(p-2)}-\Pi^{ij}\psi_{j}^{(p-1)}-\frac{\beta_{(p-2)}}{p!}R^{ij_1\dots j_p}\epsilon_{j_1}\dots \epsilon_{j_p}\,, \\[4pt] s_0\psi_i^{(p-1)}&=&-\partial_{i}\Pi^{jk}\epsilon_j\psi_k^{(p-1)}-\frac 12\partial_i\partial_l\Pi^{jk}\epsilon_j\epsilon_k\chi^{l}_{(p-2)}+\frac{(-1)^p\beta_{(p-1)}}{(p+1)!}f_{i}^{j_1\dots j_{p+1}}\epsilon_{j_1}\dots\epsilon_{j_{p+1}}\,. \quad\quad \,\,\, \end{eqnarray} Either by direct computation or simply by inspection of the results \eqref{s02chir} and \eqref{s02psir}, we find \begin{equation} s_0^2\chi^{i}_{(p-2)}=0=s_0^2\psi_i^{(p-1)}\,. \end{equation} We conclude that only $4$ of the $2p+4$ fields in \eqref{basis}, naturally the four scalars, have a nilpotent BRST operator acting on them. Therefore, for these fields there is no need to modify this operator; in other words, the BRST and the BV operator are identical for them. Thus we denote \begin{equation} \label{bvisbrst} s X^{i}:=s_0 X^{i}\,, \quad s\epsilon_i:=s_0\epsilon_i\,, \quad s\chi^{i}_{(p-2)}:=s_0\chi^{i}_{(p-2)}\,, \quad s\psi_i^{(p-1)}:=s_0\psi_{i}^{(p-1)}\,, \end{equation} and $s^2$ vanishes on these fields. \subsection{Antifields and the untwisted BV operator} \label{sec32} To pave the way towards determining the solution of the classical master equation of a twisted R-Poisson sigma model, we could follow one of two equivalent routes. The first step is common to both and amounts to enlarging the space of fields and ghosts by inclusion of the corresponding antifields and antighosts.
For any field $\varphi$ we denote them as $\varphi_{+}$ (or $\varphi^{+}$, depending on the index position). These are fields such that \bea \textrm{gh}(\varphi)+\textrm{gh}(\varphi_{+})&=&-1\,, \\[4pt] \textrm{fd}(\varphi)+\textrm{fd}(\varphi_+)&=&p+1\,. \end{eqnarray} The full set of $2p+4$ antifields and antighosts with their degrees appears in Table \ref{table2}. In total the fields and antifields are $4(p+2)$ in number, a multiple of 4. This is to be expected, since without the Wess-Zumino term one could have used the AKSZ construction with source space the graded manifold $T[1]\Sigma$ and would have constructed 4 superfields containing the sum of all fields and antifields of total degree (the sum of ghost and form degrees) $0, 1, p-1, p$ respectively. In particular, the superfield $\mathbf{X}^{i}$ of total degree 0 would contain $(X^{i},Z_{+}^{i},\psi_{+}^{i}{}_{(r)})$, the total degree-1 superfield $\mathbf{A}_{i}$ would contain $(A_i,\epsilon_i,Y^{+}_{i},\chi^{+}_{i}{}^{(r)})$, the total degree-$(p-1)$ superfield $\mathbf{Y}^{i}$ would contain $(Y^{i},\chi^{i}_{(r)},A_{+}^{i},\epsilon^{i}_{+})$ and the total degree-$p$ superfield $\mathbf{Z}_{i}$ would contain $(Z_i,\psi_i^{(r)},X^{+}_{i})$. The BV action would then be of the same form as the classical action but with superfields instead of fields. The Wess-Zumino term, given by the pull-back of the $(p+2)$-form $H$, is the sole reason that this would not be sufficient to determine the correct BV action.
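The degree relations just stated can be checked mechanically. The following sketch (our own bookkeeping, not the paper's) verifies the antifield degree pairings, the count $4(p+2)$, and that every total degree falls into the four superfield degrees $0,1,p-1,p$:

```python
# Bookkeeping check (ours): verify gh + gh_+ = -1 and fd + fd_+ = p + 1,
# the count of 4(p+2) fields plus antifields, and that all total degrees
# gh + fd lie in {0, 1, p-1, p}, matching the four would-be AKSZ superfields.

def fields(p):
    """(ghost degree, form degree) of the classical basis, as in Table 1."""
    basis = [(0, 0), (0, 1), (0, p - 1), (0, p), (1, 0)]  # X, A, Y, Z, eps
    basis += [(r + 1, p - 2 - r) for r in range(p - 1)]   # chi_(r)
    basis += [(r + 1, p - 1 - r) for r in range(p)]       # psi^(r)
    return basis

def antifields(p):
    return [(-1 - g, p + 1 - f) for g, f in fields(p)]

for p in range(2, 7):
    F, aF = fields(p), antifields(p)
    assert len(F) + len(aF) == 4 * (p + 2)
    for (g, f), (ga, fa) in zip(F, aF):
        assert g + ga == -1 and f + fa == p + 1
    assert {g + f for g, f in F + aF} <= {0, 1, p - 1, p}
```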
\begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline \multirow{3}{4em}{Antifield} &&&&&&& \\ & $X^{+}_{i}$ & $A_{+}^i$ & $Y^{+}_i$ & $Z_{+}^{i}$ & $\epsilon_{+}^i$ & $\chi^{+}_{i}{}^{(r)}$ & $\psi_{+}^i{}_{(r)}$ \\ &&&&&&& \\ \hhline{|=|=|=|=|=|=|=|=|} \multirow{3}{6.0em}{Ghost degree} &&&&&&& \\ & $-1$ & $-1$ & $-1$ & $-1$ & $-2$ & $-r-2$ & $-r-2$ \\ &&&&&&& \\\hline \multirow{3}{5.5em}{Form degree} &&&&&&& \\ & $p+1$ & $p$ & $2$ & $1$ & $p+1$ & $r+3$ & $r+2$ \\ &&&&&&& \\\hline \end{tabular}\end{center}\caption{The antifields and antighosts of the twisted R-Poisson sigma model in $p+1$ dimensions. The range of $r$ is the same as for the corresponding fields.}\label{table2}\end{table} Next, one could use the BRST transformations found in the previous section to write down the extension of the classical action $S^{(p+1)}$ of \eqref{Sp+1} by all terms that contain one antifield and subsequently extend this action with all allowed terms with two or more antifields such that the classical master equation is satisfied. Alternatively, one could directly determine the BV operator $s$, i.e. the extension of the BRST operator $s_0$ that is nilpotent off-shell, using the fact that its action on the antifields produces equations of motion. Here we will follow this latter approach. Let us describe our approach in a heuristic way before presenting the details of the procedure. We have already found in Section \ref{sec31} that in all cases when the square of the BRST operator on the fields does not vanish, it is proportional to the field equations $F^{i}$, $G_{i}$ and ${\cal F}^{i}$, the latter appearing only in $s_0^2Z_i$. Therefore, we will certainly need the antifields from Table \ref{table2} whose transformation gives these field equations.
These are $Z^{i}_{+}$, $Y_{i}^{+}$ and $A_{+}^{i}$, whose BV transformation will contain \bea s Z^{i}_{+} &\supset& (-1)^{p+1}\,F^{i}\,, \label{sZ+pre}\\[4pt] sY_{i}^{+}&\supset& \, G_{i}\,, \\[4pt] sA_{+}^{i}&\supset& (-1)^p{\cal F}^{i}\,, \label{sA+pre} \end{eqnarray} among other terms that we will determine. The goal then is to extend the BRST transformations by terms proportional to these antifields such that the square of the resulting operator vanishes. However, one should be careful with two more issues. The first issue is that once $Z_{+}^{i}$- and $Y^{+}_{i}$-dependent terms are included in the transformation of some field, the lower field which transforms as the derivative of the previous field will contain terms proportional to $\mathrm{d} Z_{+}^{i}$ and $\mathrm{d} Y^{+}_{i}$. This issue is ameliorated by noting that \bea s\psi^{i}_{+}&\supset& (-1)^p\,\mathrm{d} Z^{i}_{+}\,,\\[4pt] s\chi^{+}_{i}&\supset& (-1)^p\, \mathrm{d} Y^{+}_{i}\,, \end{eqnarray} and so on for the $\chi$ and $\psi$ series, since in general \bea s \psi_{+}^{i}{}_{(r)} &\supset& \,\mathrm{d} \psi_{+}^{i}{}_{(r-1)}\,, \\[4pt] s \chi^{+}_{i}{}^{(r)} &\supset& -\,\mathrm{d} \chi^{+}_{i}{}^{(r-1)}\,. \end{eqnarray} The second issue regards the appearance of explicit field equations in $s_0Z_i$ and in fact in all ghosts of the $\psi$-series. One may then ask whether any of the antifields will also contain explicit field equations in their BV transformation. The answer is necessarily yes and it will turn out to be very important in determining the correct BV action. Crucially, we will find that $sA^{i}_{+}$ contains $Z_{i}$ dependence and this will lead to a modification of its BV transformation by explicit $F^{i}$-dependent terms. Higher antifields will also get corrected accordingly, but it will become obvious that this will not be crucial for finding the BV action and can be determined a posteriori. 
This feature of higher ghosts and antifields having a BV operator that contains field equations is one that does not exist in ordinary AKSZ constructions. In summary, this heuristic discussion establishes the strategy for determining the BV operator on the fields. First we recall that the BV operator, denoted in general as $s_{\text{\tiny{BV}}}$, should satisfy the following three properties: \begin{enumerate} \item[I.] When antifields are set to zero, it reduces to the BRST operator $s_0$. \item[II.] It is strictly nilpotent, $s_{\text{\tiny{BV}}}^2=0$, without using the field equations. \item[III.] It is obtained from a BV action as $s_{\text{\tiny{BV}}}\cdot=(S_{\text{BV}},\cdot)$, with respect to the BV antibracket $(\cdot,\cdot)$ defined in Appendix \ref{appa}. \end{enumerate} Note that there can (and will) exist operators $s$ other than $s_{\text{\tiny{BV}}}$ that satisfy the first two properties. It is the third property that establishes the right $s=s_{\text{\tiny{BV}}}$ that corresponds to a solution of the classical master equation. Our strategy then goes as follows. Consider the square of the BRST operator and add terms linear in the antifields to $s_0$, say $s_1$, such that the field equations cancel. Then compute the square of the modified BRST operator $s_0+s_1$, which will also be proportional to the field equations in general. Modify the operator $s_0+s_1$ by some antifield-dependent $s_2$ and repeat the procedure until the modified operator is nilpotent off-shell. Then properties I and II are addressed. Property III is nearly automatic in the untwisted case but harder to satisfy in the twisted case. The strategy would be to add all possible additional $H$-dependent terms and solve a complicated set of consistency conditions. In the following, we will apply the above strategy in the untwisted case in arbitrary dimensions and we will also solve explicitly the twisted case in three dimensions, addressing the complicated property III.
Let us now apply this procedure, starting with the simplest case of $A_i$, for which the square of the BRST operator is given in \eqref{s02A}. Using \eqref{sZ+pre}, we refine it to \begin{equation} (s_0+s_1)A_i=\mathrm{d}\epsilon_i+\partial_{i}\Pi^{jk}A_j\epsilon_{k}-\frac {(-1)^p}{2} \partial_i\partial_l\Pi^{jk}Z_{+}^{l}\epsilon_j\epsilon_k\,. \end{equation} The square of the modified operator is easily calculated; requiring that it vanishes completely fixes the transformation of $Z_+^{i}$ too. Specifically, for \begin{equation} \label{sZ+} s Z_{+}^{i}=(-1)^{p+1}F^{i}+\partial_{j}\Pi^{ik}Z_+^{j}\epsilon_{k}\,, \end{equation} we find that $(s_0+s_1)^2A_{i}=0$ identically. Therefore the BV operator on $A_i$ is \begin{equation} \label{sA} sA_i=\mathrm{d}\epsilon_i+\partial_{i}\Pi^{jk}A_j\epsilon_{k}-\frac {(-1)^p}{2} \partial_i\partial_l\Pi^{jk}Z_{+}^{l}\epsilon_j\epsilon_k\,. \end{equation} However, one should now cross-check that $s^{2}Z_{+}^{i}=0$ too. This is a non-trivial consistency check, which is established by a short calculation, using also the modified transformation of $F^{i}$, directly computed to be \begin{equation} s F^{i}=\partial_k\Pi^{ji}F^{k}\epsilon_j-\frac {(-1)^p}{2} \Pi^{ij}\partial_j\partial_k\Pi^{lm}Z_{+}^{k}\epsilon_l\epsilon_m\,. \end{equation} In this way we have determined the BV operator on $A_i$ and $Z_{+}^{i}$. The fact that the procedure stopped quickly is only a feature of the low form degree of $A_i$. For the rest of the fields reducibility kicks in and the procedure must be repeated multiple times.
Fortunately, the common pattern of their BRST transformations allows us to perform this task once for each of the $\chi$ and $\psi$ series, before turning to the fields $Y^{i}$ and $Z_{i}$. For all ghosts $\chi^{i}_{(r)}$, we find the BV transformation \bea s\chi^{i}_{(r)}&=&\mathrm{d}\chi^{i}_{(r+1)}+\sum_{s=0}^{p-r-2}\frac {(-1)^p}{s!}\partial_k\partial_{l_1}\dots \partial_{l_s}\Pi^{ij}\sum_{s'=0}^{p-r-s-2}(-1)^{s'}{\cal O}^{l_1\dots l_s}(s,s')\mathfrak{X}^{k}_{j}(s,s')+ \nonumber\\[4pt] &+&\sum_{s=0}^{p-r-2}\frac 1{s!}(-1)^{p+r+s-1}\partial_{l_1}\dots \partial_{l_{s}}\Pi^{ij}\sum_{s'=0}^{p-r-s-2}(-1)^{s'}{\cal O}^{l_1\dots l_s}(s,s')\psi_j^{(r+s+s'+1)} - \nonumber\\[4pt] &-&\sum_{t=0}^{\lfloor\frac{p-r-2}{2}\rfloor}\sum_{s=0}^{p-r-2t-2}\sum_{s'=0}^{p-r-s-2t-2}\sum_{t'=0}^{p-r-s-s'-2t-2}\frac{(-1)^{(t+1)p+s'}}{s!t!}\frac{\beta_{(t-1)}\beta_{(a-2)}}{a!(p-a-t)!}\times \nonumber\\[4pt] &\times& \partial_{l_1}\dots\partial_{l_s}R^{ij_1\dots j_{a}k_1\dots k_{p-a}}{\cal O}^{l_1\dots l_s}(s,s')\widetilde{\cal O}_{k_1\dots k_{t}}(t,t') \epsilon_{j_1}\dots \epsilon_{j_a}A_{k_{t+1}}\dots A_{k_{p-a}}\,, \label{schir} \end{eqnarray} where we denote $a:=r+s+s'+t+t'+2$ and we define the following operators, \bea {\cal O}^{l_1\dots l_s}(s,s')&=&\sum_{\substack{m_i=-1 \\[2pt] 1\le i\le s-1}}^{s'-1}\left(\prod_{u=1}^{s-1}\psi_{+}^{l_{u}}{}_{(m_{u})}\right)\psi_{+}^{l_s}{}_{(s'-s-\sum_{i=1}^{s-1}m_i)}\,, \nonumber\\[4pt] \widetilde{\cal O}_{k_1\dots k_t}(t,t')&=&\sum_{\substack{m_i=-1 \\[2pt] 1\le i\le t-1 }}^{t'-1}(-1)^{\sum_{q=0}^{\lfloor t/2\rfloor -1}(1+m_{t-1-2q})}\left(\prod_{u=1}^{t-1}\chi^{+}_{k_{u}}{}^{(m_{u})}\right)\chi^{+}_{k_t}{}^{(t'-t-\sum_{i=1}^{t-1}m_i)}\,, \nonumber\\[4pt] \mathfrak{X}^{k}_{j}(s,s')&=&\sum_{u=0}^{p-r-s-s'-2}\widetilde{\cal O}_{j}(1,u-2)\chi^{k}_{(r+s+s'+u)}\,, \end{eqnarray} with starting values \bea {\cal O}(0,s')&=&\delta_{0,s'}\,, \\[4pt] \widetilde{\cal O}(0,t')&=&\delta_{0,t'}\,.
\end{eqnarray} We observe that the operators ${\cal O}$ and $\widetilde{\cal O}$ contain all products of antighosts of the $\psi_{+}$ and $\chi^{+}$ series, and their fusion appears in the last term of the BV operator for the ghosts $\chi^{i}_{(r)}$. A few further remarks are in order. In these formulas, the antighosts $\chi^{+}_{i}{}^{(r)}$ have been extended to include the values $r=-1,-2,-3$, which by inspection of Tables \ref{table1} and \ref{table2} are identified with \begin{equation} \chi^{+}_{i}{}^{(-1)}\equiv (-1)^{p-1}Y_i^{+}\,,\quad \chi^{+}_{i}{}^{(-2)}\equiv (-1)^p A_i\,,\quad \chi^{+}_{i}{}^{(-3)}\equiv (-1)^{p-1}\epsilon_i\,. \end{equation} There is nothing deep about these identifications; they simply uniformize the presentation of the diverse expressions. In particular, they do not mean that $A_i$ and $\epsilon_i$ are antighosts, but only that they can be alternatively included in the antighost series for presentation purposes. Similarly, for all ghosts $\psi_{i}^{(r)}$ we find the BV transformation \bea s\psi_{i}^{(r)}&=&\mathrm{d}\psi_{i}^{(r+1)}+\sum_{s=0}^{p-r-1}\frac {(-1)^p}{s!}\partial_i\partial_{l_1}\dots \partial_{l_s}\Pi^{jk}\sum_{s'=0}^{p-r-s-1}(-1)^{s'}{\cal O}^{l_1\dots l_s}(s,s')\widetilde{\mathfrak{X}}_{jk}(s,s')+ \nonumber\\[4pt] &+&\frac {(-1)^{p+r}}{2}\sum_{s=0}^{p-r-1}\frac {(-1)^s}{s!}\partial_i\partial_l\partial_{l_1}\dots \partial_{l_{s}}\Pi^{jk}\sum_{s'=0}^{p-r-s-1}(-1)^{s+s'}{\cal O}^{l_1\dots l_s}(s,s')\times \nonumber\\[4pt] &\times&\, \sum_{t=0}^{p-r-s-s'-1}\sum_{t'=0}^{t}(-1)^{t'}\widetilde{\cal O}_j(1,t'-2)\widetilde{\cal O}_k(1,t-t'-2)\chi^{l}_{(t+r+s+s'-1)} + \nonumber\\[4pt] &+&\sum_{t=0}^{\lfloor\frac{p-r-1}{2}\rfloor}\sum_{s=0}^{p-r-2t-1}\sum_{s'=0}^{p-r-s-2t-1}\sum_{t'=0}^{p-r-s-s'-2t-1}\frac{(-1)^{pt+s'}}{s!t!}\frac{\beta_{(t-1)}\beta_{(a-2)}}{a!(p-a-t+1)!}\times \nonumber\\[4pt] &\times&\, \partial_{l_1}\dots\partial_{l_s}\partial_iR^{j_1\dots j_{a}k_1\dots k_{p-a+1}}{\cal O}^{l_1\dots
l_s}(s,s')\widetilde{\cal O}_{k_1\dots k_{t}}(t,t') \epsilon_{j_1}\dots \epsilon_{j_a}A_{k_{t+1}}\dots A_{k_{p-a+1}}\,,\quad \quad \,\,\, \label{spsir} \end{eqnarray} where the only new operator that appears is defined as \begin{equation} \widetilde{\mathfrak{X}}_{jk}(s,s')=\sum_{u=0}^{p-r-s-s'-1}\widetilde{\cal O}_{j}(1,u-2)\psi_{k}^{(r+s+s'+u)}\,. \end{equation} Once again, the fusion of the operators ${\cal O}$ and $\widetilde{\cal O}$ appears in the last term. What remains is to determine the BV operator acting on the fields $Y^{i}$ and $Z_{i}$. These are, however, just special values of the above general formulas by means of the identifications in \eqref{ids1}. The above universal formulas yield the operator that satisfies properties I, II and III. It can alternatively be found via the AKSZ construction, since these expressions were derived for $H=0$, where the QP structure is restored. Nevertheless, it is worth emphasizing that the BV operator found via AKSZ would at face value look much more complicated than \eqref{schir} and \eqref{spsir}. These formulas organise the different terms in a neat and simple way and they are valid in any dimension. Turning on $H$, the task of finding a closed expression for the BV operator with all $H$-dependent terms included becomes complicated. Still, the strategy we employed can be applied, at least in a case-by-case fashion. In general, requirement I
and the fact that we have determined the form of the BRST operator for all fields including the $H$-dependence already indicates that $s\psi_i^{(r)}$ is modified to \begin{equation} s\psi_i^{(r)}|_{H=0} \mapsto s\psi_i^{(r)}|_{H=0}+ \Delta s\, \psi_i^{(r)}\,, \end{equation} where the additional $H$- and $F$-dependent term $\Delta s\,\psi_i^{(r)}$, which vanishes in the absence of $H$, is given as \bea \Delta s\,\psi_i^{(r)}&=&\sum_{s=1}^{p-r-1}\frac{(-1)^{p(s+1)}\beta_{(r)}}{(s+1)!(r+2)!(p-r-s-1)!}\tensor{H}{_{il_1\ldots l_s}^{j_1\ldots j_{r+2}k_1\ldots k_{p-r-s-1}}}\times\nonumber\\[4pt] && \qquad\qquad\times\epsilon_{j_1}\ldots\epsilon_{j_{r+2}}A_{k_1}\ldots A_{k_{p-r-s-1}} F^{l_1}\ldots F^{l_s}+ \nonumber\\[4pt] &+&\sum_{t=0}^{\lfloor\frac{p-r-1}{2}\rfloor}\sum_{s=0}^{p-r-2t-1}\sum_{s'=0}^{p-r-s-2t-1}\sum_{t'=0}^{p-r-s-s'-2t-1}\frac{(-1)^{pt+s'}}{s!t!}\frac{\beta_{(t-1)}\beta_{(a-2)}}{a!(p-a-t+1)!}\times \nonumber\\[4pt] &\times&\, \partial_{l_1}\dots\partial_{l_s}H_i{}^{j_1\dots j_{a}k_1\dots k_{p-a+1}}{\cal O}^{l_1\dots l_s}(s,s')\widetilde{\cal O}_{k_1\dots k_{t}}(t,t') \epsilon_{j_1}\dots \epsilon_{j_a}A_{k_{t+1}}\dots A_{k_{p-a+1}}+\quad \quad \nonumber\\[4pt] &+& \Delta_i(H)\,,\nonumber \end{eqnarray} where the explicit contributions guarantee that property I is satisfied, and $\Delta_i(H)$ with $\Delta_i\overset{H\to 0}\longrightarrow 0$ has to be determined such that properties II and III are satisfied too. In addition, $s\chi^{i}_{(r)}$ is also modified by a corresponding term that should be determined. One should then apply the same algorithmic procedure of taking the square of the modified operator and refining it with suitable antifields as many times as necessary, until its square eventually vanishes.
Once this is achieved, one must determine the relative weight of each of the unknown terms in the two series of $\chi$'s and $\psi$'s such that the nilpotent operator is indeed one obtained from a BV action through the antibracket. In the next section we apply this approach to the twisted R-Poisson sigma model in 3D. \section{Twisted R-Poisson-Courant sigma models in 3D} \label{sec4} In this section we apply the general formalism developed above to a specific example, essentially the simplest non-trivial one that can be fully solved including the twist. This is a 3D Courant sigma model with a 4-form Wess-Zumino term. Such $H$-twisted Courant sigma models were considered from the viewpoint of first-class constrained systems and 4-form-twisted Courant algebroids in \cite{Hansen:2009zd}. Here we study one such topological field theory that has the structure of a twisted R-Poisson sigma model. Apart from determining for the first time the BV action for twisted Courant sigma models, this task will be helpful in exemplifying (and of course extending to the twisted case) the rather complicated closed formulas derived in Section \ref{sec3}. We consider the action functional \eqref{Sp+1} in three dimensions ($p=2$), \bea \label{S3} S^{(3)}&=&\int_{\Sigma_{3}}\left(Z_i\wedge \mathrm{d} X^{i}- A_i\wedge\mathrm{d} Y^{i}+\Pi^{ij}(X)Z_i\wedge A_j -\frac 12 \partial_k\Pi^{ij}(X)Y^{k}\wedge A_i\wedge A_j \,+\right. \nonumber \\[4pt] &&\qquad \,\,\,\,\,\,\,\left. +\,\frac 1{3!}R^{ijk}(X)A_{i} \wedge A_{j}\wedge A_k\right)+\int_{\Sigma_{4}}X^{\ast}H\,, \end{eqnarray} with the Wess-Zumino term being the pull-back of a 4-form on the target space $M$, which is equipped with a twisted R-Poisson structure, consisting of a Poisson bivector $\Pi$ and an antisymmetric trivector $R$ that satisfy \begin{equation} [\Pi,R]=\langle \otimes^4\Pi,H\rangle\,.
\end{equation} In the absence of $H$, this is a Bianchi identity for the derivation \begin{equation} \mathrm{d}_{\Pi} (\cdot):=[\Pi,(\cdot)]\,, \end{equation} which is nilpotent due to the Poisson condition $[\Pi,\Pi]=0$. In this case one notices that $Y^{i}$ is a spacetime 1-form and it may be combined with $A_i$ into a 1-form ${V}^{I}=(A_i,Y^{i})$ taking values in the pull-back of the generalized tangent bundle $TM\oplus T^{\ast}M$, where the index $I$ takes its $2\,\text{dim}\,M$ values. This observation is helpful in identifying the action \eqref{S3} with the general form of a Courant sigma model with Wess-Zumino term, which reads in our conventions as \bea S^{(\text{WZ-CSM})}&=&\int_{\Sigma_{3}}\left(Z_i\wedge \mathrm{d} X^{i}-\frac 12 \eta_{IJ}{V}^I\wedge\mathrm{d} {V}^{J}+\rho^{i}_{I}(X)Z_i\wedge {V}^{I} \, + \right. \nonumber \\[4pt] &&\qquad \left.+\,\frac 1{3!} T_{IJK}(X) {V}^I\wedge {V}^J\wedge {V}^{K}\right)+\int_{\Sigma_{4}}X^{\ast}H\,, \end{eqnarray} with $\eta_{IJ}$ the $O(\text{dim}\,M,\text{dim}\,M)$ covariant metric \begin{equation} \eta=(\eta_{IJ})=\begin{pmatrix} 0 & \mbox{1 \kern-.59em {\rm l}}_{\text{dim}\,M} \\ \mbox{1 \kern-.59em {\rm l}}_{\text{dim}\,M} & 0\end{pmatrix}\,, \end{equation} $\rho^{i}_{I}$ the components of the anchor map $\rho: E=TM\oplus T^{\ast}M\to TM$ of a Courant algebroid with vector bundle $E$ and $T_{IJK}$ the elements of the Courant bracket in a local basis. The example we use has anchor map components given by the Poisson bivector $\Pi$ and Courant bracket the twisted Koszul one. For $H=0$, it is called a Poisson Courant algebroid or a contravariant Courant algebroid on a Poisson manifold \cite{Asakawa:2015jza, Bessho:2015tkk}. In the presence of the Wess-Zumino term there is a departure from this Courant algebroid structure to a twisted one in the sense of \cite{Hansen:2009zd}, or a pre-Courant algebroid in the sense of \cite{Vaisman:2004msa}, which in our example becomes the twisted R-Poisson structure.
More details on this relation are found in \cite{Chatzistavrakidis:2021nom}. Our goal now is to determine the corresponding BV action of the classical action \eqref{S3}. According to our discussion in Section \ref{sec3}, there are 16 fields and antifields, specifically the four fields $X^{i}, A_{i}, Y^{i}, Z_i$, their four antifields, three ghosts $\epsilon_i, \chi^{i}, \psi_{i}$ and their three antighosts, and one ghost for ghost $\widetilde{\psi}_i\equiv \psi_i^{(1)}$ and its antighost. First we briefly recall that when $H=0$ the BV action can be found using the AKSZ construction, see \cite{Roytenberg:2006qz}. In short, the above 16 fields are collected into four superfields of degrees $0,1,1,2$, \bea \boldsymbol{X}^i&=& X^i +Z^{i}_{+} + \psi_{+}^{i}+\widetilde{\psi}_{+}^{i}~,\\ \boldsymbol{A}_i&=& \epsilon_i+A_i+Y^{+}_{i}+\chi^{+}_{i}~, \\ \boldsymbol{Y}^i&=& \chi^{i}+Y^i +A_{+}^{i}+\epsilon_{+}^{i}~, \\ \boldsymbol{Z}_i&=& \widetilde{\psi}_{i}+\psi_{i}+Z_i+X^{+}_{i}~, \end{eqnarray} defined on the graded Q-manifold $T[1]\S_{3}$ and taking values in the QP manifold $T^{\ast}[2]T^{\ast}[1]M$, which is isomorphic to $T^{\ast}[2]T[1]M$, the QP manifold typically associated with Courant sigma models. Then the BV action is simply \cite{Roytenberg:2006qz} \bea S^{(3)}_{\text{AKSZ}}=\int_{T[1]\S_{3}}\left(\boldsymbol{Z}_i \boldsymbol{\mathrm{d} X}^{i}-\frac 12 \eta_{IJ}{\boldsymbol{V}}^I\boldsymbol{\mathrm{d}} {\boldsymbol{V}}^{J}+\rho^{i}_{I}(\boldsymbol X)\boldsymbol Z_i {\boldsymbol V}^{I} +\frac 1{3!} T_{IJK}(\boldsymbol X) {\boldsymbol V}^I {\boldsymbol V}^J {\boldsymbol V}^{K}\right)\,,\,\,\,\,\, \label{Saksz} \end{eqnarray} with $\boldsymbol V^{I}$ the superfield that combines $\boldsymbol A_i$ and $\boldsymbol Y^{i}$, and $\boldsymbol{\mathrm{d}}$ the cohomological vector field on $T[1]\S_3$. 
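For orientation, the component content of these superfields follows the standard AKSZ degree counting (a generic feature of the construction, recalled here for convenience): on $T[1]\S_3$, a superfield of total degree $n$ collects world volume $k$-forms of ghost number $n-k$,

```latex
\begin{equation}
\boldsymbol{\Phi}^{(n)}=\sum_{k=0}^{3}\Phi^{(n)}_{[k]}\,,\qquad
\mathrm{gh}\big(\Phi^{(n)}_{[k]}\big)=n-k\,,
\end{equation}
```

so that, e.g., the degree-2 superfield $\boldsymbol{Z}_i$ contains the ghost for ghost $\widetilde{\psi}_i$ (ghost number 2), the ghost $\psi_i$ (ghost number 1), the classical 2-form $Z_i$ (ghost number 0) and the 3-form antifield $X^{+}_i$ (ghost number $-1$).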
Once the twist $H$ is turned on though, this simple sequence of steps does not work, as already argued and as proven in \cite{Ikeda:2019czt} for the 2D AKSZ sigma model after twisting it by a 3-form. For the sake of completeness and for exemplifying the general formulas of Section \ref{sec3}, we now present the BV operator on the eight fields as obtained by applying them in this case. First of all, due to \eqref{bvisbrst} and \eqref{sA}, we already have the BV operator on five of them, \bea sX^{i}&=&\Pi^{ji}\epsilon_{j}\,, \\[4pt] s\epsilon_i&=&-\frac 12 \partial_i\Pi^{jk}\epsilon_j\epsilon_k\,,\\[4pt] s\chi^{i}&=&-\partial_k\Pi^{ij}\epsilon_j\chi^{k}-\Pi^{ij}\widetilde{\psi}_j-\frac 12R^{ijk}\epsilon_j\epsilon_k\,,\\[4pt] s\widetilde{\psi}_{i}&=&-\partial_i\Pi^{jk}\epsilon_j\widetilde{\psi}_{k}-\frac 12 \partial_i\partial_l\Pi^{jk}\epsilon_j\epsilon_k\chi^{l}-\frac 1 {3!}f_i^{jkl}\epsilon_{j}\epsilon_k\epsilon_l\,,\,\,\,\,\,\\[4pt] sA_i&=& \mathrm{d}\epsilon_i+\partial_{i}\Pi^{jk}A_j\epsilon_{k}-\frac 12 \partial_i\partial_l\Pi^{jk}Z_{+}^{l}\epsilon_j\epsilon_k\,, \end{eqnarray} where we recall that \begin{equation} f_i^{jkl}= \partial_iR^{jkl}+\tensor{H}{_i^{jkl}}\,. \end{equation} We observe that among the above BV transformations only that of the ghost for ghost $\widetilde{\psi}_i$ receives a correction due to the twist $H$, whereas the rest are identical to the AKSZ result. 
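As a quick illustration of how these transformations close, consider the square of $s$ on $X^i$ (in conventions where $s$ acts from the left; the computation is standard and relies only on the Poisson condition):

```latex
\begin{align}
s^2X^i &= \partial_k\Pi^{ji}\,(sX^k)\,\epsilon_j+\Pi^{ji}\,s\epsilon_j
        = \Pi^{lk}\partial_k\Pi^{ji}\,\epsilon_l\epsilon_j
          -\tfrac 12\,\Pi^{ji}\partial_j\Pi^{kl}\,\epsilon_k\epsilon_l \nonumber\\
       &= -\tfrac 12\left(\Pi^{kl}\partial_k\Pi^{ji}+\Pi^{kj}\partial_k\Pi^{il}
          +\Pi^{ki}\partial_k\Pi^{lj}\right)\epsilon_l\epsilon_j \;=\; 0\,,
\end{align}
```

the bracket in the last line being the component form of the Poisson condition $[\Pi,\Pi]=0$.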
For $Y^{i}$, partially guided by the formula \eqref{schir} for $p=2$ and $r=-1$ (recalling that $Y^{i}=-\chi^{i}_{(-1)}$), and adding a suitable $H$-dependent correction, we obtain \bea sY^{i}&=& -\mathrm{d} \chi^{i}-\partial_k\Pi^{ij}(\epsilon_jY^{k}+A_j\chi^{k})+\partial_k\partial_l\Pi^{ij}Z_{+}^{l}\epsilon_j\chi^{k} + \nonumber\\[4pt] &&+\,\Pi^{ji}\psi_j+\partial_k\Pi^{ij}Z_{+}^{k}\widetilde{\psi}_j + \nonumber\\[4pt] &&+\,R^{ijk}\epsilon_jA_k+\frac 12 \left(\partial_lR^{ijk}+\frac{1}{2}\tensor{H}{_l^{ijk}}\right)Z_{+}^{l}\epsilon_j\epsilon_k\,, \end{eqnarray} where we wrote the terms exactly in the order of their appearance in \eqref{schir}. Note that the form of the $H$-correction in the last term is absolutely necessary in order to eventually satisfy all three required properties of the BV operator. Similarly, for $\psi_i$ we apply the formula \eqref{spsir} for $p=2$ and $r=0$, keeping the order of appearance, and add suitable $H$-dependent terms to obtain \bea s\psi_i&=&\mathrm{d}\widetilde{\psi}_i+\partial_i\Pi^{jk}(-\epsilon_j\psi_k+A_j\widetilde{\psi}_k)-\partial_i\partial_l\Pi^{jk}Z_{+}^l\epsilon_j\widetilde{\psi}_k- \nonumber\\[4pt] &&-\, \partial_i\partial_l\Pi^{jk}(\epsilon_jA_k\chi^{l}+\frac 12 \epsilon_j\epsilon_kY^{l})-\frac 12 \partial_i\partial_l\partial_m\Pi^{jk}Z_+^{m}\epsilon_j\epsilon_k\chi^{l} + \nonumber\\[4pt] &&+\frac{1}{4}\tensor{H}{_{il}^{jk}}\epsilon_j\epsilon_k F^l+\,\frac 12 f_{i}^{jkl}\epsilon_j\epsilon_kA_l-\frac 1{3!}\partial_{(m}f_{i)}^{jkl}Z_+^{m}\epsilon_j\epsilon_k\epsilon_l\,. 
\end{eqnarray} Finally, collecting together terms of the same type, for the field $Z_i$ we find that \bea sZ_i&=&\mathrm{d}\psi_i+\partial_i\Pi^{jk}(-\epsilon_jZ_k+A_j\psi_k-Y^{+}_{j}\widetilde{\psi}_k) \,+ \nonumber\\[4pt] &+&\partial_i\partial_l\Pi^{jk}\left(\frac{1}{2}\epsilon_j\epsilon_k A_+^l -\epsilon_j A_k Y^l +\frac{1}{2}A_jA_k\chi^l+\epsilon_j \psi_k Z_+^l-A_j\widetilde{\psi}_k Z_+^l -\epsilon_j Y^+_k \chi^l +\epsilon_j \widetilde{\psi}_k \psi_+^l\right)+ \nonumber\\[4pt] &+&\partial_i\partial_l\partial_m\Pi^{jk}\left(\frac{1}{2}\epsilon_j\epsilon_k Y^l Z_+^m+\epsilon_j A_k\chi^l Z_+^m-\frac{1}{2}\epsilon_j \widetilde{\psi}_k Z_+^l Z_+^m +\frac{1}{2} \epsilon_j\epsilon_k\chi^l \psi_+^m\right)\, - \nonumber\\[4pt] &-&\frac 14 \partial_i\partial_l\partial_m\partial_n\Pi^{jk}Z_+^mZ_+^n\epsilon_j\epsilon_k\chi^{l}+\frac{1}{2}f_i^{jkl}\epsilon_j A_kA_l+\frac{1}{6}\tensor{H}{_{ikl}^j}\epsilon_j F^k F^l+\frac{1}{2}\tensor{H}{_{il}^{jk}}\epsilon_j A_k F^l+\nonumber\\[4pt] &+&\partial_{(i}f_{m)}^{jkl}\left(\frac{1}{6}\epsilon_j\epsilon_k\epsilon_l\psi_+^m-\frac{1}{2}\epsilon_j\epsilon_k A_lZ_+^m\right)-\frac{1}{2}\left(\partial_i R^{jkl}+\frac{1}{2}\tensor{H}{_i^{jkl}}\right)\epsilon_j\epsilon_k Y^+_l- \nonumber\\[4pt] &-&\frac{1}{6}\partial_{(i}\tensor{H}{_{m)l}^{jk}}\epsilon_j\epsilon_k F^lZ_+^m-\left(\frac{1}{12}\partial_{(m}\partial_n f_{i)}^{jkl}+\frac{1}{8}\partial_{(m}\partial_n\Pi^{jp}\tensor{H}{_{i)p}^{kl}}\right)\epsilon_j\epsilon_k\epsilon_l Z_+^mZ_+^n\,. \end{eqnarray} To verify that all the BV operators shown above are nilpotent off-shell, the complete ones for the antifields $Z_+^{i},Y^{+}_{i}, A_+^{i}$ and $\psi_{+}^{i}$ are needed too. 
They are found to be \bea s Z_{+}^{i}&=&-F^{i}-\partial_{k}\Pi^{ij}\epsilon_{j}Z_+^k\,, \\[4pt] sY^{+}_{i}&=&G_i-\partial_i\Pi^{jk}\epsilon_jY^{+}_{k}+\partial_i\partial_l\Pi^{jk}\left(\frac 12 \epsilon_j\epsilon_k\psi_{+}^l-\epsilon_j A_k Z_+^l\right)-\frac 14 \partial_i\partial_l\partial_m\Pi^{jk}\epsilon_j\epsilon_kZ_+^{l}Z_{+}^{m}\,,\,\,\,\,\,\,\,\,\,\,\,\\[4pt] sA_+^{i}&=&{\cal F}^{i}-\partial_k\Pi^{ij}(\epsilon_jA_+^{k}-Y^{+}_{j}\chi^{k}+\psi_jZ_+^{k}+\widetilde{\psi}_j\psi_+^k) -\,\nonumber\\[4pt] &&-\,\partial_k\partial_l\Pi^{ij}(A_j\chi^{k}Z_+^{l}+\epsilon_jY^{k}Z_+^{l}-\frac 12 \widetilde{\psi}_{j}Z_+^kZ_+^l+\epsilon_j\chi^k\psi_+^l) + \frac 12 \partial_k\partial_l\partial_m\Pi^{ij}\epsilon_j\chi^kZ_+^lZ_+^m \,+\nonumber\\[4pt] && + \, R^{ijk}\epsilon_jY^{+}_k+\left(\partial_l R^{ijk}+\frac{1}{2}\tensor{H}{_l^{ijk}}\right)\left(\epsilon_j A_k Z_+^l-\frac{1}{2}\epsilon_j\epsilon_k\psi_+^l\right)-\nonumber\\[4pt] &&-\frac{1}{6}\tensor{H}{_{kl}^{ij}}\epsilon_j F^k Z_+^l+\left(\frac{1}{4}\partial_{l}f_{m}^{ijk}-\frac{1}{12}\Pi^{in}\partial_l\tensor{H}{_{mn}^{jk}}\right)\epsilon_j\epsilon_k Z_+^l Z_+^m\,,\\[4pt] s\psi_+^{i}&=&\mathrm{d} Z_+^{i}+\Pi^{ij}Y_j^{+}+\partial_k\Pi^{ij}(A_jZ_+^{k}-\epsilon_j\psi^{k}_{+})+\frac 12 \partial_k\partial_l\Pi^{ij}\epsilon_jZ_+^kZ_+^l\,. \end{eqnarray} Apart from confirming that the BV operator on the fields is nilpotent, a long yet straightforward calculation leads to the result that its action on these four antifields is also nilpotent, as desired. With the above data, we can now write the candidate BV action for the 4-form-twisted R-Poisson sigma model in three dimensions. 
To present it in a compact way, let $\varphi^{\alpha}, \alpha=1,\dots,8$ be a collective notation for the eight distinct fields and ghosts of the theory, whose BV operator is given above. The BV action is simply given as \begin{equation} \label{S3BV} S_{\text{BV}}^{(3)}=S^{(3)}-\sum_{\alpha}\int (-1)^{\text{gh}(\varphi)}\varphi^{+}_{\alpha}\,s_0\varphi^{\alpha}+\int \left(L_k\,Z_+^k+M_{kl}\,Z_+^{k}Z_+^{l}+N_{klm}\,Z_+^kZ_+^lZ_+^m\right)\,, \end{equation} with $S^{(3)}$ as in \eqref{S3} and \bea L_k&=& -\partial_k\Pi^{ij}\widetilde{\psi}_jY^{+}_i+\partial_k\partial_l\Pi^{ij}(\frac 12 \epsilon_i\epsilon_jA_+^l-\epsilon_j\chi^kY^+_i+\epsilon_i\widetilde{\psi}_{j}\psi_+^k)\,+\nonumber\\[4pt] && +\, \frac 12 \partial_{k}\partial_l\partial_m\Pi^{ij}\epsilon_i\epsilon_j\chi^{l}\psi_+^m-\frac 12 (\partial_kR^{ijl}+\frac 12 H_{k}{}^{ijl})\epsilon_j\epsilon_lY^+_i+\frac 16 \partial_{(k}f_{m)}{}^{ijl}\epsilon_i\epsilon_j\epsilon_l\psi_+^m\,,\,\,\,\,\,\,\,\,\, \\[4pt] M_{kl}&=& \frac 12 \partial_k\partial_l\Pi^{ij} (\epsilon_i\psi_j-A_i\widetilde{\psi}_j)+\frac 12 \partial_k\partial_l\partial_m\Pi^{ij}(\epsilon_iA_j\chi^m+\frac 12 \epsilon_i\epsilon_jY^m) - \nonumber\\[4pt] && - \,\frac 14 \partial_{(k}f_{l)}{}^{ijm}\epsilon_i\epsilon_jA_m-\frac 1{12}\partial_{(k}H_{l)m}{}^{ij}\epsilon_i\epsilon_jF^m\,, \\[4pt] N_{klm}&=& -\frac 16 \partial_k\partial_l\partial_m\Pi^{ij}\epsilon_i\widetilde{\psi}_j-\frac 1{12}\partial_{k}\partial_l\partial_m\partial_n\Pi^{ij}\epsilon_i\epsilon_j\chi^n- \nonumber\\[4pt] && -\, \left(\frac 1{36}\partial_{(k}\partial_lf_{m)}{}^{ijn}+\frac 1{24}\partial_{(k}\partial_l\Pi^{ip}H_{m)p}{}^{jn}\right)\epsilon_i\epsilon_j\epsilon_n\,. \end{eqnarray} That this is indeed the BV action, or in other words that it is the solution to the classical master equation $(S_{\text{BV}},S_{\text{BV}})=0$ with respect to the BV antibracket $(\cdot,\cdot)$ defined in Appendix \ref{appa}, can be seen as follows. The BV operator on the fields should satisfy the three properties I., II. and III. mentioned in Section \ref{sec32}. To confirm that $S_{\text{BV}}$ as given in \eqref{S3BV} satisfies the classical master equation, it suffices to show that all the nilpotent operators $s$ derived above are indeed the unique BV operator stemming from $S_{\text{BV}}$, and moreover that the remaining four ones, on the antifields of $X^{i},\epsilon_{i},\chi^{i},\widetilde{\psi}_i$, are also strictly nilpotent. Then the classical master equation follows due to the graded Jacobi identity for the antibracket. This is not trivial because the operator $s$ can have additional off-shell ambiguities, terms that are proportional to the classical equations of motion of the theory. In particular, there is more than one way to satisfy properties I. and II., and the point is to show that property III. completely fixes $s$ to be $s_{\text{\tiny{BV}}}$ without further ambiguities. 
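To make this logic explicit, recall the standard mechanism relating nilpotency to the master equation. With the antibracket of Appendix \ref{appa}, which in common conventions takes the schematic form

```latex
\begin{equation}
(F,G)=\int_{\S_3}\left(\frac{\delta_r F}{\delta\varphi^{\alpha}}\,\frac{\delta_l G}{\delta\varphi^{+}_{\alpha}}
-\frac{\delta_r F}{\delta\varphi^{+}_{\alpha}}\,\frac{\delta_l G}{\delta\varphi^{\alpha}}\right)\,,
\end{equation}
```

the BV operator is generated as $s_{\text{\tiny{BV}}}(\cdot)=(S_{\text{BV}},\cdot\,)$, and the graded Jacobi identity yields $s_{\text{\tiny{BV}}}^2(\cdot)=\tfrac 12\big((S_{\text{BV}},S_{\text{BV}}),\cdot\,\big)$. Strict nilpotency of $s_{\text{\tiny{BV}}}$ on a complete set of fields and antifields therefore enforces the classical master equation, provided that the nilpotent operator at hand is indeed the one generated by $S_{\text{BV}}$; this is precisely the uniqueness statement to be established.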
To show this, first of all notice that only terms proportional to the field equation $F^{i}$ constitute possible ambiguities. This is proven as follows. An ambiguity proportional to the field equation ${\cal G}_i$ can potentially appear only in the 3-form antifields $X^+_i, \epsilon^{i}_+, \chi_i^{+}, \widetilde{\psi}_+^{i}$, since ${\cal G}_i$ is a 3-form; such ambiguity terms are a product of a scalar and ${\cal G}_i$. Since there are no scalar antifields, the ghost degree of a scalar that multiplies ${\cal G}_i$ has to be nonnegative, which means that this type of correction can exist only for the antifields of ghost degree $-1$. The only such antifield is $X^+_i$, in which case the scalar multiplying ${\cal G}_i$ would have vanishing ghost degree, meaning that it is a function of $X$. But such terms in $sX^+_i$ are completely determined by the classical part of the BV action and cannot be modified, which then eliminates all the ambiguities proportional to ${\cal G}_i$. The ambiguities proportional to the 2-form field equations $G_i$ and ${\cal F}^i$ are possible only in the 3-form antifields $X^+_i$, $\epsilon_+^i$, $\chi^+_i$, $\widetilde{\psi}_+^i$, the 2-form antifields $A_+^i$, $Y^+_i$, $\psi_+^i$ and the 2-form field $Z_i$. Here the 2-form antifields cannot receive such corrections for the same reason why the 3-form antifields could not receive corrections proportional to ${\cal G}_i$. In $sZ_i$ such corrections would not contain any antifields (because there are no scalar antifields), but all such terms are determined by the BRST operator (property I.). Finally, correction terms in the 3-form antifields would have a 1-form multiplying the field equation. This 1-form would need to contain an antifield, for otherwise such a term would be determined by the classical part of the BV action. Since there are no scalar antifields and the only 1-form antifield is $Z_+^i$, the term would contain only $Z_+^i$ and no other antifields. 
However, this would produce terms in $sZ_i$ that contain no antifields, and all such terms are determined by the BRST operator. Hence, the only remaining possibility consists of ambiguities proportional to the field equation $F^i$. Here, the 1-form fields $A_i$ and $Y^i$ cannot receive any corrections, since such terms cannot contain any antifields and as such are determined by the BRST operator. Similarly, $Z_+^i$ cannot receive such corrections either, because that would require $sZ_i$ to receive corrections that contain no antifields, and that part is again determined by the BRST operator. On the other hand, there is no obstruction for $s\psi_i$ and $sZ_i$ to receive corrections proportional to $F^i$ (with the correction in $sZ_i$ containing at least one antifield). The remaining antifields would then receive corrections proportional to $F^i$ as well, but all those would be determined by the corrections of $s\psi_i$ and $sZ_i$. In summary, the only possible independent ambiguities are those proportional to the field equation $F^i$ in $s\psi_i$ and $sZ_i$. At this stage, property I. is completely satisfied, but properties II. and III. still have to be taken into account. Taking into account all possible corrections, a straightforward calculation finally removes any remaining ambiguities. A final cross-check is to confirm that $s_{\text{\tiny{BV}}}^2$ vanishes on $X_i^{+},\epsilon_{+}^{i},\chi_i^{+}$ and $\widetilde{\psi}^{i}_{+}$. First of all, using property III. 
we determine \bea s\widetilde{\psi}^{i}_{+}&=& \mathrm{d}\psi_{+}^{i}+\Pi^{ji}\chi^{j}_+-\partial_k\Pi^{ij}(\epsilon_j\widetilde{\psi}^k_{+}-Z_+^{k}Y^{+}_j-A_j\psi_+^k)\, + \nonumber\\[4pt] && + \,\partial_k\partial_l\Pi^{ij}(\epsilon_jZ_+^l\psi_+^{k}-\frac 12 A_jZ_+^lZ_+^k)-\frac 1{3!}\partial_k\partial_l\partial_m\Pi^{ij}\epsilon_jZ_+^kZ_+^lZ_+^m\,, \\[4pt] s \chi_i^{+}&=& \mathrm{d} Y_i^{+}+\partial_i\Pi^{jk}(-\epsilon_j\chi^+_k+A_jY^+_k)+\, \nonumber\\[4pt] && +\, \partial_i\partial_l\Pi^{jk}(\epsilon_jZ_+^lY^+_k+\epsilon_jA_k\psi_+^l-\frac 12 \epsilon_j\epsilon_k\widetilde{\psi}_{+}^{l}+\frac 12 A_jA_kZ_+^l) +\, \nonumber\\[4pt] &&+\,\frac 12 \partial_i\partial_l\partial_m\Pi^{jk}\epsilon_j(\epsilon_kZ_+^m\psi_+^l-A_kZ_+^lZ_+^m)-\frac 12 \partial_i\partial_l\partial_m\partial_n\Pi^{jk}\epsilon_j\epsilon_kZ_+^lZ_+^mZ_{+}^n\,, \end{eqnarray} for the 3-form antighosts of the scalar ghosts of the theory, and moreover \bea s\epsilon_i^{+}&=& -\mathrm{d} A_+^{i}+\Pi^{ij}X^{+}_{j}-\partial_k\Pi^{ij}(\epsilon_j\epsilon_+^k-\chi^k\chi_j^{+}+\widetilde{\psi}_j\widetilde{\psi}_+^k+A_jA_+^k-Y^kY_j^+-\psi_j\psi_+^k+Z_jZ_+^k)\nonumber \\[4pt] && -\, \partial_k\partial_l\Pi^{ij}(\epsilon_j\chi^l\widetilde{\psi}_+^{k}+\epsilon_jZ_+^lA_+^k+\chi^kZ_+^lY^+_j+\epsilon_jY^l\psi_+^k+A_j\chi^l\psi_+^k-\widetilde{\psi}_{j}Z_+^l\psi_+^k+\nonumber\\[4pt] &&\qquad\qquad\, +A_jY^lZ_+^k-\frac 12 \psi_jZ_+^kZ_+^l)+ \nonumber\\[4pt] && +\, \partial_k\partial_l\partial_m\Pi^{ij}(\epsilon_j\chi^lZ_+^m\psi_+^k+\frac 12 \epsilon_jY^lZ_+^kZ_+^m+\frac 12 A_j\chi^lZ_+^kZ_+^m-\frac 16 \widetilde{\psi}_{j}Z_+^kZ_+^lZ_+^m)- \nonumber\\[4pt] &&-\,\frac 16 \partial_k\partial_l\partial_m\partial_n\Pi^{ij}\epsilon_j\chi^lZ_+^kZ_+^mZ_+^n-R^{ijk}(\epsilon_j\chi_k^++A_kY^+_j)-\nonumber\\[4pt] &&-\,\partial_lR^{ijk}(\frac 12\epsilon_j\epsilon_k\widetilde{\psi}_+^l+\epsilon_kZ_+^lY^+_j-\epsilon_jA_k\psi_+^l-\frac 12 A_{j}A_kZ_+^l)+\nonumber\\[4pt] && +\, \frac 12\partial_l\partial_mR^{ijk}( \epsilon_j\epsilon_kZ_+^l\psi_+^m-\epsilon_jA_kZ_+^lZ_+^m)-\frac 1{12}\partial_l\partial_m\partial_nR^{ijk}\epsilon_j\epsilon_kZ_+^lZ_+^mZ_+^n+ \Delta s\,\epsilon_i^{+}\,,\\[4pt] sX^{+}_i&=&(S_{\text{BV}},X_i^{+})\,, \end{eqnarray} where we refrain from presenting the full result for $X^+_i$ since it contains all possible partial derivatives with respect to $X$ on every term of the BV action and is hence a very long expression. 
The $H$-dependent part of the transformation on $\epsilon^+_i$ is hidden in $\Delta s$, which is given as \bea \Delta s\,\epsilon_+^i&=&-\frac 12 H_l{}^{ijk}(\epsilon_j\epsilon_k\widetilde{\psi}_{+}^{l}+\epsilon_kZ_+^lY^+_j-2\epsilon_jA_k\psi_+^l-A_jA_kZ_+^l)+\nonumber\\[4pt] && +\, \frac 12 H_{kl}{}^{ij}F^{l}(\epsilon_j\psi_+^k-A_jZ_+^k)-\frac 16 \partial_{(l}H_{m)k}{}^{ij}\epsilon_jF^kZ_+^lZ_+^m+\frac 16 H_{jkl}{}^{i}F^kF^lZ_+^j+\nonumber\\[4pt] && +\, \frac 12 \partial_{(m}H_{l)}{}^{ijk}(\epsilon_j\epsilon_kZ_+^m\psi_+^l-\epsilon_jA_kZ_+^lZ_+^m)-\nonumber\\[4pt] &&-\,\left(\frac 1 {12}\partial_{(m}\partial_nH_{l)}{}^{ijk}+\frac 16 \partial_{(m}\partial_n\Pi^{ip}H_{l)p}{}^{jk}\right)\epsilon_j\epsilon_kZ_+^lZ_+^mZ_+^n\,. \end{eqnarray} Tracking all possible terms with an exterior derivative $\mathrm{d}\cdot$ in the calculation of $s^2$ for any of these four antifields, we indeed find that they all vanish, as desired. To facilitate the comparison with the BV operators and the BV action found through the AKSZ theory in the $H=0$ case, henceforth denoted $s_{\text{{\tiny{AKSZ}}}}$, we may rewrite the above expressions as \bea s\varphi^{\alpha}=s_{\text{{\tiny{AKSZ}}}}\varphi^{\alpha}+\Delta s\, \varphi^{\alpha}\,, \end{eqnarray} where $\varphi^{\alpha}$ are the eight distinct fields and ghosts. 
Then $\Delta s$ vanishes for four of them, namely for $X^{i},\epsilon_i,\chi^{i}$ and $A_i$, whereas for the remaining four we have found \bea \Delta s\,Y^{i}&=&\frac 14 H_{l}{}^{ijk}Z_{+}^{l}\epsilon_j\epsilon_k\,,\\ \Delta s\, \widetilde{\psi}_i&=& -\frac 1 {3!}H_{i}{}^{jkl}\epsilon_j\epsilon_k\epsilon_l\,, \\[4pt] \Delta s\, \psi_i&=& \left(\frac 14 H_{il}{}^{jk} F^{l}+\frac 12 H_{i}{}^{jkl}A_l-\frac 1{3!}\partial_{(m}H_{i)}{}^{jkl}Z_{+}^m\epsilon_l\right)\epsilon_j\epsilon_k\,, \\[4pt] \Delta s\, Z_i&=&\left\{\frac 1{3!}H_{ikl}{}^{j}\, F^{k}F^{l}+\frac 12H_{il}{}^{jk}\, A_kF^l+\frac 14H_{i}{}^{jkl}\, (A_kA_l+\epsilon_kY^{+}_l) +\frac{1}{3!}\partial_{(i}H_{m)l}{}^{jk} F^lZ_+^m\epsilon_k \right. \,\quad \nonumber\\[4pt] && \left.+\frac 1{3!} \partial_{(m}H_{i)}{}^{jkl}\left(\epsilon_l\psi_+^m+3 A_lZ_+^m\right)\epsilon_k -\frac 1{2\cdot 3!}\partial_{(m}\partial_n H_{i)}{}^{jkl}\epsilon_k\epsilon_l Z_+^mZ_+^n\right\}\epsilon_j\,. \end{eqnarray} This leads us to an alternative presentation of the BV action for the 4-form-twisted R-Poisson-Courant sigma model, which reads{\footnote{To avoid confusion, note that it is only $S_{\text{BV}}^{(3)}$ that satisfies the classical master equation. In the present context $S^{(3)}_{\text{{AKSZ}}}$ does not satisfy the classical master equation in general, but only when $H=0$. 
}} \bea S_{\text{BV}}^{(3)}&=&S_{\text{AKSZ}}^{(3)}+\Delta S^{(3)}\,, \end{eqnarray} where $S_{\text{AKSZ}}^{(3)}$ is the AKSZ action for the untwisted R-Poisson-Courant sigma model given in \eqref{Saksz}, and $\Delta S^{(3)}$ is the $H$-dependent correction to it, given by \bea \Delta S^{(3)}&=& \int_{\Sigma_3}\left(-\frac{1}{6}\tensor{H}{_l^{ijk}}\epsilon_i\epsilon_j\epsilon_k\widetilde{\psi}_+^{l}-\frac{1}{4}\tensor{H}{_{kl}^{ij}}\epsilon_i\epsilon_j F^k\psi_+^l +\frac{1}{2}\tensor{H}{_l^{ijk}}\epsilon_i\epsilon_j A_k\psi_+^l+\frac{1}{6}\tensor{H}{_{jkl}^i}\epsilon_i F^jF^k Z_+^l\right. \nonumber\\[4pt] &+&\frac{1}{6}\partial_{(m}\tensor{H}{_{l)}^{ijk}}\epsilon_i\epsilon_j\epsilon_k Z_+^l\psi_+^m -\frac{1}{4}\tensor{H}{_l^{ijk}}\epsilon_i\epsilon_j Y^+_k Z_+^l+\frac{1}{2}\tensor{H}{_l^{ijk}}\epsilon_i A_jA_k Z_+^l- \nonumber\\[4pt] &-&\frac{1}{2}\tensor{H}{_{kl}^{ij}}\epsilon_i A_j F^k Z_+^l -\frac{1}{4}\partial_m\tensor{H}{_l^{ijk}}\epsilon_i\epsilon_j A_k Z_+^l Z_+^m+\frac{1}{12}\partial_m\tensor{H}{_{kl}^{ij}}\epsilon_i\epsilon_j F^k Z_+^l Z_+^m-\nonumber\\[4pt] &-&\left.\left(\frac{1}{36}\partial_m\partial_n\tensor{H}{_l^{ijk}}+\frac{1}{24}\partial_m\partial_n\Pi^{kp}\tensor{H}{_{lp}^{ij}}\right)\epsilon_i\epsilon_j\epsilon_k Z_+^l Z_+^m Z_+^n\right) +\int_{\S_4} X^{\ast}H\,. \end{eqnarray} Obviously, when $H=0$, $\Delta S^{(3)}=0$ and the correct solution of the classical master equation is given by the AKSZ action. \section{Conclusions} \label{sec5} The solution to the classical master equation of topological sigma models whose target space possesses a QP structure as a graded manifold can be found using the AKSZ construction, which at the same time provides a clear correspondence between geometry and field theory. In 2D this procedure results in the BV action of the Poisson sigma model and the A-/B-models, and in 3D in that of Chern-Simons theory and, more generally, of Courant sigma models. 
Higher-dimensional cases, essentially reflecting Hamiltonian mechanics in many dimensions, were formally discussed in \cite{Severa}, and a 4D case was worked out completely in \cite{Ikeda:2010vz}. In this paper, we studied topological sigma models whose target space does not have a genuine QP structure, so that the systematic construction mentioned above does not apply as it stands. This is motivated by the 2D example of the 3-form-twisted Poisson sigma model, where the Wess-Zumino term obstructs QP-ness of the target but the solution of the classical master equation was fully identified in \cite{Ikeda:2019czt}. Our main purpose was to generate new examples of this situation, with an outlook towards developing a general geometric theory for the BV formalism of topological sigma models with Wess-Zumino terms. In this spirit, we started from the recently constructed twisted R-Poisson sigma models in arbitrary dimensions \cite{Chatzistavrakidis:2021nom}. In 3D this corresponds to 4-form-twisted Courant sigma models \cite{Hansen:2009zd} (Chern-Simons theory with a Wess-Zumino term), whereas in general dimensions, say $p+1$, they correspond to twisting AKSZ models by a closed $(p+2)$-form. One of the advantages is that these theories are known in great detail and offer the possibility of deriving explicit and universal formulas that are valid in any dimension, so they can be fully worked out. Twisted R-Poisson sigma models are multiple-stage reducible systems with an open gauge algebra. As a first step, we determined the BRST operator on all fields and ghosts of the theory in any dimension and, by calculating its square, confirmed that it vanishes only on-shell. Notably, the square of the BRST operator is not linear in the classical field equations for all fields; instead, products of them can arise, a phenomenon that we could call ``non-linear openness of the gauge algebra''. 
To take care of this, we introduced the necessary antifields and antighosts dictated by the BV formalism. Our first main result then is that \bi \item in the untwisted case, namely when the Wess-Zumino term is turned off, we determined a complete set of expressions for the off-shell nilpotent BV operator of the theory that gives rise to the BV action solving the classical master equation in any world volume dimension. \ei This result is formally not new, in the sense that these expressions could be derived using the AKSZ construction, since there is no obstruction to the QP structure on the target space in the absence of a Wess-Zumino term. Nonetheless, should one derive these formulas from the AKSZ/BV action, one would find an expanded and very complicated form of the expressions we derived. This is due to the fact that we followed a different strategy, which may be summarized as follows. Instead of adding antifield-dependent terms to the classical action, which is the usual procedure in the BV formalism but is very hard to carry out in arbitrary dimensions, we followed a refinement procedure for the BRST operator. Specifically, knowing its square, we replaced each field equation in it with an antifield and added this term to the BRST operator. Calculating the square of the new operator, one finds again terms proportional to the field equations. Repeating this procedure as many times as necessary, one can end up with an off-shell nilpotent operator, which becomes the BRST one once the antifields are turned off. Fortunately, due to repeating patterns in the transformation of the ghosts of the theory, this procedure is fully tractable. Requiring that the resulting operator is obtained from an action via the BV antibracket, we identify it with the BV operator of the theory. This procedure has the advantage that it yields elegant and closed expressions for the BV operator in any dimension, in contrast to the AKSZ construction, while being equivalent to it. 
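Schematically (in our own shorthand, not used elsewhere in the text), the refinement procedure can be summarized as the iteration

```latex
\begin{equation}
s_{(0)}=s_{\text{BRST}}\,,\qquad
s_{(n)}^2\,\varphi \;=\; \sum_{a} E_a\,\Lambda^{a}_{(n)}(\varphi)
\;\;\longrightarrow\;\;
s_{(n+1)}\,\varphi \;=\; s_{(n)}\,\varphi \,+\, \sum_{a}\varphi^{+}_{a}\,\Lambda^{a}_{(n)}(\varphi)\,,
\end{equation}
```

where $E_a$ collectively denotes the classical field equations, $\varphi^{+}_{a}$ the corresponding antifields, and signs and numerical factors are left schematic. The iteration terminates when $s_{(N)}^2=0$ off-shell, and requiring that $s_{(N)}$ descends from an action via the antibracket identifies it with $s_{\text{\tiny{BV}}}$.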
Once the Wess-Zumino term is turned on and, as a result, the R-Poisson structure is twisted, the procedure suggested above requires determining suitable modifications to the AKSZ/BV operator such that the new operator again satisfies all requirements to be a BV one, this time with the new geometric conditions brought about by the $(p+2)$-form twist. This is a hard problem, which we solve in 3D. Specifically, our second main result is that \bi \item in the twisted case in three dimensions, we determined the full solution to the classical master equation. In other words, we determined all necessary 4-form-dependent modifications to the BV operator and the BV action for 4-form-twisted R-Poisson-Courant sigma models. \ei This is then the second fully worked out example of a topological field theory with a non-QP target space whose BV action is identified, and the first in dimensions greater than two. We note that in two dimensions there exist in fact many more examples based on Dirac sigma models, as reported in \cite{CJSS}. Based on the above results, it would be interesting to attempt the development of an extension of the AKSZ construction such that Wess-Zumino terms are taken into account and the target space geometry goes beyond QP structures. To achieve this, it would be helpful to solve the classical master equation in arbitrary dimensions and in the presence of Wess-Zumino terms, and also to identify in every detail the higher geometric structures that appear in the problem, meaning all higher connections, torsion and curvature tensors that generalize the ones of the twisted Poisson sigma model. This would also be useful in solving the quantum master equation and identifying the corresponding quantum BV action for this class of theories, which would also be interesting in relation to deformation quantization. We plan to report on these issues in future work.
\section{Introduction} In recent years, Intelligent Transportation Systems (ITS) have experienced an unparalleled expansion for many reasons. The~availability of cost-effective sensor networks, pervasive computation in assorted flavors (distributed/edge/fog computing) and the so-called Internet of Things are all accelerating the evolution of ITS~\cite{zhu2018big}. On top of them, Smart~Cities can in no way be understood without Smart Mobility and ITS as technological pillars sustaining their operation~\cite{albino2015smart}. Smartness springs from connectivity and intelligence, which implies that massive flows of information are acquired, processed, modeled and used to enable faster and better informed decisions. Over the last couple of decades, ITS have grown enough to cross-pollinate with previously distant areas such as Machine Learning and its superset in the Artificial Intelligence taxonomy: Data Science. These days Data Science is placed at the methodological core of works ranging from traffic and safety analysis, modeling and simulation, to transit network optimization, autonomous and connected driving and shared mobility. Since the early 1990s, most ITS systems have relied exclusively on traditional statistics, econometric methods, Kalman filters, Bayesian regression, auto-regressive models for time series and Neural Networks, to mention a few~\cite{zhang2011data,karlaftis2011statistical}. What has changed dramatically over the years is the abundance of available data in ITS application scenarios, as a result of new forms of sensing (e.g.,~crowd sensing) with unprecedented levels of heterogeneity and velocity. Zhang~et~al.~\cite{zhang2011data} have defined this new form of data-driven ITS as systems driven by vision, multisource data and learning algorithms to optimize their performance and augment their privacy-aware, people-centric~character. 
The exploitation of this upsurge of data has been enabled by advances in computational structures for data storage, retrieval and analysis, which have rendered it feasible to train and maintain extremely complex data-based models. These baseline technologies have laid a solid substrate for the proliferation of studies dealing with powerful modeling approaches such as Deep Learning or bio-inspired computation~\cite{del2019bioinspired}, which currently protrude in the literature as the \emph{de facto} modeling choice for a myriad of data-intensive~applications. However, significant consideration must be given to the systematic and myopic selection of complex data-based solutions over well-established modeling choices. The~current research mainstream seems to be misleadingly focusing on performance-biased studies, in~a fast-paced race towards incorporating sophisticated data-based models into manifold research areas, leaving aside or completely disregarding the operational aspects for the applicability of such models in ITS environments. The~scope of this work is to review existing literature on data-driven modeling and ITS, and~identify the functional elements and specific requirements of engineering solutions, which are the ultimate enablers for data-based models to lead towards efficient means to operate ITS assets, systems and processes; in other words, for data-based models to fully become \emph{actionable}. Bearing the above rationale in mind, this~work underscores the need for formulating the requirements to be met by forthcoming research contributions around data-based modeling in ITS systems. To this end, we~focus mainly on system-level on-line operations that hinge on data-based pipelines. However, ITS is a wide research field, encompassing operations held at longer time scales (e.g.,~long-term and mid-term planning) that may not demand some of the functional requirements discussed throughout our work.
Furthermore, our discussions target system-level operations rather than user-level or vehicle-level applications, since in the latter the information flow from and to the system is scarce. Nevertheless, some of the described functional requirements for system-level real-time decisions can be extrapolated to other levels and time scales seamlessly. From this perspective, our ultimate goal is to prescribe -- or at least, set forth -- the main guidelines for the design of models that rely heavily on the collection, analysis and exploitation of data. To this end, we~delve into a series of contributions that are summarized below: \begin{itemize}[leftmargin=*] \item In the first place, we~identify the gap between the data-driven research reported so far, and~the practical requirements that ITS experts demand in operation. We~capitalize on this gap to define what we herein refer to as \emph{actionable data-based modeling workflow}, which comprises all data processing stages that should be considered by any actionable data-based ITS model. Although diverse data-based modeling workflows can be found in the literature with different purposes, most of them count on recognized stages, which are presented in this work from an actionability perspective, i.e.,~what to take into account from the operational point of view when designing the workflow, how~to capture and preprocess data, how to develop a model and how to prescribe its output. These guidelines are proposed and argued within an ITS application context. However, they can be useful for any other discipline in which data-based modeling is performed.
\item Next, functional requirements to be satisfied by the aforementioned workflow are described and framed in the context of ITS systems and processes, with examples exposing their relevance and consequences if they are not fulfilled. The contributions of this section are twofold: on the one hand, we~identify and define the holistically actionable ITS model along with its main features; on the other hand, we~enumerate requirements for each feature to be considered actionable, and provide a review of the latest literature dealing with these features and requisites. \item Finally, on a prospective note we elaborate on current research areas of Data Science that should progressively enter the ITS arena to bridge the identified gap to actionability. Once the challenges of modeling and ITS requirements have been stated, we~review emerging research areas in Artificial Intelligence and Data Science that can contribute to the fulfilment of such requirements. We~expect that our reflective analysis serves as guiding material for the community to steer efforts towards modeling aspects of more impact for the field than the performance of the model itself. \end{itemize} In summary, the~contributions of this work consist of identifying the main actionability gaps in the data-based modeling workflow, gathering and describing the fundamental requirements for a system to be actionable, and, considering both the requirements and the usual data-based processing workflow, proposing solutions through the most recent technologies. These contributions are organized throughout the rest of the paper as follows: Section~\ref{pipeline} delves into the \emph{actionable data-based modeling workflow}, i.e.,~the canonical data processing pipeline that should be considered by a fully actionable ITS system with data-based models in use. Section~\ref{requirements} follows by elaborating on the functional features that an ITS system should comply with so as to be regarded as \emph{actionable}.
Once these requirements are listed and argued in detail, Section~\ref{challenges} analyzes research paths under vibrant activity in areas related to Data Science that could bring profitable insights in regard to the actionability of data-based models for the ITS field, such~as explainable AI, the~inference of causality from data, online learning and adaptation to non-stationary data flows. Finally, Section~\ref{conclusion} concludes the paper with summarizing remarks drawn from our prospect. \section{From Data to Actions: An Actionable Data-based Modeling Workflow}\label{pipeline} ITS applications with data-driven modeling problems underneath range from the characterization of driving behavioral patterns to the~inference of typical routes or traffic flow forecasting, among others. Data-driven modeling can be considered to include the family of problems where a computational model or system must be characterized or learned from a set of inputs and their expected outputs~\cite{eiben2003introduction}. In~the context of this definition, actionability complements the data-driven model by prescribing the actions (in the form of rules, optimized variable values or any other representation alike) that build upon the output knowledge enabled by the model. In general, a~design workflow for data-based modeling consists of 4 sequential stages: (1)~data acquisition (\emph{sensing}), which usually considers different sources; (2)~data preprocessing, which aims at building consistent, complete, statistically robust datasets; (3)~data modeling, where a model is learned for different purposes; and (4)~model exploitation, which includes the definition of actions to be taken with respect to the insights provided by models in real-life application scenarios.
These 4 stages can be regarded as the core of off-line data-driven modeling; however, when the time dimension joins the game, a~fifth stage -- adaptation -- must be considered as an iterative stage of this data pipeline, aimed at keeping learned models updated and adapted to eventual changes in the data distribution. This adaptation is crucial for real-life scenarios, where changes can happen in all stages, from~variations of the input data sources, to interpretation adjustments and other sources of non-stationarity imprinting the so-called \emph{concept drift} in the underlying phenomenon to be modelled~\cite{ditzler2015learning}. We~now delve into these five data processing stages in the context of their implementation in ITS applications, following the diagram in Figure \ref{fig:1}. The stages provided in Figure \ref{fig:1} can be considered as a standard workflow in any data-based work; however, although these steps are easily recognisable, they are not always followed, and~it is common to observe that practitioners put the focus only on a subset of them, disregarding their interactions or omitting some of them. For~instance, the~prescription stage is not frequently considered, while it is an essential link between the modeling outcome and the final decision/action derived from the modeling result. Besides, each step can have implications for the final actionability of the model, which is why all of them are analyzed below. \nointerlineskip \begin{figure}[H]% \includegraphics[width=0.97\columnwidth]{./PIPELINE1.pdf} \caption{Data-based modeling workflow showing its main processing stages and their principal technology~areas.} \label{fig:1} \end{figure} \subsection{Data Acquisition (Sensing)} \label{sensing} The path towards concrete data-based actions departs from the capture of available ITS information, which in this specific sector is plentiful and highly diverse.
The~advent of data science for ITS has come along with the unfolding of copious data sources related to transportation. Indeed, ITS are pouring volumes of sensed data, from~the environment perception layer of intelligent and connected vehicles, to human behaviour detection/estimation (drivers, passengers, pedestrians) and the multiple technologies deployed to sense traffic flow and behaviour. Concurrently, many other non-traditional sources that are useful to infer behavioral needs and expectations of people who use transportation, such~as social media, have started to become increasingly available and exploited, augmenting the more conventional sensing sources towards more efficient mobility solutions. Some of these data sources are currently used in almost any domain of ITS, from~operational perspectives such as the estimation of future transportation demands, adaptive signaling or the discovery of mobility patterns, to the provision of practical solutions, such~as the development of autonomous vehicles, although not all sources are suitable for all applications. Model actionability is dependent on this early stage too, which is why data selection (when possible) should not be neglected. For~instance, a~model that consumes speed data will probably require some other measurements (maximum speed of a road segment) to provide in the end something meaningful, while a model that consumes travel-time data will be more straightforward. Five main categories can be established to describe the spectrum of ITS data sources: \begin{enumerate} \item Roadside sensing, which brings together tools and mechanisms that directly capture and convey data measurements from the road, obtaining valuable metrics such as speed, occupation, flow or even which vehicles are traversing a given road segment.
These are the most commonly used sensors in ITS, most frequently based on computer vision and radar, as they directly provide traffic information close to the point where it originates. These kinds of sensed metrics are useful for traffic flow or speed modeling, allowing practitioners to identify mobility patterns and to model them, so future behavior in sensorized locations can be estimated. Counting vehicles or detecting their speed at a certain point of the road also makes it possible to obtain network-wide mobility patterns that can be compared to those provided by a simulation engine. This can help traffic managers and city planners make long-term decisions, such~as which road should be extended or how a road cut could affect other segments. However, this~information is tethered to the exact points where the sensors are placed, thus~the actionability of a system built upon these data is subject to the geographical area where such sensing devices are deployed and their range. \item In-vehicle sensing, which includes a broad range of transponder devices that are part of the on-board equipment of certain fleets. Commercial vehicles on land, air and water usually have location devices that record and emit the position and other metrics of the vehicles at all times. This opens up a wide range of ITS applications, such~as fleet management~\cite{said2016utilizing}, route optimization~\cite{urbahs2017remotely}, delay analysis and detection~\cite{khaksar2019airline}, airport/port management~\cite{mott2017estimating} or, when the vehicles are a part of traffic, a~detailed analysis of their behavior along a complete route (not only in certain sensorized locations)~\cite{herring2010estimating}. This technology is widespread in commercial fleets and its multiple analytic applications are nowadays remarkably actionable, due to industry-standard requirements.
However, machine-learning-based modeling approaches are starting to emerge, and~should consider actionability as a core concern. \item Cooperative sensing, which denotes the general family of data collection strategies that regards the information provided by different users of the ITS ecosystem as a whole, thus being grouped and jointly processed forward. This inner perspective of traffic and transportation can be obtained through many mechanisms, and, although it is more specific and scarce, it is also more complete than the one obtained from roadside sensing. These data open the door to mobility profiling and anomaly detection, enriching the outlook of a transportation model by means of the fusion of different data-based \emph{views} of an ITS scenario. This includes all forms of mobile sensing data, from~call detail record data that can be used to obtain users' trajectories~\cite{kujala2016estimation}, to~GPS data~\cite{sun2015trajectory}. These sources are the foundation of abundant research~\cite{rodrigues2011mobile, lana2018road}, but in most cases the data fusion part is obviated. Crowdsourced and Social Media sensing can be analogously considered in this category. These data sources can also contribute to data-based ITS models by means of sentiment analysis and geolocation. The~use of crowd-sourced data is well established among technology-based companies (Google, Uber, etc.), yet it is not very often available to the research community or to private and public authorities in transport operations management. The~limited information that becomes available is deprived of the necessary statistical representativeness and truthfulness to be easily integrated into legacy management systems. \item External data sources, which include all data that are not directly related to traffic or demand, but have an impact on it, such~as weather, calendar, planned events, social and economic indicators, demographic characteristics, etc.
These data are usually easy to obtain, and~their incorporation into ITS models augments in general the quality of their produced insights and ultimately, the~actionability of the actions yielded therefrom. It is also true that this data source is typically unstructured, which can pose a challenge regarding its automatic integration. \item Structured/static data, which refers to data sources that provide information of elements that have a direct impact on transportation, such~as public transportation lines and timetables, or municipal bike rental services. Due to their inherently structured nature, data provided by these sources are often arranged in a fixed format, making it easier to incorporate into subsequent data-based modeling stages. Any of the previous data and applications can be enriched with this kind of data; a model that is able to represent the mobility of a city would probably enhance its capabilities if it considered these data. For~instance, a~bus timetable can help understand traffic in the street segments that are traversed by the bus service or where its stops are located. These information sources must be considered for an intelligent transportation system to be actionable, being a particularly essential piece of urban and interurban mobility. \end{enumerate} \subsection{Data Preprocessing} The variety of the above-mentioned sensing sources comes with promises and perils. These data are produced in various forms and formats, various time resolutions, synchronously or asynchronously and different rates of accumulation. To leverage the full spectrum of knowledge these data can bring to informed decision making, the~more the sensing opportunities, the larger the need for powerful preprocessing and skills before reaching the stage of modeling. A principled data-driven modeling workflow requires more than just applying off-the-shelf tools.
In~this regard, preprocessing raw data is undoubtedly an elementary step of the modeling process~\cite{garcia2015data}, but still persists nowadays as a step frequently overlooked by researchers in the ITS field~\cite{lopes2010traffic}. To begin with, when a model is to be built on real ITS data, an important fact to be taken into account is the proneness of real environments to present missing or corrupted data due to many uncertain events that can affect the whole collection, transformation, transmission and storage process~\cite{vlahogianni2004short}. This issue needs to be assessed, controlled and suitably tackled before proceeding further with the next stages of the processing pipeline. Otherwise, missing and/or corrupted instances within the captured data may severely distort the outcome of data-based models, hindering their practical utility~\cite{chen2001study}. A~wide range of missing data imputation strategies can be found in the literature~\cite{qu2009ppca,tan2013tensor}, as well as methods to identify, correct or discriminate abnormal data inputs~\cite{li2014missing}. However, they~are often loosely coupled to the rest of the modeling pipeline~\cite{ran2016tensor}. Actionable data preprocessing should focus not only on improving the quality of the captured data in terms of completeness and regularity, but also on providing valuable insights about the underlying phenomena yielding missing, corrupted and/or outlying data, along with their implications on modeling~\cite{lana2018imputation}. Next, the~cleansed dataset can be engineered further to lay an enriched data substrate for the subsequent modeling~\cite{krempl2014open,etemad2018predicting}. A~number of operations can be applied to improve the way in which data are further processed along the chain.
For~instance, data transformation methods can be applied for different purposes related to the representation and distribution of data (e.g.,~dimensionality reduction, standardization, normalization, discretization or binarization). Although these transformations are not mandatory in all cases, a~deep knowledge of what input data represent and how they contribute to modeling is a key aspect to be considered in this preprocessing stage. Furthermore, data enrichment can be held from two different perspectives that can be adopted depending on the characteristics of the dataset at this point. As such, feature selection/engineering refers to the implementation of methods to either discard irrelevant features for the modeling problem at hand, or to produce more valuable data descriptors by combining the original ones through different operations. Likewise, instance selection/generation implies a transformation of the original data in terms of the examples. Removing instances can be a straight solution for corrupted data and/or outliers, whereas the addition of synthetic instances can help train and validate models for which scarce real data instances are available. Besides, these approaches are among the most predominant techniques to cope with class imbalance~\cite{zheng2013using}, a~very frequent problem in predictive modeling with real data. Whether each of these operations is required or not depends entirely on the input data, their quality, abundance and the relations among them. This entails a deep understanding of both data and domain, which is not always a common ground among the ITS field practitioners~\cite{smith2004investigation}. Finally, data fusion embodies one of the most promising research fields for data-driven ITS~\cite{zhang2011data, el2011data}, yet remains marginally studied with respect to other modeling stages despite its potential to boost the actionability of the overall data-based model. 
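As an illustration of the transformation steps discussed above, the following sketch shows mean imputation followed by z-score standardization on a toy series of speed readings; all names and data are hypothetical and not tied to any particular ITS dataset or library.

```python
# Hypothetical preprocessing sketch: mean imputation followed by
# z-score standardization of a small series of sensor speed readings.

def impute_mean(values):
    """Replace None entries (missing sensor readings) with the series mean."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def standardize(values):
    """Zero-mean, unit-variance scaling (z-score)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

# Example: speed readings (km/h) with two dropped samples.
raw = [52.0, None, 48.0, 55.0, None, 45.0]
clean = impute_mean(raw)     # missing values replaced by the observed mean
scaled = standardize(clean)  # comparable across sensors with different ranges
```

Real pipelines would of course rely on established tooling for these operations; the point of the sketch is only that imputation and scaling are explicit, inspectable steps rather than hidden defaults.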
Indeed, an ITS model can hardly be actionable if it does not exploit interactions among different data sources. Upon their availability, ITS models can be enriched by fusing diverse data sources. A~recent review on different operational aspects of data-driven ITS developments states that these models rarely count on more than one source of data~\cite{lana2018road}. This fact clearly unveils a niche of research when taking into account the increasing availability of data provided by the growing number of sensors, devices and other data capturing mechanisms that are deployed in transportation networks, in~all sorts of vehicles, or even in personal devices held by the infrastructure users. Despite the relative scarcity of contributions dealing with this part of the data-based modeling workflow, the~combination of multiple sources of information has been proven to enrich the model output along different axes, from~accuracy to interpretability~\cite{choi2002data, chang2010intelligent,han2010radar, treiber2011reconstructing}. \subsection{Modeling} Once data are obtained, fused, preprocessed and curated, the~modeling phase implies the extraction of knowledge by constructing a model to characterize the distribution of such data or their evolution in time. The~distillation of such knowledge can be performed for different purposes: to represent unsupervised data in a more valuable manner (as~in e.g.,~clustering or manifold learning), to uncover patterns relating the input data to a set of supervised outputs (correspondingly, classification/regression) aiming to automatically label unseen data observations, to predict future values based on previous values (time series forecasting), or to inspect the output produced by a model when processing input data (simulation). To do so, machine learning algorithms are often put to use in data-based modeling, allowing the modeling process~itself to be automated.
The above purposes can serve as a discrimination criterion for different algorithmic approaches for data-based modeling. However, when the goal is to model data interactions within complex systems such as transportation networks, it is often the case that the modeling choice resorts to ensembles of different learner types. For~instance, when applying regression models for road traffic forecasting, a~first clustering stage is often advisable to unveil typicalities in the historical traffic profiles and to feed them as priors for the subsequent predictive modeling~\cite{vlahogianni2009enhancing, lana2019, liu2019mining}. However, when it comes to model actionability, a~key feature of this stage is the \emph{generalization} of the developed model to unseen data. This~characteristic implies making a model useful beyond the data on which it is trained, which means that model design efforts should not only be put on making the model achieve a marginally superior performance, but also on making it useful in other spatial or temporal circumstances. Achieving good generalization properties for the developed model can be tackled by diverse means, which often depend on the modeling purpose at hand (e.g.,~cross-validation, regularization, or the use of ensembles in predictive modeling). Essentially, the~design goal is to find the trade-off between performance (through representing much of the intrinsic variance of data) and generalization (staying away from a model overfitted to a particular training set). This aspect becomes utterly relevant when data modeling is done on time-varying data produced by dynamic phenomena. ITS are, in~point of fact, complex scenarios subject to strong sources of non-stationarity, thereby calling for an utmost focus on this~aspect. The complexity met in traffic and transportation operations is usually treated with heterogeneous modeling approaches that aim to complement each other to improve accuracy~\cite{moretti2015urban,cong2016traffic,kim2015urban}.
This can be done either by comparing different models and selecting the most appropriate one every time, or by combining different models to produce the final outcome. Additionally, in~some fields of ITS, such~as traffic modeling, physical (namely, theory- or simulation-based) models have been available for decades. Their integration into data-based modeling workflows, considering the knowledge they can provide, can~become crucial for a manifold of purposes, e.g.,~to enforce traffic theory awareness in models learned from ITS data. Indeed, the~hybridization of physical and data-based models has a yet-to-be-developed potential that has only been timidly explored in some recent works~\cite{fusco2015short,montanino2015trajectory,chaulwar2016hybrid}. Interestingly, complex data-driven modeling solutions to transportation phenomena have been numerous and resourceful, ranging from modular structures to model combinations, surrogate modeling~\cite{vlahogianni2015optimization} and so on. Regardless of the approach, the literature emphasizes the critical issue of model hyperparameter optimization using, for example, nature-inspired algorithms, namely Evolutionary Computation or Swarm Intelligence~\cite{cong2016traffic,teodorovic2008swarm}. Assuming that there is a feasible and acceptable solution to the problem of selecting the proposed parameters for a data-driven model, when dealing with complex modeling structures this task should be conducted automatically by optimizing the hyperparameter space, usually based on the models' predictive error. It is worth noting that the~greater the number of models involved, the more difficult the optimization task becomes. Moreover, relying on nature-inspired stochastic approaches, full determinism in the solution and convergence stability cannot be formally guaranteed~\cite{del2019bioinspired}. \subsection{Prescription} Once the modeling phase itself has been completed, the~resulting model faces its application to a real ITS environment.
It is at this stage when actions deriving from the data insights are defined/learned/decided, and~when the actionability of the model can be best assessed. Yet, this~stage is frequently overlooked in most ITS research, where most works conclude by presenting the good performance of a model; it is uncommon to find evaluations of a given model in terms of its final application in a certain environment. Are~the actions that can be taken as a result of the outcome of a data-based model aimed at strategic, tactical or operational decision making? Is the output of the data-based model able to support decisions made by transportation network managers? Can the output be consumed directly without any need for further modeling, or exploited by means of a secondary modeling process aimed at optimizing the decision making process? This latter case can be exemplified, for instance, by the formulation of the decision making process as an optimization problem, in~which actions are represented by the variables compounding a solution to the problem, and~the output of the previous data-based modeling phase can be used to quantitatively estimate the quality or fitness of the solution. One of the most prominent examples of this prescription mode deals with routing problems, since they often use simulation tools or predictive models to assess the travel time, pollutant emissions or any other optimization objective characterizing the fitness of the tested routes~\cite{kumar2012survey,osaba2016improved}.
Other examples of prescription based on data emerge in tactical and strategic planning, such~as the modification of public transportation lines~\cite{mendes2015validating}, the~establishment of special lanes (e.g.,~taxi, bike)~\cite{szeto2015sustainable}, the~improvement of road features~\cite{van2016automatic}, the~adaptive control of traffic signaling~\cite{mannion2016experimental}, the~identification of optimal delivery (or pickup) routes for different kinds of transportation services~\cite{osaba2017discrete}, incident detection and management~\cite{imprialou2013methods}, learning~for automated driving~\cite{yu2019distributed}, or the design of sustainable urban mobility plans based on the current and future demand or the drivers' behavior~\cite{lecue2014star,gindele2015learning}. In any of the above-presented ITS cases, a~data-based model should be equipped with a certain set of features that guarantee its actionability. For~instance, if a traffic manager is not able to interpret a model or understand its outcome in terms of confidence, it~can hardly be applied for practical decision making. When the model is used for adaptive control purposes (as~in automated traffic light scheduling), the~adaptability of the model to contextual changes is a key requirement for prescribed actions to be matched to the current traffic status~\cite{kammoun2014adapt}. Interestingly, some control techniques with a long history in the field (e.g.,~Stochastic Model Predictive Control, SMPC,~\cite{mesbah2016stochastic}) serve as a good example of the triple-play between application requirements, decision making and data-based models. When dealing with the design of control methods in ITS, SMPC has been proven to perform efficiently in highly-complex systems subject to the probabilistic occurrence of uncertainties~\cite{hrovat2012development}.
Specifically, SMPC leverages at its core data-based prediction modeling and low-complexity chance-constrained optimization to deal with control problems that require the method to operate in real time. In~this case, and~in most actionable data-based workflows where decision making is formulated as an optimization problem, we~note a clear entanglement between application requirements (e.g.,~real-time processing), decision making (low-complexity, dynamic optimization techniques) and data-based models (predictive modeling for system dynamics forecasting). \subsection{Adaptation} \label{sec:adapt} Finally, the~proposed actionable data processing workflow considers model adaptation as a processing layer that can be applied over different modeling stages along the pipeline. When models are based on data, they are subject to many kinds of uncertainties and non-stationarities that can affect all stages of the process. Streaming data initially used to build the model can experience long-term drifts (for instance, an increase of the average number of vehicles), sudden changes (a newly available road), or unexpected events (for example, a~public transportation strike)~\cite{buchanan2015traffic, pan2013crowd,davison2006bus}. A~closed lane, a~new tram line, the~opening of a tunnel or simply the opening of a new commercial center, may completely change the way in which network users behave, and~thus, affect the data-based models that are intended to reflect such mobility. Therefore, data-based modeling cannot be conceived as a static design process. This critical adaptation should be considered in all parts of the workflow, and~constantly updated with new data: \begin{enumerate} \item In the preprocessing stage, adaptation could be understood from many perspectives: the incorporation of new sources of data, the~partial or total failure of data-capturing sensors, which lead to an increased need for data fusion, imputation, engineering or~augmentation.
\item In the modeling stage, adaptations could range from model retraining, adaptation to new data or alternative model switching, to the change of the learning algorithm due to a change in the requested system requirements (for instance, in~terms of processing latency or any other performance~indicator). \item In the prescription stage, adaptation is intended to dynamically support decisions accounting for changes in data that propagate to the output of preceding modeling stages. Data-based models can deal with such changes and adapt their output accordingly, yet they are effective only to a point. For~instance, online learning strategies devised to recover from concept drift in data streams can speed up the learning process after the drift occurs (by, e.g.,~diversity induction in ensembles or active learning mechanisms). Unfortunately, even when model adaptation is considered, the performance of the adapted model degrades at different levels after the drift. Extending adaptation to the prescription stage provides an additional capacity of the overall workflow to adapt to changes, leveraging techniques from prescriptive analysis such as dynamic or stochastic optimization. \end{enumerate} Adaptations within the above stages can be observed from two perspectives: automatic adaptations that the system is prepared to do when certain circumstances occur, or~adaptations that are derived from changes that are introduced by the user. Thus, the~adaptation layer is strongly linked to actionability: an ITS model will be more actionable if adaptations, either needed or imposed, are accessible to its final users. For~instance, a~system could be required to introduce a new set of data, and~its impact on all the stages should be controlled by the transportation network manager; or, if a drift is detected, the~system should consider whether it is relevant to inform the user.
\section{Functional Requirements for Model Actionability}\label{requirements} Any data-based modeling process should embrace actionability as its most desirable feature for the engineered model to yield insights of practical value, so that field stakeholders can harness them in their decision making processes. This is certainly the case of ITS, in~which managers, transportation users and policy makers rely on models and research results to make better and more informed decisions. Thus, once the main stages of data-driven modeling have been outlined, this~section places the spotlight on the main functional features required to produce fully-actionable ITS data-based models. These functional requirements, which are shown in Figure \ref{fig:2}, should not be understood as a compulsory list of features, but rather as an enumeration of possibilities to make a model actionable. Not all ITS scenarios requiring actionable data-based models should impose all these requirements, nor can actionability be thought of as a Boolean property. Different loosely defined degrees of actionability may hold depending on the practicality of decisions stemming from the model. \begin{figure}[H] \centering \includegraphics[width=0.80\columnwidth]{PIPELINE2col.pdf} \caption{\vspace{0mm}Functional requirements for actionable data-based models in Intelligent Transportation Systems (ITS). ATIS: Advanced Traveler Information Systems; ATMS: Advanced Transportation Management Systems.} \label{fig:2} \end{figure} \subsection{Usability} The way in which humans interact with information systems has been thoroughly studied in recent decades and formalized under the general \emph{usability} term~\cite{nielsen1994usability}.
Although usability is a feature that can be associated with any system in which there is some kind of interaction with the user, most of its definitions to date gravitate around the design of software systems~\cite{nielsen1994usability2,brooke1996sus, nielsen199510}, which is not necessarily the case of ITS research. Usable designs imply defining a clear purpose for a system, and~helping users make use of it to reach their objectives~\cite{nielsen2003usability}. Within ITS, there are domains where these definitions apply directly~\cite{noy1997human}, such~as vehicle user interfaces~\cite{green1999estimating,green1999navigation,burns2010importance}, the~development of navigation systems~\cite{burnett2000turn}, road~signalization~\cite{dos2017proposal}, or even the way in which public transportation system information is shown to users~\cite{avelar2006design,roberts2016radi}. The aforementioned domains of application, and~mostly any system lying at the core of Advanced Traveler Information Systems (ATIS), have an explicit interaction component. On the other hand, models developed for Advanced Transportation Management Systems (ATMS) are less related to user interaction (beyond the interface design of decision making tools), hence this canonical definition of usability seems to be less applicable. However, the~general concept of usability can also accommodate the notion of \emph{utility} as the quality of a system of being useful for its purpose, or the concept of \textit{effectiveness}, in~regard to how effective the information provided by such systems is~\cite{lyons2001advanced}. Since ITS are systems developed as tools designed to help the different stakeholders that take part in transportation activities, the~actionability of data-based models used for this purpose depends strongly on this general idea of usability~\cite{barfield2014human}. Models' usability is a feature largely disregarded in the literature.
A~clear example of this situation is traffic forecasting, a~preeminent subfield of ATMS, in~which the link between high-end deep learning models and the requirements of road operators for forecasts that support decision making is very weak~\cite{Vlahogianni2014}. Usability may relate to the person who is going to operate the model, and~to the type and complexity of the model, which call for specific skills. Achieving usable ITS models does not entail the same efforts for all ITS subdomains. Thus, while for research contributions related to ATIS there is a clear interest in this matter~\cite{horan2006assessing}, for ATMS developments some extra considerations need to be made. Usability in ITS has, therefore, a~facet oriented towards the user interface, where interfaces reflect at least one of the outputs of an ITS data-based model, and~another facet oriented towards creating models that are more aware of the way their outputs are going to be consumed afterwards by the decision maker. \subsubsection{User Interface} For the first of these facets, Spyridakis~et~al.~\cite{barfield2014human} propose general software usability measuring tools and scales such as the System Usability Scale (SUS)~\cite{brooke1996sus}, ethnographic field studies, or even questionnaires. These basic techniques are also proposed in~\cite{ross2001evaluating} in order to evaluate navigation system interfaces. There are also many other evaluation measures that are more specific to the field, such~as those in~\cite{fischer2002human}, or those defined by public authorities~\cite{dingus1996development}. Some of the main techniques to appraise ITS interface usability are: \begin{itemize}[leftmargin=*] \item Usability techniques: if the output of the developed model is consumed through the use of an interface, common techniques like asking users directly about their experience can be adopted~\cite{ross2001evaluating}.
Among them, SUS surveys are the standard to provide interpretable metrics that can be used for the evaluation of passenger information systems~\cite{beul2014usability} or any other kind of automated traveler information system~\cite{horan2006assessing}. \item Quality of the provided information: in~\cite{lyons2001advanced}, another perspective is proposed, based on estimating the quality of the information provided by the model. Characteristics such as the means to access the information, the~reliability of the information provider, or the awareness of the information availability can be measured to assess the model's usability. \item Transportation-aware strategies: an alternative way to measure usability is to take into account the transportation context and how the use of the model impacts the system. As many of these systems are used during the course of transportation, the~environment must be considered in order to provide an adequate and pertinent output~\cite{dingus1996development}. This particular aspect is discussed below in Section~\ref{sec:appcon}. \item Public transportation guidelines: when ITS developments are intended for the public domain, the inclusion of disadvantaged collectives in the usability evaluation is a must~\cite{fischer2002human}. The~extent to which these concerns are addressed by the ITS solution should not be~disregarded. \end{itemize} \subsubsection{Consumption of the Model's Output} For this second usability facet, there are no scales or measurements in the literature that provide objective (or even subjective) usability assessments, but we propose some angles that should be considered when designing this kind of model: \begin{itemize}[leftmargin=*] \item\textit{Confidence-based outputs:} data-driven models are often subject to stochasticity as a result of their learning procedure or the uncertainty/randomness of their input data (as~especially occurs with crowdsourced and Social Media data).
This randomness imprints a certain degree of uncertainty on their outputs, which can be estimated values, predicted categories, solutions to an optimization problem or the like. Such~outcomes are often assessed in terms of their similarity to a ground truth in order to quantitatively assess the performance of the data-based model. Thus, a~practitioner aiming to make decisions based on the model's output is informed with a nominal performance score (which has been computed over test data), and~the predicted output for a given input. However, when one of such data-based models is intended to work in a real environment, there is no ground truth to evaluate the quality of the result it provides towards making a decision. For~instance, a~predictive model could score high on average as per the errors made during the testing phase. However, predictions produced by the model could be less reliable during peak hours than during the night, being less trustworthy in the former case due to the variability of the data from which it was learned, and/or the model's learning algorithm itself. For~this reason, the~estimation of the confidence of outputs from a data-based model must be analyzed for the sake of its usability. For~example, a~public transportation model that provides outlooks of future demand could be more usable if, besides the estimation itself, some kind of confidence metric were provided. Elaborating on this aspect is not very frequent in academic research, mainly due to the fact that confidence is not always easy to obtain and the estimation procedure is, in~most cases, model-specific, requiring a prior statistical analysis of input data to properly understand their variability and characteristics. Unfortunately, such a confidence analysis is usually left out of the scope of research contributions, which rather focus on finding the best-scoring model for a particular problem.
Exceptions to this scarcity of related works are~\cite{mazloumi2011prediction}, in~which the uncertainty inherent to artificial neural networks is analyzed in a real ITS context; \cite{van2009bayesian}, in~which a committee of different models provides confidence intervals for predictions; or the more recent contribution in~\cite{liu2019dynamic}, which~departs from previous findings in~\cite{tsekeris2009short, khosravi2011prediction} to estimate the uncertainty of traffic demand. This uncertainty estimation is then used as an input to assess the confidence of traffic demand predictions. These few references exemplify good practices that should be universally considered in contributions to appear. \item\textit{Interpretability:} a stream of work has lately concentrated around the noted need for \emph{explaining} how complex models process input data and produce decisions/actions therefrom. Under the so-called XAI (eXplainable Artificial Intelligence) term, a~torrent of techniques has been reported lately to explain the rationale behind traditional black-box models, mainly built for prediction purposes~\cite{gunning2017explainable,arrieta2020explainable}. Nowadays, Deep Learning is arguably the family of data-driven models most targeted by XAI-related studies~\cite{samek2017explainable,ras2018explanation}. The interest of transport researchers in interpretable data-driven models is not new; intuitively, any decision in transportation and traffic operations should be based on a solid understanding of the mechanism by which different factors interact and influence transportation phenomena~\cite{vlahogianni2012modeling}. In~the transportation context, explainability is closely related to integrability when it comes to traffic managers, as ensuring that data-based models can be understood by non-AI experts can make them appropriately trust such models and favor their inclusion in their decisional processes.
When~framed within ITS systems and processes, the~need for explainable data-based models can help decision makers understand how information is processed along the data modeling pipeline, including the quantification of insightful managerial aspects such as the relationship and sensitivity of a predicted output with respect to its inputs. \item\textit{Trade-off between accuracy and usability:} when ITS data-based models aim at superior performances, they often work in ideal scenarios where the real context of application is disregarded; should that context apply in practice, the~claimed suitability of the developed model for its particular purpose could be compromised. For~instance, the~goodness of an ITS model devised to detect users' typical trajectories can be measured with regard to the accuracy of the detected trajectories. If the pursuit of a superb performance relies on a constant stream of data (hence, eventually depleting the user's phone battery), it could be a pointless achievement when put into practice. This particular example has already been considered by plenty of researchers~\cite{thiagarajan2009vtrack,thiagarajan2011probabilistic}. However, there is a long way to go in this aspect, as most ITS research developments consider only ideal circumstances without regarding the implications that an accurate design could have on its final usability. \end{itemize} \subsection{Self-Sustainability} \label{sec:self-sust} In general, the self-sustainability of a model refers to its ability to survive---hence, to continue to be useful---in a dynamic environment. ITS models and developments are usually intended to operate during long periods of time. However, it is widely accepted that traffic and transportation phenomena are strongly dynamic in nature, meaning that these phenomena exhibit long-term trends and evolve in space and time, but also that, upon the occurrence of an unexpected event, they are susceptible to abrupt changes and exhibit long-term memory effects.
For~instance, a~trip information system based on traffic forecasts on a certain part of the network, trained with historical data coming from recurrent traffic conditions, may not be easily transferable to other road networks, nor efficient in the case of a severe disruption in traffic operations (e.g.,~an accident). What is more, if the specific system does not undergo constant training with new data over time, it will eventually fail to operate correctly even for the network location it was originally designed for, due to contextually induced non-stationarities. Thus, an intelligent transportation system developed with data-based approaches should at least follow a set of minimum self-sustainability requirements during the design workflow. To better understand the importance of self-sustainability as a significant aspect of a model's actionability, one should bring to mind the case of cooperative ITS systems (e.g.,~advanced vehicle control systems) and automated driving. In this regard, a~self-sustainable data-based model should bridge the gap between the development of a model prototype and its deployment in a real, potentially non-stationary environment. When an ITS system or model is deployed to operate in changing conditions, self-sustainability involves dealing with the effects of such changes on the learned knowledge. To this end, different strategies and design approaches could be required depending on the nature of the change and its effects on the model. We~next delve into several attributes that are desirable when deploying data-driven systems or models in changing environments, rendering them actionable: \begin{enumerate}[leftmargin=*] \item \textit{Adaptable}: Data-driven models for ITS applications created in controlled conditions, with static, self-contained datasets, can provide great performance metrics, but could also fail if data evolve over time~\cite{geisler2012evaluation}.
Adaptation is the reaction of a system, model~or process to new circumstances, intended to reduce the performance deterioration with respect to what was expected before the change in the environment happened. If~data change over time, and their evolution is neither detected by the model nor adapted to in any way, then the developed model will eventually provide an obsolete output. When these contextual variations occur over data streams and models are learned on-line therefrom (e.g.,~on-line clustering or classification), such variations can imprint changes in the statistical distribution of input and/or output data, making it necessary to update such models to reflect this change in their learned knowledge. This phenomenon is known as \emph{concept drift}~\cite{gama2014survey}, and~has been identified as an active research challenge for most fields connected to machine learning in non-stationary environments~\cite{vzliobaite2016overview}. Many of those fields are already studying this topic, from~spam detection~\cite{delany2005case,mendez2006tracking} to medicine~\cite{stiglic2011interpretability}. There are two main lines related to concept drift: how to detect drift, and~how to adapt to it. Both lines should be scheduled in the research agenda of data-driven ITS, as they have obvious implications when analyzing traffic~\cite{moreira2014improving}. Situations like road works can completely modify traffic profiles over a certain area during a period of time, after which the situation goes back to normal. A~similar situation arises with road design changes (e.g.,~new lanes, transformation of lane types, new accesses, roundabouts, etc.), although in those cases a new stable traffic profile emerges some time after the change. Even without man-made changes, traffic profiles may change for socio-economical reasons~\cite{lana2016role}.
Besides, the analysis of drift can be used to detect anomalies in the normal operation of roads~\cite{moreira2015drift3flow}, or to analyze patterns in maritime traffic flow data~\cite{osekowska2017maritime}. However, the~adaptability of ITS models to evolving data is scarcely found in the literature and, certainly, in~many cases concept drift management is the scope of the work itself, rather than a circumstance that is considered to achieve a greater goal~\cite{moreira2015drift3flow, wibisono2016traffic}. There are, though, some online approaches to typical ITS problems that consider the effects of drift in data~\cite{lana2019, wu2012online, procopio2009learning}, and~we consider that these kinds of initiatives should lead the way for actionable ITS research. \item \textit{Robust}: When an ITS system is deployed in a real-life environment, diverse kinds of setbacks can affect its normal operation, from~power failures that preclude its functioning to the interruption of the input data flow. Robustness is a self-sustainability trait that prevents a system from failing when external disruptions occur. Although in most research-level designs this is not a relevant feature, it is essential for actionable, self-sustainable designs. Robustness, defined as the ability to recover from failures, would have, however, different requirements depending on the criticality of the ITS system. Thus, in~a traffic flow forecasting system robustness could only imply that the system does not crash when input data fail~\cite{zhang2008short}, and~it continues to operate; on~the other hand, for critical systems such as air traffic management, robustness would require additional measures to contain damage~\cite{isaacson2010concept,chen2017air}.
All in all, robust data-based workflows should be able to accommodate unseen operational circumstances, such~as data distribution shifts or unprecedented levels of information uncertainty, which particularly prevail in crowdsourced and Social Media data~\cite{wechsler2019pervasive, adar2007managing}. \item \textit{Stable and resilient}: Actionable systems require a certain output stability in order to be understandable by their users. This notion is apparently opposed to adaptability but, while the latter is the ability to adapt the output to environment or data changes, stability pursues keeping the output statistically bounded even when contextual changes occur, through e.g.,~model adaptation techniques. When adaptation is not perfect and the model violates a given level of statistical stability, another kind of adaptation, namely \emph{resilience}, is required to make the model return to its normal operation and thus minimize the impact of external changes on the quality of its output~\cite{de2017mathematical}. This entails, in~essence, going one step further in the knowledge of the environment and taking into account those circumstances that can affect the system, and~it could be linked to transferable models, which are addressed below. For~instance, a~traffic volume characterization model would be adaptable if it considers the changes inherent to traffic volume (e.g.,~an increase over time due to economic factors), and~it would be stable if a change in weather conditions does not deteriorate its performance; in other words, if it has considered this essential circumstance. These kinds of considerations are almost nonexistent in the literature~\cite{Vlahogianni2014}, yet crucial for a model to be self-sustainable. \item \textit{Scalable}: In the research environment, tests are run at a limited scale, constrained by the size of the data and tailored to the experiments, in~contrast with large, multivariate real environments.
Scaling up is not, of course, a~matter of ITS research, but~an engineering problem. However, models should be designed to be scalable from their~conception. Leaving aside calibration and training phases, classic transportation theories tend in general to be computationally more affordable than data-driven models. However, the~unprecedented amount of computing power available nowadays removes almost any pragmatic limitation due to the computational complexity of learning algorithms in data-based modeling. An exception occurs with models falling within the Deep Learning category which, depending on their architecture and the size of the training data, may require specialized computing hardware such as GPUs or multi-core equipment. Nevertheless, the~rising trend in terms of scalability is to make data-based models incremental and adaptable~\cite{zhang2011data}, which finds its rationale not only in the environmental sustainability of data centers (lower energy consumption and, thereby, carbon footprint), but also in the deployment of scalable model architectures on edge devices, usually with significantly fewer computing resources than data centers. Although some ITS problems are easier to scale and this feature would not be troublesome, there are some fields that can be very sensitive to scalability. For~instance, route~planners frequently consist of shortest-path and traveling-salesman problem implementations whose complexity increases as the number of nodes grows~\cite{colpaert2016impact}. This is a good example where artificial intelligence and optimization tools provide solutions that are actionable in terms of scalability, and~where examples are found effortlessly~\cite{basu2016genetic,schmitt2018experimental}. Caring about aspects like the ease of introducing new variables when needed, the~complexity of tuning where applicable, or the execution time would make a model more actionable by increasing its self-sustainability.
This need for scalability is not just a matter related to the computational complexity of the modeling elements along the pipeline, but also links to the feasibility of migrating the designed models from a lab setup to, e.g.,~a Big Data computing architecture. Unfortunately, few~publications nowadays reflect on whether their proposed data-based workflows can be deployed and run on legacy ITS systems, thereby avoiding costly upgrade investments in computing equipment. \end{enumerate} \subsection{Traffic Theory Awareness} Theoretical representations of traffic attempt to construct (mostly simple) models with causal aspects. These models are usually of a closed form and are frequently dictated by simplifying assumptions, which leads to limited performance when modeling complex spatio-temporal dynamics in the microscopic analysis context. In~these models, data are instrumental to estimate how well they fit real-world conditions. On the other hand, and~since their upsurge in the 80s, data-driven models rely exclusively on the data to extract the dynamics that govern the phenomena. This, at least theoretically, makes them more adaptable and more efficient in complex conditions when compared to theory-based models. However, they can hardly claim applicability in large-scale scenarios (city-level traffic management) due to their significant computational resource requirements. Such data-driven traffic models have been systematically implemented as proofs of concept and are now dominant in the Traffic Engineering literature~\cite{lana2018road}, incorporating most well-known advanced techniques and, in~many cases, ignoring elementary knowledge of traffic and focusing blindly on performance.
Owing to the above, researchers in traffic modeling have diversified the way in which their models are developed and evaluated, fitting them to the technology being introduced, as opposed to fitting the model to the knowledge described in well-established theories of traffic flow. This results in models that are hardly actionable for traffic engineers, in~terms of integration into legacy traffic control and management systems and relevance to the decision making process of road network operators. Besides, there is a lack of standards regarding the data and scenarios used to assess performance, usually due to the differing availability of real data to each researcher. This was already identified in~\cite{Vlahogianni2014}, where test-beds were proposed, either generated anew or adopting some of the existing ones as standards. This would help compare models and understand them better, as they can be evaluated in a known environment, and~obtain insights from them concerning traffic theory. Besides, as we anticipated in Section~\ref{sensing}, there is an industrial trend towards the consideration of different data sources when modeling traffic dynamics. In~many cases, these data sources do not have any straightforward relationship to traffic itself. The~integration of these sources of data, the~models learned from them and theoretical representations of transportation scenarios remains an open challenge that has started to be addressed in the literature~\cite{zhang2012exploring, zhang2016exploratory, zhang2017understanding}. In this line of reasoning, linking data-driven to theory-based models in transportation may result in efficient and physically consistent representations of transportation phenomena.
In~fields like traffic modeling and forecasting, this~hybrid approach makes it possible to consider theoretical aspects of traffic, such~as the relationship among speed, flow and density, the~three phases of traffic~\cite{kerner2004three}, or the Breakdown Minimization Principle~\cite{kerner1999physics} when modeling bottlenecks. The~consideration of these theoretical concepts takes effect mainly in the preprocessing, modeling and prescription phases of the modeling workflow. In~preprocessing, domain knowledge can be crucial for feature engineering, by describing how available features are related to each other, estimating collinearities in advance, deleting irrelevant predictors, or obtaining feature combinations with improved modeling power~\cite{Vlahogianni2014}. Applying traffic theories and principles can also be useful for data augmentation and missing data imputation, by simulating or generating data that are more akin to what the context can provide~\cite{lana2019}. In~the modeling phase, previously defined mathematical frameworks can help define the constraints and operation ranges of data-based models and correct their output, which otherwise does not take into account compliance with well-established theories. Lastly, in~the prescription phase, model outputs can be linked to traffic theory knowledge to improve the way in which they are applied: a~predicted flow value can be more useful if the travel time or the bottleneck probability can be computed afterwards. Furthermore, in~the case of predictive models, they can reach a point in which the provided predictions ultimately affect the future behavior of the models themselves, if they are trained only with observed past data.
For~instance, a~model that assists traffic management decisions, like closing a lane, might lead to a situation that has not been observed by the model before, thus making the knowledge captured by the traffic model obsolete and useless until the data captured from the environment are exploited for retraining. Physical models can be highly useful to anticipate scenarios and complement data-based models, providing additional information on what theories or simulations determine the behavior of the scenario should be. This emergent modeling paradigm is known as Theory-Guided Data Science, and~aims to enhance data-driven models by integrating scientific knowledge~\cite{karpatne2017theory}. The~main objective of this approach is to enable insightful learning, using the theoretic foundations of a specific discipline to tackle the problems of data representativeness, spurious patterns found in datasets, and physically inconsistent solutions. From the algorithmic point of view, this~induction of domain knowledge can be done by assorted means, such~as the use of specially devised regularization terms in predictive models (e.g.,~in the loss function of Deep Learning models), data cleansing strategies that account for known data correlations, or memetic solvers that incorporate local search methods embedding problem-specific heuristics. In~transportation, there have been several examples of theory-enhanced models, ranging from traffic condition identification and characterization~\cite{vlahogianni2007spatio,ramezani2012estimation} to~data-driven and agent-based traffic simulation models for control and management~\mbox{\cite{zhang2011data,chen2010review,montanino2015trajectory,shahrbabaki2018data},} or cooperative intelligent driving services~\cite{mintsis2017evaluation}. Awareness of domain-specific knowledge can also be enforced at the end of the workflow.
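As a minimal sketch of one of the mechanisms mentioned above (a theory-aware regularization term), the toy loss below penalizes traffic flow predictions that violate the fundamental relation $q = k \cdot v$ (flow equals density times speed). All function names, weights and numbers are illustrative and not drawn from any specific model in the cited works.

```python
def theory_guided_loss(pred_flow, obs_flow, pred_speed, pred_density, lam=0.1):
    """Data-fit term plus a penalty for violating q = k * v.

    `lam` weighs how strongly physical consistency is enforced.
    Inputs are plain lists of per-sample predictions/observations.
    """
    n = len(obs_flow)
    data_term = sum((q - o) ** 2 for q, o in zip(pred_flow, obs_flow)) / n
    physics_term = sum((q - k * v) ** 2
                       for q, k, v in zip(pred_flow, pred_density, pred_speed)) / n
    return data_term + lam * physics_term


# Predictions consistent with q = k * v are penalized only by their data error:
loss = theory_guided_loss(pred_flow=[1800, 1500], obs_flow=[1750, 1520],
                          pred_speed=[60, 50], pred_density=[30, 30])
# -> 1450.0 (the physics term vanishes, since 30*60 = 1800 and 30*50 = 1500)
```

In a Deep Learning setting, the same idea would appear as an extra term added to the training loss, so that gradient-based training trades off data fit against physical plausibility.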
When decision making is formulated as an optimization problem, the~family of optimization strategies known as Memetic Computing~\cite{gupta2018memetic,neri2012memetic} has been used for years to combine global search techniques with low-level local search heuristics. These heuristics can be driven by intuition when tackling the optimization problem at hand or, more suitably for actionability purposes, by a priori knowledge about the decision making process gained as a result of human experience or prevailing theories. For~instance, traffic management under incidents in the road network can largely benefit from the human knowledge acquired over the years by the manager in charge, since this knowledge may embed features of the traffic dynamics that are not easily observable from historical data. This knowledge can be inserted in an optimization algorithm devised to decide, e.g.,~which lanes should be rerouted after an accident. \subsection{Application Context Awareness} \label{sec:appcon} Transportation is exceptionally diverse around the world, with notable differences in modes, preferences and availability due to social, economic and cultural disparity. Moreover, Intelligent Transportation Systems with different purposes also have characteristic requirements that can be very divergent with respect to space and time. To address this landscape of complex and sometimes conflicting goals, policies and decision making should span from a few seconds (traffic management and control) to years (the planning and design of new systems). It is strongly argued that data-driven frameworks are able to cope with context-aware datasets, due to their inherent capability of learning patterns hidden in rich data and reconstructing---in a sense---the context of the application.
Typical examples of such context-aware systems are the extraction of Origin-Destination matrices from cellphone-based data~\cite{gonzalez2008understanding}, the~mobility applications that aim to improve the mobility footprint of users~\cite{chatterjee2018type2motion}, as well as smartphone-based driving insurance systems~\cite{tselentis2017innovative}. Although these approaches seem to be appropriate to complement the user's or system's experience on a problem, significant uncertainty lies in their transferability and accuracy, owing to the lack of context-aware knowledge. A certain degree of awareness of the context should be a matter of concern when developing ITS models that intend to be actionable. Context-aware information is usually introduced in the modeling, for example by accounting for the demographic characteristics of the application area, the~type of road or network, the~mode, the~travel purpose, etc. However, what is usually disregarded is a much broader consideration of the operational and system characteristics, such~as how models can be introduced into the operations at hand, what the privacy concerns are with respect to data and information flows, and what the regulatory framework, policy-level restrictions and goals to be reached are. First, within the operation, the~deployment context where a developed model is intended to be implemented can enforce a series of operative constraints. Creating and proposing an ITS model without observing these requirements is an exercise in futility, for its lack of actionability. From this operation perspective, the~context spans from deployment and operation costs---is the system cost-efficient considering its potential service?---to functioning modes---does the model have the expected response times? Can it operate in reduced computational power environments?
As an illustration, a~system designed to detect and identify pedestrians can be very effective in terms of performance, but if it does not operate at an appropriate speed, or it requires demanding computations that cannot be carried on board a vehicle, it is useless in an autonomous driving context~\cite{andreasson2015autonomous}. A~similar reasoning holds if by \emph{operation cost} one thinks about the energy consumption of the model at hand. Questions such as whether the energy consumed by the model is compliant with the system's budget should be kept in mind at design time, but also from the academic perspective, where efforts should be directed to the development of models that are consistent with the actual operative constraints. Second, regulations constitute a hard and highly contextual constraint in the implementation of ITS. Besides the wide regulatory differences that can be found across regions, there are transport frameworks where regulations are especially rigid. A~typical example is the case of airports~\cite{kulik2016intelligent}, where there is a broad field for specialized ITS. Another example is the constantly rising use of drone systems to monitor traffic~\cite{barmpounakis2016unmanned}. Models that fail to relate to the application's regulatory environment are not actionable. Third, data privacy and sovereignty constitute a growing concern in a connected world where, after a decade of handing over data with complacency, an awareness about personal information sharing is springing up. A~recent example is the introduction of the EU General Data Protection Regulation (GDPR) framework, which deeply changed the manner in which data are fed to models, as well as data availability. ITS models that are based on personal data are common nowadays, for instance in floating car data based developments~\cite{lin2014mining}.
However, there are fields where this aspect is becoming crucial (autonomous driving connectivity~\cite{khodaei2015key}, security in public transport environments~\cite{menouar2017uav}), and~research is steering towards privacy-preserving approaches~\cite{sucasas2016autonomous}, spheres where technologies such as Blockchain can play a dominant role~\cite{yuan2016towards,lei2017blockchain}. Fourth, social aspects of the application play a major role in modeling. Social transportation is the subfield of ITS where the ``social'' information coming from mobile devices, wearable devices and social media is used for a number of ITS management related applications~\cite{zheng2015big}. The~outcomes of social transportation may be, to name a few, traffic analysis and forecasting~\cite{he2013improving,ni2016forecasting}, transportation-based social media~\cite{evans2012microparticipation}, transportation knowledge automation in the form of recommender systems and decision support systems~\cite{kuflik2017automating}, and~services for the collection of further signals to be used later for the aforementioned purposes or others. However, cultural differences can have a relevant impact on how these systems operate, as social data are most commonly strongly linked to geographical information. This is a key aspect for their actionability. Fifth, transportation is currently a large source of greenhouse-gas emissions~\cite{woodcock2009public}.
These concerns are gaining momentum in a wide range of ITS applications, such~as the discovery of parking spots~\cite{chen2013development}, multimodality applications that grant travelers the chance of using collective transportation systems efficiently and conveniently~\cite{kramers2014designing}, the~improvement of logistics operations~\cite{zhang2015swarm}, shared mobility applications, which help reduce the number of one-passenger vehicles in the road network~\cite{feigon2016shared}, or driving analytics to improve safety and ecological footprint~\cite{vlahogianni2017driving, huang2018eco, adamidis2019impacts}. Of course, research goes beyond the application context and does not always need to be connected to a certain application scenario. A~prototype can be far from the practical requirements of its eventual deployment; still, knowing the essential application common grounds is key to converging towards actionable models. Unfortunately, this~is a matter frequently disregarded in ITS research. \subsection{Transferability} Within the research context, it is common to employ test data to assess the models. Regardless of whether these data are obtained from real sources or synthetically generated, the~resulting models have been built around them, and~can be heavily linked to that experimentation context. Would these models work in another context or with other input data? Transferability could be defined as the quality of a data-driven model to be applied in another environment with other data, and~it is directly linked with actionability: the application of a model should be generalizable to different datasets and transportation settings. This definition stems from the more general concept of \emph{Transfer Learning}~\cite{pan2009survey}, which entails that models trained in a certain domain are applied to other domains, so that the previous knowledge obtained from the former makes them perform better in the latter than models without it.
Depending on the subcategory of ITS, this~requirement can be easily met or arduous to achieve, as some subcategories are more oriented towards the application and rely less on the environment than others; the key is defining what the \emph{environment} is. For~example, a~travel time forecasting model developed with data of a certain location could be transferable to another location without great complications, if it is built considering this feature~\cite{bajwa2005performance}. In~fact, many spatially sensitive ITS models are developed using real data, but within the experimentation context they are evaluated only in certain locations. Transferability for these scenarios would imply that the obtained results are reproducible (with a certain degree of tolerance) in other locations. This could range from plainly extrapolating the model to other locations~\cite{getachew2007simplified}, to implementing techniques such as \emph{soft-sensing}, aimed at modeling situations where no sensor is available~\cite{habibzadeh2018soft} and~the environment information is enough to obtain these models. A~similar case in terms of spatial contexts, but with more parameter complexity, requires plenty of information about the environment. As an illustration, the~case of crash risk estimation implies higher calibration and adjustment needs due to the higher number of parameters that take part in this type of estimation. In~these circumstances, works such as~\cite{shew2013transferability} or~\cite{xu2014using} rely on posterior probability models and give more relevance to models that maintain a certain performance in many contexts than to models that perform better in a particular location. On the other extreme, for cases like autonomous driving, the~change of environment is connatural to the domain (a moving vehicle constantly changing its location), and~the parameters of these models are abundant and highly variable.
Thus, these applications need transferable solutions, a~transferability that is specifically sought by researchers, for instance in LIDAR-based localization~\cite{ibisch2013towards} or pedestrian motion estimation~\cite{shen2018transferable}. In~any case, and~regardless of the domain, ITS research is at an incipient stage (probably with the exception of autonomous driving) of developing and evaluating transferable models, and~some machine learning paradigms can help improve this characteristic. \section{Emerging AI Areas towards Actionable ITS}\label{challenges} We have hitherto elaborated on the requisites that a model should meet to lead to actionable data-based insights in ITS applications and processes. Some of these requirements can be fulfilled by properly designing the data-based workflow (e.g.,~interpretability can be straightforward for certain prediction models, whereas adaptability can be enforced by periodically scheduling the learning algorithm under use and feeding it with new data). However, several research areas have emerged in the last years from the wide fields of Data Science and Artificial Intelligence that may serve to catalyze the compliance of data-based ITS workflows with the prescribed requisites, and~thereby attain the sought actionability of their produced insights. The main AI areas that have been identified as potentially appropriate for addressing the requirements can be summarized briefly as follows: \begin{itemize}[leftmargin=*] \item Real-time data processing and online learning, which are not brand new research avenues in ITS, as we can find advanced developments in the literature. However, as we will later show, emerging fields with great potential such as dynamic data fusion and dynamic optimization can expedite and proliferate the adoption of incremental data-based models in more ITS-related applications.
\item Transfer learning and domain adaptation, which could allow developing models for certain contexts and exporting them to others, linking directly to the transferability requirement, but also to the integration of transportation theories and physical models into data-based models. \item Gray-box modeling, a~paradigm halfway between white-box (physical) and black-box (data-based) models. Gray-box modeling represents a promising area to bring awareness of traffic theory and other physical modeling when developing data-based models, with the potential to increase the performance, usability and comprehensibility of the latter. \item Green AI, a~trend in Artificial Intelligence research that connects directly with energy and cost efficiency. Developing efficient models has a relevant impact on their sustainability and context awareness. \item Fairness, Accountability, Transparency and Ethics: Data-based models---especially those learning from large amounts of diverse data from many sources---are fragile to biases, and~can compromise aspects such as the fairness of decisions or the differential privacy of data. In~this context of growing sources of data, including those gathered from people, and~increasingly opaque data-based models, it has become essential to understand what models have learned from data, and~to analyze them beyond their predictive performance to consider ethical, societal and legal aspects. These aspects have been scarcely considered in ITS research. \item Other Artificial Intelligence areas such as imbalanced learning, reinforcement learning and adversarial machine learning, which are later highlighted for their noted relevance in ITS. \end{itemize} We next discuss the research opportunities spurred by the above research lines, their connections with the requirements presented in Section~\ref{requirements} (shown in Figure \ref{fig:3}), as well as the challenges that stem from the consideration of these AI areas in the context of ITS.
\begin{figure}[H] \includegraphics[width=.95\columnwidth]{PIPELINE3col.pdf} \caption{Schematic diagram showing how avant-garde AI subareas can promote actionability in ITS data-based modeling workflows. Subareas contributing with particular emphasis to different functional requirements are connected together along the way from data to actions.} \label{fig:3} \end{figure} \subsection{Online Learning and Dynamic Data Fusion/Optimization} Previously sketched in Section~\ref{sec:self-sust}, by online learning we refer to the capability of the learning model and, in general, of the entire workflow to learn from fast-arriving data possibly produced by non-stationary phenomena, which enforces a need for adapting the knowledge captured by the model along time. Changes over data streams can make the data pipeline obsolete, thus demanding active or passive techniques to update it with the characteristics of the stream~\cite{ditzler2015learning,gama2014survey}. Although activity around online learning has mostly revolved around certain clustering and classification paradigms (the latter giving rise to the so-called concept drift term to refer to pattern changes), it is important to note that adaptation can also be needed in other stages of the actionable data-based workflow, from~data fusion to the prescription of actions. This~being said, research areas such as dynamic optimization and dynamic multi-sensor data fusion should also be investigated in depth in future studies related to actionable data-based models, especially when the scenario under analysis can produce information with non-stationary statistical characteristics. When merging different data sources, fusion strategies at different levels can be designed and implemented, from~traditional means (data-level fusion, knowledge-level fusion) to modern methods (respectively, model-based fusion, federated learning or multiview learning)~\cite{smirnov2019knowledge,wang2019data}.
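As a minimal illustration of the incremental, drift-aware learning discussed above, the following stdlib-only Python sketch tracks a travel-time signal and shortens its effective memory when the instantaneous error spikes. All numbers are synthetic, and the drift heuristic is a deliberately simplified stand-in for the detection schemes surveyed in the cited works.

```python
# A drift-aware incremental estimator (toy sketch; synthetic data).

class DriftAwareMean:
    """Exponentially weighted mean whose forgetting factor reacts to error."""

    def __init__(self, base_alpha=0.05, boost_alpha=0.5, drift_ratio=3.0):
        self.base_alpha = base_alpha    # slow adaptation in stable regimes
        self.boost_alpha = boost_alpha  # fast adaptation right after a drift
        self.drift_ratio = drift_ratio  # error ratio that signals a drift
        self.mean = None                # current estimate
        self.avg_err = 0.0              # running absolute error

    def update(self, x):
        if self.mean is None:
            self.mean = x
            return self.mean
        err = abs(x - self.mean)
        # Flag drift when the instantaneous error dwarfs the running error.
        drifting = self.avg_err > 1e-9 and err > self.drift_ratio * self.avg_err
        alpha = self.boost_alpha if drifting else self.base_alpha
        self.mean += alpha * (x - self.mean)
        self.avg_err += 0.1 * (err - self.avg_err)
        return self.mean

# Travel times shift abruptly (e.g., an incident closes a lane): after the
# change, the estimator quickly re-converges towards the new regime (about 20).
est = DriftAwareMean()
for t in [10.0] * 50 + [20.0] * 50:   # minutes; abrupt regime change
    est.update(t)
```

The same adapt-on-error pattern extends to richer incremental models, e.g.,~updating regression weights instead of a scalar mean.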
Fusion of correlated data sources can compensate for missing entries or noisy instances in static environments. However, when data evolve over time as a result of their non-stationarity, new challenges may arise in regards to the inconsistency among multiple information sources, including measurement discrepancy, inconsistent spatial and temporal resolutions, or the timeliness/obsolescence of the data flows to be merged, among other issues. For~this reason, close attention should be paid to advances reported around adaptive fusion methods capable of detecting, counteracting and correcting misalignments between data flows that occur and evolve over time. This~branch of dynamic data fusion schemes aims at combining together information flows produced by non-stationary sources, synthesizing a representation of the recent history of each of the flows to be merged into a set of more coherent, useful data inputs for the rest of the data-based pipeline~\cite{khaleghi2013multisensor,ramachandran2006dynamic}. On the other hand, dynamic optimization techniques can efficiently deliver optimized actionable policies when the objectives and/or constraints of the underlying optimization problem vary~\cite{nguyen2012evolutionary,mavrovouniotis2017survey}. We~energetically advocate for a widespread embrace of advances in these fields by the ITS community, emphasizing those scenarios whose dynamic nature can make the obtained actionable insights eventually obsolete. This is the case, for instance, of traffic related modeling problems (e.g.,~traffic flow forecasting and optimal routing) or driver characterization for consumption minimization, among many others. Other requirements for actionability can also benefit from the adoption of the above models in dynamic ITS contexts. For~instance, cost efficiency in terms of energy consumption can largely harness the incrementality that often characterizes an online learning model.
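A toy sketch of such adaptive fusion follows (all readings are synthetic): each sensor's noise variance is re-estimated online from its residual against the fused value, and readings are combined with inverse-variance weights, so a sensor that drifts away is progressively down-weighted.

```python
# Adaptive inverse-variance fusion of redundant sensors (toy sketch).

class AdaptiveFuser:
    def __init__(self, n_sensors, alpha=0.1, floor=1e-6):
        self.alpha = alpha            # forgetting factor for variance tracking
        self.floor = floor            # keep variances strictly positive
        self.var = [1.0] * n_sensors  # running noise-variance estimates
        self.fused = None

    def step(self, readings):
        # Inverse-variance weighting is optimal for independent Gaussian noise.
        w = [1.0 / v for v in self.var]
        self.fused = sum(wi * r for wi, r in zip(w, readings)) / sum(w)
        # Update each sensor's variance from its residual w.r.t. the fused value.
        for i, r in enumerate(readings):
            resid2 = (r - self.fused) ** 2
            self.var[i] = max(self.var[i] + self.alpha * (resid2 - self.var[i]),
                              self.floor)
        return self.fused

# Two loop detectors stay accurate while a third drifts away from the true
# speed (60 km/h); the fused estimate remains anchored near 60.
fuser = AdaptiveFuser(n_sensors=3)
for k in range(200):
    fuser.step([60.0, 60.0, 60.0 + 0.1 * k])
```

Production-grade schemes (e.g.,~adaptive Kalman filtering) refine the same principle with explicit process models and cross-correlations.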
The~use of dynamic data fusion can also yield a drastically lower usage of communication resources in wireless V2V links, such~as those established in cooperative driving scenarios. All in all, the~recent literature poses no question around the relevance of adaptation in the data-based modeling exercises noted in this work, with an increasing volume of contributions dealing with the extrapolation of adaptation mechanisms to ITS problems~\cite{chang2009online,saadallah2018bright,moreira2013predicting}. \subsection{Transfer Learning and Domain Adaptation} \label{sec:tfada} Closely related in semantics to its corresponding actionability requirement (\emph{transferability}), transfer learning aims at deriving novel means to export the knowledge captured by a data-based model for a given task to another task with different inputs and/or outputs~\cite{pan2009survey}. Depending on the amount of alikeness between the origin task and the destination task, we~may also be referring to \emph{domain adaptation}, by which we adapt the model built to perform a certain task to make it generalize better when processing new unseen inputs that do not follow the same distribution as their original counterparts (only the distribution changes~\cite{sun2015survey}). Techniques such as subspace mapping, representation learning, or feature weighting arise as the methods most used to allow knowledge to be transferred between data-based models used for prediction. In essence, transfer learning can provide higher prediction accuracy for models whose number of parameters to be learned (e.g.,~weights in a Neural Network) demands higher amounts of labeled data than those available in practice. However, data augmentation is not the only goal targeted by transfer learning. Domain adaptation may yield a better performance when used between ITS models that can become severely affected by a lack of calibration, different configurations or diverging specifications.
An immediate example illustrating this hypothesis is the use of camera sensors for vehicular perception. Models trained to detect and identify objects in the surroundings of the vehicle can fail if the images provided as their inputs are produced by image sensors with new specs. The~same holds for car engine prognosis: replaced components can make a data-based characterization of the normal operation of the engine be of no practical use unless a domain adaptation mechanism is applied. Personalization of ITS services can be another problem where domain adaptation can help refine a model trained with data from many sources: a clear example springs from naturalistic driving, where a behavioral characterization model built at first instance from driving data produced by many individuals (source domain) can be progressively specialized to the particular driver of the car where it is deployed~\cite{ou2018transfer,xing2019driver,xing2018end}. In regards to actionability, several functional requisites can be approached by using elements from Transfer Learning over the data-based pipeline. To begin with, it should be clear that the transferability of learned models for their deployment in different locations and contexts could be vastly improved by Transfer Learning, as the purpose of this AI branch is indeed to meet this requirement in data-based learning models. In~fact, this~approach is currently under study and wide adoption within the ITS community working on vehicular perception: when the capability of the vehicle to sense and identify its surrounding hinges on learning models (e.g.,~Deep Learning for image segmentation with cameras), a~plethora of contributions depart from pretrained models, which are later particularized for the problem/scenario at hand~\cite{ye2018machine}. 
This exemplified use case supports our advocacy for further efforts to incorporate transfer learning methods in other ITS applications, especially those where data collection and supervision are not straightforward to achieve in practice. Another functional requirement where Transfer Learning can make a difference in ITS developments to come is cost efficiency. The~knowledge transferred between models learned from different contexts can improve their performance, thereby reducing the need for supervising data instances and ultimately, the~time, costs and resources required to perform the data annotation. Finally, the~more recent paradigm coined as Federated Learning refers to the privacy-preserving exchange of captured knowledge among models deployed in different contexts~\cite{konevcny2016federated,mcmahan2017communication}. Although the main motivation for the initial inception of Federated Learning targeted the mobile sector, techniques supporting the federation of distributed data-based models can be of utmost importance in the future of ITS, especially for V2V communications among autonomous vehicles and in-vehicle ATIS systems. Definitely, the enrichment of models with global knowledge about the data-based task(s) at hand will pose a differential breakthrough in vehicular safety and driving experience. For~instance, federated models can collectively identify, assess and countermeasure the risk of more complex vehicular scenarios than each of them in isolation~\cite{ferdowsi2019deep}. Likewise, ATIS systems can learn from the preferences and habits of other users to better anticipate the preferences of the driver and act accordingly~\cite{vogel2018emotion}. In~a few words: an enhanced and more effective actionability of the data-based workflows built to undertake such tasks.
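As a deliberately simple, one-dimensional illustration of domain adaptation (all speeds and the threshold ``model'' below are hypothetical), the following sketch aligns the mean and standard deviation of a target road's speeds with those of the source road on which a classifier was fitted; CORAL-style methods generalize this idea to full covariance structures.

```python
# 1-D statistics alignment between a source and a target domain (toy sketch).

def mean_std(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v ** 0.5

def align(target_xs, source_stats, target_stats):
    sm, ss = source_stats
    tm, ts = target_stats
    # Whiten with target statistics, then re-color with source statistics.
    return [(x - tm) / ts * ss + sm for x in target_xs]

# Source domain: speeds (km/h) on the road where the "model" was fitted.
source = [30, 35, 40, 60, 65, 70]

def congested(speed_kmh):          # trivial classifier fitted on the source
    return speed_kmh < 50.0

# Target domain: a faster road; raw speeds would all look "not congested".
target = [60, 70, 80, 120, 130, 140]
adapted = align(target, mean_std(source), mean_std(target))
labels = [congested(x) for x in adapted]   # recovers both traffic regimes
```

Without the alignment step, every raw target speed exceeds the source-domain threshold and the classifier degenerates to a single answer; adapting the statistics lets the unchanged model separate the two regimes again.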
\subsection{Gray-Box Modeling} Gray-box modeling refers to the design of models that combine theoretical developments and structures related to the problem with data that serve as a complement for such theories, making the overall model better match the scenario under analysis~\cite{kroll2000grey,oussar2001gray}. Gray-box models lie in between white-box models, for which the learned structure is deterministic and grounded in theoretical concepts, and black-box models, whose~internal structure lacks physical significance and is learned from data. An example of a white-box model in ITS is the use of computational fluid dynamics for macroscopic traffic flow modeling, whereas Deep Learning models for traffic forecasting can exemplify black-box modeling in this domain. Gray-box models have been lately embraced by the ITS community in a number of modeling scenarios, such~as those combining biological concepts and data-based models for driver characterization~\cite{inga2015gray,flad2017cooperative}. Gray-box modeling can contribute to the actionability of data-based workflows for ITS applications in two different albeit interconnected directions. To begin with, the~incorporation of theoretical models into data-based pipelines can narrow the gap between engineers and practitioners more acquainted with traditional tools to analyze ITS systems and processes. Indeed, hybrid modeling can tie both worlds together, not only without questioning the validity of prevalent theoretical developments, but also evincing the complementarity and synergy of both approaches. On the other hand, using validated theoretical models can help data-based modeling overcome difficult learning contexts such as class imbalance, outlier characterization or the partial interpretability of data clusters, among others.
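The idea can be condensed into a toy Python sketch (all link parameters and observations are hypothetical): the white-box part is a free-flow travel-time formula, and the black-box part is a least-squares residual correction, so the data only have to explain what the physics misses.

```python
# Gray-box travel-time model: physics baseline plus a learned residual (toy).

def physical_tt(length_km, speed_limit_kmh):
    """White-box part: free-flow travel time in minutes."""
    return length_km / speed_limit_kmh * 60.0

def fit_residual_slope(flows, residuals):
    """Black-box part: least-squares slope through the origin."""
    num = sum(f * r for f, r in zip(flows, residuals))
    den = sum(f * f for f in flows)
    return num / den

# Observations on a 10 km link with a 100 km/h limit (synthetic data).
flows = [200, 400, 600, 800]          # veh/h
observed_tt = [6.2, 6.4, 6.6, 6.8]    # minutes
base = physical_tt(10.0, 100.0)       # 6.0 minutes at free flow
beta = fit_residual_slope(flows, [t - base for t in observed_tt])

def graybox_tt(flow):
    """Physics anchors the prediction; data adjust it for congestion."""
    return base + beta * flow
```

Because the physics fixes the intercept, the data-driven part stays interpretable (a single congestion-sensitivity coefficient) and extrapolates more gracefully than a pure black box would.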
\subsection{Green Artificial Intelligence} A profitable strand of literature has recently stressed the energy efficiency of data-based models, highlighting the need for redesigning their learning algorithms to minimize their energy consumption and thereby make them implementable and usable in practice~\cite{mittal2016survey,alwadi2015energy,han2013approximate}. While this issue is particularly relevant for resource-constrained devices (e.g.,~mobile hand-helds), the~concern with energy efficiency goes beyond usability towards environmental friendliness. For~this reason, many recent contributions are striving for computationally lightweight variants of machine learning models that sacrifice performance for a notable reduction of their energy demand. This is not only the case of predictive models capable of incrementally learning from data, but also of specific Deep Learning architectures tailored for their deployment on embedded devices~\cite{lane2015can}. Based on the above rationale, cost efficiency is arguably the most evident functional requirement around which energy-aware model designs can pose a breakthrough towards improving the actionability of the overall data-based workflow. In~addition, other aspects can be made more actionable by using energy-aware model designs, such~as usability~\cite{faisal2015towards}. Despite achieving unprecedented levels of predictive accuracy, a~data-based workflow may become useless should it deplete the battery of the system on which it is deployed for operation. Therefore, energy efficiency should be under the target of future research efforts, especially when dealing with ITS applications running on battery-powered devices, inspecting interesting paths rooted thereon such as the trade-off between performance and energy consumption, or the adaptation of the model's operation regime depending on the remaining battery life, among others~\cite{zliobaite2015towards}.
\subsection{Fairness, Accountability, Transparency and Ethics} Lastly, the~prescription of actions based on the insights provided by a data-based pipeline must be buttressed by a thorough understanding of the mechanisms behind its provided decisions~\cite{1910.10045}. Extended information about the model must be presented to the end user for several reasons: \begin{itemize}[leftmargin=*] \item To gauge as many consequences of the actions as possible, identifying situations where decision making based on the outputs of the data-based workflow gives rise to socially unfair scenarios due to the propagation of inadvertently encoded bias to the automated decisions of the model. \item To assure the user that the output of the model is reliable and invariant under the same data stimuli, maintaining a record of the intermediate decisions made along the pipeline, allowing for the post-mortem, potentially corrective analysis of bad decision paths, and~thereby maximizing the trust and certainty of the user when embracing its~output. \item To make the user understand why the developed model produces its prescriptive output when fed with a set of data inputs, shedding light on which inputs correlate more significantly with the prescribed actions, tracing back causal relations between intermediate data inputs, and~discriminating extreme cases where decisions can change radically under slight modifications of the model inputs. \item To supervise the ethics of data-based workflows, identifying potentially illegal uses of unlawful data given the prevailing legislation, guaranteeing the privacy and governance of personal data by third-party data-based ITS applications and processes, and~certifying that the model's output does not favor inequalities in terms of gender, religion, race or any other aspect alike.
\end{itemize} The above requirements have been lately collectively compiled under the FATE (Fairness, Accountability, Transparency and Ethics) concept, which refers to the design of actionable data-based pipelines whose internal operations can be explained, accounted for and critically examined in regards to the consequences of their eventual bias in privacy, fairness and ethical issues~\cite{martin2018ethical,veale2017fairer,stoyanovich2017fides}. This recent concern with the operation of machine learning models stems from the proliferation of real cases where practical model deployments have unveiled deficiencies of different kinds, from~differential privacy breaches (data revealing the identity of the persons to whom they belong) to unnoticed output bias that caused racially discriminatory outcomes~\cite{whittaker2018ai}. For~instance, data-based models for vehicular perception, obstacle detection and avoidance must also be endowed with ethical and legal design factors so that the overall decision is not solely driven by the data themselves. Another clear domain where FATE can be crucial is modeling with crowd-sourced Big Data, where~aspects like privacy preservation~\cite{victor2016privacy} and bias avoidance~\cite{rashidi2017exploring} are arguably more critical~\cite{boyd2012critical,chen2017traffic}. The~construction of the data-based modeling workflow must (i)~ensure that protected features remain as such once the workflow has been built, without any chance for reverse engineering (via e.g.,~XAI techniques~\cite{arrieta2020explainable}) that could compromise the differential privacy of data; and (ii)~that learning algorithms along the workflow counteract hidden bias in data that could eventually lead to discriminatory decisions (due to skewed samples, tainted annotation, limited data sizes or imbalanced data).
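To make the bias concern concrete, a fairness audit can start from simple group-parity metrics. The following sketch (synthetic decisions, hypothetical groups) computes the demographic parity difference, i.e.,~the gap in favorable-decision rates between two groups defined by a protected feature.

```python
# Demographic parity difference between two protected groups (toy sketch).

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_diff(decisions_a, decisions_b):
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# 1 = favorable decision (e.g., ride request accepted), 0 = rejected.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% acceptance rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% acceptance rate
gap = demographic_parity_diff(group_a, group_b)   # a large gap flags bias
```

Metrics like this one only flag disparities; whether a gap is legitimate or discriminatory still requires the accountability and transparency analyses discussed above.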
From our perspective, these are among the most concerning challenges in the exploitation of Big Data in ITS, and~the main source of motivation for a number of recent studies in areas related to data-driven transportation systems such as pedestrian detection~\cite{wilson2019predictive}, autonomous vehicles~\cite{lim2019algorithmic,bigman2020life} or urban computing~\cite{fu2019batman}. Bias-related issues can be identified by a proper analysis of the decisions made by the workflow, which in turn requires models to be accountable and transparent enough to thoroughly characterize their sensitivity to bias, and~how inputs and outputs (decisions) correlate in regards to protected features. It is also worth noting that several proposals have been made to quantify fairness in machine learning pipelines, yielding useful metrics that account for the parity of models when processing groups of inputs~\cite{leben2020normative,verma2018fairness}. Without these aspects being considered jointly with performance measures, data-based ITS developments in years to come are at risk of being restricted to the academic playground~\cite{zook2017ten}. \subsection{Other AI Research Areas Connected to Actionability} The above areas have been highlighted as the main propellers of model actionability in ITS systems. However, it is worthwhile to mention other research areas from the AI realm that can also help complete the chain from data to actions: \begin{itemize}[leftmargin=*] \item Few-shot learning~\cite{fei2006one}, which aims at overcoming the lack of reliably annotated data and the practical difficulty of performing annotation in certain application scenarios. For~instance, accident prevention models cannot be enriched with positive samples unless a fatality occurs and the data captured in place are fed into the model.
Few-shot learning and related subareas (zero-shot, one-shot) derive solutions that can automatically learn from very small amounts of training data, incorporating mechanisms (e.g.,~generative models, regularization techniques, guided simulation) to prevent the overall model from overfitting~\cite{1904.05046}. In~regards to actionability, this~family of learning techniques can be helpful to make data-based ITS models deployable in situations lacking data supervision, especially when such data annotation cannot be guaranteed to be achievable over time. \item Imbalanced and cost-sensitive learning~\cite{krawczyk2016learning,branco2016survey}, which link to the need for avoiding model bias, not only to ensure the generalization of its output, but also to reduce the likelihood of the workflow causing discriminatory issues such as the ones exemplified above. The~history of these AI areas in the ITS community has been going on for years now~\cite{zhang2011data}. However, we~here emphasize the crucial role of these techniques beyond performance boosting: the techniques originally aimed to counteract the effects of class imbalance in the output of data-based models could also be leveraged to reflect legal impositions that do not necessarily relate to the model's performance. Indeed, the~lack of compliance of the model with fairness and ethics standards does not necessarily render a performance degradation observed at its output, nor can it be inferred easily from the available data. \item Hybrid models encompassing linguistic rules and data-based learning techniques, capable of supporting the transition from the traditional way of doing things to the new data-based modeling era in the management of ITS systems.
We~foresee that the community will witness a renaissance of data mining methods incorporating approaches such as fuzzy logic, not only to embed human knowledge in decision workflows, but also to explain and describe the internal structure of learned models, as is currently under investigation in many contributions under the XAI umbrella~\cite{fernandez2019evolutionary,mencar2018paving}. \item New prescriptive data-based techniques such as Deep Reinforcement Learning~\cite{1701.07274} and Algorithmic Game Theory~\cite{nisan2007algorithmic} will also attract interest in the near future for their close connection to actionable data science. The~interaction of data-based workflows with humans will require techniques capable of learning actions from experience, and~eventually orchestrating the interaction and negotiation among users when their actions are governed by interrelated yet conflicting objectives. In~fact, such new prescriptive elements are progressively entering the literature in certain ITS applications that target machine autonomy (e.g.,~autonomous vehicles~\cite{sallab2017deep,ruch2019value} or automated signaling~\cite{mannion2016experimental}), but it is our vision that they will gain momentum in many other ITS setups. \item Privacy-preserving Data Mining~\cite{aldeen2015comprehensive,agrawal2000privacy}, which has garnered great interest in recent years with major breakthroughs reported in the intersection between machine learning, cryptography, homomorphic encryption, secure enclaves and blockchains~\cite{mendes2017privacy}. The~use of personal data and the stringent pressure placed by governments and agencies on differential privacy preservation has spurred a flurry of research to prevent models from revealing sensitive data from their training instances~\cite{ding2019survey, victor2016privacy}.
Within the ITS domain, many areas can be found in which privacy preservation has recently been a subject of intense research: from origin-destination flow estimation~\cite{zhou2013privacy} to route planners~\cite{florian2014privacy, rabieh2015privacy} and pattern mining~\cite{kim2008privacy}, a~glance at the recent literature reveals the momentum this topic has acquired lately. In~all of these examples, data are available as a result of the pervasiveness of sensing (especially in the case of VANETs) and the capture of user data. While previous works explored how to use these data properly with respect to privacy matters, the natural evolution of this research line is to study how protected data are preserved throughout the modeling workflow. \item Furthermore, the~proven vulnerability of data-based models against adversarial attacks has also motivated the community to lay the foundations of an entirely new research area---Adversarial Machine Learning---committed to the design of models that are robust against attacks crafted to confuse their outputs~\cite{huang2011adversarial,1312.6199}. Interestingly, one of the most widely exemplified scenarios in this research area relates to ITS: automated traffic sign classification models were proven to be vulnerable to adversarial attacks by placing a simple, intelligently designed sticker on the traffic sign itself~\cite{akhtar2018threat}. Likewise, the~rationale behind Federated Learning (discussed in Section~\ref{sec:tfada}) also spans beyond the efficient distribution of locally captured knowledge among models: since no raw data instances are involved in the information transfer, the privacy of local data is consequently preserved. In~short: security also matters in actionable data-based pipelines. \item Finally, the~ever-growing scales of ITS scenarios demand more research invested in scaling up learning algorithms in a computationally efficient manner~\cite{nguyen2019machine}.
Automated traffic, smart cities, and mobility as a service constitute ITS scenarios where a plethora of information sources interact with each other. More effort must certainly be invested in aggregation strategies for data-based models learned from different interrelated data ecosystems, either in a distributed fashion (e.g.,~federated learning) or in a centralized system (correspondingly, Map-Reduce implementations of data-based models, cloud-based architectures, etc.). Computational aspects of large-scale implementations should also be studied due to their implications in terms of actionability, such~as the latency of the system when prescribing decisions from data. This latter aspect can be key for real-time ITS applications, for which the gap from data to actions must be shortened to its minimum. \end{itemize} \section{Concluding Notes and Outlook}\label{conclusion} This work has built upon the overabundance of contributions within the ITS community dealing with performance-based comparisons among data-based models. Our claim is that, as in any other domain of application, data-based modeling should bridge the gap between data and actions, providing further value to the ITS application at hand than superior model performance statistics. It is our firm belief that the research community should embrace actionability as the primary design motto, with negligible performance improvements being left behind in favor of relevant aspects such as adaptability, usability, resiliency, scalability or efficiency. To provide a solid rationale for our postulations, we~have first presented a reference model for actionable data-based workflows, placing emphasis on the different phases that should be undertaken to translate data into actions of added value for the decision maker.
Adaptation has been highlighted as a necessary albeit often neglected processing step in data-based modeling, which allows models to remain effective when deployed in dynamic ITS environments with time-varying data sources. Next, our study has listed the main functional requirements that models along the reference workflow should meet to guarantee their actionability, followed by an overview of incipient research areas in Data Science and Artificial Intelligence that should progressively enter the ITS arena. Indeed, advances in XAI, Online Learning, Gray-box Modeling and Transfer Learning are currently investigated mostly from an application-agnostic perspective. Their undoubted connection to actionability makes them the core of a promising future for data-based modeling in ITS systems, processes and applications. { Other research areas related to Artificial Intelligence beyond those covered in our reflections will surely spawn further opportunities for actionability in ITS, provided that they fully embrace their ultimate goal: to effectively support decision making. Among them, the~use of Automated Machine Learning (AutoML~\cite{hutter2019automated}) for tuning data-based models should not only optimize performance-based metrics (e.g.,~finding a model that attains maximum accuracy for image segmentation in vehicular perception cameras), but also comply with other objectives and constraints that closely link to actionability (e.g., robustness against adversarial attacks, or a lower epistemic uncertainty induced in the model's output). Unless all such actionability constraints are regarded as design objectives and accounted for as such in the automated discovery of new data-based pipelines, any incursion of AutoML in ITS will be of no practical value.
For~this to occur, it is our belief that the confluence of multiple functional and non-functional requirements in this automated design process will pave the way towards the massive adoption of multi-objective optimization algorithms as a unified framework to infer and analyze all trade-offs existing among the design objectives.} Data-based modeling has brought a deep transformation to ITS. A~vast amount of research works in the field are produced by data-based modeling specialists attracted by the profusion of available data, and~with limited knowledge of transportation. Data-based models are getting progressively more complex, widening the gap between research and practice. This situation calls for a change of paradigm, to one in which the actionability requirements of models are pursued by researchers, and~practitioners are aware of the technologies available to provide it. Model actionability is a grand goal that can act as an incentive to take smaller steps towards its realization. It is probably unthinkable to develop, in~a research environment, a~data-based model that meets all the proposed requirements. However, addressing some of the postulated requirements while developing a competitive data-based ITS model will bring it closer to actionability. There is, therefore, a~long road to be travelled in ITS model actionability, with interesting avenues around the thorough understanding of models, and~the adoption of emerging AI technologies to endow data-based workflows with the requirements needed to make them actionable in practice. As exposed in our study, there is a germinal interest in these research topics. Nevertheless, we~foresee vast opportunities for future work when model actionability is set as a design priority.
On a closing note, we~advocate for a new dawn of Data Science in the ITS domain, where advances in modeling performance emerge concurrently with stories and reports about how such models have helped decision making in practical scenarios. Data mining has limited merit without actions prescribed from its outputs, always in compliance and close match with the specificities of its context. \vspace{6pt} \section*{Acknowledgements} This work was supported in part by the Basque Government through the EMAITEK program (3KIA, ref. KK-2020/00049). It has also received funding from the Consolidated Research Group MATHMODE (IT1294-19), granted by the Department of Education of the Basque Government.
\section{Introduction} Toric topology has opened new directions in the framework of equivariant topology and found remarkable applications due to the possibility of constructing explicit smooth manifolds, CW complexes, and their mappings that provide us with realizations of fundamental results of algebraic topology. In particular, the notion of a Massey product, well known and widely used in algebraic topology, acquired a deep new meaning in toric topology, thanks to the new methods that have led to explicit constructions of manifolds with nontrivial higher Massey products in cohomology. Consider an embedding $i_{F,P}:\,F\subset P$ of an $r$-dimensional face $F$ into an $n$-dimensional simple polytope $P$. The moment-angle manifold $\mathcal Z_{F}$ of the face $F$ is a submanifold of $\mathcal Z_P$. The explicit construction of an induced embedding of moment-angle manifolds $\hat{i}_{F,P}:\,\mathcal Z_{F}\to\mathcal Z_P$ plays a key role in this paper. Suppose $S$ is a subset of vertices of a simplicial complex $K$. Denote by $K_S$ the full subcomplex of $K$ on the vertex set $S$. It is well known that for the canonical embedding $j_{S}:\,K_S\to K$ there exists an induced embedding of moment-angle-complexes $\hat{j}_{S}:\,\mathcal Z_{K_S}\to\zk$ and, furthermore, $\mathcal Z_{K_S}$ is a retract of $\zk$; the explicit construction of this retraction is described in Construction~\ref{fullsubcompretract}. On the other hand, due to~\cite{bu-pa00-2}, for any simple polytope $P$ there exists a canonical equivariant homeomorphism $h_{P}:\,\zp\to\mathcal Z_{K_P}$ (see Constructions~\ref{mac}, \ref{mamfdDJ}, and~\ref{mamfdBP}), where $K_P$ is the boundary of the dual simplicial polytope $P^*$. In this work, to any face embedding $i_{F,P}:\,F\subset P$ we associate a canonical simplicial embedding $\Phi_{F,P}:\,K_{F}\to K_{P}$. Consider the full subcomplex $K_{F,P}$ of $K_P$ on the same vertex set as $\Phi_{F,P}(K_{F})$.
We prove that the canonical equivariant homeomorphisms $h_{F}, h_{P}$ and the canonical retraction $r:\,\zk\to\mathcal Z_{K_S}$ are connected by the induced embeddings $\hat{i}_{F,P}$, $\hat{\Phi}_{F,P}$ of moment-angle manifolds and moment-angle-complexes, respectively, in a commutative pentagonal diagram with the vertices $\mathcal Z_{K_F}$, $\mathcal Z_{K_P}$, $\mathcal Z_F$, $\mathcal Z_{P}$, and $\mathcal Z_{K_{F,P}}$, see Proposition~\ref{GenCase}. Moreover, we prove that a polytope $P$ is flag if and only if the embedding $\Phi_{F,P}$ gives an isomorphism $K_{F}\cong K_{F,P}$ for any face $F\subset P$, see Theorem~\ref{FlagCriterion}. As a corollary we obtain that in the latter case the induced embedding $\hat{i}_{F,P}$ of moment-angle manifolds has a retraction and thus induces a split ring epimorphism in cohomology. As an application of this result we obtain sequences $\{P^n\}$ of flag simple polytopes such that there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ with $k\to\infty$ as $n\to\infty$. Moreover, the existence of a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ implies the existence of a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^l})$ for any $l>n$. We started with the sequence of polytopes $\mathcal Q=\{Q^n|\,n\geq 0\}$ (see Definition~\ref{2truncMassey}), introduced by the second author~\cite{L1, L2}, for which it was proved in~\cite{L2} that there exists a nontrivial $n$-fold Massey product $\langle\alpha_{1},\ldots,\alpha_{n}\rangle$ with $\dim\alpha_{i}=3$, $1\leq i\leq n$, in $H^*(\mathcal Z_{Q^n})$ for any $n\geq 2$. In this work we prove that $Q^n$ is a facet of $Q^{n+1}$ for all $n\geq 0$, see Theorem~\ref{Qdfpnm}, and obtain as a corollary that there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{Q^n})$ for any $2\leq k\leq n$, see Corollary~\ref{AllProducts}, cf.~\cite[Theorem 4.1]{L3}.
Using Construction~\ref{familyFlag} we introduce a wide family of sequences of flag polytopes to which our main results can be applied, see Proposition~\ref{MasseySeqInfinity} and Theorem~\ref{mainMasseysequence}. {\it{Acknowledgements.}} We are grateful to Taras Panov for stimulating discussions on the results of this work. We also thank Jelena Grbi\'c for raising the question of how to find sequences of polytopes $\mathcal P=\{P^n\}$, different from $\mathcal Q$, for which there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ with $k\to\infty$ as $n\to\infty$. \section{Basic constructions, motivation, and main results} Here we recall only the notions that are key for our constructions and results. For the definitions, constructions, and results on the other notions used in this work we refer the reader to the monograph~\cite{TT}. Alongside our main results, in this section we give extended descriptions of certain constructions from~\cite{TT}, motivated by their applications in our work. \begin{defi} An \emph{abstract simplicial complex} $K$ on the vertex set $[m]=\{1,2,\ldots,m\}$ is a set of subsets of $[m]$, called its \emph{simplices}, such that if $\tau\subset\sigma$ and $\sigma\in K$, then $\tau\in K$.\\ The dimension of $K$ is equal to the maximal value of $|\sigma|-1$ over all $\sigma\in K$. We assume in what follows, unless otherwise stated explicitly, that there are no \emph{ghost vertices} in $K$, that is, $\{i\}\in K$ for all $i\in [m]$. The {\emph{full subcomplex}} $K_J$ of $K$ on the vertex set $J\subseteq [m]$ is the set of all simplices of $K$ having all their vertices in $J$, that is, $K_{J}=2^{J}\cap K$. \end{defi} \begin{constr}(moment-angle-complex)\label{mac} Suppose $K$ is an abstract simplicial complex on $[m]$. Then define its \emph{moment-angle-complex} to be $$ \zk=\cup_{\sigma\in K} (D^2,S^1)^{\sigma}, $$ where $(D^2,S^1)^{\sigma}=\prod\limits_{i=1}^{m}\,Y_{i}$ with $Y_{i}=D^2$ if $i\in\sigma$, and $Y_{i}=S^1$ otherwise.
Buchstaber and Panov, see~\cite[Chapter 4]{TT}, proved that $\zk$ is a CW complex for any simplicial complex $K$ and, moreover, gave an alternative description of $\zk$ via the following commutative diagram: $$\begin{CD} \zk @>>>(\mathbb{D}^2)^m\\ @VVrV\hspace{-0.2em} @VV\rho V @.\\ \cc(K) @>i_c>> I^m \end{CD}\eqno $$ where $i_{c}:\,\cc(K)\hookrightarrow I^{m}=(I^1,I^1)^{[m]}$ is an embedding of the cubical subcomplex $\cc(K)=(I^1,1)^K$ in $I^{m}=[0,1]^m$ (it is PL homeomorphic to a cone over the barycentric subdivision of $K$) induced by the inclusion of pairs $(I^1,1)\subset (I^1,I^1)$, and the maps $r$ and $\rho$ are the projections onto the orbit spaces of the $\mathbb{T}^m$-actions induced by the coordinatewise action of $\mathbb{T}^m$ on the unit polydisk $(\mathbb{D}^2)^m$ in $\C^m$. \end{constr} In our paper we use the following definition of a simple convex polytope. \begin{defi}\label{Simplepolytopes} A \emph{simple convex $n$-dimensional polytope} $P$ in the Euclidean space $\R^n$ with scalar product $\langle\;,\:\rangle$ can be defined as a bounded intersection of $m$ closed halfspaces: $$ P=\bigl\{\mb x\in\R^n\colon\langle\mb a_i,\mb x\rangle+b_i\ge0\quad\text{for } i=1,\ldots,m\bigr\},\eqno (1) $$ where $\mb a_i\in\R^n$, $b_i\in\R$. We assume that the \emph{facets} of $P$, $$ F_i=\bigl\{\mb x\in P\colon\langle\mb a_i,\mb x\rangle+b_i=0\bigr\},\quad i=1,\ldots,m, $$ are in general position, that is, exactly $n$ of them meet at a single point (such a point is called a {\emph{vertex}} of $P$). \end{defi} In what follows we also assume that there are no redundant inequalities in $(1)$, that is, no inequality can be removed from $(1)$ without changing~$P$. The latter condition is equivalent to saying that $K_P$ has no ghost vertices. We also fix the following notation: we denote by $m(P^n)$, or simply by $m(n)$, the number of facets of a simple polytope $P^n$. Then $K_P$ is an $(n-1)$-dimensional triangulated sphere with $m(n)$ vertices.
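As an illustration of the correspondence $P\mapsto K_P\mapsto\mathcal Z_{K_P}$, we recall two standard examples from~\cite{TT}; they are included only for the reader's convenience and are not used in the arguments below.

```latex
% Two standard examples (see [TT, Chapter 4]), stated here for illustration.
% (1) For the simplex P = \Delta^n one has m(n) = n+1 and K_P = \partial\Delta^n.
%     A point of the polydisk (D^2)^{n+1} lies in Z_{K_P} precisely when at
%     least one of its coordinates has modulus 1, so that
\[
  \mathcal Z_{K_P}
  \;=\; \bigcup_{\sigma\subsetneq [n+1]} (D^2,S^1)^{\sigma}
  \;=\; \partial\bigl((D^2)^{n+1}\bigr)
  \;\cong\; S^{2n+1}.
\]
% (2) For the square P = I^2 = \Delta^1 \times \Delta^1 one has
%     K_P = \partial\Delta^1 * \partial\Delta^1 (a 4-cycle), and since the
%     moment-angle complex of a join is the product of moment-angle complexes,
\[
  \mathcal Z_{I^2}
  \;\cong\; \mathcal Z_{\Delta^1}\times\mathcal Z_{\Delta^1}
  \;\cong\; S^3\times S^3.
\]
```

In particular, already for these smallest polytopes the moment-angle manifold is a closed $(m+n)$-dimensional manifold, in agreement with the general statement below.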
\begin{defi} A minimal nonface of $K$ is a set $J=\{j_1,\ldots,j_k\}\subset [m]$, $|J|=k\geq 2$, such that $K_{J}=\partial\Delta^{k-1}$. We denote the set of all minimal nonfaces of $K$ by $MF(K)$. A simplicial complex $K$ is called \emph{flag} if $|J|=2$ for any $J\in MF(K)$. A simple polytope $P$ is called \emph{flag} if its nerve complex $K_P$ is flag. Equivalently, if a set of facets of $P$ has an empty intersection, $F_{j_1}\cap\ldots\cap F_{j_k}=\varnothing$, then some pair among these facets also has an empty intersection: $F_{j_{s}}\cap F_{j_t}=\varnothing$ for some $s,t\in [k]$. \end{defi} The next construction first appeared in the work of Davis and Januszkiewicz~\cite{DJ}. \begin{constr}(moment-angle manifold I)\label{mamfdDJ} Suppose $P^n$ is a simple convex polytope with the set of facets $\mathcal F(P^n)=\{F_{1},\ldots,F_{m}\}$. Denote by $T^{F_{i}}$ the 1-dimensional coordinate subgroup in $T^{F}\cong T^{m}$ for each $1\leq i\leq m$, and set $T^{G}=\prod\,T^{F_i}\subset T^{F}$ for a face $G=\cap\,F_{i}$ of the polytope $P^n$. Then the \emph{moment-angle manifold} over~$P$ is defined as the quotient space $$ \zp=(T^{F}\times P^{n})/\sim, $$ where $(t_{1},p)\sim (t_{2},q)$ if and only if $p=q\in P$ and $t_{1}t_{2}^{-1}\in T^{G(p)}$, where $G(p)$ is the minimal face of $P$ which contains $p=q$. \end{constr} \begin{rema} It can be deduced from Construction~\ref{mamfdDJ} that if $P_{1}$ and $P_{2}$ are \emph{combinatorially equivalent}, that is, their face lattices are isomorphic (or, equivalently, $K_{P_1}$ and $K_{P_2}$ are simplicially isomorphic), then $\mathcal Z_{P_{1}}$ and $\mathcal Z_{P_{2}}$ are homeomorphic. The converse statement is {\emph{not}} true. \end{rema} Due to the above remark, in what follows we will often not distinguish between combinatorially equivalent simple polytopes. Let $A_P$ be the $m\times n$ matrix with row vectors $\mb a_i\in\mathbb{R}^n$, and let $\mb b_P\in\mathbb{R}^m$ be the column vector of scalars $b_i\in\R$.
Then we can rewrite $(1)$ as \[ P=\bigl\{\mb x\in\R^n\colon A_P\mb x+\mb b_P\ge\mathbf 0\bigr\}, \] and consider the affine map \[ i_P\colon \R^n\to\R^m,\quad i_P(\mb x)=A_P\mb x+\mb b_P. \] It embeds $P$ into \[ \R^m_\ge=\{\mb y\in\R^m\colon y_i\ge0\quad\text{for } i=1,\ldots,m\}. \] In a series of works by Buchstaber and Panov, moment-angle manifolds were intensively studied by means of algebraic topology, combinatorial commutative algebra, and polytope theory, which started a new area of geometry and topology, toric topology; see~\cite{TT}. They gave the following definition and proved it to be equivalent to the one above. \begin{constr}(moment-angle manifold II)\label{mamfdBP} Define the \emph{moment-angle manifold} $\mathcal Z_P$ of a polytope $P$ as a pullback from the commutative diagram $$\begin{CD} \mathcal Z_P @>i_Z>>\C^m\\ @VVV\hspace{-0.2em} @VV\mu V @.\\ P @>i_P>> \R^m_\ge \end{CD}\eqno $$ where $\mu(z_1,\ldots,z_m)=(|z_1|^2,\ldots,|z_m|^2)$. The projection $\zp\rightarrow P$ in the above diagram is the quotient map of the canonical action of the compact torus $\mathbb{T}^m$ on $\zp$ induced by the standard action of $\mathbb{T}^m$ \[ \mathbb T^m=\{\mb z\in\C^m\colon|z_i|=1\quad\text{for }i=1,\ldots,m\} \] on~$\C^m$. Therefore, $\mathbb T^m$ acts on $\zp$ with orbit space $P$, and $i_Z$ is a $\mathbb T^m$-equivariant embedding. \end{constr} It follows immediately from Construction~\ref{mamfdBP}, see~\cite[\S3]{BR}, that $\zp$ is a transverse intersection of Hermitian quadrics in $\C^m$. Thus, $\zp$ acquires a canonical equivariant smooth structure. For any simple $n$-dimensional polytope $P$ with $m$ facets its moment-angle manifold $\zp$ is a 2-connected, closed, $(m+n)$-dimensional manifold. \begin{rema} The general problem of how many $K$-invariant smooth structures may exist on $\zp$, where $K\cong T^{m-n}$ is a maximal subgroup of $\mathbb{T}^m$ that acts freely on $\zp$, remains open.
In the case when $P=\Delta^3$ and $K\cong S^1$ (that is, $\zp=S^7$) it was solved in the work of Bogomolov~\cite{Bg}. \end{rema} Now we focus on a description of the cohomology algebra of $\zp$ and of the higher Massey products in it. To proceed, we first need to recall some notions from combinatorial commutative algebra. Let $\ko$ be a commutative ring with a unit. Throughout the paper, unless otherwise stated explicitly, we denote by $K$ an $(n-1)$-dimensional simplicial complex on the vertex set $[m]=\{1,2,\ldots,m\}$ and by $P$ a simple convex $n$-dimensional polytope with $m$ facets: $\mathcal F(P)=\{F_{1},\ldots,F_{m}\}$. Let $\ko[m]=\ko[v_1,\ldots,v_m]$ be the graded polynomial algebra on $m$ variables, $\deg v_{i}=2$. \begin{defi}\label{Facerings} The \emph{face ring} (or \emph{Stanley-Reisner ring}) of $K$ is the quotient ring $$ \ko[K]:=\ko[v_{1},\ldots,v_{m}]/I_K, $$ where $I_K$ is the ideal generated by the square-free monomials $v_{i_{1}}\cdots{v_{i_{k}}}$ such that $\{i_{1},\ldots,i_{k}\}\notin K$.\\ We call the \emph{face ring of a polytope} $P$ the Stanley-Reisner ring of its {\emph{nerve complex}}: $\ko[P]=\ko[K_P]$, where $K_P=\partial P^*$. \end{defi} Note that $\ko[P]$ is a module over $\ko[v_{1},\ldots,{v_{m}}]$ via the quotient projection. The following result relates the cohomology algebra of $\zk$ to the combinatorics of $K$. \begin{theo}[{\cite[Theorem 4.5.4]{TT} or \cite[Theorem 4.7]{P}}]\label{BPtheo} The following statements hold. \begin{itemize} \item[(I)] There are isomorphisms of algebras: $$ \begin{aligned} H^*(\zk;\ko)&\cong\Tor_{\ko[v_1,\ldots,v_m]}^{*,*}(\ko[K],\ko)\\ &\cong H^{*,*}\bigl[\Lambda[u_1,\ldots,u_m]\otimes \ko[K],d\bigr]\\ &\cong \bigoplus\limits_{J\subset [m]}\widetilde{H}^*(K_{J};\ko), \end{aligned} $$ where $\mathop{\mathrm{bideg}} u_i=(-1,2),\;\mathop{\mathrm{bideg}} v_i=(0,2);\quad du_i=v_i,\;dv_i=0$.
The last isomorphism is the sum of isomorphisms of $\ko$-modules: $$ H^p(\zk;\ko)\cong\bigoplus\limits_{J\subset [m]}\widetilde{H}^{p-|J|-1}(K_{J};\ko); $$ \item[(II)] The cup product in $H^*(\zk;\ko)$ is described as follows. Suppose we have two cohomology classes $\alpha=[a]\in\tilde{H}^{p}(K_{I_1};\ko)$ and $\beta=[b]\in\tilde{H}^{q}(K_{I_2};\ko)$ on full subcomplexes $K_{I_1}$ and $K_{I_2}$. Then one can define a natural simplicial inclusion $i:\,K_{I_{1}\sqcup I_{2}}\rightarrow K_{I_1}*K_{I_2}$ and the canonical isomorphism of cochain modules: $$ s:\,\tilde{C}^{p}(K_{I_1})\otimes\tilde{C}^{q}(K_{I_2})\rightarrow\tilde{C}^{p+q+1}(K_{I_{1}}*K_{I_2}). $$ Then the product of $\alpha$ and $\beta$ is given by: $$ \alpha\cdot\beta=\begin{cases} 0,&\text{if $I_{1}\cap I_{2}\neq\varnothing$;}\\ i^{*}[s(a\otimes b)]\in\tilde{H}^{p+q+1}(K_{I_{1}\sqcup I_{2}};\ko),&\text{if $I_{1}\cap I_{2}=\varnothing$.} \end{cases} $$ \end{itemize} \end{theo} \begin{rema} Hochster~\cite{Hoch} proved the isomorphism of $\ko$-modules $$ \Tor_{\ko[v_1,\ldots,v_m]}^{*,*}(\ko[K],\ko)\cong\bigoplus\limits_{J\subset [m]}\widetilde{H}^*(K_{J};\ko) $$ for any simplicial complex $K$. Since one has a homeomorphism $\zp\cong\mathcal Z_{K_P}$, this gives a description of the additive structure of $H^*(\zp;\ko)$ as well. \end{rema} If we define the finitely generated differential graded algebra $R(K)=\Lambda[u_{1},\ldots,u_{m}]\otimes\ko[K]/(v_{i}^{2}=u_{i}v_{i}=0,1\leq i\leq m)$ with the same $d$ as in the theorem above, then one has the following result. \begin{prop}\label{BPmultigrad} A graded algebra isomorphism holds: $$ H^{*,*}(\zk;\ko)\cong H^{*,*}[R(K),d]\cong\Tor^{*,*}_{\ko[v_{1},\ldots,v_{m}]}(\ko[K],\ko).
$$ These algebras admit an $\mathbb{N}\oplus\mathbb{Z}^m$-multigrading, and we have $$ \Tor^{-i,2{\bf{a}}}_{\ko[v_{1},\ldots,v_{m}]}(\ko[K],\ko)\cong H^{-i,2{\bf{a}}}[R(K),d], $$ where $i\in\mathbb{N}$, ${\bf{a}}\in\mathbb{Z}^m$, and $\Tor^{-i,2J}_{\ko[v_{1},\ldots,v_{m}]}(\ko[K],\ko)\cong\widetilde{H}^{|J|-i-1}(K_{J};\ko)$ for $J\subseteq [m]$. The multigraded component $\Tor^{-i,2{\bf{a}}}_{\ko[v_{1},\ldots,v_{m}]}(\ko[K],\ko)$ vanishes if ${\bf{a}}$ is not a $(0,1)$-vector of length $m$. \end{prop} \begin{constr}\label{fullsubcompretract} Consider a subset of vertices $S\subset [m]$ of a simplicial complex $K$. Then there is a natural simplicial embedding $j_{S}:\,K_{S}\to K$. We are going to construct a retraction for the induced embedding of moment-angle-complexes $\hat{j}_{S}:\,\mathcal Z_{K_S}\to\zk$. To do this, consider the projection $$ p_{\sigma}:\,\prod\limits_{i\in\sigma}\,D^{2}\times\prod\limits_{i\notin\sigma}\,S^{1}\rightarrow\prod\limits_{i\in\sigma\cap S}D^{2}\times\prod\limits_{i\in S\backslash\sigma}S^{1} $$ for each $\sigma\in K$. Observe that the image of $p_{\sigma}$ lies in $\mathcal Z_{K_S}$ for all $\sigma\in K$, since $K_{S}=\{\sigma\cap S|\,\sigma\in K\}$. It is easy to see that $r_{S}=\cup_{\sigma\in K}\,p_{\sigma}:\,\zk\to\mathcal Z_{K_S}$ is a retraction. \end{constr} \begin{coro}\label{splitepi} Suppose $j_{S}:\,K_{S}\hookrightarrow K$ is the embedding of a full subcomplex of $K$ on the vertex set $S\subseteq [m]$. Then the embedding of moment-angle-complexes $\hat{j}_{S}:\,\mathcal Z_{K_S}\to\zk$ has a retraction and the induced ring homomorphism in cohomology $\hat{j}^*_{S}:\,H^*(\zk)\rightarrow H^*(\mathcal Z_{K_S})$ is a split ring epimorphism. \end{coro} \begin{proof} The statement about the embedding of moment-angle-complexes follows directly from Construction~\ref{mac} and Construction~\ref{fullsubcompretract} (cf.~\cite[Exercise 4.2.13]{TT}).
The rest of the statement now follows from the fact that the homomorphism in cohomology induced by a retraction is a split ring epimorphism: $(ri)^{*}=i^{*}r^{*}=(1_{A})^{*}$ for the retract $A=\mathcal Z_{K_S}\subset\zk$. \end{proof} Now we will discuss in more detail different ways to construct an equivariant embedding of a moment-angle manifold $\mathcal Z_{F^r}$ into a moment-angle manifold $\mathcal Z_{P^n}$ induced by a face embedding $F^{r}\to P^{n}$. Although $\mathcal Z_{F^r}$ is always a submanifold of $\mathcal Z_{P^n}$, no retraction $\mathcal Z_{P^n}\to\mathcal Z_{F^r}$ exists in general, see Example~\ref{prism} below. \begin{constr}(mappings of moment-angle manifolds I)\label{mapmfds} Suppose $F^{r}=F_{i_{1}}\cap\ldots\cap F_{i_{n-r}}$ is an $r$-dimensional face of $P^n$ and $i^{n}_{r}:\,F^{r}\hookrightarrow\partial P^{n}\subset P^n$ is its embedding into $P^n$. Then $F^r$ is a simple polytope itself, having $m(r)$ facets $G_{i}$, $1\leq i\leq m(r)$; that is, for each facet $G_{\alpha}$ of $F^r$ there is a unique facet $F_{j}$ of $P^n$ such that $$ G_{\alpha}=(F_{i_{1}}\cap\ldots\cap F_{i_{n-r}})\cap F_{j}. $$ Thus, a map $\phi^{n}_{r}:\,[m(r)]\rightarrow [m(n)]$ is determined by setting $\phi^{n}_{r}(\alpha)=j$. Now, using Construction~\ref{mamfdDJ}, we are going to construct a map $\hat{\phi}^{n}_{r}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$ induced by $\phi^{n}_{r}$ and $i^{n}_{r}$.
First consider the map $$ \tilde{\phi}^{n}_{r}:\,T^{\mathcal F(F^r)}\rightarrow T^{\mathcal F(P^n)}, $$ where $\mathcal F(F^r)=\{G_{1},\ldots,G_{m(r)}\}$ and $\mathcal F(P^n)=\{F_{1},\ldots,F_{m(n)}\}$ denote the sets of facets of $F^r$ and $P^n$, respectively, and $$ \tilde{\phi}^{n}_{r}(t_{1},\ldots,t_{m(r)})=(\tau_{1},\ldots,\tau_{m(n)}), $$ where $$ \tau_{i}=\begin{cases} t_{(\phi^{n}_{r})^{-1}(i)},&\text{if $i\in\im\phi_{r}^{n}$;}\\ 1,&\text{otherwise.} \end{cases} $$ It is easy to see that $\tilde{\phi}^{n}_{r}:\,T^{m(r)}\rightarrow T^{m(n)}$ is a group homomorphism. Finally, we are able to define a map $$ \hat{\phi}^{n}_{r}:\,\mathcal Z_{F^r}=(T^{\mathcal F(F^r)}\times F^{r})/\sim\rightarrow\mathcal Z_{P^n}=(T^{\mathcal F(P^n)}\times P^{n})/\sim $$ by the formula $$ \hat{\phi}^{n}_{r}([t,p])=[\tilde{\phi}^{n}_{r}(t),i_{r}^{n}(p)]. $$ Let us prove the correctness of the definition above. Due to Construction~\ref{mamfdDJ}, it suffices to prove that if $T_{1}^{-1}T_{2}\in T^{G_{F^r}(p=q)}$, then $\tilde{\phi}^{n}_{r}(T_{1}^{-1})\tilde{\phi}^{n}_{r}(T_{2})\in T^{G_{P^n}(p=q)}$. As $\tilde{\phi}^{n}_{r}$ is a group homomorphism, we need to prove that $$ \tilde{\phi}^{n}_{r}(T_{1}^{-1}T_{2})\in T^{G_{P^n}(p=q)}, $$ whenever $$ T_{1}^{-1}T_{2}\in T^{G_{F^r}(p=q)}. $$ Let $T_{1}^{-1}T_{2}=(t_{1},\ldots,t_{m(r)})$ and $\tilde{\phi}^{n}_{r}(T_{1}^{-1}T_{2})=(\tau_{1},\ldots,\tau_{m(n)})$. Note that, by Construction~\ref{mamfdDJ}, $G:=G_{F^r}(p=q)=G_{P^n}(p=q)$ is the unique face of $F^r\subset P^n$ whose relative interior contains the point $p=q$. Suppose $G=G_{\alpha_{1}}\cap\ldots\cap G_{\alpha_{k}}=F^{r}\cap F_{j_{1}}\cap\ldots\cap F_{j_k}$. Then, by the definition of $\phi^{n}_{r}$, one has $\phi^{n}_{r}(\alpha_{p})=j_p$ for all $1\leq p\leq k$. Now it suffices to show that if $\tau_{i}\neq 1$, then $i\in\{j_{1},\ldots,j_{k}\}$. As $\tau_{i}\neq 1$, by the definition of $\tilde{\phi}^{n}_{r}$ one has $\tau_{i}=t_{(\phi^{n}_{r})^{-1}(i)}\neq 1$ and $i\in\im\phi^{n}_{r}$.
Since $t_{(\phi^{n}_{r})^{-1}(i)}\neq 1$ and $(t_{1},\ldots,t_{m(r)})\in T^{G}=T^{G_{\alpha_{1}}}\times\ldots\times T^{G_{\alpha_{k}}}$, we must have $t_{(\phi^{n}_{r})^{-1}(i)}\in T^{G_{\alpha_{s}}}$ for some $s\in [k]$. Again by the definition of $\tilde{\phi}^{n}_{r}$, this means that $(\phi^{n}_{r})^{-1}(i)=\alpha_{s}$, or $\phi^{n}_{r}(\alpha_{s})=i$. As we previously had $\phi^{n}_{r}(\alpha_{s})=j_{s}$ for the face $G$ above, it implies that $i=\phi^{n}_{r}(\alpha_{s})=j_{s}\in\{j_{1},\ldots,j_k\}$, and the correctness of the definition of $\hat{\phi}^{n}_{r}$ is proved. To show that $\hat{\phi}^{n}_{r}$ is continuous, we observe that, by the definition of $\hat{\phi}^{n}_{r}$, the following diagram commutes: $$\begin{CD} T^{m(r)}\times F^{r} @>\tilde{\phi}^{n}_{r}\times i_{r}^{n}>> T^{m(n)}\times P^{n}\\ @VV\pr_{r} V\hspace{-0.2em} @VV\pr_{n} V @.\\ \mathcal Z_{F^r} @>\hat{\phi}^{n}_{r}>>\mathcal Z_{P^n} \end{CD}\eqno $$ Since $\mathcal Z_{F^r}$ and $\mathcal Z_{P^n}$ both carry the quotient topologies determined by the canonical projections $\pr_{r}$ and $\pr_{n}$, respectively, given in Construction~\ref{mamfdDJ}, one concludes that $\hat{\phi}^{n}_{r}$ is continuous if and only if $\hat{\phi}^{n}_{r}\pr_{r}=\pr_{n}\;(\tilde{\phi}^{n}_{r}\times i_{r}^{n})$ is continuous. The latter map is a composition of continuous maps, which finishes the proof. \end{constr} Now we will construct a simplicial map $\Phi^{n}_{r}:\,K_{F^r}\rightarrow K_{P^n}$ determined by $\phi_{r}^{n}$ that induces a continuous map of moment-angle-complexes $\hat{\Phi}^{n}_{r}:\,\mathcal Z_{K_{F^r}}\rightarrow\mathcal Z_{K_{P^n}}$. \begin{constr}\label{mapmcxs} First, recall that $\phi^{n}_{r}$ induces an injective map of facets $\overline{\phi}^{n}_{r}:\,\mathcal F(F^r)\rightarrow\mathcal F(P^n)$, where $\overline{\phi}^{n}_{r}(G_{\alpha})=F_{\phi^{n}_{r}(\alpha)}$ for $G_{\alpha}=F^r\cap F_{\phi^{n}_{r}(\alpha)}$.
Then any simplex $\sigma=(\alpha_{1},\ldots,\alpha_k)\in K_{F^r}$ is in 1-1 correspondence with a nonempty intersection of facets of $F^r$: $$ G_{\alpha_1}\cap\ldots\cap G_{\alpha_k}=F^{r}\cap F_{\phi^{n}_{r}(\alpha_{1})}\cap\ldots\cap F_{\phi^{n}_{r}(\alpha_k)}\neq\varnothing. $$ It implies that $F_{\phi^{n}_{r}(\alpha_{1})}\cap\ldots\cap F_{\phi^{n}_{r}(\alpha_k)}\neq\varnothing$, thus a nondegenerate injective simplicial map $\Phi^{n}_{r}:\,K_{F^r}\rightarrow K_{P^n}$ can be defined by $$ \Phi^{n}_{r}(\sigma)=(\phi_{r}^{n}(\alpha_{1}),\ldots,\phi^{n}_{r}(\alpha_k))\in K_{P^n}. $$ Observe that: \begin{itemize} \item[(1)] If $F^{r}=F_{i_{1}}\cap\ldots\cap F_{i_{n-r}}$ in $P$, then there is a simplex $\Delta({F})=\{i_{1},\ldots,i_{n-r}\}\in K_P$; \item[(2)] By definition of link, $\Phi_{r}^{n}(K_{F^r})=\Link_{K_{P^n}}\Delta(F)$. \end{itemize} It follows from Construction~\ref{mac} that $\Phi^{n}_{r}$ determines a continuous map of moment-angle-complexes: $$ \hat{\Phi}^{n}_{r}:\,\mathcal Z_{K_{F^r}}\rightarrow\mathcal Z_{K_{P^n}}, $$ which is induced by homeomorphisms: $$ (D^2,S^1)^{\sigma}\cong (D^2,S^1)^{\Phi^{n}_{r}(\sigma)}. $$ \end{constr} \begin{rema} Note that both $\hat{\phi}^{n}_{r}$ and $\hat{\Phi}^{n}_{r}$ are weakly equivariant maps with respect to the $\mathbb{T}^{m(r)}$-action, induced by the map $\tilde{\phi}^{n}_{r}:\,\mathbb{T}^{m(r)}\rightarrow\mathbb{T}^{m(n)}$. \end{rema} Now, using Definition~\ref{mamfdBP}, we are going to introduce a description of a map of moment-angle manifolds $\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$, induced by a face embedding $i_{r}^{n}:\,F^r\to P^n$, equivalent to that in Construction~\ref{mapmfds}, for which we are able to give explicit formulae in coordinates in the ambient complex Euclidean spaces $\C^{m(r)}$ and $\C^{m(n)}$. \begin{constr}(mappings of moment-angle manifolds II)\label{mapmfds2} Suppose $F^{r}$ is an $r$-dimensional face of $P^n$ with a set of facets $\mathcal F(P^n)=\{F_{1},\ldots,F_{m(n)}\}$. 
First, let us prove that there exists an induced embedding $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\hookrightarrow\mathcal Z_{P^n}$. Consider the affine embedding $f:\,\R^n \to\R^{m(n)}$ such that its restriction to $P^n$ equals $i_{P^n}$, see Construction~\ref{mamfdDJ}, and the embedding $g:\,\R^r \to\R^n$, whose restriction to $F^r$ is $i_{r}^{n}$. Now consider the composition map $f\,g:\,\R^r \to \R^n \to \R^{m(n)}$ and a section $s_{n}: \R^{m(n)}_\ge \to \C^{m(n)}$. Recall that the embedding $\tilde{\phi}_{r}^{n}:\,T^{m(r)}\to T^{m(n)}$ introduced in Construction~\ref{mapmfds} gives an action of $T^{m(r)}$ on $\C^{m(n)}$. Finally, we get an action of $T^{m(r)}$ on the image $s_{n}\,f\,g(F^r)$ of the face $F^r$ in $\C^{m(n)}$, where $s_{n}$ is a continuous section of the moment map $\mu_{n}:\,\C^{m(n)}\to\R^{m(n)}_{\ge}$. Let us denote the corresponding moment-angle manifold (see Construction~\ref{mamfdDJ}) by $W$. Then, by Construction~\ref{mamfdBP}, $W$ is embedded into $\mathcal Z_{P^n}$ in $\C^{m(n)}$ and the induced map $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\to\mathcal Z_{P^n}$ is defined, with image $W$. Now let us give explicit formulae for the embedding of $W$ into $\C^{m(n)}$. Without loss of generality, we may assume that $F^{r}=F_{r+1}\cap\ldots\cap F_{n}$ and $P^n$ is given in $\R^{n}$ by the following system of linear inequalities: $$ P^{n}=\{x\in\R^n|\,A_{P^n}x+b_{P^n}\geq 0\},\quad A_{P^n}^{T}=(E_{n},\tilde{A}^{T}),\quad b_{P^n}=(\underbrace{0,\ldots,0}_{n},\tilde{b})^{T}, $$ where $\tilde{A}$ is an $(m(n)-n)\times n$-matrix and $\tilde{b}=(b^{1},\ldots,b^{m(n)-n})$. Then one has: $$ i_{r}^{n}:\,F^{r}\rightarrow P^{n},\quad i_{r}^{n}(x^{1},\ldots,x^{r})=(x^{1},\ldots,x^{r},0_{n-r}), $$ where $0_{n-r}=(\underbrace{0,\ldots,0}_{n-r})$. Denote the submatrix consisting of the first $r$ columns and first $m(r)$ rows of $\tilde{A}$ by $\tilde{A}_{I}$ and the submatrix consisting of the first $r$ columns and last $m(n)-m(r)-n$ rows of $\tilde{A}$ by $\tilde{A}_{II}$.
Set $\tilde{b}=(b_{I},b_{II})^{T}$, where $b_{I}$ has length $m(r)$ and $b_{II}$ has length $m(n)-m(r)-n$. Observe that the following equality holds: $$ F^{r}=\{x\in\R^r|\,\tilde{A}_{I}x+b_{I}\geq 0\}. $$ Note that the following formula holds: $$ i_{P^n}\,i_{r}^{n}(x)=(x,0_{n-r},\tilde{A}_{I}x+b_{I},\tilde{A}_{II}x+b_{II}), $$ where $x=(x^{1},\ldots,x^{r})^{T}$. On the other hand, one gets by definition: $$ i_{F^r}(x)=\tilde{A}_{I}x+b_{I}. $$ Since $F^r$ is a simple polytope, the rank of $\tilde{A}_{I}$ equals $r$. Thus, there exist an $r\times m(r)$-matrix $C$ and a column vector $D$ of length $r$ such that $$ y=\tilde{A}_{I}x+b_{I}\,\rightarrow\,x=Cy+D, $$ where $x\in\R^r$, $y\in\R^{m(r)}$. Consider an affine embedding $i_{R}:\,\mathbb{R}^{m(r)}\rightarrow\mathbb{R}^{m(n)}$ given by the formula $$ i_{R}(y)=(Cy+D,0_{n-r},y,(\tilde{A}_{II}C)y+(\tilde{A}_{II}D+b_{II})). $$ Now the following diagram commutes: $$\begin{CD} F^r @>i_{r}^{n}>> P^n\\ @VVi_{F^r} V\hspace{-0.2em} @VVi_{P^n} V @.\\ \R^{m(r)} @>i_{R}>>\R^{m(n)} \end{CD}\eqno $$ Observe that for a certain map $i_{C}:\C^{m(r)}\rightarrow\C^{m(n)}$ the following diagram commutes: $$\begin{CD} \mathcal Z_{F^r} @>i_{\mathcal Z_{F^r}}>> \mathbb{C}^{m(r)} @>i_{C}>> \mathbb{C}^{m(n)}\\ @VV\pr_{\mathbb{T}^r} V\hspace{-0.2em} @VV\mu_{m(r)} V\hspace{-0.2em} @VV\mu_{m(n)} V @.\\ F^r @>i_{F^r}>>\mathbb{R}^{m(r)}_{\ge} @>i_{R}>> \mathbb{R}^{m(n)}_{\ge}.
\end{CD}\eqno $$ By Construction~\ref{mamfdBP} $\mathcal Z_{P^n}$ is a pullback in the following diagram: $$\begin{CD} \mathcal Z_{P^n} @>i_{\mathcal Z_{P^n}}>>\C^{m(n)}\\ @VVV\hspace{-0.2em} @VV\mu_{m(n)} V @.\\ P^{n} @>i_{P^n}>> \R^{m(n)}_{\ge} \end{CD}\eqno $$ Therefore, by the universal property of pullback, there exists a unique map $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$ such that the following diagram commutes: $$\begin{CD} \mathcal Z_{F^r} @>\hat{i}_{r}^{n}>> \mathcal Z_{P^n} @>i_{\mathcal Z_{P^n}}>> \mathbb{C}^{m(n)}\\ @VV\pr_{\mathbb{T}^r} V\hspace{-0.2em} @VV\pr_{\mathbb{T}^ {n}} V\hspace{-0.2em} @VV\mu_{m(n)} V @.\\ F^r @>i_{r}^{n}>> P^{n} @>i_{P^n}>> \mathbb{R}^{m(n)}_{\ge}. \end{CD}\eqno $$ In particular, from the diagram above one has: $i_{\mathcal Z_{P^n}}\,\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}\hookrightarrow\C^{m(n)}$ coincides with the composition of embeddings $i_{C}\,i_{\mathcal Z_{F^r}}:\,\mathcal Z_{F^r}\hookrightarrow\C^{m(r)}\hookrightarrow\C^{m(n)}$. Therefore, $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$ is an embedding. It follows that $\hat{i}_{r}^{n}(z)$ for $z=(z_{1},\ldots,z_{m(r)})\in\mathcal Z_{F^r}$ coincides with $i_{C}(z)$ (we regard $\mathcal Z_{F^r}$ as an intersection of Hermitian quadrics in $\C^{m(r)}$), and, moreover, one has: $W=i_{\mathcal Z_{P^n}}\hat{i}_{r}^{n}(\mathcal Z_{F^r})$. Therefore, we can give $W\subset\C^{m(n)}$ by the following formulae: $$ (|z_{1}|^{2},\ldots,|z_{r}|^{2})=(x_{1},\ldots,x_{r})=:x,\quad (z_{r+1},\ldots,z_{n})=0_{n-r}, $$ $$ (|z_{n+1}|^{2},\ldots,|z_{n+m(r)}|^{2})=\tilde{A}_{I}x+b_{I},\quad (|z_{n+m(r)+1}|^{2},\ldots,|z_{m(n)}|^{2})=\tilde{A}_{II}x+b_{II}. 
$$ For the embedding $i_{C}:\C^{m(r)}\to\C^{m(n)}$ one has the following formulae: $$ \C^{m(r)}\to\C^{m(n)}:\,z\to(\sqrt{C_{1}Z+D_{1}},\ldots,\sqrt{C_{r}Z+D_{r}},0_{n-r},z, $$ $$ \sqrt{(\tilde{A}_{II}C)_{1}Z+(\tilde{A}_{II}D)_{1}+b_{II,1}},\ldots, $$ $$ \sqrt{(\tilde{A}_{II}C)_{m(r,n)}Z+(\tilde{A}_{II}D)_{m(r,n)}+b_{II,m(r,n)}}), $$ where $z=(z_{1},\ldots,z_{m(r)})$, $Z=(|z_{1}|^{2},\ldots,|z_{m(r)}|^{2})$, and $m(r,n)=m(n)-m(r)-n$. \end{constr} \begin{exam} Consider a $5$-gon $P_5^2$, which is embedded into $\mathbb{R}^2$ as shown in the Figure below with facets $F_1,\ldots,F_5$ labeled simply by $\{1,2,3,4,5\}$ respectively. \begin{center} \begin{picture}(90,60)(0,0) {\thicklines \put(25,40){\line(1,0){20}} \put(65,0){\line(0,1){20}} \put(65.1,0){\line(0,1){20}} \put(65,20){\line(-1,1){20}} } \put(20,0){\vector(1,0){60}} \put(25,-5){\vector(0,1){60}} \put(77,-5){$x_1$} \put(18,53){$x_2$} \put(20,21){$1$} \put(45,-5){$2$} \put(67,10){$3$} \put(58,29){$4$} \put(35,41){$5$} \end{picture} \end{center} \vskip 1.0cm Let us take $P^2=P_5^2 = \{ x\in \mathbb{R}^2\;: Ax+b\geqslant 0 \}$, where \[ A^\top = \begin{pmatrix} 1 & 0 & -1 & -1 & 0 \\ 0 & 1 & 0 & -1 & -1 \\ \end{pmatrix},\qquad b^\top = (0,0,2,3,2). \] and $\top$ means transposition of a matrix. Then its moment-angle manifold is embedded into $\mathbb{C}^5$ with coordinates $(z_1,\ldots,z_5)$ and one obtains $\mathcal{Z}_{P^2}$ as the following intersection of quadrics:\\[7pt] 1. $|z_1|^2+|z_3|^2 = 2$,\\[7pt] 2. $|z_1|^2+|z_2|^2+|z_4|^2 = 3$,\\[7pt] 3. $|z_2|^2+|z_5|^2 = 2$.\\[1pt] In $\mathbb{R}^2$ with coordinates $(x_1,x_2)$ the facet $F^{r}=F_1\subset P^2$ is given by $x_1=0$. Obviously, facets of $F_1$ are its intersections with $F_2$ and $F_5$. Let us realize $F_1$ as a line segment $I=\{ x_{2}\in \mathbb{R}^1\;: 0\leqslant x_{2} \leqslant 2 \}$. Then the face embedding $i^{2}_{1} \colon I\subset P^2$ works as follows: $x_{2} \to (0,x_{2})$. 
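As an illustrative sanity check (a Python sketch, not part of the construction; all names here are ad hoc), one can verify numerically that the identities $|z_{i}|^{2}=(Ax+b)_{i}$ defining $\mathcal{Z}_{P^2}$ imply the three quadrics listed above for every $x$:

```python
import random

# Facet data of the pentagon P^2 = {x : Ax + b >= 0} (from the text).
A = [(1, 0), (0, 1), (-1, 0), (-1, -1), (0, -1)]
b = (0, 0, 2, 3, 2)

# On Z_{P^2} one has |z_i|^2 = (Ax + b)_i; check that the three quadrics
# |z1|^2+|z3|^2 = 2, |z1|^2+|z2|^2+|z4|^2 = 3, |z2|^2+|z5|^2 = 2
# then hold identically in x.
random.seed(1)
for _ in range(100):
    x = (random.uniform(0, 2), random.uniform(0, 2))
    y = [a1 * x[0] + a2 * x[1] + bi for (a1, a2), bi in zip(A, b)]
    assert abs(y[0] + y[2] - 2) < 1e-12
    assert abs(y[0] + y[1] + y[3] - 3) < 1e-12
    assert abs(y[1] + y[4] - 2) < 1e-12
```

Since the three linear combinations of the rows of $(A,b)$ eliminate $x$ entirely, the quadrics hold for all $x$, not only over the polytope itself.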
In $\mathbb{C}^2$ with coordinates $(z_2,z_5)$ the moment-angle manifold $\mathcal{Z}_{F_1}$ of the face is a sphere given by the equation $|z_2|^2+|z_5|^2 = 2$. The embedding of $\mathbb{C}^2$ into $\mathbb{C}^5$ that maps $\mathcal{Z}_{F_1}$ into $\mathcal{Z}_{P^2}$ covers the embedding of the face $i^{2}_{1}:\,I\subset P^2$. Moreover, the corresponding composition of polytope embeddings $j \colon I=F_1 \subset P^2 \to \mathbb{R}_{\geqslant}^5$ is given by: $x_{2} \to (0,x_{2},2,-x_{2}+3,-x_{2}+2)$. Consider the projection map $\pi \colon \mathbb{C}^5 \to \mathbb{R}_{\geqslant}^5\; : (z_1,\ldots,z_5) \to (|z_1|^2,\ldots,|z_5|^2 )$ and set $W=\pi^{-1}j(F_1)$. In coordinates one has: \[ W=\{z\in\mathbb{C}^5\,| z_1=0; |z_2|^2=x_{2}; |z_3|^2 = 2; |z_4|^2 =-x_{2}+3; |z_5|^2 =-x_{2}+2\}. \] Here $W$ is the image of $\mathcal Z_{F_1}$ under the map $i_{C}:\mathbb{C}^{2}\rightarrow\mathbb{C}^{5}$. We get explicit formulae for this map, which is an embedding, although a nonlinear one: \[ \mathbb{C}^2 \subset \mathbb{C}^5\colon (z_2,z_5) \to (0,z_2,\sqrt{2},\sqrt{3-|z_2|^2},z_5). \] Its restriction gives the embedding $\hat{i}^{2}_{1}$ of $\mathcal{Z}_{F_1}=\{ (z_2,z_5)\in \mathbb{C}^2\;: |z_2|^2+|z_5|^2 = 2 \}$ into $\mathcal{Z}_{P^2}$. \end{exam} In the notation of the above constructions let us write $K^{r-1}=K_{F^r}$, $K^{n-1}=K_{P^n}$, and $K_{n,r}=K^{n-1}_{\phi_{r}^{n}[m(r)]}$. Then due to Construction~\ref{mapmcxs} the nondegenerate injective simplicial map $\Phi_{r}^{n}:\,K^{r-1}\to K^{n-1}$ gives an embedding $$ \Phi^{n}_{r}(K^{r-1})\subseteq K_{n,r} $$ and is a composition of simplicial maps $K^{r-1}\rightarrow K_{n,r}\rightarrow K^{n-1}$, where the former map is induced by $\Phi^{n}_{r}$ and the latter map is the natural embedding of a full subcomplex into the ambient simplicial complex. Then the following general result holds.
\begin{prop}\label{GenCase} There is a commutative diagram $$ \xymatrix{ & \mathcal Z_F\ar[rr]^{\hat{i}^{n}_{r}} \ar[dl]_{h_F} && \mathcal Z_P\ar[dr]^{h_P} \\ \mathcal Z_{K^{r-1}} \ar[drr] &&&& \mathcal Z_{K^{n-1}},\\ && \mathcal Z_{K_{n,r}} \ar[urr] } $$ where $h_{F}$ and $h_{P}$ are equivariant homeomorphisms, and the composition of the maps in the bottom rows equals $\hat{\Phi}^{n}_{r}$. \end{prop} \begin{proof} The above diagram commutes by applying Construction~\ref{mapmfds} to the equivariant homeomorphism $h_{P}:\,\mathcal Z_{P}\cong\mathcal Z_{K_{P}}$ defined in~\cite[Lemma 3.1.6, formula (37)]{bu-pa00-2}, using the cubical subdivision $\mathcal C(P)\subset I^{m(n)}$. Namely, first observe that for any face $F^{r}\subset P^n$ one has: $\mathcal C(F^r)$ is a cubical subcomplex in $\mathcal C(P^n)$, and the following diagram commutes $$ \begin{CD} \mathcal Z_{F^r} @>\hat{i}_{r}^{n}>> \mathcal Z_{P^n} @>h_{P}>> \mathcal Z_{K^{n-1}} @>>> (\mathbb{D}^2)^{m(n)}\\ @VVr V\hspace{-0.2em} @VV\rho V\hspace{-0.2em} @VV\rho V\hspace{-0.2em} @VV\rho V @.\\ F^{r} @>i_{r}^{n}>>P^n @>j_P>> \cc(K^{n-1}) @>>> I^{m(n)}, \end{CD}\eqno $$ where $r$ is the projection onto the orbit space of the canonical $\mathbb{T}^{m(r)}$-action, $\rho$ is the projection onto the orbit space of the canonical $\mathbb{T}^{m(n)}$-action, and $j_P$ is an embedding of a cubical subdivision $\mathcal C(P)$ of $P$ into $I^{m(n)}$ with image $\cc(K^{n-1})$, see~\cite{bu-pa00-2}. Note that the composition map in the bottom row equals $j_F$, since $j_{P}(\mathcal C(F))=\cc(K^{r-1})\subset\cc(K^{n-1})$, and thus the composition map in the upper row equals $\hat{\Phi}^{n}_{r}h_{F}$, which finishes the proof. \end{proof} We are able to say more in the case of flag polytopes. Namely, the next result holds. \begin{lemm}\label{FlagCommuteEmbed} Suppose $P^n$ is a flag polytope and $F^r\subset P^n$ is its $r$-dimensional face. Then $\Phi^{n}_{r}(K^{r-1})=K_{n,r}$.
Moreover, there is a commutative diagram $$\begin{CD} \mathcal Z_{F^r} @>\hat{i}_{r}^{n}>> \mathcal Z_{P^n}\\ @VVH_{1} V\hspace{-0.2em} @VVH_{2} V @.\\ \mathcal Z_{K^{r-1}} @>\hat{\Phi}_{r}^{n}>>\mathcal Z_{K^{n-1}}, \end{CD}\eqno $$ where $H_{1}, H_{2}$ are homeomorphisms, and $\hat{i}_{r}^{n}$ induces a split epimorphism in cohomology. \end{lemm} \begin{proof} Let us prove the first part of the statement. Due to Construction~\ref{mapmcxs}, for any simple polytope $P^n$ one has the following inclusion of simplicial complexes, both on the vertex set $\phi^{n}_{r}[m(r)]$ in $K^{n-1}$: $$ \Phi^{n}_{r}(K^{r-1})\subseteq K_{n,r}. $$ We need to prove that the reverse inclusion holds. We argue by contradiction; suppose $\{j_{1},\ldots,j_k\}\subset\phi_{r}^{n}[m(r)]$ is such that: $$ \sigma=(j_{1},\ldots,j_k)\in K_{n,r},\,\sigma\notin\Phi^{n}_{r}(K^{r-1}). $$ These conditions are equivalent to the following relations in the face poset of $P^n$: $$ F_{j_{1}}\cap\ldots\cap F_{j_k}\neq\varnothing,\,F^{r}\cap F_{j_1}\cap\ldots\cap F_{j_k}=\varnothing. $$ Thus $F_{j_s}\cap F_{j_t}\neq\varnothing$ for any $s,t\in [k]$, and $$ F_{i_1}\cap\ldots\cap F_{i_{n-r}}\cap F_{j_1}\cap\ldots\cap F_{j_k}=\varnothing. $$ On the other hand, by definition of $\phi_{r}^{n}$ one gets the following relations in the face poset of $F^r$: $$ F_{j_s}\cap F_{i_{1}}\cap\ldots\cap F_{i_{n-r}}=G_{\alpha_s}, $$ where $\phi_{r}^{n}(\alpha_{s})=j_s$ for $s\in [k]$. This implies that $F_{j_s}\cap F_{i_t}\neq\varnothing$ for any $s\in [k], t\in [n-r]$. Moreover, $F_{i_s}\cap F_{i_t}\neq\varnothing$ for any $s,t\in [n-r]$, since $F_{i_{1}}\cap\ldots\cap F_{i_{n-r}}=F^r\neq\varnothing$. Now recall that $P^n$ is a flag polytope and we proved that any two of its facets from the set $$ F_{i_1},\ldots,F_{i_{n-r}},F_{j_1},\ldots,F_{j_k} $$ have a nonempty intersection. It follows that $$ F_{i_1}\cap\ldots\cap F_{i_{n-r}}\cap F_{j_1}\cap\ldots\cap F_{j_k}\neq\varnothing $$ and we get a contradiction.
To prove the second part of the statement, first note that the diagram $$ \xymatrix{ & \mathcal Z_F\ar[rr]^{\hat{i}^{n}_{r}} \ar[dl]_{h_F} && \mathcal Z_P\ar[dr]^{h_P} \\ \mathcal Z_{K^{r-1}} \ar[drr] &&&& \mathcal Z_{K^{n-1}}\ar[dll]_{r_{\phi_{r}^{n}[m(r)]}}\\ && \mathcal Z_{K_{n,r}} } $$ commutes due to Proposition~\ref{GenCase}, where $r_{\phi_{r}^{n}[m(r)]}$ is a retraction map for the induced embedding $\hat{j}_{\phi^{n}_{r}[m(r)]}$ of moment-angle-complexes. Recall that we have already proved the equality $\Phi_{r}^{n}(K^{r-1})=K_{n,r}$ above. Thus the above diagram implies that $\mathcal Z_F$ is a retract of $\mathcal Z_P$. The rest follows from the fact that a homomorphism equivalent to a split epimorphism is itself a split epimorphism, see Corollary~\ref{splitepi}. \end{proof} \begin{coro}\label{FaceInduceSplit} Suppose $F^r$ is an $r$-dimensional face of a simple polytope $P^n$. Then the following conditions are equivalent: \begin{itemize} \item[(1)] For any $\{j_{1},\ldots,j_{k}\}\subset\phi_{r}^{n}[m(r)]$, if $F_{j_{1}}\cap\ldots\cap F_{j_k}\neq\varnothing$, then $F^{r}\cap F_{j_1}\cap\ldots\cap F_{j_k}\neq\varnothing$; \item[(2)] $\Phi_{r}^{n}(K^{r-1})=K_{n,r}$; \item[(3)] $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$ is an embedding of a submanifold having a retraction; \item[(4)] $(i_{r}^{n})_{*}:\,H^*(\mathcal Z_{P^n})\rightarrow H^*(\mathcal Z_{F^r})$ is a split epimorphism of rings. \end{itemize} \end{coro} \begin{proof} The equivalence of conditions (1) and (2) follows directly from Construction~\ref{mapmcxs} and the proof of Lemma~\ref{FlagCommuteEmbed}. The implications $(2)\Rightarrow (3)\Rightarrow (4)$ follow from Corollary~\ref{splitepi}.
Finally, (4) implies (2), since if $\Phi_{r}^{n}(K^{r-1})$ is not a full subcomplex, then there exists $\sigma=(j_{1},\ldots,j_k)\in K_{n,r}$, $\sigma\notin\Phi_{r}^{n}(K^{r-1})$, and $\partial\sigma\subset\Phi_{r}^{n}(K^{r-1})$, which by (4) and Theorem~\ref{BPmultigrad} gives that $H^*(\partial\sigma)$ is a direct summand in $H^*(\mathcal Z_{K^{n-1}})$ (and, in particular, $\beta^{-i,2J}(K^{n-1})>0$, where $J$ is an $m$-tuple of 0's and 1's such that $J_{t}=1$ if and only if $t\in\sigma$; $|J|-i-1=k-2$), a contradiction. \end{proof} Therefore, when $P^n$ is a flag polytope, the map $\Phi_{r}^{n}$ from Construction~\ref{mapmcxs} establishes a simplicial isomorphism between $K_{F^r}$ and the full subcomplex of $K_{P^n}$ on the vertex set $\phi_{r}^{n}[m(r)]$. Thus, in what follows we may identify these simplicial complexes and consider the corresponding simplicial embedding $j_{r}^{n}:\,K_{F^r}\rightarrow K_{P^n}$. Now we want to prove that the converse of Lemma~\ref{FlagCommuteEmbed} also holds. To do this we need the next combinatorial criterion of flagness for simplicial complexes. \begin{lemm}\label{FlagCriterionLemma} A simplicial complex $K$ is flag if and only if $\Link_{K}(v)$ is a full subcomplex in $K$ for any vertex $v\in K$. \end{lemm} \begin{proof} First, suppose $K$ is a flag complex and assume, on the contrary, that some link is not a full subcomplex. Then there exists a vertex $v\in K$ and a simplex $\sigma\in K$ such that $|\sigma|\geq 2$, $\partial\sigma\subseteq\Link_{K}(v)$, and $\sigma\notin\Link_{K}(v)$. Then $v\notin\sigma$, $v\cup\sigma\notin K$, and $v\cup\sigma_{i}\in K$, for any $i$, where $\sigma_{i}$ is a facet of $\sigma$. The latter means that $v\cup\sigma\in MF(K)$ having $|v\cup\sigma|\geq 3$ elements. This contradicts our assumption that $K$ is flag. Now suppose that $K$ is not flag. Then there is a minimal nonface $\{i_{1},\ldots,i_{p}\}$ of $K$ on $p\geq 3$ elements.
Then $\partial\Delta_{\{i_{1},\ldots,i_{p}\}}$ is a full subcomplex in $K$ on the vertex set $\{i_{1},\ldots,i_{p}\}$. Observe that $\Link_{K}(i_1)$ contains all the vertices from $\{i_{2},\ldots,i_{p}\}$, but not the simplex $(i_{2},\ldots,i_{p})\in K$. That is, $\Link_{K}(i_1)$ is not a full subcomplex of $K$ on its vertex set, which finishes the proof. \end{proof} Finally, we obtain the next result. \begin{theo}\label{FlagCriterion} The following statements are equivalent: \begin{itemize} \item[(1)] $P^n$ is a flag polytope; \item[(2)] $\Phi_{r}^{n}(K^{r-1})=K_{n,r}$ for any $F^r\subset P^n$; \item[(3)] $\hat{i}_{r}^{n}:\,\mathcal Z_{F^r}\rightarrow\mathcal Z_{P^n}$ is an embedding of a submanifold having a retraction for any $F^r\subset P^n$; \item[(4)] $(i_{r}^{n})_{*}:\,H^*(\mathcal Z_{P^n})\rightarrow H^*(\mathcal Z_{F^r})$ is a split epimorphism of rings for any $F^r\subset P^n$. \end{itemize} \end{theo} \begin{proof} The conditions (2), (3), and (4) are equivalent by Corollary~\ref{FaceInduceSplit}. Suppose (2) holds. In particular, for any facet $F^{n-1}\subset P^{n}$ one has: $\Phi_{n-1}^{n}(K^{n-2})=K_{n,n-1}$. On the other hand, $\Phi_{n-1}^{n}(K^{n-2})=\Link_{K_P}(v)$, where $v\in K_P$ corresponds to $F^{n-1}\subset P^n$, see Construction~\ref{mapmcxs}. This means that $\Link_{K_P}(v)$ is a full subcomplex in $K_P$ for any $v\in K_P$, therefore, by Lemma~\ref{FlagCriterionLemma}, $K_P$ is a flag simplicial complex, which implies (1). Finally, (1) implies (2) by Lemma~\ref{FlagCommuteEmbed}, which finishes the proof. \end{proof} Obviously, a face of a flag polytope is a flag polytope itself (this can also be seen from the above theorem: the equivalence of (1) and (2) above shows that $K^{r-1}=K_{F^r}$ is flag, being isomorphic to a full subcomplex $K_{n,r}$ in a flag simplicial complex $K^{n-1}=K_{P^n}$). Moreover, any flag polytope $F^r$ is a proper face of another flag polytope: $F^r$ is a facet of $P^{r+1}=F^{r}\times I$.
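The criterion of Lemma~\ref{FlagCriterionLemma} is easy to test on small examples. The following sketch (illustrative only; the helper names are ad hoc and not taken from the text) checks flagness of a simplicial complex both via cliques of its 1-skeleton and via the link condition, and confirms that the two tests agree on the boundary of a square (flag) and the boundary of a triangle (not flag):

```python
from itertools import combinations

def complex_from_facets(facets):
    # build the set of all nonempty faces generated by the given facets
    K = set()
    for f in facets:
        f = frozenset(f)
        for k in range(1, len(f) + 1):
            K.update(frozenset(s) for s in combinations(f, k))
    return K

def vertices(K):
    return set().union(*K) if K else set()

def link(K, v):
    return {s - {v} for s in K if v in s and len(s) > 1}

def full_subcomplex(K, W):
    # all faces of K whose vertices lie in W
    W = frozenset(W)
    return {s for s in K if s <= W}

def is_flag(K):
    # flag <=> every set of pairwise connected vertices spans a face
    V = sorted(vertices(K))
    for k in range(3, len(V) + 1):
        for S in combinations(V, k):
            if frozenset(S) not in K and \
               all(frozenset(p) in K for p in combinations(S, 2)):
                return False
    return True

def all_links_full(K):
    # the link condition of Lemma FlagCriterionLemma
    for v in vertices(K):
        L = link(K, v)
        if L != full_subcomplex(K, vertices(L)):
            return False
    return True

square = complex_from_facets([(1, 2), (2, 3), (3, 4), (4, 1)])  # flag
tri = complex_from_facets([(1, 2), (2, 3), (1, 3)])             # not flag
assert is_flag(square) and all_links_full(square)
assert not is_flag(tri) and not all_links_full(tri)
```

For the triangle boundary the set $\{1,2,3\}$ is a minimal nonface on three vertices, and accordingly the link of each of its vertices fails to be a full subcomplex.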
The following question naturally arises here: for a given flag simple polytope $F^r$, what are the obstructions on an (indecomposable) flag simple polytope $P^n$ to have $F^r$ as its proper face? We get a topological obstruction by means of Theorem~\ref{FlagCriterion} as follows. \begin{coro}\label{FaceObstruction} For a flag polytope $F^r$ to be a proper face of a (flag, indecomposable) polytope $P^n$ it is necessary that $\mathcal Z_{F^r}$ is a retract of $\mathcal Z_{P^n}$. \end{coro} The next example illustrates the nonflag case. \begin{exam}\label{prism} Consider a triangular prism $P^3$ with $m(P^3)=5$, and denote its triangular facets by $F_1$ and $F_5$. Consider its quadrangular facet $F_2$. Then $\phi_{2}^{3}[m(2)]=\{1,3,4,5\}\subset [m(3)]=[5]$, the nerve complex of $F_2$ is the boundary of a square, and the full subcomplex of $K_{P^3}$ on the vertex set $\{1,3,4,5\}$ is the boundary of the square together with the diagonal $\{3,4\}$. Indeed, there is no retraction of $S^{3}\times S^{5}$ to $S^{3}\times S^3$. On the other hand, for any triangular facet, say, for $F_{1}$, one has $\Phi_{2}^{3}(K_{F_1})=K_{3,2}$ and thus the retraction and the split epimorphism in cohomology both take place. \end{exam} \begin{rema} For any face embedding $i_{r}^{n}:\,F^{r}\to P^n$ there is an induced embedding of quasitoric manifolds $M^{2r}(F,\Lambda_{F})\to M^{2n}(P,\Lambda)$, which is a composition of characteristic submanifold embeddings, induced by: $$ F^{r}=F_{i_1}\cap\ldots\cap F_{i_{n-r}}\subset F_{i_2}\cap\ldots\cap F_{i_{n-r}}\subset\ldots\subset F_{i_{n-r}}\subset P.
$$ An alternative description of the induced map of quasitoric manifolds can be given similarly to that in Construction~\ref{mapmfds}, using the definition of a quasitoric manifold as a quotient space of $T^{n}\times P^{n}$, introduced in~\cite{DJ}.\\ However, the induced embeddings of quasitoric manifolds, in general, do not have retraction maps, even in the case when $P^n$ is a flag polytope, as the example below shows. \end{rema} \begin{exam} Consider a $5$-gon $P_5^2$, which is embedded into $\mathbb{R}^2$ as shown in the Figure below with facets $F_1,\ldots,F_5$ labeled simply by $\{1,2,3,4,5\}$ respectively. \begin{center} \begin{picture}(90,60)(0,0) {\thicklines \put(25,40){\line(1,0){20}} \put(65,0){\line(0,1){20}} \put(65.1,0){\line(0,1){20}} \put(65,20){\line(-1,1){20}} } \put(20,0){\vector(1,0){60}} \put(25,-5){\vector(0,1){60}} \put(77,-5){$x_1$} \put(18,53){$x_2$} \put(20,21){$1$} \put(45,-5){$2$} \put(67,10){$3$} \put(58,29){$4$} \put(35,41){$5$} \end{picture} \end{center} \vskip 1.0cm Let us take $P^2=P_5^2 = \{ x\in \mathbb{R}^2\;: Ax+b\geqslant 0 \}$, where \[ A^\top = \begin{pmatrix} 1 & 0 & -1 & -1 & 0 \\ 0 & 1 & 0 & -1 & -1 \\ \end{pmatrix},\qquad b^\top = (0,0,2,3,2). \] and $\top$ means transposition of a matrix. By the Davis--Januszkiewicz theorem, see~\cite{DJ}, one has the following description of the cohomology ring of $M=M(P_{5}^{2},A^{\top})$: $$ H^*(M(P_{5}^{2},A^{\top}))\cong\mathbb{Z}[v_{1},v_{2},v_{3},v_{4},v_{5}]/I, $$ where $$ I=(v_{1}-v_{3}-v_{4},v_{2}-v_{4}-v_{5},v_{1}v_{3},v_{1}v_{4},v_{2}v_{4},v_{2}v_{5},v_{3}v_{5}), $$ which implies that $v_{1}^{2}=v_{2}^{2}=0$ and $v_{3}^{2}=v_{4}^{2}=v_{5}^{2}\neq 0$ in $H^*(M)$ (if all the squares of the 2-dimensional generators were zero, then all the monomials of degree greater than 2 in $H^*(M)$ would be zero, which would contradict $H^{4}(M^4)\cong\mathbb{Z}$).
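Explicitly, the computation behind this claim is a routine substitution of the linear relations $v_{1}=v_{3}+v_{4}$, $v_{2}=v_{4}+v_{5}$ combined with the monomial relations in $I$:

```latex
\begin{aligned}
v_{1}^{2}&=v_{1}(v_{3}+v_{4})=v_{1}v_{3}+v_{1}v_{4}=0, &
v_{2}^{2}&=v_{2}(v_{4}+v_{5})=v_{2}v_{4}+v_{2}v_{5}=0,\\
v_{3}^{2}&=v_{3}(v_{1}-v_{4})=-v_{3}v_{4}, &
v_{5}^{2}&=v_{5}(v_{2}-v_{4})=-v_{4}v_{5},\\
v_{4}^{2}&=v_{4}(v_{1}-v_{3})=-v_{3}v_{4}, &
v_{4}^{2}&=v_{4}(v_{2}-v_{5})=-v_{4}v_{5},
\end{aligned}
```

so that $v_{3}^{2}=v_{4}^{2}=v_{5}^{2}=-v_{3}v_{4}=-v_{4}v_{5}$ in $H^*(M)$.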
Therefore, there is no retraction map for the embedding of quasitoric manifolds induced by the face embedding $i_{1}^{2}:\,F_{3}\to P_5$, since otherwise there would exist an induced split ring epimorphism in cohomology $H^{2}(M^4)\to H^{2}(M_{F_3})$, which contradicts $v_{3}^{2}\neq 0$ in $H^*(M)$ (recall that $M_{F_3}\cong\mathbb{C}P^1$, so the square of its 2-dimensional cohomology generator vanishes). \end{exam} \section{Applications I} Now we recall the definition of a higher Massey product in the cohomology of a differential graded algebra, see the exposition of basic definitions in the work of Babenko and Taimanov~\cite{BaTa}; more details can be found in~\cite[Appendix $\Gamma$]{BP04}. \begin{defi}\label{DefiningSystem} Suppose $(A,d)$ is a differential graded algebra, $\alpha_{i}=[a_{i}]\in H^{*}[A,d]$ and $a_{i}\in A^{n_{i}}$ for $1\leq i\leq k$. Then a \emph{defining system} for $(\alpha_{1},\ldots,\alpha_{k})$ is a $(k+1)\times (k+1)$-matrix $C$ such that the following conditions hold: \begin{itemize} \item[{(1)}] $c_{i,j}=0$, if $i\geq j$, \item[{(2)}] $c_{i,i+1}=a_{i}$, \item[{(3)}] $a\cdot E_{1,k+1}=dC-\bar{C}\cdot C$ for some $a=a(C)\in A$, where $\bar{c}_{i,j}=(-1)^{\deg c_{i,j}}\cdot c_{i,j}$ and $E_{1,k+1}$ is a $(k+1)\times (k+1)$-matrix with all elements equal to zero, except for that in the position $(1,k+1)$, which equals 1. \end{itemize} It is easy to see that conditions (1)-(3) above imply $d(a)=0$ and $a\in A^{m}$, $m=n_{1}+\ldots+n_{k}-k+2$. A $k$-fold \emph{Massey product} $\langle\alpha_{1},\ldots,\alpha_{k}\rangle$ is said to be \emph{defined} if there exists a defining system $C$ for it.\\ If so, this Massey product is defined to be the set of all cohomology classes $\alpha=[a(C)]$, as $C$ ranges over all defining systems. A defined Massey product is called \emph{trivial}, or \emph{vanishing}, if $[a(C)]=0$ for some defining system $C$.
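For example, for $k=3$ a defining system amounts to a choice of cochains $c_{1,3}$, $c_{2,4}$ (and $c_{1,4}$) such that, with the sign conventions of condition (3),

```latex
dc_{1,3}=\bar{c}_{1,2}c_{2,3}=\bar{a}_{1}a_{2},\qquad
dc_{2,4}=\bar{c}_{2,3}c_{3,4}=\bar{a}_{2}a_{3},
```

so the products $\alpha_{1}\alpha_{2}$ and $\alpha_{2}\alpha_{3}$ must vanish, and then $a(C)=dc_{1,4}-\bar{c}_{1,2}c_{2,4}-\bar{c}_{1,3}c_{3,4}$ represents a class of the classical triple Massey product $\langle\alpha_{1},\alpha_{2},\alpha_{3}\rangle$.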
\end{defi} We next recall the construction of a sequence of 2-truncated cubes $\mathcal Q=\{Q^n|\,n\geq 0\}$, for which $\mathcal Z_{Q^{n}}$ was proved in~\cite{L2} to have a nontrivial $n$-fold Massey product in cohomology for all $n\geq 2$. \begin{defi}[\cite{L2}]\label{2truncMassey} Set $Q^0$ to be a point and $Q^1\subset\R^1$ to be a segment $[0,1]$. Suppose $I^{n}=[0,1]^n, n\geq 2$ is an $n$-dimensional cube with facets $F_{1},\ldots,F_{2n}$ such that $F_{i},1\leq i\leq n$ contains the origin 0, $F_{i}$ and $F_{n+i}$ for $1\leq i\leq n$ are parallel. Then its face ring is the following one: $$ \ko[I^n]=\ko[v_{1},\ldots,v_{n},v_{n+1},\ldots,v_{2n}]/I_{I^n}, $$ where $I_{I^n}=(v_{1}v_{n+1},\ldots,v_{n}v_{2n})$. Consider the polynomial ring $$ \mathbb{Z}[v_{1},\ldots,v_{2n},w_{k',n+k'+i'}|\,1\leq i'\leq n-2, 1\leq k'\leq n-i'] $$ and the following square free monomial ideal $$ I=(v_{k}v_{n+k+i},w_{k',n+k'+i'}v_{n+k'+l},w_{k',n+k'+i'}v_{p},w_{k',n+k'+i'}w_{k'',n+k''+i''}), $$ in the above ring, where $v_{j}$ corresponds to $F_{j}$ for $1\leq j\leq 2n$, and $$ 0\leq i\leq n-2, 1\leq k\leq n-i, 1\leq i',i''\leq n-2, 1\leq k'\leq n-i', $$ $$ 1\leq k''\leq n-i'', 1\leq p\neq k'\leq k'+i', 0\leq l\neq i'\leq n-2, $$ $$ k'+i'=k''\,\text{or }k''+i''=k'. $$ Let us define $Q^n\subset\R^n$ to be a simple polytope such that $I_{Q^n}=I$. Observe that $Q^n$ has a natural realization as a 2-truncated cube and, furthermore, its combinatorial type does not depend on the order of face truncations of $I^n$. \end{defi} Below we give an explicit description for $Q^n\subset\R^n$ and the face embeddings $i_{n-1}^{n}:\,Q^{n-1}\to Q^n$ in the ambient Euclidean spaces, see Example~\ref{faceQ}. The next result on higher Massey products in cohomology of moment-angle manifolds holds. \begin{theo}[{\cite{L2}}]\label{mainMassey} Let $\alpha_i\in H^{3}(\mathcal Z_{Q^n})$ be represented by a 3-cocycle $v_{i}u_{n+i}\in R^{-1,4}(Q^n)$ for $1\leq i\leq n$ and $n\geq 2$. 
Then all Massey products of consecutive elements from $\alpha_{1},\ldots,\alpha_{n}$ are defined and the whole $n$-product $\langle\alpha_{1},\ldots,\alpha_{n}\rangle$ is nontrivial. \end{theo} \begin{theo}\label{Qdfpnm} For any $Q^n\in\mathcal Q$ and any $0\leq r<n$ there exists a face $F^{r}\subset Q^n$ such that $F^r$ is combinatorially equivalent to $Q^r$. \end{theo} \begin{proof} To prove the statement it suffices to show that $Q^{n-1}$ is a facet of $Q^n$ for $n\geq 1$. Indeed, consider the facet $F_{n-1}$ of $Q^n$. Let us show that $F_{n-1}=Q^{n-1}$. By definition of $Q^n$, $F_{n-1}$ is obtained from a facet $G_{n-1}\simeq I^{n-1}$ of $I^{n}$ with facets $G_{i},1\leq i\leq 2n$ (recall that $G_{i}\cap G_{n+i}=\varnothing$ for $1\leq i\leq n$ as in Definition~\ref{2truncMassey}) by consecutive cutting off faces of codimension 2 and, moreover, is also a 2-truncated cube, see~\cite[\S1.6]{TT}. The latter faces are the following ones (here we use the fact that cutting off a facet does not change combinatorial type of a polytope and that $G_{n-1}\cap G_{2n-1}=\varnothing$): $$ G_{n-1}\cap G_{i}\cap G_{j}, $$ where $1\leq i\leq n-2$, $n+2\leq j\leq 2n$, $j\neq 2n-1$. It remains to observe that the Stanley-Reisner ideals $I_{F_{n-1}}$ and $I_{Q^{n-1}}$ are isomorphic by Definition~\ref{2truncMassey} and $m(Q^{n-1})=m(F_{n-1})$. Therefore, $\mathbb{Z}[F_{n-1}]\cong\mathbb{Z}[Q^{n-1}]$, which implies that the resulting polytope $F_{n-1}$ is combinatorially equivalent to $Q^{n-1}$. Similarly, $F_{2n-1}\simeq Q^{n-1}$. Moreover, one can easily see from the above argument, that $F_{r}\cap F_{r+1}\cap\ldots\cap F_{n-1}\simeq Q^{r}$. \end{proof} Now Theorem~\ref{FlagCriterion} and the above theorem imply the following statement holds. \begin{coro}\label{AllProducts} There exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{Q^n})$ for each $k, 2\leq k\leq n$. 
\end{coro} \begin{rema} Note that it was proved in~\cite[Theorem 4.1]{L3} that the full subcomplex in $K=K_{Q^n}$ on the vertex set $\{1,\ldots,r-1,n,n+1,\ldots,n+r-1,2n\}$ is combinatorially equivalent to the full subcomplex of $K_{Q^r}$ on the vertex set $[2r]$ for each $r, 2\leq r\leq n$, which provides a different proof of the above statement. \end{rema} \begin{exam}\label{faceQ} Let us give explicit formulae for the embedding $\mathcal Z_{Q^r}\rightarrow\mathcal Z_{Q^n}$, where $Q^{r}\simeq F^{r}=F_{r}\cap\ldots\cap F_{n-1}$, using Construction~\ref{mapmfds2}. Note that we can choose the basis in $\R^n$ such that $A_{Q^n}^{T}=(E_{n},-E_{n},B^T)$, where $B$ is an $(m(n)-2n)\times n$-matrix such that the rows of $B$ have the form $e_{k}-e_{k+i},1\leq i\leq n-2,1\leq k\leq n-i$; they correspond to the codimension-2 face truncations, which are performed on $I^n$ to obtain $Q^n$, see Definition~\ref{2truncMassey}. It is easy to see from the proof of Theorem~\ref{Qdfpnm} that a row of $\tilde{A}_{I}$ has either the form $e_{k}-e_{k+i}$ with $1\leq k\leq n-2$ and $2\leq k+i\leq n$, $k+i\neq n-1$, or $e_k, k\in\{1,\ldots,r-1,n\}$ and $-e_{k}$ for $k\in\{n+1,\ldots,n+r-1,2n\}$. Moreover, the map $i^{n}_{r}:\,\R^r\to\R^n$ sends $(x_{1},\ldots,x_{r})$ to $(x_{1},\ldots,x_{r},0_{n-r})$. Suppose $n=4$, $r=3$. Then we can give $W\subset\C^{13}$ by the following formulae: $$ (|z_{1}|^{2},|z_{4}|^{2})=(x_{1},x_{4}), (|z_{5}|^{2},|z_{8}|^{2})=(-x_{1}+1,-x_{4}+1), $$ $$ (z_{2},z_{3},z_{6},z_{7})=(0,0,0,0), $$ $$ |z_{9}|^{2}=x_{1}-x_{2}+1,|z_{10}|^{2}=x_{2}-x_{4}+1,|z_{11}|^{2}=x_{2}-x_{3}+1, $$ $$ |z_{12}|^{2}=x_{3}-x_{4}+1,|z_{13}|^{2}=x_{1}-x_{3}+1. $$ \end{exam} \section{Applications II} Starting with any indecomposable flag polytope $F^r$, we can construct a sequence of indecomposable flag polytopes $\{P^n|\,n\geq r\}$ with $P^{r}=F^{r}$ such that $P^{k}$ is a face of $P^{l}$ for any $l>k\geq r$.
\begin{constr}\label{familyFlag} Given a flag polytope $F^r$, let us define a sequence of flag polytopes $\mathcal P(F)=\{P^n|\,n\geq r\}$ as follows. Set $P^{r}=F^{r}$ and, if $P^n$ is already constructed, define $P^{n+1}$ to be obtained from $P^{n}\times I$ by cutting off a certain codimension-2 face $F_{i(n)}\times\{1\}\subset P^{n}\times\{1\}\subset P^{n}\times I$ ($F_{i(n)}$ is a facet of $P^n$). Observe that the resulting polytope is again flag and has $P^n$ as its facet $P^{n}\times\{0\}\subset P^{n+1}$ for any $n\geq r$. Obviously, the combinatorial type of $P^n$ for $n>r$ depends, in general, not only on that of $F^r$, but also on the choice of the facet $F_{i(n)}$ of $P^{n}$. \end{constr} The above construction introduces a new operation $\fc$ (`face cut') on flag simple polytopes; we have $P^{n+1}=\fc(P^n)$ for all $n\geq r$. We denote by $Q=\fc^{k}(P)$ a polytope obtained from $P$ by performing $k$ consecutive operations described in Construction~\ref{familyFlag}; note that $P=\fc^{0}(P)$ and $\dim Q=\dim P+k$. As an application of the above construction one immediately obtains the next result. \begin{coro}\label{MasseySeq} Suppose $F^r$ is a flag polytope and there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{F^r})$. Then there is a sequence of polytopes $\mathcal P=\{P^n|\,n\geq r\}$ such that there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ for all $n\geq r$. \end{coro} \begin{proof} This is a direct application of Theorem~\ref{FlagCriterion} to the sequence of flag polytopes $\mathcal P(F^r)=\{P^n|\,n\geq r\}$ defined in Construction~\ref{familyFlag}.
\end{proof} \begin{defi} A sequence of indecomposable flag polytopes $\mathcal P=\{P^n\}$ will be called \emph{a sequence of polytopes with strongly connected Massey products} if there exists a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ with $k\to\infty$ as $n\to\infty$ and, moreover, the existence of a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^n})$ implies the existence of a nontrivial $k$-fold Massey product in $H^*(\mathcal Z_{P^l})$ for any $l>n$. \end{defi} \begin{defi} We say that sequences of polytopes $\mathcal P_{1}=\{P_{1}^{n}\}$ and $\mathcal P_{2}=\{P_{2}^{n}\}$ are \emph{combinatorially different} if for any $N\geq 0$ there exists $n>N$ such that $P_{1}^{n}$ and $P_{2}^{n}$ are not combinatorially equivalent. \end{defi} \begin{prop}\label{MasseySeqInfinity} There exists an infinite set $\mathcal S=\{\mathcal P_{\alpha}|\,\alpha\in I\}$ of sequences of polytopes with strongly connected Massey products such that any $\mathcal P_{\alpha}\in\mathcal S$ is combinatorially different from all other elements of $\mathcal S$. \end{prop} \begin{proof} Observe that if $P^{k}, P^{l}\in\mathcal P(F)$ ($k<l$) for some flag polytope $F$, then $P^{k}$ is a proper face of $P^l$ and, furthermore, $\fc(P^k)=P^{k+1}$ is a face of $P^l\subset\fc(P^l)$. For any sequence $s\in\{0,1\}^{\infty}$ not stabilizing at 0, define a sequence of flag polytopes $\mathcal P_{s}=\{P_{s}^{n}|\,n\geq 4\}$ such that $P_{s}^{n}=Q^{n}$, if $s_{n}=1$, and $P_{s}^{n}=\fc^{n-k(n)}(Q^{k(n)})$, if $s_{n}=0$. Here $k(n)=1$, if $s_{1}=\ldots=s_{n}=0$, and $k(n)=\max\,\{m|\,m<n, s_{m}=1\}$, otherwise. Then Theorem~\ref{FlagCriterion} implies that $\mathcal P_{s}$ is a sequence of polytopes with strongly connected Massey products. Moreover, a sequence $\mathcal P_{s}$ is combinatorially different from $\mathcal P_{s'}$ (for $s\neq s'$) if $|s-s'|$ does not stabilize at 0, since $m(\fc(P))=m(P)+3$ for any polytope $P$ and $m(Q^n)=\frac{n(n+3)}{2}-1$.
\end{proof} Another way to get different sequences of flag polytopes with higher nontrivial Massey products in the cohomology of their moment-angle manifolds is introduced in the next statement. \begin{theo}\label{mainMasseysequence} There exist infinitely many sequences $\mathcal P_{k}=\{P_{k}^n\}, k\geq 1$ of indecomposable flag simple polytopes such that \begin{itemize} \item[(a)] If $P\in\mathcal P_{i}$ and $Q\in\mathcal P_{j}$ for $i\neq j$, then $P$ and $Q$ are not combinatorially equivalent; \item[(b)] For any $k\geq 1$ and $r\geq 2$ there exists $N=N(k,r)$ such that there is a nontrivial $l$-fold Massey product in $H^*(\mathcal Z_{P_{k}^n})$, for all $2\leq l\leq r, n\geq N$. \end{itemize} \end{theo} \begin{proof} Consider the following sequences of flag polytopes: $\mathcal P_{k}=\{\fc^{k-1}(Q^{n})|\,n\geq 3\}$, $k\geq 1$. To prove (a) assume the contrary; then the following equality for the numbers of facets of combinatorially equivalent polytopes of the same dimension holds: $$ m(\fc^{k}(Q^l))=m(\fc^{l}(Q^k)). $$ As $m(Q^n)=\frac{n(n+3)}{2}-1$ for any $n\geq 2$ and $m(\fc(P))=m(P)+3$, the above formula implies $(l-k)(l+k-3)=0$. If $l=k$, then both polytopes belong to the same sequence $\mathcal P_{l+1}$, and if $l+k=3$, then one of the dimensions $l$ or $k$ is smaller than 3. In both cases we get a contradiction with the definition of the sequences $\mathcal P_k$, which finishes the proof of (a). Applying Construction~\ref{familyFlag}, one sees that $Q^n$ is a face of any polytope of dimension greater than or equal to $n+k-1$ in $\mathcal P_k$. Therefore, statement (b) (with $N(k,r)=r+k-1$) follows from Theorem~\ref{FlagCriterion} and Corollary~\ref{AllProducts}, which finishes the whole proof. \end{proof}
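The facet-count argument in the proof of (a) can also be checked mechanically. The following short script (an illustrative aside, not part of the paper, using only the formulas $m(Q^n)=\frac{n(n+3)}{2}-1$ and $m(\fc(P))=m(P)+3$ quoted above) confirms that $m(\fc^{k}(Q^l))=m(\fc^{l}(Q^k))$ holds precisely when $(l-k)(l+k-3)=0$:

```python
# Illustrative check of the facet-count identity used in the proof of (a),
# assuming only m(Q^n) = n(n+3)/2 - 1 and m(fc(P)) = m(P) + 3.

def m_Q(n):
    """Number of facets of Q^n."""
    return n * (n + 3) // 2 - 1

def m_fc_iter(k, facets):
    """Facets after k applications of fc: each face cut adds 3 facets."""
    return facets + 3 * k

for k in range(3, 40):
    for l in range(3, 40):
        equal = m_fc_iter(k, m_Q(l)) == m_fc_iter(l, m_Q(k))
        # the algebraic criterion derived in the proof
        assert equal == ((l - k) * (l + k - 3) == 0), (k, l)
```

For $k,l\geq 3$ the factor $l+k-3$ is positive, so the loop only ever finds equality on the diagonal $k=l$, in agreement with the contradiction argument.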
\section{Introduction} Random linear mappings play a central role in dimension reduction, compressed sensing, and numerical linear algebra due to their propensity to preserve the geometry of a given set. The performance of a random linear mapping $A\in\mathbb{R}^{m\times n}$ is often determined by the uniform concentration bound of $\frac{1}{\sqrt{m}}\|Ax\|_2$ around $\|x\|_2$ for all vectors in a set of interest (in other words, how close the map $\frac{1}{\sqrt{m}}A$ is to being an isometry on the set). This is now well understood via standard techniques in the Gaussian random matrix case \cite{Schechtman2006, vershynin_2018,gordon1988milman}. However, in many applications, non-Gaussian random mappings are more useful because of their computational/storage benefits or simply the difficulty of generating Gaussian matrices using sampling devices \cite{krahmer2015compressive}. For example, sparse or structured random matrices are preferred in both dimension reduction \cite{Dirksen2016dimension} and random sketching in numerical linear algebra \cite{achlioptas2003database, kane2014sparser, 2015pilanci, 2014woodruff} since they provide more efficient matrix multiplications than dense and unstructured matrices such as Gaussian ones. Certain formulations in compressed sensing also naturally require random matrices such as randomly subsampled Fourier measurements \cite{Krahmer2014} or Bernoulli random matrices \cite{saab2018compressed}. There has been a series of recent works \cite{Dirksen2016dimension, liaw2017simple,2017oymak} demonstrating the effectiveness of random mappings outside the Gaussian setup. Unlike the Gaussian case, in which we have rotation invariance, non-Gaussian setups require more sophisticated arguments to address various new technical challenges. In this article, we will focus on sub-Gaussian random mappings. Let us recall some definitions.
For $\alpha \geq 1$, the $\psi_\alpha$-norm (which is the Orlicz norm taken with respect to the function $\exp(x^\alpha) -1$) of a random variable $X$ is defined as $$ \| X\|_{\psi_\alpha} := \inf\{ t>0: \mathbb{E} \exp(|X|^\alpha/t^\alpha) \leq 2\}. $$ In particular, $\alpha=2$ gives the sub-Gaussian norm and $\alpha=1$ gives the sub-exponential norm. The random variable $X$ is called sub-Gaussian if $\| X\|_{\psi_2}<\infty$ and called sub-exponential if $\| X\|_{\psi_1}<\infty$. For sub-Gaussian random variables, the $\psi_2$-norm roughly measures how fast the tail distribution decays -- the larger the $\psi_2$-norm, the heavier the tail. We will repeatedly use the fact that $\|X\|_{\psi_2}\leq K$ if and only if the tail probability $\P(|X|\geq t)$ is bounded by the tail of a Gaussian with standard deviation on the order of $K$. A precise statement of this, along with some other properties of the $\psi_\alpha$-norm, can be found in \Cref{appendix_psi_alpha_properties}. The sub-Gaussian norms for many random variables can be calculated by looking at the moment generating function of their squares. For example, the sub-Gaussian norm for $\mathbf{Normal}(0,\sigma^2)$ is $\sqrt{\frac{8}{3}}\,\sigma$; for $\mathbf{Bernoulli}(p)$ it is $\log^{-\frac{1}{2}}\left( 1+p^{-1} \right)$; for a Rademacher random variable it is $\log^{-\frac{1}{2}}(2)$; and for any bounded (by $M$) random variable it is no more than $M\log^{-\frac{1}{2}}(2)$. An $\mathbf{Exponential}(\lambda)$ random variable is not sub-Gaussian, but it has sub-exponential norm $\frac{2}{\lambda}$. For a random vector $a\in \mathbb{R}^n$ we say $a$ is sub-Gaussian if $$ \|a\|_{\psi_2}:=\sup_{x\in \S^{n-1}} \|\langle a,x\rangle \|_{\psi_2}<\infty, $$ and say $a$ is isotropic if $$ \mathbb{E} aa^T=I_n.$$ We say a random matrix $A\in \mathbb{R}^{m\times n}$ is isotropic and sub-Gaussian if its rows are independent, isotropic and sub-Gaussian random vectors in $\mathbb{R}^n$.
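As a numerical aside, the closed-form $\psi_2$-norms listed above are easy to verify: $t\mapsto \mathbb{E}\exp(X^2/t^2)$ is decreasing in $t$, so one can bisect it against the threshold $2$ using the exact moment generating function of $X^2$ in each case (an illustrative sketch, not part of the paper):

```python
import math

def psi2_norm(mgf_sq, lo, hi, tol=1e-12):
    # Smallest t with E exp(X^2/t^2) <= 2; mgf_sq(t) = E exp(X^2/t^2)
    # is decreasing in t, so bisect mgf_sq(t) against the threshold 2.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mgf_sq(mid) > 2.0:
            lo = mid
        else:
            hi = mid
    return hi

# Rademacher: E exp(X^2/t^2) = exp(1/t^2); root at 1/sqrt(log 2)
rad = psi2_norm(lambda t: math.exp(1.0 / t**2), 0.5, 5.0)

# Bernoulli(p): E exp(X^2/t^2) = (1-p) + p*exp(1/t^2); root at log^{-1/2}(1+1/p)
p = 0.3
ber = psi2_norm(lambda t: (1 - p) + p * math.exp(1.0 / t**2), 0.1, 5.0)

# Normal(0, sigma^2): E exp(X^2/t^2) = (1 - 2 sigma^2/t^2)^{-1/2} for
# t > sigma*sqrt(2); root at sigma*sqrt(8/3)
sigma = 1.7
nrm = psi2_norm(lambda t: (1.0 - 2.0 * sigma**2 / t**2) ** -0.5,
                sigma * math.sqrt(2.0) * 1.0001, 10.0 * sigma)
```

The three roots recovered by the bisection agree with $\log^{-1/2}(2)$, $\log^{-1/2}(1+p^{-1})$, and $\sqrt{8/3}\,\sigma$ to within the bisection tolerance.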
The sub-Gaussian parameter of $A$ is defined as $$ K:=\max_{1\leq i\leq m}\{\|A_i\|_{\psi_2}: A_i^T \text{ is the $i$-th row of } A\}. $$ For a random matrix $A\in \mathbb{R}^{m\times n}$, the isotropic condition guarantees that $\frac{1}{\sqrt{m}}A$ preserves the Euclidean norm in expectation. Some examples of isotropic and sub-Gaussian matrices include matrices with independent and sub-Gaussian entries $A_{ij}$ satisfying $\mathbb{E} A_{ij}=0$ and $\mathbb{E} A_{ij}^2=1$, uniformly subsampled (with replacement and after proper normalization) rows of an orthonormal basis or a tight frame, etc. \cite{vershynin_2018}. In the cases of Bernoulli matrices or sparse ternary matrices, which generalize the database-friendly mappings in \cite{achlioptas2003database}, the sub-Gaussian parameter can depend on the signal dimension $n$ if the probability of an entry being nonzero is $n$-dependent. In the line of research regarding sub-Gaussian random mappings, Liaw et al. \cite{liaw2017simple} showed that for an isotropic and sub-Gaussian mapping $A$ with sub-Gaussian parameter $K$ and any $T\subset\mathbb{R}^n$, we have with high probability, \begin{equation} \label{eq_liaw} \sup_{x\in T} \left|\frac{1}{\sqrt{m}}\|Ax\|_2-\|x\|_2\right| \leq \frac{K^2\cdot O(w(T)+\mathrm{rad}(T))}{\sqrt{m}}. \end{equation} Here $w(T)$ is the Gaussian width given by \[w(T):=\mathbb{E} \sup_{y\in T} \langle g, y \rangle \text{ where }g\sim \mathbf{Normal}(0,I_n),\] and $\mathrm{rad}(T)$ is given by $$\mathrm{rad}(T):=\sup_{y\in T} \|y\|_2,$$ which is the radius when $T$ is symmetric. The Gaussian width measures the complexity of a set. In particular, denoting $\text{cone}(T):=\{tx:t\geq0, x\in T\}$, the quantity $w^2(\text{cone}(T)\cap \S^{n-1})$ is a meaningful proxy for dimension \cite{candes2014mathematics, 2017oymak}. Generally $\text{rad}(T)$ is also dominated by $w(T)$.
For example, if $0\in T$, then by Jensen's inequality, $$w(T) = \mathbb{E} \sup_{y\in T} \left( \max\{ \inp{g}{y},0\} \right) \geq \sup_{y\in T} \mathbb{E} \max\{ \inp{g}{y},0\} = \text{rad}(T)/\sqrt{2\pi}.$$ In this case, \eqref{eq_liaw} implies that with high probability, $\frac{1}{\sqrt{m}}A$ is a near isometry on $T$ whenever $m\geq CK^4 w^2(T)$ for some constant $C$. The dependency on $w(T)$ in \eqref{eq_liaw} is optimal. This is easy to see when $m=1$ and $A$ has i.i.d. $\mathbf{Normal}(0,1)$ entries. But when it comes to the dependency on the sub-Gaussian parameter $K$, whether the $K^2$ factor can be improved is a question raised but left unanswered in \cite{liaw2017simple}. Other important works regarding this type of bound are either not explicit \cite{2005Klartag, 2017oymak} or at least of the same order $K^2$ \cite{2007Mendelson, Dirksen2016dimension, 2015dirksen}. In this article, we refine this dependency on the sub-Gaussian parameter from $K^2$ to the optimal $K\sqrt{\log K}$. This enhances the concentration bound substantially when the sub-Gaussian mapping is not well-behaved, for example, when $K$ increases together with the signal dimension. We also relax the row-independence requirement by considering random mappings in the form of $BA$ where $B$ is an arbitrary matrix and $A$ is mean zero, isotropic and sub-Gaussian. The mean zero assumption is an additional requirement compared to the assumptions for \eqref{eq_liaw}; it is not needed when $B$ is diagonal, but it is necessary for arbitrary $B$. Our bound is broadly applicable since it only requires these properties from the random matrix $A$ without any other assumptions. Now we state our main theorem. In the following, $\|B\|_F$ and $\|B\|$ denote the Frobenius and operator norms of $B$, respectively. We say the matrix $B\in \mathbb{R}^{l\times m}$ is diagonal if its only possible non-zero entries are $B_{ii}$ where $1\leq i\leq \min\{l,m\}$.
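As a numerical aside, for finite $T$ both quantities in the bound are easy to approximate by Monte Carlo; the sketch below (illustrative only, with arbitrary test parameters, not part of the paper) checks the Jensen bound $w(T)\geq \mathrm{rad}(T)/\sqrt{2\pi}$ above as well as the classical fact $w(\S^{n-1})=\mathbb{E}\|g\|_2\approx\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_points, trials = 20, 50, 20000

# A finite test set T containing the origin (rows are the points of T).
T = np.vstack([np.zeros(n), rng.normal(size=(num_points, n))])
rad_T = np.linalg.norm(T, axis=1).max()

# Monte Carlo estimate of the Gaussian width w(T) = E sup_{y in T} <g, y>.
g = rng.normal(size=(trials, n))
w_T = (g @ T.T).max(axis=1).mean()

# Gaussian width of the unit sphere: w(S^{n-1}) = E ||g||_2, close to sqrt(n).
w_sphere = np.linalg.norm(g, axis=1).mean()
```

With these parameters the estimate `w_T` comfortably exceeds `rad_T / sqrt(2*pi)`, and `w_sphere` lands within a small additive error of $\sqrt{n}$.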
\begin{theorem} \label{theorem_mainBA} Let $B\in\mathbb{R}^{l \times m}$ be a fixed matrix, let $A\in\mathbb{R}^{m\times n}$ be a mean zero, isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$, and let $T\subset \mathbb{R}^n$ be a bounded set. Then $$ \mathbb{E}\sup_{x\in T} \left|\|BAx\|_2-\|B\|_F\|x\|_2\right|\leq CK\sqrt{\log K}\,\|B\|\left[w(T)+\mathrm{rad}(T)\right], $$ and for any $u\geq 0$, with probability at least $1-3e^{-u^2}$, $$ \sup_{x\in T} \left|\|BAx\|_2-\|B\|_F\|x\|_2\right|\leq CK\sqrt{\log K}\,\|B\| \left[w(T)+u\cdot\mathrm{rad}(T)\right]. $$ Here $C$ is an absolute constant. Furthermore, when $B$ is a diagonal matrix, the random matrix $A$ only needs to be isotropic and sub-Gaussian with sub-Gaussian parameter $K$ for the conclusions to hold. \end{theorem} When $B$ is the identity matrix, we have the following corollary. \begin{corollary} \label{theorem_main} Let $A\in\mathbb{R}^{m\times n}$ be an isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$, and let $T\subset \mathbb{R}^n$ be a bounded set. Then $$ \mathbb{E}\sup_{x\in T} \left|\|Ax\|_2-\sqrt{m}\|x\|_2\right|\leq CK\sqrt{\log K}\,\left[w(T)+\mathrm{rad}(T)\right], $$ and for any $u\geq 0$, with probability at least $1-3e^{-u^2}$, $$ \sup_{x\in T} \left|\|Ax\|_2-\sqrt{m}\|x\|_2\right|\leq CK\sqrt{\log K}\,\left[w(T)+u\cdot\mathrm{rad}(T)\right]. $$ \end{corollary} In general, the high probability concentration bound for the supremum in \Cref{theorem_mainBA} (and \Cref{theorem_main}) is optimal up to constants. As an example where the $K\sqrt{\log K}$ dependency is optimal, consider the scaled Bernoulli distribution as in \Cref{prop_tightness_ex}. In this case, we can let $T=\{(1,0,\dots,0)^T\}$ be a singleton, let $B$ be the identity, and let $A$ be any isotropic matrix whose entries are independent with $A_{ij}^2$ following the same distribution as $X_i^2$ in \Cref{prop_tightness_ex}.
Note that such $A$ is not unique, and we can have different values for $\mathbb{E} A$ by assigning different (arbitrary or random) signs for $A_{ij}$ while keeping $A$ isotropic -- for example, if all $A_{ij}$ are symmetric, then $\mathbb{E} A=0$; if $A$ has a column which is non-negative with probability 1, and $A_{ij}$ in all other columns are symmetric, then $\mathbb{E} A \neq 0$. In either case, by \Cref{prop_tightness_ex} (assuming $m\geq K^2\log K$), the sub-Gaussian process here has $\psi_2$-norm at least $cK\sqrt{\log K}$ as $K \to \infty$; this implies that the $K\sqrt{\log K}$ factor in our concentration bound cannot be improved (because up to constants, a sub-Gaussian concentration bound is also an upper bound on the $\psi_2$-norm). The $\|B\|$ factor is optimal; this is easy to see when all the non-zero singular values of $B$ are equal (because the statement is invariant under scaling of $B$). We also give another example below in which the singular values are not all equal. As mentioned after \Cref{eq_liaw}, $\mathrm{rad}(T)$ is generally dominated by $w(T)$ and the dependency on $w(T)$ is optimal as well. Assuming $\mathrm{rad}(T)$ is dominated by $w(T)$, \Cref{theorem_mainBA} then implies that with high probability, the matrix $ \frac{1}{\|B\|_F} BA$ is a near isometry on $T$ whenever the stable rank of $B$ satisfies $$ \mathrm{sr}(B):=\frac{\|B\|_F^2}{\|B\|^2} \geq CK^2\log K \,w^2(T). $$ This result recovers \eqref{eq_liaw} with improved dependency on $K$ when $B=I_m$. \Cref{theorem_mainBA} can fail for some $B$ if $\mathbb{E} A \neq 0$. For example, let $B$ be the all-ones matrix, i.e.\ $B_{ij}=1$ for $1\leq i\leq l$ and $1\leq j\leq m$; then $\|B\|_F=\|B\|=\sqrt{lm}$. Suppose $A$ has independent entries where \[ \left\{ \begin{array}{ll} A_{ij} \sim \mathbf{Normal}(0,1), & 1\leq i\leq m \,\text{ and }\, 2\leq j\leq n \\ A_{i1}=|g_i|, & g_i\sim \mathbf{Normal}(0,1) \end{array} \right.
\] It is easy to verify that $\mathbb{E} A\neq 0$ and $A$ has isotropic rows $A_i^T$. Moreover, for any $y=(y_1,\dots,y_n)\in \S^{n-1}$, notice that $$\inp{A_i}{y}\sim \sqrt{1-y_1^2}\cdot \mathbf{Normal}(0,1) + y_1|g_i|.$$ So using the triangle inequality for the $\psi_2$-norm and the inequality $\sqrt{1-y_1^2}+|y_1|\leq \sqrt{2}$, we conclude that the sub-Gaussian parameter of $A$ is at most $\sqrt{2}\|g_i\|_{\psi_2}=\sqrt{16/3}$.\\ Let $x=(1,0,\dots,0)^T$ and $T=\{x\}$. Since $Ax=(|g_1|,\dots,|g_m|)^T$, we have \begin{align*} \mathbb{E} \left|\|BAx\|_2-\|B\|_F\|x\|_2\right| & \geq \mathbb{E} \|BAx\|_2 - \|B\|_F\|x\|_2 \\ & = \mathbb{E} \left( \sqrt{l} \sum_{i=1}^m |g_i| \right) - \sqrt{lm} \\ &= \sqrt{lm} \left( \sqrt{2m/\pi} -1 \right) . \end{align*} On the other hand, $\|B\|\left[w(T)+\mathrm{rad}(T)\right]=\sqrt{lm}$. So in this case, \Cref{theorem_mainBA} does not hold when $m$ is sufficiently large. As an example demonstrating that $\|B\|$ is optimal in general, consider the case when $T=\{x\}\subset \S^{n-1}$, $A$ is standard Gaussian so that $g:=Ax\sim \mathbf{Normal}(0,I_m)$ and $B=\mathrm{diag}(\tau,1,\dots,1)$ where $\tau>0$. Also let $g_i$ be the coordinates of $g$; then \begin{align*} \mathbb{E} \left|\|BAx\|_2-\|B\|_F\|x\|_2\right| & \geq \|B\|_F\|x\|_2 - \mathbb{E} \|BAx\|_2 \\ &= \sqrt{\tau^2+m-1}-\mathbb{E} \sqrt{\tau^2g_1^2+{\textstyle\sum_{i\geq 2}g_i^2}} \\ &\geq \sqrt{\tau^2+m-1} -\mathbb{E} \left( \tau |g_1| + \sqrt{\textstyle\sum_{i\geq 2}g_i^2} \right) \\ &\geq \sqrt{\tau^2+m-1} -\tau\sqrt{2/\pi}-\sqrt{m-1} \\ &=\tau \left( \sqrt{1+\frac{m-1}{\tau^2}}-\sqrt{\frac{2}{\pi}}-\sqrt{\frac{m-1}{\tau^2}} \right) \end{align*} where we used Jensen's inequality in the second-to-last line. This estimate is on the order of $\tau=\|B\|$ when $\tau>C\sqrt{m}$ with some constant $C$ large enough. We make one more technical remark that the $\sqrt{\log K}$ factor here is well-defined.
In fact, the isotropic and sub-Gaussian conditions of $A$ guarantee that $K$ is bounded below by a constant larger than $1$. To see this, let $X:=Ax$ for some $x\in \S^{n-1}$; then $X$ has independent coordinates $X_i$ satisfying $\mathbb{E} X_i^2=1$ and $\|X_i\|_{\psi_2}\leq K$. Also let $K_0:=\sqrt{1/ \log 2}\approx 1.201$; from \begin{equation} \label{eq_unitVar65} \mathbb{E} \exp(X_i^2/K_0^2) = \sum_{n\geq 0} \frac{\mathbb{E} X_i^{2n}}{n!K_0^{2n}} \geq \sum_{n\geq 0} \frac{1}{n!K_0^{2n}} = e^{1/K_0^2} = 2 \end{equation} we can conclude that $K\geq K_0$, and the equality is achieved when $X_i=1$ a.s. The proof of \Cref{theorem_mainBA} follows an approach analogous to that of Liaw et al. \cite{liaw2017simple}. One major difference is that we prove and apply two new concentration inequalities with improved parametric dependency in the sub-Gaussian regime. We believe these inequalities are interesting in their own right as application-oriented concentration inequalities. The first one is a new Bernstein type inequality under a bounded first absolute moment condition. This inequality provides a concentration bound for sums of sub-exponential random variables. \begin{theorem}[New Bernstein's Inequality] \label{theorem_newbernstein} Let $a=(a_1,\dots,a_m)$ be a fixed non-zero vector and let $Y_1,\dots,Y_m$ be independent, mean zero sub-exponential random variables satisfying $\mathbb{E} |Y_i|\leq 2$ and $\|Y_i\|_{\psi_1} \leq K_i^2$ $\left( \text{assume }K_i \geq \frac{6}{5}\right)$. Then for every $t \geq 0$ we have $$ \P\left(\left|\sum_{i=1}^m a_iY_i \right|\geq t\right) \leq 2\exp\left[-c\min\left(\frac{t^2}{\sum_{i=1}^m a_i^2 K_i^2\log K_i},\frac{t}{\|a\|_{\infty}K^2\log K}\right)\right], $$ where $K=\max_iK_i$ and $c$ is an absolute constant. \end{theorem} \begin{remark} \label{remark_bernstein} \Cref{theorem_newbernstein} remains true (with a different absolute constant $c$) when the $2$ in $\mathbb{E} |Y_i|\leq 2$ is replaced with an arbitrary positive constant (see \Cref{remark_EYalpha} for more detail).
\end{remark} The second one is a new Hanson-Wright inequality under unit variance condition. This inequality provides a concentration bound for quadratic forms of independent random variables and is more general than the aforementioned Bernstein's inequality. In the literature, results of similar flavor have been obtained \cite{rudelson2013hanson,vu2015random,adamczak2015note,klochkov2018uniform} but under different assumptions. We will give a brief comparison between our result and a few notable ones in \Cref{sec_hansonwright}. \begin{theorem}[New Hanson-Wright Inequality] \label{theorem_newhansonwright} Let $A\in \mathbb{R}^{n\times n}$ be a fixed non-zero matrix and let $X=(X_1,\dots, X_n)\in\mathbb{R}^n$ be a random vector with independent, mean zero, sub-Gaussian coordinates satisfying $\mathbb{E} X_i^2=1$ and $\|X_i\|_{\psi_2}\leq K$. Then for every $t\geq 0$ we have \[ \P\left( |X^TAX-\mathbb{E} X^TAX| \geq t \right) \leq 2 \exp\left[ -c\min\left( \frac{t^2}{\|A\|_F^2K^2\log K},\frac{t}{\|A\|K^2\log K}\right) \right], \] where $c$ is an absolute constant. \end{theorem} \begin{remark} If $A$ is a diagonal matrix, then \Cref{theorem_newhansonwright} recovers \Cref{theorem_newbernstein} (assuming all $K_i$ are equal) with $Y_i=X_i^2-\mathbb{E} X_i^2$. Therefore this can be viewed as a generalization of the new Bernstein's inequality given in \Cref{theorem_newbernstein}. \end{remark} \subsection*{Notations} We use $\|\cdot\|_2$ for Euclidean norm of vectors, $\|\cdot\|_F$ and $\|\cdot\|$ for Frobenius and operator norm of matrices respectively. We use $\circ$ for Hadamard (entrywise) product. We say $f\lesssim g$ if $f\leq Cg$ for some absolute constant $C$ and say $f\gtrsim g$ if $f\geq Cg$ for some absolute constant $C$. Typically, $c$ and $C$ denote absolute constants (often $c$ for small ones and $C$ for large ones) which may vary from line to line. 
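As a quick numerical illustration of the $\|A\|_F$ scale in \Cref{theorem_newhansonwright}, consider the Gaussian case: for a symmetric matrix $A$ and standard Gaussian $X$ (so $K=\sqrt{8/3}$), the quadratic form $X^TAX-\mathbb{E} X^TAX$ has standard deviation exactly $\sqrt{2}\,\|A\|_F$, matching the $t^2/\|A\|_F^2$ regime of the bound. A short simulation (illustrative only, with arbitrary test parameters, not part of the proofs) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 100000

M = rng.normal(size=(n, n))
A = (M + M.T) / 2                    # symmetric test matrix
fro = np.linalg.norm(A, 'fro')

X = rng.normal(size=(trials, n))     # standard Gaussian, so E X_i^2 = 1
quad = np.sum((X @ A) * X, axis=1)   # X^T A X, one value per trial
dev = quad - np.trace(A)             # E X^T A X = tr(A) here

# For Gaussian X and symmetric A, Var(X^T A X) = 2 ||A||_F^2 exactly
# (diagonalize and use Var(Z^2) = 2 for a standard Gaussian Z).
emp_std = dev.std()
```

The empirical standard deviation `emp_std` agrees with $\sqrt{2}\,\|A\|_F$ to within Monte Carlo error.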
\subsection*{Organization} The rest of this paper is organized as follows: In \Cref{sec_results_bernstein}, we discuss and prove the new Bernstein's inequality (\Cref{theorem_newbernstein}). In \Cref{sec_hansonwright}, we first discuss and compare the new Hanson-Wright inequality (\Cref{theorem_newhansonwright}) to other known variants of Hanson-Wright inequalities and then prove \Cref{theorem_newhansonwright}. In \Cref{sec_results_main}, we prove our main theorem regarding sub-Gaussian matrices on sets (\Cref{theorem_mainBA}) and give an example to show that our tail dependency on $K$ is optimal. In \Cref{sec_applications}, we demonstrate how our result can improve theoretical guarantees of some popular applications such as Johnson-Lindenstrauss embedding, the null space property for 0-1 matrices, randomized sketches and blind demodulation. In \Cref{sec_conclusion}, we briefly conclude the paper. \section{New Bernstein's Inequality} \label{sec_results_bernstein} In this section we prove the new Bernstein's inequality, \Cref{theorem_newbernstein}. Let us first recall the standard Bernstein's inequality for sub-exponential random variables \cite[Theorem 2.8.2]{vershynin_2018}, which states that for independent, mean zero, sub-exponential random variables $Y_1, Y_2,\dots, Y_m$ and a vector $a=(a_1,\dots,a_m)\in\mathbb{R}^m$, we have \begin{equation} \label{eq-std-bernstein} \P\left(\left|\sum_{i=1}^m a_iY_i \right|\geq u\right) \leq 2\exp\left[-c\min\left(\frac{u^2}{K^4\|a\|_2^2},\frac{u}{K^2\|a\|_{\infty}}\right)\right], \end{equation} where $K^2=\max_i\|Y_i\|_{\psi_1}$. Compared to \eqref{eq-std-bernstein}, \Cref{theorem_newbernstein} has an extra assumption on the first absolute moment of $Y_i$ -- namely $\mathbb{E}|Y_i|\leq 2$ -- but it improves the dependence on $K$ in the sub-Gaussian regime from $K^4$ to $K^2\log K$. It is worth noting that such an extra assumption arises naturally when considering isotropic random matrices/vectors.
In fact, let $x$ be a fixed point on the unit sphere and let $a_i$ be isotropic random vectors of the same dimension; then $Y_i:=|\inp{a_i}{x}|^2-1$ is mean zero since $a_i$ is isotropic, and $\mathbb{E} |Y_i|\leq \mathbb{E} |\inp{a_i}{x}|^2+1=2$ by the triangle inequality. \subsection*{Proof of \Cref{theorem_newbernstein}} We will first bound the moments of $Y_i$, then bound their moment generating functions, and finally use the Chernoff method to obtain the desired tail bound. \step{Step 1: Bounding the moments} The idea here is to write the moment as an integral and then estimate under the two constraints $\mathbb{E} |Y_i|\leq 2$ and $\|Y_i\|_{\psi_1}\leq K^2$. \begin{lemma}[Moment Bounds] \label{lemma_pmoment} Let $Y$ be a sub-exponential random variable satisfying $\mathbb{E} |Y|\leq 2$ and $\|Y\|_{\psi_1} \leq K^2$ with $ K \geq \frac{6}{5}$. Then \begin{equation*} \mathbb{E} |Y|^{p} \leq C^{p} p^p \left(K^2\log K\right)^{p-1}, \; \forall p\geq 1. \end{equation*} \end{lemma} \begin{proof} Define $f(t):= \P(|Y|\geq t) \, e^{t/K^2}$. Since $\mathbb{E} |Y|\leq 2$, we have \begin{equation} \int_0^\infty f(t)e^{-t/K^2} dt = \int_0^\infty \P(|Y|\geq t) dt \leq 2. \label{eq-pmoment-1} \end{equation} Also, since $\|Y\|_{\psi_1} \leq K^2$, a change of variable $s=e^{t/K^2}$ gives \begin{align*} 2 \geq \mathbb{E} \exp(|Y|/K^2) &=\int_0^\infty \P\left( e^{|Y|/K^2} \geq s\right) ds =\int_{-\infty}^0 K^{-2}e^{t/K^2} dt + \int_0^\infty K^{-2}f(t) dt. \end{align*} Since $\int_{-\infty}^0 K^{-2}e^{t/K^2} dt=1$, this becomes \begin{equation} \int_0^\infty f(t) \,dt \leq K^2. \label{eq-pmoment-2} \end{equation} For the $p$-th moment of $|Y|$, with a change of variable $s=u^p$, we have $$ \mathbb{E} |Y|^{p} = \int_0^\infty \P(|Y|^p \geq s) d s = \int_0^\infty f(u)e^{-u/K^2} pu^{p-1} du. $$ We will split this integral into two parts. Set $T=6pK^2\log K$.
Since $pu^{p-1}$ monotonically increases on $[0,T]$, we have \begin{align*} \int_0^T f(u)e^{-u/K^2} pu^{p-1} du \; &\leq\; pT^{p-1}\int_0^T f(u)e^{-u/K^2} du \\ &\overset{\eqref{eq-pmoment-1}}{\leq}\; 2p\left(6pK^2\log K\right)^{p-1}. \end{align*} On the other hand, since $$ \frac{d}{du} \left(u^{p-1}e^{-u/K^2}\right)=K^{-2}e^{-u/K^2}u^{p-2}\left(K^2(p-1)-u\right), $$ and $T\geq \left( 6\log \frac{6}{5}\right) pK^2>pK^2$ (note that $6\log \frac{6}{5}\approx 1.09$), we can conclude that $u^{p-1}e^{-u/K^2}$ monotonically decreases on $[T, \infty)$. Thus \begin{align*} \int_T^\infty f(u)e^{-u/K^2} pu^{p-1} du &\leq pT^{p-1}e^{-T/K^2} \int_T^\infty f(u)du \\ &\overset{\eqref{eq-pmoment-2}}{\leq} pT^{p-1}K^{-6p}K^2 \\ &\leq p\left(6pK^2\log K\right)^{p-1}. \end{align*} Combining these two parts completes the proof with $C\leq 6$. \end{proof} \step{Step 2: Bounding the moment generating function} Let $Y$ be the random variable as in \Cref{lemma_pmoment}; the moment generating function of $Y$ can be estimated through its Taylor series \begin{align*} \mathbb{E} \exp(\lambda Y) &= 1+\mathbb{E} Y +\sum_{p\geq 2}\frac{\mathbb{E} (\lambda Y)^p}{p!} \\ &\leq 1+\sum_{p\geq 2} \frac{|\lambda|^p(C_1p)^p\left(K^2\log K\right)^{p-1}}{p!} \\ &\leq 1+\frac{1}{K^2\log K}\sum_{p\geq 2} \left(C_1e|\lambda| K^2\log K\right)^p. \end{align*} Here the first inequality is by \Cref{lemma_pmoment} (with $C_1\leq 6$) and the second inequality uses $p!\geq (p/e)^p$.\\ When $\displaystyle|\lambda|K^2\log K\leq 1/(2C_1e)$, the above geometric series converges and is bounded by twice its leading term, so we have \begin{align*} \mathbb{E} \exp(\lambda Y) \leq 1+2C_1^2e^2\lambda^2K^2\log K \leq \exp\left(2C_1^2e^2\lambda^2K^2\log K\right), \end{align*} where the last inequality uses $1+x\leq e^x$.\\ Hence we have shown \begin{equation} \label{eq_mgfY} \mathbb{E}\exp(\lambda Y) \leq \exp\left(C_0\lambda^2K^2\log K\right) \text{ when } |\lambda|K^2\log K\leq c_0 \end{equation} for absolute constants $C_0=2(C_1e)^2$ and $c_0=1/(2C_1e)$.
\step{Step 3: Chernoff bound} For $\displaystyle \lambda \in \left[0,\frac{c_0}{\|a\|_\infty K^2\log K} \right]$, by Markov's inequality and \Cref{eq_mgfY} we have \begin{align} \P\left( \sum_{i=1}^m a_iY_i \geq u\right) &\leq e^{-\lambda u}\mathbb{E} \exp\left( \lambda \sum_{i=1}^m a_i Y_i\right) \nonumber \\ & \leq \exp\left(-\lambda u+\lambda^2C_0\sum_{i=1}^m a_i^2 K_i^2\log K_i\right) \label{eq_bern1} \end{align} where $c_0$ and $C_0$ are absolute constants. When we minimize the above expression over $\lambda$, we get the optimal value $$\lambda_{\mathrm{opt}}=\min\left(\frac{u}{2C_0\sum a_i^2 K_i^2\log K_i},\frac{c_0}{\|a\|_\infty K^2\log K}\right).$$ We then plug $\lambda_{\mathrm{opt}}$ into \eqref{eq_bern1} to get \begin{equation} \label{eq_bern2} \P\left(\sum_{i=1}^m a_iY_i \geq u\right) \leq \exp\left[-\min\left(\frac{u^2}{4C_0\sum a_i^2 K_i^2\log K_i},\frac{c_0u}{2\|a\|_\infty K^2\log K}\right)\right]. \end{equation} Setting $\displaystyle c=\min\left\{\frac{1}{4C_0},\frac{c_0}{2}\right\}$ in \eqref{eq_bern2}, we obtain the one-sided bound $$ \P\left(\sum_{i=1}^m a_iY_i\geq u\right) \leq \exp\left[-c\min\left(\frac{u^2}{\sum a_i^2 K_i^2\log K_i},\frac{u}{\|a\|_\infty K^2\log K}\right)\right]. $$ The bound for $\P \left( \sum a_iY_i<-u \right)$ is similarly obtained by considering $-Y_i$ instead of $Y_i$. This completes the proof. \begin{remark} \label{remark_EYalpha} If the random variables $Y_i$ have first absolute moment $\mathbb{E} |Y_i|\leq \alpha$, then the right hand side of \Cref{eq-pmoment-1} becomes $\alpha$ and it is easy to see that \Cref{lemma_pmoment} still holds with $C\leq 6+\alpha$. It follows that the $C_1$ in Step 2 will be no more than $6+\alpha$ and \Cref{theorem_newbernstein} now holds with constant $c=\frac{1}{8(C_1e)^2}\geq \frac{1}{8e^2(6+\alpha)^2}$. \end{remark} \section{New Hanson-Wright Inequality} \label{sec_hansonwright} The Hanson-Wright inequality gives a concentration bound for quadratic forms of random variables.
The version in \cite{rudelson2013hanson} states the following: let $X=(X_1,\dots,X_n)\in\mathbb{R}^n$ be a random vector with independent, mean zero, sub-Gaussian coordinates satisfying $\max_i\|X_i\|_{\psi_2}\leq K$, and let $A$ be an $n\times n$ real matrix; then \begin{equation} \label{eq-hansonwright} \P\left( |X^TAX-\mathbb{E} X^TAX| \geq t \right) \leq 2 \exp\left[ -c\min\left( \frac{t^2}{\|A\|_F^2 K^4},\frac{t}{\|A\| K^2}\right) \right]. \end{equation} In the same spirit as the new Bernstein's inequality, we can improve the tail dependency on $K$ in the sub-Gaussian regime from $K^4$ to $K^2\log K$ under the further assumption $\mathbb{E} X_i^2=1$ for each $X_i$. This is the new Hanson-Wright inequality, \Cref{theorem_newhansonwright}. It is not difficult to drop the requirement $\mathbb{E} X_i^2=1$ in \Cref{theorem_newhansonwright} by a simple scaling, in which case we have the following corollary. \begin{corollary} \label{theorem_hansonwright_alt} Let $X=(X_1,\dots, X_n)\in\mathbb{R}^n$ be a random vector with independent, mean zero, sub-Gaussian coordinates satisfying $0<\|X_i\|_{\psi_2}\leq K$. Then for a fixed square matrix $A$, \[ \P\left( |X^TAX-\mathbb{E} X^TAX| \geq t \right) \leq 2 \exp\left[ -c\min\left( \frac{t^2}{\|A\|_F^2\alpha_2^2\gamma^2K^2\log\frac{K}{\alpha_1}},\frac{t}{\|A\|\gamma^2K^2\log \frac{K}{\alpha_1}}\right) \right], \] where $\alpha_1=\min_i\left( \mathbb{E} X_i^2\right)^{\frac{1}{2}}$, $\alpha_2=\max_i\left( \mathbb{E} X_i^2\right)^{\frac{1}{2}}$ and $\gamma=\alpha_2/\alpha_1$. \end{corollary} \begin{proof} Let $\beta_i:=(\mathbb{E} X_i^2)^\frac{1}{2}$ for $1\leq i\leq n$ and define diagonal matrices $$ D_\beta:=\text{diag}(\beta_1,\beta_2,\dots,\beta_n), \quad D_{1/\beta}:=\text{diag}(1/\beta_1,1/\beta_2,\dots,1/\beta_n). $$ Then $\tilde{X}:=D_{1/\beta}X$ satisfies the assumption of \Cref{theorem_newhansonwright} with $\mathbb{E}\tilde{X}_i^2=1$ and $\|\tilde{X}_i\|_{\psi_2}\leq K/\beta_i\leq K/\alpha_1$.
Applying \Cref{theorem_newhansonwright} to $\tilde{X}$ and $\tilde{A}:=D_\beta AD_\beta$ completes the proof. \end{proof} \subsection*{Comparison with Other Hanson-Wright Inequalities} Let us first compare \Cref{theorem_hansonwright_alt} to the standard Hanson-Wright inequality \eqref{eq-hansonwright} in the case when $\gamma=1$. The concentration bound in \eqref{eq-hansonwright} implies that, with probability at least $1-2e^{-t}$, \begin{equation} \label{eq-hw-cmp1} |X^TAX-\mathbb{E} X^TAX| \lesssim K^2\|A\|_F\sqrt{t}+K^2\|A\|t. \end{equation} Meanwhile, \Cref{theorem_hansonwright_alt} implies that \begin{equation} \label{eq-hw-cmp2} |X^TAX-\mathbb{E} X^TAX| \lesssim \alpha K \sqrt{\log (K/\alpha)} \|A\|_F\sqrt{t}+K^2\log(K/\alpha)\|A\|t \end{equation} where $\alpha=(\mathbb{E} X_i^2)^\frac{1}{2}$ (which is the same for all $i$ since $\gamma=1$). Note that $\alpha\leq \|X_i\|_{\psi_2}\leq K$, so this bound improves the parameter dependence (up to a log factor) in the sub-Gaussian regime from $K^2$ to $\alpha K$. Such improvement can be significant when $\alpha$ is far less than $K$. Other variants of the Hanson-Wright inequality have appeared in the literature with similar improvements \cite{adamczak2015note, vu2015random}. In particular, one of the results by Adamczak \cite{adamczak2015note} works under the assumption that $X$ satisfies the convex concentration property with constant $\tilde{K}$, that is, for every 1-Lipschitz convex function $\varphi : \mathbb{R}^n\to \mathbb{R}$, we always have $\mathbb{E} |\varphi(X)|< \infty$ and $$ \P(|\varphi(X)-\mathbb{E}\varphi(X)|\geq t)\leq 2\exp(-t^2/\tilde{K}^2) \;\text{ for any } t\geq 0.$$ Under this assumption, \begin{equation} \label{eq-hw-cmp3} |X^TAX-\mathbb{E} X^TAX| \lesssim \tilde{K} \sqrt{\|\text{Cov}(X)\|} \|A\|_F\sqrt{t}+\tilde{K}^2\|A\|t \end{equation} where $\text{Cov}(X)$ is the covariance matrix of $X$. When $X$ has independent and mean zero coordinates, $\|\text{Cov}(X)\|=\max_i\mathbb{E} X_i^2$.
However, the convex concentration property is not the same as sub-Gaussianity. More precisely, while it is true that $\tilde{K}$ is independent of dimension when $X$ has i.i.d. coordinates which are bounded almost surely \cite{samson2000concentration}, this can fail when the boundedness assumption of $X_i$ is replaced by sub-Gaussianity (i.e. $\tilde{K}$ could depend on the dimension of $X$ when $X_i$ are i.i.d. and sub-Gaussian) \cite{adamczak2005logarithmic, hitczenko1998hypercontractivity}. Therefore, the bound in \eqref{eq-hw-cmp3} does not imply \eqref{eq-hw-cmp2} nor \eqref{eq-hw-cmp1} in general. In a more recent paper by Klochkov and Zhivotovskiy \cite{klochkov2018uniform}, the authors proved a uniform version of the Hanson-Wright inequality which, when applied to a single matrix under the same assumption as \Cref{theorem_hansonwright_alt}, yields the following bound: \begin{equation} \label{eq-hw-cmp4} X^TAX-\mathbb{E} X^TAX \lesssim M\mathbb{E}\|AX\|_2 \sqrt{t}+M^2\|A\|t \end{equation} where $M=\|\max_i |X_i|\|_{\psi_2}$. This bound also improves \eqref{eq-hw-cmp1} in some cases as demonstrated in \cite{klochkov2018uniform}. We shall compare this bound to \eqref{eq-hw-cmp2} in the sub-Gaussian regime. On one hand, Jensen's inequality tells us that the $\mathbb{E} \|AX\|_2$ factor in \eqref{eq-hw-cmp4} is bounded by the $\alpha \|A\|_F$ factor in \eqref{eq-hw-cmp2}. On the other hand, the factor $M$ in \eqref{eq-hw-cmp4} is only bounded by $M \lesssim K\sqrt{\log n}$, which could depend on the dimension $n$. Moreover, \eqref{eq-hw-cmp4} only provides a one-sided bound instead of two-sided concentration bounds like \Cref{eq-hw-cmp1,eq-hw-cmp2,eq-hw-cmp3}. \subsection*{Proof of \Cref{theorem_newhansonwright}} The main idea of the proof is similar to \cite{rudelson2013hanson}, which is to divide the sum into diagonal and off-diagonal parts, then bound the moment generating function of the latter through a decoupling and comparison argument.
However, there are two significant differences. The first difference is the random variables used for comparison: we will use scaled Bernoulli random variables multiplied by standard Gaussians in order to preserve the unit second moment condition. Using such random variables also leads to challenges in bounding the moment generating function, which is the second difference. Now we proceed with the proof. For any $t>0$, let $$p:= \P\left( |X^TAX-\mathbb{E} X^TAX| \geq t \right)$$ be the tail probability we want to bound. Let $A_1:=\text{diag}(A)$ be the diagonal of $A$ and let $A_2:=A-A_1$. Then $$ p \leq \P\left( |X^TA_1X-\mathbb{E} X^TA_1X| \geq \frac{t}{2} \right) + \P\left( |X^TA_2X-\mathbb{E} X^TA_2X| \geq \frac{t}{2} \right) =:p_1+p_2. $$ We will seek bounds for $p_1$ and $p_2$. \step{Step 1: The diagonal sum} The bound for $p_1$ is given by our new Bernstein's inequality. Notice that $$X^TA_1X-\mathbb{E} X^TA_1X=\sum a_{ii}(X_i^2-1),$$ where $\mathbb{E} |X_i^2-1|\leq 2$ and $\|X_i^2-1\|_{\psi_1}\leq C\|X_i^2\|_{\psi_1}\leq CK^2$. So by \Cref{theorem_newbernstein} (note that $K\geq 6/5$ as shown in \Cref{eq_unitVar65}) and the simple relationships between the norms of $A_1$ and $A$, we have \begin{equation*} p_1\leq 2 \exp\left[ -c\min\left( \frac{t^2}{\|A\|_F^2K^2\log K},\frac{t}{\|A\|K^2\log K}\right) \right]. \end{equation*} \step{Step 2: Decoupling} To bound $p_2$, we will derive a bound for the moment generating function of $X^TA_2X$. Let $X'$ be an independent copy of $X$; then $$ \mathbb{E} \exp(\lambda X^TA_2X) \leq \mathbb{E}_{X'} \mathbb{E}_{X} \exp(4\lambda X^TAX'). $$ The above follows directly from the following decoupling lemma. \begin{lemma}[Decoupling \cite{vershynin_2018}] \label{lemma_HW_decoupling} Let $A=(a_{ij})$ be a fixed $n\times n$ matrix, and let $X=(X_1,\dots, X_n)\in\mathbb{R}^n$ be a random vector with independent mean zero coordinates.
Then for every convex function $F:\mathbb{R}\rightarrow \mathbb{R}$, $$ \mathbb{E} F \left( \sum_{i\neq j} a_{ij} X_iX_j \right) \leq \mathbb{E} F \left( 4X^TAX' \right), $$ where $X'$ is an independent copy of $X$. \end{lemma} See Theorem 6.1.1 and Remark 6.1.3 in \cite{vershynin_2018} for a proof of \Cref{lemma_HW_decoupling}. \step{Step 3: Comparison} We will compare $X$ (and $X'$) to a vector of scaled Bernoulli random variables multiplied entrywise by standard Gaussians. But first, let us look at the case of a single variable through the following lemma. Note that here the Bernoulli parameter satisfies $(2L)^{-2}<1$, since $K\geq \sqrt{1/\log 2}$ as shown in \eqref{eq_unitVar65}. \begin{lemma} \label{lemma_HW_lm2} For a random variable $Z\in\mathbb{R}$, if $\mathbb{E} Z=0$, $\mathbb{E} Z^2=1$ and $\|Z\|_{\psi_2}\leq K$, then $$\mathbb{E} \exp(tZ)\leq \mathbb{E}_{r,g}\exp(Ctrg),\quad \forall t\in\mathbb{R} $$ where $g\sim\mathbf{Normal}(0,1)$, $r^2\sim (2L)^2\cdot \mathbf{Bernoulli}((2L)^{-2})$ and $L^2=K^2\log K$. \end{lemma} \begin{proof} Using the inequality $e^x\leq x+\cosh(2x)$, which is true for all $x\in\mathbb{R}$ (see \Cref{appendix_b}), we have $$ \mathbb{E}\exp(tZ) \leq \mathbb{E} tZ+\mathbb{E}\cosh (2tZ) =0+1+\sum_{q\geq 1}\frac{(2t)^{2q}}{(2q)!}\mathbb{E} Z^{2q}. $$ By \Cref{lemma_pmoment} we know that $\mathbb{E} Z^{2q}\leq C_0^q q^qL^{2q-2}$ for any positive integer $q$ and some absolute constant $C_0$, hence $$ \mathbb{E}\exp(tZ) \leq 1+\sum_{q\geq 1} \frac{4^qt^{2q}}{(2q)!}C_0^q q^q L^{2q-2} \leq 1+\sum_{q\geq 1} \frac{(4C_0)^qt^{2q}}{q!}L^{2q-2}. $$ On the other hand, a direct calculation gives $$ \mathbb{E}_{r,g}\exp(Ctrg)=\mathbb{E}_r\exp(\frac{1}{2}C^2t^2r^2)=1+\sum_{q\geq 1}\frac{(C^2/2)^qt^{2q}}{q!}\mathbb{E} r^{2q} \geq 1+\sum_{q\geq 1} \frac{(C^2/2)^qt^{2q}}{q!}L^{2q-2}. $$ Choosing any $C$ such that $C^2\geq 8C_0$ completes the proof.
\end{proof} Now let $g,r\in \mathbb{R}^n$ be random vectors such that $g\sim \mathbf{Normal}(0,I_n)$ and $r$ has entries $r_i^2\overset{i.i.d}{\sim} (2L)^2\cdot \mathbf{Bernoulli}((2L)^{-2})$ where $L^2=K^2\log K$. Also let $g'$ and $r'$ be independent copies of $g$ and $r$. Let $\alpha$ be any vector in $\mathbb{R}^n$, by \Cref{lemma_HW_lm2} and independence we have \begin{equation} \label{eq-HW-compare} \mathbb{E}_X\exp(\alpha^TX) =\prod_j\mathbb{E}_{X_j}\exp(\alpha_jX_j) \leq \prod_j \mathbb{E}_{r_j,g_j}\exp(C\alpha_j\, r_jg_j) = \mathbb{E}_{r,g}\exp(C\alpha^T(r\circ g)). \end{equation} Note the above also holds for $\mathbb{E}_{X'}\exp(\alpha^TX')$. Therefore \begin{align*} \mathbb{E}_{X'} \mathbb{E}_{X} \exp(4\lambda X^TAX') &\leq \mathbb{E}_{X'} \mathbb{E}_{r,g} \exp(C\lambda (r\circ g)^TAX') \\ &= \mathbb{E}_{r,g} \mathbb{E}_{X'} \exp(C\lambda (r\circ g)^TAX') \\ &\leq \mathbb{E}_{r,g} \mathbb{E}_{r',g'} \exp(C\lambda(r\circ g)^T A(r'\circ g')) \\ &= \mathbb{E}\exp( C\lambda g^TRAR'g' ) \end{align*} where $R:=\text{diag}(r)$ and $R':=\text{diag}(r')$. Here the two inequalities are repeated applications of \Cref{eq-HW-compare}. \step{Step 4: Moment generating function of $g^TRAR'g'$} Denote $\sigma_i=\sigma_i(RAR')$ the singular values of matrix $RAR'$. From the rotation invariance of $g$ and $g'$ we have $$ \mathbb{E} \exp(\lambda g^TRAR'g' ) = \mathbb{E}_{r,r'} \mathbb{E}_{g,g'}\exp(\sum_{i=1}^n \lambda \sigma_ig_ig_i'). 
$$ For standard normal random variables $g_i$ and $g_i'$, \begin{equation*} \mathbb{E}_{g_i,g_i'} \exp(\eta g_i g_i')=\mathbb{E}_{g_i}\exp(\frac{1}{2}\eta^2g_i^2)=(1-\eta^2)^{-\frac{1}{2}} \leq \exp(\eta^2) \;\text{ whenever }\; \eta^2<\frac{1}{2}, \end{equation*} where the inequality uses $(1-x)^{-\frac{1}{2}}\leq e^x$ when $x\in[0,\frac{1}{2})$ (see \Cref{appendix_b}).\\ Also, note that $\sigma_i\leq \|RAR'\|\leq 4L^2\|A\|$, so if $\displaystyle \lambda^2 <\frac{1}{32L^4\|A\|^2}$ we have $$ \mathbb{E} \exp(\lambda g^TRAR'g' ) \leq \mathbb{E}_{r,r'} \exp(\sum_{i=1}^n \lambda^2\sigma_i^2) = \mathbb{E}_{r,r'} \exp(\lambda^2\|RAR'\|_F^2). $$ Next, we use \Cref{lemma_HW_lm4} below (with $\eta=16\lambda^2L^4$ and $p=\frac{1}{4L^{2}}$) to bound the moment generating function of $\|RAR'\|_F^2$ and obtain $$ \mathbb{E} \exp(\lambda g^TRAR'g' ) \leq \exp(8\lambda^2L^2\|A\|_F^2) \;\text{ when }\; \lambda^2 <\frac{1}{32L^4\|A\|^2}. $$ \begin{lemma} \label{lemma_HW_lm4} Let $D$ be a diagonal random matrix with i.i.d. entries $D_{ii}=d_i\sim \mathbf{Bernoulli}(p)$, and let $D'$ be an independent copy of $D$. Given a fixed matrix $A$, we then have $$ \mathbb{E}\exp(\eta\|DAD'\|_F^2) \leq \exp(2p\eta\|A\|_F^2) \;\text{ when }\; 0<\eta\leq\frac{1}{\|A\|^2}. $$ \end{lemma} \begin{proof} Denote by $A_i$ the $i$-th row of $A$. Notice that $$ \|DAD'\|_F^2\leq \|DA\|_F^2=\sum_i \|A_i\|_2^2 d_i, $$ so for $\eta\in \left( 0,\frac{1}{\|A\|^2} \right]$, we have \begin{align*} \mathbb{E}\exp(\eta\|DAD'\|_F^2) &\leq \prod_i \mathbb{E}\exp(\eta \|A_i\|_2^2 d_i) \\ &= \prod_i \left( 1-p+pe^{\eta \|A_i\|_2^2} \right) \\ &\leq \prod_i \left( 1+ 2p\eta\|A_i\|_2^2 \right) \\ &\leq \exp(2p\eta\|A\|_F^2). \end{align*} Here the second to last inequality uses $\eta\|A_i\|_2^2\leq \eta\|A\|^2\leq 1$ and $e^x\leq 1+2x$ when $x\in [0,1]$. The last inequality uses $1+x\leq e^x$.
\end{proof} \step{Step 5: Chernoff bound} From the previous steps we get $$ \mathbb{E} \exp(\lambda X^TA_2X) \leq \exp(C\lambda^2L^2\|A\|_F^2) \;\text{ when }\; \lambda^2 <\frac{c}{L^4\|A\|^2} $$ for some absolute constants $C$ and $c$.\\ Notice that $\mathbb{E} X^TA_2X=0$, so by Markov's inequality we have, for $0<\lambda \leq \frac{c}{L^2\|A\|}$, \begin{align*} \P\left( X^TA_2X-\mathbb{E} X^TA_2X \geq \frac{t}{2} \right) & \leq e^{-\lambda t/2}\mathbb{E}\exp(\lambda X^TA_2X) \\ & \leq \exp(-\lambda t/2+C\lambda^2L^2\|A\|_F^2). \end{align*} Optimizing this over $\lambda$ (similar to the proof of \Cref{theorem_newbernstein}) yields a one-sided bound for $p_2$. The other side can then be obtained by considering $-A_2$ (and $-A$) instead of $A_2$. Together they give \begin{equation*} p_2\leq 2 \exp\left[ -c\min\left( \frac{t^2}{\|A\|_F^2K^2\log K},\frac{t}{\|A\|K^2\log K}\right) \right]. \end{equation*} \step{Step 6: The bound for $p$} Lastly, since $p\leq \min\{1,p_1+p_2\}$, combining the bounds for $p_1$ and $p_2$ and then applying the inequality $\min\{1,4e^{-x}\}\leq 2e^{-x/2}$ (see \Cref{appendix_b}) completes the proof of \Cref{theorem_newhansonwright}. \section{Sub-Gaussian Matrices on Sets} \label{sec_results_main} In this section we prove \Cref{theorem_mainBA} and show that the $K\sqrt{\log K}$ tail dependence on $K$ is optimal. \Cref{subsec_main1} studies the simple case when $T$ consists of only a single point. \Cref{subsec_main2} establishes the technical sub-Gaussian increments lemmas, and \Cref{subsec_main3} proves \Cref{theorem_mainBA} through these lemmas and Talagrand's Majorizing Measure Theorem. \Cref{sec_tightness} provides an example, based on scaled Bernoulli random variables, that shows the tightness of the $K\sqrt{\log K}$ factor in our concentration bound. \subsection{Concentration of Random Vectors} \label{subsec_main1} Let $X:=Ax\in \mathbb{R}^m$ with $x\in \S^{n-1}$.
The isotropic and sub-Gaussian assumption on $A$ now implies $X$ has independent coordinates satisfying $\mathbb{E} X_i^2=1$ and $\|X_i\|_{\psi_2}\leq K$. Moreover, recall that $K\geq \sqrt{1/\log 2}>1$ from \eqref{eq_unitVar65}. Lemma 5.3 in \cite{liaw2017simple} states that $$ \| \|X\|_2-\sqrt{m} \|_{\psi_2} \lesssim K^2. $$ In other words, $\|Ax\|_2$ has a sub-Gaussian concentration around $\sqrt{m}$. It is worth noting that this concentration is independent of the ambient dimension $m$. We will follow a similar proof idea, but use the new inequalities (\Cref{theorem_newbernstein} and \Cref{theorem_newhansonwright}) to generalize and refine this result. \begin{theorem} \label{theorem_concentrationBX} Let $B$ be a fixed $m\times n$ matrix and let $X=(X_1,\dots, X_n)\in\mathbb{R}^n$ be a random vector with independent sub-Gaussian coordinates satisfying $\mathbb{E} X_i^2=1$ and $\|X_i\|_{\psi_2}\leq K$. Assume further that one of the following conditions holds: \begin{itemize} \setlength\itemsep{0em} \item[$\mathrm{(a)}$] $X$ is mean zero; \item[$\mathrm{(b)}$] $m=n$ and $B$ is a diagonal matrix. \end{itemize} Then $$ \left\Vert \|BX\|_2-\|B\|_F \right\Vert_{\psi_2}\leq CK\sqrt{\log K}\|B\|. $$ \end{theorem} \begin{proof} The conclusion is trivial if $\|B\|=0$, so we will assume $B$ is non-zero. \vskip .1in \noindent (a) Let $A:=B^TB$, then $$ X^TAX=\|BX\|_2^2,\quad \mathbb{E} X^TAX=\|B\|_F^2, $$ and $$ \|A\|=\|B\|^2, \quad \|A\|_F\leq \|B^T\|\|B\|_F=\|B\|\|B\|_F. $$ Let $Y:=\|BX\|_2^2-\|B\|_F^2$; by \Cref{theorem_newhansonwright} we have \begin{equation} \label{eq-concBX-Y} \P \left( \left| Y \right| \geq t \right) \leq 2\exp[-c\min\left(\frac{t^2}{\|B\|^2\|B\|_F^2 K^2\log K},\frac{t}{\|B\|^2 K^2\log K}\right)]. \end{equation} Note that for $\alpha, \beta, s \geq 0$, $$ |\alpha-\beta|\geq s \quad\Rightarrow\quad |\alpha^2-\beta^2|\geq \max\{s^2, s\beta\}.
$$ (This readily comes from the inequalities $|\alpha^2-\beta^2|\geq |\alpha-\beta|^2$ and $|\alpha^2-\beta^2|\geq |\alpha-\beta|\beta$ whenever $\alpha,\beta\geq 0$.)\\ Let $Z:=\|BX\|_2 -\|B\|_F$; then $$ \P(|Z|\geq s)\leq \P \left( |Y|\geq \max\{s^2, s\|B\|_F\} \right). $$ To bound this probability, we observe that \begin{itemize} \item[]{\makebox[4cm]{if $0\leq s\leq \|B\|_F$, then\hfill}} $ \displaystyle \P(|Z|\geq s) \leq \P( |Y|\geq s\|B\|_F ) \leq 2\exp(\frac{-cs^2}{\|B\|^2 K^2\log K});\;$ and \item[]{\makebox[4cm]{if $s\geq \|B\|_F$, then\hfill}} $ \displaystyle \P(|Z|\geq s) \leq \P(|Y|\geq s^2) \leq 2\exp(\frac{-cs^2}{\|B\|^2 K^2\log K}).$ \end{itemize} Combining these two bounds and then using property (b) in \Cref{appendix_psi_alpha_properties} completes the proof. \vskip .1in \noindent (b) We will first use Bernstein's inequality to obtain \eqref{eq-concBX-Y}. Denote by $b_i:=B_{ii}$ the diagonal entries of $B$; then $$Y:=\|BX\|_2^2-\|B\|_F^2 =\sum_{i=1}^m b_i^2\left( X_i^2-1 \right).$$ For the random variables $X_i^2-1$, notice that $$\mathbb{E} |X_i^2-1|\leq 2 \quad\text{ and }\quad \|X_i^2-1\|_{\psi_1}\leq C\|X_i^2\|_{\psi_1}\leq CK^2,$$ where the $\psi_1$-norm estimate is from property (f) in \Cref{appendix_psi_alpha_properties}. Also, note that $K\geq 6/5$ as shown by \Cref{eq_unitVar65}. Using \Cref{theorem_newbernstein} and the inequality $\sum b_i^4\leq \left( \max_i b_i^2 \right) \cdot \sum b_i^2 = \|B\|^2\|B\|_F^2,$ we have $$ \P\left(|Y| \geq t\right) \leq 2\exp [-c\min\left(\frac{t^2}{\|B\|^2\|B\|_F^2 K^2\log K},\frac{t}{\|B\|^2 K^2\log K}\right)]. $$ The rest of the proof is the same as in (a). \end{proof} \subsection{Sub-Gaussian Increments Lemma} \label{subsec_main2} A key step toward \Cref{theorem_mainBA} is to show that the random process $Z_x:=\|BAx\|_2-\|B\|_F \|x\|_2 $ has sub-Gaussian increments. That is, $\|Z_x-Z_y\|_{\psi_2}\leq M\|x-y\|_2$ for some $M$ and for all $x,y\in \mathbb{R}^n$.
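As a quick numerical aside (not part of the formal argument), the scale of these increments is easy to observe empirically. The sketch below uses a standard Gaussian random matrix $A$, so that $K$ is an absolute constant; all dimensions and the choice of $B$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
l, m, n, trials = 30, 50, 40, 2000
B = rng.standard_normal((l, m)) / np.sqrt(m)   # a fixed (arbitrary) matrix B
fro = np.linalg.norm(B, 'fro')                 # ||B||_F
op = np.linalg.norm(B, 2)                      # ||B||

# Two fixed points on the unit sphere with ||x - y||_2 = sqrt(2).
x = np.zeros(n); x[0] = 1.0
y = np.zeros(n); y[1] = 1.0

# Sample A and evaluate the increment Z_x - Z_y, where Z_x = ||BAx||_2 - ||B||_F ||x||_2.
A = rng.standard_normal((trials, m, n))
Zx = np.linalg.norm((A @ x) @ B.T, axis=1) - fro
Zy = np.linalg.norm((A @ y) @ B.T, axis=1) - fro

ratio = np.std(Zx - Zy) / (op * np.sqrt(2))
print(ratio)  # an O(1) quantity: the increment scales like ||B|| ||x - y||_2
```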
Theorem 1.3 in \cite{liaw2017simple} showed sub-Gaussian increments for $B=I_m$ with $M= CK^2$. Here we improve and generalize this result to any $B$ with $M=C K\sqrt{\log K}\,\|B\|$. From \Cref{prop_tightness_ex}, it is easy to see that this $K\sqrt{\log K}$ factor is in fact tight for certain $A, B, x$ and $y$. We will prove two versions of the sub-Gaussian increments lemma. The first one (\Cref{lemma_subG_incrementB}) is for arbitrary $B$, but requires the random matrix $A$ to be mean zero. The second one (\Cref{lemma_subG_incrementB_diagonal}) is only for diagonal $B$, but does not require $A$ to be mean zero. For \Cref{lemma_subG_incrementB} below, the beginning of the proof follows the argument in \cite{liaw2017simple}, except that we will use \Cref{theorem_concentrationBX} for better tail bounds. Later in the proof, we will use a different approach to bound one of the tail probabilities (namely $p_3$) through the new Hanson-Wright inequality (\Cref{theorem_newhansonwright}). \begin{lemma} \label{lemma_subG_incrementB} Let $B\in\mathbb{R}^{l \times m}$ be a fixed matrix and let $A\in\mathbb{R}^{m\times n}$ be a mean zero, isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$. Then the random process $$ Z_x:=\|BAx\|_2-\|B\|_F \|x\|_2 $$ has sub-Gaussian increments with $$ \| Z_x-Z_y\|_{\psi_2} \leq CK\sqrt{\log K}\,\|B\| \|x-y\|_2 , \;\; \forall x,y\in\mathbb{R}^n. $$ \end{lemma} \begin{proof} The statement is invariant under scaling of $B$. So without loss of generality, we will assume $B$ has operator norm $\|B\|=1$. \subsubsection*{Step 1: Show sub-Gaussian increments for $x,y\in \S^{n-1}$ on the unit sphere} Without loss of generality, assume $x\neq y$ and define $$ p:=\P \left( \frac{|Z_x-Z_y|}{\|x-y\|_2} \geq s \right) = \P \left( \frac{| \|BAx\|_2-\|BAy\|_2 |}{\|x-y\|_2} \geq s \right). $$ We need to bound this tail probability by a Gaussian tail whose standard deviation is on the order of $K\sqrt{\log K}$.
Consider the following two cases: \begin{itemize} \item $s\geq 2\|B\|_F$. Denote $u:=\frac{x-y}{\|x-y\|_2}$ and by triangle inequality we have $$ p\leq\P \left( \frac{ \|BA(x-y)\|_2}{\|x-y\|_2} \geq s \right) = \P \left( \|BAu\|_2\geq s \right) =: p_1. $$ \item $0<s<2\|B\|_F$. Write $p$ as $$ p=\P \left( |Z| \geq s(\|BAx\|_2+\|BAy\|_2) \right) \quad\text{ where }\quad Z:=\frac{\|BAx\|_2^2-\|BAy\|_2^2}{\|x-y\|_2}. $$ Then $$ p\leq \P \left( |Z| \geq s\|BAx\|_2 \right) \leq \P \left( \|BAx\|_2\leq \frac{1}{2}\|B\|_F \right) + \P \left( |Z|>\frac{s}{2}\|B\|_F \right) =: p_2+p_3, $$ where $p_2$ and $p_3$ denote the first and second summand respectively. \end{itemize} Next we derive bounds for $p_1$, $p_2$ and $p_3$. \vskip .1in {\bf \noindent Bound for $p_1$} \vskip .1in From $s\geq 2\|B\|_F$ we have $$ p_1 = \P \left( \|BAu\|_2-\|B\|_F \geq s -\|B\|_F\right) \leq \P \left( \|BAu\|_2-\|B\|_F \geq \frac{s}{2} \right). $$ Applying \Cref{theorem_concentrationBX} to the random vector $Au$ we get $$ p_1 \leq 2\exp(-c\frac{s^2}{4K^2\log K}). $$ \vskip .1in {\bf \noindent Bound for $p_2$} \vskip .1in Applying \Cref{theorem_concentrationBX} to the random vector $Ax$ and note that $\|B\|_F>\frac{1}{2}s$, we get $$ p_2\leq 2\exp(-c\frac{(\|B\|_F/2)^2}{K^2\log K})\leq 2\exp(-c\frac{s^2}{16K^2\log K}). $$ \vskip .1in {\bf \noindent Bound for $p_3$} \vskip .1in Denote $u:=\frac{x-y}{\|x-y\|_2}$ and $v:=x+y$, then $\inp{u}{v}=0$ since $\|x\|_2=\|y\|_2=1$. We can write $Z$ as $$ Z=\frac{\inp{BA(x-y)}{BA(x+y)}}{\|x-y\|_2}=\inp{BAu}{BAv}. $$ Notice that \[ 2\inp{BAu}{BAv}=\inp{BA(u+v)}{BA(u+v)}-\inp{BAu}{BAu}-\inp{BAv}{BAv}. \] Let us also denote $X_w:=Aw$ for $w\in \mathbb{R}^n$, then from $\mathbb{E} X_wX_w^T=\|w\|_2^2\, I_n$ we have $$ \mathbb{E} \|BX_w\|_2^2=\mathbb{E}\, \mathrm{tr}(B^TBX_wX_w^T) = \mathrm{tr}(B^TB\,\mathbb{E} \left( X_wX_w^T \right) ) =\|w\|_2^2\|B\|_F^2. 
$$ Thus we can further write $Z$ as \begin{align*} Z& = \frac{1}{2}\|BX_{u+v}\|_2^2 - \frac{1}{2}\|BX_u\|_2^2 - \frac{1}{2}\|BX_v\|_2^2 \\ & = \frac{1}{2} \left( \|BX_{u+v}\|_2^2 -\mathbb{E} \|BX_{u+v}\|_2^2 \right) - \frac{1}{2} \left( \|BX_u\|_2^2-\mathbb{E} \|BX_u\|_2^2 \right) \notag \\ &\qquad - \frac{1}{2} \left( \|BX_v\|_2^2- \mathbb{E} \|BX_v\|_2^2 \right) \\ & = \frac{1}{2}Y_{u+v} -\frac{1}{2}Y_u -\frac{1}{2}Y_v. \end{align*} where the second equality uses the fact that $Z$ is mean zero and in the last equality $Y_w:=\|BX_w\|_2^2- \mathbb{E} \|BX_w\|_2^2$. Therefore \begin{align*} p_3 &= \P (|Y_{u+v}-Y_u-Y_v|>s\|B\|_F) \\ & \leq \P \left( |Y_{u+v}|+|Y_u|+|Y_v|>s\|B\|_F \right) \\ &\leq \P \left( |Y_{u+v}| \geq \frac{s}{2}\|B\|_F \right) + \P \left( |Y_u|+|Y_v| > \frac{s}{2}\|B\|_F \right) \\ &\leq \P \left( |Y_{u+v}| \geq \frac{s}{2}\|B\|_F \right) + \P \left( |Y_u| \geq \left( 1-\frac{1}{8}\|v\|_2^2 \right) \frac{s}{2}\|B\|_F \right) \notag \\ &\qquad +\P \left( |Y_v| > \frac{1}{8}\|v\|_2^2\cdot \frac{s}{2}\|B\|_F \right) \\ &=: p_4+p_5+p_6. \end{align*} We will bound $p_4,\,p_5$ and $p_6$ through the new Hanson-Wright inequality (\Cref{theorem_newhansonwright}). For any non-zero vector $w$, define $\bar{w}:=\frac{w}{\|w\|_2}$. It is easy to see that $X_w=\|w\|_2 X_{\bar{w}}$ and $Y_w=\|w\|_2^2Y_{\bar{w}}$. Also note that $$ \|B^TB\|=\|B\|^2=1, \quad \|B^TB\|_F\leq \|B^T\|\|B\|_F=\|B\|\|B\|_F=\|B\|_F, $$ so by \Cref{theorem_newhansonwright} we have \begin{align*} \P \left( |Y_{\bar{w}}|\geq r \right) & \leq 2\exp[-c\min\left(\frac{r^2}{\|B\|_F^2 K^2\log K},\frac{r}{K^2\log K}\right)] \\ &= 2\exp(-c \frac{r^2}{\|B\|_F^2 K^2\log K}) \quad\text{ when }\, 0\leq r \leq \|B\|_F^2. \end{align*} Hence for $0\leq t\leq \|w\|_2^2\|B\|_F^2$, \begin{equation} \label{eq-boundp456} \P(|Y_w|\geq t) = \P \left( |Y_{\bar{w}}| \geq \frac{t}{\|w\|_2^2} \right) \leq 2\exp(\frac{-ct^2}{\|w\|_2^4\|B\|_F^2 K^2\log K}). 
\end{equation} Now we apply \Cref{eq-boundp456} to $p_4,\,p_5$ and $p_6$. \begin{itemize} \item For $p_4$: since $s<2\|B\|_F$ and $\|u+v\|_2=\sqrt{1+\|v\|_2^2}\in [1,\sqrt{5})$, we can conclude that $$\frac{s}{2}\|B\|_F< \|B\|_F^2\leq \|u+v\|_2^2\|B\|_F^2$$ and therefore $$ p_4 \leq 2\exp(\frac{-cs^2}{4\|u+v\|_2^4 K^2\log K}) \leq 2\exp(\frac{-cs^2}{100 K^2\log K}). $$ \item For $p_5$: notice that $\|u\|_2=1$ and $1-\frac{1}{8}\|v\|_2^2 \in (\frac{1}{2}, 1]$, so $$ p_5\leq \P \left( |Y_u|\geq \frac{s}{4}\|B\|_F \right) \leq 2\exp(\frac{-cs^2}{16 K^2\log K}). $$ \item For $p_6$: if $v=0$ (i.e. $x=-y$), then $p_6=\P (0>0)=0$. Now assume $v\neq 0$; then by \eqref{eq-boundp456} we have $$ p_6=\P \left( |Y_v| > \frac{\|v\|_2^2}{16}s\|B\|_F \right) \leq 2\exp(\frac{-cs^2}{256 K^2\log K}). $$ \end{itemize} \vskip .1in {\bf \noindent Putting everything together -- the bound for $p$} \vskip .1in So far we have shown that \[ p\leq \max\{ p_1, p_2+p_3\} \quad\text{ and }\quad p_3\leq p_4+p_5+p_6, \] where $p_i \leq 2\exp(\frac{-cs^2}{K^2\log K})$ for some absolute constant $c$ and $1\leq i\leq 6$. Noting that $p\leq 1$ and using the inequality $\min\{1,8e^{-x}\}\leq 2e^{-x/3}$ (see \Cref{appendix_b}), we get $$ p\leq \min \left\{ 1, 8\exp(\frac{-cs^2}{K^2\log K})\right\} \leq 2 \exp(\frac{-cs^2}{3K^2\log K}). $$ \subsubsection*{Step 2: Show sub-Gaussian increments for all $x$ and $y$} Without loss of generality, we can assume $\|x\|_2=1$ and $\|y\|_2\geq 1$. Let $\bar{y}:=\frac{y}{\|y\|_2}$ be the projection of $y$ onto the unit ball; then by the triangle inequality, $$ \|Z_x-Z_y\|_{\psi_2} \leq \|Z_x-Z_{\bar{y}}\|_{\psi_2} + \|Z_y-Z_{\bar{y}}\|_{\psi_2}=:R_1+R_2.
$$ Here $R_1$ is bounded by $CK\sqrt{\log K}\|x-\bar{y}\|_2$ since $x,\bar{y}\in\S^{n-1}$, and $$R_2=\| \left( \|y\|_2-1\right) Z_{\bar{y}}\|_{\psi_2}=\|y-\bar{y}\|_2\|Z_{\bar{y}}\|_{\psi_2}\leq CK\sqrt{\log K}\|y-\bar{y}\|_2,$$ where the first equality uses $Z_y=\|y\|_2Z_{\bar{y}}$, the second equality holds since $\|y\|_2-1=\|y-\bar{y}\|_2$, and the last inequality follows from \Cref{theorem_concentrationBX}. Combining these bounds we get $$ \|Z_x-Z_y\|_{\psi_2} \leq CK\sqrt{\log K} \left( \|x-\bar{y}\|_2+\|y-\bar{y}\|_2 \right). $$ Finally, note that $\|x\|_2=1$, so by the non-expansiveness of projection, $\|x-\bar{y}\|_2\leq \|x-y\|_2$, and by the definition of projection, $\|y-\bar{y}\|_2\leq \|y-x\|_2$. This completes the proof. \end{proof} Next we show the second version of the sub-Gaussian increments lemma, which requires $B$ to be diagonal but does not need $A$ to be mean zero. The proof is mostly the same as that of \Cref{lemma_subG_incrementB}, so we will only highlight the differences. \begin{lemma} \label{lemma_subG_incrementB_diagonal} Let $B\in\mathbb{R}^{l \times m}$ be a fixed diagonal matrix and let $A\in\mathbb{R}^{m\times n}$ be an isotropic, sub-Gaussian matrix with sub-Gaussian parameter $K$. Then the random process $$ Z_x:=\|BAx\|_2-\|B\|_F \|x\|_2 $$ has sub-Gaussian increments with $$ \| Z_x-Z_y\|_{\psi_2} \leq CK\sqrt{\log K}\, \|B\| \|x-y\|_2, \;\; \forall x,y\in\mathbb{R}^n. $$ \end{lemma} \begin{proof} If $B$ is not a square matrix, we can always add $m-l$ rows of zeros to $B$ (when $l<m$) or remove the last $l-m$ rows of zeros from $B$ (when $l>m$). This turns $B$ into an $m \times m$ square matrix without changing the values of $\|BAx\|_2$, $\|B\|_F$ and $\|B\|$. So without loss of generality, we can assume $B$ is a square matrix. Also without loss of generality, we can further assume $\|B\|=1$, since the conclusion is invariant under scaling of $B$.
The remainder of the proof of \Cref{lemma_subG_incrementB_diagonal} is the same as that of \Cref{lemma_subG_incrementB}, except for bounding $p_3$ in Step 1. A bound for $p_3$ here can be obtained through the new Bernstein's inequality (\Cref{theorem_newbernstein}), as detailed below. Recall that $$ Z=\inp{BAu}{BAv}=\sum_{i=1}^mb_i^2 \inp{A_i}{u}\inp{A_i}{v}=:\sum_{i=1}^m b_i^2Y_i, $$ where $b_i:=B_{ii}$ and $A_i$ is the $i$-th row of $A$. The random variables $Y_i:=\inp{A_i}{u}\inp{A_i}{v}$ are independent, with $$ \mathbb{E} Y_i=\frac{\mathbb{E} \inp{A_i}{x-y}\inp{A_i}{x+y}}{\|x-y\|_2}=\frac{\mathbb{E}\inp{A_i}{x}^2 -\mathbb{E}\inp{A_i}{y}^2}{\|x-y\|_2}=\frac{1-1}{\|x-y\|_2}=0, $$ $$ \mathbb{E} |Y_i|\leq \mathbb{E} \frac{\inp{A_i}{u}^2+\inp{A_i}{v}^2}{2}\leq \frac{1+4}{2}=\frac{5}{2}. $$ Here we used $\|x\|_2=\|y\|_2=\|u\|_2=1$, $\|v\|_2\leq 2$, and that $A_i$ is isotropic. Furthermore, from property (d) in \Cref{appendix_psi_alpha_properties} we have $$ \|Y_i\|_{\psi_1}\leq \|\inp{A_i}{u}\|_{\psi_2}\|\inp{A_i}{v}\|_{\psi_2}\leq K\|u\|_2 \cdot K\|v\|_2\leq 2K^2. $$ Therefore, by \Cref{theorem_newbernstein} and the inequality $\sum b_i^4\leq \left( \max_i b_i^2 \right) \cdot \sum b_i^2 = \|B\|_F^2$, we have $$ \P\left( \left| \sum b_i^2Y_i \right| > t \right) \leq 2\exp [-c\min\left(\frac{t^2}{\|B\|_F^2 K^2\log K},\frac{t}{K^2\log K}\right)]. $$ Since $0<s<2\|B\|_F$, we get $$ p_3=\P\left( \left| Z \right| > \frac{s}{2}\|B\|_F \right) \leq 2\exp(-c\frac{s^2}{4K^2\log K}). $$ \end{proof} \subsection{Proof of \texorpdfstring{\Cref{theorem_mainBA}}{}} \label{subsec_main3} \Cref{theorem_mainBA} follows from the sub-Gaussian increments lemmas and Talagrand's Majorizing Measure Theorem. Let us first recall the Majorizing Measure Theorem. The following statement is from \cite{liaw2017simple}. \begin{theorem}[Majorizing Measure Theorem] \label{theorem_majorm} Let $(Z_x)_{x\in T}$ be a random process on a bounded set $T\subset \mathbb{R}^n$.
Assume that the process has sub-Gaussian increments; that is, there exists $M\geq 0$ such that \[ \|Z_x-Z_y\|_{\psi_2} \leq M\|x-y\|_2 \;\text{ for all }\; x,y \in T. \] Then \[ \mathbb{E} \sup_{x,y\in T}|Z_x-Z_y|\leq CM \, \mathbb{E}\sup_{x\in T}\inp{g}{x}, \] where $g\sim\mathbf{Normal}(0,I_n)$. Moreover, for any $u\geq 1$, the event \[ \sup_{x,y\in T}|Z_x-Z_y|\leq CM \left( \mathbb{E}\sup_{x\in T}\inp{g}{x}+u\cdot \mathrm{diam}(T) \right) \] holds with probability at least $1-e^{-u^2}$. Here $\mathrm{diam}(T):=\sup_{x,y\in T}\|x-y\|_2$. \end{theorem} The first part of \Cref{theorem_majorm} can be found in \cite[Theorem 2.4.12]{talagrand2014upper} and the second part can be found in \cite[Theorem 3.2]{2015dirksen}. \begin{proof}[{\bf Proof of \Cref{theorem_mainBA}}] Let $Z_x:=\|BAx\|_2-\|B\|_F\|x\|_2$. For the expectation bound, take an arbitrary $y\in T$; then by the triangle inequality we have \[ \mathbb{E}\sup_{x\in T}|Z_x|\leq \mathbb{E}\sup_{x\in T}|Z_x-Z_y|+\mathbb{E} |Z_y|. \] Using \Cref{lemma_subG_incrementB} and \Cref{theorem_majorm} (the Majorizing Measure Theorem), we get \[ \mathbb{E}\sup_{x\in T}|Z_x-Z_y| \leq \mathbb{E}\sup_{x,y\in T}|Z_x-Z_y| \lesssim K\sqrt{\log K}\, \|B\| w(T).\] Using property (e) in \Cref{appendix_psi_alpha_properties} and \Cref{lemma_subG_incrementB}, we get \[ \mathbb{E}|Z_y| \lesssim \|Z_y\|_{\psi_2} = \|Z_y-Z_0\|_{\psi_2}\lesssim K\sqrt{\log K}\, \|B\|\|y\|_2. \] Therefore $\mathbb{E}\sup_{x\in T}|Z_x|\leq CK\sqrt{\log K}\, \|B\| \left( w(T)+\mathrm{rad}(T) \right) $. For the high probability bound, notice that the result is trivial when $u<1$. When $u\geq 1$, fix an arbitrary $y\in T$ and use the triangle inequality again to get \[ \sup_{x\in T}|Z_x|\leq \sup_{x\in T}|Z_x-Z_{y}|+ |Z_{y}| \leq \sup_{x,x'\in T}|Z_x-Z_{x'}|+ |Z_{y}|.
\] Since $\mathrm{diam}(T)\leq 2\,\mathrm{rad}(T)$, applying \Cref{lemma_subG_incrementB} and \Cref{theorem_majorm} we know that the event \[ \sup_{x,x'\in T}|Z_x-Z_{x'}| \lesssim K\sqrt{\log K} \, \|B\| [ w(T)+u\cdot \mathrm{rad}(T) ] \] holds with probability at least $1-e^{-u^2}$. \\ To bound $|Z_y|$, again by \Cref{lemma_subG_incrementB}, $\|Z_y\|_{\psi_2}=\|Z_y-Z_0\|_{\psi_2}\leq CK\sqrt{\log K}\,\|B\|\|y\|_2$, so the event \[ |Z_y| \lesssim uK\sqrt{\log K} \,\|B\| \|y\|_2 \] holds with probability at least $1-2e^{-u^2}$. Combining these yields the desired high probability bound. Finally, when $B$ is a diagonal matrix and $A$ is not necessarily mean zero, we can repeat the above argument with \Cref{lemma_subG_incrementB_diagonal} in place of \Cref{lemma_subG_incrementB}. This completes the proof. \end{proof} \subsection{An Example for Lower Bound} \label{sec_tightness} Here we give an example based on scaled Bernoulli random variables. This example demonstrates that the $K\sqrt{\log K}$ factor in the tail bound of \Cref{theorem_mainBA} is optimal in general. \begin{prop} \label{prop_tightness_ex} Let $K\geq 4$ and let $X=(X_1,\dots,X_m)\in\mathbb{R}^m$ be a random vector with independent coordinates such that $\frac{1}{K^2\log K}X_i^2 \sim \mathbf{Bernoulli}\left( \frac{1}{K^2\log K}\right)$. Then $\|X_i\|_{\psi_2}\leq K$. Furthermore, if $m\geq K^2\log K$, then we also have \begin{equation} \label{eq-ex-tight} \| \|X\|_2-\sqrt{m}\|_{\psi_2} \geq cK\sqrt{\log K} \end{equation} for some absolute constant $c> \frac{1}{5}$. Note that this result does not depend on the distribution of the signs of $X_i$. \end{prop} Indeed, since $X_i$ has the same $\psi_2$-norm as $|X_i|$, and $\|X\|_2$ is determined by the $|X_i|$, it is easy to see that the result does not depend on the distribution of the signs of $X_i$.
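Before turning to the proof, the two moment claims behind \Cref{prop_tightness_ex} can be checked exactly for a concrete value of $K$ (a numerical aside; the choice $K=4$ is arbitrary):

```python
import math

K = 4.0
L2 = K**2 * math.log(K)   # L^2 = K^2 log K
p = 1.0 / L2              # Bernoulli parameter of X_i^2 / L^2

second_moment = p * L2                        # E X_i^2 = p * L^2 = 1
mgf_at_K = p * math.exp(L2 / K**2) + (1 - p)  # E exp(X_i^2/K^2) = 1 + p(K - 1)

print(abs(second_moment - 1.0) < 1e-12)  # True: unit second moment
print(mgf_at_K)                          # about 1.135, below 2, so ||X_i||_{psi_2} <= K
```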
Also, note that the expected number of non-zero coordinates of $X$ is $\frac{m}{K^2\log K}$, so the assumption $m\geq K^2\log K$ requires this expected number to be at least 1, i.e. a typical realization of $X$ should be non-zero. This assumption can be considered mild. To prove \Cref{prop_tightness_ex}, we will need a lower bound on Binomial tails. The following estimate is taken from \cite[Lemma 4.7.2]{robert1990ash}: \begin{equation} \label{eq-binomial-lower-integerk} \P\left( \mathbf{Binomial}(m,p)\geq k\right) \geq \frac{1}{\sqrt{8k(1-\frac{k}{m})}}\exp \left( -m D\left( \frac{k}{m} \,\|\, p\right) \right) \;\text{ when }\; p<\frac{k}{m}<1, \, k\in \mathbb{N}. \end{equation} Here $D(x\|y)$ is the Kullback-Leibler divergence between two Bernoulli distributions with parameters $x$ and $y$ respectively, given by $$ D(x\|y)=x\log \frac{x}{y} + (1-x)\log\frac{1-x}{1-y}. $$ Moreover, for $0<y<x<1$, \begin{equation} \label{eq-KL-mono} \frac{\partial}{\partial x}D(x\|y) = \log \frac{x}{y} + \log\frac{1-y}{1-x} > 0. \end{equation} Estimate \eqref{eq-binomial-lower-integerk} is only for integers $k$. However, using \eqref{eq-binomial-lower-integerk}, we can also prove a variant which is true for all real numbers $k \in (mp+1,m/2)$. This is \Cref{lemma-binomial-lower} below. The proof of this lemma is based on two observations. First, the tail $\P\left( \mathbf{Binomial}(m,p)\geq k\right)$ is a left-continuous decreasing step function of $k$, with possible discontinuities at integer points. Second, the lower bound on the right hand side of \eqref{eq-binomial-lower-integerk} is a decreasing function of $k$ on $[mp,\frac{m}{2}]$. We will apply \Cref{lemma-binomial-lower} in the proof of \Cref{prop_tightness_ex}. \begin{lemma} \label{lemma-binomial-lower} Assume $p < \frac{1}{4}$ and $mp\geq 1$. Then \begin{equation*} \P\left( \mathbf{Binomial}(m,p)\geq k-1\right) \geq \frac{1}{\sqrt{8k(1-\frac{k}{m})}}\exp \left( -m D\left( \frac{k}{m} \,\|\, p\right) \right)
\end{equation*} for all real numbers $k\in (mp+1, m/2)$. \end{lemma} \begin{proof} Define $$ q(t):=\P\left( \mathbf{Binomial}(m,p)\geq t\right) \;\;\text{ and }\;\; f(t) := \frac{1}{\sqrt{8t(1-\frac{t}{m})}}\exp \left( -m D\left( \frac{t}{m} \,\|\, p\right) \right). $$ Also, denoting $u=t/m$, we can calculate the derivative of $f(t)$: \begin{align*} \frac{df}{dt} &= \left[ -\frac{1-\frac{2t}{m}}{2\sqrt{8}t^{\frac{3}{2}}(1-\frac{t}{m})^{\frac{3}{2}}} + \frac{1}{\sqrt{8t(1-\frac{t}{m})}} \cdot (-m)\frac{\partial D}{\partial u}\frac{\partial u}{\partial t} \right] \exp \left( -m D\left( \frac{t}{m} \,\|\, p\right) \right) \\ &= -\frac{\exp \left( -m D\left( \frac{t}{m} \,\|\, p\right) \right)}{\sqrt{8t(1-\frac{t}{m})}} \left[ \frac{1-\frac{2t}{m}}{2t(1-\frac{t}{m})} + \frac{\partial D}{\partial u} \right]. \end{align*} When $t \in [mp, m/2]$, we have $\frac{\partial D}{\partial u}\geq 0$ by \eqref{eq-KL-mono}, and hence $\frac{df}{dt}\leq 0$. So $f(t)$ is monotonically decreasing on $ [mp, m/2]$. On the other hand, $q(t)$ is a left-continuous decreasing step function, so by the above and \eqref{eq-binomial-lower-integerk} we have \[ q(k-1)=q(\lceil k-1 \rceil) \geq q(\lfloor k \rfloor) \geq f(\lfloor k \rfloor) \geq f(k), \quad k\in (mp+1,m/2), \] where $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ are the floor and ceiling functions. This completes the proof. \end{proof} \begin{proof}[{\bf Proof of \Cref{prop_tightness_ex}}] $\|X_i\|_{\psi_2}\leq K$ follows directly from the definition, since $$ \mathbb{E} \exp(X_i^2/K^2) = \frac{1}{K^2\log K}e^{\log K}+\left( 1-\frac{1}{K^2\log K}\right) e^0 < 2.
$$ Let $\lambda>0$, $Z:=\|X\|_2-\sqrt{m}$ and $L^2:=K^2\log K$, with a change of variable $s=\exp(\lambda t/L^2)$ we have \begin{align*} \mathbb{E} \exp( \lambda Z^2/L^2) &= \int_{0}^\infty \P\left(e^{\lambda Z^2/L^2}\geq s\right) ds \\ &=\int_0^1 1\, ds + \int_{1}^\infty \P\left(e^{\lambda Z^2/L^2}\geq s\right) ds \\ &= 1 + \frac{\lambda}{L^2}\int_0^\infty \P(Z^2\geq t)\, e^{\lambda t/L^2} dt. \end{align*} To show \eqref{eq-ex-tight}, we need to find a $\lambda$ such that $\mathbb{E} \exp( \lambda Z^2/L^2)>2 $. By a change of variable $t=v^2L^2$, it suffices to show $$I:=2\lambda \int_0^\infty \P(|Z|\geq vL)\, ve^{\lambda v^2}dv>1 \;\text{ for some } \lambda>0. $$ Let $$\alpha:=\frac{\sqrt{m}}{L}\geq 1, \quad \beta_v:=\alpha +v=\frac{\sqrt{m}}{L}+v, \quad \gamma_v:=\frac{\beta_v^2+1}{m}=\left( \frac{1}{L}+\frac{v}{\sqrt{m}}\right) ^2 +\frac{1}{m}, $$ then \begin{align*} \P(|Z|\geq vL) &\geq \P\left( \|X\|_2\geq \sqrt{m}+vL\right) = \P \left( \frac{1}{L^2}\|X\|_2^2\geq \beta_v^2 \right) \end{align*} where $\frac{1}{L^2}\|X\|_2^2\sim \mathbf{Binomial}\left( m,\frac{1}{L^2} \right)$. Note that for $v\in [\alpha, 2\alpha]$, we have $$\beta_v^2+1>mL^{-2}+1 \quad\text{ and }\quad \beta_v^2+1 \leq 9mL^{-2}+1\leq 10mL^{-2}<m/2. $$ So by \Cref{lemma-binomial-lower} (with $k=\beta_v^2+1$) we get \[ \P \left( |Z|\geq vL \right) \geq \frac{1}{\sqrt{8(\beta_v^2+1)}} \exp \left( -m D\left( \gamma_v \,\|\, L^{-2} \right) \right). \] Also note that for $v\in [\alpha, 2\alpha]$, \[ \frac{v^2}{\beta_v^2+1}= \frac{v^2}{(\alpha+v)^2+1} \geq \frac{v^2}{(2v)^2+v^2}=\frac{1}{5}. 
\] Therefore \begin{align*} I &\geq 2\lambda\int_{\alpha}^{2\alpha} \frac{1}{\sqrt{8}} \frac{v}{\sqrt{\beta_v^2+1}}\exp\left( -mD\left( \gamma_v \,\middle\|\, L^{-2} \right) \right) \cdot \exp(\lambda v^2) dv \\ &\geq \frac{2\lambda}{\sqrt{40}} \int_{\alpha}^{2\alpha} \exp \left( -mD\left( \gamma_v \,\middle\|\, L^{-2} \right) +\lambda v^2 \right) dv \\ &\geq \frac{\lambda}{\sqrt{10}} \int_{\alpha}^{2\alpha} \exp(-\lambda_0\alpha^2 +\lambda v^2) dv \end{align*} with $\lambda_0:=10\log 10$. The last inequality above holds because $\gamma_v \leq \gamma_{2\alpha}=\frac{9}{L^2}+\frac{1}{m}\leq \frac{10}{L^2}$, so it follows from \eqref{eq-KL-mono} that \[ D\left( \gamma_v \,\middle\|\, L^{-2} \right) \leq D\left( 10L^{-2} \,\middle\|\, L^{-2}\right) \leq 10L^{-2}\log \frac{10L^{-2}}{L^{-2}} = \frac{\lambda_0}{L^2}. \] Choosing $\lambda= \lambda_0$, we get $$ I\geq \frac{\lambda_0}{\sqrt{10}}\int_\alpha^{2\alpha} 1\,dv \geq \frac{\lambda_0}{\sqrt{10}} >1.$$ This proves \eqref{eq-ex-tight} with $c=1/\sqrt{\lambda_0}\approx 0.208$. \end{proof} \section{Applications} \label{sec_applications} \subsection{Johnson-Lindenstrauss Lemma} One immediate application of our result is a guarantee for all isotropic and sub-Gaussian matrices as Johnson-Lindenstrauss (JL) embeddings for dimension reduction. We state this JL lemma below. It follows directly from \Cref{theorem_concentrationBX}. \begin{lemma} \label{lemma_JL_xy} Let $A\in\mathbb{R}^{m\times n}$ be an isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$. If \begin{equation} \label{eq-JLlm} m \geq CK^2\log K \varepsilon^{-2}\log(1/\delta), \end{equation} then for any $x, y\in\mathbb{R}^n$, with probability at least $1-\delta$ we have $$ (1-\varepsilon)\|x-y\|_2 \leq \frac{1}{\sqrt{m}} \|A(x-y)\|_2 \leq (1+\varepsilon)\|x-y\|_2. $$ \end{lemma} \begin{proof} By scaling we can assume $\|x-y\|_2=1$.
By \Cref{theorem_concentrationBX} (with $B=I_m$) we have $$ \| \|m^{-\frac{1}{2}}A(x-y)\|_2-1\|_{\psi_2}\leq C m^{-\frac{1}{2}}K\sqrt{\log K}, $$ and the result then follows from property (a) in \Cref{appendix_psi_alpha_properties}. \end{proof} It is known that the dependence on $\varepsilon$ and $\delta$ in \eqref{eq-JLlm} is optimal for linear mappings \cite{larsen2014johnson}. Using the same example as \Cref{prop_tightness_ex}, we can conclude (see \Cref{appendix_jlopt}) that the dependence on the sub-Gaussian parameter $K$ here is optimal as well. Similar results to \Cref{lemma_JL_xy} have appeared in \cite{matouvsek2008variants,Dirksen2016dimension}, but to the best of our knowledge, the previously known dependence on $K$ was $K^4$. \subsection{Null Space Property for 0-1 Matrices: Improved Parametric Dependence} {\it Null space property} (NSP) is a well-known sufficient condition for sparse recovery and can be used to provide guarantees for $\ell_1$-based algorithms such as {\it basis pursuit denoising}. For the robust version, we say a matrix $A\in \mathbb{R}^{m\times n}$ satisfies the {\it $\ell_2$-robust null space property} of order $s$ with parameters $\rho\in (0,1)$ and $\tau >0$, denoted as $\ell_2\text{-rNSP}(s,\rho,\tau)$, if $$ \|v_S\|_2 \leq \frac{\rho}{\sqrt{s}}\|v_{\overline{S}}\|_1 + \tau \|Av\|_2 $$ holds for all $v\in \mathbb{R}^n$ and all $S\subset [n]= \{1,2,\dots,n\}$ with size $|S|\leq s$. Here $v_S$ is the restriction of $v$ to the set $S$ (that is, $v_S(i)=v(i)$ if $i\in S$ and $v_S(i)=0$ otherwise) and $\overline{S}$ is the complement of $S$. Matrices with all entries equal to 0 or 1, called 0-1 matrices, appear in many compressed sensing applications, including group testing \cite{cohen2020multi,shental2020efficient}, compressed imaging \cite{shental2020efficient}, and wireless network activity detection \cite{kueng2017robust}. In some cases, most of the entries of the matrix should be equal to 0.
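As a sanity check, the $\ell_2$-rNSP inequality defined above is easy to evaluate numerically for a single vector and support set. In the sketch below, the sizes, Bernoulli parameter and $\rho$ are illustrative choices of our own, while $\tau$ is taken as in \Cref{theorem_nsp01} below:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnsp_holds(A, v, S, s, rho, tau):
    """Evaluate the l2-rNSP inequality
    ||v_S||_2 <= (rho / sqrt(s)) * ||v_{S^c}||_1 + tau * ||A v||_2
    for a single vector v and support set S."""
    v_S = np.zeros_like(v)
    v_S[S] = v[S]
    lhs = np.linalg.norm(v_S)
    rhs = (rho / np.sqrt(s)) * np.linalg.norm(v - v_S, 1) + tau * np.linalg.norm(A @ v)
    return bool(lhs <= rhs)

# Illustrative parameters (our own choices, not from the paper).
m, n, s, p, rho = 200, 400, 5, 0.3, 0.5
A = (rng.random((m, n)) < p).astype(float)           # 0-1 Bernoulli(p) matrix
tau = 2.0 / (np.sqrt(m - 1) * np.sqrt(p * (1 - p)))  # tau from the theorem below

v = rng.standard_normal(n)
S = np.argsort(-np.abs(v))[:s]   # a hard test case: S carries the s largest entries
print(rnsp_holds(A, v, S, s, rho, tau))
```

Of course, the property itself quantifies over all $v$ and $S$, so a single evaluation like this can only refute it, as in the $mp<\frac{1}{2}$ discussion at the end of this subsection.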
In particular, in group testing each 1 that appears in a row corresponds to a sample being mixed into a pool, but including too many samples causes dilution. Measurement matrices are designed to be sparse to avoid such dilution \cite{cohen2020multi}. The traditional way to draw such a 0-1 matrix for group testing is to take each entry as an independent $\mathbf{Bernoulli}(p)$ random variable \cite{cohen2020multi}, although recent research focused on COVID-19 detection has considered other methods to generate the 0-1 matrix \cite{cohen2020multi,shental2020efficient}. The authors of \cite{kueng2017robust} proved that 0-1 matrices with $\mathbf{Bernoulli}(p)$ entries satisfy $\ell_2$-rNSP under certain conditions, with a proof based on Mendelson’s small ball method. Using the tools of our paper as a black box allows for a simpler proof of this result, as well as a significant improvement in the dependence on the parameter $p$. A common way of proving NSP is through the {\it restricted isometry property} (RIP). We say a matrix $A\in \mathbb{R}^{m\times n}$ satisfies RIP of order $s$ with parameter $\delta\in (0,1)$, denoted as RIP$(s,\delta)$, if \[ (1-\delta)\|x\|_2 \leq \|Ax\|_2 \leq (1+\delta)\|x\|_2, \quad \forall x\in \Sigma_s^n \] where $\Sigma_s^n$ is the set of all $s$-sparse vectors in $\mathbb{R}^n$ (note that \Cref{theorem_mainBA} can provide such a result by choosing $T=\Sigma_s^n\cap \S^{n-1}$). However, for 0-1 matrices, it is necessary to have $m\gtrsim \min\{s^2, n\}$ in order for RIP of order $s$ to hold \cite{chandar2008negative}. This would be highly sub-optimal, as the optimal sample complexity for sparse recovery is known to be $Cs\log \frac{en}{s}$, which can also be achieved by many isotropic random matrices \cite{foucart2013mathematical}. To our knowledge, the best result so far regarding $\ell_2$-rNSP for 0-1 matrices with i.i.d. $\mathbf{Bernoulli}(p)$ entries appeared in \cite{kueng2017robust}.
While its bound on sample complexity scales optimally in $s$, the dependence on the Bernoulli parameter $p$ was at least $\frac{1}{p^2(1-p)^2}$. In the following we give an alternative proof of such an $\ell_2$-rNSP (\Cref{theorem_nsp01}). Our proof yields a sample complexity bound that scales optimally in both $s$ and $p$. We remark that the optimal $\frac{1}{p(1-p)}$ dependence on $p$ is obtained thanks to the improved dependence on the sub-Gaussian parameter in \Cref{theorem_mainBA}. We also give a brief justification of this optimality after the proof. The proof idea for \Cref{theorem_nsp01} is to show RIP (and therefore $\ell_2$-rNSP) for a projected version of the original 0-1 matrix, and then pass the $\ell_2$-rNSP back to the original matrix. \begin{theorem} \label{theorem_nsp01} Fix $\rho \in (0,1)$ and let $A\in \mathbb{R}^{m\times n}$ be a random matrix with i.i.d. $\mathbf{Bernoulli}(p)$ entries where $p\in (0,1)$. If \begin{equation} \label{eq-nsp-m} m\geq C\rho^{-2}\frac{1}{p(1-p)} \left( s\log\frac{en}{s} + u^2 \right) \end{equation} for some absolute constant $C$, then with probability at least $1-3e^{-u^2}$, $A$ satisfies $\ell_2\text{-rNSP}(s,\rho,\tau)$ with $\tau = \frac{2}{\sqrt{m-1}\sqrt{p(1-p)}} $. \end{theorem} \begin{proof} Denote by $\mathbf{1}_n$ the all-ones column vector in $\mathbb{R}^n$; then $\mathbb{E} A = p\mathbf{1}_m\mathbf{1}_n^T$. Let $$\textstyle \tilde{A}:=\frac{1}{\sqrt{p(1-p)}} \left( A - \mathbb{E} A \right). $$ It is easy to verify that $\mathbb{E} \tilde{A}_{ij}=0$ and $\mathbb{E} \tilde{A}_{ij}^2=1$. Moreover, by \Cref{prop_nsp_psi2} below, $\tilde{K}^2 \log \tilde{K}\leq \frac{1}{p(1-p)}$ where $\tilde{K}$ is the sub-Gaussian parameter of $\tilde{A}$.
Let $P\in \mathbb{R}^{m\times m}$ be the orthogonal projection matrix onto span$\{\mathbf{1}_m\}^\perp$ and let $T=\Sigma_{2s}^n\cap \S^{n-1}$. Then $\|P\|_F=\sqrt{m-1}$ and by \Cref{theorem_mainBA}, we have with probability at least $1-3e^{-u^2}$, $$ \sup_{x\in T} \left| \frac{1}{\sqrt{m-1}}\|P\tilde{A}x\|_2 - 1 \right| \leq \frac{C}{\sqrt{m-1}}\frac{1}{\sqrt{p(1-p)}} \left( w(T) + u \right). $$ Since $w^2(T)\leq 4s \log \frac{en}{s}$ \cite[Proposition 9.24]{foucart2013mathematical}, by choosing $m$ as in \eqref{eq-nsp-m} with an appropriate absolute constant $C$, we get with probability at least $1-3e^{-u^2}$, \begin{equation} \label{eq-nsp-rip} \sup_{x\in T} \left| \frac{1}{\sqrt{m-1}}\|P\tilde{A}x\|_2 - 1 \right| \leq \frac{1}{2}\rho =: \delta, \end{equation} or equivalently, $\frac{1}{\sqrt{m-1}}P\tilde{A}$ satisfies RIP$(2s,\delta)$ where $\delta=\frac{1}{2}\rho \in(0, \frac{1}{2})$. Note that RIP implies NSP; in particular, using \cite[Theorem 6.13]{foucart2013mathematical} we know \eqref{eq-nsp-rip} implies that $\frac{1}{\sqrt{m-1}}P\tilde{A}$ satisfies $\ell_2$-rNSP$(s,\rho',\tau')$ with $$ \rho':=\frac{\delta}{\sqrt{1-\delta^2}-\delta/4} < 2\delta =\rho \quad \text{ and }\quad \tau':=\frac{\sqrt{1+\delta}}{\sqrt{1-\delta^2}-\delta/4} < 2. $$ Since $P\tilde{A}= \frac{1}{\sqrt{p(1-p)}} PA$, this robust null space property reduces to $$ \|v_S\|_2 \leq \frac{\rho'}{\sqrt{s}}\|v_{\overline{S}}\|_1 + \frac{\tau'}{\sqrt{m-1}\sqrt{p(1-p)}} \|PAv\|_2, \quad \forall v\in \mathbb{R}^n, S\subset [n] \text{ with } |S|\leq s. $$ $\ell_2\text{-rNSP}(s,\rho,\tau)$ for $A$ then follows immediately as $\|PAv\|_2\leq \|Av\|_2$.
\end{proof} \begin{prop}[$\psi_2$-norm of $\tilde{A}_{ij}$ in proof of \Cref{theorem_nsp01}] \label{prop_nsp_psi2} Let $p\in (0,1)$ and \[ X:= \frac{1}{\sqrt{p(1-p)}} \left( \mathbf{Bernoulli}(p) - p \right) =\left\lbrace \begin{array}{lc} \sqrt{\frac{1-p}{p}} & \text{with probability } p \\ -\sqrt{\frac{p}{1-p}} & \text{with probability } 1-p \end{array} \right. \] Also let $K$ be the real number such that $K^2\log K=\frac{1}{p(1-p)}$; then $\|X\|_{\psi_2}\leq K$. \end{prop} \begin{proof} Let $q=1-p$ and notice that $q/(pK^2)=q^2\log K$ and $p/(qK^2)=p^2\log K$; then \begin{align*} \mathbb{E} \exp(X^2/K^2) = p e^{q/(pK^2)} + q e^{p/(qK^2)} = p K^{q^2} + qK^{p^2}. \end{align*} Since $\frac{1}{pq}\geq 4$, we must have $\log K > \frac{1}{2}$ (otherwise $K^2\log K\leq \frac{e}{2}$), and thus $K^2 < \frac{2}{pq}$. So \begin{align*} \mathbb{E} \exp(X^2/K^2) < p \left( \frac{2}{pq} \right) ^\frac{q^2}{2} + q \left( \frac{2}{pq} \right) ^\frac{p^2}{2} \leq 2 \end{align*} where the final estimate follows from the inequality $(1-x)\left( \frac{2}{x(1-x)} \right) ^ {x^2/2} \leq 1$ for $0<x<1$ (see \Cref{appendix_b} for a proof of this inequality). \end{proof} Lastly, we remark that the $\frac{1}{p(1-p)}$ dependence in the sample complexity is (up to constants) optimal as $p\to 0$ or $p\to 1$. In fact, we can show that if $mp<\frac{1}{2}$ or $m(1-p)<\frac{1}{2}$, then the matrix $A$ from \Cref{theorem_nsp01} cannot satisfy the $\ell_2$-rNSP of order 2 (regardless of $\rho$ and $\tau$) with probability at least $\frac{1}{4}$. To see this, let $q=1-p$, $v=(1,-1,0,\dots,0)^T\in \mathbb{R}^n$, $S=\{1,2\}$ and consider the following cases: \begin{itemize} \item $mp<\frac{1}{2}$. Denote by $A_i$ the $i$-th column of $A$; then by Bernoulli's inequality, the probability of $A_i$ being zero is $$ \P(A_i=\mathbf{0})=(1-p)^m\geq 1-mp > \tfrac{1}{2}.
$$ Thus $\P(A_1=A_2=\mathbf{0})> \frac{1}{4}$ and on the event $\{A_1=A_2=\mathbf{0}\}$, we have $$ \|v_S\|_2=\sqrt{2} \quad\text{ while }\quad \frac{\rho}{\sqrt{s}}\|v_{\overline{S}}\|_1 + \tau \|Av\|_2=0. $$ This means that with probability at least $\frac{1}{4}$, $A$ cannot satisfy $\ell_2$-rNSP of order 2. \item $mq<\frac{1}{2}$. Let $B=\mathbf{1}_m\mathbf{1}_n^T -A$ and denote by $B_i$ the $i$-th column of $B$. Notice that $B$ has i.i.d. $\mathbf{Bernoulli}(q)$ entries and $Bv=\mathbf{1}_m\mathbf{1}_n^Tv -Av=-Av$, so by applying the same argument as above to $B$, we can conclude that $A$ does not satisfy $\ell_2$-rNSP of order 2 with probability at least $\frac{1}{4}$. \end{itemize} \subsection{Randomized Sketches} Randomized sketching provides a method for approximating convex programs \cite{2015pilanci,yang2017randomized}. In essence, a randomized sketch reduces the dimension of the original optimization problem through random projections, which can be beneficial in terms of both computational time and memory storage. Following the problem formulation and ideas in \cite{2015pilanci}, consider a convex program of the form \begin{equation} \label{origprob} \min_{x\in\mathcal{C}} f(x):=\|Bx-y\|_2^2, \end{equation} where $B\in\mathbb{R}^{n\times d}$, $y\in\mathbb{R}^n$ and $\mathcal{C}\subset \mathbb{R}^d$ is some convex set. Let $A\in \mathbb{R}^{m\times n}$ be an isotropic and sub-Gaussian matrix and solve instead the convex program \begin{equation} \label{sketchprob} \min_{x\in\mathcal{C}} g(x):=\|A(Bx-y)\|_2^2. \end{equation} This is called the ``sketched problem''. It reduces the dimension from $n$ to $m$ and can be viewed as an approximation to the original problem \eqref{origprob}. Moreover, we say a solution $\hat{x}$ to the sketched problem \eqref{sketchprob} is $\delta$-optimal relative to the optimal solution $x^*$ of \eqref{origprob} if $$ f(\hat{x})\leq (1+\delta)^2f(x^*).
$$ Pilanci and Wainwright \cite{2015pilanci} gave a high probability guarantee for $\hat{x}$ being $\delta$-optimal when $m$ is sufficiently large. The following \Cref{theorem_delta_opt_guarantee} improves the dependence on $K$ in their guarantee from $K^4$ to $K^2\log K$. The proof of \Cref{theorem_delta_opt_guarantee} is also more concise thanks to the tools we have developed. \begin{theorem}[$\delta$-optimal guarantee] \label{theorem_delta_opt_guarantee} Let $A$ be an isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$. For any $\delta\in (0,1)$, if $$ m \geq c_0K^2\log K \frac{w^2(B\mathcal{T}\cap \S^{n-1})}{\delta^2}, $$ then a solution $\hat{x}$ to the sketched problem as given in \eqref{sketchprob} is $\delta$-optimal with probability at least $1-c_1e^{-c_2m\delta^2/(K^2\log K)}$. Here $c_0,c_1,c_2$ are absolute constants and $\mathcal{T}$ is the tangent cone of $\mathcal{C}$ at the optimum $x^*$, given by $$ \mathcal{T}:=\text{clconv }\{t(x-x^*): t\geq 0 \text{ and } x\in \mathcal{C}\} $$ where clconv denotes the closed convex hull. \end{theorem} We will use an argument similar to that of \cite{2015pilanci} to prove \Cref{theorem_delta_opt_guarantee}. First let us state a deterministic result which says that $\delta$-optimality can be obtained by controlling two quantities. \begin{lemma}[Lemma 1 \cite{2015pilanci}] \label{lemma_sketch_ratio} For any sketching matrix $A\in\mathbb{R}^{m\times n}$, let \begin{align*} Z_1&:=\inf_{v\in B\mathcal{T}\cap \S^{n-1}} \frac{1}{m}\|Av\|_2^2, \\ Z_2&:=\sup_{v\in B\mathcal{T}\cap \S^{n-1}} \left| \inp{u}{\left( \frac{1}{m}A^TA-I \right) v} \right|, \end{align*} where $\mathcal{T}$ is the tangent cone of $\mathcal{C}$ at $x^*$ and $u\in\S^{n-1}$ is an arbitrarily fixed vector. Then \begin{equation*} f(\hat{x})\leq \left( 1+2\frac{Z_2}{Z_1}\right)^2 f(x^*). \end{equation*} \end{lemma} Next we show a technical lemma that will be helpful when estimating $Z_1$ and $Z_2$.
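As a quick numerical illustration of $\delta$-optimality, the toy experiment below solves an unconstrained least-squares instance of \eqref{origprob} (so $\mathcal{C}=\mathbb{R}^d$) together with its sketched version \eqref{sketchprob} using a Gaussian sketch; all dimensions are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions (our own choices): a tall n x d least-squares problem.
n, d, m = 2000, 20, 200
B = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Original problem with C = R^d: x* minimizes f(x) = ||Bx - y||_2^2.
x_star, *_ = np.linalg.lstsq(B, y, rcond=None)

# Sketched problem: x_hat minimizes g(x) = ||A(Bx - y)||_2^2 for a Gaussian A.
A = rng.standard_normal((m, n))
x_hat, *_ = np.linalg.lstsq(A @ B, A @ y, rcond=None)

f = lambda x: np.linalg.norm(B @ x - y) ** 2
delta = np.sqrt(f(x_hat) / f(x_star)) - 1   # x_hat is delta-optimal for this delta
print(delta)
```

In this unconstrained case $B\mathcal{T}\cap\S^{n-1}$ is the intersection of the column space of $B$ with the sphere, whose Gaussian width is of order $\sqrt{d}$, so taking $m \gg d$ already makes the observed $\delta$ small.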
\begin{lemma} \label{lemma_sketch_lm1} Let $A$ be an isotropic and sub-Gaussian matrix with sub-Gaussian parameter $K$, and let $T\subset\mathbb{R}^n$ be a set with radius $\text{rad}(T)\leq 2$. Then there exist absolute constants $C$ and $c$ such that for any $\delta\in (0,1)$, \begin{equation*} \sup_{x\in T}\left| \frac{1}{m}\|Ax\|_2^2-\|x\|_2^2\right| \leq \delta \end{equation*} holds with probability at least $1-3e^{-cm\delta^2/(K^2\log K)}$ provided $m\geq CK^2\log Kw^2(T)/\delta^2$. \begin{proof} Denote $L:=K\sqrt{\log K}$. By \Cref{theorem_main} we have $$ \sup_{x\in T}\left| \frac{1}{\sqrt{m}}\|Ax\|_2-\|x\|_2\right| \leq \frac{C_0L}{\sqrt{m}}\left( w(T)+2u \right) $$ with probability at least $1-3e^{-u^2}$ for some absolute constant $C_0$.\\ Take $\displaystyle \delta_0=\frac{1}{5}\delta$ and $\displaystyle m \geq 9C_0^2\frac{L^2w^2(T)}{\delta_0^2}$, and choose $\displaystyle u= \frac{1}{3C_0}\frac{\sqrt{m}\delta_0}{L}$. It follows that the event $$ \sup_{x\in T}\left| \frac{1}{\sqrt{m}}\|Ax\|_2-\|x\|_2\right| \leq \frac{\delta_0}{3}+\frac{2\delta_0}{3}=\delta_0 $$ holds with probability at least $1-3e^{-m\delta_0^2/(9C_0^2L^2)}$. On this event, $$ \sup_{x\in T}\left| \frac{1}{m}\|Ax\|_2^2-\|x\|_2^2 \right| \leq (4+\delta_0)\delta_0 \leq 5\delta_0=\delta, $$ where we use the estimate $ \left| \frac{1}{\sqrt{m}}\|Ax\|_2+\|x\|_2\right| \leq 2\|x\|_2+\delta_0$ for $x\in T$. \end{proof} \end{lemma} \begin{proof}[\bf Proof of \Cref{theorem_delta_opt_guarantee}] We wish to control the ratio $Z_2/Z_1$ in view of \Cref{lemma_sketch_ratio}. Let $T:=B\mathcal{T}\cap \S^{n-1}$ and $Q:=\frac{1}{m}A^TA-I$. By \Cref{lemma_sketch_lm1}, if $m\geq CK^2\log Kw^2(T)/\delta^{2}$, then $$ \P\left( Z_1\geq 1-\frac{\delta}{2} \right) \geq 1-3e^{-cm\delta^2/(K^2\log K)}. $$
Since $$ 2\inp{u}{Qv}=\inp{u+v}{Q(u+v)}-\inp{u}{Qu}-\inp{v}{Qv}, $$ the triangle inequality gives $$ Z_2\leq \frac{1}{2}\sup_{x\in u+T}|\inp{x}{Qx}| + \frac{1}{2}\sup_{x\in \{u\}}|\inp{x}{Qx}| + \frac{1}{2}\sup_{x\in T}|\inp{x}{Qx}| =:Z_2^{(1)}+Z_2^{(2)}+Z_2^{(3)} $$ where $u+T:=\{u+v:v\in T\}$. Applying \Cref{lemma_sketch_lm1} to $Z_2^{(i)}\, (i=1,2,3)$ we get \begin{align*} \P\left( Z_2\leq \frac{\delta}{4} \right) &= 1- \P\left( Z_2 >\frac{\delta}{4}\right) \\ &\geq 1- \P\left( Z_2^{(1)}>\frac{\delta}{12}\right) - \P\left( Z_2^{(2)}>\frac{\delta}{12}\right) - \P\left( Z_2^{(3)}>\frac{\delta}{12} \right) \\ &\geq 1-9e^{-cm\delta^2/(K^2\log K)}, \end{align*} provided $m\geq CK^2\log Kw^2\delta^{-2}$ with $ w:=\max\{ w(u+T),\, w(\{u\}),\, w(T) \}. $\\ By the properties of the Gaussian width, we claim that $w=w(T)$. In fact, $$ w(\{u\}) =\mathbb{E} \sup_{x\in \{u\}}\inp{g}{x} = \mathbb{E} \inp{g}{u} = 0, $$ $$ w(u+T) = \mathbb{E} \sup_{v\in T}\inp{g}{u+v} = \mathbb{E} \left( \inp{g}{u} + \sup_{v\in T}\inp{g}{v} \right) = \mathbb{E} \inp{g}{u} + \mathbb{E}\sup_{v\in T}\inp{g}{v}=w(T). $$ Combining the bounds for $Z_1$ and $Z_2$ we have $$ 2\frac{Z_2}{Z_1}\leq 2\frac{\delta/4}{1-\delta/2}\leq \delta $$ with probability at least $1-12e^{-cm\delta^2/(K^2\log K)}$. This completes the proof. \end{proof} \subsection{Favorable Landscape for Blind Demodulation with Generative Priors} In this section, we give a concrete example, blind demodulation with generative priors, where the improvement on the sub-Gaussian parameter $K$ can be important. Blind demodulation aims to recover two signals $x_0, y_0 \in \mathbb{R}^l$ from the observation $z_0 = x_0 \circ y_0$, where $\circ$ denotes componentwise multiplication. Due to the inherent ambiguity in recovering the solutions from $z_0$, one usually assumes that the signals come with some structure.
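The scaling ambiguity is immediate to check numerically (the signal length and scaling factor below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
l, c = 50, 3.7
x0 = rng.standard_normal(l)
y0 = rng.standard_normal(l)

z0 = x0 * y0                    # observation: componentwise product
z_scaled = (c * x0) * (y0 / c)  # the rescaled pair produces the same observation
print(np.allclose(z0, z_scaled))
```

Since $(cx_0, \frac{1}{c}y_0)$ yields the same observation for every $c>0$, recovery is only possible up to this scaling.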
A traditional way to model this structure is through a sparsity prior with respect to a basis, such as a wavelet or Discrete Cosine Transform basis when the signals are images. On the other hand, with recent developments in deep learning, generative adversarial networks (GANs) have turned out to be very effective at generating realistic synthetic images, which naturally suggests that we may model certain types of image signals as outputs of a GAN. In inverse problems such as compressed sensing and phase retrieval, including this blind demodulation problem, practitioners have observed an order-of-magnitude improvement in sample (observation) complexity over the sparsity prior \cite{bora2017compressed,lucas2018using,hand2017global}. This alternative model is called the generative prior and is consequently becoming a promising new model for modern signal processing \cite{bora2017compressed,hand2017global,hand2018phase,hand2019global}. In Hand and Joshi \cite{hand2019global}, the authors provide a global landscape guarantee for the blind demodulation problem with generative priors, and they applied our Bernstein's inequality in their proof. With generative priors, the unknown signals $x_0,y_0$ are assumed to be in the ranges of two generative neural networks $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$ respectively.
More precisely, $\mathcal{G}^{(1)}: \mathbb{R}^n \rightarrow \mathbb{R}^l$ is a $d$-layer network, $\mathcal{G}^{(2)}: \mathbb{R}^p \rightarrow \mathbb{R}^l$ is an $s$-layer network, and they can be written as \begin{align*} \mathcal{G}^{(1)}(h) &= \text{relu}\left( W^{(1)}_d \dots \text{relu}\left( W^{(1)}_2 \text{relu}\left( W^{(1)}_1 h\right) \right) \dots \right), \\ \mathcal{G}^{(2)}(m) &= \text{relu}\left( W^{(2)}_s \dots \text{relu}\left( W^{(2)}_2 \text{relu}\left( W^{(2)}_1 m\right) \right) \dots \right), \end{align*} where $\text{relu}$ is the Rectified Linear Unit activation function given by $\text{relu}(x) = \max\{x, 0\}$ and $W^{(1)}_i, W^{(2)}_j$ for $i \in \{1, \dots, d\}$ and $j \in \{1, \dots, s\}$ are weight matrices. The weight matrices are normally obtained in the training process of the networks, but the empirical evidence in \cite{arora2015deep} suggests that they behave like ``random'' quantities. Based on this phenomenon, the authors of \cite{hand2019global} made the following additional assumptions on the networks $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$ to facilitate the analysis: \begin{itemize} \setlength\itemsep{0em} \item[A1.] The weight matrices are random Gaussian matrices. \item[A2.] The dimension of each layer increases at least logarithmically. \item[A3.] The last layer dimension $l$ satisfies, up to log factors, $l \gtrsim n^2 + p^2$. \end{itemize} The signals can then be recovered by finding their latent codes $h_0\in\mathbb{R}^n$ and $m_0\in\mathbb{R}^p$ such that $x_0 = \mathcal{G}^{(1)}(h_0)$ and $y_0 = \mathcal{G}^{(2)}(m_0)$. This leads to the following empirical risk minimization program: $$ \min_{h\in\mathbb{R}^n,m\in\mathbb{R}^p} f(h,m):=\frac{1}{2}\| \mathcal{G}^{(1)}(h_0)\circ \mathcal{G}^{(2)}(m_0) - \mathcal{G}^{(1)}(h)\circ \mathcal{G}^{(2)}(m)\|_2^2.
$$ Note that there is a scaling ambiguity in this problem, since it does not distinguish points on the curve $\{(ch,\frac{1}{c}m):c>0\}$ for any given $(h,m)$; thus one can only hope to find the solution curve $\{(ch_0,\frac{1}{c}m_0):c>0\}$. The authors of \cite{hand2019global} showed that under assumptions A1-A3, two conditions, called the Weight Distribution Condition (WDC) and the joint-WDC, are met. These conditions guarantee a favorable landscape for the objective function $f(h,m)$; namely, $f$ has a descent direction at all points outside of a small neighborhood of four curves containing the solution. One of the important ingredients in their proof is a set of concentration bounds for singular values of random matrices. When showing that the joint-WDC condition is satisfied via a concentration argument, they were able to improve the requirement in assumption A3 from, up to log factors, $l \gtrsim n^3 + p^3$ to $l \gtrsim n^2 + p^2$. Such improvement is made possible by our new Bernstein's inequality with refined sub-exponential parameter dependence. This $n^2 + p^2$ sample complexity matches the one in the previous recovery guarantees with a sparsity prior (in which case $n$ and $p$ denote the sparsity levels), and is potentially better, since the latent code dimension is oftentimes smaller than the sparsity level with respect to a particular basis. See Theorem 2, Theorem 5, Lemma 8, and Lemma 9 in \cite{hand2019global} for more details. \section{Conclusion} \label{sec_conclusion} In this article, we proved the optimal concentration bound for sub-Gaussian random matrices on sets.
Namely, with high probability, $$ \sup_{x\in T} \left|\frac{1}{\|B\|_F}\|BAx\|_2-\|x\|_2\right| \lesssim \frac{K\sqrt{\log K}}{\sqrt{\mathrm{sr}(B)}} (w(T)+\mathrm{rad}(T)), $$ where $B\in \mathbb{R}^{l\times m}$ is an arbitrary matrix, $A\in\mathbb{R}^{m\times n}$ is a (mean zero) isotropic and sub-Gaussian random matrix, $T\subset\mathbb{R}^n$ is the set, $K$ is the sub-Gaussian parameter of $A$, $\mathrm{sr}(B)$ is the stable rank of $B$, $w(T)$ is the Gaussian width of $T$ and $\mathrm{rad}(T):=\sup_{y\in T}\|y\|_2$. Compared to the previous work in \cite{liaw2017simple}, this result generalizes by allowing an arbitrary matrix $B$ while improving the dependence on the sub-Gaussian parameter from $K^2$ to the optimal $K\sqrt{\log K}$. Consequently, this can lead to a tighter concentration bound even in the cases where the sub-Gaussian matrix $BA$ has correlated rows. It is also worth noting that the dependence on $w(T)+\mathrm{rad}(T)$ is optimal in general as well. We also proved, under extra moment conditions, a new Bernstein type inequality and a new Hanson-Wright inequality. The extra conditions here are a bounded first absolute moment (e.g. $\mathbb{E} |Y_i|\leq 2$) for Bernstein's inequality and a bounded second moment (e.g. $\mathbb{E} X_i^2=1$) for the Hanson-Wright inequality. In many cases, these conditions can be easily met -- for example, they are implied by the isotropic condition of random variables or vectors. In general, both of our new inequalities give improved tail bounds in the sub-Gaussian regime, which is the regime of interest in many applications, as demonstrated in \Cref{sec_applications}. \section{Acknowledgements} Y.~Plan is partially supported by an NSERC Discovery Grant (22R23068), a PIMS CRG 33: High-Dimensional Data Analysis, and a Tier II Canada Research Chair in Data Science. {\"O}.~Y{\i}lmaz is partially supported by an NSERC Discovery Grant (22R82411) and PIMS CRG 33: High-Dimensional Data Analysis.
H.~Jeong is funded in part by the University of British Columbia Data Science Institute (UBC DSI) and by the Pacific Institute of Mathematical Sciences (PIMS). The authors of this paper would also like to thank Babhru Joshi for the helpful discussions and providing us with an important application in \Cref{sec_applications}.
\section{Introduction} \label{sec:intro} While non-perturbative instanton effects have been analyzed in great detail in field theory and can be evaluated by means of complete and clear algorithms (for reviews, see for instance Refs. \cite{Dorey:2002ik,Bianchi:2007ft}), the study of these effects in string theory is still at an early stage and, despite some remarkable progress in the last few years, further work is still needed to reach a similar degree of accuracy in their computation. This would be very important not only for including string corrections to the effects that have already been computed with field theoretical methods, but especially to derive new non-perturbative effects of purely stringy origin that could play a relevant role in the applications of string theory to phenomenology. Recently this possibility has been intensively investigated from several different points of view and has received considerable attention~\cite{Beasley:2005iu}--\cite{Blumenhagen:2007bn}. However, in order to learn how to deal with non-perturbative effects in string theory and gain good control over the results, it is also very important to reproduce, using string methods, the non-perturbative effects already known from field theory. To this aim, toroidal orbifolds of Type II string theory (for a review see Ref. \cite{Blumenhagen:2006ci}) are very useful since they provide a concrete framework in which one can perform explicit calculations of instanton effects. For example, they can be used to engineer ${\cal{N}}=2$ super Yang-Mills (SYM) theories and study the instanton induced prepotential, as discussed in detail in Ref. \cite{Billo:2006jm}. In a recent paper \cite{Billo:2007sw} we have extended this procedure by compactifying six dimensions on $(\mathcal{T}_2^{(1)}\times\mathcal{T}_2^{(2)})/\mathbb{Z}_2\, \times \mathcal{T}_2^{(3)}$ and by including the contribution of the mixed annuli diagrams, as advocated in Refs.
\cite{Blumenhagen:2006xt,Akerblom:2006hx,Akerblom:2007uc}. In particular we have shown that the non-holomorphic terms in these annulus amplitudes precisely reconstruct the appropriate K{\"{a}}hler metric factors that are needed to write the instanton correlators in terms of purely holomorphic variables. In this way the correct holomorphic structure of the instanton induced low energy effective action in the Coulomb branch of the $\mathcal{N}=2$ SYM theory has been obtained. In the present paper we apply this procedure to ${\cal{N}}=1$ SYM theories that we engineer by means of stacks of magnetized fractional D$9$ branes in a background given by the product of $\mathbb{R}^{1,3}$ times a six-dimensional orbifold $(\mathcal{T}_2^{(1)}\times\mathcal{T}_2^{(2)}\times\mathcal{T}_2^{(3)})/(\mathbb{Z}_2\times \mathbb{Z}_2)$. A single stack of fractional D9 branes, that we call ``color'' branes, supports on its world-volume a pure ${\cal{N}}=1$ gauge theory. Matter chiral multiplets can be obtained by introducing a second stack of magnetized fractional D$9$ branes, called ``flavor'' branes, that belong in general to a different irreducible representation of the orbifold group, and by considering the massless open strings having one endpoint on the color branes and the other on the flavor branes. In this framework one can also engineer ${\cal{N}}=1$ super QCD by suitably introducing a third stack of magnetized fractional D$9$ branes, in such a way that the massless open strings connecting the color branes and the two types of flavor branes correspond respectively to the right and left-handed quarks and their super-partners, and hence give rise to a vector-like theory as described in Section \ref{sec:model}. To study instanton effects in this set-up one has to add a stack of fractional Euclidean D5 branes (E5 branes for short) that completely wrap the internal manifold and hence describe point-like configurations from the four-dimensional point of view.
If the wrapping numbers and magnetization of these E5 branes are the same as those of the color D9 branes, one has a stringy realization of ordinary gauge theory instantons% \footnote{In fact, these D9/E5 systems are essentially a T-dual version of the D3/D(--1) systems which, in un-compactified set-ups, are well-known to realize at the string theory level the gauge instantons and their moduli, described \`a la ADHM \cite{Witten:1995im}-\cite{Billo:2002hm}.}. If instead their wrapping numbers and magnetization are different from those of the color branes, one obtains ``exotic'' instanton configurations of purely stringy nature. In this paper we will not explicitly consider this possibility, even if our methods could be used also in this case. On the contrary, following the procedure outlined in Refs. \cite{Billo:2002hm,Billo:2006jm}, we compute using string methods the superpotential induced by gauge instantons in $\mathcal{N}=1$ SYM theories. In doing so, the contribution of mixed annulus diagrams with a boundary attached to the E5 branes, which are of the same order in the string coupling constant as the disk diagrams which account for the moduli measure, has to be taken into account. As noticed in the literature \cite{Blumenhagen:2006xt,Abel:2006yk,Akerblom:2006hx,Akerblom:2007uc}, in supersymmetric situations these mixed annulus amplitudes are related in a precise way to the 1-loop corrections to the gauge coupling constant of the color gauge theory, and the physical origin of this identification has been discussed in Ref. \cite{Billo:2007sw}. This relation can then be used to compare the explicit expression of the mixed annulus amplitudes to the general formula \cite{Dixon:1990pc,Kaplunovsky:1994fg,Louis:1996ya} that expresses the 1-loop corrections to the gauge coupling computed in string theory in terms of the fields and geometrical quantities that appear in the effective supergravity theory, such as the K\"ahler metrics for the various multiplets.
Exploiting this fact, we explicitly compute the mixed annulus diagrams in our orbifold models and extract from them information on the K{\"{a}}hler metric for the matter multiplets. We then perform two checks on our results. First, we consider the 1-instanton induced superpotential in the set-up corresponding to $\mathcal{N}=1$ SQCD. In Refs. \cite{Akerblom:2006hx,Argurio:2007vq} it has already been shown that the stringy instanton calculus in this case reproduces the ADS/TVY superpotential \cite{Affleck:1983mk} (see also Ref. \cite{Taylor:1982bp}). Here we discuss in detail the r\^ole of the mixed annuli contributions and show that they are crucial in making this superpotential holomorphic when expressed in terms of the variables appropriate to the low-energy supergravity description. Second, we exploit the fact that the K\"ahler metrics of the matter multiplets enter crucially in the relation between the holomorphic superpotential couplings in the effective Lagrangian and the physical Yukawa couplings for the canonically normalized fields. We consider the expression of the latter provided in Ref. \cite{Cremades:2004wa} for the field-theory limit of magnetized brane models, and show that, after transforming it to the supergravity basis, it becomes purely holomorphic. The paper is organized as follows. In Section \ref{sec:model} we describe the set-up we utilize for realizing ${\cal{N}}=1$ supersymmetric gauge theories. Section \ref{sec:inst_branes} is devoted to the description of the instanton calculus in this set-up. In Section \ref{subsec:mix_ann} we compute the mixed annulus diagrams while in Section \ref{sec:relation} we discuss the relation with the K{\"{a}}hler metric for the matter fields; furthermore we check the holomorphicity of the 1-instanton induced superpotential. 
In the last section we show that our expressions yield holomorphic cubic superpotential couplings of the matter multiplets if we start from the physical Yukawa couplings in magnetized brane models computed in Ref. \cite{Cremades:2004wa}. Finally, many technical details are given in the Appendix. \section{Local $\mathcal{N}=1$ brane models with chiral matter} \label{sec:model} A way to realize an $\mathcal{N}=1$ SYM theory is to place a stack of fractional D$9$ branes in a background given by the product of $\mathbb{R}^{1,3}$ times a six-dimensional orbifold \begin{equation} \frac{\mathcal{T}_2^{(1)}\times\mathcal{T}_2^{(2)}\times\mathcal{T}_2^{(3)}}{\mathbb{Z}_2\times \mathbb{Z}_2}~. \label{orbifold} \end{equation} For each torus $\mathcal{T}_2^{(i)}$, the string frame metric and the $B$-field% \footnote{Without loss of generality, in the following we will actually set the $B$-field to zero.} are parameterized by the K\"ahler and complex structure moduli, $T^{(i)}=T_1^{(i)}+\mathrm{i} \,T_2^{(i)}$ and $U^{(i)}=U_1^{(i)}+ \mathrm{i} \,U_2^{(i)}$ respectively. For our precise conventions we refer to Appendix \ref{appsub:geom}. The ten-dimensional string coordinates $X^M$ and $\psi^M$ are split as \begin{equation} X^M \to (X^\mu, Z^i) ~~~~{\rm and}~~~~ \psi^M \to (\psi^\mu, \Psi^i)~, \label{coordinates} \end{equation} where $\mu=0,1,2,3$ and the complex coordinates $Z^i$ and $\Psi^i$, defined in Eq. (\ref{zipsii}), are orthonormal in the metric of the $i$-th torus. The (anti-chiral) spin-fields $S^{\dot{\mathcal{A}}}$ of the RNS formalism in ten dimensions also factorize into a product of four-dimensional and internal spin-fields, and the precise splitting is given in Eq. (\ref{spin}). The $\mathbb{Z}_2\times \mathbb{Z}_2$ orbifold group in (\ref{orbifold}) contains three non-trivial elements $h_i$ ($i=1,2,3$). The element $h_i$ leaves the $i$-th torus $\mathcal{T}_2^{(i)}$ invariant while acting as a reflection on the remaining two tori.
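In terms of the complex coordinates introduced in (\ref{coordinates}), this action reads explicitly
\begin{equation}
h_1: (Z^1,Z^2,Z^3) \to (Z^1,-Z^2,-Z^3)~,\quad
h_2: (Z^1,Z^2,Z^3) \to (-Z^1,Z^2,-Z^3)~,\quad
h_3: (Z^1,Z^2,Z^3) \to (-Z^1,-Z^2,Z^3)~,
\end{equation}
so that $h_i\,h_j = h_k$ for $i$, $j$, $k$ all different, and $h_i^2 = 1$.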
The above geometry can also be described in the so-called supergravity basis using the complex moduli $s$, $t^{(i)}$ and $u^{(i)}$, whose relation with the previously introduced quantities in the string basis is \cite{Lust:2004cx,Blumenhagen:2006ci} \begin{equation} \begin{aligned} &\mathrm{Im}(s) \equiv s_2 = \frac{1}{4\pi}\,\mathrm{e}^{-\phi_{10}}\, T_2^{(1)}T_2^{(2)}T_2^{(3)}~, \\ &\mathrm{Im}(t^{(i)}) \equiv t_2^{(i)} = \mathrm{e}^{-\phi_{10}} T_2^{(i)}~, \\ & u^{(i)} = u_1^{(i)} + \mathrm{i}\, u_2^{(i)} = U^{(i)}~, \end{aligned} \label{stu} \end{equation} where $\phi_{10}$ is the ten-dimensional dilaton. The real parts of $s$ and $t^{(i)}$ are related to suitable RR potentials. In terms of these variables, the $\mathcal{N}=1$ bulk K\"ahler potential is given by \cite{Antoniadis:1996vw} \begin{equation} K = -\log (s_2) -\sum_{i=1}^3 \log(t_2^{(i)}) - \sum_{i=1}^3 \log(u_2^{(i)})~. \label{kpot} \end{equation} \paragraph{Colored and flavored branes} In this orbifold background we place a stack of $N_a$ fractional D$9$ branes (hereinafter called colored branes and labeled by an index $a$) which for definiteness are taken to transform in the trivial irreducible representation $R_0$ of the orbifold group. The massless excitations of the open strings attached to these branes fill the $\mathcal{N}=1$ vector multiplet in the adjoint representation of $\mathrm{U}(N_a)$. 
The disk interactions of the corresponding vertex operators reproduce, in the field theory limit $\alpha'\to 0$, the $\mathcal{N}=1$ SYM action with $\mathrm{U}(N_a)$ gauge group, which in the Euclidean signature appropriate to discuss instanton effects, reads \begin{equation} S_{\rm SYM}=\frac{1}{g_a^2}\, \int d^4x ~{\rm Tr}\,\Big\{\frac{1}{2}\,F_{\mu\nu}^2 -2\,\bar\Lambda_{\dot\alpha}\bar D\!\!\!\!/^{\,\dot\alpha \beta} \Lambda_\beta \Big\}~, \label{n1} \end{equation} where the tree-level Yang-Mills coupling constant $g_a$ is given by \begin{equation} \frac{1}{g_a^2} = \frac{1}{4\pi}\,\mathrm{e}^{-\phi_{10}}\,T_2^{(1)}T_2^{(2)}T_2^{(3)} = s_2~. \label{gym} \end{equation} Richer models can be found if we introduce additional stacks of fractional D9-branes, distinguished with a subscript $b$, that belong to various irreducible representations of the orbifold group and can be magnetized. In general, we will have $N_b$ branes of type $b$, which we will call flavor branes, and $n_b^{(i)}$ will be their wrapping number around the $i$-th torus. These branes admit a constant magnetic field on the $i$-th torus \begin{equation} F_b^{(i)}= f_b^{(i)}\,dX^{2i+2}\wedge dX^{2i+3} = \mathrm{i}\,\frac{f_b^{(i)}}{T_2^{(i)}}\, dZ^i\wedge d{\bar Z}^i ~. \label{fi} \end{equation} The generalized Dirac quantization condition requires that the first Chern class $c_1(F_b^{(i)})$ be an integer, which, in our conventions, implies that \begin{equation} 2\pi\alpha' f_b^{(i)} = \frac{m_b^{(i)}}{n_b^{(i)}} \label{nm} \end{equation} with $m_b^{(i)}\in \mathbb{Z}$. 
In terms of the angular parameters $\nu_b^{(i)}$, defined by \begin{equation} 2\pi\alpha' \frac{f_b^{(i)}}{T_2^{(i)}} = \tan \pi\nu_b^{(i)}~~~~{\rm with}~~~~0\leq \nu_b^{(i)} < 1~, \label{nui} \end{equation} it is possible to show that bulk $\mathcal{N}=1$ supersymmetry is preserved if% \footnote{Other possibilities are $-\nu_b^{(1)}-\nu_b^{(2)}+\nu_b^{(3)}=0$; $-\nu_b^{(1)}+\nu_b^{(2)}-\nu_b^{(3)}=0$; $\nu_b^{(1)}+\nu_b^{(2)}+\nu_b^{(3)}=2$. They are all related to the condition (\ref{nu123}) by obvious changes.} \begin{equation} \nu_b^{(1)}-\nu_b^{(2)}-\nu_b^{(3)}=0~. \label{nu123} \end{equation} The presence of the magnetic fluxes implies that the open strings stretching between two different types of branes ({\it e.g.} the D$9_b$/D$9_a$ strings) are twisted. This means that the internal string coordinates $Z^i$ and $\Psi^i$ have the following twisted monodromy properties \begin{equation} Z^i\big({\rm e}^{2\pi{\rm i}} z\big)= \,{\rm e}^{2\pi{\rm i}\nu^{(i)}_{b}}\,Z^i(z)~~~{\mbox{and}}~~~ \Psi^i\big({\rm e}^{2\pi{\rm i}} z\big)= \eta\,{\rm e}^{2\pi{\rm i}\nu^{(i)}_{b}}\,\Psi^i(z)~, \label{monodromy} \end{equation} where $\eta=+1$ for the NS sector and $\eta=-1$ for the R sector. If the color branes are also magnetized, we have to replace $\nu^{(i)}_{b}$ in (\ref{nu123}) and (\ref{monodromy}) with $\nu^{(i)}_{ba}=\nu^{(i)}_{b}-\nu^{(i)}_{a}$, which describes the relative magnetization of the two stacks of branes. When no confusion is possible, we will denote the twist angles simply by $\nu^{(i)}$. As is well-known, in a toroidal orbifold compactification with wrapped branes there are unphysical closed string tadpoles that must be canceled to have a globally consistent model. Usually this cancellation is achieved by introducing an orientifold projection and suitable orientifold planes.
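To illustrate the supersymmetry condition (\ref{nu123}) with a purely illustrative choice of numbers, suppose the magnetizations and K\"ahler moduli are such that
\begin{equation}
2\pi\alpha'\,\frac{f_b^{(2)}}{T_2^{(2)}} = 2\pi\alpha'\,\frac{f_b^{(3)}}{T_2^{(3)}} = \frac{1}{\sqrt{3}}~,
\end{equation}
{\it i.e.} $\nu_b^{(2)}=\nu_b^{(3)}=1/6$. Then (\ref{nu123}) requires $\nu_b^{(1)}=1/3$, that is $2\pi\alpha' f_b^{(1)}/T_2^{(1)}=\tan(\pi/3)=\sqrt{3}$. Since the fluxes $2\pi\alpha' f_b^{(i)}$ are quantized according to (\ref{nm}), this is in fact a condition on the K\"ahler moduli $T_2^{(i)}$: supersymmetry is preserved only on a particular locus in the K\"ahler moduli space.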
As in other cases treated in the literature, in this paper we take a ``local'' point of view and assume that the brane systems we consider can be made fully consistent with an orientifold projection. \paragraph{$\mathcal{N}=1$ SQCD with magnetized branes} In the following, we will be mostly interested in studying instanton effects in $\mathcal{N}=1$ SQCD with $N_F$ flavors. In our orbifold background we can realize this model by taking two stacks of flavored fractional D9 branes, denoted by $b$ and $c$ respectively, both belonging to a representation of the orbifold group different from that of the color branes; see Figure \ref{fig:9a9b9c} for a pictorial representation of the system we consider. For definiteness, we take the $R_1$ representation as defined in Appendix \ref{appsub:geom}. \begin{figure} \begin{center} \begin{picture}(0,0)% \includegraphics{9a9b9c.eps}% \end{picture}% \setlength{\unitlength}{2171sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(6175,4386)(276,-3844) \put(1801,-136){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_b$}}}} \put(5551,-286){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_c$}}}} \put(3151,-2011){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q_{ba}\equiv q$}}}} \put(5251,-2311){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q_{ac}\equiv \tilde{q}$}}}} \put(4201,-3736){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(1276,-961){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$R_1$}}}} \put(6451,-1111){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$R_1$}}}}
\put(5851,-3736){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$R_0$}}}} \end{picture}% \end{center} \caption{Schematization of the brane system we consider, and of its spectrum of chiral multiplets; see the text for more details.} \label{fig:9a9b9c} \end{figure} If the twist angles satisfy the $\mathcal{N}=1$ supersymmetry condition $\nu_{ba}^{(1)}-\nu_{ba}^{(2)}-\nu_{ba}^{(3)}=0$, the massless states of the D$9_b$/D$9_a$ strings fill up a chiral multiplet $q_{ba}\equiv q$, which transforms in the anti-fundamental representation $\bar{N}_a$ of the color group and appears with a flavor degeneracy \begin{equation} \label{Nab} N_b |I_{ab}|~, \end{equation} where $I_{ab}$ is the number of Landau levels for the $(a,b)$ ``intersection'', namely \begin{equation} I_{ab} = \prod_{i=1}^3\big(m_a^{(i)}n_b^{(i)}- m_b^{(i)}n_a^{(i)}\big)=-I_{ba}~. \label{iab} \end{equation} The complex scalar, denoted with an abuse of notation by the same letter $q$ used for the whole multiplet, arises from the NS sector and is described by the vertex operator (\ref{vertn1scal}). Its supersymmetric partner is a chiral fermion $\chi_\alpha$ described by the vertex operator (\ref{vertn1ferm}) of the R sector, which is connected to the scalar vertex by the $\mathcal{N}=1$ supersymmetry generated by the open string supercharges preserved by the $\mathbb{Z}_2\times\mathbb{Z}_2$ orbifold. In an analogous way, we can analyze the open strings stretching between the color branes and the flavor branes of type $c$. If the twist angles are such that $\nu_{ac}^{(1)}-\nu_{ac}^{(2)}-\nu_{ac}^{(3)}=0$, then the massless states of the D$9_a$/D$9_c$ strings (notice the orientation!) 
fill up a chiral multiplet $q_{ac}\equiv \tilde q$ that transforms in the fundamental representation ${N}_a$ of the color group and appears with a flavor degeneracy \begin{equation} \label{Nac} N_c |I_{ac}|~, \end{equation} where $I_{ac}$ is the number of Landau levels for the $(a,c)$ ``intersection''. The bosonic and fermionic components of the multiplet $\tilde q$ are described, respectively, by the vertex operators (\ref{vertn1scalt}) and (\ref{vertn1fermt}) which are also related to each other by the $\mathcal{N}=1$ supersymmetry preserved by the orbifold. This set-up provides a realization of $\mathcal{N}=1$ SQCD if we arrange the branes in such a way that the flavor degeneracies (\ref{Nab}) and (\ref{Nac}) are equal: \begin{equation} N_b |I_{ab}|=N_c |I_{ac}|\equiv N_F~. \label{nf} \end{equation} In this way we engineer the same number $N_F$ of fundamental and anti-fundamental chiral multiplets, which will be denoted by $q_f$ and ${\tilde q}^f$ with $f=1,\ldots,N_F$. The field-theory limit of the disk amplitudes involving the fields of the chiral multiplets and those of the vector multiplet yields the $\mathcal{N}=1$ SQCD action; for instance, the kinetic term of the scalars arises in the form \begin{equation} \int d^4x ~\sum_{f=1}^{N_F}\Big\{D_\mu q^{\dagger f} \,D^\mu {q}_f + D_\mu {\tilde q}^f\,D^\mu \tilde{q}^\dagger_f\Big\}~, \label{kinq} \end{equation} where we have explicitly indicated the sum over the flavor indices and suppressed the color indices. In the supergravity basis it is customary to use fields with a different normalization. The kinetic term for the scalars of the chiral multiplet is written as \begin{equation} \int d^4x ~\sum_{f=1}^{N_F} \Big\{ K_Q\,D_\mu Q^{\dagger f} \,D^\mu {Q}_f + K_{\tilde Q}\, D_\mu {\tilde Q}^f\,D^\mu {\tilde Q}^\dagger_f \Big\}~, \label{kinQ} \end{equation} where $K_Q$ and $K_{\tilde Q}$ are the K\"ahler metrics. 
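Before discussing the field normalizations further, let us illustrate the flavor counting (\ref{Nab})--(\ref{nf}) with a concrete, purely illustrative choice of numbers (we do not check here whether this choice is also compatible with the supersymmetry conditions). Take unmagnetized color branes, $m_a^{(i)}=0$ and $n_a^{(i)}=1$, so that (\ref{iab}) reduces to $I_{ab}=-\prod_i m_b^{(i)}$. Choosing $N_b=2$ flavor branes with $m_b^{(i)}=(1,1,-1)$ and $N_c=1$ flavor branes with $m_c^{(i)}=(1,-2,1)$, all with unit wrapping numbers, one finds
\begin{equation}
|I_{ab}| = \Big|\prod_{i=1}^3\big(-m_b^{(i)}\big)\Big| = 1~,\qquad
|I_{ac}| = \Big|\prod_{i=1}^3\big(-m_c^{(i)}\big)\Big| = 2~,
\end{equation}
so that $N_b|I_{ab}|=N_c|I_{ac}|=N_F=2$, as required by (\ref{nf}).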
Upon comparison with (\ref{kinq}), we see that the relation between the fields $q$ and $\tilde q$ appearing in the string vertex operators and the fields $Q$ and $\tilde Q$ of the supergravity basis is \begin{equation} q = \sqrt{K_{Q}^{}}\,Q\quad,\quad \tilde q = \sqrt{K_{\tilde Q}}\,\tilde Q~. \label{qQ} \end{equation} Actually, the rescalings (\ref{qQ}) apply not only to the scalar components, but to the entire chiral multiplets. \section{Instantonic brane effects} \label{sec:inst_branes} In this stringy set-up non-perturbative instantonic effects can be included by adding fractional Euclidean D5 branes (or E5 branes for short) that completely wrap the internal manifold. We choose these branes to be identical to the color D$9_a$ branes in the internal directions ({\it i.e.} they transform in the same representation of the orbifold group and carry the same magnetization, if any), while they are point-like in the space-time directions. Thus we call them E$5_a$, and they provide the stringy representation of ordinary instantons for the gauge theory on the D$9_a$ branes. Notice, however, that with respect to the gauge theory living on a different stack of D$9$ branes (like the branes D$9_b$ or D$9_c$), the E$5_a$ represent ``exotic'' instantons, whose properties are different from those of the ordinary gauge theory instantons. Recently, these ``exotic'' configurations have been investigated from various points of view~\cite{Beasley:2005iu}--\cite{Blumenhagen:2007bn}. Our aim is to use the relation between the non-holomorphic corrections appearing in the string computation of instantonic effects and the K\"ahler metrics of the chiral multiplets in the supergravity basis to gain information on the latter. To elucidate the physical meaning of these corrections, we will examine in particular the one-instanton induced ADS/TVY superpotential \cite{Affleck:1983mk} (see also Ref.
\cite{Taylor:1982bp}), present in the case $N_F = N_a - 1$, whose stringy derivation has been recently reconsidered in \cite{Akerblom:2006hx,Argurio:2007vq}. To proceed, let us first review how the instanton contributions to the superpotential arise in our specific set-up. \subsection{The instanton moduli} \label{subsec:inst_moduli} In the presence of the E$5_a$ branes we have new types of open strings: the E$5_a$/E$5_a$ strings (neutral sector), the D$9_a$/E$5_a$ or E$5_a$/D$9_a$ strings (charged sector) and the D$9_b$/E$5_a$ or E$5_a$/D$9_c$ strings (flavored sectors). The states of such strings do not carry any space-time momentum and represent moduli rather than dynamical fields in space-time. The spectrum of moduli is summarized in Table \ref{tab:moduli}, and the corresponding vertex operators are listed in Appendix \ref{appsub:vert}. Let us notice that the states of these strings can carry (discretized) momentum along the compact directions when they are untwisted, {\it i.e.} when they belong to the neutral or charged sectors; such Kaluza-Klein copies of the moduli represent a genuine string feature. \begin{table} \begin{center} \begin{tabular}{cc|cccc} \hline\hline \multicolumn{2}{c}{Sector} & ADHM & Meaning & Chan-Paton & Dimension\\ \hline $\phantom{\vdots}5_a$/$5_a$ & NS & $a_\mu$ & centers & adj. $\mbox{U}(k)$ & (length)\\ & & $D_c$ & Lagrange mult. & $\vdots$ & (length)$^{-2}$\\ & R & ${M}^{\alpha}$ & partners & $\vdots$ & (length)$^{\frac12}$\\ & & $\lambda_{\dot\alpha}$ & Lagrange mult.
& $\vdots$ & (length)$^{-\frac32}$ \\ \hline $\phantom{\vdots}9_a/5_a$ & NS & ${w}_{\dot\alpha}$ & sizes & $N_a \times \overline{k}$ & (length)\\ $5_a/9_a$ & & ${\bar w}_{\dot\alpha}$ & $\vdots$ & $k\times \overline{N}_a$ & $\vdots$\\ $9_a/5_a$ & R & ${\mu}$ & partners & $N_a \times \overline{k}$ & (length)$^{\frac12}$\\ $5_a/9_a$ & & ${\bar \mu}$ & $\vdots$ &$k\times \overline{N}_a$ & $\vdots$\\ \hline $\phantom{\vdots}9_b/5_a$ & R & ${\mu}^\prime$ & flavored & ${N}_F\times \overline{k}$ & (length)$^{\frac12}$\\ $5_a/9_c$ & & ${\tilde \mu}^\prime$ & $\vdots$ & ${k} \times\overline{N}_F $ & $\vdots$\\ \hline\hline \end{tabular} \end{center} \caption{The spectrum of moduli from the open strings with at least one boundary attached to the instantonic E$5_a$ branes. See the text for more details and comments, and Appendix \ref{appsub:vert} for the expressions of the corresponding emission vertices.} \label{tab:moduli} \end{table} Let us also recall that, in order to yield non-trivial interactions when $\alpha'\to 0$ \cite{Billo:2002hm}, the emission vertices of some of the moduli, given in Appendix \ref{appsub:vert}, have to be rescaled with factors of the dimensionful coupling constant on the E$5_a$, namely $g_{5_a} = g_a/(4\pi^2\alpha')$, with $g_a$ given in (\ref{gym}). As a consequence, some of the moduli acquire unconventional scaling dimensions which, however, are the right ones for their interpretation as parameters of an instanton solution \cite{Dorey:2002ik,Billo:2002hm}. The neutral moduli which survive the orbifold projection are the four physical bosonic excitations $a_\mu$ from the NS sector, related to the positions of the (multi-)centers of the instanton, and three auxiliary excitations $D_c$ ($c=1,2,3$). In the R sector, we find two chiral fermionic zero-modes $M^{\alpha}$, and two anti-chiral ones $\lambda_{\dot\alpha}$. The ${M}^{\alpha}$ are the fermionic partners of the instanton centers. 
All of these moduli are $k\times k$ matrices and transform in the adjoint representation of $\mathrm{U}(k)$. If we write the $k\times k$ matrices ${a}^\mu$ and ${M}^{\alpha}$ as \begin{equation} {a}^\mu = x_0^\mu\,\mbox{1\!\negmedspace1}_{k\times k} + y^\mu_c\,T^c\quad,\quad {M}^{\alpha}=\theta^{\alpha}\,\mbox{1\!\negmedspace1}_{k \times k } + {\zeta}^{\alpha}_c\,T^c~, \label{xtheta} \end{equation} where $T^c$ are the generators of $\mathrm{SU}(k)$, then the instanton center of mass, $x_0^\mu$, and its fermionic partners, $\theta^{\alpha }$, can be identified respectively with the bosonic and fermionic coordinates of the $\mathcal{N}=1$ superspace. The charged instantonic sector contains two physical bosonic moduli $w_{\dot\alpha}$ in the NS sector, with dimension of (length), related to the size and orientation in color space of the instanton, and a fermionic modulus $\mu$ in the R sector. These moduli carry a fundamental $\mathrm{U}(k)$ index and a color one. \begin{figure} \begin{center} \begin{picture}(0,0)% \includegraphics{5a9b9c.eps}% \end{picture}% \setlength{\unitlength}{2171sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(6474,4401)(226,-3784) \put(3661,-3676){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$5_a$}}}} \put(5176,-286){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_c$}}}} \put(4201,-2011){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\tilde{\mu}^\prime$}}}} \put(2551,-1861){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\mu^\prime$}}}} \put(1276,-136){\makebox(0,0)[rb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_b$}}}} \put(4651,-3661){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}
{\familydefault}{\mddefault}{\updefault}$R_0$}}}} \put(5701,-1111){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$R_1$}}}} \put(226,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$R_1$}}}} \end{picture}% \end{center} \caption{The flavored moduli of instantonic E$5_a$ branes in the presence of D$9_b$ and D$9_c$ branes; see the text for more details.} \label{fig:5a9b9c} \end{figure} In our realization of $\mathcal{N}=1$ SQCD there are two flavored instantonic sectors corresponding to the open strings that stretch between the E$5_a$ branes and the flavor branes of type $b$ or $c$, depicted in Fig. \ref{fig:5a9b9c}. In both cases the four non-compact directions have mixed Neumann-Dirichlet boundary conditions while all the internal complex coordinates are twisted. As a consequence, there are no bosonic physical zero-modes in the NS sector and the only physical excitations are fermionic ones from the R sector. A detailed analysis of the twisted conformal field theory shows that there are fermionic moduli $\mu'_f$ in the D$9_b$/E$5_a$ strings and fermionic moduli ${\tilde \mu}'{}^f$ in the E$5_a$/D$9_c$ strings. They are described respectively by the vertex operators (\ref{vertmup}) and (\ref{vertmupt}). On the other hand, no physical states survive the GSO projection in the E$5_a$/D$9_b$ and D$9_c$/E$5_a$ sectors. The fermionic moduli $\mu'_f$ and ${\tilde \mu}'{}^f$ are the counterparts of the chiral multiplets $q_f$ and ${\tilde q}^f$ respectively, when the color D$9_a$ branes are replaced by the instantonic E$5_a$ branes. The physical moduli we have listed above, collectively denoted by $\mathcal{M}_k$, are in one-to-one correspondence with the ADHM moduli of $\mathcal{N}=1$ gauge instantons (for a more detailed discussion see, for instance, \cite{Dorey:2002ik} and references therein).
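As a consistency check, the dimensions listed in Table \ref{tab:moduli} determine the overall scaling dimension of the integration measure $d\mathcal{M}_k$. Recalling that a bosonic differential carries the same dimension as the corresponding modulus, while a fermionic one carries the opposite dimension, one finds (in units of $\sqrt{\alpha'}$)
\begin{equation}
\big[d\mathcal{M}_k\big] =
\underbrace{4k^2 - 6k^2 + 4kN_a}_{a,\;D,\;w,\;\bar w}
\;-\;\underbrace{\big(k^2 - 3k^2 + kN_a + kN_F\big)}_{M,\;\lambda,\;\mu,\;\bar\mu,\;\mu',\;\tilde\mu'}
= \big(3N_a - N_F\big)k~,
\end{equation}
where, for instance, the $4k^2$ real components of $a_\mu$ contribute $4k^2\times 1$ and the $2k^2$ components of $\lambda_{\dot\alpha}$ contribute $2k^2\times(-\frac32)$. This counting is at the origin of the power of $\sqrt{\alpha'}$ in the normalization factor (\ref{ck}) introduced below.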
In all instantonic sectors we can construct many other open string states that carry a discretized momentum along the compact directions and/or have some bosonic or fermionic string oscillators. These ``massive'' states are not physical, {\it i.e.} they cannot be described by vertex operators of conformal dimension one; they can, however, circulate in open string loop diagrams. \subsection{The instanton induced superpotential} \label{subsec:inst_sup} In the sector with instanton number $k$, the effective action for the gauge/matter fields is obtained by the ``functional'' integral over the instanton moduli of the exponential of all diagrams with at least part of their boundary on the E$5_a$ branes, possibly with insertions of moduli and gauge/matter fields \cite{Polchinski:1994fq,Green:1997tv,Green:2000ke,Billo:2002hm, Blumenhagen:2006xt,Billo:2007sw}. In the semi-classical approximation, only disk diagrams and annuli (the latter with no insertions) are retained. Focusing on the dependence on the scalar fields of the chiral multiplets in the Higgs branch, we have \begin{equation} S_{k}= {\cal C}_k ~\mathrm{e}^{-\frac{8 \pi^2}{g_a^2}\,k}~ \mathrm{e}^{\mathcal{A}^\prime_{5_a}} \int d{\mathcal M}_{k}~ \mathrm{e}^{-S_{\rm mod}(q,\tilde q;{\cal M}_{k})}~. \label{Z1} \end{equation} Let us now analyze the various terms in this expression. ${\cal C}_k$ is a normalization factor which compensates for the dimensions of the integration measure $d{\cal M}_k$, and may contain numerical constants and powers of the coupling $g_a$. Its dimensionality is determined by counting the dimensions (measured in units of $\sqrt{\alpha'}$) of the various moduli ${\mathcal M}_{k}$ as given in the previous subsection, and the result is, up to overall numerical constants, \begin{equation} {\mathcal C}_k = \big({\sqrt {\alpha'}}\big)^{-(3N_a-N_F)k}\, (g_a)^{-2N_ak}~.
\label{ck} \end{equation} Notice the appearance of the one-loop coefficient $b_1=(3N_a-N_F)$ of the $\beta$-function of the $\mathcal{N}=1$ SQCD with $N_F$ flavors. The factor of $(g_a)^{-2N_ak}$ in (\ref{ck}) has been inserted following the discussion of Ref. \cite{Dorey:2002ik}, but in principle it should also have a stringy interpretation, probably as a left-over of the cancellation between the bosonic and fermionic fluctuation determinants when the $\mathcal{N}=1$ gauge action is normalized as in (\ref{n1}). As explained in Refs.~\cite{Polchinski:1994fq,Billo:2002hm}, the disk diagrams with no insertions account for the exponential of (minus) the classical instanton action $8 \pi^2 k/g_a^2$, where $g_a$ is interpreted as the Yang-Mills coupling constant at the string scale. This explains the second factor in (\ref{Z1}). The third factor contains $\mathcal{A}^\prime_{5_a}$ which accounts for the open string annuli diagrams with at least one boundary on the E$5_a$ branes and no insertions \cite{Blumenhagen:2006xt,Abel:2006yk,Akerblom:2006hx,Billo:2007sw}. Since the functional integration over the ADHM moduli ${\cal M}_k$ is explicitly performed in (\ref{Z1}), to avoid double counting, only the contribution of the ``massive'' string excitations has to be taken into account in these annuli: this is the reason for the $'$ notation, which indicates that only the ``massive'' instantonic string excitations circulate in the loop. Finally, in the integrand of (\ref{Z1}) we find the moduli action $S_{\rm mod}(q,\tilde q;{\cal M}_{k})$. This can be computed following the procedure explained in Ref. \cite{Billo:2002hm} from all disk scattering amplitudes involving the ADHM moduli and the scalar fields $q$ and $\tilde q$ in the limit $\alpha'\to 0$ (with $g_a$ fixed).
The result is \begin{equation} \label{smodex} \begin{aligned} S_{\rm mod}(q,\tilde q;{\cal M}_{k}) & = {\rm tr}_k \Big\{ \mathrm{i} D_c\Big({\bar w}_{\dot\alpha}(\tau^c)^{\dot\alpha}_{~\dot\beta}w^{\dot\beta} +\mathrm{i} \bar\eta_{\mu\nu}^c \big[{a}^\mu,{a}^\nu\big]\Big) \\& - \mathrm{i} {\lambda}^{\dot\alpha}\Big(\bar{\mu}{w}_{\dot\alpha}+ \bar{w}_{\dot\alpha}{\mu} + \big[a_\mu,{M}^{\alpha}\big]\sigma^\mu_{\alpha\dot\alpha}\Big)\Big\} \\ & + {\rm tr}_k\sum_{f=1}^{N_F} \Big\{ {\bar w}_{\dot\alpha} \big[q^{\dagger f}{q}_f + {\tilde q}^f\tilde{q}^\dagger_f\big] w^{\dot\alpha} - \frac{\mathrm{i}}{2}\, {\bar \mu}\, q^{\dagger f} \mu'_f + \frac{\mathrm{i}}{2}\, {{\tilde\mu}'}{}^f\tilde{q}^\dagger_f\, \mu\Big\}~. \end{aligned} \end{equation} The first two lines above have the sole effect of implementing the (super) ADHM constraints, for which $D_c$ and $\lambda^{\dot\alpha}$ act as Lagrange multipliers. The last line arises from the disk diagrams which contain insertions of the chiral scalars and survive in the field theory limit; these diagrams are depicted in Fig. \ref{fig:1}.
\begin{figure} \begin{center} \begin{picture}(0,0)% \includegraphics{mod_chiral.eps}% \end{picture}% \setlength{\unitlength}{1973sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(10583,8451)(1,-7849) \put(1,-5671){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\tilde{q}^\dagger$}}}} \put(2746,-7711){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\mu$}}}} \put(1321,-7231){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(4231,-5986){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$5_a$}}}} \put(1216,-4666){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_c$}}}} \put(8146,-7741){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\mu^\prime$}}}} \put(6616,-4696){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(6721,-7261){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_b$}}}} \put(9631,-6016){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$5_a$}}}} \put(5401,-5701){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q^\dagger$}}}} \put(4261,-1441){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$5_a$}}}} \put(2326,-3136){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(2191,299){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(3676,-61){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\bar w$}}}} 
\put(3811,-2836){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$w$}}}} \put(9661,-1426){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$5_a$}}}} \put(7726,-3121){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(7591,314){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(9076,-46){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\bar w$}}}} \put(9211,-2821){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$w$}}}} \put(751,179){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q^\dagger$}}}} \put(991,-3031){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q$}}}} \put(6211,194){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\tilde{q}$}}}} \put(6361,-3031){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\tilde{q}^\dagger$}}}} \put(721,-1351){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_b$}}}} \put(6181,-1396){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_c$}}}} \put(2716,-4066){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\tilde{\mu}^\prime$}}}} \put(8101,-4126){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}${\bar \mu}$}}}} \end{picture}% \end{center} \caption{Disk interactions between the moduli and the chiral scalars which survive in the field theory limit} \label{fig:1} \end{figure} There are other non-zero disk diagrams with moduli and matter fields that survive in the field theory limit. However, these other diagrams can be related to the ones in Fig. 
\ref{fig:1} by means of supersymmetry Ward identities~\cite{Green:2000ke,Billo:2002hm}. This implies that the complete result is obtained simply by replacing the scalars $q$ and $\tilde q$ and their conjugates in (\ref{smodex}) with the corresponding chiral and anti-chiral superfields. From now on, we will assume this replacement. Notice that the multiplets $q$ and $\tilde q$ appear in (\ref{smodex}) differently from their conjugates $q^\dagger$ and ${\tilde q}^\dagger$; this fact has important consequences for the holomorphicity properties of the instanton-induced correlators, as we will see later. In the moduli action (\ref{smodex}), the superspace coordinates $x_0^{\mu}$ and $\theta^{\alpha}$, defined in (\ref{xtheta}), appear only through the superfields $q(x_0,\theta), \tilde q(x_0,\theta),\ldots$. It is therefore convenient to separate these coordinates from the remaining centered moduli, denoted by $\widehat{\mathcal{M}}_k$, and rewrite the effective action (\ref{Z1}) in terms of a $k$-instanton induced superpotential $W_k$, namely \begin{equation} S_{k}= \int d^4x_0\, d^2\theta ~W_{k}(q,\tilde q)~, \label{Wk} \end{equation} where \begin{equation} W_{k}(q,\tilde q)= {\cal C}_k ~\mathrm{e}^{-\frac{8 \pi^2}{g_a^2}\,k}~ \mathrm{e}^{\mathcal{A}^\prime_{5_a}} \int d{\widehat{\mathcal M}_{k}}~ \mathrm{e}^{-S_{\rm mod}(q,\tilde q;{\widehat{\cal M}_{k}})}~. \label{zk3} \end{equation} Even though $S_{\rm mod}(q,\tilde q;{\widehat{\cal M}_{k}})$ has an explicit dependence on $q^\dagger$ and ${\tilde q}^\dagger$, this dependence disappears upon integrating over $\widehat{\mathcal M}_{k}$ as a consequence of the cohomology properties of the integration measure on the instanton moduli space \cite{Hollowood:2002ds,Dorey:2002ik,Billo:2006jm}. Thus, $W_k(q,\tilde q)$ depends holomorphically on the chiral superfields $q$ and $\tilde q$. However, the annulus amplitude $\mathcal{A}^\prime_{5_a}$ that appears in the prefactor of Eq.
(\ref{zk3}) could introduce a non-holomorphic dependence on the complex and K\"ahler structure moduli of the compactification space. On the other hand, the multiplets $q$ and $\tilde q$ have to be rescaled according to Eq. (\ref{qQ}) to express the result in the supergravity variables, and the holomorphic Wilsonian renormalization group invariant scale $\Lambda_{\scriptscriptstyle\mathrm{hol}}$ has to be introduced. We will consider the interplay of all these observations in Section \ref{sec:relation}, after explicitly evaluating the instantonic annulus amplitude $\mathcal{A}^\prime_{5_a}$ in Section \ref{subsec:mix_ann}. Before this, however, we briefly comment on the non-perturbative superpotential for $k=1$. \subsection{The ADS/TVY superpotential} \label{subsec:ADS} The measure $d{\widehat{\mathcal M}_{k}}$ in Eq. (\ref{zk3}) contains many fermionic zero modes. Among them, the $\lambda^{\dot\alpha}$ are Lagrange multipliers for the fermionic ADHM constraints but, after enforcing these constraints, the $\mu$'s, $\bar\mu$'s, $\mu^\prime$'s and ${\tilde\mu}^\prime$'s must be exactly saturated, otherwise the entire integral vanishes. The single instanton case, $k=1$, is already very interesting. First of all, in this case it is easy to see that the balancing of the fermionic zero-modes requires that $N_F = N_a - 1$. After integrating over the fermions, we are left with a (constrained) Gaussian integration over the bosonic moduli $w_{\dot\alpha}$ and $\bar w_{\dot\alpha}$, which can be explicitly performed {\it e.g.} by going to a region of the moduli space where the chiral fields are diagonal, up to rows/columns of zeroes. Furthermore, the D-terms in the gauge sector constrain the superfields to obey $q^{\dagger f}{q}_f={\tilde q}^f\tilde{q}^\dagger_f$, so that the bosonic integration produces the square of a simple determinant in the denominator, which cancels the anti-holomorphic contributions produced by the fermionic integrals.
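The constraint $N_F=N_a-1$ can also be made plausible by a standard field-theoretical index counting (a heuristic argument, complementary to the string derivation): at instanton number $k$, a Weyl fermion in the adjoint of $\mathrm{U}(N_a)$ carries $2N_a k$ zero modes, while each Weyl fermion in the fundamental or anti-fundamental carries $k$ of them, so that the $N_F$ flavors contribute $2N_F k$ modes. Since the couplings in the moduli action lift these modes in pairs, and the superpotential measure $\int d^4x_0\, d^2\theta$ must be saturated by exactly the two superspace modes $\theta^\alpha$, one needs
\begin{equation*}
2N_a\,k - 2N_F\,k = 2~,
\end{equation*}
which for $k=1$ gives precisely $N_F=N_a-1$.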
In the end, one finds \cite{Dorey:2002ik} (see also Refs. \cite{Akerblom:2006hx,Argurio:2007vq}) \begin{equation} \label{Wk1} W_{k=1}(q,\tilde q) = {\cal C}_k ~\mathrm{e}^{-\frac{8 \pi^2}{g_a^2}\,k}~\mathrm{e}^{\mathcal{A}^\prime_{5_a}} \,\frac1{\det\big(\tilde q q\big)}~, \end{equation} which has the same form as the ADS/TVY superpotential \cite{Affleck:1983mk,Taylor:1982bp}. As we will explicitly see in the following, the prefactor $\mathrm{e}^{\mathcal{A}^\prime_{5_a}}$ is crucial for establishing the correct holomorphy properties of this superpotential when everything is expressed in terms of the supergravity variables (\ref{stu}), the chiral superfields are normalized with their K\"ahler metrics and the Wilsonian scale $\Lambda_{\scriptscriptstyle\mathrm{hol}}$ is introduced. \section{The mixed annuli} \label{subsec:mix_ann} To describe explicitly the instanton induced effects on the low energy effective action, the only ingredient yet to be specified is the annulus amplitude $\mathcal{A}_{5_a}$, whose ``primed'' part appears in the equations from Eq. (\ref{Z1}) on. This amplitude represents the 1-loop vacuum energy of open strings with at least one end point on the wrapped instantonic branes E$5_a$. Because of supersymmetry, the annulus amplitude associated with the E$5_a$/E$5_a$ strings identically vanishes, so $\mathcal{A}_{5_a}$ receives contributions only from mixed annuli with one boundary on the E$5_a$'s and the other on the D$9$ branes. In particular, the 1-loop contribution of the charged instantonic open strings is denoted as \begin{equation} \label{a5a2or} \mathcal{A}_{5_a;9_a} = \mathcal{A}(9a/5a) + \mathcal{A}(5a/9a)~, \end{equation} where on the r.h.s. we distinguish the contributions of the D$9_a$/E$5_a$ and E$5_a$/D$9_a$ open strings.
Similarly, for the flavored instantonic open strings \begin{equation} \label{bc5a2or} \mathcal{A}_{5_a;9_b} = \mathcal{A}(9b/5a) + \mathcal{A}(5a/9b) \quad\mbox{and}\quad \mathcal{A}_{5_a;9_c} = \mathcal{A}(9c/5a) + \mathcal{A}(5a/9c) \end{equation} for the two different stacks of flavor branes used to engineer $\mathcal{N}=1$ SQCD. It has been noticed in the literature \cite{Abel:2006yk,Akerblom:2006hx} that the computation of mixed annuli is related to the stringy computation of the 1-loop threshold corrections to the coupling of the color gauge group living on the D$9_a$. In Ref. \cite{Billo:2007sw} we showed that this relation is explained by the fact that, in a supersymmetric theory, the mixed annuli compute just the running coupling by expanding around the classical instanton background, namely \begin{equation} \label{a5tog} \mathcal{A}_{5_a} = -\frac{8\pi^2 k}{g_a^2(\mu)}\,\Bigg|_{\mathrm{\,at\,1-loop}}~. \end{equation} Applying this argument to our system and keeping distinct the charged and flavored sectors, we expect therefore to find \begin{subequations} \label{ma_running} \begin{align} \mathcal{A}_{5_a;9_a} & = - 8\pi^2k\left(\frac{3N_a}{16 \pi^2} \log(\alpha'\mu^2) + \Delta_{\scriptscriptstyle\mathrm{color}}\right)~, \label{macolor}\\ \mathcal{A}_{5_a;9_b}+\mathcal{A}_{5_a;9_c} & = - 8\pi^2k\left(-\frac{N_F}{16 \pi^2} \log(\alpha'\mu^2) + \Delta_{\scriptscriptstyle\mathrm{flavor}}\right)~. \label{maflavor} \end{align} \end{subequations} In these expressions, $\mu$ is the scale that regularizes the IR divergences of the annuli amplitudes% \footnote{These open string annulus amplitudes exhibit both UV and IR divergences. 
The UV divergences, corresponding to IR divergences in the dual closed string channel, cancel in consistent tadpole-free models; even though in this paper we take only a local point of view, we assume that globally the closed string tadpoles are absent so that we can ignore the UV divergences.} due to the massless states circulating in the loop, and the coefficients of the logarithms arise by counting (with appropriate sign and weight) the bosonic and fermionic ground states of mixed open strings with one end point on the E$5_a$ branes, {\it i.e.} the charged and flavored instanton moduli that we listed in Section \ref{sec:inst_branes}. This counting agrees, as it should, with the 1-loop $\beta$-function coefficients that are appropriate, respectively, for the gauge and the flavor multiplets. Let us now describe the explicit form of the various annulus amplitudes. \paragraph{Charged sector} For a given open string orientation, we have \begin{equation} \label{ma0} \mathcal{A}(9_a/5_a) = \int _0^\infty \frac{d\tau}{2\tau}\left[ \mathrm{Tr}\,_{\mathrm{NS}}\left(P_{\mathrm{GSO}}^{(9_a/5_a)} \,P_{\mathrm{orb.}}\, q^{L_0}\right) - \mathrm{Tr}\,_{\mathrm{R}}\left(P_{\mathrm{GSO}}^{(9_a/5_a)}\,P_{\mathrm{orb.}}\, q^{L_0}\right)\right]~, \end{equation} where $q= \exp(-2\pi\tau)$, $P_{\mathrm{GSO}}^{(9_a/5_a)}$ is the appropriate GSO projector, and \begin{equation} \label{porb} P_{\mathrm{orb.}}= \frac 14\left(1 + \sum_{i=1}^3 h_i\right) \end{equation} is the orbifold projector, with $h_i$ being the three non-trivial elements of the $\mathbb{Z}_2\times\mathbb{Z}_2$ orbifold action of our background. Each element $h_i$ is in fact the generator of a $\mathbb{Z}_2$ subgroup which leaves invariant the $i$-th torus (see Appendix \ref{appsub:geom}). The corresponding term in the amplitude is therefore identical in form to the one encountered in the computation of the $9a/5a$ amplitude in an $\mathcal{N}=2$ background $\mathcal{T}_2^{(j)}\times \mathcal{T}_2^{(k)}$ (with $j,k\not=i$).
This computation is described, for instance, in Section 4 of Ref. \cite{Billo:2007sw}, to which we refer for notation and details. It turns out that the GSO projection in the R sector has to be defined differently for the two string orientations (see Appendix \ref{appsub:vert}) so that the amplitude $\mathcal{A}(9a/5a)$ vanishes, and one is left with \begin{equation} \label{atotfin} \mathcal{A}_{5_a;9_a} = \mathcal{A}(5a/9a) = N_a k \sum_{i=1}^3 \,\int_0^\infty \frac{d\tau}{2\tau} \, \mathcal{Y}^{(i)}~. \end{equation} In the end all string excitations cancel and only the zero-modes contribute: they correspond to the charged instanton moduli listed in Table \ref{tab:moduli}, and their Kaluza-Klein partners on the torus $\mathcal{T}_2^{(i)}$ fixed by the element $h_i$ of the orbifold group; these states reconstruct the sum \begin{equation} \label{ma3} \mathcal{Y}^{(i)}\equiv \sum_{(r_1 , r_2) \in \mathbb{Z}^2} q^{ \frac{ | r_{1} U^{(i)}-r_{2} |^2}{U_{2}^{(i)}T_{2}^{(i)} }}~. \end{equation} The integration over the modular parameter can be done \cite{Lust:2003ky,Billo:2007sw} with the assumption that the UV divergence for $\tau\to 0$, which corresponds to an IR divergence in the closed string channel, cancels in a globally consistent, {\it i.e.} tadpole-free, model (of which here we are considering just the ``local'' aspects on some given stacks of branes far from the orientifold planes). The IR divergence for $\tau\to\infty$ requires the introduction of cut-offs $m_{(i)}$ which are conveniently taken to be complex, as advocated in \cite{Di Vecchia:2003ae,Di Vecchia:2005vm,Billo:2007sw}. The resulting amplitude is then \begin{equation} \label{A5finaa1} \mathcal{A}_{5_a;9_a} = - \,\frac{N_ak}{2} \, \sum_{i=1}^3 \, \left( \log(\alpha^\prime m_{(i)}^2) + \log \big(U_2^{(i)}T_2^{(i)} |\eta(U^{(i)})|^4 \big)\right)~.
\end{equation} Choosing \cite{Billo:2007sw} \begin{equation} m_{(i)}\,=\, \mu\,\mathrm{e}^{\mathrm{i}\varphi_{(i)}}\,=\, \mu\,\mathrm{e}^{2\mathrm{i}\, {\rm arg}(\eta(U^{(i)}))}~, \label{ircutoff} \end{equation} the final result is \begin{equation} \label{A5finaa} \mathcal{A}_{5_a;9_a} = - 8 \pi^2 k \left[\frac{3N_a}{16\pi^2} \log(\alpha^\prime \mu^2) + \frac{N_a}{16\pi^2} \sum_{i=1}^3 \log \Big(U_2^{(i)}T_2^{(i)}\, \eta(U^{(i)})^4\Big)\right]~, \end{equation} which is of the expected form (\ref{macolor}). \paragraph{Flavored sectors} The amplitude $\mathcal{A}_{5_a;9_b}$ receives contributions from the two possible orientations of the open strings, as described in Eq. (\ref{bc5a2or}). These contributions, and therefore also the total amplitude $\mathcal{A}_{5_a;9_b}$, are defined in perfect analogy with Eq. (\ref{ma0}); in particular, they contain the orbifold projector $P_{\rm orb}$ given in (\ref{porb}). Taking into account that the E$5_a$ branes transform according to the trivial representation of the orbifold group, while the D$9_b$ and D$9_c$ transform according to the representation $R_1$ defined in Table \ref{frac3}, we can make explicit the orbifold action on the Chan-Paton factors. We can then write the total amplitude as the sum of four sectors corresponding to the insertions of the various group elements as follows% \footnote{In this notation the identity element $e$ corresponds to $h_0$.}: \begin{equation} {\cal{A}}_{5_a;9_b} \equiv \frac 14 \sum_{I=0}^3 R_1(h_I)\, {\cal{A}}^{h_I}_{5_a;9_b} = \frac 14 \left\{{\cal{A}}^{e}_{5_a;9_b} + {\cal A}_{5_a;9_b}^{h_1} -{\cal A}_{5_a;9_b}^{h_2} - {\cal A}_{5_a;9_b}^{h_3} \right\}~. \label{somma34} \end{equation} The annulus amplitudes ${\cal{A}}^{h_I}_{5_a;9_b}$ take into account the action of the orbifold elements $h_I$ on the string fields $Z^i$ and $\Psi^i$, in the various sectors, as described in Appendix \ref{appsub:geom}.
Such amplitudes are computed in detail in Appendix \ref{appsub:annuli}; here we simply write the final results. In the untwisted sector we find \begin{equation} \label{z59bis} {\cal{A}}_{5_a;9_b}^{e} = \frac{\mathrm{i}\, k\,{N_F}}{2 \pi} \int_{0}^{\infty} \frac{d\tau}{2 \tau} \left[\,\sum_{i=1}^3 R_1(h_i)\,\partial_{z} \log \theta_1 (z| \mathrm{i} \tau )\big|_{z= \mathrm{i} \tau \nu^{(i)}} \right]~, \end{equation} while in the three twisted sectors we have \begin{equation} \label{59tw} \begin{aligned} {\cal A}_{5_a;9_b}^{h_i} = & \frac{\mathrm{i}\, k\,{N_F}}{2 \pi} \int_{0}^{\infty} \frac{d\tau}{2 \tau}\\ & \times \Bigg[ \partial_{z}\log\theta_1(z|\mathrm{i}\tau)\big|_{z=\mathrm{i}\tau\nu^{(i)}} + R_1(h_{i}) \sum_{j\neq i=1}^3 R_1(h_j) \,\partial_{z}\log \theta_2(z|\mathrm{i}\tau)\big|_{z= \mathrm{i}\tau\nu^{(j)}}\Bigg]~. \end{aligned} \end{equation} It is worth pointing out that the contribution of the odd spin-structure, which is divergent due to the superghost zero-modes, actually cancels out when in each sector we sum over the two open string orientations, leading to finite and well-defined expressions. Inserting the amplitudes (\ref{z59bis}) and (\ref{59tw}) in (\ref{somma34}), we find \begin{equation} {\cal{A}}_{5_a;9_b}=\frac{\mathrm{i}\, k\,{N_F}}{4 \pi} \int_{0}^{\infty} \frac{d\tau}{2 \tau} \left\{\,\sum_{i=1}^3 R_1(h_i)\,\partial_{z} \log \big[\theta_1 (z| \mathrm{i} \tau )\,\theta_2 (z| \mathrm{i} \tau )\big]\big|_{z= \mathrm{i} \tau \nu^{(i)}} \right\}~. 
\label{d5d9fin1} \end{equation} Then, if we use the identity \begin{eqnarray} \theta_1 (z| i \tau )\, \theta_2 (z| i \tau ) = \theta_1 (2z| 2i \tau )\,\prod_{n=1}^{\infty} \left(\frac{ 1 - q^{2n}}{1 + q^{2n}} \right)~, \label{1+2=1} \end{eqnarray} where $q = \exp(-2\pi\tau)$, it is easy to see that the total flavored amplitude (\ref{d5d9fin1}) becomes \begin{equation} {\cal A}_{5_a;9_b} = \frac{\mathrm{i}\, k\,{N_F}}{2 \pi} \int_{0}^{\infty} \frac{d\tau}{2 \tau} \left[\,\sum_{i=1}^3 R_1(h_i)\,\partial_{z} \log \theta_1 (z| \mathrm{i} \tau )\big|_{z= \mathrm{i} \tau \nu^{(i)}} \right]~. \label{d5d9fin2} \end{equation} Notice that this is identical to the contribution (\ref{z59bis}) of the untwisted sector. This means that the flavored amplitude of the orbifold theory is the same as the one without the orbifold, so that the $\mathcal{N}=1$ structure realized with the magnetic fluxes is fully preserved by the orbifold projection. The mixed amplitude (\ref{d5d9fin2}) agrees with the quadratic term in the gauge field $f$ of the annulus amplitude ${\cal A}_{9a;9b}(f)$ computed in Ref.~\cite{Lust:2003ky} to evaluate the gauge threshold corrections in intersecting brane models (see also Refs. \cite{Abel:2006yk,Akerblom:2006hx,Akerblom:2007np}). We now need to evaluate the integral over $\tau$ that appears in Eq. (\ref{d5d9fin2}). It is not difficult to realize that this integral is divergent both in the ultraviolet ($\tau\to 0$) and in the infrared ($\tau\to\infty$). The ultraviolet divergence can be eliminated by considering tadpole free models as mentioned above, while the infrared divergence can be cured by introducing, for example, a regulator $R(\tau)= \big(1-\mathrm{e}^{- 1/ (\alpha' m^2 \tau) }\big)$ with the cut-off $m\rightarrow 0$. The original evaluation \cite{Lust:2003ky} of the $\tau$ integral appearing in (\ref{d5d9fin2}) has been recently revisited in Ref. \cite{Akerblom:2007np}. Using this revised result in our case% \footnote{See in particular Eq. 
(3.16) of Ref. \cite{Akerblom:2007np} with all numerical additive constants absorbed in a redefinition of the cut-off $m$.}, we obtain \begin{equation} \label{A5finab} \mathcal{A}_{5_a;9_b} = 8\pi^2 k \left(\frac{N_F}{32\pi^2} \log(\alpha^\prime m^2) + \frac{N_F}{32\pi^2} \log \boldsymbol{\Gamma}_{ba}\right)~, \end{equation} where \begin{equation} \label{Gammab} \boldsymbol{\Gamma}_{ba} = \frac{\Gamma(1-\nu_{ba}^{(1)})}{\Gamma(\nu_{ba}^{(1)})} \frac{\Gamma(\nu_{ba}^{(2)})}{\Gamma(1 - \nu_{ba}^{(2)})} \frac{\Gamma(\nu_{ba}^{(3)})}{ \Gamma(1 - \nu_{ba}^{(3)})}~. \end{equation} Considering also the contribution of the flavor branes of type $c$ that are characterized by twist angles $\nu_{ac}^{(i)}$, and writing $m=\mu\,\mathrm{e}^{\mathrm{i}\varphi}$, for our realization of ${\cal{N}}=1$ SQCD the total flavored annulus amplitude is \begin{equation} \label{A5finac} \mathcal{A}_{5_a;9_b} +\mathcal{A}_{5_a;9_c} = 8\pi^2 k \left(\frac{N_F}{16\pi^2} \big[\log(\alpha^\prime \mu^2) + 2 \mathrm{i} \varphi \big] + \frac{N_F}{32\pi^2} \log \left( \boldsymbol{\Gamma}_{ba} \, \boldsymbol{\Gamma}_{ac} \right) \right)~, \end{equation} which is indeed of the expected form (\ref{maflavor}). \section{Relation to the matter K\"ahler metric} \label{sec:relation} In this section we elaborate on the previous results. In particular we will rewrite the annulus amplitudes (\ref{A5finaa}) and (\ref{A5finac}) in terms of the variables (\ref{stu}) of the supergravity basis in order to obtain information on the K\"ahler metrics for the fundamental $\mathcal{N}=1$ chiral multiplets, and then check that the instanton induced superpotential $W_k$ acquires the correct holomorphy properties required by $\mathcal{N}=1$ supersymmetry. 
\subsection{Holomorphic coupling redefinition} \label{subsec:hol_red} As remarked already in Refs.~\cite{Kaplunovsky:1994fg,Louis:1996ya}, the UV cutoff that has to be used in the field theory analysis of a string model is the four-dimensional Planck mass $M_P$, which is related to $\alpha'$ as follows: \begin{equation} \label{mp} M_P^2\,=\,\frac{1}{\alpha'}\,{\rm e}^{-\phi_{10}}\,s_2~, \end{equation} where $\phi_{10}$ is the ten-dimensional dilaton. In terms of this cut-off, {\it i.e.} using $\alpha^\prime\mu^2 = \big(\mu^2/M_P^2\big)\,{\rm e}^{-\phi_{10}}\,s_2$, Eqs. (\ref{A5finaa}) and (\ref{A5finac}) become, respectively, \begin{equation} \label{A5finaa2} \mathcal{A}_{5_a;9_a} = - 8 \pi^2 k \left(\frac{3N_a}{16\pi^2} \log\frac{\mu^2}{M_P^2}\, + \widetilde{\Delta}_{\scriptscriptstyle\mathrm{color}}\right) \end{equation} with \begin{equation} \label{delc} \widetilde{\Delta}_{\scriptscriptstyle\mathrm{color}}\,=\, \frac{N_a}{16\pi^2} \left( 3 \log ({\rm e}^{-\phi_{10}}\,s_2)\,+\, \sum_{i=1}^3 \log \left(U_2^{(i)} T_2^{(i)}\, \eta(U^{(i)})^4\right)\right)~, \end{equation} and \begin{equation} \label{A5finab2} \mathcal{A}_{5_a;9_b} + \mathcal{A}_{5_a;9_c} \,=\,-\,8\pi^2k\left(-\frac{N_{F}}{16 \pi^2} \log \frac{\mu^2}{M_P^2}\,+\, \widetilde{\Delta}_{\scriptscriptstyle\mathrm{flavor}}\right) \end{equation} with \begin{equation} \label{del} \widetilde{\Delta}_{\scriptscriptstyle\mathrm{flavor}} \,=\,-\, \frac{N_{F}}{16 \pi^2} \left( \log ({\rm e}^{-\phi_{10}}\,s_2)\,+\,2\mathrm{i}\varphi \,+\, \frac{1}{2} \log \big( \boldsymbol{\Gamma}_{ba}\,\boldsymbol{\Gamma}_{ac} \big) \right)~. \end{equation} Since in $\widetilde{\Delta}_{\scriptscriptstyle\mathrm{flavor}}$ there are no analytic terms, we can consistently set $\varphi=0$ in the following. We now rewrite the above expressions in terms of the geometrical variables of the supergravity basis. For the charged amplitude $\mathcal{A}_{5_a;9_a}$ the procedure is very similar to the one we have applied in the ${\mathcal N}=2$ case \cite{Billo:2007sw}.
In fact, using the tree-level relation between the string and the supergravity moduli given in (\ref{stu}) and the bulk K\"ahler potential (\ref{kpot}), Eq. (\ref{A5finaa2}) can be recast in the following form: \begin{equation} \label{A5aa4} \mathcal{A}_{5_a;9_a} \,=k \left[- \frac{3N_{a}}{2} \log \frac{\mu^2}{M_P^2} - N_a \sum_{i=1}^3 \log \left(\eta(u^{(i)})^2\right)+\frac{N_a}2 \,K +N_a \log g_a^2 \right]~. \end{equation} Turning to the flavored amplitude (\ref{A5finab2}), we easily see that it can be rewritten as follows: \begin{equation} {\cal A}_{5_a;9_b}+{\cal A}_{5_a;9_c} = k\left[\frac{N_{F}}{2} \log \frac{\mu^2}{M_{P}^{2}} - \frac{N_{F}}{2} K + \frac{N_{F}}{2} \log (\mathcal{Z}_Q \mathcal{Z}_{\widetilde Q})\right]~, \label{kalo} \end{equation} where the quantities $\mathcal{Z}_Q$ and $\mathcal{Z}_{\widetilde Q}$, defined through the equation \begin{equation} \label{gkq} \log ({\rm e}^{-\phi_{10}}\,s_2)\,+\, \frac{1}{2} \log \left( \boldsymbol{\Gamma}_{ba} \boldsymbol{\Gamma}_{ac} \right) \,=\, - K + \log (\mathcal{Z}_Q \mathcal{Z}_{\widetilde Q})~, \end{equation} are explicitly given by \begin{equation} \mathcal{Z}_Q = \big(4\pi\,s_{2}\big)^{-\frac14}\, \big( t_{2}^{(1)} t_{2}^{(2)} t_{2}^{(3)} \big)^{-\frac14}\, \big( u_{2}^{(1)} u_{2}^{(2)} u_{2}^{(3)}\big)^{-\frac12} \,\big(\boldsymbol{\Gamma}_{ba}\big)^{\frac12}~, \label{KQ} \end{equation} and \begin{equation} {\mathcal{Z}}_{\widetilde{Q}} = \big(4\pi\,s_{2}\big)^{-\frac14}\, \big( t_{2}^{(1)} t_{2}^{(2)} t_{2}^{(3)} \big)^{-\frac14}\, \big( u_{2}^{(1)} u_{2}^{(2)} u_{2}^{(3)}\big)^{-\frac12} \,\big(\boldsymbol{\Gamma}_{ac}\big)^{\frac12}~. \label{KQtilde} \end{equation} Eqs.
(\ref{A5aa4}) and (\ref{kalo}) can be combined in the general formula at 1-loop \begin{equation} \mathcal{A} =k\left[ -\frac{b}2\,\log \frac{\mu^2}{M_P^2}\,+\,f^{(1)}\,+\,\frac{c}{2}\,K \,-\,T(G_a)\,\log\left(\frac{1}{g_a^2}\right)+\sum_r n_r\,T(r)\,\log \mathcal{Z}_r \right] \label{kl} \end{equation} where $f^{(1)}$ is a holomorphic function and \begin{equation} \label{bc} \begin{aligned} &T(r)\,\delta_{AB}=\mathrm{Tr}\,_r\big(T_AT_B\big)\quad,\quad T(G_a)=T(\mathrm{adj})~, \\ &b=3\,T(G_a)-\sum_r n_r\, T(r)\quad,\quad c=T(G_a)-\sum_r n_r\,T(r) \end{aligned} \end{equation} with $T_A$ being the generators of the gauge group $G_a$ and $n_r$ the number of $\mathcal{N}=1$ chiral multiplets transforming in the representation $r$. Indeed, Eq. (\ref{A5aa4}) is obtained from Eq. (\ref{kl}) when we consider the $\mathcal N=1$ vector multiplet ({\it i.e.} $b= 3 N_a$ and $c= N_a$), and take \begin{equation} \label{ff} f^{(1)} =- \, N_a \sum_{i=1}^3 \log \left(\eta(u^{(i)})^2\right)~. \end{equation} On the contrary Eq. (\ref{kalo}) is obtained from Eq. (\ref{kl}) by considering the fundamental matter fields of $\mathcal{N}=1$ SQCD with $N_F$ flavors ({\it i.e.} $b = c= -N_F$), taking $f^{(1)}=0$ and identifying $\mathcal{Z}_r$ with $\mathcal{Z}_Q$ and ${\mathcal{Z}}_{\widetilde{Q}}$ of Eqs. (\ref{KQ}) and (\ref{KQtilde}). In view of the relation (\ref{a5tog}), we see that by adding the disk contribution to the above annulus amplitude one obtains the running coupling constant of the effective theory in the 1-loop approximation, namely \begin{equation} \mathcal{A}_{\mathrm{1-loop}} = -\frac{8\pi^2k}{g_a^2} + \mathcal{A} \label{new1loop} \end{equation} with $\mathcal{A}$ given by (\ref{kl}), {\it i.e.} by the sum of Eqs. (\ref{A5aa4}) and (\ref{kalo}). On the other hand, according to Refs. 
\cite{Dixon:1990pc,Kaplunovsky:1994fg,Louis:1996ya,Shifman:1986zi} this has to be expressed in terms of the Wilsonian gauge coupling ${\tilde g}_a$, the (tree-level) bulk K\"ahler potential $K$ and the (tree-level) K\"ahler metrics $K_r$ of the chiral multiplets in the representation $r$ of the gauge group $G_a$ as follows \begin{equation} \mathcal{A}_{\mathrm{1-loop}} = -\frac{8\pi^2k}{{\tilde g}_a^2} +k\left[ -\frac{b}2\,\log \frac{\mu^2}{M_P^2}+f^{(1)}+\frac{c}{2}\,K -T(G_a)\log\left(\frac{1}{{\tilde g}_a^2}\right) +\sum_r n_r\,T(r)\log K_r \right] \label{kl1} \end{equation} with $f^{(1)}$ being a holomorphic function, and $b$ and $c$ defined as in (\ref{bc}). Notice that the gauge coupling constant $g_a$ obtained from the disk amplitude may not coincide with the Wilsonian coupling ${\tilde g}_a$ appearing in (\ref{kl1}): in general there may be loop effects, related to sigma-model anomalies in the low energy supergravity theory \cite{Derendinger:1991hq}, so that \begin{equation} \frac{1}{g_a^2} = \frac{1}{{\tilde g}_a^2}+\frac{\delta}{8\pi^2}~. \label{deltags} \end{equation} Since ${\tilde g}_a$ is the Wilsonian coupling, it has to be (the imaginary part of) a chiral field: at tree-level this is indeed what happens (see Eq. (\ref{gym})), but such a relation may be spoiled by loop corrections leading to $\delta\not=0$. Furthermore, ${\tilde g}_a$ runs only at 1-loop and its $\beta$-function is given by \begin{equation} \beta_{\mathrm{W}}({\tilde g}_a) = -\frac{b_1\,{\tilde g}_a^3}{16\pi^2}~. \label{betaw} \end{equation} Comparing Eq. (\ref{kl1}) with the string expression (\ref{new1loop}) for $\mathcal{A}_{\mathrm{1-loop}}$, we see: \begin{itemize} \item that in the right hand side of Eq. 
(\ref{kl}) the $\log(1/g_a^2)$ term can be replaced by $\log(1/{\tilde g}_a^2)$ since the difference yields a higher order correction; \item that $f^{(1)}$ is given by (\ref{ff}); and \item that if $\delta$ contains a term $\delta^{(0)}$ of order ${\tilde{g}_a}^{\,0}$, the tree-level K\"ahler metrics of the chiral fields are given by \begin{equation} K_r = \mathcal{Z}_r \mathcal{X}_r \label{ZX} \end{equation} where the non-holomorphic factors $\mathcal{X}_r$ are such that \begin{equation} \delta^{(0)} + \sum_r n_r\,T(r)\log \mathcal{X}_r=0~. \label{deltaX} \end{equation} \end{itemize} Note that if $\delta$ is of higher order in $\tilde{g}_a$, {\it i.e.} if $\delta^{(0)}=0$, then $\mathcal{X}_r=1$ and the tree-level K\"ahler metric of the chiral multiplets reduces to $\mathcal{Z}_r$ (see Eqs. (\ref{KQ}) and (\ref{KQtilde})), that is, to what can be read off directly from the string annulus amplitude (\ref{kl}). In the following subsection we will check the consistency of this result by showing that the instanton induced superpotential has the correct holomorphy properties required by $\mathcal{N}=1$ supersymmetry when everything is expressed in the appropriate variables of the low energy effective action. Moreover, in Section \ref{sec:yukawa} we will compare our findings with the holomorphy properties of the Yukawa superpotential computed in Ref. \cite{Cremades:2004wa} for systems of magnetized D9 branes in the field theory limit.
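As a concrete illustration of (\ref{bc}) in our SQCD setting (using the standard normalizations $T(\mathrm{adj})=N_a$ and $T(\mathrm{fund})=\frac12$, and counting the $N_F$ fundamental plus $N_F$ anti-fundamental chiral multiplets), the coefficients take the values
\begin{equation*}
b = 3N_a - 2N_F\cdot\tfrac12 = 3N_a - N_F~, \qquad c = N_a - N_F~,
\end{equation*}
so that for $N_F=N_a-1$ one has $b=2N_a+1$, the familiar one-loop coefficient of the ADS/TVY case.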
\subsection{Field redefinitions and the instanton induced superpotential} \label{subsec:fr_ADS} The threshold corrections $\widetilde{\Delta}_{\scriptscriptstyle\mathrm{color}}$ and $\widetilde{\Delta}_{\scriptscriptstyle\mathrm{flavor}}$, and especially their non-holomorphic parts, play an important r\^ole since they are related to the ``primed'' part of the annulus amplitude that appears in the prefactor of the instantonic correlators; in fact \begin{equation} \mathcal{A}^\prime_{5_a} = - 8 \pi^2 k \left(\widetilde{\Delta}_{\scriptscriptstyle\mathrm{color}} + \widetilde{\Delta}_{\scriptscriptstyle\mathrm{flavor}} \right)~. \label{aprime} \end{equation} In Ref. \cite{Akerblom:2007uc} it has been suggested that some of the terms of $\mathcal{A}^\prime_{5_a}$ are related to the rescalings of the fields appearing in the instanton induced correlator that are necessary in order to have a purely holomorphic expression. In Ref. \cite{Billo:2007sw} we showed in detail that for $\mathcal{N}=2$ models, where the instantons determine corrections to the gauge prepotential, this is indeed what happens. Here we show that the same is true also for the instanton-induced superpotential for $\mathcal{N}=1$ theories, thus clarifying the general procedure. We concentrate on the one-instanton case ($k=1$), where one finds, for $N_F = N_a-1$, the instanton-induced ADS/TVY-like superpotential of \eq{Wk1}. For $k=1$, the ``primed'' amplitude (\ref{aprime}) explicitly reads \begin{equation} \label{a5p} \mathcal{A}_{5_a}^\prime = - N_a \sum_{i=1}^3 \log \left(\eta(u^{(i)})^2 \right) + N_a \log g_a^2 + \frac{N_a - N_F}{2} K + \frac{N_F}{2} \log(\mathcal{Z}_Q \mathcal{Z}_{\widetilde Q})~. \end{equation} Using this expression in Eq.
(\ref{Wk1}) and introducing the K\"ahler metrics $K_Q=\mathcal{Z}_Q \mathcal{X}_Q$ and $K_{\widetilde Q}=\mathcal{Z}_{\widetilde Q}\mathcal{X}_{\widetilde Q}$, with $\mathcal{Z}_{Q}$ and $\mathcal{Z}_{\widetilde Q}$ given in (\ref{KQ}) and (\ref{KQtilde}), and $\mathcal{X}_{Q}$ and $\mathcal{X}_{\widetilde Q}$ such that \begin{equation} \delta^{(0)} + \frac{N_F}{2}\,\log(\mathcal{X}_Q \mathcal{X}_{\widetilde Q})=0~, \end{equation} we obtain \begin{equation} W_{k=1} = \mathrm{e}^{{K}/2}\,\prod_{i=1}^3\left(\eta(u^{(i)})^{-2 N_a}\right)\, \Big((\sqrt{\alpha'})^{-(2 N_a +1)}\,\mathrm{e}^{-\frac{8 \pi^2}{{\widetilde g}_a^2}}\Big)\, \big(K_Q K_{\tilde Q}\big)^{\frac{N_a-1}2}\,\frac{1}{\det(\tilde q q)}~, \label{Wk12} \end{equation} where we have used Eq. (\ref{deltags}) and the fact that $N_F=N_a-1$. If we now introduce the chiral multiplets $Q$ and $\tilde Q$ in the supergravity basis through the rescalings (\ref{qQ}), and the holomorphic renormalization group invariant scale through the $\beta$-function (\ref{betaw}), namely \begin{equation} \Lambda_{\scriptscriptstyle\mathrm{hol}}^{b} = (\sqrt{\alpha'})^{-b}\,\mathrm{e}^{-\frac{8 \pi^2}{{\widetilde g}_a^2}}~, \end{equation} we find \begin{equation} \label{Wk1hol} W_{k=1} = \mathrm{e}^{K/2}\,\prod_{i=1}^3\left(\eta(u^{(i)})^{-2 N_a}\right)\, \Lambda_{\scriptscriptstyle\mathrm{hol}}^{2 N_a +1}\,\frac{1}{\det(\widetilde Q \,Q)} \equiv \mathrm{e}^{K/2} \, \widehat{\Lambda}_{\scriptscriptstyle\mathrm{hol}}^{\,2 N_a +1}\,\frac{1}{\det(\widetilde Q Q)} ~. \end{equation} In the last step we have absorbed the moduli-dependent factors of $\eta(u^{(i)})$ by a holomorphic redefinition of the Wilsonian scale $\Lambda_{\scriptscriptstyle\mathrm{hol}}$ into $\widehat{\Lambda}_{\scriptscriptstyle\mathrm{hol}}$. The final form of \eq{Wk1hol} is the correct one for a holomorphic ADS/TVY superpotential term in a non-trivial background.
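A simple cross-check, based only on standard dimensional analysis and independent of the string computation: the superpotential must have mass dimension three, and indeed, since $\det(\widetilde Q\, Q)$ contains $2N_F$ fields of dimension one,
\begin{equation*}
\big(2N_a+1\big) - 2N_F = \big(3N_a - N_F\big) - 2N_F = 3\,\big(N_a-N_F\big) = 3
\end{equation*}
for $N_F=N_a-1$.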
The factor of $\mathrm{e}^{K/2}$ is the contribution of the bulk K\"ahler potential, while the remaining part \begin{equation} {\widehat W}_{k=1} = \widehat{\Lambda}_{\scriptscriptstyle\mathrm{hol}}^{\,2 N_a +1}\,\frac{1}{\det(\widetilde Q Q)} \label{hatW12} \end{equation} is a holomorphic expression in the appropriate variables of the Wilsonian scheme. Thus, the various pieces of the ``primed'' instantonic annulus amplitude $\mathcal{A}_{5_a}^\prime$ have conspired to reproduce the required factors to obtain a fully holomorphic ADS/TVY superpotential ${\widehat W}_{k=1}$. \section{Comparison with the Yukawa couplings} \label{sec:yukawa} It is well known that the K\"ahler metrics of the chiral multiplets play a key r\^ole in relating the holomorphic superpotential couplings in the effective supergravity Lagrangian to the physical Yukawa couplings of the canonically normalized matter fields. This relation represents therefore a possible test on the structure of the K\"ahler metrics $K_Q$ and $K_{\tilde Q}$. Let us recall some basic points, and set up appropriate notations. When various stacks of branes, of types $a,b,c,\ldots$, are present, there are chiral multiplets arising from the massless open strings stretching between them. We will denote as $q^{ba}$ the chiral multiplet (as well as the scalar therein) coming from the D$9_b$/D$9_a$ strings, which we formerly indicated as $q$, and as $q^{ac}$ the chiral multiplet corresponding to D$9_a$/D$9_c$ strings, which was previously indicated as $\tilde q$. We will then similarly have the multiplets $q^{cb}$, \ldots . The corresponding multiplets in the ``supergravity'' basis will be denoted as $Q^{ba}$, $Q^{ac}$, $Q^{cb}$, \ldots, and their K\"ahler metrics will be $K_{ba}$ (formerly $K_Q$), $K_{ac}$ (formerly $K_{\tilde Q}$) and so on. 
These metrics will contain the appropriate factors $\boldsymbol{\Gamma}_{ba}$, $\boldsymbol{\Gamma}_{ac}$, $\boldsymbol{\Gamma}_{cb}$, \ldots~, which in turn are given by the analogue of \eq{Gammab} in terms of the twist angles $\nu^{(i)}_{ba}$, $\nu^{(i)}_{ac}$, $\nu^{(i)}_{cb}$, \ldots~. \begin{figure} \begin{center} \begin{picture}(0,0)% \includegraphics{yuk.eps}% \end{picture}% \setlength{\unitlength}{1973sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5348,4971)(76,-4444) \put(4501,-3811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\chi_{ac}$}}}} \put(2401,239){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$q_{cb}$}}}} \put(4501,-1561){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_c$}}}} \put( 76,-1411){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_b$}}}} \put(2476,-4336){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$9_a$}}}} \put(451,-3961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4} {\familydefault}{\mddefault}{\updefault}$\chi_{ba}$}}}} \end{picture}% \end{center} \caption{A disk diagram leading to a Yukawa coupling.} \label{fig:yuk} \end{figure} In this situation, there are non-trivial interactions supported on disks whose boundary is partly attached to three different branes, say of types $a$, $c$ and $b$, provided the twist angles $\nu^{(i)}_{ba}$, $\nu^{(i)}_{ac}$, $\nu^{(i)}_{cb}$ for each $i$ are either the internal or the external angles of a triangle. These interactions in the field-theory limit correspond to Yukawa couplings between the fields of the chiral multiplets $q^{ac}$, $q^{cb}$ and $q^{ba}$, like for instance the one associated to Fig. 
\ref{fig:yuk}: \begin{equation} \label{yk} \int d^4x \,\, Y_{acb}\, \mathrm{Tr}\,\big(\chi^{ac} q^{cb} \chi^{ba}\big)~, \end{equation} plus its supersymmetric completion terms. Altogether such interactions can be encoded in the cubic superpotential \begin{equation} \label{supyuk} W_{\mathrm{Y}}=Y_{acb}\, \mathrm{Tr}\,\big(q^{ac} q^{cb} q^{ba}\big)~. \end{equation} If we rewrite the above superpotential in terms of the multiplets in the supergravity basis via the rescalings (\ref{qQ}), we obtain \begin{equation} \label{supyukQ} W_{\mathrm{Y}}=Y_{acb}\, \big( K_{ac}\,K_{cb}\,K_{ba}\big)^{\frac 12}\,\,\mathrm{Tr}\,\big(Q^{ac} Q^{cb} Q^{ba}\big)~. \end{equation} On the other hand, in the effective supergravity action this superpotential must take the form \begin{equation} \label{supyuksugra} W_{\mathrm{Y}}= \mathrm{e}^{K/2} \,{\widehat W}_{acb} \,\mathrm{Tr}\,\big(Q^{ac} Q^{cb} Q^{ba}\big)~, \end{equation} where $\mathrm{e}^{K/2}$ is the standard contribution of the bulk K\"ahler potential and ${\widehat W}_{acb}$ are purely holomorphic functions of the geometric moduli. Comparing these last two equations, we deduce that \begin{equation} \label{YW} Y_{acb} = \mathrm{e}^{K/2}\, \big(K_{ac}\,K_{cb}\,K_{ba}\big)^{-\frac 12}\,{\widehat W}_{acb}~.
\end{equation} If we now use the bulk K\"ahler potential (\ref{kpot}) and the K\"ahler metric $K_{ba}=\mathcal{Z}_{ba}\mathcal{X}_{ba}$, with $\mathcal{Z}_{ba}$ given in (\ref{KQ}) and similarly for $K_{ac}$ and $K_{cb}$, we easily obtain \begin{equation} \label{YWus} \begin{aligned} Y_{acb} & = (4\pi)^{\frac 38} \,s_2^{-\frac 18} \,\big(t^{(1)}_{2} t^{(2)}_{2} t^{(3)}_{2}\big)^{-\frac 18} \,\big(u^{(1)}_{2} u^{(2)}_{2}u^{(3)}_{2}\big)^{\frac 14}\, \big(\boldsymbol{\Gamma}_{ac} \boldsymbol{\Gamma}_{cb} \boldsymbol{\Gamma}_{ba}\big)^{-\frac 14}\, \big(\mathcal{X}_{ac} \mathcal{X}_{cb} \mathcal{X}_{ba}\big)^{-\frac 12}\,{\widehat W}_{acb} \\ & = \sqrt{4\pi}\,\,\mathrm{e}^{\frac{\phi_4}{2}} \,\big(u^{(1)}_{2} u^{(2)}_{2}u^{(3)}_{2}\big)^{\frac 14}\, \big(\boldsymbol{\Gamma}_{ac} \boldsymbol{\Gamma}_{cb} \boldsymbol{\Gamma}_{ba}\big)^{-\frac 14}\, \big(\mathcal{X}_{ac} \mathcal{X}_{cb} \mathcal{X}_{ba}\big)^{-\frac 12}\, {\widehat W}_{acb}~, \end{aligned} \end{equation} where $\phi_4=\phi_{10} -\frac{1}{2}\sum_i \log(T_2^{(i)})$ is the four-dimensional dilaton. We now compare this finding with the results of Ref. \cite{Cremades:2004wa} for the physical Yukawa couplings of toroidal models with magnetized D9 branes% \footnote{In the T-dual intersecting brane version, these couplings have been studied in Refs.~\cite{Cremades:2003qj,Cvetic:2003ch,Abel:2003vv,Lust:2004cx}.}. The expression for $Y_{acb}$ is given in their Eq. (7.13). Setting to zero the value of the Wilson lines, and rewriting it in terms of the supergravity moduli through Eq.s (5.47)-(5.49) of the same reference, in our notation it reads \begin{equation} \label{cr} Y_{acb} = \mathrm{e}^{\frac{\phi_4}{2}}\, \big(u^{(1)}_{2} u^{(2)}_{2}u^{(3)}_{2}\big)^{\frac 14} \,\prod_{i=1}^3 \left| \frac{{\vartheta}_1^{(i)} {\vartheta}_2^{(i)}}{{\vartheta}_1^{(i)} + {\vartheta}_2^{(i)}}\right|^{\frac 14} W'_{acb}~. 
\end{equation} The detailed expression of the quantities ${\vartheta}_1^{(i)}$, ${\vartheta}_2^{(i)}$ and $W'_{acb}$ is not relevant here; the only important points that we want to emphasize are that $W'_{acb}$ is a holomorphic function of the complex structure moduli $u^{(i)}$ and that the expression (\ref{cr}) has been obtained starting from the non-abelian Yang-Mills theory on the D9 branes, rather than from the full-fledged DBI action. Therefore one expects that it only represents the field theory limit of the string result% \footnote{See Ref. \cite{Russo:2007tc} for a string theory calculation of the Yukawa couplings and a direct derivation of the $\boldsymbol{\Gamma}$ factors.}. However, as already argued in Ref. \cite{Cremades:2004wa}, one can extend \eq{cr} by observing that in the field theory limit $\alpha'\to 0$ ({\it i.e.} in the small twist limit) one has \begin{equation} \label{ItoG} \big(\boldsymbol{\Gamma}_{ac} \boldsymbol{\Gamma}_{cb} \boldsymbol{\Gamma}_{ba}\big)^{-\frac 14}\,\sim\,\prod_{i=1}^3 \left| \frac{\vartheta_1^{(i)} \vartheta_2^{(i)}}{ \vartheta_1^{(i)} + \vartheta_2^{(i)}}\right|^{\frac 14} ~. \end{equation} With this understanding, \eq{cr} can then be generalized as \begin{equation} \label{crs} Y_{acb} = \mathrm{e}^{\frac{\phi_4}{2}}\, \big(u^{(1)}_{2} u^{(2)}_{2}u^{(3)}_{2}\big)^{\frac 14} \,\big(\boldsymbol{\Gamma}_{ac} \boldsymbol{\Gamma}_{cb} \boldsymbol{\Gamma}_{ba}\big)^{-\frac 14} \,W'_{acb}~, \end{equation} which agrees with Eq. (\ref{YWus}) by taking $W'_{acb} =\sqrt{4\pi}\,\widehat W_{acb}$, provided the non-holomorphic factors obey \begin{equation} \mathcal{X}_{ac} \mathcal{X}_{cb} \mathcal{X}_{ba}=1~. \label{vincx} \end{equation} Indeed, stripping off the various factors of the K\"ahler potential and of the K\"ahler metrics from the physical Yukawa couplings $Y_{acb}$ according to \eq{YWus}, we can obtain the expected holomorphic structure of the superpotential only if (\ref{vincx}) is satisfied.
The simplest solution to this constraint is clearly $\mathcal{X}_{ba}=\mathcal{X}_{cb}=\mathcal{X}_{ac}=1$. This would imply that the K\"ahler metrics for the twisted chiral matter fields are given by the expressions (\ref{KQ}) and (\ref{KQtilde}). Notice that in this case the only dependence on the twist parameters would be through the $\Gamma$-functions contained in the factors $\big(\boldsymbol{\Gamma}_{ba}\big)^{\frac12}$ and $\big(\boldsymbol{\Gamma}_{ac}\big)^{\frac12}$ (see Eq. (\ref{Gammab})). Such factors are the same as the ones that can be obtained directly from a 3-point string scattering amplitude involving one (closed string) geometric modulus and two twisted scalar fields, as explained in Refs. \cite{Lust:2004cx,Bertolini:2005qh}. On the other hand, the possibility of non-trivial $\mathcal{X}$ factors has been considered in Refs. \cite{Akerblom:2007uc,Blumenhagen:2007ip}. Besides satisfying the constraint (\ref{vincx}), such non-holomorphic factors should also be related to a non-vanishing sigma-model anomaly term $\delta^{(0)}$, as explained in Section \ref{subsec:hol_red}. It would be very interesting to do an independent calculation to check this point. We close with a few concluding remarks. Loop corrections to the bulk K\"ahler potential or to the Einstein term in the bulk action, which for Type II theories have been computed in Refs. \cite{Antoniadis:1996vw,Berg:2005ja}, in general induce shifts of the supergravity variables and in particular are responsible for a non-vanishing $\delta$-term in Eq. (\ref{deltags}). However, such a $\delta$-term appears to be of order $g_s \sim {\tilde g}_a^{\,2}$ and thus it does not affect the form of the K\"ahler metric for the $\mathcal{N}=1$ twisted matter at tree-level. Furthermore, using the known expressions for the K\"ahler potential and K\"ahler metrics, in Ref. \cite{Billo:2007sw} we have explicitly checked that in $\mathcal{N}=2$ SQCD no shift $\delta^{(0)}$ is produced. 
It would be very interesting to explore this issue further and see whether and how an anomalous term $\delta^{(0)}$ in $\mathcal{N}=1$ theories with magnetized branes is possible. \acknowledgments{We thank R. Russo for useful discussions. This work is partially supported by the Italian MUR under contract PRIN-2005023102 {``Strings, D-branes and Gauge Theories''} and by the European Commission FP6 Programme under contract MRTN-CT-2004-005104 ``{Constituents, Fundamental Forces and Symmetries of the Universe}'', in which A.L. is associated to University of Torino, and R.M. to INFN-Frascati. We thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support during the completion of this work.}
\section{Introduction} \noindent The study of binary stochastic processes has a long-standing tradition in probability theory. There exist many versions of such processes, for example, the telegraph process in continuous time or the simple discrete-time Markov chain. These processes have found applications in many fields, for example, in renewal theory, signal processing \citep{SBRS}, and statistical physics \citep{IIA}. The focus of this paper is the switch process with independent switching times. More specifically, we consider a continuous-time stochastic process taking values in $\{-1,1\}$, starting at $1$ at the origin, and then switching according to an i.i.d. sequence of non-negative random variables. The switch process always starts from one and hence is not stationary; however, a convenient stationary counterpart can be defined. This counterpart will be referred to as the stationary switch process. The expected value of the switch process is intrinsically connected with the switching time distribution. This is also the case for the covariance of the stationary switch process. Formalizing this connection is the main contribution of the paper, alongside the formulation of the switch process and the derivation of its underlying properties. The connection also leads to a class of distributions which constitutes a proper sub-class of the geometric infinite divisible distributions introduced by \cite{klebanov85}. There is a natural connection between geometric divisibility of the switching time distribution and thinning of a renewal process. The main result of this paper also allows one to recover the original distribution of the renewal arrival times from the thinned process, see also \cite{Thin}. Obtaining the switching time distribution from the covariance function of a stationary binary process is a well-known problem in signal processing \citep{SBRS}. In this work, a complete solution is presented for the class of geometric divisible distributions.
This is done using a relation between the covariance of the stationary switch process and the expected value of the switch process. To this end, an analogous characterization is provided that explicitly identifies a function as the expected value of a switch process. Approximating exceedance time distributions is important in statistical physics, since finding explicit analytical solutions is a well-known difficult problem in probability theory. There are several possible approaches; among them are numerical ones based on the generalized Rice formula, see for example \cite{Joint22}, and quasi-analytical ones such as the independent interval approximation (IIA) framework, see for example \cite{Sire2007}. Our derived relation between the expected value, the covariance, and the switching time distribution contributes to this framework. Often, the tail behavior is of main interest, see for example \cite{IIA} and \cite{Sire2007}. From a mathematical point of view, it is a challenge to provide conditions under which this approximation leads to a valid probability distribution, which is not obvious at all. The results of this paper allow us to obtain an explicit representation of the approximated distribution for many stochastic processes commonly used in statistical physics. This not only provides information about the tail behavior but also yields the explicit approximated distribution of excursions above or below zero. The diffusion process in two dimensions is used to illustrate this application to the IIA approximation of the exceedance time distribution. The paper is self-contained in that it contains the formulation of the switch process and complete derivations of the relations between the expected value and covariance of the switch process and the switching time distribution. The structure of the paper is as follows.
In Section~\ref{preliminaries}, preliminaries such as the definition of the switch process, results connected with its expected value, and the concept of $r$-geometric divisibility are presented. The relation between the $2$-geometric divisible switching time distributions and the expected value of the switch process is presented in Section~\ref{mainresult}. The definition of the stationary switch process and the relation between its covariance and the expected value function of the switch process are presented in Section~\ref{SBRS} and are applied to obtain a parallel characterization of the covariance of the switch process. Some examples of deriving the expected value and covariance for different switching time distributions follow in Section~\ref{examples}. In Section~\ref{appli}, an application to the independent interval approximation (IIA) method in statistical physics is presented. \section{Preliminaries} \label{preliminaries} \subsection{The switch process and its expected value} A natural definition of the switch process is through the counting process. Let $\left\{ T_k \right\}_{k\geqslant 1}$ be a sequence of i.i.d. non-negative random variables with the distribution function $F$. Additionally, $F$ is assumed to be absolutely continuous with respect to the Lebesgue measure with a density that is bounded on any closed interval of the positive half-line, and such that $F(0)=0$. Define a count process, for $t\in [0,\infty)$, by \begin{align*} N(t)= \begin{cases} \sup \left\{ n\in \mathbb N; \sum_{k=1}^n T_k \leqslant t \right\}, & t\geqslant T_1,\\ 0, & 0\leqslant t<T_1. \end{cases} \end{align*} In other words, $N(t)$ is the number of renewal events up to the time point $t$. \begin{definition} \label{Defswitch} Let $N(t)$, $t\geqslant 0$, be a count process. Then the switch process is defined by \begin{align*} X(t)=(-1)^{N(t)},~t\geqslant 0. \end{align*} \end{definition} \noindent The process $X(t)$ switches between the values $1$ and $-1$ at each renewal event, hence the name.
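To make Definition~\ref{Defswitch} concrete, here is a minimal Monte Carlo sketch (not part of the paper; written in Python with NumPy, and using exponential switching times as an assumed test case). It simulates paths of $X(t)=(-1)^{N(t)}$ and estimates $E(t)=EX(t)$; for exponential switching times with intensity $\lambda$, the examples section derives the closed form $E(t)=e^{-2\lambda t}$, which serves as a reference value.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expected_value(sampler, t, n_paths=100_000, max_switches=30):
    """Monte Carlo estimate of E(t) = E[(-1)^{N(t)}] for a switch process
    whose i.i.d. switching times T_1, T_2, ... are drawn by `sampler`."""
    gaps = sampler((n_paths, max_switches))      # switching times T_k
    arrivals = np.cumsum(gaps, axis=1)           # renewal epochs T_1 + ... + T_k
    assert (arrivals[:, -1] > t).all(), "increase max_switches"
    n_t = (arrivals <= t).sum(axis=1)            # N(t) on each path
    return ((-1.0) ** n_t).mean()                # estimate of E(t)

# Assumed test case: Exp(lam) switching times, for which the examples
# section gives the closed form E(t) = exp(-2 * lam * t).
lam, t = 1.0, 0.7
est = mc_expected_value(lambda size: rng.exponential(1 / lam, size), t)
```

With $10^5$ paths the estimate agrees with $e^{-2\lambda t}$ to within a couple of standard errors; any other switching-time sampler can be plugged in unchanged.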
Before moving to the expected value function of $X(t)$, a comment concerning convolutions is needed. Throughout this paper, $\star$ will denote convolutions of probability measures, i.e., the distribution functions of sums of i.i.d. random variables. Convolutions of functions will be denoted by $\ast$. The switch process and all its properties are determined by the switching time distribution $F$. One such property is its expected value function, which throughout this paper will be denoted by $E(t)=EX(t)$. The following result is well known in renewal theory; however, for completeness, a proof is provided in Appendix~\ref{proofs}, together with the other proofs relating to this section. \begin{proposition} \label{EXt} Let $X(t)$ be a switch process. Then its expected value function is given through the switching time distribution $F$ by \begin{align*} E(t)&=\sum_{n=0}^\infty (-1)^n \left( F^{n \star } - F^{(n+1) \star } \right)(t). \end{align*} \end{proposition} \noindent The implicit nature of this relation between the expected value function and the switching time distribution poses a problem when one wants to verify that a given function is the expected value function of a switch process. To deal with the repeated convolutions we use the Laplace transform, which is one of the standard tools in renewal theory. In the Laplace domain, the relation between the switching time distribution and the expected value function becomes explicit. To clarify the notation of the next proposition, let $\Psi_F(s)=\int_0^\infty e^{-ts}dF(t)$, i.e., $\Psi_F$ is the Laplace transform of the distribution function $F$. Throughout the paper, $\mathcal{L}(\cdot)$ is used to denote the Laplace transform. \begin{proposition} \label{LEXt} Let $X(t)$ be a switch process with the expected value function $E(t)$. Then for $s>0$, \begin{align*} \mathcal{L}(E)(s)&=\frac{1}{s}\frac{1-\Psi_F(s)}{1+\Psi_F(s)}, \\ \Psi_F(s)&=\frac{1-s\mathcal{L}(E)(s)}{1+s\mathcal{L}(E)(s)}.
\end{align*} \end{proposition} \noindent Compared to Proposition~\ref{EXt}, this result may appear explicit, but it is still hard to use in practice. The right-hand side of the last expression in Proposition~\ref{LEXt} must be a completely monotone function. Recall that a function is completely monotone if and only if $(-1)^n \frac{d^n}{ds^n} f(s)\geqslant0$ for $s>0$ and all $n\geqslant 0$. The necessity of complete monotonicity is a consequence of Bernstein's theorem (see Theorem 1, page 415 in \cite{FellerV2}), which states that a function is the Laplace transform of a probability distribution if and only if it is completely monotone. We conclude that a simple criterion characterizing the set of functions that correspond to expected value functions of some switch process is not easy to obtain from its definition. The limiting properties of the expected value of the switch process are important. The effect of starting from 1 at time zero will diminish, and for large $t$ the expected value tends to zero due to the symmetry, i.e., equal chances of taking the values one and minus one. This leads to the next proposition, which follows from the Key Renewal Theorem; see Appendix~\ref{proofs}. \begin{proposition} \label{limitXt} Let $X(t)$ be a switch process with a continuous switching time distribution and the expected value function $E(t)$, $t\geqslant 0$. Then \begin{align*} \lim_{t \rightarrow 0^+} E(t)=1, \ \ \lim_{t \rightarrow \infty} E(t)=0. \end{align*} \end{proposition} \noindent In characterizing the functional properties of the expected value function of the switch process, the derivative of $E(t)$ with respect to time is needed. \begin{proposition} \label{EXtprim} Let $X(t)$ be a switch process with the expected value function $E(t)$. Let the switching time distribution have a density, $f$, with support in $[0,\infty)$, and assume that there exists $l\in \mathbb{N}$ for which $\sup_{t>0}f^{\ast l}(t)<\infty$. Then \begin{align*} E'(t)=2\sum_{k=1}^{\infty} (-1)^k f^{\ast k}(t), \quad t>0.
\end{align*} Moreover, the convergence of the above series, as well as of the infinite series in Proposition~\ref{EXt}, is locally uniform. \end{proposition} \noindent With the expected value function and its properties presented, we can now present some properties of a special class of switching time distributions. \subsection{Geometric divisibility} \noindent \cite{klebanov85} introduced the concept of geometric infinite divisibility. It describes distributions that can be represented as a sum of i.i.d. random variables where the number of terms in the sum follows a geometric distribution with an arbitrary parameter $p\in (0,1)$. For a broader overview of geometric infinite divisibility see \cite{TomonGID} and \cite{onGID}. Our main focus is on a weaker concept than geometric infinite divisibility, defined next. \begin{definition}\label{rdiv} Let $\nu_p$ be a geometric random variable with the probability mass function $p_{\nu_p} (k)=(1-p)^{k-1}p$ for $k=1,2,\ldots$ and $\{\tilde{W}_k\}_{k\geqslant 1}$ a sequence of i.i.d. non-negative random variables independent of $\nu_p$. If a random variable has the stochastic representation \begin{align*} W=\sum_{k=1}^{\nu_p} \tilde{W}_k, \end{align*} then $W$ follows an $r$-geometric divisible distribution with $r=E\nu_p$ and is said to belong to the class $GD(r)$; we write $F\in GD(r)$. \end{definition} \noindent The distribution of $\tilde{W}$ is then called the $r$-geometric divisor of the distribution of $W$. The notion of divisibility comes naturally from splitting a random variable into a sum of smaller parts, in essence dividing it into, on average, $r$ random variables. If $W$ belongs to $GD(r)$ with the divisor $\tilde{W}$, it follows from Wald's equation that \begin{align*} EW=rE\tilde{W}.
\end{align*} \noindent If $W$ belongs to $GD(r)$, with the distribution function $F$, then it has the following Laplace transform \begin{align*} \Psi_F(s)&=\frac{\frac{1}{r}\Psi_{\tilde{F}}(s)}{1-(1-\frac{1}{r})\Psi_{\tilde{F}}(s)}, \end{align*} where $\Psi_{\tilde{F}}$ is the Laplace transform of the $r$-geometric divisor $\tilde{W}$. By solving the expression above for $\Psi_{\tilde{F}}$, conditions are obtained to verify if a given distribution is $r$-geometric divisible. The function obtained should be a completely monotone function, which again follows from Bernstein's theorem. To highlight this we have the next proposition. \begin{proposition} \label{LGD} A distribution $F$ belongs to $GD(r)$, $r>1$, if and only if \begin{align*} \frac{r\Psi_F(s)}{1 +(r-1)\Psi_F(s)} \end{align*} is a completely monotone function for $s \geqslant 0$ and is equal to one for $s=0$. \end{proposition} For the class of geometric infinite divisible distributions, the divisibility property needs to be satisfied for all $r>1$. The concept of $r$-geometric divisibility is hence weaker. Throughout the rest of the paper, $GD(\infty)$ will refer to the set of geometric infinite divisible distributions, and $GD(r)$ denotes the set of $r$-geometric divisible distributions. There exists a monotone relation between the sets of $r$-geometric divisible distributions. \begin{proposition} \label{setlemma} If $1<u\leqslant r < \infty$, then $GD(r) \subseteq GD(u)$. \end{proposition} \noindent In essence, if $W$ belongs to $GD(r)$ it also belongs to $GD(u)$, for $u\leqslant r$. \section{Switch processes with monotonic expected value function} \label{mainresult} \noindent In this section, we fully characterize switch processes with monotonic expected value functions. For the main result, we recall the assumptions on the switching time distribution. First, it is assumed here that $F(t)$ has support on $(0,\infty)$ and that its corresponding density $f(t)$ is well defined at each $t\in(0,\infty)$.
Secondly, it is assumed that $F(0)=0$, i.e., there is zero probability of instantaneous switching back after a switch. \begin{theorem} \label{Th1} Let $X(t)$ be a switch process, with $E(t)$ as its expected value function and with the switching time distribution $F(t)$. Then the following conditions are equivalent: \begin{itemize} \item[$(i)$] $F(t)\in GD(2)$, \item[$(ii)$] $E(t)$ is non-negative and decreasing. \end{itemize} \end{theorem} \begin{proof} \textit{$(i) \Rightarrow (ii)$:} \\ Since $F(t)\in GD(2)$, it has the following Laplace transform, as described in Section~\ref{preliminaries}, \begin{align*} \Psi_F(s)=\frac{\frac{1}{2} \Psi_{\tilde{F}}(s)}{1- \frac{1}{2} \Psi_{\tilde{F}}(s)}. \end{align*} Substituting this into the first expression of Proposition~\ref{LEXt}, we have \begin{align*} \mathcal{L}(E)(s)&=\frac{1}{s}\frac{1-\frac{\frac{1}{2} \Psi_{\tilde{F}}(s)}{1- \frac{1}{2} \Psi_{\tilde{F}}(s)}}{1+\frac{\frac{1}{2} \Psi_{\tilde{F}}(s)}{1- \frac{1}{2} \Psi_{\tilde{F}}(s)}} =\frac{1}{s} (1- \Psi_{\tilde{F}}(s)), \end{align*} which is equivalent to \begin{align*} s\mathcal{L}(E)(s)-1&=-\Psi_{\tilde{F}}(s). \end{align*} Using the Laplace transform of the derivative, $\mathcal{L}(h')(s)=s\mathcal{L}(h)(s)-h(0)$, and Proposition~\ref{limitXt}, \begin{align*} \mathcal{L}(-E')(s)&=\Psi_{\tilde{F}}(s). \end{align*} By taking the inverse Laplace transform, this implies that $-E'(t)$ is a probability density function. Therefore, to satisfy the limiting results of Proposition~\ref{limitXt}, $E(t)$ must satisfy the conditions of $(ii)$. \\ \noindent \textit{$(ii) \Rightarrow (i)$:} \\ Under the assumptions of $(ii)$ and the results of Proposition~\ref{limitXt} we have \begin{align*} \int_0^\infty E'(t)dt=\lim_{t \rightarrow \infty} E(t) - \lim_{t \rightarrow 0} E(t)=-1, \end{align*} and thus $-E'(t)$ is a probability density function.
Combining the derivative property of the Laplace transform, presented above, and Proposition~\ref{limitXt} with the second equation of Proposition~\ref{LEXt}, we have \begin{align*} \Psi_F (s)&=\frac{1-s\mathcal{L}(E)(s)}{1+s\mathcal{L}(E)(s)} =\frac{\mathcal{L}(-E')(s)}{2-\mathcal{L}(-E')(s)} =\frac{\frac{1}{2} \mathcal{L}(-E')(s) }{1-\frac{1}{2} \mathcal{L}(-E')(s)}. \end{align*} This is the Laplace transform of a $GD(2)$ distribution, as described in Section~\ref{preliminaries}. Therefore $F(t)\in GD(2)$, with the divisor density $-E'(t)$, which yields $(i)$. \end{proof} Theorem~\ref{Th1} directly relates functional properties of the expected value of the switch process to the switching time distribution for the class of $GD(2)$ distributions. In many situations, there is a need to obtain either the expected value function from the switching time distribution or the switching time distribution from the expected value function. The first problem is directly solved by using Proposition~\ref{LEXt}, but for the second problem, it is not usually known what kind of expected value function will produce proper switching time distributions, if any at all. By combining Theorem~\ref{Th1} and the properties of $E(t)$ derived in Section~\ref{preliminaries}, a partial solution can be obtained for the case when the switching time distribution belongs to $GD(2)$. To highlight this partial characterization we have the following proposition, which follows directly from the second part of the proof of Theorem~\ref{Th1}. \begin{proposition} \label{Efunk} Let $E(t)$ be a function for $t\geqslant0$ such that the following conditions are satisfied: \begin{itemize} \item[$(i)$] $\lim_{t\rightarrow 0+}E(t)=1$, \item[$(ii)$] $\lim_{t\rightarrow \infty}E(t)=0$, \item[$(iii)$] $E(t)$ is at least once differentiable on $(0,\infty)$, \item[$(iv)$] $E'(t)\leqslant 0$, for all $t\geqslant0$, \end{itemize} then it is an expected value function of a switch process with a $GD(2)$ switching time distribution.
\end{proposition} From Theorem~\ref{Th1} an explicit representation of the distribution function of the $2$-geometric divisor is obtained. \begin{corollary} \label{Corr1} Let the switching time distribution, $F(t)$, belong to $GD(2)$, with the divisor $\tilde{F}(t)$. Then for $t\geqslant 0$ \begin{align*} E(t)&=1-\tilde{F}(t), \\ E'(t)&=-\tilde{f}(t). \end{align*} \end{corollary} Corollary~\ref{Corr1} gives an explicit representation of the distribution function and density of the $2$-geometric divisor of the switching time distribution in terms of $E(t)$. The standard method to obtain the switching time distribution from the divisor is by using the Laplace transform, as seen in Section~\ref{preliminaries}. Proposition~\ref{setlemma} can be used to extend the results of Theorem~\ref{Th1}. \begin{corollary}\label{Corr2} Let the switching time distribution belong to $GD(r)$ for some $r\geqslant 2$. Then the corresponding expected value function of the switch process, $E(t)$, is non-negative and decreasing for $t\geqslant 0$. \end{corollary} \noindent However, a non-negative and decreasing expected value function does not necessarily imply an $r$-geometric divisible switching time for $r>2$. Let us consider a switch process which is constructed from a count process $N(t)$ and satisfies the conditions of Theorem~\ref{Th1}. Further, let $\tilde{N}(t)$ be a count process with the arrival times distributed according to the divisor of this switch process. The two count processes are related through thinning. More specifically, $N(t)$ is a thinning of $\tilde{N}(t)$, with the probability of thinning equal to $\frac{1}{2}$. Thus we have the following result. \begin{corollary} A switch process $X(t)$ is $\frac 1 2 $-thinned if and only if its expected value is non-negative and decreasing. \end{corollary} From a given trajectory of $N(t)$, the trajectory of the process $\tilde{N}(t)$ cannot be recovered, in general.
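The geometric compounding behind $GD(2)$ and the $\frac12$-thinning corollary can be illustrated numerically. The sketch below (not from the paper; Python with NumPy, and the exponential divisor is an assumed example) builds $W=\sum_{k=1}^{\nu_p}\tilde W_k$ with $p=\frac12$ and checks Wald's identity $EW=rE\tilde W$, together with the classical fact that a geometric sum of exponential variables is again exponential.

```python
import numpy as np

rng = np.random.default_rng(1)

p, lam, n = 0.5, 1.0, 50_000        # p = 1/2, so r = 1/p = 2 (the GD(2) case)
nu = rng.geometric(p, size=n)       # nu_p with P(nu_p = k) = (1-p)^{k-1} p

# Assumed divisor: W_tilde ~ Exp(2*lam); a Geometric(1/2) sum of such
# variables is known to be Exp(lam), so W should look exponential again.
gaps = rng.exponential(1 / (2 * lam), size=nu.sum())
starts = np.concatenate(([0], np.cumsum(nu)[:-1]))
w = np.add.reduceat(gaps, starts)   # W_i = sum of nu[i] consecutive divisors

mean_w = w.mean()                   # Wald: E W = r * E W_tilde = 2 / (2 * lam)
tail_w = (w > 1.0).mean()           # should be close to exp(-lam * 1)
```

In the thinning picture, the `gaps` array plays the role of the inter-arrival times of $\tilde N(t)$, and summing a geometric number of them reproduces the inter-arrival times of the $\frac12$-thinned process $N(t)$.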
However, it follows from Proposition~\ref{Efunk} that the distribution of the arrival times of $\tilde{N}(t)$ can be recovered. For further relations between geometric divisibility of the switching time distribution and thinned renewal processes, see \cite{Thin,ThinG}. \section{The autocovariance of the stationary switch process} \label{SBRS} \noindent The switch process is not stationary; however, there exists a stationary counterpart. In order to define it, the behavior around zero needs to be addressed. Let $\mu$ be the expected value of the switching time distribution, and let the pair $(A,B)$ of non-negative random variables and the random sign $\delta$ be mutually independent and independent of $X(t)$, with the following densities \begin{align*} f_{A,B}(a,b)&=\frac{f_T(a+b)}{\mu}, \\ f_A(t)&=f_B(t)=\frac{1-F_T(t)}{\mu}, \\ P\{\delta=1\}&=P\{\delta=-1\}=\frac{1}{2}. \end{align*} \begin{definition} \label{Defssp} Let $X_+(t)$ and $X_-(t)$ be two independent switch processes, let $((A,B),\delta )$ be as described above, and define \begin{align*} Y(t)= \begin{cases} - \delta, & -B < t <A, \\ \delta X_+(t-A), & t \geqslant A, \\ -\delta X_-(-(t+B)), & t \leqslant -B. \end{cases} \end{align*} Then $Y(t)$ is called a stationary switch process. \end{definition} \noindent The stationarity of $Y(t)$ follows from well-known results in renewal theory, in particular the key renewal theorem; see for example \cite{theoryofpoint2}. The switch process is characterized by its expected value function $E(t)$ or, equivalently, by its switching time distribution. Obviously, the stationary switch process is also characterized by the switching time distribution. It will be seen that it is also characterized by its covariance function, denoted by $C(\cdot)$. This is due to the following relation between the expected value function and the covariance function.
\begin{proposition} \label{propcovexp} Let $Y(t)$ be a stationary switch process, $E(t)$ be the expected value of the corresponding switch process, and let $\mu$ be the expected value of the switching time distribution. Then for $s>0$ \begin{align*} \mathcal{L}(C)(s)=\frac{1}{s} \left( 1-\frac{2}{\mu }\mathcal{L}(E)(s) \right). \end{align*} \end{proposition} \begin{proof} Starting with the covariance of $Y(t)$, and utilizing the symmetry \\ $(-Y(t)\vert \delta=1) \stackrel{d}{=}(Y(t)\vert \delta=-1)$, we have, for $t>0$, \begin{align*} C(t)&=E \left( E(Y(t)Y(0) \vert \delta)\right) \\ &=\frac{1}{2}\left( E(Y(t) (-\delta) \vert \delta=1) + E(Y(t) (-\delta) \vert \delta=-1 )\right) \\ &=-E(Y(t) \vert \delta=1) \\ &=-\left( \int_0^\infty E(Y(t) \vert \delta=1, A=x) f_{A \vert \delta=1}(x)dx \right)\\ &= - \left( \int_0^t E(\delta X(t-x)\vert \delta=1, A=x)f_A(x)dx + \int_t^\infty (-1) f_A(x)dx \right) \\ &= -\int_0^t E(t-x)f_A(x)dx + 1-F_A(t). \end{align*} Since $E(t-x)=0$ for $x>t$, we obtain \begin{align*} C(t)=1-F_A(t)-\left(E\ast f_A \right)(t). \end{align*} \noindent From the definition of $f_A(t)$, its Laplace transform is \begin{align*} \mathcal{L}(f_A)(s)=\frac{1-\Psi_F(s)}{s \mu}. \end{align*} Using the above expression and Proposition~\ref{LEXt}, \begin{align*} \mathcal{L}(C)(s)&= \frac{1}{s} - \frac{1}{s}\mathcal{L}(f_A)(s)-\mathcal{L}(E)(s) \mathcal{L}(f_A)(s) \\ &=\frac{1}{s}-\mathcal{L}(f_A)(s) \left(\frac{1}{s}+\frac{1}{s}\frac{1-\Psi_F(s)}{1+\Psi_F(s)} \right)\\ &=\frac{1}{s} - \mathcal{L}(f_A)(s) \left( \frac{2}{s}\frac{1}{1+\Psi_F(s)} \right) \\ &=\frac{1}{s} - \frac{1-\Psi_F(s)}{\mu s} \left( \frac{2}{s}\frac{1}{1+\Psi_F(s)} \right) \\ &=\frac{1}{s} - \frac{2}{\mu s} \left( \frac{1}{s} \frac{1-\Psi_F(s)}{1+\Psi_F(s)} \right)\\ &=\frac{1}{s}-\frac{2}{\mu s}\mathcal{L}(E)(s)=\frac{1}{s}\left( 1- \frac{2}{\mu }\mathcal{L}(E)(s) \right). \end{align*} \end{proof} The above relation between $E(t)$ and $C(t)$ is somewhat implicit.
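As an illustrative sanity check of Proposition~\ref{propcovexp} (a sketch, not part of the paper; it uses SymPy and assumes exponential switching times with rate $\lambda$), one can combine it symbolically with Proposition~\ref{LEXt}: with $\Psi_F(s)=\lambda/(\lambda+s)$ and $\mu=1/\lambda$, both $\mathcal{L}(E)$ and $\mathcal{L}(C)$ reduce to $1/(s+2\lambda)$, so $E(t)=C(t)=e^{-2\lambda t}$ in this case.

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)

# Assumed test case: Exp(lam) switching times, with
# Psi_F(s) = lam / (lam + s) and mu = 1 / lam.
psi = lam / (lam + s)

# Proposition (LEXt): L(E)(s) = (1/s) * (1 - Psi_F) / (1 + Psi_F)
LE = sp.simplify((1 - psi) / (s * (1 + psi)))

# Proposition (propcovexp): L(C)(s) = (1/s) * (1 - (2/mu) L(E)(s)), mu = 1/lam
LC = sp.simplify((1 - 2 * lam * LE) / s)
```

Both simplified transforms equal $1/(s+2\lambda)$, in agreement with the exponential example of the last section.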
By investigating the limit of $C(t)=1-F_A(t)-\left(E\ast f_A \right)(t)$ as $t$ approaches zero, it is clear that $C(0)=1$. With this remark, the relation between $E(t)$ and $C(t)$ becomes explicit in the next theorem. \begin{theorem} \label{Th2} Let $C(t)$ be the covariance of the stationary switch process, $E(t)$ be the expected value function of the switch process, and $\mu$ be the expected value of the switching time distribution. Then for $t\geqslant0$ \begin{align*} C'(t)=-\frac{2}{\mu}E(t). \end{align*} \end{theorem} \newpage \begin{proof} From Proposition~\ref{propcovexp} we have \begin{align*} \mathcal{L}(C)(s)&=\frac{1}{s}\left(1-\frac{2}{\mu }\mathcal{L}(E)(s)\right). \end{align*} Using the property of the Laplace transform that $\mathcal{L}(f')=s\mathcal{L}(f)-f(0)$ and $C(0)=1$, \begin{align*} \mathcal{L}(C)(s)&=\frac{1}{s}-\frac{2}{\mu s}\mathcal{L}(E)(s), \\ s\mathcal{L}(C)(s)-1&=-\frac{2}{\mu}\mathcal{L}(E)(s), \\ \mathcal{L}(C')(s)&=-\frac{2}{\mu}\mathcal{L}(E)(s),\\ C'(t)&=-\frac{2}{\mu}E(t). \end{align*} \end{proof} Theorem~\ref{Th2} allows us to use functional properties of the expected value of the switch process to investigate the covariance of the stationary switch process. In particular, combining Theorem~\ref{Th2} with Theorem~\ref{Th1} yields a partial characterization of the covariance functions. \begin{theorem}\label{Th3} Let $C(t)$, $t\in\mathbb{R}$, be a function symmetric around zero such that the following conditions are satisfied for all $t\in[0,\infty)$, \begin{itemize} \item[$(i)$] $C(t)\geqslant 0$, \item[$(ii)$] $C'(t)\leqslant 0$, \item[$(iii)$] $C''(t)\geqslant 0$, \item[$(iv)$] $C(0)=1$. \end{itemize} Then $C(t)$ is the covariance function of a stationary switch process with a $GD(2)$ switching time distribution. \end{theorem} \begin{proof} The result follows directly from Proposition~\ref{Efunk}, Theorem~\ref{Th2}, and the condition $C(0)=1$.
\end{proof} If the switching time distribution also belongs to $GD(2)$, then from Corollary~\ref{Corr1} there is an explicit relation between the expected value of the switch process and the switching time distribution. A similar result exists for the stationary switch process. \begin{corollary}\label{Corr3} Let $C(t)$ be the covariance of the stationary switch process and let the switching time distribution belong to $GD(2)$ with divisor $\tilde{F}$. Then \begin{align*} 1+\frac{\mu}{2} C'(t)&=\tilde{F}(t), \\ \frac{\mu}{2}C''(t)&=\tilde{f}(t). \end{align*} \end{corollary} Even if the switching time distribution does not belong to $GD(2)$, the expected value of the switching time distribution can still be obtained using Theorem~\ref{Th2}. \begin{corollary}\label{Corr4} Let $\mu$ be the expected value of the switching time distribution, $C(t)$ the covariance of the stationary switch process, and $E(t)$ the expected value function of the corresponding switch process. Then \begin{align*} \mu=2 \int_0^\infty E(u)du. \end{align*} \end{corollary} \begin{proof} From Theorem~\ref{Th2}, we have $ C'(t)=-\frac{2}{\mu}E(t)$, so for some constant $\alpha$ \begin{align*} -\frac{2}{\mu}\int_0^t E(u)du&=C(t)+\alpha. \end{align*} We obtain $\alpha=-1$ by taking $t\rightarrow 0^+$ and using $C(0)=1$. Letting $t\rightarrow\infty$ and using that $C(t)\rightarrow 0$, we then obtain \begin{align*} -\frac{2}{\mu}\int_0^\infty E(u)du&=-1, \end{align*} which is the claimed identity. \end{proof} \section{Examples}\label{examples} \noindent To illustrate the switch process and the relation between the covariance, the expected value, and the switching time distribution, three examples are presented. The first is a very simple example used to illustrate the basic concepts, the second shows the utility of Theorem~\ref{Th2}, and the third employs the conditions of Theorem~\ref{Th3} to derive the switching time distribution. \subsection*{Exponential switching time} Consider a switch process with exponential switching times with intensity $\lambda$. 
It is well known that this distribution belongs to $GD(\infty)$ and therefore also belongs to $GD(2)$. We derive the expected value function through the Laplace transform using Proposition~\ref{LEXt}. In particular, \begin{align*} \mathcal{L}(E)(s)=\frac{1}{s}\frac{1-\frac{\lambda}{\lambda+s}}{1+\frac{\lambda}{\lambda+s}}=\frac{1}{2\lambda+s}, \end{align*} which corresponds to \begin{align*} E(t)=e^{-2\lambda t}. \end{align*} We note that $E(t)$ is non-negative and decreasing for all $t\geqslant 0$, which is in agreement with the results of Theorem~\ref{Th1}. Let us now derive the covariance function of the corresponding stationary switch process using Theorem~\ref{Th2}. Clearly $\mu=\frac{1}{\lambda}$ and we have \begin{align*} C'(t)=-\frac{2}{\frac{1}{\lambda}}e^{-2\lambda t}=-2\lambda e^{-2\lambda t}. \end{align*} By solving for $C(t)$ and computing $C''(t)$, it is clear that the conditions of Theorem~\ref{Th3} are satisfied. \subsection*{Gamma switching time} Consider instead a process with gamma-distributed switching times, with parameters $\theta=2$ and $k=2$. This distribution does not belong to the class of $GD(2)$ distributions, which can be verified through Proposition~\ref{LGD}. Computing the expected value function through the Laplace transform, we have \begin{align*} \mathcal{L}(E)(s)&=\frac{1}{s} \frac{1-(1+2s)^{-2}}{1+(1+2s)^{-2}}=\frac{2+2s}{1+2s+2s^2}, \\ E(t)&=\sqrt{2} \sin \left(\frac{2t+\pi }{4} \right) e^{-\frac{t}{2}}. \end{align*} Since $E(t)$ is oscillating, condition $(ii)$ of Theorem~\ref{Th1} is not satisfied. This is not surprising, since \cite{ThinG} has shown that a count process with gamma-distributed arrival times with $k>1$ cannot be obtained as the thinning of some other count process. 
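The closed form of $E(t)$ in the gamma case can be verified symbolically. A short sketch in Python with sympy, checking both the rational form of $\mathcal{L}(E)(s)$ and that the stated $E(t)$ has exactly this transform:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Gamma(theta=2, k=2) switching times: Psi_F(s) = (1 + 2s)^(-2)
Psi = (1 + 2*s)**(-2)
LE = sp.simplify((1/s)*(1 - Psi)/(1 + Psi))
assert sp.simplify(LE - (2 + 2*s)/(1 + 2*s + 2*s**2)) == 0

# The claimed closed form E(t) = sqrt(2)*sin((2t+pi)/4)*exp(-t/2);
# expand_trig rewrites it as (sin(t/2) + cos(t/2))*exp(-t/2)
E = sp.expand_trig(sp.sqrt(2)*sp.sin((2*t + sp.pi)/4))*sp.exp(-t/2)
LE2 = sp.laplace_transform(E, t, s, noconds=True)
assert sp.simplify(LE2 - LE) == 0
```

The same pattern applies to the exponential case, where the transform $1/(2\lambda+s)$ inverts to $e^{-2\lambda t}$ by inspection.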
In the context of this paper, the divisor cannot be obtained, and therefore the connection between geometric divisibility and the functional properties of the expected value cannot be derived in a straightforward manner from Theorem~\ref{Th1}. However, the covariance function of its stationary counterpart can still be obtained through Theorem~\ref{Th2}. Since the switching time distribution is known, $\mu=k\theta=4$ in this case. Solving the differential equation with the condition $C(0)=1$, we have for $t \geqslant 0$ \begin{align*} C'(t)&=- \frac{1}{\sqrt{2}}\sin\left(\frac{2t+\pi }{4}\right) e^{-\frac{t}{2}}, \\ C(t)&=\cos\left( \frac{t}{2}\right)e^{-\frac{t}{2}}. \end{align*} This example illustrates the utility of Theorem~\ref{Th2}, even if the switching time distribution does not belong to $GD(2)$. Figure~\ref{fig1} shows a realization of the switch process with gamma switching times, together with the corresponding $E(t)$ and $C(t)$. \begin{figure}[H] \includegraphics[width=0.95 \textwidth]{SimGamma22.jpg} \caption{Sample path of the switch process generated from $T \in \Gamma(2,2)$, with $E(t)$ and $C(t)$ of its stationary counterpart.} \label{fig1}\label{figgamma} \end{figure} \subsection*{GD(2) switching time from covariance} In the previous two examples, we started with a switching time distribution and derived $E(t)$ of the switch process and $C(t)$ of the corresponding stationary switch process. Let us instead investigate whether the function \begin{align*} h(t)=\frac{2}{\pi} \arcsin \left( \frac{1}{\cosh(\frac{t}{2})} \right), \end{align*} is a valid covariance function of a stationary switch process. We start by computing the first and second derivatives of $h(t)$: \begin{align*} h'(t)&=-\frac{1}{\pi}\frac{\tanh(\frac{t}{2}) \operatorname{sech}(\frac{t}{2})}{\sqrt{1-\operatorname{sech}^2(\frac{t}{2})}}=-\frac{1}{\pi}\operatorname{sech}\left(\frac{t}{2}\right), \\ h''(t)&=\frac{1}{2 \pi} \tanh\left( \frac{t}{2}\right) \operatorname{sech}\left( \frac{t}{2}\right). 
\end{align*} From this, the conditions of Theorem~\ref{Th3} are satisfied, and the function $h(t)$ corresponds to the covariance of a stationary switch process with a $GD(2)$ switching time distribution. From Corollary~\ref{Corr4}, $\mu$ can be obtained, and combining this with Corollary~\ref{Corr3} yields the switching time divisor \begin{align*} \tilde{F}(t)&=1-\operatorname{sech}\left( \frac{t}{2}\right), \\ \tilde{f}(t)&= \frac{1}{2} \tanh\left( \frac{t}{2}\right) \operatorname{sech}\left( \frac{t}{2}\right). \end{align*} Obtaining the explicit switching time distribution through the standard method, inversion of the Laplace transform, is difficult; numerical inversion of the Laplace transform can be used instead. An alternative approach is to simulate from the divisor by using the inverse sampling method and then use the stochastic representation of $2$-geometric divisible distributions to simulate from the full distribution. \section{Applications} \label{appli} \noindent The concept of persistency is of interest in statistical physics. Persistency is related to the exceedance time distribution of a stochastic process, i.e., the time between an up-crossing of some level and the subsequent down-crossing of the same level. These, in turn, lead to Palm distributions of the point process of instants of the level crossings. Obtaining the explicit exceedance time distribution, or the Palm distribution, is a well-known difficult problem in probability theory; see \cite{LittlewoodOfford,Rice,https://doi.org/10.1111/sjos.12248}. Therefore, a \textit{persistency coefficient}, which describes the tail behavior of the exceedance time distribution, is often sought instead. However, even obtaining exact and analytical persistency coefficients is still to a large extent an open problem. For example, the persistency coefficient was found explicitly for diffusion processes in dimension two by \cite{Exact2018}, and then only for the zero-level crossings. 
Therefore, approximation methods are commonly used for this purpose. One of them is the independent interval approximation \textit{(IIA)}; for an extensive overview see, for example, \cite{IIA}. In essence, the IIA approach works as follows. For the stochastic process of interest, define the clipped process by computing the sign of the process. The time intervals on which the clipped process is one or minus one will not be independent. However, for simplicity, one can assume that the dependency is negligible and treat them as independent. This process can be viewed as a stationary switch process, and the covariance can then be used to infer information about the switching time distribution. The covariance of the clipped process is directly related to the covariance of the process of interest and has an explicit form if the underlying process is Gaussian. The covariance function of the clipped process is then matched to the covariance of the stationary switch process, which is then used to approximate the persistency coefficient. Two fundamental questions related to this approach should be posed. The first is whether the approximated distribution is indeed a valid probability distribution, i.e., whether the IIA is mathematically sound. The second is whether an explicit form of this distribution can be obtained. The covariance function of a clipped Gaussian process is \begin{align*} C(t)=\frac{2}{\pi} \arcsin\left(r(t)\right), \end{align*} where $r(t)$ is the covariance function of the process which is clipped, i.e., our process of interest. By computing $C''(t)$, conditions on $r(t)$ can be derived under which $C(t)$ corresponds to a $GD(2)$ switching time distribution. We have the following proposition, which follows directly from Theorem~\ref{Th3}. 
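Condition $(iii)$ in the proposition below comes from differentiating $C(t)=\frac{2}{\pi}\arcsin(r(t))$ twice: one finds $C''(t)=\frac{2}{\pi}\,(1-r^2)^{-3/2}\left(r''(1-r^2)+r\,(r')^2\right)$, so $C''(t)\geqslant 0$ is equivalent to $r''\geqslant -(r')^2 r/(1-r^2)$. A sympy sketch of this computation, together with a numerical spot check that $r(t)=\operatorname{sech}(t/2)$ (the two-dimensional diffusion case discussed below) satisfies the inequality:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)

# Second derivative of C(t) = (2/pi) * arcsin(r(t))
C2 = sp.diff(2/sp.pi*sp.asin(r), t, 2)
target = (2/sp.pi)*(1 - r**2)**sp.Rational(-3, 2) \
         * (sp.diff(r, t, 2)*(1 - r**2) + r*sp.diff(r, t)**2)
assert sp.simplify(C2 - target) == 0

# Spot check of condition (iii) for r(t) = sech(t/2):
# r'' + (r')^2 r / (1 - r^2) should be non-negative
rs = sp.sech(t/2)
cond = sp.diff(rs, t, 2) + sp.diff(rs, t)**2*rs/(1 - rs**2)
for tv in [sp.Rational(1, 2), 1, 3]:
    assert float(cond.subs(t, tv)) > 0
```

This is only an algebraic sanity check of the derivative computation, not part of the proof.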
\newpage \begin{proposition}\label{propapli} Let $r(t)$ be the covariance function of a zero-mean Gaussian process such that the following conditions are satisfied for $t\geqslant0$ \begin{itemize} \item[(i)] $r(t)\geqslant0,$ \item[(ii)] $r'(t)\leqslant0,$ \item[(iii)] $r''(t)\geqslant-(r'(t))^2 r(t) / (1-r(t)^2).$ \end{itemize} Then the approximated switching time distribution, using the IIA method, belongs to the class of $GD(2)$ distributions. \end{proposition} This proposition addresses the questions posed above and provides a partial answer to when the IIA method yields a valid probability distribution. The conditions of Proposition~\ref{propapli} are easy to verify and are satisfied by a large class of functions. Consider the diffusion process in two dimensions. The covariance of the clipped process has the following form \begin{align*} C(t)=\frac{2}{\pi} \arcsin \left( \frac{1}{\cosh(\frac{t}{2})} \right). \end{align*} From the last example in Section~\ref{examples}, this corresponds to a $GD(2)$ switching time distribution. Since the distribution of the divisor is known, it is possible to infer more information than only the persistency coefficient, such as the full approximated distribution and its properties. This might require some numerical methods, but it involves fewer computations than, for example, simulating trajectories and then estimating the exceedance times from these simulations. \section{Conclusions} \noindent Characterizing which functions correspond to the expected value of the switch process is a difficult problem. By exploring the relationship between the functional properties of the expected value and the class of $2$-geometric divisible distributions, a partial answer to the problem is given. An explicit relation between the expected value function of the switch process and the covariance function of the stationary switch process is presented. 
It leads to corresponding relations between the $2$-geometric divisible switching time distributions and the covariance of the stationary switch process. It enables the recovery of the switching time distribution from the covariance function under some conditions which are easy to verify. This constitutes a partial solution to the well-known problem of obtaining the switching time distribution from the covariance function of a continuous-time binary process. The complete answers to both of the above-mentioned problems are still unknown. However, the partial answers given in this work provide an explicit answer for an important class of functions and switching time distributions. \section*{Acknowledgment} \noindent The author is very grateful for many fruitful discussions with Krzysztof Podgórski, helpful inputs on geometric divisibility from Tomasz Kozubowski, and the inspiration and direction from Georg Lindgren. The author is also thankful for the helpful comments and support from Yvette Baurne and Joel Danielsson. Financial support of the Swedish Research Council (VR) Grant DNR: 2020-05168 is acknowledged. \newpage
\section{Introduction} The last decade has witnessed a rapid development of the theory of planar semimodular lattices; see the bibliographic section in the present paper and see many additional papers referenced in the book chapter Cz\'edli and Gr\"atzer~\cite{czgggltsta}. Also, see \cite{czgggltsta} for a survey and for all concepts not defined here. Since every planar semimodular lattice can be obtained from a slim semimodular lattice, particularly intensive attention has been paid to slim (hence necessarily planar) semimodular lattices; definitions will be given later. \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig02}} \caption{A \pseq{} from $\inp$ to $\inq$ in a planar semimodular lattice}\label{figpsa} \end{figure} \subsection*{First target: the swing lemma} Semimodularity is \emph{upper semimodularity}, that is, a lattice is \emph{semimodular} if the implication $x\preceq y\Rightarrow x\vee z\preceq y\vee z$ holds for all of its elements $x$, $y$ and $z$. A lattice $L$ is \emph{planar} if it has a planar Hasse-diagram. Although Cz\'edli~\cite{czgdiagrectext}, which is a long paper, assigns a unique planar diagram to an arbitrary planar semimodular lattice, we will not rely on \cite{czgdiagrectext} in the present elementary paper; we always assume that a planar diagram of our lattice is \emph{fixed} somehow. (Some concepts, like ``left'' or ``eye'', will depend on the choice of the diagram, but this fact will not cause any trouble.) \emph{Edges} $\inp=[a,b]$ of (the diagram of) $L$ are also called \emph{prime intervals}. For a prime interval $\inp=[a,b]$ of $L$, we denote $a$ and $b$ by $0_\inp$ and $1_\inp$, respectively. It follows from semimodularity that the edges divide the area of the diagram into quadrangles, which we call \emph{$4$-cells}; more details will be given later. The least congruence collapsing (the two elements of) a prime interval $\inp$ is denoted by $\con(\inp)$ or $\con(0_\inp,1_\inp)$. 
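For a finite lattice given by its operation tables, $\con(\inp)$ can be computed mechanically: starting from the pair $\pair{0_\inp}{1_\inp}$, one closes the relation under the translations $x\mapsto x\wedge z$ and $x\mapsto x\vee z$ and under transitivity. The following Python sketch (the helper \texttt{principal\_congruence} is ours, introduced only for illustration) demonstrates this on $M_3$, which is a simple lattice, so any prime interval generates the full congruence:

```python
# Principal congruence con(a, b) in a finite lattice, computed as the
# smallest equivalence containing (a, b) that is closed under the
# translations x -> x meet z and x -> x join z (brute-force fixpoint).
def principal_congruence(elems, meet, join, a, b):
    parent = {x: x for x in elems}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
            return True
        return False

    union(a, b)
    changed = True
    while changed:
        changed = False
        for x in elems:
            for y in elems:
                if find(x) == find(y):
                    for z in elems:
                        changed |= union(meet(x, z), meet(y, z))
                        changed |= union(join(x, z), join(y, z))
    return {x: find(x) for x in elems}

# M_3, the diamond: 0 < a, b, c < 1 with a, b, c pairwise incomparable
elems = ['0', 'a', 'b', 'c', '1']
def leq(x, y): return x == y or x == '0' or y == '1'
def meet(x, y): return x if leq(x, y) else (y if leq(y, x) else '0')
def join(x, y): return y if leq(x, y) else (x if leq(y, x) else '1')

blocks = principal_congruence(elems, meet, join, '0', 'a')
assert len(set(blocks.values())) == 1  # M_3 is simple: con([0,a]) collapses all
```

Such brute-force computations are feasible only for small lattices; the swing lemma below replaces them by a purely geometric criterion on the planar diagram.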
In order to characterize whether $\con(\inp)$ collapses another prime interval $\inq$ or not, we need the following definition. \begin{definition}\label{defSsd} Let $\inr$ and $\ins$ be distinct prime intervals of a planar semimodular lattice such that they belong to the same $4$-cell $S$. \begin{enumeratei} \item\label{defSsda} If $\inr$ and $\ins$ are opposite sides of $S$ then $\inr$ is \emph{cell-perspective} to $\ins$. \item\label{defSsdb} If $1_\inr=1_\ins$, $1_\inr$ has at least three lower covers, and $0_\ins$ is neither the leftmost, nor the rightmost lower cover of $1_\inr$, then $\inr$ \emph{swings} to $\ins$. \item\label{defSsdc} If $0_\inr=0_\ins$, $0_\inr$ has at least three covers, and $1_\ins$ is neither the leftmost, nor the rightmost cover of $0_\inr$, then $\inr$ \emph{tilts} to $\ins$. \end{enumeratei} For $n\in\set{0,1,2,\dots}$, a sequence \begin{equation} \vec{\inr}: \inr_0,\inr_1,\dots,\inr_n \label{eqsseqpseq} \end{equation} of prime intervals is called an \emph{\pseq} if for each $i\in\set{1,\dots,n}$, $\inr_{i-1}$ is cell-perspective to or swings to or tilts to $\inr_i$. (The acronym ``SL'' comes from ``swing lemma''.) In $\vec{\inr}$, $\inr_0$ and $\inr_n$ play a distinguished role, and we often say that $\vec{\inr}$ is an \emph{\pseq{} from $\inr_0$ to $\inr_n$}. It is \emph{cyclic} if $\inr_0=\inr_{n}$. \end{definition} \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig05}} \caption{Cyclic \spseq{}s in $M_3$ and $M_6$}\label{figcyclM6} \end{figure} While \eqref{defSsda} describes a symmetric relation, \eqref{defSsdb} and \eqref{defSsdc} do not. To see some examples, consider the planar semimodular lattice in Figure~\ref{figpsa}. Then $\inr_{11}$ and $\inr_{12}$ are cell-perspective to each other, $\inr_{2}$ and $\inr_{3}$ swing to each other, and so do $\inr_{16}$ and $\inr_{17}$; $\inr_{8}$ tilts to $\inr_{9}$, and $\inr_{6}$ swings to $\inr_{7}$. 
However, $\inr_{9}$ does not tilt to $\inr_{8}$ and $\inr_{7}$ does not swing to $\inr_{6}$. The sequence $\inr_{0}$, $\inr_{1}$, \dots, $\inr_{24}$ is an \pseq{} from $\inp$ to $\inq$, and it remains an \pseq{} if we omit $\inr_{7}$ and~$\inr_{8}$. In Figure~\ref{figcyclM6}, the sequence $\inr_0$, $\inr_1$, \dots, $\inr_{14}=\inr_0$ is a cyclic \pseq{} in $M_6$. \begin{remark}\label{remswtlslopes} If the diagram of $L$ belongs to the class $\mathcal C_1$ defined in Cz\'edli~\cite{czgdiagrectext}, then \eqref{defSsdb} and \eqref{defSsdc} from Definition~\ref{defSsd} can be formulated in the following, more visual way; see \cite{czgdiagrectext}. Namely, for \emph{distinct} edges $\inr$ and $\ins$ of the \emph{same} 4-cell, \begin{enumerate} \item[(ii)$'$] $\inr$ \emph{swings} to $\ins$ if $1_\inr=1_\ins$ and the slope of $\ins$ is neither $45^\circ$, nor $135^\circ$. \item[(iii)$'$] $\inr$ \emph{tilts} to $\ins$ if $0_\inr=0_\ins$ and the slope of $\ins$ is neither $45^\circ$, nor $135^\circ$. \end{enumerate} \end{remark} Note that the diagrams in this paper belong to $\mathcal C_2$, which is a subclass of $\mathcal C_1$; the reader may want (but does not need) to see \cite{czgdiagrectext} for details. Note also that, by \cite[Observation 6.2]{czgdiagrectext}, the condition that ``the slope of $\ins$ is neither $45^\circ$, nor $135^\circ$'' above is equivalent to the condition that ``the slope of $\ins$ is strictly between $45^\circ$ and $135^\circ$''. The following result was proved in Cz\'edli and Gr\"atzer~\cite{czgggswing}. \begin{gslemma}[Cz\'edli and Gr\"atzer~\cite{czgggswing}]\label{genswinglemma} Let $L$ be a planar semimodular lattice, and let $\inp$ and $\inq$ be prime intervals of $L$. Then $\pair{0_\inq}{1_\inq}\in\con(\inp)$ if and only if there is an \pseq{} from $\inp$ to $\inq$. \end{gslemma} For a bit stronger but more technical variant of the swing lemma, see Theorem~\ref{thmstrongswinglemma}. 
Although the proof in Gr\"atzer and Cz\'edli~\cite{czgggswing} is short, it relies on a particular case, which we will call \emph{slim swing lemma}; see Section~\ref{sectionslimsl}. The slim swing lemma is due to Gr\"atzer~\cite{swinglemma} and there is another proof in Cz\'edli~\cite{czgdiagrectext}, but both these papers give long and complicated proofs. Furthermore, the proof in \cite{czgggswing} uses a lemma from Cz\'edli~\cite{czgrepres} that needed a three-page long proof. So, if \cite{swinglemma} (or the relevant part of \cite{czgdiagrectext}) and the three pages from \cite{czgrepres} are also counted, the proof of the swing lemma is quite long. Our main goal is to give a much shorter proof. \subsection*{Second target: the Swing lattice game} Section~\ref{sectiongame} describes our online game called \emph{Swing lattice game}. Its purpose is to increase the popularity of lattice theory in an entertaining way. Besides the swing lemma, the game is also motivated by mechanical pinball games with flippers. A computer program realizing the game is available from the authors websites. Note that the game has a screen saver mode. Another motivation for the Swing lattice game is that this paper is devoted to Professor Emeritus B\'ela Cs\'ak\'any, who is not only a highly appreciated algebraist and the scientific father or grandfather of almost all algebraists in Szeged, but he is interested in mathematical games. This interest is witnessed by, say, Cs\'ak\'any~\cite{csbhun} and Cs\'ak\'any and Juh\'asz~\cite{csbjr}. \color{black} \section{Preliminaries and a survey} Besides collecting some known facts that will be needed in our proof, the majority of this section gives a restricted survey on planar semimodular lattices. For a more extensive survey, the reader can resort to Cz\'edli and Gr\"atzer~\cite{czgggltsta}. A lattice $L$ is \emph{slim} if $J(L)$, the poset of join-irreducible elements of $L$, contains no 3-element antichain. 
By convention, both slim lattices and planar lattices are \emph{finite} by definition. By a \emph{diamond} we mean an $M_3$ (sub)lattice; see the left of Figure~\ref{figcyclM6}. We know from Cz\'edli and Gr\"atzer~\cite[Lemma 3-4.1]{czgggltsta} that slimness implies planarity. Hence, we will drop ``planar'' from ``slim planar semimodular''. A sublattice $S$ of a lattice $L$ is a \emph{cover-preserving sublattice} if for any $a,b\in S$, $a\prec_S b$ implies that $a\prec_L b$. By Cz\'edli and Gr\"atzer~\cite[Thm.\ 3-4.3]{czgggltsta} or, originally, by Cz\'edli and Schmidt~\cite{czgschtJH} and Gr\"atzer and Knapp~\cite{gratzerknapp1}, a planar semimodular lattice is slim iff it contains no diamond iff it contains no cover-preserving diamond. For example, by Cz\'edli and Gr\"atzer~\cite[Theorem 3-4.3]{czgggltsta} or by Proposition~\ref{propczsdB}, the lattice in Figure~\ref{figslima} is a slim semimodular lattice. Also, if we omit the four black-filled elements from the planar semimodular lattice given in Figure~\ref{figpsa}, then we obtain a slim semimodular lattice. \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig01}} \caption{An \sseq{} from $\inp$ to $\inq$ in a slim semimodular (actually, a slim rectangular) lattice}\label{figslima} \end{figure} In (the fixed planar diagram of) a planar semimodular lattice $L$, let $a<b$ but $a\nprec b$. If $C_1$ and $C_2$ are maximal chains in the interval $[a,b]$ such that $C_1\cap C_2=\set{a,b}$ and every element of $C_2\setminus\set{a,b}$ is on the right of $C_1$, then the elements of $[a,b]$ that are simultaneously on the right of $C_1$ and on the left of $C_2$ form a \emph{region} of (the diagram of) $L$. Note that $C_1\cup C_2$ is a subset of this region. For example, the elements belonging to the grey area in the second lattice of Figure~\ref{figindstep} form a region denoted by $R$. 
We know from Kelly and Rival~\cite[Prop.\ 1.4 and Lemma 1.5]{kellyrival} that, in (the fixed planar diagram of) a planar lattice, \begin{equation} \parbox{6.4cm}{every interval is a region and every region is a cover-preserving sublattice.} \label{eqtxtKellyRival} \end{equation} If we drop the condition $C_1\cap C_2=\set{a,b}$ above, then we obtain a union (actually, a so-called glued sum) of regions, which is clearly still a sublattice. More precisely, for elements $a<b$ in a planar lattice $L$, \begin{equation} \parbox{7.6cm}{if $C_1$ and $C_2$ are maximal chains in $[a,b]$ such that every element of $C_2$ is on the right of $C_1$, then $\{x\in [a,b]: x$ is on the right of $C_1$ and on the left of $C_2\}$ is a cover-preserving sublattice of $L$.} \label{eqtxtmltKR} \end{equation} For more about planar lattice diagrams (of planar semimodular lattices), the reader may but need not look into Kelly and Rival~\cite{kellyrival} (or Cz\'edli and Gr\"atzer \cite{czgggltsta}). Minimal regions are called \emph{cells}. For example, the grey area in Figure~\ref{figslima} and that in the first lattice of Figure~\ref{figindstep} are cells; actually, they are \emph{$4$-cells} since they are formed by four vertices and four edges. In (the planar diagram of) a planar semimodular lattice, every cell is a 4-cell; see Gr\"atzer and Knapp~\cite[Lemma 4]{gratzerknapp1}. Hence, by Cz\'edli and Schmidt~\cite[Lemma 13]{czgschvisual}, \begin{equation} \parbox{9 cm}{If $x$ and $y$ are neighboring lower covers of an element $z$ in a planar semimodular lattice, then $\set{x\wedge y, x, y, z}$ is a 4-cell.} \label{eqtxttwcodhBtn} \end{equation} A 4-cell can be turned into a diamond by adding a new element into its interior. The new element is called an \emph{eye} and we refer to this step as \emph{adding an eye}. Note that after adding an eye, one ``old'' 4-cell is replaced with two new 4-cells. 
We know from Cz\'edli and Gr\"atzer~\cite[Cor.\ 3-4.10]{czgggltsta} that \begin{equation} \parbox{8.5cm}{ every planar semimodular $L$ lattice is obtained from a slim semimodular lattice $L_0$ by adding eyes, one by one.} \label{eqtxtbddyS} \end{equation} Note that $L_0$ is a sublattice of $L$. Although $L_0$ is not unique as a sublattice, it is unique up to isomorphism; see \cite[Lemma 3-4.8]{czgggltsta}. We call $L_0$ the \emph{full slimming} of $L$, while $L$ is an \emph{antislimming} of $L_0$. Note that the full slimming of $L$ can be obtained from $L$ by omitting all eyes. For example, the full slimming $L_0$ of the planar semimodular lattice $L$ given in Figure~\ref{figpsa} is obtained by omitting the four black-filled elements. Conversely, we obtain $L$ from $L_0$ by adding eyes, four times. Based on, say, Gr\"atzer and Knapp~\cite[Lemma 8]{gratzerknapp1}, eyes are easy to recognize: an element $x$ of a planar semimodular lattice is an eye if and only if $x$ is doubly (that is, both meet and join) irreducible, its unique lower cover, denoted by $\dstar x$, has at least three covers, and $x$ is neither the leftmost, nor the rightmost cover of $\dstar x$. \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig03}} \caption{Inserting a fork}\label{figaddfork} \end{figure} \begin{definition}\label{defstrswtlt} Let $\inr$ and $\ins$ distinct edges of the \emph{same} 4-cell in (the planar diagram of) a planar semimodular lattice $L$, and let $\Eye(L)$ denote the set of eyes of $L$. \begin{enumerate} \item[(ii)$'$] $\inr$ \emph{strongly swings} to $\ins$ if $\inr$ swings to $\ins$ and, in addition, the implication $0_\inr\in \Eye(L) \Longrightarrow 0_\ins\in\Eye(L)$ holds. \end{enumerate} The sequence $\vec\inr$ in \eqref{eqsseqpseq} will be called an \spseq{} if for each $i\in\set{1,\dots,n}$, $\inr_{i-1}$ is cell-perspective to or tilts to or strongly swings to $\inr_i$. (The acronym ``SSL'' comes from ``strong swing lemma''.) 
\end{definition} In a planar semimodular lattice, \begin{equation} \text{every \spseq{} is an \pseq{},} \label{eqtxtspseqpseq} \end{equation} but not conversely. For example, in Figure~\ref{figpsa}, the two-element sequence $\inr_{18}$, $[x,y]$ is an \pseq{} but not an \spseq. Now, we are in the position to formulate the following theorem. By \eqref{eqtxtspseqpseq}, it implies Lemma~\ref{genswinglemma}, the swing lemma. \begin{strswlemma}[Cz\'edli and Gr\"atzer~\cite{czgggswing}]\label{thmstrongswinglemma} If $L$ is a planar semimodular lattice and $\inp$ and $\inq$ are prime intervals of $L$, then the following two implications hold. \begin{enumeratei} \item\label{thmstrongswinglemmaa} If there exists an \pseq{} from $\inp$ to $\inq$ $($in particular, if there is an \spseq{} from $\inp$ to $\inq)$, then $\pair{0_\inq}{1_\inq}\in\con(\inp)$. \item\label{thmstrongswinglemmab} Conversely, if $\pair{0_\inq}{1_\inq}\in\con(\inp)$, then there exists an \spseq{} from $\inp$ to $\inq$. \end{enumeratei} \end{strswlemma} By \eqref{eqtxtbddyS}, in order to have a satisfactory insight into planar semimodular lattices, it suffices to describe the slim ones. In order to do so, we need the following concepts. Based on Cz\'edli and Schmidt~\cite{czgschvisual}, Figure~\ref{figaddfork} visualizes how we \emph{insert a fork} into a 4-cell $S$ of a slim semimodular lattice $L$ in order to obtain a new slim semimodular lattice $L'$. First, we add a new element $s$ into the interior of $S$. Next, we add two lower covers of $s$ that will be on the lower boundary of $S$ as indicated in the figure. Finally, we do a series of steps: as long as there is a chain $u\prec v\prec w$ such that $T=\set{x=z\wedge u, z, u, w=z\vee u}$ is a 4-cell in the original $L$ and $x\prec z$ at the present stage, we insert a new element $y$ such that $x\prec y\prec z$ and $y\prec v$; see the right of the figure. 
The new elements of $L'$, that is, the elements of $L'\setminus L$, are the black-filled ones in Figure~\ref{figaddfork}. A doubly irreducible element $x$ on the boundary of a slim semimodular lattice is called a \emph{corner} if it has a unique upper cover $\ustar x$ and a unique lower cover $\dstar x$, $\ustar x$ covers exactly two elements, and $\dstar x$ is covered by exactly two elements. For example, after omitting the black-filled elements from Figure~\ref{figpsa}, there are exactly two corners, $u$ and $v$. Note that there is no corner in the slim semimodular lattice given by Figure~\ref{figslima}. A \emph{grid} is the (usual diagram of the) direct product of two finite non-singleton chains. \begin{proposition}[Cz\'edli and Schmidt~\cite{czgschvisual}]\label{propczsdB} Every slim semimodular lattice with at least three elements can be obtained from a grid such that \begin{enumeratei} \item\label{propczsdBa} first we add finitely many forks one by one, \item\label{propczsdBb} and then we remove corners, one by one, finitely many times. \end{enumeratei} Furthermore, all lattices obtained in this way are slim and semimodular. \end{proposition} Note that by Cz\'edli and Schmidt~\cite[Prop.\ 2.3]{czgschslim2}, the lattices we obtain by \eqref{propczsdBa} but without \eqref{propczsdBb} are exactly the \emph{slim rectangular lattices} introduced by Gr\"atzer and Knapp~\cite{gratzerknapp3}; see Figure~\ref{figslima} for an example. We can add eyes to these lattices; what we obtain in this way are the so-called \emph{rectangular lattices}; see~\cite[Prop.\ 2.3]{czgschslim2} and Gr\"atzer and Knapp~\cite{gratzerknapp3}. \section{Slim swing lemma}\label{sectionslimsl} The swing lemma was first stated and proved only for slim semimodular lattices; to make a terminological distinction, we will refer to that version as the ``slim swing lemma''. 
\begin{definition} The sequence $\vec\inr$ from \eqref{eqsseqpseq} is an \emph{\sseq} if for each $i\in\set{1,\dots,n}$, $\inr_{i-1}$ is cell-perspective to or swings to $\inr_i$. \end{definition} For example, the edges $\inr_0$, $\inr_1$, \dots, $\inr_{16}$ in Figure~\ref{figslima} form an \sseq. In a planar semimodular lattice, every \sseq{} is an \pseq{} but, in general, not conversely. Since every element of a slim semimodular lattice has at most two covers by Gr\"atzer and Knapp~\cite[Lemma 8]{gratzerknapp1}, tilts are impossible in \emph{slim} semimodular lattices. That is, \begin{equation} \parbox{8.2cm}{In a \emph{slim} semimodular lattice, \pseq{}s, \spseq{}s, and \sseq{}s are exactly the same.} \label{eqtxthgmnB} \end{equation} Therefore, the following statement is a particular case of Lemma~\ref{genswinglemma}. \begin{sslemma}[Gr\"atzer~\cite{swinglemma}]\label{specswinglemma} Let $L$ be a slim semimodular lattice, and let $\inp$ and $\inq$ be prime intervals of $L$. Then $\pair{0_\inq}{1_\inq}\in\con(\inp)$ if and only if there is an \sseq{} from $\inp$ to $\inq$. \end{sslemma} Note that Gr\"atzer~\cite{swinglemma} states this lemma in another way. In order to see that our version implies his version trivially, two easy observations will be given below. For prime intervals $\inp$ and $\inq$, if $1_\inp\vee 0_\inq= 1_\inq$ and $1_\inp\wedge 0_\inq=0_\inp$, then $\inp$ is \emph{up-perspective} to $\inq$ and $\inq$ is \emph{down-perspective} to $\inp$. \emph{Perspectivity} is the disjunction of up-perspectivity and down-perspectivity. As an important property of \sseq s, we claim that, for prime intervals $\inp$ and $\inq$ in a finite semimodular lattice $L$, \begin{equation} \parbox{9cm}{If $\inp$ is up-perspective to $\inq$, then there is an \sseq{} $\vec\inr=\tuple{\inr_0,\dots,\inr_n}$ from $\inp$ to $\inq$ such that $\inr_{i-1}$ is upward cell-perspective to $\inr_i$ for all $i\in\set{1,\dots, n}$. 
Conversely, if there is such an $\vec\inr$, then $\inp$ is up-perspective to $\inq$.} \label{eqtxbsLsP} \end{equation} The second part of \eqref{eqtxbsLsP} is trivial. In order to see its first part, assume that $\inp$ is up-perspective to $\inq$, and pick a maximal chain $0_\inp=x_0\prec x_1\prec\dots\prec x_n=0_\inq$. For $i\in\set{1,\dots, n}$, $\set{x_{i-1}, x_i, 1_\inp\vee x_{i-1},1_\inp\vee {x_i}}$ is a covering square by semimodularity. (For more details, if necessary, see the explanation around Figure 1 in Cz\'edli and Schmidt~\cite{czgschthowtoderive}.) Covering squares are 4-cells by Cz\'edli and Gr\"atzer~\cite[Thm.\ 3-4.3(v)]{czgggltsta}, whence there is an \sseq{} $\vec\inr$ from $\inp$ to $\inq$ with the required property. This proves \eqref{eqtxbsLsP}. It is clear from Cz\'edli and Schmidt~\cite[Lemma 2.8]{czgschtJH}, and it can also be derived from Proposition~\ref{propczsdB} by induction, that \begin{equation} \parbox{8.5cm}{For a \emph{repetition-free} \sseq{} $\vec\inr$ from \eqref{eqsseqpseq} in a \emph{slim} semimodular lattice, if $\inr_{i-1}$ is up-perspective to $\inr_i$, then $\inr_{j-1}$ is up-perspective to $\inr_j$ for all $j\in\set{1,2,\dots, i}$.} \label{eqtxtlttEcS} \end{equation} Now it is clear that, by \eqref{eqtxbsLsP} and \eqref{eqtxtlttEcS}, Lemma~\ref{specswinglemma} and its original version in Gr\"atzer~\cite{swinglemma} mutually imply each other. \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig04}} \caption{Illustration for \eqref{eqtxtindstp}}\label{figindstep} \end{figure} \section{The short proof} \begin{proof}[Proof of Theorem~\ref{thmstrongswinglemma}] Part \eqref{thmstrongswinglemmaa} follows easily from known results and \eqref{eqtxtspseqpseq}. 
For example, it follows from Cz\'edli~\cite[Theorems 3.7 and 5.5 (or 7.3)]{czgtrajcolor} and Cz\'edli~\cite[Thm.\ 2.2, Cor.\ 2.3, and Prop.\ 5.10]{czgdiagrectext}; however, the reader will certainly find it more convenient to observe that both $\con(w_\ell,t)$ and $\con(w_r,t)$ collapse the pairs $\pair{s_i}{t}$ of $\sf S^{(n)}_7$ in \cite[Fig.\ 1]{czgtrajcolor} by routine calculations. Before proving part \eqref{thmstrongswinglemmab}, some preparation is needed. For $n\in\set{3,4,5,\dots}$, the $(n+2)$-element modular lattice of length 2 is denoted by $M_n$. For example, $M_3$ and $M_6$ are given in Figure~\ref{figcyclM6}. As this figure suggests, it is easy to see that, for $n\in\set{3,4,5,\dots}$, \begin{equation} \text{$M_n$ has a cyclic \spseq{} that contains all edges.} \label{eqtxtM6} \end{equation} For a prime interval $\inr$ and elements $u\leq v$ of a planar semimodular lattice $L$, we will say that $\inr\,$ \emph{\spspan s} (respectively, $\inr\,$ \emph{\sspan s}) the interval $[u,v]$ if there is an $n\in\set{0,1,2,\dots}$ and there exists a maximal chain $u=w_0\prec w_1\prec \dots \prec w_n=v$ in $[u,v]$ such that, for each $i\in\set{1,\dots,n}$, there is an \spseq{} (respectively, an \sseq{}) from $\inr$ to $[w_{i-1},w_i]$. First, we focus on \sspan{}ning. We claim the following; see Figure~\ref{figindstep}. \begin{equation} \parbox{8.5cm}{If $a,b,c$ are elements of a \emph{slim} semimodular lattice $K$ such that $a\prec b$, then $[a,b]$ \sspan s $[a\wedge c, b\wedge c]$.} \label{eqtxtindstp} \end{equation} We prove \eqref{eqtxtindstp} by induction on $|K|$. The base of the induction, $|K|\leq 4$, is obvious. We can assume that $c\leq b$, because otherwise we can replace $c$ with $b\wedge c$. Actually, we assume that $c<b$ but $c\nleq a$, since otherwise the satisfaction of \eqref{eqtxtindstp} is trivial. Pick an element $d$ such that $c\leq d\prec b$; see Figure~\ref{figindstep}.
Since $c\nleq a$ and $a\prec b$, $a$ and $d$ are distinct lower covers of $b$. By left-right symmetry, we assume that $a$ is to the left of $d$. There are two cases to consider. First, assume that among the lower covers of $b$, $a$ is immediately to the left of $d$; see the first lattice of Figure~\ref{figindstep}. Let $a'=a\wedge d$. By \eqref{eqtxttwcodhBtn}, $\set{a',a,d,b}$ is a 4-cell. Hence, there is a ``one-step'' \sseq{} from $[a,b]$ to $[a',d]$, which consists of a downwards cell-perspectivity. Observe that $a\wedge c=a\wedge (d\wedge c)=(a\wedge d)\wedge c=a'\wedge c$ and the principal ideal $\ideal d$ does not contain $a$. Hence, $|\ideal d|<|K|$. Thus, the induction hypothesis yields that $[a',d]$ \sspan s $[a'\wedge c, c]=[a\wedge c, b\wedge c]$. This is witnessed by some \sseq s; combining them with the one-step \sseq{} mentioned above, we conclude that $[a,b]$ \sspan s $[a\wedge c, b\wedge c]$, as required. Second, assume that there is a lower cover of $b$ strictly to the right of $a$ and to the left of $d$. Let $e$ denote the rightmost one of these lower covers and let $a':=e\wedge d$; see the second lattice in Figure~\ref{figindstep}. Since $\set{a',e,d,b}$ is a 4-cell by \eqref{eqtxttwcodhBtn}, there is a one-step \sseq{} from $[e,b]$ to $[a',d]$. Combining it with a sequence of swings from $[a,b]$ to $[e,b]$, we obtain a \sseq{} from $[a,b]$ to $[a',d]$. Applying the induction hypothesis to $\ideal d$, we obtain that $[a',d]$ \sspan s $[a'\wedge c, d\wedge c]= [a'\wedge c, b\wedge c]$. Taking the above-mentioned \sseq{} into account, it follows that $[a,b]$ \sspan s $[a'\wedge c, c]= [a'\wedge c, b\wedge c]$. We know from Cz\'edli and Gr\"atzer~\cite[Exercise 3.4]{czgggltsta} and it also follows from \eqref{eqtxtKellyRival} that $a\wedge d\leq e\wedge d=a'$. Hence, $a\wedge c= a\wedge d\wedge c \leq a'\wedge c$. In the interval $[a\wedge c,b]$, let $C_2$ be a maximal chain such that $\set{a'\wedge c, a', e}\subseteq C_2$.
The elements of $[a\wedge c,b]$ on the left of $C_2$ form a cover-preserving sublattice $L_1$, because \eqref{eqtxtmltKR} applies to the leftmost maximal chain of $[a\wedge c,b]$ and $C_2$. Since $a$ is on the left of $e$, $a\in L_1$ by Kelly and Rival~\cite[Prop.\ 1.6]{kellyrival}. Pick a maximal chain $C_1$ in $L_1$ such that $a\in C_1$, and let $R$ denote the cover-preserving sublattice of $L_1$ determined by $C_1$ and $C_2$ in the sense of \eqref{eqtxtmltKR}. Since $d$ is strictly on the right of $e\in C_2$, $d\notin R$ by Kelly and Rival~\cite[Prop.\ 1.6]{kellyrival}. Thus, $|R|<|K|$. Hence, the induction hypothesis applies to $\tuple{R, a, b, a'\wedge c}$ in the role of $\tuple{K, a, b, c}$, and we obtain that $[a,b]$ \sspan s $[a\wedge c,a'\wedge c]$ in $R$. Since $R$ is a cover-preserving sublattice and also a region, the same holds in $K$. Therefore, since $[a,b]$ \sspan s both $[a\wedge c,a'\wedge c]$ and $[a'\wedge c,b\wedge c]$, it \sspan s $[a\wedge c,b\wedge c]$. This proves \eqref{eqtxtindstp}. Next, we claim that \begin{equation} \parbox{8.2cm}{If $a,b,c$ are elements of a \emph{planar} semimodular lattice $L$ such that $a\prec b$, then $[a,b]$ \spspan s $[a\wedge c, b\wedge c]$.} \label{eqtxtindPln} \end{equation} By \eqref{eqtxthgmnB}, \eqref{eqtxtindPln} generalizes \eqref{eqtxtindstp}. In order to prove \eqref{eqtxtindPln}, let $K$ denote the full slimming of $L$. Its elements and edges will be called \emph{old}, while the rest of the elements and edges are \emph{new}; this terminology is explained by \eqref{eqtxtbddyS} and the paragraph following it. The new elements are exactly the eyes. As in the proof of \eqref{eqtxtindstp}, we can assume that $c<b$ but $c\nleq a$. First, we deal only with the case where $[a,b]$ is an \emph{old edge}.
Since (the segments of) \sseq s are also \spseq s by \eqref{eqtxthgmnB}, \eqref{eqtxtM6} implies that \begin{equation} \parbox{9.7cm}{if $\ins_1$ and $\ins_2$ are old edges and there is an \sseq{} from $\ins_1$ to $\ins_2$ in $K$, then there is an \spseq{} from $\ins_1$ to $\ins_2$ in $L$.} \label{eqtxtKthnLvn} \end{equation} Hence, for an old prime interval $\ins$ and old elements $u\leq v$, \begin{equation} \parbox{8.8cm}{if $\ins$ \sspan s $[u,v]$ in $K$, then $\ins\,$ \spspan s $[u,v]$ in $L$.} \label{eqtxtLbnsPns} \end{equation} If $c$ is also an old element, then $\set{a\wedge c,b\wedge c}\subseteq K$, so the validity of \eqref{eqtxtindPln} follows from \eqref{eqtxtindstp} and \eqref{eqtxtLbnsPns}. Hence, we can assume that $c$ is an eye. Let $\ustar c$ and $\dstar c$ stand for its (unique) cover and lower cover, respectively; they are old elements. Since $c<b$ and $c$ is meet-irreducible, $\ustar c \leq b$. \eqref{eqtxtindstp} yields that $[a,b]$ \sspan s $[a\wedge \dstar c, b\wedge \dstar c]=[a\wedge \dstar c,\dstar c]$ in $K$. Since $c\nleq a$, $a\wedge c<c$. Using that $c$ is join-irreducible, we have that $a\wedge c=a\wedge \dstar c$. Hence, by \eqref{eqtxtLbnsPns}, \begin{equation} \text{$[a,b]$ \spspan s $[a\wedge c,\dstar c]=[a\wedge \dstar c,\dstar c]$ in $L$.} \label{eqtxtabspstcc} \end{equation} On the other hand, $a\wedge \ustar c< \ustar c$, since otherwise $c<\ustar c\leq a$ would contradict $c\nleq a$. \eqref{eqtxtindstp} yields that $[a,b]$ \sspan s $[a\wedge \ustar c, b\wedge \ustar c]=[a\wedge \ustar c,\ustar c]$. Thus, we can pick an old element $w$ such that $a\wedge \ustar c\leq w\prec \ustar c$ and there is an \sseq{} from $[a,b]$ to $[w,\ustar c]$ in $K$. By \eqref{eqtxtKthnLvn}, we have an \spseq{} from $[a,b]$ to $[w,\ustar c]$ in $L$. By left-right symmetry, we can assume that $w$ is to the left of $c$. 
Listing them from left to right, let $w=w_0, w_1,\dots, w_t$ be the old lower covers of $\ustar c$ that are neither strictly to the left of $w$, nor strictly to the right of $c$; see Figure~\ref{figabwcst} for $t=3$. Note that the old elements are empty-filled while the new ones are black-filled, and the elements in the figure do not form a sublattice. Let $w_{t+1}$ be the neighboring old lower cover of $\ustar c$ to the right of $w_t$ in $K$; it is also to the right of $c$. By \eqref{eqtxttwcodhBtn}, $\set{w_{i-1}\wedge w_i, w_{i-1}, w_i, \ustar c}$ is a 4-cell of $K$ for $i\in\set{1,\dots,t}$; these 4-cells are colored by alternating shades of grey in the figure. Clearly, $[w_{i-1}, \ustar c]$ strongly swings to $[w_{i}, \ustar c]$ in $K$, for $i\in\set{1,\dots,t}$. Hence, there is an \sseq{} in $K$ from $[w,\ustar c]=[w_0,\ustar c]$ to $[w_t,\ustar c]$. By \eqref{eqtxtKthnLvn}, we have an \spseq{} from $[w,\ustar c]$, and thus also from $[a,b]$, to $[w_t,\ustar c]$. Also, since $\dstar c$, $\ustar c$, $w_t$, $w_{t+1}$, and the lower covers of $\ustar c$ between $w_t$ and $w_{t+1}$ form a region in $L$ and a cover-preserving sublattice $M_n$ for some $n$, \eqref{eqtxtM6} allows us to continue the above-mentioned \spseq{} to $[\dstar c, c]$. Hence, $[a,b]$ \spspan s $[\dstar c, c]=[\dstar c, b\wedge c]$ in $L$. This fact and \eqref{eqtxtabspstcc} yield that $[a,b]\,$ \spspan s $[a\wedge c, b\wedge c]$ in $L$, proving \eqref{eqtxtindPln} for \emph{old edges} $[a,b]$. Second, we assume that $[a,b]$ is a new edge. If $b$ is an eye, which has only one lower cover, then $c<b$ gives that $c\leq a$, whence $[a\wedge c, b\wedge c]$ is a singleton, which is clearly \spspan{}ned. So we can assume that $a$ is an eye with upper and lower covers $\ustar a=b$ and $\dstar a$, respectively. Let $S=\set{\dstar a,w_\ell,w_r,b}$ denote the 4-cell of $K$ into which $a$ has been added.
Here it is understood that several eyes could have been added to this 4-cell simultaneously, whence $[\dstar a,b]_L$ is isomorphic to $M_n$ for some $n\in\set{3,4,\dots}$. Applying \eqref{eqtxtM6} to $[\dstar a,b]_L$ and using \eqref{eqtxtKellyRival}, we obtain that \begin{equation} \text{$[a,b]$ \spspan{}s both $[\dstar a, w_r]$ and $[w_r, b]$ in $L$.} \label{eqtxtAbsslPsn} \end{equation} By the already proved ``old edge version'' of \eqref{eqtxtindPln}, \begin{equation} \text{$[\dstar a, w_r]$ \spspan{}s $[\dstar a\wedge c, w_r\wedge c]$ and $[w_r, b]$ \spspan{}s $[w_r\wedge c, b\wedge c]$.} \label{eqtxtdzBnWqVy} \end{equation} In \eqref{eqtxtAbsslPsn}, prime intervals are \spspan{}ned, whence \eqref{eqtxtAbsslPsn} yields \spseq{}s. Combining these \spseq{}s with those provided by \eqref{eqtxtdzBnWqVy} and using transitivity, we obtain that $[a,b]$ \spspan{}s $[\dstar a\wedge c, b\wedge c]$. Hence, we need to show only that $\dstar a\wedge c=a\wedge c$. If we had that $a\leq c$, then $a\prec b$ and $c<b$ would give that $a=c$, contradicting $c\nleq a$. Thus, $a\nleq c$ and $a\wedge c < a$. Since $\dstar a$ is the only lower cover of $a$, we have that $a\wedge c\leq \dstar a$ and so $a\wedge c\leq \dstar a\wedge c$. Since the converse inequality is obvious, $a\wedge c = \dstar a\wedge c$, as required. This completes the proof of \eqref{eqtxtindPln}. \begin{figure}[ht] \centerline {\includegraphics[scale=1.0]{czg-makay-fig06}} \caption{From $[w,\ustar c]$ to $[\dstar c, c]$}\label{figabwcst} \end{figure} Next, let $\ba=\{\pair xy\in L^2: \inp\,\text{ \spspan s }[x\wedge y, x\vee y]\}$, where $\inp$ is the prime interval from Theorem~\ref{thmstrongswinglemma}\eqref{thmstrongswinglemmab}. We are going to show that $\ba$ is a congruence. Obviously, $\pair x y\in\ba\iff \pair{x\wedge y}{x\vee y}\in\ba$ and \begin{equation} \bigl(x\leq y\leq z, \text{ } \pair x y\in\ba, \text{ and } \pair y z\in\ba \bigr) \Longrightarrow \pair x z\in\ba.
\label{equptrN} \end{equation} Hence, by Gr\"atzer~\cite[Lemma 11]{rGrLTFound}, it suffices to show that whenever $x\leq y$, $\pair x y\in \ba$, and $z\in L$, then $\pair{x\vee z}{y\vee z}\in\ba$ and $\pair{x\wedge z}{y\wedge z}\in\ba$. To do so, pick a maximal chain $x=u_0\prec u_1\prec\dots\prec u_n=y$ that witnesses $\pair x y\in\ba$. Then, for each $i\in\set{1,\dots,n}$, there is an \spseq{} from $\inp$ to $[u_{i-1},u_i]$. By \eqref{eqtxtindPln}, $\pair{u_{i-1}\wedge z}{u_i\wedge z}\in\ba$ for $i\in\set{1,\dots,n}$, and \eqref{equptrN} yields that $\pair{x\wedge z}{y\wedge z}=\pair{u_{0}\wedge z}{u_n\wedge z}\in\ba$. By semimodularity, either $[{u_{i-1}},{u_i}]$ is up-perspective to $[{u_{i-1}\vee z},{u_i\vee z}]$, or ${u_{i-1}\vee z}={u_i\vee z}$. Hence, either by \eqref{eqtxbsLsP} or trivially, $\pair{u_{i-1}\vee z}{u_i\vee z}\in\ba$. Thus, \eqref{equptrN} implies that $\pair{x\vee z}{y\vee z}=\pair{u_{0}\vee z}{u_n\vee z}\in\ba$, and we have shown that $\ba$ is a congruence. Finally, since $\ba$ collapses $\inp$, we have that $\con(\inp)\subseteq \ba$. So if $\pair{0_\inq}{1_\inq}\in\con(\inp)$, then the containment $\pair{0_\inq}{1_\inq}\in\ba$ and the definition of $\ba$ yield an \spseq{} from $\inp$ to $\inq$. This completes the proof of the strong swing lemma. \end{proof} \begin{remark} For a \emph{slim} semimodular lattice $L$, \eqref{eqtxtindPln} is equivalent to \eqref{eqtxtindstp} by \eqref{eqtxthgmnB}. Actually, \eqref{eqtxtindPln} is not needed in this case. In this way, we obtain a proof for the slim swing lemma (Lemma~\ref{specswinglemma}) that is much shorter than the proof above. \end{remark} \section{Swing lattice game}\label{sectiongame} In order to describe the essence of our online game, the \emph{Swing lattice game}, we need only two concepts. First, in Cz\'edli~\cite{czgdiagrectext}, a class $\sf C_2$ of aesthetic slim semimodular lattice diagrams has been introduced.
Instead of repeating the long definition of $\sf C_2$ here, we only mention that the diagrams in Figures~\ref{figpsa}, \ref{figslima}, and \ref{figindstep} and $L'$ in Figure~\ref{figaddfork} belong to $\sf C_2$, but the diagrams in Figure~\ref{figcyclM6} and $L$ in Figure~\ref{figaddfork} do not. Second, an \pseq{} $\vec\inr$ from \eqref{eqsseqpseq} is called an \emph{\gseq} if, for $i\in\set{1,2,\dots,n}$, $\inr_{i-1}\neq \inr_i$. (The acronym comes from ``Swing lemma game''.) For the player, who can see the diagram, the exact definition of $\sf C_2$ is not at all important. In order to avoid the concept of \gseq{}s, which may cause difficulty for a non-mathematician player, the program says simply that a \emph{monkey} keeps moving from edge to edge such that the two edges in question have to belong to the same 4-cell. The monkey can jump or swing or tilt (these steps are easily described in plain language), but it cannot move back to the edge it came from in the very next step. The purpose of the game is to make sure that a random \gseq{} $\vec\inr$ continues as long as possible in a slightly varying diagram $L'$, to be specified later. In the language of the game, which we will use frequently below, the monkey should live as long as the player's luck and, much more significantly, his skill allow. The current position, $\inr_i$, of the monkey is always indicated by a red thick edge. At the beginning of the game, the program displays a randomly chosen diagram $L\in\sf C_2$ of a given length. This $L$ is fixed for a while. In order to obtain a bit larger planar semimodular lattice diagram $L'$, the player is allowed to add an eye to one of the 4-cells of $L$ (by a mouse click). Whenever he adds a new eye, the old one disappears; this action is called a \emph{change of the eye}. In this way, $L'$ is varying but the equality $|L'\setminus L|=1$ always holds.
Besides the edges of $L$, which are called \emph{original edges}, $L'$ has two additional edges, the \emph{new edges}. In order to influence the monkey's lifetime, \begin{equation} \text{the player's main tool is to change the eye frequently.} \label{eqtxtchnEye} \end{equation} If the player clicks on a 4-cell while the monkey is moving between two old edges or when it has just arrived at an old edge, then the eye is immediately changed. However, if the monkey is moving from an old edge to a new one or conversely, then the change is delayed till the monkey arrives at an old edge. At the beginning, \begin{equation} \text{the player has three seconds to choose an edge $\inr_0$ of $L$;} \label{eqtxtinitedgE} \end{equation} if he is late, then the computer chooses one randomly. After departing from $\inr_0$, the monkey moves at a constant speed at the beginning; later, in order to increase the difficulty, this speed slowly increases. If the monkey can make several moves, then the program chooses the actual move randomly. From time to time, the program turns a 4-cell into a \emph{bonus cell}, indicated by grey color; if the monkey can jump or swing between two edges of the grey cell within ten moves, then it earns an extra life. Similarly, the program also offers \emph{candidate cells} in blue color; \begin{equation} \parbox{6.5cm}{if the player accepts the candidate cell by clicking on it within three moves, then this 4-cell becomes a purple \emph{adventure cell}.} \label{eqtxtadVent} \end{equation} The monkey earns two extra lives if it jumps or swings between two edges of the adventure cell within 20 moves but it loses a life otherwise. Also, the monkey loses a life when no move is possible; this can happen only at a boundary edge of the diagram. If a life is lost but the monkey still has at least one life, then the game continues on a new random diagram. When the monkey has no more lives left, the game terminates.
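The basic move rule described above can be sketched in a few lines of code. This is a hedged, simplified model written for illustration only: the data structure \texttt{cells\_of\_edge}, the function names, and the merging of jumps, swings, and tilts into a single kind of move are our own assumptions, not the actual JavaScript implementation of Cz\'edli and Makay~\cite{czgmgthegame}.

```python
import random

def legal_moves(edge, came_from, cells_of_edge):
    """All edges the monkey may step to from `edge`: the other edges of
    every 4-cell containing `edge`, except the edge it just came from.
    (Jumps, swings, and tilts are merged into one kind of move here.)"""
    moves = set()
    for cell in cells_of_edge.get(edge, []):
        moves.update(cell)
    moves.discard(edge)
    moves.discard(came_from)
    return sorted(moves)

def play(start, cells_of_edge, lives=3, max_steps=100, rng=None):
    """Random walk of the monkey; a life is lost when no move exists.
    (In the real game a new random diagram is then drawn; here we
    simply forget the history and stay on the same edge.)"""
    rng = rng or random.Random(0)
    edge, came_from = start, None
    for step in range(max_steps):
        if lives == 0:
            return step, lives
        moves = legal_moves(edge, came_from, cells_of_edge)
        if moves:
            edge, came_from = rng.choice(moves), edge
        else:                      # stuck, e.g. at a boundary edge
            lives -= 1
            came_from = None
    return max_steps, lives
```

In the actual game the distinction between jump, swing, and tilt, the changing eye, and the bonus and adventure cells further restrict and enrich these moves.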
The player, if quick enough, can always save the monkey at boundary edges by using \eqref{eqtxtchnEye}. Also, using \eqref{eqtxtchnEye} appropriately, the player can increase the probability that the monkey will go in a desired direction. In order to make a good decision how to use \eqref{eqtxtinitedgE}, when to use \eqref{eqtxtadVent}, and when and how to apply \eqref{eqtxtchnEye}, the player should have some experience and insight into the process. Hence, the Swing lattice game is not only a reflex game. The game is realized by a JavaScript program; see Cz\'edli and Makay~\cite{czgmgthegame}. Most browsers, like Mozilla, can run this program automatically. The diagrams of length $n$ in $\sf C_2$ are conveniently given by their Jordan-H\"older permutations belonging to the symmetric group $S_n$. Since not every diagram in $\sf C_2$ of a given length is appropriate for the game, the program defines the concept of ``good diagrams''. For example, neither a distributive diagram, nor a glued sum decomposable diagram is good. We have characterized goodness in terms of permutations. Whenever a new diagram is needed, the program generates a random good permutation $\pi\in S_n$, and the diagram is derived from $\pi$. The lattice theoretical background of this algorithm is not quite trivial. However, instead of going into details in the \emph{present} paper, we only mention that several tools given by Cz\'edli~\cite{czgcoord} and \cite{czgdiagrectext} and Cz\'edli and Schmidt~\cite{czgschperm} have extensively been used. \color{black}
\section{Introduction} Monte Carlo event generators provide an important bridge between collider experiments and the underlying physics of the SM. They enable low-multiplicity fixed-order calculations to be compared against the busy hadronic environment that characterises modern-day experimental particle physics. This is achieved by dressing the fixed-order calculation with a parton shower and introducing underlying-event and hadronisation effects. The current state-of-the-art for these predictions involves matching NLO QCD matrix elements to a parton shower. Moving beyond this perturbative accuracy requires either performing an NNLO QCD or an NLO EW calculation, both of which require new technology and often contribute a similar order of magnitude to the correction. There has been a lot of work both on calculating fixed-order NNLO QCD corrections, such as~\cite{Ridder:2013mf,Boughezal:2013uia,Czakon:2013goa}, and on the matching of NNLO QCD cross sections with a parton shower~\cite{Hamilton:2012np,Hoche:2014dla,Hoeche:2014aia}. However, the work presented in the following considers the extension of Monte Carlo event generators, specifically the SHERPA~\cite{Gleisberg:2003xi,Gleisberg:2008ta} event generator, to include NLO EW corrections. There is a particular focus on the EW Sudakov approximation, the origin of which is outlined and the implementation within SHERPA briefly explained. Some initial results are presented for a 14 TeV LHC. \section{NLO QCD} To begin a discussion on NLO EW, it is instructive to consider the comparative case of NLO QCD. Analytically, the NLO QCD correction to a process is given by \begin{equation}\label{EQN:NLOana} \sigma_{\text{NLO}}^{\text{QCD}} = \int(\text{B} + \text{V})\text{d}\Phi_\text{B} + \int\text{R}\text{d}\Phi_\text{R}\,, \end{equation} which cannot be easily transferred to a numerical calculation due to the divergence of the virtual ($\text{V}$) and real ($\text{R}$) integrals on the RHS.
The Born contribution, $\text{B}$, obviously does not contain any such divergences. Each term in eqn.\,(\ref{EQN:NLOana}) is integrated over its appropriate phase-space to give the full NLO cross section, $\sigma_{\text{NLO}}^{\text{QCD}}$. The almost universally adopted solution to this problem is to use a subtraction scheme~\cite{Catani:1996vz,Bevilacqua:2013iha,Frederix:2009yq,Campbell:1998nn,Kosower:2003bh}. This subtracts a quantity, $\text{S}$, which exactly matches the divergent behaviour of the integral over the real emission. This must be analytically integrable over the 1-parton sub-space and not introduce any new divergences~\cite{Catani:1996vz}. Once these conditions are met, the subtraction term can be added to the virtual integral. Once it is appropriately integrated, it will then also match exactly the divergent structure of the virtual contribution. With this introduction, eqn~(\ref{EQN:NLOana}) becomes a sum of finite integrals, \begin{equation} \sigma_\text{NLO}^{\text{QCD}} = \int(\text{B}+\text{V}+\int\text{S}\text{d}\Phi_1)\text{d}\Phi_\text{B}+\int(\text{R}-\text{S})\text{d}\Phi_\text{R}\,. \end{equation} \section{NLO EW} \subsection{Performing the calculation} There are several differences between an NLO QCD calculation and an NLO EW calculation. One obvious difference is the appearance of the masses of the EW gauge bosons, which regulate the divergences from soft and collinear radiation. The $W^\pm$ and $Z$ bosons produced as real emission decay into other particles and are therefore, theoretically, distinguishable from the underlying Born process. This reduces the problem of including real radiation to that of QED, which can be treated in a similar way to the NLO QCD calculation above. Because the real radiation can be, at least to a large extent, classified as a distinct process, the large corrections in the high energy regime, where the mass of the weak bosons becomes negligible, are physically meaningful.
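The mechanics of such a subtraction scheme can be illustrated with a one-dimensional toy integral. The functions below are invented for illustration and are not SHERPA code: $f(x)/x$ plays the role of the real-emission integrand, diverging as $x\to 0$, and $f(0)/x$ is the subtraction term that matches this behaviour.

```python
import math

def real_emission(x):
    # toy "real-emission" integrand, divergent like 1/x as x -> 0;
    # f(x) = exp(-x) is an invented stand-in for a matrix element
    return math.exp(-x) / x

def subtraction(x):
    # subtraction term S(x) = f(0)/x matching the divergent behaviour
    return 1.0 / x

def integrate(f, a, b, n=100_000):
    # plain midpoint rule, good enough to expose the (non-)divergence
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

cutoffs = (1e-2, 1e-4, 1e-6)
# The unsubtracted "real" integral grows like log(1/eps) as eps -> 0 ...
unsubtracted = [integrate(real_emission, eps, 1.0) for eps in cutoffs]
# ... while R - S tends to a finite limit, so the cutoff can be removed.
subtracted = [integrate(lambda x: real_emission(x) - subtraction(x), eps, 1.0)
              for eps in cutoffs]
```

Numerically, the unsubtracted integral keeps growing as the infrared cutoff is lowered, while the subtracted combination stabilises; in the full calculation the integrated subtraction term then cancels the corresponding divergence of the virtual contribution.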
This high energy regime is the limit where the Sudakov logarithmic approximation becomes valid. Another difference introduced in NLO EW calculations is the dependence of the EW boson couplings on the helicity of the particles involved, which is particularly clear in $W^\pm$ boson emission. Unlike in QCD, the couplings of the weak bosons are strongly dependent on the helicity of the particle. Furthermore, the exchange of weak particles can change the underlying Born term, for example mixing electrons with neutrinos, and therefore create interferences between previously distinct processes. These differences affect both the NLO EW calculation and the implementation of the EW Sudakov logarithmic approximation. \subsection{EW Sudakov Approximation} A full NLO EW calculation is very computationally intensive, whereas the EW Sudakov approximation does not introduce much overhead and is therefore comparatively cheap. It is also easier to include on top of NLO QCD calculations. This approximation considers only the logarithmic contribution to the correction, which is dominant in the high-energy regime. The EW Sudakov approximation in SHERPA follows an excellent and clear paper by Denner and Pozzorini~\cite{Denner:2000jv}, which provides a breakdown of how to implement Sudakov logarithms in a process-independent way. These logarithms, $L$, typically take the form \begin{equation}\label{EQN:sud1} L\sim\log\left(\frac{s}{M_V^2}\right)\,, \end{equation} which captures the high energy behaviour of the NLO EW calculations. In eqn\,(\ref{EQN:sud1}), $s$ is the squared centre-of-mass energy, and $M_V$ is the mass of the relevant weak boson. There is always a non-logarithmic piece which is neglected, and this creates a limit to the accuracy EW Sudakov logarithms can reach, typically $\sim 1\%$.
These logarithms are produced by the exchange of soft-collinear weak bosons, which become divergent in the limit of vanishing mass, and are referred to as mass singular diagrams, as discussed in ref.~\cite{Kinoshita:62}. These diagrams involve either the exchange of an EW boson between 2 external legs (double logarithms) or the emission of a soft or collinear boson from an external leg (single logarithms). These contributing diagrams are shown in fig.\,\ref{FIG:mass}. The right-hand diagram in fig.\,\ref{FIG:mass} includes the wave-function renormalisation terms where the boson is reabsorbed by the emitting line. There are also single logarithms from parameter renormalisation, which are not depicted. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{relevantdiagrams.png} \caption{\label{FIG:mass} Diagrams which lead to mass singularities and contribute to the logarithmic approximation.} \end{center} \end{figure} It is clear that the double logarithmic terms are the dominant contribution, with the single logarithms providing a subleading contribution. However, while the double logarithms are typically a negative correction to the Born cross section, the subleading contribution is a positive correction. Considering only the leading contribution therefore leads to an overestimation of the size of the Sudakov correction. In the one-loop approximation, the EW Sudakov logarithms introduce corrections, summed over external legs $i$ and $j$ and EW bosons with typical mass $M$, \begin{equation}\label{EQN:sudfin} L=\frac{\alpha}{4\pi}\left[A\log^{2}\left(\frac{(p_i+p_j)^2}{M^2}\right) +B\log\left(\frac{(p_i+p_j)^2}{M^2}\right)\right]\,, \end{equation} where $p_i$ denotes the momentum of leg $i$. \subsection{EW Sudakov results} Within the SHERPA framework, the EW Sudakov approximation is included as a K-factor that is applied at each phase-space point. 
As is implied by eqn~(\ref{EQN:sudfin}), it depends only on the final state legs, and iterates over all possible exchanges of EW bosons. All bosons are assumed to have a mass equal to the $W^\pm$ boson mass. This introduces only a small logarithmic correction for the $Z$ boson, which can be neglected to the order considered here. The mass difference between the photon and the $W^\pm$ boson introduces large logarithms, but these largely cancel against real photon radiation. Therefore, the assumption that the EW bosons all have equal mass does not have a significant impact on the correction. The EW Sudakov approximation only affects the hard process and can be easily employed with the parton shower or applied to an NLO QCD computation. However, the implementation does rely on the COMIX~\cite{Gleisberg:2008fv} matrix element generator. The results shown here are the first results with the implementation of NLO EW Sudakov logarithms within SHERPA. The calculations are performed for a 14 TeV LHC collider. The left-hand side of fig.\,\ref{FIG:onoff} shows the effect of including NLO EW Sudakov corrections at 14 TeV in off-shell $W^\pm$+jets production. It is clear that as the $p_T$ of the leading jet increases, the relative correction from the EW Sudakov logarithms becomes larger and more negative. This reaches almost $40\%$ at 1 TeV. The right-hand side of fig.\,\ref{FIG:onoff}, which shows the same distribution but for on-shell production of the $W^\pm$ boson, exhibits similar behaviour throughout the $p_T$ spectrum.
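A minimal sketch of how a logarithmic K-factor can be assembled from eqn~(\ref{EQN:sudfin}) is given below. The coefficients $A$ and $B$, the coupling value, and the function names are illustrative assumptions only; the actual process-dependent coefficients are those of Denner and Pozzorini~\cite{Denner:2000jv}, and this is not the SHERPA implementation.

```python
import math

ALPHA = 1.0 / 137.0   # coupling; an illustrative value, not SHERPA's input scheme
M_W = 80.4            # common weak-boson mass scale in GeV, as assumed in the text

def sudakov_correction(s_ij, A, B, alpha=ALPHA, mass=M_W):
    """Relative one-loop correction for one pair of external legs with
    invariant (p_i + p_j)^2 = s_ij, cf. eqn (EQN:sudfin); A and B are
    hypothetical process-dependent coefficients."""
    big_log = math.log(s_ij / mass**2)
    return alpha / (4.0 * math.pi) * (A * big_log**2 + B * big_log)

def k_factor(leg_pairs):
    """K-factor for one phase-space point: 1 plus the sum of the
    logarithmic corrections over all external leg pairs (s_ij, A, B)."""
    return 1.0 + sum(sudakov_correction(s, A, B) for s, A, B in leg_pairs)

# With a negative double-log coefficient, the correction grows in magnitude
# with the invariant mass, mirroring the behaviour seen in the p_T spectra.
k_500 = k_factor([(500.0**2, -3.0, 2.0)])
k_2000 = k_factor([(2000.0**2, -3.0, 2.0)])
```

Since the double logarithm dominates at large invariants while the single logarithm enters with the opposite sign, such a sketch also makes the partial compensation between the leading and subleading terms visible.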
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{PTjet_Wjoffshell.pdf} \includegraphics[width=0.4\textwidth]{PTjet_Wjonshell.pdf} \caption{\label{FIG:onoff} $p_T$ of the leading jet in off-shell (left-hand side) and on-shell (right-hand side) production of a $W^\pm$ boson with a jet.} \end{center} \end{figure} \subsection{Full NLO EW} Although the NLO EW Sudakov approximation is comparatively quick and easy to include on top of NLO QCD computations, for some studies, either at much higher precision or outside of the high energy regime, a full NLO EW computation must be employed. This faces the same challenges and subtleties that the NLO EW Sudakov approach dealt with, alongside ambiguities in process definition. It must be decided, for example, what counts as a photon emitted in the NLO EW calculation and what is simply radiation from a jet. Also, the difference between NLO QCD corrections to EW processes and NLO EW corrections to QCD processes must be defined in order to avoid double counting. Within the SHERPA event generator, there is currently a working interface to the OpenLoops~\cite{Cascioli:2011va} loop provider for the NLO EW virtual amplitude, with the QED subtraction handled within SHERPA. There are already publications on NLO EW corrections, including multijet merging, to $V$+jets~\cite{Kallweit:2014xda,Kallweit:2015dum}, both with an on-shell $W^\pm$ boson and including off-shell effects; however, the code is not yet public. There is also ongoing effort to implement an NLO EW interface between SHERPA and Recola~\cite{Actis:2016mpe}, as another one-loop provider for both NLO QCD and NLO EW virtual amplitudes. \section{Conclusions} Improving the perturbative accuracy of the hard interaction in Monte Carlo event simulation now involves the calculation of NLO EW contributions, which are often of a comparable size to NNLO QCD. This includes several new challenges, and the full computation is quite time-intensive.
EW Sudakov logarithms provide a simple way for the dominant behaviour of the NLO EW calculation to be taken into account without the computational overhead. It is also trivial to include on top of QCD corrections, unlike the full calculation where the interference terms must be carefully considered to avoid double counting. This is an important complementary approach to be implemented alongside the full calculation. There is also a lot of promising progress in the automated evaluation of the full NLO EW correction. \begin{acknowledgments} This work has been supported by the European Commission through the networks MCnetITN (PITN--GA--2012--315877). \end{acknowledgments}
\section{Introduction} The existence of the electric dipole moment (EDM) for any particle or closed system of particles violates both the space parity ($\mathcal{P}$) and time-reversal ($\mathcal{T}$) symmetries~\cite{Khrip91,Gin04,Saf18}. To date, the most stringent experimental constraints for the particles' EDMs are obtained for the electron ($e$EDM) due to its strong enhancement in heavy atoms and diatomic molecules. The most restrictive $e$EDM bounds were established in experiments with the ThO molecule ($|d_e|<1.1\times 10^{-29}$ $e$ cm \cite{ACME18}). Here $e$ is the electron charge. Previously, accurate results were obtained with the Tl atom \cite{Reg02}, the YbF molecule~\cite{Hud11}, and the HfF$^+$ cation~\cite{Cair17}. For the extraction of the $e$EDM values from the experimental data, accurate theoretical calculations are required. These calculations were performed for Tl~\cite{Liu92,Dzuba09,Nat11,Por12,Chub18}, for YbF~\cite{Quiney:98, Parpia:98, Mosyagin:98, Abe:14}, for PbF~\cite{Skripnikov:14c,Sudip:15,Chub19:3}, for ThO~\cite{Skripnikov:13c,Skripnikov:15a,Skripnikov:16b,Fleig:16}, and for HfF$^+$~\cite{Petrov:07a,Skripnikov:17c, Fleig:17, Petrov:18b}. In the same experiments it is possible to search for another $\mathcal{P}$,~$\mathcal{T}$-odd effect: the $\mathcal{P}$,~$\mathcal{T}$-odd electron-nucleus interaction~\cite{San75,Gor79,Koz95}. Effects originating from this interaction and from the $e$EDM can be observed in an external electric field and cannot be distinguished in any particular atomic or molecular experiment. However, they can be distinguished in a series of experiments with different species (see, e.g., \cite{Bon15,Skripnikov:17c}). Theoretical predictions of the $d_e$ value are rather uncertain. Within the Standard Model (SM), none of them predicts an $e$EDM value larger than $10^{-38}$ $e$ cm~\cite{Pos14}. However, predictions of the SM extensions are many orders of magnitude larger~\cite{Engel2013}.
Different models for the $\mathcal{P}$,~$\mathcal{T}$-odd interactions within the SM framework are discussed in Refs.~\cite{Pos14,Chub16}. In modern experiments aimed at the observation of $\mathcal{P}$,~$\mathcal{T}$-odd effects in atomic and molecular systems, either the shift of the magnetic resonance~\cite{Reg02} or the electron spin precession~\cite{Hud11,ACME18,Cair17} in an external electric field is studied. Due to the large gap between the current experimental bound and the maximum SM theoretical prediction, alternative methods for the observation of the $\mathcal{P}$,~$\mathcal{T}$-odd effects are of interest. In Refs.~\cite{Baran78,Sush78}, the existence of an optical rotation of linearly polarized light propagating through a medium in an external electric field, the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect, was pointed out. The possibility of its observation was first studied theoretically and experimentally in Ref.~\cite{Bar88} (see the review on the subject \cite{Bud02}). Recently, a possible observation of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect by the intracavity absorption spectroscopy (ICAS) methods \cite{Boug14,Baev99,Dur10} using atoms was considered \cite{Chub17}. In Ref.~\cite{Boug14} an experiment on the observation of the $\mathcal{P}$-odd optical rotation in the Xe, Hg, and I atoms was discussed. The techniques of Ref.~\cite{Boug14} are close to what is necessary for the observation of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect. In Refs.~\cite{Chub18,Chub19:1,Chub19:2} an accurate evaluation of this effect oriented to the application of the techniques of Ref.~\cite{Boug14} was undertaken for the atomic case; it was extended to molecules in Ref.~\cite{Chub19:3}. In the present paper, we consider PbF and ThO for the beam-based ICAS observation of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect. According to our estimates, these molecules are promising candidate systems for this type of experiment (see below).
As shown in earlier works~\cite{Khrip91}, heavy atoms and molecules containing such atoms are promising systems in which to search for the $\mathcal{P}$,~$\mathcal{T}$-odd effects. For the case of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect, such systems should also satisfy the following requirements. The natural linewidth of the chosen transitions $\Gamma_{\text{nat}}$ (the collisional width is negligible for beam-based experiments) should be as small as possible, since this allows for the large saturating intensities at large detuning necessary for the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment (see sections~\RNumb{2}-\RNumb{3} below). In other words, it allows reaching a better signal-to-noise ratio in such experiments. For this reason the most suitable are the transition from the ground to the metastable state, X1$^2\Pi_{1/2}\rightarrow$ X2$^2\Pi_{3/2}$, in the PbF molecule and the transition from the ground to the metastable state, X$^1\Sigma_0\rightarrow$ H$^3\Delta_1$, in the ThO molecule. The characteristics of these molecules are discussed in Section~\RNumb{2}. For the molecular case, the applied electric field $\mathcal{E}_{\text{ext}}$ should be close to the saturating field $\mathcal{E}_{\text{sat}}$, which almost completely polarizes a molecule. For diatomic molecules with total electronic angular momentum projection on the molecular axis, $\Omega$, equal to $1/2$, such as PbF, $\mathcal{E}_{\text{sat}}$ is about $10^4$ V/cm. Such a field can be created only within a space of about several centimeters. Diatomic molecules with $\Omega > 1/2$ can be polarized by smaller external electric fields due to closely lying levels of opposite parity (so-called $\Omega$-doubling). The importance of the use of $\Omega$-doubling for the search for $\mathcal{P}$- and $\mathcal{P}$,~$\mathcal{T}$-odd effects was noted in Refs.~\cite{Lab77,Sush78,Gor79}.
We can imagine an ICAS-beam experiment for the observation of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday rotation as follows. A molecular beam crosses the cavity in the transverse direction. Within the cavity it meets an intracavity laser beam. The crossing of these two beams is located in an electric field oriented along the laser beam. The detection of the optical rotation (either using simple polarimetry or phase-sensitive techniques) happens at the output/transmission of the cavity (the scheme of the proposed experimental setup is given in Fig.~\ref{f:1}). \begin{figure}[h] \begin{center} \includegraphics[width=10.0 cm]{Sketch_setup.jpg} \end{center} \caption{\label{f:1} {The principal scheme of the proposed experimental setup. A molecular beam crosses the cavity in a transverse direction. Within the cavity it meets an intracavity laser beam. The crossing of the two beams is located in an electric field oriented along the laser beam. The detection of the optical rotation happens at the output/transmission of the cavity.}} \end{figure} Let us discuss the state-of-the-art ICAS capabilities necessary for the proposed $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiments. In Ref.~\cite{Boug14} the possibility of a total optical path length of about 100~km in a cavity of 1~m length was considered. This corresponds to $10^5$ passes of the light inside the cavity and $10^5$ reflections of the light from the mirrors. For a molecular beam-based experiment with a beam of 1~cm in diameter, the typical total optical interaction path length is about 1~km, i.e. $10^2$ times smaller. However, in another ICAS experiment~\cite{Baev99} an optical path length of $7\times 10^4$~km for a cavity of the same size as in Ref.~\cite{Boug14} was reported. This means that a light-pass number inside a cavity 700 times higher may become realistic. Another important property of ICAS experiments is the sensitivity of the rotation-angle measurement.
Using a cavity-enhanced scheme, a shot-noise-limited birefringence-phase-shift sensitivity at the $3\times 10^{-13}$~rad level was demonstrated~\cite{Dur10}. We use the above-mentioned parameters of ICAS experiments to assess the feasibility of the proposed $\mathcal{P}$,~$\mathcal{T}$-odd Faraday ICAS experiment for the search for the $\mathcal{P}$,~$\mathcal{T}$-odd interactions in molecular physics. \section{$\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment on molecules} The $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect manifests itself as circular birefringence arising when light propagates through a medium in an external electric field and the $\mathcal{P}$,~$\mathcal{T}$-odd interactions are taken into account. Its origin is the same as that of the ordinary Faraday effect in an external magnetic field. In a magnetic field the Zeeman sublevels split in energy. Then the transitions between two states with emission (absorption) of right (left) circularly polarized photons correspond to different frequencies, since they occur between different Zeeman sublevels. This causes birefringence, i.e. different refractive indices $n^{\pm}$ for the right and left photons. The same happens in an external electric field when the $\mathcal{P}$,~$\mathcal{T}$-odd interactions are taken into account. In this case, the level splitting is proportional to the linear Stark shift $S^{\Delta}$. The rotation angle $\psi(\omega)$ of the light polarization plane for any type of birefringence reads \begin{equation} \label{1} \psi(\omega)=\pi \frac{l}{\lambda} \text{Re} \left[n^+(\omega)-n^-(\omega)\right], \end{equation} where $n^{\pm}$ are the refractive indices for the right and left circularly polarized light, $l$ is the optical path length, $\omega$ is the light frequency and $\lambda$ is the corresponding wavelength.
In the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday rotation case \cite{Chub18} \begin{equation} \label{2} \text{Re} \left[n^+(\omega)-n^-(\omega)\right]= \frac{d}{d\omega} \text{Re}\left[n(\omega)\right] S^{\Delta}, \end{equation} where $n(\omega)$ is the refractive index of linearly polarized light. In the case of a completely polarized molecule the linear Stark shift of molecular levels is determined by \begin{equation} \label{3} S^{\Delta} = d_e \mathcal{E}_{\text{eff}} , \end{equation} where $\mathcal{E}_{\text{eff}}$ is the internal molecular effective electric field acting on the electron EDM. If the molecule is not completely polarized, one introduces a corresponding polarization factor that depends on the external electric field $\mathcal{E}_{\text{ext}}$. To extract $d_e$ from the experimental data it is necessary to know the value of $\mathcal{E}_{\text{eff}}$, which cannot be measured and should be calculated (see, e.g., Refs.~\cite{Tit06,Skrip17}). We evaluated the effective electric fields for the PbF molecule. The effective electric fields in the PbF molecule were calculated within the relativistic coupled cluster method with single, double and noniterative triple cluster amplitudes using the Dirac-Coulomb Hamiltonian~\cite{Chub19:3}. All electrons were included in the correlation treatment. For Pb the augmented all-electron triple-zeta AAETZ~\cite{Dyall:06} basis set was used. For F the all-electron triple-zeta AETZ~\cite{Dyall:16,Dyall:10,Dyall:12} basis sets were used. The theoretical uncertainty of these calculations can be estimated as 5\%. The value of $\mathcal{E}_{\text{eff}}$ for the ground electronic state is in good agreement with previous studies~\cite{Skripnikov:14c,Sudip:15}. The rotation signal $R(\omega)$ in the experiment reads \begin{equation} \label{6} R(\omega)= \frac{\psi(\omega)N_{\text{ev}}}{2 \pi}, \end{equation} where $\psi$ is the rotation angle and $N_{\text{ev}}$ is the number of ``events'' in a statistical experiment.
In the case under consideration $N_{\text{ev}}$ is the number of photons that have interacted with molecules and were then detected. In principle, apart from the losses in the absorber inside the cavity, we also have to take into account the losses in the cavity itself, i.e. in the mirrors. We briefly discuss this part of the losses in section~\RNumb{5}; it changes as a function of the intracavity losses and strongly depends on the cavity parameters. Expressed via the spectral characteristics of the resonance absorption line, the rotation signal reads \begin{eqnarray} \label{8} R (\omega) & = & \frac{\pi}{3} \frac{l}{\lambda} \rho e^2 |\langle i | \bm{r}| f \rangle|^2 \frac{h(u,v)}{\hbar\Gamma_D} \nonumber \\ & \times & \frac{2d_e (\mathcal{E}^i_{\text{eff}}+\mathcal{E}^f_{\text{eff}})}{\Gamma_D}N_{\text{ev}}, \end{eqnarray} \begin{equation} \label{9} \omega=\omega_0+ \Delta \omega. \end{equation} Here $\rho$ is the molecular number density, $| i \rangle$ and $| f \rangle$ are the initial and final states for the resonance transition, $\bm{r}$ is the electron radius-vector, $\Gamma_D$ is the Doppler width, $\mathcal{E}^i_{\text{eff}}$ and $\mathcal{E}^f_{\text{eff}}$ are the effective fields for the initial and final states, $\omega_0$ is the transition frequency, $\Delta \omega$ is the frequency detuning; $\hbar$ and $c$ are the reduced Planck constant and the speed of light. \Eq{8} corresponds to the case of an E1 resonant transition. For M1 transitions the factor $e^2|\langle i | \bm{r}| f \rangle|^2$ should be replaced by $\mu_0^2 |\langle i | \bm{l}-g_S\bm{s}| f\rangle|^2$, where $\bm{s}$ and $\bm{l}$ are the spin and orbital electron angular momentum operators, respectively, $g_S=-2.0023$ is the free-electron $g$ factor and $\mu_0$ is the Bohr magneton.
We employ the Voigt parametrization of the spectral line profile~\cite{Khrip91}: \begin{equation} \label{11} g(u,v) = \text{Im} \; \mathcal{F} (u,v), \end{equation} \begin{equation} \label{12} f(u,v)= \text{Re}\; \mathcal{F} (u,v) , \end{equation} \begin{equation} \label{13} \mathcal{F} (u,v) = \sqrt{\pi} e^{-(u+iv)^2} \left[ 1- \text{Erf} (-i(u+iv)) \right], \end{equation} where $\text{Erf}(z)$ is the error function, \begin{equation} \label{14} u=\frac{\Delta\omega}{\Gamma_D}, \end{equation} and \begin{equation} \label{15} v=\frac{\Gamma_{\text{nat}}}{2\Gamma_D}. \end{equation} $\Gamma_{\text{nat}}$ is the natural width. Finally, \begin{equation} \label{16} h(u,v)=\frac{d}{du} g(u,v). \end{equation} A comment can be made on the behavior of the spectral line shape for the considered $\mathcal{P}$, $\mathcal{T}$-odd Faraday effect. The behavior of the functions $g(u)$ and $f(u)$ for $v \ll 1$ is presented in Fig.~\ref{f:4} (a) and Fig.~\ref{f:4} (b), respectively. In Fig.~\ref{f:4} (c), the function $h(u)=\frac{dg}{du}$ for $v \ll 1$ is presented. \begin{figure}[h] \begin{center} \begin{minipage}[h]{0.49\linewidth} \center{\includegraphics[width=6 cm]{g.jpeg}\\ (a)} \end{minipage} \hfill \begin{minipage}[h]{0.49\linewidth} \center{\includegraphics[width=6 cm]{f.jpeg} \\ (b)} \end{minipage} \vfill \begin{minipage}[h]{0.49\linewidth} \center{\includegraphics[width=6 cm]{h.jpeg} \\ (c)} \end{minipage} \renewcommand{\figurename}{Fig.} \caption{\label{f:4} Behavior of the functions $g(u)$, $f(u)$ and $h(u)$ with $v\ll 1$ close to the resonance: (a) behavior of the rotation angle for optical rotation (natural or $\mathcal{P}$-odd), (b) behavior of the inverse of the absorption length $L$, (c) behavior of the rotation angle for the Faraday effect (ordinary or $\mathcal{P},\mathcal{T}$-odd).} \end{center} \end{figure} The function $g(u,v)$ defines the behavior of the dispersion; the function $h(u,v)$ determines the behavior of the rotation angle with the detuning.
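The line-shape features discussed here can be cross-checked numerically. The following stdlib-only Python sketch (grid parameters are illustrative, not from the paper) uses the standard real-integral representation of the Faddeeva function $w(z)$, for which $\mathcal{F}(u,v)=\sqrt{\pi}\,w(u+iv)$, to evaluate $f$, $g$ and, by a finite difference, $h$:

```python
import math

# Quadrature grid over t (illustrative parameters; the step must
# resolve the Lorentzian kernel of width v).
T_MAX, N = 8.0, 1601
DT = 2 * T_MAX / (N - 1)
TS = [-T_MAX + k * DT for k in range(N)]
ES = [math.exp(-t * t) for t in TS]  # Gaussian weight, computed once

def voigt_fg(u, v):
    """f(u,v) = Re F and g(u,v) = Im F with F = sqrt(pi)*w(u+iv),
    via the real-integral representation of the Faddeeva function:
      f = (1/sqrt(pi)) Int e^{-t^2} v/((u-t)^2+v^2) dt,
      g = (1/sqrt(pi)) Int e^{-t^2} (u-t)/((u-t)^2+v^2) dt."""
    f = g = 0.0
    for t, e in zip(TS, ES):
        d = (u - t) ** 2 + v * v
        f += e * v / d
        g += e * (u - t) / d
    c = DT / math.sqrt(math.pi)
    return f * c, g * c

def h_of(u, v, du=1e-3):
    """h(u,v) = dg/du by a central difference."""
    return (voigt_fg(u + du, v)[1] - voigt_fg(u - du, v)[1]) / (2 * du)

v = 0.05                     # v = Gamma_nat/(2*Gamma_D) << 1
f0, g0 = voigt_fg(0.0, v)    # absorption f peaks at resonance, g(0) = 0
h0 = h_of(0.0, v)            # Faraday rotation h also peaks at u = 0
f5 = voigt_fg(5.0, v)[0]     # far wing: f ~ v/u^2
# locate the second extremum of h away from resonance
us = [0.5 + 0.02 * i for i in range(101)]    # u in [0.5, 2.5]
u_star = min(us, key=lambda u: h_of(u, v))   # h < 0 in this region
h_star = h_of(u_star, v)
```

For small $v$ this reproduces the behavior shown in Fig.~\ref{f:4}: $g(0,v)=0$, $f$ and $h$ are maximal at $u=0$, $f(5,v)\approx v/u^2$, and $h$ has its second extremum near $u\approx 1.5$ with $|h|\approx 0.5$.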
The function $f(u,v)$ defines the behavior of the absorption and has its maximum at $\omega_0$. We also introduce the absorption length at a given detuning from the resonance, $L(\omega)=(\rho \sigma(\omega) )^{-1}$. The cross-section $\sigma(\omega)$ for photon absorption by a molecule in the case of an E1 transition reads \begin{equation} \label{10} \sigma(\omega)=4\pi \frac{\omega_0}{\Gamma_D} f(u,v) \frac{e^2|\langle i |\bm{r} | f\rangle |^2}{3\hbar c}. \end{equation} Expressed via the absorption length at an arbitrary detuning, the rotation signal reads \begin{eqnarray} \label{16a} R (\omega) = \frac{h(u,v)}{f(u,v)} \frac{l}{L(u,v)} \frac{d_e (\mathcal{E}^i_{\text{eff}}+\mathcal{E}^f_{\text{eff}})}{2\Gamma_D}N_{\text{ev}}. \end{eqnarray} The maximum of $h(u,v)$ also coincides with $\omega_0$. However, $h(u,v)$ has a second extremum~\cite{Chub18}, which allows one to observe the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect off resonance, in the region where the absorption is small. In the following we choose $\Delta\omega=5\Gamma_{D}$. At this detuning the absorption drops substantially ($f(u,v)\sim v/u^2$), but the rotation is still close to its second extremum ($h(u,v)\sim 1/u^2$). Here we do not consider the hyperfine structure. If the hyperfine structure is resolved, it does not change the order-of-magnitude estimate for the rotation angle. However, the choice of certain hyperfine levels depends on the particular experiment. \begin{table} [h!] \caption{Parameters of transitions under investigation in molecular species.
The adopted number density for different species is $\rho\sim 10^{10}$~cm$^{-3}$.} \tabcolsep=0.01cm \scalebox{0.9}{\begin{tabular}{cccccc} \hline\hline Molecule & Transition & Wavelength & Linewidth & Effective field & Absorption length \\ & & $\lambda$, nm & $\Gamma_{\text{nat}}$, s$^{-1}$ & $\mathcal{E}_{\text{eff}}$, GV/cm & $L(u=5)$, cm \\ \hline PbF & X1 $^2\Pi_{1/2} \rightarrow$ X2 $^2\Pi_{3/2}$ & 1210 & $2.7 \times 10^3$ & 38.0(X1), 9.3(X2) & $2\times 10^{9}$ \\ ThO & X $^1\Sigma_0\rightarrow$ H $^3\Delta_1$ & 1810 & $5\times 10^{2}$ & 0(X), 80(H) & $1\times 10^{10}$ \\ \hline \hline \end{tabular}} \label{table:1} \end{table} 1) One of the promising candidates for the ICAS $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment with diatomic molecules is the PbF molecule with the X1 $^2\Pi_{1/2} \rightarrow$ X2 $^2\Pi_{3/2}$ transition ($\lambda=1210$~nm). The natural linewidth of the X2 state is $\Gamma_{\text{nat}}=2.7 \times 10^3$~s$^{-1}$~\cite{Das02}. For the PbF beam we adopt a transverse temperature of 1~K (e.g., in Ref.~\cite{Alm17} the transverse temperature of the supersonic YbF beam was reported to be about 1~K) and the transverse Doppler width $\Gamma_D= 4.5 \times 10^{7}$~s$^{-1}$. Our calculations give the following effective electric field values: $\mathcal{E}_{\text{eff}} (^2\Pi_{+1/2}) = 38$~GV/cm and $\mathcal{E}_{\text{eff}} (^2\Pi_{+3/2}) = 9.3$~GV/cm. One can adopt the achievable number density of PbF molecules as approximately $\rho\sim 10^{10}$~cm$^{-3}$. Then, according to \Eq{10}, the absorption length at dimensionless detuning $u=5$ is $L(u=5)\sim 2\times 10^{9}$~cm. In Table~\ref{table:1} the parameters of the transition under investigation in PbF are listed. 2) Consider the X $^1\Sigma_0\rightarrow$ H $^3\Delta_1$ transition ($\lambda=1810$~nm) in ThO. This transition lies in the infrared region. It is interesting to consider such a molecular system since the best constraint on the $e$EDM was obtained with ThO.
The natural linewidth of the metastable H state is $\Gamma_{\text{nat}}=5\times 10^{2}$~s$^{-1}$~\cite{ACME18}. The effective electric field for the H state was calculated in Refs.~\cite{Skripnikov:13c,Skripnikov:15a,Skripnikov:16b,Fleig:16}. For the ThO beam ($T=1$~K) the transverse Doppler width is $\Gamma_D= 2.9 \times 10^{7}$~s$^{-1}$. In Ref.~\cite{ACME14} the number density of the ThO molecular beam was reported to be about $\rho\sim (10^{10}-10^{11})$~cm$^{-3}$. We adopt the number density of ThO molecules as $\rho\sim 10^{10}$~cm$^{-3}$. Then, according to \Eq{10}, the absorption length at dimensionless detuning $u=5$ is $L(u=5)\sim 1\times 10^{10}$~cm. In Table~\ref{table:1} the parameters of the transition under investigation in ThO are listed. In the following sections we will investigate theoretically in more detail the ICAS-beam $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment on the PbF and ThO molecules with intensities near the saturation threshold. \section{Shot-noise limit and saturation limit} In this section we write the signal (R) to noise (F) ratio via the number of ``events'' $N_{\text{ev}}$: \begin{equation} \label{17} \frac{R}{F}=\frac{\psi N_{\text{ev}}}{2\pi \sqrt{N_{\text{ev}}}}= \frac{\psi\sqrt{N_{\text{ev}}}}{2\pi} . \end{equation} Here $\psi$ is the rotation angle of the light polarization plane. The number $N_{\text{ev}}$ in the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment should be defined as the number of photons which have interacted with molecules and were then detected. The total number of photons $N_{\text{phot}}$ involved in the experiment may be larger than, smaller than, or equal to the number of involved molecules $N_{\text{mol}}$. We will be interested in the case when $N_{\text{phot}}\gg N_{\text{mol}}$. For a shot-noise limited measurement, the condition $\frac{R}{F} > 1$ should be fulfilled. One way of collecting statistics is a continuous experiment with a cw laser.
The other way is to collect statistics in many sets (e.g. with a pulsed laser) if the condition $\frac{R}{F} > 1$ is not fulfilled during one set of the experiment. Nevertheless, after repeating the experiment $n$ times, the statistically improved signal-to-noise ratio \begin{equation} \label{17d} \frac{R}{F}=\frac{\psi\sqrt{nN_{\text{ev}}}}{2\pi}, \end{equation} can, in principle, be made arbitrarily large. This means that a shot-noise limited measurement without observation of the angle $\psi$ in any particular measurement is, in principle, also possible. In this case one should collect the statistics from many measurements. The same way of collecting statistics is used in the ACME experiments~\cite{ACME18}. For the shot-noise limited measurement we have to make the number $N_{\text{ev}}$ (i.e. the number of photons) as large as possible. However, this number is limited by the saturation effects. In section~\RNumb{4} we show that for the suggested experiments we do not reach this limit. For laser beams of high intensity the laws of nonlinear optics should be applied. The refractive index $n(\omega)$ depends on the intensity of the light $I(\omega)$ in the following way~\cite{Boyd}: \begin{equation} \label{18} n(\omega)=\frac{n_0(\omega)}{1+I(\omega)/I_{\text{sat}}(\omega)}, \end{equation} where $n_0(\omega)$ is the refractive index for weak light and $I_{\text{sat}}(\omega)$ is the saturation intensity. When the light intensity exceeds the saturation one, $I(\omega)> I_{\text{sat}}(\omega)$, both the absorption and the dispersion decrease. Equation~\Br{18} is derived within the two-level model of an atom (a molecule), which is valid for the resonant processes of our interest.
It is instructive to look at \Eq{18} from the point of view of the Einstein relations between the spontaneous and stimulated emission~\cite{Ber83}: \begin{equation} \label{20} W_{if}^{\text{st}}=W_{fi} = \frac{\pi^2c^2}{\hbar \omega^3} J(\omega) W_{if}^{\text{sp}}, \end{equation} where $W_{if}^{\text{sp}}$ is the spontaneous probability (transition rate) for the transition between the initial ($i$) and final ($f$) states (which can be approximated by the natural linewidth of the transition, $\Gamma_{\text{nat}}$), $W_{if}^{\text{st}}$ stands for the stimulated emission and $W_{fi}$ corresponds to the absorption probability. Equation~\Br{20} is written for polarized anisotropic (laser beam) radiation with frequency $\omega$, $J(\omega)d\omega=I(\omega)$. The dimensionless coefficient in front of $W_{if}^{\text{sp}}$ defines the ``number of photons in the field'' $N$. When a certain transition $i\rightarrow f$ is considered, $d\omega\sim \Gamma_{\text{nat}}$. Then the number $N$ actually defines the relative importance of the spontaneous and stimulated emission. If $N<1$ the spontaneous emission dominates; for $N>1$ the stimulated emission dominates. The condition $I=I_{\text{sat}}$ in \Eq{18}, according to Ref.~\cite{Boyd}, corresponds to $N\approx 1$ in \Eq{20}, i.e. the saturation intensity can be obtained from the condition $W_{if}^{\text{st}}\approx W_{if}^{\text{sp}}$. Taking into account the detuning from the resonance and the Doppler width, one can also represent the stimulated emission and absorption probabilities in terms of the absorption cross-section, as was done, for instance, in Ref.~\cite{Sieg86}: \begin{equation} \label{21} W_{if}^{\text{st}}=W_{fi} = \frac{\sigma(\omega)I(\omega)}{\hbar \omega}.
\end{equation} Then, the saturation intensity, which reduces the refractive index down to one-half, can be expressed as follows: \begin{equation} \label{22} I_{\text{sat}}= \frac{\hbar \omega}{\sigma \tau_s}, \end{equation} where $ \tau_s$ is the saturation time constant (the effective lifetime, or recovery time). It is the time for the molecules to become excited and to decay again. This time can be approximated as $ \tau_s\approx \left(\Gamma_{\text{nat}}\right)^{-1}$. As noted in Ref.~\cite{Sieg86}, a clear physical meaning of the saturation intensity follows from Eqs.~\Br{21}-\Br{22}: it corresponds to one photon incident on each atom or molecule, within its cross-section $\sigma$, per recovery time $ \tau_s$. Substituting the absorption cross-section from \Eq{10} into \Eq{22}, one obtains the expression for the saturation intensity: \begin{equation} \label{23} I_{\text{sat}}(\omega,u)= \frac{\hbar \omega^3 \Gamma_D}{\pi c^2 f(u,v)}. \end{equation} The most important feature is that for any intensity $I\geqslant I_{\text{sat}}$ the effect of saturation does not arise instantaneously; its formation takes the saturation time $t_{\text{sat}} \sim \left(W_{if}^{\text{st}} \right)^{-1}$. For an off-resonance measurement $t_{\text{sat}}$ can be large enough. It is interesting to compare the resonant and large-detuning cases in terms of the signal-to-noise ratio. The relevant figure of merit is the ratio \begin{equation} \label{26b} \frac{\psi(u=5) \sqrt{I_{\text{sat}}(u=5)}}{\psi(u=0) \sqrt{I_{\text{sat}}(u=0)}}\sim \frac{1}{v} \frac{l}{L(u=5)} \sqrt{\frac{u^2}{v}}. \end{equation} For the case of ThO, $v\sim 10^{-5}$, $L(u=5)\sim 10^5$~km (for $\rho\sim 10^{10}$~cm$^{-3}$). Then, for two existing cavities with achievable effective optical path lengths: 1) for $l=1$~km, according to \Eq{26b}, the ratio is $\sim 10^3$; 2) for $l=700$~km, according to \Eq{26b}, the ratio is $\sim 10^6$.
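The quoted orders of magnitude for the ratio in \Eq{26b} can be checked with a few lines of Python; the inputs are the ThO values from the text, and the result should be read as an order-of-magnitude estimate only:

```python
import math

# ThO parameters from the text (order-of-magnitude check of Eq. (26b))
gamma_nat = 5e2                   # natural linewidth, s^-1
gamma_D = 2.9e7                   # Doppler width, s^-1
v = gamma_nat / (2 * gamma_D)     # ~ 8.6e-6
u = 5.0                           # dimensionless detuning
L = 1e10                          # absorption length L(u=5), cm (~1e5 km)

def merit_ratio(l_cm):
    """Right-hand side of Eq. (26b): (1/v) * (l/L) * sqrt(u^2/v)."""
    return (1.0 / v) * (l_cm / L) * math.sqrt(u * u / v)

r_short = merit_ratio(1e5)   # l = 1 km optical path
r_long = merit_ratio(7e7)    # l = 700 km optical path
```

The results are consistent with the quoted $\sim 10^3$ and $\sim 10^6$.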
It follows that the large-detuning case has a great advantage over the resonant one. \section{ICAS-beam experiment with the number of photons larger than the number of molecules} For large detunings in an ICAS-beam experiment one can have the number of photons much larger than the number of molecules (as long as $I/I_{\text{sat}}\lesssim 1$), so the medium (the molecules) is by no means continuous and the saturation effects are of importance. The Beer-Lambert law and the standard optimization ($l=2L(\omega)$~\cite{Khrip91}) are no longer valid. To zeroth order, we can set $N_{\text{ev}} \approx N_{0}$ in \Eq{6}, where $N_{0}$ is the initial number of photons injected into the cavity. According to \Eq{6} and \Eq{8}, the expression for the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday rotation angle can be presented as: \begin{equation} \label{24} \psi(\omega)= (\rho l) \left[ \text{cm}^{-2} \right] K \left[ \frac{\text{cm}}{e} \right] d_e \left[e\;\text{cm} \right]. \end{equation} For the X1 $^2\Pi_{1/2}\rightarrow$ X2 $^2\Pi_{3/2}$ transition in the PbF molecule, $ K \approx 2 \times 10^3 \;\text{cm}/e $. In the scenario employed in this paper there is no optimal condition $l=2L(\omega)$. In principle, the optical path length is limited only by the quality of the mirrors in a cavity. For $\rho\sim 10^{10}$~cm$^{-3}$, $d_e \approx 1.1 \times 10^{-29}$~$e$~cm and the optical path length $l=1$~km (corresponding to the cavity~\cite{Boug14} intersected by a molecular beam of 1~cm in diameter), according to \Eq{24}, one obtains $ \psi\sim 2\times 10^{-11}$~rad. Under the same conditions but with the optical path length $l=700$~km (corresponding to the cavity~\cite{Baev99} intersected by a molecular beam of 1~cm in diameter), according to \Eq{24}, one obtains $ \psi\sim 10^{-8}$~rad. Another point to keep in mind is that the experiment cannot last longer than the saturation time.
However, in our scenario it is not the duration of the experiment but the transit time of a molecule through the laser beam that plays the key role. Let us estimate the saturation intensity for the transition under investigation in the PbF molecule. According to \Eq{23}, for $\omega=1.56\times 10^{15}$ s$^{-1}$, $\Gamma_D=4.5\times10^7$ s$^{-1}$, $\Gamma_{\text{nat}}=2.7\times 10^3$ s$^{-1}$ and $u=5$ one gets \begin{equation} \label{25} I_{\text{sat}}(u=5)=5.3 \times 10^3 \;\; \text{W/cm}^2. \end{equation} Such an intensity corresponds to the injection of $N\sim 3\times 10^{20}$ photons per second through a laser beam cross-section of 1 mm$^2$. Taking into account \Eq{20}, such a saturation intensity corresponds to the case when $W_{if}^{\text{st}}=W_{fi} \approx \Gamma_{\text{nat}}=2.7\times 10^3 \;\; \text{s}^{-1}$. The next question is: how many PbF molecules inside the crossing volume are in the excited (X2 $^2\Pi_{3/2}$) state if the laser intensity is equal to the saturation one? For simplicity, and without loss of generality, we consider the following statement of the problem and leave aside technical issues. The PbF molecular beam of 1~cm in diameter travels through a cavity of 1~m length in the transverse direction with the speed $v_{\text{mol}} \approx 300$~m/s. Continuous laser light of 1~mm in diameter at the saturation intensity is coupled to the cavity. Then the transit time of a PbF molecule through the laser beam is $\tau_{\text{tr}}\approx 10^{-5} $~s. The fraction of the molecules in the excited state for the case when $W_{fi}\tau_{\text{tr}} \ll 1$ can be estimated as \begin{equation} \label{26} (1-e^{-W_{fi}\tau_{\text{tr}}})\approx W_{fi}\tau_{\text{tr}}=\Gamma_{\text{nat}}\tau_{\text{tr}}\approx 0.03. \end{equation} This means that if the saturation intensity is coupled to the cavity, only 3\% of the total number of PbF molecules in the crossing volume will be in the excited state.
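The chain of PbF numbers above can be reproduced by a short stdlib-only script. As a sketch, it uses CGS units, the far-wing approximation $f(u,v)\approx v/u^2$ discussed earlier, and the transit time $\tau_{\text{tr}}\approx 10^{-5}$~s adopted in the text:

```python
import math

# Constants (CGS units)
HBAR = 1.0546e-27        # erg*s
C = 2.9979e10            # cm/s
ERG_PER_S_TO_W = 1e-7    # 1 erg/s = 1e-7 W

# PbF X1 -> X2 transition parameters from the text
omega = 1.56e15          # transition frequency, s^-1
gamma_nat = 2.7e3        # natural linewidth, s^-1
gamma_D = 4.5e7          # Doppler width, s^-1
u = 5.0
v = gamma_nat / (2 * gamma_D)
f_wing = v / u**2        # far-wing approximation f(u,v) ~ v/u^2

# Eq. (23): I_sat = hbar*omega^3*Gamma_D / (pi*c^2*f(u,v))
I_sat_cgs = HBAR * omega**3 * gamma_D / (math.pi * C**2 * f_wing)
I_sat_W = I_sat_cgs * ERG_PER_S_TO_W         # W/cm^2, compare Eq. (25)

# Photon injection rate through a 1 mm^2 laser cross-section
area = 0.01                                  # cm^2
photon_rate = I_sat_cgs * area / (HBAR * omega)   # photons per second

# Fraction of molecules excited during the transit time, Eq. (26)
tau_tr = 1e-5                                # s, value adopted in the text
excited_fraction = 1 - math.exp(-gamma_nat * tau_tr)
```

This reproduces $I_{\text{sat}}(u=5)\approx 5.3\times 10^3$~W/cm$^2$, the injection rate $\sim 3\times 10^{20}$ photons per second, and the $\approx 3\%$ excited-state fraction.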
Alternatively, one can define the saturation parameter $\kappa=$ excitation rate$(u)$/relaxation rate. The excitation rate$(u)$ is proportional to the intensity $I$. At the detuning, the ratio excitation rate$(u=5)$/excitation rate$(u=0)$ scales as $\sim f(u,v)/ f(0,v)\sim v/u^2$ (at the considered conditions $v/u^2$ is a small number). The choice of the saturation intensity $I_{\text{sat}}$ corresponds to $\kappa=1$. In this case one has $\sim 33\%$ of molecules in the excited state and $\sim 67\%$ of molecules in the ground state. This means that one does not ``bleach'' the molecules. However, in our proposal, since $1/\tau_{\text{tr}}>$ relaxation rate, we should define the saturation parameter as $\kappa=$ excitation rate$(u)\cdot\tau_{\text{tr}}$. As a result, in such a beam-based ICAS experiment, the number of detected photons can be increased by several orders of magnitude. For the ICAS-beam experiment with the ThO molecules, the coefficient $K$ in \Eq{24} is $K\approx 4\times 10^3\;\text{cm}/e$. Substituting the adopted parameters of ThO ($\omega=1.04\times 10^{15}$~s$^{-1}$, $\Gamma_D=2.9\times10^7$~s$^{-1}$, $\Gamma_{\text{nat}}=5\times 10^2$~s$^{-1}$ and $u=5$) into \Eq{23}, one obtains \begin{equation} \label{26a} I_{\text{sat}}(u=5)=3.5 \times 10^3 \;\; \text{W/cm}^2. \end{equation} This value corresponds to the injection of $N\sim 3\times 10^{20}$ photons per second through the laser cross-section of 1~mm$^2$. Estimating, similarly to the PbF case, the fraction of molecules in the excited state, one obtains $\Gamma_{\text{nat}} \tau_{\text{tr}}\approx 0.005$. That is, if the saturation intensity is coupled to the cavity, only 0.5\% of the total number of ThO molecules in the crossing volume will be in the excited state. This makes it possible to increase the intensity coupled to the cavity by an order of magnitude.
In this case, one has $N\sim 3\times 10^{21}$ photons per second through the laser cross-section $\sim$ 1 mm$^2$ and 5\% of the total number of ThO molecules in the excited state in the crossing volume. The next question concerns the fundamental noises which determine the statistical error of the experiment. The figure of merit for the fundamental noise-limited experiment on the molecular spin-precession observation (ACME-style) is as follows: \begin{equation} \label{26c} \delta d_e \sim \frac{1}{\mathcal{E}_{\text{eff}}}\frac{1}{\tau_{\text{coh}}} \frac{1}{\sqrt{\dot{N}_{\text{mol}}T}}, \end{equation} where $\tau_{\text{coh}}$ is the coherence time (a few ms), $\dot{N}_{\text{mol}}$ is the number of molecules supplied to the experiment by the molecular beam per unit time in the desired initial molecular state and $T$ is the time of the experiment. In the ACME experiment the statistics are determined by the number of molecules (molecular spin noise). The experiment is carried out on an excited state of the molecule with nonzero total angular momentum (spin). Contrary to this, in our proposed experiment we do not need to prepare molecules in the excited state. Our experiment is carried out with molecules in the ground (zero-spin) states. In the ground state of the ThO molecule there is no spin. In turn, the laser-excited state is optically inactive for this laser (provided decoherence effects are negligible). So there is no molecular spin noise in this case. Note that, taking into account the nuclear spin, the PbF molecule can also formally be considered a molecule with zero total angular momentum in the ground state \cite{Rav08}. The excited states are produced in small amounts ($\sim$1\%) during the experiment, and they also have no spin noise. Thus, the statistics in our case are defined by the number of detected photons. This number can be much larger than the number of molecules due to the large detuning.
The figure of merit for the fundamental noise-limited experiment in our proposed large-detuning case is as follows: \begin{equation} \label{26d} \delta d_e \sim \frac{1}{\mathcal{E}_{\text{eff}}} \frac{1}{\tau_{\text{coh}}} \frac{\Gamma_{\text{nat}} L(u)}{c} \frac{1}{\sqrt{\dot{N}_{\text{phot}}T}}, \end{equation} where $\dot{N}_{\text{phot}}$ is the number of detected photons per unit time and $\tau_{\text{coh}}=l/c$ is the coherence time in the optical rotation experiment. The factor $N_{\text{phot}}$ is the key difference between our proposal and the ACME-style experiments. \section{Cavity transmission in the ICAS-beam experiment} In this section we consider the cavity transmission $T_{\text{cav}}$, i.e., the transmission of the light determined by the properties of the mirrors. In principle, the sensitivity of the ICAS experiments strongly depends on the parameters of the mirrors. In this paper we only briefly consider the problem of the cavity transmission in the simplest model. Let us consider two identical mirrors with high reflectivity $R=1-\delta$, $\delta\approx (10^{-5}-10^{-7})$. The transmission of such an interferometer can be described as \cite{Mes}: \begin{equation} \label{27} T_{\text{cav}}=\frac{I_{\text{tr}}}{I_{\text{in}}}\approx \frac{4(1-R)^2(1-A/2)}{(2-2R+A)^2}, \end{equation} where $I_{\text{tr}}$ is the transmitted intensity, $I_{\text{in}}$ is the initial laser intensity and $A$ is the light intensity loss in the absorber during one round trip. For the case of the PbF beam, for one pass ($l\sim 1$~cm) and a detuning $u=5$, $A <\rho \sigma l\sim 10^{-9}$. Then $A\ll \delta$ and, in principle, can be neglected. In this case, $I_{\text{tr}} \approx I_{\text{in}}$. Note also that it is possible to choose an initial laser intensity such that the coupled intracavity intensity $I_{\text{int}}=I_{\text{tr}}/\delta$ reaches the saturation intensity. This means that the transmitted intensity is now $I_{\text{tr}}=I_{\text{sat}}\times \delta$.
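A minimal sketch of \Eq{27} illustrates that for $A\ll\delta$ the absorber leaves the transmission essentially unchanged; the cross-section value below is an illustrative number chosen to be consistent with the single-pass loss $A\sim 10^{-9}$ quoted above, not a calculated quantity:

```python
# Cavity transmission, Eq. (27): T = 4(1-R)^2 (1-A/2) / (2-2R+A)^2,
# with mirror reflectivity R = 1 - delta.
def t_cav(delta, A):
    R = 1.0 - delta
    return 4 * (1 - R)**2 * (1 - A / 2) / (2 - 2 * R + A)**2

# Illustrative single-pass absorber loss A = rho*sigma*l ~ 1e-9 (PbF, u=5)
rho, sigma, l_pass = 1e10, 1e-19, 1.0    # cm^-3, cm^2, cm
A = rho * sigma * l_pass
T_no_abs = t_cav(1e-5, 0.0)    # mirrors only: T = 1 in this model
T_with_abs = t_cav(1e-5, A)    # absorber included: essentially unchanged
```

Since $A\ll\delta$, the transmission stays at the percent-level of unity, consistent with $I_{\text{tr}}\approx I_{\text{in}}$ above.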
The photon shot-noise limit for an ideal polarimeter (see, e.g., the review~\cite{Bud02}) is \begin{equation} \label{28} \delta\psi\approx \frac{1}{2\sqrt{N_{\text{phot}}}}, \end{equation} where $N_{\text{phot}}$ is the number of detected photons. Consider two cases of the existing cavities: 1a) The cavity~\cite{Boug14} with $\delta\sim 10^{-5}$, according to \Eq{25}, for the PbF case gives $I_{\text{tr}}=I_{\text{sat}}\times \delta\approx 5.3\times 10^{-2}$~W/cm$^2$. It corresponds to the detection of $N_{\text{phot}}\sim 3\times 10^{15}$ photons per second. Then, in such experiments with the integration time of the order of two weeks $\sim 10^6$~s (such an observation time was used in the ACME experiments), the number of detected photons is $N_{\text{phot}}\sim 3\times 10^{21}$. According to \Eq{28}, in this case $\delta\psi\sim 10^{-11}$~rad. According to \Eq{24} for such a cavity and the recent ACME experimental bound on the $e$EDM value, $\psi \sim 10^{-11}$~rad. As a result, for such parameters PbF is a candidate to verify the recent ACME results via the alternative method. 1b) The cavity~\cite{Boug14} with $\delta\sim 10^{-5}$, for the ThO case, gives $N\times \delta\approx 3\times 10^{21}\times 10^{-5}\approx 3\times 10^{16}$ detected photons per second. Then, in such experiments with the integration time on the order of two weeks $\sim 10^6$~s, the number of detected photons is $N_{\text{phot}}\sim 3\times 10^{22}$. According to \Eq{28}, in this case $\delta\psi\sim 3\times 10^{-12}$~rad. According to \Eq{24} for such a cavity and the recent ACME experimental bound on the $e$EDM value, $\psi \sim 4 \times 10^{-11}$~rad. As a result, ThO is a good candidate for improving the $e$EDM bound by 1~order of magnitude. 2a) The cavity~\cite{Baev99} with $\delta\sim 10^{-7}$, according to \Eq{25}, for the PbF case gives $I_{\text{tr}}=I_{\text{sat}}\times \delta\approx 5.3\times 10^{-4}$~W/cm$^2$.
It corresponds to the detection of $N_{\text{phot}}\sim 3\times 10^{13}$ photons per second. Then, in such experiments with the integration time on the order of two weeks $\sim 10^6$~s, the number of detected photons is $N_{\text{phot}}\sim 3\times 10^{19}$. According to \Eq{28}, in this case $\delta\psi\sim 10^{-10}$~rad. According to \Eq{24} for such a cavity and the recent ACME experimental bound on the $e$EDM value, $\psi \sim 10^{-8}$~rad. As a result, PbF is a good candidate for improving the $e$EDM bound by 2~orders of magnitude in this case. 2b) The cavity~\cite{Baev99} with $\delta\sim 10^{-7}$, for the ThO case, gives $N\times \delta\approx 3\times 10^{21}\times 10^{-7}\approx 3\times 10^{14}$ detected photons per second. Then, in such experiments with the integration time on the order of two weeks $\sim 10^6$~s, the number of detected photons is $N_{\text{phot}}\sim 3\times 10^{20}$. According to \Eq{28}, in this case $\delta\psi\sim 3\times 10^{-11}$~rad. According to \Eq{24} for such a cavity and the recent ACME experimental bound on the $e$EDM value, $\psi \sim 3 \times 10^{-8}$~rad. As a result, ThO is a good candidate for improving the $e$EDM bound by 3~orders of magnitude in this case. To conclude this section, we comment on the possible ways of improving the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday signal-to-noise ratio. Note that, according to \Eq{8}, the rotation angle is proportional to $\psi \sim h(u,v)/\Gamma_D^2$. For large dimensionless detunings $u$ (e.g., $u=5$), $\psi\sim 1/(u \Gamma_D)^2$. For the case where the number of photons is much larger than the number of molecules (near the saturation threshold), one can neglect the absorption of photons. Then, it is no longer necessary to make such a large detuning. However, even at the detuning $u=1.5$ (the second extremum of the $h(u,v)$ function) where $h(u,v) \approx 0.5$, $\psi\sim (1/5) \times 1/\Gamma_D^2$.
Thus, the rotation angle is enhanced by a factor of $\sim 5$, but the shot noise (connected in our case with the saturation intensity \Eq{23}, which depends on $u$) grows by a factor of $\sim \sqrt{5}$. As a result, such a smaller detuning can improve the signal-to-noise ratio by no more than a factor of two. Obviously, increasing the number density of the molecular beam and increasing the optical path length (the quality of the mirrors) improve the signal-to-noise ratio. Note also that the value of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday rotation is determined, among other things, by the largest of the widths (natural, collisional, transit-time, Doppler, etc.). For instance, for the PbF molecular beam case, the largest width is the Doppler one ($\Gamma_{\text{nat}}=2.7 \times 10^3$~s$^{-1}$, the transit-time width $\Gamma_{\text{tr}}\sim 1/(2\pi\tau_{\text{tr}})\approx 1.6\times 10^4$~s$^{-1}$, $\Gamma_D= 4.5 \times 10^{7}$~s$^{-1}$). Finally, we would like to mention that with squeezed states of light the photon shot-noise limit can be surpassed, which would be favorable for the observation of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday effect. However, squeezed states of light have not yet found application in polarimetry. \section{Conclusions} The most advanced recent $e$EDM constraint, obtained in the experiment with ThO, is $|d_e|<1.1\times 10^{-29}$ $e$ cm. In this experiment, electron spin precession in an external electric field is employed and the effect is proportional to the time spent by a particular molecule in the electric field. In the present paper we suggest another method for the observation of such effects: a beam-ICAS $\mathcal{P}$,~$\mathcal{T}$-odd Faraday experiment with molecules. A theoretical simulation of the proposed experiment is based on the recently available ICAS parameters.
In this experiment it is not necessary to keep a separate molecule in an electric field, since the effect is accumulated in the laser beam, which encounters many molecules. According to our estimates for the PbF molecule, the current $e$EDM sensitivity can be improved by 1-2 orders of magnitude. In turn, for the ThO molecule the current $e$EDM sensitivity can be improved by 1-3 orders of magnitude. This implies testing of new particles at energies 1-2 orders of magnitude larger than the current best constraint. Making these predictions, we understand that some technical problems, not mentioned here, may arise. In this paper we did not discuss the possible systematic errors, among which stray magnetic fields, the uncontrolled ellipticity of the laser beam and the uncontrolled drift of the mirrors are the most evident. The problem of avoiding the $\mathcal{P}$-odd optical rotation, which is much stronger than the $\mathcal{P}$,~$\mathcal{T}$-odd rotation, also should be resolved. We hope to address all these problems in future studies. \begin{acknowledgments} Preparing the paper and the calculations of the $\mathcal{P}$,~$\mathcal{T}$-odd Faraday signals, as well as finding the optimal parameters for the experiment, were supported by the Russian Science Foundation grant 17-12-01035. L.V.S. acknowledges the support of the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS'' grant according to the research project No.~18-1-3-55-1. D.V.C. acknowledges the support of the President of the Russian Federation Stipend No.~SP-1213.2021.2. The authors would like to thank Dr. Dmitry Budker, Dr. Mikhail G. Kozlov, Dr. L. Bougas, Dr. Timur A. Isaev and Dr. Peter Rakitzis for helpful discussions. \end{acknowledgments}
\section{Introduction}\label{intro} In recent years, experimental advances in low-temperature physics have substantially increased the interest in the effects of noise and disorder in mesoscopic systems \cite{shakar,gustavo,gustavo3,zurek,sachdevbook,zvyagin, heyl}. In this paper we study a continuous quantum field theory with quenched disorder linearly coupled to a scalar field. In a classical situation, a random field can model a binary fluid in porous media \cite{broch,dierker}. When the binary-fluid correlation length is smaller than the porous radius, one has finite-size effects in the presence of a surface field. When the binary-fluid correlation length is bigger than the porous radius, the random porous medium can exert a random-field effect. In this case, the random field is linearly coupled with a classical field. The above example illustrates a general situation: inhomogeneous backgrounds and impurities can be modelled using random fields and random potentials within the formalism of continuous field theory. This issue raises fundamental questions regarding the role played by thermal, quantum and disorder-induced fluctuations in a situation close to a second-order phase transition. To drive the system to criticality there are two quite distinct situations. The first one is when thermal or disorder-induced fluctuations are dominant. The second one is when quantum and disorder-induced fluctuations prevail over the thermal fluctuations \cite{hertz}. In systems at low temperatures, in the regime when the fluctuations' intrinsic frequencies $\omega$ satisfy $\omega\gg \beta^{-1}$, quantum fluctuations dominate over thermal ones. In the case in which disorder-induced fluctuations prevail over thermal fluctuations, to investigate the low-temperature behavior of the system one can work in the imaginary-time formalism \cite{lebellac}.
The steps are the following: first, vacuum expectation values of operator products are continued analytically to imaginary time; then, on these analytically continued vacuum expectation values one imposes periodic boundary conditions in imaginary time. Finally, one can use functional methods where the finite-temperature Schwinger functions are moments of a measure on some functional space \cite{yaglom1}. In such a situation of low temperatures, the disorder is strongly correlated in imaginary time. This situation has been studied in the literature, mainly in the context of the random mass model \cite{mvojta,vojta2,gr,gr2}. The algebraic decay of the correlation functions for generic control parameter values is known as generic scale invariance. For spontaneously broken continuous symmetries, the presence of Goldstone modes is a signature of generic scale invariance, known as direct generic scale invariance. Nevertheless, this is not the only way to produce generic scale invariance; for a review see Ref. \cite{belitz}. In the case of discrete symmetry, the presence of quenched disorder also leads to generic scale invariance. Such behavior is in agreement with Garrido \textit{et al.} \cite{garrido}, who claim that a necessary, but not sufficient, condition for generic scale invariance is an anisotropic system. Some years later, Vespignani and Zapperi \cite{zapperi} showed that the breakdown of locality is essential to generic scale invariance. In Ref. \cite{gustavo2} the authors proved that anisotropic disorder is a source of generic scale invariance. The purpose of this paper is to generalize the results of Ref. \cite{gustavo2} to a Euclidean quantum $O(N)$ model with $N=2$. At low temperatures the system is in the ordered phase, with quantum and disorder-induced fluctuations prevailing over thermal fluctuations. In such a situation, we prove the appearance of indirect generic scale invariance.
To find the averaged free energy, i.e., the generating functional of connected correlation functions, we use the distributional zeta-function method \cite{distributional,distributional2,zarro1,zarro2,polymer,1haw,spin-glass}. After averaging the free energy over the disorder, we obtain a series representation in terms of the moments of the partition function. Due to the strong correlation of the disorder in imaginary time, a non-local contribution appears in each moment of the partition function. We circumvent the nagging nonlocality by using the formalism of fractional derivatives \cite{rr}. We prove, in the one-loop approximation, that the system can make a transition from the ordered to the disordered phase by quantum and disorder-induced fluctuations. We show that, below the critical temperature of the pure system, with the bulk in the ordered phase, there exists a large number of critical temperatures that take each of these moments from an ordered to a disordered phase. This situation shares some similarities with the Griffiths-McCoy phase, when in a quantum disordered system there appear finite-size spatial regions in the disordered phase with the bulk of the system in the spontaneously broken, ordered, phase \cite{gri1,gri2}. The structure of this paper is as follows. In Sec. \ref{sec:disoderedLG} we discuss the $O(N)$ scalar field theory for $N=2$. In Sec. \ref{sec:frompathtoSPDE2} we study this model coupled with quenched disorder. We use a series representation for the averaged free energy, the generating functional of connected correlation functions. In Sec. \ref{sec:thermal mass2} we discuss in the one-loop approximation the effects of disorder in the broken-symmetry phase. We give our conclusions in Sec. \ref{sec:conclusions}. We use units $\hbar=c=k_{B}=1$ throughout the paper.
\section{The Euclidean complex scalar Field}\label{sec:disoderedLG} The action functional $S(\chi^{*},\chi)$ for an Euclidean complex scalar field at finite temperature in the imaginary time formalism \cite{cb,jackiw, rep} is given by \begin{align}\label{eq:1} &S(\chi^{*},\chi)=\int_{0}^{\beta}d\tau\int d^{d}x \left[\chi^{*}(\tau,\mathbf{x})\left( -\frac{\partial^{2}}{\partial\tau^{2}}-\Delta+\mu_{0}^{2}\right)\chi(\tau,\mathbf{x})+\frac{\lambda }{4}| \chi^{*}(\tau,\mathbf{x}) \chi(\tau,\mathbf{x})|^{2} \right], \end{align} where $\beta$ is the reciprocal of the temperature, the symbol $\Delta$ denotes the Laplacian in $\mathbb{R}^{d}$, and $\lambda$ and $\mu_{0}^{2}$ are, respectively, the bare coupling constant and the squared mass of the model. We omit subscripts indicating unrenormalized field and physical parameters, mass and coupling constant. The perturbative renormalization consists in the introduction of additive counterterms with coefficients $Z_{1}$, $\delta m^{2}$ and $Z_{2}$ to absorb the divergences in those quantities. The partition function is defined by the functional integral \begin{equation} Z=\int\left[ d\chi\right] \left[ d\chi^{*}\right] \,\,\exp\bigl(-S(\chi^{*},\chi)\bigr), \end{equation} where $\left[ d\chi\right] \left[ d\chi^{*}\right]$ is a functional measure and the field variables satisfy the periodicity conditions $\chi(0,\mathbf{x})=\chi(\beta,\mathbf{x})$ and $\chi^{*}(0,\mathbf{x})=\chi^{*}(\beta,\mathbf{x})$. The generating functional of correlation functions is defined by introducing an external complex source $j(\tau,\mathbf{x})$ linearly coupled to the field. We are interested in computing the ground state of the system in the situation where the $O(2)$ symmetry is spontaneously broken. To access it, we replace $\mu^2_0$ by $-\mu^2_0$ in Eq. (\ref{eq:1}) and work with the Cartesian representation of the complex field $\chi(\tau,\mathbf{x})$.
We define the real fields $\phi_{1}(\tau,\mathbf{x})$ and $\phi_{2}(\tau,\mathbf{x})$ such that \begin{equation}\label{eq:ezk33} \chi(\tau,\mathbf{x})=\frac{1}{\sqrt{2}}\bigl[\phi_{1}(\tau,\mathbf{x})+i\phi_{2}(\tau,\mathbf{x})\bigr] \end{equation} and \begin{equation}\label{eq:ezk333} \chi^{*}(\tau,\mathbf{x})=\frac{1}{\sqrt{2}}\bigl[\phi_{1}(\tau,\mathbf{x})-i\phi_{2}(\tau,\mathbf{x})\bigr]. \end{equation} The potential contribution to the action functional, $V(\phi_{1},\phi_{2})$, is given in terms of these real fields by: \begin{align} \label{eq:effectivehamiltonianaaa} V(\phi_{1},\phi_{2})=\left[-\frac{\mu^{2}_{0}}{2}\bigl(\phi_{1}^{2}+\phi_{2}^{2}\bigr) +\frac{\lambda}{16}\bigl(\phi_{1}^{2}+\phi_{2}^{2}\bigr)^{2}\right]. \end{align} The $O(2)$ symmetry corresponds to the invariance of the action under rotations in the $(\phi_{1},\phi_{2})$ plane of the real fields. The minima of $V(\phi_{1},\phi_{2})$, for $\mu_{0} > 0$ and $\lambda>0$, are on the circle with squared radius $v^{2}$, $\phi_{1}^{2}(\tau,\mathbf{x})+\phi_{2}^{2}(\tau,\mathbf{x}) = v^{2}$, where $v^{2} = \frac{4\mu^2_{0}}{\lambda}$. There is an infinite number of degenerate ground states. This is the standard situation of spontaneous symmetry breaking.
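The location of the degenerate minima can be verified symbolically. A minimal sketch (Python with SymPy; the radial variable $r^{2}=\phi_{1}^{2}+\phi_{2}^{2}$ and the variable names are ours) recovers $v^{2}=4\mu_{0}^{2}/\lambda$:

```python
import sympy as sp

mu, lam, r = sp.symbols('mu lambda r', positive=True)

# Radial form of the potential: V(r) = -mu^2 r^2 / 2 + lambda r^4 / 16
V = -mu**2 * r**2 / 2 + lam * r**4 / 16

crit = sp.solve(sp.diff(V, r), r)       # nontrivial stationary point r > 0
v_squared = sp.simplify(crit[0]**2)
print(v_squared)                        # 4*mu**2/lambda

# Second-derivative test: the circle r^2 = 4 mu^2 / lambda is a minimum
assert sp.diff(V, r, 2).subs(r, crit[0]).simplify() > 0
```

By $O(2)$ invariance, any point on this circle is a ground state, which is the degeneracy described above.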
Defining $\varphi(\tau,\mathbf{x}) = \phi_{1}(\tau,\mathbf{x})-v$ and $\psi(\tau,\mathbf{x})=\phi_{2}(\tau,\mathbf{x})$, the action functional in terms of $\varphi$ and $\psi$, $S(\varphi,\psi)$, is given by \begin{align} \label{eq:effectivehamiltonian33} S(\varphi,\psi) &= \int_{0}^{\beta}d\tau\int d^{d}x\, \Biggl[\frac{1}{2}\varphi(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial\tau^{2}}-\Delta + m_{0}^{2}\right) \varphi(\tau,\mathbf{x}) +\lambda_{0}\Bigl(\varphi^{2} +\psi^{2}\Bigr)^{2} \nonumber \\ &+\frac{1}{2}\psi(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial\tau^{2}}-\Delta\right) \psi(\tau,\mathbf{x}) +\rho_{0}\varphi(\tau,\mathbf{x}) \Bigl(\varphi^{2}+\psi^{2}\Bigr) \Biggr], \end{align} where we defined $m_{0}^{2}=2\mu_{0}^{2}$, $\lambda_{0} = \lambda/{16}$ and $\rho_{0} = \lambda v/{4}$. In the one-loop approximation it is possible to show that temperature restores the symmetry. In this case the ground state is unique. The presence of Goldstone modes is a source of direct generic scale invariance. However, we will go further and prove that the disorder field is able to generate indirect generic scale invariance. In the next section we introduce disorder in the system and discuss its effects on the restoration of the spontaneously broken $O(2)$ symmetry. \section{Euclidean complex scalar fields in Disordered Media}\label{sec:frompathtoSPDE2} In this section we discuss the behavior of complex fields in disordered media. For the case of a statistical field theory with external randomness, one defines the action and has to find the quenched free energy, or the average of the generating functional of connected correlation functions in the presence of the disorder~\cite{englert, lebo,lebowitz}. There are different ways to perform this average. Examples are the replica trick \cite{re,emery}, the dynamics approach \cite{dominicis, zip}, and the supersymmetry technique \cite{efe1}.
Here we use the distributional zeta-function method \cite{distributional,distributional2,zarro1,zarro2,polymer,1haw,spin-glass}. In a general situation, a disordered medium can be modelled by a real random field $\xi(\mathbf{x})=\xi_{\omega}(\mathbf{x})$ in $\mathbb{R}^{d}$ with $\mathbb{E}[\xi(\mathbf{x})]=0$ and covariance $\mathbb{E}[\xi(\mathbf{x})\xi(\mathbf{y})]$, where $\mathbb{E}[...]$ means average over an ensemble of realizations, i.e., over the parameters $\omega$ characterising the disorder. In the case of a complex random field, the generalization is straightforward \cite{yaglom}. Let us consider a complex random field of real variables $h(x_{1},x_{2},...,x_{d})\equiv h(\mathbf{x})$. In the general situation we have: \begin{equation} \mathbb{E}[h(\mathbf{x})]=m(\mathbf{x}),\,\,\,\,\,\mathbb{E}[h(\mathbf{x})h^{*}(\mathbf{y})]=B(\mathbf{x},\mathbf{y}), \end{equation} where $m(\mathbf{x})$ and $B(\mathbf{x},\mathbf{y})$ are, respectively, the first and second moments of the random field. For simplicity we assume that the first moment is zero and the random field is delta correlated. Therefore the probability distribution of the disorder field is written as $[dh][dh^{*}]\,P(h,h^{*})$, where \begin{equation} P(h,h^{*})=p_{0}\,\exp\Biggl(-\frac{1}{2\,\varrho^{2}}\int\,d^{d}x|h(\mathbf{x})|^{2}\Biggr), \label{dis2} \end{equation} in which $\varrho$ is a positive parameter associated with the disorder and $p_{0}$ is a normalization constant. In this case, we have a delta-correlated disorder, i.e., $\mathbb{E}[{h^{*}(\mathbf{x})h(\mathbf{y})}]=\varrho^{2}\delta^{d}(\mathbf{x}-\mathbf{y})$. Here $[dh][dh^{*}]$ is a functional measure, with $[dh]=\prod_{\mathbf{x}} dh(\mathbf{x})$. The action functional in the presence of the complex disorder field is given by \begin{align} &S(\chi,\chi^{*},h,h^{*})=S(\chi,\chi^{*}) + \, \int_{0}^{\beta} d\tau\int d^{d}x\,\Bigl(h(\mathbf{x})\chi^{*}(\tau,\mathbf{x})+h^{*}(\mathbf{x})\chi(\tau,\mathbf{x})\Bigr).
\end{align} For examples of complex disordered fields, see Refs. \cite{tou,sham}. One introduces the functional $Z(j,j^{*},h,h^{*})$, the disorder generating functional of correlation functions, i.e., the generating functional of correlation functions for one disorder realization, where $j(\tau,\mathbf{x})$ is an external complex source. As in the pure system case, one can define an average free energy as the average over the ensemble of all realizations of the disorder: \begin{equation} \mathbb{E}\left[W(j,j^{*})\right] =\! \int\! [dh][dh^{*}]P(h,h^{*})\ln Z(j,j^{*},h,h^{*}). \label{eq:disorderedfreeenergy} \end{equation} The distributional zeta-function method computes this average of the free energy as follows. For a general disorder probability distribution, one defines the distributional zeta-function $\Phi(s)$: \begin{equation} \Phi(s)=\int [dh][dh^{*}]P(h,h^{*})\frac{1}{Z(j,j^{*},h,h^{*})^{s}}, \hspace{0.15cm}s\in \mathbb{C}, \label{pro1} \vspace{.2cm} \end{equation} from which one obtains $\mathbb{E}\left[W(j,j^{*})\right]$ as \begin{equation} \mathbb{E}\bigl[W(j,j^{*})\bigr] = - (d/ds)\Phi(s)|_{s=0^{+}}, \,\,\,\,\,\,\,\,\,\, \Re(s) \geq 0. \end{equation} \iffalse where one defines the complex exponential $n^{-s}=\exp(-s\log n)$, with $\log n\in\mathbb{R}$.
\fi Next, one uses Euler's integral representation for the gamma function $\Gamma(s)$ to write $Z^{-s}$ as \begin{equation} \frac{1}{Z^s} = \frac{1}{\Gamma(s)} \int^\infty_0 dt \, t^{s-1} \, e^{-Z \, t}. \end{equation} Then, one breaks this $t$ integral into two integrals, one from $0$ to $a$ and another from $a$ to $\infty$, where $a$ is an arbitrary dimensionless real number, and expands the exponential into a power series in $t$ so that one can write the quenched free energy as \begin{align} \mathbb{E}\bigl[W(j,j^{*})\bigr] = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}a^{k}}{k\, k!}\, \mathbb{E}\,[Z^k(j,j^{*})] - \gamma - \ln(a) + R(a,j,j^{*}), \label{m23d} \end{align} \noindent where $\gamma$ is the Euler-Mascheroni constant~\cite{abramowitz}, $\mathbb{E}\,[Z^{k}(j,j^{*})]$ is the $k$-th moment of the partition function: \begin{align} \hspace{-0.25cm}\mathbb{E}\,[Z^{\,k}(j,j^{*})] &= \int\,\prod_{i=1}^{k}[d\varphi_{i}^{(k)}]\prod_{j=1}^{k}[d\psi_{j}^{(k)}] \exp\biggl(-S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)},j_{i}^{(k)},j_{j}^{(k)})\biggr), \label{aa11} \end{align} in which the action $S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)})$ describes the field theory of $k$-field multiplets, and \begin{align} R(a,j,j^{*})=&-\int[dh] [dh^{*}]P(h,h^{*}) \int_{a}^{\infty}\,\dfrac{dt}{t}\, \exp\Bigl(-Z(j,j^{*},h,h^{*})t\Bigr). \end{align} For large $a$, $|R(a,j,j^{*})|$ goes to zero exponentially, as shown in Ref. \cite{distributional}. Therefore, the dominant contribution to the average free energy is given by the moments of the partition function of the model. The effective action $S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)},j_{i}^{(k)},j_{j}^{(k)})$ is a sum of local and nonlocal terms with respect to the $\tau$ integral; we present them shortly ahead.
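The structure of this expansion can be tested in a zero-dimensional toy model, where $Z$ is simply a positive number. Using the standard series of the exponential integral, $E_{1}(x)=-\gamma-\ln x+\sum_{k\ge1}(-1)^{k+1}x^{k}/(k\,k!)$, one has $\ln z=\sum_{k\ge1}(-1)^{k+1}(az)^{k}/(k\,k!)-\gamma-\ln a-E_{1}(az)$, with the $E_{1}$ term playing the role of the remainder. A short numerical check (Python with SciPy; the function name is ours):

```python
import math
import numpy as np
from scipy.special import exp1  # E_1(x), the exponential integral

def log_from_moments(z, a=1.0, kmax=40):
    """Zero-dimensional analogue of the moment expansion of the free energy:
    ln z = sum_{k>=1} (-1)^{k+1} (a z)^k / (k k!) - gamma - ln a - E1(a z)."""
    series = sum((-1) ** (k + 1) * (a * z) ** k / (k * math.factorial(k))
                 for k in range(1, kmax + 1))
    return series - np.euler_gamma - math.log(a) - exp1(a * z)

for z in (0.5, 1.0, 3.0):
    assert abs(log_from_moments(z) - math.log(z)) < 1e-10
print("moment expansion of ln z verified")
```

For large $a$ the remainder $E_{1}(az)$ is exponentially small, which is the zero-dimensional counterpart of the exponential bound on $|R(a,j,j^{*})|$ quoted above.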
The result of these steps is that, after the coarse-graining procedure with a reduced description of the disordered degrees of freedom, one gets collective variables that are multiplets of fields in the moments of the partition function. To proceed, we absorb $a$ in the functional measure and, following Klein and Brout, we define the augmented partition function $\mathcal{Z}(j,j^{*})$: \begin{align} \ln \mathcal{Z}(j,j^{*})=\sum_{k=1}^{\infty} c(k)\,\mathbb{E}\,[(Z(j,j^{*},h,h^{*}))^{\,k}], \label{m23e} \end{align} where $c(k)=(-1)^{k+1}/(k\, k!)$. Our purpose is to discuss the ordered phase of the model. Due to a non-local contribution to the effective action, we restrict the discussion to equal fields in a given multiplet, that is: $\varphi^{(k)}_{i}(\mathbf{x})=\varphi^{(k)}_{j}(\textbf{x})$, $\psi^{(k)}_{i}(\mathbf{x}) = \psi^{(k)}_{j}(\textbf{x})$ $\forall \,i,\,j$ in the function space. Likewise, we take $j_{i}^{(k)}(\textbf{x}) = j_{l}^{(k)}(\textbf{x})$. Therefore, all the terms of the series in Eq.~\eqref{m23e} have the same structure.
The effective action contains a local ($S_{\rm eff, L}$) and a non-local ($S_{\rm eff, NL}$) contribution, \begin{align} &S_{\textrm{eff,L}}\left(\varphi_{i}^{(k)},j_{i}^{(k)}\right) = \frac{1}{2}\int_{0}^{\beta}d\tau\int d^{\,d}x \, \sum_{i=1}^{k} \Biggl\{ \varphi_{i}^{(k)}(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial \tau^{2}} - \Delta+m_{0}^{2}\right)\varphi_{i}^{(k)} \nonumber\\ &+ \, \psi_{i}^{(k)}(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial \tau^{2}} - \Delta\right)\psi_{i}^{(k)}(\tau,\mathbf{x}) + \rho_0 \varphi_{i}^{(k)}(\tau,\mathbf{x}) \left[\bigl(\varphi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 + \bigl(\psi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 \right] \nonumber \\ &+ \lambda_0\left[\left(\varphi_{i}^{(k)}(\tau,\mathbf{x})\right)^2 + \bigl(\psi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 \right]^2 \Biggr\}, \label{SeffA} \end{align} \begin{align} S_{\textrm{eff,NL}}\left(\varphi_{i}^{(k)},\psi_{i}^{(k)}\right) &= -\frac{\varrho^{2}}{2\beta^2}\int_{0}^{\beta} d\tau\int_{0}^{\beta} d\tau'\int d^{d}x \, \sum_{i,j=1}^{k} \left[\varphi_{i}^{(k)}(\tau,\mathbf{x}) \varphi_{j}^{(k)}(\tau',\mathbf{x}) \right.\nonumber \\ &\left. +\, \psi_{i}^{(k)}(\tau,\mathbf{x}) \psi_{j}^{(k)}(\tau',\mathbf{x}) \right]. \label{SeffB} \end{align} Here, we defined $\varphi'^{\,(k)}_{i}(\mathbf{x})= \frac{1}{\sqrt{k}}\varphi^{(k)}_{i}(\mathbf{x})$, $\psi'^{\,(k)}_{i}(\mathbf{x})= \frac{1}{\sqrt{k}}\psi^{(k)}_{i}(\mathbf{x})$, $\lambda'_{0}=\lambda_{0}k$ and $\rho'_{0}=\rho_{0}k$. As for the pure system, the field variables satisfy periodicity conditions in imaginary time: $\varphi_{i}^{(k)}(0,\mathbf{x})=\varphi_{i}^{(k)}(\beta,\mathbf{x})$ and $\psi_{i}^{(k)}(0,\mathbf{x})=\psi_{i}^{(k)}(\beta,\mathbf{x})$. In the context of the replica trick, efforts were made to compute the contribution of the non-local action, in the replica-symmetric case \cite{kirkpatrick} and in the replica-symmetry-breaking case; see, for example, Refs. \cite{goldschmidt, azimi}.
We would like to point out that the restriction of equal fields in each multiplet is to avoid a technical problem raised by the non-local contribution. In the case of a local action one can perform the calculations choosing different fields in each multiplet in the functional space. However, in the context of the distributional zeta-function method, the authors of Ref. \cite{gustavo2} show how the fractional derivative can deal naturally with the non-local contribution. Through the Fourier transform, for a generic fractional derivative, we get $\mathcal{F}\bigl[\frac{d^{\mu}g(x)}{d|x|^{\mu}}\bigr]=-|k|^{\mu}g(k)$, for $1\leq\mu<2$. The non-local contribution will appear in the Fourier representation of the Matsubara modes as $\frac{2\pi|n|}{\beta}$. \section{Disorder and quantum effects in the one-loop contribution to the renormalized mass}\label{sec:thermal mass2} The purpose of this section is to discuss the renormalized squared mass at very low temperatures. Since we use a regularization procedure where the Matsubara modes appear, we call it the thermal mass, although we are working at very low temperatures, in the spontaneously broken symmetry phase. We compute in the one-loop approximation the thermal mass in the $k$-th moment of the partition function. A Goldstone mass cannot be generated in perturbation theory. Therefore, we concentrate on the mass of the non-Goldstone mode. There are two kinds of loops which give the first nontrivial contributions at the one-loop level; those with one insertion will be referred to as tadpoles and those with two insertions as self-energies.
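The spectral rule quoted above, $\mathcal{F}\bigl[d^{\mu}g/d|x|^{\mu}\bigr]=-|k|^{\mu}g(k)$, can be implemented directly with a fast Fourier transform. The sketch below (Python with NumPy; grid sizes are illustrative) checks that for $\mu=2$ it reproduces the ordinary second derivative of a Gaussian, $g''(x)=(x^{2}-1)e^{-x^{2}/2}$:

```python
import numpy as np

n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
g = np.exp(-x**2 / 2)

def riesz_derivative(f, mu):
    """Spectral fractional derivative: multiply by -|k|^mu in Fourier space."""
    return np.fft.ifft(-np.abs(k) ** mu * np.fft.fft(f)).real

# For mu = 2 this is the ordinary second derivative of the Gaussian
exact = (x**2 - 1) * np.exp(-x**2 / 2)
err = np.max(np.abs(riesz_derivative(g, 2.0) - exact))
print(f"max deviation at mu = 2: {err:.2e}")
assert err < 1e-8
```

For $1\le\mu<2$ the same multiplication by $-|k|^{\mu}$ defines the non-local operator; in the Matsubara representation the analogous role is played by the $2\pi|n|/\beta$ term.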
Below the critical temperature, one can write the square of the renormalized mass associated with the $k$-th moment field $\varphi_{i}^{(k)}(\tau,\mathbf{x})$ as: \begin{align} m_{R}^{2}(\beta,\varrho,k) &= m_{0}^{2} + \delta m^{2}_{0}(k) + \, \lambda_{0} \Bigl[ 12I_{1}(\beta,\varrho,k) + \, 2I_{1}(\beta,\varrho,k)|_{m^{2}_{0}=0} \Bigr] \nonumber \\ &- \, \rho_{0}^{2} \Bigl[ 18I_{1}(\beta,\varrho,k) +18I_{2}(\beta,\varrho,k) + \, 2I_{2}(\beta,\varrho,k)|_{m^{2}_{0}=0} + 6I_{1}(\beta,\varrho,k)|_{m_{0}^{2}=0} \Bigr] \end{align} where $\delta m_{0}^{2}(k)$ is the usual mass counterterm that must be introduced in the renormalization procedure for the $k$-th-moment fields, the multiplicative numbers are symmetry factors, $I_1$ gives the tadpole contribution \begin{align} &I_{1}(\beta,\varrho,k)=\frac{1}{2L(2\pi)^{d}} \int\,\prod_{i=1}^{d}dq_{i}\sum_{n\in \mathbb{Z}}\Biggl(q_{1}^{2}+...+q_{d}^{2} +\biggl(\frac{2\pi n}{\beta}\biggr)^{2}+\frac{2\pi|n|}{\beta}\,k\varrho^{2}+m_{0}^{2} \Biggr)^{-1}, \end{align} and $I_2$ is the self-energy contribution \begin{align} I_{2}(\beta,\varrho,k)=\frac{1}{2L(2\pi)^{d}} \int\,\prod_{i=1}^{d}dq_{i}\sum_{n\in \mathbb{Z}}\Biggl(q_{1}^{2}+...+q_{d}^{2} +\biggl(\frac{2\pi n}{\beta}\biggr)^{2}+\frac{2\pi|n|}{\beta}\,k\varrho^{2}+m_{0}^{2} \Biggr)^{-2}. \end{align} Note that we have infrared divergences for Nambu-Goldstone virtual loops. There are different ways to deal with those divergences~\cite{s1,s2,s3}. Here, we employ an analytic regularization procedure~\cite{dim1,dim2,physica,bbb,ca12,ca13,ca14,ca15}. First, we define $\lambda(\mu,s)=\lambda_{0}(\mu^{2})^{s-1}$. Then, we define $\rho_{1}(\mu,s)=\rho_{0}(\mu^{2})^{s-1}$ and $\rho_{2}(\mu,s)=\rho_{0}(\mu^{2})^{s-2}$, where $\mu$ has mass dimension. Let us discuss first the $I_{1}(\beta,\varrho,k)$ integral. We perform the angular part of the integral over the continuous momenta.
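The angular integration just performed replaces $\int d^{d}q$ by $\Omega_{d}\int_{0}^{\infty}dp\,p^{d-1}$, with solid angle $\Omega_{d}=2\pi^{d/2}/\Gamma(d/2)$. A quick numerical sanity check of this step for $d=3$ on a Gaussian integrand (Python with mpmath; illustrative only):

```python
import mpmath as mp

d = 3
# Solid angle Omega_d = 2 pi^{d/2} / Gamma(d/2); equals 4*pi for d = 3
omega_d = 2 * mp.pi ** (mp.mpf(d) / 2) / mp.gamma(mp.mpf(d) / 2)

# Radial form of the Gaussian integral int d^3q e^{-|q|^2} = pi^{3/2}
radial = omega_d * mp.quad(lambda p: p ** (d - 1) * mp.exp(-p ** 2), [0, mp.inf])
assert mp.almosteq(radial, mp.pi ** (mp.mpf(d) / 2))
print(radial, mp.pi ** mp.mpf('1.5'))
```

The same angular factor is absorbed into the overall prefactor of the representation that follows.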
$I_{1}(\beta,\varrho,k)$ can be written as the analytical continuation of $I(\beta,\varrho,k,s)$ with $s\in \mathbb{C}$: \begin{align} I(\beta,\varrho,k,s)=\frac{\beta}{2^{d+2}\pi^{\frac{d}{2}+1}\Gamma\bigl(\frac{d}{2}\bigr)} \int_{0}^{\infty}dp\,p^{d-1} \, \sum_{n\in \mathbb{Z}}\Biggl(\pi n^{2}+\frac{\beta}{2}k\varrho^{2}|n|+\frac{\beta^{2}}{4\pi}\Bigl(p^{2}+m_{0}^{2}\Bigr) \Biggr)^{-s}, \end{align} which converges for $\Re(s)>s_{0}$. Specifically, $I_1(\beta,\varrho,k)$ is given by the analytic continuation $I(\beta,\varrho,k,s)|_{s=1}$. Similarly, the finite part of the self-energy contribution is given by the analytical continuation $I_{2}(\beta,\varrho,k)=I(\beta,\varrho,k,s)|_{s=2}$. To proceed, we define the dimensionless quantity $r^{2}={\beta^{2}p^{2}}/{4\pi}$. After a Mellin transform, and performing the~$r$~integral, $I(\beta,\varrho,k,s)$ is given by: \begin{align} &I(\beta,\varrho,k,s)=\frac{1}{8\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\sum_{n\in \mathbb{Z}}\exp{\Biggl[-\biggl(\pi\,n^{2}+ \frac{\beta}{2}k\varrho^{2}|n|+ \frac{m_{0}^{2}\beta^{2}}{4\pi}\biggr)t\Biggr]}. \end{align} Note that we are assuming at this point that $m_{0}^{2}\neq 0$. Let us split the summation into~$n=0$ and~$n\neq 0$ contributions: \begin{equation} I(\beta,\varrho,k,s)= I(\beta,\varrho,k,s)|_{n=0}+I(\beta,\varrho,k,s)|_{n\neq 0}. \end{equation} The $n=0$ contribution is given by: \begin{align} I(\beta,\varrho,k,s)|_{n=0}&=\frac{1}{8\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}A(s,d), \end{align} where $A(s,d)$ is \begin{align} A(s,d)=\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr). \end{align} The integral $A(s,d)$ is defined for $\Re(s)>\frac{d}{2}$, and can be analytically continued to $\Re(s)>\frac{d}{2}-1$ for $s\neq \frac{d}{2}$.
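This continuation can be checked numerically: subtracting the $t\to0$ behavior on $(0,1)$ and adding back the pole term reproduces $\Gamma(z)\,b^{-z}$ with $z=s-\frac{d}{2}$ and $b=m_{0}^{2}\beta^{2}/4\pi$, also for $-1<\Re(z)<0$, where the original integral diverges. A sketch (Python with mpmath; the function name and sample values are ours):

```python
import mpmath as mp

def A_reg(z, b):
    """Regularized int_0^inf dt t^{z-1} e^{-b t}: converges for Re(z) > -1,
    z != 0, and equals the analytic continuation Gamma(z) b^{-z}."""
    part1 = mp.quad(lambda t: t ** (z - 1) * mp.expm1(-b * t), [0, 1])
    part2 = mp.quad(lambda t: t ** (z - 1) * mp.exp(-b * t), [1, mp.inf])
    return part1 + part2 + 1 / z

b = mp.mpf('0.8')                          # stands for m_0^2 beta^2 / (4 pi)
for z in (mp.mpf('0.6'), mp.mpf('-0.3')):  # inside and beyond the convergence strip
    assert mp.almosteq(A_reg(z, b), mp.gamma(z) * b ** (-z), rel_eps=mp.mpf('1e-8'))
print("A_R reproduces Gamma(z) b^{-z} on Re(z) > -1")
```

Here `mp.expm1` computes $e^{-bt}-1$ without cancellation near $t=0$, which is exactly the subtraction performed in the identity below.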
Using the identity \begin{align} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)&=\int_{0}^{1}dt\, t^{s-\frac{d}{2}-1} \Biggl[\exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)-1\Biggr]\nonumber \\ &+\int_{1}^{\infty}dt\, t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)+\frac{1}{\bigl(s-\frac{d}{2}\bigr)}, \end{align} which is valid for $\Re(s)>\frac{d}{2}$, one sees that for $\Re(s)>\frac{d}{2}-1$ and $s\neq \frac{d}{2}$ the right-hand side exists and defines a regularization of the original integral, which we denote by $A_{R}(s,d)$. The contribution $I(\beta,\varrho,k,s)|_{n\neq 0}$ is written as \begin{align} &I(\beta,\varrho,k,s)|_{n\neq 0}=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \, \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\, \sum_{n=1}^{\infty}\exp{\Biggl[-\pi t\biggl(n^{2}+ \frac{k \beta\varrho^{2}}{2\pi}n+\frac{m_{0}^{2}\beta^{2}}{4\pi^{2}} \biggr)\Biggr]}. \end{align} Next, we make use of the properties of the Hurwitz zeta function $\zeta(z,a)$ to handle this integral. $\zeta(z,a)$ is defined as \begin{equation} \zeta(z,a)=\sum_{n=0}^{\infty}\frac{1}{\bigl(n+a\bigr)^{z}}, \,\,\,\,\,\,\,a\neq 0,-1,-2,\ldots, \end{equation} for $z \in \mathbb{C}$. The series converges absolutely for $\Re(z)>1$. It is possible to find the analytic continuation, with a simple pole at $z=1$. Inspired by the Hurwitz zeta function, one defines the generalized Hurwitz zeta function $Z(z,a)$ such that \begin{equation} Z(z,a)= \sum_{n=1}^{\infty}\frac{1}{\Bigl(\omega_{n}^{(k)}+a\Bigr)^{z}}, \end{equation} for $a\notin (-\infty, -\omega_{n}^{(k)}]$ and $z \in \mathbb{C}$.
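The passage from the heat-kernel representation to $Z(z,a)$ rests on the term-by-term Mellin identity $\int_{0}^{\infty}dt\,t^{z-1}e^{-(\omega_{n}+a)t}=\Gamma(z)\,(\omega_{n}+a)^{-z}$. A truncated numerical check with $\omega_{n}=\pi n^{2}+cn$ (Python with mpmath; the values of $c$, $a$, $z$ are illustrative stand-ins, $c$ playing the role of $\beta\varrho^{2}k/2$):

```python
import mpmath as mp

c, a, z = mp.mpf('0.4'), mp.mpf('1.1'), mp.mpf('1.5')
omega = lambda n: mp.pi * n ** 2 + c * n   # omega_n^{(k)} with c = beta*rho^2*k/2

N = 20   # truncation of the n-sum; the terms decay like n^{-2z}
lhs = sum(mp.quad(lambda t: t ** (z - 1) * mp.exp(-(omega(n) + a) * t), [0, mp.inf])
          for n in range(1, N + 1))
rhs = mp.gamma(z) * sum((omega(n) + a) ** (-z) for n in range(1, N + 1))
assert mp.almosteq(lhs, rhs, rel_eps=mp.mpf('1e-8'))
print(lhs, rhs)
```

Summing the right-hand side over all $n\ge1$ gives exactly $\Gamma(z)\,Z(z,a)$, which is the step taken next in the text.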
Since in our case $a=\frac{m_{0}^{2}\beta^{2}}{4\pi}$ and $\omega_{n}^{(k)}=\pi n^{2} +\frac{1}{2}\beta\varrho^{2} n k$, one can write \begin{align} I(\beta,\varrho,&k,s)|_{n\neq 0}=\frac{1}{4\pi}\biggl(\frac{1}{\beta}\biggr)^{d-1}\, \Gamma\biggl(s-\frac{d}{2}\biggr)\bigl(\Gamma(s)\bigr)^{-1} \sum_{n=1}^{\infty}\frac{1}{\Bigl(\omega_{n}^{(k)}+a\Bigr)^{s-\frac{d}{2}}}. \end{align} Let us discuss $I(\beta,\varrho,k,s)|_{n\neq 0}$ further, observing that a more general proof using generalized Hurwitz zeta functions is based on the fact that a zeta-function regularization with a meromorphic extension to the whole complex plane needs an eligible sequence of numbers \cite{voros}. Therefore, in the series representation for the free energy with $k=1,2,\ldots$, the moments of the partition function such that $k_{(q)}\leq \lfloor\bigl(\frac{2\pi q}{\beta}\bigr)\frac{2}{\varrho^{2}}\rfloor$, where the $\bigl(\frac{2\pi q}{\beta}\bigr)$ are the positive Matsubara frequencies $\omega_{q}$, are critical for each $q\in\mathbb{N}$. This is an interesting result, since there are critical moments in the series representation for the free energy, after averaging the quenched disorder. Substituting the above result for $k_{(q)}= \lfloor\bigl(\frac{2\pi q}{\beta}\bigr)\frac{2}{\varrho^{2}}\rfloor$, one gets that $I(\beta,q,s)|_{n\neq 0}$ can be written as \begin{align} I(\beta,q,s)|_{n\neq 0 } &= \frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\exp\left[ -\pi t\left(\frac{m_{0}^{2}\beta^{2}}{4\pi^{2}}-q^{2}\right) \right] \nonumber\\ &\times\sum_{n=1}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q \bigr)^{2}\Bigr]}.
\end{align} Finally, a simple calculation shows that choosing $q=q_{0}$ with $q_{0}=\lfloor\frac{m_{0}\beta}{2\pi}\rfloor$, the quantity $I(\beta,q,s)|_{n\neq 0}$ is given by \begin{align} &I(\beta,q_{0},s)|_{n\neq 0} = \frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \sum_{n=1}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q_{0} \bigr)^{2}\Bigr]}. \end{align} This simplification allows one to write $I(\beta,q_{0},s)|_{n\neq 0}$ as \begin{align} &I(\beta,q_{0},s)|_{n\neq 0}=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \Biggl[\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \sum_{n=0}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q_{0} \bigr)^{2}\Bigr]}-A_{R}(s,d)\Biggr]. \end{align} Using the Hurwitz zeta function and the integral $A_{R}(s,d)$ we can write \begin{align} &I(\beta,q_{0},s)|_{n\neq 0}=\frac{1}{4\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\Biggl[\frac{1}{\pi^{s-\frac{d}{2}+1}} \Gamma\biggl(s-\frac{d}{2}\biggr)\zeta(2s-d,q_{0})-\frac{1}{\pi}A_{R}(s,d)\Biggr]. \end{align} The contribution coming from the loops with Nambu-Goldstone bosons can be calculated assuming $m^{2}_{0}=0$. All of these contributions must be regularized in the lower limit of the integrals. The Mellin transform of the contribution from the Nambu-Goldstone bosons is \begin{align} G(\beta,\varrho,k,s)|_{n\neq 0}&=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\lim_{m_0 \to 0} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\nonumber \\ &\times \sum_{n=1}^{\infty}\exp{\Biggl[-\pi t\biggl(n^{2}+ k_{(q_0)} \frac{\beta\varrho^{2}}{2\pi}n + m_0^2\biggr)\Biggr]}. \end{align} Using the same regularization procedure we used to control infrared divergences, we obtain $[G(\beta,\varrho,k,s)|_{n\neq 0}]_{R}$. One can show that at low temperatures the contribution coming from $[G(\beta,\varrho,k,s)|_{n\neq 0}]_{R}$ is negligible. Now we will prove that for a fixed $q_{0}$ the renormalized squared mass vanishes for a family of values of $\beta$.
There are many critical temperatures where the renormalized squared mass is zero. We get \begin{align} m_{R}^{2}(\beta,q_{0})&=m_{0}^{2}+\delta m_{0}^{2}+c_{1}\biggl(I(\beta,s=1)|_{n=0}\biggr)+c_{1} \biggl(I(\beta,q_{0},s=1)|_{n\neq 0}\biggr) \nonumber\\ &+c_{2}\biggl(I(\beta,s=2)|_{n=0}\biggr)+c_{2} \biggl(I(\beta,q_{0},s=2)|_{n\neq 0}\biggr), \end{align} where $c_{1}=12\lambda(\mu,s)-18\rho_{1}^2(\mu,s)$ and $c_{2}=18\rho^{2}_{2}(\mu,s)$. Defining the dimensionless quantity $b = m_0 \beta$, we write \begin{align} \frac{b^{d-1}}{m_0^{d-3}} &- \frac{c_1}{8\pi}A_{R}(1,d) + \frac{c_{2}}{8\pi}A_{R}(2,d) +\frac{c_1}{4\pi^{2-\frac{d}{2}}}\Gamma\left(1-\frac{d}{2}\right)\zeta\left(2-d,\frac{b}{2\pi}\right) \nonumber \\ &-\frac{c_{2}}{4\pi^{3-\frac{d}{2}}}\Gamma\left(2-\frac{d}{2}\right)\zeta\left(4-d,\frac{b}{2\pi}\right)+\delta m_{0}^{2}=0. \end{align} Let us discuss the important case where $d=3$. We get \begin{align} &b^{2} - \frac{c_1}{8\pi}A_{R}(1,3) + \frac{c_{2}}{8\pi\mu^2}A_{R}(2,3) -\frac{c_1}{2} \zeta\left(-1,\frac{b}{2\pi}\right) -\frac{c_{2}}{4\pi}\lim_{d \to 3}\zeta\left(4-d,\frac{b}{2\pi}\right)+\delta m_{0}^{2} =0. \end{align} A formula that is relevant in the renormalization procedure is \begin{equation} \lim_{z\rightarrow 1}\biggl[\zeta(z,a)-\frac{1}{z-1}\biggr]=-\psi(a), \label{psi} \end{equation} where $\psi(a)$ is the digamma function, defined as $\psi(z)=\frac{d}{dz}\ln\Gamma(z)$. The contribution coming from $A_R(s,d)$ is irrelevant for large $m_{0}\beta$. Using the identity $(n+1)\zeta(-n,a) = -B_{n+1}(a)$, where the $B_{n+1}(a)$ are the Bernoulli polynomials, we rewrite the Hurwitz zeta function as \begin{align} \zeta\left(-1,\frac{b}{2\pi}\right) &= -\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right). \end{align} Using Eq. (\ref{psi}), we fix the counterterm contribution in the renormalization procedure.
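The special-function identities invoked here are standard and easy to verify numerically; the following sketch (mpmath assumed, with an illustrative value of $a=b/2\pi$) checks the Bernoulli-polynomial form of $\zeta(-1,a)$, the limit in Eq. (\ref{psi}), and the digamma recurrence and small-argument expansion used in the next step:

```python
import mpmath as mp

mp.mp.dps = 30   # extra precision for the z -> 1 limit below

a = mp.mpf('0.3')   # stands for b / (2 pi); illustrative value

# zeta(-1, a) = -B_2(a)/2 = -(a^2/2 - a/2 + 1/12)
assert abs(mp.zeta(-1, a) + mp.bernpoly(2, a) / 2) < mp.mpf('1e-25')
assert abs(mp.zeta(-1, a) + (a**2 / 2 - a / 2 + mp.mpf(1) / 12)) < mp.mpf('1e-25')

# lim_{z -> 1} [zeta(z, a) - 1/(z - 1)] = -psi(a)
z = 1 + mp.mpf('1e-12')
assert abs(mp.zeta(z, a) - 1 / (z - 1) + mp.digamma(a)) < mp.mpf('1e-8')

# digamma facts: recurrence psi(x+1) = psi(x) + 1/x and small-argument
# behaviour psi(alpha) = -1/alpha - gamma + zeta(2) alpha + O(alpha^2)
x = mp.mpf('0.37')
assert abs(mp.digamma(x + 1) - mp.digamma(x) - 1 / x) < mp.mpf('1e-25')
alpha = mp.mpf('1e-3')
approx = -1 / alpha - mp.euler + mp.zeta(2) * alpha
assert abs(mp.digamma(alpha) - approx) < mp.mpf('1e-4')
```

Note that $\zeta(2)=\pi^{2}/6$, which is the coefficient of the linear term in $\alpha$ appearing below.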
Then we have \begin{align} &b^{2} + \frac{c_1}{2}\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right) +\frac{c_{2}}{4\pi}\psi\left(\frac{b}{2\pi}\right)\,=0. \end{align} Recognizing that $q_0=\lfloor\frac{b}{2\pi}\rfloor$, we can write the digamma function as \begin{align} \psi(q_0 + \alpha) = \psi(\alpha) + \sum_{q=1}^{q_0}\frac{1}{\alpha + q}\,, \end{align} where $\alpha$ is the non-integer part of $\frac{b}{2\pi}$. For $\alpha <1$ we can use a Taylor series and write \begin{align}\label{eq:b} & b^{2} + \frac{c_1}{2}\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right) + \, \frac{c_{2}}{4\pi}\left(-\frac{1}{\alpha} - \gamma + \frac{\pi^2}{6}\alpha + H_{q_0}^{(1)} + \alpha H_{q_0}^{(2)}\right)=0, \end{align} where $H_{q_0}^{(1)}$ and $H_{q_0}^{(2)}$ are generalized harmonic numbers. The above equation has zeroes for different values of $\beta$. \begin{figure}[ht!] \centering\includegraphics[scale=1]{./figure_1.pdf} \caption{Plot of Eq. (\ref{eq:b}) as a function of $b = m_0\beta$ for two different values of $\lambda$ (with $\rho_0^2 = \mu_0^2\lambda/4$): $\lambda = 1$ (continuous black) and $\lambda = 15$ (dashed red). We set $\mu^2=m_0^2$.}\label{fig:b} \end{figure} Eq. (\ref{eq:b}) and Fig. \ref{fig:b} are manifestations of indirect generic scale invariance. In summary, we have proved in the one-loop approximation that, in the set of moments that defines the quenched free energy, there is a denumerable collection of moments that can develop local critical behavior. Even in the situation where the bulk is in the ordered phase, temperature effects lead those moments from the ordered to a locally disordered phase. This is the main result of this paper. \section{Conclusions} \label{sec:conclusions} Recent experimental and theoretical advances have increased activity in low temperature physics and quantum phase transitions.
The intersection of these two areas of research, the physics of quenched disordered systems and of low temperatures, leads to the following question: what is the effect of randomness on the restoration of a spontaneously broken continuous symmetry at low temperatures? For spontaneously broken continuous symmetries, the presence of Goldstone modes is a signature of direct generic scale invariance. Here we discussed the consequences of introducing a random field in a system described by a complex field prepared in the ordered phase at low temperatures, which can be interpreted as the emergence of generic scale invariance. For a discrete symmetry the same behavior was obtained in Ref. \cite{gustavo2}. In the one-loop approximation it was proved there that, with the bulk in the ordered phase, there is a denumerable set of moments that lead the system to this critical regime, and at these moments a large number of critical temperatures appear.
We study the case where the phase transition is governed mainly by quantum and disorder-induced fluctuations. The limiting situation, when thermal fluctuations are absent, is the case of quantum phase transitions. In this case the ground states of systems change in some fundamental way, tuned by non-thermal control parameters of the systems. Note that in the low-temperature behavior of the system, the disorder is strongly correlated in imaginary time. Using the distributional zeta-function method, after averaging the disorder under a coarse-graining, a non-local contribution appears in each effective action. In the one-loop approximation we discuss the effects of the disorder fluctuations on the restoration of the continuous broken symmetry by quantum and temperature effects. There are two results in this work. The first is that the contribution coming from the Nambu-Goldstone loops is irrelevant to drive the phase transition. This is an expected result: in the one-loop approximation the criticality is obtained from the contribution coming from the thermal mass, and the Goldstone thermal mass cannot be generated in perturbation theory. The second is that, in the one-loop approximation, we proved that with the bulk in the ordered phase, there is a denumerable set of moments that lead the system to this critical regime; at these moments a large number of critical temperatures appear. This is an indication of indirect generic scale invariance in the system. A natural continuation is to show that the result holds for higher-order loops or even in a non-perturbative regime, using a composite operator formalism \cite{cjt,pettini,gino}. \begin{acknowledgments} This work was partially supported by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico - CNPq, 305894/2009-9 (G.K.), 303436/2015-8 (N.F.S.), INCT F\'{\i}sica Nuclear e Apli\-ca\-\c{c}\~oes, 464898/2014-5 (G.K) and Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo - FAPESP, 2013/01907-0 (G.K).
G.O.H thanks Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de Nivel Superior - CAPES for a PhD scholarship. \end{acknowledgments} \section{Introduction}\label{intro} In recent years, experimental advances in low temperature physics have substantially increased the interest in the effects of noise and disorder in mesoscopic systems \cite{shakar,gustavo,gustavo3,zurek,sachdevbook,zvyagin, heyl}. In this paper we study a continuous quantum field theory with quenched disorder linearly coupled to a scalar field. In a classical situation, a random field can model a binary fluid in porous media \cite{broch,dierker}. When the binary-fluid correlation length is smaller than the pore radius, one has finite-size effects in the presence of a surface field. When the binary-fluid correlation length is bigger than the pore radius, the porous medium can exert a random-field effect. In this case, the random field is linearly coupled with a classical field. The above example illustrates a general situation: inhomogeneous backgrounds and impurities can be modelled using random fields and random potentials within the formalism of continuous field theory. This issue raises fundamental questions regarding the role played by thermal, quantum and disorder-induced fluctuations in a situation close to a second-order phase transition. To drive the system to criticality there are two quite distinct situations. The first one is when thermal or disorder-induced fluctuations are dominant. The second one is when quantum and disorder-induced fluctuations prevail over the thermal fluctuations \cite{hertz}. In systems at low temperatures, in the regime where the fluctuations' intrinsic frequencies $\omega$ satisfy $\omega\gg \beta^{-1}$, quantum fluctuations dominate over thermal ones.
In the case in which disorder-induced fluctuations prevail over thermal fluctuations, to investigate the low temperature behavior of the system one can work in the imaginary time formalism \cite{lebellac}. The steps are the following: first, vacuum expectation values of operator products are continued analytically to imaginary time; then, on these analytically continued vacuum expectation values one imposes periodic boundary conditions in imaginary time. Finally, one can use functional methods, where the finite temperature Schwinger functions are moments of a measure on some functional space \cite{yaglom1}. In such a situation of low temperatures, the disorder is strongly correlated in imaginary time. This situation has been studied in the literature, mainly in the context of the random mass model \cite{mvojta,vojta2,gr,gr2}. The algebraic decay of the correlation functions for generic values of the control parameters is known as generic scale invariance. For spontaneously broken continuous symmetries, the presence of Goldstone modes is a signature of generic scale invariance, known as direct generic scale invariance. Nevertheless, this is not the only way to produce generic scale invariance; for a review see Ref. \cite{belitz}. In the case of discrete symmetry, the presence of quenched disorder also leads to generic scale invariance. Such behavior is in agreement with Garrido \textit{et al.} \cite{garrido}, who claim that a necessary, but not sufficient, condition for generic scale invariance is an anisotropic system. Some years later, Vespignani and Zapperi \cite{zapperi} showed that the breakdown of locality is essential to generic scale invariance. In Ref. \cite{gustavo2} the authors proved that anisotropic disorder is a source of generic scale invariance. The purpose of this paper is to generalize the results of Ref. \cite{gustavo2} to a Euclidean quantum $O(N)$ model with $N=2$.
At low temperatures the system is in the ordered phase, with quantum and disorder-induced fluctuations prevailing over thermal fluctuations. In such a situation, we prove the appearance of indirect generic scale invariance. To find the averaged free energy, or generating functional of connected correlation functions, we use the distributional zeta-function method \cite{distributional,distributional2,zarro1,zarro2,polymer,1haw,spin-glass}. After averaging the free energy over the disorder, we obtain a series representation in terms of the moments of the partition function. Due to the strong correlation of the disorder in imaginary time, a non-local contribution appears in each moment of the partition function. We circumvent the nagging nonlocality by using the formalism of fractional derivatives \cite{rr}. We prove, in the one-loop approximation, that the system can make a transition from the ordered to the disordered phase by quantum and disorder-induced fluctuations. We show that, below the critical temperature of the pure system, with the bulk in the ordered phase, there exists a large number of critical temperatures that take each of these moments from an ordered to a disordered phase. This situation shares some similarities with the Griffiths-McCoy phase, where in a quantum disordered system there appear finite-size spatial regions in the disordered phase with the bulk of the system in the spontaneously broken, ordered, phase \cite{gri1,gri2}. The structure of this paper is as follows. In Sec. \ref{sec:disoderedLG} we discuss the $O(N)$ scalar field theory for $N=2$. In Sec. \ref{sec:frompathtoSPDE2} we study this model coupled with quenched disorder. We use a series representation for the averaged free energy, the generating functional of connected correlation functions. In Sec. \ref{sec:thermal mass2} we discuss in the one-loop approximation the effects of disorder in the broken-symmetry phase. We give our conclusions in Sec. \ref{sec:conclusions}.
We use the units $\hbar=c=k_{B}=1$ throughout the paper. \section{The Euclidean complex scalar Field}\label{sec:disoderedLG} The action functional $S(\chi^{*},\chi)$ for an Euclidean complex scalar field at finite temperature in the imaginary time formalism \cite{cb,jackiw, rep} is given by \begin{align}\label{eq:1} &S(\chi^{*},\chi)=\int_{0}^{\beta}d\tau\int d^{d}x \left[\chi^{*}(\tau,\mathbf{x})\left( -\frac{\partial^{2}}{\partial\tau^{2}}-\Delta+\mu_{0}^{2}\right)\chi(\tau,\mathbf{x})+\frac{\lambda }{4}\bigl| \chi^{*}(\tau,\mathbf{x}) \chi(\tau,\mathbf{x})\bigr|^{2} \right], \end{align} where $\beta$ is the reciprocal of the temperature, the symbol $\Delta$ denotes the Laplacian in $\mathbb{R}^{d}$, and $\lambda$ and $\mu_{0}^{2}$ are respectively the bare coupling constant and the squared mass of the model. We omit subscripts indicating unrenormalized field and physical parameters, mass and coupling constant. The perturbative renormalization consists in the introduction of additive counterterms with coefficients $Z_{1}$, $\delta m^{2}$ and $Z_{2}$ to absorb the divergences in those quantities. The partition function is defined by the functional integral \begin{equation} Z=\int\left[ d\chi\right] \left[ d\chi^{*}\right] \,\,\exp\bigl(-S(\chi^{*},\chi)\bigr), \end{equation} where $\left[ d\chi\right] \left[ d\chi^{*}\right]$ is a functional measure and the field variables satisfy the periodicity conditions $\chi(0,\mathbf{x})=\chi(\beta,\mathbf{x})$ and $\chi^{*}(0,\mathbf{x})=\chi^{*}(\beta,\mathbf{x})$. The generating functional of correlation functions is defined by introducing an external complex source $j(\tau,\mathbf{x})$ linearly coupled to the field. We are interested in computing the ground state of the system in the situation where the $O(2)$ symmetry is spontaneously broken. To access it, we replace $\mu^2_0$ by $-\mu^2_0$ in Eq. (\ref{eq:1}) and work with the Cartesian representation of the complex field $\chi(\tau,\mathbf{x})$.
We define the real fields $\phi_{1}(\tau,\mathbf{x})$ and $\phi_{2}(\tau,\mathbf{x})$ such that \begin{equation}\label{eq:ezk33} \chi(\tau,\mathbf{x})=\frac{1}{\sqrt{2}}\bigl[\phi_{1}(\tau,\mathbf{x})+i\phi_{2}(\tau,\mathbf{x})\bigr] \end{equation} and \begin{equation}\label{eq:ezk333} \chi^{*}(\tau,\mathbf{x})=\frac{1}{\sqrt{2}}\bigl[\phi_{1}(\tau,\mathbf{x})-i\phi_{2}(\tau,\mathbf{x})\bigr]. \end{equation} The potential contribution to the action functional, $V(\phi_{1},\phi_{2})$, is given in terms of these real fields by \begin{align} \label{eq:effectivehamiltonianaaa} V(\phi_{1},\phi_{2})=\left[-\frac{\mu^{2}_{0}}{2}\bigl(\phi_{1}^{2}+\phi_{2}^{2}\bigr) +\frac{\lambda}{16}\bigl(\phi_{1}^{2}+\phi_{2}^{2}\bigr)^{2}\right]. \end{align} The $O(2)$ symmetry corresponds to the invariance of the action under rotations in the $(\phi_{1},\phi_{2})$ plane of the real fields. For $\mu_{0} > 0$ and $\lambda>0$, the minima of $V(\phi_{1},\phi_{2})$ lie on the circle of squared radius $v^{2}$, $\phi_{1}^{2}(\tau,\mathbf{x})+\phi_{2}^{2}(\tau,\mathbf{x}) = v^{2}$, where $v^{2} = \frac{4\mu^2_{0}}{\lambda}$. There is an infinite number of degenerate ground states. This is the standard situation of spontaneous symmetry breaking.
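As a quick symbolic check (a sketch assuming sympy), writing $r^{2}=\phi_{1}^{2}+\phi_{2}^{2}$ as a radial variable, stationarity of the potential indeed gives the quoted squared radius:

```python
import sympy as sp

# Minima of V = -(mu0^2/2) r2 + (lam/16) r2^2 in the radial variable
# r2 = phi1^2 + phi2^2: stationarity gives r2 = v^2 = 4 mu0^2 / lam.
mu0, lam, r2 = sp.symbols('mu0 lam r2', positive=True)
V = -mu0**2 / 2 * r2 + lam * r2**2 / 16
crit = sp.solve(sp.diff(V, r2), r2)
assert len(crit) == 1
assert sp.simplify(crit[0] - 4 * mu0**2 / lam) == 0
```

Since the potential depends on the fields only through $r^{2}$, every point of this circle is a degenerate minimum, as stated above.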
Defining $\varphi(\tau,\mathbf{x}) = \phi_{1}(\tau,\mathbf{x})-v$ and $\psi(\tau,\mathbf{x})=\phi_{2}(\tau,\mathbf{x})$, the action functional in terms of $\varphi$ and $\psi$, $S(\varphi,\psi)$, is given by \begin{align} \label{eq:effectivehamiltonian33} S(\varphi,\psi) &= \int_{0}^{\beta}d\tau\int d^{d}x\, \Biggl[\frac{1}{2}\varphi(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial\tau^{2}}-\Delta + m_{0}^{2}\right) \varphi(\tau,\mathbf{x}) +\lambda_{0}\Bigl(\varphi^{2} +\psi^{2}\Bigr)^{2} \nonumber \\ &+\frac{1}{2}\psi(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial\tau^{2}}-\Delta\right) \psi(\tau,\mathbf{x}) +\rho_{0}\varphi(\tau,\mathbf{x}) \Bigl(\varphi^{2}+\psi^{2}\Bigr) \Biggr], \end{align} where we defined $m_{0}^{2}=2\mu_{0}^{2}$, $\lambda_{0} = \lambda/16$ and $\rho_{0} = \lambda v/4$. In the one-loop approximation it is possible to show that temperature restores the symmetry; in this case the ground state is unique. The presence of the Goldstone modes is a source of direct generic scale invariance. However, we will go further and prove that the disorder field is able to generate indirect generic scale invariance. In the next section we introduce disorder in the system and discuss its effects on the restoration of the spontaneously broken $O(2)$ symmetry. \section{Euclidean complex scalar fields in Disordered Media}\label{sec:frompathtoSPDE2} In this section we discuss the behavior of complex fields in disordered media. For the case of a statistical field theory with external randomness, one defines the action and has to find the quenched free energy, that is, the average over the disorder of the generating functional of connected correlation functions~\cite{englert, lebo,lebowitz}. There are different ways to perform this average. Examples are the replica trick \cite{re,emery}, the dynamics approach \cite{dominicis, zip}, and the supersymmetry technique \cite{efe1}.
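The couplings $m_{0}^{2}=2\mu_{0}^{2}$, $\rho_{0}=\lambda v/4$ and $\lambda_{0}=\lambda/16$ quoted above follow from expanding the potential around the shifted vacuum; a short symbolic sketch (sympy assumed), with $\mu_{0}^{2}$ eliminated through $v^{2}=4\mu_{0}^{2}/\lambda$:

```python
import sympy as sp

# Expand V around phi1 = v + varphi, phi2 = psi, using mu0^2 = lam v^2 / 4.
# Expected: (1/2) m0^2 varphi^2 with m0^2 = 2 mu0^2 = lam v^2 / 2, a massless
# psi (Goldstone) mode, cubic coupling rho0 = lam v/4, quartic lam0 = lam/16.
lam, v = sp.symbols('lam v', positive=True)
vp, ps = sp.symbols('varphi psi')
mu0_sq = lam * v**2 / 4

V = -mu0_sq / 2 * ((v + vp)**2 + ps**2) + lam / 16 * ((v + vp)**2 + ps**2)**2
Vexp = sp.expand(V)

c_vp2 = Vexp.coeff(vp, 2).coeff(ps, 0)       # varphi^2 coefficient
c_ps2 = Vexp.coeff(ps, 2).coeff(vp, 0)       # psi^2 coefficient
c_cubic = Vexp.coeff(ps, 2).coeff(vp, 1)     # varphi psi^2 coefficient
c_quartic = Vexp.coeff(vp, 4).coeff(ps, 0)   # varphi^4 coefficient

assert sp.simplify(c_vp2 - lam * v**2 / 4) == 0   # = m0^2 / 2
assert c_ps2 == 0                                  # massless Goldstone mode
assert sp.simplify(c_cubic - lam * v / 4) == 0     # rho0
assert sp.simplify(c_quartic - lam / 16) == 0      # lam0
```

The vanishing $\psi^{2}$ coefficient is the Goldstone mode of the broken $O(2)$ symmetry, and the linear term in $\varphi$ cancels at the minimum.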
Here we use the distributional zeta-function method \cite{distributional,distributional2,zarro1,zarro2,polymer,1haw,spin-glass}. In a general situation, a disordered medium can be modelled by a real random field $\xi(\mathbf{x})=\xi_{\omega}(\mathbf{x})$ in $\mathbb{R}^{d}$ with $\mathbb{E}[\xi(\mathbf{x})]=0$ and covariance $\mathbb{E}[\xi(\mathbf{x})\xi(\mathbf{y})]$, where $\mathbb{E}[...]$ means the average over an ensemble of realizations, i.e., over the parameters $\omega$ characterising the disorder. In the case of a complex random field, the generalization is straightforward \cite{yaglom}. Let us consider a complex random field of $d$ real variables, $h(x_{1},x_{2},...,x_{d})\equiv h(\mathbf{x})$. In the general situation we have \begin{equation} \mathbb{E}[h(\mathbf{x})]=m(\mathbf{x}),\,\,\,\,\,\mathbb{E}[h(\mathbf{x})h^{*}(\mathbf{y})]=B(\mathbf{x},\mathbf{y}), \end{equation} where $m(\mathbf{x})$ and $B(\mathbf{x},\mathbf{y})$ are respectively the first and the second moments of the random field. For simplicity we assume that the first moment is zero and the random field is delta correlated. Therefore the probability distribution of the disorder field is written as $[dh][dh^{*}]\,P(h,h^{*})$, where \begin{equation} P(h,h^{*})=p_{0}\,\exp\Biggl(-\frac{1}{2\,\varrho^{2}}\int\,d^{d}x|h(\mathbf{x})|^{2}\Biggr), \label{dis2} \end{equation} with $\varrho$ a positive parameter associated with the disorder and $p_{0}$ a normalization constant. In this case, we have a delta correlated disorder, i.e., $\mathbb{E}[{h^{*}(\mathbf{x})h(\mathbf{y})}]=\varrho^{2}\delta^{d}(\mathbf{x}-\mathbf{y})$. Here $[dh][dh^{*}]$ is a functional measure, with $[dh]=\prod_{\mathbf{x}} dh(\mathbf{x})$. The action functional in the presence of the complex disorder field is given by \begin{align} &S(\chi,\chi^{*},h,h^{*})=S(\chi,\chi^{*}) + \, \int_{0}^{\beta} d\tau\int d^{d}x\,\Bigl(h(\mathbf{x})\chi^{*}(\tau,\mathbf{x})+h^{*}(\mathbf{x})\chi(\tau,\mathbf{x})\Bigr).
\end{align} For examples of complex disordered fields, see Refs. \cite{tou,sham}. One introduces the functional $Z(j,j^{*},h,h^{*})$, the disorder generating functional of correlation functions, i.e., the generating functional of correlation functions for one disorder realization, where $j(\tau,\mathbf{x})$ is an external complex source. As in the pure system case, one can define an average free energy as the average over the ensemble of all realizations of the disorder: \begin{equation} \mathbb{E}\left[W(j,j^{*})\right] =\! \int\! [dh][dh^{*}]P(h,h^{*})\ln Z(j,j^{*},h,h^{*}). \label{eq:disorderedfreeenergy} \end{equation} The distributional zeta-function method computes this average of the free energy as follows. For a general disorder probability distribution, one defines the distributional zeta-function $\Phi(s)$: \begin{equation} \Phi(s)=\int [dh][dh^{*}]P(h,h^{*})\frac{1}{Z(j,j^{*},h,h^{*})^{s}}, \hspace{0.15cm}s\in \mathbb{C}, \label{pro1} \vspace{.2cm} \end{equation} from which one obtains $\mathbb{E}\left[W(j,j^{*})\right]$ as \begin{equation} \mathbb{E}\bigl[W(j,j^{*})\bigr] = - (d/ds)\Phi(s)|_{s=0^{+}}, \,\,\,\,\,\,\,\,\,\, \Re(s) \geq 0. \end{equation} \iffalse where one defines the complex exponential $n^{-s}=\exp(-s\log n)$, with $\log n\in\mathbb{R}$.
\fi Next, one uses Euler's integral representation of the gamma function $\Gamma(s)$ to write $Z^{-s}$ as \begin{equation} \frac{1}{Z^s} = \frac{1}{\Gamma(s)} \int^\infty_0 dt \, t^{s-1} \, e^{-Z \, t}. \end{equation} Then, one breaks this $t$ integral into two integrals, one from $0$ to $a$ and another from $a$ to $\infty$, where $a$ is an arbitrary dimensionless real number, and expands the exponential into a power series in $t$, so that one can write the quenched free energy as \begin{align} \mathbb{E}\bigl[W(j,j^{*})\bigr] = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}a^{k}}{k k!}\, \mathbb{E}\,[Z^k(j,j^{*})] - \gamma - \ln(a) + R(a,j,j^{*}), \label{m23e} \end{align} \noindent where $\gamma$ is the Euler-Mascheroni constant~ \cite{abramowitz}, $\mathbb{E}\,[Z^{k}(j,j^{*})]$ is the $k$-th moment of the partition function: \begin{align} \hspace{-0.25cm}\mathbb{E}\,[Z^{\,k}(j,j^{*})] &= \int\,\prod_{i=1}^{k}[d\varphi_{i}^{(k)}]\prod_{j=1}^{k}[d\psi_{j}^{(k)}] \exp\biggl(-S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)},j_{i}^{(k)},j_{j}^{(k)})\biggr), \label{aa11} \end{align} in which the action $S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)})$ describes the field theory of $k$-field multiplets, and \begin{align} R(a,j,j^{*})=&-\int[dh] [dh^{*}]P(h,h^{*}) \int_{a}^{\infty}\,\dfrac{dt}{t}\, \exp\Bigl(-Z(j,j^{*},h,h^{*})t\Bigr). \end{align} For large $a$, $|R(a,j,j^{*})|$ goes to zero exponentially, as shown in Ref. \cite{distributional}. Therefore, the dominant contribution to the average free energy is given by the moments of the partition function of the model. The effective action $S_{\textrm{eff}}(\varphi_{i}^{(k)},\psi_{j}^{(k)},j_{i}^{(k)},j_{j}^{(k)})$ is a sum of local and nonlocal terms with respect to the $\tau$ integral; we present them shortly.
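Before turning to the effective action, note that for a single positive number $z$ in place of the partition function, the construction reduces to a classical identity for $\ln z$ in terms of the exponential integral $E_{1}$; a numerical sketch (mpmath assumed, illustrative values of $a$ and $z$):

```python
import mpmath as mp

mp.mp.dps = 30

# Scalar analogue of the series representation: for z > 0,
#   ln z = sum_{k>=1} (-1)^(k+1) (a z)^k / (k k!) - gamma - ln a - E1(a z),
# where -E1(a z) = -int_a^inf dt e^{-z t}/t plays the role of R(a).
a, z = mp.mpf('1.5'), mp.mpf('0.8')
series = sum((-1)**(k + 1) * (a * z)**k / (k * mp.factorial(k))
             for k in range(1, 41))          # truncation error is negligible
remainder = -mp.e1(a * z)
assert abs(series - mp.euler - mp.log(a) + remainder - mp.log(z)) < mp.mpf('1e-20')
```

The exponential smallness of the remainder for large $a$ is visible here as the decay of $E_{1}(az)\sim e^{-az}/az$.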
The result of these steps is that, after the coarse-graining procedure with a reduced description of the disordered degrees of freedom, one gets collective variables that are multiplets of fields in the moments of the partition function. To proceed, we absorb $a$ in the functional measure and, following Klein and Brout, we define the augmented partition function $\mathcal{Z}(j,j^{*})$: \begin{align} \ln \mathcal{Z}(j,j^{*})=\sum_{k=1}^{\infty} c(k)\,\mathbb{E}\,[(Z(j,j^{*},h,h^{*}))^{\,k}], \label{m23e2} \end{align} where $c(k)=(-1)^{k+1}/{k k!}$. Our purpose is to discuss the ordered phase of the model. Due to a non-local contribution to the effective action, we restrict the discussion to equal fields in a given multiplet, that is, $\varphi^{(k)}_{i}(\mathbf{x})=\varphi^{(k)}_{j}(\textbf{x})$, $\psi^{(k)}_{i}(\mathbf{x}) = \psi^{(k)}_{j}(\textbf{x})$ $\forall \,i,\,j$ in the function space. Likewise, we take $j_{i}^{(k)}(\textbf{x}) = j_{j}^{(k)}(\textbf{x})$ $\forall \,i,\,j$. Therefore, all the terms of the series in Eq.~\eqref{m23e2} have the same structure.
The effective action contains a local ($S_{\rm eff, L}$) and a non-local ($S_{\rm eff, NL}$) contribution, \begin{align} &S_{\textrm{eff,L}}\left(\varphi_{i}^{(k)},j_{i}^{(k)}\right) = \frac{1}{2}\int_{0}^{\beta}d\tau\int d^{\,d}x \, \sum_{i=1}^{k} \Biggl\{ \varphi_{i}^{(k)}(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial \tau^{2}} - \Delta+m_{0}^{2}\right)\varphi_{i}^{(k)} \nonumber\\ &+ \, \psi_{i}^{(k)}(\tau,\mathbf{x}) \left(-\frac{\partial^{2}}{\partial \tau^{2}} - \Delta\right)\psi_{i}^{(k)}(\tau,\mathbf{x}) + \rho_0 \varphi_{i}^{(k)}(\tau,\mathbf{x}) \left[\bigl(\varphi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 + \bigl(\psi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 \right] \nonumber \\ &+ \lambda_0\left[\left(\varphi_{i}^{(k)}(\tau,\mathbf{x})\right)^2 + \bigl(\psi_{i}^{(k)}(\tau,\mathbf{x})\bigr)^2 \right]^2 \Biggr\}, \label{SeffA} \end{align} \begin{align} S_{\textrm{eff,NL}}\left(\varphi_{i}^{(k)},\psi_{i}^{(k)}\right) &= -\frac{\varrho^{2}}{2\beta^2}\int_{0}^{\beta} d\tau\int_{0}^{\beta} d\tau'\int d^{d}x \, \sum_{i,j=1}^{k} \left[\varphi_{i}^{(k)}(\tau,\mathbf{x}) \varphi_{j}^{(k)}(\tau',\mathbf{x}) \right.\nonumber \\ &\left. +\, \psi_{i}^{(k)}(\tau,\mathbf{x}) \psi_{j}^{(k)}(\tau',\mathbf{x}) \right]. \label{SeffB} \end{align} Here, we defined $\varphi'^{\,(k)}_{i}(\mathbf{x})= \frac{1}{\sqrt{k}}\varphi^{(k)}_{i}(\mathbf{x})$, $\psi'^{\,(k)}_{i}(\mathbf{x})= \frac{1}{\sqrt{k}}\psi^{(k)}_{i}(\mathbf{x})$, $\lambda'_{0}=\lambda_{0}k$ and $\rho'_{0}=\rho_{0}k$. As for the pure system, the field variables satisfy periodicity conditions in imaginary time: $\varphi_{i}^{(k)}(0,\mathbf{x})=\varphi_{i}^{(k)}(\beta,\mathbf{x})$ and $\psi_{i}^{(k)}(0,\mathbf{x})=\psi_{i}^{(k)}(\beta,\mathbf{x})$. In the context of the replica trick, efforts were made to compute the contribution of the non-local action, in the replica-symmetric case \cite{kirkpatrick} and in the replica-symmetry-breaking case; for examples see Refs. \cite{goldschmidt, azimi}.
We would like to point out that the restriction to equal fields in each multiplet is made to avoid a technical problem raised by the non-local contribution. In the case of a local action, one can perform the calculations choosing different fields in each multiplet in the functional space. However, in the context of the distributional zeta-function method, the authors of Ref. \cite{gustavo2} show how the fractional derivative can deal naturally with the non-local contribution. Through the Fourier transform, for a generic fractional derivative, we get $\mathcal{F}\bigl[\frac{d^{\mu}g(x)}{d|x|^{\mu}}\bigr]=-|k|^{\mu}g(k)$, for $1\leq\mu<2$. The non-local contribution will appear in the Fourier representation of the Matsubara modes as $\frac{2\pi|n|}{\beta}$. \section{Disorder and quantum effects in the one-loop contribution to the renormalized mass}\label{sec:thermal mass2} The purpose of this section is to discuss the renormalized squared mass at very low temperatures. Since we use a regularization procedure where the Matsubara modes appear, we call it the thermal mass, although we are working at very low temperatures, in the spontaneously broken-symmetry phase. We compute in the one-loop approximation the thermal mass in the $k$-th moment of the partition function. A Goldstone mass cannot be generated in perturbation theory. Therefore, we concentrate on the mass of the non-Goldstone mode. There are two kinds of loops which give the first nontrivial contributions at the one-loop level; those with one insertion will be referred to as tadpoles and those with two insertions as self-energies.
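As an aside on the fractional-derivative representation quoted above: its Fourier symbol $-|k|^{\mu}$ reduces at $\mu=2$ to the familiar $-k^{2}$ of an ordinary second derivative, which can be checked spectrally (a sketch assuming numpy; grid parameters are illustrative):

```python
import numpy as np

# At mu = 2 the Riesz symbol -|k|^mu is -k^2, the Fourier symbol of d^2/dx^2:
# differentiate a Gaussian spectrally and compare with the exact derivative.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
g = np.exp(-x**2)
g_xx_exact = (4 * x**2 - 2) * np.exp(-x**2)
g_xx_spectral = np.fft.ifft(-(np.abs(k)**2) * np.fft.fft(g)).real
assert np.max(np.abs(g_xx_spectral - g_xx_exact)) < 1e-6
```

For non-integer $\mu$ the same multiplication by $-|k|^{\mu}$ in Fourier space defines the operator, which is how the $\frac{2\pi|n|}{\beta}$ Matsubara term is produced.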
Below the critical temperature, one can write the square of the renormalized mass associated with the $k$-th moment field $\varphi_{i}^{(k)}(\tau,\mathbf{x})$ as \begin{align} m_{R}^{2}(\beta,\varrho,k) &= m_{0}^{2} + \delta m^{2}_{0}(k) + \, \lambda_{0} \Bigl[ 12I_{1}(\beta,\varrho,k) + \, 2I_{1}(\beta,\varrho,k)|_{m^{2}_{0}=0} \Bigr] \nonumber \\ &- \, \rho_{0}^{2} \Bigl[ 18I_{1}(\beta,\varrho,k) +18I_{2}(\beta,\varrho,k) + \, 2I_{2}(\beta,\varrho,k)|_{m^{2}_{0}=0} + 6I_{1}(\beta,\varrho,k)|_{m_{0}^{2}=0} \Bigr] \end{align} where $\delta m_{0}^{2}(k)$ is the usual mass counterterm that must be introduced in the renormalization procedure for the $k$-th moment fields, the multiplicative numbers are symmetry factors, $I_1$ gives the tadpole contribution \begin{align} &I_{1}(\beta,\varrho,k)=\frac{1}{2\beta(2\pi)^{d}} \int\,\prod_{i=1}^{d}dq_{i}\sum_{n\in \mathbb{Z}}\Biggl(q_{1}^{2}+...+q_{d}^{2} +\biggl(\frac{2\pi n}{\beta}\biggr)^{2}+\frac{2\pi|n|}{\beta}\,k\varrho^{2}+m_{0}^{2} \Biggr)^{-1}, \end{align} and $I_2$ is the self-energy contribution \begin{align} I_{2}(\beta,\varrho,k)=\frac{1}{2\beta(2\pi)^{d}} \int\,\prod_{i=1}^{d}dq_{i}\sum_{n\in \mathbb{Z}}\Biggl(q_{1}^{2}+...+q_{d}^{2} +\biggl(\frac{2\pi n}{\beta}\biggr)^{2}+\frac{2\pi|n|}{\beta}\,k\varrho^{2}+m_{0}^{2} \Biggr)^{-2}. \end{align} Note that we have infrared divergences for Nambu-Goldstone virtual loops. There are different ways to deal with those divergences~\cite{s1,s2,s3}. Here, we employ an analytic regularization procedure~\cite{dim1,dim2,physica,bbb,ca12,ca13,ca14,ca15}. First, we define $\lambda(\mu,s)=\lambda_{0}(\mu^{2})^{s-1}$. Then, we define $\rho_{1}(\mu,s)=\rho_{0}(\mu^{2})^{s-1}$ and $\rho_{2}(\mu,s)=\rho_{0}(\mu^{2})^{s-2}$, where $\mu$ has mass dimension. Let us discuss first the $I_{1}(\beta,\varrho,k)$ integral. We perform the angular part of the integral over the continuous momenta.
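Both the factorization of $4\pi/\beta^{2}$ out of the summand of $I_{1}$ (with the Euclidean time extent equal to $\beta$) and the resulting angular prefactor can be checked numerically (illustrative values, $d=3$):

```python
import math

# The summand of I_1 factorizes as
#   q^2 + (2 pi n/beta)^2 + (2 pi|n|/beta) k rho^2 + m0^2
#     = (4 pi/beta^2)[pi n^2 + (beta/2) k rho^2 |n| + (beta^2/4 pi)(q^2 + m0^2)],
# and for d = 3 the prefactor 1/(2 beta (2 pi)^d) times the area of the unit
# (d-1)-sphere and beta^2/(4 pi) equals beta / (2^(d+2) pi^(d/2+1) Gamma(d/2)).
beta, k, rho2, m0, q, n = 1.7, 2, 0.4, 1.1, 0.9, 3
lhs = q**2 + (2 * math.pi * n / beta)**2 + (2 * math.pi * n / beta) * k * rho2 + m0**2
rhs = (4 * math.pi / beta**2) * (math.pi * n**2 + beta / 2 * k * rho2 * n
                                 + beta**2 / (4 * math.pi) * (q**2 + m0**2))
assert abs(lhs - rhs) < 1e-10

d = 3
sphere_area = 2 * math.pi**(d / 2) / math.gamma(d / 2)
prefactor = sphere_area * beta**2 / (4 * math.pi) / (2 * beta * (2 * math.pi)**d)
claimed = beta / (2**(d + 2) * math.pi**(d / 2 + 1) * math.gamma(d / 2))
assert abs(prefactor - claimed) < 1e-12
```

This is precisely the coefficient $\beta/(2^{d+2}\pi^{\frac{d}{2}+1}\Gamma(\frac{d}{2}))$ appearing in the representation of $I(\beta,\varrho,k,s)$.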
$I_{1}(\beta,\varrho,k)$ can be written as the analytical continuation of $I(\beta,\varrho,k,s)$ with $s\in \mathbb{C}$: \begin{align} I(\beta,\varrho,k,s)=\frac{\beta}{2^{d+2}\pi^{\frac{d}{2}+1}\Gamma\bigl(\frac{d}{2}\bigr)} \int_{0}^{\infty}dp\,p^{d-1} \, \sum_{n\in \mathbb{Z}}\Biggl(\pi n^{2}+\frac{\beta}{2}k\varrho^{2}|n|+\frac{\beta^{2}}{4\pi}\Bigl(p^{2}+m_{0}^{2}\Bigr) \Biggr)^{-s}, \end{align} which converges for $\Re(s)>s_{0}$. Specifically, $I_1(\beta,\varrho,k)$ is given by the analytic continuation $I(\beta,\varrho,k,s)|_{s=1}$. Similarly, the finite part of the self-energy contribution is given by the analytical continuation $I_{2}(\beta,\varrho,k)=I(\beta,\varrho,k,s)|_{s=2}$. To proceed, we define the dimensionless quantity $r^{2}={\beta^{2}p^{2}}/{4\pi}$. After a Mellin transform, and performing the~$r$~integral, $I(\beta,\varrho,k,s)$ is given by: \begin{align} &I(\beta,\varrho,k,s)=\frac{1}{8\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\sum_{n\in \mathbb{Z}}\exp\Biggl[-\biggl(\pi\,n^{2}+ \frac{\beta}{2}k\varrho^{2}|n|+ \frac{m_{0}^{2}\beta^{2}}{4\pi}\biggr)t\Biggr]. \end{align} Note that we are assuming at this point that $m_{0}^{2}\neq 0$. Let us split the summation into $n=0$ and $n\neq 0$ contributions: \begin{equation} I(\beta,\varrho,k,s)= I(\beta,\varrho,k,s)|_{n=0}+I(\beta,\varrho,k,s)|_{n\neq 0}. \end{equation} The $n=0$ contribution is given by: \begin{align} I(\beta,\varrho,k,s)|_{n=0}&=\frac{1}{8\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}A(s,d), \end{align} where $A(s,d)$ is \begin{align} A(s,d)=\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr). \end{align} The integral $A(s,d)$ is defined for $\Re(s)>\frac{d}{2}$, and can be analytically continued to $\Re(s)>\frac{d}{2}-1$ for $s\neq \frac{d}{2}$. 
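The step labeled ``After a Mellin transform'' uses the proper-time representation $A^{-s}=\frac{1}{\Gamma(s)}\int_{0}^{\infty}dt\,t^{s-1}e^{-At}$, valid for $\Re(s)>0$ and $A>0$. As a numerical sanity check (an illustrative sketch with arbitrary values of $A$ and $s$, not tied to the physical parameters), a simple midpoint quadrature reproduces the closed form:

```python
import math

def mellin_lhs(A, s, T=60.0, n=100000):
    # Midpoint quadrature of (1/Gamma(s)) * int_0^T t^{s-1} e^{-A t} dt;
    # for T large enough this should approximate A^{-s}
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (s - 1.0) * math.exp(-A * t)
    return h * total / math.gamma(s)

A, s = 2.3, 1.6
print(mellin_lhs(A, s), A ** (-s))  # the two numbers should agree
```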
We use the identity \begin{align} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)&=\int_{0}^{1}dt\, t^{s-\frac{d}{2}-1} \Biggl[\exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)-1\Biggr]\nonumber \\ &+\int_{1}^{\infty}dt\, t^{s-\frac{d}{2}-1} \exp\biggl(-\frac{m_{0}^{2}\beta^{2}}{4\pi}\,t\biggr)+\frac{1}{\bigl(s-\frac{d}{2}\bigr)}, \end{align} which is valid for $\Re(s)>\frac{d}{2}$. For $\Re(s)>\frac{d}{2}-1$ and $s\neq \frac{d}{2}$, the right-hand side exists and defines a regularization of the original integral, which we denote by $A_{R}(s,d)$. The contribution $I(\beta,\varrho,k,s)|_{n\neq 0}$ is written as \begin{align} &I(\beta,\varrho,k,s)|_{n\neq 0}=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \, \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\, \sum_{n=1}^{\infty}\exp\Biggl[-\pi t\biggl(n^{2}+ \frac{k \beta\varrho^{2}}{2\pi}n+\frac{m_{0}^{2}\beta^{2}}{4\pi^{2}} \biggr)\Biggr]. \end{align} Next, we make use of the properties of the Hurwitz zeta function $\zeta(z,a)$ to handle this integral. For $z \in \mathbb{C}$, $\zeta(z,a)$ is defined as \begin{equation} \zeta(z,a)=\sum_{n=0}^{\infty}\frac{1}{\bigl(n+a\bigr)^{z}}, \qquad a\neq 0,-1,-2,...\, . \end{equation} The series converges absolutely for ${\rm Re}(z)>1$, and admits an analytic continuation to the whole complex plane with a simple pole at $z=1$. Inspired by the Hurwitz zeta function, one defines the generalized Hurwitz zeta function $Z(z,a)$ such that \begin{equation} Z(z,a)= \sum_{n=1}^{\infty}\frac{1}{\Bigl(\omega_{n}^{(k)}+a\Bigr)^{z}}, \end{equation} for $a\notin (-\infty, -\omega_{n}^{(k)}]$ and $z \in \mathbb{C}$. 
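The splitting above can be checked numerically: with $a=m_{0}^{2}\beta^{2}/4\pi$ and $\nu=s-\frac{d}{2}$, the right-hand side should reproduce the analytic continuation $\Gamma(\nu)\,a^{-\nu}$ of the original integral even for $-1<\nu<0$, where the integral itself diverges at $t=0$. A sketch of such a check (our own illustrative values, specialized to $\nu=-1/2$):

```python
import math

def a_reg(nu, a, T=40.0, n=4000):
    """Split (regularized) representation of int_0^inf t^{nu-1} e^{-a t} dt,
    demonstrated here at nu = -1/2; should equal Gamma(nu) * a^{-nu}."""
    def simpson(f, lo, hi, m):
        h = (hi - lo) / m
        s = f(lo) + f(hi)
        for i in range(1, m):
            s += (4 if i % 2 else 2) * f(lo + i * h)
        return s * h / 3.0
    # int_0^1 t^{nu-1} (e^{-a t} - 1) dt, with t = u^2 to tame the
    # integrable singularity at t = 0
    def g(u):
        if u == 0.0:
            return -2.0 * a  # limit of 2 u^{2nu-1}(e^{-a u^2}-1), nu = -1/2
        return 2.0 * u ** (2 * nu - 1) * (math.exp(-a * u * u) - 1.0)
    part1 = simpson(g, 0.0, 1.0, n)
    part2 = simpson(lambda t: t ** (nu - 1) * math.exp(-a * t), 1.0, T, n)
    return part1 + part2 + 1.0 / nu

print(a_reg(-0.5, 1.0), math.gamma(-0.5))  # both approx -3.5449
```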
Since in our case $a=\frac{m_{0}^{2}\beta^{2}}{4\pi}$ and $\omega_{n}^{(k)}=\pi n^{2} +\frac{1}{2}\beta\varrho^{2} n k$, one can write: \begin{align} I(\beta,\varrho,&k,s)|_{n\neq 0}=\frac{1}{4\pi}\biggl(\frac{1}{\beta}\biggr)^{d-1}\, \Gamma\biggl(s-\frac{d}{2}\biggr)\bigl(\Gamma(s)\bigr)^{-1} \sum_{n=1}^{\infty}\frac{1}{\Bigl(\omega_{n}^{(k)}+a\Bigr)^{s-\frac{d}{2}}}. \end{align} To discuss $I(\beta,\varrho,k,s)|_{n\neq 0}$ further, we observe that a more general proof using generalized Hurwitz zeta functions is based on the fact that zeta-function regularization with a meromorphic extension to the whole complex plane needs an eligible sequence of numbers \cite{voros}. Therefore, in the series representation for the free energy with $k=1,2,\dots$, we have that for the moments of the partition function such that $k_{(q)}\leq \lfloor(\frac{2\pi q}{\beta})\frac{2}{\varrho^{2}}\rfloor$, where $(\frac{2\pi q}{\beta})$ are the positive Matsubara frequencies $\omega_{q}$, the system is critical for $q\in\mathbb{N}$. This is an interesting result, since there are critical moments in the series representation for the free energy, after averaging over the quenched disorder. Substituting the above result for $k_{(q)}= \lfloor(\frac{2\pi q}{\beta})\frac{2}{\varrho^{2}}\rfloor$, one gets that $I(\beta,q,s)|_{n\neq 0}$ can be written as \begin{align} I(\beta,q,s)|_{n\neq 0 } &= \frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\exp\left[ -\pi t\left(\frac{m_{0}^{2}\beta^{2}}{4\pi^{2}}-q^{2}\right) \right] \nonumber\\ &\times\sum_{n=1}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q \bigr)^{2}\Bigr]}. 
\end{align} Finally, a simple calculation shows that, choosing $q=q_{0}=\lfloor\frac{m_{0}\beta}{2\pi}\rfloor$, the quantity $I(\beta,q_{0},s)|_{n\neq 0}$ is given by: \begin{align} &I(\beta,q_{0},s)|_{n\neq 0} = \frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \sum_{n=1}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q_{0} \bigr)^{2}\Bigr]}. \end{align} This simplification allows one to write $I(\beta,q_{0},s)|_{n\neq 0}$ as \begin{align} &I(\beta,q_{0},s)|_{n\neq 0}=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1} \Biggl[\int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1} \sum_{n=0}^{\infty}\exp{\Bigl[-\pi t\bigl(n+q_{0} \bigr)^{2}\Bigr]}-A_{R}(s,d)\Biggr]. \end{align} Using the Hurwitz zeta function and the integral $A_{R}(s,d)$ we can write \begin{align} &I(\beta,q_{0},s)|_{n\neq 0}=\frac{1}{4\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\Biggl[\frac{1}{\pi^{s-\frac{d}{2}+1}} \Gamma\biggl(s-\frac{d}{2}\biggr)\zeta(2s-d,q_{0})-\frac{1}{\pi}A_{R}(s,d)\Biggr]. \end{align} The contribution coming from the loops with Nambu-Goldstone bosons can be calculated assuming $m^{2}_{0}=0$. All of these contributions must be regularized in the lower limit of the integrals. The Mellin transform of the contribution from the Nambu-Goldstone bosons is \begin{align} G(\beta,\varrho,k,s)|_{n\neq 0}&=\frac{1}{4\pi\Gamma(s)}\biggl(\frac{1}{\beta}\biggr)^{d-1}\lim_{m_0 \to 0} \int_{0}^{\infty}dt\,t^{s-\frac{d}{2}-1}\nonumber \\ &\times \sum_{n=1}^{\infty}\exp{\Biggl[-\pi t\biggl(n^{2}+ k_{(q_0)} \frac{\beta\varrho^{2}}{2\pi}n + m_0^2\biggr)\Biggr]}. \end{align} Using the same regularization procedure we used to control the infrared divergences, we obtain $[G(\beta,\varrho,k,s)|_{n\neq 0}]_{R}$. One can show that at low temperatures the contribution coming from $[G(\beta,\varrho,k,s)|_{n\neq 0}]_{R}$ is negligible. We will now prove that, for a fixed $q_{0}$, the renormalized squared mass vanishes for a family of values of $\beta$. 
There are many critical temperatures where the renormalized squared mass is zero. We get that: \begin{align} m_{R}^{2}(\beta,q_{0})&=m_{0}^{2}+\delta m_{0}^{2}+c_{1}\biggl(I(\beta,s=1)|_{n=0}\biggr)+c_{1} \biggl(I(\beta,q_{0},s=1)|_{n\neq 0}\biggr) \nonumber\\ &+c_{2}\biggl(I(\beta,s=2)|_{n=0}\biggr)+c_{2} \biggl(I(\beta,q_{0},s=2)|_{n\neq 0}\biggr), \end{align} where $c_{1}=12\lambda(\mu,s)-18\rho_{1}^2(\mu,s)$ and $c_{2}=18\rho^{2}_{2}(\mu,s)$. Defining the dimensionless quantity $b = m_0 \beta$, we write \begin{align} \frac{b^{d-1}}{m_0^{d-3}} &- \frac{c_1}{8\pi}A_{R}(1,d) + \frac{c_{2}}{8\pi}A_{R}(2,d) +\frac{c_1}{4\pi^{2-\frac{d}{2}}}\Gamma\left(1-\frac{d}{2}\right)\zeta\left(2-d,\frac{b}{2\pi}\right) \nonumber \\ &-\frac{c_{2}}{4\pi^{3-\frac{d}{2}}}\Gamma\left(2-\frac{d}{2}\right)\zeta\left(4-d,\frac{b}{2\pi}\right)+\delta m_{0}^{2}=0. \end{align} Let us discuss the important case $d=3$. We get \begin{align} &b^{2} - \frac{c_1}{8\pi}A_{R}(1,3) + \frac{c_{2}}{8\pi\mu^2}A_{R}(2,3) -\frac{c_1}{2} \zeta\left(-1,\frac{b}{2\pi}\right) -\frac{c_{2}}{4\pi}\lim_{d \to 3}\zeta\left(4-d,\frac{b}{2\pi}\right)+\delta m_{0}^{2} =0. \end{align} A formula that is relevant in the renormalization procedure is \begin{equation} \lim_{z\rightarrow 1}\biggl[\zeta(z,a)-\frac{1}{z-1}\biggr]=-\psi(a), \label{psi} \end{equation} where $\psi(a)$ is the digamma function, defined as $\psi(z)=\frac{d}{dz}\ln\Gamma(z)$. The contribution coming from $A_R(s,d)$ is irrelevant for large $m_{0}\beta$. Using the identity $(n+1)\zeta(-n,a) = -B_{n+1}(a)$, where the $B_{n+1}(a)$ are the Bernoulli polynomials, we rewrite the Hurwitz zeta function as \begin{align} \zeta\left(-1,\frac{b}{2\pi}\right) &= -\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right). \end{align} Using Eq. (\ref{psi}), we fix the counterterm contribution in the renormalization procedure. 
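Equation (\ref{psi}) can be verified numerically. In the sketch below (an illustrative check, not part of the derivation), the Hurwitz zeta function is approximated by a truncated sum plus the first Euler--Maclaurin tail corrections, and the digamma function by a central difference of $\ln\Gamma$:

```python
import math

def hurwitz_zeta(z, a, N=1000):
    # Truncated sum plus the first Euler-Maclaurin tail corrections
    s = sum((n + a) ** (-z) for n in range(N))
    M = N + a
    s += M ** (1.0 - z) / (z - 1.0)   # integral part of the tail
    s += 0.5 * M ** (-z)              # boundary term
    s += z / 12.0 * M ** (-z - 1.0)   # first Bernoulli correction
    return s

def digamma(a, h=1e-5):
    # psi(a) = d/da ln Gamma(a), via a central difference of lgamma
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2.0 * h)

a, eps = 1.7, 1e-5
lhs = hurwitz_zeta(1.0 + eps, a) - 1.0 / eps
print(lhs, -digamma(a))  # the two numbers should agree
```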
Then we have: \begin{align} &b^{2} + \frac{c_1}{2}\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right) +\frac{c_{2}}{4\pi}\psi\left(\frac{b}{2\pi}\right)\,=0. \end{align} Recognizing that $q_0=\lfloor\frac{b}{2\pi}\rfloor$, we can write the digamma function as \begin{align} \psi(q_0 + \alpha) = \psi(\alpha) + \sum_{q=1}^{q_0}\frac{1}{\alpha + q}\,, \end{align} where $\alpha$ is the non-integer part of $\frac{b}{2\pi}$. Since $\alpha <1$, we can use a Taylor series and write \begin{align}\label{eq:b} & b^{2} + \frac{c_1}{2}\left(\frac{b^2}{8\pi^2} - \frac{b}{4\pi} + \frac{1}{12}\right) + \, \frac{c_{2}}{4\pi}\left(-\frac{1}{\alpha} - \gamma + \frac{\pi^2}{6}\alpha + H_{q_0}^{(1)} + \alpha H_{q_0}^{(2)}\right)=0, \end{align} where $\gamma$ is the Euler-Mascheroni constant and $H_{q_0}^{(1)}$ and $H_{q_0}^{(2)}$ are generalized harmonic numbers. The above equation has zeroes for different values of $\beta$. \begin{figure}[ht!]\label{fig:b} \centering\includegraphics[scale=1]{./figure_1.pdf} \label{fig:2} \caption{Plot of Eq. (\ref{eq:b}) as a function of $b = m_0\beta$ for two different values of $\lambda$ (with $\rho_0^2 = \mu_0^2\frac{\lambda}{4}$): $\lambda = 1$ (continuous black) and $\lambda = 15$ (dashed red). We set $\mu^2=m_0^2$.} \end{figure} Our Eq. (\ref{eq:b}) and Fig. 1 are manifestations of indirect generic scale invariance. In summary, we have proved, in the one-loop approximation, that in the set of moments that defines the quenched free energy there is a denumerable collection of moments that can develop local critical behavior. Even in the situation where the bulk is in the ordered phase, temperature effects lead those moments from the ordered to a locally disordered phase. This is the main result of this paper. \section{Conclusions} \label{sec:conclusions} Recent experimental and theoretical advances have increased activity in low-temperature physics and quantum phase transitions. 
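The small-$\alpha$ expansion used above, $\psi(\alpha)\approx -\frac{1}{\alpha}-\gamma+\frac{\pi^{2}}{6}\alpha$, together with the generalized harmonic numbers $H_{q_0}^{(r)}=\sum_{q=1}^{q_0}q^{-r}$, can be checked numerically (an illustrative sketch; the first neglected term is of order $\zeta(3)\alpha^{2}$):

```python
import math

def digamma(x, h=1e-5):
    # psi(x) via a central difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def psi_expansion(alpha, gamma=0.5772156649015329):
    # Leading terms of psi(alpha) for small alpha
    return -1.0 / alpha - gamma + (math.pi ** 2 / 6.0) * alpha

def harmonic(q0, r):
    # Generalized harmonic number H_{q0}^{(r)} = sum_{q=1}^{q0} q^{-r}
    return sum(q ** (-r) for q in range(1, q0 + 1))

alpha = 0.1
print(digamma(alpha), psi_expansion(alpha))  # close for small alpha
print(harmonic(5, 1), harmonic(5, 2))
```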
The intersection of these two areas of research, the physics of quenched disordered systems and low-temperature physics, leads to the following question: what is the effect of randomness on the restoration of a spontaneously broken continuous symmetry at low temperatures? For spontaneously broken continuous symmetries, the presence of Goldstone modes is a signature of direct generic scale invariance. Here we discussed the consequences of introducing a random field in a system described by a complex field prepared in the ordered phase at low temperatures, which can be interpreted as the emergence of generic scale invariance. For a discrete symmetry the same behavior was obtained in Ref. \cite{gustavo2}. \iffalse In a recent work it was discussed the effects of random fields in a scalar field model for $N=1$. It was proved that dynamical symmetry restoration occurs induced by quantum and disorder induced fluctuations . It was employ the equivalence between the model defined in a $d$-dimensional space with imaginary time with the classical model defined on a space ${\mathbb R}^{d}\times S^{1}$, where the structure of the correlation functions of the theories are identical. In the regime where quantum fluctuations dominates, taking into account that the disorder is strongly correlated in time it was discussed a generalization for the disordered scalar $\lambda\varphi^{4}_{d+1}$ model with anisotropic disorder. To study the modified dynamics induced by the disorder, the authors discussed a non-linear stochastic partial differential equation with a additive noise. In the Gaussian approximation, they demonstrated that the temporal correlation for the model decays exponentially, with a disorder modified relaxation rate. \fi In the one-loop approximation, we proved that, with the bulk in the ordered phase, there is a denumerable set of moments that lead the system to this critical regime. In these moments a large number of critical temperatures appear. 
We studied the case where the phase transition is governed mainly by quantum and disorder-induced fluctuations. The limiting situation, when thermal fluctuations are absent, is the case of quantum phase transitions. In this case the ground states of systems change in some fundamental way, tuned by non-thermal control parameters. Note that, in the low-temperature behavior of the system, the disorder is strongly correlated in imaginary time. Using the distributional zeta-function method, after averaging the disorder under a coarse-graining, a non-local contribution appears in each effective action. In the one-loop approximation we discussed the effects of the disorder fluctuations on the restoration of the continuous broken symmetry by quantum and temperature effects. There are two main results in this work. The first is that the contribution coming from the Nambu-Goldstone loops is irrelevant to drive the phase transition. This is an expected result: in the one-loop approximation the criticality is obtained from the contribution coming from the thermal mass, and a Goldstone thermal mass cannot be generated in perturbation theory. The second is that, in the one-loop approximation, we proved that, with the bulk in the ordered phase, there is a denumerable set of moments that lead the system to this critical regime. In these moments a large number of critical temperatures appear. This is an indication of indirect generic scale invariance in the system. A natural continuation is to show that the result holds at higher loop orders, or even in a non-perturbative regime, using a composite operator formalism \cite{cjt,pettini,gino}. \begin{acknowledgments} This work was partially supported by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico - CNPq, 305894/2009-9 (G.K.), 303436/2015-8 (N.F.S.), INCT F\'{\i}sica Nuclear e Apli\-ca\-\c{c}\~oes, 464898/2014-5 (G.K) and Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo - FAPESP, 2013/01907-0 (G.K). 
G.O.H. thanks Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - CAPES for a PhD scholarship. \end{acknowledgments}
\section{Introduction} \subsection{Background and Motivation} \label{ssec:background} In recent years, the impressive practical success of deep learning has motivated the development of provably efficient learning algorithms for various classes of neural networks. A large body of research (see Section~\ref{ssec:related} for a brief overview) has resulted in efficient learning algorithms for shallow networks with common activation functions (e.g., ReLUs or sigmoids) under various assumptions on the underlying distribution and the weight structure of the network. Despite intensive investigation, the broad question of whether deep neural networks are efficiently learnable with provable guarantees remains an outstanding theoretical challenge in machine learning. In particular, the class of networks for which efficient learners are known is relatively limited, even in the realizable case (i.e., when the data is drawn from a neural network in the class). In this work, we continue this line of investigation by studying the learnability of a simple class of networks without imposing strong restrictions on the structure of its weights. Specifically, we focus on the problem of learning one-hidden-layer ReLU networks under the Gaussian distribution in the presence of additive random label noise. Our goal is to understand the complexity of this problem {\em in the PAC learning model without assumptions on the weight matrix of the network}. \begin{definition}[One-hidden-layer ReLU networks] \label{def:sum-of-relus} Let $\mathcal{C}_k$ denote the concept class of one-hidden-layer ReLU networks on $\mathbb{R}^d$ with $k$ hidden units. 
That is, $f_{\alpha, \mathbf{W}} \in \mathcal{C}_k$ if and only if there exist weight vectors $\mathbf{w}^{(i)} \in \mathbb{R}^d$ and real coefficients $\alpha_i$, $i \in [k]$, such that $f_{\alpha, \mathbf{W}}(\mathbf{x}) = \sum_{i=1}^k \alpha_i \phi (\langle \mathbf{w}^{(i)}, \mathbf{x} \rangle)$, where $\phi(t) = \max\{0, t\}$, $t \in \mathbb{R}$. We will denote by $\alpha = (\alpha_i)_{i=1}^k$ the vector of coefficients and by $\mathbf{W} = [\mathbf{w}^{(i)}]_{i=1}^k$ the weight matrix of the network. We will use $\mathcal{C}_k^{+}$ to denote the subclass of $\mathcal{C}_k$ where $\alpha \in \mathbb{R}_+^{k}$. \end{definition} The (distribution-specific) PAC learning problem for a concept class $\mathcal{C}$ of real-valued functions is the following: The input is a multiset of i.i.d. labeled examples $(\mathbf{x}, y)$, where $\mathbf{x}$ is generated from the standard Gaussian distribution on $\mathbb{R}^d$ and $y = f(\mathbf{x})+\xi$, where $f \in \mathcal{C}$ is the unknown target concept and $\xi$ is some type of random observation noise. The goal of the learner is to output a hypothesis $h: \mathbb{R}^d \to \mathbb{R}$ that with high probability is close to $f$ in $L_2$-norm. The hypothesis $h$ is allowed to lie in any efficiently representable hypothesis class $\mathcal{H}$. If $\mathcal{H} = \mathcal{C}$, the PAC learning algorithm is called {\em proper}. Perhaps surprisingly, the complexity of PAC learning one-hidden-layer ReLU networks (even with positive weights) has remained open, even in the realizable setting, under Gaussian marginals, and for $k=3$~\cite{Kliv17}\footnote{Formally speaking, the $k=2$ case does not appear explicitly in the literature, but an efficient algorithm easily follows from prior work on parameter estimation (e.g.,~\cite{GeLM18}).}. 
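For concreteness, a network in $\mathcal{C}_k$ can be evaluated directly from Definition~\ref{def:sum-of-relus} (a minimal sketch; the coefficients, weights, and input below are our own illustrative choices, with $\alpha\in\mathbb{R}_{+}^{k}$ so that the example lies in $\mathcal{C}_k^{+}$):

```python
import random

def relu(t):
    return max(0.0, t)

def network(alpha, W, x):
    # f_{alpha,W}(x) = sum_i alpha_i * relu(<w^{(i)}, x>)
    return sum(a * relu(sum(wi * xi for wi, xi in zip(w, x)))
               for a, w in zip(alpha, W))

# A k = 2 example in R^3 with positive coefficients (class C_k^+)
alpha = [1.0, 0.5]
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(3)]  # x ~ N(0, I_3)
print(network(alpha, W, x))
```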
A line of prior work~\cite{GeLM18, BakshiJW19, GeKLW19} had studied the task of {\em parameter estimation} for this concept class, i.e., the task of recovering the unknown coefficients $\alpha_i$ and weight vectors $\mathbf{w}^{(i)}$ of the data-generating network within small accuracy. It should be noted that for parameter estimation to even be information-theoretically possible, some assumptions on the target function are necessary. The aforementioned prior works made the common assumption that the weight matrix $\mathbf{W} = [\mathbf{w}^{(i)}]_{i=1}^k$ is {\em full-rank}. Under this assumption, they provided efficient parameter learning algorithms with respect to Gaussian marginals for the case of {\em positive coefficients}, i.e., for $\mathcal{C}_k^{+}$. Importantly, the sample and computational complexity of these algorithms scale polynomially with the condition number of $\mathbf{W}$. In contrast, no such algorithm is known for general coefficients, i.e., for $\mathcal{C}_k$, even under the aforementioned strong assumptions on the weights. In contrast to parameter estimation, PAC learning one-hidden-layer ReLU networks does not require any assumptions on the structure of the weight matrix. The PAC learning problem for this class is information-theoretically solvable with polynomially many samples. The question is whether a computationally efficient algorithm exists. It should also be noted that proper PAC learning is not generally equivalent to parameter estimation, as it is in principle possible to have two networks that define close-by functions and whose parameters are significantly different. \subsection{Our Results} \label{ssec:results} We are ready to describe the main contributions of this work. Our main positive result is the first PAC learning algorithm for $\mathcal{C}_k^{+}$ (one-hidden-layer ReLU networks with positive coefficients) under Gaussian marginals that runs in polynomial time for any $k = \tilde{O}(\sqrt{\log d})$. 
On the lower bound side, we establish a Statistical Query (SQ) lower bound suggesting that no such algorithm is possible for $\mathcal{C}_k$ (general coefficients) for any $k = \omega(1)$ (also under Gaussian marginals). Our SQ lower bound provides a separation between $\mathcal{C}_k^{+}$ and $\mathcal{C}_k$ in terms of efficient learnability. Before we state our main theorems, we formally define the PAC learning problem. \begin{definition}[Distribution-Specific PAC Learning] \label{def:PAC} Let $\mathcal{F}$ be a concept class of real-valued functions over $\mathbb{R}^d$, $\mathcal{D}$ be a distribution on $\mathbb{R}^d$ with $\mathcal{F} \subseteq L_2(\mathcal{D}, \mathbb{R}^d)$, and $0< \epsilon <1$. Let $f$ be an unknown target function in $\mathcal{F}$. A {\em noisy example oracle}, $\mathrm{EX}^{\mathrm{noise}}(f, \mathcal{F})$, works as follows: Each time $\mathrm{EX}^{\mathrm{noise}}(f, \mathcal{F})$ is invoked, it returns a labeled example $(\mathbf{x}, y)$, such that: (a) $\mathbf{x} \sim \mathcal{D}$, and (b) $y = f(\mathbf{x}) + \xi$, where $\xi$ is a zero-mean subgaussian random variable with standard deviation $\sigma$ that is independent of $\mathbf{x}$. A learning algorithm is given i.i.d. samples from the noisy oracle and its goal is to output a hypothesis $h$ such that with high probability $h$ is $\epsilon$-close to $f$ in $L_2$-norm, i.e., it holds $\E_{\mathbf{x} \sim \mathcal{D}}[(f(\mathbf{x}) - h(\mathbf{x}))^2] \leq \epsilon^2 \left( \E_{\mathbf{x} \sim \mathcal{D}}[f^2(\mathbf{x})] +\sigma^2\right)$. \end{definition} Our main positive result is the first computationally efficient PAC learning algorithm for $\mathcal{C}_k^{+}$. 
\begin{theorem}[Proper PAC Learner for $\mathcal{C}_k^{+}$] \label{thm:alg-inf} There is a proper PAC learning algorithm for $\mathcal{C}_k^{+}$ with respect to the standard Gaussian distribution on $\mathbb{R}^d$ with the following performance guarantee: The algorithm draws $\mathrm{poly}(k/\epsilon) \cdot \tilde{O}(d) $ noisy labeled examples from an unknown target $f \in \mathcal{C}_k^{+}$, runs in time $\mathrm{poly}(d/\epsilon)+ (k/\epsilon)^{O(k^2)}$, and outputs a hypothesis $h \in \mathcal{C}_k^{+}$ that with high probability is $\epsilon$-close to $f$ in $L_2$-norm. \end{theorem} \noindent Theorem~\ref{thm:alg-inf} gives the first polynomial-time PAC learning algorithm for one-hidden-layer ReLU networks under any natural distributional assumptions, answering a question posed by~\cite{Kliv17}. Our algorithm runs in polynomial time for $k$ as large as $\tilde{\Omega}(\sqrt{\log d})$. The existence of such an algorithm was previously open, even for $k=3$. We remark that our main algorithmic result is more general, in the sense that it immediately extends to positive-coefficient one-hidden-layer networks composed of any non-negative Lipschitz activation function. See Theorem~\ref{thm:alg} for a detailed statement. Some additional remarks are in order: As stated in Theorem~\ref{thm:alg-inf}, our learning algorithm is proper, i.e., $h \in \mathcal{C}_k^{+}$. An important distinguishing feature of our algorithm from prior related work is that it requires no assumptions on the weight matrix of the network, and in particular that its sample complexity is independent of its condition number. Prior work had given parameter estimation algorithms for this concept class with sample complexity (and running time) polynomial in the condition number. On the other hand, the running time of our algorithm scales with $\exp(k)$, while previous parameter estimation algorithms had $\mathrm{poly}(k)$ dependence. 
The existence of a $\mathrm{poly}(k)$ time PAC learning algorithm remains an outstanding open question. An additional advantage of our algorithm is that it also immediately extends to the agnostic setting and in particular is robust to a small (dimension-independent) amount of adversarial $L_2$-error. The algorithm of Theorem~\ref{thm:alg-inf} crucially uses the assumption that the coefficients of the target network are positive. A natural question is whether an algorithm with similar guarantees can be obtained for unrestricted coefficients. Perhaps surprisingly, we provide evidence that such an algorithm does not exist. Specifically, our second main result is a correlational Statistical Query (SQ) lower bound ruling out a broad family of $\mathrm{poly}(d)$-time algorithms for $\mathcal{C}_k$ for $\epsilon = \Omega(1)$, for {\em any} $k = \omega(1)$. Specifically, we prove a lower bound for PAC learning $\mathcal{C}_k$ under Gaussian marginals in the correlational SQ model. A correlational SQ algorithm has query access to the target concept $f: \mathbb{R}^d \to \mathbb{R}$ via the following oracle: The oracle takes as input any bounded query function $q: \mathbb{R}^d \to [-1, 1]$ and an accuracy parameter $\tau>0$, and outputs an estimate $\gamma$ of the expectation $\E_{\mathbf{x} \sim \mathcal{D}}[f(\mathbf{x})q(\mathbf{x})]$ such that $|\gamma - \E_{\mathbf{x} \sim \mathcal{D}}[f(\mathbf{x})q(\mathbf{x})]| \leq \tau.$ We note that the correlational SQ model captures a broad family of algorithms, including first-order methods (e.g., gradient-descent), dimension-reduction, and moment-based methods. (In particular, our algorithm establishing Theorem~\ref{thm:alg-inf} can be easily simulated in this model.) 
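To make the oracle concrete: a correlational statistical query $\E_{\mathbf{x} \sim \mathcal{D}}[f(\mathbf{x})q(\mathbf{x})]$ can be simulated from i.i.d. labeled samples, with accuracy $\tau$ requiring on the order of $1/\tau^{2}$ samples. A schematic illustration (a toy one-dimensional target of our own choosing, not the lower-bound construction):

```python
import random

def correlational_sq(samples, q):
    # Estimate E[f(x) q(x)] from labeled samples (x, f(x));
    # a valid response of a correlational SQ oracle up to sampling error
    return sum(y * q(x) for x, y in samples) / len(samples)

random.seed(1)
f = lambda x: max(0.0, x)              # target: a single 1-d ReLU
samples = [(x, f(x))
           for x in (random.gauss(0.0, 1.0) for _ in range(100000))]

q = lambda x: max(-1.0, min(1.0, x))   # a bounded query function
estimate = correlational_sq(samples, q)
print(estimate)
```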
We establish the following: \begin{theorem}[Correlational SQ Lower Bound for $\mathcal{C}_k$] \label{thm:SQ-lb-inf} Any correlational SQ learning algorithm for $\mathcal{C}_k$ under the standard Gaussian distribution on $\mathbb{R}^d$ that guarantees error $\epsilon = \Omega(1)$ requires either queries of accuracy $d^{-\Omega(k)}$ or $2^{d^{\Omega(1)}}$ many queries. \end{theorem} The natural interpretation of Theorem~\ref{thm:SQ-lb-inf} is the following: If the SQ algorithm uses statistical queries of accuracy $d^{-\Omega(k)}$, then simulating a single query with i.i.d. samples would require $d^{\Omega(k)}$ samples (hence time). Otherwise, the algorithm would require $2^{d^{\Omega(1)}}$ time (since each query requires at least one unit of time). Theorem~\ref{thm:SQ-lb-inf}, combined with our Theorem~\ref{thm:alg-inf}, provides a (super-polynomial) computational separation between the PAC learnability of $\mathcal{C}_k$ and $\mathcal{C}_k^{+}$ in the correlational SQ model. We note that the statement of our general SQ lower bound (Theorem~\ref{thm:sq_theorem}) is much more general than Theorem~\ref{thm:SQ-lb-inf}. Specifically, we obtain a correlational SQ lower bound for PAC learning (under Gaussian marginals) a class of functions of the form $\sigma(\sum_{i=1}^k \alpha_i \phi(\langle \mathbf{w}^{(i)}, \mathbf{x} \rangle))$, where, roughly speaking, $\sigma$ is any odd non-vanishing function and $\phi$ is not a low-degree polynomial. \subsection{Our Techniques} \label{ssec:techniques} Here we provide an overview of our techniques in tandem with a comparison to prior work. We start with our algorithm establishing Theorem~\ref{thm:alg-inf}. Our learning algorithm for $\mathcal{C}_k^{+}$ employs a data-dependent dimension reduction procedure. Specifically, we give an efficient method to reduce our $d$-dimensional learning problem down to a $k$-dimensional problem, that can in turn be efficiently solved by a simple covering method. 
Let $f(\mathbf{x}) = \sum_{i=1}^k \alpha_i \phi (\langle \mathbf{w}^{(i)}, \mathbf{x} \rangle)$ be the target function and observe that $f$ depends only on the $k$ unknown linear forms $\langle \mathbf{w}^{(i)}, \mathbf{x} \rangle$, $i \in [k]$. If we could identify the subspace $V$ spanned by the $\mathbf{w}^{(i)}$'s exactly, then we could also identify $f$ by brute-force on $V$, noting that we only need to search a $k^2$-dimensional space of functions and that for any $\mathbf{x} \in \mathbb{R}^d$ it holds $f(\mathbf{x}) = f(\mathrm{proj}_V(\mathbf{x}))$. Our algorithm is based on a robust version of this idea. In particular, if we can find a subspace $V'$ that closely approximates $V$, then it suffices to solve for $f$ on $V'$ and use this projection to obtain an approximation to $f$. To find a subspace $V'$ approximating $V$, we consider the matrix of degree-$2$ Chow parameters (second moments) of $f$, i.e., $\E_{\mathbf{x} \sim \mathcal{N}(0, I)} [f(\mathbf{x}) (\mathbf{x}\bx^T-\mathbf{I})]$. It is not hard to see that the (normalized) second moments of $f$ are positive in the directions along $V$ and $0$ in orthogonal directions. Thus, if we could compute the second moments exactly, we could solve for $V$ as the span of the second moment matrix. Unfortunately, we can only approximate the true second moment matrix via samples. To deal with this approximation, we note that the true second moments will be large in the direction of $\mathbf{w}^{(i)}$ for components with large coefficients $\alpha_i$ and $0$ in directions orthogonal to $V$. Using this fact, we show that if $V'$ is the span of the $k$ largest eigenvalues of an approximate second moment matrix (obtained via sampling), the weight vectors $\mathbf{w}^{(i)}$ corresponding to the important components of $f$ will still be close to $V'$. 
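The key property driving the dimension reduction can be illustrated numerically: for a toy target, the matrix $\E_{\mathbf{x} \sim \mathcal{N}(0, I)}[f(\mathbf{x})(\mathbf{x}\mathbf{x}^T-\mathbf{I})]$ has positive diagonal entries along the weight directions and vanishing entries along directions orthogonal to $V$. A Monte Carlo sketch (our own toy instance in $\mathbb{R}^{3}$, not the algorithm itself):

```python
import math
import random

def relu(t):
    return max(0.0, t)

# Toy target f(x) = relu(x_1) + relu(x_2), so V = span{e_1, e_2}
def f(x):
    return relu(x[0]) + relu(x[1])

random.seed(2)
d, n = 3, 100000
M = [[0.0] * d for _ in range(d)]
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    fx = f(x)
    for i in range(d):
        for j in range(d):
            M[i][j] += fx * (x[i] * x[j] - (1.0 if i == j else 0.0))
M = [[mij / n for mij in row] for row in M]

# Along e_1 (in V): E[relu(x_1)(x_1^2 - 1)] = 1/sqrt(2*pi) > 0;
# along e_3 (orthogonal to V): the entry vanishes by independence
print(M[0][0], M[2][2])
```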
From this point, we can use a net-based argument to find a hypothesis $h \in \mathcal{C}_k^{+}$ with weight vectors on $V'$ so that $f(\mathbf{x})$ is close to $h(\mathrm{proj}_{V'}(\mathbf{x}))$ in $L_2$-norm. We note that the idea of using dimension-reduction to find a low-dimensional invariant subspace has been previously used in the context of PAC learning intersections of LTFs~\cite{Vempala10a, DKS18-nasty}. Our algorithm and its analysis of correctness are quite different from these prior works. We note that \cite{GeLM18} also used information based on low-degree moments for their parameter estimation algorithm, but in a qualitatively different way. In particular, \cite{GeLM18} used tensor-decomposition techniques (based on moments of degree up to four) to uniquely identify the weight vectors, under structural assumptions on the weight matrix (full-rank and bounded condition number). We now proceed to explain our SQ lower bound construction. As is well-known, there is a general methodology to establish such lower bounds, via an appropriate notion of SQ dimension~\cite{BFJ+:94, FeldmanGRVX17}. In our setting, to prove an SQ lower bound, it suffices to find a large collection of functions $f_1,\ldots,f_m \in \mathcal{C}_k$ with the following properties: (1) The $f_i$'s are pairwise far away from each other, and (2) The $f_i$'s have small pairwise correlations. The difficulty is, of course, to construct such a family. We describe our construction in the following paragraph. First, it is not hard to see that (1) and (2) can only be simultaneously satisfied if almost all of the $f_i$'s have nearly-matching low-degree moments. In fact, we provide a construction in which all the low-degree moments of all of the $f_i$'s vanish. To achieve this, we build on an idea introduced in~\cite{DKS17-sq}. 
Roughly speaking, the idea is to define a family of functions whose interesting information is hidden in a random low-dimensional subspace, so that learning an unknown function in the family amounts to finding the hidden subspace. In more detail, we will define a function in two dimensions which has the correct moments, and then embed it in a randomly chosen subspace. For simplicity, we explain our $2$-dimensional construction for ReLU activations, even though our SQ lower bound is more general. We provide an explicit $2$-dimensional construction of a mixture $F$ of $2k$ ReLUs whose first $k-1$ moments vanish exactly. For any $2$-dimensional subspace $V$, we can define $F_V(\mathbf{x}) = F(\mathrm{proj}_V(\mathbf{x})).$ From there, we can show that if $U$ and $V$ are two subspaces that are far apart --- in the sense that no unit vector in $U$ has large projection in $V$ --- then $F_U$ and $F_V$ will have small correlation --- on the order of the $k$-th power of the closeness parameter between the defining subspaces. Moreover, it is not hard to show that two randomly chosen $U$ and $V$ are far from each other with high probability. This allows us to find an exponentially large family of $F_V$'s that have pairwise exponentially small correlation. \subsection{Related Work} \label{ssec:related} In recent years, there has been an explosion of research on provable algorithms for learning neural networks in various settings, see, e.g.,~\cite{Janz15, SedghiJA16, DanielyFS16, ZhangLJ16, ZhongS0BD17, GeLM18, GeKLW19, BakshiJW19, GoelKKT17, Manurangsi18, GoelK19, VempalaW19} for some works on the topic. The majority of these works focused on parameter learning, i.e., the problem of recovering the weight matrix of the data generating neural network. In contrast, the focus of this paper is on PAC learning. We also note that PAC learning of simple classes of neural networks has been studied in a number of recent works~\cite{GoelKKT17, Manurangsi18, GoelK19, VempalaW19}. 
However, the problem of PAC learning linear combinations of (even) $3$ ReLUs under any natural distributional assumptions (and in particular under the Gaussian distribution) has remained open. At a high-level, prior works either rely on tensor decompositions~\cite{SedghiJA16, ZhongS0BD17, GeLM18, GeKLW19, BakshiJW19} or on kernel methods~\cite{ZhangLJ16, DanielyFS16, GoelKKT17, GoelK19}. In the following paragraphs, we describe in detail the prior works more closely related to the results of this paper. The work of~\cite{GeLM18} studies the parameter learning of positive linear combinations of ReLUs under the Gaussian distribution in the presence of additive (mean zero sub-gaussian) noise. That is, they consider the same concept class and noise model as we do, but study parameter learning as opposed to PAC learning. \cite{GeLM18} show that the parameters can be approximately recovered efficiently, under the assumption that the weight matrix is full-rank with bounded condition number. The sample complexity and running time of their algorithm scales polynomially with the condition number. More recently,~\cite{BakshiJW19, GeKLW19} obtained efficient parameter learning algorithms for vector-valued depth-$2$ ReLU networks under the Gaussian distribution. Similarly, the algorithms in these works have sample complexity and running time scaling polynomially with the condition number. We note that the algorithmic results in the aforementioned works do not apply to $\mathcal{C}_k$, i.e., the class of arbitrary linear combinations of ReLUs. \cite{VempalaW19} show that gradient descent agnostically PAC learns low-degree polynomials using neural networks as the hypothesis class. Their approach has implications for (realizable) PAC learning of certain neural networks under the uniform distribution on the sphere. We note that their method implies an algorithm with sample complexity and running time exponential in $1/\epsilon$, even for a single ReLU. 
\cite{GoelK19} give an efficient PAC learning algorithm for certain $2$-hidden-layer neural networks under arbitrary distributions on the unit ball. We emphasize that their algorithm does not apply to (positive) linear combinations of ReLUs. In fact, recent work has shown that the problem we solve in this paper is NP-hard under arbitrary distributions, even for $k=2$~\cite{GKMR20}. The SQ model was introduced by~\cite{Kearns:98} in the context of learning Boolean-valued functions as a natural restriction of the PAC model~\cite{Valiant:84}. A recent line of work~\cite{Feldman13, FeldmanPV15, FeldmanGV15, Feldman16} extended this framework to general search problems over distributions. One can prove unconditional lower bounds on the computational complexity of SQ algorithms via an appropriate notion of {\em Statistical Query dimension}. A lower bound on the SQ dimension of a learning problem provides an unconditional lower bound on the computational complexity of any SQ algorithm for the problem. The work of \cite{VempalaW19} establishes correlational SQ lower bounds for learning a class of degree-$k$ polynomials in $d$ variables. \cite{Shamir18} shows that gradient-based algorithms (a special case of correlational SQ algorithms) cannot efficiently learn certain families of neural networks under well-behaved distributions (including the Gaussian distribution). We note that the lower bound constructions in these works do not imply corresponding lower bounds for one-hidden-layer ReLU networks. \paragraph{Concurrent and Independent Work.} Contemporaneous work~\cite{GGJKK20}, using a different construction, obtained super-polynomial SQ lower bounds for learning one-hidden-layer neural networks (with ReLU and other activations) under the Gaussian distribution. \section{Preliminaries} \label{sec:prelims} \noindent {\bf Notation.} For $n \in \mathbb{Z}_+$, we denote $[n] \stackrel{{\mathrm {\footnotesize def}}}{=} \{1, \ldots, n\}$.
We will use small boldface characters for vectors. For $\mathbf{x} \in \mathbb{R}^d$ and $i \in [d]$, $\mathbf{x}_i$ denotes the $i$-th coordinate of $\mathbf{x}$, and $\|\mathbf{x}\|_2 \stackrel{{\mathrm {\footnotesize def}}}{=} (\mathop{\textstyle \sum}_{i=1}^d \mathbf{x}_i^2)^{1/2}$ denotes the $\ell_2$-norm of $\mathbf{x}$. We denote by $\snorm{2}{\matr A}$ the spectral norm of a matrix $\matr A$. We will use $\langle \mathbf{x}, \mathbf{y} \rangle$ for the inner product between $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$. We will use $\E[X]$ for the expectation of a random variable $X$, $\mathbf{Var}[X]$ for its variance, and $\mathbf{Pr}[\mathcal{E}]$ for the probability of an event $\mathcal{E}$. We denote by $\mathbb{S}^{d-1}$ the unit sphere in $\mathbb{R}^d$, and by $ \theta(\vec u, \vec v)$ the angle between the vectors $\vec u,\vec v$. For a vector of weights $\vec \alpha = (\alpha_1,\ldots, \alpha_k) \in \mathbb{R}^{k}$ and a matrix $\matr W \in \mathbb{R}^{k \times d}$ with rows $\vec w^{(i)}$, we denote $ f_{\vec \alpha, \matr W}(\vec x) = \vec \alpha^T \phi(\matr W \vec x) =\sum_{i=1}^{k} \alpha_i\ \phi(\langle \vec w^{(i)}, \vec x \rangle)$, where $\phi$ is applied entrywise. Let $\mathcal{N}$ denote the standard univariate Gaussian distribution; we also denote by $\mathcal{N}^2$ the standard two-dimensional Gaussian distribution and by $\mathcal{N}^d$ the $d$-dimensional one. \section{Efficient Learning Algorithm} \label{sec:alg} In this section, we give our upper bound for the problem of learning positive linear combinations of Lipschitz activations, thereby establishing Theorem~\ref{thm:alg-inf}.
We prove the following more general statement: \begin{theorem}[Learning Sums of Lipschitz Activations] \label{thm:alg} Let $f(\vec x) = \sum_{i=1}^k \alpha_i \phi\big(\dotp{\vec w^{(i)}}{\vec x}\big)$ with $\alpha_i>0$ for all $i\in[k]$, where $\phi(t)$ is an $L$-Lipschitz, non-negative activation function such that $\E_{t \sim \mathcal{N}}[\phi(t)] \geq C$, $\E_{t\sim \mathcal{N}}[\phi(t)(t^2-1)]\geq C$, where $C>0$ and $\E_{t \sim \mathcal{N}}[\phi^2(t)]$ is finite. There exists an algorithm that, given $k\in \mathbb{N}$, $\epsilon>0$, and access to noisy samples of $f: \mathbb{R}^d \rightarrow \mathbb{R}_+$, draws $m = d \cdot \mathrm{poly}(k, 1/\epsilon)\cdot \mathrm{poly}(L/C)$ samples, runs in time $\mathrm{poly}(m) + \widetilde{O}((1/\epsilon)^{k^2})$, and outputs a proper hypothesis $h$ that, with probability at least $9/10$, satisfies \[ \E_{\vec x \sim\mathcal{N}^d}[(f(\vec x) - h(\vec x))^2] \leq \epsilon^2 \mathrm{poly}(L/C) \left(\sigma^2 + \E_{\vec x \sim \mathcal{N}^d}[f(\mathbf{x})^2]\right)\;. \] \end{theorem} \begin{remark} Theorem~\ref{thm:alg-inf} follows as a corollary of the above, by noting that the ReLU satisfies $L=1$ and $C=\frac{1}{\sqrt{2\pi}}$. \end{remark} The following fact gives formulas for the low-degree Chow parameters of a one-layer network (see Appendix~\ref{app:upper_bound}, Fact~\ref{fct:chow_app}). \begin{fact}[Low-degree Chow Parameters] \label{lem:chow_formulas} Let $f: \mathbb{R}^d \to \mathbb{R}$ be of the form $f(\vec x) = \sum_{i=1}^k \alpha_i\cdot \phi\left(\dotp{\vec w^{(i)}}{\vec x}\right)$.
Then $ \E_{\vec x \sim \mathcal{N}^d}\left[f(\vec x)\right] =\E_{t\sim \mathcal{N}}[\phi(t)] \sum_{i=1}^k \alpha_i \;,$ $ \E_{\vec x \sim \mathcal{N}^d}\left[f(\vec x) \vec x\right] =\E_{t\sim \mathcal{N}}[\phi(t)t]\cdot \sum_{i=1}^k \alpha_i \vec w^{(i)} \;,$ and \begin{align} \label{eq:degree_2_formula} \matr A = \E_{\vec x \sim \mathcal{N}^d}\left[f(\vec x) ( \vec x\vec x^T-\matr I)\right]= \E_{t\sim \mathcal{N}}[\phi(t)(t^2-1)] \sum_{i=1}^k \alpha_i\vec w^{(i)}{\vec w^{(i)}}^T \;. \end{align} \end{fact} The crucial formula is the one of the degree-$2$ Chow parameters, Equation~\eqref{eq:degree_2_formula}. In fact, we can already describe the main idea of our upper bound. Let us assume that we have the degree-$2$ Chow parameters matrix $\matr A$ {\em exactly}. Then, by using singular value decomposition, we would obtain a basis of the vector space spanned by the parameters $\vec w^{(i)}$. The dimension of this space is at most $k$ and therefore in that way we essentially reduce the dimension of the problem from $d$ down to $k$. To find parameters $\hat{\alpha}_i, \hat{\vec w}^{(i)}$ that give small mean squared error, we can now make a grid $\mcal G$ and pick the ones that minimize the empirical mean squared error with the samples, that is $$ \min_{\vec \beta, \matr U \in \mcal G} \sum_{i=1}^m (f_{\vec \beta, \matr U}(\vec x^{(i)}) - y^{(i)})^2 \;. $$ Even though we do not have access to the matrix $\matr A$ exactly, we can estimate it empirically. Since the activation function $\phi(\cdot)$ is well-behaved and the distribution of the examples is Gaussian, we can get a very accurate estimate of $\matr A$ with roughly $\widetilde{O}(d k /\epsilon^2)$ samples. We give the following lemma whose proof relies on matrix concentration and concentration of polynomials of Gaussian random variables (see Appendix~\ref{app:chow_estimation}, Lemma~\ref{lem:appendix_chow_est}). 
\begin{lemma}[Estimation of degree-$2$ Chow parameters] \label{lem:empirical_chow_parameters} Let $ f_{\vec \alpha, \matr W}(\vec x) = \sum_{i=1}^k \alpha_i \phi(\langle \vec w^{(i)}, \vec x \rangle)$, where $\phi(t)$ is an $L$-Lipschitz, non-negative activation function such that $\E_{t \sim \mathcal{N}}[\phi(t)] \geq C$. Let $\matr \Sigma = \E_{\vec x \sim \mathcal{N}^d}[ f_{\alpha, \matr W}(\vec x) \vec x \otimes \vec x ]$ be the degree-$2$ Chow parameters of $f_{\alpha, \matr W}$. Then, for some $N = \widetilde{O}(d k/\epsilon^2)$ samples $(\vec x^{(i)}, y^{(i)})$, where $y^{(i)}=f_{\vec \alpha, \matr W}(\vec x^{(i)}) +\xi_i$ and $\xi_i$ is a zero-mean, subgaussian noise with variance $\sigma^2$, it holds with probability at least $99\%$ that $$\snorm{2}{ \frac{1}{N} \sum_{i=1}^N \vec x^{(i)} \otimes \vec x^{(i)} y^{(i)} - \matr \Sigma } \leq \epsilon \left( \sigma + \frac{L}{C} \E_{\vec x \sim \mathcal{N}^d}[f_{\vec \alpha, \matr W}(\vec x)] \right) \;. $$ \end{lemma} The next step is to quantify how accurately we need to estimate the degree-$2$ Chow parameters, so that doing SVD on the empirical matrix gives us a good approximation of the subspace spanned by the true parameters $\vec w^{(i)}$. We show that estimating the degree-$2$ Chow parameter matrix within spectral norm roughly $\epsilon/k$ suffices. In particular, we show that the top-$k$ eigenvectors of our empirical estimate span approximately the subspace where the true parameters $\vec w^{(i)}$ lie. For the proof, we are going to use the following lemma that bounds the difference of a function evaluated at correlated normal random variables. \begin{lemma}[Correlated Differences, Lemma 6 of \cite{KTZ19}] \label{lem:correlated_differences} Let $r(\vec x) \in L_2(\mathbb{R}^d, \mathcal{N}^d)$ be differentiable almost everywhere and let \[ D_\rho = \mathcal{N}\left(\vec 0, \begin{pmatrix} \matr I & \rho \matr I \\ \rho \matr I & \matr I \end{pmatrix} \right).
\] We call a pair of random variables $(\vec x, \vec z) \sim D_{\rho}$ $\rho$-correlated. It holds \[ \frac{1}{2} \E_{(\vec x, \vec z) \sim D_\rho}[(r(\vec x) - r(\vec z))^2] \leq (1-\rho) \E_{\vec x \sim \mathcal{N}^d}\left[ \snorm{2}{\nabla r(\vec x)}^2 \right]\, . \] \end{lemma} We are now ready to prove the key technical lemma of our approach. We remark that the following dimension-reduction lemma is rather general and holds for any reasonable activation function, in the sense that the error is bounded as long as the expected squared derivative $\E_{t \sim \mathcal{N}}[(\phi'(t))^2]$ is bounded. \begin{lemma}[Dimension Reduction]\label{lem:dimension_reduction} Let $f_{\vec \alpha,\matr W}(\vec x) = \sum_{i=1}^k \alpha_i \phi\big(\dotp{\vec w^{(i)}}{\mathbf{x}}\big)$ with $\alpha_i>0$, let $\matr A=\E_{\vec x\sim \mathcal{N}^d}[f_{\vec \alpha,\matr W}(\vec x)\vec x\vec x^T ]$ and $ \E_{t\sim \mathcal{N}}[\phi(t)(t^2-1)]= C_1$. Let $\matr M \in \mathbb{R}^{d \times d}$ be a matrix such that $\snorm{2}{\matr A - \matr M} \leq \epsilon$ and let $\cal V$ be the subspace of $\mathbb{R}^d$ that is spanned by the top-$k$ eigenvectors of $\matr M$. There exist $k$ vectors $\vec v^{(i)}\in \cal V$ such that for the matrix $\matr V\in \mathbb{R}^{k\times d}$ constructed by the vectors $\vec v^{(i)}$, it holds $ \E_{\vec x\sim \mathcal{N}^d}[(f_{\vec \alpha,\matr W}(\vec x) - f_{\vec \alpha,\matr V}(\vec x))^2] \leq 2 k \epsilon \E_{\vec x\sim \mathcal{N}^d}[f_{\vec \alpha,\matr W}(\vec x)] \E_{t\sim \mathcal{N}}[(\phi'(t))^2]/C_1\;. $ \end{lemma} \begin{proof} Without loss of generality, we may assume that $\epsilon = \snorm{2}{\matr A - \matr M}$. Moreover, let $\matr A'= \matr A- \E_{t\sim \mathcal{N}}[\phi(t)] \sum_i\alpha_i \matr I$, $\matr M'=\matr M- \E_{t\sim \mathcal{N}}[\phi(t)] \sum_i\alpha_i \matr I$, and observe that $\snorm{2}{\matr A-\matr M}=\snorm{2}{\matr A'-\matr M'}$.
We note that $\matr A' =\E_{t\sim \mathcal{N}}[\phi(t)(t^2-1)] \sum_{i=1}^k \alpha_i\vec w^{(i)}{\vec w^{(i)}}^T $, from Fact~\ref{lem:chow_formulas}. Let $\vec b^{(1)},\ldots, \vec b^{(k)}$ be the eigenvectors corresponding to the top-$k$ eigenvalues of $\matr M'$ (which are also the top-$k$ eigenvectors of $\matr M$), and let ${\cal V}=\mathrm{span}(\vec b^{(1)},\ldots, \vec b^{(k)})$. Let $\vec v^{(i)}=\mathrm{proj}_{\cal V}(\vec w^{(i)})$ and $\vec r^{(i)}=\vec w^{(i)}-\vec v^{(i)} $. (The first inequality below holds, in fact, for any choice of vectors $\vec v^{(1)}, \ldots, \vec v^{(k)} \in \mathbb{R}^d$.) Then we have \begin{align} \label{eq:fourier_inequality} \E_{\vec x \sim \mathcal{N}^d} [(f_{\vec \alpha, \matr W}(\vec x) - f_{\vec \alpha, \matr V}(\vec x))^2 ] &\leq k \sum_{i=1}^k \alpha_i^2 \E_{\vec x \sim \mathcal{N}^d} \left[ \left(\phi\big(\langle{\vec w^{(i)}},{\vec x}\rangle\big) - \phi\big(\langle{\vec v^{(i)}},{\vec x\rangle}\big)\right)^2 \right] \nonumber \\ &\leq 2 k \E_{t \sim \mathcal{N}} \left[ (\phi'(t))^2 \right] \sum_{i=1}^k \alpha_i^2 (1- \langle\vec w^{(i)},\vec v^{(i)}\rangle), \end{align} where for the last inequality we used Lemma~\ref{lem:correlated_differences} and the fact that the random variables $\langle \vec w^{(i)}, \vec x \rangle$ and $\langle \vec v^{(i)}, \vec x \rangle$ are $\rho_i$-correlated with $\rho_i = \langle \vec w^{(i)}, \vec v^{(i)} \rangle$. It suffices to prove that $\snorm{2}{\vec r^{(i)}}=\snorm{2}{\vec w^{(i)}-\vec v^{(i)}}\leq \epsilon'$ for some sufficiently small $\epsilon'$. Note that because $\vec r^{(i)} \in \cal V^{\perp}$, it holds ${\vec r^{(i)}}^T \matr M' \vec r^{(i)} \leq \snorm{2}{\vec r^{(i)}}^2 \max_{\vec u \in {\cal V}^{\perp}} \frac{\vec u^T \matr M' \vec u}{\snorm{2}{\vec u}^2}$, and ${\cal V}$ is spanned by the top-$k$ eigenvectors of $\matr M'$.
Let $\vec u=\sum_{i=k+1}^d \vec u^{(i)}$, where $\vec u^{(i)}\in \ker(\matr M'-\lambda_i \matr I)$ for all $i\in\{k+1,\ldots, d\} $ and $\lambda_i$ is the $i$-th greatest eigenvalue. From Weyl's inequality, we have that if $A_i$ are the eigenvalues of $\matr A'$ in decreasing order, then $|A_i -\lambda_i|\leq \epsilon$; moreover, the eigenvalues of $\matr A'$ for $i>k$ are zero, because $\rank(\matr A')\leq k$. Thus, $$ \max_{\vec u \in {\cal V}^{\perp}} \frac{\vec u^T \matr M' \vec u}{\snorm{2}{\vec u}^2}\leq \lambda_{k+1}\leq \epsilon \;,$$ because the eigenvalues of $\matr M'$ corresponding to eigenvectors in ${\cal V}^{\perp}$ are at most $\epsilon$, which implies that ${\vec r^{(i)}}^T \matr M' \vec r^{(i)}$ $ \leq \epsilon \snorm{2}{\vec r^{(i)}}^2 \;.$ We also have $ {\vec r^{(i)}}^T \matr A' \vec r^{(i)} \geq \E_{t\sim \mathcal{N}}[\phi(t)(t^2-1)]\alpha_i {\vec r^{(i)}}^T \vec w^{(i)} {\vec w^{(i)}}^T \vec r^{(i)} = C_1\alpha_i \cdot \left(1-\snorm{2}{{\vec v^{(i)}}}^2 \right)^2 = C_1\alpha_i \snorm{2}{\vec r^{(i)}}^4 \;,$ where the last equality follows from the Pythagorean theorem. Therefore, $$\snorm{2}{\vec r^{(i)}}^2 \epsilon \geq {\vec r^{(i)}}^T \matr M' \vec r^{(i)} \geq {\vec r^{(i)}}^T \matr A' \vec r^{(i)} -\epsilon \snorm{2}{\vec r^{(i)}}^2 \geq C_1\alpha_i \snorm{2}{\vec r^{(i)}}^4 -\epsilon \snorm{2}{\vec r^{(i)}}^2\;.$$ Thus, we obtain $\alpha_i \snorm{2}{\vec r^{(i)}}^2 \leq 2\epsilon/C_1\;. $ The bound now follows directly from~\eqref{eq:fourier_inequality}, since $\alpha_i (1- \langle\vec w^{(i)},\vec v^{(i)}\rangle) = \alpha_i \snorm{2}{\vec r^{(i)}}^2 \leq 2 \epsilon/C_1$, using that $\snorm{2}{\vec w^{(i)}}=1$ and $\langle \vec w^{(i)},\vec v^{(i)}\rangle = \snorm{2}{\vec v^{(i)}}^2$. \end{proof} Now we have all the ingredients to complete our proof. Since the dimension of the subspace that we have learned is at most $k$, we can construct a grid with $(k/\epsilon)^{O(k)}$ candidates that contains an approximate solution. Our full algorithm is summarized as Algorithm~\ref{alg:nn_learner}.
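The dimension-reduction step above can be checked numerically. The sketch below is ours, not the paper's implementation: it assumes ReLU activations, coefficients $\alpha_i=1$, and orthonormal unit weight vectors, estimates the degree-$2$ Chow matrix $\E[f(\vec x)(\vec x\vec x^T-\matr I)]$ from samples, and verifies that the true weights nearly lie in the span of its top-$k$ eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 15, 3, 400_000

# Hypothetical instance (our choice): f(x) = sum_i ReLU(<w_i, x>) with
# alpha_i = 1 and orthonormal unit weight vectors w_i.
W = np.linalg.qr(rng.standard_normal((d, k)))[0].T   # k x d, orthonormal rows
relu = lambda t: np.maximum(t, 0.0)

X = rng.standard_normal((n, d))
y = relu(X @ W.T).sum(axis=1)                        # noiseless labels f(x)

# Empirical degree-2 Chow matrix:  A_hat ~ E[f(x) (x x^T - I)].
A_hat = (X.T * y) @ X / n - y.mean() * np.eye(d)

# For ReLU, E[phi(t)(t^2 - 1)] = 1/sqrt(2*pi) ~ 0.399, so A ~ 0.399 * sum_i w_i w_i^T
# and its top-k eigenvectors approximately span span(w_1, ..., w_k).
eigvals, eigvecs = np.linalg.eigh(A_hat)             # ascending eigenvalues
V = eigvecs[:, -k:]                                  # d x k basis of learned subspace
residual = np.linalg.norm(W.T - V @ (V.T @ W.T))     # Frobenius projection residual
assert residual < 0.3
```

With these sample sizes the top $k$ empirical eigenvalues concentrate near $0.399$ while the remaining ones stay near zero, so the subspace is recovered up to a small angle.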
The proof of Theorem~\ref{thm:alg} follows from the above discussion and can be found in Appendix~\ref{app:upper_bound} (Theorem~\ref{thm:appendix}). \begin{algorithm}[H] \caption{Learning One-Hidden-Layer Networks with Positive Coefficients and Lipschitz Activations} \label{alg:nn_learner} \begin{algorithmic}[1] \Procedure{NNLearner}{$k, \epsilon$} \Comment{$k$: number of rows of weight matrix $\matr W$, $\epsilon$: accuracy.} \State Draw $m = d\, \mathrm{poly}( k, 1/\epsilon)$ samples, $(\vec x^{(i)}, y^{(i)})$, to estimate $\widehat{\matr M}$.\Comment{Lemma~\ref{lem:empirical_chow_parameters}} \State Find the SVD of $\widehat{\matr M}$ to obtain the $k$ eigenvectors $\vec v^{(1)}, \ldots, \vec v^{(k)}$ that correspond to the $k$ largest eigenvalues, and let $\mathcal{V}$ be the subspace spanned by these vectors. \State Draw $m'=O(k L^2)$ samples and compute an estimate $\hat{\mu}$ of the expectation of $f(\vec x)$. \State Let $\mcal G$ be an $\epsilon/k$-cover of a $k$-ball with radius $(\hat{\mu}+c\sigma)^2$ over $\cal V$, with respect to the $\ell_2$-norm. \State Draw $n = \mathrm{poly}(k,1/\epsilon)$ fresh samples $(\vec x^{(i)}, y^{(i)})$. \State For every $\matr U = (\vec u^{(1)}, \ldots, \vec u^{(k)}) \in \mathcal{G}^k$, let $f_{\matr U}= \sum_{i=1}^k \snorm{2}{ \vec u^{(i)}} \phi\big(\dotp{\vec u^{(i)}}{\vec x}/\snorm{2}{ \vec u^{(i)}}\big)$ and compute $ e_{\matr U} = \frac{1}{n} \sum_{i=1}^n \left(f_{\matr U}(\vec x^{(i)}) - y^{(i)}\right)^2 $\label{alg:estimator} \State Output the candidate $f_{\matr U}$ that minimizes the corresponding error $e_{\matr U}$. \EndProcedure \end{algorithmic} \end{algorithm} \section{Statistical Query Lower Bound} \label{sec:sq} We start by formally defining the class of algorithms to which our lower bound applies. In the standard statistical query model, we do not have direct access to samples from the distribution; instead, we can pick a function $q$ and get an approximation to its expected value.
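Such an oracle can be simulated empirically. The toy sketch below (ours; the sampling-based simulation and all names are illustrative assumptions) answers a correlational query $\E[q(\vec x) f(\vec x)]$ up to an adversarial tolerance $\tau$:

```python
import numpy as np

rng = np.random.default_rng(1)

def inner_product_query(q, f, xs, tau):
    """Toy correlational SQ oracle: returns any value within tau of
    E[q(x) f(x)], here computed over an empirical sample xs."""
    exact = np.mean(q(xs) * f(xs))
    return exact + tau * (2 * rng.random() - 1)   # adversarial slack in [-tau, tau]

# Query q(x) = sign(x) (bounded in [-1, 1]) against the concept f(x) = ReLU(x)
# under N(0, 1): the answer approximates E[sign(x) ReLU(x)] = 1/sqrt(2*pi).
xs = rng.standard_normal(2_000_000)
ans = inner_product_query(np.sign, lambda x: np.maximum(x, 0.0), xs, tau=0.01)
assert abs(ans - 1 / np.sqrt(2 * np.pi)) < 0.02
```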
In this work, we consider algorithms that have access to correlational statistical queries, which are more restrictive and are defined as follows. We remark that in the following definition of inner product queries we do not assume that the concept $f(\mathbf{x})$ is bounded pointwise but only in the $L_2$ sense. The properties that we shall need for our result hold also under this weaker assumption. \nnew{ \begin{definition}[Correlational/Inner Product Queries] Let $\mathcal{D}$ be a distribution over some domain $X$ and let $f:X\mapsto \mathbb{R}$, where $\E_{\mathbf{x} \sim \mathcal{D}}[f^2(\mathbf{x})] \leq 1$. An inner product query is specified by some function $q: X \mapsto [-1,1]$ and a tolerance $\tau>0$, and returns a value $u$ such that $u\in [\E_{\mathbf{x} \sim \mathcal{D}}[q(\mathbf{x})f(\mathbf{x})]-\tau,\E_{\mathbf{x} \sim \mathcal{D}}[q(\mathbf{x})f(\mathbf{x})]+\tau]$. \end{definition} } We will prove that almost any reasonable choice of activations $\sigma$, $\phi$ defines a family of functions that is hard to learn. More precisely, for a pair of activations $\sigma, \phi$, we define the following function $f_{\sigma, \phi}: \mathbb{R}^2 \to \mathbb{R}$: \begin{equation} \label{eq:2_dim_concept} f_{\sigma, \phi}(x,y) = \sigma\left( \sum_{m=1}^{2k} (-1)^{m} \phi\left(x \cos\big(\frac{\pi m}{k}\big) + y \sin\big(\frac{\pi m}{k}\big) \right) \right)\,. \end{equation} We are now ready to define the conditions on the activations $\sigma, \phi$ that are needed for our construction. We define \begin{align} \mcal{H} = &\Big\{ f_{\sigma, \phi} \ :\ \sigma \text{ is odd } \text{ and } ~ f_{\sigma, \phi} \not \equiv 0 \Big \} \;, \label{eq:bad_activations} \end{align} where the second condition means that $ f_{\sigma, \phi}(x,y)$ \emph{as a function of $x,y$} is not identically zero. We can now define the class of (normalized) functions on $\mathbb{R}^d$ for which our lower bound holds. 
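Before doing so, the moment-vanishing mechanism behind the construction in Eq.~\eqref{eq:2_dim_concept} can be checked directly. After rotating into the direction of each unit vector $(\cos(\pi m/k), \sin(\pi m/k))$, a degree-$n$ moment of the inner mixture collects the geometric-sum factor $\sum_{m=1}^{2k} (-1)^m e^{\mathrm{i} n \pi m/k}$, which vanishes for every frequency $|n| < k$. The sketch below (ours) verifies this identity and, for $\phi = \mathrm{ReLU}$ with $k$ even, that the inner mixture is not identically zero (for ReLU and odd $k$ the antipodal pairs combine into linear terms that cancel, which is why the parity of $k$ matters):

```python
import numpy as np

k = 4                                   # even k; for ReLU and odd k the mixture vanishes
angles = np.pi * np.arange(1, 2 * k + 1) / k
signs = (-1.0) ** np.arange(1, 2 * k + 1)

# The sum  sum_{m=1}^{2k} (-1)^m e^{i n pi m / k}  vanishes for all |n| < k ...
for n in range(k):
    assert abs(np.sum(signs * np.exp(1j * n * angles))) < 1e-9
# ... and equals 2k at frequency n = k, which is why degree-k moments survive.
assert abs(np.sum(signs * np.exp(1j * k * angles)) - 2 * k) < 1e-9

# With phi = ReLU and even k, the inner mixture is not identically zero:
relu = lambda t: np.maximum(t, 0.0)
xs, ys = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
F = sum(s * relu(xs * np.cos(a) + ys * np.sin(a)) for s, a in zip(signs, angles))
assert np.max(np.abs(F)) > 0.1          # e.g. F(1, 0) = 1 - sqrt(2)
```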
Given a set $\mcal{W}$ of $2 \times d$ matrices, we can embed $f_{\sigma, \phi}$ into $\mathbb{R}^d$ by defining the following class of functions \begin{equation} \label{eq:hard_functions} \mcal{F}_{\sigma, \phi}^{\mcal{W}} = \left\{ \vec x \mapsto f_{\sigma,\phi}(\vec W \vec x)\Big/\left(\E_{\mathbf{x} \sim \mathcal{N}^d}[f^2_{\sigma, \phi}(\vec W \vec x)]\right)^{1/2} : \vec W \in \mcal{W} \right\} \,. \end{equation} \begin{remark} For any $f\in \mcal{H}$, we have that $f:\mathbb{R}^2\rightarrow \mathbb{R}$. We embed $f$ into $\mathbb{R}^d$ by taking $f_{\vec W}(\vec x) = f(\vec W \vec x)$ for some $2\times d$ matrix $\vec W$ with orthogonal rows. We prove correlational SQ lower bounds against learning an approximation of the embedding plane $\vec W$ from a function $f_{\vec W}$. This will imply a lower bound against learning $f_{\vec W}$ so long as the function does not vanish identically. However, this is not an entirely trivial condition: for example, if $\phi$ is a polynomial of degree less than $k$, the function does vanish identically. As we show in Appendix~\ref{sec:iterpolation}, this is essentially the only way that things can go wrong. In particular, so long as $\phi$ is not a low-degree polynomial and the parity of $k$ is chosen appropriately, the function $f$ will not vanish, and our lower bounds will apply. \end{remark} \begin{theorem}[Correlational SQ Lower Bound] \label{thm:sq_theorem} Let $\sigma, \phi$ be activations such that $f_{\sigma, \phi} \in \mcal{H}$ (see Eq.~\eqref{eq:bad_activations}).
There exists a set $\mcal W$ of matrices $\matr W \in \mathbb{R}^{2 \times d}$ such that for all $f \in \mcal{F}_{\sigma, \phi}^{\mcal{W}}$ (see Eq.~\eqref{eq:hard_functions}) $\E_{\mathbf{x} \sim \mathcal{N}^d}[f^2(\mathbf{x})] = 1$ and the following holds: Any correlational SQ learning algorithm that for every concept $f \in {\mcal F}_{\sigma, \phi}^{\mcal{W}}$ learns a hypothesis $h$ such that $\E_{\mathbf{x} \sim \mathcal{N}^d}[ (f(\mathbf{x}) - h(\mathbf{x}))^2 ] \leq \epsilon $, where $\epsilon>0$ is some sufficiently small constant, requires either $2^{d^{\Omega(1)}}$ inner product queries or at least one query with tolerance at most $d^{-\Omega(k)}+2^{-d^{\Omega(1)}}$. \end{theorem} To prove our lower bound we will use an appropriate notion of SQ dimension. \nnew{Specifically, we define the correlational SQ dimension, which captures the difficulty of learning a class $\cal C$. \begin{definition}[Correlational Statistical Query Dimension] Let $\rho >0$, let $\mathcal{D}$ be a probability distribution over some domain $X$, and let $\cal C$ be a family of functions $f:X\mapsto \mathbb{R}$. We denote by $\rho(\mcal C)$ the average pairwise correlation of the functions in $\cal C$, that is, $ \rho(\mcal C) = \frac{1}{|\mcal C|^2} \sum_{g, r \in \mcal C} \E_{\mathbf{x}\sim \mathcal{D}}[g(\vec x) \cdot r(\vec x)] $. The correlational statistical dimension of $\cal C$ relative to $\mathcal{D}$ with average correlation, denoted by $\text{SDA}({\cal C},\mathcal{D},\rho)$, is defined to be the largest integer $m$ such that for every subset ${\cal C}' \subseteq \cal C$ of size at least $|{\cal C}'|\geq |{\cal C}|/m$, we have $\rho({\cal C}')\leq \rho$. \end{definition} } Establishing a lower bound on this dimension amounts to constructing a large family of functions with small average correlation.
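The geometric ingredient, namely that two random $2$-dimensional subspaces of $\mathbb{R}^d$ are nearly orthogonal (Lemma~\ref{lem:size_of_rotations} below), can be observed numerically. The sketch below (ours) draws random planes via QR factorization and measures $\snorm{2}{\matr A \matr B^T}$, the largest cosine of a principal angle between the planes:

```python
import numpy as np

rng = np.random.default_rng(2)
d, trials = 2000, 20

def random_plane(d):
    """Orthonormal basis (2 x d) of a random 2-dimensional subspace of R^d."""
    q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
    return q.T

# ||A B^T||_2 is the largest cosine of a principal angle between the planes;
# for independent random planes it concentrates at the scale d^{-1/2}.
norms = [np.linalg.norm(random_plane(d) @ random_plane(d).T, 2)
         for _ in range(trials)]
assert max(norms) < 10 / np.sqrt(d)     # ~0.22 for d = 2000; typical values ~0.05
```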
We will use the following result, which translates the correlational statistical dimension into a lower bound on the number of inner product queries needed to learn a function $f \in \mcal{C}$. We note that in this paper we consider inner-product queries of the form $q(\vec x) y $ where $y$ is not necessarily bounded. In fact, the proof of the following lemma does not require $q(\vec x) y$ to be pointwise bounded (a bounded $L_2$-norm is sufficient), as can be seen from the arguments in \cite{szorenyi2009characterizing, GGJKK20, VempalaW19}. \begin{lemma} \label{theorem:vem} Let $\mathcal{D}$ be a distribution on a domain $X$ and let $\cal C$ be a family of functions $f:X\mapsto \mathbb{R}$. Suppose for some $m, \tau >0$, we have $\textsc{SDA}({\cal C},\mathcal{D},\tau)\geq m$ and assume that for all $f\in \cal C$, $1\geq \E_{\mathbf{x} \sim \mathcal{D}}[f^2(\mathbf{x})] > \eta^2$. Any SQ learning algorithm that is allowed to make only inner product queries and for any $f\in \cal C$ outputs some hypothesis $h$ such that $\E_{\vec x \sim \mathcal{D}}[(h(\vec x) - f(\vec x))^2] \leq c \, \eta^2$, where $c>0$ is a sufficiently small constant, requires at least $\Omega(m)$ queries of tolerance $\sqrt{\tau}$. \end{lemma} We will require the following technical lemma, whose proof relies on Hermite polynomials and can be found in Appendix~\ref{app:hermite_polynomials} (Lemma~\ref{lem:bound_coleration_app}). \begin{lemma}\label{lem:bound_coleration} Let $p(\vec x): \mathbb{R}^2 \mapsto \mathbb{R}$ be a function and let $\matr U, \matr V \in \mathbb{R}^{2 \times d}$ be linear maps such that $\matr U \matr U^T = \matr V \matr V^T = \matr I \in \mathbb{R}^{2 \times 2}$. Then, $ \E_{\vec x \sim \mathcal{N}^d}[p(\matr U \vec x) p(\matr V \vec x)] \leq \sum_{m=0}^{\infty} \snorm{2}{\nnew{\matr U \matr V^T}}^m \E_{\vec x \sim \mathcal{N}^2}[(p^{[m]}(\vec x))^2] .
$ \end{lemma} Here, for $p \in L_2(\mathbb{R}^2, \mathcal{N}^2)$, $p^{[m]}$ denotes the projection of $p$ onto the subspace spanned by the Hermite polynomials of degree $m$. In the following simple lemma, we show that two random $2$-dimensional subspaces in high dimensions are nearly orthogonal. In particular, we can have an exponentially large family of almost orthogonal planes. For the proof see Appendix~\ref{app:hermite_polynomials} (Lemma~\ref{lem:sq_rota_app}). \begin{lemma}\label{lem:size_of_rotations} For any $0<c<1/2$, there exists a set $S$ of at least $2^{\Omega(d^c)}$ matrices in $\mathbb{R}^{2 \times d}$ such that for each pair $\matr A, \matr B \in S$, it holds $\snorm{2}{\nnew{\matr A \matr B^T}} \leq O(d^{c-1/2})$. \end{lemma} The following lemma shows that the correlation of any function $f_{\sigma,\phi} \in \mcal{H}$ with any low-degree polynomial is zero. For the proof see Appendix~\ref{app:hermite_polynomials} (Lemma~\ref{lem:function_low_ap}). \begin{lemma}\label{lem:function_low} Let $f_{\sigma, \phi} \in \mcal{H}$. For every polynomial $p(\mathbf{x})$ of degree at most $k$, it holds $\E_{\mathbf{x}\sim \mathcal{N}^2}[f_{\sigma,\phi}(\mathbf{x}) \cdot p(\mathbf{x})]=0$. \end{lemma} We are now ready to prove our main result. \begin{proof}[Proof of Theorem~\ref{thm:sq_theorem}] Let $f: \mathbb{R}^2 \to \mathbb{R}$ be the function $f_{\sigma,\phi}$ from Lemma~\ref{lem:function_low}. Fix $c \in (0,1/2)$ and a set $\cal W$ of matrices in $\mathbb{R}^{2\times d}$ satisfying the properties of Lemma~\ref{lem:size_of_rotations}. We consider the class of functions $\mcal F_{\sigma,\phi}^{ \mcal{W}}$ (see Eq.~\eqref{eq:hard_functions}). In particular, for all $\matr A_i,\matr A_j \in \cal W$, \nnew{ define the functions $G_{i}(\vec x) = f(\matr A_i \vec x)/\sqrt{\E_{\mathbf{x}\sim \mathcal{N}^2}[f^2(\mathbf{x})]} $ and $G_{j}(\vec x)=f(\matr A_j \vec x)/\sqrt{\E_{\mathbf{x}\sim \mathcal{N}^2}[f^2(\mathbf{x})]} $. Notice that since $\matr A_i \matr A_i^T = \matr I$ we have that $\E_{\mathbf{x} \sim \mathcal{N}^d}[G_{i}^2(\mathbf{x})] = 1$ for all $i$.
} The pairwise correlation of $G_i$ and $G_j$ is \begin{align} \rho(G_i,G_j)= \E_{\vec x \sim \mathcal{N}^d}[G_{i}(\vec x) G_{j}(\vec x)] = \frac{ \E_{\vec x \sim \mathcal{N}^d}[f(\matr A_i \vec x) f(\matr A_j \vec x)]}{\E_{\mathbf{x}\sim \mathcal{N}^2}[f^2(\mathbf{x})] } \;, \label{eq:pairwise_corelation} \end{align} where in the second equality we used that, since the rows of each $\matr A_i$ are orthonormal, $\matr A_i \vec x \sim \mathcal{N}^2$ for $\vec x \sim \mathcal{N}^d$. Then, using Lemma~\ref{lem:bound_coleration}, it holds \begin{align} \E_{\vec x \sim \mathcal{N}^d}[f(\matr A_i \vec x) f(\matr A_j \vec x)] &\leq \sum_{m>k} \snorm{2}{\nnew{\matr A_i \matr A_j^T}}^m \E_{\vec x \sim \mathcal{N}^2}[(f^{[m]}(\vec x))^2]\nonumber \\ &\leq \snorm{2}{\nnew{\matr A_i \matr A_j^T}}^{k+1} \sum_{m>k} \E_{\vec x \sim \mathcal{N}^2}[(f^{[m]}(\vec x))^2] \leq \snorm{2}{\nnew{\matr A_i \matr A_j^T}}^{k+1} \E_{\vec x \sim \mathcal{N}^2}[(f(\vec x))^2]\nonumber \\ &\leq O(d^{k(c-1/2)}) \E_{\vec x \sim \mathcal{N}^2}[(f(\vec x))^2] \;, \label{eq:theorem_prove} \end{align} where in the first inequality we used that the first $k$ moments of $f$ vanish (Lemma~\ref{lem:function_low}), in the second the fact that $\snorm{2}{\matr A_i \matr A_j^T} \leq 1$, in the third Parseval's theorem, and in the last Lemma~\ref{lem:size_of_rotations}. Thus, combining Equation~\eqref{eq:theorem_prove} with Equation~\eqref{eq:pairwise_corelation}, we get that the pairwise correlation is at most $\tau=O(d^{k(c-1/2)})$. Then, by a straightforward calculation, the average correlation of the set $\mcal F_{\sigma, \phi}^{ \mcal{W}}$ is at most $\tau + \frac{1-\tau}{|\mcal F_{\sigma,\phi}^{ \mcal{W}}|}\leq \tau + |\mcal F_{\sigma,\phi}^{ \mcal{W}}|^{-1}\leq \tau +2^{-\Omega(d^{c})} $. Moreover, for $\tau'=O(d^{k(c-1/2)})+2^{-\Omega(d^{c})}$, we have $\textsc{SDA}(\mcal F_{\sigma,\phi}^{ \mcal{W}},\mathcal{N}^d,\tau')=2^{\Omega(d^{c})}$, and the result follows from Lemma~\ref{theorem:vem}.
\end{proof} \section{Conclusions and Future Directions} \label{sec:conc} In this paper, we studied the problem of PAC learning one-hidden-layer neural networks with $k$ hidden units on $\mathbb{R}^d$ under the Gaussian distribution. For the case of positive coefficients, we gave a polynomial time learning algorithm for $k$ up to $\tilde{O}(\sqrt{\log d})$. On the negative side, we showed that no such algorithm is possible for unrestricted coefficients in the Correlational SQ model. This work is part of an extensive recent literature on designing provable algorithms for learning simple families of neural networks. In the context of one-hidden-layer networks, a number of concrete open questions remain: Can we improve the dependence on $k$ in the running time to polynomial? Can we design learning algorithms that succeed under less stringent distributional assumptions? We believe that progress in both these directions is attainable. \paragraph{Acknowledgements} We thank the authors of~\cite{GGJKK20} for useful comments that helped us improve the presentation of our lower bound proof.
\section{Introduction} Submodular set functions are defined by the following condition for all pairs of sets $S,T$: $$ f(S \cup T) + f(S \cap T) \leq f(S) + f(T),$$ or equivalently by the property that the {\em marginal value} of any element, $f_S(j) = f(S+j)-f(S)$, satisfies $f_T(j) \leq f_S(j)$, whenever $j \notin T \supset S$. In addition, a set function is called monotone if $f(S) \leq f(T)$ whenever $S \subseteq T$. Throughout this paper, we assume that $f(S)$ is nonnegative. Submodular functions have been studied in the context of combinatorial optimization since the 1970's, especially in connection with matroids \cite{E70,E71,NWF78,NWF78II,NW78,W82a,W82b,L83,F97}. Submodular functions appear mostly for the following two reasons: (i) submodularity arises naturally in various combinatorial settings, and many algorithmic applications use it either explicitly or implicitly; (ii) submodularity has a natural interpretation as the property of {\em diminishing returns}, which defines an important class of utility/valuation functions. Submodularity as an abstract concept is both general enough to be useful for applications and it carries enough structure to allow strong positive results. A fundamental algorithmic result is that any submodular function can be {\em minimized} in strongly polynomial time \cite{FFI01,Lex00}. In contrast to submodular minimization, submodular maximization problems are typically hard to solve exactly. Consider the classical problem of maximizing a monotone submodular function subject to a cardinality constraint, $\max \{f(S): |S|\leq k\}$. It is known that this problem admits a $(1-1/e)$-approximation \cite{NWF78} and this is optimal for two different reasons: (i) Given only black-box access to $f(S)$, we cannot achieve a better approximation, unless we ask exponentially many value queries \cite{NW78}. This holds even if we allow unlimited computational power. 
(ii) In certain special cases where $f(S)$ has a compact representation on the input, it is NP-hard to achieve an approximation better than $1-1/e$ \cite{Feige98}. The reason why the hardness threshold is the same in both cases is apparently not well understood. An optimal $(1-1/e)$-approximation for the problem $\max \{f(S): |S| \leq k\}$ where $f$ is monotone submodular is achieved by a simple greedy algorithm \cite{NWF78}. This seems to be rather coincidental; for other variants of submodular maximization, such as unconstrained (nonmonotone) submodular maximization \cite{FMV07}, monotone submodular maximization subject to a matroid constraint \cite{NWF78II,CCPV07,Vondrak08}, or submodular maximization subject to linear constraints \cite{KTS09,LMNS09}, greedy algorithms achieve suboptimal results. A tool which has proven useful in approaching these problems is the {\em multilinear relaxation}. \ \noindent{\bf Multilinear relaxation.} Let us consider a discrete optimization problem $\max \{f(S): S \in {\cal F}\}$, where $f:2^X \rightarrow {\mathbb R}$ is the objective function and ${\cal F} \subset 2^X$ is the collection of feasible solutions. In case $f$ is a linear function, $f(S) = \sum_{j \in S} w_j$, it is natural to replace this problem by a linear programming problem. For a general set function $f(S)$, however, a linear relaxation is not readily available. Instead, the following relaxation has been proposed \cite{CCPV07,Vondrak08,CCPV09}: For ${\bf x} \in [0,1]^X$, let $\hat{{\bf x}}$ denote a random vector in $\{0,1\}^X$ where each coordinate $x_i$ is rounded independently to $1$ with probability $x_i$ and $0$ otherwise.\footnote{We denote vectors consistently in boldface $({\bf x})$ and their coordinates in italics $(x_i)$. We also identify vectors in $\{0,1\}^n$ with subsets of $[n]$ in a natural way. 
} We define $$ F({\bf x}) = {\bf E}[f(\hat{{\bf x}})] = \sum_{S \subseteq X} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1-x_j).$$ This is the unique {\em multilinear polynomial} which coincides with $f$ on $\{0,1\}$-vectors. We remark that although we cannot compute the exact value of $F({\bf x})$ for a given ${\bf x} \in [0,1]^X$ (which would require querying $2^n$ possible values of $f(S)$), we can compute $F({\bf x})$ approximately, by random sampling. Sometimes this causes technical issues, which we also deal with in this paper. Instead of the discrete problem $\max \{f(S): S \in {\cal F}\}$, we consider a continuous optimization problem $\max \{F({\bf x}): {\bf x} \in P({\cal F})\}$, where $P({\cal F})$ is the convex hull of characteristic vectors corresponding to ${\cal F}$, $$ P({\cal F}) = \left\{ \sum_{S \in {\cal F}} \alpha_S {\bf 1}_S: \sum_{S \in {\cal F}} \alpha_S = 1, \alpha_S \geq 0 \right\}.$$ The reason why the extension $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$ is useful for submodular maximization problems is that $F({\bf x})$ has convexity properties that allow one to solve the continuous problem $\max \{F({\bf x}): {\bf x} \in P\}$ (within a constant factor) in a number of interesting cases. Moreover, fractional solutions can often be rounded to discrete ones without losing {\em anything} in terms of the objective value. Then, our ability to solve the multilinear relaxation translates directly into an algorithm for the original problem. In particular, this is true when the collection of feasible solutions forms a matroid. {\em Pipage rounding} was originally developed by Ageev and Sviridenko for rounding solutions in the bipartite matching polytope \cite{AS04}. The technique was adapted to matroid polytopes by Calinescu et al. 
\cite{CCPV07}, who proved that for any submodular function $f(S)$ and any ${\bf x}$ in the matroid base polytope $B({\cal M})$, the fractional solution ${\bf x}$ can be rounded to a base $B \in {\cal B}$ such that $f(B) \geq F({\bf x})$. This approach leads to an optimal $(1-1/e)$-approximation for the Submodular Welfare Problem, and more generally for monotone submodular maximization subject to a matroid constraint \cite{CCPV07,Vondrak08}. It is also known that the factor of $1-1/e$ is optimal for the Submodular Welfare Problem both in the NP framework \cite{KLMM05} and in the value oracle model \cite{MSV08}. Under the assumption that the submodular function $f(S)$ has {\em curvature} $c$, there is a $\frac{1}{c}(1-e^{-c})$-approximation and this is also optimal in the value oracle model \cite{VKyoto08}. The framework of pipage rounding can also be extended to nonmonotone submodular functions; this presents some additional issues which we discuss in this paper. For the problem of unconstrained (nonmonotone) submodular maximization, a $2/5$-approximation was developed in \cite{FMV07}. This algorithm implicitly uses the multilinear relaxation $\max \{F({\bf x}): {\bf x} \in [0,1]^X\}$. For {\em symmetric} submodular functions, it is shown in \cite{FMV07} that a uniformly random solution ${\bf x} = (1/2,\ldots,1/2)$ gives $F({\bf x}) \geq \frac12 OPT$, and there is no better approximation algorithm in the value oracle model. Recently, a $1/2$-approximation was found for unconstrained maximization of a general nonnegative submodular function \cite{BFNS12}. This algorithm can be formulated in the multilinear relaxation framework, but also as a randomized combinatorial algorithm. Using additional techniques, the multilinear relaxation can also be applied to submodular maximization with knapsack constraints ($\sum_{j \in S} c_{ij} \leq 1$). 
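The sampling evaluation of $F({\bf x})$ mentioned above is straightforward to implement. The following is a minimal illustrative sketch (not from the paper; the function names are ours), assuming only value-oracle access to $f$; by standard concentration bounds, the estimate is accurate to within an additive error that shrinks with the number of samples.

```python
import random

def sample_F(f, x, samples=20000, rng=None):
    """Estimate the multilinear extension F(x) = E[f(x_hat)], where x_hat
    rounds each coordinate i to 1 independently with probability x[i].

    f takes a frozenset of indices; x is a list of probabilities in [0,1].
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(samples):
        s = frozenset(i for i, xi in enumerate(x) if rng.random() < xi)
        total += f(s)
    return total / samples

# Example: the cut function of a single edge (Max Cut on K_2);
# here F(x) = x_0(1 - x_1) + x_1(1 - x_0), so F(1/2, 1/2) = 1/2.
cut = lambda s: 1.0 if len(s) == 1 else 0.0
```

For a set function bounded in $[0,M]$, estimating $F({\bf x})$ within additive error $\epsilon$ with high probability requires $O(M^2/\epsilon^2)$ samples, which is the technical issue alluded to above.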
For the problem of maximizing a monotone submodular function subject to a constant number of knapsack constraints, there is a $(1-1/e-\epsilon)$-approximation algorithm for any $\epsilon > 0$ \cite{KTS09}. For maximizing a nonmonotone submodular function subject to a constant number of knapsack constraints, a $(1/5-\epsilon)$-approximation was designed in \cite{LMNS09}. One should mention that not all the best known results for submodular maximization have been achieved using the multilinear relaxation. The greedy algorithm yields a $1/(k+1)$-approximation for monotone submodular maximization subject to $k$ matroid constraints \cite{NWF78II}. Local search methods have been used to improve this to a $1/(k+\epsilon)$-approximation, and to obtain a $1/(k+1+1/(k-1)+\epsilon)$-approximation for the same problem with a nonmonotone submodular function, for any $\epsilon>0$ \cite{LMNS09,LSV09}. For the problem of maximizing a nonmonotone submodular function over the {\em bases} of a given matroid, local search yields a $(1/6-\epsilon)$-approximation, assuming that the matroid contains two disjoint bases \cite{LMNS09}. \subsection{Our results} Our main contribution (Theorem~\ref{thm:general-hardness}) is a general hardness construction that yields inapproximability results in the value oracle model in an automated way, based on what we call the {\em symmetry gap} for some fixed instance. In this generic fashion, we are able to replicate a number of previously known hardness results (such as the optimality of the factors $1-1/e$ and $1/2$ mentioned above), and we also produce new hardness results using this construction (Theorem~\ref{thm:matroid-bases}). Our construction helps explain the particular hardness thresholds obtained under various constraints, by exhibiting a small instance where the threshold can be seen as the gap between the optimal solution and the best symmetric solution. 
The query complexity results in \cite{FMV07,MSV08,VKyoto08} can be seen in hindsight as special cases of Theorem~\ref{thm:general-hardness}, but the construction in this paper is somewhat different and technically more involved than the previous proofs for particular cases. \paragraph{Concrete results} Before we proceed to describe our general hardness result, we present its implications for two more concrete problems. We also provide closely matching approximation algorithms for these two problems, based on the multilinear relaxation. In the following, we assume that the objective function is given by a value oracle and the feasibility constraint is given by a membership oracle: a value oracle for $f$ returns the value $f(S)$ for a given set $S$, and a membership oracle for ${\cal F}$ answers whether $S \in {\cal F}$ for a given set $S$. First, we consider the problem of maximizing a nonnegative (possibly nonmonotone) submodular function subject to a \emph{matroid base} constraint. (This generalizes for example the maximum bisection problem in graphs.) We show that the approximability of this problem is related to base packings in the matroid. We use the following definition. \begin{definition} \label{def:fractional-base} For a matroid ${\cal M}$ with a collection of bases ${\cal B}$, the fractional base packing number is the maximum possible value of $\sum_{B \in {\cal B}} \alpha_B$ for $\alpha_B \geq 0$ such that $\sum_{B \in {\cal B}: j \in B} \alpha_B \leq 1$ for every element $j$. \end{definition} \noindent{\bf Example.} Consider a uniform matroid with bases ${\cal B} = \{ B \subset [n]: |B| = k \}$. ($[n]$ denotes the set of integers $\{1,2,\cdots,n\}$.) Here, we can take each base with a coefficient $\alpha_B = 1 / {n-1 \choose k-1}$, which satisfies the condition $\sum_{B \in {\cal B}: j \in B} \alpha_B \leq 1$ for every element $j$ since every element is contained in ${n-1 \choose k-1}$ bases. 
We obtain that the fractional packing number is at least ${n \choose k} / {n-1 \choose k-1} = \frac{n}{k}$. It is also easy to check that the fractional packing number cannot be larger than $\frac{n}{k}$. \begin{theorem} \label{thm:matroid-bases} For any $\nu$ of the form $\nu = \frac{k}{k-1}$, $k \geq 2$, and any fixed $\epsilon>0$, a $(1-\frac{1}{\nu}+\epsilon) = (\frac{1}{k}+\epsilon)$-approximation for the problem $\max \{f(S): S \in {\cal B}\}$, where $f(S)$ is a nonnegative submodular function, and ${\cal B}$ is a collection of bases in a matroid with fractional packing number at least $\nu$, would require exponentially many value queries. On the other hand, for any $\nu \in (1,2]$, there is a randomized $\frac{1}{2}(1-\frac{1}{\nu}-o(1))$-approximation for the same problem. \end{theorem} In case the matroid contains two disjoint bases ($\nu=2$), we obtain a $(\frac14-o(1))$-approximation, improving the previously known factor of $\frac16-o(1)$ \cite{LMNS09}. In the range of $\nu \in (1,2]$, our positive and negative results are within a factor of $2$. For maximizing a submodular function over the bases of a general matroid, we obtain the following. \begin{corollary} \label{coro:matroid-bases} For the problem $\max \{f(S): S \in {\cal B}\}$, where $f(S)$ is a nonnegative submodular function, and ${\cal B}$ is a collection of bases in a matroid, any constant-factor approximation requires an exponential number of value queries. \end{corollary} We also consider the problem of maximizing a nonnegative submodular function subject to a matroid independence constraint. \begin{theorem} \label{thm:matroid-indep} For any $\epsilon>0$, a $(\frac12 + \epsilon)$-approximation for the problem $\max \{f(S): S \in {\cal I} \}$, where $f(S)$ is a nonnegative submodular function, and ${\cal I}$ is a collection of independent sets in a matroid, would require exponentially many value queries. 
On the other hand, there is a randomized $\frac14 (-1+\sqrt{5}-o(1)) \simeq 0.309$-approximation for the same problem. \end{theorem} Our algorithmic result improves a previously known $(\frac14 - o(1))$-approximation \cite{LMNS09}. The hardness threshold follows from our general result, but also quite easily from \cite{FMV07}. \medskip \noindent{\bf Hardness from the symmetry gap.} Now we describe our general hardness result. Consider an instance $\max \{f(S): S \in {\cal F}\}$ which exhibits a certain degree of symmetry. This is formalized by the notion of a {\em symmetry group} ${\cal G}$. We consider permutations $\sigma \in {\bf S}(X)$ where ${\bf S}(X)$ is the symmetric group (of all permutations) on the ground set $X$. We also use $\sigma$ for the naturally induced mapping of subsets of $X$: $\sigma(S) = \{ \sigma(i): i \in S \}$. We say that the instance is invariant under ${\cal G} \subset {\bf S}(X) $, if for any $\sigma \in {\cal G}$ and any $S \subseteq X$, $f(S) = f(\sigma(S))$ and $S \in {\cal F} \Leftrightarrow \sigma(S) \in {\cal F}$. We emphasize that even though we apply $\sigma$ to sets, it must be derived from a permutation on $X$. For ${\bf x} \in [0,1]^X$, we define the ``symmetrization of ${\bf x}$'' as $$\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})],$$ where $\sigma \in {\cal G}$ is uniformly random and $\sigma({\bf x})$ denotes ${\bf x}$ with coordinates permuted by $\sigma$. \medskip {\bf Erratum:} The main hardness result in the conference version of this paper \cite{Vondrak09} was formulated for an arbitrary feasibility constraint ${\cal F}$, invariant under ${\cal G}$. Unfortunately, this was an error and the theorem does not hold in that form --- an algorithm could gather some information from querying ${\cal F}$, and combining this with information obtained by querying the objective function $f$ it could possibly determine the hidden optimal solution. 
The possibility of gathering information from the membership oracle for ${\cal F}$ was neglected in the proof. (The reason for this was probably that the feasibility constraints used in concrete applications of the theorem were very simple and indeed did not provide any information about the optimum.) Nevertheless, to correct this issue, one needs to impose a stronger symmetry constraint on ${\cal F}$, namely the condition that $S \in {\cal F}$ depends only on the symmetrized version of $S$, $\overline{{\bf 1}_S} = {\bf E}_{\sigma \in {\cal G}}[{\bf 1}_{\sigma(S)}]$. This is the case in all the applications of the hardness theorem in \cite{Vondrak09} and \cite{OV11} and hence these applications are not affected. \begin{definition} \label{def:total-sym} We call an instance $\max \{f(S): S \in {\cal F}\}$ on a ground set $X$ strongly symmetric with respect to a group of permutations ${\cal G}$ on $X$, if $f(S) = f(\sigma(S))$ for all $S \subseteq X$ and $\sigma \in {\cal G}$, and $S \in {\cal F} \Leftrightarrow S' \in {\cal F}$ whenever ${\bf E}_{\sigma \in {\cal G}}[{\bf 1}_{\sigma(S)}] = {\bf E}_{\sigma \in {\cal G}}[{\bf 1}_{\sigma(S')}]$. \end{definition} \noindent{\bf Example.} A cardinality constraint, ${\cal F} = \{S \subseteq [n]: |S| \leq k \}$, is strongly symmetric with respect to all permutations, because the condition $S \in {\cal F}$ depends only on the symmetrized vector $\overline{{\bf 1}_S} = \frac{|S|}{n} {\bf 1}$. Similarly, a partition matroid constraint, ${\cal F} = \{S \subseteq [n]: |S \cap X_i| \leq k \}$ for disjoint sets $X_i$, is also strongly symmetric. On the other hand, consider a family of feasible solutions ${\cal F} = \{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,1\} \}$. This family is invariant under a group generated by the cyclic rotation $1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1$. 
It is not strongly symmetric, because the condition $S \in {\cal F}$ does not depend only on the symmetrized vector $\overline{{\bf 1}_S} = (\frac14 |S|,\frac14 |S|,\frac14 |S|,\frac14 |S|)$; some pairs are feasible and others are not. \ Next, we define the symmetry gap as the ratio between the optimal solution of $\max \{F({\bf x}): {\bf x} \in P({\cal F})\}$ and the best {\em symmetric} solution of this problem. \begin{definition}[Symmetry gap] Let $\max \{ f(S): S \in {\cal F} \}$ be an instance on a ground set $X$, which is strongly symmetric with respect to ${\cal G} \subset {\bf S}(X)$. Let $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$ be the multilinear extension of $f(S)$ and $P({\cal F}) = \mbox{conv}(\{{\bf 1}_I: I \in {\cal F} \})$ the polytope associated with $\cal F$. Let $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$. The symmetry gap of $\max \{ f(S): S \in {\cal F} \}$ is defined as $\gamma = \overline{OPT} / OPT$ where $$OPT = \max \{F({\bf x}): {\bf x} \in P({\cal F})\},$$ $$\overline{OPT} = \max \{F(\bar{{\bf x}}): {\bf x} \in P({\cal F}) \}.$$ \end{definition} \noindent We give examples of computing the symmetry gap in Section~\ref{section:hardness-applications}. Next, we need to define the notion of a {\em refinement} of an instance. This is a natural way to extend a family of feasible sets to a larger ground set. In particular, this operation preserves the types of constraints that we care about, such as cardinality constraints, matroid independence, and matroid base constraints. \begin{definition}[Refinement] Let ${\cal F} \subseteq 2^X$, $|X|=k$ and $|N| = n$. We say that $\tilde{{\cal F}}\subseteq 2^{N \times X}$ is a refinement of $\cal F$, if $$ \tilde{{\cal F}} = \left\{ \tilde{S} \subseteq N \times X \ \big| \ (x_1,\ldots,x_k) \in P({{\cal F}}) \mbox{ where } x_j = \frac{1}{n} |\tilde{S} \cap (N \times \{j\})| \right\}. 
$$ \end{definition} In other words, in the refined instance, each element $j \in X$ is replaced by a set $N \times \{j\}$. We call this set the {\em cluster} of elements corresponding to $j$. A set $\tilde{S}$ is in $\tilde{{\cal F}}$ if and only if the fractions $x_j$ of the respective clusters that are intersected by $\tilde{S}$ form a vector ${\bf x} \in P({\cal F})$, i.e., a convex combination of sets in ${\cal F}$. Our main result is that the symmetry gap for any strongly symmetric instance translates automatically into hardness of approximation for refined instances. (See Definition~\ref{def:total-sym} for the notion of being ``strongly symmetric''.) We emphasize that this is a query-complexity lower bound, and hence independent of assumptions such as $P \neq NP$. \begin{theorem} \label{thm:general-hardness} Let $\max \{ f(S): S \in {\cal F} \}$ be an instance of nonnegative (optionally monotone) submodular maximization, strongly symmetric with respect to ${\cal G}$, with symmetry gap $\gamma = \overline{OPT} / OPT$. Let $\cal C$ be the class of instances $\max \{\tilde{f}(S): S \in \tilde{{\cal F}}\}$ where $\tilde{f}$ is nonnegative (optionally monotone) submodular and $\tilde{{\cal F}}$ is a refinement of ${\cal F}$. Then for every $\epsilon > 0$, any (even randomized) $(1+\epsilon) \gamma$-approximation algorithm for the class $\cal C$ would require exponentially many value queries to $\tilde{f}(S)$. \end{theorem} We remark that the result holds even if the class $\cal C$ is restricted to instances which are themselves symmetric under a group related to ${\cal G}$ (see the discussion in Section~\ref{section:hardness-proof}, after the proofs of Theorems~\ref{thm:general-hardness} and \ref{thm:multilinear-hardness}). On the algorithmic side, submodular maximization seems easier for symmetric instances and in this case we obtain optimal approximation factors, up to lower-order terms (see Section~\ref{section:symmetric}). 
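To make the notion of refinement concrete: a set $\tilde{S}$ can be tested for membership in $\tilde{{\cal F}}$ directly from its cluster fractions. The following small sketch is ours (not from the paper); it assumes $P({\cal F})$ is available as an explicit membership test, instantiated here for the cardinality constraint ${\cal F} = \{S : |S| \leq 1\}$, whose polytope is $\{{\bf x} \geq 0 : \sum_j x_j \leq 1\}$, so that the refined constraint is again a cardinality constraint $|\tilde{S}| \leq n$.

```python
from collections import Counter

def refined_member(S_tilde, n, k, in_P):
    """Definition (Refinement): S_tilde, a subset of N x X, is feasible
    iff the vector of cluster fractions x_j = |S_tilde ∩ (N x {j})| / n
    lies in P(F). Elements are pairs (i, j), i in range(n), j in range(k)."""
    counts = Counter(j for (_, j) in S_tilde)
    x = [counts.get(j, 0) / n for j in range(k)]
    return in_P(x)

# P(F) for the cardinality constraint F = {S : |S| <= 1} on X = [k]:
in_P = lambda x: sum(x) <= 1 + 1e-9 and all(xi >= 0 for xi in x)
```

With this `in_P`, a refined set with $n$ copies per cluster is feasible exactly when it has at most $n$ elements in total, regardless of how they are distributed among clusters.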
Our hardness construction yields impossibility results also for solving the continuous problem $\max \{F({\bf x}): {\bf x} \in P({\cal F}) \}$. In the case of matroid constraints, this is easy to see, because an approximation to the continuous problem gives the same approximation factor for the discrete problem (by pipage rounding, see Appendix~\ref{app:pipage}). However, this phenomenon is more general and we can show that the value of a symmetry gap translates into an inapproximability result for the multilinear optimization problem under any constraint satisfying a symmetry condition. \begin{theorem} \label{thm:multilinear-hardness} Let $\max \{ f(S): S \in {\cal F} \}$ be an instance of nonnegative (optionally monotone) submodular maximization, strongly symmetric with respect to ${\cal G}$, with symmetry gap $\gamma = \overline{OPT} / OPT$. Let $\cal C$ be the class of instances $\max \{\tilde{f}(S): S \in \tilde{{\cal F}}\}$ where $\tilde{f}$ is nonnegative (optionally monotone) submodular and $\tilde{{\cal F}}$ is a refinement of ${\cal F}$. Then for every $\epsilon > 0$, any (even randomized) $(1+\epsilon) \gamma$-approximation algorithm for the multilinear relaxation $\max \{\tilde{F}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ of problems in $\cal C$ would require exponentially many value queries to $\tilde{f}(S)$. \end{theorem} \ \noindent{\bf Additions to the conference version and follow-up work.} An extended abstract of this work appeared in IEEE FOCS 2009 \cite{Vondrak09}. As mentioned above, the main theorem in \cite{Vondrak09} suffers from a technical flaw. This does not affect the applications, but the general theorem in \cite{Vondrak09} is not correct. We provide a corrected version of the main theorem with a complete proof (Theorem~\ref{thm:general-hardness}) and we extend this hardness result to the problem of solving the multilinear relaxation (Theorem~\ref{thm:multilinear-hardness}). 
Subsequently, further work has been done which exploits the symmetry gap concept. In \cite{OV11}, it has been proved using Theorem~\ref{thm:general-hardness} that maximizing a nonnegative submodular function subject to a matroid independence constraint with a factor better than $0.478$ would require exponentially many queries. Even in the case of a cardinality constraint, $\max \{f(S): |S| \leq k\}$ cannot be approximated within a factor better than $0.491$ using subexponentially many queries \cite{OV11}. In the case of a matroid base constraint, assuming that the fractional base packing number is $\nu = \frac{k}{k-1}$ for some $k \geq 2$, there is no $(1-e^{-1/k}+\epsilon)$-approximation in the value oracle model \cite{OV11}, improving the hardness of $(1-\frac{1}{\nu}+\epsilon) = (\frac{1}{k}+\epsilon)$-approximation from this paper. These applications are not affected by the flaw in \cite{Vondrak09}, and they are implied by the corrected version of Theorem~\ref{thm:general-hardness} here. Recently \cite{DV11}, it has been proved using the symmetry gap technique that combinatorial auctions with submodular bidders do not admit any truthful-in-expectation $1/m^\gamma$-approximation, where $m$ is the number of items and $\gamma>0$ is some absolute constant. This is the first nontrivial hardness result for truthful-in-expectation mechanisms for combinatorial auctions; it separates the classes of monotone submodular functions and coverage functions, where a truthful-in-expectation $(1-1/e)$-approximation is possible \cite{DRY11}. The proof is self-contained and does not formally refer to \cite{Vondrak09}. Moreover, this hardness result for truthful-in-expectation mechanisms, as well as the main hardness result in this paper, has been converted from the oracle setting to a computational complexity setting \cite{DV12a,DV12b}. 
This recent work shows that the hardness of approximation arising from symmetry gap is not limited to instances given by an oracle, but holds also for instances encoded explicitly on the input, under a suitable complexity-theoretic assumption. \ \noindent{\bf Organization.} The rest of the paper is organized as follows. In Section~\ref{section:hardness-applications}, we present applications of our main hardness result (Theorem~\ref{thm:general-hardness}) to concrete cases, in particular we show how it implies the hardness statements in Theorem~\ref{thm:matroid-indep} and \ref{thm:matroid-bases}. In Section~\ref{section:hardness-proof}, we present the proofs of Theorem~\ref{thm:general-hardness} and Theorem~\ref{thm:multilinear-hardness}. In Section~\ref{section:algorithms}, we prove the algorithmic results in Theorem~\ref{thm:matroid-indep} and \ref{thm:matroid-bases}. In Section~\ref{section:symmetric}, we discuss the special case of symmetric instances. In the Appendix, we present a few basic facts concerning submodular functions, an extension of pipage rounding to matroid independence polytopes (rather than matroid base polytopes), and other technicalities that would hinder the main exposition. \section{From symmetry to inapproximability: applications} \label{section:hardness-applications} Before we get into the proof of Theorem~\ref{thm:general-hardness}, let us show how it can be applied to a number of specific problems. Some of these are hardness results that were proved previously by an ad-hoc method. The last application is a new one (Theorem~\ref{thm:matroid-bases}). \ \noindent{\bf Nonmonotone submodular maximization.} Let $X = \{1,2\}$ and for any $S \subseteq X$, $f(S) = 1$ if $|S|=1$, and $0$ otherwise. Consider the instance $\max \{ f(S): S \subseteq X \}$. In other words, this is the Max Cut problem on the graph $K_2$. This instance exhibits a simple symmetry, the group of all (two) permutations on $\{1,2\}$. 
We get $OPT = F(1,0) = F(0,1) = 1$, while $\overline{OPT} = F(1/2,1/2) = 1/2$. Hence, the symmetry gap is $1/2$. \begin{figure}[h] \begin{tikzpicture}[scale=.50] \draw (-10,0) node {}; \filldraw [fill=gray,line width=1mm] (0,0) rectangle (4,2); \filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1) .. controls +(0,-1) and +(0,-1) .. (0.5,1); \fill (1,1) circle (5pt); \fill (3,1) circle (5pt); \draw (1,1) -- (3,1); \draw (-1,1) node {$X$}; \draw (7,1) node {$\bar{x}_1 = \bar{x}_2 = \frac{1}{2}$}; \end{tikzpicture} \caption{Symmetric instance for nonmonotone submodular maximization: Max Cut on the graph $K_2$. The white set denotes the optimal solution, while $\bar{{\bf x}}$ is the (unique) symmetric solution.} \end{figure} Since $f(S)$ is nonnegative submodular and there is no constraint on $S \subseteq X$, this will be the case for any refinement of the instance as well. Theorem~\ref{thm:general-hardness} immediately implies the following: any algorithm achieving a $(\frac12 + \epsilon)$-approximation for nonnegative (nonmonotone) submodular maximization requires exponentially many value queries (which was previously known \cite{FMV07}). Note that a ``trivial instance'' implies a nontrivial hardness result. This is typically the case in applications of Theorem~\ref{thm:general-hardness}. The same symmetry gap holds if we impose some simple constraints: the problems $\max \{ f(S): |S| \leq 1 \}$ and $\max \{ f(S): |S| = 1 \}$ have the same symmetry gap as above. Hence, the hardness threshold of $1/2$ also holds for nonmonotone submodular maximization under cardinality constraints of the type $|S| \leq n/2$ or $|S| = n/2$. This proves the hardness part of Theorem~\ref{thm:matroid-indep}. This can be derived quite easily from the construction of \cite{FMV07} as well. \ \noindent{\bf Monotone submodular maximization.} Let $X = [k]$ and $f(S) = \min \{|S|, 1\}$. Consider the instance $\max \{f(S): |S| \leq 1 \}$. 
This instance is invariant under all permutations on $[k]$, the symmetric group ${\bf S}_k$. Note that the instance is {\em strongly symmetric} (Def.~\ref{def:total-sym}) with respect to ${\bf S}_k$, since the feasibility constraint $|S| \leq 1$ depends only on the symmetrized vector $\overline{{\bf 1}_S} = (\frac{1}{k}|S|, \ldots, \frac{1}{k}|S|)$. We get $OPT = F(1,0,\ldots,0) = 1$, while $\overline{OPT} = F(1/k,1/k,\ldots,1/k) = 1 - (1-1/k)^k$. \iffalse \begin{figure}[here] \begin{tikzpicture}[scale=.50] \draw (-8,0) node {}; \filldraw [fill=gray,line width=1mm] (0,0) rectangle (9,2); \filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1) .. controls +(0,-1) and +(0,-1) .. (0.5,1); \fill (1,1) circle (5pt); \fill (2,1) circle (5pt); \fill (3,1) circle (5pt); \fill (4,1) circle (5pt); \fill (5,1) circle (5pt); \fill (6,1) circle (5pt); \fill (7,1) circle (5pt); \fill (8,1) circle (5pt); \draw [dotted] (1,1) -- (8,1); \draw (-1,1) node {$X$}; \draw (11,1) node {$\bar{x}_{i} = \frac{1}{k}$}; \end{tikzpicture} \caption{Symmetric instance for monotone submodular maximization.} \end{figure} \fi Here, $f(S)$ is monotone submodular and any refinement of $\cal F$ is a set system of the type $\tilde{{\cal F}} = \{S: |S| \leq \ell \}$. Based on our theorem, this implies that any approximation better than $1 - (1-1/k)^k$ for monotone submodular maximization subject to a cardinality constraint would require exponentially many value queries. Since this holds for any fixed $k$, we get the same hardness result for any $\beta > \lim_{k \rightarrow \infty} (1 - (1-1/k)^k) = 1-1/e$ (which was previously known \cite{NW78}). \iffalse \noindent{\bf Submodular welfare maximization.} Let $X = [k] \times [k]$, $ {\cal F} = \{ S: S$ contains at most $1$ pair $(i,j)$ for each $j \}$, and $f(S) = |\{ i: \exists (i,j) \in S \}|$. Consider the instance $\max \{f(S): S \in {\cal F}\}$. This instance can be interpreted as an allocation problem of $k$ items to $k$ players. 
A set $S$ represents an assignment in the sense that $(i,j) \in S$ if item $j$ is allocated to player $i$. A player is satisfied is she receives at least 1 item; the objective function is the number of satisfied players. This instance exhibits the symmetry of all permutations of the players (and also all permutations of the items, although we do not use it here). Note that the feasibility constraint $S \in {\cal F}$ depends only on the symmetrized vector $\overline{{\bf 1}_S}$ which averages out the allocation of each item across all players. Therefore the instance is strongly symmetric with respect to permutations of players. An optimum solution allocates each item to a different player, and $OPT = k$. The symmetrized optimum allocates a $1/k$-fraction of each item to each player, which gives $\overline{OPT} = F(\frac{1}{k},\frac{1}{k},\ldots,\frac{1}{k}) = k(1-(1-\frac{1}{k})^k)$. \begin{figure}[here] \begin{tikzpicture}[scale=.50] \draw (-8,0) node {}; \filldraw [fill=gray,line width=1mm] (0,0) rectangle (8,2); \filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1) .. controls +(0,-1) and +(0,-1) .. (0.5,1); \filldraw [fill=gray,line width=1mm] (0,2) rectangle (8,4); \filldraw [fill=white] (2.5,3) .. controls +(0,1) and +(0,1) .. (3.5,3) .. controls +(0,-1) and +(0,-1) .. (2.5,3); \filldraw [fill=gray,line width=1mm] (0,4) rectangle (8,6); \filldraw [fill=white] (4.5,5) .. controls +(0,1) and +(0,1) .. (5.5,5) .. controls +(0,-1) and +(0,-1) .. (4.5,5); \filldraw [fill=gray,line width=1mm] (0,6) rectangle (8,8); \filldraw [fill=white] (6.5,7) .. controls +(0,1) and +(0,1) .. (7.5,7) .. controls +(0,-1) and +(0,-1) .. 
(6.5,7); \fill (1,1) circle (5pt); \fill (3,1) circle (5pt); \fill (5,1) circle (5pt); \fill (7,1) circle (5pt); \fill (1,3) circle (5pt); \fill (3,3) circle (5pt); \fill (5,3) circle (5pt); \fill (7,3) circle (5pt); \fill (1,5) circle (5pt); \fill (3,5) circle (5pt); \fill (5,5) circle (5pt); \fill (7,5) circle (5pt); \fill (1,7) circle (5pt); \fill (3,7) circle (5pt); \fill (5,7) circle (5pt); \fill (7,7) circle (5pt); \draw [dotted] (1,1) -- (7,1); \draw [dotted] (1,3) -- (7,3); \draw [dotted] (1,5) -- (7,5); \draw [dotted] (1,7) -- (7,7); \draw (-1.5,4) node {players}; \draw (4,9) node {items}; \draw (10,4) node {$\bar{x}_{ij} = \frac{1}{k}$}; \end{tikzpicture} \caption{Symmetric instance for submodular welfare maximization.} \end{figure} A refinement of this instance can be interpreted as an allocation problem where we have $n$ copies of each item, we still have $k$ players, and the utility functions are monotone submodular. Our theorem implies that for submodular welfare maximization with $k$ players, a better approximation factor than $1 - (1-\frac{1}{k})^k$ is impossible.\footnote{We note that our theorem here assumes an oracle model where only the total value can be queried for a given allocation. This is actually enough for the $(1-1/e)$-approximation of \cite{Vondrak08} to work. However, the hardness result holds even if each player's valuation function can be queried separately; this result was proved in \cite{MSV08}.} \fi \ \noindent{\bf Submodular maximization over matroid bases.} Let $X = A \cup B$, $A = \{a_1, \ldots, a_k\}$, $B = \{b_1, \ldots, b_k\}$ and ${\cal F} = \{ S: |S \cap A| = 1 \ \& \ |S \cap B| = k-1 \}$. We define $f(S) = \sum_{i=1}^{k} f_i(S)$ where $f_i(S) = 1$ if $a_i \in S \ \& \ b_i \notin S$, and $0$ otherwise. This instance can be viewed as a Maximum Directed Cut problem on a graph of $k$ disjoint arcs, under the constraint that exactly one arc tail and $k-1$ arc heads should be on the left-hand side ($S$). 
An optimal solution is for example $S = \{a_1, b_2, b_3, \ldots, b_k \}$, which gives $OPT = 1$. The symmetry here is that we can apply the same permutation to $A$ and $B$ simultaneously. Again, the feasibility of a set $S$ depends only on the symmetrized vector $\overline{{\bf 1}_S}$: in fact $S \in {\cal F}$ if and only if $\overline{{\bf 1}_S} = (\frac1k,\ldots,\frac1k,1-\frac1k,\ldots,1-\frac1k)$. There is a unique symmetric solution $\bar{{\bf x}} = (\frac1k,\ldots,\frac1k,1-\frac1k,\ldots,1-\frac1k)$, and $\overline{OPT} = F(\bar{{\bf x}}) = {\bf E}[f(\hat{\bar{{\bf x}}})] = \sum_{i=1}^{k} {\bf E}[f_i(\hat{\bar{{\bf x}}})] = \frac1k$ (since each arc appears in the directed cut induced by $\hat{\bar{{\bf x}}}$ with probability $\frac{1}{k^2}$). \begin{figure}[here] \begin{tikzpicture}[scale=.60] \draw (-6,0) node {}; \filldraw [fill=gray,line width=1mm] (0,0) rectangle (9,2); \filldraw [fill=gray,line width=1mm] (0,2) rectangle (9,4); \filldraw [fill=white] (0.5,3) .. controls +(0,1) and +(0,1) .. (1.5,3) .. controls +(0,-1) and +(0,-1) .. (0.5,3); \filldraw [fill=white] (1.5,1) .. controls +(0,1) and +(0,1) .. (8.5,1) .. controls +(0,-1) and +(0,-1) .. 
(1.5,1); \fill (1,1) circle (5pt); \fill (2,1) circle (5pt); \fill (3,1) circle (5pt); \fill (4,1) circle (5pt); \fill (5,1) circle (5pt); \fill (6,1) circle (5pt); \fill (7,1) circle (5pt); \fill (8,1) circle (5pt); \fill (1,3) circle (5pt); \fill (2,3) circle (5pt); \fill (3,3) circle (5pt); \fill (4,3) circle (5pt); \fill (5,3) circle (5pt); \fill (6,3) circle (5pt); \fill (7,3) circle (5pt); \fill (8,3) circle (5pt); \draw[-latex] [line width=0.5mm] (1,3) -- (1,1); \draw[-latex] [line width=0.5mm] (2,3) -- (2,1); \draw[-latex] [line width=0.5mm] (3,3) -- (3,1); \draw[-latex] [line width=0.5mm] (4,3) -- (4,1); \draw[-latex] [line width=0.5mm] (5,3) -- (5,1); \draw[-latex] [line width=0.5mm] (6,3) -- (6,1); \draw[-latex] [line width=0.5mm] (7,3) -- (7,1); \draw[-latex] [line width=0.5mm] (8,3) -- (8,1); \draw (-1,1) node {$B$}; \draw (-1,3) node {$A$}; \draw (11,3) node {$\bar{x}_{a_i} = \frac{1}{k}$}; \draw (11,1) node {$\bar{x}_{b_i} = 1 - \frac{1}{k}$}; \end{tikzpicture} \caption{Symmetric instance for submodular maximization over matroid bases.} \end{figure} The refined instances are instances of (nonmonotone) submodular maximization over the bases of a matroid, where the ground set is partitioned into $A \cup B$ and we should take a $\frac1k$-fraction of $A$ and a $(1-\frac1k)$-fraction of $B$. (This means that the fractional packing number of bases is $\nu = \frac{k}{k-1}$.) Our theorem implies that for this class of instances, an approximation better than $1/k$ is impossible - this proves the hardness part of Theorem~\ref{thm:matroid-bases}. \ Observe that in all the cases mentioned above, the multilinear relaxation is equivalent to the original problem, in the sense that any fractional solution can be rounded without any loss in the objective value. This implies that the same hardness factors apply to solving the multilinear relaxation of the respective problems. 
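As a sanity check on the two gaps above, the following Python snippet (illustrative only; the function and variable names are ours, not the paper's) evaluates the multilinear extension by exact enumeration and recovers $\overline{OPT} = k(1-(1-\frac1k)^k)$ for the welfare instance and $\overline{OPT} = \frac1k$ for the directed-cut instance.

```python
from itertools import product

def multilinear(f, x):
    """Exact multilinear extension F(x) = E[f(x_hat)], by enumerating all
    subsets of the ground set; x maps each element to its probability."""
    elems = list(x)
    total = 0.0
    for bits in product([0, 1], repeat=len(elems)):
        S = {e for e, b in zip(elems, bits) if b}
        p = 1.0
        for e, b in zip(elems, bits):
            p *= x[e] if b else 1 - x[e]
        total += p * f(S)
    return total

def welfare_f(k):
    # (i, j) in S means item j goes to player i; value = # satisfied players
    return lambda S: sum(1 for i in range(k)
                         if any((i, j) in S for j in range(k)))

def dicut_value(k):
    # directed cut on k disjoint arcs a_i -> b_i at the symmetric point
    f = lambda S: sum(1 for i in range(k)
                      if ('a', i) in S and ('b', i) not in S)
    x = {('a', i): 1 / k for i in range(k)}
    x.update({('b', i): 1 - 1 / k for i in range(k)})
    return multilinear(f, x)

k = 3
x_sym = {(i, j): 1 / k for i in range(k) for j in range(k)}
print(multilinear(welfare_f(k), x_sym), k * (1 - (1 - 1 / k) ** k))  # both 19/9
print(dicut_value(k))  # 1/k = 1/3
```

The enumeration is exponential in the ground set size, so this is only feasible for such toy instances.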
In particular, using the last result (for matroid bases), we obtain that the multilinear optimization problem $\max \{ F({\bf x}): {\bf x} \in P \}$ does not admit a constant-factor approximation for nonnegative submodular functions and matroid base polytopes. (We remark that a $(1-1/e)$-approximation can be achieved for any {\em monotone} submodular function and any solvable polytope, i.e.~polytope over which we can optimize linear functions \cite{Vondrak08}.) As Theorem~\ref{thm:multilinear-hardness} shows, this holds more generally - any symmetry gap construction gives an inapproximability result for solving the multilinear optimization problem $\max \{F({\bf x}): {\bf x} \in P\}$. This in fact implies limits on what hardness results we can possibly hope for using this technique. For instance, we cannot prove using the symmetry gap that the monotone submodular maximization problem subject to the intersection of $k$ matroid constraints does not admit a constant-factor approximation - because we would also prove that the respective multilinear relaxation does not admit such an approximation. But we know from \cite{Vondrak08} that a $(1-1/e)$-approximation is possible for the multilinear problem in this case. Hence, the hardness arising from the symmetry gap is related to the difficulty of solving the multilinear optimization problem rather than the difficulty of rounding a fractional solution. Thus this technique is primarily suited to optimization problems where the multilinear optimization problem closely captures the original discrete problem. \section{From symmetry to inapproximability: proof} \label{section:hardness-proof} \paragraph{The roadmap} At a high level, our proof resembles the constructions of \cite{FMV07,MSV08}. We construct instances based on continuous functions $F({\bf x})$, $G({\bf x})$, whose optima differ by a gap for which we want to prove hardness. Then we show that after a certain perturbation, the two instances are very hard to distinguish.
This paper generalizes the ideas of \cite{FMV07,MSV08} and brings two new ingredients. First, we show that the functions $F({\bf x}), G({\bf x})$, which are ``pulled out of the hat'' in \cite{FMV07,MSV08}, can be produced in a natural way from the multilinear relaxation of the respective problem, using the notion of a {\em symmetry gap}. Secondly, the functions $F({\bf x}), G({\bf x})$ are perturbed in a way that makes them indistinguishable, and this forms the main technical part of the proof. In \cite{FMV07}, this step is quite simple. In \cite{MSV08}, the perturbation is more complicated, but still relies on properties of the functions $F({\bf x}), G({\bf x})$ specific to that application. The construction that we present here (Lemma~\ref{lemma:final-fix}) uses the symmetry properties of a fixed instance in a generic fashion. \ First, let us present an outline of our construction. Given an instance $\max \{f(S): S \in {\cal F}\}$ exhibiting a symmetry gap $\gamma$, we consider two smooth submodular\footnote{``Smooth submodularity'' means the condition $\mixdiff{F}{x_i}{x_j} \leq 0$ for all $i,j$.} functions, $F({\bf x})$ and $G({\bf x})$. The first one is the multilinear extension $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$, while the second one is its symmetrized version $G({\bf x}) = F(\bar{{\bf x}})$. We modify these functions slightly so that we obtain functions $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$ with the following property: For any vector ${\bf x}$ which is close to its symmetrized version $\bar{{\bf x}}$, $\hat{F}({\bf x}) = \hat{G}({\bf x})$. The functions $\hat{F}({\bf x}), \hat{G}({\bf x})$ induce instances of submodular maximization on the refined ground sets. The way we define discrete instances based on $\hat{F}({\bf x}), \hat{G}({\bf x})$ is natural, using the following lemma. Essentially, we interpret the fractional variables as fractions of clusters in the refined instance.
\begin{lemma} \label{lemma:smooth-submodular} Let $F:[0,1]^X \rightarrow {\boldmath R}$, $N = [n]$, $n \geq 1$, and define $f:2^{N \times X} \rightarrow {\boldmath R}$ so that $f(S) = F({\bf x})$ where $x_i = \frac{1}{n} |S \cap (N \times \{i\})|$. Then \begin{enumerate} \item If $\partdiff{F}{x_i} \geq 0$ everywhere for each $i$, then $f$ is monotone. \item If the first partial derivatives of $F$ are absolutely continuous\footnote{ A function $F:[0,1]^X \rightarrow {\boldmath R}$ is absolutely continuous, if $\forall \epsilon>0; \exists \delta>0; \sum_{i=1}^{t} ||{\bf x}_i-{\bf y}_i|| < \delta \Rightarrow \sum_{i=1}^{t} |F({\bf x}_i) - F({\bf y}_i)| < \epsilon$.} and $\mixdiff{F}{x_i}{x_j} \leq 0$ almost everywhere for all $i,j$, then $f$ is submodular. \end{enumerate} \end{lemma} \begin{proof} First, assume $\partdiff{F}{x_i} \geq 0$ everywhere for all $i$. This implies that $F$ is nondecreasing in every coordinate, i.e. $F({\bf x}) \leq F({\bf y})$ whenever ${\bf x} \leq {\bf y}$. This means that $f(S) \leq f(T)$ whenever $S \subseteq T$. Next, assume $\partdiff{F}{x_i}$ is absolutely continuous for each $i$ and $\mixdiff{F}{x_j}{x_i} \leq 0$ almost everywhere for all $i,j$. We want to prove that $\partdiff{F}{x_i} |_{\bf x} \geq \partdiff{F}{x_i} |_{\bf y}$ whenever ${\bf x} \leq {\bf y}$, which implies that the marginal values of $f$ are nonincreasing. Let ${\bf x} \leq {\bf y}$, fix $\delta>0$ arbitrarily small, and pick ${\bf x}',{\bf y}'$ such that $||{\bf x}'-{\bf x}||<\delta, ||{\bf y}'-{\bf y}||<\delta, {\bf x}' \leq {\bf y}'$ and on the line segment $[{\bf x}', {\bf y}']$, we have $\mixdiff{F}{x_j}{x_i} \leq 0$ except for a set of (1-dimensional) measure zero. 
If such a pair of points ${\bf x}', {\bf y}'$ does not exist, it means that there are sets $A,B$ of positive measure such that ${\bf x} \in A, {\bf y} \in B$ and for any ${\bf x}' \in A, {\bf y}' \in B$, the line segment $[{\bf x}',{\bf y}']$ contains a subset of positive (1-dimensional) measure where $\mixdiff{F}{x_j}{x_i}$ for some $j$ is positive or undefined. This would imply that $[0,1]^X$ contains a subset of positive measure where $\mixdiff{F}{x_j}{x_i}$ for some $j$ is positive or undefined, which we assume is not the case. Therefore, there is a pair of points ${\bf x}', {\bf y}'$ as described above. We compare $\partdiff{F}{x_i} |_{{\bf x}'}$ and $\partdiff{F}{x_i} |_{{\bf y}'}$ by integrating along the line segment $[{\bf x}', {\bf y}']$. Since $\partdiff{F}{x_i}$ is absolutely continuous and $\partdiff{}{x_j} \partdiff{F}{x_i} = \mixdiff{F}{x_j}{x_i} \leq 0$ along this line segment for all $j$ except for a set of measure zero, we obtain $\partdiff{F}{x_i} |_{{\bf x}'} \geq \partdiff{F}{x_i} |_{{\bf y}'}$. This is true for ${\bf x}', {\bf y}'$ arbitrarily close to ${\bf x}, {\bf y}$, and hence by continuity of $\partdiff{F}{x_i}$, we get $\partdiff{F}{x_i} |_{{\bf x}} \geq \partdiff{F}{x_i} |_{{\bf y}}$. This implies that the marginal values of $f$ are nonincreasing. \end{proof} The way we construct $\hat{F}({\bf x}), \hat{G}({\bf x})$ is such that, given a large enough refinement of the ground set, it is impossible to distinguish the instances corresponding to $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$. As we argue more precisely later, this holds because if the ground set is large and labeled in a random way (considering the symmetry group of the instance), a query about a vector ${\bf x}$ effectively becomes a query about the symmetrized vector $\bar{{\bf x}}$. 
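The discretization of Lemma~\ref{lemma:smooth-submodular} can be checked by brute force on a tiny example. The following Python sketch is our own illustration (with an arbitrary choice of smooth $F$, namely the multilinear extension of a two-element coverage function, whose mixed second partials are $\leq 0$).

```python
from itertools import combinations

def discretize(F, X, n):
    """Lemma: given F on [0,1]^X, define f(S) = F(x) on the ground set
    N x X with N = [n], where x_j = |S intersect (N x {j})| / n."""
    ground = [(i, j) for i in range(n) for j in X]
    def f(S):
        x = {j: sum(1 for i in range(n) if (i, j) in S) / n for j in X}
        return F(x)
    return ground, f

def all_subsets(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_monotone(ground, f):
    sets = all_subsets(ground)
    return all(f(A) <= f(B) + 1e-12 for A in sets for B in sets if A <= B)

def is_submodular(ground, f):
    # check f(A) + f(B) >= f(A | B) + f(A & B) over all pairs
    sets = all_subsets(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-12
               for A in sets for B in sets)

# F(x) = 1 - (1 - x_a)(1 - x_b): nondecreasing, mixed second partial = -1
F = lambda x: 1 - (1 - x['a']) * (1 - x['b'])
ground, f = discretize(F, ['a', 'b'], 3)
print(is_monotone(ground, f), is_submodular(ground, f))
```

With $|X| = 2$ and $n = 3$ the refined ground set has $6$ elements, so the exhaustive check over all pairs of subsets is fast.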
We would like this property to imply that all queries with high probability fall in the region where $\hat{F}({\bf x}) = \hat{G}({\bf x})$ and the inability to distinguish between $\hat{F}$ and $\hat{G}$ gives the hardness result that we want. The following lemma gives the precise properties of $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$ that we need. \begin{lemma} \label{lemma:final-fix} Consider a function $f:2^X \rightarrow {\boldmath R}_+$ invariant under a group of permutations $\cal G$ on the ground set $X$. Let $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$, $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$, and fix any $\epsilon > 0$. Then there is $\delta > 0$ and functions $\hat{F}, \hat{G}:[0,1]^X \rightarrow {\boldmath R}_+$ (which are also symmetric with respect to ${\cal G}$), satisfying: \begin{enumerate} \item For all ${\bf x} \in [0,1]^X$, $\hat{G}({\bf x}) = \hat{F}(\bar{{\bf x}})$. \item For all ${\bf x} \in [0,1]^X$, $|\hat{F}({\bf x}) - F({\bf x})| \leq \epsilon$. \item Whenever $||{\bf x} - \bar{{\bf x}}||^2 \leq \delta$, $\hat{F}({\bf x}) = \hat{G}({\bf x})$ and the value depends only on $\bar{{\bf x}}$. \item The first partial derivatives of $\hat{F}, \hat{G}$ are absolutely continuous. \item If $f$ is monotone, then $\partdiff{\hat{F}}{x_i} \geq 0$ and $\partdiff{\hat{G}}{x_i} \geq 0$ everywhere. \item If $f$ is submodular, then $\mixdiff{\hat{F}}{x_i}{x_j} \leq 0$ and $\mixdiff{\hat{G}}{x_i}{x_j} \leq 0$ almost everywhere. \end{enumerate} \end{lemma} The proof of this lemma is the main technical part of this paper and we defer it to the end of this section. Assuming this lemma, we first finish the proof of the main theorem. We prove the following. \begin{lemma} \label{lemma:indistinguish} Let $\hat{F}, \hat{G}$ be the two functions provided by Lemma~\ref{lemma:final-fix}. 
For a parameter $n \in {\boldmath Z}_+$ and $N = [n]$, define two discrete functions $\hat{f}, \hat{g}: 2^{N \times X} \rightarrow {\boldmath R}_+$ as follows: Let $\sigma^{(i)}$ be an arbitrary permutation in ${\cal G}$ for each $i \in N$. For every set $S \subseteq N \times X$, we define a vector $\xi(S) \in [0,1]^X$ by $$ \xi_j(S) = \frac{1}{n} \left|\{i \in N: (i,\sigma^{(i)}(j)) \in S \}\right|.$$ Let us define: $$ \hat{f}(S) = \hat{F}(\xi(S)), \ \ \ \ \ \hat{g}(S) = \hat{G}(\xi(S)).$$ In addition, let $\tilde{{\cal F}} = \{\tilde{S}: \xi(\tilde{S}) \in P({\cal F})\}$ be a feasibility constraint such that the condition $S \in {\cal F}$ depends only on the symmetrized vector $\overline{{\bf 1}_S}$. Then deciding whether an instance given by value/membership oracles is $\max \{\hat{f}(S): S \in \tilde{{\cal F}}\}$ or $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \}$ (even by a randomized algorithm, with any constant probability of success) requires an exponential number of queries. \end{lemma} \begin{proof} Let $\sigma^{(i)} \in {\cal G}$ be chosen independently at random for each $i \in N$ and consider the instances $\max\{\hat{f}(S): S \in \tilde{{\cal F}} \}$, $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \}$ as described in the lemma. We show that every deterministic algorithm will follow the same computation path and return the same answer on both instances, with high probability. By Yao's principle, this means that every randomized algorithm returns the same answer for the two instances with high probability, for some particular $\sigma^{(i)} \in {\cal G}$. The feasible sets in the refined instance, $S \in \tilde{{\cal F}}$, are such that the respective vector $\xi(S)$ is in the polytope $P({{\cal F}})$. Since the instance is strongly symmetric, the condition $S \in {\cal F}$ depends only on the symmetrized vector $\overline{{\bf 1}_S}$. Hence, the condition $\xi(S) \in P({\cal F})$ depends only on the symmetrized vector $\overline{\xi(S)}$. 
Therefore, $S \in \tilde{{\cal F}} \Leftrightarrow \xi(S) \in P({\cal F}) \Leftrightarrow \overline{\xi(S)} \in P({\cal F})$. We have $\overline{\xi(S)}_j = {\bf E}_{\sigma \in {\cal G}}[\frac{1}{n} \left|\{i \in N: (i,\sigma^{(i)}(\sigma(j))) \in S \}\right|].$ The distribution of $\sigma^{(i)} \circ \sigma$ is the same as that of $\sigma$, i.e.~uniform over ${\cal G}$. Hence, $\overline{\xi(S)}_j$, and consequently the condition $S \in \tilde{{\cal F}}$, does not depend on $\sigma^{(i)}$ in any way. Intuitively, an algorithm cannot learn any information about the permutations $\sigma^{(i)}$ by querying the feasibility oracle, since the feasibility condition $S \in \tilde{{\cal F}}$ does not depend on $\sigma^{(i)}$ for any $i \in N$. The main part of the proof is to show that even queries to the objective function are unlikely to reveal any information about the permutations $\sigma^{(i)}$. The key observation is that for any fixed query $Q$ to the objective function, the associated vector ${\bf q} = \xi(Q)$ is very likely to be close to its symmetrized version $\bar{{\bf q}}$. To see this, consider a query $Q$. The associated vector ${\bf q} = \xi(Q)$ is determined by $$ q_j = \frac{1}{n} |\{i \in N: (i,\sigma^{(i)}(j)) \in Q\}| = \frac{1}{n} \sum_{i=1}^{n} Q_{ij} $$ where $Q_{ij}$ is an indicator variable of the event $(i,\sigma^{(i)}(j)) \in Q$. This is a random event due to the randomness in $\sigma^{(i)}$. We have $$ {\bf E}[Q_{ij}] = \Pr[Q_{ij}=1] = \Pr_{\sigma^{(i)} \in {\cal G}}[(i,\sigma^{(i)}(j)) \in Q].$$ Adding up these expectations over $i \in N$, we get $$ \sum_{i \in N} {\bf E}[Q_{ij}] = \sum_{i \in N} \Pr_{\sigma^{(i)} \in {\cal G}}[(i,\sigma^{(i)}(j)) \in Q] = {\bf E}_{\sigma \in {\cal G}} [|\{i \in N: (i,\sigma(j)) \in Q \}|].$$ For the purposes of expectation, the independence of $\sigma^{(1)}, \ldots, \sigma^{(n)}$ is irrelevant, and that is why we replace them by one random permutation $\sigma$.
On the other hand, consider the symmetrized vector $\bar{{\bf q}}$, with coordinates $$ \bar{q}_j = {\bf E}_{\sigma \in {\cal G}}[q_{\sigma(j)}] = \frac{1}{n} {\bf E}_{\sigma \in {\cal G}}[|\{i \in N: (i,\sigma^{(i)}(\sigma(j))) \in Q\}|] = \frac{1}{n} {\bf E}_{\sigma \in {\cal G}}[|\{i \in N: (i,\sigma(j)) \in Q \}|] $$ using the fact that the distribution of $\sigma^{(i)} \circ \sigma$ is the same as the distribution of $\sigma$ - uniformly random over ${\cal G}$. Note that the vector ${\bf q}$ depends on the random permutations $\sigma^{(i)}$ but the symmetrized vector $\bar{{\bf q}}$ does not; this will also be useful in what follows. For now, we summarize that $$ \bar{q}_j = \frac{1}{n} \sum_{i=1}^{n} {\bf E}[Q_{ij}] = {\bf E}[q_j].$$ Since each permutation $\sigma^{(i)}$ is chosen independently, the random variables $\{Q_{ij}: 1 \leq i \leq n\}$ are independent (for a fixed $j$). We can apply Chernoff's bound (see e.g. \cite{AlonSpencer}, Corollary A.1.7): $$ \Pr\left[\left|\sum_{i=1}^{n} Q_{ij}- \sum_{i=1}^{n} {\bf E}[Q_{ij}] \right| > a \right] < 2e^{-2a^2 / n}.$$ Using $q_j = \frac{1}{n} \sum_{i=1}^{n} Q_{ij}$, $\bar{q}_j = \frac{1}{n} \sum_{i=1}^{n} {\bf E}[Q_{ij}]$ and setting $a = n \sqrt{\delta/|X|}$, we obtain $$ \Pr\left[|q_j - \bar{q}_j| > \sqrt{{\delta}/{|X|}}\right] < 2e^{-2 n \delta / |X|}.$$ By the union bound, \begin{equation} \label{eq:D(q)} \Pr[||{\bf q}-\bar{{\bf q}}||^2 > \delta] \leq \sum_{j \in X} \Pr[|q_j-\bar{q}_j|^2 > {\delta}/{|X|}] < 2|X| e^{-2n \delta/|X|}. \end{equation} Note that while $\delta$ and $|X|$ are constants, $n$ grows as the size of the refinement and hence the probability is exponentially small in the size of the ground set $N \times X$. Define $D({\bf q}) = ||{\bf q}-\bar{{\bf q}}||^2$. As long as $D({\bf q}) \leq \delta$ for every query issued by the algorithm, the answers do not depend on the randomness of the input.
This is because then the values of $\hat{F}({\bf q})$ and $\hat{G}({\bf q})$ depend only on $\bar{{\bf q}}$, which is independent of the random permutations $\sigma^{(i)}$, as we argued above. Therefore, assuming that $D({\bf q}) \leq \delta$ for each query, the algorithm will always follow the same path of computation and issue the same sequence of queries $\cal S$. (Note that this is just a fixed sequence which can be written down before we start running the algorithm.) Assume that $|{\cal S}| = e^{o(n)}$, i.e.,~the number of queries is subexponential in $n$. By (\ref{eq:D(q)}), using a union bound over all $Q \in {\cal S}$, it happens with probability $1 - e^{-\Omega(n)}$ that ${D}({\bf q}) = ||{\bf q}-\bar{{\bf q}}||^2 \leq \delta$ for all $Q \in {\cal S}$. (Note that $\delta$ and $|X|$ are constants here.) Then, the algorithm indeed follows this path of computation and gives the same answer. In particular, the answer does not depend on whether the instance is $\max \{\hat{f}(S): S \in \tilde{{\cal F}}\}$ or $\max \{\hat{g}(S): S \in \tilde{{\cal F}}\}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:general-hardness}] Fix an $\epsilon > 0$. Given an instance $\max \{f(S): S \in {\cal F}\}$ strongly symmetric under ${\cal G}$, let $\hat{F}, \hat{G}: [0,1]^X \rightarrow {\boldmath R}$ be the two functions provided by Lemma~\ref{lemma:final-fix}. We choose a large number $n$ and consider a refinement $\tilde{{\cal F}}$ on the ground set $N \times X$, where $N = [n]$. We define discrete instances of submodular maximization $\max \{ \hat{f}(S): S \in \tilde{{\cal F}} \}$ and $\max \{ \hat{g}(S): S \in \tilde{{\cal F}} \}$. As in Lemma~\ref{lemma:indistinguish}, for each $i \in N$ we choose a random permutation $\sigma^{(i)} \in {\cal G}$. This can be viewed as a random shuffle of the labeling of the ground set before we present it to an algorithm.
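The concentration argument in the proof of Lemma~\ref{lemma:indistinguish} can also be observed numerically. The following Python sketch is ours; the cyclic symmetry group on $X = [m]$ and the particular query $Q$ are arbitrary illustrative choices, not taken from the paper.

```python
import random

def D_of_query(n, m, Q, seed):
    """Simulated refinement with the cyclic group on X = [m]: row i is
    shuffled by sigma_i(j) = (j + s_i) mod m with a random shift s_i.
    Returns D(q) = ||q - q_bar||^2 for the query Q, where
    q_j = (1/n) |{i : (i, sigma_i(j)) in Q}|."""
    rng = random.Random(seed)
    shifts = [rng.randrange(m) for _ in range(n)]
    q = [sum(1 for i in range(n) if (i, (j + shifts[i]) % m) in Q) / n
         for j in range(m)]
    # under the cyclic group, q_bar_j is simply the average coordinate of q
    qbar = sum(q) / m
    return sum((qj - qbar) ** 2 for qj in q)

n, m = 20000, 4
# a fixed query covering "half" of the refined ground set
Q = {(i, j) for i in range(n) for j in range(m) if (i + j) % 2 == 0}
print(D_of_query(n, m, Q, seed=7))  # tiny, on the order of 1/n
```

As the Chernoff bound predicts, $D({\bf q})$ shrinks with the refinement parameter $n$ while $\delta$ and $|X|$ stay fixed.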
For every set $S \subseteq N \times X$, we define a vector $\xi(S) \in [0,1]^X$ by $$ \xi_j(S) = \frac{1}{n} \left|\{i \in N: (i,\sigma^{(i)}(j)) \in S \}\right|.$$ In other words, $\xi_j(S)$ measures the fraction of copies of element $j$ contained in $S$; however, for each $i$ the $i$-copies of all elements are shuffled by $\sigma^{(i)}$. Next, we define $$ \hat{f}(S) = \hat{F}(\xi(S)), \ \ \ \ \ \hat{g}(S) = \hat{G}(\xi(S)).$$ We claim that $\hat{f}$ and $\hat{g}$ are submodular (for any fixed $\xi$). Note that the effect of $\sigma^{(i)}$ is just a renaming (or shuffling) of the elements of $N \times X$, and hence for the purpose of proving submodularity we can assume that $\sigma^{(i)}$ is the identity for all $i$. Then, $\xi_j(S) = \frac{1}{n} |S \cap (N \times \{j\})|$. Due to Lemma~\ref{lemma:smooth-submodular}, the property $\mixdiff{\hat{F}}{x_i}{x_j} \leq 0$ (almost everywhere) implies that $\hat{f}$ is submodular. In addition, if the original instance was monotone, then $\partdiff{\hat{F}}{x_j} \geq 0$ and $\hat{f}$ is monotone. The same holds for $\hat{g}$. The value of $\hat{g}(S)$ for any feasible solution $S \in \tilde{{\cal F}}$ is bounded by $\hat{g}(S) = \hat{G}(\xi(S)) = \hat{F}(\overline{\xi(S)}) \leq \overline{OPT} + \epsilon$. On the other hand, let ${\bf x}^*$ denote a point where the optimum of the continuous problem $\max \{\hat{F}({\bf x}): {\bf x} \in P({{\cal F}}) \}$ is attained, i.e. $\hat{F}({\bf x}^*) \geq OPT - \epsilon$. For a large enough $n$, we can approximate the point ${\bf x}^*$ arbitrarily closely by a rational vector with $n$ in the denominator, which corresponds to a discrete solution $S^* \in \tilde{{\cal F}}$ whose value $\hat{f}(S^*)$ is at least, say, $OPT - 2 \epsilon$. 
Hence, the ratio between the optima of the {\em discrete} optimization problems $\max \{\hat{f}(S): S \in \tilde{{\cal F}}\}$ and $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \}$ can be made at most $\frac{\overline{OPT} + \epsilon}{OPT - 2 \epsilon}$, i.e. arbitrarily close to the symmetry gap $\gamma = \frac{\overline{OPT}}{OPT}$. By Lemma~\ref{lemma:indistinguish}, distinguishing the two instances $\max \{\hat{f}(S): S \in \tilde{{\cal F}} \}$ and $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \}$, even by a randomized algorithm, requires an exponential number of value queries. Therefore, we cannot estimate the optimum within a factor better than $\gamma$. \end{proof} Next, we prove Theorem~\ref{thm:multilinear-hardness} (again assuming Lemma~\ref{lemma:final-fix}), i.e.~an analogous hardness result for solving the multilinear optimization problem. \begin{proof}[Proof of Theorem~\ref{thm:multilinear-hardness}] Given a symmetric instance $\max \{f(S): S \in {\cal F}\}$ and $\epsilon>0$, we construct refined and modified instances $\max \{\hat{f}(S): S \in \tilde{{\cal F}} \}$, $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \}$, derived from the continuous functions $\hat{F}, \hat{G}$ provided by Lemma~\ref{lemma:final-fix}, exactly as we did in the proof of Theorem~\ref{thm:general-hardness}. Lemma~\ref{lemma:indistinguish} states that these two instances cannot be distinguished using a subexponential number of value queries. Furthermore, the gap between the two modified instances corresponds to the symmetry gap $\gamma$ of the original instance: $\max \{\hat{f}(S): S \in \tilde{{\cal F}} \} \geq OPT - 2 \epsilon$ and $\max \{\hat{g}(S): S \in \tilde{{\cal F}} \} \leq \overline{OPT} + \epsilon = \gamma OPT + \epsilon$. Now we consider the multilinear relaxations of the two refined instances, $\max \{\check{F}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ and $\max \{\check{G}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$. 
Note that $\check{F}, \check{G}$ (although related to $\hat{F}, \hat{G}$) are not exactly the same as the functions $\hat{F}, \hat{G}$; in particular, they are defined on a larger (refined) domain. However, we show that the gap between the optima of the two instances remains the same. First, the value of $\max \{\check{F}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ is at least the optimum of the discrete problem, $\max \{\hat{f}(S): S \in \tilde{{\cal F}} \}$, which is at least $OPT - 2\epsilon$ as in the proof of Theorem~\ref{thm:general-hardness}. The value of $\max \{\check{G}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ can be analyzed as follows. For any fractional solution ${\bf x} \in P(\tilde{{\cal F}})$, the value of $\check{G}({\bf x})$ is the expectation ${\bf E}[\hat{g}(\hat{{\bf x}})]$, where $\hat{{\bf x}}$ is obtained by independently rounding the coordinates of ${\bf x}$ to $\{0,1\}$. Recall that $\hat{g}$ is obtained by discretizing the continuous function $\hat{G}$ (using Lemma~\ref{lemma:smooth-submodular}). In particular, $\hat{g}(S) = \hat{G}(\tilde{{\bf x}})$ where $\tilde{x}_i = \frac{1}{n} |S \cap (N \times \{i\})|$ is the fraction of the respective cluster contained in $S$, and $|N| = n$ is the size of each cluster (the refinement parameter). If ${\bf 1}_S = \hat{{\bf x}}$, i.e. $S$ is chosen by independent sampling with probabilities according to ${\bf x}$, then for large $n$ the fractions $\frac{1}{n}|S \cap (N \times \{i\})|$ will be strongly concentrated around their expectation. As $\hat{G}$ is continuous, we get $\lim_{n \rightarrow \infty} {\bf E}[\hat{g}(\hat{{\bf x}})] = \lim_{n \rightarrow \infty} {\bf E}[\hat{G}(\tilde{{\bf x}})] = \hat{G}({\bf E}[\tilde{{\bf x}}]) = \hat{G}(\bar{{\bf x}})$. Here, $\bar{{\bf x}}$ is the vector ${\bf x}$ projected back to the original ground set $X$, where the coordinates of each cluster have been averaged.
By construction of the refinement, if ${\bf x} \in P(\tilde{{\cal F}})$ then $\bar{{\bf x}}$ is in the polytope corresponding to the original instance, $P({\cal F})$. Therefore, $\hat{G}(\bar{{\bf x}}) \leq \max \{\hat{G}({\bf x}): {\bf x} \in P({\cal F}) \} \leq \gamma OPT + \epsilon$. For large enough $n$, this means that $\max \{\check{G}({\bf x}): {\bf x} \in P(\tilde{{\cal F}}) \} \leq \gamma OPT + 2 \epsilon$. This holds for an arbitrarily small fixed $\epsilon>0$, and hence the gap between the instances $\max \{\check{F}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ and $\max \{\check{G}({\bf x}): {\bf x} \in P(\tilde{{\cal F}})\}$ (which cannot be distinguished) can be made arbitrarily close to $\gamma$. \end{proof} \paragraph{Hardness for symmetric instances} We remark that since Lemma~\ref{lemma:final-fix} provides functions $\hat{F}$ and $\hat{G}$ symmetric under ${\cal G}$, the refined instances that we define are invariant with respect to the following symmetries: permute the copies of each element in an arbitrary way, and permute the classes of copies according to any permutation $\sigma \in {\cal G}$. This means that our hardness results also hold for instances satisfying such symmetry properties. \ It remains to prove Lemma~\ref{lemma:final-fix}. Before we move to the final construction of $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$, we construct as an intermediate step a function $\tilde{F}({\bf x})$ which is helpful in the analysis. \ \paragraph{Construction} Let us construct a function $\tilde{F}({\bf x})$ which satisfies the following: \begin{itemize} \item For ${\bf x}$ ``sufficiently close'' to $\bar{{\bf x}}$, $\tilde{F}({\bf x}) = G({\bf x})$. \item For ${\bf x}$ ``sufficiently far away'' from $\bar{{\bf x}}$, $\tilde{F}({\bf x}) \simeq F({\bf x})$. \item The function $\tilde{F}({\bf x})$ is ``approximately'' smooth submodular.
\end{itemize} Once we have $\tilde{F}({\bf x})$, we can fix it to obtain a smooth submodular function $\hat{F}({\bf x})$, which is still close to the original function $F({\bf x})$. We also fix $G({\bf x})$ in the same way, to obtain a function $\hat{G}({\bf x})$ which is equal to $\hat{F}({\bf x})$ whenever ${\bf x}$ is close to $\bar{{\bf x}}$. We defer this step until the end. We define $\tilde{F}({\bf x})$ as a convex linear combination of $F({\bf x})$ and $G({\bf x})$, guided by a ``smooth transition'' function, depending on the distance of ${\bf x}$ from $\bar{{\bf x}}$. The form that we use is the following:\footnote{We remark that a construction analogous to \cite{MSV08} would be $\tilde{F}({\bf x}) = F({\bf x}) - \phi(H({\bf x}))$ where $H({\bf x}) = F({\bf x}) - G({\bf x})$. While this makes the analysis easier in \cite{MSV08}, it cannot be used in general. Roughly speaking, the problem is that in general the partial derivatives of $H({\bf x})$ are not bounded in any way by the value of $H({\bf x})$.} $$ \tilde{F}({\bf x}) = (1-\phi(D({\bf x}))) F({\bf x}) + \phi(D({\bf x})) G({\bf x}) $$ where $\phi:{\boldmath R}_+ \rightarrow [0,1]$ is a suitable smooth function, and $$ D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 = \sum_i (x_i - \bar{x}_i)^2.$$ The idea is that when ${\bf x}$ is close to $\bar{{\bf x}}$, $\phi(D({\bf x}))$ should be close to $1$, i.e. the convex linear combination should give most of the weight to $G({\bf x})$. The weight should shift gradually to $F({\bf x})$ as ${\bf x}$ gets further away from $\bar{{\bf x}}$. Therefore, we define $\phi(t) = 1$ in a small interval $t \in [0,\delta]$, and $\phi(t)$ tends to $0$ as $t$ increases. This guarantees that $\tilde{F}({\bf x}) = G({\bf x})$ whenever $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \leq \delta$. We defer the precise construction of $\phi(t)$ to Lemma~\ref{lemma:phi-construction}, after we determine what properties we need from $\phi(t)$. 
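For intuition, here is a toy Python version of this transition on the two-element instance with $F(x_1,x_2) = 1-(1-x_1)(1-x_2)$ and the full symmetric group. The transition function $\phi$ below is an ad hoc $C^1$ choice of ours, purely for illustration; it is not the $\phi$ constructed later in Lemma~\ref{lemma:phi-construction}.

```python
import math

delta = 0.01

def phi(t):
    """Illustrative smooth transition: identically 1 on [0, delta], then a
    C^1 decay towards 0 (the derivative vanishes at t = delta)."""
    if t <= delta:
        return 1.0
    return math.exp(-((t - delta) / delta) ** 2)

# two symmetric elements, f = OR, full symmetric group
F = lambda x1, x2: 1 - (1 - x1) * (1 - x2)          # multilinear extension
G = lambda x1, x2: F((x1 + x2) / 2, (x1 + x2) / 2)  # symmetrized version
Dist = lambda x1, x2: 2 * ((x1 - x2) / 2) ** 2      # D(x) = ||x - x_bar||^2

def F_tilde(x1, x2):
    t = phi(Dist(x1, x2))
    return (1 - t) * F(x1, x2) + t * G(x1, x2)

# near the diagonal, F_tilde coincides with G ...
print(F_tilde(0.5, 0.52) == G(0.5, 0.52))
# ... far from it, F_tilde is essentially F
print(abs(F_tilde(1.0, 0.0) - F(1.0, 0.0)))
```

The two printed checks mirror the first two bullet points: equality with $G$ inside the region $D({\bf x}) \leq \delta$, and near-equality with $F$ far from the diagonal.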
Note that regardless of the definition of $\phi(t)$, $\tilde{F}({\bf x})$ is symmetric with respect to ${\cal G}$, since $F({\bf x}), G({\bf x})$ and $D({\bf x})$ are. \ \paragraph{Analysis of the construction} Due to the construction of $\tilde{F}({\bf x})$, it is clear that when $D({\bf x}) = ||{\bf x}-\bar{{\bf x}}||^2$ is small, $\tilde{F}({\bf x}) = G({\bf x})$. When $D({\bf x})$ is large, $\tilde{F}({\bf x}) \simeq F({\bf x})$. The main issue, however, is whether we can say something about the first and second partial derivatives of $\tilde{F}$. This is crucial for the properties of monotonicity and submodularity, which we would like to preserve. Let us write $\tilde{F}({\bf x})$ as $$ \tilde{F}({\bf x}) = F({\bf x}) - \phi(D({\bf x})) H({\bf x}) $$ where $H({\bf x}) = F({\bf x}) - G({\bf x})$. By differentiating once, we get \begin{equation} \label{eq:partdiff} \partdiff{\tilde{F}}{x_i} = \partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} - \phi'(D({\bf x})) \partdiff{D}{x_i} H({\bf x}) \end{equation} and by differentiating twice, \begin{eqnarray} \label{eq:mixdiff} \mixdiff{\tilde{F}}{x_i}{x_j} & = & \mixdiff{F}{x_i}{x_j} - \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j} - \phi''({D}({\bf x})) \partdiff{{D}}{x_i} \partdiff{{D}}{x_j} H({\bf x}) \\ & & - \phi'({D}({\bf x})) \left( \partdiff{{D}}{x_j} \partdiff{H}{x_i} + \mixdiff{{D}}{x_i}{x_j} H({\bf x}) + \partdiff{{D}}{x_i} \partdiff{H}{x_j} \right) \nonumber. \end{eqnarray} The first two terms on the right-hand sides of (\ref{eq:partdiff}) and (\ref{eq:mixdiff}) are harmless, because they form convex linear combinations of the derivatives of $F({\bf x})$ and $G({\bf x})$, which have the properties that we need. The remaining terms might cause problems, however, and we need to estimate them. Our strategy is to define $\phi(t)$ in such a way that it eliminates the influence of partial derivatives of $D$ and $H$ where they become too large.
Roughly speaking, $D$ and $H$ have negligible partial derivatives when ${\bf x}$ is very close to $\bar{{\bf x}}$. As ${\bf x}$ moves away from $\bar{{\bf x}}$, the partial derivatives grow but then the behavior of $\phi(t)$ must be such that their influence is suppressed. We start with the following important claim.\footnote{ We remind the reader that $\nabla F$, the gradient of $F$, is a vector whose coordinates are the first partial derivatives $\partdiff{F}{x_i}$. We denote by $\nabla F |_{{\bf x}}$ the gradient evaluated at ${\bf x}$.} \begin{lemma} \label{lemma:grad-symmetry} Assume that $F:[0,1]^X \rightarrow {\boldmath R}$ is differentiable and invariant under a group of permutations of coordinates ${\cal G}$. Let $G({\bf x}) = F(\bar{{\bf x}})$, where $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$. Then for any ${\bf x} \in [0,1]^X$, $$ \nabla{G}|_{\bf x} = \nabla{F}|_{\bar{{\bf x}}}.$$ \end{lemma} \begin{proof} To avoid confusion, we use ${\bf x}$ for the arguments of the functions $F$ and $G$, and ${\bf u}$, $\bar{{\bf u}}$, etc. for points where their partial derivatives are evaluated. To rephrase, we want to prove that for any point ${\bf u}$ and any coordinate $i$, the partial derivative of $G$ at ${\bf u}$ equals the partial derivative of $F$ at $\bar{{\bf u}}$: $\partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = \partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}}$. First, consider $F({\bf x})$. We assume that $F({\bf x})$ is invariant under a group of permutations of coordinates $\cal G$, i.e. $F({\bf x}) = F(\sigma({\bf x}))$ for any $\sigma \in {\cal G}$. Differentiating both sides at ${\bf x}={\bf u}$, we get by the chain rule: $$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf u}} = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\sigma({\bf u})} \partdiff{}{x_i} (\sigma({\bf x}))_j = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\sigma({\bf u})} \partdiff{x_{\sigma(j)}}{x_i}. $$ Here, $\partdiff{x_{\sigma(j)}}{x_i} = 1$ if $\sigma(j) = i$, and $0$ otherwise.
Therefore, $$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf u}} = \partdiff{F}{x_{\sigma^{-1}(i)}} \Big|_{{\bf x}=\sigma({\bf u})}. $$ Now, if we evaluate the left-hand side at $\bar{{\bf u}}$, the right-hand side is evaluated at $\sigma(\bar{{\bf u}}) = \bar{{\bf u}}$, and hence for any $i$ and any $\sigma \in {\cal G}$, \begin{equation} \label{eq:F-inv} \partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} = \partdiff{F}{x_{\sigma^{-1}(i)}} \Big|_{{\bf x}=\bar{{\bf u}}}. \end{equation} Turning to $G({\bf x}) = F(\bar{{\bf x}})$, let us write $\partdiff{G}{x_i}$ using the chain rule: $$ \partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = \partdiff{}{x_i} F(\bar{{\bf x}}) \Big|_{{\bf x}={\bf u}} = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\bar{{\bf u}}} \cdot \partdiff{\bar{x}_j}{x_i}. $$ We have $\bar{x}_j = {\bf E}_{\sigma \in {\cal G}}[{\bf x}_{\sigma(j)}]$, and so $$ \partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\bar{{\bf u}}} \cdot \partdiff{}{x_i} {\bf E}_{\sigma \in {\cal G}}[x_{\sigma(j)}] = {\bf E}_{\sigma \in {\cal G}} \left[\sum_j \partdiff{F}{x_j} \Big|_{x=\bar{u}} \cdot \partdiff{x_{\sigma(j)}}{x_i} \right]. $$ Again, $\partdiff{x_{\sigma(j)}}{x_i} = 1$ if $\sigma(j) = i$ and $0$ otherwise. Consequently, we obtain $$ \partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = {\bf E}_{\sigma \in {\cal G}} \left[ \partdiff{F}{x_{\sigma^{-1}(i)}} \Big|_{{\bf x}=\bar{{\bf u}}} \right] = \partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} $$ where we used Eq. (\ref{eq:F-inv}) to remove the dependence on $\sigma \in {\cal G}$. \end{proof} Observe that the symmetrization operation $\bar{{\bf x}}$ is idempotent, i.e. $\bar{\bar{{\bf x}}} = \bar{{\bf x}}$. Because of this, we also get $\nabla{G}|_{\bar{{\bf x}}} = \nabla{F}|_{\bar{{\bf x}}}$. Note that $G(\bar{{\bf x}}) = F(\bar{{\bf x}})$ follows from the definition, but it is not obvious that the same holds for gradients, since their definition involves points where $G({\bf x}) \neq F({\bf x})$. 
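As a quick numerical illustration of Lemma~\ref{lemma:grad-symmetry} (a sanity check, not part of the argument), take $|X| = 2$ with ${\cal G}$ the swap group and $F$ the multilinear extension of OR (an example choice of ours); finite differences confirm $\nabla G|_{{\bf x}} = \nabla F|_{\bar{{\bf x}}}$, while the mixed second derivatives already disagree.

```python
# Finite-difference check of Lemma grad-symmetry for |X| = 2 with the swap
# group: G(x) = F(bar(x)).  F = multilinear extension of OR (example choice).
def F(x):
    return 1 - (1 - x[0]) * (1 - x[1])

def bar(x):
    s = (x[0] + x[1]) / 2
    return (s, s)

def G(x):
    return F(bar(x))

def grad(f, x, h=1e-6):          # central differences
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def mixed(f, x, h=1e-4):         # d^2 f / dx_0 dx_1 by central differences
    return (f((x[0] + h, x[1] + h)) - f((x[0] + h, x[1] - h))
            - f((x[0] - h, x[1] + h)) + f((x[0] - h, x[1] - h))) / (4 * h * h)

u = (0.7, 0.2)
assert all(abs(a - b) < 1e-6 for a, b in zip(grad(G, u), grad(F, bar(u))))
# Hessians differ: d2F/dx0dx1 = -1 while d2G/dx0dx1 = -1/2.
assert abs(mixed(F, u) + 1.0) < 1e-4 and abs(mixed(G, u) + 0.5) < 1e-4
```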
For second partial derivatives, the equality no longer holds, as can be seen from a simple example such as $F(x_1,x_2) = 1 - (1-x_1)(1-x_2)$, $G(x_1,x_2) = 1 - (1-\frac{x_1+x_2}{2})^2$. Next, we show that the functions $F({\bf x})$ and $G({\bf x})$ are very similar in the close vicinity of the region where $\bar{{\bf x}} = {\bf x}$. Recall our definitions: $H({\bf x}) = F({\bf x}) - G({\bf x})$, $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2$. Based on Lemma~\ref{lemma:grad-symmetry}, we know that $H(\bar{{\bf x}}) = 0$ and $\nabla H|_{\bar{{\bf x}}} = 0$. In the following lemmas, we present bounds on $H({\bf x})$, $D({\bf x})$ and their partial derivatives. \begin{lemma} \label{lemma:H-bounds} Let $f:2^X \rightarrow [0,M]$ be invariant under a permutation group $\cal G$. Let $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$, $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2$ and $H({\bf x}) = F({\bf x}) - G({\bf x})$ where $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$ and $G({\bf x}) = F(\bar{{\bf x}})$. Then \begin{enumerate} \item $ |\mixdiff{H}{x_i}{x_j}| \leq 8M $ everywhere, for all $i,j$; \item $ ||\nabla H({\bf x})|| \leq 8M|X| \sqrt{D({\bf x})}$; \item $ |H({\bf x})| \leq 8M|X| \cdot D({\bf x}). $ \end{enumerate} \end{lemma} \begin{proof} First, let us get a bound on the second partial derivatives. Assuming without loss of generality $x_i=x_j=0$, we have\footnote{${\bf x} \vee {\bf y}$ denotes the coordinate-wise maximum, $({\bf x} \vee {\bf y})_i = \max \{x_i,y_i\}$ and ${\bf x} \wedge {\bf y}$ denotes the coordinate-wise minimum, $({\bf x} \wedge {\bf y})_i = \min \{x_i,y_i\}$.} $$ \mixdiff{F}{x_i}{x_j} = {\bf E}[f(\hat{{\bf x}} \vee ({\bf e}_i+{\bf e}_j)) - f(\hat{{\bf x}} \vee {\bf e}_i) - f(\hat{{\bf x}} \vee {\bf e}_j) + f(\hat{{\bf x}})] $$ (see \cite{Vondrak08}). Consequently, $$ \Big| \mixdiff{F}{x_i}{x_j} \Big| \leq 4 \max |f(S)| = 4 M.$$ It is a little bit more involved to analyze $\mixdiff{G}{x_i}{x_j}$. 
Since $G({\bf x}) = F(\bar{{\bf x}})$ and $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$, we get by the chain rule: $$ \mixdiff{G}{x_i}{x_j} = \sum_{k,\ell} \mixdiff{F}{x_k}{x_\ell} \partdiff{\bar{x}_k}{x_i} \partdiff{\bar{x}_\ell}{x_j} = {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \sum_{k,\ell} \mixdiff{F}{x_k}{x_\ell} \partdiff{x_{\sigma(k)}}{x_i} \partdiff{x_{\tau(\ell)}}{x_j} \right].$$ It is useful here to use the Kronecker symbol, $\delta_{i,j}$, which is $1$ if $i=j$ and $0$ otherwise. Note that $\partdiff{x_{\sigma(k)}}{x_i} = \delta_{i,\sigma(k)} = \delta_{\sigma^{-1}(i),k}$, etc. Using this notation, we get $$ \mixdiff{G}{x_i}{x_j} = {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \sum_{k,\ell} \mixdiff{F}{x_k}{x_\ell} \delta_{\sigma^{-1}(i),k} \delta_{\tau^{-1}(j),\ell} \right] = {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \mixdiff{F}{x_{\sigma^{-1}(i)}}{x_{\tau^{-1}(j)}} \right], $$ $$ \Big| \mixdiff{G}{x_i}{x_j} \Big| \leq {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \Big| \mixdiff{F}{x_{\sigma^{-1}(i)}}{x_{\tau^{-1}(j)}} \Big| \right] \leq 4 M $$ and therefore $$ \Big| \mixdiff{H}{x_i}{x_j} \Big| = \Big| \mixdiff{F}{x_i}{x_j} - \mixdiff{G}{x_i}{x_j} \Big| \leq 8 M.$$ Next, we estimate $\partdiff{H}{x_i}$ at a given point ${\bf u}$, depending on its distance from $\bar{{\bf u}}$. Consider the line segment between $\bar{{\bf u}}$ and ${\bf u}$. The function $H({\bf x}) = F({\bf x}) - G({\bf x})$ is $C^\infty$-differentiable, and hence we can apply the mean value theorem to $\partdiff{H}{x_i}$: there exists a point $\tilde{{\bf u}}$ on the line segment $[\bar{{\bf u}}, {\bf u}]$ such that $$ \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} - \partdiff{H}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} = \sum_j \mixdiff{H}{x_j}{x_i} \Big|_{{\bf x}=\tilde{{\bf u}}} (u_j-\bar{u}_j).$$ Recall that $\partdiff{H}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} = 0$. 
Applying the Cauchy-Schwarz inequality to the right-hand side, we get $$ \left( \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} \right)^2 \leq \sum_j \left( \mixdiff{H}{x_j}{x_i} \Big|_{{\bf x}=\tilde{{\bf u}}} \right)^2 || {\bf u} - \bar{{\bf u}} ||^2 \leq (8M)^2 |X| ||{\bf u} - \bar{{\bf u}}||^2.$$ Adding up over all $i \in X$, we obtain $$ || \nabla H({\bf u}) ||^2 = \sum_{i}\left( \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} \right)^2 \leq (8M|X|)^2 ||{\bf u} - \bar{{\bf u}}||^2.$$ Finally, we estimate the growth of $H({\bf u})$. Again, by the mean value theorem, there is a point $\tilde{{\bf u}}$ on the line segment $[\bar{{\bf u}},{\bf u}]$, such that $$ H({\bf u}) - H(\bar{{\bf u}}) = ({\bf u} - \bar{{\bf u}}) \cdot \nabla H(\tilde{{\bf u}}).$$ Using $H(\bar{{\bf u}}) = 0$, the Cauchy-Schwarz inequality and the above bound on $\nabla H$, $$ (H({\bf u}))^2 \leq ||\nabla H({\tilde{{\bf u}}})||^2 ||{\bf u} - \bar{{\bf u}}||^2 \leq (8M|X|)^2 ||\tilde{{\bf u}}-\bar{{\bf u}}||^2 ||{\bf u}-\bar{{\bf u}}||^2. $$ Clearly, $||\tilde{{\bf u}} - \bar{{\bf u}}|| \leq ||{\bf u} - \bar{{\bf u}}||$, and therefore $$ |H({\bf u})| \leq 8M|X| \cdot ||{\bf u}-\bar{{\bf u}}||^2.$$ \end{proof} \begin{lemma} \label{lemma:D-bounds} For the function $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2$, we have \begin{enumerate} \item $\nabla D = 2({\bf x} - \bar{{\bf x}})$, and therefore $||\nabla D|| = 2 \sqrt{D({\bf x})}$. \item For all $i,j$, $|\mixdiff{D}{x_i}{x_j}| \leq 2$. \end{enumerate} \end{lemma} \begin{proof} Let us write $D({\bf x})$ as $$ D({\bf x}) = \sum_i (x_i - \bar{x}_i)^2 = \sum_i {\bf E}_{\sigma \in {\cal G}}[x_i - x_{\sigma(i)}] {\bf E}_{\tau \in {\cal G}}[x_i - x_{\tau(i)}]. $$ Taking the first partial derivative, $$ \partdiff{D}{x_j} = 2 \sum_i {\bf E}_{\sigma \in {\cal G}}[x_i - x_{\sigma(i)}] \partdiff{}{x_j} {\bf E}_{\tau \in {\cal G}}[x_i - x_{\tau(i)}].$$ As before, we have $\partdiff{x_i}{x_j} = \delta_{ij}$. 
Using this notation, we get \begin{eqnarray*} \partdiff{D}{x_j} & = & 2 \sum_i {\bf E}_{\sigma \in {\cal G}}[x_i - x_{\sigma(i)}] {\bf E}_{\tau \in {\cal G}}[\delta_{ij} - \delta_{\tau(i),j}] \\ & = & 2 \sum_i {\bf E}_{\sigma,\tau \in {\cal G}}[(x_i - x_{\sigma(i)}) (\delta_{ij} - \delta_{i,\tau^{-1}(j)})] \\ & = & 2 \ {\bf E}_{\sigma,\tau \in {\cal G}}[x_j - x_{\sigma(j)} - x_{\tau^{-1}(j)} + x_{\sigma(\tau^{-1}(j))}]. \end{eqnarray*} Since the distributions of $\sigma(j)$, $\tau^{-1}(j)$ and $\sigma(\tau^{-1}(j))$ are the same, we obtain $$ \partdiff{D}{x_j} = 2 \ {\bf E}_{\sigma \in {\cal G}}[x_j - x_{\sigma(j)}] = 2(x_j - \bar{x}_j) $$ and $$ ||\nabla D||^2 = \sum_j \Big| \partdiff{D}{x_j} \Big|^2 = 4 \sum_j (x_j - \bar{x}_j)^2 = 4 D({\bf x}).$$ Finally, the second partial derivatives are $$ \mixdiff{D}{x_i}{x_j} = 2 \partdiff{}{x_i} (x_j - \bar{x}_j) = 2 \partdiff{}{x_i} {\bf E}_{\sigma \in {\cal G}}[x_j - x_{\sigma(j)}] = 2 {\bf E}_{\sigma \in {\cal G}}[\delta_{ij} - \delta_{i,\sigma(j)}] $$ which is clearly bounded by $2$ in absolute value. \end{proof} Now we come back to $\tilde{F}({\bf x})$ and its partial derivatives. Recall equations (\ref{eq:partdiff}) and (\ref{eq:mixdiff}). The problematic terms are those involving $\phi'(D({\bf x}))$ and $\phi''(D({\bf x}))$. Using our bounds on $H({\bf x})$, $D({\bf x})$ and their derivatives, however, we notice that $\phi'(D({\bf x}))$ always appears with factors on the order of $D({\bf x})$ and $\phi''(D({\bf x}))$ appears with factors on the order of $(D({\bf x}))^2$. Thus, it is sufficient if $\phi(t)$ is defined so that we have control over $t \phi'(t)$ and $t^2 \phi''(t)$. The following lemma describes the function that we need. \begin{lemma} \label{lemma:phi-construction} For any $\alpha, \beta > 0$, there is $\delta \in (0, \beta)$ and a function $\phi:{\boldmath R}_+ \rightarrow [0,1]$ with an absolutely continuous first derivative such that \begin{enumerate} \item For $t \leq \delta$, $\phi(t) = 1$. 
\item For $t \geq \beta$, $\phi(t) < e^{-1/\alpha}$. \item For all $t \geq 0$, $|t \phi'(t)| \leq 4 \alpha$. \item For almost all $t \geq 0$, $|t^2 \phi''(t)| \leq 10 \alpha$. \end{enumerate} \end{lemma} \begin{proof} First, observe that if we prove the lemma for some particular value $\beta>0$, we can also prove it for any other value $\beta'>0$, by modifying the function as follows: $\tilde{\phi}(t) = \phi(\beta t / \beta')$. This corresponds to a scaling of the parameter $t$ (and of the constant $\delta$) by $\beta' / \beta$. Observe that then $|t \tilde{\phi}'(t)| = |\frac{\beta}{\beta'} t \phi'(\beta t / \beta')| \leq 4 \alpha$ and $|t^2 \tilde{\phi}''(t)| = |(\frac{\beta}{\beta'})^2 t^2 \phi''(\beta t / \beta')| \leq 10 \alpha$, so the conditions are still satisfied. Therefore, we can assume without loss of generality that $\beta = e^{1/(2\alpha^2)}+1$. We can also assume that $\alpha \in (0,\frac18)$, because for larger $\alpha$ the statement only gets weaker. We set $\delta = 1$ and $\delta_2 = 1 + (1+\alpha)^{-1/2} \leq 2$. (We remind the reader that in general these values will be scaled depending on the actual value of $\beta$.) We define the function as follows: \begin{enumerate} \item $\phi(t) = 1$ for $t \in [0,\delta]$. \item $\phi(t) = 1 - \alpha (t-1)^2$ for $t \in [\delta, \delta_2]$. \item $\phi(t) = (1+\alpha)^{-1-\alpha} (t - 1)^{-2 \alpha}$ for $t \in [\delta_2, \infty)$. \end{enumerate} \noindent Let's verify the properties of $\phi(t)$. For $t \in [0,\delta]$, we have $\phi'(t) = \phi''(t) = 0$. 
For $t \in [\delta,\delta_2]$, we have $$ \phi'(t) = -2\alpha \left(t-1 \right), \ \ \ \ \ \ \phi''(t) = -2\alpha, $$ and for $t \in [\delta_2, \infty)$, $$ \phi'(t) = -{2\alpha}{(1+\alpha)^{-1-\alpha}} \left( t-1 \right)^{-2\alpha-1}, $$ $$ \phi''(t) = {2\alpha(1+2\alpha)}{(1+\alpha)^{-1-\alpha}} \left( t-1 \right)^{-2\alpha-2}. $$ First, we check that the values and first derivatives agree at the breakpoints. For $t = \delta = 1$, we get $\phi(1) = 1$ and $\phi'(1) = 0$. For $t = \delta_2 = 1 + (1+\alpha)^{-1/2}$, we get $\phi(\delta_2) = (1+\alpha)^{-1}$ and $\phi'(\delta_2) = -2 \alpha (1+\alpha)^{-1/2}$. Next, we need to check that $\phi(t)$ is very small for $t \geq \beta$. The function is decreasing for $t > \beta$, therefore it is enough to check $t = \beta = e^{1/(2\alpha^2)}+1$: $$ \phi\left(\beta \right) = (1+\alpha)^{-1-\alpha} (\beta - 1)^{-2\alpha} \leq (\beta-1)^{-2\alpha} = e^{-1/\alpha}. $$ The derivative bounds are satisfied trivially for $t \in [0,\delta]$. For $t \in [\delta, \delta_2]$, using $t \leq \delta_2 = 1 + (1+\alpha)^{-1/2}$, $$ |t \phi'(t)| = t \cdot 2 \alpha (t-1) \leq 2 \alpha (1 + (1+\alpha)^{-1/2}) (1+\alpha)^{-1/2} \leq 4 \alpha $$ and using $\alpha \in (0,\frac18)$, $$ |t^2 \phi''(t)| = t^2 \cdot 2 \alpha \leq 2 \alpha (1 + (1+\alpha)^{-1/2})^2 \leq 8 \alpha.$$ For $t \in [\delta_2,\infty)$, using $t-1 \geq (1+\alpha)^{-1/2}$, \begin{eqnarray*} |t \phi'(t)| & = & t \cdot \frac{2 \alpha}{(1+\alpha)^{1+\alpha}} \left(t-1 \right)^{-2\alpha-1} = \frac{2 \alpha}{1+\alpha} \left( (1 + \alpha) \left( t-1 \right)^2 \right)^{-\alpha} \frac{t}{t-1} \\ & \leq & \frac{2 \alpha}{1 + \alpha} \cdot \frac{t}{t-1} \leq \frac{2 \alpha}{1+\alpha} \cdot \frac{1 + (1+\alpha)^{-1/2}}{(1+\alpha)^{-1/2}} = 2 \alpha \cdot \frac{1 + (1+\alpha)^{-1/2}}{(1+\alpha)^{1/2}} \leq 4 \alpha \end{eqnarray*} and finally, using $\alpha \in (0,\frac18)$, \begin{eqnarray*} |t^2 \phi''(t)| & = & t^2 \cdot \frac{2\alpha(1+2\alpha)}{(1+\alpha)^{1+\alpha}} \left( t-1 
\right)^{-2\alpha-2} = \frac{2\alpha(1+2\alpha)}{1+\alpha} \left( (1 + \alpha) \left( t-1 \right)^2 \right)^{-\alpha} \left( \frac{t}{t-1} \right)^2 \\ & \leq & \frac{2\alpha(1+2\alpha)}{1+\alpha} \cdot \left( \frac{1 + (1+\alpha)^{-1/2}}{(1+\alpha)^{-1/2}} \right)^2 \leq 8 \alpha (1 + 2 \alpha) \leq 10 \alpha. \end{eqnarray*} \end{proof} Using Lemmas~\ref{lemma:H-bounds}, \ref{lemma:D-bounds} and \ref{lemma:phi-construction}, we now prove bounds on the derivatives of $\tilde{F}({\bf x})$. \begin{lemma} \label{lemma:F-bounds} Let $ \tilde{F}({\bf x}) = (1-\phi(D({\bf x}))) F({\bf x}) + \phi(D({\bf x})) G({\bf x}) $ where $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$, $f:2^X \rightarrow [0,M]$, $G({\bf x}) = F(\bar{{\bf x}})$, $D({\bf x})=||{\bf x}-\bar{{\bf x}}||^2$ are as above and $\phi(t)$ is provided by Lemma~\ref{lemma:phi-construction} for a given $\alpha>0$. Then, whenever $\mixdiff{F}{x_i}{x_j} \leq 0$ for all $i,j$, $$ \mixdiff{\tilde{F}}{x_i}{x_j} \leq 512 M |X| \alpha. $$ If, in addition, $\partdiff{F}{x_i} \geq 0$ for all $i$, then $$ \partdiff{\tilde{F}}{x_i} \geq -64 M |X| \alpha. $$ \end{lemma} \begin{proof} We have $ \tilde{F}({\bf x}) = F({\bf x}) - \phi(D({\bf x})) H({\bf x}) $ where $H({\bf x}) = F({\bf x}) - G({\bf x})$. By differentiating once, we get $$ \partdiff{\tilde{F}}{x_i} = \partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} - \phi'(D({\bf x})) \partdiff{D}{x_i} H({\bf x}), $$ i.e. $$ \Big| \partdiff{\tilde{F}}{x_i} - \left(\partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} \right) \Big| = \Big| \phi'(D({\bf x})) \partdiff{D}{x_i} H({\bf x}) \Big|.$$ By Lemma~\ref{lemma:H-bounds} and \ref{lemma:D-bounds}, we have $|\partdiff{D}{x_i}| = 2 |x_i - \bar{x}_i| \leq 2$ and $|H({\bf x})| \leq 8M|X| \cdot D({\bf x})$. 
Therefore, $$ \Big| \partdiff{\tilde{F}}{x_i} - \left(\partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} \right) \Big| \leq 16 M |X| D({\bf x}) \cdot \Big| \phi'(D({\bf x})) \Big|.$$ By Lemma~\ref{lemma:phi-construction}, $|D({\bf x}) \phi'(D({\bf x}))| \leq 4 \alpha$, and hence $$ \Big| \partdiff{\tilde{F}}{x_i} - \left(\partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} \right) \Big| \leq 64 M |X| \alpha.$$ Assuming that $\partdiff{F}{x_i} \geq 0$, we also have $\partdiff{G}{x_i} \geq 0$ (see Lemma~\ref{lemma:grad-symmetry}) and therefore, $ \partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} = (1-\phi(D({\bf x}))) \partdiff{F}{x_i} + \phi(D({\bf x})) \partdiff{G}{x_i} \geq 0$. Consequently, $$ \partdiff{\tilde{F}}{x_i} \geq - 64 M |X| \alpha.$$ By differentiating $\tilde{F}$ twice, we obtain \begin{eqnarray*} \mixdiff{\tilde{F}}{x_i}{x_j} & = & \mixdiff{F}{x_i}{x_j} - \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j} - \phi''({D}({\bf x})) \partdiff{{D}}{x_i} \partdiff{{D}}{x_j} H({\bf x}) \\ & & - \phi'({D}({\bf x})) \left( \partdiff{{D}}{x_j} \partdiff{H}{x_i} + \mixdiff{{D}}{x_i}{x_j} H({\bf x}) + \partdiff{{D}}{x_i} \partdiff{H}{x_j} \right). \end{eqnarray*} Again, we use Lemma~\ref{lemma:H-bounds} and \ref{lemma:D-bounds} to bound $|H({\bf x})| \leq 8M|X| D({\bf x})$, $|\partdiff{H}{x_i}| \leq 8M|X| \sqrt{D({\bf x})}$, $|\mixdiff{H}{x_i}{x_j}| \leq 8M$, $|\partdiff{D}{x_i}| \leq 2 \sqrt{D({\bf x})}$ and $|\mixdiff{D}{x_i}{x_j}| \leq 2$. We get \begin{eqnarray*} \Big| \mixdiff{\tilde{F}}{x_i}{x_j} - \mixdiff{F}{x_i}{x_j} + \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j} \Big| \leq 32 M|X| \Big| D^2({\bf x}) \phi''({D}({\bf x})) \Big| + 48 M|X| \Big| D({\bf x}) \phi'({D}({\bf x})) \Big| \end{eqnarray*} Observe that $\phi'(D({\bf x}))$ appears with $D({\bf x})$ and $\phi''(D({\bf x}))$ appears with $(D({\bf x}))^2$. By Lemma~\ref{lemma:phi-construction}, $|D({\bf x}) \phi'(D({\bf x}))| \leq 4 \alpha$ and $|D^2({\bf x}) \phi''(D({\bf x}))| \leq 10 \alpha$. 
Therefore, \begin{eqnarray*} \Big| \mixdiff{\tilde{F}}{x_i}{x_j} - \left( \mixdiff{F}{x_i}{x_j} - \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j} \right) \Big| \leq 320 M|X|\alpha + 192 M|X| \alpha = 512 M |X| \alpha. \end{eqnarray*} If $\mixdiff{F}{x_i}{x_j} \leq 0$ for all $i,j$, then also $\mixdiff{G}{x_i}{x_j} = {\bf E}_{\sigma,\tau \in {\cal G}}[\mixdiff{F}{x_{\sigma^{-1}(i)}}{x_{\tau^{-1}(j)}}] \leq 0$ (see the proof of Lemma~\ref{lemma:H-bounds}). Also, $\mixdiff{F}{x_i}{x_j} - \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j} = (1-\phi(D({\bf x}))) \mixdiff{F}{x_i}{x_j} + \phi(D({\bf x})) \mixdiff{G}{x_i}{x_j} \leq 0$. We obtain \begin{eqnarray*} \mixdiff{\tilde{F}}{x_i}{x_j} \leq 512 M |X| \alpha. \end{eqnarray*} \end{proof} Finally, we can finish the proof of Lemma~\ref{lemma:final-fix}. \begin{proof}[Proof of Lemma~\ref{lemma:final-fix}] Let $\epsilon>0$ and $f:2^X \rightarrow [0,M]$. We choose $\beta = \frac{\epsilon}{16 M|X|}$, so that $|H({\bf x})| = |F({\bf x}) - G({\bf x})| \leq \epsilon/2$ whenever $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \leq \beta$ (by Lemma~\ref{lemma:H-bounds}, which states that $|H({\bf x})| \leq 8 M |X| D({\bf x})$). Also, let $\alpha = \frac{\epsilon}{2000 M |X|^3}$. For these values of $\alpha,\beta>0$, let $\delta > 0$ and $\phi:{\boldmath R}_+ \rightarrow [0,1]$ be provided by Lemma~\ref{lemma:phi-construction}. We define $$ \tilde{F}({\bf x}) = (1-\phi(D({\bf x}))) F({\bf x}) + \phi(D({\bf x})) G({\bf x}). $$ Lemma~\ref{lemma:F-bounds} provides bounds on the first and second partial derivatives of $\tilde{F}({\bf x})$. Finally, we modify $\tilde{F}({\bf x})$ so that it satisfies the required conditions (submodularity and optionally monotonicity). For that purpose, we add a suitable multiple of the following function: $$ J({\bf x}) = |X|^2 + 3|X| \sum_{i \in X} x_i - \left(\sum_{i \in X} x_i \right)^2.$$ We have $0 \leq J({\bf x}) \leq 3|X|^2$, $\partdiff{J}{x_i} = 3|X| - 2 \sum_{j \in X} x_j \geq |X|$. 
Further, $\mixdiff{J}{x_i}{x_j} = -2$. Note also that $J(\bar{{\bf x}}) = J({\bf x})$, since $J({\bf x})$ depends only on the sum of all coordinates $\sum_{i \in X} x_i$. To make $\tilde{F}({\bf x})$ submodular and optionally monotone, we define: $$ \hat{F}({\bf x}) = \tilde{F}({\bf x}) + 256 M |X| \alpha J({\bf x}),$$ $$ \hat{G}({\bf x}) = G({\bf x}) + 256 M |X| \alpha J({\bf x}). $$ We verify the properties of $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$: \begin{enumerate} \item For any ${\bf x} \in P({\cal F})$, we have \begin{eqnarray*} \hat{G}({\bf x}) & = & G({\bf x}) + 256 M |X| \alpha J({\bf x}) \\ & = & F(\bar{{\bf x}}) + 256 M |X| \alpha J(\bar{{\bf x}}) \\ & = & \hat{F}(\bar{{\bf x}}). \end{eqnarray*} \item When $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \geq \beta$, Lemma~\ref{lemma:phi-construction} guarantees that $0 \leq \phi(D({\bf x})) < e^{-1/\alpha} \leq \alpha$ and \begin{eqnarray*} |\hat{F}({\bf x}) - F({\bf x})| & \leq & \phi(D({\bf x})) |G({\bf x}) - F({\bf x})| + 256 M |X| \alpha J({\bf x}) \\ & \leq & \alpha M + 768 M |X|^3 \alpha \\ & \leq & \epsilon \end{eqnarray*} using $0 \leq F({\bf x}), G({\bf x}) \leq M$, $|X| \leq J({\bf x}) \leq 3|X|^2$ and $\alpha = \frac{\epsilon}{2000 M |X|^3}$. When $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 < \beta$, we chose the value of $\beta$ so that $|G({\bf x}) - F({\bf x})| < \epsilon/2$ and so by the above, \begin{eqnarray*} |\hat{F}({\bf x}) - F({\bf x})| & \leq & \phi(D({\bf x})) |G({\bf x}) - F({\bf x})| + 256 M |X| \alpha J({\bf x}) \\ & \leq & \epsilon/2 + 768 M |X|^3 \alpha \\ & \leq & \epsilon. \end{eqnarray*} \item Due to Lemma~\ref{lemma:phi-construction}, $\phi(t) = 1$ for $t \in [0,\delta]$. Hence, whenever $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \leq \delta$, we have $ \tilde{F}({\bf x}) = G({\bf x}) = F(\bar{{\bf x}})$, which depends only on $\bar{{\bf x}}$. 
Also, we have $\hat{F}({\bf x}) = \hat{G}({\bf x}) = F(\bar{{\bf x}}) + 256 M |X| \alpha J({\bf x})$ and again, $J({\bf x})$ depends only on $\bar{{\bf x}}$ (in fact, only on the average of all coordinates of ${\bf x}$). Therefore, $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$ in this case depend only on $\bar{{\bf x}}$. \item The first partial derivatives of $\hat{F}$ are given by the formula $$ \partdiff{\hat{F}}{x_i} = \partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i} - \phi'(D({\bf x})) \partdiff{D}{x_i} H({\bf x}) + 256 M|X|\alpha \partdiff{J}{x_i}. $$ The functions $F, H, D, J$ are infinitely differentiable, so the only possible issue is with $\phi$. By inspecting our construction of $\phi$ (Lemma~\ref{lemma:phi-construction}), we can see that it is piecewise infinitely differentiable, and $\phi'$ is continuous at the breakpoints. Therefore, it is also absolutely continuous. This implies that $\partdiff{\hat{F}}{x_i}$ is absolutely continuous. The function $\hat{G}({\bf x}) = F(\bar{{\bf x}}) + 256 M|X|\alpha J({\bf x})$ is infinitely differentiable, so its first partial derivatives are also absolutely continuous. \item Assuming $\partdiff{F}{x_i} \geq 0$, we get $\partdiff{\tilde{F}}{x_i} \geq - 64 M |X| \alpha$ by Lemma~\ref{lemma:F-bounds}. Using $\partdiff{J}{x_i} \geq |X|$, we get $\partdiff{\hat{F}}{x_i} = \partdiff{\tilde{F}}{x_i} + 256 M|X| \alpha \partdiff{J}{x_i} \geq 0$. The same holds for $\partdiff{\hat{G}}{x_i}$ since $\partdiff{G}{x_i} \geq 0$. \item Assuming $\mixdiff{F}{x_i}{x_j} \leq 0$, we get $\mixdiff{\tilde{F}}{x_i}{x_j} \leq 512 M |X| \alpha$ by Lemma~\ref{lemma:F-bounds}. Using $\mixdiff{J}{x_i}{x_j} = -2$, we get $\mixdiff{\hat{F}}{x_i}{x_j} = \mixdiff{\tilde{F}}{x_i}{x_j} + 256 M |X| \alpha \mixdiff{J}{x_i}{x_j} \leq 0$. The same holds for $\mixdiff{\hat{G}}{x_i}{x_j}$ since $\mixdiff{G}{x_i}{x_j} \leq 0$. \end{enumerate} \end{proof} This concludes the proofs of our main hardness results. 
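As a numerical aside (not part of the proofs), the explicit cutoff constructed in Lemma~\ref{lemma:phi-construction} is easy to sanity-check directly. The sketch below uses the arbitrary test value $\alpha = 0.1$ and the unscaled choice $\delta = 1$, $\beta = e^{1/(2\alpha^2)}+1$, and verifies properties 2--4 on a grid.

```python
import math

# phi from Lemma phi-construction with delta = 1, delta_2 = 1 + (1+alpha)^{-1/2};
# returns (phi(t), phi'(t), phi''(t)).  alpha = 0.1 is an arbitrary test value.
def phi_parts(t, a):
    d2 = 1 + (1 + a) ** -0.5
    if t <= 1:
        return 1.0, 0.0, 0.0
    if t <= d2:
        return 1 - a * (t - 1) ** 2, -2 * a * (t - 1), -2 * a
    c = (1 + a) ** (-1 - a)
    return (c * (t - 1) ** (-2 * a),
            -2 * a * c * (t - 1) ** (-2 * a - 1),
            2 * a * (1 + 2 * a) * c * (t - 1) ** (-2 * a - 2))

a = 0.1
beta = math.exp(1 / (2 * a * a)) + 1
assert phi_parts(beta, a)[0] <= math.exp(-1 / a)       # property 2 at t = beta
for k in range(1, 5001):                               # grid over (0, 50]
    t = 0.01 * k
    p, p1, p2 = phi_parts(t, a)
    assert 0.0 <= p <= 1.0
    assert abs(t * p1) <= 4 * a + 1e-9                 # property 3
    assert abs(t * t * p2) <= 10 * a + 1e-9            # property 4
```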
\section{Algorithms using the multilinear relaxation} \label{section:algorithms} Here we turn to our algorithmic results. First, we discuss the problem of maximizing a submodular (but not necessarily monotone) function subject to a matroid independence constraint. \subsection{Matroid independence constraint} \label{section:submod-independent} Consider the problem $\max \{ f(S): S \in {\cal I} \}$, where ${\cal I}$ is the collection of independent sets in a matroid ${\cal M}$. We design an algorithm based on the multilinear relaxation of the problem, $\max \{ F({\bf x}): {\bf x} \in P({\cal M}) \}$. Our algorithm can be seen as ``continuous local search'' in the matroid polytope $P({\cal M})$, constrained in addition by the box $[0,t]^X$ for some fixed $t \in [0,1]$. The intuition is that this forces our local search to use fractional solutions that are fuzzier than integral solutions and therefore less likely to get stuck in a local optimum. On the other hand, restricting the search space too much would not give us much freedom in searching for a good fractional point. This leads to a tradeoff and an optimal choice of $t \in [0,1]$, which we leave for later. The matroid polytope is defined as $ P({\cal M}) = \mbox{conv} \{ {\bf 1}_I: I \in {\cal I} \} $, or equivalently \cite{E70} as $ P({\cal M}) = \{ {\bf x} \geq 0: \forall S; \sum_{i \in S} x_i \leq r_{\cal M}(S) \}$, where $r_{\cal M}(S)$ is the rank function of ${\cal M}$. We define $$ P_t({\cal M}) = P({\cal M}) \cap [0,t]^X = \{ {\bf x} \in P({\cal M}): \forall i; x_i \leq t \}.$$ We consider the problem $\max \{F({\bf x}): {\bf x} \in P_t({\cal M})\}$. We remind the reader that $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$ denotes the multilinear extension. Our algorithm works as follows. \ \noindent{\bf Fractional local search in $P_t({\cal M})$} \\ (given $t = \frac{r}{q}$, $r \leq q$ integer) \begin{enumerate} \item Start with ${\bf x} := (0,0,\ldots,0)$. Fix $\delta = 1/q$. 
\item If there are $i,j \in X$ and a direction ${\bf v} \in \{{\bf e}_j, -{\bf e}_i, {\bf e}_j - {\bf e}_i \}$ such that ${\bf x} + \delta {\bf v} \in P_t({\cal M})$ and $F({\bf x} + \delta {\bf v}) > F({\bf x})$, set ${\bf x} := {\bf x} + \delta {\bf v}$ and repeat. \item If there is no such direction ${\bf v}$, apply pipage rounding to ${\bf x}$ and return the resulting solution. \end{enumerate} \noindent{\em Notes.} The procedure as presented here would not run in polynomial time. A modification which runs in polynomial time is that we move to a new solution only if $F({\bf x}+\delta {\bf v}) > F({\bf x}) + \frac{\delta}{\mathrm{poly}(n)} OPT$ (where we first get a rough estimate of $OPT$ using previous methods). For simplicity, we analyze the variant above and finally discuss why we can modify it without losing too much in the approximation factor. We also defer the question of how to estimate the value of $F({\bf x})$ to the end of this section. For $t=1$, we have $\delta=1$ and the procedure reduces to discrete local search. However, it is known that discrete local search alone does not give any approximation guarantee. With additional modifications, an algorithm based on discrete local search achieves a $(\frac14-o(1))$-approximation \cite{LMNS09}. Our version of fractional local search avoids this issue and leads directly to a good fractional solution. Throughout the algorithm, we maintain ${\bf x}$ as a linear combination of $q$ independent sets such that no element appears in more than $r$ of them. A local step corresponds to an add/remove/switch operation preserving this condition. Finally, we use pipage rounding to convert a fractional solution into an integral one. As we show in Lemma~\ref{lemma:pipage} (in the Appendix), a modification of the technique from \cite{CCPV07} can be used to find an integral solution without any loss in the objective function. 
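To make the procedure concrete, here is a toy end-to-end sketch of the fractional local search (omitting the final pipage rounding step). The instance --- the cut function of a 4-cycle, a rank-2 uniform matroid, and $t = 1/2$ --- is our own illustrative choice, and the multilinear extension is evaluated exactly by brute-force enumeration, which is feasible only for a tiny ground set.

```python
from itertools import combinations, product

# Toy fractional local search over P_t(M): f = cut function of a 4-cycle
# (submodular, non-monotone), M = uniform matroid of rank 2, t = 1/2.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
N, RANK, T = 4, 2, 0.5
DELTA = 0.5                                # delta = 1/q with t = r/q = 1/2

def f(S):
    return sum(1 for u, v in EDGES if (u in S) != (v in S))

def F(x):                                  # exact multilinear extension (2^N terms)
    total = 0.0
    for bits in product([0, 1], repeat=N):
        p = 1.0
        for i, b in enumerate(bits):
            p *= x[i] if b else 1 - x[i]
        total += p * f({i for i, b in enumerate(bits) if b})
    return total

def feasible(x):                           # x in [0, t]^X and sum(x) <= rank
    return all(-1e-9 <= xi <= T + 1e-9 for xi in x) and sum(x) <= RANK + 1e-9

def local_search():
    x = [0.0] * N
    dirs = [tuple(DELTA * (int(k == j) - int(k == i)) for k in range(N))
            for i in range(N) for j in range(N) if i != j]        # e_j - e_i
    dirs += [tuple(DELTA * int(k == j) for k in range(N)) for j in range(N)]
    dirs += [tuple(-DELTA * int(k == i) for k in range(N)) for i in range(N)]
    improved = True
    while improved:
        improved = False
        for v in dirs:
            y = [xi + vi for xi, vi in zip(x, v)]
            if feasible(y) and F(y) > F(x) + 1e-12:
                x, improved = y, True
                break
    return x

x = local_search()
opt = max(f(set(S)) for r in range(RANK + 1)
          for S in combinations(range(N), r))
# Theorem guarantee at the fractional point: F(x) >= (t - t^2/2) * OPT.
assert F(x) >= (T - T * T / 2) * opt - 1e-9
```

In the real algorithm, of course, $F$ can only be estimated by sampling, and the improvement threshold $\frac{\delta}{\mathrm{poly}(n)}OPT$ mentioned in the notes above is what makes the search polynomial.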
\begin{theorem} \label{thm:submod-matroid-approx} The fractional local search algorithm for any fixed $t \in [0,\frac12 (3-\sqrt{5})]$ returns a solution of value at least $(t - \frac12 t^2) OPT$, where $OPT = \max \{f(S): S \in {\cal I}\}$. \end{theorem} We remark that for $t=\frac12(3-\sqrt{5})$, we would obtain a $\frac14(-1+\sqrt{5}) \simeq 0.309$-approximation, improving the factor of $\frac14$ \cite{LMNS09}. This is not a rational value, but we can pick a rational $t$ arbitrarily close to $\frac12 (3-\sqrt{5})$. For values $t > \frac12 (3-\sqrt{5})$, our analysis does not yield a better approximation factor. First, we discuss properties of the point found by the fractional local search algorithm. \begin{lemma} \label{lemma:local-opt} The outcome ${\bf x}$ of the fractional local search algorithm is a ``fractional local optimum'' in the following sense. (All the partial derivatives are evaluated at ${\bf x}$.) \begin{itemize} \item For any $i$ such that ${\bf x} - \delta {\bf e}_i \in P_t({\cal M})$, $\partdiff{F}{x_i} \geq 0.$ \item For any $j$ such that ${\bf x} + \delta {\bf e}_j \in P_t({\cal M})$, $\partdiff{F}{x_j} \leq 0.$ \item For any $i,j$ such that ${\bf x} + \delta ({\bf e}_j - {\bf e}_i) \in P_t({\cal M})$, $\partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0.$ \end{itemize} \end{lemma} \begin{proof} We use the property (see \cite{CCPV07}) that along any direction ${\bf v} = \pm {\bf e}_i$ or ${\bf v} = {\bf e}_i - {\bf e}_j$, the function $F({\bf x} + \lambda {\bf v})$ is a convex function of $\lambda$. Also, observe that if it is possible to move from ${\bf x}$ in the direction of ${\bf v}$ by any nonzero amount, then it is possible to move by $\delta {\bf v}$, because all coordinates of ${\bf x}$ are integer multiples of $\delta$ and all the constraints also have coefficients which are integer multiples of $\delta$. 
Therefore, if $\frac{dF}{d\lambda} > 0$ and it is possible to move in the direction of ${\bf v}$, we would get $F({\bf x} + \delta {\bf v}) > F({\bf x})$ and the fractional local search would continue. If ${\bf v} = -{\bf e}_i$ and it is possible to move along $-{\bf e}_i$, we get $\frac{dF}{d\lambda} = -\partdiff{F}{x_i} \leq 0$. Similarly, if ${\bf v} = {\bf e}_j$ and it is possible to move along ${\bf e}_j$, we get $\frac{dF}{d\lambda} = \partdiff{F}{x_j} \leq 0$. Finally, if ${\bf v} = {\bf e}_j - {\bf e}_i$ and it is possible to move along ${\bf e}_j - {\bf e}_i$, we get $\frac{dF}{d\lambda} = \partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0$. \end{proof} We refer to the following exchange property for matroids (which follows easily from \cite{Schrijver}, Corollary 39.12a; see also \cite{LMNS09}). \begin{lemma} \label{lemma:exchange} If $I, C \in {\cal I}$, then for any $j \in C \setminus I$, there is $\pi(j) \subseteq I \setminus C$, $|\pi(j)| \leq 1$, such that $I \setminus \pi(j) + j \in {\cal I}$. Moreover, the sets $\pi(j)$ are disjoint (each $i \in I \setminus C$ appears at most once as $\pi(j) = \{i\}$). \end{lemma} Using this, we prove a lemma about fractional local optima which generalizes Lemma 2.2 in \cite{LMNS09}. \begin{lemma} \label{lemma:fractional-search} Let ${\bf x}$ be the outcome of fractional local search over $P_t({\cal M})$. Let $C \in {\cal I}$ be any independent set. Let $C' = \{i \in C: x_i < t\}$. Then $$ 2 F({\bf x}) \geq F({\bf x} \vee {\bf 1}_{C'}) + F({\bf x} \wedge {\bf 1}_C).$$ \end{lemma} Note that for $t = 1$, the lemma reduces to $2 F({\bf x}) \geq F({\bf x} \vee {\bf 1}_C) + F({\bf x} \wedge {\bf 1}_C)$ (similar to Lemma 2.2 in \cite{LMNS09}). For $t < 1$, however, it is necessary to replace $C$ by $C'$ in the first expression, which becomes apparent in the proof. The reason is that we do not have any information on $\partdiff{F}{x_i}$ for coordinates where $x_i = t$. 
\begin{proof} Let $C \in {\cal I}$ and assume ${\bf x} \in P_t({\cal M})$ is a local optimum. Since ${\bf x} \in P({\cal M})$, we can decompose it into a convex linear combination of vertices of $P({\cal M})$, ${\bf x} = \sum_{I \in {\cal I}} x_I {\bf 1}_I$ where $\sum x_I = 1$. By the smooth submodularity of $F({\bf x})$ (see \cite{Vondrak08}), \begin{eqnarray*} F({\bf x} \vee {\bf 1}_{C'}) - F({\bf x}) \leq \sum_{j \in C'} (1 - x_j) \partdiff{F}{x_j} = \sum_{j \in C'} \sum_{I: j \notin I} x_I \partdiff{F}{x_j} = \sum_I x_I \sum_{j \in C' \setminus I} \partdiff{F}{x_j}. \end{eqnarray*} All partial derivatives here are evaluated at ${\bf x}$. On the other hand, also by submodularity, \begin{eqnarray*} F({\bf x}) - F({\bf x} \wedge {\bf 1}_C) \geq \sum_{i \notin C} x_i \partdiff{F}{x_i} = \sum_{i \notin C} \sum_{I: i \in I} x_I \partdiff{F}{x_i} = \sum_I x_I \sum_{i \in I \setminus C} \partdiff{F}{x_i}. \end{eqnarray*} To prove the lemma, it remains to prove the following. \ \noindent {\bf Claim.} Whenever $x_I > 0$, $\sum_{j \in C' \setminus I} \partdiff{F}{x_j} \leq \sum_{i \in I \setminus C} \partdiff{F}{x_i}$. \ \noindent {\em Proof:} For any $I \in {\cal I}$, we can apply Lemma~\ref{lemma:exchange} to get a mapping $\pi$ such that $I \setminus \pi(j) + j \in {\cal I}$ for any $j \in C \setminus I$. Now, consider $j \in C' \setminus I$, i.e. $j \in C \setminus I$ and $x_j < t$. If $\pi(j) = \emptyset$, it is possible to move from ${\bf x}$ in the direction of ${\bf e}_j$, because $I + j \in {\cal I}$ and hence we can replace $I$ by $I+j$ (or at least we can do this for some nonzero fraction of its coefficient) in the linear combination. Because $x_j < t$, we can move by a nonzero amount inside $P_t({\cal M})$. By Lemma~\ref{lemma:local-opt}, $\partdiff{F}{x_j} \leq 0$. 
Similarly, if $\pi(j) = \{i\}$, it is possible to move in the direction of ${\bf e}_j - {\bf e}_i$, because $I$ can be replaced by $I \setminus \pi(j) + j$ for some nonzero fraction of its coefficient. By Lemma~\ref{lemma:local-opt}, in this case $\partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0$. Finally, for any $i \in I$ we have $x_i > 0$ and therefore we can decrease $x_i$ while staying inside $P_t({\cal M})$. By Lemma~\ref{lemma:local-opt}, we have $\partdiff{F}{x_i} \geq 0$ for all $i \in I$. This means $$ \sum_{j \in C' \setminus I} \partdiff{F}{x_j} \leq \sum_{j \in C' \setminus I: \pi(j) = \emptyset} \partdiff{F}{x_j} + \sum_{j \in C' \setminus I: \pi(j) = \{i\}} \partdiff{F}{x_i} \leq \sum_{i \in I \setminus C} \partdiff{F}{x_i} $$ using the inequalities we derived above, and the fact that each $i \in I \setminus C$ appears at most once in $\pi(j)$. This proves the Claim, and hence the Lemma. \end{proof} Now we are ready to prove Theorem~\ref{thm:submod-matroid-approx}. \begin{proof} Let ${\bf x}$ be the outcome of the fractional local search over $P_t({\cal M})$. Define $A = \{ i: x_i = t \}$. Let $C$ be the optimum solution and $C' = C \setminus A = \{ i \in C: x_i < t\}$. By Lemma~\ref{lemma:fractional-search}, $$ 2 F({\bf x}) \geq F({\bf x} \vee {\bf 1}_{C'}) + F({\bf x} \wedge {\bf 1}_C).$$ First, let's analyze $F({\bf x} \wedge {\bf 1}_C)$. We apply Lemma~\ref{lemma:rnd-threshold} (in the Appendix), which states that $F({\bf x} \wedge {\bf 1}_C) \geq {\bf E}[f(T({\bf x} \wedge {\bf 1}_C))]$. Here, $T({\bf x} \wedge {\bf 1}_C)$ is a random threshold set corresponding to the vector ${\bf x} \wedge {\bf 1}_C$, i.e. $$ T({\bf x} \wedge {\bf 1}_C) = \{ i: ({\bf x} \wedge {\bf 1}_C)_i > \lambda \} = \{ i \in C: x_i > \lambda \} = T({\bf x}) \cap C $$ where $\lambda \in [0,1]$ is uniformly random. 
Equivalently, $$ F({\bf x} \wedge {\bf 1}_C) \geq {\bf E}[f(T({\bf x}) \cap C)].$$ Due to the definition of a threshold set, with probability $t$ we have $\lambda < t$ and $T({\bf x})$ contains $A = \{ i: x_i = t \} = C \setminus C'$. Then, $f(T({\bf x}) \cap C) + f(C') \geq f(C)$ by submodularity. We conclude that \begin{equation} \label{eq:1} F({\bf x} \wedge {\bf 1}_C) \geq t (f(C) - f(C')). \end{equation} Next, let's analyze $F({\bf x} \vee {\bf 1}_{C'})$. We consider the ground set partitioned into $X = C \cup \bar{C}$, and we apply Lemma~\ref{lemma:submod-split} (in the Appendix). (Here, $\bar{C}$ denotes $X \setminus C$, the complement of $C$ inside $X$.) We get $$ F({\bf x} \vee {\bf 1}_{C'}) \geq {\bf E}[f((T_1({\bf x} \vee {\bf 1}_{C'}) \cap C) \cup (T_2({\bf x} \vee {\bf 1}_{C'}) \cap \bar{C}))].$$ The random threshold sets look as follows: $T_1({\bf x} \vee {\bf 1}_{C'}) \cap C = (T_1({\bf x}) \cup C') \cap C$ is equal to $C$ with probability $t$, and equal to $C'$ otherwise. $T_2({\bf x} \vee {\bf 1}_{C'}) \cap \bar{C} = T_2({\bf x}) \cap \bar{C}$ is empty with probability $1-t$. (We ignore the contribution when $T_2({\bf x}) \cap \bar{C} \neq \emptyset$.) Because $T_1$ and $T_2$ are independently sampled, we get $$ F({\bf x} \vee {\bf 1}_{C'}) \geq t(1-t) f(C) + (1-t)^2 f(C').$$ Provided that $t \in [0,\frac12 (3 - \sqrt{5})]$, we have $t \leq (1-t)^2$. Then, we can write \begin{equation} \label{eq:1'} F({\bf x} \vee {\bf 1}_{C'}) \geq t(1-t) f(C) + t f(C'). \end{equation} Combining equations (\ref{eq:1}) and (\ref{eq:1'}), we get \begin{eqnarray*} & & F({\bf x} \vee {\bf 1}_{C'}) + F({\bf x} \wedge {\bf 1}_C) \geq t (f(C) - f(C')) + t(1-t) f(C) + t \, f(C') = (2t - t^2) f(C). 
\end{eqnarray*} Therefore, $$ F({\bf x}) \geq \frac12 (F({\bf x} \vee {\bf 1}_{C'}) + F({\bf x} \wedge {\bf 1}_C)) \geq (t - \frac12 t^2) f(C).$$ Finally, we apply the pipage rounding technique which does not lose anything in terms of objective value (see Lemma~\ref{lemma:pipage}). \end{proof} \paragraph{Technical remarks} In each step of the algorithm, we need to estimate values of $F({\bf x})$ for given ${\bf x} \in P_t({\cal M})$. We accomplish this by using the expression $F({\bf x}) = {\bf E}[f(R({\bf x}))]$ where $R({\bf x})$ is a random set associated with ${\bf x}$. By standard bounds, if the values of $f(S)$ are in a range $[0,M]$, we can achieve accuracy $M / poly(n)$ using a polynomial number of samples. We use the fact that $OPT \geq \frac{1}{n} M$ (see Lemma~\ref{lemma:solution-value} in the Appendix) and therefore we can achieve $OPT / poly(n)$ additive error in polynomial time. We also relax the local step condition: we move to the next solution only if $F({\bf x} + \delta {\bf v}) > F({\bf x}) + \frac{\delta}{poly(n)} OPT$ for a suitable polynomial in $n$. This way, we can only make a polynomial number of steps. When we terminate, the local optimality conditions (Lemma~\ref{lemma:local-opt}) are satisfied within an additive error of $OPT / poly(n)$, which yields a polynomially small error in the approximation bound. \subsection{Matroid base constraint} \label{section:submod-bases} Let us move on to the problem $\max \{f(S): S \in {\cal B}\}$ where ${\cal B}$ are the bases of a matroid. For a fixed $t \in [0,1]$, let us consider an algorithm which can be seen as local search inside the base polytope $B({\cal M})$, further constrained by the box $[0,t]^X$. 
The matroid base polytope is defined as $ B({\cal M}) = \mbox{conv} \{ {\bf 1}_B: B \in {\cal B} \} $ or equivalently \cite{E70} as $ B({\cal M}) = \{ {\bf x} \geq 0: \forall S \subseteq X; \sum_{i \in S} x_i \leq r_{\cal M}(S), \sum_{i \in X} x_i = r_{\cal M}(X) \}, $ where $r_{\cal M}$ is the matroid rank function of ${\cal M}$. Finally, we define $$ B_t({\cal M}) = B({\cal M}) \cap [0,t]^X = \{ {\bf x} \in B({\cal M}): \forall i \in X; x_i \leq t \}.$$ Observe that $B_t({\cal M})$ is nonempty if and only if there is a convex linear combination ${\bf x} = \sum_{B \in {\cal B}} \xi_B {\bf 1}_B$ such that $x_i \in [0,t]$ for all $i$. This is equivalent to saying that there is a linear combination (a fractional base packing) ${\bf x}' = \sum_{B \in {\cal B}} \xi'_B {\bf 1}_B$ such that $x'_i \in [0,1]$ and $\sum \xi'_B \geq \frac{1}{t}$, in other words the fractional base packing number is $\nu \geq \frac{1}{t}$. Since the optimal fractional packing of bases in a matroid can be found efficiently (see Corollary 42.7a in \cite{Schrijver}, Volume B), we can efficiently find the minimum $t \in [\frac12,1]$ such that $B_t({\cal M}) \neq \emptyset$. Then, our algorithm is the following. \ \noindent{\bf Fractional local search in $B_t({\cal M})$} \\ (given $t = \frac{r}{q}$, $r \leq q$ integer) \begin{enumerate} \item Let $\delta = \frac{1}{q}$. Assume that ${\bf x} \in B_t({\cal M})$; adjust ${\bf x}$ (using pipage rounding) so that each $x_i$ is an integer multiple of $\delta$. In the following, this property will be maintained. \item If there is a direction ${\bf v} = {\bf e}_j - {\bf e}_i$ such that ${\bf x} + \delta {\bf v} \in B_t({\cal M})$ and $F({\bf x} + \delta {\bf v}) > F({\bf x})$, then set ${\bf x} := {\bf x} + \delta {\bf v}$ and repeat. \item If there is no such direction ${\bf v}$, apply pipage rounding to ${\bf x}$ and return the resulting solution. 
\end{enumerate} \noindent{\em Notes.} We remark that the starting point can be found as a convex linear combination of $q$ bases, ${\bf x} = \frac{1}{q} \sum_{i=1}^{q} {\bf 1}_{B_i}$, such that no element appears in more than $r$ of them, using matroid union techniques (see Theorem 42.9 in \cite{Schrijver}). In the algorithm, we maintain this representation. The local search step corresponds to switching a pair of elements in one base, under the condition that no element is used in more than $r$ bases at the same time. For now, we ignore the issues of estimating $F({\bf x})$ and stopping the local search within polynomial time. We discuss this at the end of this section. Finally, we use pipage rounding to convert the fractional solution ${\bf x}$ into an integral one of value at least $F({\bf x})$ (Lemma~\ref{lemma:base-pipage} in the Appendix). Note that it is not necessarily true that any of the bases in a convex linear combination ${\bf x} = \sum \xi_B {\bf 1}_{B}$ achieves the value $F({\bf x})$. \begin{theorem} \label{thm:submod-bases-approx} If there is a fractional packing of $\nu \in [1,2]$ bases in ${\cal M}$, then the fractional local search algorithm with $t = \frac{1}{\nu}$ returns a solution of value at least $\frac12 (1-t) \ OPT.$ \end{theorem} For example, assume that ${\cal M}$ contains two disjoint bases $B_1, B_2$ (which is the case considered in \cite{LMNS09}). Then, the algorithm can be used with $t = \frac{1}{2}$ and we obtain a $(\frac{1}{4}-o(1))$-approximation, improving the $(\frac{1}{6}-o(1))$-approximation from \cite{LMNS09}. If there is a fractional packing of more than 2 bases, our analysis still gives only a $(\frac{1}{4}-o(1))$-approximation. If the dual matroid ${\cal M}^*$ admits a better fractional packing of bases, we can consider the problem $\max \{f(\bar{S}): S \in {\cal B}^* \}$ which is equivalent. 
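The passage to the dual matroid is mechanical: the bases of ${\cal M}^*$ are exactly the complements of bases of ${\cal M}$, and $S \mapsto f(\bar S)$ is again (nonmonotone) submodular. The following minimal sketch checks the equivalence on hypothetical toy data of our choosing (a rank-two uniform matroid with a modular objective, not an instance from the paper):

```python
from itertools import combinations

# Hypothetical toy data: uniform matroid of rank 2 on X = {0,1,2} and the
# modular (hence submodular) objective f(S) = |S ∩ {0}|.
X = frozenset({0, 1, 2})
bases = [frozenset(B) for B in combinations(sorted(X), 2)]

def f(S):
    return len(S & {0})

# Dual matroid: its bases are the complements of the bases of M, and the
# reduced objective is f_bar(S) = f(X \ S).
dual_bases = [X - B for B in bases]

def f_bar(S):
    return f(X - S)

opt = max(f(B) for B in bases)
dual_opt = max(f_bar(S) for S in dual_bases)
assert opt == dual_opt   # max over B of f equals max over B* of f_bar
```

In our setting one would run the algorithm on whichever of ${\cal M}$, ${\cal M}^*$ has the larger fractional base packing number, since the guarantee $\frac12(1-t)$ improves as $t = 1/\nu$ decreases.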
For a uniform matroid, ${\cal B} = \{B: |B|=k\}$, the fractional base packing number is either at least $2$ or the same holds for the dual matroid, ${\cal B}^* = \{B: |B|=n-k\}$ (as noted in \cite{LMNS09}). Therefore, we get a $(\frac{1}{4}-o(1))$-approximation for any uniform matroid. The value $t=1$ can be used for any matroid, but it does not yield any approximation guarantee. \paragraph{Analysis of the algorithm} We turn to the properties of fractional local optima. We will prove that the point ${\bf x}$ found by the fractional local search algorithm satisfies the following conditions that allow us to compare $F({\bf x})$ to the actual optimum. \begin{lemma} \label{lemma:local-opt2} The outcome of the fractional local search algorithm ${\bf x}$ is a ``fractional local optimum'' in the following sense. \begin{itemize} \item For any $i,j$ such that ${\bf x} + \delta ({\bf e}_j - {\bf e}_i) \in B_t({\cal M})$, $$\partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0.$$ (The partial derivatives are evaluated at ${\bf x}$.) \end{itemize} \end{lemma} \begin{proof} Similarly to Lemma~\ref{lemma:local-opt}, observe that the coordinates of ${\bf x}$ are always integer multiples of $\delta$, therefore if it is possible to move from ${\bf x}$ in the direction of ${\bf v} = {\bf e}_j - {\bf e}_i$ by any nonzero amount, then it is possible to move by $\delta {\bf v}$. We use the property that for any direction ${\bf v} = {\bf e}_j - {\bf e}_i$, the function $F({\bf x} + \lambda {\bf v})$ is a convex function of $\lambda$ \cite{CCPV07}. Therefore, if $\frac{dF}{d\lambda} > 0$ and it is possible to move in the direction of ${\bf v}$, we would get $F({\bf x} + \delta {\bf v}) > F({\bf x})$ and the fractional local search would continue. For ${\bf v} = {\bf e}_j - {\bf e}_i$, we get $$ \frac{dF}{d\lambda} = \partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0.$$ \end{proof} We refer to the following exchange property for matroid bases (see \cite{Schrijver}, Corollary 39.21a). 
\begin{lemma} \label{lemma:exchange2} For any $B_1, B_2 \in {\cal B}$, there is a bijection $\pi:B_1 \setminus B_2 \rightarrow B_2 \setminus B_1$ such that $\forall i \in B_1 \setminus B_2$; $B_1-i+\pi(i) \in {\cal B}$. \end{lemma} Using this, we prove a lemma about fractional local optima analogous to Lemma 2.2 in \cite{LMNS09}. \begin{lemma} \label{lemma:fractional-search2} Let ${\bf x}$ be the outcome of fractional local search over $B_t({\cal M})$. Let $C \in {\cal B}$ be any base. Then there is ${\bf c} \in [0,1]^X$ satisfying \begin{itemize} \item $c_i = t$, if $i \in C$ and $x_i = t$ \item $c_i = 1$, if $i \in C$ and $x_i < t$ \item $0 \leq c_i \leq x_i$, if $i \notin C$ \end{itemize} such that $$ 2 F({\bf x}) \geq F({\bf x} \vee {\bf c}) + F({\bf x} \wedge {\bf c}).$$ \end{lemma} Note that for $t = 1$, we can set ${\bf c} = {\bf 1}_C$. However, in general we need this more complicated formulation. Intuitively, ${\bf c}$ is obtained from ${\bf x}$ by raising the variables $x_i, i \in C$ and decreasing $x_i$ for $i \notin C$. However, we can only raise the variables $x_i, i \in C$, where $x_i$ is below the threshold $t$, otherwise we do not have any information about $\partdiff{F}{x_i}$. Also, we do not necessarily decrease all the variables outside of $C$ to zero. \begin{proof} Let $C \in {\cal B}$ and assume ${\bf x} \in B_t({\cal M})$ is a fractional local optimum. We can decompose ${\bf x}$ into a convex linear combination of vertices of $B({\cal M})$, ${\bf x} = \sum \xi_B {\bf 1}_B$. By Lemma~\ref{lemma:exchange2}, for each base $B$ there is a bijection $\pi_B:B \setminus C \rightarrow C \setminus B$ such that $\forall i \in B \setminus C$; $B - i + \pi_B(i) \in {\cal B}$. We define $C' = \{i \in C: x_i < t \}$. The reason we consider $C'$ is that if $x_i = t$, there is no room for an exchange step increasing $x_i$, and therefore Lemma~\ref{lemma:local-opt2} does not give any information about $\partdiff{F}{x_i}$. 
We construct the vector ${\bf c}$ by starting from ${\bf x}$, and for each $B$ swapping the elements in $B \setminus C$ for their image under $\pi_B$, provided it is in $C'$, until we raise the coordinates on $C'$ to $c_i=1$. Formally, we set $c_i = 1$ for $i \in C'$, $c_i = t$ for $i \in C \setminus C'$, and for each $i \notin C$, we define $$ c_i = x_i - \sum_{B: i \in B, \pi_B(i) \in C'} \xi_B.$$ In the following, all partial derivatives are evaluated at ${\bf x}$. By the smooth submodularity of $F({\bf x})$ (see \cite{CCPV09}), \begin{eqnarray} \label{eq:2} F({\bf x} \vee {\bf c}) - F({\bf x}) & \leq & \sum_{j: c_j > x_j} (c_j - x_j) \partdiff{F}{x_j} = \sum_{j \in C'} (1 - x_j) \partdiff{F}{x_j} = \sum_B \sum_{j \in C' \setminus B} \xi_B \partdiff{F}{x_j} \end{eqnarray} because $\sum_{B: j \notin B} \xi_B = 1 - x_j$ for any $j$. On the other hand, also by smooth submodularity, \begin{eqnarray*} F({\bf x}) - F({\bf x} \wedge {\bf c}) & \geq & \sum_{i: c_i < x_i} (x_i - c_i) \partdiff{F}{x_i} = \sum_{i \notin C} (x_i - c_i) \partdiff{F}{x_i} = \sum_{i \notin C} \sum_{B: i \in B, \pi_B(i) \in C'} \xi_B \partdiff{F}{x_i} \end{eqnarray*} using our definition of $c_i$. In the last sum, for any nonzero contribution, we have $\xi_B > 0$, $i \in B$ and $j = \pi_B(i) \in C'$, i.e. $x_j < t$. Therefore it is possible to move in the direction ${\bf e}_j - {\bf e}_i$ (we can switch from $B$ to $B-i+j$). By Lemma~\ref{lemma:local-opt2}, $$ \partdiff{F}{x_j} - \partdiff{F}{x_i} \leq 0.$$ Therefore, we get \begin{eqnarray} F({\bf x}) - F({\bf x} \wedge {\bf c}) & \geq & \sum_{i \notin C} \sum_{B: i \in B, j=\pi_B(i) \in C'} \xi_B \partdiff{F}{x_j} \label{eq:3} = \sum_B \sum_{i \in B \setminus C: j=\pi_B(i) \in C'} \xi_B \partdiff{F}{x_j}. \end{eqnarray} By the bijective property of $\pi_B$, this is equal to $\sum_B \sum_{j \in C' \setminus B} \xi_B \partdiff{F}{x_j}$. 
Putting (\ref{eq:2}) and (\ref{eq:3}) together, we get $F({\bf x} \vee {\bf c}) - F({\bf x}) \leq F({\bf x}) - F({\bf x} \wedge {\bf c})$. \end{proof} Now we are ready to prove Theorem~\ref{thm:submod-bases-approx}. \begin{proof} Assuming that $B_t({\cal M}) \neq \emptyset$, we can find a starting point ${\bf x}_0 \in B_t({\cal M})$. From this point, we reach a fractional local optimum ${\bf x} \in B_t({\cal M})$ (see Lemma~\ref{lemma:local-opt2}). We want to compare $F({\bf x})$ to the actual optimum; assume that $OPT = f(C)$. As before, we define $C' = \{i \in C: x_i < t\}$. By Lemma~\ref{lemma:fractional-search2}, we know that the fractional local optimum satisfies: \begin{equation} \label{eq:4} 2 F({\bf x}) \geq F({\bf x} \vee {\bf c}) + F({\bf x} \wedge {\bf c}) \end{equation} for some vector ${\bf c}$ such that $c_i = t$ for all $i \in C \setminus C'$, $c_i = 1$ for $i \in C'$ and $0 \leq c_i \leq x_i$ for $i \notin C$. First, let's analyze $F({\bf x} \vee {\bf c})$. We have \begin{itemize} \item $({\bf x} \vee {\bf c})_i = 1$ for all $i \in C'$. \item $({\bf x} \vee {\bf c})_i = t$ for all $i \in C \setminus C'$. \item $({\bf x} \vee {\bf c})_i \leq t$ for all $i \notin C$. \end{itemize} We apply Lemma~\ref{lemma:submod-split} to the partition $X = C \cup \bar{C}$. We get $$ F({\bf x} \vee {\bf c}) \geq {\bf E}[f((T_1({\bf x} \vee {\bf c}) \cap C) \cup (T_2({\bf x} \vee {\bf c}) \cap \bar{C}))] $$ where $T_1({\bf x})$ and $T_2({\bf x})$ are independent threshold sets. Based on the information above, $T_1({\bf x} \vee {\bf c}) \cap C = C$ with probability $t$ and $T_1({\bf x} \vee {\bf c}) \cap C = C'$ otherwise. On the other hand, $T_2({\bf x} \vee {\bf c}) \cap \bar{C} = \emptyset$ with probability at least $1-t$. These two events are independent. 
We conclude that on the right-hand side, we get $f(C)$ with probability at least $t(1-t)$, or $f(C')$ with probability at least $(1-t)^2$: \begin{equation} \label{eq:5} F({\bf x} \vee {\bf c}) \geq t(1-t) f(C) + (1-t)^2 f(C'). \end{equation} Turning to $F({\bf x} \wedge {\bf c})$, we see that \begin{itemize} \item $({\bf x} \wedge {\bf c})_i = x_i$ for all $i \in C'$. \item $({\bf x} \wedge {\bf c})_i = t$ for all $i \in C \setminus C'$. \item $({\bf x} \wedge {\bf c})_i \leq t$ for all $i \notin C$. \end{itemize} We apply Lemma~\ref{lemma:submod-split} to $X = C \cup \bar{C}$. $$ F({\bf x} \wedge {\bf c}) \geq {\bf E}[f((T_1({\bf x} \wedge {\bf c}) \cap C) \cup (T_2({\bf x} \wedge {\bf c}) \cap \bar{C}))].$$ With probability $t$, $T_1({\bf x} \wedge {\bf c}) \cap C$ contains $C \setminus C'$ (and maybe some elements of $C'$). In this case, $f(T_1({\bf x} \wedge {\bf c}) \cap C) \geq f(C) - f(C')$ by submodularity. Also, $T_2({\bf x} \wedge {\bf c}) \cap \bar{C}$ is empty with probability at least $1-t$. Again, these two events are independent. Therefore, $ F({\bf x} \wedge {\bf c}) \geq t(1-t) (f(C) - f(C')).$ If $f(C') > f(C)$, this bound is vacuous; otherwise, we can replace $t(1-t)$ by $(1-t)^2$, because $t \geq 1/2$. In any case, \begin{equation} \label{eq:6} F({\bf x} \wedge {\bf c}) \geq (1-t)^2 (f(C) - f(C')). \end{equation} Combining (\ref{eq:4}), (\ref{eq:5}) and (\ref{eq:6}), $$ F({\bf x}) \geq \frac12 (F({\bf x} \vee {\bf c}) + F({\bf x} \wedge {\bf c})) \geq \frac12 (t(1-t) f(C) + (1-t)^2 f(C)) = \frac12 (1-t) f(C).$$ \end{proof} \paragraph{Technical remarks} Again, we have to deal with the issues of estimating $F({\bf x})$ and stopping the local search in polynomial time. We do this exactly as we did at the end of Section~\ref{section:submod-independent}. One issue to be careful about here is that if $f:2^X \rightarrow [0,M]$, our estimates of $F({\bf x})$ are within an additive error of $M / poly(n)$. 
If the optimum value $OPT = \max \{f(S): S \in {\cal B}\}$ is very small compared to $M$, the error might be large compared to $OPT$ which would be a problem. The optimum could in fact be very small in general. But it holds that if ${\cal M}$ contains no loops and co-loops (which can be eliminated easily), then $OPT \geq \frac{1}{n^2} M$ (see Appendix~\ref{app:base-value}). Then, our sampling errors are on the order of $OPT / poly(n)$ which yields a $1/poly(n)$ error in the approximation bound. \section{Approximation for symmetric instances} \label{section:symmetric} We can achieve a better approximation assuming that the instance exhibits a certain symmetry. This is the same kind of symmetry that we use in our hardness construction (Section~\ref{section:hardness-proof}) and the hard instances exhibit the same symmetry as well. It turns out that our approximation in this case matches the hardness threshold up to lower order terms. Similar to our hardness result, the symmetries that we consider here are permutations of the ground set $X$, corresponding to permutations of coordinates in ${\mathbb R}^X$. We start with some basic properties which are helpful in analyzing symmetric instances. \begin{lemma} \label{lemma:grad-sym} Assume that $f:2^X \rightarrow {\mathbb R}$ is invariant with respect to a group of permutations ${\cal G}$ and $F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$. Then for any symmetrized vector $\bar{{\bf c}} = {\bf E}_{\sigma \in {{\cal G}}}[\sigma({\bf c})]$, $\nabla F |_{\bar{{\bf c}}}$ is also symmetric w.r.t. ${\cal G}$. I.e., for any $\tau \in {\cal G}$, $$ \tau(\nabla F |_{{\bf x}=\bar{{\bf c}}}) = \nabla F |_{{\bf x}=\bar{{\bf c}}}. $$ \end{lemma} \begin{proof} Since $f(S)$ is invariant under ${\cal G}$, so is $F({\bf x})$, i.e. $F({\bf x}) = F(\tau({\bf x}))$ for any $\tau \in {\cal G}$. 
Differentiating both sides at ${\bf x}={\bf c}$, we get by the chain rule: $$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf c}} = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\tau({\bf c})} \partdiff{}{x_i} (\tau({\bf x}))_j = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\tau({\bf c})} \partdiff{x_{\tau(j)}}{x_i}. $$ Here, $\partdiff{x_{\tau(j)}}{x_i} = 1$ if $\tau(j) = i$, and $0$ otherwise. Therefore, $$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf c}} = \partdiff{F}{x_{\tau^{-1}(i)}} \Big|_{{\bf x}=\tau({\bf c})}. $$ Note that $\tau(\bar{{\bf c}}) = {\bf E}_{\sigma \in {\cal G}}[\tau(\sigma({\bf c}))] = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf c})] = \bar{{\bf c}}$ since the distribution of $\tau \circ \sigma$ is equal to the distribution of $\sigma$. Therefore, $$ \partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf c}}} = \partdiff{F}{x_{\tau^{-1}(i)}} \Big|_{{\bf x}=\bar{{\bf c}}} $$ for any $\tau \in {\cal G}$. \end{proof} Next, we prove that the ``symmetric optimum'' $\max \{F(\bar{{\bf x}}): {\bf x} \in P({\cal F})\}$ gives a solution which is a local optimum for the original instance $\max \{F({\bf x}): {\bf x} \in P({\cal F})\}$. (As we proved in Section~\ref{section:hardness-proof}, in general we cannot hope to find a better solution than the symmetric optimum.) \begin{lemma} \label{lemma:sym-local-opt} Let $f:2^X \rightarrow {\mathbb R}$ and ${\cal F} \subset 2^X$ be invariant with respect to a group of permutations $\cal G$. Let $\overline{OPT} = \max \{ F(\bar{{\bf x}}): {\bf x} \in P({\cal F}) \}$ where $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$, and let ${\bf x}_0$ be the symmetric point where $\overline{OPT}$ is attained ($\bar{{\bf x}}_0 = {\bf x}_0$). Then ${\bf x}_0$ is a local optimum for the problem $\max \{F({\bf x}): {\bf x} \in P({\cal F}) \}$, in the sense that $({\bf x}-{\bf x}_0) \cdot \nabla F |_{{\bf x}_0} \leq 0$ for any ${\bf x} \in P({\cal F})$. 
\end{lemma} \begin{proof} Assume for the sake of contradiction that $({\bf x}-{\bf x}_0) \cdot \nabla F|_{{\bf x}_0} > 0$ for some ${\bf x} \in P({\cal F})$. We use the symmetry properties of $f$ and ${\cal F}$ to show that $(\bar{{\bf x}} - {\bf x}_0) \cdot \nabla F|_{{\bf x}_0} > 0$ as well. Recall that ${\bf x}_0 = \bar{{\bf x}}_0$. We have $$ (\bar{{\bf x}} - {\bf x}_0) \cdot \nabla F|_{{\bf x}_0} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x}-{\bf x}_0) \cdot \nabla F|_{{\bf x}_0}] = {\bf E}_{\sigma \in {\cal G}}[({\bf x}-{\bf x}_0) \cdot \sigma^{-1}(\nabla F|_{{\bf x}_0})] = ({\bf x}-{\bf x}_0) \cdot \nabla F|_{{\bf x}_0} > 0 $$ using Lemma~\ref{lemma:grad-sym}. Hence, there would be a direction $\bar{{\bf x}} - {\bf x}_0$ along which an improvement can be obtained. But then, consider a small $\delta > 0$ such that ${\bf x}_1 = {\bf x}_0 + \delta (\bar{{\bf x}} - {\bf x}_0) \in P({\cal F})$ and also $F({\bf x}_1) > F({\bf x}_0)$. The point ${\bf x}_1$ is symmetric ($\bar{{\bf x}}_1 = {\bf x}_1$) and hence it would contradict the assumption that $F({\bf x}_0) = \overline{OPT}$. \end{proof} \subsection{Submodular maximization over independent sets in a matroid} Let us derive an optimal approximation result for the problem $\max \{f(S): S \in {\cal I}\}$ under the assumption that the instance is ``element-transitive''. \begin{definition} For a group ${\cal G}$ of permutations on $X$, the orbit of an element $i \in X$ is the set $\{ \sigma(i): \sigma \in {\cal G} \}$. ${\cal G}$ is called element-transitive if the orbit of any element is the entire ground set $X$. \end{definition} In this case, we show that it is easy to achieve an optimal $(\frac12-o(1))$-approximation for a matroid independence constraint. \begin{theorem} \label{thm:sym-submod-independent} Let $\max \{f(S): S \in {\cal I}\}$ be an instance symmetric with respect to an element-transitive group of permutations ${\cal G}$. 
Let $\overline{OPT} = \max \{F(\bar{{\bf x}}): {\bf x} \in P({\cal M})\}$ where $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$. Then $\overline{OPT} \geq \frac12 OPT$. \end{theorem} \begin{proof} Let $OPT = f(C)$. By Lemma~\ref{lemma:sym-local-opt}, $\overline{OPT} = F({\bf x}_0)$ where ${\bf x}_0$ is a local optimum for the problem $\max \{F({\bf x}): {\bf x} \in P({\cal M})\}$. This means it is also a local optimum in the sense of Lemma~\ref{lemma:local-opt}, with $t=1$. By Lemma~\ref{lemma:fractional-search}, $$ 2 F({\bf x}_0) \geq F({\bf x}_0 \vee {\bf 1}_C) + F({\bf x}_0 \wedge {\bf 1}_C).$$ Also, ${\bf x}_0 = \bar{{\bf x}}_0$. As we are dealing with an element-transitive group of symmetries, this means all the coordinates of ${\bf x}_0$ are equal, ${\bf x}_0 = (\xi,\xi,\ldots,\xi)$. Therefore, ${\bf x}_0 \vee {\bf 1}_C$ is equal to $1$ on $C$ and $\xi$ outside of $C$. By Lemma~\ref{lemma:rnd-threshold} (in the Appendix), $$ F({\bf x}_0 \vee {\bf 1}_C) \geq (1-\xi) f(C).$$ Similarly, ${\bf x}_0 \wedge {\bf 1}_C$ is equal to $\xi$ on $C$ and $0$ outside of $C$. By Lemma~\ref{lemma:rnd-threshold}, $$ F({\bf x}_0 \wedge {\bf 1}_C) \geq \xi f(C).$$ Combining the two bounds, $$ 2 F({\bf x}_0) \geq F({\bf x}_0 \vee {\bf 1}_C) + F({\bf x}_0 \wedge {\bf 1}_C) \geq (1-\xi) f(C) + \xi f(C) = f(C) = OPT.$$ \end{proof} Since all symmetric solutions ${\bf x} = (\xi,\xi,\ldots,\xi)$ form a 1-parameter family, and $F(\xi,\xi,\ldots,\xi)$ is a concave function, we can search for the best symmetric solution (within any desired accuracy) by binary search. By standard techniques, we get the following. \begin{corollary} There is a $(\frac12 - o(1))$-approximation (``brute force'' search over symmetric solutions) for the problem $\max \{f(S): S \in {\cal I}\}$ for instances symmetric under an element-transitive group of permutations. 
\end{corollary} The hard instances for submodular maximization subject to a matroid independence constraint correspond to refinements of the Max Cut instance for the graph $K_2$ (Section~\ref{section:hardness-applications}). It is easy to see that such instances are element-transitive, and it follows from Section~\ref{section:hardness-proof} that a $(\frac12+\epsilon)$-approximation for such instances would require exponentially many value queries. Therefore, our approximation for element-transitive instances is optimal. \subsection{Submodular maximization over bases} Let us come back to the problem of submodular maximization over the bases of a matroid. The property that the point attaining $\overline{OPT}$ is a local optimum with respect to the original problem $\max \{F({\bf x}): {\bf x} \in P({\cal F})\}$ is very useful in arguing about the value of $\overline{OPT}$. We already have tools to deal with local optima from Section~\ref{section:submod-bases}. Here we prove the following. \begin{lemma} \label{lemma:base-local-opt} Let $B({\cal M})$ be the matroid base polytope of ${\cal M}$ and ${\bf x}_0 \in B({\cal M})$ a local maximum for the submodular maximization problem $\max \{F({\bf x}): {\bf x} \in B({\cal M})\}$, in the sense that $({\bf x}-{\bf x}_0) \cdot \nabla F|_{{\bf x}_0} \leq 0$ for any ${\bf x} \in B({\cal M})$. Assume in addition that ${\bf x}_0 \in [s,t]^X$. Then $$ F({\bf x}_0) \geq \frac12 (1-t+s) \cdot OPT.$$ \end{lemma} \begin{proof} Let $OPT = \max \{ f(B): B \in {\cal B} \} = f(C)$. We assume that ${\bf x}_0 \in B({\cal M})$ is a local optimum with respect to any direction ${\bf x}-{\bf x}_0$, ${\bf x} \in B({\cal M})$, so it is also a local optimum with respect to the fractional local search in the sense of Lemma~\ref{lemma:fractional-search2}, with $t=1$. 
The lemma implies that $$ 2 F({\bf x}_0) \geq F({\bf x}_0 \vee {\bf 1}_C) + F({\bf x}_0 \wedge {\bf 1}_C).$$ By assumption, the coordinates of ${\bf x}_0 \vee {\bf 1}_C$ are equal to $1$ on $C$ and at most $t$ outside of $C$. With probability $1-t$, a random threshold in $[0,1]$ falls between $t$ and $1$, and Lemma~\ref{lemma:rnd-threshold} (in the Appendix) implies that $$ F({\bf x}_0 \vee {\bf 1}_C) \geq (1-t) \cdot f(C).$$ Similarly, the coordinates of ${\bf x}_0 \wedge {\bf 1}_C$ are $0$ outside of $C$, and at least $s$ on $C$. A random threshold falls between $0$ and $s$ with probability $s$, and Lemma~\ref{lemma:rnd-threshold} implies that $$ F({\bf x}_0 \wedge {\bf 1}_C) \geq s \cdot f(C).$$ Putting these inequalities together, we get $$2 F({\bf x}_0) \geq F({\bf x}_0 \vee {\bf 1}_C) + F({\bf x}_0 \wedge {\bf 1}_C) \geq (1-t+s) \cdot f(C).$$ \end{proof} \ \noindent {\bf Totally symmetric instances.} The application we have in mind here is a special case of submodular maximization over the bases of a matroid, which we call {\em totally symmetric}. \begin{definition} \label{def:totally-symmetric} We call an instance $\max \{f(S): S \in {\cal F} \}$ totally symmetric with respect to a group of permutations ${\cal G}$, if both $f(S)$ and ${\cal F}$ are invariant under ${\cal G}$ and moreover, there is a point ${\bf c} \in P({\cal F})$ such that ${\bf c} = \bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$ for every ${\bf x} \in P({\cal F})$. We call ${\bf c}$ the center of the instance. \end{definition} Note that this is indeed stronger than just being invariant under ${\cal G}$. For example, an instance on a ground set $X = X_1 \cup X_2$ could be symmetric with respect to any permutation of $X_1$ and any permutation of $X_2$. For any ${\bf x} \in P({\cal F})$, the symmetric vector $\bar{{\bf x}}$ is constant on $X_1$ and constant on $X_2$. However, in a totally symmetric instance, there must be a unique symmetric point. 
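To make Definition~\ref{def:totally-symmetric} concrete, the sketch below (toy data of our choosing, for illustration only) symmetrizes a vertex of a partition-matroid base polytope under the group of part-preserving permutations; every point of the polytope averages to the same center, with coordinate $k_j/|X_j|$ on each part.

```python
from itertools import permutations

def symmetrize(x, group):
    # The symmetrized vector x_bar = E_{sigma in G}[sigma(x)], where sigma
    # acts on vectors by permuting coordinates: sigma(x)_{sigma(i)} = x_i.
    avg = [0.0] * len(x)
    for sigma in group:
        for i, xi in enumerate(x):
            avg[sigma[i]] += xi / len(group)
    return avg

# G: all permutations preserving the parts X1 = {0,1} and X2 = {2,3,4}.
group = [p1 + tuple(2 + j for j in p2)
         for p1 in permutations(range(2))
         for p2 in permutations(range(3))]

# A vertex of the base polytope for k1 = k2 = 1: indicator of the base {0,2}.
x = [1.0, 0.0, 1.0, 0.0, 0.0]
c = symmetrize(x, group)
center = [0.5, 0.5, 1/3, 1/3, 1/3]   # coordinates k_j / |X_j|
assert all(abs(ci - ti) < 1e-9 for ci, ti in zip(c, center))
```

The indicator of any other base symmetrizes to the same vector, which is the sense in which such an instance has a unique symmetric point.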
\paragraph{Bases of partition matroids} A canonical example of a totally symmetric instance is as follows. Let $X = X_1 \cup X_2 \cup \ldots \cup X_m$ and let integers $k_1,\ldots,k_m$ be given. This defines a partition matroid ${\cal M} = (X,{\cal B})$, whose bases are $$ {\cal B} = \{B: \forall j; |B \cap X_j| = k_j \}.$$ The associated matroid base polytope is $$ B({\cal M}) = \{{\bf x} \in [0,1]^X: \forall j; \sum_{i \in X_j} x_i = k_j \}.$$ Let ${\cal G}$ be a group of permutations such that the orbit of each element $i \in X_j$ is the entire set $X_j$. This implies that for any ${\bf x} \in B({\cal M})$, $\bar{{\bf x}}$ is the same vector ${\bf c}$, with coordinates $k_j / |X_j|$ on $X_j$. If $f(S)$ is also invariant under ${\cal G}$, we have a totally symmetric instance $\max \{f(S): S \in {\cal B} \}$. \paragraph{Example: welfare maximization} To present a more concrete example, consider $X_j = \{ a_{j1}, \ldots, a_{jk} \}$ for each $j \in [m]$, a set of bases ${\cal B} = \{B: \forall j; |B \cap X_j| = 1\}$, and an objective function in the form $f(S) = \sum_{i=1}^{k} v(\{j: a_{ji} \in S\})$, where $v:2^{[m]} \rightarrow {\mathbb R}_+$ is a submodular function. This is a totally symmetric instance, which captures the welfare maximization problem for combinatorial auctions where each player has the same valuation function. (Including element $a_{ji}$ in the solution corresponds to allocating item $j$ to player $i$; see \cite{CCPV09} for more details.) We remark that here we consider possibly nonmonotone submodular functions, which is not common for combinatorial auctions; nevertheless the problem still makes sense. \ We show that for such instances, the center point achieves an improved approximation. \begin{theorem} \label{thm:sym-solution} Let $\max \{f(S): S \in {\cal B}\}$ be a totally symmetric instance. Let the fractional packing number of bases be $\nu$ and the fractional packing number of dual bases be $\nu^*$. 
Then the center point ${\bf c}$ satisfies $$ F({\bf c}) \geq \left(1 - \frac{1}{2\nu} - \frac{1}{2\nu^*} \right) OPT.$$ \end{theorem} Recall that in the general case, we get a $\frac12 (1-1/\nu-o(1))$-approximation (Theorem~\ref{thm:submod-bases-approx}). By passing to the dual matroid, we can also obtain a $\frac12 (1-1/\nu^*-o(1))$-approximation, so in general, we know how to achieve a $\frac12 (1-1/\max\{\nu,\nu^*\}-o(1))$-approximation. For totally symmetric instances where $\nu=\nu^*$, we improve this to the optimal factor of $1-1/\nu$. \begin{proof} Since there is a unique center ${\bf c} = \bar{{\bf x}}$ for every ${\bf x} \in B({\cal M})$, the center also attains the symmetric optimum: $F({\bf c}) = \max \{ F(\bar{{\bf x}}): {\bf x} \in B({\cal M})\}$. Due to Lemma~\ref{lemma:sym-local-opt}, ${\bf c}$ is a local optimum for the problem $\max \{F({\bf x}): {\bf x} \in B({\cal M})\}$. Because the fractional packing number of bases is $\nu$, we have $c_i \leq 1/\nu$ for all $i$. Similarly, because the fractional packing number of dual bases (complements of bases) is $\nu^*$, we have $1-c_i \leq 1/\nu^*$. This means that ${\bf c} \in [1-1/\nu^*, 1/\nu]^X$. Lemma~\ref{lemma:base-local-opt} implies that $$ 2 F({\bf c}) \geq \left( 1-\frac{1}{\nu} + 1-\frac{1}{\nu^*} \right) OPT.$$ \end{proof} \begin{corollary} Let $\max \{f(S): S \in {\cal B}\}$ be an instance on a partition matroid where every base takes at least an $\alpha$-fraction of each part, at most a $(1-\alpha)$-fraction of each part, and the submodular function $f(S)$ is invariant under a group ${\cal G}$ where the orbit of each $i \in X_j$ is $X_j$. Then, the center point ${\bf c} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf 1}_B)]$ (equal for any $B \in {\cal B}$) satisfies $F({\bf c}) \geq \alpha \cdot OPT$. 
\end{corollary} \begin{proof} If the orbit of any element $i \in X_j$ is the entire set $X_j$, then $\sigma(i)$ for a random $\sigma \in {\cal G}$ is uniformly distributed over $X_j$ (since ${\cal G}$ acts transitively on $X_j$). Therefore, symmetrizing any fractional vector ${\bf x} \in B({\cal M})$ gives the same vector $\bar{{\bf x}} = {\bf c}$, where $c_i = k_j / |X_j|$ for $i \in X_j$. Also, our assumptions mean that the fractional packing number of bases is $1/(1-\alpha)$, and the fractional packing number of dual bases is also $1/(1-\alpha)$. By Theorem~\ref{thm:sym-solution}, the center ${\bf c}$ satisfies $F({\bf c}) \geq \alpha \cdot OPT$. \end{proof} The hard instances for submodular maximization over matroid bases that we describe in Section~\ref{section:hardness-applications} are exactly of this form (see the last paragraph of Section~\ref{section:hardness-applications}, with $\alpha=1/k$). There is a unique symmetric solution, ${\bf x} = (\alpha,\alpha,\ldots,\alpha, 1-\alpha,1-\alpha,\ldots,1-\alpha)$. The fractional base packing number for these matroids is $\nu = 1/(1-\alpha)$ and Theorem~\ref{thm:general-hardness} implies that any $(\alpha+\epsilon) = (1-1/\nu+\epsilon)$-approximation for such matroids would require exponentially many value queries. Therefore, our approximation in this special case is optimal. \section*{Acknowledgment} The author would like to thank Jon Lee and Maxim Sviridenko for helpful discussions.
\section{Introduction} Since the ``cold fusion'' publication by Fleischmann and Pons in 1989 \cite{FP1} a new field of experimental physics has emerged. Although even the possibility of the phenomenon of nuclear fusion at low energies is in doubt in mainstream physics, the quest for low-energy nuclear reactions (LENR) has flourished, and hundreds of publications (mostly experimental) have been devoted to various aspects of the problem. (For a summary of the experimental observations, the theoretical efforts, and the background events see e.g. \cite{Krivit}, \cite{Storms2}.) The main reasons for the aversion to the topic from the standpoint of standard nuclear physics have been: (a) due to the Coulomb repulsion no nuclear reaction should take place at energies corresponding to room temperature, (b) the observed extra heat attributed to nuclear reactions is not accompanied by the nuclear end products expected from hot fusion experience, (c) traces of nuclear transmutations were also observed, which, considering the repulsive Coulomb interaction, is an even more inexplicable fact at these energies. Also, in the last two decades, while investigating astrophysical factors of nuclear reactions of low atomic numbers, which have great importance in nuclear astrophysics \cite{Angulo}, \cite{Descouvemont}, extraordinary observations were made in low energy accelerator physics in the cross section measurements of the $dd$ reactions in deuterated metal targets \cite{Raiola1}, \cite{Raiola2}, \cite{Bonomo}, \cite{Kasagi}, \cite{Czerski 1}, \cite{Czerski 2}, \cite{Huke 1}. The phenomenon of increased cross sections of the reactions measured in solids compared to the cross sections obtained in gaseous targets is the so-called anomalous screening effect.
Several years ago a systematic survey was made of the experimental methods applied in investigating the anomalous screening effect and of the theoretical efforts for its explanation \cite{Huke}, from which one can conclude that a full theoretical explanation of the effect is still open. Motivated by the observations in the above two fields, we search for physical phenomena that may have a modifying effect on nuclear reactions in a solid state environment. Earlier we found theoretically \cite{kk2}, \cite{kk0} that if the reaction $p+d\rightarrow $ $^{3}He$ takes place in solid material then the nuclear energy is mostly taken away by an electron of the environment instead of the emission of a $\gamma $ photon, a result that calls attention to the possible role of electrons. Concerning the assistance of the electrons and other charged constituents of the solid, a family of electron assisted nuclear reactions, especially the electron assisted neutron exchange process, furthermore the electron assisted nuclear capture process and the heavy charged particle assisted nuclear processes, were discussed mostly in a crystalline solid state (particularly metal) environment \cite{kk3}, \cite{kk1}, \cite{kk4}. The aim of this paper is to summarize our theoretical findings and on this basis to explain some experimental observations. We adopt the approach standard in nuclear physics when describing the cross section of nuclear reactions. Accordingly, heavy, charged particles $j$ and $k$ of like positive charge, of charge numbers $z_{j}$ and $z_{k}$, need a considerable amount of relative kinetic energy $E$, determined by the height of the Coulomb barrier, in order to let the probability of their nuclear interaction have a significant value.
The cross section of such a process can be derived applying the Coulomb solution $\varphi (\mathbf{r})$, \begin{equation} \varphi (\mathbf{r})=e^{i\mathbf{k}\cdot \mathbf{r}}f(\mathbf{k},\mathbf{r})/\sqrt{V}, \label{Cb1} \end{equation} which is the wave function of a free particle of charge number $z_{j}$ in a repulsive Coulomb field of charge number $z_{k}$ \cite{Alder}, in the description of relative motion of projectile and target. In $\left( \ref{Cb1}\right) $ $V$ denotes the volume of normalization, $\mathbf{r}$ is the relative coordinate of the two particles, $\mathbf{k}$ is the wave number vector of their relative motion and \begin{equation} f(\mathbf{k},\mathbf{r})=e^{-\pi \eta _{jk}/2}\Gamma (1+i\eta _{jk})\,_{1}F_{1}(-i\eta _{jk},1;i[kr-\mathbf{k}\cdot \mathbf{r}]), \label{Hyperg} \end{equation} where $_{1}F_{1}$ is the confluent hypergeometric function and $\Gamma $ is the Gamma function. Since $\varphi (\mathbf{r})\sim e^{-\pi \eta _{jk}/2}\Gamma (1+i\eta _{jk})$, the cross section of the process is proportional to \begin{equation} \left\vert e^{-\pi \eta _{jk}/2}\Gamma (1+i\eta _{jk})\right\vert ^{2}=\frac{2\pi \eta _{jk}\left( E\right) }{\exp \left[ 2\pi \eta _{jk}\left( E\right) \right] -1}=F_{jk}(E), \label{Fjk} \end{equation} the so-called Coulomb factor. Here \begin{equation} \eta _{jk}\left( E\right) =z_{j}z_{k}\alpha _{f}\sqrt{a_{jk}\frac{m_{0}c^{2}}{2E}} \label{etajk} \end{equation} is the Sommerfeld parameter in the case of colliding particles of mass numbers $A_{j}$, $A_{k}$ and rest masses $m_{j}=A_{j}m_{0}$, $m_{k}=A_{k}m_{0}$. $m_{0}c^{2}=931.494$ $MeV$ is the atomic energy unit, $\alpha _{f}$ is the fine structure constant and $E$ is taken in the center of mass $\left( CM\right) $ coordinate system; \begin{equation} a_{jk}=\frac{A_{j}A_{k}}{A_{j}+A_{k}} \label{ajk} \end{equation} is the reduced mass number of particles $j$ and $k$ of mass numbers $A_{j}$ and $A_{k}$.
Thus the rate of the nuclear reaction of heavy, charged particles of like positive charge becomes very small at low energies as a consequence of $F_{jk}(E)$ being very small. In the processes investigated the Coulomb and the strong interactions play a crucial role. The interaction Hamiltonian $H_{I}$ comprises the Coulomb interaction potential $V_{Cb}$ with the charged constituents of the surroundings (solid) and the interaction potential $V_{St}$ of the strong interaction: \begin{equation} H_{I}=V_{Cb}+V_{St}. \label{HI} \end{equation} (The Coulomb interaction between the charged participants of the nuclear reaction is taken into account using $\left( \ref{Cb1}\right) $.) Therefore the charged particle assisted nuclear reactions are at least second order in terms of standard perturbation calculation. According to $\left( \ref{HI}\right) $, the lowest order S-matrix element of a charged particle assisted nuclear reaction has two terms which can be visualized with the aid of two graphs. However, the contribution of the term in which $V_{St}$, according to chronological order, precedes $V_{Cb}$ is negligible because of the smallness of the Coulomb factor, the square root of which appears in the matrix element of $V_{St}$ in this case. (In the following we only depict the graph of the dominant term.) When describing the effect of the Coulomb interaction between the nucleus of charge number $Z$ and a slow electron one can also use a Coulomb function; consequently, the cross section of the process to be investigated is proportional to \begin{equation} F_{e}(E)=\frac{2\pi \eta _{e}\left( E\right) }{\exp \left[ 2\pi \eta _{e}\left( E\right) \right] -1}, \label{FeE} \end{equation} but with \begin{equation} \eta _{e}=-Z\alpha _{f}\sqrt{\frac{m_{e}c^{2}}{2E}}. \label{eatae} \end{equation} Here $m_{e}$ is the rest mass of the electron.
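As a numerical illustration of the Coulomb factors $\left( \ref{Fjk}\right) $ and $\left( \ref{FeE}\right) $, the following short sketch evaluates both; the chosen energies ($10$ $keV$ for a $d+d$ collision and $1$ $keV$ for an electron near a $Ni$ nucleus) are illustrative values of ours, not taken from the text.

```python
import math

ALPHA_F = 1.0 / 137.036   # fine structure constant
M0C2 = 931.494            # atomic energy unit m_0 c^2, MeV
MEC2 = 0.511              # electron rest energy m_e c^2, MeV

def coulomb_factor(eta):
    """F = 2*pi*eta / (exp(2*pi*eta) - 1), Eqs. (Fjk) and (FeE);
    valid for repulsive (eta > 0) and attractive (eta < 0) cases alike."""
    x = 2.0 * math.pi * eta
    return x / math.expm1(x)

def eta_jk(z_j, z_k, a_j, a_k, energy_mev):
    """Sommerfeld parameter of two heavy charged particles, Eq. (etajk)."""
    a_jk = a_j * a_k / (a_j + a_k)   # reduced mass number, Eq. (ajk)
    return z_j * z_k * ALPHA_F * math.sqrt(a_jk * M0C2 / (2.0 * energy_mev))

def eta_e(z_nucleus, energy_mev):
    """Sommerfeld parameter of a slow electron in the attractive
    field of a nucleus of charge number Z, Eq. (eatae)."""
    return -z_nucleus * ALPHA_F * math.sqrt(MEC2 / (2.0 * energy_mev))

# Repulsive case: d + d at E = 10 keV -- the Coulomb factor is tiny.
F_dd = coulomb_factor(eta_jk(1, 1, 2, 2, 0.010))
# Attractive case: a 1 keV electron near a Ni nucleus (Z = 28) -- F_e > 1.
F_e = coulomb_factor(eta_e(28, 0.001))
print(F_dd, F_e, F_e / F_dd)
```

The ratio printed last is the kind of enormous enhancement factor $F_{e}(E)/F_{jk}(E)$ discussed in the text.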
In the case of low (less than $1$ $keV$) kinetic energy of the electron, $F_{e}(E)$ reads approximately as $F_{e}(E)=\left\vert 2\pi \eta _{e}\left( E\right) \right\vert >1$. For instance, the cross section of the electron assisted neutron exchange process (as it will be discussed later, and the graph of which is depicted in Fig. 1) is proportional to $F_{e}(E)$ only (instead of $F_{jk}(E)$) since the neutron takes part in the strong interaction and so the corresponding matrix element does not contain a Coulomb factor. The increment in the cross section due to changing $F_{jk}(E)$ for $F_{e}(E)$ in the case of the electron assisted neutron exchange process can be characterized by the ratio $F_{e}(E)/F_{jk}(E)$, which is an extremely large number. The cross section of the electron assisted neutron exchange process has a further (about a factor $10^{22}$) increase due to the presence of the lattice, since the cross section is also proportional to $1/v_{c}$. Here $v_{c}\sim d^{3}$ is the volume of the elementary cell of the solid, with $d$ the lattice parameter of order of magnitude $10^{-8}$ $cm$. The extremely huge increment in the Coulomb factor, increased further by the effect of the lattice, makes it possible that the cross section of the electron assisted neutron exchange process may reach an observable magnitude even in the very low energy case. Thus it can be concluded that the actual Coulomb factors are the clue to the charged particle assisted nuclear reactions and therefore we focus our attention on them, especially concerning the Coulomb factors of heavy charged particles. It is worth mentioning that usual nuclear experiments, in which nuclear reactions of heavy charged particles are investigated, are usually devised taking into account the hindering effect of Coulomb repulsion.
Consequently, the beam energy is taken to be appropriately high to reach the energy domain where the cross section of the processes becomes appropriately large. Therefore in an ordinary nuclear experiment the role of charged particle assisted reactions is not essential. \section{Applied method presented in the electron assisted neutron exchange process} Recognizing the possibility and advantage of the assistance of electrons in LENR, we consider first the electron assisted neutron exchange process, namely the \begin{equation} e+\text{ }_{Z_{1}}^{A_{1}}X+\text{ }_{Z_{2}}^{A_{2}}Y\rightarrow e^{\prime }+\text{ }_{Z_{1}}^{A_{1}-1}X+\text{ }_{Z_{2}}^{A_{2}+1}Y+\Delta \label{exchange} \end{equation} reaction \cite{kk3} (see Fig.1). Here $e$ and $e^{\prime }$ denote the electron and $\Delta $ is the energy of the reaction, i.e. the difference between the rest energies of the initial $\left( _{Z_{1}}^{A_{1}}X+_{Z_{2}}^{A_{2}}Y\right) $ and final $\left( _{Z_{1}}^{A_{1}-1}X+\text{ }_{Z_{2}}^{A_{2}+1}Y\right) $ states. In $\left( \ref{exchange}\right) $ the electron (particle $1$) Coulomb interacts with the nucleus $_{Z_{1}}^{A_{1}}X$ (particle $2$). A scattered electron (particle $1^{\prime }$), the intermediate neutron (particle $3$) and the nucleus $_{Z_{1}}^{A_{1}-1}X$ (particle $2^{\prime }$) are created due to this interaction. The intermediate neutron (particle $3$) is captured due to the strong interaction by the nucleus $_{Z_{2}}^{A_{2}}Y$ (particle $4$), forming the nucleus $_{Z_{2}}^{A_{2}+1}Y$ (particle $5$) in this manner. All told, in $\left( \ref{exchange}\right) $ the nucleus $_{Z_{1}}^{A_{1}}X$ (particle $2$) loses a neutron which is taken up by the nucleus $_{Z_{2}}^{A_{2}}Y$ (particle $4$). The process is energetically forbidden if $\Delta <0$. It was found, as will be seen later, that the electron takes away negligible energy. In this process only the Coulomb factor of electrons arises, since the particle which is exchanged is a neutron.
\begin{figure}[tbp] \resizebox{6.0cm}{!}{\includegraphics{Fig1.eps}} \caption{The graph of the electron assisted neutron exchange process. Particle 1 (and 1') is an electron, particle 2 is a nucleus which loses a neutron and becomes particle 2'. Particle 3 is an intermediate neutron. Particle 4 is the nucleus which absorbs the neutron and becomes particle 5. The filled dot denotes Coulomb interaction and the open circle denotes nuclear (strong) interaction. } \label{figure1} \end{figure} The physical background to the virtual neutron stripping due to the Coulomb interaction is worth mentioning. The attractive Coulomb interaction acts between the $Z_{1}$ protons and the electron. The neutrons do not feel the Coulomb interaction. So one can say that in fact the nucleus $_{Z_{1}}^{A_{1}-1}X$ is stripped of the neutron due to the Coulomb attraction. As an example we take $Ni$ and $Pd$ as target material. It is thought that the metal ($Ni$ or $Pd$) is irradiated with slow, free electrons. In this case reaction $\left( \ref{exchange}\right) $ reads as \begin{equation} e+\text{ }_{Z}^{A_{1}}X+\text{ }_{Z}^{A_{2}}X\rightarrow e^{\prime }+\text{ }_{Z}^{A_{1}-1}X+\text{ }_{Z}^{A_{2}+1}X+\Delta \label{exchange metal} \end{equation} with $Z=Z_{1}=Z_{2}$. Now we demonstrate our calculation. Let us take a solid (in our case a metal) which is irradiated by a monoenergetic beam of slow, free electrons. The corresponding sub-system Hamiltonians are $H_{solid}$ and $H_{e}$. It is supposed that their eigenvalue problems are solved, and the complete sets of the eigenvectors of the two independent systems are known. The interaction between them is the Coulomb interaction of potential $V^{Cb}\left( \mathbf{x}\right) $ and the other interaction that is taken into account between the nucleons of the solid is the strong interaction potential $V^{St}\left( \mathbf{x}\right) $. In the second order process investigated an electron takes part in a Coulomb scattering with an atomic nucleus of the solid.
In the intermediate state a virtual free neutron $n$ is created which is captured due to the strong interaction by some other nucleus of the solid. The reaction energy $\Delta $ is shared between the quasi-free final electron and the two final nuclei which take part in the process. Since the aim of this paper is to show the fundamentals of the main effect, the simplest description is chosen. The electron of charge $-e$ and the nucleus $_{Z}^{A_{1}}X$ of charge $Ze$ take part in the Coulomb interaction. We use a screened Coulomb potential of the form \begin{equation} V^{Cb}\left( \mathbf{x}\right) =\int \frac{-4\pi e^{2}Z}{q^{2}+\lambda ^{2}}\exp \left( i\mathbf{q}\cdot \mathbf{x}\right) d\mathbf{q} \label{Vcb1} \end{equation} with screening parameter $\lambda $ and coupling strength $e^{2}=\alpha _{f}\hbar c$. For the strong interaction the interaction potential \begin{equation} V^{St}\left( \mathbf{x}\right) =-f\frac{\exp \left( -s\left\vert \mathbf{x}\right\vert \right) }{\left\vert \mathbf{x}\right\vert } \label{VSt1} \end{equation} is applied, where the strong coupling strength is $f=0.08\hbar c$ \cite{Bjorken} and $1/s$ is the range of the strong interaction. ($\hbar $ is the reduced Planck constant, $c$ is the velocity of light and $e$ is the elementary charge.) According to the standard perturbation theory of quantum mechanics, the transition probability per unit time $\left( W_{fi}\right) $ of this second order process can be written as \begin{equation} W_{fi}=\frac{2\pi }{\hbar }\sum_{f}\left\vert T_{fi}\right\vert ^{2}\delta (E_{f}-E_{i}-\Delta ) \label{Wfie} \end{equation} with \begin{equation} T_{fi}=\sum_{\mu }\frac{V_{f\mu }^{St}V_{\mu i}^{Cb}}{\Delta E_{\mu i}}.
\label{Tif} \end{equation} Here $V_{\mu i}^{Cb}$ is the matrix element of the Coulomb potential between the initial and intermediate states and $V_{f\mu }^{St}$ is the matrix element of the potential of the strong interaction between the intermediate and final states; furthermore \begin{equation} \Delta E_{\mu i}=E_{\mu }-E_{i}-\Delta _{i\mu }. \label{DeltaEmui} \end{equation} $E_{i}$, $E_{\mu }$ and $E_{f}$ are the kinetic energies in the initial, intermediate and final states, respectively, $\Delta $ is the reaction energy, and $\Delta _{i\mu }$ is the difference between the rest energies of the initial $\left( _{Z}^{A_{1}}X\right) $ and intermediate $\left( _{Z}^{A_{1}-1}X\text{ and }n\right) $ states: \begin{equation} \Delta =\Delta _{-}+\Delta _{+},\text{ }\Delta _{i\mu }=\Delta _{-}-\Delta _{n} \label{Delta} \end{equation} with \begin{equation} \Delta _{-}=\Delta _{A_{1}}-\Delta _{A_{1}-1}\text{ and }\Delta _{+}=\Delta _{A_{2}}-\Delta _{A_{2}+1}. \label{Delta-} \end{equation} $\Delta _{A_{1}}$, $\Delta _{A_{1}-1}$, $\Delta _{A_{2}}$, $\Delta _{A_{2}+1}$ and $\Delta _{n}$ are the energy excesses of the neutral atoms of mass numbers $A_{1}$, $A_{1}-1$, $A_{2}$, $A_{2}+1$ and of the neutron, respectively \cite{Shir}. The sum of initial kinetic energies $\left( E_{i}\right) $ is neglected in the energy Dirac-delta $\delta (E_{f}-E_{i}-\Delta )$ and in $\Delta E_{\mu i}$ further on. Now for the sake of simplicity we reindex the particles. The particle indexed with $e$ is the electron; the particle indexed with $1$ is initially the nucleus $_{Z}^{A_{1}}X$ (particle 2 in Fig. 1) and finally $_{Z}^{A_{1}-1}X$ (particle 2' in Fig. 1); the particle indexed with $2$ is initially the nucleus $_{Z}^{A_{2}}X$ (particle 4 in Fig. 1) and finally $_{Z}^{A_{2}+1}X$ (particle 5 in Fig. 1).
\begin{equation} E_{f}=E_{fe}\left( \mathbf{k}_{fe}\right) +E_{f1}\left( \mathbf{k}_{1}\right) +E_{f2}\left( \mathbf{k}_{2}\right) , \label{Ef} \end{equation} \begin{equation} E_{\mu }=E_{fe}\left( \mathbf{k}_{fe}\right) +E_{\mu 1}\left( \mathbf{k}_{1}\right) +E_{n}\left( \mathbf{k}_{n}\right) , \label{Em} \end{equation} where \begin{equation} E_{fj}\left( \mathbf{k}_{j}\right) =\frac{\hbar ^{2}\mathbf{k}_{j}^{2}}{2m_{j}} \label{Efj} \end{equation} is the kinetic energy, $\mathbf{k}_{fj}\equiv \mathbf{k}_{j}$ is the wave vector and $m_{j}$ is the rest mass of the particle indexed with $j$ in the final state $\left( j=1,2\right) $. \begin{equation} E_{n}\left( \mathbf{k}_{n}\right) =\frac{\hbar ^{2}\mathbf{k}_{n}^{2}}{2m_{n}} \label{En} \end{equation} is the kinetic energy, $\mathbf{k}_{n}$ is the wave vector in the intermediate state and $m_{n}$ is the rest mass of the neutron. $E_{\mu 1}\left( \mathbf{k}_{1}\right) $ is the kinetic energy of the first particle in the intermediate state, and $E_{\mu 1}\left( \mathbf{k}_{1}\right) =E_{f1}\left( \mathbf{k}_{1}\right) $. The kinetic energies of the electron in the initial and final states are \begin{equation} E_{ie}=\frac{\hbar ^{2}\mathbf{k}_{ie}^{2}}{2m_{e}}\text{ and }E_{fe}=\frac{\hbar ^{2}\mathbf{k}_{fe}^{2}}{2m_{e}} \label{E1f} \end{equation} with $\mathbf{k}_{ie}$ and $\mathbf{k}_{fe}$ denoting the wave vectors of the electron in the initial and final states. The initial wave vectors $\mathbf{k}_{i1}$ and $\mathbf{k}_{i2}$ of particles $1$ and $2$ are neglected. The initial, intermediate and final states are determined in Appendix A., the $V_{\mu i}^{Cb}$, $V_{f\mu }^{St}$ matrix elements are calculated in Appendix B. and the transition probability per unit time is calculated in Appendix C. (Appendix D. is devoted to the approximations, identities and relations which are used in the calculation of the cross section.)
\subsection{Cross section of the electron assisted neutron exchange process} The cross section $\sigma $ of the process can be obtained from the transition probability per unit time $\left( \ref{Wfi22}\right) $ dividing it by the flux $v_{e}/V$ of the incoming electron, where $v_{e}$ is the velocity of the electron. \begin{eqnarray} \sigma &=&\int \frac{c}{v_{e}}\frac{64\pi ^{3}\alpha _{f}^{2}\hbar cZ^{2}\sum_{l_{2}=-m_{2}}^{l_{2}=m_{2}}\left\vert F_{2}\left( \mathbf{k}_{2}\right) \right\vert ^{2}}{v_{c}\left( \left\vert \mathbf{k}_{1}+\mathbf{k}_{2}\right\vert ^{2}+\lambda ^{2}\right) ^{2}\left( \Delta E_{\mu i}\right) _{\mathbf{k}_{n}=\mathbf{k}_{2}}^{2}} \label{sigma} \\ &&\times \frac{F_{e}(E_{ie})}{F_{e}(E_{f1})}\left\langle \left\vert F_{1}\left( \mathbf{k}_{2}\right) \right\vert ^{2}\right\rangle A_{2}^{2}r_{A_{2}}\delta (E_{f}-\Delta )d^{3}k_{1}d^{3}k_{2}, \notag \end{eqnarray} where $v_{c}$ is the volume of the elementary cell in the solid, $r_{A_{2}}$ is the relative natural abundance of atoms $_{Z}^{A_{2}}X$, \begin{equation} F_{1}\left( \mathbf{k}_{2}\right) =\int \Phi _{i1}\left( \mathbf{r}_{n1}\right) e^{-i\mathbf{k}_{2}\frac{A_{1}}{A_{1}-1}\cdot \mathbf{r}_{n1}}d^{3}r_{n1}, \label{F1kalk2} \end{equation} \begin{equation} \left\langle \left\vert F_{1}\left( \mathbf{k}_{2}\right) \right\vert ^{2}\right\rangle =\frac{1}{2l_{1}+1}\sum_{l_{1}=-m_{1}}^{l_{1}=m_{1}}\left\vert F_{1}\left( \mathbf{k}_{2}\right) \right\vert ^{2} \label{F1av} \end{equation} and \begin{eqnarray} F_{2}\left( \mathbf{k}_{2}\right) &=&\int \Phi _{f2}^{\ast }\left( \mathbf{r}_{n2}\right) e^{i\mathbf{k}_{2}\cdot \mathbf{r}_{n2}}\times \label{F2k2} \\ &&\times \left( -f\frac{\exp \left( -s\frac{A_{2}+1}{A_{2}}r_{n2}\right) }{\frac{A_{2}+1}{A_{2}}r_{n2}}\right) d^{3}r_{n2}. \notag \end{eqnarray} Here $\Phi _{i1}$ and $\Phi _{f2}$ are the initial and final bound neutron states (for the definition of $l_{1}$ and $l_{2}$ see below).
The cross section calculation shows that the $k_{2}\simeq k_{0}=\sqrt{2\mu _{12}\Delta }/\hbar $ substitution may be used (see Appendix D.) in calculating $F_{1}$ and $F_{2}$ in $\sigma $, where $\mu _{12}=m_{0}\left[ \left( A_{1}-1\right) \left( A_{2}+1\right) \right] /\left( A_{1}+A_{2}\right) $. When evaluating $\left( \ref{sigma}\right) $, first the Weisskopf approximation is applied, i.e. for the initial and final bound neutron states we take $\Phi _{W}\left( \mathbf{r}_{nj}\right) =\phi _{jW}\left( r_{nj}\right) Y_{l_{j}m_{j}}\left( \Omega _{j}\right) $, $j=1,2$, where $Y_{l_{j}m_{j}}\left( \Omega _{j}\right) $ is a spherical harmonic and $\phi _{jW}\left( r_{nj}\right) =\sqrt{3/R_{j}^{3}}$, $j=1,2$, if $\left\vert \mathbf{r}_{nj}\right\vert \leq R_{j}$ and $\phi _{jW}\left( r_{nj}\right) =0$ for $\left\vert \mathbf{r}_{nj}\right\vert >R_{j}$, where $R_{j}=r_{0}A_{j}^{1/3}$ is the radius of a nucleus of nucleon number $A_{j}$ with $r_{0}=1.2\times 10^{-13}$ $cm$. We apply the $A_{1}\simeq A_{2}\simeq A_{1}-1\simeq A_{2}+1=A$ approximation further on. In calculating $F_{1}\left( \mathbf{k}_{0}\right) $ and $F_{2}\left( \mathbf{k}_{0}\right) $ the long wavelength approximation (LWA) ($\exp \left( -i\mathbf{k}_{0}\cdot \mathbf{r}_{n1}\right) =1$ and $\exp \left( i\mathbf{k}_{0}\cdot \mathbf{r}_{n2}\right) =1$) is also used with $s=1/r_{0}$, which results approximately in \begin{equation} \left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle \sum_{l_{2}=-m_{2}}^{l_{2}=m_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert ^{2}=16\pi ^{2}r_{0}^{4}f^{2}\left( 2l_{2}+1\right) . \label{F1K2av} \end{equation} Using the results of Appendix D., the $E_{f1}=\Delta /2$ relation and if $E_{ie}<0.1$ $MeV$ (i.e.
if $F_{e}(E_{ie})=\left\vert 2\pi \eta _{e}\left( E_{ie}\right) \right\vert =2\pi Z\alpha _{f}\sqrt{m_{e}c^{2}/2E_{ie}}$) then the cross section in the Weisskopf-LWA approximation reads as \begin{equation} \sigma _{W}=\frac{C_{W0}\left( 2l_{2}+1\right) }{\left[ 1+\frac{2\left( \Delta _{n}-\Delta _{-}\right) }{A\Delta }\right] ^{2}}\frac{r_{A_{2}}}{F_{e}(\Delta /2)}\frac{A^{3/2}Z^{2}}{\Delta ^{3/2}E_{ie}} \label{sigma2} \end{equation} with $C_{W0}=2^{15}\pi ^{9}\alpha _{f}^{3}\left( 0.08\right) ^{2}a_{B}r_{0}\left( \frac{r_{0}}{d}\right) ^{3}\left( m_{0}c^{2}\right) ^{3/2}m_{e}c^{2}$. Here $a_{B}$ is the Bohr radius, the relation $c/v_{e}=\sqrt{m_{e}c^{2}/\left( 2E_{ie}\right) }$ with $E_{ie}$ the kinetic energy of the ingoing electrons is also applied, and $d=3.52\times 10^{-8}$ $cm$ ($Ni$ lattice) and $d=3.89\times 10^{-8}$ $cm$ ($Pd$ lattice). $F_{e}(\Delta /2)$ is determined by $\left( \ref{FeE}\right) $ and $\left( \ref{eatae}\right) $. The subscript $W$ refers to the Weisskopf-LWA approximation and in $\left( \ref{sigma2}\right) $ the quantities $\Delta $ and $E_{ie}$ have to be substituted in $MeV$ units. $C_{W0}\left( Ni\right) =8.9\times 10^{-10}$ $MeV^{5/2}b$ and $C_{W0}\left( Pd\right) =6.6\times 10^{-10}$ $MeV^{5/2}b$. We have calculated $\sum_{l_{2}=-m_{2}}^{l_{2}=m_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert ^{2}$, $\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle $ and the cross section in the single particle shell model with isotropic harmonic oscillator potential and without the long wavelength approximation (see Appendix E.).
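The quoted values of $C_{W0}$ can be reproduced directly from its definition; the sketch below is ours, supplying the standard CGS value of the Bohr radius $a_{B}=5.292\times 10^{-9}$ $cm$ and the conversion $1$ $b=10^{-24}$ $cm^{2}$.

```python
import math

ALPHA_F = 1.0 / 137.036   # fine structure constant
M0C2 = 931.494            # atomic energy unit, MeV
MEC2 = 0.511              # electron rest energy, MeV
A_BOHR = 5.292e-9         # Bohr radius, cm (standard value, supplied by us)
R0 = 1.2e-13              # nuclear radius parameter r_0, cm
BARN = 1.0e-24            # 1 barn in cm^2

def c_w0(d_lattice_cm):
    """C_W0 = 2^15 pi^9 alpha_f^3 (0.08)^2 a_B r_0 (r_0/d)^3
    (m_0 c^2)^{3/2} m_e c^2, converted to MeV^{5/2} barn."""
    val = (2**15 * math.pi**9 * ALPHA_F**3 * 0.08**2
           * A_BOHR * R0 * (R0 / d_lattice_cm)**3
           * M0C2**1.5 * MEC2)
    return val / BARN

c_ni = c_w0(3.52e-8)   # Ni lattice parameter, cm
c_pd = c_w0(3.89e-8)   # Pd lattice parameter, cm
print(c_ni, c_pd)      # close to 8.9e-10 and 6.6e-10 MeV^{5/2} b
```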
We introduce the ratio \begin{equation} \eta =\frac{\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle _{Sh}\sum_{l_{2}=-m_{2}}^{l_{2}=m_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert _{Sh}^{2}}{\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle _{W}\sum_{l_{2}=-m_{2}}^{l_{2}=m_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert _{W}^{2}}. \label{etha} \end{equation} (The subscript $Sh$ refers to the shell model.) With the aid of $\eta \equiv \eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) $ given by $\left( \ref{etha2}\right) $ (see Appendix E.) the cross section $\sigma _{Sh}$ calculated in the shell model can be written as \begin{equation} \sigma _{Sh}=\eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) \sigma _{W}. \label{sigmaSh} \end{equation} \subsection{Yield of events of the electron assisted neutron exchange process} The yield $dN/dt$ of events of the electron assisted neutron exchange process $A_{1},A_{2}\rightarrow A_{1}-1,A_{2}+1$ can be written as \begin{equation} \frac{dN}{dt}=N_{t}N_{ni}\sigma \Phi , \label{rate1} \end{equation} where $\sigma =\left\{ \sigma _{W}\text{ or }\sigma _{Sh}\right\} $, $\Phi $ is the flux of electrons, and $N_{t}$ is the number of target particles, i.e. the number $N_{A_{1}}$ of irradiated atoms of mass number $A_{1}$ in the metal. The contribution of $N_{ni}$ neutrons in each nucleus $_{Z}^{A_{1}}X$ is also taken into account. $N_{ni}$ is the number of neutrons in the uppermost energy level of the initial nucleus $_{Z}^{A_{1}}X$.
If $F$ and $D$ are the irradiated surface and the width of the sample, respectively, then the number of elementary cells $N_{c}$ in the sample is $N_{c}=FD/v_{c}=4FD/d^{3}$ in the case of $Ni$ and $Pd$, and the number of atoms in the elementary cell is $2r_{A_{1}}$, with $r_{A_{1}}$ the relative natural abundance of atoms $_{Z}^{A_{1}}X$; thus the number $N_{t}$ of target atoms of mass number $A_{1}$ in the process is \begin{equation} N_{t}=\frac{8}{d^{3}}r_{A_{1}}FD. \label{NA1} \end{equation} The wave numbers and energies of the two outgoing heavy particles are approximately $\mathbf{k}_{1}=-\mathbf{k}_{2}$, \begin{equation} E_{1}=\frac{A_{2}+1}{A_{1}+A_{2}}\Delta \text{ and }E_{2}=\frac{A_{1}-1}{A_{1}+A_{2}}\Delta . \label{E1} \end{equation} \subsection{Numerical data of electron assisted neutron exchange processes in $Ni$ and $Pd$} \begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil&\hfil$#$\hfil&\hfil$#$ \hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} A &58 &60 &61 &62 &64 \cr \Delta_{-} &-4.147 &-3.317 &0.251 &-2.526 &-1.587 \cr \Delta_{+} &0.928 &-0.251 &2.526 &-1.234 &-1.973 \cr r_{A} &0.68077 &0.26223 &0.0114 &0.03634 &0.00926 \cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{Numerical data of the $e+\text{ }_{28}^{A_{1}}Ni+\text{ }_{28}^{A_{2}}Ni\rightarrow e^{\prime }+\text{ }_{28}^{A_{1}-1}Ni+\text{ }_{28}^{A_{2}+1}Ni+\Delta $ reaction. The reaction is energetically allowed if $\Delta =\Delta _{-}(A_{1})+\Delta _{+}(A_{2})>0$ holds.
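For orientation, $\left( \ref{NA1}\right) $ can be evaluated for an assumed sample; the surface $F=1$ $cm^{2}$ and width $D=1$ $\mu m$ below are illustrative values of ours, not taken from the text.

```python
# Number of target atoms N_t = (8/d^3) r_A1 F D, Eq. (NA1),
# evaluated for an assumed Ni sample: F = 1 cm^2 irradiated surface,
# D = 1 micron width; r_A1 is the natural abundance of 61Ni (Table I).
d_ni = 3.52e-8        # Ni lattice parameter, cm
r_61 = 0.0114         # relative natural abundance of 61Ni
F_cm2 = 1.0           # irradiated surface, cm^2 (assumed)
D_cm = 1.0e-4         # sample width, cm (assumed, 1 micron)

N_t = 8.0 / d_ni**3 * r_61 * F_cm2 * D_cm
print(N_t)            # of order 10^17 target atoms
```

Even for this thin assumed sample the number of target atoms is macroscopically large, which is what makes the yield $\left( \ref{rate1}\right) $ potentially observable despite a small cross section.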
$A$ is the mass number, $r_{A}$ is the relative natural abundance, and $\Delta _{-}(A)=\Delta _{A}-\Delta _{A-1}$ and $\Delta _{+}(A)=\Delta _{A}-\Delta _{A+1}$ are given in $MeV$ units.} \label{Table1} \end{table} \begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil&\hfil$#$\hfil&\hfil$#$ \hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\hfil&\hfil$#$\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} A &102 &104 &105 &106 &108 &110 \cr \Delta_{-} &-2.497 &-1.912 &0.978 &-1.491 &-1.149 &-0.747 \cr \Delta_{+} &-0.446 &-0.978 &1.491 &-1.533 &-1.918 &-2.320 \cr r_{A} &0.0102 &0.1114 &0.2233 &0.2733 &0.2646 &0.1172 \cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{Numerical data of the $e+\text{ }_{46}^{A_{1}}Pd+\text{ }_{46}^{A_{2}}Pd\rightarrow e^{\prime }+\text{ }_{46}^{A_{1}-1}Pd+\text{ }_{46}^{A_{2}+1}Pd+\Delta$ reaction. The reaction is energetically allowed if $\Delta=\Delta_{-}(A_{1})+\Delta_{+}(A_{2})>0$ holds. $A$ is the mass number, $r_{A}$ is the relative natural abundance, and $\Delta_{-}(A)=\Delta_{A}-\Delta_{A-1}$ and $\Delta_{+}(A)=\Delta_{A}-\Delta_{A+1}$ are given in $MeV$ units.} \label{Table2} \end{table} \begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil &\hfil$#$\hfil&\hfil$#$ \hfil&\hfil$#$\hfil\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} A_{1}\rightarrow A_{1}-1&A_{2}\rightarrow A_{2}+1&\Delta($MeV$)&\eta \cr \noalign{\vskip2pt\hrule\vskip2pt} 61 \rightarrow 60 &58 \rightarrow 59 & 1.179 &7.02\times10^{-3}\cr 61 \rightarrow 60 &61 \rightarrow 62 & 2.777 &2.42\times10^{-8}\cr 64 \rightarrow 63 &61 \rightarrow 62 & 0.939 &2.08\times10^{-4}\cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{The values of the quantities $\protect\eta$ and $\Delta =\Delta _{-}(A_{1})+\Delta _{+}(A_{2})>0$, the latter in $MeV$ units, of the $e+\text{ }_{28}^{A_{1}}Ni+\text{ }_{28}^{A_{2}}Ni\rightarrow e^{\prime }+\text{ }_{28}^{A_{1}-1}Ni+\text{ }_{28}^{A_{2}+1}Ni+\Delta $ reaction.
The $\Delta _{-}(A_{1})$ and $\Delta _{+}(A_{2})$ values can be found in Table I. For the definition of $\protect\eta $ see $\left( \protect\ref{etha}\right)$ and $\left( \protect\ref{etha2}\right)$.} \label{Table3} \end{table} \begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil &\hfil$#$\hfil&\hfil$#$ \hfil&\hfil$#$\hfil\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} A_{1}\rightarrow A_{1}-1&A_{2}\rightarrow A_{2}+1&\Delta($MeV$)&\eta \cr \noalign{\vskip2pt\hrule\vskip2pt} 105 \rightarrow 104 &102 \rightarrow 103 & 0.532 & 1.84\times10^{-4}\cr 105 \rightarrow 104 &105 \rightarrow 106 & 2.469 & 8.88\times10^{-11}\cr 108 \rightarrow 107 &105 \rightarrow 106 & 0.342 & 2.82\times10^{-3}\cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{The values of the quantities $\protect\eta$ and $\Delta =\Delta _{-}(A_{1})+\Delta _{+}(A_{2})>0$, the latter in $MeV$ units, of the $e+\text{ }_{46}^{A_{1}}Pd+\text{ }_{46}^{A_{2}}Pd\rightarrow e^{\prime }+\text{ }_{46}^{A_{1}-1}Pd+\text{ }_{46}^{A_{2}+1}Pd+\Delta $ reaction. The $\Delta _{-}(A_{1})$ and $\Delta _{+}(A_{2})$ values can be found in Table II. For the definition of $\protect\eta$ see $\left( \protect\ref{etha}\right)$ and $\left( \protect\ref{etha2}\right)$.} \label{Table4} \end{table} As a first example we take $Ni$ as target material. In this case the possible processes are \begin{equation} e+\text{ }_{28}^{A_{1}}Ni+\text{ }_{28}^{A_{2}}Ni\rightarrow e^{\prime }+\text{ }_{28}^{A_{1}-1}Ni+\text{ }_{28}^{A_{2}+1}Ni+\Delta . \label{NiAp} \end{equation} Tables I. and III. contain the relevant data for reaction $\left( \ref{NiAp}\right) $. Describing neutrons in the uppermost energy level of $_{28}^{A}Ni$ isotopes we used $1p$ shell model states in the cases of $A=58-60$ and $0f$ shell model states in the cases of $A=61-64$.
Another interesting target material is $Pd$, in which the electron assisted neutron exchange processes are the \begin{equation} \text{ }e+\text{ }_{46}^{A_{1}}Pd+\text{ }_{46}^{A_{2}}Pd\rightarrow e^{\prime }+\text{ }_{46}^{A_{1}-1}Pd+\text{ }_{46}^{A_{2}+1}Pd+\Delta \label{PdAp} \end{equation} reactions. The relevant data can be found in Tables II. and IV. Describing neutrons in the uppermost energy level of $_{46}^{A}Pd$ isotopes we used $0g$ shell model states in the cases of $A=102-104$ and $1d$ shell model states in the cases of $A=105-108$. The nuclear data of the Tables are taken from \cite{Shir}. One can see from Tables III. and IV. that in both cases three possible pairs of isotopes exist which are energetically allowed (for which $\Delta >0$), and their rates differ in the factor $\left( 2l_{2}+1\right) N_{ni}\eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) r_{A_{1}}r_{A_{2}}\Delta ^{-3/2}$ only. The $\eta \equiv \eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) $ values of $Ni$ and $Pd$ can also be found in Tables III. and IV., respectively. The results of the numerical investigation of $\left( 2l_{2}+1\right) N_{ni}\eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) r_{A_{1}}r_{A_{2}}\Delta ^{-3/2}$ show that the $61\rightarrow 60,58\rightarrow 59$ and the $108\rightarrow 107,105\rightarrow 106$ reactions are the dominant ones among the processes in $Ni$ and $Pd$, respectively. In the case of $Ni$ it is found that the \begin{equation} \text{ }e+\text{ }_{28}^{61}Ni+\text{ }_{28}^{58}Ni\rightarrow e^{\prime }+\text{ }_{28}^{60}Ni+\text{ }_{28}^{59}Ni+1.179\text{ }MeV \label{Nilead} \end{equation} process of $\sigma _{Sh}=0.54/E_{ie}$ $mb$, with $E_{ie}$ in $MeV$, is the leading one. In this case the $_{28}^{60}Ni$ and the $_{28}^{59}Ni$ isotopes take away $0.585$ $MeV$ and $0.594$ $MeV$, respectively.
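The ranking stated here can be reproduced from the tabulated numbers alone. Below is a minimal Python sketch comparing the factor $\eta \,r_{A_{1}}r_{A_{2}}\Delta ^{-3/2}$ for the allowed channels, with $\eta $ and $\Delta $ taken from Tables III and IV and the relative abundances from Tables II and V; the shell factors $\left( 2l_{2}+1\right) N_{ni}$ are omitted, so only the order-of-magnitude ranking is meaningful:

```python
# Relative rate weights eta * r_A1 * r_A2 * Delta^(-3/2) of the allowed
# neutron exchange channels; the (2l2+1)*N_ni shell factors are omitted.
# eta and Delta (MeV) from Tables III/IV, abundances r_A from Tables II/V.

NI = {  # (A1 -> A1-1, A2 -> A2+1): (eta, Delta, r_A1, r_A2)
    ("61->60", "58->59"): (7.02e-3, 1.179, 0.018, 0.681),
    ("61->60", "61->62"): (2.42e-8, 2.777, 0.018, 0.018),
    ("64->63", "61->62"): (2.08e-4, 0.939, 0.009, 0.018),
}
PD = {
    ("105->104", "102->103"): (1.84e-4, 0.532, 0.2233, 0.0102),
    ("105->104", "105->106"): (8.88e-11, 2.469, 0.2233, 0.2233),
    ("108->107", "105->106"): (2.82e-3, 0.342, 0.2646, 0.2233),
}

def weight(eta, delta, r1, r2):
    return eta * r1 * r2 * delta ** -1.5

def dominant(channels):
    # channel with the largest weight
    return max(channels, key=lambda pair: weight(*channels[pair]))
```

Evaluating `dominant(NI)` and `dominant(PD)` selects the $61\rightarrow 60,58\rightarrow 59$ and $108\rightarrow 107,105\rightarrow 106$ channels, by margins of several orders of magnitude, in agreement with the statement above.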
In the case of $Pd$ the \begin{equation} e+\text{ }_{46}^{108}Pd+\text{ }_{46}^{105}Pd\rightarrow e^{\prime }+\text{ }_{46}^{107}Pd+\text{ }_{46}^{106}Pd+0.342\text{ }MeV \label{Pdlead} \end{equation} reaction of $\sigma _{Sh}=1.6/E_{ie}$ $mb$, with $E_{ie}$ in $MeV$, is found to be the leading one. In this case the $_{46}^{107}Pd$ and the $_{46}^{106}Pd$ isotopes take away $0.170$ $MeV$ and $0.172$ $MeV$, respectively. \section{Other results - Other charged particle assisted reactions} The transition probability per unit time and the cross section of the processes, which will be discussed below, may be determined in a similar manner as was done above in the case of the electron assisted neutron exchange process. The main difference is that in the matrix elements $V_{\mu i}^{Cb}$ and $V_{f\mu }^{St}$ different Coulomb factors appear according to the particles which take part in the reaction. \subsection{Electron assisted heavy charged particle exchange process} There is another possibility in the family of electron assisted exchange processes, when a charged heavy particle (such as $p$, $d$, $t$, $_{2}^{3}He$ and $_{2}^{4}He$) is exchanged. The process is called electron assisted heavy charged particle exchange process and it can be visualized with the aid of Fig. 1 too. Denoting the intermediate particle (particle $3$ in Fig. 1), which is exchanged, by $_{z_{3}}^{A_{3}}w$, the general electron assisted heavy charged particle exchange process reads as \begin{equation} e+\text{ }_{Z_{1}}^{A_{1}}X+\text{ }_{Z_{2}}^{A_{2}}Y\rightarrow e^{\prime }+\text{ }_{Z_{1}-z_{3}}^{A_{1}-A_{3}}X^{\ast }+\text{ }_{Z_{2}+z_{3}}^{A_{2}+A_{3}}Y^{\ast }+\Delta . \label{hpexchange} \end{equation} Here $e$ and $e^{\prime }$ denote the electron and $\Delta $ is the energy of the reaction, i.e.
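The quoted product energies follow from momentum conservation: the two heavy fragments recoil with equal and opposite momenta (the scattered electron carries away a negligible share of $\Delta $ here), so each fragment takes a share of $\Delta $ inversely proportional to its mass. A short sketch, using mass numbers in place of exact nuclear masses:

```python
# Two heavy products recoiling back-to-back share the reaction energy
# Delta in inverse proportion to their masses; mass numbers are used
# here in place of exact nuclear masses.

def split_energy(delta_mev, a1, a2):
    """Return (E1, E2) with E1/E2 = a2/a1 and E1 + E2 = delta_mev."""
    return delta_mev * a2 / (a1 + a2), delta_mev * a1 / (a1 + a2)

e_ni60, e_ni59 = split_energy(1.179, 60, 59)      # leading Ni channel
e_pd107, e_pd106 = split_energy(0.342, 107, 106)  # leading Pd channel
```

This reproduces the $0.585/0.594$ $MeV$ split for the $Ni$ channel and the $0.170/0.172$ $MeV$ split for the $Pd$ channel quoted in the text.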
the difference between the rest energies of the initial $\left( _{Z_{1}}^{A_{1}}X+_{Z_{2}}^{A_{2}}Y\right) $ and final $\left( _{Z_{1}-z_{3}}^{A_{1}-A_{3}}X^{\ast }+\text{ }_{Z_{2}+z_{3}}^{A_{2}+A_{3}}Y^{\ast }\right) $ states. $\Delta =\Delta _{-}+\Delta _{+},$ with $\Delta _{-}=\Delta _{Z_{1}}^{A_{1}}-\Delta _{Z_{1}-z_{3}}^{A_{1}-A_{3}}$ and $\Delta _{+}=\Delta _{Z_{2}}^{A_{2}}-\Delta _{Z_{2}+z_{3}}^{A_{2}+A_{3}}$. $\Delta _{Z_{1}}^{A_{1}}$, $\Delta _{Z_{1}-z_{3}}^{A_{1}-A_{3}}$, $\Delta _{Z_{2}}^{A_{2}}$, $\Delta _{Z_{2}+z_{3}}^{A_{2}+A_{3}}$ are the energy excesses of neutral atoms of mass number-charge number pairs $A_{1}$, $Z_{1}$; $A_{1}-A_{3}$, $Z_{1}-z_{3}$; $A_{2}$, $Z_{2}$; $A_{2}+A_{3}$, $Z_{2}+z_{3}$, respectively \cite{Shir}. In $\left( \ref{hpexchange}\right) $ the electron (particle $1$) Coulomb interacts with the nucleus $_{Z_{1}}^{A_{1}}X$ (particle $2$). A scattered electron (particle $1^{\prime }$), the intermediate particle $_{z_{3}}^{A_{3}}w$ (particle $3$) and the nucleus $_{Z_{1}-z_{3}}^{A_{1}-A_{3}}X^{\ast }$ (particle $2^{\prime }$) are created due to this interaction. The intermediate particle $_{z_{3}}^{A_{3}}w$ (particle $3$) is captured due to the strong interaction by the nucleus $_{Z_{2}}^{A_{2}}Y$ (particle $4$), forming the nucleus $_{Z_{2}+z_{3}}^{A_{2}+A_{3}}Y^{\ast }$ (particle $5$) in this manner. So in $\left( \ref{hpexchange}\right) $ the nucleus $_{Z_{1}}^{A_{1}}X$ (particle $2$) loses a particle $_{z_{3}}^{A_{3}}w$ which is taken up by the nucleus $_{Z_{2}}^{A_{2}}Y$ (particle $4$). The process is energetically forbidden if $\Delta <0$. Since particles $2^{\prime }$, $3$ and $4$ all have positive charge, and furthermore they all are heavy, the two Coulomb factors which appear in the cross section are $F_{2^{\prime }3}$ and $F_{34}$. Therefore the cross section of process $\left( \ref{hpexchange}\right) $ is expected to be much smaller than the cross section of process $\left( \ref{exchange}\right) $.
However process $\left( \ref{hpexchange}\right) $ may play an essential role in explaining the nuclear transmutations reported in \cite{Storms2} (see below). Since the Coulomb factors $F_{2^{\prime }3}$ and $F_{34}$ determine the order of magnitude of the cross section of the process (the cross section of the process is proportional to $F_{2^{\prime }3}F_{34}$), we treat them in more detail in Appendix F. \subsection{Electron assisted nuclear capture process} \begin{figure}[tbp] \resizebox{6.0cm}{!}{\includegraphics*{Fig2.eps}} \caption{The graph of electron assisted nuclear capture reactions. The simple lines represent free (initial (1) and final (1')) electrons. The doubled lines represent free, heavy, charged initial (2) particles (such as p, d), their intermediate state (2'), target nuclei (3) and the reaction product (4). The filled dot denotes Coulomb-interaction and the open circle denotes nuclear (strong) interaction.} \label{figure2} \end{figure} Now the electron assisted nuclear capture process (see Fig. 2) is considered, in which an electron-nucleus Coulomb scattering is followed by a capture process governed by the strong interaction \cite{kk1}. When describing the effect of the Coulomb interaction between the nucleus of charge number $Z$ and a slow electron one can also use the Coulomb factor $F_{e}(E)$ $\left( \ref{FeE}\right) $ of the electron defined above. As an example we consider the electron assisted $d+d\rightarrow \ _{2}^{4}He$ process with slow deuterons. In this case, one of the slow deuterons (as particle $2$) can enter into Coulomb interaction with a quasi-free, slow electron (as particle $1$) of the solid (see Fig. 2). In Coulomb scattering of free deuterons and electrons the wave number vector (momentum) is preserved, since their relative motion may be described by a plane wave which is multiplied by the corresponding Coulomb factor. In this second order process the Coulomb interaction is followed by the strong interaction, which induces a nuclear capture process.
The energy $\Delta $ of the nuclear reaction is divided between the electron and the heavy nuclear product. Since $m_{N}\gg m_{e}$ ($m_{N}$ is the rest mass of the nuclear product), the electron will take almost all the total nuclear reaction energy $\Delta $ away (there is no gamma emission) and the magnitude $k_{1^{\prime }}$ of its wave number vector $\mathbf{k}_{1^{\prime }}$ reads $k_{1^{\prime }}=\sqrt{\Delta ^{2}+2m_{e}c^{2}\Delta }/\left( \hbar c\right) \simeq \Delta /\left( \hbar c\right) $ $\left( \text{if }\Delta \gg m_{e}c^{2}\right) $. If initially the electron and the deuteron move slowly and the magnitudes of their wave number vectors are much smaller than $\Delta /\left( \hbar c\right) $, then the initial wave number vectors can be neglected in the wave number vector (momentum) conservation and consequently, in the intermediate state (in state $2^{\prime }$) the deuteron gets a wave number vector $\mathbf{k}_{2^{\prime }}=-\mathbf{k}_{1^{\prime }}$. If $\Delta =23.84$ $MeV$, which is the reaction energy of the $d+d\rightarrow $ $_{2}^{4}He$ reaction, then the deuteron $2^{\prime }$ will have $k_{2^{\prime }}=\Delta /\left( \hbar c\right) $ and its corresponding (virtual) kinetic energy $E_{2^{\prime }}=\Delta ^{2}/\left( 4m_{0}c^{2}\right) =76.5$ $keV$ in the $CM$ coordinate system. At this energy the Coulomb factor value between particles $2^{\prime }$ and $3$ reads as $F_{2^{\prime }3}=0.103$. It must be compared to the extremely small Coulomb factor value, e.g. in the case of energy $E=1$ $eV$ to $F_{23}\left( 1\text{ }eV\right) =1.1\times 10^{-427}$, that is characteristic of the usual, first order process. If one compares again the cross sections of the second order and first order (electron assisted and usual) processes, then their ratio is approximately proportional to $F_{e}F_{2^{\prime }3}/F_{23}(E)$, which becomes extremely large with decreasing $E$ too.
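The figures quoted in this paragraph can be reproduced with the standard Coulomb factor $F=2\pi \eta /\left( e^{2\pi \eta }-1\right) $. The parametrization below, $\eta =z_{1}z_{2}\alpha _{f}\sqrt{\mu c^{2}/\left( 2E\right) }$ with $\mu c^{2}$ taken as one atomic mass unit for the $d$-$d$ pair and $m_{0}c^{2}$ as two atomic mass units, is our reading of the text and should be treated as an illustrative assumption:

```python
import math

ALPHA_F = 1.0 / 137.036   # fine structure constant
AMU = 931.494             # MeV, atomic mass unit

def log10_coulomb_factor(e_mev, mu_c2=AMU, z1z2=1):
    """log10 of F = 2*pi*eta/(exp(2*pi*eta) - 1) with
    eta = z1*z2*alpha_f*sqrt(mu*c^2/(2*E)); logs avoid underflow at low E."""
    x = 2.0 * math.pi * z1z2 * ALPHA_F * math.sqrt(mu_c2 / (2.0 * e_mev))
    if x > 50.0:                      # deep under the barrier: F ~ x*exp(-x)
        return (math.log(x) - x) / math.log(10.0)
    return math.log10(x / math.expm1(x))

DELTA = 23.84                              # MeV, d + d -> 4He
E_VIRTUAL = DELTA**2 / (4.0 * 2.0 * AMU)   # Delta^2/(4*m0*c^2), m0 = 2 amu
F_23_PRIME = 10.0 ** log10_coulomb_factor(E_VIRTUAL)
```

With these assumptions `E_VIRTUAL` comes out as about $76$ $keV$ and `F_23_PRIME` as about $0.10$, while at $E=1$ $eV$ the routine returns $\log _{10}F\approx -427$, in line with the values quoted above.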
(The model and the details of the calculation, the results and their discussion can be found in \cite{kk1}.) The cross section of the electron assisted neutron exchange process is expected to be larger than the cross section of the electron assisted nuclear capture process because of the Coulomb factor appearing in the latter. \subsection{Heavy particle assisted nuclear processes} \begin{figure}[tbp] \resizebox{6.0cm}{!}{\includegraphics*{Fig3.eps}} \caption{The graph of heavy particle assisted nuclear capture reactions. The lines 1, 1' represent the free (initial (1) and final (1')) heavy particle which assists the reaction. The other lines represent heavy, charged initial (2) particles, their intermediate state (2'), target nuclei (3) and the reaction product (4). The filled dot denotes Coulomb-interaction and the open circle denotes nuclear (strong) interaction.} \label{figure3} \end{figure} \begin{figure}[tbp] \resizebox{7.0cm}{!}{\includegraphics*{Fig4.eps}} \caption{The graph of the heavy particle assisted heavy charged particle (such as $p$, $d$, $t$, $_{2}^{3}He$ and $_{2}^{4}He$) exchange reaction. The lines 1, 1' represent the free (initial (1) and final (1')) heavy particle which assists the reaction. The other lines represent heavy, charged initial nuclei (2), their final state (2', which is a nucleus having lost particle 3), the transferred particle (3), target nuclei (4) and the reaction product (5). The filled dot denotes Coulomb-interaction and the open circle denotes nuclear (strong) interaction.} \label{figure4} \end{figure} In electron assisted nuclear reactions heavy, charged particles of energy of a few $MeV$ may be created.
In the decelerating process of the reaction products of the electron assisted processes the energy of these heavy particles may become intermediately low (about $0.01$ $MeV$), so that, if the particles are light, their Coulomb factor may be only moderately small; therefore their assistance in nuclear processes also has to be considered among the accountable nuclear processes. The corresponding graphs can be seen in Fig. 3 and Fig. 4. Fig. 3 depicts a heavy, charged particle assisted nuclear capture process and Fig. 4 represents a heavy, charged particle assisted heavy charged particle (such as $p$, $d$, $t$, $_{2}^{3}He$ and $_{2}^{4}He$) exchange reaction. Now all particles are heavy. According to the applied notation, particles $2^{\prime }$, $3$ (in Fig. 3) and particles $3$, $4$ (in Fig. 4) take part in a nuclear process and particle $1$ only assists it. The different processes are distinguished by the type of the assisting particle and also by the type of the nuclear process. In our model charged, heavy particles, such as protons $\left( p\right) $ and deuterons $\left( d\right) $, which are supposed to move freely in a solid (e.g. in a metal), may be particle $1$. The other particles that may take part in the processes are: localized heavy, charged particles (bound, localized $p$, $d$ and other nuclei) as the participants of Coulomb scattering (with particle $1$), and localized heavy, charged particles (bound, localized $p$, $d$ and other nuclei) as nuclear targets (as particle $3$ in Fig. 3 and particle $4$ in Fig. 4). The problem that there may be identical, indistinguishable particles in the system is also disregarded here. The calculation of the transition probability per unit time of the process can be performed through steps similar to those applied in the calculation of the rate of an electron assisted process. The main difference is that now particle $1$ is heavy.
In order to show the capability of the heavy particle assisted nuclear processes, some cases of the proton assisted proton capture \begin{equation} p+\text{ }_{Z}^{A}X+p\rightarrow \text{ }_{Z+1}^{A+1}Y+p^{\prime }+\Delta \end{equation} were investigated in Appendix III. (Ch. IX.) of \cite{kk1}. \section{Discussion - Analysis of experimental observations} \subsection{Anomalous screening effect} Recently the electron assisted low energy $dd$ reactions were investigated in solids \cite{kk4}. It was shown that if a deuterized metal is irradiated with slow, free deuterons then the $e+d+d\rightarrow e^{\prime }+p+t$ and $e+d+d\rightarrow e^{\prime }+n+\ ^{3}He$ electron assisted $dd$ processes have measurable probabilities even in the case of slow deuterons. (The electron assisted $d+d\rightarrow p+t$ and $d+d\rightarrow n+$ $^{3}He$ reactions are electron assisted neutron and proton exchange processes, respectively. In these processes electrons of the conduction band of metals, which may be considered quasi-free, assist the reaction.) The cross sections and the yields in an irradiated sample were also determined. The cross sections $\sigma _{pt}$ and $\sigma _{nHe}$ of the electron assisted $d+d\rightarrow p+t$ and $d+d\rightarrow n+$ $^{3}He$ reactions read as $\sigma _{pt}=uC_{pt}/E$ and $\sigma _{nHe}=uC_{nHe}/E$, respectively, with $E$ the kinetic energy of the deuterons in the beam (in $MeV$ units) and $u$ the deuteron over metal number density ratio in the target. We have obtained $C_{pt}=2.32\times 10^{-8}$ $MeVb$ and $C_{nHe}=1.82\times 10^{-8}$ $MeVb$ in the case of deuterized $Pd$. We are going to compare our cross section result with the cross sections of the usual $d(d,n)^{3}He$ and $d(d,p)t$ reactions.
The energy dependence of the cross section $\left( \sigma \right) $ of the usual charged-particle induced reactions reads as \begin{equation} \sigma \left( E\right) =S\left( E\right) \exp \left[ -2\pi \eta _{jk}\left( E\right) \right] /E, \label{sigmaast} \end{equation} where $S(E)=S(0)+S_{1}E+S_{2}E^{2}$ is the astrophysical factor \cite{Angulo}. In the low energy range the $S(E)=S(0)$ approximation is valid. The $\exp \left[ -2\pi \eta _{jk}\left( E\right) \right] $ dependence originates from the Coulomb factor, thus the $S\left( 0\right) /E$ part of the cross section is worth comparing to our results, especially the $S(0)$ values to the $C_{pt}$ and $C_{nHe}$ values obtained by us. The $S(0)$ values of the processes $d(d,n)^{3}He$ and $d(d,p)t$ are $0.055$ $MeVb$ and $0.056$ $MeVb$, respectively \cite{Angulo}. On the basis of these numbers we can also say that our result seems to be reasonable. It is useful to introduce the relative yield \begin{equation} r=\frac{\left( \frac{dN}{dt}\right) _{pt}}{\left( \frac{dN}{dt}\right) _{usual}}=\frac{g_{e}C_{pt}}{2N_{c}S(0)}\exp \left[ 2\pi \eta \left( E\right) \right] , \label{ratio} \end{equation} which is the ratio of the yields of the electron assisted $\left[ \left( dN/dt\right) _{pt}\right] $ and normal $\left[ \left( dN/dt\right) _{usual}\right] $ $d+d\rightarrow p+t$ processes in an elementary volume of the sample. Here $N_{c}$ is the number of atoms in the elementary cell and $g_{e}$ is the number of conduction electrons in an elementary cell. \begin{figure}[tbp] \resizebox{8.5cm}{!}{\includegraphics*{Fig5.eps}} \caption{The beam energy ($E$) dependence of $log_{10}(r)$ in the case of $Pd$. The relative yield $r$ is the ratio of the rates of the electron assisted and normal $d+d\rightarrow p+t$ processes in an elementary volume of the sample.} \label{figure5} \end{figure} The results were connected with the so called anomalous screening effect \cite{Huke}.
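The relative yield curve of Fig. 5 can be evaluated directly. For deuterized $Pd$ the fit quoted in the next paragraph is $r=1.04\times 10^{-6}\exp \left[ 2\pi \eta \left( E\right) \right] $ with $\eta \left( E\right) =\alpha _{f}\sqrt{m_{0}c^{2}/E}$; identifying $m_{0}c^{2}$ with one atomic mass unit (an assumption on our part, which reproduces the quoted numbers) gives the short sketch:

```python
import math

ALPHA_F = 1.0 / 137.036  # fine structure constant
M0C2 = 931.494           # MeV; identifying m0*c^2 with one atomic mass
                         # unit reproduces the quoted r values

def relative_yield(e_mev):
    """r(E) = 1.04e-6 * exp(2*pi*eta(E)), eta(E) = alpha_f*sqrt(m0c^2/E)."""
    eta = ALPHA_F * math.sqrt(M0C2 / e_mev)
    return 1.04e-6 * math.exp(2.0 * math.pi * eta)
```

This gives $r\approx 5.4\times 10^{-4}$ at $E=0.05$ $MeV$, $r\approx 409$ at $E=0.005$ $MeV$ and $r\approx 1$ at $E=0.01031$ $MeV$, matching the values quoted below.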
In the case of deuterized $Pd$ we have got $r=1.04\times 10^{-6}\exp \left[ 2\pi \eta \left( E\right) \right] $, where $\eta \left( E\right) =\alpha _{f}\sqrt{m_{0}c^{2}/E}$, resulting in $r=5.43\times 10^{-4}$ at $E=0.05$ $MeV$ and $r=409$ at $E=0.005$ $MeV$ (the energy interval $0.005$ $MeV<E<0.05$ $MeV$ was investigated in \cite{Huke}) and $r=1.006$ at $E=0.01031$ $MeV$. For $E<0.023$ $MeV$ the relative yield is larger than $1\%$. The energy dependence of $\log _{10}(r)$ can be seen in Fig. 5. From these data and from Fig. 5 one can see that the yield produced by the electron assisted $d+d\rightarrow p+t$ process with decreasing beam energy becomes comparable to and larger than the yield produced by the normal $d+d\rightarrow p+t$ process. Since $C_{nHe}=1.82\times 10^{-8}$ $MeVb$, $\sigma _{nHe}$ has the same order of magnitude as that of $\sigma _{pt}$ and so a similar statement can be made in the case of the electron assisted $d+d\rightarrow n+$ $^{3}He$ reaction too. Consequently, one can conclude that the electron assisted $d+d\rightarrow p+t$ and $d+d\rightarrow n+$ $^{3}He$ processes should be taken into account when evaluating the data \cite{Huke} of low energy fusion reactions in metals. \subsection{Fleischmann-Pons experiment} In the experiment of \cite{FP1} $Pd$ was filled with deuterons during electrolysis. The electrolyte had $LiOD$ content too. Two types of electron assisted neutron exchange processes with $Pd$ nuclei are possible: \begin{equation} e+d+_{46}^{A}Pd\rightarrow e^{\prime }+p+_{46}^{A+1}Pd+\Delta \label{Pd1} \end{equation} with $\Delta =\Delta _{-}(d)+\Delta _{+}(A)$ and \begin{equation} e+_{3}^{7}Li+_{46}^{A}Pd\rightarrow e^{\prime }+_{3}^{6}Li+_{46}^{A+1}Pd+\Delta \label{Pd2} \end{equation} with $\Delta =\Delta _{-}(Li)+\Delta _{+}(A)$ (the $\Delta _{+}(A)$ values can be found in Table II).
$\Delta _{-}(d)=\Delta _{d}-\Delta _{p}=5.847$ $MeV$ and $\Delta _{-}(Li)=\Delta (_{3}^{7}Li)-\Delta (_{3}^{6}Li)=0.821$ $MeV$ are the energies of neutron loss of $d$ and $_{3}^{7}Li$, where $\Delta _{d}$, $\Delta _{p}$, $\Delta (_{3}^{7}Li)$ and $\Delta (_{3}^{6}Li)$ are the mass excesses of the deuteron, the proton, $_{3}^{7}Li$ and $_{3}^{6}Li$, respectively. In reactions $\left( \ref{Pd1}\right) $ and $\left( \ref{Pd2}\right) $ electrons of the metal are particle $1$, $d$ and $_{3}^{7}Li$ are particle $2$ and $_{46}^{A}Pd$ appears as particle $4$ (see Fig. 1). Reaction $\left( \ref{Pd1}\right) $ is energetically allowed for all the natural isotopes of $Pd$ since $\Delta =\Delta _{-}(d)+\Delta _{+}(A)>0$ for each $A$ (see the $\Delta _{+}(A)$ values of Table II). In the case of reaction $\left( \ref{Pd2}\right) $ the $\Delta =\Delta _{-}(Li)+\Delta _{+}(A)>0$ condition holds at $A=102$ and $A=105$, resulting in $\Delta =0.375$ $MeV$ and $\Delta =2.312$ $MeV$, respectively. However, at the $Pd$ surface other types of electron assisted neutron exchange processes with $d$ and $Li$ nuclei of the electrolyte and $d$ solved in $Pd$ are possible: \begin{equation} e+\text{ }d+d\rightarrow e^{\prime }+p+\text{ }t+\Delta , \label{ddtp} \end{equation} \begin{equation} e+\text{ }d+d\rightarrow e^{\prime }+n+\text{ }_{2}^{3}He+\Delta , \label{ddnHe} \end{equation} \begin{equation} e+d+\text{ }_{3}^{6}Li\rightarrow e^{\prime }+p+\text{ }_{3}^{7}Li+\Delta , \label{Li1} \end{equation} \begin{equation} e+d+\text{ }_{3}^{6}Li\rightarrow e^{\prime }+2_{2}^{4}He+\Delta , \label{Li2} \end{equation} \begin{equation} e+d+\text{ }_{3}^{7}Li\rightarrow e^{\prime }+2_{2}^{4}He+n+\Delta \label{Li3} \end{equation} and \begin{equation} e+d+\text{ }_{3}^{7}Li\rightarrow e^{\prime }+\text{ }_{4}^{8}Be+n+\Delta , \label{Li4} \end{equation} which is promptly followed by the decay $_{4}^{8}Be\rightarrow 2_{2}^{4}He$ ($\Gamma _{\alpha }=6.8$ $eV$).
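The energetics above follow directly from tabulated mass excesses and the $\Delta _{+}(A)$ column of Table II. A small check, with mass excess values rounded to match the $MeV$ figures used in the text:

```python
# Mass excesses in MeV, rounded to match the values used in the text,
# and the Delta_+(A) column of Table II for the natural Pd isotopes.
MASS_EXCESS = {"d": 13.136, "p": 7.289, "7Li": 14.907, "6Li": 14.086}
DELTA_PLUS_PD = {102: -0.446, 104: -0.978, 105: 1.491,
                 106: -1.533, 108: -1.918, 110: -2.320}

delta_minus_d = MASS_EXCESS["d"] - MASS_EXCESS["p"]       # 5.847 MeV
delta_minus_li = MASS_EXCESS["7Li"] - MASS_EXCESS["6Li"]  # 0.821 MeV

# Reaction (Pd1): allowed for every natural Pd isotope.
pd1_allowed = all(delta_minus_d + dp > 0 for dp in DELTA_PLUS_PD.values())
# Reaction (Pd2): allowed only where Delta_-(Li) + Delta_+(A) > 0.
pd2_allowed = {a for a, dp in DELTA_PLUS_PD.items()
               if delta_minus_li + dp > 0}
```

The check confirms that $\left( \ref{Pd1}\right) $ is allowed for all natural $Pd$ isotopes, while $\left( \ref{Pd2}\right) $ is allowed only at $A=102$ and $A=105$ with $\Delta =0.375$ and $2.312$ $MeV$.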
In these reactions electrons of the metal are particle $1$ and $d$ is particle $2$. In reaction $\left( \ref{Pd1}\right) $ protons of energy up to $7.269$ $MeV$ and in reaction $\left( \ref{Pd2}\right) $ $_{3}^{6}Li$ particles of maximum energy $2.189$ $MeV$ are created, which may enter into usual nuclear reactions with the nuclei of the deuteron loaded $Pd$ and the electrolyte. These are (without completeness): the usual $pd\rightarrow \ _{2}^{3}He+\gamma $ reaction, \begin{equation} p+_{3}^{7}Li\rightarrow 2_{2}^{4}He+Q\text{ with }Q=\Delta +E_{kin}(p), \label{pLi} \end{equation} \begin{equation} _{3}^{6}Li+d\rightarrow 2_{2}^{4}He+Q\text{ with }Q=\Delta +E_{kin}(Li), \label{dLi1} \end{equation} \begin{equation} _{3}^{6}Li+d\rightarrow p+_{3}^{7}Li+Q\text{ with }Q=\Delta +E_{kin}(Li). \label{dLi2} \end{equation} In $\left( \ref{pLi}\right) $ and $\left( \ref{dLi1}\right) $ the emitted $_{2}^{4}He$ has energy $E_{^{4}He}>8.674$ $MeV$ and $E_{^{4}He}>11.186$ $MeV$, and in $(\ref{dLi2})$ the created $p$ and $_{3}^{7}Li$ have energy $E_{p}>4.397$ $MeV$ and $E_{^{7}Li}>0.628$ $MeV$, respectively. It can be seen that in $\left( \ref{pLi}\right) $ and $\left( \ref{dLi1}\right) $ $_{2}^{4}He$ is produced. The $_{3}^{7}Li$ particles may enter into the reaction \begin{equation} _{3}^{7}Li+d\rightarrow 2_{2}^{4}He+n+Q\text{ with }Q=\Delta +E_{kin}(Li), \label{dLi3} \end{equation} which contributes to the $_{2}^{4}He$ production too. Here and above $E_{kin}(p)$ and $E_{kin}(Li)$ are the kinetic energies of the initial protons, $_{3}^{6}Li$ and $_{3}^{7}Li$ isotopes. From the above one can see that at least twelve types of reactions (altogether $18$ reactions) exist which are capable of energy production, and in half of them energy production is accompanied by $_{2}^{4}He$ production. It is reasonable to expect that reactions $\left( \ref{Pd1}\right) $ and $\left( \ref{Pd2}\right) $ have the highest rate in the above list of reactions.
In the majority of the above reactions charged particles, mostly heavy charged particles, are created with short range, and so they lose their energy in the matter of the experimental apparatus, mainly in the electrode (cathode) and the electrolyte; therefore their direct observation is difficult. It is mainly heat production, which is a consequence of deceleration in the matter of the apparatus, that can be experienced. About one third of the processes, mainly the secondary processes, are the sources of neutron emission. X- and $\gamma $-rays may originate mainly from bremsstrahlung. The above reasoning tallies with the experimental observations. In reactions $\left( \ref{Pd1}\right) -\left( \ref{dLi3}\right) $ heavy, charged particles of kinetic energy lying in the $MeV$ range are created which are able to assist nuclear reactions. One can obtain the possible heavy charged particle assisted reactions if in reactions $\left( \ref{Pd1}\right) -\left( \ref{Li4}\right) $ the electron is replaced by the heavy charged particles ($p$, $t$, $_{2}^{3}He$, $_{2}^{4}He$, $_{3}^{6}Li$, $_{3}^{7}Li$, $_{4}^{8}Be$ and $_{46}^{A+1}Pd$ with $A=102,104-106,108,110$) which are created in reactions $\left( \ref{Pd1}\right) -\left( \ref{dLi3}\right) $. Since the number of possible heavy charged particles is $13$ and the number of reactions which may be assisted by them is $8$, at least $104$ heavy charged particle assisted reactions must be taken into account. Consequently, it is a rather great theoretical challenge to determine precisely the relative rates and the couplings of all the accountable reactions, a work which is, nevertheless, necessary for an accurate quantitative analysis of the experiments. The relative rates of coupled reactions of many types depend significantly on the geometry, the kind of matter and other parameters of the experimental apparatus and on some further variables which may be attached to a concrete experiment.
This situation may be responsible for the diversity of the results of experiments which are thought to have been carried out under seemingly identical circumstances. \subsection{Nuclear transmutation} As to the phenomenon of nuclear transmutation \cite{Storms2}, we demonstrate its possibility only. First let us see the possibility of normal reactions. For instance, in a Fleischmann-type experiment $_{3}^{6}Li$ particles of energy up to $2.189$ $MeV$ are created in reaction $\left( \ref{Pd2}\right) $, so the reaction \begin{equation} _{3}^{6}Li+_{3}^{6}Li\rightarrow _{6}^{12}C+\gamma +Q \label{LiLI} \end{equation} may have minor, but measurable probability. Here $Q=\Delta +E_{kin}(Li)$. The Coulomb factor of reaction $\left( \ref{LiLI}\right) $ is $F_{Li,Li}=1.71\times 10^{-3}$ at $2.189$ $MeV$ kinetic energy of the $_{3}^{6}Li$ particles. The magnitude of the Coulomb factor indicates that the rate of reaction $\left( \ref{LiLI}\right) $ may be large enough to be able to produce carbon traces in observable quantity. Moreover, in reactions $\left( \ref{Pd1}\right) $ and $\left( \ref{Pd2}\right) $ free $_{46}^{A}Pd$ particles are created, offering e.g. the possibility of the \begin{equation} e+_{46}^{A_{1}}Pd+_{46}^{A_{2}}Pd\rightarrow e^{\prime }+_{44}^{A_{1}-3}Ru+_{48}^{A_{2}+3}Cd+\Delta \label{He3exchange} \end{equation} electron assisted $_{2}^{3}He$ exchange process. The electron and the other $Pd$ particle are in the solid. Analyzing mass excess data \cite{Shir} it was found that e.g. the $e+_{46}^{103}Pd+_{46}^{111}Pd\rightarrow e^{\prime }+_{44}^{100}Ru+_{48}^{114}Cd+\Delta $ $_{2}^{3}He$ exchange process has reaction energy $\Delta =5.7305$ $MeV$. [$_{46}^{103}Pd$ and $_{46}^{111}Pd$ are produced in reaction $\left( \ref{Pd1}\right) $.]
Calculating the $F_{2^{\prime }3}=F_{34}$ Coulomb factors, taking $A=100$, $Z=46$, $A_{3}=3$, $z_{3}=2$ in $\left( \ref{F2'3F34}\right) $, one gets $F_{2^{\prime }3}F_{34}=2.5\times 10^{-12}$, which seems to be a large enough value to produce $Cd$ and $Ru$ traces in an experiment lasting many days. The above reactions may offer a starting point for the explanation of nuclear transmutations. \subsection{Rossi-type reactor (E-cat)} Recently the Rossi-type reactor \cite{Rossi} (E-Cat) was experimentally investigated in detail \cite{Levi}. The fuel contained mostly $Ni$ and also $Li$ in appreciable quantity; there was $0.011$ $g$ $Li$ in $1$ $g$ of fuel. The isotope composition of the unused fuel was equal to the relative natural abundances. But the isotope composition of the ash (the fuel after the 32 day run of the reactor) strongly changed. (The measured relative abundances of $Li$ and $Ni$ isotopes in the fuel and the ash can be seen in Table V. The natural abundances are also given for comparison. The data are taken from Appendix 3. of \cite{Levi}.) One can see that the $_{28}^{62}Ni$ isotope is enriched and the other $Ni$ isotopes are depleted. Furthermore, the relative $_{3}^{7}Li$ content decreased from $0.917$ to $0.079$ while the relative $_{3}^{6}Li$ content increased from $0.086$ to $0.921$. \begin{table}[tbp] \tabskip=8pt \centerline {\vbox{\halign{\strut $#$\hfil &\hfil$#$\hfil&\hfil$#$ \hfil&\hfil$#$\hfil\cr \noalign{\hrule\vskip2pt\hrule\vskip2pt} Isotope&Fuel&Ash&Natural \cr \noalign{\vskip2pt\hrule\vskip2pt} _{3}^{6}Li & 0.086 & 0.921 & 0.075\cr _{3}^{7}Li & 0.914 & 0.079 & 0.925\cr _{28}^{58}Ni & 0.67 & 0.008 & 0.681\cr _{28}^{60}Ni & 0.263 & 0.005 & 0.262\cr _{28}^{61}Ni & 0.019 & 0.000 & 0.018\cr _{28}^{62}Ni & 0.039 & 0.987 & 0.036\cr _{28}^{64}Ni & 0.01 & 0 & 0.009\cr \noalign{\vskip2pt\hrule\vskip2pt\hrule}}}} \caption{Measured relative abundances of $Li$ and $Ni$ isotopes in fuel and ash. The natural relative abundances are also given for comparison.
The data are taken from \protect\cite{Levi}.} \label{Table5} \end{table} The reactor worked for about ten days at temperature $T_{1}=1533$ $K$ and for the remaining time at temperature $T_{2}=1673$ $K$. At these temperatures a free electron gas may be created from the $Ni$ powder of the fuel due to the thermionic emission process. The emitted flux of electrons can be determined from the current density of electrons according to Richardson's law using the work function $U=5.24$ $eV$ of $Ni$. The obtained thermionic electron fluxes are $\Phi _{1}=7.5\times 10^{9}$ $cm^{-2}s^{-1}$ and $\Phi _{2}=2.4\times 10^{11}$ $cm^{-2}s^{-1}$ at $T_{1}$ and $T_{2}$, respectively. Regarding the large surface of the powder fuel it is reasonable to suppose that the free electron gas is formed near the surfaces of the grains of the powder. But if a free electron gas interacts with the $LiAlH_{4}-Ni$ powder mixture applied, then the above observations can be well explained by the electron assisted neutron exchange processes. $_{3}^{7}Li$ has $\Delta _{-}=0.8214$ $MeV$ so it is able to lose a neutron. The $\Delta _{+}$ values of the $Ni$ isotopes can be found in Table I. Completing Table I with $\Delta _{+}(_{28}^{59}Ni)=3.319$ $MeV$ (the half life of $_{28}^{59}Ni$ is $\tau =7.6\times 10^{4}$ $y$), one can recognize that the $e+\text{ }_{3}^{7}Li+\text{ }_{28}^{A}Ni\rightarrow e^{\prime }+\text{ }_{3}^{6}Li+\text{ }_{28}^{A+1}Ni+\Delta $ reaction has $\Delta >0$ for $A=58-61$, but in the case of $A=62$ the chain of reactions breaks since in this case $\Delta <0$ because $\Delta _{+}(_{28}^{62}Ni)=-1.234$ $MeV$. The $64\rightarrow 63;61\rightarrow 62$ reaction of type $\left( \ref{NiAp}\right) $ (see Table III) leads to the production of $_{28}^{63}Ni$ ($\tau =100.1$ $y$), which has $\Delta _{-}(_{28}^{63}Ni)=1.2335$ $MeV$, allowing and coupling transition $63\rightarrow 62$ to transitions $58\rightarrow 59;59\rightarrow 60;60\rightarrow 61$ and $61\rightarrow 62$ in reaction $\left( \ref{NiAp}\right) $.
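The thermionic flux estimate can be sketched with Richardson's law, $J=A\,T^{2}\exp \left( -U/kT\right) $. Using the ideal Richardson constant $A_{0}\approx 120.2$ $A\,cm^{-2}K^{-2}$ (our assumption; the quoted absolute fluxes evidently correspond to a somewhat smaller effective emission constant) reproduces the order of magnitude and the ratio $\Phi _{2}/\Phi _{1}\approx 32$ of the quoted values:

```python
import math

A0 = 120.2        # A cm^-2 K^-2, ideal Richardson constant (assumed)
K_B = 8.617e-5    # eV/K, Boltzmann constant
Q_E = 1.602e-19   # C, elementary charge

def thermionic_flux(t_kelvin, work_function_ev=5.24):
    """Electron flux (cm^-2 s^-1) from Richardson's law J = A0*T^2*exp(-U/kT)."""
    j = A0 * t_kelvin**2 * math.exp(-work_function_ev / (K_B * t_kelvin))
    return j / Q_E   # convert current density to particle flux

phi_1 = thermionic_flux(1533.0)  # ~1e10 cm^-2 s^-1 at T_1
phi_2 = thermionic_flux(1673.0)  # ~3e11 cm^-2 s^-1 at T_2
```

The strong temperature dependence (a factor of about $30$ between $T_{1}$ and $T_{2}$) comes almost entirely from the exponential barrier term.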
These facts explain the enrichment of $_{28}^{62}Ni$ and $_{3}^{6}Li$ and the depletion of $_{3}^{7}Li$ and of the $Ni$ isotopes of $A=58-61$ and $64$. (Reactions of type $\left( \ref{NiAp}\right) $ also contribute to the enrichment of $_{28}^{62}Ni$ (see Table III).) \section{Conclusion} It is thought that, in principle, the electron assisted processes are able to answer the questions raised in the introduction. The replacement of the original, extremely small Coulomb factor by the order-of-unity Coulomb factor of the electron in electron assisted processes answers problem (a). The electron assisted nuclear reactions and the reactions which are coupled with them are not accompanied by the expected nuclear end products, answering problem (b). Problem (c), the asserted appearance of nuclear transmutations, is partly answered in Section IV.C. with the aid of charged particle assisted and usual nuclear reactions. Summarizing, the theoretical results expounded and their successful applications in explaining some unresolved experimental facts inspire us to say that the study of charged particle assisted nuclear reactions, especially the electron assisted neutron exchange processes, may start a renaissance in the field of low energy nuclear physics. \section{Appendix} \subsection{Initial, intermediate and final states of electron assisted neutron exchange process} Let $\Psi _{i}$, $\Psi _{\mu }$ and $\Psi _{f}$ denote the space dependent parts of the initial, intermediate and final states, respectively.
The initial state has the form \begin{equation} \Psi _{i}(\mathbf{x}_{e},\mathbf{x}_{1},\mathbf{x}_{n1},\mathbf{x}_{2})=\psi _{ie}\left( \mathbf{x}_{e}\right) \psi _{i1n}(\mathbf{x}_{1},\mathbf{x}_{n1})\psi _{i2}(\mathbf{x}_{2}), \label{Pszii} \end{equation} where \begin{equation} \psi _{ie}\left( \mathbf{x}_{e}\right) =V^{-1/2}e^{\left( i\mathbf{k}_{ie}\cdot \mathbf{x}_{e}\right) }\text{ and }\psi _{i2}(\mathbf{x}_{2})=V^{-1/2}e^{\left( i\mathbf{k}_{i2}\cdot \mathbf{x}_{2}\right) } \label{psziei} \end{equation} are the initial states of the electron and the nucleus $_{Z}^{A_{2}}X$, and $\psi _{i1n}(\mathbf{x}_{1},\mathbf{x}_{n1})$ is the initial state of the neutron and the initial $A_{1}-1$ nucleons of the nucleus $_{Z}^{A_{1}}X$. $\mathbf{x}_{e}$, $\mathbf{x}_{1}$, $\mathbf{x}_{n1}$ and $\mathbf{x}_{2}$ are the coordinates of the electron, the center of mass of the initial $A_{1}-1$ nucleons, the neutron and the nucleus $_{Z}^{A_{2}}X$, respectively. $\mathbf{k}_{ie}$ and $\mathbf{k}_{i2}$ are the initial wave vectors of the electron and the nucleus $_{Z}^{A_{2}}X$, and $V$ is the volume of normalization. The initial state $\psi _{i1n}(\mathbf{x}_{1},\mathbf{x}_{n1})$ of the neutron and the initial $A_{1}-1$ nucleons may be given in the variables $\mathbf{R}_{1}$, $\mathbf{r}_{n1}$: \begin{equation} \psi _{i1n}(\mathbf{R}_{1},\mathbf{r}_{n1})=V^{-1/2}\exp (i\mathbf{k}_{i1}\cdot \mathbf{R}_{1})\Phi _{i1}\left( \mathbf{r}_{n1}\right) \label{pszii1} \end{equation} where $\mathbf{R}_{1}$ is the center of mass coordinate of the nucleus $_{Z}^{A_{1}}X$ and $\mathbf{r}_{n1}$ is the relative coordinate of one of its neutrons.
$\mathbf{R}_{1}$ and $\mathbf{r}_{n1}$ are determined by the usual relations $\mathbf{x}_{n1}=\mathbf{R}_{1}+\mathbf{r}_{n1}$ and $\mathbf{R}_{1}=\left[ \left( A_{1}-1\right) \mathbf{x}_{1}+\mathbf{x}_{n1}\right] /A_{1}$, where $\mathbf{x}_{n1}$ and $\mathbf{x}_{1}$ are the coordinates of the neutron and of the center of mass of the initial $A_{1}-1$ nucleons, respectively. The inverse formula for $\mathbf{x}_{1}$ is $\mathbf{x}_{1}=\mathbf{R}_{1}-\mathbf{r}_{n1}/\left( A_{1}-1\right) $. In $\left( \ref{pszii1}\right) $ $\Phi _{i1}\left( \mathbf{r}_{n1}\right) $ is the wave function of the neutron in the initial bound state of the nucleus $_{Z}^{A_{1}}X$, and $\mathbf{k}_{i1}$ is the initial wave vector of the nucleus $_{Z}^{A_{1}}X$. The intermediate state has the form
\begin{equation}
\Psi _{\mu }(\mathbf{x}_{e},\mathbf{x}_{1},\mathbf{x}_{n1},\mathbf{x}_{2})=\psi _{fe}\left( \mathbf{x}_{e}\right) \psi _{\mu 1n}(\mathbf{x}_{1},\mathbf{x}_{n1})\psi _{i2}(\mathbf{x}_{2}), \label{Pszimu}
\end{equation}
where
\begin{equation}
\psi _{fe}\left( \mathbf{x}_{e}\right) =V^{-1/2}e^{\left( i\mathbf{k}_{fe}\cdot \mathbf{x}_{e}\right) } \label{pszief}
\end{equation}
with $\mathbf{k}_{fe}$ the wave vector of the electron in the final state, and $\psi _{i2}(\mathbf{x}_{2})$ is given in $\left( \ref{psziei}\right) $. The state $\psi _{\mu 1n}(\mathbf{x}_{1},\mathbf{x}_{n1})$ is the product of two plane waves, $\psi _{f1}(\mathbf{x}_{1})=V^{-1/2}e^{\left( i\mathbf{k}_{1}\cdot \mathbf{x}_{1}\right) }$ and $\psi _{n}\left( \mathbf{x}_{n1}\right) =V^{-1/2}e^{i\mathbf{k}_{n}\cdot \mathbf{x}_{n1}}$, which are the final state of the nucleus $_{Z}^{A_{1}-1}X$ and the state of the free, intermediate neutron.
Thus $\psi _{\mu 1n}(\mathbf{x}_{1},\mathbf{x}_{n1})=V^{-1}e^{i\mathbf{k}_{1}\cdot \mathbf{x}_{1}}e^{i\mathbf{k}_{n}\cdot \mathbf{x}_{n1}}$, which has the form in the coordinates $\mathbf{R}_{1}$, $\mathbf{r}_{n1}$
\begin{equation}
\psi _{\mu 1n}(\mathbf{R}_{1},\mathbf{r}_{n1})=V^{-1}e^{i\left( \mathbf{k}_{1}+\mathbf{k}_{n}\right) \cdot \mathbf{R}_{1}}e^{i\left( \mathbf{k}_{n}-\frac{\mathbf{k}_{1}}{A_{1}-1}\right) \cdot \mathbf{r}_{n1}}, \label{pszimu2}
\end{equation}
where $\mathbf{k}_{1}$ and $\mathbf{k}_{n}$ are the wave vectors of the nucleus $_{Z}^{A_{1}-1}X$ and of the neutron, respectively. The intermediate state may have another form,
\begin{equation}
\Psi _{\mu }(\mathbf{x}_{e},\mathbf{x}_{1},\mathbf{x}_{n1},\mathbf{x}_{2})=\psi _{fe}\left( \mathbf{x}_{e}\right) \psi _{f1}(\mathbf{x}_{1})\psi _{\mu 2n}(\mathbf{x}_{n1},\mathbf{x}_{2}), \label{Pszimu2}
\end{equation}
where
\begin{equation}
\psi _{\mu 2n}(\mathbf{x}_{n1},\mathbf{x}_{2})=\psi _{n}\left( \mathbf{x}_{n1}\right) \psi _{i2}(\mathbf{x}_{2})=V^{-1}e^{i\mathbf{k}_{n}\cdot \mathbf{x}_{n1}}e^{i\mathbf{k}_{i2}\cdot \mathbf{x}_{2}}, \label{pszimu3}
\end{equation}
which can be written in the coordinates $\mathbf{r}_{n2}=\mathbf{x}_{n1}-\mathbf{R}_{2}$ and $\mathbf{R}_{2}=\left( A_{2}\mathbf{x}_{2}+\mathbf{x}_{n1}\right) /\left( A_{2}+1\right) $ as
\begin{equation}
\psi _{\mu 2n}(\mathbf{R}_{2},\mathbf{r}_{n2})=\frac{1}{V}e^{i\left( \mathbf{k}_{i2}+\mathbf{k}_{n}\right) \cdot \mathbf{R}_{2}}e^{i\left( \mathbf{k}_{n}-\frac{\mathbf{k}_{i2}}{A_{2}}\right) \cdot \mathbf{r}_{n2}}, \label{pszimu4}
\end{equation}
where $\mathbf{R}_{2}$ is the center of mass coordinate of the nucleus $_{Z}^{A_{2}+1}X$ and $\mathbf{r}_{n2}$ is the relative coordinate of the neutron in it. In these new variables $\mathbf{x}_{2}=\mathbf{R}_{2}-\mathbf{r}_{n2}/A_{2}$ and $\mathbf{x}_{n1}-\mathbf{x}_{2}=\left( A_{2}+1\right) \mathbf{r}_{n2}/A_{2}$, which is used in the argument of $V^{St}$ (given by $\left( \ref{VSt1}\right) $) when calculating $V_{f\mu }^{St}$.
When evaluating the matrix elements $V_{\mu i}^{Cb}$ and $V_{f\mu }^{St}$, the forms $\left( \ref{pszimu2}\right) $ and $\left( \ref{pszimu4}\right) $ of $\psi _{\mu }$ are used, respectively, and $\sum_{\mu }\rightarrow \frac{V}{\left( 2\pi \right) ^{3}}\int d^{3}k_{n}$ in $\left( \ref{Tif}\right) $. The final state has the form
\begin{equation}
\Psi _{f}(\mathbf{x}_{e},\mathbf{x}_{1},\mathbf{x}_{n1},\mathbf{x}_{2})=\psi _{fe}\left( \mathbf{x}_{e}\right) \psi _{f1}(\mathbf{x}_{1})\psi _{f2n}(\mathbf{x}_{n1},\mathbf{x}_{2}), \label{Pszif}
\end{equation}
where $\psi _{f2n}(\mathbf{x}_{n1},\mathbf{x}_{2})$ is given in the variables $\mathbf{R}_{2}$, $\mathbf{r}_{n2}$ as
\begin{equation}
\psi _{f2n}(\mathbf{R}_{2},\mathbf{r}_{n2})=V^{-1/2}\exp (i\mathbf{k}_{2}\cdot \mathbf{R}_{2})\Phi _{f2}\left( \mathbf{r}_{n2}\right) , \label{pszif2}
\end{equation}
and $\Phi _{f2}\left( \mathbf{r}_{n2}\right) $ is the bound state of the neutron in the nucleus $_{Z}^{A_{2}+1}X$. \subsection{Evaluation of the matrix elements $V_{\protect\mu i}^{Cb}$ and $V_{f\protect\mu }^{St}$} The argument of the Coulomb potential $V^{Cb}$ is $\mathbf{x}_{e}-\mathbf{x}_{1}$, therefore the integration with respect to the components of $\mathbf{x}_{2}$ may be carried out and $\int \left\vert \psi _{i2}(\mathbf{x}_{2})\right\vert ^{2}d^{3}x_{2}=1$. The remainder is
\begin{eqnarray}
V_{\mu i}^{Cb} &=&\int \psi _{fe}^{\ast }\left( \mathbf{x}_{e}\right) \psi _{\mu 1n}^{\ast }(\mathbf{x}_{1},\mathbf{x}_{n1})V^{Cb}\left( \mathbf{x}_{e}-\mathbf{x}_{1}\right) \label{VCbmui} \\
&&\times \psi _{ie}\left( \mathbf{x}_{e}\right) \psi _{i1n}(\mathbf{x}_{1},\mathbf{x}_{n1})d^{3}x_{e}d^{3}x_{1}d^{3}x_{n1}.
\notag
\end{eqnarray}
Making the $\mathbf{x}_{1},\mathbf{x}_{n1}\rightarrow \mathbf{R}_{1},\mathbf{r}_{n1}$ change of variables, substituting the forms $\left( \ref{pszii1}\right) $ and $\left( \ref{pszimu2}\right) $ of $\psi _{i1n}$ and $\psi _{\mu 1n}$, and neglecting $\mathbf{k}_{i1}$, the integrations over the components of $\mathbf{x}_{e}$ and $\mathbf{R}_{1}$ result in $V^{-1}\left( 2\pi \right) ^{3}\delta \left( \mathbf{q}+\mathbf{k}_{ie}-\mathbf{k}_{fe}\right) $ and $V^{-3/2}\left( 2\pi \right) ^{3}\delta \left( \mathbf{q}-\mathbf{k}_{1}-\mathbf{k}_{n}\right) $, respectively, and the integration over the components of $\mathbf{r}_{n1}$ produces $F_{1}\left( \mathbf{k}_{n}\right) $, where
\begin{equation}
F_{1}\left( \mathbf{k}_{n}\right) =\int \Phi _{i1}\left( \mathbf{r}_{n1}\right) e^{-i\left( \mathbf{k}_{n}-\frac{\mathbf{k}_{1}+\mathbf{q}}{A_{1}-1}\right) \cdot \mathbf{r}_{n1}}d^{3}r_{n1}. \label{Fk1kn}
\end{equation}
Using the $\delta \left( \mathbf{q}+\mathbf{k}_{ie}-\mathbf{k}_{fe}\right) $ in carrying out the integration over the components of $\mathbf{q}$ in $V_{\mu i}^{Cb}$ one gets
\begin{eqnarray}
V_{\mu i}^{Cb} &=&-\frac{4\pi e^{2}Z}{\left\vert \mathbf{k}_{fe}-\mathbf{k}_{ie}\right\vert ^{2}+\lambda ^{2}}\widetilde{F}_{1}\left( \mathbf{k}_{n}\right) \frac{\left( 2\pi \right) ^{6}}{V^{5/2}}\times \label{VCbmui1} \\
&&\times \sqrt{G_{S}}\delta \left( \mathbf{k}_{ie}-\mathbf{k}_{fe}-\mathbf{k}_{1}-\mathbf{k}_{n}\right) \notag
\end{eqnarray}
and
\begin{equation}
\widetilde{F}_{1}\left( \mathbf{k}_{n}\right) =\int \Phi _{i1}\left( \mathbf{r}_{n1}\right) e^{-i\left( \mathbf{k}_{n}-\frac{\mathbf{k}_{1}+\mathbf{k}_{fe}-\mathbf{k}_{ie}}{A_{1}-1}\right) \cdot \mathbf{r}_{n1}}d^{3}r_{n1}.
\label{Fk1kn2}
\end{equation}
For particles $e$ and $1$ (the ingoing electron of charge $-e$ and the initial nucleus $_{Z}^{A_{1}}X$ of charge $Ze$) taking part in the Coulomb interaction we have used plane waves, therefore the matrix element must be corrected with the so-called Sommerfeld factor \cite{Heitler} $\sqrt{G_{S}}$, where
\begin{equation}
G_{S}=\frac{F_{e}(E_{ie})}{F_{e}(E_{f1})}. \label{Gs}
\end{equation}
Now we deal with $V_{f\mu }^{St}$. The strong interaction works between the neutron and the nucleons of the nucleus $_{Z}^{A_{2}}X$, therefore the argument of $V^{St}$ is $\mathbf{x}_{n1}-\mathbf{x}_{2}$. The integrations with respect to the components of $\mathbf{x}_{e}$ and $\mathbf{x}_{1}$ result in $\int \left\vert \psi _{fe}(\mathbf{x}_{e})\right\vert ^{2}d^{3}x_{e}=\int \left\vert \psi _{f1}(\mathbf{x}_{1})\right\vert ^{2}d^{3}x_{1}=1$. The remainder is
\begin{equation}
V_{f\mu }^{St}=\int \psi _{f2n}^{\ast }V^{St}\left( \mathbf{x}_{n1}-\mathbf{x}_{2}\right) \psi _{\mu 2n}d^{3}x_{2}d^{3}x_{n1}. \label{VStfmu}
\end{equation}
Similarly to the above, making the $\mathbf{x}_{n1},\mathbf{x}_{2}\rightarrow \mathbf{R}_{2},\mathbf{r}_{n2}$ change of variables, substituting the forms $\left( \ref{pszimu4}\right) $ and $\left( \ref{pszif2}\right) $ of $\psi _{\mu 2n}$ and $\psi _{f2n}^{\ast }$, and neglecting $\mathbf{k}_{i2}$, the integration over the components of $\mathbf{R}_{2}$ results in $V^{-3/2}\left( 2\pi \right) ^{3}\delta \left( \mathbf{k}_{n}-\mathbf{k}_{2}\right) $ and the integration with respect to the components of $\mathbf{r}_{n2}$ produces $F_{2}\left( \mathbf{k}_{n}\right) $ with
\begin{eqnarray}
F_{2}\left( \mathbf{k}_{n}\right) &=&\int \Phi _{f2}^{\ast }\left( \mathbf{r}_{n2}\right) e^{i\mathbf{k}_{n}\cdot \mathbf{r}_{n2}}\times \label{F2ki2kn} \\
&&\times \left( -f\frac{\exp \left( -s\frac{A_{2}+1}{A_{2}}r_{n2}\right) }{\frac{A_{2}+1}{A_{2}}r_{n2}}\right) d^{3}r_{n2}, \notag
\end{eqnarray}
where $r_{n2}=\left\vert \mathbf{r}_{n2}\right\vert $.
Taking into account that the neutron interacts with each nucleon of the final nucleus of nucleon number $A_{2}$,
\begin{equation}
V_{f\mu }^{St}=\frac{\left( 2\pi \right) ^{3}}{V^{3/2}}A_{2}F_{2}\left( \mathbf{k}_{n}\right) \delta \left( \mathbf{k}_{n}-\mathbf{k}_{2}\right) . \label{VStfmu2}
\end{equation}
\subsection{Transition probability per unit time of the electron assisted neutron exchange process}
Substituting the obtained forms of $V_{\mu i}^{Cb}$ and $V_{f\mu }^{St}$ (formulae $\left( \ref{VCbmui1}\right) $ and $\left( \ref{VStfmu2}\right) $) into $\left( \ref{Tif}\right) $, and using the correspondence $\sum_{\mu }\rightarrow \frac{V}{\left( 2\pi \right) ^{3}}\int d^{3}k_{n}$ and the $\delta \left( \mathbf{k}_{n}-\mathbf{k}_{2}\right) $ in the integration over the components of $\mathbf{k}_{n}$, one gets
\begin{eqnarray}
T_{fi} &=&-\frac{4\pi e^{2}ZA_{2}\widetilde{F}_{1}\left( \mathbf{k}_{2}\right) F_{2}\left( \mathbf{k}_{2}\right) \sqrt{\frac{F_{e}(E_{ie})}{F_{e}(E_{f1})}}}{\left\vert \mathbf{k}_{fe}-\mathbf{k}_{ie}\right\vert ^{2}+\lambda ^{2}}\times \label{Tfi22} \\
&&\times \frac{\left( 2\pi \right) ^{6}}{V^{3}}\frac{\delta \left( \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{fe}-\mathbf{k}_{ie}\right) }{\left( \Delta E_{\mu i}\right) _{\mathbf{k}_{n}=\mathbf{k}_{2}}}, \notag
\end{eqnarray}
where
\begin{equation}
\widetilde{F}_{1}\left( \mathbf{k}_{2}\right) =\int \Phi _{i1}\left( \mathbf{r}_{n1}\right) e^{-i\left( \mathbf{k}_{2}-\frac{\mathbf{k}_{1}+\mathbf{k}_{fe}-\mathbf{k}_{ie}}{A_{1}-1}\right) \cdot \mathbf{r}_{n1}}d^{3}r_{n1} \label{F1k2}
\end{equation}
and $F_{2}\left( \mathbf{k}_{2}\right) $ is determined by $\left( \ref{F2k2}\right) $. Here $\Phi _{i1}$ and $\Phi _{f2}$ in $\left( \ref{F2k2}\right) $ are the initial and final bound neutron states.
Substituting the above into $\left( \ref{Wfie}\right) $, using the identities $\left[ \delta \left( \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{fe}-\mathbf{k}_{ie}\right) \right] ^{2}=\delta \left( \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{fe}-\mathbf{k}_{ie}\right) \delta \left( \mathbf{0}\right) $ and $\left( 2\pi \right) ^{3}\delta \left( \mathbf{0}\right) =V$, the $\sum_{f}\rightarrow \sum_{m_{2}}\int \left[ V/\left( 2\pi \right) ^{3}\right] ^{3}d^{3}k_{1}d^{3}k_{2}d^{3}k_{fe}$ correspondence, averaging over the quantum number $m_{1}$, and integrating over the components of $\mathbf{k}_{fe}$ (which gives $\mathbf{k}_{fe}=-\mathbf{k}_{1}-\mathbf{k}_{2}+\mathbf{k}_{ie}$), one obtains
\begin{eqnarray}
W_{fi} &=&\int \frac{64\pi ^{3}\alpha _{f}^{2}\hbar c^{2}Z^{2}\sum_{m_{2}=-l_{2}}^{m_{2}=l_{2}}\left\vert F_{2}\left( \mathbf{k}_{2}\right) \right\vert ^{2}}{v_{c}V\left( \left\vert \mathbf{k}_{1}+\mathbf{k}_{2}\right\vert ^{2}+\lambda ^{2}\right) ^{2}\left( \Delta E_{\mu i}\right) _{\mathbf{k}_{n}=\mathbf{k}_{2}}^{2}} \label{Wfi22} \\
&&\times \left\langle \left\vert F_{1}\left( \mathbf{k}_{2}\right) \right\vert ^{2}\right\rangle \frac{F_{e}(E_{ie})}{F_{e}(E_{f1})}A_{2}^{2}r_{A_{2}}\delta (E_{f}-\Delta )d^{3}k_{1}d^{3}k_{2}, \notag
\end{eqnarray}
where $A_{1}$, $A_{2}$ are the initial mass numbers, and $l_{1},m_{1}$ and $l_{2},m_{2}$ are the orbital and magnetic quantum numbers of the neutron in its initial and final state. For $F_{1}\left( \mathbf{k}_{2}\right) $, $\left\langle \left\vert F_{1}\left( \mathbf{k}_{2}\right) \right\vert ^{2}\right\rangle $ and $F_{2}\left( \mathbf{k}_{2}\right) $ see $\left( \ref{F1kalk2}\right) $, $\left( \ref{F1av}\right) $ and $\left( \ref{F2k2}\right) $. When taking into account the effect of the number of atoms of mass number $A_{2}$ in the solid target, the calculation is similar to the calculation of, e.g.,
the coherent neutron scattering \cite{Kittel}, and $\left\vert T_{fi}\right\vert ^{2}$ must be multiplied by $N_{L}$, which is the number of atomic sites in the crystal, and by $r_{A_{2}}$, which is the relative natural abundance of the atoms $_{Z}^{A_{2}}X$. We have used $N_{L}/V=2/v_{c}$ with $v_{c}$ the volume of the elementary cell of the $fcc$ lattice, in which there are two lattice sites, in the cases of $Ni$ and $Pd$ investigated. \subsection{Approximations, identities and relations in the calculation of the cross section} Now we deal with the energy denominator $\left( \Delta E_{\mu i}\right) $ in $\left( \ref{Wfi22}\right) $ and $\left( \ref{sigma}\right) $ $\left[ \text{see }\left( \ref{DeltaEmui}\right) -\left( \ref{E1f}\right) \right] $. The shielding parameter $\lambda $ is determined by the innermost electronic shell of the atom $_{Z}^{A_{1}}X$ and it can be estimated as
\begin{equation}
\lambda =\frac{Z}{a_{B}}, \label{lambda}
\end{equation}
where $a_{B}=0.53\times 10^{-8}$ $cm$ is the Bohr radius. The integrals in $\left( \ref{Wfi22}\right) $ and $\left( \ref{sigma}\right) $ have appreciable contributions if
\begin{equation}
\left\vert \mathbf{k}_{1}+\mathbf{k}_{2}\right\vert \lesssim \lambda , \label{condlambda}
\end{equation}
and then $E_{fe}\lesssim \hbar ^{2}\lambda ^{2}/\left( 2m_{e}\right) =\frac{1}{2}\alpha _{f}^{2}m_{e}c^{2}Z^{2}$, which can be neglected in $\Delta E_{\mu i}$ and in the energy Dirac delta. Thus
\begin{equation}
\Delta E_{\mu i}=\frac{\hbar ^{2}\mathbf{k}_{1}^{2}}{2m_{1}}+\frac{\hbar ^{2}\mathbf{k}_{2}^{2}}{2m_{n}}-\Delta _{-}+\Delta _{n} \label{DeltaEmui2}
\end{equation}
and in the Dirac delta
\begin{equation}
E_{f}=\frac{\hbar ^{2}\mathbf{k}_{1}^{2}}{2m_{1}}+\frac{\hbar ^{2}\mathbf{k}_{2}^{2}}{2m_{2}}. \label{Ef2}
\end{equation}
In this case $\mathbf{k}_{1}=-\mathbf{k}_{2}+\delta \mathbf{k}$ with $\left\vert \delta \mathbf{k}\right\vert =\delta k\sim \lambda $.
Using
\begin{equation}
k_{1}\simeq k_{2}\simeq k_{0}=\sqrt{2\mu _{12}\Delta }/\hbar \label{k0}
\end{equation}
(see below) with $\mu _{12}c^{2}=A_{12}m_{0}c^{2}$, where $A_{12}=\left( A_{1}-1\right) \left( A_{2}+1\right) /\left( A_{1}+A_{2}\right) $ is the reduced nucleon number, one can conclude that the $\mathbf{k}_{2}=-\mathbf{k}_{1}$ relation holds with a very small error in the case of events which fulfill condition $\left( \ref{condlambda}\right) $, since $k_{1}/k_{0}\simeq 1$, $k_{2}/k_{0}\simeq 1$, $\delta k/k_{0}\sim \lambda /k_{0}$ and $\lambda /k_{0}=\alpha _{f}Zm_{e}c^{2}/\sqrt{2\mu _{12}c^{2}\Delta }\ll 1$. Consequently, the quantity $E_{f}$ in the argument of the energy Dirac delta can be written approximately as
\begin{equation}
E_{f}=\left( \frac{\hbar ^{2}}{2m_{1}}+\frac{\hbar ^{2}}{2m_{2}}\right) \mathbf{k}_{2}^{2}=\frac{\hbar ^{2}c^{2}\mathbf{k}_{2}^{2}}{2A_{12}m_{0}c^{2}}. \label{Ef22}
\end{equation}
Furthermore, taking $A_{1}/\left( A_{1}+1\right) \simeq 1$,
\begin{equation}
\Delta E_{\mu i}=\frac{\hbar ^{2}c^{2}\mathbf{k}_{2}^{2}}{2m_{0}c^{2}}-\Delta _{-}+\Delta _{n}. \label{DeltaEmui3}
\end{equation}
We introduce the dimensionless quantities $\mathbf{Q}=\hbar c\mathbf{k}_{2}/\Delta $, $\mathbf{P}=\hbar c\left( \delta \mathbf{k}\right) /\Delta $, $\varepsilon _{f}=E_{f}/\Delta =\left[ \mathbf{Q}^{2}/\left( 2A_{12}m_{0}c^{2}\right) \right] \Delta $ and $L=\hbar c\lambda /\Delta $. The energy Dirac delta is modified as $\delta (E_{f}-\Delta )=\delta \left[ \varepsilon _{f}\left( \mathbf{Q}\right) -1\right] /\Delta $. The relation $\left( \ref{lambda}\right) $ yields $L=\hbar cZ/\left( a_{B}\Delta \right) =Z\alpha _{f}m_{e}c^{2}/\Delta $ with $Z\alpha _{f}m_{e}c^{2}/\Delta \lesssim 1$.
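The smallness of $\lambda /k_{0}$ asserted above is easy to check numerically. The sketch below evaluates $\lambda /k_{0}=\alpha _{f}Zm_{e}c^{2}/\sqrt{2\mu _{12}c^{2}\Delta }$; the inputs ($Z=28$, the mass numbers, and a $\Delta $ of a few $MeV$) are illustrative assumptions, not values taken from this paper's tables:

```python
import math

# Physical constants (energies in MeV)
ALPHA_F = 1.0 / 137.036      # fine-structure constant
ME_C2 = 0.511                # electron rest energy [MeV]
M0_C2 = 939.0                # nucleon rest energy [MeV]

def reduced_nucleon_number(A1, A2):
    """A_12 = (A1 - 1)(A2 + 1) / (A1 + A2), the reduced nucleon number."""
    return (A1 - 1) * (A2 + 1) / (A1 + A2)

def lambda_over_k0(Z, A1, A2, delta_mev):
    """lambda / k0 = alpha_f * Z * m_e c^2 / sqrt(2 * mu_12 c^2 * Delta)."""
    A12 = reduced_nucleon_number(A1, A2)
    return ALPHA_F * Z * ME_C2 / math.sqrt(2.0 * A12 * M0_C2 * delta_mev)

# Illustrative values only: Z = 28 (Ni) and Delta of a few MeV are assumptions.
ratio = lambda_over_k0(Z=28, A1=62, A2=58, delta_mev=3.0)
print(f"lambda/k0 = {ratio:.2e}")  # a few times 10^-4, i.e. << 1
```

For any $Z$ and $\Delta $ in the range discussed here the ratio stays far below unity, which is what justifies replacing $\mathbf{k}_{1}$ by $-\mathbf{k}_{2}$ above.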
Now we change $d^{3}k_{1}d^{3}k_{2}$ to $\left( \frac{\Delta }{\hbar c}\right) ^{6}d^{3}Qd^{3}P$ in the integration in $\left( \ref{sigma}\right) $, use the $\delta \left[ g\left( Q\right) \right] =\delta \left( Q-Q_{0}\right) /g^{\prime }\left( Q_{0}\right) $ identity, where $Q_{0}$ is the root of the equation $g\left( Q\right) =0$ ($k_{0}=Q_{0}\Delta /\left( \hbar c\right) $, see $\left( \ref{k0}\right) $), estimate the integral with respect to the components of $\mathbf{P}$ by
\begin{equation}
\int_{0}^{\infty }\frac{4\pi P^{2}dP}{\left( P^{2}+L^{2}\right) ^{2}}=\frac{\pi ^{2}}{L}, \label{IntP}
\end{equation}
and apply $v_{c}=d^{3}/4$ (the volume of the unit cell of the $fcc$ lattice for $Ni$ and $Pd$ of lattice parameter $d$). \subsection{$\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle _{Sh}$ and $\sum_{m_{2}=-l_{2}}^{m_{2}=l_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert _{Sh}^{2}$ in the single particle shell model and without the LWA} Now we calculate the quantities $\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle _{Sh}$ and $\sum_{m_{2}=-l_{2}}^{m_{2}=l_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert _{Sh}^{2}$ in the single particle shell model with an isotropic harmonic oscillator potential and without the long wavelength approximation (see the definitions $\left( \ref{F1kalk2}\right) $, $\left( \ref{F1av}\right) $ and $\left( \ref{F2k2}\right) $). Taking into account the spin-orbit coupling in the level scheme, the relevant neutron states are $0l$ and $1l$ shell model states in the cases of $Ni$ and $Pd$ to be discussed numerically \cite{Pal}.
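The value $\pi ^{2}/L$ of the $\mathbf{P}$ integral in $\left( \ref{IntP}\right) $ can be verified numerically. The sketch below uses the substitution $P=L\tan t$, which maps the integrand to $4\pi \sin ^{2}t/L$ on $(0,\pi /2)$, and a midpoint rule; the value $L=0.3$ is arbitrary:

```python
import math

def shielding_integral(L, n=200_000):
    """Evaluate int_0^inf 4*pi*P^2 / (P^2 + L^2)^2 dP numerically.

    The substitution P = L*tan(t) turns the integrand into
    4*pi*sin(t)^2 / L on the finite range (0, pi/2).
    """
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h          # midpoint rule node
        total += 4.0 * math.pi * math.sin(t) ** 2 / L
    return total * h

L = 0.3  # arbitrary positive value of the dimensionless shielding parameter
numeric = shielding_integral(L)
analytic = math.pi ** 2 / L
print(numeric, analytic)  # agree to many digits
```

The closed form follows because $\int_{0}^{\pi /2}\sin ^{2}t\,dt=\pi /4$, so the integral equals $4\pi /L\cdot \pi /4=\pi ^{2}/L$ for any $L>0$.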
So the initial and final neutron states $\left( \Phi _{i1},\Phi _{f2}\right) $ have the form
\begin{equation}
\Phi _{Sh}\left( \mathbf{r}_{nj}\right) =\frac{R_{n_{j}l_{j}}}{r_{nj}}Y_{l_{j}m_{j}}\left( \Omega _{j}\right) , \label{Fishell}
\end{equation}
where $n_{j}=0,1$ in the cases of $0l$ and $1l$ investigated, respectively, and
\begin{equation}
R_{0l_{j}}=b_{j}^{-1/2}\left( \frac{2}{\Gamma (l_{j}+3/2)}\right) ^{1/2}\varrho _{j}^{l_{j}+1}\exp \left( -\frac{1}{2}\varrho _{j}^{2}\right) , \label{R0l}
\end{equation}
\begin{eqnarray}
R_{1l_{j}} &=&b_{j}^{-1/2}\left( \frac{2l_{j}+3}{\Gamma (l_{j}+3/2)}\right) ^{1/2}\varrho _{j}^{l_{j}+1}\times \label{R1l} \\
&&\times \left( 1-\frac{2}{2l_{j}+3}\varrho _{j}^{2}\right) \exp \left( -\frac{1}{2}\varrho _{j}^{2}\right) \notag
\end{eqnarray}
with $\varrho _{j}=r_{nj}/b_{j}$, where $b_{j}=\sqrt{\hbar /\left( m_{0}\omega _{j}\right) }$ \cite{Pal}. Here $\omega _{j}$ is the angular frequency of the oscillator, determined by $\hbar \omega _{1}=40A_{1}^{-1/3}$ $MeV$ and $\hbar \omega _{2}=40\left( A_{2}+1\right) ^{-1/3}$ $MeV$ \cite{Bohr}. (The subscript $Sh$ refers to the shell model.)
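The radial functions $\left( \ref{R0l}\right) $ and $\left( \ref{R1l}\right) $ are normalized, $\int_{0}^{\infty }R_{n_{j}l_{j}}^{2}\,dr_{nj}=1$. This can be checked numerically; the sketch below works in dimensionless form (the factor $b_{j}^{-1/2}$ is dropped and the integration is done in $\varrho $, which leaves the norm unchanged):

```python
import math

def R0l(rho, l):
    """Dimensionless radial function for n = 0 (factor b^{-1/2} dropped)."""
    norm = math.sqrt(2.0 / math.gamma(l + 1.5))
    return norm * rho ** (l + 1) * math.exp(-0.5 * rho * rho)

def R1l(rho, l):
    """Dimensionless radial function for n = 1 (factor b^{-1/2} dropped)."""
    norm = math.sqrt((2 * l + 3) / math.gamma(l + 1.5))
    poly = 1.0 - 2.0 * rho * rho / (2 * l + 3)
    return norm * rho ** (l + 1) * poly * math.exp(-0.5 * rho * rho)

def norm_integral(R, l, upper=12.0, n=120_000):
    """Midpoint-rule approximation of int_0^inf R(rho)^2 d(rho)."""
    h = upper / n
    return sum(R((i + 0.5) * h, l) ** 2 for i in range(n)) * h

for l in (0, 1, 2, 3, 4):
    print(l, norm_integral(R0l, l), norm_integral(R1l, l))  # all ~ 1.0
```

The cutoff at $\varrho =12$ is harmless since the Gaussian tail $e^{-\varrho ^{2}}$ is negligible there.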
With the aid of these wave functions, and for $n_{1}=0,1$,
\begin{equation}
\left\langle \left\vert F_{1}\left( \mathbf{k}_{0}\right) \right\vert ^{2}\right\rangle _{Sh}=b_{1}^{3}\frac{2^{l_{1}+2}}{\sqrt{\pi }\left( 2l_{1}+1\right) !!}4\pi I_{1,n_{1}}^{2} \label{F1K2Sh}
\end{equation}
with
\begin{equation}
I_{1,0}=\int_{0}^{\infty }\varrho ^{l_{1}+2}j_{l_{1}}\left( k_{0}b_{1}\frac{A_{1}}{A_{1}-1}\varrho \right) e^{-\frac{1}{2}\varrho ^{2}}d\varrho \label{I10}
\end{equation}
and
\begin{eqnarray}
I_{1,1} &=&\left( l_{1}+\frac{3}{2}\right) \int_{0}^{\infty }\varrho ^{l_{1}+2}\left( 1-\frac{2}{2l_{1}+3}\varrho ^{2}\right) \times \label{I11} \\
&&\times j_{l_{1}}\left( k_{0}b_{1}\frac{A_{1}}{A_{1}-1}\varrho \right) e^{-\frac{1}{2}\varrho ^{2}}d\varrho . \notag
\end{eqnarray}
Here $j_{l_{1}}(x)=\sqrt{\frac{\pi }{2x}}J_{l_{1}+1/2}(x)$ denotes the spherical Bessel function, with $J_{l_{1}+1/2}(x)$ the Bessel function of the first kind. Similarly,
\begin{eqnarray}
\sum_{m_{2}=-l_{2}}^{m_{2}=l_{2}}\left\vert F_{2}\left( \mathbf{k}_{0}\right) \right\vert _{Sh}^{2} &=&b_{2}f^{2}\frac{2^{l_{2}+2}\left( 2l_{2}+1\right) }{\sqrt{\pi }\left( 2l_{2}+1\right) !!}\times \label{F2k2Sh} \\
&&\times 4\pi \left( \frac{A_{2}}{A_{2}+1}\right) ^{2}I_{2,n_{2}}^{2} \notag
\end{eqnarray}
with
\begin{equation}
I_{2,0}=\int_{0}^{\infty }\varrho ^{l_{2}+1}j_{l_{2}}(k_{0}b_{2}\varrho )e^{-\frac{1}{2}\varrho ^{2}-\frac{A_{2}+1}{A_{2}}\frac{b_{2}}{r_{0}}\varrho }d\varrho \label{I20}
\end{equation}
and
\begin{eqnarray}
I_{2,1} &=&\left( l_{2}+\frac{3}{2}\right) \int_{0}^{\infty }\varrho ^{l_{2}+1}\left( 1-\frac{2}{2l_{2}+3}\varrho ^{2}\right) \times \label{I21} \\
&&\times j_{l_{2}}(k_{0}b_{2}\varrho )e^{-\frac{1}{2}\varrho ^{2}-\frac{A_{2}+1}{A_{2}}\frac{b_{2}}{r_{0}}\varrho }d\varrho .
\notag
\end{eqnarray}
Substituting the results $\left( \ref{F1K2Sh}\right) $, $\left( \ref{F2k2Sh}\right) $ and $\left( \ref{F1K2av}\right) $ into $\left( \ref{etha}\right) $ one gets
\begin{eqnarray}
\eta _{l_{1},n_{1},l_{2},n_{2}}\left( A_{1},A_{2}\right) &=&\frac{2^{l_{1}+l_{2}+4}}{\pi \left( 2l_{1}+1\right) !!\left( 2l_{2}+1\right) !!}\times \label{etha2} \\
&&\times \frac{b_{1}^{3}b_{2}}{r_{0}^{4}}\left( \frac{A_{2}}{A_{2}+1}\right) ^{2}I_{1,n_{1}}^{2}I_{2,n_{2}}^{2}. \notag
\end{eqnarray}
\subsection{Coulomb factors $F_{2^{\prime }3}$ and $F_{34}$ in the electron assisted heavy charged particle exchange process}
If the initial particles have negligible initial momentum then, because of momentum conservation, $\mathbf{k}_{2^{\prime }}=-\mathbf{k}_{5}$ in the final state. (It was obtained in \cite{kk3} that the process has an appreciable cross section if the momentum of the final electron can be neglected, i.e. in the $\mathbf{k}_{1^{\prime }}\simeq 0$ case.) Thus the condition of energy conservation,
\begin{equation}
\frac{\hbar ^{2}\mathbf{k}_{2^{\prime }}^{2}}{2m_{2^{\prime }}}+\frac{\hbar ^{2}\mathbf{k}_{5}^{2}}{2m_{5}}=\Delta , \label{ec}
\end{equation}
determines $\mathbf{k}_{2^{\prime }}$ as
\begin{equation}
\hbar ^{2}\mathbf{k}_{2^{\prime }}^{2}=2\mu _{2^{\prime }5}\Delta , \label{k2'}
\end{equation}
where $\hbar $ is the reduced Planck constant and
\begin{equation}
\mu _{2^{\prime }5}=a_{2^{\prime }5}m_{0}c^{2} \label{mu2'5}
\end{equation}
is the reduced rest mass of particles $2^{\prime }$ and $5$ of mass numbers $A_{2^{\prime }}$ and $A_{5}$ [for $a_{2^{\prime }5}$ see $\left( \ref{ajk}\right) $]. If the initial momenta and the momentum of particle $1^{\prime }$ are negligible then $\mathbf{k}_{3}=-\mathbf{k}_{2^{\prime }}$, since momentum is preserved in Coulomb scattering.
Thus the energy $E_{3}$ of particle $3$ can be written as
\begin{equation}
E_{3}=\frac{\hbar ^{2}\mathbf{k}_{3}^{2}}{2m_{3}}=\frac{\mu _{2^{\prime }5}}{m_{3}}\Delta =\frac{a_{2^{\prime }5}}{A_{3}}\Delta . \label{E3}
\end{equation}
When calculating the Coulomb factor $F_{2^{\prime }3}$ [see $\left( \ref{Fjk}\right) $] between particles $2^{\prime }$ and $3$, the energy determined by $\left( \ref{E3}\right) $ is given in their $CM$ coordinate system (since $\mathbf{k}_{3}=-\mathbf{k}_{2^{\prime }}$), thus it can be substituted directly into $\left( \ref{etajk}\right) $, producing
\begin{equation}
\eta _{2^{\prime }3}=\left( Z_{2}-z_{3}\right) z_{3}\alpha _{f}A_{3}\sqrt{\frac{A_{2^{\prime }}+A_{5}}{\left( A_{2^{\prime }}+A_{3}\right) A_{5}}\frac{m_{0}c^{2}}{2\Delta }}. \label{eta2'3}
\end{equation}
Since the above analysis is made in order to discuss the phenomenon of nuclear transmutation, we take $A_{3}\ll A_{2^{\prime }}\simeq A_{5}=A$ ($A\gtrsim 100$ in the case of $Pd$ discussed). So $\left( A_{2^{\prime }}+A_{5}\right) /\left[ \left( A_{2^{\prime }}+A_{3}\right) A_{5}\right] \simeq 2/A$ and $\eta _{2^{\prime }3}$ reads approximately as
\begin{equation}
\eta _{2^{\prime }3}=\left( Z_{2}-z_{3}\right) z_{3}\alpha _{f}A_{3}\sqrt{\frac{m_{0}c^{2}}{A\Delta }}. \label{eta2'3approx}
\end{equation}
When calculating the Coulomb factor $F_{34}$, the energy of particle $3$ determined by $\left( \ref{E3}\right) $ is now given in the laboratory frame of reference, since particle $4$ is at rest. In the $CM$ system of particles $3$ and $4$ the energy $E_{3}(CM)$ is
\begin{equation}
E_{3}(CM)=\frac{A_{4}a_{2^{\prime }5}\Delta }{\left( A_{3}+A_{4}\right) A_{3}}. \label{E3CM}
\end{equation}
Substituting it into $\left( \ref{etajk}\right) $,
\begin{equation}
\eta _{34}=\left( Z_{4}+z_{3}\right) z_{3}\alpha _{f}A_{3}\sqrt{\frac{m_{0}c^{2}}{2a_{2^{\prime }5}\Delta }}.
\label{eta34}
\end{equation}
Applying the same approximation as above, in which $2a_{2^{\prime }5}\simeq A$,
\begin{equation}
\eta _{34}=\left( Z_{4}+z_{3}\right) z_{3}\alpha _{f}A_{3}\sqrt{\frac{m_{0}c^{2}}{A\Delta }}. \label{eta34approx}
\end{equation}
Furthermore, if $Z_{2}\simeq Z_{4}=Z\gg z_{3}$ then
\begin{equation}
\eta _{2^{\prime }3}=\eta _{34}=Zz_{3}\alpha _{f}A_{3}\sqrt{\frac{m_{0}c^{2}}{A\Delta }}, \label{F2'3F34}
\end{equation}
and consequently $F_{2^{\prime }3}=F_{34}$.
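The near-equality $\eta _{2^{\prime }3}\simeq \eta _{34}$ for $Z\gg z_{3}$ can be illustrated numerically with the approximate formulae above. All inputs in the sketch below (a proton exchanged around a $Pd$-like nucleus, $\Delta =3$ $MeV$) are illustrative assumptions, not values taken from the paper:

```python
import math

ALPHA_F = 1.0 / 137.036  # fine-structure constant
M0_C2 = 939.0            # nucleon rest energy [MeV]

def eta_2p3(Z2, z3, A3, A, delta_mev):
    """eta_{2'3} in the heavy-target approximation (A_3 << A_2' ~ A_5 = A)."""
    return (Z2 - z3) * z3 * ALPHA_F * A3 * math.sqrt(M0_C2 / (A * delta_mev))

def eta_34(Z4, z3, A3, A, delta_mev):
    """eta_{34} in the same approximation (2 a_{2'5} ~ A)."""
    return (Z4 + z3) * z3 * ALPHA_F * A3 * math.sqrt(M0_C2 / (A * delta_mev))

# Illustrative inputs: a proton (z3 = A3 = 1), Z = 46 and A = 106 (Pd-like),
# Delta = 3 MeV -- assumptions for demonstration only.
Z, z3, A3, A, delta = 46, 1, 1, 106, 3.0
e1 = eta_2p3(Z, z3, A3, A, delta)
e2 = eta_34(Z, z3, A3, A, delta)
print(e1, e2)  # nearly equal, since Z - z3 and Z + z3 differ little for Z >> z3
```

The two values differ only through the factors $Z-z_{3}$ and $Z+z_{3}$, i.e. by a relative amount $\sim 2z_{3}/Z$, which is a few percent here.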
\section{Introduction} Emotions and expressions play a significant role in our daily lives. They influence memory~\cite{miranda2005mood}, decision-making~\cite{loewenstein2003role} and social communication. Non-verbal and verbal signals (e.g., speech, facial expressions, physiological responses, gestures, language) capture rich information about these affective states. Human-computer interactions are rich with emotional information that is not currently considered by conventional computer systems. Thanks to the development of miniaturized electronics, ubiquitous devices are now equipped with hardware that can be leveraged to sense human affective states~\cite{calvo2015oxford}. Such sensing can help us understand human-human and human-computer interactions and design computers that interface more fluidly and naturally with people. Emotion-aware devices and services hold huge potential for new assistive technologies~\cite{mcduff2018designing}. Expressions of emotion often manifest multimodally. For example, laughter frequently occurs with smiles, and frustration with an elevated pulse rate. Thus recognition of visual, auditory and physiological signals can provide a more complete picture of the emotional state of a person than a single modality alone~\cite{d2015review}. Building multimodal systems that are able to detect expressions of affect is non-trivial. Such systems require expertise in computer vision, audio processing, natural language processing and psychology. Once models are built, considerable software engineering effort is required to build real-time applications that successfully synchronize these data. Emotion-sensing software is a prerequisite for creating emotion-aware systems. Building such a platform from scratch for every new emotion-aware application has numerous drawbacks. First, it requires considerable effort and expertise.
Much of this work can involve creating components that are not novel, simply ``re-inventing the wheel''. Second, it makes it difficult to compare across systems and draw conclusions from meta-analyses if different emotion-sensing components are used each time. There is a non-negligible advantage in using a consistent set of algorithms for sensing. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{Summary.PNG} \caption{We have developed a real-time, multimodal platform for sensing affect. The platform is designed to be flexible and extensible, allowing inputs from one or multiple sensors. The platform leverages ubiquitous sensors (webcam and microphone) and has both visual and audio processing pipelines. The sensing components include face detection, recognition and tracking, expression analysis, voice activity detection, speech recognition and sentiment analysis.} \label{fig:summary} \end{figure} Multimodal affect sensing platforms, such as Multisense~\cite{stratou2017multisense}, have proven effective at enabling the development of more complex human-computer interaction paradigms (e.g., SimSensei~\cite{devault2014simsensei}). These platforms make it easier to develop applications that benefit from affect sensing capabilities. We have developed a real-time, multimodal platform for sensing affect. The platform is designed to be flexible and extensible, allowing inputs from one or multiple sensors (see Figure~\ref{fig:summary} for an illustration of the sensing modalities). The platform leverages ubiquitous sensors (webcam and microphone) and has both visual and audio processing pipelines. Components enable face detection, recognition and tracking, expression analysis, pose estimation, voice activity detection, speech recognition and sentiment analysis. To provide context, we also include application logging components that monitor application usage, email and calendar activity.
Combined, these data provide a rich picture of the behavior, expressions and activities of the subject(s). There are trade-offs between designing a sensing platform that runs on-device or on a cloud server. Platforms that run on-device have the advantage of processing raw image and audio sensor data locally, minimizing the transmission of Personally Identifiable Information (PII). They also remove the need for network connectivity. Cloud platforms have the advantage of fewer constraints on computational resources: the platform can be tailored to the hardware available, allowing for greater scalability. However, cloud systems typically suffer from greater latency and the need to stream raw data over a network. Our platform can be run on Windows 10 devices locally or on a cloud virtual machine (VM). This makes it possible to create both native and browser-based applications. We hope our platform facilitates the measurement and understanding of affective signals and enables research into how these can be integrated into a wide range of services. In the remainder of the paper, we describe the architecture, sensing components, data management and data storage. First, we start with a brief review of some related work. \section{Related Work} There is an extensive literature on automated affect recognition. We will not cover the prior work completely here, as surveys of the existing work provide a much more complete review of the field than would be possible in this section~\cite{d2015review,soleymani2017survey}. However, we highlight a few highly relevant and seminal papers on multimodal affect recognition platforms and applications. Multimodal affect sensing has been applied in numerous contexts including teaching and learning environments~\cite{kapoor2005multimodal,d2010multimodal}, healthcare~\cite{devault2014simsensei}, the arts~\cite{camurri2000eyesweb}, and human-robot interaction~\cite{alonso2013multimodal}.
The first work on affect recognition began almost three decades ago, when physiological sensors, cameras and microphones were used to detect a host of affective responses. Early multimodal systems often comprised bulky equipment and wired sensors~\cite{kapoor2005multimodal}. The miniaturization of electronics and improvements in wireless communications now mean that sensing can be performed more easily using off-the-shelf devices that are small and ubiquitous (such as webcams, microphones and accelerometers). Multisense~\cite{stratou2017multisense} is a platform for multimodal affect sensing that incorporates both visual and audio components. Specifically, these components include 3D head position-orientation and facial tracking, facial expression and gaze analysis, and audio analysis. It leverages existing public tools for some of these components. For example, audio analysis is performed using the OpenSmile~\cite{eyben2010opensmile} package. SimSensei~\cite{devault2014simsensei} is a virtual human interviewer designed to create engaging face-to-face interactions that are driven in part via the Multisense sensing algorithms. Multisense broadcasts signals to the Kiosk using the Physical Markup Language (PML) standard. The Platform for Situated Intelligence (PSI)~\cite{bohus2017rapid,psiblogpost} is a new open source platform for building multimodal interactive systems. One of the aims of this project was to make it easier for developers to create multi-sensor and multimodal architectures and to handle low-level elements such as data synchronization. While not specifically designed for affective computing applications, it contains many of the elements needed in this domain. \section{Architecture} Our platform is composed of a number of processing \textit{components} that are connected together to form \textit{pipelines}. The system is built using PSI~\cite{bohus2017rapid,psiblogpost}, from which we inherit this terminology.
Figure~\ref{fig:architecture} shows an overview of the current sensing system. Components can easily be removed and/or new components added; thus, this specific sensing system is just one example of one configuration of components. Table~\ref{table_example} shows a summary of the audio and visual components and the outputs that are typically logged by the software. To help those who develop applications using the emotion-sensing platform, we created a simple debugging window, shown in Figure~\ref{fig:metrics_window}, that displays the values of these outputs in real-time. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Architecture.png} \caption{Block diagrams of the audio and visual processing pipelines in our platform. The components can easily be removed and/or new components added; this specific sensing system is just one example of one configuration of components.} \label{fig:architecture} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{Sensei_Metrics2.PNG} \caption{Debugging metrics window that shows the output parameters from the different sensing components in real-time. The green box highlights the location of a face within the video frame. The video frame is not shown to emphasize that no raw video (or audio) data is stored.} \label{fig:metrics_window} \end{figure*} \section{Vision Components} \subsection{Face Detection and Tracking} The video signals from the webcam are sampled at 15 frames-per-second (FPS). We use the Microsoft Face API\footnote{https://azure.microsoft.com/en-us/services/cognitive-services/face/} to detect the faces in each of the video frames and apply a landmark detector to identify the eyes, nose, and mouth. We log the number of faces in the frame of the camera and the location of these faces (both bounding boxes and 15 landmark points). If multiple faces are present in a single frame, the faces are given ids based on their location.
Using a heuristic rule-based method, these ids are propagated to the next frame. Therefore, if the people do not move dramatically or leave the frame of the camera, they are given the same id in the subsequent frame. More sophisticated tracking can be enabled by employing a face recognition step, as described below. \subsection{Face Recognition} We leverage a deep neural network to determine an embedding for each detected face. The embeddings are designed to be unique for each individual and robust to changes in appearance (e.g., hair style and clothing) or the environment. These face embeddings can be used to supersede the heuristic rule-based tracking if the face recognition component is employed. \subsection{Facial Expression Recognition} The faces are cropped from the video frames using the bounding box information provided by the face detector and the resulting image-patches sent to a facial expression detection algorithm. The facial expression detector returns eight probabilities, one for each of the following basic emotional expressions: anger, contempt, disgust, fear, happiness, sadness, surprise and neutral. This is a frequently employed categorization of facial expressions; however, it is not without critics, and displays of emotion are not uni-modal or necessarily universal~\cite{jack2012facial}. Nevertheless, researchers have successfully built algorithms that identify useful signals in the noisy world of human expression and have shown that these signals are consistent with socio-cultural norms~\cite{mcduff2017large}. We used the publicly available EmotionAPI\footnote{Microsoft, Inc.} emotion detector, allowing other researchers to replicate our method. The emotion detection algorithm is a convolutional neural network (CNN) based on VGG-13; more details can be found in~\cite{barsoum2016training}. We log the emotion probabilities for each face detected for each frame that is processed.
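A minimal sketch of the heuristic, location-based id propagation described above. The exact rule is not published, so nearest-centroid matching with a pixel threshold (`max_dist`) is our assumption, not the platform's actual logic.

```python
import math

def centroid(box):
    # box = (left, top, width, height)
    return (box[0] + box[2] / 2.0, box[1] + box[3] / 2.0)

def propagate_ids(prev, boxes, next_id, max_dist=60.0):
    """Assign ids to the bounding boxes of the current frame.

    prev:  dict mapping id -> bounding box from the previous frame.
    boxes: list of bounding boxes detected in the current frame.
    A face keeps its id if its centroid moved less than max_dist pixels
    (the matching rule and threshold are our assumptions).
    Returns (curr, next_id): id -> box for this frame, updated id counter.
    """
    curr, free = {}, dict(prev)
    for box in boxes:
        cx, cy = centroid(box)
        best, best_d = None, max_dist
        for fid, pbox in free.items():
            px, py = centroid(pbox)
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = fid, d
        if best is not None:
            del free[best]          # each previous face matches at most once
            curr[best] = box
        else:
            curr[next_id] = box     # unmatched detection -> new id
            next_id += 1
    return curr, next_id

# A face that moves slightly keeps id 0; a newly appearing face gets id 1.
frame1, n = propagate_ids({}, [(100, 100, 50, 50)], 0)
frame2, n = propagate_ids(frame1, [(110, 105, 50, 50), (400, 80, 48, 52)], n)
print(sorted(frame2))  # [0, 1]
```

This kind of matching fails when faces cross paths or leave and re-enter the frame, which is exactly the case the face recognition step below is meant to handle.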
The emotion detector can be run at less than 15 FPS in order to reduce the computational load of the software. We validated the facial affect classification on two independent public benchmark datasets, the CK+ dataset~\cite{lucey2010extended} and the FER dataset testing set~\cite{goodfellow2013challenges} (using the FER+ labels~\cite{barsoum2016training}). These datasets comprise 326 and 3,573 labeled images of the same basic emotion categories as our classifier, respectively. To characterize performance we report the true positive rate, false positive rate and accuracy for the task of categorizing the facial expression images. For CK+ the accuracy is 81.0\%, the false positive rate is 3.04\% and the true positive rate is 72.5\%. For FER the accuracy is 82.0\%, the false positive rate is 2.94\% and the true positive rate is 76.1\%. While facial expressions alone will not give a complete picture of an individual's emotional state, these results indicate that the model is able to detect facial expressions of emotion with reasonable reliability and gives relatively consistent performance across two different datasets. \subsection{Body Pose Estimation} We use a video-based body pose estimation method based on the approach described in~\cite{xiao2018simple}. This is a 2D pose estimation algorithm based on RGB video input. The algorithm is trained on the COCO dataset~\cite{lin2014microsoft}. The component returns the location of keypoints on the body which are mapped to major joints (e.g., knees, shoulders, elbows, wrists, etc.). The component uses a ResNet-based~\cite{he2016deep} approach implemented in PyTorch. \subsection{Imaging PPG} We use an approach for imaging photoplethysmography (iPPG) to extract heart rate estimates from the faces within the webcam video feed. The iPPG algorithm is implemented in a manner similar to the method presented in~\cite{poh2011advancements,mcduff2014improvements}.
The face detection bounding box is used to define the region of interest (ROI) of the face in each frame. The RGB color signals in this ROI are spatially averaged for each frame and a window of length 300 frames ($\sim$ 20 seconds) is constructed from the spatially averaged signals. These signals are then detrended and used as input to an Independent Component Analysis (ICA). The resulting source signals are analyzed in the frequency domain (using Fast Fourier Transforms) to determine the estimated HR. See~\cite{mcduff2014improvements} for more details. The method relies on observations from the face being complete for 300 frames in order to make an estimate; therefore, there is a short delay between when a face is detected and when an HR estimate is reported. We log the HR estimate for each 300 frame window. Previous validations of this approach have found accuracy to be within one to two beats-per-minute (BPM) mean absolute error when subjects are well lit and stationary~\cite{poh2011advancements,blackford2015effects}. Respiration rate can be estimated using a similar approach, by analyzing the RGB values across time but employing different filtering parameters, fitting an auto-regressive model and performing pole selection~\cite{tarassenko2014non}. For a survey of techniques see~\cite{mcduff2015survey}. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Summary of the detection and recognition components in our platform. The specific parameters that are logged/broadcast are indicated.} \label{table_example} \centering \begin{tabular}{rll} \hline & Component & Outputs \\ \hline \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{Microphone}} & Voice Activity Detection (VAD) & Boolean (VAD true/false) \\ & Speech Recognition & String: Text \\ & Language Sentiment Analysis & Floats: Probs.
of 8 emotions \\ & Voice Prosody Analysis & Floats: Pitch, Energy \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{Camera}} & Face Detection & Ints: Bounding box locations \\ & Face Recognition & Ints: Face embeddings \\ & Face Expression Recognition & Floats: Probs. of 8 emotions \\ & Pose Estimation & Floats: Joint locations \\ & Physiology & Floats: HR and Resp. rate \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{Applications}} & Applications & Strings: Window titles \\ & Calendar & Strings: Calendar event details \\ & Email & Floats: Email sentiment scores \\ & Keyboard & Booleans: Keyboard typing \\ & Mouse & Booleans: Mouse movement \\ \hline \end{tabular} \end{table} \section{Audio Components} \subsection{Voice Activity Detection} The audio received from the microphone is sampled at 16kHz and passed through a voice activity detector. We use the Microsoft Windows Voice Activity Detector (VAD)~\cite{tashev2015offline}. Speech activity is logged as a Boolean variable. In addition to allowing speech to be segmented for the purposes of further analysis, this also provides useful contextual information about whether a subject is talking. Combined with face detection, this allows for inference of in-person social interactions. \subsection{Speech Recognition} Segments of audio for which voice activity is detected are sent to a cloud-based speech-to-text (STT) service~\cite{MSspeechapi}. This returns the single most probable string according to the STT algorithm. This is the only part of the system that requires a cloud service, due to the superiority of cloud-based speech recognition engines compared to the models that were available locally. \subsection{Language Sentiment Analysis} The strings returned from the STT engine are passed through a language sentiment analysis component. This classifier returns probabilities for eight sentiment categories: joviality, fear, sadness, surprise, hostility, serenity, fatigue and guilt.
The classifier was trained on Twitter social media posts collected via the Twitter firehose. Details of the data, training and validation can be found in~\cite{de2012happy}. The sentiment classification uses somewhat similar categories to the facial expression classification, but with some differences to make the categories more appropriate for spoken language. As with facial expression categorization, these classes are unlikely to be universal and are certainly not exhaustive; however, they provide useful signals. \subsection{Voice Prosody Analysis} We employ a deep neural network (DNN) model to estimate the emotional valence (negative, neutral, positive) of the voice tone. First, the fundamental frequency and MEL frequency features are extracted from the voiced segments of audio. Then a DNN model is used to estimate the probabilities of the three emotional valence categories. The scores for the three classes (negative, neutral, positive) sum to one. \section{Application Logging Components} Often when tracking non-verbal expression and emotion data, it is difficult to analyze or interpret the observations effectively without context on what the subject is doing. When running locally on a client, our tool has components for tracking application usage, calendar events and email information. \subsection{Application Logging} The application logging captures the applications running on the machine. For each application we log the name, position on the screen and events (starting and closing the application, minimizing and maximizing the window, bringing the window to the foreground). \subsection{Email Logging} The body of each email that the participants sent is classified using a textual sentiment classifier to extract emotional valence (positive and negative).
The classification algorithm used is a logistic regression trained on a set of text corpora from multiple domains labeled for sentiment~\cite{pang2004sentimental,blitzer2007biographies,ganapathibhotla2008identifying} and an internal dataset of labeled comments and reviews. The textual features are unigrams (single words) and bigrams (pairs of words in sequence). During training, hyperparameter tuning was performed via a grid search of parameters and five-fold cross-validation. Feature selection was performed by using mutual information criteria to select the top 5,000 unigrams or bigrams, and domain-specific features were manually pruned to leave 1,200 features. \subsection{Calendar Logging} One of the richest sources of contextual data stored digitally is a person's digital calendar. Our software logged details of calendar events from Microsoft Outlook, including the number of attendees, start time, duration and whether the meeting was in-person or via Skype. \section{Local versus Cloud} Our platform allows components to be run locally (on device) or in the cloud. If run in the cloud, the video and audio are streamed via a communication protocol (e.g., WebRTC). Running the platform locally allows for minimal transmission of PII and for most components to be used even when there is no Internet connection available. Running the platform in the cloud allows for greater scalability and access to more computational resources than are available on the device. In the simplest form this only requires the participant to have a device that can run a web browser and support a webcam and microphone (e.g., smartphone, tablet, Raspberry Pi etc.). \section{Logging and Broadcasting} The system was designed with two data interfaces. First, the data is logged to a secure cloud server. The output from each component is averaged (or appended in the case of strings) over an aggregation window (default length one second).
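A minimal sketch of this aggregation step, assuming numeric outputs (floats and Booleans) are averaged and strings are appended; the field names below are illustrative, not the platform's actual schema.

```python
def aggregate(samples):
    """Collapse one aggregation window of component outputs into one row.

    Numeric outputs (floats, Booleans) are averaged and strings are
    appended, as described in the text. Field names are hypothetical.
    """
    row = {}
    for key in {k for s in samples for k in s}:
        vals = [s[key] for s in samples if key in s]
        if isinstance(vals[0], str):
            row[key] = " ".join(vals)          # append strings
        else:
            row[key] = sum(vals) / len(vals)   # average floats/Booleans
    return row

# Three samples from one (toy) one-second window.
window = [
    {"happiness": 0.8, "vad": True,  "speech": "hello"},
    {"happiness": 0.6, "vad": True,  "speech": "there"},
    {"happiness": 0.4, "vad": False},
]
row = aggregate(window)
print(row["speech"])  # hello there
```

Averaging Booleans this way turns the VAD stream into a speech-activity fraction for the window, which is a convenient single number to store per row.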
These values are stored in a single row of the cloud database. Second, the data is broadcast over a secure service bus that allows other applications, with permission, to ``listen'' for the data. This makes it easy to build applications on top of the sensing framework without having to interface with the code itself. The broadcast data can be read easily using Python and Javascript. To date, prototype applications have been developed using each of these interfaces. \section{Consent} Collecting rich information about behavior, expressions and applications raises important questions about privacy. When installed on personal computers, the sensing platform is always run with explicit consent from the users. On installation, they are presented with a consent form that details the information that will be logged. No raw audio, video or textual data is stored, including no titles or bodies of email messages, titles of calendar events or names of attendees of calendar events. The participants are always informed about how to start and stop the software and are told they are free to do so. The names, email addresses or other clearly identifiable information about the participants are also not linked to the data. \section{Conclusion} Non-verbal and verbal signals capture rich information about affect. Ubiquitous devices are now equipped with sensors that can be leveraged to quantify affective and emotional states. Multimodal sensing technology is non-trivial to build; such systems require expertise in computer vision, audio processing, natural language processing and psychology. To help expedite the development of emotion-aware systems, we developed a multimodal sensing platform that is designed to run on device or in the cloud. It includes components for face detection, recognition and tracking, expression analysis, pose estimation, voice activity detection, speech recognition and sentiment analysis.
The on-device option allows data to be processed locally, minimizing the need to transmit PII. The cloud option reduces constraints on the locally available computational resources. We hope that this platform is transparent and extensible and that we can create interfaces and experiences that build on top of it without ``reinventing'' affect sensing components every time. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank Michael Gamon, Mark Encarnacion, Ivan Tashev, Cha Zhang, Emad Barsoum, Dan Bohus and Nick Saw for the contribution of models and PSI components that are used in this platform. \balance{} \bibliographystyle{IEEEtran}
\section{Introduction} Developing methods to model and understand tempo and mode of macroevolution is an important goal for evolutionary biology (e.g., \citeauthor{harmon2010early}, \citeyear{harmon2010early}; \citeauthor{eastman2011novel}, \citeyear{eastman2011novel}; \citeauthor{revell2012phytools}, \citeyear{revell2012phytools}; \citeauthor{ingram2013surface}, \citeyear{ingram2013surface}). Equally important are methods for effectively representing phenotypic differences between species (\citeauthor{adams2013geomorph}, \citeyear{adams2013geomorph}; \citeauthor{pampush2016introducing}, \citeyear{pampush2016introducing}; \citeauthor{winchester2016morphotester}, \citeyear{winchester2016morphotester}), without which many evolutionary modeling questions would be moot (\citeauthor{slater2018hierarchy}, \citeyear{slater2018hierarchy}). The potential for rapidly and objectively quantifying morphological phenotypes benefits greatly from the advent of easily accessible and widely available 3D digital models of anatomical structures. The unprecedented accessibility of 3D data is a direct result of technology improvements and cost reductions for generating them (\citeauthor{copes2016collection}, \citeyear{copes2016collection}), as well as the proliferation and population of archives for sharing them (\citeauthor{boyer2016morphosource}, \citeyear{boyer2016morphosource}). The new potential for better quantifications of shape is timely because of growing recognition that analyzing the wrong traits, or poorly justified quantifications of traits, may lead to misleading impressions about which processes meaningfully describe a clade's evolution (\citeauthor{slater2018hierarchy}, \citeyear{slater2018hierarchy}). Instead, it has been suggested that the choice of morphological traits and the method for their quantification should be justified based on clade-specific hypotheses that propose not only an evolutionary mode but an ecological explanation.
In other words, demonstrating that one or more traits follow a particular evolutionary model does not go very far towards understanding the evolutionary processes at play in a clade, especially if there is no hypothesis relating variation in those traits to ecological variation. For instance, although \citeauthor{harmon2010early} (\citeyear{harmon2010early}) showed that the `adaptive radiation' (\citeauthor{osborn1902law}, \citeyear{osborn1902law}) or `early burst' (EB) model of evolution was rarely supported among dozens of clades tested, their study did not specify why the particular morphological traits they looked at should follow the EB model. Showing that different traits can have different evolutionary patterns in the same clade, \citeauthor{meloro2010cats} (\citeyear{meloro2010cats}) found that tooth size and carnassial angle variables followed very different evolutionary patterns within Carnivora. Carnassial angle, arguably the more directly functional variable, followed an adaptive radiation model, while m1 size followed a simpler Brownian motion model. As another example of the importance of trait function, \citeauthor{cantalapiedra2017decoupled} (\citeyear{cantalapiedra2017decoupled}) chose to quantify relative tooth crown height (hypsodonty) in order to understand drivers of disparity and diversity in equids, because hypsodonty has seemingly obvious adaptive significance for grazing in many clades, even beyond horses. Moreover, hypsodonty has been formally demonstrated by \citeauthor{eronen2010impact} (\citeyear{eronen2010impact}) to be an ecometric (\citeauthor{eronen2010ecometrics}, \citeyear{eronen2010ecometrics}; \citeauthor{polly2015measuring}, \citeyear{polly2015measuring}) for grassland use in equids. A promising class of features consists of those that quantify the overall geometric quality of an object's surface.
They are referred to as ``shape characterizers'' and distinguished from ``shape descriptors'' (\citeauthor{evans2013shape}, \citeyear{evans2013shape}), the latter primarily including geometric morphometric quantifications of shape (\citeauthor{adams2013geomorph}, \citeyear{adams2013geomorph}). Examples of shape characterizers include relief index (RFI) (e.g., \citeauthor{m2003occlusal}, \citeyear{m2003occlusal}), orientation patch count (OPC) (e.g., \citeauthor{evans2007high}, \citeyear{evans2007high}; \citeauthor{evans2014evolution}, \citeyear{evans2014evolution}; \citeauthor{melstrom2017relationship}, \citeyear{melstrom2017relationship}) and Dirichlet Normal Energy (DNE) (e.g., \citeauthor{bunn2011comparing}, \citeyear{bunn2011comparing}; \citeauthor{winchester2016morphotester}, \citeyear{winchester2016morphotester}; \citeauthor{pampush2016introducing}, \citeyear{pampush2016introducing}). RFI measures both the relative height and sharpness of an object; OPC measures the complexity or rugosity of a surface; DNE measures the bending energy of a surface. When different teeth exhibit substantially different values of a shape characterizer, they usually also look different and have easily conceivable functional and ecological differences. For instance, a tooth with higher relief often has sharper blades or longer, sharper cusps that cut or pierce food items more effectively than a tooth with lower relief. As another example, DNE differences among blood cells potentially correspond to the turbulence they induce in blood flow, or whether they tend to clog small arterioles. DNE has several advantages compared to popular shape characterizers like RFI and OPC. First, DNE is landmark-free and independent of the surface's initial position, orientation and scale, making it less susceptible to observer-induced error/noise. RFI and OPC rely on the orientation of the tooth relative to an arbitrarily defined occlusal plane.
OPC also relies on the orientation of the tooth with regard to rotation around the central vertical axis. Second, direct comparisons show that DNE has a stronger dietary signal for teeth than RFI and OPC (\citeauthor{winchester2014dental}, \citeyear{winchester2014dental}). This greater success in dietary separation is likely due to its more effective isolation of information on the ``sharpness'' of surface features. In contrast, RFI only measures the relative cusp and/or crown height, which does not describe sharpness; OPC is less sensitive to changes in blade orientation due to its binning protocol (\citeauthor{boyer2010evidence}, \citeyear{boyer2010evidence}). DNE computes a discrete approximation to the {\it Dirichlet Energy of the normal}. This quantity is a mathematical attribute of a continuous surface, coming from differential geometry; it is defined as the integral, over the surface, of the change in the normal direction, indicating at each point how much the surface bends. In practical applications, a continuous surface is represented as a triangular mesh, which can be described by a collection of points or nodes and triangles. (We note that the nomenclature is not standardized across all scientific fields; in computer science these would be called {\it vertices} and {\it triangular faces}, respectively; see e.g., \citeauthor{botsch2010polygon}, \citeyear{botsch2010polygon}). To compute DNE on such a discrete mesh, normal directions must be estimated for each point/triangle. The sum of the change of normal directions over the points/triangles is then used to approximate the Dirichlet energy of the normal for the continuous surface that the mesh represents.
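The idea of summing the change in normals over a discrete mesh can be conveyed with a deliberately simplified toy. This is not the DNE discretization of MorphoTester or molaR (which compute a per-face energy); it only illustrates that a flat mesh yields zero and that the value grows as the surface bends.

```python
import itertools, math

def unit_normal(v0, v1, v2):
    # Unit normal of a triangle via the cross product of two edge vectors.
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)

def toy_normal_energy(points, triangles):
    """Sum of squared normal differences over triangles that share an edge.

    A much-simplified stand-in for DNE: the published algorithm uses a
    different (per-face) discretization, but both measure normal change.
    """
    normals = [unit_normal(*(points[i] for i in tri)) for tri in triangles]
    energy = 0.0
    for (a, ta), (b, tb) in itertools.combinations(enumerate(triangles), 2):
        if len(set(ta) & set(tb)) == 2:   # the two triangles share an edge
            na, nb = normals[a], normals[b]
            energy += sum((na[i] - nb[i]) ** 2 for i in range(3))
    return energy

tris = [(0, 1, 2), (0, 2, 3)]
pts_flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]      # flat sheet
pts_bent = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)]    # one raised node
print(toy_normal_energy(pts_flat, tris))      # 0.0 -- a flat sheet does not bend
print(toy_normal_energy(pts_bent, tris) > 0)  # True
```

The toy also exposes the sensitivity discussed below: re-triangulating the same surface changes which faces are adjacent, and hence changes the summed normal differences.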
However, the DNE algorithm published in {\it MorphoTester} (\citeauthor{winchester2016morphotester}, \citeyear{winchester2016morphotester}) and in the {\it R} package ``molaR'' (\citeauthor{pampush2016introducing}, \citeyear{pampush2016introducing}) is sensitive to varying mesh preparation protocols and requires special treatment for boundary triangles, which are triangles that have one side/node that falls on the boundary of the mesh (\citeauthor{pampush2016wear}, \citeyear{pampush2016wear}; \citeauthor{spradley2017smooth}, \citeyear{spradley2017smooth}), leading to concerns regarding comparability and reproducibility when utilizing DNE for morphological research. Recent attempts to address this issue have developed protocols for standardizing the mesh preparation process (\citeauthor{spradley2017smooth}, \citeyear{spradley2017smooth}). Unlike previous work, we provide \underline{a} \underline{r}obustly \underline{i}mplemented \underline{a}lgorithm for Dirichlet energy of the normal (ariaDNE) that is insensitive to a greater range of mesh preparation protocols. Fig.~\ref{fig:teaser} shows DNE and ariaDNE values on an example tooth. The red surface shading indicates the value of curvature as measured by each approach; it is uniformized across each row by the row's highest local curvature value. To demonstrate this insensitivity empirically, we test the stability of our algorithm on tooth models with differing triangle counts, remeshing/mesh representation (i.e., a different set of points/nodes or triangles representing the same continuous surface) and simulated noise. We also test the effects of smoothing and boundary triangles as in \citeauthor{spradley2017smooth} (\citeyear{spradley2017smooth}). We furthermore assess the dietary differentiation power of ariaDNE. \begin{figure} \begin{center} \includegraphics[width = 12cm]{teaser.jpg} \end{center} \caption{ Comparing effects of mesh resolution (triangle count), re-meshing, noise and smoothing on ariaDNE (top) and DNE (bottom).
(a) shows the distribution of curvature as measured by each method overlaid in shades of red on a grey 3D rendering of the surface. ariaDNE and DNE values, normalized for comparability by the values for the typical tooth, are shown above each surface and summarized in the bar plots (b), demonstrating the robustness of ariaDNE versus DNE. } \label{fig:teaser} \end{figure} \section{Materials and Methods} \label{sec:method} \subsection{Methodological analysis of ariaDNE} \label{sec:ariadne} \citeauthor{bunn2011comparing} (\citeyear{bunn2011comparing}) noted the relevance of the differential geometry concept of Dirichlet energy of the normal for morphology and provided an algorithm, called DNE, that calculates an approximation to this quantity on discrete surface meshes by summing the local energy over all triangles. The local energy on a triangle is defined by the total change in the normals; this provides a local estimate for the curvature of the surface. However, this change in normals is sensitive to how a continuous surface is discretized. That is, a different resolution (triangle count), mesh representation, or contamination by noise or small artifacts can all lead to significantly different numerical values. To address this sensitivity problem, we leverage the observation that the local energy can also be expressed by the curvature at the query point on the surface (\citeauthor{willmore1965note}, \citeyear{willmore1965note}); another simple method for estimating curvature on discrete surfaces is by Principal Component Analysis (PCA). The procedure is outlined as follows. For each query point, find all its neighboring points within a fixed radius; the value of this radius is set as a parameter for the method (\citeauthor{yang2006robust}, \citeyear{yang2006robust}).
Then apply PCA to the coordinates of those points; the plane spanned by the first two principal components typically approximates the tangent plane to the surface at the query point, with the third principal component approximating the normal direction. The corresponding smallest principal component score $\sigma = \lambda_0/(\lambda_0 + \lambda_1 + \lambda_2)$, where $\lambda_0<\lambda_1<\lambda_2$, indicates the deviation from the fitted plane, i.e., the curvature. There are two issues with this PCA method: (1) The third principal component does not always approximate the surface normal, and therefore the smallest principal component score may not accurately reflect the curvature discussed above, that is, the deviation of the surface from the tangent plane. Fig.~\ref{fig:normals} (top) shows an erroneous normal approximation for a pointed cusp, where the normals should be perpendicular to the surface but standard PCA gives a skewed estimate. (2) Standard PCA becomes numerically unstable (due to ill-conditioning) when the number of nearby neighbors is low. This implies that when the mesh is of low resolution, there may not be enough points to conduct PCA. \begin{figure} \begin{center} \includegraphics[width = 5cm]{normals.png} \end{center} \caption{Improved normal estimation with our modified PCA method. Top: the traditional PCA method gives skewed normal estimates on a pointed cusp, leading to erroneous curvature approximation. Bottom: our method gives better normal approximation, and therefore improves curvature approximation.} \label{fig:normals} \end{figure} To resolve the first issue, we modify the algorithm to choose at each query point the principal component closest to its normal, and set the curvature at that point to be the score of the chosen principal component.
Fig.~\ref{fig:normals} (bottom) illustrates the effects of this simple modification, which produces estimates more consistent with surface normals, thereby providing a better local estimate of the tangent plane, and in turn curvature. In practice, normals at a point are obtained by taking a weighted average of normals of adjacent triangles, easily computed on discrete meshes. To resolve the second issue, we propose a modification to the traditional PCA method. Selecting the neighbors within a fixed radius could result, near some point, in a small-sized neighborhood where few or even no points would be selected; instead, we apply a ``weighted PCA'', with weights decaying according to the distance away from the query point, retaining the rest of the procedure. There are many ways to define the weight function. The traditional PCA method chooses the weight function to be the indicator function over the set of points within an a priori specified distance from the query point (i.e. the weight is one for the points within a fixed radius and zero elsewhere). For ariaDNE, we set the weight function to be the widely-used Gaussian kernel $f(x) = e^{-x^2/ \epsilon^2}$. The Gaussian kernel captures local geometric information on the surface. The parameter $\epsilon$ indicates the size of local influence. Figure~\ref{fig:plate} illustrates the effects of different $\epsilon$ on the weight function: the larger $\epsilon$ is, the more points on the mesh have significant weight values, resulting in larger principal component scores for those points. In consequence, when $\epsilon$ increases, local energy for each point becomes larger, and therefore ariaDNE becomes larger. In practice, we suggest using $\epsilon$ ranging from 0.04 to 0.1. If $\epsilon$ is too small, the computed ariaDNE score will be highly sensitive to trivial features of the surface that are most likely to be noise (similar to traditional DNE); if $\epsilon$ is too large, the approximation will simply become non-local.
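The effect of $\epsilon$ on the effective neighborhood size of the Gaussian kernel $f(x) = e^{-x^2/\epsilon^2}$ can be illustrated with a one-dimensional numerical toy; the 0.01 point spacing and the 0.01 weight cutoff below are our choices for illustration, not part of the ariaDNE method.

```python
import math

def effective_neighbors(dists, eps, cutoff=0.01):
    """Count points whose Gaussian weight exp(-(d/eps)^2) exceeds a cutoff."""
    return sum(1 for d in dists if math.exp(-(d / eps) ** 2) > cutoff)

# Points spaced 0.01 apart from the query point, as on a unit-scaled surface.
dists = [0.01 * k for k in range(1, 101)]
for eps in (0.04, 0.06, 0.08, 0.10):
    print(eps, effective_neighbors(dists, eps))
```

The count of points with non-negligible weight grows roughly linearly with $\epsilon$, which is why larger $\epsilon$ makes the weighted PCA, and hence ariaDNE, respond to larger-scale features.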
Choosing an appropriate value of $\epsilon$ depends on the application at hand. \begin{figure} \begin{center} \includegraphics[width = 12cm]{plate.jpg} \end{center} \caption{Effect of increasing the $\epsilon$ parameter (bandwidth) on the weight function (top; red indicates highest weight) and curvature computed by ariaDNE for molar teeth of {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Chiropotes}, {\it Pithecia}, {\it Saimiri}. Choices for $\epsilon$ are 0.02, 0.04, 0.06, 0.08, 0.10, 0.12, with surface shading similar to Figure~\ref{fig:teaser}. When $\epsilon$ is small, both DNE and ariaDNE capture fine-scale features on the tooth. When $\epsilon$ is larger, ariaDNE captures larger scale features.} \label{fig:plate} \end{figure} In summary, we apply a weighted PCA, localized around each query point by means of the Gaussian kernel function. Then we find the principal component that is closest to its normal and set the curvature to be its principal score. AriaDNE is then computed by integrating this curvature estimate along the surface. For the exact procedure, see Appendix A in the supplementary materials. \subsection{Study samples} Understanding the correlation between surface geometry and a metric like DNE or ariaDNE helps understand whether these metrics are relevant to questions concerning morphology, ecology and evolution. The meaningfulness and success of a metric have to be measured against relevant samples and the research questions. Here we use a sample of New World monkey (platyrrhine) second mandibular molars downloaded from {\it Morphosource} (\citeauthor{winchester2014dental}, \citeyear{winchester2014dental}). The sample has significant inter-specific breadth (7 genera) and intra-specific depth (10 individuals per genus).
It consists of meshes (117,623--665,001 points; 234,358--1,334,141 triangles) from 7 extant platyrrhine primate genera: {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Chiropotes}, {\it Pithecia}, and {\it Saimiri}. Platyrrhine dentitions have been essential for questions about dental variation and dietary preference (\citeauthor{anthony1993tooth}, \citeyear{anthony1993tooth}; \citeauthor{dennis2004dental}, \citeyear{dennis2004dental}; \citeauthor{ledogar2013diet}, \citeyear{ledogar2013diet}; \citeauthor{winchester2014dental}, \citeyear{winchester2014dental}; \citeauthor{allen2015dietary}, \citeyear{allen2015dietary}; \citeauthor{pampush2016wear} 2016 a,b). Questions have included how dietarily diverse platyrrhines should be considered based on available behavioral data, whether and how dental morphology reflects diet differences, and how important tooth wear, individual variation, and the scale of geometric features are when considering tooth differences between species. In the following sections, we tested the stability of ariaDNE by perturbing attributes like triangle count and mesh representation. We also tested the effects of noise, smoothing and boundary triangles on ariaDNE. Furthermore, we assessed its power in differentiating the 7 platyrrhine primate species according to dietary habits. \subsection{Sensitivity test} \subsubsection{Triangle count} \label{sec:one} To evaluate the sensitivity of ariaDNE under varying mesh resolution, each tooth was downsampled to produce simplified surfaces with 20k, 16k, 12k, 8k, 4k, and 2k triangles. We computed ariaDNE values ($\epsilon = 0.04, 0.06, 0.08, 0.1$) using the MATLAB function ``ariaDNE'' provided in Section~\ref{sec:code}. For comparison, we also computed traditional DNE values using the function ``DNE'' (Section~\ref{sec:code}), a MATLAB port of the R function ``DNE'' from ``molaR''. 
Default parameters were used for DNE, with the outlier percentile at 0.1 and boundary triangles excluded. \subsubsection{Mesh representation} \label{sec:two} A continuous surface can be represented by different discrete meshes; even at the same resolution (triangle count), meshes can differ in the positions of points and in their adjacency relations (i.e., triangles). We would like ariaDNE to be roughly the same for all meshes that represent the same continuous surface. To evaluate the sensitivity of ariaDNE under varying mesh representations, we tested it on a surface generated by a mathematical function as well as on real tooth samples. First, we tested it on the surface $S$ defined by $z = 0.3\sin(2x)\sin(2y)$ where $0 \leq x \leq 1$ and $0 \leq y \leq 1$ (Fig.~\ref{fig:triangulation}). To generate a mesh for this explicitly defined surface, we randomly picked 2000 sets of $(x,y)$ coordinates uniformly distributed on $0 \leq x \leq 1$ and $0 \leq y \leq 1$ and calculated their accompanying $z$-values using the equation above. Each set of $(x,y,z)$ coordinates represented a node/point in the mesh, and the triangles were obtained by applying Delaunay triangulation to these points. We generated 100 meshes by repeating these steps and computed their DNE and ariaDNE values as in \ref{sec:one}. We remark here that meshes generated by this procedure do not necessarily have evenly distributed points; some areas of the mesh can have finer resolution than others. Real tooth samples are already given as meshes; we generated new mesh representations for each tooth sample by computing pairwise surface correspondences. Specifically, points and their adjacency relations from one surface were transferred to another by correspondence maps, computed between all pairs of surfaces in the sample using the methods in (\citeauthor{boyer2011algorithms}, \citeyear{boyer2011algorithms}). 
These correspondences resulted in 70 different mesh representations for each tooth in the sample. We computed their DNE and ariaDNE ($\epsilon = 0.04, 0.06, 0.08, 0.1$) values as in \ref{sec:one}. \subsubsection{Simulated noise} To evaluate the sensitivity of ariaDNE to small artifacts on the surface, we tested it with simulated noise added to the surface defined in \ref{sec:two} as well as to real tooth samples. First, given a mesh representing the same surface $S$ as in \ref{sec:two}, a noisy mesh was obtained by adding a random variable uniformly distributed on $[-0.001, 0.001]$ to the $x, y, z$ coordinates of each node/point on the mesh (Fig.~\ref{fig:noise}). We then generated 100 noisy versions of the given mesh by repeating the previous steps. For real tooth data, we generated a noisy mesh by adding a random variable uniformly distributed on $[-0.003, 0.003]$ to the $x, y, z$ coordinates of each node/point in the mesh (Fig.~\ref{fig:noise}). The noise level was chosen arbitrarily; we added more noise to the tooth samples to increase the diversity of the test cases. We obtained 100 noisy meshes per tooth, and computed their DNE and ariaDNE ($\epsilon = 0.04, 0.06, 0.08, 0.1$) values as in \ref{sec:one}. \begin{figure}[H] \begin{center} \includegraphics[width = 10cm]{triangulation_shape.png} \end{center} \caption{Effect of varying mesh representations on ariaDNE and DNE values computed for a synthetic surface (top) and a tooth from {\it Ateles} (bottom). Left panel: examples of different mesh representations. 
Right panel: scatter plots and box plots of ariaDNE ($\epsilon = 0.08$) and DNE values computed for $N$ meshes representing the synthetic surface (top, $N = 100$) and the tooth surface (bottom, $N = 70$).} \label{fig:triangulation} \end{figure} \begin{figure} \begin{center} \includegraphics[width = 10cm]{noise.png} \end{center} \caption{Effect of simulated noise on ariaDNE and DNE values computed for a synthetic surface (top) and a tooth from {\it Ateles} (bottom). Left panel: examples of original surfaces (left) and noisy surfaces (middle). Right panel: scatter plots and box plots of ariaDNE ($\epsilon = 0.08$) and DNE values computed for 100 meshes with random noise representing the synthetic surface (top) and tooth surface (bottom).} \label{fig:noise} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width = 12cm]{face.jpg} \end{center} \caption{Effect of increasing triangle count on ariaDNE (left) and DNE (right) values computed for 7 teeth from {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Saimiri}, {\it Chiropotes}, {\it Pithecia}. The ariaDNE values for each tooth remain relatively unchanged, compared to the DNE values, under varying resolution/triangle counts.} \label{fig:face} \end{figure} \subsubsection{Smoothing} Smoothing is commonly used to eliminate noise produced during scanning, segmentation, and reconstruction. \citeauthor{spradley2017smooth} (\citeyear{spradley2017smooth}) tested the effects of various smoothing operators and smoothing amounts on DNE with surface meshes of hemispheres and primate molars. They suggested that aggressive smoothing procedures like Laplacian smoothing and implicit fairing should be avoided. 
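The smoothing operators compared in this study all move vertices toward a local average of their neighbors. Below is a minimal uniform-weight (``umbrella operator'') Laplacian smoothing sketch in Python, purely for illustration: it does not reproduce the Avizo, Meshlab or MorphoTester implementations, and the damping factor `lam` is only loosely analogous to Avizo's lambda parameter.

```python
import numpy as np

def laplacian_smooth(verts, tris, iterations=3, lam=0.6):
    # build vertex adjacency from the triangle list
    nbrs = [set() for _ in range(len(verts))]
    for a, b, c in tris:
        nbrs[a].update((b, c))
        nbrs[b].update((a, c))
        nbrs[c].update((a, b))
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iterations):
        # move each vertex a fraction lam toward its neighbor average
        avg = np.array([v[sorted(nb)].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(nbrs)])
        v += lam * (avg - v)
    return v

# a pyramid with a sharp apex: smoothing pulls the apex toward the base
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1.0]]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
smoothed = laplacian_smooth(verts, tris, iterations=5)
```

Sharp features shrink quickly under such iteration, which is why aggressive smoothing depresses curvature-based scores like DNE.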
To evaluate the performance of our method under different smoothing algorithms, we randomly picked 7 tooth models from our sample (one from each taxon) and generated smoothed surfaces by separately applying 100 iterations of the {\it Avizo} smoothing module, 3 iterations of the {\it Meshlab} function {\it HC Laplacian Smoothing}, and 3 iterations of the {\it implicit fairing method} using {\it MorphoTester}. Then we computed their DNE and ariaDNE ($\epsilon = 0.08$) as in \ref{sec:one}. To further evaluate the performance of our method under varying amounts of Avizo smoothing, we iteratively applied the {\it Avizo} smoothing module to a single molar tooth from {\it Ateles}. The smoothing module was applied to the raw surface mesh for iteration counts evenly spaced from 20 to 200, in intervals of 20, generating 10 new surface meshes. The default value of lambda was kept ($\lambda = 0.6$). We then computed their DNE and ariaDNE ($\epsilon = 0.08$) as in \ref{sec:one}. \subsubsection{Boundary triangles} Triangles with a side or node on the boundary of the mesh have a large impact on traditional DNE, calling for special treatment (\citeauthor{spradley2017smooth}, \citeyear{spradley2017smooth}). We assessed how such boundary triangles affect ariaDNE on two molar teeth, one from {\it Ateles}, where the crown side walls are relatively bulged outward, and one from {\it Brachyteles}, where the crown side walls are relatively unbulged (Fig.~\ref{fig:boundary}). For each tooth, we found its boundary triangles and computed their local energy using both ariaDNE and DNE (``BoundaryDiscard'' = ``none'', i.e., no boundary triangles were removed). \subsection{Tests on species differentiation} \label{sec:three} Previous studies revealed systematic variation in DNE values and other topographic metrics, such as RFI, among species with different dietary habits. 
To test the power of these metrics to differentiate species and their dietary preferences, we compared RFI, DNE and ariaDNE on the 70 lower second molars in our sample, which belong to 7 taxa: {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Chiropotes}, {\it Pithecia}, and {\it Saimiri}. The diet-classification scheme from \citeauthor{winchester2014dental} (\citeyear{winchester2014dental}) was used to identify {\it Alouatta} and {\it Brachyteles} as folivorous, {\it Ateles} and {\it Callicebus} as frugivorous, {\it Chiropotes} and {\it Pithecia} as hard-object feeding, and {\it Saimiri} as insectivorous. For each tooth we computed its RFI, DNE and ariaDNE ($\epsilon = 0.02, 0.04, 0.06, 0.08, 0.1, 0.12$) as in \ref{sec:one}. We then used ANOVA and multiple comparison tests to assess their differentiation power for dietary preferences. \section{Results} \subsection{Sensitivity tests} In numerical analysis, an algorithm is stable if perturbing its inputs does not significantly affect its outputs. To enable comparison, the change in the outputs can be quantified by the {\it coefficient of variation}, the ratio of the standard deviation to the mean. We perturbed each mesh in the sample by varying the number of triangles, changing the mesh representation or adding simulated noise. Supplementary Tables 1-4 provide the coefficients of variation of the DNE and ariaDNE values of the perturbed meshes in each collection per tooth model. For each tooth and each perturbed collection, the coefficient of variation of ariaDNE is less than that of DNE, meaning ariaDNE is relatively more stable than DNE under varying resolution/triangle count, remeshing/mesh representation and noise. Table~\ref{table:all} summarizes these results, reporting the means of the coefficients of variation from Supplementary Tables 1-4. 
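The stability criterion above, the coefficient of variation of metric values over a collection of perturbed meshes, can be sketched as follows. The surface metric here is a deliberately simple stand-in (not ariaDNE), and all names are illustrative:

```python
import numpy as np

def coefficient_of_variation(values):
    # ratio of the standard deviation to the mean
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

def toy_metric(verts):
    # placeholder surface metric (NOT ariaDNE): length of a vertex chain
    return np.linalg.norm(np.diff(verts, axis=0), axis=1).sum()

rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 300).reshape(100, 3)  # stand-in vertex array

# 100 perturbed copies: uniform noise in [-0.001, 0.001] per coordinate,
# mirroring the simulated-noise protocol described earlier
scores = [toy_metric(base + rng.uniform(-0.001, 0.001, size=base.shape))
          for _ in range(100)]
cov = coefficient_of_variation(scores)
```

A stable metric yields a small `cov` over the perturbed collection; the tables report exactly this quantity for DNE and ariaDNE.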
Fig.~\ref{fig:face} illustrates the effects of increasing triangle count on ariaDNE ($\epsilon = 0.10$) and DNE values computed for 7 arbitrarily chosen teeth from {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Saimiri}, {\it Chiropotes}, {\it Pithecia}. The ariaDNE values for each tooth (maximum percent change: 3.42\%) remain relatively unchanged under varying resolution/triangle counts, compared to the DNE values (maximum percent change: 384\%). Figs.~\ref{fig:triangulation} and \ref{fig:noise} compare normalized DNE and ariaDNE values computed for a synthetic surface and a tooth by varying the mesh representation and adding noise. In the scatter plots, the ariaDNE and DNE values are normalized to have a mean of one in each case; in the box plots, the values are normalized to have a median of one in each case. \begin{table} \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{llrrr} \toprule \multicolumn{2}{c}{Method} & Triangle Count & Remeshing & Noise\\ \midrule ariaDNE & $\epsilon = 0.04$ & 0.0213 & 0.0824 & 0.0055 \\ & $\epsilon = 0.06$ & 0.0114 & 0.0429 & 0.0044 \\ & $\epsilon = 0.08$ & 0.0117 & 0.0304 & 0.0039 \\ & $\epsilon = 0.10$ & 0.0117 & 0.0293 & 0.0038 \\ DNE & & 0.420 & 2.3075 & 0.0169\\ \bottomrule \end{tabular}} \caption{Robustness of ariaDNE under perturbation of various mesh attributes. For each tooth in the 70-tooth platyrrhine sample, we generated three collections of perturbed meshes by varying the number of triangles, changing the mesh representation or adding simulated noise. We computed the coefficient of variation of the DNE and ariaDNE values in each collection for each tooth (see Supplementary Tables 1-4). The numbers in the table are means across all 70 tooth samples.} \label{table:all} \end{table} Table~\ref{table:smooth} shows the relative change of ariaDNE and DNE values under different smoothing algorithms. 
After 100 iterations of {\it Avizo} smoothing, ariaDNE increased by 2\% of its original value, whereas traditional DNE dropped to 46\% of its original value. After 3 iterations of HC Laplacian smoothing or implicit fairing, ariaDNE dropped to approximately 90\% of the original value, whereas traditional DNE dropped to approximately 40\%. The larger drop in values under Laplacian smoothing and implicit fairing is consistent with the discussion by \citeauthor{spradley2017smooth} (\citeyear{spradley2017smooth}). However, for all smoothing algorithms, the variation in ariaDNE is significantly lower than that in traditional DNE. This suggests that ariaDNE is relatively stable under varying smoothing algorithms. \begin{table} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{lrrrrrcrrrrr} \toprule & \multicolumn{5}{c}{DNE} & \phantom{a}& \multicolumn{5}{c}{ariaDNE} \\ \cmidrule{2-6} \cmidrule{8-12} & Raw & Avizo & Laplacian & Fairing &COV && Raw & Avizo & Laplacian & Fairing &COV \\ \midrule {\it Alouatta} & 1 & 0.38 & 0.35 & 0.48 &0.120 &&1 & 1.00 & 0.90 & 0.90 &0.049\\ {\it Ateles} & 1 & 0.57 & 0.46 & 0.57 &0.144 &&1 & 1.05 & 0.93 & 0.93 & 0.056 \\ {\it Brachyteles} & 1 & 0.54 & 0.45 & 0.57 &0.127 && 1 & 1.06 & 0.96 & 0.95 & 0.053\\ {\it Callicebus} & 1 & 0.46 & 0.39 & 0.52 &0.122 && 1& 0.99 & 0.90 & 0.91 &0.074 \\ {\it Chiropotes} & 1 & 0.51 & 0.37 & 0.48 &0.112 && 1 & 1.02 & 0.95 & 0.94 &0.06 \\ {\it Pithecia} & 1 & 0.43 & 0.29 & 0.41 &0.165 && 1 & 1.02 & 0.94 & 0.93 &0.058 \\ {\it Saimiri} & 1 & 0.33 & 0.44 & 0.56 &0.169 &&1 & 1.00 & 0.90 & 0.92 &0.056 \\ \bottomrule \end{tabular}} \caption{Effect of different smoothing algorithms on DNE and ariaDNE ($\epsilon = 0.08$) computed on the surfaces of molar teeth from {\it Alouatta}, {\it Ateles}, {\it Brachyteles}, {\it Callicebus}, {\it Chiropotes}, {\it Pithecia}, {\it Saimiri}. The numbers in the table are DNE and ariaDNE values divided by the values for the raw surfaces, indicating the relative change. 
The table also contains the coefficients of variation (COV) of DNE and ariaDNE computed on the three smoothed surfaces for each taxon. The table demonstrates: (1) the effect of smoothing on ariaDNE is limited compared to its effect on DNE; (2) ariaDNE is relatively more stable under varying smoothing algorithms.} \label{table:smooth} \end{table} For Avizo smoothing, the results show that both DNE and ariaDNE decrease for the first 40 iterations of smoothing. After approximately 40 iterations, ariaDNE increases while traditional DNE continues to decrease until 100 iterations. After 100 iterations, both start to increase. Perhaps most importantly, the overall change in ariaDNE from unsmoothed to smoothed surfaces is much smaller than the overall change in traditional DNE. The eventual increase in DNE values with continued smoothing is caused by mesh artifacts created during smoothing: after 40 iterations of smoothing, the cusps of the smoothed mesh grow taller, while the basin becomes lower. Overall, ariaDNE has a smaller variance than the traditional implementation. This suggests that ariaDNE is relatively more stable under varying numbers of Avizo smoothing iterations. \begin{figure}[H] \begin{center} \includegraphics[width = 5cm]{smoothing.jpg} \end{center} \caption{Percent change under varying amounts of {\it Avizo} smoothing in ariaDNE ($\epsilon = 0.08$) and DNE values computed for an {\it Ateles} molar tooth. } \label{fig:smoothing} \end{figure} Fig.~\ref{fig:boundary} shows that the local energies of the boundary triangles computed with ariaDNE are among the smallest, whereas those computed with DNE include a few larger values, which affect the DNE value for the whole surface. This histogram suggests that the effect of boundary triangles on ariaDNE is limited, and therefore no special treatment for them is needed. This represents another improvement of ariaDNE over DNE. 
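Boundary triangles can be identified generically as triangles owning an edge that belongs to no other triangle. A small Python sketch of this generic mesh operation (it is not the molaR ``BoundaryDiscard'' code):

```python
from collections import Counter

def boundary_triangles(tris):
    # count how many triangles share each undirected edge
    edge_count = Counter()
    for a, b, c in tris:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    # a triangle is a boundary triangle if one of its edges is unshared
    return [i for i, (a, b, c) in enumerate(tris)
            if any(edge_count[tuple(sorted(e))] == 1
                   for e in ((a, b), (b, c), (c, a)))]

# an open square patch: both triangles touch the boundary
open_patch = boundary_triangles([(0, 1, 2), (0, 2, 3)])
# a closed tetrahedron: every edge is shared by two triangles, so none
closed_tet = boundary_triangles([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
```

The histograms in Fig.~\ref{fig:boundary} compare the local energies of exactly such triangles against those of the full mesh.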
\begin{figure} \begin{center} \includegraphics[width = 6cm]{boundary.jpg} \end{center} \caption{The boundary triangles have less impact on ariaDNE than DNE. The left panel shows an {\it Ateles} molar (top) with a curved side wall and a {\it Brachyteles} molar (bottom) with a straight side wall. The right panel shows histograms of the local energy values of the boundary triangles, computed by ariaDNE and DNE. To enable comparison, the values are normalized by the mean over all triangles.} \label{fig:boundary} \end{figure} \subsection{Species differentiation power} \label{sec:four} \begin{table} \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{lrrrrrrrr} \toprule &RFI & DNE & \multicolumn{6}{c}{ariaDNE} \\ & & & $\epsilon = 0.02$ & 0.04 &0.06 &0.08 &0.10 &0.12 \\ \midrule Fo-Fr &0.0001 &0.0224 &0.0512 &0.8362 &0.0207 &0.0002 &0.0000 &0.0000 \\ Fo-H &0.0000 &0.0000 &0.0010 &0.0003 &0.0000 &0.0000 &0.0000 &0.0000\\ Fo-I &0.2414 &0.8463 &0.8986 &0.7564 &0.6544 &0.5584 &0.5893 &0.7376\\ Fr-H &0.9372 &0.2295 &0.5425 &0.0049 &0.0000 &0.0000 &0.0000 &0.0000\\ Fr-I &0.1888 &0.3910 &0.4738 &0.3461 &0.0034 &0.0000 &0.0000 &0.0000\\ H-I &0.0689 &0.0125 &0.0627 &0.0002 &0.0000 &0.0000 &0.0000 &0.0000\\ \bottomrule \end{tabular}} \caption{Multiple comparison tests on RFI, DNE and ariaDNE ($\epsilon = 0.02, 0.04, 0.06, 0.08, 0.10, 0.12$) values of folivore (Fo), frugivore (Fr), hard-object feeding (H) and insectivore (I). The numbers in the table are $p$ values for the pairwise tests of the hypothesis that the corresponding mean difference is equal to 0. For $\epsilon = 0.08, 0.10, 0.12$, ariaDNE differentiated folivore, frugivore and hard-object feeding. None of the metrics differentiated insectivore from folivore. 
} \label{table:multiplecom} \end{table} \begin{figure} \begin{center} \includegraphics[width=12cm]{differentiation.jpg} \end{center} \caption{Box plots of RFI, DNE, and ariaDNE with $\epsilon = 0.02, 0.04, 0.06, 0.08, 0.1, 0.12$ for {\it Alouatta} (Al), {\it Ateles} (At), {\it Brachyteles} (B), {\it Callicebus} (C), {\it Chiropotes} (Ch), {\it Pithecia} (P), {\it Saimiri} (S). Color indicates dietary preference: green represents folivore, purple represents frugivore, red represents hard-object feeding and yellow represents insectivore.} \label{fig:differentiation} \end{figure} For each shape characterizer (RFI, DNE and ariaDNE), ANOVA rejects the hypothesis that all dietary groups have the same mean ($P < 0.05$), indicating that some dietary differentiation was detected. To determine which group means differ, we used multiple comparison tests; the results are summarized in Table~\ref{table:multiplecom}. RFI separated folivore from frugivore and hard-object feeding; DNE in addition separated hard-object feeding from insectivore. As the bandwidth parameter $\epsilon$ increases, ariaDNE further separated frugivore from hard-object feeding and insectivore. No metric separated folivore from insectivore. However, the similarity of their ariaDNE values is not surprising. Insect and leaf tissues tend to be high in structural carbohydrates, which sharpened dental blades are capable of shearing; teeth in both groups therefore have high ariaDNE values. More important here is the separation from teeth that have low cusps and wide basins, as these are used for crushing motions to efficiently break down soft (i.e., fruit) and hard objects. 
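The ANOVA-plus-multiple-comparisons workflow above can be sketched in Python with SciPy. The group values below are synthetic stand-ins, not the tooth measurements, and Bonferroni-corrected pairwise $t$-tests serve as a simple stand-in for the multiple comparison procedure:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic metric values for four diet groups (illustrative only)
groups = {
    "folivore":    rng.normal(1.8, 0.1, 20),
    "frugivore":   rng.normal(1.2, 0.1, 20),
    "hard-object": rng.normal(0.9, 0.1, 20),
    "insectivore": rng.normal(1.8, 0.1, 20),
}

# one-way ANOVA: are any of the group means different?
F, p_anova = stats.f_oneway(*groups.values())

# pairwise follow-up: Bonferroni-corrected two-sample t-tests
n_pairs = len(list(combinations(groups, 2)))
pairwise_p = {}
for a, b in combinations(groups, 2):
    t, p = stats.ttest_ind(groups[a], groups[b])
    pairwise_p[(a, b)] = min(1.0, p * n_pairs)
```

A small `p_anova` only says some group means differ; the pairwise table identifies which, exactly as in Table~\ref{table:multiplecom}.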
For $\epsilon$ = 0.08, 0.1 and 0.12, the box plots of ariaDNE (Fig.~\ref{fig:differentiation}) converge on a pattern in which the folivorous {\it Alouatta} and {\it Brachyteles} and the insectivorous {\it Saimiri} have higher values, reflecting sharper cusps, whereas the frugivorous {\it Ateles} and {\it Callicebus} have lower values and the hard-object feeding {\it Chiropotes} and {\it Pithecia} have the lowest values, reflecting low, blunt cusps. The separation was not as clear for RFI and DNE. For $\epsilon = 0.02$, ariaDNE shows a pattern similar to DNE. This suggests that when $\epsilon$ is small, both methods capture fine and/or local features on tooth models, and as $\epsilon$ becomes larger, ariaDNE starts capturing larger scale features, ignoring smaller scale features. Figure~\ref{fig:plate} demonstrates the feature scale of DNE and of ariaDNE with various $\epsilon$ values. The ariaDNE values for teeth from {\it Callicebus}, {\it Chiropotes} and {\it Pithecia}, which have less pointed cusps but exhibit more fine details on the basin, such as enamel crenulations in the pitheciines, start high when $\epsilon$ is small but drop at larger $\epsilon$. The pattern is more pronounced in {\it Pithecia} because their high-energy features, the enamel crenulations, are even smaller than those of {\it Callicebus} and so are erased more completely. It may be hard to assess what is lost or gained by erasing small-scale features. \citeauthor{berthaume2017extant} (\citeyear{berthaume2017extant}) emphasized the importance of small scale features in their analyses of dental topography of extant apes, which also exhibit crenulated enamel similar to the pitheciines. Additionally, erasing small scale features makes the lower second molars of {\it Pithecia} more similar to those of aye-ayes ({\it Daubentonia}). Previous studies have argued that the two species are analogous from an ecological point of view (\citeauthor{winchester2014dental}, \citeyear{winchester2014dental}). 
On the other hand, small scale features could reflect an important functional ability of {\it Pithecia} not available to {\it Daubentonia} (\citeauthor{ledogar2013diet}, \citeyear{ledogar2013diet}). In particular, these small scale features align {\it Pithecia} with {\it Callicebus}, which may be evidence of a close phylogenetic relationship between them, one that was debated prior to the availability of genetic data because of a dearth of obvious unique anatomical similarities. \section{Discussion} \subsection{Bandwidth and multi-scale quantifications} Even with a less sensitive implementation, ariaDNE still requires a choice of the bandwidth parameter $\epsilon$. We have discussed the origin and interpretation of $\epsilon$ and how it affects the values of ariaDNE in section~\ref{sec:ariadne}, and the resulting power to differentiate dietary preferences in section~\ref{sec:four}. To summarize: (1) for a given $\epsilon$, values of ariaDNE remain relatively unchanged, compared to DNE, when the input mesh is perturbed (Fig.~\ref{fig:teaser}). This suggests that $\epsilon$ is independent of mesh attributes like resolution/triangle count, mesh representation, noise level, smoothness, etc. (2) $\epsilon$ indicates the size of the local influence: the larger $\epsilon$, the more points on the mesh are considered important in quantifying the local energy of the query point, resulting in larger ariaDNE values. This means $\epsilon$ determines the scale of features to be included in geometric quantification. Small $\epsilon$ will make surfaces with finer features have higher ariaDNE values, and large $\epsilon$ will make surfaces with large scale features have higher ariaDNE values. Parameter tuning is often achieved through optimization based on a priori goals, yet a single choice of parameter may not satisfy all goals. 
For example, the parameter that maximizes the differentiation between species in different diet groups may not minimize the effect of wear or optimize the differentiation between species irrespective of diet. The requirement of choosing a uniform scale applies to quantitative methods generally, and perhaps this is their biggest weakness compared to qualitative descriptions from more traditional comparative morphology, where multiple scales of perception were naturally integrated. However, the freedom in choosing the parameter also offers the possibility of more informative comparisons, as seen in Fig.~\ref{fig:differentiation}. Future work should aim to characterize samples using values computed across a range of $\epsilon$ values. \subsection{Wider applicability of ariaDNE} Many other applications of ariaDNE beyond functional questions of teeth are possible (Fig.~\ref{fig:examples}). For instance, in bivalves, burrowing benthic forms should benefit from shells with greater rugosity (higher ariaDNE) to help them stay embedded in the sea floor, whereas more planktonic forms should benefit from smoother, more hydrodynamic shells (lower ariaDNE). AriaDNE could also be useful for looking at the shape of distal phalanges (the bones supporting the nail or claw), as claws suited for climbing are narrower and sharper (higher ariaDNE) while those suited for burrowing (or grasping) are broader and blunter (lower ariaDNE). In addition, comparing the distribution of ariaDNE values over surfaces will likely provide even more insight into ecologically meaningful shape variation. For example, two surfaces with the same total ariaDNE may have very different distributions: one may have greater spatial variance in ariaDNE, with high-ariaDNE features more clustered in one case than in another. AriaDNE opens the door to defining other shape metrics that could potentially assist our understanding of morphology, evolution and ecology. 
\begin{figure} \begin{center} \includegraphics[width = 7cm]{examples.png} \end{center} \caption{ariaDNE values for surfaces representing the astragalus (ankle bone) of {\it Oryctolagus} (saltatorial) and {\it Hemicentetes} (ambulatory), molars of {\it Mammut} (folivorous) and {\it Mammuthus} (grazing), and shells of {\it Lirobittium rugatum} and {\it Tornatellaria adelinae}. Surface shading indicates curvature computed by our algorithm; ariaDNE values are above each surface. In many cases we expect DNE to covary with the mechanical demands of a species' environment.} \label{fig:examples} \end{figure} \subsection{ariaDNE for previously published DNE analyses} The insensitivity of ariaDNE to varying mesh preparation protocols makes it more widely usable than traditional DNE for comparing and combining results from studies with varying samples or mesh preparation protocols. The computed ariaDNE values for previously published DNE studies (\citeauthor{boyer2008relief}, \citeyear{boyer2008relief}; \citeauthor{bunn2011comparing}, \citeyear{bunn2011comparing}; \citeauthor{winchester2014dental}, \citeyear{winchester2014dental}; \citeauthor{prufrock2016first}, \citeyear{prufrock2016first}; \citeauthor{pampush2016introducing} 2016a, b; \citeauthor{lopez2018dental}, \citeyear{lopez2018dental}) are now available to download as csv files from \url{https://sshanshans.github.io/articles/ariadne.html}. We will continue to update our website as we obtain access to more data samples. \subsection{Conclusion} We provided a robust implementation of DNE by utilizing weighted PCA. AriaDNE retains the advantages of DNE: compared to other popular shape characterizers like OPC and RFI, it is landmark-free and independent of the initial position, scale, and orientation of the mesh. In addition, ariaDNE is stable under a greater range of mesh preparation protocols than DNE. 
Specifically, our analyses indicated that the new implementation is insensitive to triangle count, mesh representation, and artifacts on meshes representing both a synthetic surface and real tooth data. Additionally, the effects of smoothing and boundary triangles on ariaDNE are limited. AriaDNE retains the potential of DNE for biological studies, illustrated here by its effective differentiation of platyrrhine primate species according to dietary preferences. While increasing the $\epsilon$ parameter of the method can erase small scale features and significantly affect how ariaDNE characterizes structures with small scale features compared to those with larger features (as it did with the {\it Chiropotes} and {\it Pithecia} primates in our sample), we think this property can be leveraged to provide more informative comparisons. Future work should aim to characterize samples using values computed across a range of $\epsilon$ values. In this type of analysis, parameters could be optimized according to model selection criteria. Finally, as with other topographic metrics, ariaDNE is likely most informative when deployed in combination with other shape metrics to achieve the goal of more accurately inferring morphological shape attributes. \section{Acknowledgements} DMB and JMW were supported in this work by NSF BCS 1552848 and DBI 1661386. ID, SZK and SS were supported in this work by Math + X Investigators Award 4000837. \section{Authors' Contributions} This research was led by SS. All authors contributed significantly to the manuscript and give final approval for publication. \section{Data Accessibility} \subsection{Sample locations} The platyrrhine sample we used in this paper was published by \citeauthor{winchester2014dental} (\citeyear{winchester2014dental}), and is available on {\it MorphoSource}, a project-based data archive for 3D morphological data: \url{https://www.morphosource.org/Detail/ProjectDetail/Show/project_id/89}. 
\subsection{MATLAB scripts} \label{sec:code} MATLAB scripts are available from the GitHub repository: \url{https://github.com/sshanshans/ariaDNE_code} and are archived with Zenodo DOI: \url{https://doi.org/10.5281/zenodo.1465949}. \section{References}
\section{Introduction} The N\'eron--Severi group $\NS X$ of a smooth proper variety $X$ over a field $k$ is the group of divisors modulo algebraic equivalence. If $k$ is algebraically closed, $\NS X$ also equals the group of connected components of the Picard scheme of $X$. N\'eron \cite[p. 145, Th\'eor\`eme 2]{Ner} and Severi \cite{Sev} proved that $\NS X$ is finitely generated. The aim of this paper is to give an explicit upper bound on the order of $(\NS X)_{\tor}$. To the best of the author's knowledge, this is the first explicit bound on the order of $(\NS X)_{\tor}$. \begin{restatable*}{thm}{NSbound}\label{thm:NS better bound} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety defined by homogeneous polynomials of degree $\leq d$. Then \begin{equation} \#(\NS X)_\tor \leq 2^{d^{r^2+2r\log_2 r}}. \end{equation} \end{restatable*} \noindent For any prime number $\ell \neq \charac k$, there is a natural isomorphism $(\NS X)[\ell^\infty] \simeq H^2_\et(X,\mathbb{Z}_\ell)_\tor$ \cite[2.2]{SZ}. Thus, \Cref{thm:NS better bound} implies the corollary below. \begin{restatable*}{cor}{SHbound}\label{thm:H2 bound} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety defined by homogeneous polynomials of degree $\leq d$. Let $\ell \neq \charac k$ be a prime number. Then \begin{equation*} \prod_{\substack{\ell \neq \charac k\\ \ell \textup{ is prime}}} \# H^2_\et(X,\mathbb{Z}_\ell)_\tor \leq 2^{d^{r^2+2r\log_2 r}}. \end{equation*} \end{restatable*} \noindent We also give a uniform upper bound on the number of generators of $(\NS X)[\ell^\infty]$ as described below. This bound and a sketch of its proof were suggested by J\'anos Koll\'ar after seeing an earlier draft of our paper containing only \Cref{thm:NS better bound}. \begin{restatable*}{thm}{NSgen}\label{thm:NSgen} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth connected projective variety of degree $d$ over $k$. 
Let $p=\charac k$, and \[N = \begin{cases} (\NS X)_\tor, & \mbox{if } p=0 \\ (\NS X)_\tor/(\NS X)[p^\infty], & \mbox{if } p>0. \end{cases}\] Then $N$ is generated by less than or equal to $(d-1)(d-2)$ elements. \end{restatable*} \begin{restatable*}{cor}{SHgen} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth connected projective variety of degree $d$ over $k$. Let $\ell \neq \charac k$ be a prime number. Then $H^2_\et(X,\mathbb{Z}_\ell)_\tor$ is generated by less than or equal to $(d-1)(d-2)$ elements. \end{restatable*} The torsion-free quotient $(\NS X)/(\NS X)_\tor$ is the group of divisors modulo numerical equivalence. Its rank is bounded by the second Betti number of $X$. Katz found an upper bound on the sum of all Betti numbers of $X$ \cite[Theorem 1]{Kat}; this gives a rough bound on the rank of $\NS X$. The torsion subgroup $(\NS X)_{\tor}$ is the group of numerically zero divisors modulo algebraic equivalence. This is a birational invariant \cite[p. 177]{Bal}. Recently, Poonen, Testa and van Luijk gave an algorithm to compute it.\footnote{They also found an algorithm computing the torsion free quotient assuming the Tate conjecture.} The algorithm is based on their theorem that $(\NS X)_{\tor}$ injects into the set of connected components of $\Hilb_Q X$ for some polynomial $Q$ \cite[Lemma 8.29]{PTL}. The order of $(\NS X)_{\tor}$ will be bounded by finding an explicit $Q$ and bounding the number of connected components of $\Hilb_Q X$. We will show that an upper bound on the number of connected components of $\EffDiv_n X$ for an integer $n$ depending on $Q$ also gives an upper bound on the order of $(\NS X)_{\tor}$. \Cref{sec:numerical} shows that $Q$ may be taken to be the Hilbert polynomial of $mH$, where $H$ is a hyperplane section of $X$ and $m$ is an explicit integer. \Cref{sec:hilbert} bounds the number of irreducible components of $\Hilb_Q X$ by using its embedding in a Grassmannian. 
\Cref{sec:chow} bounds the number of irreducible components of $\EffDiv_n X$ by using Koll\'ar's technique \cite[Exercise I.3.28]{Jan}. Finally, \Cref{sec:generators} gives a uniform upper bound on the number of generators of $(\NS X)[\ell^\infty]$. From now on, the base field $k$ is assumed to be algebraically closed, since a base change makes the N\'eron--Severi group only larger \cite[Proposition 6.1]{PTL}. However, no assumption is made on the characteristic of $k$. \section{Notation} Given a scheme $X$ over $k$, let $\conn(X)$ and $\irr(X)$ be the set of connected components of $X$ and the set of irreducible components of $X$, respectively. Let $X_\red$ be the reduced closed subscheme associated to $X$. If $X$ is smooth and proper, then $\NS X$ denotes the N\'eron--Severi group of $X$. Let $H^i_\et(X,\mathscr{F})$ be the $i$-th \'etale cohomology group of $X$ corresponding to an \'etale sheaf $\mathscr{F}$. If $k = \mathbb{C}$, then let $X^\an$ be the analytic space associated to $X$. If $M$ is a topological manifold, then $H_\sing^i(M,\mathbb{Z})$ (resp. $H_i^\sing(M,\mathbb{Z})$) denotes the $i$-th singular cohomology (resp. homology) group with integer coefficients. A projective variety is a closed subscheme of $\mathbb{P}^r=\Proj k[x_0,\cdots,x_r]$ for some $r$. Suppose that $X \hookrightarrow \mathbb{P}^r$ is a projective variety. Let $I_X \subset k[x_0,\cdots,x_r]$ be the saturated ideal defining $X$. Given $f_0,\cdots,f_{t-1} \in k[x_0,\cdots,x_r]$, let $\var[X]{f_0,\cdots,f_{t-1}}$ be the subscheme of $X$ defined by the ideal $(f_0,\cdots,f_{t-1})+I_X$. Let $\mathcal{O}_X$, $\mathscr{I}_X$, $\Omega_X$ and $\omega_X$ be the sheaf of regular functions, the ideal sheaf, the sheaf of differentials and the canonical sheaf of $X$, respectively. Given a coherent sheaf $\mathscr{F}$ on $X$, let $\HP_\mathscr{F}$ be the Hilbert polynomial of $\mathscr{F}$, and $\Gamma(\mathscr{F})$ be the space of global sections of $\mathscr{F}$.
Given a graded module $M$ over $k[x_0,\cdots,x_r]$, let $\HP_M$ be the Hilbert polynomial of $M$, and $M_t$ be the degree $t$ part of $M$. Take an effective divisor $D$ on $X$. Let $\HP_D$ be the Hilbert polynomial of $D$ as a subscheme of $\mathbb{P}^r$. Then $\mathcal{O}(-D)\subset \mathcal{O}_X$ is the ideal sheaf corresponding to $D$, and \[ \HP_D = \HP_{\mathcal{O}_X} - \HP_{\mathcal{O}(-D)}.\] Let $\Hilb X$ be the Hilbert scheme of $X$. Given a polynomial $Q(t)$, let $\Hilb_Q X$ be the Hilbert scheme of $X$ parametrizing closed subschemes of $X$ with Hilbert polynomial $Q$. Let $\Chow_{\delta,n} X$ be the Chow variety of dimension $\delta$ and degree $n$ algebraic cycles on $X$. Let $\EffDiv X$ be the scheme parametrizing the effective Cartier divisors on $X$, and let $\EffDiv_n X$ be the open and closed subscheme of $\EffDiv X$ corresponding to the divisors of degree $n$. Let $\Pic X$ be the Picard scheme of $X$. Let $\Alb X$ be the Albanese variety of $X$. Given a vector space $V$ and a nonnegative integer $t$, let $\Gr(t,V)$ be the Grassmannian parametrizing $t$-dimensional subspaces of $V$. Let $\Gr(t,n) = \Gr(t,k^n)$. Given a set $S$, let $\# S$ be the number of elements in $S$. Given a group $A$, let $A^\ab$ be the abelianization of $A$, and $A^{(\ell)} = \varprojlim A^\ab/\ell^n A^\ab$ be the maximal pro-$\ell$ abelian quotient of $A$. If $A$ is abelian, let $A_\tor$, $A[n]$ and $A[\ell^\infty]$ be the set of torsion elements, $n$-torsion elements and $\ell$-power torsion elements, respectively. If $A$ is a finite abelian group, then $A^*$ is the Pontryagin dual of $A$. \section{Numerical Conditions}\label{sec:numerical} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety defined by polynomials of degree $\leq d$. Let $K$ and $H$ be a canonical divisor and a hyperplane section of $X$, respectively. The goal of this section is to give an explicit $m$ such that \begin{equation}\label{eqn:sec2goal} \# \conn(\Hilb_{\HP_{mH}}X) \geq \# (\NS X)_\tor.
\end{equation} Poonen, Testa and van Luijk proved that the order of the torsion subgroup of a N\'eron--Severi group is bounded by the number of connected components of a certain Hilbert scheme. The theorem and the proof below are a reformulation of their work in \cite[Section 8.4]{PTL}. \begin{thm}[Poonen, Testa and van Luijk]\label{thm:bound by Hilb} Let $F$ be a divisor on $X$ and $Q = \HP_{\mathcal{O}_X} - \HP_{\mathcal{O}(-F)}$. If $\mathcal{O}_X(F+D)$ has a global section for every numerically zero divisor $D$, then \[ \# \conn(\Hilb_Q X \cap \EffDiv X) \geq \# (\NS X)_\tor.\] \end{thm} \begin{proof} Recall that $\EffDiv X$ is an open and closed subscheme of $\Hilb X$ \cite[Exercise I.3.28]{Jan}, and there is a natural proper morphism $\pi: \EffDiv X \rightarrow \Pic X$ sending a divisor to the corresponding class \cite[p. 214]{BLR}. Let $\Pic^c X$ be the finite union of connected components of $\Pic X$ parametrizing the divisors numerically equivalent to $F$. Since the Hilbert polynomial of a divisor is a numerical invariant, $Q$ is the Hilbert polynomial of each divisor corresponding to a closed point of $\pi^{-1}(\Pic^c X)$. Thus, $\pi^{-1}(\Pic^c X) \subset \Hilb_Q X$. Since $\Pic^c X$ is open and closed in $\Pic X$, and $\EffDiv X$ is open and closed in $\Hilb X$, the scheme $\pi^{-1}(\Pic^c X)$ is open and closed in $\Hilb_Q X \cap \EffDiv X$. Thus, \begin{align*} \# \conn(\Hilb_Q X \cap \EffDiv X) &\geq \# \conn(\pi^{-1}(\Pic^c X)). \end{align*} Since $F+D$ is linearly equivalent to an effective divisor for every numerically zero divisor $D$, the morphism $\pi$ restricts to a surjection $\pi^{-1}(\Pic^c X) \rightarrow \Pic^c X$. Hence, \begin{align*} \# \conn(\pi^{-1}(\Pic^c X)) &\geq \# \conn(\Pic^c X) \\ &= \# (\NS X)_\tor. \qedhere \end{align*} \end{proof} The authors of \cite{PTL} chose $F = K + (\dim X + 2)H$ because of the following partial result towards Fujita's conjecture. \begin{thm}[Keeler {\cite[Theorem 1.1]{Kee}}]\label{thm:Fujita} Let $L$ be an ample divisor on $X$.
Then \begin{enumerate}[label=(\alph*)] \item $\mathcal{O}_X(K+(\dim X)H+L)$ is generated by global sections, and \item $\mathcal{O}_X(K+(\dim X + 1)H+L)$ is very ample. \end{enumerate} \end{thm} However, computing the Hilbert polynomial of $\mathcal{O}(-K)$ is somewhat difficult. Therefore, we will show that $F = ((d-1)\cdot\codim X)H$ is another valid choice. \begin{lem}\label{lem:section of det} Let $Y$ be a nonempty smooth closed subscheme of an affine space $\mathbb{A}^r = \Spec k[x_0,\cdots,x_{r-1}]$. Suppose that the ideal $I$ defining $Y$ is generated by polynomials of degree $\leq d$. Let $c = \codim Y$. Then there are polynomials $f_0, \cdots, f_{c-1} \in I$ such that \begin{enumerate}[label=(\alph*)] \item $\deg f_i = d$ for all $i$, and \item $f_0\wedge\cdots\wedge f_{c-1}$ represents a nonzero element of $\bigwedge^c(I/I^2)$. \end{enumerate} \end{lem} \begin{proof} Let $R = k[x_0,\cdots,x_{r-1}]/I$. Then $I/I^2$ is a locally free $R$-module of rank $\codim Y$ by \cite[Theorem 8.17]{Har}. Take any prime ideal $\mathfrak{p} \subset R$. Then $(I/I^2)_\mathfrak{p}$ is a free $R_\mathfrak{p}$-module. Thus, there are $p_0,\cdots,p_{c-1} \in I$ and $q_0,\cdots,q_{c-1} \in R \setminus \mathfrak{p}$ such that \[\frac{p_0}{q_0} \wedge \frac{p_1}{q_1} \wedge \cdots \wedge \frac{p_{c-1}}{q_{c-1}} \] represents a nonzero element of $\bigwedge^c(I/I^2)_\mathfrak{p}$. Hence, \begin{equation}\label{eqn:wedge} p_0 \wedge p_1 \wedge \cdots \wedge p_{c-1} \end{equation} represents a nonzero element of $\bigwedge^c(I/I^2)$. Let $g_0,\cdots,g_{b-1}$ be polynomials of degree $\leq d$ which generate $I$. Then each $p_i$ can be written as an $R$-linear combination of the $g_j$. If we expand (\ref{eqn:wedge}), at least one term must be nonzero in $\bigwedge^c(I/I^2)$. Therefore, we may assume that \[g_0 \wedge g_1 \wedge \cdots \wedge g_{c-1} \] represents a nonzero element of $\bigwedge^c(I/I^2)$.
Let $\ell \not\in I$ be a polynomial of degree 1, and let $f_i = \ell^{d - \deg g_i} g_i$. Then \[f_0 \wedge f_1 \wedge \cdots \wedge f_{c-1} \] represents a nonzero element of $\bigwedge^c(I/I^2)$ and $\deg f_i = d$ for every $i$. \end{proof} \begin{lem}\label{lem:section anticanonical} The sheaf $\mathcal{O}_X\left(-K+(d \cdot \codim X - r - 1)H\right)$ has a global section. \end{lem} \begin{proof} Let $c = \codim X$. Since $X$ is smooth, there is an exact sequence \[ 0 \rightarrow \mathscr{I}_X/\mathscr{I}_X^2 \rightarrow \Omega_{\mathbb{P}^r} \otimes \mathcal{O}_X \rightarrow \Omega_X \rightarrow 0, \] and $\mathscr{I}_X/\mathscr{I}_X^2$ is locally free of rank $c$ \cite[Theorem 8.17]{Har}. Taking the highest exterior power gives \begin{align*} \omega_{\mathbb{P}^r}|_X &\simeq \bigwedge\nolimits^c \left(\mathscr{I}_X/\mathscr{I}_X^2\right) \otimes \omega_X\\ \omega_X^{-1}(-r-1) &\simeq \bigwedge\nolimits^c \left(\mathscr{I}_X/\mathscr{I}_X^2\right). \end{align*} Let $U_i\subset X$ be the affine open set given by $x_i \neq 0$. Then there exist polynomials $f_0,\cdots,f_{c-1}$ of degree $d$ such that \[ f_0(x_0/x_i,\cdots,x_r/x_i) \wedge \cdots \wedge f_{c-1}(x_0/x_i,\cdots,x_r/x_i) \] represents a nonzero section of $\bigwedge^c(\mathscr{I}_X/\mathscr{I}_X^2)|_{U_i}$ by \Cref{lem:section of det}. Take another $U_j \subset X$ given by $x_j \neq 0$. Then the two sections \[x_i^d f_0(x_0/x_i,\cdots,x_r/x_i) \wedge \cdots \wedge x_i^d f_{c-1}(x_0/x_i,\cdots,x_r/x_i)\] and \[x_j^d f_0(x_0/x_j,\cdots,x_r/x_j) \wedge \cdots \wedge x_j^d f_{c-1}(x_0/x_j,\cdots,x_r/x_j)\] have the same restriction in $\bigwedge^c(\mathscr{I}_X/\mathscr{I}_X^2(d))|_{U_i \cap U_j}$. Because $i$ and $j$ are arbitrary, the sections above glue to a global section of \[ \omega_X^{-1}(d\cdot\codim X-r-1) \simeq \bigwedge\nolimits^c \left(\mathscr{I}_X/\mathscr{I}_X^2(d)\right).
\qedhere \] \end{proof} \begin{lem}\label{lem:numerical condition} Let $D$ be a divisor on $X$ numerically equivalent to 0.\footnote{The condition `numerically equivalent to 0' can be replaced by `numerically effective' due to Kleiman's criterion of ampleness \cite[Chapter IV \textsection 2 Theorem 2]{Kle}.} Then \begin{enumerate}[label=(\alph*)] \item $\mathcal{O}_X(D+((d-1)\codim X)H)$ is generated by global sections, and \item $\mathcal{O}_X(D+((d-1)\codim X+1)H)$ is very ample. \end{enumerate} \end{lem} \begin{proof} The divisor $D+H$ is ample, since ampleness is a numerical property. Then $K+(\dim X)H + (D+H)$ is generated by global sections by \Cref{thm:Fujita}. Thus, \Cref{lem:section anticanonical} implies that \begin{align*} &(K+(\dim X)H + (D+H)) + (-K+(d \cdot \codim X - r - 1)H) \\ =\, &D+((d-1)\codim X)H \end{align*} is generated by global sections. Similarly, $D+((d-1)\codim X+1)H$ is very ample. \end{proof} \begin{thm}\label{cor:bound by hilb} Let $m = (d-1) \codim X$. Then \[\# \conn(\Hilb_{\HP_{m H}} X) \geq \# (\NS X)_\tor.\] \end{thm} \begin{proof} By \Cref{lem:numerical condition}(a), we may apply \Cref{thm:bound by Hilb} to $F=mH$. \end{proof} \section{Irreducible Components of Hilbert Schemes}\label{sec:hilbert} The aim of this section is to give an explicit upper bound on $\#(\NS X)_{\tor}$ for a smooth projective variety $X$. \Cref{cor:bound by hilb} implies that it suffices to give an upper bound on the number of connected components of some Hilbert scheme. Recall the definitions of Castelnuovo--Mumford regularity and Gotzmann numbers. \begin{defi} A coherent sheaf $\mathscr{F}$ over $\mathbb{P}^r$ is $m$-regular if and only if \[ H^i\left(\mathbb{P}^r, \mathscr{F}(m-i)\right) = 0 \] for every integer $i > 0$. The smallest such $m$ is called the Castelnuovo--Mumford regularity of $\mathscr{F}$. \end{defi} \begin{defi} Let $P$ be the Hilbert polynomial of some ideal $I \subset k[x_0,\cdots,x_r]$.
The Gotzmann number $\varphi(P)$ of $P$ is defined as \begin{align*} \varphi(P) = \inf \{ m \,|\,& \mathscr{I}_Z \text{ is $m$-regular for every} \\ &\text{closed subscheme $Z \subset \mathbb{P}^r$ with Hilbert polynomial $P$} \}. \end{align*} \end{defi} The Hilbert scheme can be explicitly described as a closed subscheme of a Grassmannian, by a theorem of Gotzmann \cite{Got}. \begin{thm}[{Gotzmann}]\label{thm:explicit construction of Hilb P} Let $P$ be the Hilbert polynomial of some ideal $I \subset k[x_0,\cdots,x_r]$. Assume that $t \geq \varphi(P)$. Then \begin{align*} \imath_t : \Hilb_{\monom{t}{r}-P(t)} \mathbb{P}^r &\rightarrow \Gr(P(t),k[x_0,\cdots,x_r]_t) \\ [Y] &\mapsto \Gamma(\mathscr{I}_Y(t)) \end{align*} gives a well-defined closed immersion. Moreover, the image is the collection of linear spaces $T \subset k[x_0,\cdots,x_r]_t$ such that \begin{enumerate}[label=(\alph*)] \item $\dim \left(x_0T+\cdots+x_rT\right) \leq P(t+1)$. \end{enumerate} \end{thm} \begin{proof} See \cite[Section 3]{Got}. \end{proof} Let $X \hookrightarrow \mathbb{P}^r$ be a projective variety and $Q$ be a polynomial. Then there is a natural closed embedding \[ \Hilb_Q X \hookrightarrow \Hilb_Q \mathbb{P}^r. \] \begin{thm}\label{thm:explicit construction} Use the notation in \Cref{thm:explicit construction of Hilb P}. Let $X \hookrightarrow \mathbb{P}^r$ be a projective variety defined by polynomials of degree $\leq d$. Assume that $t \geq \max \{\varphi(P),d\}$. Then the image of \[\Hilb_{\monom{t}{r}-P(t)} X \] under $\imath_t$ is the collection of linear spaces $T \subset k[x_0,\cdots,x_r]_t$ such that \begin{enumerate}[label=(\alph*)] \item $\dim \left(x_0T+\cdots+x_rT\right) \leq P(t+1)$ and \item $\Gamma(\mathscr{I}_X(t)) \subset T$. \end{enumerate} \end{thm} \begin{proof} See the proof of \cite[Lemma 8.23]{PTL}. \end{proof} Therefore, an upper bound on Gotzmann numbers will give an explicit construction of a Hilbert scheme. Such a bound is given by Hoa {\cite[Theorem 6.4(i)]{Hoa}}.
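Before stating Hoa's bound, condition (a) of \Cref{thm:explicit construction of Hilb P} can be seen concretely in the simplest nontrivial case. The following Python sketch (a toy verification, not part of the argument; the helper functions are ad hoc) takes $Z$ to be the point $[0:0:1]$ in $\mathbb{P}^2$, so that $T = \Gamma(\mathscr{I}_Z(t))$ is spanned by every degree-$t$ monomial except $x_2^t$, and checks that $\dim(x_0T + x_1T + x_2T) \leq P(t+1)$ for the Hilbert polynomial $P(s) = \binom{s+2}{2} - 1$ of the ideal $(x_0, x_1)$:

```python
from math import comb

def monomials(t, nvars=3):
    """All exponent vectors of total degree t in nvars variables."""
    result = []
    def rec(prefix, remaining, left):
        if left == 1:
            result.append(tuple(prefix + [remaining]))
            return
        for e in range(remaining + 1):
            rec(prefix + [e], remaining - e, left - 1)
    rec([], t, nvars)
    return result

def multiply(span, var):
    """Multiply each monomial (exponent vector) in `span` by x_var."""
    return {tuple(e + (1 if i == var else 0) for i, e in enumerate(m))
            for m in span}

# T = Gamma(I_Z(t)) for Z = [0:0:1] in P^2: all degree-t monomials but x2^t.
r, t = 2, 4
T = {m for m in monomials(t) if m != (0, 0, t)}
P = lambda s: comb(s + r, r) - 1   # Hilbert polynomial of the ideal (x0, x1)
assert len(T) == P(t)

# Gotzmann's condition (a): dim(x0*T + x1*T + x2*T) <= P(t+1).
xT = set().union(*(multiply(T, i) for i in range(3)))
print(len(xT), P(t + 1))           # prints 20 20
assert len(xT) <= P(t + 1)
```

In this example the growth is exactly $P(t+1)$, as it must be for a point of the image of $\imath_t$.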
\begin{thm}[Hoa]\label{thm:Gotzmann bound} Let $I\subset k[x_0,\cdots,x_r]$ be a nonzero ideal generated by homogeneous polynomials of degree at most $d \geq 2$. Let $a$ be the Krull dimension of $k[x_0,\cdots,x_r]/I$. Then \[ \varphi(\HP_I) \leq \left( \frac{3}{2} d^{r+1-a} + d \right)^{a 2^{a-1}}. \] \end{thm} Once a Hilbert scheme is explicitly constructed, we can bound the number of irreducible components by the lemma below. \begin{lem}\label{cor:conn} If $X \hookrightarrow \mathbb{A}^r$ is an affine scheme defined by polynomials of degree $\leq d$, then \[\# \irr(X) \leq d^r. \] \end{lem} \begin{proof} This is a special case of the Andreotti--B\'ezout inequality \cite[Lemma 1.28]{Cat}. \end{proof} The Grassmannian $\Gr(q,n)$ is covered by open sets isomorphic to $\mathbb{A}^{q(n-q)}$. The conditions in \Cref{thm:explicit construction} can be translated into explicit equations in such an affine space, giving the lemma below. \begin{lem}\label{lem:counting components} Let $V$ and $W$ be vector spaces. Let $n = \dim V$. Let $\varphi_i: V \rightarrow W$ be a linear map for each $i = 0,\cdots,r$. Let $U \subset V$ be a subspace. Let $\mathbf{X}$ be the collection of $T \in \Gr(q,V)$ satisfying \begin{enumerate}[label=(\alph*)] \item $\dim (\varphi_0(T)+\cdots+\varphi_r(T)) \leq p$, and \item $U \subset T$. \end{enumerate} Then $\mathbf{X}$ is a closed subscheme of $\Gr(q,V)$, and \[ \#\irr(\mathbf{X}) \leq \max\{p+1,q+1\}^{q(n-q)}.\] \end{lem} \begin{proof} Given a subspace $S \subset V$ of dimension $n-q$, there is an open set \[ U_S = \{T\in\Gr(q,V) \,|\, S \cap T = \{0\} \}. \] Choose a basis of $V$ such that $S$ is spanned by the first $n-q$ basis vectors. Then every $T \in U_S$ is uniquely represented as the column space of a block matrix \[ N_T = \begin{pmatrix} X_{n-q,q} \\ \hline I_q \end{pmatrix},\] where $X_{n-q,q}$ is an $(n-q)\times q$ matrix, and $I_q$ is the $q\times q$ identity matrix.
The entries of $X_{n-q,q}$ can be regarded as indeterminates, giving an isomorphism $U_S \simeq \mathbb{A}^{(n-q)q}$. Let $\varphi_i^{\oplus q}(N_T)$ be the $(\dim W) \times q$ matrix obtained by applying $\varphi_i$ to every column of $N_T$. Then \[\dim \left( \varphi_0(T)+\cdots +\varphi_r(T)\right) \leq p\] if and only if every $(p+1)\times(p+1)$ minor of the block matrix \[\left( \varphi_0^{\oplus q}\begin{pmatrix} X_{n-q,q} \\ I_q \end{pmatrix} \rvline{\arraycolsep} \varphi_1^{\oplus q}\begin{pmatrix} X_{n-q,q} \\ I_q \end{pmatrix} \rvline{\arraycolsep} \cdots \rvline{\arraycolsep} \varphi_r^{\oplus q}\begin{pmatrix} X_{n-q,q} \\ I_q \end{pmatrix} \right)\] is zero. Let $M_U$ be a matrix whose columns form a basis of $U$. Then $U \subset T$ if and only if every $(q+1)\times(q+1)$ minor of \[\begin{pmatrix} \begin{matrix}X_{n-q,q} \\ I_q \end{matrix} & \rvline{-\arraycolsep} & M_U \end{pmatrix}\] is zero. Thus, $\mathbf{X} \cap U_S$ is defined by the minors in $U_S$, meaning that $\mathbf{X}$ is a closed subscheme of $\Gr(q,V)$. Now, take one point from each irreducible component of $\mathbf{X}$, and let them be $T_0,\cdots,T_{\ell-1}\subset V$. Then an $(n-q)$-dimensional subspace $S \subset V$ can be chosen such that \[ S \cap \left(\bigcup_{i=0}^{\ell-1} T_i\right) = \{0\},\] implying that every irreducible component intersects $U_S$. Since $\mathbf{X} \cap U_S$ is defined by polynomials of degree $\leq \max\{p+1,q+1\}$, \Cref{cor:conn} implies \begin{equation*} \#\irr(\mathbf{X}) = \#\irr(\mathbf{X}\cap U_S) \leq \max\{p+1,q+1\}^{q(n-q)}. \qedhere \end{equation*} \end{proof} \begin{lem}\label{lem:conn bound} Let $X\hookrightarrow\mathbb{P}^r$ be a projective variety defined by polynomials of degree $\leq d$. Let $P$ be the Hilbert polynomial of an ideal. If $t \geq \max\{\varphi(P),d,8r\}$ and $r \geq 2$, then \[ \# \irr\left(\Hilb_{\monom{t}{r} - P(t)} X\right) \leq t^{r t^{2r}}.
\] \end{lem} \begin{proof} \Cref{thm:explicit construction} and \Cref{lem:counting components} imply that \begin{align*} \# \irr\left(\Hilb_{\monom{t}{r} - P(t)} X\right) &\leq \max\left\{ P(t+1)+1, P(t)+1 \right\}^{P(t)\left(\monom{t}{r}-P(t)\right)}\\ &\leq \left( \monom{t+1}{r} + 1 \right)^{\monom{t}{r}^2}. \end{align*} Since $r\geq 2$ and $t \geq 8r$, \begin{align*} \monom{t+1}{r} &= \frac{(t+r+1)\cdots(t+2)}{r!} \\ &\leq \frac{(t+r+1)^r}{2^{r-1}} \\ &\leq \left(\frac{10}{8}t\right)^r \frac{1}{2^{r-1}} \\ &< t^r. \end{align*} Therefore, \[ \#\irr\left(\Hilb_{\monom{t}{r}-P(t)} X\right) \leq \left(t^r\right)^{\left({t^r}\right)^2} = t^{rt^{2r}}. \qedhere\] \end{proof} Now, we are ready to give an upper bound: \begin{thm}\label{thm:NS bound} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety defined by homogeneous polynomials of degree $\leq d$. Then \begin{equation} \# (\NS X)_{\tor} \leq 2^{d^{2^{r+3 \log_2 r}}}. \end{equation} \end{thm} \begin{proof} If $X$ is a curve or a projective space, then $(\NS X)_\tor = 0$. Thus, we may assume that $r \geq 3$, $d \geq 2$ and $\codim X \geq 1$. Moreover, $X$ may be assumed not to be contained in any hyperplane. Let $I$ be the ideal defining $X$. Let $H$ be the hyperplane section of $X$ cut out by $x_r = 0$. Let $m = (d-1)\codim X \geq 1$. Then $I+(x_r^m)$ is the ideal defining $mH$. Let $t = (2rd)^{(r+1)2^{r-2}}$ and $a = \dim X$. Then $t \geq d$ and $t \geq 8r$. Moreover, \begin{align*} t &= \left( \frac{3}{2} (rd)^{r+1-a} + \frac{1}{2} (rd)^{r+1-a} \right)^{a 2^{r-2}} \\ &\geq \left( \frac{3}{2} (rd)^{r+1-a} + rd \right)^{a 2^{a-1}} \\ &\geq \varphi(\HP_{mH}) \text{ (by \Cref{thm:Gotzmann bound})}. \end{align*} Thus, \Cref{cor:bound by hilb} and \Cref{lem:conn bound} imply that \[ \# (\NS X)_\tor \leq \#\conn(\Hilb_{\HP_{m H}} X) \leq \#\irr(\Hilb_{\HP_{m H}} X) \leq t^{rt^{2r}}.
\] Notice that \begin{align*} \log_2 \log_d \log_2 \left( t^{rt^{2r}} \right) &= \log_2 \log_d \left( t^{2r} r \log_2t \right) \\ &\leq \log_2 \log_d \left( t^{2r} 2^{r-1} (r+1)r^2 d \right) \\ &\phantom{\leq} \text{ (since $\log_2 (2rd) \leq 2rd$)}\\ &\leq \log_2 \left( 2r \log_dt + r + \log_2 (r+1)r^2 \right) \\ &\phantom{\leq} \text{ (since $d \geq 2$)}\\ &\leq \log_2 \left( 2^{r-1}r(r+1)(\log_2 r + 2) + r + \log_2 (r+1)r^2 \right) \\ &\leq r + 3\log_2 r. \end{align*} As a result, \begin{equation*} \# (\NS X)_\tor \leq 2^{d^{2^{r+3 \log_2 r}}}. \qedhere \end{equation*} \end{proof} \section{Irreducible Components of Chow Varieties}\label{sec:chow} In this section, $X \hookrightarrow \mathbb{P}^r$ is a smooth projective variety defined by homogeneous polynomials of degree $\leq d$. The goal of this section is to give a better upper bound on the order of $(\NS X)_{\tor}$. J\'anos Koll\'ar pointed out that the bound may also be derived from an upper bound on the number of connected components of $\Chow_{\delta,n} X$. Our goal only requires a bound for $\EffDiv_n X$ instead of Chow varieties of arbitrary dimensions. \begin{thm}\label{thm:bound by EffDiv} Let $n = (d-1) \codim X \cdot \deg X$. Then \[\# \conn\left(\EffDiv_n X \right) \geq \# (\NS X)_\tor.\] \end{thm} \begin{proof} Let $H \subset X$ be a hyperplane section, $m = (d-1) \codim X$, and $Q = \HP_{mH}$. If $D$ is a closed subscheme with Hilbert polynomial $Q$, then $\deg D = \deg mH = n$. Hence, \[ \Hilb_Q X \cap \EffDiv X \subset \EffDiv_n X. \] Since $\Hilb_Q X$, $\EffDiv X$ and $\EffDiv_n X$ are open and closed in $\Hilb X$, \Cref{lem:numerical condition} and \Cref{thm:bound by Hilb} imply that \begin{align*} \#\conn(\EffDiv_n X) &\geq \#\conn(\Hilb_Q X \cap \EffDiv X) \\ &\geq \# (\NS X)_\tor. \qedhere \end{align*} \end{proof} In \cite[Exercise I.3.28]{Jan}, Koll\'ar gives an explicit upper bound on $\# \irr (\Chow_{\delta,n} \mathbb{P}^r)$ and an outline of the proof.
Moreover, \cite[Exercise I.3.28.13]{Jan} suggests the exercise of finding an explicit upper bound on $\# \irr (\Chow_{\delta,n} X)$, and the proof can be found in \cite[Section 2]{Gue}. However, the proof works only if $\charac k = 0$, and it does not give a bound in a closed form. Therefore, we will give another complete proof, but most of the proof up to \Cref{lem:IrrDiv bound} is just a modification of Koll\'ar's technique and \cite[Section 2]{Gue}. Moreover, we will only bound $\#\irr(\EffDiv_n X)$, because this restriction avoids the bad behavior of Chow varieties in positive characteristic, simplifies the proof and slightly improves the result. \begin{lem}\label{lem:generic sec} Let $D \subset X$ be a nonzero effective divisor on $X$ of degree $n$. Then there are $f$ and $g$ in $\Gamma(X,\mathscr{I}_D(n))$ such that $D$ is the largest effective divisor contained in $V_X(f,g)$. \end{lem} \begin{proof} Take any point $x \in X$ and a generic linear projection $\rho_0\colon\mathbb{P}^r \dashrightarrow \mathbb{P}^{\dim X}$. Then $\rho_0(D)$ is a hypersurface and $\rho_0|_X$ is \'etale at $x$. Let $f_0$ be a homogeneous polynomial of degree $n$ defining $\rho_0(D)$, and $f \in \Gamma(X,\mathscr{I}_D(n))$ be its pullback. Then \[ \var[X]{f} = D \cup E_0 \cup E_1 \cup \cdots \cup E_{t-1} \] for some irreducible closed subschemes $E_i \subset X$ not set-theoretically contained in $D$. Take $e_i \in E_i \setminus D$ for each $i$, and let $\rho_1\colon\mathbb{P}^r \dashrightarrow \mathbb{P}^{\dim X}$ be another generic linear projection. Then $e_i \not\in \rho_1(D)$ for every $i$. Let $g_0$ be a homogeneous polynomial of degree $n$ defining $\rho_1(D)$, and $g \in \Gamma(X,\mathscr{I}_D(n))$ be its pullback. Then $\var[X]{f,g}$ contains $D$ but not $E_i$ for all $i$. Thus, $D$ is the largest effective divisor contained in $\var[X]{f,g}$. \end{proof} \begin{defi}\label{def:S and T} Let \[ N_n = {n + r \choose n} -1.
\] Then $\mathbb{P}^{N_n}$ parametrizes nonzero homogeneous polynomials of degree $n$ in $r+1$ variables up to constant factors. Let \begin{align*} S_n &= \left\{ (f,g) \in \left(\mathbb{P}^{N_n}\right)^2 \ \middle|\ \codim_X (\var[X]{f,g}) \leq 1 \right\} \text{ and}\\ T_n &= \left\{ ((f,g),[D]) \in S_n \times \EffDiv X \mid D \subset \var[X]{f,g} \right\}. \end{align*} Let $p\colon T_n \rightarrow S_n$ and $q \colon T_n \rightarrow \EffDiv X$ be the natural projections. \end{defi} \begin{lem}\label{lem:irreducible component biprojective} Let $X \hookrightarrow {\left(\mathbb{P}^r\right)}^2$ be a closed subscheme defined by bihomogeneous polynomials of total degree $\leq d$. Then \[ \#\irr(X) \leq d^{2r}. \] \end{lem} \begin{proof} Let $L_0$ and $L_1$ be generic hyperplanes of $\mathbb{P}^r$. Then $L_0\times\mathbb{P}^r$ and $\mathbb{P}^r\times L_1$ do not contain any irreducible component of $X$. Let \[ U = \left(\mathbb{P}^r \setminus L_0\right)\times\left(\mathbb{P}^r \setminus L_1\right) \simeq \mathbb{A}^{2r}.\] Then \[\#\irr(X) = \#\irr\left(X \cap U\right),\] and $X \cap U \hookrightarrow \mathbb{A}^{2r}$ is defined by polynomials of degree $\leq d$. Consequently, \Cref{cor:conn} proves the inequality. \end{proof} \begin{lem}\label{lem:bound Sn} Let $n$ be a positive integer. Then $S_n$ is closed in $\left(\mathbb{P}^{N_n}\right)^2$ and \[ \# \irr (S_n) \leq {2\max\{n,d\}+(r-1)d \choose r}^{2{n + r \choose r}-2}. \] \end{lem} \begin{proof} Take $(f,g) \in (\mathbb{P}^{N_n})^2$ as in \Cref{def:S and T}. Then $\codim_X(\var[X]{f,g}) = 1$ if and only if the intersection of $\var[X]{f,g}$ with $t = \dim X - 1$ generic hyperplane sections is nonempty. Let $h_i = \sum_{j=0}^{r} \xi_{i,j} x_j$ be a generic hyperplane for each $i$, where the $\xi_{i,j}$ are indeterminates. Take a base extension to $k(\{\xi_{i,j}\}_{i,j})$.
Then \begin{align*} \phantom{\Longleftrightarrow}&\ \codim_X\left(\var[X]{f,g}\right) = 1 \\ \Longleftrightarrow&\ \var[X]{f,g,h_0,\cdots,h_{t-1}} \neq \emptyset \\ \Longleftrightarrow&\ (x_0,\cdots,x_r)^{2\max\{n,d\} + (r-1)d- r} \not\subset (f,g,h_0,\cdots,h_{t-1}) + I_X\\ \phantom{\Longleftrightarrow}&\ \text{(by \cite[Corollary I.7.4.4.3]{Jan})}\\ \Longleftrightarrow&\ \rank\left(\left(\left(f,g,h_0,\cdots,h_{t-1}\right) + I_X\right)_{2\max\{n,d\} + (r-1)d- r}\right) < {2\max\{n,d\}+(r-1)d \choose r}. \end{align*} The last condition can be translated into bihomogeneous polynomials in the coefficients of $f$ and $g$ of total degree ${2\max\{n,d\}+(r-1)d \choose r}$. \Cref{lem:irreducible component biprojective} proves the inequality. \end{proof} \begin{lem} The set $T_n$ is a closed subset of $S_n \times \EffDiv X$. \end{lem} \begin{proof} Since $\EffDiv X$ is open and closed in $\Hilb X$, it suffices to show that \[ T_n^Q = \left\{ ((f,g),[Z]) \in S_n \times \Hilb_Q X \mid Z \subset \var[X]{f,g} \right\} \] is closed in $S_n \times \Hilb_Q X$ for every Hilbert polynomial $Q$. Recall that \Cref{thm:explicit construction of Hilb P} gives a closed embedding \begin{align*} \imath_t\colon\Hilb_Q X &\rightarrow \Gr(P(t),k[x_0,\cdots,x_r]_t) \\ [Z] &\mapsto \Gamma(\mathscr{I}_Z(t)) \end{align*} for some polynomial $P$ and every large $t$. This gives a closed embedding \[ (S_n\times\Hilb_Q X) \hookrightarrow \left(\left(\mathbb{P}^{N_n}\right)^2\times\Gr(P(t),k[x_0,\cdots,x_r]_t)\right). \] We may assume that $t\geq n$. Notice that $Z \subset \var[X]{f,g}$ if and only if the saturation of $(f,g)$ is contained in the saturated ideal defining $Z$. Thus, \begin{align*} Z \subset \var[X]{f,g} &\Leftrightarrow (f,g)_t \subset \Gamma(\mathscr{I}_Z(t)) \\ &\Leftrightarrow \dim\left( \Gamma(\mathscr{I}_Z(t)) + (f,g)_t\right) \leq P(t). \end{align*} Note that $\left(\mathbb{P}^{N_n}\right)^2\times\Gr(P(t),k[x_0,\cdots,x_r]_t)$ is covered by the standard affine open spaces.
In such affine open spaces, the last condition is expressed as the vanishing of $(P(t)+1)\times (P(t)+1)$ minors of some matrix. Consequently, $T_n^Q$ is identified with a closed subset of $\left(\mathbb{P}^{N_n}\right)^2\times\Gr(P(t),k[x_0,\cdots,x_r]_t)$. \end{proof} \begin{defi} Let $\IrrDiv_n X$ be the union of the irreducible components of $\EffDiv_n X$ which contain at least one closed point corresponding to a reduced and irreducible divisor. \end{defi} \begin{lem}\label{lem:comp1} Let $F$ be an irreducible component of $\IrrDiv_n X$. Then there is a unique irreducible component $E$ of $T_n$ such that $q(E) = F$. \end{lem} \begin{proof} \Cref{lem:generic sec} implies that $\EffDiv_n X$ is contained in the image of $q:T_n \rightarrow \EffDiv X$. Because $q$ is proper, there is an irreducible component $E$ of $T_n$ such that $q(E) = F$. Moreover, for any $[D] \in F$, \[ q^{-1}([D]) \simeq \left\{ (f,g)\in\left(\mathbb{P}^{N_n}\right)^2 \ \middle|\ f, g \in \Gamma(\mathscr{I}_D(n)) \right\}. \] Then $q^{-1}([D])$ is irreducible, because it is a product of two projective spaces. Thus, such an $E$ is unique. \end{proof} \begin{lem}\label{lem:comp2} Let $F$ and $E$ be as in \Cref{lem:comp1}. Then there is $(f,g)\in S_n$ such that $E$ is the only irreducible component of $T_n$ satisfying $(f,g) \in p(E)$. \end{lem} \begin{proof} Let $W \subset \EffDiv_n X$ be the complement of the image of the proper morphism \newlength{\mylength}\newlength{\mylengthA}\newlength{\mylengthB} \settowidth{\mylengthA}{$\times$}\settowidth{\mylengthB}{$,$} \setlength{\mylength}{0.5\mylengthA minus 0.5\mylengthB} \begin{alignat*}{3} \coprod_{t = 1}^{n-1} \EffDiv_{t} X &\times \EffDiv_{n - t} X &&\longrightarrow \EffDiv_n X \\ ([D_0] \kern-\mylength&\kern\mylength,[D_1]) &&\longmapsto [D_0 + D_1]. \end{alignat*} Then $W$ parametrizes the reduced and irreducible divisors of degree $n$ on $X$. Notice that $W \cap F \neq \emptyset$, because $F \subset \IrrDiv_n X$.
The uniqueness of $E$ implies that there is a dense open set $U \subset F$ such that $q^{-1}(U)$ does not intersect any irreducible component of $T_n$ other than $E$. Thus, we can take $[D] \in W \cap U$. Then $D$ is a reduced and irreducible divisor, since $[D] \in W$. \Cref{lem:generic sec} implies that there is $(f,g) \in S_n$ such that \[ p^{-1}((f,g)) = \left\{ ((f,g),[D]) \right\}. \] Then $E$ is the only irreducible component of $T_n$ containing $((f,g),[D])$, because $[D] \in U$. \end{proof} \begin{lem}\label{lem:comp3} Let $F$ and $E$ be as in \Cref{lem:comp1}. Then $p(E)$ is an irreducible component of $S_n$. \end{lem} \begin{proof} Let \[ R_n = q^{-1} (\EffDiv_1 X \cup \EffDiv_2 X \cup \cdots \cup \EffDiv_{n^2 \deg X} X) \subset T_n.\] Since $R_n$ is proper, the restriction $p|_{R_n}: R_n \rightarrow S_n$ is also proper. Moreover, $p|_{R_n}$ is surjective, because $\deg V_X(f,g) \leq n^2 \deg X$ for every $(f,g) \in S_n$. Thus, every irreducible component of $S_n$ is the image of an irreducible component of $R_n$ under $p|_{R_n}$. Let $B \subset S_n$ be an irreducible component containing $p(E)$. Then \Cref{lem:comp2} implies that $p(E) = B$. \end{proof} \begin{lem}\label{lem:IrrDiv bound} If $n$ is a positive integer, then \[ \# \irr (\IrrDiv_n X) \leq \# \irr (S_n).\] \end{lem} \begin{proof} \Cref{lem:comp1} and \Cref{lem:comp3} define a map \begin{align*} \irr(\IrrDiv_n X) &\longrightarrow \irr(S_n)\\ F &\longmapsto p(E), \end{align*} where $E$ is determined by $F$ as in \Cref{lem:comp1}. Then \Cref{lem:comp1} and \Cref{lem:comp2} imply that this map is injective. \end{proof} \begin{lem}\label{lem:effdiv bound} Let $n$ be a positive integer. Then \[ \# \irr (\EffDiv_n X) \leq 2^n {2\max\{n,d\}+(r-1)d \choose r}^{2 {n + r \choose r}-2}.
\] \end{lem} \begin{proof} Notice that \begin{alignat*}{3} \coprod_{n_0 + \cdots + n_{t-1} = n} \IrrDiv_{n_0} X \times &\cdots \times \IrrDiv_{n_{t-1}} X &&\longrightarrow \EffDiv_n X \\ ([D_0],&\cdots,[D_{t-1}]) &&\longmapsto [D_0+\cdots+D_{t-1}] \end{alignat*} is surjective, where the disjoint union runs over all integer partitions of $n$. Thus, the left-hand side has at least as many irreducible components. Take any integer partition $n_0 + \cdots + n_{t-1} = n$. Then by \Cref{lem:bound Sn} and \Cref{lem:IrrDiv bound}, \begin{align*} & \#\irr(\IrrDiv_{n_0} X \times \cdots \times \IrrDiv_{n_{t-1}} X ) \\ =& \#\irr(\IrrDiv_{n_0} X) \times \cdots \times \#\irr(\IrrDiv_{n_{t-1}} X ) \\ \leq& {2\max\{n_0,d\}+(r-1)d \choose r}^{2{n_0 + r \choose r}-2} \times \cdots \times {2\max\{n_{t-1},d\}+(r-1)d \choose r}^{2{n_{t-1} + r \choose r}-2}\\ \leq& {2\max\{n,d\}+(r-1)d \choose r}^{\left({2{n_0 + r \choose r}}-2\right) + \cdots +\left({2{n_{t-1} + r \choose r}}-2\right)}. \end{align*} Let \[ D(x) = 2{x + r \choose r}-2. \] Then $D(0) = 0$ and $D$ is convex on $[0,\infty)$. Therefore, \[ D(n_0) + D(n_1) + \cdots + D(n_{t-1}) \leq D(n). \] Furthermore, the number of integer partitions of $n$ is at most $2^n$. This proves the inequality. \end{proof} Now, we are ready to give a new upper bound: \NSbound \begin{proof} If $X$ is a curve or a projective space, then $(\NS X)_\tor = 0$. Thus, we may assume that $r \geq 3$, $d \geq 2$, $\deg X \geq 2$ and $r-2 \geq \codim X \geq 1$. Let $n = (d-1) \codim X \cdot \deg X$. Then \[ d \leq n \leq (r-2)(d-1)d^{r-2}. \] \Cref{thm:bound by EffDiv} and \Cref{lem:effdiv bound} imply that \[ \#(\NS X)_\tor \leq \#\conn\left(\EffDiv_n X \right) \leq \#\irr\left(\EffDiv_n X \right) \leq 2^n {2n+(r-1)d \choose r}^{2 {n + r \choose r}-2}.
\] Since \begin{align*} \log_2 {2n+(r-1)d \choose r} &\leq \log_2 \left(2n+(r-1)d\right)^r \\ &\leq r \log_2 \left(rd^r\right) \\ &\leq r(\log_2 r + r \log_2 d) \\ &\leq r^2(1+\log_2 d) \\ &\leq {r^2 d} \intertext{and} 2 {n + r \choose r} &\leq \frac{2 (n+r)^r}{r!} \\ &\leq\frac{ (rd^{r-1})^r}{3}, \end{align*} we have \begin{align*} \log_d\log_2 \left(\#(\NS X)_\tor\right) &\leq \log_d\log_2 \left(2^n {2n+(r-1)d \choose r}^{2 {n + r \choose r}}\right) \\ &\leq \log_d \left(n + r^2 d \frac{ (rd^{r-1})^r}{3}\right)\\ &\leq \log_d \left( r^{r+2}d^{r(r-1)+1} \right)\\ &\leq (r+2)\log_2 r+r(r-1)+1 \\ &\leq 2r\log_2 r + r^2. \qedhere \end{align*} \end{proof} We now give an application to the torsion subgroups of second cohomology groups. \begin{lem}\label{thm:NS coh} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety of degree $d$. Let $\ell \neq \charac k$ be a prime number. Then the embedding $(\NS X)\otimes\mathbb{Z}_\ell \hookrightarrow H^2_\et(X,\mathbb{Z}_\ell)$ induced by the Kummer sequence \cite[Remark V.3.29]{Mil2} restricts to an isomorphism \[ (\NS X)[\ell^\infty] \simeq H^2_\et(X,\mathbb{Z}_\ell)_\tor. \] \end{lem} \begin{proof} See \cite[2.2]{SZ}. \end{proof} Therefore, the bound of \Cref{thm:NS better bound} implies the corollaries below. \SHbound \begin{proof} This follows from \Cref{thm:NS better bound} and \Cref{thm:NS coh}. \end{proof} \begin{cor} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth projective variety over $\mathbb{C}$ defined by homogeneous polynomials of degree $\leq d$. Then \begin{equation*} \# H^2_\sing(X^\an,\mathbb{Z})_\tor \leq 2^{d^{r^2+2r\log_2 r}}. \end{equation*} \end{cor} \begin{proof} This follows from \Cref{thm:H2 bound} and the fact that \[\prod_{\ell \textup{ is prime}} H^2_\et(X,\mathbb{Z}_\ell)_\tor \simeq H^2_\sing(X^\an,\mathbb{Z})_\tor.
\qedhere\] \end{proof} \section{The Number of Generators of \texorpdfstring{$(\NS X)_{\tor}$}{(NS X)tor}}\label{sec:generators} In this section, $X \hookrightarrow \mathbb{P}^r$ is a smooth connected projective variety, and $\ell$ is a prime number not equal to $\charac k$. The goal is to give a uniform upper bound on the number of generators of $(\NS X)[\ell^\infty]$. Our approach is a simplification of a brief sketch suggested by J\'anos Koll\'ar. \begin{lem}\label{thm:NS fun} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth connected projective variety of degree $d$, and $x_0$ be a geometric point of $X$. Let $\ell \neq \charac k$ be a prime number. Then \[(\NS X)[\ell^\infty]^* \simeq \pi^{\et}_1(X,x_0)^\ab[\ell^\infty].\] \end{lem} \begin{proof} The exact sequence in \cite[Proposition 69]{Jak} gives an exact sequence \[ 0 \rightarrow (\NS X)[\ell^\infty]^* \rightarrow \pi^{\et}_1(X,x_0)^{(\ell)} \rightarrow \pi^{\et}_1(\Alb X,0)^{(\ell)} \rightarrow 0, \] by taking the maximal pro-$\ell$ abelian quotient. Since $\pi^{\et}_1(\Alb X,0)^{(\ell)}$ is a free $\mathbb{Z}_\ell$-module, \[(\NS X)[\ell^\infty]^* \simeq \pi^{\et}_1(X,x_0)^{(\ell)}_\tor \simeq \pi^{\et}_1(X,x_0)^{\ab}[\ell^\infty]. \qedhere\] \end{proof} If $M$ is a topological manifold, the linking form implies that $H^2_\sing(M,\mathbb{Z})_\tor^* \simeq H_1^\sing(M,\mathbb{Z})_\tor.$ \Cref{thm:NS coh} and \Cref{thm:NS fun} imply the \'etale analogue: $H^2_\et(X,\mathbb{Z}_\ell)_\tor^* \simeq\pi^{\et}_1(X,x_0)^\ab[\ell^\infty].$ \begin{lem}\label{thm:pos bound} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth connected projective variety of degree $d$. Then $(\NS X)[\ell^\infty]$ is generated by at most $(d-1)(d-2)$ elements. \end{lem} \begin{proof} For a general linear space $L \subset \mathbb{P}^r$ of dimension $r - \dim X+1$, the intersection $C \coloneqq X \cap L$ is a connected smooth curve of degree $d$. Take a geometric point $x_0 \in C$.
Then the natural map \[ \pi_1^{\et}(C,x_0) \rightarrow \pi_1^{\et}(X,x_0) \] is surjective, by repeatedly applying the Lefschetz hyperplane theorem for \'etale fundamental groups \cite[XII. Corollaire 3.5]{SGA2}. Let $g$ be the genus of $C$. Then \Cref{thm:NS fun} implies that $(\NS X)[\ell^\infty]^*$ is isomorphic to a subquotient of \[ \pi_1^{\et}(C,x_0)^{(\ell)} \simeq \mathbb{Z}_\ell^{2 g}. \] Notice that $2g \leq (d-1)(d-2)$ because $\deg C = d$. As a result, $(\NS X)[\ell^\infty]$ is generated by $\leq (d-1)(d-2)$ elements. \end{proof} \NSgen \begin{proof} Since a product of finitely many finite cyclic groups of coprime orders is again cyclic, \[N \simeq \prod_{\substack{\ell \neq p\\ \ell \textup{ is prime}}} (\NS X)[\ell^\infty] \] is a product of $\leq(d-1)(d-2)$ cyclic groups by \Cref{thm:pos bound}. \end{proof} \SHgen \begin{proof} This follows from \Cref{thm:pos bound} and \Cref{thm:NS coh}. \end{proof} \begin{cor} Let $X \hookrightarrow \mathbb{P}^r$ be a smooth connected projective variety of degree $d$ over $\mathbb{C}$. Then $H^2_\sing(X^\an,\mathbb{Z})_\tor$ is generated by at most $(d-1)(d-2)$ elements. \end{cor} \begin{proof} This follows from \Cref{thm:NSgen} and the fact that \[ (\NS X)_\tor \simeq \prod_{\ell \textup{ is prime}}(\NS X)[\ell^\infty] \simeq \prod_{\ell \textup{ is prime}} H^2_\et(X,\mathbb{Z}_\ell)_\tor \simeq H^2_\sing(X^\an,\mathbb{Z})_\tor. \qedhere\] \end{proof} The bounds in this section exclude the case $\ell = \charac k$, because of the bad behavior of the \'etale cohomology at $p$. One may try to overcome this by using Nori's fundamental group scheme \cite{Nor}. However, the Lefschetz hyperplane theorem for Nori's fundamental group scheme is no longer true \cite[Remark 2.4]{BH}. \begin{qst} Can one use another cohomology to prove the analogue of \Cref{thm:pos bound} for $(\NS X)[p^\infty]$ in characteristic $p$?
\end{qst} \section*{Acknowledgement} The author thanks his advisor, Bjorn Poonen, for suggesting the problem, insightful conversations, and careful guidance. The author thanks J\'anos Koll\'ar, whose comments led to \Cref{sec:chow} and \Cref{sec:generators}. The author thanks Wei Zhang for helpful conversations regarding \'etale homology. The author thanks Chenyang Xu for helpful conversations regarding Chow varieties. \bibliographystyle{plain}
\section{Concluding Remarks} \label{sec:conclusion} In this paper we proved a tight lower bound $L(n,k)$ for Streett complementation. We note that the construction can be generalized by two modifications. First, we allow $G(i)$ (resp. $B(i)$) to be arbitrary subsets of $P_{G}$ (resp. $P_{B}$). Second, we also use multi-dimensional $R$-rankings; the range of $r$ is a set of $k$-tuples of integers in $[1..n]$. As a result, both $R$-ranks and $H$-ranks are $k$-tuples of integers where $k$ can be as large as $2^{n}$ (the current effective $k$ is bounded by $n$). These two modifications require a much more sophisticated definition of $Q$-rankings and construction of $Q$-words, but they have no asymptotic effect on $L(n,k)$. The situation is different from Rabin complementation~\cite{CZL09}, where $Q$-rankings are also multi-dimensional (though terms other than $Q$-rankings and $Q$-words were used), and the components of a $k$-tuple (the value of a $Q$-ranking) are independent of one another, so each can impose an independent behavior on $Q$-words. Put another way, no matter how large the index set is (the maximum size can be $2^{n}$), all dual properties, each of which is parameterized with an index, can be realized in one $Q$-word. For Streett complementation, the diminishing gain when pushing up $k$ made us realize that with an increasing number of $Q$-rankings, more and more correlations occur between $Q$-rankings. Exploiting these correlations led us to the discovery of the corresponding upper bound. \section{Construction of $Q$-Words} \label{sec:construction} In this section we prove Theorem~\ref{thm:existence}. Recall that we need a construction to simultaneously satisfy all properties in Definition~\ref{def:Q-word}, which are parameterized with pairs of states (Condition~\eqref{en:Q-word-1}) or states (Condition~\eqref{en:Q-word-2}).
The idea is to concatenate a sequence of finite $\Delta$-graphs, each of which satisfies the properties with respect to a specific pair of states or a specific state. With the help of the bypass track ($t$-track), properties associated with each individual subgraph are all preserved in the final concatenation, giving us a desired $Q$-word. Let $f=\langle r,h \rangle$. $\scrG_{f}$ divides into two sequential subgraphs $\scrG_{r}$ and $\scrG_{h}$, which satisfy Properties~\eqref{en:Q-word-1} and~\eqref{en:Q-word-2}, respectively. Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} are obvious once the final construction is shown. As stated earlier, Property~\eqref{en:Q-word-1} and Property~\eqref{en:Q-word-2} are orthogonal; Property~\eqref{en:Q-word-1} only relies on $R$-rankings and Property~\eqref{en:Q-word-2} only relies on $H$-rankings. We call a finite $\Delta$-graph, every level of which is ranked by the same $R$-ranking, an \emph{$R$-word} if it satisfies Properties~\eqref{en:Q-word-1},~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4}. Similarly, a finite $\Delta$-graph, every level of which is ranked by the same $H$-ranking, is called an \emph{$H$-word} if it satisfies Properties~\eqref{en:Q-word-2},~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4}. As with $Q$-words, $H$-words (resp. $R$-words) are not uniquely determined by $H$-rankings (resp. $R$-rankings). Nevertheless, all $H$-words (resp. $R$-words) corresponding to a specific $h$ (resp. $r$) serve the construction purpose equally well, and hence we simply denote an arbitrarily chosen one by $\scrG_{h}$ (resp. $\scrG_{r}$). Theorem~\ref{thm:existence} builds on Lemmas~\ref{lem:R-word} and~\ref{lem:H-word}. \begin{lemma}[$R$-Word] \label{lem:R-word} An $R$-word exists for every $R$-ranking. \end{lemma} \begin{example}[$R$-Word] \label{ex:R-word} Let us revisit Example~\ref{ex:Q-word}.
In Figure~\ref{fig:Q-word}, the $R$-word $\scrG_{r}$ consists of two parts: $\scrG^{(1)}_{r}$ (level $0$ to level $3$) and $\scrG^{(2)}_{r}$ (level $3$ to level $6$). For Property~\eqref{en:Q-word-1} with respect to $q_{2}$ and $q_{1}$, we can obtain the desired $\varrho_{2, 1}$ as follows. In $\scrG^{(1)}_{r}$, $\varrho_{2, 1}$ starts from $\langle q_{2}, 0 \rangle$, visits $\langle b_{1}, 1 \rangle$, $\langle b_{2}, 2 \rangle$ and then $\langle q_{0}, 3 \rangle$. In $\scrG^{(2)}_{r}$, $\varrho_{2, 1}$ continues from $\langle q_{0}, 3 \rangle$, visits $\langle b_{1}, 4 \rangle$, $\langle b_{2}, 5 \rangle$ and lands at $\langle q_{1}, 6 \rangle$. For Property~\eqref{en:Q-word-1} with respect to $q_{0}$ and $q_{1}$, we can obtain the desired $\varrho_{0, 1}$ as follows. The path $\varrho_{0, 1}$ starts from $\langle q_{0}, 0 \rangle$, passes through $\scrG^{(1)}_{r}$ via $q_{0}$-track until it reaches $\langle q_{0}, 3 \rangle$, from where it visits $\langle b_{1}, 4 \rangle$, $\langle b_{2}, 5 \rangle$ and lands at $\langle q_{1}, 6 \rangle$. \end{example} \begin{lemma}[$H$-Word] \label{lem:H-word} An $H$-word exists for every $H$-ranking. \end{lemma} \begin{example}[$H$-Word] \label{ex:H-word} Let us revisit Example~\ref{ex:Q-word}. In Figure~\ref{fig:Q-word}, the $H$-word $\scrG_{h}$ consists of three parts: $\scrG^{(0)}_{h}$ (level $6$ to level $12$), $\scrG^{(1)}_{h}$ (level $12$ to level $18$), and $\scrG^{(2)}_{h}$ (level $18$ to level $24$). Let us take a look at the paths $\varrho_{h}$ and $\varrho_{h'}$ (in $\scrG^{(1)}_{h}$) defined in Example~\ref{ex:Q-word}. The path $\varrho_{h}$ (marked green except the last edge) starts at $\langle q_{1}, 12 \rangle$, visits $\langle b_{2}, 13 \rangle$ and $\langle g_{1}, 14 \rangle$, and enters $t$-track at $\langle t, 15 \rangle$. It continues on $t$-track until reaching $\langle t, 17 \rangle$, and then takes $\langle \langle t, 17 \rangle, \langle q_{1}, 18 \rangle \rangle$ (marked blue) to the end.
The path $\varrho_{h'}$ (marked red except the last edge) starts at $\langle q_{1}, 12 \rangle$, takes $q_{1}$-track to reach $\langle q_{1}, 15 \rangle$, from where it visits $\langle g_{2}, 16 \rangle$ and then enters $t$-track at $\langle t, 17 \rangle$. As with $\varrho_{h}$, $\varrho_{h'}$ returns to $q_{1}$-track via $\langle \langle t, 17 \rangle, \langle q_{1}, 18 \rangle \rangle$. \end{example} \begin{varthm}{\ref{thm:existence}~($Q$-Words).} A $Q$-word exists for every $Q$-ranking. \end{varthm} \begin{proof} By Lemmas~\ref{lem:R-word} and~\ref{lem:H-word}, we have $\scrG_{r}$ and $\scrG_{h}$ as an $R$-word and an $H$-word, respectively. The desired $Q$-word $\scrG$ is just $\scrG_{r} \cat \scrG_{h}$. Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} follow immediately because they hold in both $\scrG_{r}$ and $\scrG_{h}$. Let $\varrho^{r}_{i, i'}$ be the full path in $\scrG_{r}$ that satisfies Property~\eqref{en:Q-word-1} for $q_{i}$ and $q_{i'}$ where $i, i' \in [n]$ and $r(q_{i}) > r(q_{i'})$, and $\varrho^{h}_{i', k}$ the full path in $\scrG_{h}$ that satisfies Property~\eqref{en:Q-word-2} (with respect to $q_{i'}$ and index $k$). Then $\varrho^{r}_{i, i'} \cat \varrho^{h}_{i', k}$ is the path that Property~\eqref{en:Q-word-1} requires for the state pair $q_{i}$ and $q_{i'}$. Let $i \in [n]$ and $j \in I$. Let $\varrho^{r}_{i,i}$ be the full $q_{i}$-track in $\scrG_{r}$, and $\varrho^{h}_{i, j}$ the full path in $\scrG_{h}$ that satisfies Property~\eqref{en:Q-word-2} (with respect to vertex $q_{i}$ and index $j$). Then in $\scrG$, for each $q_{i} \in Q$, we have $k$ full paths $\varrho^{r}_{i, i} \cat \varrho^{h}_{i, j}$ ($j \in I$), which take care of the existence part of Property~\eqref{en:Q-word-2}. The exactness part follows from the exactness part of Property~\eqref{en:Q-word-2} for $\scrG_{h}$, and the fact that for each $i \in [n]$, $\varrho^{r}_{i,i}$ is unique in $\scrG_{r}$.
\end{proof} \section{Introduction} \label{sec:introduction} Complementation is a fundamental notion in automata theory. Given an automaton $\calA$, the complementation problem asks to find an automaton $\calB$ that accepts exactly all words that $\calA$ does not accept. Complementation connects automata theory with mathematical logic due to the natural correspondence between language complementation and logical negation, and hence plays a pivotal role in solving many decision and definability problems in mathematical logic. A fundamental connection between automata theory and the monadic second order logics was demonstrated by B\"{u}chi~\cite{Buc60}, who started the theory of finite automata on infinite words ($\omega$-automata)~\cite{Buc66}. The original $\omega$-automata are now referred to as B\"{u}chi automata, and B\"{u}chi complementation was key to establishing that the class of $\omega$-regular languages (sets of $\omega$-words generated by product $\cat$, union $\cup$, star ${}^{*}$ and limit ${}^{\omega}$) is closed under complementation~\cite{Buc66}. B\"{u}chi's discovery also has profound repercussions in applied logics. Since the '80s, with the increasing demand of reasoning about infinite computations of reactive and concurrent systems, $\omega$-automata have been acknowledged as a unifying representation for \emph{programs} as well as for \emph{specifications}~\cite{VW86}. Complementation of $\omega$-automata is crucial in many of these applications. But complementation of $\omega$-automata is non-trivial. Only after extensive studies in the past two decades~\cite{SVW87,Mic88,Saf88,FKV06,Yan06,Sch09} (also see survey~\cite{Var07}), do we have a good understanding of the complexity of B\"{u}chi complementation. But a question about a very important type of $\omega$-automata remains unanswered, namely the complexity of Streett complementation, where the gap between the current lower bound and upper bound is substantial.
Streett automata are one of a kind, because Streett acceptance conditions naturally encode \emph{strong fairness}, the requirement that requests occurring infinitely often be responded to infinitely often, which is necessary for meaningful computations~\cite{FK84,Fra86}. \paragraph{Related Work.} Obtaining nontrivial lower bounds has been difficult. The first nontrivial lower bound for B\"{u}chi complementation is $n!\approx (0.36n)^{n}$, obtained by Michel~\cite{Mic88,Lod99}. In 2006, combining ranking with the \emph{full automaton} technique, Yan improved the lower bound of B\"{u}chi complementation to $\Omega(L(n))$~\cite{Yan06}, which now is matched tightly by the upper bound $O(n^{2}L(n))$~\cite{Sch09}, where $L(n)\approx(0.76n)^{n}$. Also established in~\cite{Yan06} was a $(\Omega(nk))^{n}=2^{\Omega(n\lg nk)}$ tight lower bound (where $k$ is the number of B\"{u}chi indices) for generalized B\"{u}chi complementation, which also applies to Streett complementation because generalized B\"{u}chi automata are a subclass of Streett automata. In~\cite{CZL09}, we proved a tight lower bound $2^{\Omega(nk \lg n)}$ for Rabin complementation (where the Rabin index size $k$ can be as large as $2^{n-\epsilon}$ for any arbitrary but fixed $\epsilon > 0$). Several constructions for Streett complementation exist~\cite{SV89,Kla91,Saf92,KV05a,Pit06}, but all involve at least $2^{O(nk\lg nk)}$ state blow-up, which is significantly higher than the current best lower bound $2^{\Omega(n\lg nk)}$, since the Streett index size $k$ can reach $2^{n}$. Determining the complexity of Streett complementation has been posed as an open problem since the late '80s~\cite{SV89,KV05a,Yan06,Var07}. In~\cite{CZ11b} we showed a construction for Streett complementation with the upper bound $2^{O(n \lg n+nk \lg k)}$ for $k = O(n)$ and $2^{O(n^{2} \lg n)}$ for $k=\omega(n)$.
In this paper we establish a matching lower bound $2^{\Omega(n \lg n+nk \lg k)}$ for $k = O(n)$ and $2^{\Omega(n^{2} \lg n)}$ for $k = \omega(n)$, thereby showing that the construction in~\cite{CZ11b} is essentially optimal at the granularity of $2^{\Theta(\cdot)}$. This lower bound is obtained by applying two techniques: \emph{fooling set} and \emph{full automaton}. \paragraph{Fooling Set.} The fooling set technique is a classic way of obtaining lower bounds on nondeterministic finite automata on finite words (NFA). Let $\Sigma$ be an alphabet and $\scrL \subseteq \Sigma^{*}$ a regular language. A set of pairs $P=\{(x_{i}, y_{i}) \mid x_{i}, y_{i} \in \Sigma^{*}, 1 \le i \le n \}$ is called a \emph{fooling set} for $\scrL$, if $x_{i}y_{i} \in \scrL$ for $1 \le i \le n$ and $x_{i}y_{j} \not \in \scrL$ for $1 \le i,j \le n$ and $i \not = j$. If $\scrL$ has a fooling set $P$, then any NFA accepting $\scrL$ has at least $|P|$ states~\cite{GS96}. The purpose of a fooling set is to identify runs with dual properties (called fooling runs): fragments of accepting runs on words in $\scrL$, when pieced together in certain ways, induce non-accepting runs. By an argument in the style of the Pumping Lemma, a small automaton would not be able to distinguish how it arrives at a state, and hence it cannot differentiate between some accepting runs and some non-accepting ones. In the setting of $\omega$-automata, a similar technique exists, which we refer to as Michel's scheme~\cite{Mic88}. A set $P=\{x_{i} \in \Sigma^{*} \mid 1 \le i \le n \}$ is called a \emph{fooling set} for $\scrL$, if $(x_{i})^{\omega} \in \scrL$ for $1 \le i \le n$ and $((x_{i})^{+}(x_{j})^{+})^{\omega} \subseteq \overline{\scrL}$ for $1 \le i,j \le n$ and $i \not = j$~\cite{Mic88,Lod99}.
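As a concrete illustration of the finite-word fooling set technique, the sketch below checks the two defining conditions by brute force for a small example language of our own choosing (the language $\{a^{i}b^{i} : 1 \le i \le n\}$ and its fooling set are illustrative assumptions, not taken from this paper):

```python
# Sketch: checking the fooling-set conditions for an example language.
# L_n = { a^i b^i : 1 <= i <= n } has fooling set P = { (a^i, b^i) },
# so any NFA accepting L_n needs at least n states.

def in_lang(w, n):
    """Membership test for L_n = { a^i b^i : 1 <= i <= n }."""
    return any(w == "a" * i + "b" * i for i in range(1, n + 1))

def is_fooling_set(pairs, member):
    """Check: x_i y_i in L for all i, and x_i y_j not in L for i != j."""
    for i, (xi, yi) in enumerate(pairs):
        if not member(xi + yi):
            return False
        for j, (_, yj) in enumerate(pairs):
            if i != j and member(xi + yj):
                return False
    return True

n = 5
pairs = [("a" * i, "b" * i) for i in range(1, n + 1)]
print(is_fooling_set(pairs, lambda w: in_lang(w, n)))  # True
```

Mixing the left half of one pair with the right half of another always leaves the language, which is exactly the "dual property" that forces a purported small NFA into a contradiction.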
\paragraph{Full Automaton.} Sakoda and Sipser introduced the \emph{full automaton} technique~\cite{SS78} (the name was first coined in~\cite{Yan06}) and used it to obtain several completeness and lower bound results on transformations involving $2$-way finite automata~\cite{SS78}. In particular, they proved a classic result of automata theory: the lower bound of complementing an NFA with $n$ states is $2^{n}$. To establish lower bounds for complementation, one starts with designing a class of automata $\calA_{n}$ and then a class of words $\calW_{n}$ such that $\calW_{n}$ are not contained in $\scrL(\calA_{n})$. Next one shows that runs of purported complementary automata $\calC_{n}$ on $\calW_{n}$ exhibit dual properties by application of the fooling set technique. However, some fooling runs can only be generated by long and sophisticated words, which are very difficult to ``guess'' right from the beginning. The ingenuity of the full automaton technique is to remove two levels of indirection: since the ultimate goal is to construct fooling runs, why not start with runs directly, and build $\calW_{n}$ and $\calA_{n}$ later? Without a priori constraints imposed from $\calA_{n}$ or $\calW_{n}$ (they do not exist yet), full automata operate on all possible runs; for a full automaton of $n$ states, every possible unit transition graph (bipartite graph with $2n$ vertices) is identified with a letter, and words are nothing but potential run graphs. Removing the two levels of indirection proved to be powerful. By this technique, the $2^{n}$ lower bound proof for complementing NFA was surprisingly short and easy to understand~\cite{SS78} (a fooling set method was implicit in the proof). We should note that full automata operate on large alphabets whose size grows exponentially with the state size, but this does not essentially limit its application to automata on conventional alphabets.
By an encoding trick, a large alphabet can be mapped to a small alphabet with no compromise to lower bound results~\cite{Sip79,Yan06,CZL09}. \paragraph{Ranking.} For $\omega$-automata, the power of the fooling set and full automaton techniques was further enhanced by the use of rankings on run graphs~\cite{Yan06,CZL09}. Since first introduced in~\cite{Kla91}, rankings have been shown to be a powerful tool to represent properties of run graphs; complementation constructions for various types of $\omega$-automata were obtained by discovering respective rankings that precisely characterize those run graphs that contain no accepting path (with respect to source automata)~\cite{KV01,KV04,KV05a,FKV06,Kup06}. With the help of rankings, constructing a fooling set amounts to designing a certain type of rankings. In fact, as shown below, an explicit description of a fooling set might be very hard to find, but the essential properties the fooling set induces can be concisely represented by a certain type of rankings. \paragraph{Our Results.} In this paper we establish a lower bound $L(n,k)$ for Streett complementation: $2^{\Omega(n\lg n + nk\lg k)}$ for $k=O(n)$ and $2^{\Omega(n^{2} \lg n)}$ for $k=\omega(n)$, which matches the upper bound obtained in~\cite{CZ11b}. This lower bound applies to all Streett complementation constructions that output union-closed automata (see Section~\ref{sec:preliminaries}), which include B\"{u}chi, generalized B\"{u}chi and Streett automata. This bound considerably improves the current best bound $2^{\Omega(n \lg nk)}$~\cite{Yan06}, especially in the case $k = \Theta(n)$. Determinization is another fundamental concept in automata theory and it is closely related to complementation. A deterministic $T$-automaton can be easily complemented by switching from the $T$-acceptance condition to the dual co-$T$ condition (e.g., Streett vs. Rabin).
Therefore, the lower bound $L(n,k)$ also applies to Streett determinization if the output automata are the dual of union-closed automata. In particular, no construction for Streett determinization can output Rabin automata with state size asymptotically less than $L(n,k)$. We can get a slightly weaker result for constructions that output Rabin automata (which are not union-closed): no construction for Streett complementation can output Rabin automata with state size $n' \le L(n,k)$ and index size $k' = O(n')$, due to the fact that a Rabin automaton with state size $n'$ and index size $k'$ can be translated to an equivalent B\"{u}chi automaton with $O(n'k')$ states. For the same reason, no construction for Streett determinization can output Streett automata with state size $n' \le L(n,k)$ and index size $k' = O(n')$. Even with the fooling set and full automaton techniques and the assistance of rankings, a difficulty remains: in the setting of Streett complementation, how large can a fooling set for a complementary automaton be? The challenge is two-fold. One is to implant potentially contradictory properties in each member of a fooling set so that complementary run graphs can be obtained by certain combinations of those members. The other is to avoid correlations between members of a fooling set so that each member has to be memorized by a distinct state in a purported complementary automaton. By exploiting the nature of Streett acceptance conditions, our fooling set is obtained via a type of multi-dimensional rankings, called $Q$-rankings, and members in the fooling set are called $Q$-words. Simultaneously accommodating potentially contradictory properties in multiple dimensions requires handling nontrivial subtleties. We shall continue this discussion in Section~\ref{sec:lower-bound} after presenting the definition of $Q$-rankings. \paragraph{Paper Organization.} Section~\ref{sec:preliminaries} presents notations and basic terminology in automata theory.
Section~\ref{sec:lower-bound} introduces full Streett automata, $Q$-rankings and $Q$-words, and use them to establish the lower bound. Section~\ref{sec:conclusion} concludes with a discussion. Technical proofs are omitted from the main text, but they can be found in the appendix. \section{Lower Bound} \label{sec:lower-bound} In this section we define full Streett automata, and related $Q$-rankings and $Q$-words, and use them to establish the lower bound. From now on, we reserve $n$ and $k$, respectively, for the effective state size and index size in our construction (except in Theorem~\ref{thm:lower-bound} and Section~\ref{sec:conclusion} where $n$ and $k$, respectively, mean the state size and index size of a complementation instance). All related notions are in fact parameterized with $n$ and $k$, but we do not list them explicitly unless required for clarity. Let $I$ be $[1..k]$. We first describe the plan of proof. For each $k, n >0$, we define a full Streett automaton $\calS=(\Sigma,S,Q,\Delta,\calF)$ and a set of $Q$-rankings $f: Q \to [1..n] \times I^{k}$. For each $Q$-ranking $f$, we define a finite $\Delta$-graph $\scrG_{f}$, called a $Q$-word. We then show that for each $f$, $(\scrG_{f})^{\omega} \not \in \scrL(\calS)$, yet $((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$ for every distinct pair of $Q$-rankings $f$ and $f'$, that is, $Q$-words constitute a fooling set for $\overline{\scrL(\calS)}$. Using Michel's scheme~\cite{Mic88,Lod99,Yan06}, we show that if a union-closed automaton $\calC$ complements $\calS$, then its state size is no less than the number of $Q$-rankings, because otherwise we can ``weave'' the runs of $(\scrG_{f})^{\omega}$ and $(\scrG_{f'})^{\omega}$ in such a way that $\calC$ would accept a word in $((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega}$, contradicting $((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$. 
\begin{definition}[Full Streett Automata] \label{def:FS} A family of full Streett automata $\{\calS=\langle\Sigma,S,Q,\Delta,\calF\rangle\}_{n, k >0}$ is such that \begin{enumerate}[label=\ref{def:FS}.\arabic*,ref=\ref{def:FS}.\arabic*] \item $S=Q \cup P_{\mathrm{G}} \cup P_{\mathrm{B}} \cup T$ where $Q$, $P_{\mathrm{G}}$, $P_{\mathrm{B}}$ and $T$ are pairwise disjoint sets of the following forms: \begin{align*} Q &=\{q_{0},\cdots,q_{n-1}\}, & P_{\mathrm{G}} &=\{g_{1},\cdots,g_{k} \}, & T &=\{t\}, & P_{\mathrm{B}} &=\{b_{1},\cdots,b_{k} \} \, . \end{align*} \item $\calF=\langle G,B\rangle_{I}$ such that $G(i) = \{g_{i}\}$ and $B(i)=\{b_{i}\}$ for $i \in I$. \end{enumerate} \end{definition} $Q$ is intended to be the domain of $Q$-rankings. $P_{\mathrm{G}}$ and $P_{\mathrm{B}}$ are pools from which singletons $G(i)$'s and $B(i)$'s are formed. $T$ is to be used for building a \emph{bypass} track that makes graph concatenation behave like a parallel composition so that properties associated with each subgraph are all preserved in the final concatenation. \begin{definition}[$Q$-Ranking] \label{def:Q-ranking} A \emph{$Q$-ranking} for $\calS$ is a function $f: Q \to [1..n] \times I^{k}$, which is identified with a pair of functions $\langle r,h \rangle$, where $r: Q \to [1..n]$ is one-to-one, and $h: Q \to I^{k}$ maps a state to a permutation of $I$. \end{definition} For a $Q$-ranking $f=\langle r, h \rangle$, we call $r$ (resp. $h$) the $R$-ranking or numeric ranking (resp. $H$-ranking or index ranking) of $f$. We use \emph{$Q$-ranks} (resp. \emph{$R$-ranks}, \emph{$H$-ranks}) to mean values of $Q$-rankings (resp. $R$-rankings, $H$-rankings). For $q \in Q$, we write $h(q)[i]$ ($i \in I$) to denote the $i$-th component of $h(q)$. Let $\calD^{Q}$ be the set of all $Q$-rankings and $|\calD^{Q}|$ its size. Clearly, we have $n!$ $R$-rankings and $(k!)^{n}$ $H$-rankings, and so $|\calD^{Q}| = (n!)(k!)^{n} = 2^{\Omega(n \lg n + n k \lg k)}$.
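As a sanity check of the count $|\calD^{Q}| = (n!)(k!)^{n}$, one can enumerate $Q$-rankings directly for small $n$ and $k$; the brute-force sketch below is our own illustration and not part of the proof:

```python
# Sketch: enumerating Q-rankings for small n and k.
# A Q-ranking pairs a one-to-one map r: Q -> [1..n] (a permutation, since
# |Q| = n) with a map h assigning each of the n states a permutation of I.
from itertools import permutations, product
from math import factorial

def count_q_rankings(n, k):
    rs = list(permutations(range(1, n + 1)))                      # choices of r
    hs = list(product(permutations(range(1, k + 1)), repeat=n))   # choices of h
    return len(rs) * len(hs)

for n, k in [(2, 2), (3, 2), (2, 3)]:
    direct = count_q_rankings(n, k)
    formula = factorial(n) * factorial(k) ** n
    print(n, k, direct, formula)  # the two counts agree
```

Already for modest $n$ and $k$ the $(k!)^{n}$ factor dominates, which is the quantitative reason the $H$-rankings carry the bound once $k$ exceeds $\lg n$.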
As stated in the introduction, $Q$-rankings are essential for obtaining the lower bound. It turns out that $H$-rankings are the core of $Q$-rankings, for $(k!)^{n}$ already begins to dominate $n!$ when $k$ is larger than $\lg n$. Now we explain the idea behind the design of $H$-rankings. Recall that our goal is to have $(\scrG_{f})^{\omega} \not \in \scrL(\calS)$ for any $Q$-ranking $f$ as well as $((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$ for any two different $Q$-rankings $f$ and $f'$. For simplicity, we ignore $R$-rankings and assume $Q$-rankings are just $H$-rankings. We say that a finite path \emph{discharges} obligation $j$ if the path visits $B(j)$ and a finite path \emph{owes} obligation $j$ if the path visits $G(j)$ but does not visit $B(j)$. As shown below, for each $i \in [n]$, $q_{i}$-track in $\scrG_{f}$ is associated with the $k$-tuple $f(q_{i})$, which is a permutation of $I$, and exactly $k$ \emph{full} paths in $\scrG_{f}$ go from the beginning of $q_{i}$-track to the end of $q_{i}$-track. We say that those paths are \emph{on $q_{i}$-track}. For each $i\in [n]$ and $j \in I$, the $j$-th full path on $q_{i}$-track owes exactly the obligation $f(q_{i})[j]$. Let $\varrho=\varrho_{0} \cat \varrho_{1} \cat \cdots$ be an infinite path in $(\scrG_{f})^{\omega}$ where $\varrho_{t}$ ($t \ge 0$) is a full path in the $t$-th $\scrG_{f}$. Without $R$-rankings, our construction prescribes that all $\varrho_{t}$ start and end at a specific track, say $q_{i}$-track, and hence are associated with $f(q_{i})$. Obligations associated with all $\varrho_{t}$ simply form a subset $I'$ of $I$. However, we impose an ordering $\prec_{f, i}$ on $I'$ (different from the standard numeric ordering) such that $f(q_{i})[j] \prec_{f, i} f(q_{i})[j']$ if and only if $j < j'$. The ordering $\prec_{f, i}$ is total thanks to $f(q_{i})$ being a permutation of $I$.
Then a condition in our construction guarantees that the minimum obligation with respect to $\prec_{f,i}$ will never be discharged on $\varrho$, and therefore $\varrho$ violates $\langle G,B\rangle_{I}$. Since this $\varrho$ is chosen arbitrarily, we have $(\scrG_{f})^{\omega} \not \in \scrL(\calS)$. Now let $\scrG \in ((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega}$. To show $\scrG \in \scrL(\calS)$, we construct an infinite path $\varrho=\varrho_{0} \cat \varrho_{1} \cat \cdots$ in $\scrG$ that satisfies $\langle G,B\rangle_{I}$, where $\varrho_{t}$ ($t \ge 0$) is a full path in the $t$-th subgraph (which is either $\scrG_{f}$ or $\scrG_{f'}$). Let $i$ be such that $f(q_{i}) \not = f'(q_{i})$ (such an $i$ exists by the assumption $f\not =f'$). Unlike before, $q_{i}$-track in $\scrG_{f}$ is associated with $f(q_{i})$ and $q_{i}$-track in $\scrG_{f'}$ is associated with $f'(q_{i})$. Since $f(q_{i})$ and $f'(q_{i})$ are different permutations of $I$, a condition in our construction ensures that a full path $\varrho_{f}$ in $\scrG_{f}$ and a full path $\varrho_{f'}$ in $\scrG_{f'}$, both on $q_{i}$-track, mutually discharge each other's obligations. So we let all $\varrho_{t}$ in $\scrG_{f}$ be $\varrho_{f}$ and all $\varrho_{t}$ in $\scrG_{f'}$ be $\varrho_{f'}$. Since there are infinitely many $\varrho_{f}$ and $\varrho_{f'}$ in $\varrho$, $\varrho$ satisfies $\langle G,B\rangle_{I}$, giving us $\scrG \in \scrL(\calS)$. Since $\scrG$ is chosen arbitrarily, we have $((\scrG_{f})^{+}(\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$. Now we are ready to formally define $Q$-words. \begin{definition}[$Q$-Word] \label{def:Q-word} A finite $\Delta$-graph $\scrG$ is called a \emph{$Q$-word} if every level of $\scrG$ is ranked by the same $Q$-ranking $f=\langle r,h \rangle$ and $\scrG$ satisfies the following additional conditions.
\begin{enumerate}[label=\ref{def:Q-word}.\arabic*,ref=\ref{def:Q-word}.\arabic*] \item\label{en:Q-word-1} For every $q, q' \in Q$, if $r(q)>r(q')$, there exists a full path $\varrho$ from $\langle q, 0\rangle$ to $\langle q', |\scrG|\rangle$ such that $\varrho$ visits all of $B(1), \ldots, B(k)$. \item\label{en:Q-word-2} For every $q \in Q$, there exist \emph{exactly} $k$ full paths $\varrho_{1}, \ldots, \varrho_{k}$ from $\langle q, 0\rangle$ to $\langle q, |\scrG|\rangle$ such that for every $i \in I$, $\varrho_{i}$ does not visit $B(h(q)[j])$ for $j \le i$, but visits $B(h(q)[j])$ for $i < j$, and $\varrho_{i}$ does not visit $G(h(q)[j])$ for $j < i$, but visits $G(h(q)[i])$. \item\label{en:Q-word-3} Only $Q$-vertices have outgoing edges at the first level and incoming edges at the last level. \item\label{en:Q-word-4} For every $q, q' \in Q$, there exists no full path from $\langle q, 0\rangle$ to $\langle q', |\scrG|\rangle$ if $r(q) < r(q')$. \end{enumerate} \end{definition} Property~\eqref{en:Q-word-1} concerns only $R$-rankings. It says that for every two tracks with different $R$-ranks, a path exists that goes from the track with the higher rank to the track with the lower rank, and such a path discharges all obligations in $I$. So if those (finite) paths occur infinitely often as fragments of an infinite path $\varrho$, then $\varrho$ clearly satisfies the Streett condition $\langle G,B\rangle_{I}$. Property~\eqref{en:Q-word-2} concerns only $H$-rankings. It says that exactly $k$ full \emph{``parallel''} paths exist between the two ends of every track, and each owes exactly one distinct obligation in $I$. As shown in Theorem~\ref{thm:lower-bound}, Property~\eqref{en:Q-word-2} is the core of the whole construction and proof, because as $k$ increases, $H$-rankings contribute more and more to the overall complexity.
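To make the visit pattern of Property~\ref{en:Q-word-2} concrete, the following sketch (ours; the permutation standing in for $h(q)$ is hypothetical) tabulates, for each of the $k$ parallel paths, which $B$- and $G$-vertices are visited and which single obligation is owed:

```python
# Visit pattern of the k parallel paths prescribed by the second
# Q-word property, for a hypothetical H-rank h(q); i, j are 1-based.

def path_visits(h, i):
    k = len(h)
    B = {h[j - 1] for j in range(i + 1, k + 1)}   # visits B(h[j]) for j > i
    G = {h[i - 1]}                                # visits G(h[i]) only
    owed = G - B                                  # G seen but B not seen
    return B, G, owed

h_q = [2, 1, 3]                                   # hypothetical, k = 3
owed = [path_visits(h_q, i)[2] for i in (1, 2, 3)]
print(owed)  # [{2}, {1}, {3}]: each path owes one distinct obligation
```

Because $h(q)$ is a permutation, the owed obligations $h(q)[1], \ldots, h(q)[k]$ are pairwise distinct and together exhaust $I$.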
Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} are merely technical; they ensure that no other \emph{full paths} exist besides those prescribed by Properties~\eqref{en:Q-word-1} and~\eqref{en:Q-word-2}. Note that in general more than one $Q$-word could exist for a $Q$-ranking $f$. We simply pick an arbitrary one and call it the $Q$-word of $f$, denoted by $\scrG_{f}$. \begin{theorem}[$Q$-Word] \label{thm:existence} A $Q$-word exists for every $Q$-ranking. \end{theorem} \begin{example}[$Q$-Word] \label{ex:Q-word} Let us consider a full Streett automaton $\calS$ where $n=3$, $k=2$, \begin{align*} Q & = \{q_{0}, q_{1}, q_{2} \}, & T & =\{ t \}, & P_{\mathrm{B}} & = \{b_{1}, b_{2} \}, & P_{\mathrm{G}} &= \{g_{1}, g_{2}\}, \end{align*} and the following $Q$-ranking $f=\langle r,h \rangle$: \begin{align*} r(q_{0}) &= 2, & r(q_{1}) &= 1, & r(q_{2}) &= 3, & h(q_{0}) &= \langle 1,2 \rangle, & h(q_{1}) &= \langle 1,2 \rangle, & h(q_{2}) &= \langle 2,1 \rangle \, . \end{align*} Figure~\ref{fig:Q-word} shows a $Q$-word $\scrG_{f}$, which consists of two subgraphs $\scrG_{r}$ and $\scrG_{h}$, where $\scrG_{r}$ in turn consists of two parts: $\scrG^{(1)}_{r}$ (level $0$ to level $3$) and $\scrG^{(2)}_{r}$ (level $3$ to level $6$), and $\scrG_{h}$ in turn consists of three parts: $\scrG^{(0)}_{h}$ (level $6$ to level $12$), $\scrG^{(1)}_{h}$ (level $12$ to level $18$), and $\scrG^{(2)}_{h}$ (level $18$ to level $24$). $\scrG_{r}$ and $\scrG_{h}$ are designed to satisfy Properties~\eqref{en:Q-word-1} and~\eqref{en:Q-word-2}, respectively. The $R$-rank (numeric rank) of every level of $\scrG_{r}$ is $(2,1,3)$. In $\scrG^{(1)}_{r}$, a full path $\varrho_{r}$ starts from $\langle q_{2}, 0 \rangle$ whose $R$-rank is the highest. The path visits $\langle b_{1}, 1 \rangle$, $\langle b_{2}, 2 \rangle$ and then $\langle q_{0}, 3 \rangle$ whose $R$-rank is one less than that of $q_{2}$.
Similarly in $\scrG^{(2)}_{r}$, the path continues from $\langle q_{0}, 3 \rangle$, visits $\langle b_{1}, 4 \rangle$, $\langle b_{2}, 5 \rangle$ and ends at $\langle q_{1}, 6 \rangle$ whose $R$-rank is one less than that of $q_{0}$. The $H$-rank (index rank) of every level of $\scrG_{h}$ is $(\langle 1,2\rangle, \langle 1,2\rangle, \langle 2,1\rangle)$. Let us take a look at $\scrG^{(1)}_{h}$. A full path $\varrho_{h}$ (marked green except the last edge) starts at $\langle q_{1}, 12 \rangle$, visits $\langle b_{2}, 13 \rangle$ and $\langle g_{1}, 14 \rangle$ (because of $h(q_{1})[1]=1$), and enters $t$-track (the bypass track $\{t\} \times \bbN$) at $\langle t, 15 \rangle$, from where it stays on $t$-track till reaching $\langle t, 17 \rangle$. Another full path $\varrho'_{h}$ (marked red except the last edge) starts at $\langle q_{1}, 12 \rangle$ too, takes $q_{1}$-track to $\langle q_{1}, 15 \rangle$, and then visits $\langle g_{2}, 16 \rangle$ (because of $h(q_{1})[2]=2$), and enters $t$-track at $\langle t, 17 \rangle$. Both $\varrho_{h}$ and $\varrho'_{h}$ return to $q_{1}$-track at $\langle q_{1}, 18 \rangle$ using the edge $\langle \langle t,17\rangle, \langle q_{1},18\rangle\rangle$ (marked blue). By $\varrho_{0\to6}$, $\varrho_{6\to12}$ and $\varrho_{18\to24}$ (all marked blue) we denote the $q_{1}$-tracks in $\scrG_{r}$, in $\scrG^{(0)}_{h}$ and in $\scrG^{(2)}_{h}$, respectively. It is easy to verify that Property~\eqref{en:Q-word-1} with respect to $q_{2}$ and $q_{1}$ is satisfied by both $\varrho_{r} \cat \varrho_{6\to12} \cat \varrho_{h} \cat \varrho_{18\to24}$ and $\varrho_{r} \cat \varrho_{6\to12} \cat \varrho'_{h} \cat \varrho_{18\to24}$. It is also easy to verify that Property~\eqref{en:Q-word-2} with respect to $q_{1}$ is satisfied by $\varrho_{0\to6} \cat \varrho_{6\to12} \cat \varrho_{h} \cat \varrho_{18\to24}$ and $\varrho_{0\to6} \cat \varrho_{6\to12} \cat \varrho'_{h} \cat \varrho_{18\to24}$. \end{example} We are ready for the lower bound proof.
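If, as the preceding discussion of $R$- and $H$-rankings suggests, an $R$-ranking is a one-to-one map $Q \to [1..n]$ and an $H$-ranking assigns each state a permutation of $I$, then $|\calD^{Q}| = n!\,(k!)^{n}$. The following sketch (our own sanity check, with the formula stated under that assumption) evaluates this count for the example's parameters:

```python
# Sanity check of |D^Q| = n! * (k!)^n, assuming a Q-ranking pairs a
# one-to-one R-ranking Q -> [1..n] with a permutation of I per state.
from math import factorial

def num_q_rankings(n, k):
    return factorial(n) * factorial(k) ** n

print(num_q_rankings(3, 2))   # 48 = 3! * (2!)^3, matching n = 3, k = 2

# The (k!)^n factor overtakes n! quickly: already for n = 8, k = 4,
# (k!)^n = 24**8 far exceeds 8! = 40320.
assert factorial(4) ** 8 > factorial(8)
```

This is the quantity that reappears below as $|\calD^{Q}|=2^{\Omega(n_{0}\lg n_{0} + n_{0}k_{0}\lg k_{0})}$ in the proof of the lower bound.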
Let $J \subseteq I$. We use $\langle G,B\rangle_{J}$ to denote the Streett condition with respect to only indices in $J$. The corresponding Rabin condition $[G,B]_{J}$ is similarly defined. When $J$ is a singleton, say $J=\{j\}$, we simply write $\langle G(j),B(j)\rangle$ for $\langle G,B\rangle_{J}$ and $[G(j),B(j)]$ for $[G,B]_{J}$. Obviously, if an infinite run satisfies $\langle G,B\rangle_{J}$ (resp. $[G,B]_{J}$), then the run also satisfies $\langle G,B\rangle_{J'}$ (resp. $[G,B]_{J'}$) for $J' \subseteq J$ (resp. $J \subseteq J' \subseteq I$). \begin{lemma} \label{lem:Rabin-nature} For every $Q$-ranking $f$, $(\scrG_{f})^{\omega} \not \in \scrL(\calS)$. \end{lemma} \begin{proof} Let $f=\langle r,h \rangle$, let $\scrG = (\scrG_{f})^{\omega}$, and let $\varrho$ be an infinite path in $\scrG$. For simplicity, we assume $\varrho$ only lists states appearing on the boundaries of $\scrG_{f}$ fragments; for any $j \ge 0$, $\varrho(j)$ (resp. $\varrho(j+1)$) is a state in the first (resp. last) level of the $j$-th $\scrG_{f}$ fragment. Let $\varrho[j, j+1]$ denote the finite fragment from $\varrho(j)$ to $\varrho(j+1)$. Let $\varrho[j, \infty]$ denote the suffix of $\varrho$ beginning from $\varrho(j)$. By Property~\eqref{en:Q-word-3}, $\varrho(j) \in Q$ for $j \ge 0$. By Property~\eqref{en:Q-word-4}, $\varrho$ eventually stabilizes on $R$-ranks in the sense that there exists a $j_{0}$ such that for any $j \ge j_{0}$, $r(\varrho(j))=r(\varrho(j+1))$. Because every level of $\scrG$ has the same rank, $\varrho$ stabilizes on a (horizontal) track after $j_{0}$, i.e., there exists $i \in [n]$ such that $\varrho(j) = q_{i}$ for $j \ge j_{0}$. Property~\eqref{en:Q-word-2} says that there are \emph{exactly} $k$ \emph{full} paths $\varrho_{1}, \ldots, \varrho_{k}$ from $\langle q_{i},0\rangle$ to $\langle q_{i}, |\scrG_{f}|\rangle$ in $\scrG_{f}$.
Therefore, $\varrho[j_{0}, \infty]$ can be divided into the infinite sequence $\varrho[j_{0}, j_{0}+1], \varrho[j_{0}+1, j_{0}+2], \ldots$, each of which is one of $\varrho_{1}, \ldots, \varrho_{k}$. Let $k_{0} \in I$ be the smallest index such that $\varrho_{k_{0}}$ appears infinitely often in this sequence, i.e., for some $j_{1} \ge j_{0}$, none of $\varrho_{1}, \ldots, \varrho_{k_{0}-1}$ appears in $\varrho[j_{1}, \infty]$. By Property~\eqref{en:Q-word-2} again, $\varrho[j_{1}, \infty]$ visits none of $B(h(q_{i})[1]), \ldots, B(h(q_{i})[k_{0}])$, but visits $G(h(q_{i})[k_{0}])$ infinitely often (because $\varrho_{k_{0}}$ appears infinitely often). In particular, $\varrho$ satisfies $[G(h(q_{i})[k_{0}]),B(h(q_{i})[k_{0}])]$ and hence $[G,B]_{I}$; that is, $\varrho$ violates $\langle G,B\rangle_{I}$. Because $\varrho$ is chosen arbitrarily, we have $\scrG \not \in \scrL(\calS)$. \end{proof} \begin{lemma} \label{lem:Streett-potential} For every two different $Q$-rankings $f$ and $f'$, $((\scrG_{f})^{+} \cat (\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$. \end{lemma} \begin{proof} Let $\scrG \in ((\scrG_{f})^{+} \cat (\scrG_{f'})^{+})^{\omega}$ be an $\omega$-word where both $\scrG_{f}$ and $\scrG_{f'}$ occur infinitely often in $\scrG$. Let $f = \langle r,h \rangle$ and $f'=\langle r',h' \rangle$. We have two cases: either $r \not = r'$ or $h \not = h'$. First suppose $r \not = r'$. Since both $r$ and $r'$ are one-to-one functions from $Q$ to $[1..n]$, there must be $i,j \in [n]$ such that $r(q_{i})>r(q_{j})$ and $r'(q_{j})>r'(q_{i})$. By Property~\eqref{en:Q-word-1}, $\scrG_{f}$ contains a full path $\varrho_{i \to j}$ from $\langle q_{i}, 0 \rangle$ to $\langle q_{j}, |\scrG_{f}|\rangle$ that visits all of $B(1), \ldots, B(k)$. By the same property, $\scrG_{f'}$ contains a path $\varrho'_{j \to i}$ from $\langle q_{j}, 0 \rangle$ to $\langle q_{i}, |\scrG_{f'}|\rangle$ that also visits all of $B(1), \ldots, B(k)$.
Then $\varrho_{i \to j} \cat \varrho'_{j \to i}$ is a path in $\scrG_{f} \cat \scrG_{f'}$ that visits all of $B(1), \ldots, B(k)$. Also by Property~\eqref{en:Q-word-2}, $\scrG_{f}$ (resp. $\scrG_{f'}$) contains a path $\varrho_{i \to i}$ (resp. $\varrho'_{i \to i}$) from $\langle q_{i}, 0 \rangle$ to $\langle q_{i}, |\scrG_{f}|\rangle$ (resp. from $\langle q_{i}, 0 \rangle$ to $\langle q_{i}, |\scrG_{f'}|\rangle$). Now we define an infinite path $\hat{\varrho}$ in $\scrG$ as follows. We pick the finite path $\varrho_{i \to i}$ in every $\scrG_{f}$ fragment and $\varrho'_{i \to i}$ in every $\scrG_{f'}$ fragment, except that in the case where a $\scrG_{f}$ fragment is followed immediately by a $\scrG_{f'}$ fragment, we pick $\varrho_{i \to j}$ in the preceding $\scrG_{f}$ and $\varrho'_{j \to i}$ in the following $\scrG_{f'}$. It is easily seen that $\hat{\varrho}$, in the form \begin{align*} ((\varrho_{i \to i})^{*} \cat (\varrho_{i \to j} \cat \varrho'_{j \to i})^{+} \cat (\varrho'_{i \to i})^{*})^{\omega} \, , \end{align*} visits all of $B(1), \ldots, B(k)$ infinitely often, and hence it satisfies the Streett condition $\langle G,B\rangle_{I}$. Now suppose $h \not = h'$. Then there exist $i \in [n]$, $j \in I$ such that $h(q_{i})[j] \not = h'(q_{i})[j]$ and $h(q_{i})[j^{*}]=h'(q_{i})[j^{*}]$ for $j^{*} \in [1..j-1]$. Since both $h(q_{i})$ and $h'(q_{i})$ are permutations of $I$, we have $j<k$ and \begin{align} \label{eq:set-equal} \{\ h(q_{i})[j^{*}] \ \mid \ j^{*} \in [j..k] \ \} = \{\ h'(q_{i})[j^{*}] \ \mid \ j^{*} \in [j..k] \ \} \, . \end{align} By Property~\eqref{en:Q-word-2}, in $\scrG_{f}$ there exists a path $\varrho_{i \to i}$ from $\langle q_{i}, 0 \rangle$ to $\langle q_{i}, |\scrG_{f}|\rangle$ that visits none of $G(h(q_{i})[j^{*}])$ for $j^{*} \in [1..j-1]$, but visits all of $B(h(q_{i})[j^{*}])$ for $j^{*} \in [j+1..k]$.
Similarly, in $\scrG_{f'}$ there exists a path $\varrho'_{i \to i}$ from $\langle q_{i}, 0 \rangle$ to $\langle q_{i}, |\scrG_{f'}|\rangle$ that visits none of $G(h'(q_{i})[j^{*}])$ for $j^{*} \in [1..j-1]$, but visits all of $B(h'(q_{i})[j^{*}])$ for $j^{*} \in [j+1..k]$. Because $h(q_{i})$ and $h'(q_{i})$ are different permutations of $I$, $h'(q_{i})[j]=h(q_{i})[j_{0}]$ for some $j_{0} \in [j+1..k]$ and $h(q_{i})[j]=h'(q_{i})[j_{1}]$ for some $j_{1} \in [j+1..k]$. It follows that both sides of~\eqref{eq:set-equal} are equal to \begin{align*} &\ \ \{\ h(q_{i})[j^{*}] \ \mid \ j^{*} \in [j+1..k] \ \} \cup \{\ h'(q_{i})[j^{*}] \ \mid \ j^{*} \in [j+1..k] \ \} \, . \end{align*} Therefore $\varrho_{i \to i} \cat \varrho'_{i \to i}$ (in $\scrG_{f} \cat \scrG_{f'}$) visits all of $B(h(q_{i})[j^{*}])$ for $j^{*} \in [j..k]$. Now let $\hat{\varrho}$ be defined as follows: $\hat{\varrho}$ takes $\varrho_{i \to i}$ in every $\scrG_{f}$ fragment and $\varrho'_{i \to i}$ in every $\scrG_{f'}$ fragment. That is, $\hat{\varrho}$ takes the following form \begin{align*} ((\varrho_{i \to i})^{+} \cat (\varrho'_{i \to i})^{+})^{\omega} \, . \end{align*} Recall that $h(q_{i})[j^{*}]=h'(q_{i})[j^{*}]$ for $j^{*} \in [1..j-1]$. It follows that $\hat{\varrho}$ does not visit any of $G(h(q_{i})[j^{*}])$ for $j^{*} \in [1..j-1]$ because neither $\varrho_{i \to i}$ nor $\varrho'_{i \to i}$ does. Also since both $\scrG_{f}$ and $\scrG_{f'}$ occur infinitely often in $\scrG$, $\hat{\varrho}$ contains infinitely many $\varrho_{i \to i} \cat \varrho'_{i \to i}$, which implies that $\hat{\varrho}$ visits all of $B(h(q_{i})[j^{*}])$ for $j^{*} \in [j..k]$ infinitely often. Since $h(q_{i})$ is a permutation of $I$, $\hat{\varrho}$ satisfies $\langle G,B\rangle_{I}$. In either case (whether $r \not = r'$ or $h \not = h'$), $\scrG$ contains a path that satisfies $\langle G,B\rangle_{I}$, which means $\scrG \in \scrL(\calS)$. 
Because $\scrG$ is arbitrarily chosen, we have $((\scrG_{f})^{+} \cat (\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$. \end{proof} The following lemma is the core of Michel's scheme~\cite{Mic88,Lod99}, recast in the setting of full automata with rankings~\cite{Yan06,CZL09}. Recall that $\calD^{Q}$ denotes the set of all $Q$-rankings and $|\calD^{Q}|$ denotes the cardinality of $\calD^{Q}$. \begin{lemma} \label{lem:cost-of-separation} A union-closed automaton that complements $\calS$ must have at least $|\calD^{Q}|$ states. \end{lemma} \begin{proof} Let $\calC$ be a union-closed automaton that complements $\calS$. By Lemma~\ref{lem:Rabin-nature} and the fact that $\calC$ complements $\calS$, for every $Q$-ranking $f$, $(\scrG_{f})^{\omega} \in \scrL(\calC)$. Let $f$, $f'$ be two different $Q$-rankings and $\scrG_{f}$ and $\scrG_{f'}$ the corresponding $Q$-words. Let $\varrho$ and $\varrho'$ be the corresponding accepting runs of $(\scrG_{f})^{\omega}$ and $(\scrG_{f'})^{\omega}$, respectively. Also let $\varrho_{0}$ and $\varrho_{0}'$, respectively, be the accepting runs of $(\scrG_{f})^{\omega}$ and $(\scrG_{f'})^{\omega}$ when we treat $\scrG_{f}$ and $\scrG_{f'}$ as \emph{atomic} letters, that is, $\varrho_{0}$ (resp. $\varrho'_{0}$) only records states visited at the boundary of $\scrG_{f}$ (resp. $\scrG_{f'}$) and is a subsequence of $\varrho$ (resp. $\varrho'$). Obviously, $\Inf(\varrho_{0}) \subseteq \Inf(\varrho)$, $\Inf(\varrho'_{0}) \subseteq \Inf(\varrho')$, $\Inf(\varrho_{0}) \not = \emptyset$ and $\Inf(\varrho'_{0}) \not = \emptyset$. If $\Inf(\varrho_{0}) \cap \Inf(\varrho'_{0}) = \emptyset$ for every pair of different $f$ and $f'$, then clearly $\calC$ has at least $|\calD^{Q}|$ states because the state set of $\calC$ contains $|\calD^{Q}|$ pairwise disjoint nonempty subsets. Therefore we can assume that $\Inf(\varrho_{0}) \cap \Inf(\varrho'_{0}) \not = \emptyset$ for some pair of $f$ and $f'$, which we fix from now on. Let $q$ be a state in $\Inf(\varrho_{0}) \cap \Inf(\varrho'_{0})$.
Because $q$ occurs infinitely often in $\varrho$, for some $m > 0$, there exists a path in $(\scrG_{f})^{m}$ that goes from $q$ to $q$ and visits exactly all states in $\Inf(\varrho)$ (or equivalently speaking, $\calC$, upon reading the input word $(\scrG_{f})^{m}$, runs from state $q$ to $q$, visiting exactly all states in $\Inf(\varrho)$ during the run). By $q \xrightarrow[! \Inf(\varrho)]{(\scrG_{f})^{m}} q$ we denote the existence of such a path. Similarly, we have $q \xrightarrow[!\Inf(\varrho')]{(\scrG_{f'})^{m'}} q$ for some $m' > 0$. Also we have $q_{0} \xrightarrow{(\scrG_{f})^{m_{0}}} q$ where $q_{0}$ is an initial state of $\calC$. Now consider the following infinite run $\varrho^{*}$ in the form \begin{multline*} q_{0} \xrightarrow{(\scrG_{f})^{m_{0}}} q \xrightarrow[!\Inf(\varrho)]{(\scrG_{f})^{m}} q \xrightarrow[!\Inf(\varrho')]{(\scrG_{f'})^{m'}} q \xrightarrow[!\Inf(\varrho)]{(\scrG_{f})^{m}} q \xrightarrow[!\Inf(\varrho')]{(\scrG_{f'})^{m'}} q \cdots \end{multline*} which is an accepting run of $\calC$ for $(\scrG_{f})^{m_{0}} \cat ((\scrG_{f})^{m} \cat (\scrG_{f'})^{m'})^{\omega}$ because $\Inf(\varrho^{*}) = \Inf(\varrho) \cup \Inf(\varrho')$ and $\calC$ is union-closed. However, by Lemma~\ref{lem:Streett-potential}, $(\scrG_{f})^{m_{0}} \cat ((\scrG_{f})^{m} \cat (\scrG_{f'})^{m'})^{\omega} \in ((\scrG_{f})^{+} \cat (\scrG_{f'})^{+})^{\omega} \subseteq \scrL(\calS)$, a contradiction. \end{proof} \begin{theorem} \label{thm:lower-bound} Streett complementation requires $2^{\Omega(n\lg n + k n\lg k)}$ states for $k=O(n)$ and $2^{\Omega(n^{2}\lg n)}$ states for $k=\omega(n)$, where $n$ and $k$ are the state size and index size of a complementation instance. \end{theorem} \begin{proof} Here we use $n_{0}$ and $k_{0}$, respectively, for the effective state size and index size of our construction $\calS$. We have $n=2k_{0}+n_{0}+1$.
By Lemma~\ref{lem:cost-of-separation}, the complementation of $\calS$ requires $|\calD^{Q}|=2^{\Omega(n_{0}\lg n_{0} + n_{0}k_{0}\lg k_{0})}$ states. If $k_{0} \le k$, we can construct a full Streett automaton $\calS'$ with state size $n$ and index size $k$ as follows. $\calS'$ is almost identical to $\calS$ except that its acceptance condition is defined as $\calF'=\langle G',B'\rangle_{I'}$ (for $I'=[1..k]$) such that for $i \in [1..k_{0}]$, $G'(i) = G(i)$ and $B'(i)=B(i)$, and for $i \in [k_{0}+1..k]$, $G'(i)=B'(i)=\emptyset$. It is easily seen that $\calS'$ is equivalent to $\calS$, and hence the complementation lower bound for $\calS$ also applies to $\calS'$. Now when $k=O(n)$, we can always find $n_{0}$ and $k_{0}$ such that $k_{0} \le k$, yet $n_{0}=\Omega(n)$ and $k_{0}=\Omega(k)$, and hence we have the lower bound $2^{\Omega(n\lg n + k n\lg k)}$. When $k=\omega(n)$, we set $k_{0}=n_{0}$ so that $k_{0} \le k$, $n_{0}=\Omega(n)$ and $k_{0}=\Omega(n)$, and hence we have the lower bound $2^{\Omega(n^{2}\lg n)}$. \end{proof} \section{Preliminaries} \label{sec:preliminaries} \paragraph{Basic Notations.} Let $\bbN$ be the set of natural numbers. We write $[i..j]$ for $\{ k \in \bbN \, \mid \, i \le k \le j\}$, $[i..j)$ for $[i..j-1]$, $[n]$ for $[0..n)$. For an infinite sequence $\varrho$, we use $\varrho(i)$ to denote the $i$-th component for $i \in \bbN$, $\varrho[i..j]$ (resp. $\varrho[i..j)$) to denote the subsequence of $\varrho$ from position $i$ to position $j$ (resp. $j-1$). Similar notations apply to finite sequences, and we use $|\varrho|$ to denote the length of $\varrho$. We assume readers are familiar with notations in language theory, such as $\alpha \cat \alpha'$, $\alpha^{*}$, $\alpha^{+}$ and $\alpha^{\omega}$ where $\alpha$ and $\alpha'$ are sequences and $\alpha$ is finite, and similar ones such as $S \cat S'$, $S^{*}$, $S^{+}$ and $S^{\omega}$ where $S$ is a set of finite sequences and $S'$ is a set of sequences.
\paragraph{Automata and Runs.} A finite (nondeterministic) automaton on infinite words ($\omega$-automaton) is a $5$-tuple $\calA=\langle\Sigma, S, Q, \Delta, \calF\rangle$, where $\Sigma$ is an alphabet, $S$ is a finite set of states, $Q \subseteq S$ is a set of initial states, $\Delta \subseteq S \times \Sigma \times S$ is a transition relation, and $\calF$ is an acceptance condition. An infinite word ($\omega$-word) over $\Sigma$ is an infinite sequence of letters in $\Sigma$. A \emph{run} $\varrho$ of $\calA$ over an $\omega$-word $w$ is an infinite sequence of states in $S$ such that $\varrho(0)\in Q$ and $\langle\varrho(i),w(i),\varrho(i\!+\!1)\rangle \in \Delta$ for $i \in \bbN$. Finite runs are defined similarly. Let $\Inf(\varrho)$ denote the set of states that occur infinitely many times in $\varrho$. An automaton accepts $w$ if there exists a run $\varrho$ over $w$ that satisfies $\calF$, which is usually defined as a predicate on $\Inf(\varrho)$. We use $\scrL(\calA)$ to denote the set of $\omega$-words accepted by $\calA$ and $\overline{\scrL(\calA)}$ the complement of $\scrL(\calA)$. \paragraph{Acceptance Conditions and Automata Types.} $\omega$-automata are classified according to their acceptance conditions. Below we list three types of $\omega$-automata relevant to this paper. Let $F$ be a subset of $S$ and $G, B$ two functions $I \to 2^{S}$, where $I=[1..k]$ is called the \emph{index set}. \begin{itemize} \item \emph{B\"{u}chi}: $\langle F \rangle$: $\Inf(\varrho) \cap F\neq\emptyset$. \item \emph{Streett}: $\langle G,B\rangle_{I}$: $\forall i \in I$, $\Inf(\varrho) \cap G(i)\neq\emptyset \to \Inf(\varrho)\cap B(i)\neq\emptyset$. \item \emph{Rabin}: $[G,B]_{I}$: $\exists i \in I$, $\Inf(\varrho)\cap G(i)\neq\emptyset \wedge \Inf(\varrho)\cap B(i)=\emptyset$. \end{itemize} Note that Streett and Rabin are dual to each other: a run satisfies $\langle G,B\rangle_{I}$ if and only if it does not satisfy $[G,B]_{I}$.
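The three conditions are straightforward to state as predicates on $\Inf(\varrho)$. The following sketch (with hypothetical index pairs of our own choosing) checks the Streett/Rabin duality directly:

```python
# The three acceptance conditions as predicates on Inf(rho); the pairs
# G, B below are hypothetical, over states {0,...,4} with I = {1, 2}.

def buchi(inf_set, F):
    return bool(inf_set & F)

def streett(inf_set, G, B):  # G, B: dict mapping index -> set of states
    return all(not (inf_set & G[i]) or (inf_set & B[i]) for i in G)

def rabin(inf_set, G, B):
    return any((inf_set & G[i]) and not (inf_set & B[i]) for i in G)

G = {1: {0}, 2: {1}}
B = {1: {2}, 2: {3}}
for inf_set in ({0, 2}, {0, 3}, {1, 3}, {4}):
    # Duality: a run satisfies the Streett condition iff it violates
    # the Rabin condition with the same pairs.
    assert streett(inf_set, G, B) != rabin(inf_set, G, B)
print("duality verified")
```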
An automaton $\calA$ is called \emph{union-closed} if whenever two runs $\varrho$ and $\varrho'$ are accepting, so is any run $\varrho''$ with $\Inf(\varrho'') = \Inf(\varrho) \cup \Inf(\varrho')$. It is easy to verify that both B\"{u}chi and Streett automata are union-closed while Rabin automata are not. Let $J \subseteq I$. We use $\langle G,B\rangle_{J}$ to denote the Streett condition with respect to only indices in $J$. When $J$ is a singleton, say $J=\{j\}$, we simply write $\langle G(j),B(j)\rangle$ for $\langle G,B\rangle_{J}$. We can assume that $B$ is injective and the index size $k$ is bounded by $2^{n}$, because if $B(i)=B(i')$ for two different $i,i' \in I$, then we can shrink the index set $I$ by replacing $\langle G,B\rangle_{\{i,i'\}}$ by $\langle G(i)\cup G(i'), B(i) \rangle$. The same convention and assumption are used for the Rabin condition. \paragraph{$\Delta$-Graphs.} A \emph{$\Delta$-graph} (run graph) of an $\omega$-word $w$ under $\calA$ is a directed graph $\scrG_{w}=(V,E)$ where $V=S \times \bbN$ and $E=\{\langle\langle s,l\rangle,\langle s',l+1\rangle\rangle \in V \times V \, \mid \, s, s' \in S,\ l \in \bbN, \langle s,w(l),s'\rangle\in\Delta \, \}$. By the \emph{$l$-th level}, we mean the vertex set $S \times \{l\}$. Let $S=\{s_{0}, \ldots, s_{n-1}\}$. By \emph{$s_{i}$-track} we mean the vertex set $\{s_{i}\} \times \bbN$. For a subset $X$ of $S$, we call a vertex $\langle s, l\rangle$ an $X$-vertex if $s \in X$. We simply use $s$ for $\langle s, l \rangle$ when the index is irrelevant. A \emph{$\Delta$-graph} $\scrG_{w}$ of a finite word $w$ is defined similarly. By $|\scrG_{w}|$ we denote the length of $\scrG_{w}$, which is the same as $|w|$. $\scrG_{\sigma}$ for $\sigma \in \Sigma$ is called a \emph{unit} $\Delta$-graph. A path in $\scrG_{w}$ is called a \emph{full path} if the path goes from level $0$ to level $|\scrG_{w}|$.
By $\scrG_{w} \cat \scrG_{w'}$, we mean the concatenation of $\scrG_{w}$ and $\scrG_{w'}$, which is the graph obtained by merging the last level of $\scrG_{w}$ with the first level of $\scrG_{w'}$. Note that $\scrG_{w} \cat \scrG_{w'} = \scrG_{w \cat w'}$. Let $w$ be a finite word. For $l, l' \in \bbN$, $s, s' \in S$ we write $\langle s, l\rangle \xrightarrow{w} \langle s', l'\rangle$ to mean that there exists a run $\varrho$ of $\calA$ such that $\varrho[l..l']$, the subsequence $\varrho(l)\varrho(l+1) \cdots \varrho(l')$ of $\varrho$, is a finite run of $\calA$ from $s$ to $s'$ over $w$. We simply write $s \xrightarrow{w} s'$ when omitting level indices causes no confusion. \paragraph{Full Automata.} A full automaton $\langle\Sigma, S, Q,\Delta,\calF\rangle$ is a finite automaton with the following conditions: $\Sigma = 2^{S \times S}$, $\Delta \subseteq S \times 2^{S \times S} \times S$, and for all $s, s' \in S$, $\sigma \in \Sigma$, $\langle s, \sigma, s' \rangle \in \Delta$ if and only if $\langle s, s' \rangle \in \sigma$~\cite{SS78,Yan06,CZL09}. For full automata, the alphabet $\Sigma$ and the transition relation $\Delta$ are completely determined by $S$. As stated in the introduction, the essence of the full automaton technique is to use run graphs as freely as possible, without worrying about which word generates which run graph. Let the functional version of $\Delta$ be $\delta: \Sigma \to 2^{S \times S}$, where for every $s,s' \in S$ and every $\sigma \in \Sigma$, $\langle s, s' \rangle \in \delta(\sigma)$ if and only if $\langle s, \sigma, s' \rangle \in \Delta$. The function $\delta$ maps a letter $\sigma$ to a unit $\Delta$-graph $\scrG_{\sigma}$, which represents the complete behavior of $\calA$ over $\sigma$ (technically speaking, $\scrG_{\sigma}$, with index dropped, is the graph of $\delta(\sigma)$). In the setting of full automata, $\delta$ is simply the identity function on $2^{S \times S}$. Words and run graphs are essentially the same thing.
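In the full-automaton setting this identification can be exercised directly, since a letter is literally a set of state pairs. A minimal sketch (the 3-state alphabet and all names are ours) illustrates the identity $\scrG_{w} \cat \scrG_{w'} = \scrG_{w \cat w'}$ via level-by-level reachability:

```python
# Letters of a full automaton are sets of state pairs; a word is a list
# of letters, and its Delta-graph is traversed level by level.

def reach(word, frontier):
    """States reachable from `frontier` by full paths through `word`."""
    for letter in word:                  # letter: set of (s, s') pairs
        frontier = {s2 for (s1, s2) in letter if s1 in frontier}
    return frontier

id3 = {(s, s) for s in range(3)}         # horizontal (track) edges
sigma = id3 | {(0, 1)}                   # also lets track 0 move to 1
w, w_prime = [sigma], [sigma, sigma]

# Concatenating graphs = concatenating words: reaching through w . w'
# equals reaching through w' from whatever w reaches.
assert reach(w + w_prime, {0}) == reach(w_prime, reach(w, {0})) == {0, 1}
print(reach(w + w_prime, {0}))  # {0, 1}
```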
From now on we use the two terms interchangeably. For example, for a word $w$, $s \xrightarrow{w} s'$ is equivalent to saying that a full path in $\scrG_{w}$ goes from $s$ to $s'$. \section{Proofs} \label{sec:proofs} In this section we prove Theorem~\ref{thm:existence}. Recall that we need a construction to simultaneously satisfy all properties in Definition~\ref{def:Q-word}, which are parameterized with pairs of states (Property~\eqref{en:Q-word-1}) or single states (Property~\eqref{en:Q-word-2}). The idea is to concatenate a sequence of finite $\Delta$-graphs, each of which satisfies the properties with respect to a specific pair of states or a specific state. With the help of the bypass track ($t$-track), properties associated with each individual subgraph are all preserved in the final concatenation, giving us a desired $Q$-word. Let $f=\langle r,h \rangle$. $\scrG_{f}$ divides into two sequential subgraphs $\scrG_{r}$ and $\scrG_{h}$, which satisfy Properties~\eqref{en:Q-word-1} and~\eqref{en:Q-word-2}, respectively. Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} are obvious once the final construction is shown. As stated earlier, Property~\eqref{en:Q-word-1} and Property~\eqref{en:Q-word-2} are orthogonal; Property~\eqref{en:Q-word-1} relies only on $R$-rankings and Property~\eqref{en:Q-word-2} relies only on $H$-rankings. We call a finite $\Delta$-graph every level of which is ranked by the same $R$-ranking an \emph{$R$-word} if it satisfies Properties~\eqref{en:Q-word-1},~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4}. Similarly, a finite $\Delta$-graph every level of which is ranked by the same $H$-ranking is called an \emph{$H$-word} if it satisfies Properties~\eqref{en:Q-word-2},~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4}. Like $Q$-words, $H$-words (resp. $R$-words) are not uniquely determined by $H$-rankings (resp. $R$-rankings). Nevertheless, all $H$-words (resp. $R$-words) corresponding to a specific $h$ (resp.
$r$) serve the construction purpose equally well, and hence we simply name an arbitrarily chosen one by $\scrG_{h}$ (resp. $\scrG_{r}$). Theorem~\ref{thm:existence} builds on Lemmas~\ref{lem:R-word} and~\ref{lem:H-word}. \begin{lemma}[$R$-Word] \label{lem:R-word} An $R$-word exists for every $R$-ranking. \end{lemma} \begin{proof} Let $r$ be an $R$-ranking. $\scrG_{r}$ is constructed as follows. We order $Q$ as $q_{m_{1}}, \ldots, q_{m_{n}}$ such that $r(q_{m_{1}}) > \cdots > r(q_{m_{n}})$. $\scrG_{r}$ has $n-1$ parts $\scrG^{(1)}_{r}, \ldots, \scrG^{(n-1)}_{r}$. In $\scrG^{(i)}_{r}$ ($i \in [1..n-1]$), a path leaves $q_{m_{i}}$ whose $R$-rank is the $i$-th largest, visits all $B(j)$-vertices ($j \in I$) and ends at $q_{m_{i+1}}$ whose $R$-rank is the $(i+1)$-th largest. Formally we define the following letters \begin{align*} Id(Q) &= \{ \, \langle q_{i}, q_{i} \rangle \ \mid \ i \in [n] \, \}, && \displaybreak[0]\\ Q(i)ToB(1) &= Id(Q) \cup \{\, \langle q_{i}, b_{1} \rangle \, \}, && (i \in [n]) \displaybreak[0]\\ B(i)ToB(i+1) &= Id(Q) \cup \{\, \langle b_{i}, b_{i+1} \rangle\, \}, && (i \in [1..k-1]) \displaybreak[0]\\ B(k)ToQ(i) &= Id(Q) \cup \{\, \langle b_{k}, q_{i} \rangle\, \}, && (i \in [n]) \displaybreak[0] \end{align*} and then define $\scrG_{r}$ as \begin{multline*} Q(m_{1})ToB(1) \cat B(1)ToB(2) \cat \cdots \cat B(k)ToQ(m_{2}) \\ \cat \cdots \cat Q(m_{n-1})ToB(1) \cat B(1)ToB(2) \cat \cdots \cat B(k)ToQ(m_{n}) \, . \end{multline*} We verify that $\scrG_{r}$ satisfies Property~\eqref{en:Q-word-1}. Let $q, q' \in Q$ be such that $r(q) > r(q')$. Let $i, i' \in [1..n]$ be such that $i < i'$, $q=q_{m_{i}}$ and $q'=q_{m_{i'}}$. Recall that by a \emph{full path} in $\scrG$ we mean a path going from level $0$ to level $|\scrG|$. We define a full path $\varrho_{i, i'}$ in $\scrG_{r}$ as follows.
The path $\varrho_{i, i'}$ takes $q_{m_{i}}$-track until it reaches the left boundary of the letter $Q(m_{i})ToB(1)$, from where it leaves $q_{m_{i}}$-track to visit $b_{1}, \ldots, b_{k}$ (in this order) and then $q_{m_{i+1}}$. Continuing from $q_{m_{i+1}}$, $\varrho_{i, i'}$ follows the same pattern till it reaches $q_{m_{i+2}}$. Repeating this pattern $i'-i$ times, $\varrho_{i, i'}$ reaches $q_{m_{i'}}$ from where it takes $q_{m_{i'}}$-track till the end of $\scrG_{r}$. In summary, $\varrho_{i, i'}$ takes the form \begin{multline*} q=q_{m_{i}} \to \cdots \to q_{m_{i}} \displaybreak[0] \to b_{1} \to \cdots \to b_{k} \to q_{m_{i+1}} \displaybreak[0] \to b_{1} \to \cdots \to b_{k} \to q_{m_{i+2}} \displaybreak[0] \\ \to \ \cdots \cdots \cdots \ \to b_{1} \to \cdots \to b_{k} \to q_{m_{i'}} \to \cdots \to q_{m_{i'}}=q' \, . \end{multline*} As is easily seen from the construction, Property~\eqref{en:Q-word-1} is satisfied by the corresponding $\varrho_{i, i'}$ for any pair $q$ and $q'$ with $r(q) > r(q')$. Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} are immediate from the construction. \end{proof} \begin{example}[$R$-Word] \label{ex:R-word} Let us revisit Example~\ref{ex:Q-word}. $Q$ is ordered as $q_{2}, q_{0}, q_{1}$ for $r(q_{2}) > r(q_{0}) > r(q_{1})$. So $m_{1}=2$, $m_{2}=0$ and $m_{3}=1$. In Figure~\ref{fig:Q-word}, the $R$-word $\scrG_{r}$ consists of two parts: $\scrG^{(1)}_{r}$ (level $0$ to level $3$) and $\scrG^{(2)}_{r}$ (level $3$ to level $6$), defined as follows. \begin{align*} \scrG^{(1)}_{r} &: Q(2)ToB(1) \cat B(1)ToB(2) \cat B(2)ToQ(0), & \scrG^{(2)}_{r} &: Q(0)ToB(1) \cat B(1)ToB(2) \cat B(2)ToQ(1). \end{align*} For Property~\eqref{en:Q-word-1} with respect to $q_{2}$ and $q_{1}$, we can obtain the desired $\varrho_{2, 1}$ as follows. In $\scrG^{(1)}_{r}$, $\varrho_{2, 1}$ starts from $\langle q_{2}, 0 \rangle$, visits $\langle b_{1}, 1 \rangle$, $\langle b_{2}, 2 \rangle$ and then $\langle q_{0}, 3 \rangle$.
In $\scrG^{(2)}_{r}$, $\varrho_{2, 1}$ continues from $\langle q_{0}, 3 \rangle$, visits $\langle b_{1}, 4 \rangle$, $\langle b_{2}, 5 \rangle$ and lands at $\langle q_{1}, 6 \rangle$. For Property~\eqref{en:Q-word-1} with respect to $q_{0}$ and $q_{1}$, we can obtain the desired $\varrho_{0, 1}$ as follows. The path $\varrho_{0, 1}$ starts from $\langle q_{0}, 0 \rangle$ and passes through $\scrG^{(1)}_{r}$ via $q_{0}$-track until it reaches $\langle q_{0}, 3 \rangle$, from where, in $\scrG^{(2)}_{r}$, it visits $\langle b_{1}, 4 \rangle$, $\langle b_{2}, 5 \rangle$ and lands at $\langle q_{1}, 6 \rangle$. \end{example} \begin{lemma}[$H$-Word] \label{lem:H-word} An $H$-word exists for every $H$-ranking. \end{lemma} \begin{proof} Let $h$ be an $H$-ranking. $\scrG_{h}$ is constructed as follows. $\scrG_{h}$ comprises $n$ sequential parts $\scrG^{(0)}_{h}, \ldots, \scrG^{(n-1)}_{h}$, and for each $i \in [n]$, $\scrG^{(i)}_{h}$ in turn comprises $k$ sequential parts $\scrG^{(i,1)}_{h}, \ldots, \scrG^{(i,k)}_{h}$. To fulfill the requirement with respect to a pair $q_{i} \in Q$ ($i \in [n]$) and $j \in I$ in Property~\eqref{en:Q-word-2}, we select a full path $\varrho_{i,j}$ in $\scrG_{h}$ as follows. The path starts from $\langle q_{i}, 0 \rangle$ and ends at $\langle q_{i}, |\scrG_{h}|\rangle$. The path $\varrho_{i,j}$ simply passes through, via $q_{i}$-track, all $\scrG^{(i')}_{h}$ for $i' \not = i$. In $\scrG^{(i)}_{h}$, $\varrho_{i,j}$ also passes through $\scrG^{(i,1)}_{h}, \ldots, \scrG^{(i,j-1)}_{h}$ via $q_{i}$-track until it reaches the beginning of $\scrG^{(i,j)}_{h}$, from where it visits $B(h(q_{i})[j+1]), \ldots, B(h(q_{i})[k]), G(h(q_{i})[j])$ (in this order), and then enters $t$-track. The path continues and stays on $t$-track till arriving at the second-to-last level of $\scrG^{(i)}_{h}$, and then ends at $\langle q_{i}, |\scrG^{(i)}_{h}|\rangle$. Formally we define the following letters.
\begin{align*} Id(Q) &= \{\, \langle q, q \rangle \, \mid \, q \in Q \}, && \displaybreak[0] \\ Id(T) &= \{\, \langle t, t \rangle \}, && \displaybreak[0] \\ Q(i)ToB(j) &= Id(Q) \cup Id(T) \cup \{\, \langle q_{i},b_{j}\rangle\, \}, && (i \in [n], j \in I) \displaybreak[0] \\ B(i)ToB(j) &= Id(Q) \cup Id(T) \cup \{\, \langle b_{i},b_{j}\rangle\, \}, && (i,j \in I) \displaybreak[0] \\ B(i)ToG(j) &= Id(Q) \cup Id(T) \cup \{\, \langle b_{i},g_{j}\rangle\, \}, && (i,j \in I) \displaybreak[0] \\ Q(i)ToG(j) &= Id(Q) \cup Id(T) \cup \{\, \langle q_{i},g_{j}\rangle\, \}, && (i \in [n], j \in I) \displaybreak[0] \\ G(i)ToT &= Id(Q) \cup Id(T) \cup \{\, \langle g_{i}, t \rangle\, \}, && (i \in I) \displaybreak[0] \\ Q(i)To^{-}G(j) &= Id(Q) \cup Id(T) \cup \{\, \langle q_{i},g_{j}\rangle\, \} \setminus \{\, \langle q_{i},q_{i}\rangle\, \}, && (i \in [n], j \in I) \displaybreak[0] \\ G(i)To^{-}T &= Id(Q) \cup Id(T) \cup \{\, \langle g_{i}, t \rangle\, \} \setminus \{\, \langle q_{i},q_{i}\rangle\, \}, && (i \in I) \displaybreak[0] \\ TTo^{-}Q(i) &= Id(Q) \cup \{\, \langle t, q_{i} \rangle\, \} \setminus \{\, \langle q_{i},q_{i}\rangle\, \}. && (i \in [n]) \end{align*} Note that letters of the forms $Q(i)To^{-}G(j)$ or $G(i)To^{-}T$ do not contain the horizontal edge $\langle q_{i},q_{i}\rangle$. These letters are used in $\scrG^{(i,k)}_{h}$ so that a full path in $\scrG^{(i)}_{h}$ from $\langle q_{i},0\rangle$ to $\langle q_{i}, |\scrG^{(i)}_{h}|\rangle$ has to leave $q_{i}$-track first and end up at $t$-track. Letters $TTo^{-}Q(i)$ contain neither the bypass edge $\langle t,t\rangle$ nor the horizontal edge $\langle q_{i},q_{i}\rangle$. These letters are also used in $\scrG^{(i,k)}_{h}$ so that all full paths in $\scrG^{(i)}_{h}$ using $t$-track end at $\langle q_{i}, |\scrG^{(i)}_{h}|\rangle$.
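As an informal aside, the letter definitions above can be modeled concretely as finite sets of directed edges between track labels. The following Python sketch (all function names are hypothetical and for illustration only, not part of the formal development) mirrors a few of the letters, including the broken-track variants.

```python
# Illustrative model (hypothetical names): a letter is a set of directed
# edges between track labels; vertices are strings "q0", "b1", "g2", "t".

def id_q(n):
    """Id(Q): one horizontal edge per q-track."""
    return {(f"q{i}", f"q{i}") for i in range(n)}

def id_t():
    """Id(T): the horizontal edge on the t-track."""
    return {("t", "t")}

def q_to_b(n, i, j):
    """Q(i)ToB(j) = Id(Q) | Id(T) | {<q_i, b_j>}."""
    return id_q(n) | id_t() | {(f"q{i}", f"b{j}")}

def q_to_minus_g(n, i, j):
    """Q(i)To-G(j): like Q(i)ToG(j) but without <q_i, q_i>,
    so the q_i-track is broken at this letter."""
    return (id_q(n) | id_t() | {(f"q{i}", f"g{j}")}) - {(f"q{i}", f"q{i}")}

def t_to_minus_q(n, i):
    """TTo-Q(i): no bypass edge <t, t> and no <q_i, q_i>."""
    return (id_q(n) | {("t", f"q{i}")}) - {(f"q{i}", f"q{i}")}
```

In this model, removing a horizontal edge is literal set subtraction, which makes the "broken track" intuition easy to check mechanically.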
Formally, $\scrG_{h}=\scrG^{(0)}_{h} \cat \scrG^{(1)}_{h} \cat \cdots \cat \scrG^{(n-1)}_{h}$, and for each $i \in [n]$, $\scrG^{(i)}_{h}=\scrG^{(i,1)}_{h} \cat \scrG^{(i,2)}_{h} \cat \cdots \cat \scrG^{(i,k)}_{h}$, where for each $j \in [1..k-1]$, $\scrG^{(i,j)}_{h}$ is \begin{multline} Q(i)ToB(h(q_{i})[j+1]) \cat B(h(q_{i})[j+1])ToB(h(q_{i})[j+2]) \\ \cat \cdots \cat B(h(q_{i})[k-1])ToB(h(q_{i})[k]) \cat B(h(q_{i})[k])ToG(h(q_{i})[j]) \cat G(h(q_{i})[j])ToT \, , \label{eq:rho_i_j} \end{multline} and finally $\scrG^{(i,k)}_{h}$ is \begin{align} Q(i)To^{-}G(h(q_{i})[k]) \cat G(h(q_{i})[k])To^{-}T \cat TTo^{-}Q(i) . \label{eq:rho_i_k} \end{align} We verify that Property~\eqref{en:Q-word-2} holds for every pair $q_{i} \in Q$ ($i \in [n]$) and $j \in I$. First consider $j \in [1..k-1]$. By~\eqref{eq:rho_i_j}, in $\scrG^{(i,j)}_{h}$ a full path $\varrho'_{i,j}$ exists that starts from $\langle q_{i}, 0 \rangle$, visits $B(h(q_{i})[j+1]), \ldots, B(h(q_{i})[k])$ and $G(h(q_{i})[j])$, and finally ends at $\langle t, |\scrG^{(i,j)}_{h}|\rangle$. We extend $\varrho'_{i,j}$ to a full path $\varrho_{i,j}$ in $\scrG_{h}$ as follows. The path $\varrho_{i,j}$ takes $q_{i}$-track in all $\scrG^{(i')}_{h}$ for $i' \not = i$. Inside $\scrG^{(i)}_{h}$, $\varrho_{i,j}$ also takes $q_{i}$-track in all $\scrG^{(i,j')}_{h}$ for $j' < j$. Inside $\scrG^{(i,j)}_{h}$, $\varrho_{i,j}$ is just $\varrho'_{i,j}$. In $\scrG^{(i,j')}_{h}$ for $j' > j$, $\varrho_{i,j}$ takes $t$-track till it reaches the second-to-last level of $\scrG^{(i,k)}_{h}$, from where it takes the edge $\langle t, q_{i} \rangle$ to the $q_{i}$ at the end of $\scrG^{(i)}_{h}$. Putting it all together, for every $i \in [n], j \in [1..k-1]$, $\varrho_{i,j}$ takes the form \begin{multline*} q_{i} \to \cdots \to q_{i} \to b_{h(q_{i})[j+1]} \to \cdots \to b_{h(q_{i})[k]} \to g_{h(q_{i})[j]} \to t \to \cdots \to t \to q_{i} \to \cdots \to q_{i} \, . \end{multline*} The case $j = k$ is similar.
By~\eqref{eq:rho_i_k}, in $\scrG^{(i,k)}_{h}$ a full path $\varrho'_{i,k}$ exists that starts from $\langle q_{i}, 0 \rangle$, visits $G(h(q_{i})[k])$, arrives at the $t$ at the second-to-last level of $\scrG^{(i,k)}_{h}$, and finally takes the edge $\langle t, q_{i} \rangle$ back to the $q_{i}$ at the end of $\scrG^{(i,k)}_{h}$ (also at the end of $\scrG^{(i)}_{h}$). We extend $\varrho'_{i,k}$ to a full path $\varrho_{i,k}$ in $\scrG_{h}$ in the same way as before. The only difference is that $\varrho_{i,k}$ simply passes through, via $q_{i}$-track, all $\scrG^{(i',j')}_{h}$ for any $i' \not = i$ or $j' \not = k$. Putting it all together, for every $i \in [n]$, $\varrho_{i,k}$ takes the form \begin{align*} q_{i} \to \cdots \to q_{i} \to g_{h(q_{i})[k]} \to t \to q_{i} \to \cdots \to q_{i} \, . \end{align*} Note that for any $i \in [n]$ and $j \in I$, the path $\varrho_{i,j}$ has to leave $q_{i}$-track in $\scrG^{(i,j)}_{h}$ to fulfill the requirement with respect to $q_{i}$ and $j$. It also has to use $t$-track and the edge $\langle t, q_{i} \rangle$ in $\scrG^{(i,k)}_{h}$ to return to $q_{i}$-track at the end of $\scrG^{(i)}_{h}$, because the $q_{i}$-track in $\scrG^{(i,k)}_{h}$ is broken at the vertex from where $\varrho_{i,k}$ leaves it to fulfill the requirement with respect to $q_{i}$ and $k$ in Property~\eqref{en:Q-word-2}. We are done with the existence part of Property~\eqref{en:Q-word-2}. As for the exactness part, we note that the following facts hold for every $i\in [n]$. \begin{enumerate} \item Vertices $q_{i}$ have only a horizontal outgoing edge in all $\scrG^{(i')}_{h}$ for $i' \not= i$. \item Each $\scrG^{(i,j)}_{h}$ ($j \in I$) contains exactly one $q_{i}$-vertex that has exactly one \emph{non-horizontal} outgoing edge. So in $\scrG^{(i)}_{h}$, there are $k$ such $q_{i}$-vertices in total. \item The $q_{i}$-track is broken in $\scrG^{(i)}_{h}$ (more precisely, at the beginning of $\scrG^{(i,k)}_{h}$).
\item Any full path in $\scrG_{h}$ from $\langle q_{i}, 0\rangle$ to $\langle q_{i}, |\scrG_{h}|\rangle$ has to take one of the non-horizontal outgoing edges in $\scrG^{(i)}_{h}$. \item If a path in $\scrG^{(i)}_{h}$ takes a non-horizontal edge to leave $q_{i}$-track, then the path has to land on $t$-track and stay on $t$-track till returning to the $q_{i}$ at the end of $\scrG^{(i)}_{h}$. \end{enumerate} The exactness part then follows. Property~\eqref{en:Q-word-3} is immediate as before. Property~\eqref{en:Q-word-4} holds due to the fact that for every $i, j \in [n]$, any full path in $\scrG^{(i)}_{h}$ that starts from $\langle q_{j},0\rangle$ ends at $\langle q_{j},|\scrG^{(i)}_{h}|\rangle$. \end{proof} \begin{example}[$H$-Word] \label{ex:H-word} Let us revisit Example~\ref{ex:Q-word}. In Figure~\ref{fig:Q-word}, the $H$-word $\scrG_{h}$ consists of three parts: $\scrG^{(0)}_{h}$ (level $6$ to level $12$), $\scrG^{(1)}_{h}$ (level $12$ to level $18$), and $\scrG^{(2)}_{h}$ (level $18$ to level $24$), defined as follows: $\scrG^{(i)}_{h}=\scrG^{(i,1)}_{h} \cat \scrG^{(i,2)}_{h}$ for $i \in [3]$ and \begin{align*} \scrG^{(0,1)}_{h} &= Q(0)ToB(2) \cat B(2)ToG(1) \cat G(1)ToT , & \scrG^{(0,2)}_{h} &= Q(0)To^{-}G(2) \cat G(2)To^{-}T \cat TTo^{-}Q(0) \, ,\displaybreak[0]\\ \scrG^{(1,1)}_{h} &= Q(1)ToB(2) \cat B(2)ToG(1) \cat G(1)ToT , & \scrG^{(1,2)}_{h} &= Q(1)To^{-}G(2) \cat G(2)To^{-}T \cat TTo^{-}Q(1) \, ,\displaybreak[0]\\ \scrG^{(2,1)}_{h} &= Q(2)ToB(1) \cat B(1)ToG(2) \cat G(2)ToT , & \scrG^{(2,2)}_{h} &= Q(2)To^{-}G(1) \cat G(1)To^{-}T \cat TTo^{-}Q(2) \, . \end{align*} Let us take a look at the paths $\varrho_{h}$ and $\varrho_{h'}$ (in $\scrG^{(1)}_{h}$) defined in Example~\ref{ex:Q-word}. The path $\varrho_{h}$ (marked green except the last edge) starts at $\langle q_{1}, 12 \rangle$, visits $\langle b_{2}, 13 \rangle$ and $\langle g_{1}, 14 \rangle$, and enters $t$-track at $\langle t, 15 \rangle$.
It continues on $t$-track till reaching $\langle t, 17 \rangle$, and then takes $\langle \langle t, 17 \rangle, \langle q_{1}, 18 \rangle \rangle$ (marked blue) to the end. The path $\varrho_{h'}$ (marked red except the last edge) starts at $\langle q_{1}, 12 \rangle$, takes $q_{1}$-track to reach $\langle q_{1}, 15 \rangle$, from where it visits $\langle g_{2}, 16 \rangle$ and then enters $t$-track at $\langle t, 17 \rangle$. Like $\varrho_{h}$, $\varrho_{h'}$ returns to $q_{1}$-track via $\langle \langle t, 17 \rangle, \langle q_{1}, 18 \rangle \rangle$. \end{example} \begin{varthm}{\ref{thm:existence}~[$Q$-Words].} A $Q$-word exists for every $Q$-ranking. \end{varthm} \begin{proof} By Lemmas~\ref{lem:R-word} and~\ref{lem:H-word}, we have $\scrG_{r}$ and $\scrG_{h}$ as an $R$-word and an $H$-word, respectively. The desired $Q$-word $\scrG$ is just $\scrG_{r} \cat \scrG_{h}$. Properties~\eqref{en:Q-word-3} and~\eqref{en:Q-word-4} follow immediately because they hold both in $\scrG_{r}$ and $\scrG_{h}$. Let $\varrho^{r}_{i, i'}$ be the full path in $\scrG_{r}$ that satisfies Property~\eqref{en:Q-word-1} for $q_{i}$ and $q_{i'}$ where $i, i' \in [n]$ and $r(q_{i}) > r(q_{i'})$, and $\varrho^{h}_{i', k}$ the full path in $\scrG_{h}$ that satisfies Property~\eqref{en:Q-word-2} (with respect to $q_{i'}$ and index $k$). Then $\varrho^{r}_{i, i'} \cat \varrho^{h}_{i', k}$ is the path that Property~\eqref{en:Q-word-1} requires for the vertex pair $q_{i}$ and $q_{i'}$. Let $i \in [n]$ and $j \in I$. Let $\varrho^{r}_{i,i}$ be the full $q_{i}$-track in $\scrG_{r}$, and $\varrho^{h}_{i, j}$ the full path in $\scrG_{h}$ that satisfies Property~\eqref{en:Q-word-2} (with respect to vertex $q_{i}$ and index $j$). Then in $\scrG$, for each $q_{i} \in Q$, we have $k$ full paths $\varrho^{r}_{i, i} \cat \varrho^{h}_{i, j}$ ($j \in I$), which take care of the existence part of Property~\eqref{en:Q-word-2}.
The exactness part follows from the exactness part of Property~\eqref{en:Q-word-2} for $\scrG_{h}$, and the fact that for each $i \in [n]$, $\varrho^{r}_{i,i}$ is unique in $\scrG_{r}$. \end{proof}
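The concatenation argument in the proof above can also be checked mechanically on small instances. The following Python sketch (hypothetical names, illustration only) models a word as a list of letters, with each letter a set of directed edges between track labels; concatenation $\cat$ is juxtaposition, and a simple level-by-level frontier search decides whether a full path from one track at level $0$ to another at the last level exists.

```python
# Illustrative sketch: words as lists of letters (each letter a set of
# directed edges between track labels), with a full-path reachability check.

def cat(word_a, word_b):
    """Concatenation of two words is juxtaposition of their letters."""
    return word_a + word_b

def reachable(word, start, end):
    """Is there a full path in `word` from `start` at level 0
    to `end` at level len(word)?"""
    frontier = {start}
    for letter in word:
        # Advance one level: follow every edge leaving the frontier.
        frontier = {v for (u, v) in letter if u in frontier}
    return end in frontier

# A toy 2-letter word over tracks "q0", "q1" with a detour through "b1":
l1 = {("q0", "q0"), ("q1", "q1"), ("q0", "b1")}
l2 = {("q0", "q0"), ("q1", "q1"), ("b1", "q1")}
word = cat([l1], [l2])
```

Here `reachable(word, "q0", "q1")` holds via the detour $q_0 \to b_1 \to q_1$, while `reachable(word, "q0", "q0")` holds via the horizontal $q_0$-track, matching the two kinds of full paths the lemmas track.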